diff --git a/spaces/0x1337/vector-inference/README.md b/spaces/0x1337/vector-inference/README.md
deleted file mode 100644
index 07d760cf72e3665b7d324df0a6a44404aafc4ae0..0000000000000000000000000000000000000000
--- a/spaces/0x1337/vector-inference/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Vector Inference
-emoji: 🏃
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-app_file: app.py
-pinned: false
-license: wtfpl
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BRAINWORX Bx Console WORK Keygen.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BRAINWORX Bx Console WORK Keygen.md
deleted file mode 100644
index c8314443783908c45e920ff72a4f341f03e80e0c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BRAINWORX Bx Console WORK Keygen.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
BRAINWORX bx console keygen: A Comprehensive Review
-
If you are a music producer, engineer, or enthusiast who loves the sound and vibe of classic analog mixing consoles, you might have heard of BRAINWORX bx console plugins. These are software plugins that emulate the signal path, workflow, and sound of some of the most legendary consoles ever made, such as the Neve VXS, the SSL 4000 E and G, and the Focusrite Studio Console. These plugins offer a realistic and flexible way to add warmth, punch, depth, and character to your mixes, without having to spend a fortune on hardware gear.
However, there is a catch. These plugins are not cheap. Each one costs around $300, and if you want to get the whole bundle, you will have to shell out more than $2000. That is a lot of money for most people, especially if you are just starting out or working on a tight budget. So what can you do if you want to use these plugins but can't afford them? Well, one option is to use a keygen.
-
A keygen is a software tool that can generate serial numbers or activation codes for software products that require them. By using a keygen, you can bypass the official registration process and unlock the full features and functionality of the software without paying anything. Sounds too good to be true, right? Well, it is not that simple. Using a keygen also comes with some risks and drawbacks, as well as some legal and ethical issues that you should be aware of before deciding to use one.
-
In this article, I will provide you with an in-depth review of BRAINWORX bx console keygen, one of the most popular and widely used keygens for BRAINWORX bx console plugins. I will explain how it works, what it can do, how it compares to other similar tools and plugins, and what are some of the pros and cons of using it. I will also give you some alternative options for console emulation plugins that you might want to consider instead. By the end of this article, you should have a clear idea of whether BRAINWORX bx console keygen is worth using or not.
-
How does BRAINWORX bx console keygen work and what are its features?
-
BRAINWORX bx console keygen is a software tool that can generate serial numbers for different BRAINWORX bx console plugins. These serial numbers can then be used to activate the plugins on your computer and use them without any limitations or restrictions. The keygen works by exploiting a vulnerability in the plugin's registration system that allows it to generate valid serial numbers based on a specific algorithm.
-
-
How to download and install the keygen
-
To use BRAINWORX bx console keygen, you will need to download and install it on your computer. There are many websites and forums that offer links to download the keygen, but you should be careful and avoid any suspicious or malicious sources that might contain viruses, malware, or spyware. One of the most reliable and trusted sources to download the keygen is VST Crack, a website that provides free downloads of various audio plugins and software tools.
-
To download the keygen from VST Crack, you will need to follow these steps:
You will be redirected to a page where you will have to complete a short survey or offer to unlock the download link. This is a security measure to prevent bots and spam. The survey or offer should not take more than a few minutes to complete.
-
After completing the survey or offer, you will get access to the download link. Click on it and save the keygen file on your computer.
-
Extract the keygen file using a program like WinRAR or 7-Zip. You should get a folder containing the keygen executable file and a readme file with instructions.
-
Run the keygen executable file as an administrator. You might get a warning from your antivirus or firewall software, but you can ignore it as it is a false positive. The keygen is safe and does not contain any harmful code.
-
-
Once you have installed the keygen, you are ready to generate serial numbers for different BRAINWORX bx console plugins.
-
How to generate serial numbers for different bx console plugins
-
BRAINWORX bx console keygen can generate serial numbers for 12 different bx console plugins. These are:
-
-
BRAINWORX bx_console E
-
BRAINWORX bx_console G
-
BRAINWORX bx_console N
-
BRAINWORX bx_console SSL 4000 E
-
BRAINWORX bx_console SSL 4000 G
-
BRAINWORX bx_console Focusrite SC
-
BRAINWORX bx_console Neve VXS
-
BRAINWORX bx_console Amek 9099
-
BRAINWORX bx_console API 2500
-
BRAINWORX bx_console API 550A
-
BRAINWORX bx_console API 550B
-
BRAINWORX bx_console API 560
-
-
To generate serial numbers for these plugins, you will need to follow these steps:
-
-
Open the keygen and select the plugin that you want to activate from the drop-down menu.
-
Click on the "Generate" button and wait for a few seconds. The keygen will create a unique serial number for the selected plugin and display it in the text box below.
-
Copy the serial number and paste it in a safe place. You will need it later to activate the plugin.
-
Repeat steps 1-3 for any other plugins that you want to activate.
-
-
How to activate the plugins with the serial numbers
-
After generating serial numbers for the plugins that you want to use, you will need to activate them on your computer. To do this, you will need to follow these steps:
-
-
Download and install the plugins from the official BRAINWORX website or any other source that you trust. Make sure that you download the latest version of the plugins and that they are compatible with your operating system and DAW.
-
Open your DAW and load one of the plugins on a track or a bus. You should see a pop-up window asking you to enter your serial number.
-
Paste the serial number that you generated with the keygen for that plugin and click on "Activate". The plugin should be activated and ready to use.
-
Repeat steps 2-3 for any other plugins that you want to activate.
-
-
What are some of the features and options of the keygen
-
BRAINWORX bx console keygen is a simple and easy-to-use tool that does not have many features or options. However, there are a few things you can do with it to customize your experience and improve your workflow, such as changing the interface language, checking for updates, and contacting the developers through the built-in contact option.
How does BRAINWORX bx console keygen compare to other similar tools and plugins?
-
BRAINWORX bx console keygen is not the only tool that can generate serial numbers for audio plugins. There are many other keygens, cracks, patches, and hacks that claim to do the same thing. However, not all of them are reliable, safe, or effective. Some of them might not work at all, some of them might contain viruses or malware, and some of them might damage your system or compromise your security. Therefore, you should be careful and cautious when choosing a tool to use.
-
One way to compare BRAINWORX bx console keygen with other similar tools is to look at their features, performance, compatibility, and reputation. Here are some of the criteria that you can use to evaluate different tools:
-
-
Features: Does the tool offer any additional features or options that make it more convenient or useful? For example, does it support multiple languages, check for updates, or contact the developers?
-
Performance: Does the tool work fast and smoothly without any errors or glitches? Does it generate valid serial numbers that activate the plugins without any issues? Does it consume a lot of resources or affect your system's performance?
-
Compatibility: Does the tool work with different versions and formats of the plugins? Does it work with different operating systems and DAWs? Does it work with other plugins or software that you use?
-
Reputation: Does the tool have a good reputation among users and experts? Does it have positive reviews and ratings? Does it have a lot of downloads and users? Does it have a reliable and trustworthy source?
-
-
Based on these criteria, BRAINWORX bx console keygen is one of the best tools that you can use to generate serial numbers for BRAINWORX bx console plugins. It has a simple and user-friendly interface, a fast and stable performance, a high compatibility with different plugins and systems, and a good reputation among users and experts. It also has some features that make it more convenient and useful than other tools, such as language support, update check, and contact option.
-
However, BRAINWORX bx console keygen is not perfect. It also has some drawbacks and limitations that you should be aware of before using it. These are:
-
-
It is illegal and unethical to use a keygen to activate software products that you have not paid for. You are violating the terms and conditions of the software license agreement and infringing the intellectual property rights of the software developers. You could face legal consequences or penalties if you are caught using a keygen.
-
It is risky and unsafe to use a keygen from an unknown or untrusted source. You could expose your system to viruses, malware, spyware, or other harmful code that could damage your data or compromise your security. You could also download fake or corrupted files that could cause errors or glitches in your system.
-
It is unreliable and unpredictable to use a keygen for software products that are constantly updated or improved. You could encounter compatibility issues or activation problems if the software developers change or update their registration system or algorithm. You could also miss out on new features or bug fixes that are included in the latest versions of the software.
-
-
Conclusion
-
BRAINWORX bx console keygen is a software tool that can generate serial numbers for different BRAINWORX bx console plugins, which emulate the sound and features of some of the most famous analog mixing consoles ever made. By using the keygen, you can activate these plugins without paying anything and use them without limitations or restrictions.
-
It is simple to use, performs well, works with most versions of the plugins, and offers a few conveniences such as language support, an update check, and a contact option. However, as explained above, it comes with serious drawbacks: using it is illegal and unethical, downloading it from untrusted sources is risky, and it can stop working whenever the developers change the plugins' registration system.
-
Therefore, you should think carefully and weigh the pros and cons before deciding to use BRAINWORX bx console keygen. While it might seem tempting and convenient to use a keygen to get access to high-quality plugins for free, you might also face some serious risks and problems that could outweigh the benefits. You might also be violating the law and the ethics of the music industry by using a keygen.
-
If you are looking for some alternative options for console emulation plugins that are legal, safe, and affordable, you might want to consider some of these:
-
Alternative options for console emulation plugins
-
BRAINWORX bx console plugins are not the only console emulation plugins that you can use to enhance your mixes. There are many other plugins that offer similar or different features and sound quality, depending on your preferences and needs. Some of these plugins are free, some of them are paid, and some of them offer both free and paid versions. Here are some of the most popular and recommended console emulation plugins that you might want to check out:
-
Waves SSL 4000 Collection
-
Waves SSL 4000 Collection is a bundle of four plugins that emulate the sound and features of the SSL 4000 series consoles, one of the most iconic and widely used consoles in music history. The bundle includes:
-
-
SSL E-Channel: A channel strip plugin that offers EQ, compression, gating, and filtering.
-
SSL G-Channel: A channel strip plugin that offers EQ, compression, gating, filtering, and harmonic distortion.
-
SSL G-Equalizer: A four-band equalizer plugin with a parametric LMF band.
-
SSL G-Master Buss Compressor: A master buss compressor plugin that adds glue and punch to your mix.
-
-
The Waves SSL 4000 Collection plugins are designed to faithfully recreate the sound and behavior of the original hardware units, with analog modeling and dynamic response. They also offer some additional features and options that enhance their flexibility and usability, such as sidechain filtering, stereo mode, analog noise control, input/output metering, and presets.
-
The Waves SSL 4000 Collection plugins are compatible with most DAWs and operating systems. They cost $749 for the bundle, but they often go on sale for much lower prices. You can also try them for free for 7 days with a demo version.
-
Slate Digital Virtual Console Collection
-
Slate Digital Virtual Console Collection is a bundle of two plugins that emulate the sound and features of six different analog consoles: SSL 4000 E, SSL 4000 G+, Neve 88RS, API Legacy Plus, Trident A-Range, and RCA BC6A. The bundle includes:
-
-
VCC Channel: A channel strip plugin that offers drive, group selection, noise control, input/output metering, and presets.
-
VCC Mixbuss: A mix buss plugin that offers drive, group selection, noise control, input/output metering, trim control, and presets.
-
-
The Slate Digital Virtual Console Collection plugins are designed to emulate the sound and behavior of the original hardware units, with analog modeling and dynamic response. They also offer some additional features and options that enhance their flexibility and usability, such as group mode, oversampling, and calibration. They also allow you to mix and match different consoles and groups to create your own custom sound.
-
The Slate Digital Virtual Console Collection plugins are compatible with most DAWs and operating systems. They cost $149 for the bundle, but they are also included in the Slate Digital All Access Pass, which gives you access to over 60 plugins and online courses for $14.99 per month or $149 per year. You can also try them for free for 15 days with a trial version.
-
Softube Console 1
-
Softube Console 1 is a hardware/software hybrid system that emulates the sound and features of different analog consoles. The system consists of:
-
-
Console 1 Fader: A hardware controller that offers 10 touch-sensitive motorized faders, solo and mute buttons, layer mode, track selection, volume control, input/output metering, and presets.
-
Console 1 MKII: A hardware controller that offers 18 dedicated knobs, LED display, solo and mute buttons, layer mode, track selection, drive control, input/output metering, and presets.
-
Console 1 Software: A software plugin that offers EQ, compression, gate, transient shaper, saturation, high/low cut filters, input/output metering, and presets.
-
-
The Softube Console 1 system is designed to emulate the sound and behavior of the original hardware units, with analog modeling and dynamic response. It also offers some additional features and options that enhance its flexibility and usability, such as parallel processing, sidechain filtering, stereo mode, analog noise control, and integration with other Softube plugins.
-
The Softube Console 1 system is compatible with most DAWs and operating systems. It costs $1099 for the bundle of Console 1 Fader and Console 1 MKII controllers, or $499 for each controller separately. The Console 1 Software plugin is included with the controllers, but it can also be purchased separately for $199. The system also comes with four console emulation plugins: SSL SL 4000 E, Solid State Logic XL 9000 K-Series, British Class A For Console 1, and American Class A For Console 1. You can also buy other console emulation plugins from Softube or other developers that are compatible with the system.
-
FAQs
-
Here are some of the most frequently asked questions about BRAINWORX bx console keygen and their answers:
-
Is BRAINWORX bx console keygen safe to use?
-
BRAINWORX bx console keygen is safe to use if you download it from a reliable and trusted source like VST Crack. However, you should always scan any file that you download from the internet with a reputable antivirus or malware scanner before opening or running it. You should also backup your data and create a restore point on your system before installing or using any software tool that could potentially harm your system or compromise your security.
-
Is BRAINWORX bx console keygen legal to use?
-
BRAINWORX bx console keygen is not legal to use in most countries and jurisdictions. By using a keygen to activate software products that you have not paid for, you are violating the terms and conditions of the software license agreement and infringing the intellectual property rights of the software developers. You could face legal consequences or penalties if you are caught using a keygen. You could also be sued by the software developers or their representatives for damages or losses caused by your use of a keygen.
-
Does BRAINWORX bx console keygen work with all versions and formats of BRAINWORX bx console plugins?
-
BRAINWORX bx console keygen works with most versions and formats of BRAINWORX bx console plugins. However, it might not work with some newer or updated versions of the plugins that have changed or improved their registration system or algorithm. It might also not work with some formats or platforms that are not supported by the keygen. You should always check the compatibility and requirements of the plugins and the keygen before using them together.
-
Does BRAINWORX bx console keygen affect the sound quality or performance of BRAINWORX bx console plugins?
-
BRAINWORX bx console keygen does not affect the sound quality or performance of BRAINWORX bx console plugins. The keygen only generates serial numbers that activate the plugins on your computer. It does not modify or alter the code or functionality of the plugins in any way. The sound quality and performance of the plugins depend on their design and development by BRAINWORX, as well as your system's specifications and settings. The keygen does not affect these factors in any way.
-
Can I use BRAINWORX bx console keygen with other plugins or software that I use?
-
BRAINWORX bx console keygen can be used with other plugins or software that you use, as long as they are compatible and do not interfere with each other. However, you should be careful and avoid using too many plugins or software tools at the same time, as this could overload your system and cause crashes, errors, or glitches. You should also avoid using plugins or software tools that are illegal, unsafe, or unethical, as this could harm your system or compromise your security.
-
-
This concludes my article on BRAINWORX bx console keygen. I hope you found it informative and helpful. If you have any questions, comments, or feedback, please feel free to contact me. Thank you for reading and have a great day!
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WBCS Part 2 PDF for Free A Complete Guide to WBCS Mains Exam Papers.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WBCS Part 2 PDF for Free A Complete Guide to WBCS Mains Exam Papers.md
deleted file mode 100644
index d63308c1c05ece4994d9ab0feee80f4d21ea9de9..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WBCS Part 2 PDF for Free A Complete Guide to WBCS Mains Exam Papers.md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
How to Download WBCS Part 2 PDF for Free: A Useful Study Material for WBCS Exam
-
If you are preparing for the West Bengal Civil Service (WBCS) exam, you might be looking for some useful study materials that can help you cover the syllabus and practice the questions. One such study material is the WBCS Part 2 PDF, which is a collection of previous year papers of the WBCS Mains exam. In this article, we will show you how to download WBCS Part 2 PDF for free and what its features and benefits are.
-
What is WBCS Part 2 PDF?
-
WBCS Part 2 PDF is a study material that contains the previous year papers of the WBCS Mains exam from 2014 to 2020. It covers all six compulsory papers of the WBCS Mains exam, namely:
-
-
Paper I: Bengali/Hindi/Urdu/Nepali/Santali
-
Paper II: English
-
Paper III: General Studies I
-
Paper IV: General Studies II
-
Paper V: Indian Constitution and Economy
-
Paper VI: Arithmetic and Test of Reasoning
-
-
Each paper consists of 200 marks and has a duration of 150 minutes. The papers are available in both English and Bengali languages. The papers are also accompanied by detailed solutions and explanations.
-
What are the features of WBCS Part 2 PDF?
-
WBCS Part 2 PDF has many features that make it a useful and reliable study material for WBCS exam. Some of the features are:
-
-
Authentic and updated: WBCS Part 2 PDF contains the official papers of WBCS Mains exam that are released by the West Bengal Public Service Commission (WBPSC). The papers are also updated with the latest changes and trends in the exam pattern and syllabus.
-
Comprehensive and diverse: WBCS Part 2 PDF covers all the topics and sub-topics of the WBCS Mains exam syllabus. It also provides a variety of questions from different difficulty levels and formats.
-
Solved and explained: WBCS Part 2 PDF provides detailed solutions and explanations for each question. It also provides tips and tricks to solve the questions faster and accurately.
-
Free and downloadable: WBCS Part 2 PDF is available for free download from various online sources. You can download it on your computer or mobile device and access it anytime and anywhere.
-
-
How to download WBCS Part 2 PDF for free?
-
If you want to download WBCS Part 2 PDF for free, you can do so from the following online sources:
-
-
-
Testbook.com: This is a website that provides various study materials and mock tests for various competitive exams. You can download WBCS Part 2 PDF from this website by clicking on the "Download" button or by starting a free test.
-
WBCSMadeEasy.in: This is a website that provides coaching and guidance for WBCS exam. You can download WBCS Part 2 PDF from this website by clicking on the "Download" link or by registering on the website.
-
StudyIQ.com: This is a website that provides articles and videos on various topics related to current affairs and general studies. You can download WBCS Part 2 PDF from this website by clicking on the "Download" link or by subscribing to their YouTube channel.
-
-
Conclusion
-
WBCS Part 2 PDF is a free and useful study material that can help you prepare for the WBCS Mains exam. It contains the previous year papers of the exam from 2014 to 2020, along with detailed solutions and explanations, and you can download it for free from the sources listed above.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dvtool 2.0 Beta 5 HOT Download.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dvtool 2.0 Beta 5 HOT Download.md
deleted file mode 100644
index ec252192267ea17ca9544486016e0432da2d1050..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dvtool 2.0 Beta 5 HOT Download.md
+++ /dev/null
@@ -1,218 +0,0 @@
-
Dvtool 2.0 Beta 5 Download: Everything You Need to Know
-
If you are a fan of digital voice communication in amateur radio, you have probably heard of D-Star, DV Dongle, and DVTool software. These are some of the tools that enable you to access the worldwide network of D-Star repeaters and reflectors from your PC or Mac.
But did you know that there is a new version of DVTool software available for download? It's called Dvtool 2.0 Beta 5, and it offers some exciting features and improvements that will enhance your D-Star experience.
-
In this article, we will tell you everything you need to know about Dvtool 2.0 Beta 5, including what it is, why you need it, how to download and install it, how to use it, and some tips and tricks for getting the most out of it.
-
So, if you are ready to take your digital voice communication to the next level, read on!
-
-
What is Dvtool?
-
Before we dive into the details of Dvtool 2.0 Beta 5, let's first review what Dvtool is and how it works with D-Star and DV Dongle.
-
What is D-Star?
-
D-Star stands for Digital Smart Technologies for Amateur Radio.
D-Star is a digital voice and data protocol that was developed by the Japan Amateur Radio League (JARL) in the late 1990s. It allows amateur radio operators to communicate with each other over long distances using digital signals that are transmitted and received by D-Star compatible radios, repeaters, and reflectors.
-
A D-Star repeater is a device that receives a D-Star signal from a radio and retransmits it to another radio or to a reflector. A D-Star reflector is a server that connects multiple repeaters and radios over the internet, creating a global network of D-Star users.
-
D-Star offers several advantages over analog voice communication, such as clearer audio quality, less interference, more efficient use of bandwidth, and the ability to transmit data along with voice, such as GPS coordinates, text messages, images, and files.
-
What is DV Dongle?
-
A DV Dongle is a device that allows you to access the D-Star network from your PC or Mac without using a radio. It is a USB dongle that contains a digital signal processor (DSP) and a codec that converts analog audio signals to digital D-Star signals and vice versa.
-
By connecting a DV Dongle to your PC or Mac and using a headset or microphone and speakers, you can communicate with other D-Star users over the internet. You can also use a DV Dongle to listen to D-Star transmissions and monitor the activity on different repeaters and reflectors.
-
What is DVTool software?
-
DVTool software is a program that allows you to control and configure your DV Dongle from your PC or Mac. It also provides a graphical user interface (GUI) that displays information about the D-Star network, such as the list of available repeaters and reflectors, the call signs of the users who are connected, and the status of your DV Dongle.
-
DVTool software also enables you to connect your DV Dongle to any D-Star repeater or reflector that you choose, and to switch between them easily. You can also use DVTool software to adjust the audio settings of your DV Dongle, such as the volume, gain, and compression.
-
Why do you need Dvtool 2.0 Beta 5?
-
Now that you know what Dvtool is and how it works with D-Star and DV Dongle, you might be wondering why you need Dvtool 2.0 Beta 5. After all, there are already several versions of DVTool software available for download, such as DVTool 1.05, DVTool 2.0 Beta 1, DVTool 2.0 Beta 2, DVTool 2.0 Beta 3, and DVTool 2.0 Beta 4.
-
Well, the answer is simple: Dvtool 2.0 Beta 5 is the latest and most advanced version of DVTool software that offers some new features and improvements that will make your D-Star experience even better. Here are some of them:
-
What are the features of Dvtool 2.0 Beta 5?
-
Some of the features of Dvtool 2.0 Beta 5 are:
-
-
It supports both Windows and Mac OS X operating systems.
-
It has a redesigned GUI that is more user-friendly and intuitive.
-
It has a new audio engine that improves the sound quality and reduces the latency.
-
It has a new echo test feature that allows you to test your audio settings before connecting to a repeater or reflector.
-
It has a new auto-connect feature that automatically connects your DV Dongle to the last repeater or reflector that you used.
-
It has a new auto-update feature that checks for new versions of DVTool software and downloads them automatically.
-
It has a new logging feature that records your D-Star activity in a text file.
-
It has a new help feature that provides online documentation and support for using DVTool software.
-
-
What are the benefits of using Dvtool 2.0 Beta 5?
-
Some of the benefits of using Dvtool 2.0 Beta 5 are:
-
-
It allows you to access the D-Star network from your PC or Mac without using a radio.
-
It allows you to communicate with other D-Star users around the world using digital voice and data.
-
It allows you to enjoy clearer audio quality, less interference, more efficient use of bandwidth, and the ability to transmit data along with voice.
-
It allows you to access a wider range of repeaters and reflectors that may not be available in your area or frequency.
-
It allows you to monitor the activity on different repeaters and reflectors and discover new contacts and conversations.
-
It allows you to adjust the audio settings of your DV Dongle to suit your preferences and environment.
-
It allows you to update your DVTool software easily and automatically.
-
It allows you to troubleshoot common issues with your DV Dongle and DVTool software.
-
It allows you to customize your D-Star profile and display information about yourself and your station.
-
-
How does Dvtool 2.0 Beta 5 improve your D-Star experience?
-
By using Dvtool 2.0 Beta 5, you can improve your D-Star experience in several ways, such as:
-
-
You can enjoy a smoother and more stable connection to the D-Star network, thanks to the improved audio engine and the redesigned GUI.
-
You can test your audio settings before connecting to a repeater or reflector, thanks to the new echo test feature.
-
You can save time and hassle by automatically connecting to the last repeater or reflector that you used, thanks to the new auto-connect feature.
-
You can keep your DVTool software up to date and secure, thanks to the new auto-update feature.
-
You can keep track of your D-Star activity and review it later, thanks to the new logging feature.
-
You can get help and support for using DVTool software, thanks to the new help feature.
-
-
How to download and install Dvtool 2.0 Beta 5?
-
Now that you know why you need Dvtool 2.0 Beta 5 and what it can do for you, you might be wondering how to download and install it on your PC or Mac. Don't worry, it's very easy and straightforward. Just follow these steps:
-
Where to download Dvtool 2.0 Beta 5?
-
The official website for downloading Dvtool 2.0 Beta 5 is http://www.dvdongle.com/DV_Dongle/Home.html. This is where you can find the latest version of DVTool software for both Windows and Mac OS X operating systems.
-
To download Dvtool 2.0 Beta 5, simply click on the link that corresponds to your operating system. For example, if you are using Windows, click on the link that says "DVTool-2.0beta5.exe". If you are using Mac OS X, click on the link that says "DVTool-2.0beta5.dmg".
-
The download process will start automatically and may take a few minutes depending on your internet speed. Once the download is complete, you will have a file named "DVTool-2.0beta5.exe" or "DVTool-2.0beta5.dmg" in your downloads folder or wherever you saved it.
-
How to install Dvtool 2.0 Beta 5 on Windows?
-
To install Dvtool 2.0 Beta 5 on Windows, follow these steps:
-
-
Double-click on the file named "DVTool-2.0beta5.exe" that you downloaded earlier.
-
A window will pop up asking you if you want to run this file. Click on "Run".
-
A window will pop up asking you if you want to allow this app to make changes to your device. Click on "Yes".
-
A window will pop up showing you the setup wizard for DVTool software. Click on "Next".
-
A window will pop up asking you to accept the license agreement for DVTool software. Read the agreement carefully and click on "I Agree".
-
A window will pop up asking you to choose the destination folder for installing DVTool software. You can leave it as default or change it if you want. Click on "Next".
-
A window will pop up asking you to confirm the installation settings. Click on "Install".
-
The installation process will begin and may take a few minutes depending on your computer speed. A window will pop up showing you the progress of the installation.
-
Once the installation is complete, a window will pop up asking you if you want to launch DVTool software now. Click on "Finish".
-
-
Congratulations! You have successfully installed Dvtool 2.0 Beta 5 on your Windows PC. You are now ready to use it with your DV Dongle and access the D-Star network.
-
How to install Dvtool 2.0 Beta 5 on Mac OS X?
-
To install Dvtool 2.0 Beta 5 on Mac OS X, follow these steps:
-
-
Double-click on the file named "DVTool-2.0beta5.dmg" that you downloaded earlier.
-
A window will pop up showing you the DVTool software icon and a folder named "Applications". Drag and drop the DVTool software icon into the Applications folder.
-
A window will pop up asking you to confirm that you want to copy DVTool software to the Applications folder. Click on "Authenticate".
-
A window will pop up asking you to enter your administrator password. Enter your password and click on "OK".
-
The copying process will begin and may take a few minutes depending on your computer speed. A window will pop up showing you the progress of the copying.
-
Once the copying is complete, a window will pop up showing you that DVTool software is in your Applications folder. You can close this window and eject the DVTool software disk image.
-
-
Congratulations! You have successfully installed Dvtool 2.0 Beta 5 on your Mac OS X. You are now ready to use it with your DV Dongle and access the D-Star network.
-
How to use Dvtool 2.0 Beta 5?
-
Now that you have downloaded and installed Dvtool 2.0 Beta 5 on your PC or Mac, you might be wondering how to use it with your DV Dongle and access the D-Star network. Don't worry, it's very easy and fun. Just follow these steps:
-
How to connect DV Dongle to your PC or Mac?
-
To connect your DV Dongle to your PC or Mac, follow these steps:
-
-
Make sure that your PC or Mac is connected to the internet and has a working sound card, headset or microphone, and speakers.
-
Plug your DV Dongle into a free USB port on your PC or Mac.
-
Wait for a few seconds until your PC or Mac recognizes your DV Dongle and installs the necessary drivers.
-
You should see a blue LED light on your DV Dongle indicating that it is powered on and ready to use.
-
-
How to configure Dvtool settings?
-
To configure your Dvtool settings, follow these steps:
-
-
Launch the DVTool software from your desktop or applications folder.
-
A window will pop up showing you the main interface of DVTool software.
-
Click on the "Settings" button at the top right corner of the window.
-
A window will pop up showing you the settings menu of DVTool software.
-
You can adjust various settings here, such as:
-
-
Your call sign: Enter your amateur radio call sign in the box provided. This is how other D-Star users will identify you on the network.
-
Your name: Enter your name in the box provided. This is how other D-Star users will greet you on the network.
-
Your location: Enter your city and country in the box provided. This is how other D-Star users will know where you are from on the network.
-
Your message: Enter a short message in the box provided. This is what other D-Star users will see when they connect to you on the network.
-
Your audio input device: Select the device that you are using to capture your voice, such as a headset or microphone, from the drop-down menu.
-
Your audio output device: Select the device that you are using to play back other users' voices, such as speakers or headphones, from the drop-down menu.
-
Your audio input level: Adjust the slider to set the volume of your voice input. You can also use the "Test" button to test your audio input level and hear how you sound.
-
Your audio output level: Adjust the slider to set the volume of other users' voice output. You can also use the "Test" button to test your audio output level and hear how others sound.
-
-
Once you are done with adjusting your settings, click on the "OK" button to save them and close the window.
-
-
How to access D-Star reflectors and repeaters?
-
To access D-Star reflectors and repeaters, follow these steps:
-
-
On the main interface of DVTool software, click on the "Connect" button at the top left corner of the window.
-
A window will pop up showing you the list of available D-Star reflectors and repeaters that you can connect to.
-
You can use the search box to find a specific reflector or repeater by its name, call sign, or location.
-
You can also use the filter buttons to narrow down the list by category, such as "All", "Favorites", "Local", "International", or "Hotspots".
-
Once you find the reflector or repeater that you want to connect to, double-click on it or select it and click on the "Connect" button at the bottom of the window.
-
A window will pop up showing you the status of your connection. You should see a green LED light on your DV Dongle indicating that it is connected to the reflector or repeater.
-
You should also see a message on the main interface of DVTool software saying "Connected to [reflector or repeater name]".
-
You can now communicate with other D-Star users who are connected to the same reflector or repeater as you.
-
-
How to communicate with other D-Star users?
-
To communicate with other D-Star users, follow these steps:
-
-
Make sure that your DV Dongle is connected to a reflector or repeater that has other users online.
-
Put on your headset or microphone and speakers and adjust your audio input and output levels as needed.
-
Press and hold the "PTT" button on your DV Dongle or on your keyboard (usually the space bar) to transmit your voice.
-
Speak clearly and politely into your microphone and introduce yourself with your call sign, name, and location.
-
Release the "PTT" button when you are done speaking and wait for a response from other users.
-
If you hear a response from another user, you can reply by pressing and holding the "PTT" button again and speaking into your microphone.
-
If you don't hear a response from another user, you can try calling again or switch to another reflector or repeater that has more activity.
-
You can also listen to other users' conversations and join them if they invite you or if they are open to new contacts.
-
-
Tips and tricks for using Dvtool 2.0 Beta 5
-
By following the steps above, you should be able to use Dvtool 2.0 Beta 5 with your DV Dongle and access the D-Star network without any problems. However, there are some tips and tricks that can help you get even more out of Dvtool 2.0 Beta 5 and make your D-Star experience more enjoyable and efficient. Here are some of them:
-
How to update Dvtool software?
-
To update your Dvtool 2.0 Beta 5 software, follow these steps:
-
-
Launch the DVTool software from your desktop or applications folder.
-
A window will pop up showing you the main interface of DVTool software.
-
Click on the "Help" button at the top right corner of the window.
-
A window will pop up showing you the help menu of DVTool software.
-
Click on the "Check for Updates" option.
-
A window will pop up showing you if there are any new versions of DVTool software available for download.
-
If there are no new versions available, you will see a message saying "You have the latest version of DVTool". You can close this window and continue using DVTool software as usual.
-
If there are new versions available, you will see a message saying "A new version of DVTool is available". You can click on the "Download" button to download the new version of DVTool software and install it following the same steps as before.
-
Once the installation is complete, you will have the latest version of DVTool software on your PC or Mac. You can close this window and enjoy the new features and improvements of DVTool software.
-
-
How to troubleshoot common issues with Dvtool?
-
Sometimes, you may encounter some issues with your Dvtool 2.0 Beta 5 software or your DV Dongle that may affect your D-Star experience. Don't panic, most of these issues can be easily fixed by following some simple troubleshooting steps. Here are some of the common issues and how to fix them:
-
-
Your DV Dongle is not recognized by your PC or Mac: This may happen if your USB port is faulty, your USB cable is loose, your DV Dongle is damaged, or your drivers are outdated. To fix this, try plugging your DV Dongle into a different USB port, using a different USB cable, checking your DV Dongle for any physical damage, or updating your drivers from the official website.
-
Your DV Dongle is not connected to the D-Star network: This may happen if your internet connection is unstable, your firewall or antivirus is blocking the DVTool software, your reflector or repeater is offline, or your settings are incorrect. To fix this, try restarting your modem or router, disabling your firewall or antivirus temporarily, choosing a different reflector or repeater, or checking your settings for any errors.
-
Your audio quality is poor or distorted: This may happen if your audio input or output device is faulty, your audio input or output level is too high or too low, your internet connection is slow, or your reflector or repeater is congested. To fix this, try using a different audio input or output device, adjusting your audio input or output level using the slider or the test button, improving your internet speed, or switching to a less busy reflector or repeater.
-
Your D-Star profile is not displayed correctly: This may happen if you have not entered your call sign, name, location, or message in the settings menu, or if you have entered them incorrectly. To fix this, try entering or correcting your call sign, name, location, and message in the settings menu and saving them.
-
-
If none of these steps work for you, you can always contact the DVTool software support team for further assistance. You can find their contact information on the official website.
-
How to optimize your audio quality with Dvtool?
-
One of the main advantages of using Dvtool 2.0 Beta 5 with your DV Dongle and accessing the D-Star network is that you can enjoy clearer audio quality than analog voice communication. However, there are some ways that you can optimize your audio quality even more and make it sound more natural and pleasant. Here are some of them:
-
-
Use a good quality headset or microphone and speakers that are compatible with your PC or Mac and have a clear sound output and input.
-
Position your headset or microphone and speakers in a way that minimizes background noise and feedback.
-
Speak clearly and loudly enough into your microphone and avoid mumbling or whispering.
-
Avoid speaking too fast or too slow and use proper pronunciation and grammar.
-
Avoid using slang, jargon, acronyms, or abbreviations that may confuse other users.
-
Avoid interrupting other users when they are speaking and wait for a pause before transmitting.
-
Acknowledge other users when they call you by using their call sign and name.
-
Be polite and respectful to other users and follow the etiquette and rules of the D-Star network.
-
-
How to customize your D-Star profile with Dvtool?
-
One of the fun aspects of using Dvtool 2.0 Beta 5 with your DV Dongle and accessing the D-Star network is that you can customize your D-Star profile and display information about yourself and your station to other users. This can help you make new contacts and friends on the network and show off your personality and interests. Here are some ways that you can customize your D-Star profile with Dvtool 2.0 Beta 5:
-
-
You can enter your call sign, name, location, and message in the settings menu of DVTool software and save them. These are the basic information that other users will see when they connect to you on the network.
-
You can also enter some optional information in the settings menu of DVTool software, such as your email address, website, QTH locator, and D-Star registration date. These are the additional information that other users can see if they click on your call sign on the main interface of DVTool software.
-
You can also upload a picture of yourself or your station in the settings menu of DVTool software. This is the image that other users will see when they click on your call sign on the main interface of DVTool software.
-
You can also change the color and font of your call sign, name, location, and message in the settings menu of DVTool software. This is how you can personalize your D-Star profile and make it stand out from the rest.
-
-
Conclusion
-
In conclusion, Dvtool 2.0 Beta 5 is a great software that allows you to use your DV Dongle and access the D-Star network from your PC or Mac without using a radio. It offers some new features and improvements that will enhance your D-Star experience, such as a redesigned GUI, a new audio engine, a new echo test feature, a new auto-connect feature, a new auto-update feature, a new logging feature, and a new help feature.
-
It also allows you to communicate with other D-Star users around the world using digital voice and data, enjoy clearer audio quality, less interference, more efficient use of bandwidth, and the ability to transmit data along with voice, access a wider range of repeaters and reflectors that may not be available in your area or frequency, monitor the activity on different repeaters and reflectors and discover new contacts and conversations, adjust the audio settings of your DV Dongle to suit your preferences and environment, update your DVTool software easily and automatically, troubleshoot common issues with your DV Dongle and DVTool software, customize your D-Star profile and display information about yourself and your station, and optimize your audio quality with some tips and tricks.
-
If you are interested in trying out Dvtool 2.0 Beta 5, you can download it from the official website http://www.dvdongle.com/DV_Dongle/Home.html and install it on your PC or Mac following the steps above. You will need a DV Dongle device to use it with. You can also find more information and support for using Dvtool 2.0 Beta 5 on the official website or by contacting the DVTool software support team.
-
We hope that this article has helped you learn more about Dvtool 2.0 Beta 5 and how to use it with your DV Dongle and access the D-Star network. We hope that you will enjoy using Dvtool 2.0 Beta 5 and have fun communicating with other D-Star users around the world.
-
FAQs
-
Here are some frequently asked questions (FAQs) about Dvtool 2.0 Beta 5:
-
-
What are the system requirements for using Dvtool 2.0 Beta 5?
-
The system requirements for using Dvtool 2.0 Beta 5 are:
-
-
A PC or Mac with an internet connection and a working sound card.
-
A Windows or Mac OS X operating system.
-
A DV Dongle device with a USB cable.
-
A headset or microphone and speakers.
-
-
Is Dvtool 2.0 Beta 5 compatible with other versions of DV Dongle or DVAP?
-
Yes, Dvtool 2.0 Beta 5 is compatible with all versions of DV Dongle devices (DV Dongle Blue, DV Dongle Red, DV Dongle Orange) and also with DVAP devices (DV Access Point Dongle). However, some features may not work with older versions of these devices.
-
Is Dvtool 2.0 Beta 5 free or paid software?
-
Dvtool 2.0 Beta 5 is free software that you can download from the official website http://www.dvdongle.com/DV_Dongle/Home.html. However, you will need to purchase a DV Dongle device or a DVAP device to use it with, which are sold separately by different vendors.
-
Where can I find more information or support for using Dvtool 2.0 Beta 5?
-
You can find more information or support for using Dvtool 2.0 Beta 5 on the official website http://www.dvdongle.com/DV_Dongle/Home.html, where you can find the online documentation, the user manual, the FAQ section, and the contact information of the DVTool software support team. You can also join the DVTool software user group on Yahoo Groups https://groups.yahoo.com/neo/groups/dvdongle/info, where you can interact with other users and share your feedback and suggestions.
-
What are some alternatives to using Dvtool 2.0 Beta 5?
-
If you are looking for some alternatives to using Dvtool 2.0 Beta 5 with your DV Dongle or DVAP device, you can try some of these options:
-
-
You can use a D-Star compatible radio instead of a DV Dongle or DVAP device, which will allow you to access the D-Star network directly from your radio without using a PC or Mac.
-
You can use a different software instead of DVTool software, such as WinDV http://www.dutch-star.eu/software/, which is another program that allows you to use your DV Dongle or DVAP device with your PC or Mac.
-
You can use a different digital voice protocol instead of D-Star, such as DMR https://www.dmr-marc.net/, which is another digital voice and data protocol that is used by amateur radio operators.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Ecology Exam Essay Questions.md b/spaces/1gistliPinn/ChatGPT4/Examples/Ecology Exam Essay Questions.md
deleted file mode 100644
index d3bcab9e7a4f906d78d89d6c87c7d21c50021b10..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Ecology Exam Essay Questions.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-As a consequence, plants will store nutrients at the most economically efficient rate for their individual needs, while animals will invest the most in the form of high-quality, energy-rich tissues. This ensures that their life is maximized; thus, the superior environment won out.
-3. What is the purpose of the land-mass called the Earth? What is its role in the universe? How does the interaction of the Earth with the Sun shape the Earth and its atmosphere?
-4. What is life and what is it made of? Why is organic carbon the most abundant form of carbon in the universe? What role does carbon play in the life of an organism?
-5. What is the origin of energy? What are energy-releasing particles called? What is energy?
-6. What is the origin of matter? What is matter? What is a material? How does matter travel through space? What is an object?
-7. What is the origin of light? What is a particle? How does light travel? What is a wave?
-8. What is the origin of heat? What is heat? What is a temperature? How is heat transported in a system? What is the difference between a heat transfer and a heat flow?
-9. How does an engine work? How do the atmospheric pressure of air and the buoyancy of water contribute to the function of a gas cylinder?
-10. How does a universe expand? How do atoms combine to form molecules? How do molecules combine to form proteins?
-11. What is an electron? How does an electron travel through space?
-12. What is DNA? Why is DNA the genetic material of most organisms? What is genetic coding? What is a gene?
-13. What is a protein? How do proteins work? What is the difference between a protein and an enzyme? How does a cell divide and differentiate?
-14. What is the difference between a cell and a multi-cellular organism? What is a multi-cellular organism? How does a multi-cellular organism grow and develop?
-15. How does a plant develop? How does a plant die? How do the cells of a plant communicate?
-16. How do animals develop? How does an animal die? What is the difference between a plant cell and an animal cell? How do plant cells communicate? How does the
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AetherSX2 best settings apk Tips and tricks for the best PS2 emulator on Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AetherSX2 best settings apk Tips and tricks for the best PS2 emulator on Android.md
deleted file mode 100644
index e1d13eade068fc6e7018e0e1204e5b1a4b89e171..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AetherSX2 best settings apk Tips and tricks for the best PS2 emulator on Android.md
+++ /dev/null
@@ -1,110 +0,0 @@
-
-
How to Play PS2 Games on Android with AetherSX2 Emulator
-
If you are a fan of PlayStation 2 games and want to relive your childhood memories on your Android smartphone, you are in luck. There is a new PS2 emulator for Android that lets you play PS2 games with amazing graphics and performance. It's called AetherSX2, and it's the best PS2 emulator for Android by far.
-
In this article, we will show you how to download, install, configure, and play PS2 games on Android with AetherSX2 emulator. We will also give you some tips and tricks to optimize the emulator and make your gaming experience more enjoyable. And we will recommend some of the best PS2 games that you can play on AetherSX2 emulator.
AetherSX2 is a PS2 emulator for Android that was released in late 2021 by a developer named Tahlreth. It is based on the PCSX2 emulator, which is a well-known and reliable PS2 emulator for PC. The developer got permission from the PCSX2 team to use their code and licensed it under the LGPL license.
-
AetherSX2 emulator is a major breakthrough for PS2 emulation on Android devices. It supports a wide range of PS2 games and offers various features such as internal resolution scaling, save states, multiple control schemes, widescreen patches, and more. It also supports both Vulkan and OpenGL graphics renderers, which can improve the performance and compatibility of different games.
-
AetherSX2 emulator is free to download and use, unlike some other PS2 emulators that charge money or show ads. You can get it from the Google Play Store or from the official website. You can also join the fan-run Discord server to get updates, support, and feedback from other users.
-
How to Download and Install AetherSX2 Emulator?
-
Downloading and installing AetherSX2 emulator is very easy. Just follow these simple steps:
-
-
Go to the Google Play Store and search for "AetherSX2" or use this link to download it.
-
Alternatively, you can go to the official website and download the APK file from there. Make sure you enable "Unknown sources" in your device settings before installing it.
-
Once you have downloaded the app, open it and grant it the necessary permissions.
-
You will also need a PS2 BIOS file to run the emulator. You can dump it from your own PS2 console or find it online (but be careful of legal issues). Place the BIOS file in your device storage (preferably in a folder named "BIOS"); one way to copy the file over from a PC is shown in the sketch after these steps.
-
Launch the app and tap on "Select BIOS" in the main menu. Navigate to the folder where you placed the BIOS file and select it.
-
You are now ready to use the emulator!
-
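If you dump the BIOS (and later your game discs) on a PC, one convenient way to copy the files to your phone is over USB with adb. The snippet below is only a minimal sketch: it assumes adb is installed and the device is connected with USB debugging enabled, and the file names and the /sdcard folder paths are placeholders you should adjust to match your own setup.

```python
# Minimal sketch: copy a BIOS dump and a game image to an Android device with adb.
# Assumptions: adb is on PATH, USB debugging is enabled, and /sdcard/BIOS and
# /sdcard/Games are the folders you will point the emulator at.
import subprocess

files = {
    "scph10000.bin": "/sdcard/BIOS/",   # placeholder BIOS file name
    "MyGame.iso": "/sdcard/Games/",     # placeholder game image name
}

for local_path, device_folder in files.items():
    # "adb shell mkdir -p" creates the target folder on the device if it does not exist yet.
    subprocess.run(["adb", "shell", "mkdir", "-p", device_folder], check=True)
    # "adb push" copies the local file into that folder on the device.
    subprocess.run(["adb", "push", local_path, device_folder], check=True)
```

You can of course do the same thing with a file manager or a regular USB file transfer; the script only saves a few taps when you copy several games at once.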
-
How to Configure AetherSX2 Emulator for Best Performance?
-
AetherSX2 emulator has many settings that you can tweak to optimize its performance and compatibility for different games. However, there is no one-size-fits-all solution, as different games may require different settings. You may need to experiment with various options until you find the best settings for your device and game. Here are some general tips and recommendations that may help you:
-
-
-
Choose the graphics renderer that works best for your device and game. Vulkan is usually faster and more compatible, but OpenGL may offer better quality and stability for some games.
-
Adjust the internal resolution scaling according to your device's capabilities and screen size. Higher resolutions will make the games look sharper and clearer, but they will also consume more resources and cause slowdowns or crashes. Lower resolutions will improve the performance and compatibility, but they will also make the games look blurry and pixelated.
-
Enable or disable the speed hacks depending on the game's requirements and your device's power. Speed hacks are optimizations that can boost the emulation speed, but they can also cause glitches or errors in some games. You can try the default speed hacks or customize them individually.
-
Enable or disable the widescreen patches if you want to play the games in 16:9 aspect ratio instead of the original 4:3. Widescreen patches can make the games look more immersive and modern, but they can also cause graphical issues or distortions in some games.
-
Configure the controls according to your preference and comfort. You can use the on-screen touch controls, a physical controller, or a keyboard and mouse. You can also customize the layout, size, opacity, and sensitivity of the touch controls.
-
-
You can access the settings menu by tapping on the gear icon in the main menu or by pressing the back button while playing a game. You can also change the settings for each game individually by long-pressing on the game cover and selecting "Game settings".
-
How to Load and Play PS2 Games on AetherSX2 Emulator?
-
Loading and playing PS2 games on AetherSX2 emulator is also very easy. Just follow these simple steps:
-
-
You will need PS2 game files (also known as ISOs or ROMs) to play them on the emulator. You can dump them from your own PS2 discs or find them online (but be careful of legal issues). Place the game files in your device storage (preferably in a folder named "Games").
-
Launch the app and tap on "Select Game" in the main menu. Navigate to the folder where you placed the game files and select one.
-
The game will start loading and you will see a loading screen with some information about the game and its compatibility status. You can also see some tips and suggestions for optimizing the game's performance.
-
Once the game is loaded, you can start playing it with your chosen control scheme. You can also access some options by tapping on the screen or pressing the menu button while playing. You can save or load your progress using save states, change the graphics renderer, adjust the volume, take screenshots, or exit the game.
-
-
What are the Best PS2 Games to Play on AetherSX2 Emulator?
-
AetherSX2 emulator supports a large number of PS2 games, but not all of them are fully playable or compatible. Some games may have minor issues such as graphical glitches, audio problems, or slow loading times. Some games may have major issues such as crashes, freezes, or black screens. And some games may not work at all.
-
The compatibility status of each game is indicated by a color code on the loading screen: green means playable, yellow means in-game, orange means menu/intro, red means loadable, and black means nothing (the game does not run at all).
-
You can check the compatibility list on the official website to see which games are supported by the emulator and how well they run. You can also report any issues or bugs that you encounter while playing a game on the Discord server or on GitHub.
-
Here are some of the best PS2 games that you can play on AetherSX2 emulator with good performance and compatibility:
-
-
| Game | Genre | Description |
| --- | --- | --- |
| God of War | Action-adventure | A hack-and-slash game that follows Kratos, a Spartan warrior who seeks revenge against Ares, the god of war. |
| Shadow of the Colossus | Action-adventure | A unique game that involves exploring a vast land and defeating giant creatures called colossi to revive a dead girl. |
| Grand Theft Auto: San Andreas | Action-adventure | A sandbox game that lets you roam around a fictional state of San Andreas and engage in various activities such as driving, shooting, fighting, and more. |
| Final Fantasy X | Role-playing | A classic JRPG that follows Tidus, a young athlete who is transported to a fantasy world called Spira and joins a group of adventurers to defeat a monstrous threat called Sin. |
| Metal Gear Solid 3: Snake Eater | Stealth-action | A prequel to the Metal Gear series that features Naked Snake, a special agent who infiltrates a Soviet jungle to rescue a scientist and stop a nuclear war. |
| Kingdom Hearts | Action-role-playing | A crossover game that combines characters and worlds from Disney and Final Fantasy franchises. It follows Sora, a young boy who wields a magical weapon called the Keyblade and teams up with Donald Duck and Goofy to fight against the Heartless. |
-
-
Of course, there are many more PS2 games that you can try on AetherSX2 emulator, but these are some of the most popular and well-received ones. You can also check out some online forums and reviews to find more recommendations and suggestions.
-
Conclusion
-
AetherSX2 emulator is an amazing app that lets you play PS2 games on Android devices with high quality and performance. It is easy to download, install, configure, and use. It supports a large number of PS2 games and offers various features and options to enhance your gaming experience. It is also free to download and use, unlike some other PS2 emulators that charge money or show ads.
-
If you are a fan of PS2 games and want to relive your childhood memories on your Android smartphone, you should definitely give AetherSX2 emulator a try. You will be amazed by how well it runs your favorite PS2 games and how much fun you will have playing them.
-
So, what are you waiting for? Download AetherSX2 emulator now and enjoy playing PS2 games on Android!
-
FAQs
-
Q: Is AetherSX2 emulator legal?
-
A: AetherSX2 emulator itself is legal, as it is based on the PCSX2 emulator, which is licensed under the LGPL license. However, downloading or distributing PS2 BIOS or game files may be illegal in some countries or regions, depending on the copyright laws and regulations. You should only use your own PS2 BIOS or game files that you have legally obtained.
-
Q: Is AetherSX2 emulator safe?
-
A: AetherSX2 emulator is safe to use, as long as you download it from the official sources (Google Play Store or official website). It does not contain any malware, viruses, or spyware. It also does not collect any personal or sensitive data from your device.
-
Q: How can I update AetherSX2 emulator?
-
A: You can update AetherSX2 emulator by checking for updates in the app itself or by visiting the Google Play Store or the official website. You can also join the Discord server to get notified of any new updates or releases.
-
Q: How can I support AetherSX2 emulator?
-
A: You can support AetherSX2 emulator by giving it a positive rating and review on the Google Play Store or by sharing it with your friends and family. You can also donate to the developer via PayPal or Patreon to show your appreciation and help him improve the emulator.
-
Q: How can I contact the AetherSX2 developer?
-
A: You can reach the AetherSX2 developer by joining the Discord server or by sending an email to tahlreth@gmail.com. You can also follow the developer on Twitter or Instagram for more updates and news.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Become a Soccer Super Star with this Amazing Football MOD APK.md b/spaces/1phancelerku/anime-remove-background/Become a Soccer Super Star with this Amazing Football MOD APK.md
deleted file mode 100644
index e05005d06eb99d811a99c5aed50345eeb1f7864c..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Become a Soccer Super Star with this Amazing Football MOD APK.md
+++ /dev/null
@@ -1,129 +0,0 @@
-
-
Soccer Super Star Football Mod APK: A Fun and Simple Soccer Game
-
Do you love soccer? Do you want to play a soccer game that is fun, simple, and realistic? If yes, then you should try Soccer Super Star Football Mod APK, a soccer game that lets you swipe to shoot and score amazing goals. In this article, we will tell you everything you need to know about this game, including how to download and install it, how to play it, tips and tricks, pros and cons, and FAQs. Let's get started!
-
What is Soccer Super Star Football Mod APK?
-
Soccer Super Star Football Mod APK is a soccer game developed by Real Free Soccer. It is available for Android devices and can be downloaded for free from various websites. The game has over 10 million downloads and a 4.4-star rating on the Google Play Store. It is one of the most popular soccer games on the market, thanks to its simple and intuitive gameplay, realistic graphics and physics, various teams and modes to choose from, and unlimited rewind feature.
-
Why should you download Soccer Super Star Football Mod APK? Well, if you are a fan of soccer, you will love this game. It is easy to play, but hard to master. You can swipe to shoot and score goals from different angles and distances. You can also use the unlimited rewind feature to correct your mistakes and try again. You can choose from different teams and modes, such as career mode, tournament mode, challenge mode, and training mode. You can also unlock new players and stadiums as you progress in the game. The game is also offline-friendly, meaning you can play it without an internet connection.
-
What are the features of the mod version of Soccer Super Star Football? The mod version gives you some extra benefits that the original version does not. For example, you can enjoy unlimited rewind, which allows you to undo your shots and try again as many times as you want. You can also get unlimited coins, which you can use to buy new players and stadiums. The mod version also removes ads, which can be annoying and distracting in the original version.
-
How to Download and Install Soccer Super Star Football Mod APK
-
If you want to download and install Soccer Super Star Football Mod APK on your Android device, you need to follow these simple steps:
-
-
Download the APK file from a trusted source. You can find many websites that offer the mod version of Soccer Super Star Football for free. However, be careful not to download from shady or malicious sites that may harm your device or steal your data. We recommend you to download from [this link], which is safe and reliable.
-
Enable unknown sources on your device. Since you are downloading an APK file from a third-party source, you need to enable unknown sources on your device. This will allow you to install apps that are not from Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Install the APK file. Once you have downloaded the APK file, locate it in your file manager and tap on it. You will see a pop-up window asking for your permission to install the app. Tap on Install and wait for the installation process to finish.
-
Launch the game and enjoy. After the installation is done, you can launch the game from your app drawer or home screen. You will see the game icon with the word "Mod" on it. Tap on it and start playing Soccer Super Star Football Mod APK.
-
How to Play Soccer Super Star Football Mod APK
-
Playing Soccer Super Star Football Mod APK is very easy and fun. You just need to swipe your finger on the screen to shoot and score goals. Here are some tips on how to play the game:
-
Choose your team and mode
-
Before you start playing, you need to choose your team and mode. You can choose from different teams, such as Brazil, Argentina, Germany, France, Spain, England, and more. Each team has different stats and ratings, so choose wisely. You can also choose from different modes, such as career mode, tournament mode, challenge mode, and training mode. Each mode has different objectives and rewards, so choose according to your preference.
-
-
Swipe to shoot and score
-
Once you have chosen your team and mode, you can start playing. You will see a soccer ball on the screen, and you need to swipe your finger on it to shoot and score. You can swipe in different directions and angles to control the direction and curve of the ball. You can also swipe with different speed and force to control the power and height of the ball. You will see a target on the goal, and you need to aim for it to score. The target will change its position and size depending on the difficulty level of the game.
-
Use unlimited rewind to correct your mistakes
-
One of the best features of Soccer Super Star Football Mod APK is the unlimited rewind feature. This feature allows you to undo your shots and try again as many times as you want. This is very useful if you miss a shot or make a mistake. You can use this feature by tapping on the rewind button on the top left corner of the screen. You will see a timeline of your shots, and you can drag it back to any point you want. You can then swipe again to shoot and score.
-
Unlock new players and stadiums
-
As you play Soccer Super Star Football Mod APK, you can unlock new players and stadiums. You can unlock new players by spending coins or reaching certain levels. Each player has different skills and abilities, such as speed, power, accuracy, stamina, and more. You can also unlock new stadiums by spending coins or reaching certain levels. Each stadium has different themes and atmospheres, such as day, night, rain, snow, and more.
-
Tips and Tricks for Soccer Super Star Football Mod APK
-
If you want to master Soccer Super Star Football Mod APK, you need to know some tips and tricks that will help you improve your game. Here are some of them:
-
Aim for the corners and curves
-
One of the best ways to score goals in Soccer Super Star Football Mod APK is to aim for the corners and curves of the goal. This will make it harder for the goalkeeper to save your shots. You can do this by swiping your finger in a diagonal or curved motion on the screen. This will make the ball spin and curve in the air.
-
Use power-ups wisely
-
Soccer Super Star Football Mod APK also has some power-ups that you can use to boost your game. These power-ups include fireball, slow motion, magnet, freeze, and more. Each power-up has a different effect on the ball or the game. For example, fireball makes the ball burn and fly faster; slow motion makes the game slow down for a few seconds; magnet makes the ball attract to the target; freeze makes the goalkeeper freeze for a few seconds; and more. You can use these power-ups by tapping on them on the bottom right corner of the screen. However, be careful not to use them too often or too randomly, as they have limited uses and may not always work in your favor.
-
Watch ads to get free rewards
-
If you want to get more coins or power-ups in Soccer Super Star Football Mod APK, you can watch ads to get free rewards. You can do this by tapping on the watch ad button on the top right corner of the screen. You will see an ad pop up on your screen, and you need to watch it for a few seconds. After that, you will get some coins or power-ups as a reward. You can do this as many times as you want, but be aware that some ads may be longer or shorter than others.
-
Practice your skills in training mode
-
If you want to practice your skills in Soccer Super Star Football Mod APK, you can play in training mode. This mode allows you to play without any pressure or objectives. You can just swipe and shoot as many times as you want without worrying about time or score. You can also change the difficulty level of the game by tapping on the settings button on the top left corner of the screen. You can also change the team and stadium by tapping on the buttons on the bottom left corner of the screen. Training mode is a great way to improve your skills and have fun.
-
Pros and Cons of Soccer Super Star Football Mod APK
-
Soccer Super Star Football Mod APK is a great soccer game, but it also has some pros and cons that you should know before playing it. Here are some of them:
-
Pros
-
-
Simple and intuitive gameplay. You just need to swipe your finger on the screen to shoot and score goals. The game is easy to play, but hard to master.
-
Realistic graphics and physics. The game has high-quality graphics and realistic physics that make the game more immersive and enjoyable. You can see the ball spin and curve in the air, the goalkeeper react and save your shots, and the crowd cheer and boo.
-
Various teams and modes to choose from. You can choose from different teams, such as Brazil, Argentina, Germany, France, Spain, England, and more. Each team has different stats and ratings, so choose wisely. You can also choose from different modes, such as career mode, tournament mode, challenge mode, and training mode. Each mode has different objectives and rewards, so choose according to your preference.
-
Unlimited rewind feature. This feature allows you to undo your shots and try again as many times as you want. This is very useful if you miss a shot or make a mistake. You can use this feature by tapping on the rewind button on the top left corner of the screen.
-
-
Cons
-
-
Repetitive gameplay after a while. The game can get repetitive and boring after a while, as you play the same scenarios and challenges over and over again. The game lacks variety and innovation in its gameplay.
-
Ads can be annoying. The game has ads that pop up on your screen every now and then. These ads can be annoying and distracting, especially when you are in the middle of a match or a shot. You can remove ads by downloading the mod version of the game or by paying a small fee.
-
Some bugs and glitches may occur. The game is not perfect, and it may have some bugs and glitches that affect its performance and quality. For example, some users have reported that the game crashes or freezes sometimes, or that the ball goes through the goalkeeper or the goalpost.
-
-
Conclusion
-
Soccer Super Star Football Mod APK is a fun and simple soccer game that lets you swipe to shoot and score amazing goals. It has simple and intuitive gameplay, realistic graphics and physics, various teams and modes to choose from, and unlimited rewind feature. However, it also has some cons, such as repetitive gameplay after a while, ads can be annoying, and some bugs and glitches may occur. Overall, Soccer Super Star Football Mod APK is a great soccer game that you should try if you love soccer or want to have some fun.
-
Do you want to download Soccer Super Star Football Mod APK? If yes, then follow the steps we mentioned above to download and install it on your Android device. If no, then what are you waiting for? Download it now and enjoy playing soccer like never before!
-
FAQs
-
Here are some frequently asked questions about Soccer Super Star Football Mod APK:
-
-
Is Soccer Super Star Football Mod APK safe to download?
-
Yes, as long as you download it from a trusted source. You can find many websites that offer the mod version of Soccer Super Star Football for free. However, be careful not to download from shady or malicious sites that may harm your device or steal your data. We recommend you to download from [this link], which is safe and reliable.
-
What is the difference between Soccer Super Star Football Mod APK and the original version?
-
The mod version gives you some extra benefits that the original version does not. For example, you can enjoy unlimited rewind, which allows you to undo your shots and try again as many times as you want. You can also get unlimited coins, which you can use to buy new players and stadiums. The mod version also removes ads, which can be annoying and distracting in the original version.
-
How can I get more coins in Soccer Super Star Football Mod APK?
-
You can get more coins by winning matches, completing achievements, or watching ads. You can also use the mod version of the game, which gives you unlimited coins. You can use coins to buy new players and stadiums, or to upgrade your skills and power-ups.
-
How can I unlock new players and stadiums in Soccer Super Star Football Mod APK?
-
You can unlock new players and stadiums by spending coins or reaching certain levels. Each player and stadium has a different price and level requirement. You can see the details by tapping on the shop button on the bottom right corner of the screen. You can also use the mod version of the game, which gives you all the players and stadiums unlocked.
-
Can I play Soccer Super Star Football Mod APK offline?
-
Yes, you can play Soccer Super Star Football Mod APK offline without an internet connection. However, you will not be able to access some features, such as watching ads, getting rewards, or updating the game. You will also not be able to play in tournament mode or challenge mode, which require an internet connection.
-
-
I hope this article has helped you learn more about Soccer Super Star Football Mod APK. If you have any questions or feedback, please leave a comment below. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download CSR Racing 2 MOD APK for iOS and Android Free Shopping and More.md b/spaces/1phancelerku/anime-remove-background/Download CSR Racing 2 MOD APK for iOS and Android Free Shopping and More.md
deleted file mode 100644
index 8a82545bb57742b1d8304825c9bfa864bc52f12a..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download CSR Racing 2 MOD APK for iOS and Android Free Shopping and More.md
+++ /dev/null
@@ -1,92 +0,0 @@
-
-
CSR Racing 2 Mod APK iOS 2022: How to Download and Install It
-
If you are a fan of car racing games, you must have heard of CSR Racing 2. It is one of the most popular and realistic racing games on mobile devices. It offers you a chance to race with some of the most amazing cars in the world, customize them to your liking, compete with other players online, join crews, chat with friends, and much more.
But what if you want to enjoy all these features without spending any money or waiting for hours to refill your fuel? What if you want to unlock all the cars and upgrades without grinding for hours? What if you want to have unlimited resources to enjoy the game to the fullest?
-
Well, there is a way to do that. It is called CSR Racing 2 Mod APK. It is a modified version of the original game that gives you access to all the features and resources that you want. You can download and install it on your iOS device easily and safely. In this article, we will tell you everything you need to know about CSR Racing 2 Mod APK on iOS devices, including its features, benefits, compatibility, security, and installation process. So, let's get started!
-
What is CSR Racing 2 and why is it popular?
-
CSR Racing 2 is a racing game developed by NaturalMotionGames Ltd and published by Zynga. It was released in 2016 for Android and iOS devices. It is the sequel to the popular CSR Racing game that was released in 2012.
-
-
A realistic and immersive racing game
-
One of the main reasons CSR Racing 2 is so popular is its realistic and immersive presentation and gameplay. The game uses 3D rendering techniques to create stunning visuals that make you feel like you are actually driving the cars, and its physics simulate how the cars behave on different terrains and in different conditions. The sound effects match the engine notes, tire screeches, collisions, and other noises of the cars, and a variety of gameplay modes and events keeps you entertained and challenged.
-
The game allows you to choose from over 200 licensed cars from some of the most famous brands in the world, such as Ferrari, Lamborghini, Bugatti, McLaren, Pagani, Koenigsegg, and more. You can also customize your cars with different paint jobs, decals, rims, spoilers, nitrous, and other parts. You can also tune your cars to improve their performance and stats.
-
A competitive and social racing game
-
Another reason CSR Racing 2 is so popular is its competitive and social features. The game has an online multiplayer mode where you can race against other players from around the world in real time. You can join or create crews with your friends or other players and compete with other crews for rewards and glory, chat with crew members and other players in the game, and challenge other players to duels or accept challenges from them.
-
The game also has a reward system that gives you money, keys, gold, fuel, and other items for completing races, events, achievements, and rankings. You can use these items to buy new cars, upgrade your existing cars, refill your fuel, or enter special events. The game also has a ranking system that ranks you based on your performance and achievements in the game. You can climb up the ranks and earn more rewards and recognition.
-
What is CSR Racing 2 Mod APK and what are its features?
-
CSR Racing 2 Mod APK is a modified version of the original CSR Racing 2 game that gives you access to all the features and resources that you want in the game. It is not an official version of the game, but it is created by third-party developers who modify the original game files to unlock or add new features.
-
A modified version of CSR Racing 2 with unlimited resources
-
One of the main benefits of using CSR Racing 2 Mod APK is that it gives you unlimited resources in the game. This means that you can have unlimited money, keys, gold, fuel, and other items in the game without spending any real money or waiting for hours to refill your fuel. You can use these resources to buy any car you want, upgrade it to the max level, enter any event you want, or refill your fuel anytime you want.
-
Another benefit of using CSR Racing 2 Mod APK is that it gives you access to some new features that are not available in the original game. For example, some CSR Racing 2 Mod APK versions allow you to unlock all the cars in the game without having to complete any requirements or missions. Some versions also allow you to use nitrous anytime you want without having to wait for it to recharge. Some versions also allow you to disable ads or enable cheats in the game.
-
A safe and easy way to enjoy CSR Racing 2 without restrictions
-
Another benefit of using CSR Racing 2 Mod APK is that it is a safe and easy way to enjoy CSR Racing 2 without any restrictions or limitations. You don't have to worry about any viruses, malware, or spyware that might harm your device or compromise your privacy. You also don't have to worry about any bans or suspensions from the game developers or publishers. You can download and install CSR Racing 2 Mod APK on your iOS device easily and safely using a third-party app store called Panda Helper. Panda Helper is a trusted and reliable app store that offers thousands of modded and hacked apps and games for iOS devices. You can download and install Panda Helper on your iOS device without jailbreaking it or using a computer.
-
How to download and install CSR Racing 2 Mod APK on iOS devices?
-
If you want to download and install CSR Racing 2 Mod APK on your iOS device, you need to follow these simple steps:
-
A step-by-step guide to download and install CSR Racing 2 Mod APK on iOS devices
-
Here is a step-by-step guide to download and install CSR Racing 2 Mod APK on iOS devices using Panda Helper:
-
-
Open Safari browser on your iOS device and go to the official website of Panda Helper: https://www.pandahelp.vip/
-
Tap on the "Download Free Version" button and then tap on "Install" when prompted.
-
Wait for the installation to finish and then go to Settings > General > Profiles & Device Management and trust the profile of Panda Helper.
-
Launch Panda Helper from your home screen and search for "CSR Racing 2 Mod" in the search bar.
-
Tap on the "Get" button next to the CSR Racing 2 Mod app and then tap on "Install" when prompted.
-
Wait for the installation to finish and then go to Settings > General > Profiles & Device Management and trust the profile of CSR Racing 2 Mod.
-
Launch CSR Racing 2 Mod from your home screen and enjoy the game with unlimited resources and features.
-
-
A table to summarize the steps to download and install CSR Racing 2 Mod APK on iOS devices
-
Here is a table to summarize the steps to download and install CSR Racing 2 Mod APK on iOS devices using Panda Helper:
| Step number | Action | Screenshot | Explanation |
| --- | --- | --- | --- |
| 1 | Open Safari browser on your iOS device and go to the official website of Panda Helper: https://www.pandahelp.vip/ | | Panda Helper is a third-party app store that offers modded and hacked apps and games for iOS devices. |
| 2 | Tap on the "Download Free Version" button and then tap on "Install" when prompted. | | This will download and install Panda Helper on your iOS device. |
| 3 | Wait for the installation to finish and then go to Settings > General > Profiles & Device Management and trust the profile of Panda Helper. | | This will allow you to run Panda Helper on your iOS device without any issues. |
| 4 | Launch Panda Helper from your home screen and search for "CSR Racing 2 Mod" in the search bar. | | This will show you the CSR Racing 2 Mod app that you can download and install on your iOS device. |
| 5 | Tap on the "Get" button next to the CSR Racing 2 Mod app and then tap on "Install" when prompted. | | This will download and install CSR Racing 2 Mod on your iOS device. |
| 6 | Wait for the installation to finish and then go to Settings > General > Profiles & Device Management and trust the profile of CSR Racing 2 Mod. | | This will allow you to run CSR Racing 2 Mod on your iOS device without any issues. |
| 7 | Launch CSR Racing 2 Mod from your home screen and enjoy the game with unlimited resources and features. | | This will let you play CSR Racing 2 with unlimited money, keys, gold, fuel, nitrous, cars, upgrades, etc. |
Conclusion
-
In conclusion, CSR Racing 2 is a great racing game that offers you a realistic and immersive experience of driving some of the most amazing cars in the world. It also lets you compete and socialize with other players online, join crews, chat with friends, and earn rewards and rankings. However, if you want to enjoy all these features without any limitations or restrictions, you can try CSR Racing 2 Mod APK on your iOS device. CSR Racing 2 Mod APK is a modified version of the original game that gives you unlimited resources and features in the game. You can download and install it on your iOS device easily and safely using Panda Helper, a third-party app store that offers modded and hacked apps and games for iOS devices. You can follow the step-by-step guide and the table above to download and install CSR Racing 2 Mod APK on your iOS device. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave them in the comments section below. Thank you for reading and happy racing!
-
FAQs
-
Here are some frequently asked questions about CSR Racing 2 Mod APK on iOS devices with brief answers:
-
-
Q: Is CSR Racing 2 Mod APK safe to use?
-
A: Yes, CSR Racing 2 Mod APK is safe to use as long as you download it from a trusted source like Panda Helper. It does not contain any viruses, malware, or spyware that might harm your device or compromise your privacy.
-
Q: Is CSR Racing 2 Mod APK compatible with my iOS device?
-
A: Yes, CSR Racing 2 Mod APK is compatible with most iOS devices that can run the original CSR Racing 2 game. However, you may need to update your iOS version or free up some storage space on your device before installing it.
-
Q: Will I get banned or suspended from CSR Racing 2 if I use CSR Racing 2 Mod APK?
-
A: No, you will not get banned or suspended from CSR Racing 2 if you use CSR Racing 2 Mod APK. However, you should use it at your own risk and discretion, as the game developers or publishers may not approve of it.
-
Q: Can I play online multiplayer mode with CSR Racing 2 Mod APK?
-
A: Yes, you can play online multiplayer mode with CSR Racing 2 Mod APK. However, you may face some issues or glitches while playing with other players who are using the original game or a different version of the mod.
-
Q: Can I update CSR Racing 2 Mod APK to the latest version?
-
A: Yes, you can update CSR Racing 2 Mod APK to the latest version by following the same steps as downloading and installing it. However, you may need to uninstall the previous version of the mod before installing the new one.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Cars Movie for Free A Step-by-Step Guide.md b/spaces/1phancelerku/anime-remove-background/Download Cars Movie for Free A Step-by-Step Guide.md
deleted file mode 100644
index 254a3d3d34aa80ca82e133e2781f9a95235dc2a4..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Cars Movie for Free A Step-by-Step Guide.md
+++ /dev/null
@@ -1,281 +0,0 @@
-
-
How to Download Cars Movie Legally and Safely
-
Cars is a 2006 animated comedy film produced by Pixar Animation Studios and distributed by Walt Disney Pictures. It tells the story of a hotshot race car named Lightning McQueen who gets stranded in a small town called Radiator Springs and learns the true meaning of friendship and family. The film features the voices of Owen Wilson, Paul Newman, Bonnie Hunt, Larry the Cable Guy, and many others.
If you are a fan of Cars or want to watch it for the first time, you might be wondering how to download it to your computer or mobile device. There are many ways to download movies online, but not all of them are legal or safe. In this article, we will show you how to download Cars movie legally and safely from different sources, such as streaming services, torrent sites, and free movie sites. We will also give you some tips on how to avoid viruses, malware, ads, and pop-ups when downloading movies.
-
Introduction
-
What is Cars Movie and Why You Should Watch It
-
Cars is a Pixar film that was released in 2006 and became one of the most successful animated movies of all time. It won the Golden Globe Award for Best Animated Feature Film and was nominated for two Academy Awards for Best Animated Feature and Best Original Song. It also spawned two sequels, Cars 2 (2011) and Cars 3 (2017), as well as several spin-offs, shorts, video games, merchandise, and theme park attractions.
-
Cars is a movie that appeals to both children and adults, as it combines humor, adventure, romance, drama, and action. It also features stunning animation, memorable characters, catchy songs, and a heartwarming message about finding your true self and your true friends. If you love cars, racing, or animation, you will definitely enjoy watching Cars.
-
-
The Risks of Downloading Movies Illegally
-
Before we show you how to download Cars movie legally and safely, we want to warn you about the risks of downloading movies illegally. Illegal downloading is the act of obtaining or sharing copyrighted material without the permission of the owner or the law. This includes movies, music, games, software, books, and any other digital content.
-
Downloading movies illegally can have serious consequences for you and your device. Some of the risks are:
-
-
You can face legal action from the copyright owner or the authorities. Depending on your country's laws, you can be fined, sued, or even jailed for piracy.
-
You can expose your device to viruses, malware, spyware, ransomware, or other harmful programs that can damage your data, steal your information, or lock your device until you pay a ransom.
-
You can compromise your online security and privacy by revealing your IP address, location, browsing history, or personal information to hackers, trackers, or advertisers.
-
You can waste your time, bandwidth, and storage space by downloading low-quality, incomplete, or fake files that do not match what you are looking for.
-
-
As you can see, downloading movies illegally is not worth the risk. That is why we recommend you to use legal and safe methods to download Cars movie, which we will explain in the next sections.
-
How to Download Cars Movie from Streaming Services
-
One of the best ways to download Cars movie legally and safely is to use a streaming service. A streaming service is a platform that allows you to watch movies, TV shows, and other content online or offline by paying a monthly or annual fee. Some of the most popular streaming services are Netflix, Amazon Prime Video, Hulu, Disney+, HBO Max, and Apple TV+.
-
Streaming services offer many benefits for movie lovers, such as:
-
-
You can access a large library of movies and shows in different genres, languages, and regions.
-
You can watch movies and shows in high-definition (HD), ultra-high-definition (UHD), or 4K resolution, depending on your device and internet speed.
-
You can download movies and shows to your device and watch them offline without using data or Wi-Fi.
-
You can watch movies and shows on multiple devices, such as computers, smartphones, tablets, smart TVs, gaming consoles, or streaming devices.
-
You can create multiple profiles for different users and customize your preferences, recommendations, and watch history.
-
You can enjoy exclusive content that is only available on the streaming service.
-
-
However, streaming services also have some drawbacks, such as:
-
-
You have to pay a monthly or annual fee to use the service. The fee may vary depending on the plan you choose, the number of screens you can watch on simultaneously, and the availability of certain features.
-
You need a stable and fast internet connection to stream or download movies and shows. If your internet is slow or unreliable, you may experience buffering, lagging, or poor quality.
-
You may not find the movie or show you want to watch on the streaming service. Streaming services have different catalogs that change over time due to licensing agreements with studios and distributors.
-
You may face geo-restrictions that prevent you from accessing certain content based on your location. Streaming services have different libraries for different countries due to legal and cultural reasons.
-
-
In this section, we will focus on two of the most popular streaming services that offer Cars movie: Netflix and Amazon Prime Video. We will show you how to download Cars movie from each of them and compare their pros and cons.
-
Netflix
-
Netflix is the world's leading streaming service with over 200 million subscribers in more than 190 countries. It offers a wide range of movies and shows in various genres and languages. It also produces original content that is exclusive to Netflix, such as Stranger Things, The Crown, The Witcher, Black Mirror, and more.
-
Steps to Download Cars Movie from Netflix
-
To download Cars movie from Netflix, you need to follow these steps:
-
-
Sign up for a Netflix account if you don't have one. You can choose from three plans: Basic ($8.99 per month), Standard ($13.99 per month), or Premium ($17.99 per month). The Basic plan allows you to watch on one screen at a time in standard definition (SD), the Standard plan allows you to watch on two screens at a time in high definition (HD), and the Premium plan allows you to watch on four screens at a time in HD or 4K.
-
Download the Netflix app on your device. You can download it from the App Store for iOS devices, the Google Play Store for Android devices, or the Microsoft Store for Windows devices. You can also access Netflix from your web browser, but you cannot download movies or shows from there.
-
Open the Netflix app and sign in with your account. You can browse the content by categories, genres, recommendations, or search for a specific title.
-
Find Cars movie on Netflix. You can use the search function or look for it in the Animation, Comedy, or Family categories. You can also check if Cars movie is available on Netflix in your country by using a website like unogs.com or flixwatch.co.
-
Tap on the download icon next to the play button. The download icon looks like a downward arrow with a horizontal line below it. If you don't see the download icon, it means that the movie is not available for download.
-
Wait for the movie to download to your device. You can check the progress of the download by tapping on the downloads icon at the bottom of the screen. The downloads icon looks like a downward arrow with a circle around it.
-
Enjoy watching Cars movie offline. You can find the downloaded movie in the downloads section of the app. You can watch it as many times as you want without using data or Wi-Fi.
-
-
Pros and Cons of Netflix
-
Netflix is a great streaming service for downloading Cars movie, but it also has some pros and cons that you should consider:
-
-
-
| Pros | Cons |
| --- | --- |
| Netflix has a large and diverse library of movies and shows, including original and exclusive content. | Netflix requires a subscription fee to use the service, which may not be affordable for some users. |
| Netflix allows you to download movies and shows to your device and watch them offline without using data or Wi-Fi. | Netflix limits the number of devices and screens you can watch on simultaneously, depending on your plan. |
| Netflix offers high-quality video and audio, as well as subtitles and dubbing options for different languages. | Netflix does not have all the movies and shows you may want to watch, as some of them may be unavailable or removed due to licensing agreements. |
| Netflix is compatible with most devices and platforms, such as computers, smartphones, tablets, smart TVs, gaming consoles, or streaming devices. | Netflix may have geo-restrictions that prevent you from accessing certain content based on your location, unless you use a VPN service. |
-
-
Amazon Prime Video
-
Amazon Prime Video is another popular streaming service that offers a variety of movies and shows, including original and exclusive content. It is part of the Amazon Prime membership, which also includes free shipping, music streaming, e-books, and more. You can also rent or buy movies and shows that are not included in the Prime Video catalog.
-
Steps to Download Cars Movie from Amazon Prime Video
-
To download Cars movie from Amazon Prime Video, you need to follow these steps:
-
-
Sign up for an Amazon Prime account if you don't have one. You can get a 30-day free trial and then pay $12.99 per month or $119 per year. You can also sign up for a Prime Video-only account for $8.99 per month.
-
Download the Prime Video app on your device. You can download it from the App Store for iOS devices, the Google Play Store for Android devices, or the Microsoft Store for Windows devices. You can also access Prime Video from your web browser, but you cannot download movies or shows from there.
-
Open the Prime Video app and sign in with your account. You can browse the content by categories, genres, recommendations, or search for a specific title.
-
Find Cars movie on Prime Video. You can use the search function or look for it in the Animation, Comedy, or Family categories. You can also check if Cars movie is available on Prime Video in your country by using a website like justwatch.com or reelgood.com.
-
Tap on the download icon next to the play button. The download icon looks like a downward arrow with a horizontal line below it. If you don't see the download icon, it means that the movie is not available for download.
-
Wait for the movie to download to your device. You can check the progress of the download by tapping on the downloads icon at the bottom of the screen. The downloads icon looks like a downward arrow with a circle around it.
-
Enjoy watching Cars movie offline. You can find the downloaded movie in the downloads section of the app. You can watch it as many times as you want without using data or Wi-Fi.
-
-
Pros and Cons of Amazon Prime Video
-
Amazon Prime Video is another great streaming service for downloading Cars movie, but it also has some pros and cons that you should consider:
-
-
-
| Pros | Cons |
| --- | --- |
| Amazon Prime Video has a large and diverse library of movies and shows, including original and exclusive content. | Amazon Prime Video requires a subscription fee to use the service, which may not be affordable for some users. |
| Amazon Prime Video allows you to download movies and shows to your device and watch them offline without using data or Wi-Fi. | Amazon Prime Video limits the number of devices and titles you can download at a time, depending on your location and account type. |
| Amazon Prime Video offers high-quality video and audio, as well as subtitles and dubbing options for different languages. | Amazon Prime Video does not have all the movies and shows you may want to watch, as some of them may be unavailable or removed due to licensing agreements. |
| Amazon Prime Video is compatible with most devices and platforms, such as computers, smartphones, tablets, smart TVs, gaming consoles, or streaming devices. | Amazon Prime Video may have geo-restrictions that prevent you from accessing certain content based on your location, unless you use a VPN service. |
-
-
How to Download Cars Movie from Torrent Sites
-
Another way to download Cars movie is to use a torrent site. A torrent site is a website that hosts torrent files, which are small files that contain information about the content you want to download, such as the name, size, type, and location of the files. You can use a torrent site to find and download movies, music, games, software, books, and any other digital content.
-
What are Torrents and How They Work
-
Torrents are a peer-to-peer (P2P) file-sharing technology that allows users to download and share files with each other without relying on a central server. Instead, users connect to each other directly and form a network of peers. Each peer has a copy of the torrent file and a part of the content file. When you download a torrent, you are downloading small pieces of the content file from different peers. When you upload a torrent, you are sharing the pieces of the content file that you have with other peers.
-
Torrents work by using a BitTorrent protocol, which is a set of rules and commands that enable the communication and coordination between peers. The BitTorrent protocol uses trackers, which are servers that help peers find each other and exchange information. The BitTorrent protocol also uses seeds and leechers, which are terms that describe the status of peers in the network. A seed is a peer that has the complete content file and is uploading it to other peers. A leecher is a peer that does not have the complete content file and is downloading it from other peers.
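To make the piece-by-piece idea concrete, here is a toy Python sketch. It is not a real BitTorrent client and does not talk to any tracker or peer; it only simulates the core loop: the file is split into pieces, each simulated peer supplies some of them, and every piece is checked against its expected SHA-1 hash (normally shipped in the .torrent metadata) before the file is reassembled.

```python
# Toy illustration of piece-based downloading (not a real BitTorrent client).
import hashlib

PIECE_SIZE = 4  # bytes; real torrents use much larger pieces (e.g. 256 KiB or more)

original = b"cars-movie-demo-payload!"                       # stand-in for the shared file
pieces = [original[i:i + PIECE_SIZE] for i in range(0, len(original), PIECE_SIZE)]
expected = [hashlib.sha1(p).hexdigest() for p in pieces]     # hashes from the .torrent metadata

# Each simulated peer only has some pieces (piece index -> piece data).
peers = [
    {0: pieces[0], 3: pieces[3]},
    {1: pieces[1], 4: pieces[4]},
    {2: pieces[2], 5: pieces[5]},
]

assembled = [None] * len(pieces)
for peer in peers:
    for index, data in peer.items():
        if hashlib.sha1(data).hexdigest() == expected[index]:
            assembled[index] = data                          # accept only verified pieces

assert b"".join(assembled) == original
print("all pieces verified and reassembled")
```

Because every piece is hashed, a corrupted or malicious piece from one peer can simply be rejected and re-requested from another peer, which is also why a healthy number of seeds matters for both speed and reliability.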
-
How to Use a BitTorrent Client to Download Movies
-
To use torrents to download movies, you need to use a BitTorrent client, which is a software program that allows you to open, download, and upload torrent files. There are many BitTorrent clients available for different devices and platforms, such as uTorrent, BitTorrent, qBittorrent, Transmission, Vuze, Deluge, and more.
-
Steps to Download Cars Movie from a Torrent Site
-
To download Cars movie from a torrent site, you need to follow these steps:
-
-
Choose a BitTorrent client that suits your device and preferences. You can compare the features, performance, security, and reviews of different BitTorrent clients online. You can also check if the BitTorrent client is compatible with your device and operating system.
-
Download and install the BitTorrent client on your device. You can download it from the official website of the BitTorrent client or from a trusted source. You can also customize the settings of the BitTorrent client according to your needs.
-
Choose a torrent site that has Cars movie available for download. You can search for torrent sites online or use a website like torrentz2.eu or torrentfunk.com to find torrent sites that have Cars movie. You can also check the reputation, popularity, and safety of torrent sites online.
-
Find Cars movie on the torrent site. You can use the search function or browse by categories or genres. You can also check the details of the torrent file, such as the name, size, type, quality, seeds, leechers, comments, and ratings.
-
Download the torrent file or copy the magnet link of Cars movie. The torrent file is a small file that contains information about the content file. The magnet link is a URI that carries the same identifying information and allows you to start the download without using a torrent file (see the short parsing sketch after these steps for what a magnet link contains).
-
Open the torrent file or paste the magnet link in your BitTorrent client. The BitTorrent client will start downloading Cars movie from different peers in the network. You can check the progress of the download by looking at the speed, time remaining, percentage completed, and amount downloaded.
-
Wait for the movie to download to your device. You can choose where to save the movie on your device or let the BitTorrent client choose for you. You can also pause or resume the download at any time.
-
Enjoy watching Cars movie offline. You can find the downloaded movie in the folder you chose or in the default folder of your BitTorrent client. You can watch it as many times as you want without using data or Wi-Fi.
-
-
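To show what a magnet link actually carries, here is a short Python sketch that pulls the info hash, display name, and tracker addresses out of a magnet URI using only the standard library. The magnet link in the example is invented for illustration, not a real torrent.

```python
# A magnet link is just a URI whose query string identifies the torrent.
# This sketch extracts the three most common fields with the standard library:
# xt (the info hash), dn (the display name) and tr (tracker addresses).
# The magnet link below is an invented example, not a real torrent.
from urllib.parse import urlparse, parse_qs

def parse_magnet(link: str) -> dict:
    query = parse_qs(urlparse(link).query)
    return {
        "info_hash": query.get("xt", [""])[0].split("urn:btih:")[-1],
        "name": query.get("dn", [""])[0],
        "trackers": query.get("tr", []),
    }

example = ("magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
           "&dn=Example+Movie+%282006%29"
           "&tr=http%3A%2F%2Ftracker.example.com%2Fannounce")
print(parse_magnet(example))
# {'info_hash': '0123456789abcdef0123456789abcdef01234567',
#  'name': 'Example Movie (2006)',
#  'trackers': ['http://tracker.example.com/announce']}
```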
Pros and Cons of Torrents
-
Torrents are a convenient and fast way to download movies, but they also have some pros and cons that you should consider:
-
| Pros | Cons |
| --- | --- |
| Torrents allow you to download movies for free without paying any subscription or registration fee. | Torrents are illegal in many countries and regions due to copyright infringement and piracy laws. |
| Torrents offer a wide range of movies and shows in different genres, languages, and regions that may not be available on streaming services. | Torrents expose your device to viruses, malware, spyware, ransomware, or other harmful programs that can damage your data, steal your information, or lock your device until you pay a ransom. |
| Torrents provide high-quality video and audio, as well as subtitles and dubbing options for different languages. | Torrents compromise your online security and privacy by revealing your IP address, location, browsing history, or personal information to hackers, trackers, or advertisers. |
| Torrents are compatible with most devices and platforms, such as computers, smartphones, tablets, smart TVs, gaming consoles, and streaming devices. | Torrents depend on the availability and generosity of peers in the network. If there are not enough seeds or too many leechers, the download may be slow, low quality, or fail entirely. |
-
How to Protect Yourself from Viruses and Malware When Using Torrents
-
As we mentioned before, torrents can be risky for your device and your online safety. However, there are some ways to protect yourself from viruses and malware when using torrents. Here are some tips:
-
Use a VPN Service
-
A VPN service is a virtual private network that encrypts your internet traffic and hides your IP address and location from anyone who tries to monitor or track you online. A VPN service can help you get around geo-restrictions, censorship, and surveillance when using torrents, and it can prevent hackers, trackers, or advertisers from accessing your data or information. Keep in mind, however, that a VPN does not make downloading copyrighted content legal.
-
To use a VPN service, you need to sign up for a VPN account and download and install the VPN app on your device. You can choose from many VPN services available online, such as NordVPN, ExpressVPN, Surfshark, CyberGhost, or IPVanish. You can also compare the features, performance, security, and reviews of different VPN services online.
-
Once you have the VPN app on your device, you need to connect to a VPN server of your choice. The VPN server will assign you a new IP address and location that will mask your real ones. You can then use the torrent site and the BitTorrent client as usual. The VPN service will encrypt your internet traffic and protect you from viruses and malware.
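One quick way to confirm that the VPN is really masking your address is to compare your public IP before and after connecting. The sketch below assumes a public "what is my IP" endpoint such as api.ipify.org, which returns your address as plain text; any similar service would work.

```python
# Compare your public IP before and after connecting to the VPN.
# api.ipify.org is assumed here as one public "what is my IP" endpoint
# that returns the address as plain text; any similar service works.
import requests

def public_ip(timeout: float = 5.0) -> str:
    response = requests.get("https://api.ipify.org", timeout=timeout)
    response.raise_for_status()
    return response.text.strip()

ip_before = public_ip()
input("Now connect your VPN and press Enter... ")
ip_after = public_ip()

if ip_before == ip_after:
    print(f"Warning: public IP is still {ip_after}; the VPN is not masking you.")
else:
    print(f"Public IP changed from {ip_before} to {ip_after}.")
```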
-
Scan the Downloaded File with an Antivirus Program
-
An antivirus program is a software program that detects and removes viruses and malware from your device. An antivirus program can help you prevent or fix any damage caused by viruses and malware when using torrents. It can also alert you of any suspicious or malicious files or programs on your device.
-
To use an antivirus program, you need to download and install the antivirus program on your device. You can choose from many antivirus programs available online, such as Avast, AVG, Kaspersky, McAfee, or Norton. You can also compare the features, performance, security, and reviews of different antivirus programs online.
-
Once you have the antivirus program on your device, you need to scan the downloaded file with the antivirus program before opening it. The antivirus program will scan the file and detect any viruses or malware that may be hidden in it. If the file is clean, you can open it and watch Cars movie. If the file is infected, you can delete it and look for another torrent.
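Alongside the antivirus scan, you can also verify that the file you received matches what the uploader published by comparing checksums, when one is provided. The sketch below computes a SHA-256 hash with Python's standard hashlib module; the file name and expected hash are placeholders.

```python
# Complementary check, not a replacement for an antivirus scan: hash the
# downloaded file and compare it with a checksum published by the uploader
# (when one is available). The file name and expected hash are placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):   # hash the file in 1 MiB pieces
            digest.update(chunk)
    return digest.hexdigest()

expected = "paste_the_published_sha256_here"
actual = sha256_of("downloaded_movie.mp4")
print("Checksums match" if actual == expected.lower() else f"Mismatch: {actual}")
```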
-
How to Download Cars Movie from Free Movie Sites
-
A third way to download Cars movie is to use a free movie site. A free movie site is a website that allows you to watch movies online or offline without paying any fee or registration. You can use a free movie site to find and download movies in different genres, languages, and regions.
-
What are Free Movie Sites and How They Work
-
Free movie sites are websites that host or link to movies that are uploaded by users or third parties. Free movie sites do not have the legal rights or licenses to distribute the movies they offer. They rely on advertising revenue or donations to maintain their servers and domains.
-
Free movie sites work by using streaming or downloading technology. Streaming technology allows you to watch movies online without downloading them to your device. You can watch movies in real time as they are transmitted from the server to your device. Downloading technology allows you to download movies to your device and watch them offline without using data or Wi-Fi. You can download movies as whole files or as small pieces that are joined together.
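To picture how downloading "small pieces that are joined together" works over plain HTTP, here is a sketch of a chunked download using the requests library. The URL and output file name are placeholders for illustration only.

```python
# A chunked ("streaming") HTTP download with the requests library: the file
# arrives in small pieces that are appended to disk one after another.
# The URL and output file name are placeholders for illustration.
import requests

def download(url: str, destination: str, chunk_size: int = 1 << 20) -> None:
    with requests.get(url, stream=True, timeout=30) as response:
        response.raise_for_status()
        total = int(response.headers.get("Content-Length", 0))
        received = 0
        with open(destination, "wb") as f:
            for chunk in response.iter_content(chunk_size=chunk_size):
                f.write(chunk)                 # append this piece to the file
                received += len(chunk)
                if total:
                    print(f"\rDownloaded {received / total:6.1%}", end="")
    print()

download("https://example.com/videos/sample-trailer.mp4", "sample-trailer.mp4")
```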
-
How to Find and Use a Free Movie Site to Download Movies
-
To use a free movie site to download movies, you need to follow these steps:
-
-
Choose a free movie site that has Cars movie available for download. You can search for free movie sites online or use a website like alluc.co or yidio.com to find free movie sites that have Cars movie. You can also check the reputation, popularity, and safety of free movie sites online.
-
Find Cars movie on the free movie site. You can use the search function or browse by categories or genres. You can also check the details of the movie, such as the name, size, type, quality, source, and ratings.
-
Download Cars movie from the free movie site. Depending on the free movie site, you may have different options to download Cars movie. Some of the options are:
-
-
Click on the download button or link that leads you to the movie file. The download button or link may look like a downward arrow, a disk icon, or a text that says "download".
-
Right-click on the video player and select "save video as" or "download video". The video player may look like a rectangle with a play button in the center.
-
Copy the video URL from the address bar or the video player and paste it in a video downloader website or software. The video URL may look like a long string of letters and numbers that starts with "http" or "https".
-
-
Wait for the movie to download to your device. You can check the progress of the download by looking at the speed, time remaining, percentage completed, and amount downloaded.
-
Enjoy watching Cars movie offline. You can find the downloaded movie in the folder you chose or in the default folder of your browser or downloader. You can watch it as many times as you want without using data or Wi-Fi.
-
-
Pros and Cons of Free Movie Sites
-
Free movie sites are an easy and cheap way to download movies, but they also have some pros and cons that you should consider:
-
| Pros | Cons |
| --- | --- |
| Free movie sites allow you to download movies for free without paying any subscription or registration fee. | Free movie sites are illegal in many countries and regions due to copyright infringement and piracy laws. |
| Free movie sites offer a wide range of movies and shows in different genres, languages, and regions that may not be available on streaming services. | Free movie sites expose your device to viruses, malware, spyware, ransomware, or other harmful programs that can damage your data, steal your information, or lock your device until you pay a ransom. |
| Free movie sites provide high-quality video and audio, as well as subtitles and dubbing options for different languages. | Free movie sites compromise your online security and privacy by revealing your IP address, location, browsing history, or personal information to hackers, trackers, or advertisers. |
| Free movie sites are compatible with most devices and platforms, such as computers, smartphones, tablets, smart TVs, gaming consoles, and streaming devices. | Free movie sites depend on the availability and reliability of the servers and links that host the movies. If a server or link is down, broken, or removed, the download may fail or the movie may not play. |
-
How to Avoid Ads and Pop-ups When Using Free Movie Sites
-
As we mentioned before, free movie sites rely on advertising revenue to maintain their servers and domains. The ads and pop-ups that appear on these sites can be annoying, intrusive, or even dangerous for your device and your online safety. Fortunately, there are some ways to avoid them. Here are some tips:
-
Use an Ad Blocker Extension
-
An ad blocker extension is a browser extension that blocks or removes ads and pop-ups from websites. An ad blocker extension can help you improve your browsing experience, save your bandwidth and battery life, and protect you from malicious ads and pop-ups.
-
To use an ad blocker extension, you need to download and install the ad blocker extension on your browser. You can choose from many ad blocker extensions available online, such as Adblock Plus, uBlock Origin, AdGuard, or Ghostery. You can also compare the features, performance, security, and reviews of different ad blocker extensions online.
-
Once you have the ad blocker extension on your browser, you need to enable it and customize its settings according to your preferences. You can also whitelist some websites that you want to support or that do not have annoying or harmful ads and pop-ups.
-
Use a Pop-up Blocker Extension
-
A pop-up blocker extension is a browser extension that blocks or removes pop-ups from websites. A pop-up is a new window that opens automatically when you visit a website or click on a link. A pop-up blocker extension can help you avoid unwanted or malicious pop-ups that may redirect you to other websites, download unwanted files or programs, or display inappropriate or misleading content.
-
To use a pop-up blocker extension, you need to download and install the pop-up blocker extension on your browser. You can choose from many pop-up blocker extensions available online, such as Popper Blocker, Poper Blocker, Popup Blocker Pro, or Smart Popup Blocker. You can also compare the features, performance, security, and reviews of different pop-up blocker extensions online.
-
Once you have the pop-up blocker extension on your browser, you need to enable it and customize its settings according to your preferences. You can also whitelist some websites that you want to allow pop-ups from or that do not have unwanted or malicious pop-ups.
-
Conclusion
-
Summary of the Main Points
-
In this article, we have shown you how to download Cars movie legally and safely from different sources, such as streaming services, torrent sites, and free movie sites. We have also given you some tips on how to protect yourself from viruses and malware when using torrents and how to avoid ads and pop-ups when using free movie sites.
-
Cars is a Pixar film that was released in 2006 and became one of the most successful animated movies of all time. It tells the story of a hotshot race car named Lightning McQueen who gets stranded in a small town called Radiator Springs and learns the true meaning of friendship and family. Whether you are a longtime fan or about to watch it for the first time, you now have several ways to get it onto your computer or mobile device.
-
Recommendations for the Best Way to Download Cars Movie
-
Based on our analysis, we recommend streaming services as the best way to download Cars movie legally and safely. Streaming services offer many benefits for movie lovers, such as high-quality video and audio, offline viewing, multiple device compatibility, a large and diverse library, and exclusive content. Their main drawbacks, such as the subscription fee, internet speed requirements, limited content availability, and geo-restrictions, are minor compared with the legal and security risks of torrent sites or free movie sites.
-
Among the streaming services that offer Cars movie, we suggest Netflix or Amazon Prime Video. Both have similar features and advantages, such as HD or 4K resolution, subtitles and dubbing options, multiple profiles and screens, and original and exclusive content. However, Netflix has a larger and more diverse library, while Amazon Prime Video has a cheaper membership that also bundles other Amazon Prime benefits.
-
Therefore, you can choose the streaming service that suits your preferences and budget. You can also try both of them for free for a limited time and compare their performance and quality. You can follow the steps we provided in this article to download Cars movie from Netflix or Amazon Prime Video.
-
FAQs
-
Here are some frequently asked questions about downloading Cars movie:
-
-
Is downloading Cars movie illegal?
-
Downloading Cars movie is not illegal if you use a legal and safe method, such as streaming services. However, downloading Cars movie is illegal if you use an illegal and unsafe method, such as torrent sites or free movie sites. You can face legal action from the copyright owner or the authorities if you download Cars movie illegally.
-
Is downloading Cars movie safe?
-
Downloading Cars movie is safe if you use a legal and safe method, such as streaming services. However, downloading Cars movie is not safe if you use an illegal and unsafe method, such as torrent sites or free movie sites. You can expose your device to viruses, malware, spyware, ransomware, or other harmful programs if you download Cars movie from an unsafe source.
-
How long does it take to download Cars movie?
-
The time it takes to download Cars movie depends on several factors, such as the size of the file, the speed of your internet connection, the number of seeds or peers in the network, and the method you use to download it. Generally, streaming services have faster download speeds than torrent sites or free movie sites. However, streaming services also have larger file sizes than torrent sites or free movie sites. Therefore, you can expect to download Cars movie in a few minutes to a few hours depending on your situation.
-
How much space does Cars movie take on my device?
-
The space that Cars movie takes on your device depends on the quality of the video and audio, the length of the movie, and the format of the file. Generally, streaming services have higher quality video and audio than torrent sites or free movie sites. However, streaming services also have larger file sizes than torrent sites or free movie sites. Therefore, you can expect Cars movie to take from a few hundred megabytes to a few gigabytes of space on your device depending on your choice.
-
Can I watch Cars movie on any device?
-
You can watch Cars movie on any device that supports the method you use to download it. For example, if you use a streaming service, you can watch Cars movie on any device that has the streaming app installed or can access the streaming website. If you use a torrent site or a free movie site, you can watch Cars movie on any device that has a video player that can open the file format of the movie.
-
-
I hope this article has helped you learn how to download Cars movie legally and safely. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy watching!
-
-
\ No newline at end of file
diff --git a/spaces/2ndelement/voicevox/test/test_acoustic_feature_extractor.py b/spaces/2ndelement/voicevox/test/test_acoustic_feature_extractor.py
deleted file mode 100644
index a82e7afe62eed4f1be1506d7cd34335c769d17d0..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/test/test_acoustic_feature_extractor.py
+++ /dev/null
@@ -1,266 +0,0 @@
-import os
-from pathlib import Path
-from typing import List, Type
-from unittest import TestCase
-
-from voicevox_engine.acoustic_feature_extractor import (
- BasePhoneme,
- JvsPhoneme,
- OjtPhoneme,
-)
-
-
-class TestBasePhoneme(TestCase):
- def setUp(self):
- super().setUp()
- self.str_hello_hiho = "sil k o N n i ch i w a pau h i h o d e s U sil"
- self.base_hello_hiho = [
- BasePhoneme(s, i, i + 1) for i, s in enumerate(self.str_hello_hiho.split())
- ]
- self.lab_str = """
- 0.00 1.00 pau
- 1.00 2.00 k
- 2.00 3.00 o
- 3.00 4.00 N
- 4.00 5.00 n
- 5.00 6.00 i
- 6.00 7.00 ch
- 7.00 8.00 i
- 8.00 9.00 w
- 9.00 10.00 a
- 10.00 11.00 pau
- 11.00 12.00 h
- 12.00 13.00 i
- 13.00 14.00 h
- 14.00 15.00 o
- 15.00 16.00 d
- 16.00 17.00 e
- 17.00 18.00 s
- 18.00 19.00 U
- 19.00 20.00 pau
- """.replace(
- " ", ""
- )[
- 1:-1
- ] # in the triple-quoted block above, replace every space and strip the leading and trailing "\n"
-
- def test_repr_(self):
- self.assertEqual(
- self.base_hello_hiho[1].__repr__(), "Phoneme(phoneme='k', start=1, end=2)"
- )
- self.assertEqual(
- self.base_hello_hiho[10].__repr__(),
- "Phoneme(phoneme='pau', start=10, end=11)",
- )
-
- def test_convert(self):
- with self.assertRaises(NotImplementedError):
- BasePhoneme.convert(self.base_hello_hiho)
-
- def test_duration(self):
- self.assertEqual(self.base_hello_hiho[1].duration, 1)
-
- def test_parse(self):
- parse_str_1 = "0 1 pau"
- parse_str_2 = "32.67543 33.48933 e"
- parsed_base_1 = BasePhoneme.parse(parse_str_1)
- parsed_base_2 = BasePhoneme.parse(parse_str_2)
- self.assertEqual(parsed_base_1.phoneme, "pau")
- self.assertEqual(parsed_base_1.start, 0.0)
- self.assertEqual(parsed_base_1.end, 1.0)
- self.assertEqual(parsed_base_2.phoneme, "e")
- self.assertEqual(parsed_base_2.start, 32.68)
- self.assertEqual(parsed_base_2.end, 33.49)
-
- def lab_test_base(
- self,
- file_path: str,
- phonemes: List["BasePhoneme"],
- phoneme_class: Type["BasePhoneme"],
- ):
- phoneme_class.save_lab_list(phonemes, Path(file_path))
- with open(file_path, mode="r") as f:
- self.assertEqual(f.read(), self.lab_str)
- result_phoneme = phoneme_class.load_lab_list(Path(file_path))
- self.assertEqual(result_phoneme, phonemes)
- os.remove(file_path)
-
-
-class TestJvsPhoneme(TestBasePhoneme):
- def setUp(self):
- super().setUp()
- base_hello_hiho = [
- JvsPhoneme(s, i, i + 1) for i, s in enumerate(self.str_hello_hiho.split())
- ]
- self.jvs_hello_hiho = JvsPhoneme.convert(base_hello_hiho)
-
- def test_phoneme_list(self):
- self.assertEqual(JvsPhoneme.phoneme_list[1], "I")
- self.assertEqual(JvsPhoneme.phoneme_list[14], "gy")
- self.assertEqual(JvsPhoneme.phoneme_list[26], "p")
- self.assertEqual(JvsPhoneme.phoneme_list[38], "z")
-
- def test_const(self):
- self.assertEqual(JvsPhoneme.num_phoneme, 39)
- self.assertEqual(JvsPhoneme.space_phoneme, "pau")
-
- def test_convert(self):
- converted_str_hello_hiho = " ".join([p.phoneme for p in self.jvs_hello_hiho])
- self.assertEqual(
- converted_str_hello_hiho, "pau k o N n i ch i w a pau h i h o d e s U pau"
- )
-
- def test_equal(self):
- # compare with the second element ("k") of jvs_hello_hiho
- true_jvs_phoneme = JvsPhoneme("k", 1, 2)
- # compare with an OjtPhoneme; equality is implemented in BasePhoneme, so the comparison result is True
- true_ojt_phoneme = OjtPhoneme("k", 1, 2)
-
- false_jvs_phoneme_1 = JvsPhoneme("a", 1, 2)
- false_jvs_phoneme_2 = JvsPhoneme("k", 2, 3)
- self.assertTrue(self.jvs_hello_hiho[1] == true_jvs_phoneme)
- self.assertTrue(self.jvs_hello_hiho[1] == true_ojt_phoneme)
- self.assertFalse(self.jvs_hello_hiho[1] == false_jvs_phoneme_1)
- self.assertFalse(self.jvs_hello_hiho[1] == false_jvs_phoneme_2)
-
- def test_verify(self):
- for phoneme in self.jvs_hello_hiho:
- phoneme.verify()
-
- def test_phoneme_id(self):
- jvs_str_hello_hiho = " ".join([str(p.phoneme_id) for p in self.jvs_hello_hiho])
- self.assertEqual(
- jvs_str_hello_hiho, "0 19 25 2 23 17 7 17 36 4 0 15 17 15 25 9 11 30 3 0"
- )
-
- def test_onehot(self):
- phoneme_id_list = [
- 0,
- 19,
- 25,
- 2,
- 23,
- 17,
- 7,
- 17,
- 36,
- 4,
- 0,
- 15,
- 17,
- 15,
- 25,
- 9,
- 11,
- 30,
- 3,
- 0,
- ]
- for i, phoneme in enumerate(self.jvs_hello_hiho):
- for j in range(JvsPhoneme.num_phoneme):
- if phoneme_id_list[i] == j:
- self.assertEqual(phoneme.onehot[j], True)
- else:
- self.assertEqual(phoneme.onehot[j], False)
-
- def test_parse(self):
- parse_str_1 = "0 1 pau"
- parse_str_2 = "15.32654 16.39454 a"
- parsed_jvs_1 = JvsPhoneme.parse(parse_str_1)
- parsed_jvs_2 = JvsPhoneme.parse(parse_str_2)
- self.assertEqual(parsed_jvs_1.phoneme_id, 0)
- self.assertEqual(parsed_jvs_2.phoneme_id, 4)
-
- def test_lab_list(self):
- self.lab_test_base("./jvs_lab_test", self.jvs_hello_hiho, JvsPhoneme)
-
-
-class TestOjtPhoneme(TestBasePhoneme):
- def setUp(self):
- super().setUp()
- self.str_hello_hiho = "sil k o N n i ch i w a pau h i h o d e s U sil"
- base_hello_hiho = [
- OjtPhoneme(s, i, i + 1) for i, s in enumerate(self.str_hello_hiho.split())
- ]
- self.ojt_hello_hiho = OjtPhoneme.convert(base_hello_hiho)
-
- def test_phoneme_list(self):
- self.assertEqual(OjtPhoneme.phoneme_list[1], "A")
- self.assertEqual(OjtPhoneme.phoneme_list[14], "e")
- self.assertEqual(OjtPhoneme.phoneme_list[26], "m")
- self.assertEqual(OjtPhoneme.phoneme_list[38], "ts")
- self.assertEqual(OjtPhoneme.phoneme_list[41], "v")
-
- def test_const(self):
- self.assertEqual(OjtPhoneme.num_phoneme, 45)
- self.assertEqual(OjtPhoneme.space_phoneme, "pau")
-
- def test_convert(self):
- ojt_str_hello_hiho = " ".join([p.phoneme for p in self.ojt_hello_hiho])
- self.assertEqual(
- ojt_str_hello_hiho, "pau k o N n i ch i w a pau h i h o d e s U pau"
- )
-
- def test_equal(self):
- # compare with the tenth element ("a") of ojt_hello_hiho
- true_ojt_phoneme = OjtPhoneme("a", 9, 10)
- # compare with a JvsPhoneme; equality is implemented in BasePhoneme, so the comparison result is True
- true_jvs_phoneme = JvsPhoneme("a", 9, 10)
-
- false_ojt_phoneme_1 = OjtPhoneme("k", 9, 10)
- false_ojt_phoneme_2 = OjtPhoneme("a", 10, 11)
- self.assertTrue(self.ojt_hello_hiho[9] == true_ojt_phoneme)
- self.assertTrue(self.ojt_hello_hiho[9] == true_jvs_phoneme)
- self.assertFalse(self.ojt_hello_hiho[9] == false_ojt_phoneme_1)
- self.assertFalse(self.ojt_hello_hiho[9] == false_ojt_phoneme_2)
-
- def test_verify(self):
- for phoneme in self.ojt_hello_hiho:
- phoneme.verify()
-
- def test_phoneme_id(self):
- ojt_str_hello_hiho = " ".join([str(p.phoneme_id) for p in self.ojt_hello_hiho])
- self.assertEqual(
- ojt_str_hello_hiho, "0 23 30 4 28 21 10 21 42 7 0 19 21 19 30 12 14 35 6 0"
- )
-
- def test_onehot(self):
- phoneme_id_list = [
- 0,
- 23,
- 30,
- 4,
- 28,
- 21,
- 10,
- 21,
- 42,
- 7,
- 0,
- 19,
- 21,
- 19,
- 30,
- 12,
- 14,
- 35,
- 6,
- 0,
- ]
- for i, phoneme in enumerate(self.ojt_hello_hiho):
- for j in range(OjtPhoneme.num_phoneme):
- if phoneme_id_list[i] == j:
- self.assertEqual(phoneme.onehot[j], True)
- else:
- self.assertEqual(phoneme.onehot[j], False)
-
- def test_parse(self):
- parse_str_1 = "0 1 pau"
- parse_str_2 = "32.67543 33.48933 e"
- parsed_ojt_1 = OjtPhoneme.parse(parse_str_1)
- parsed_ojt_2 = OjtPhoneme.parse(parse_str_2)
- self.assertEqual(parsed_ojt_1.phoneme_id, 0)
- self.assertEqual(parsed_ojt_2.phoneme_id, 14)
-
- def test_lab_list(self):
- self.lab_test_base("./ojt_lab_test", self.ojt_hello_hiho, OjtPhoneme)
diff --git a/spaces/801artistry/RVC801/go-applio.bat b/spaces/801artistry/RVC801/go-applio.bat
deleted file mode 100644
index 60c0c41d34a8aee5e14e744accb33d028d807245..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/go-applio.bat
+++ /dev/null
@@ -1,92 +0,0 @@
-@echo off
-setlocal
-title Start Applio
-
-:::
-::: _ _
-::: /\ | (_)
-::: / \ _ __ _ __ | |_ ___
-::: / /\ \ | '_ \| '_ \| | |/ _ \
-::: / ____ \| |_) | |_) | | | (_) |
-::: /_/ \_\ .__/| .__/|_|_|\___/
-::: | | | |
-::: |_| |_|
-:::
-:::
-
-:menu
-for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A
-
-echo [1] Start Applio
-echo [2] Start Applio (DML)
-echo [3] Start Realtime GUI (DML)
-echo [4] Start Realtime GUI (V0)
-echo [5] Start Realtime GUI (V1)
-echo.
-
-set /p choice=Select an option:
-set choice=%choice: =%
-
-cls
-echo WARNING: It's recommended to disable antivirus or firewall, as errors might occur when starting the ssl.
-pause
-
-if "%choice%"=="1" (
- cls
- echo WARNING: At this point, it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models.
- pause>nul
- echo Starting Applio...
- echo.
- runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897
- pause
- cls
- goto menu
-)
-
-if "%choice%"=="2" (
- cls
- echo Starting Applio ^(DML^)...
- echo.
- runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897 --dml
- pause
- cls
- goto menu
-)
-
-if "%choice%"=="3" (
- cls
- echo Starting Realtime GUI ^(DML^)...
- echo.
- runtime\python.exe gui_v1.py --pycmd runtime\python.exe --dml
- pause
- cls
- goto menu
-)
-
-if "%choice%"=="4" (
- cls
- echo Starting Realtime GUI ^(V0^)...
- echo.
- runtime\python.exe gui_v0.py
- pause
- cls
- goto menu
-)
-
-if "%choice%"=="5" (
- cls
- echo Starting Realtime GUI ^(V1^)...
- echo.
- runtime\python.exe gui_v1.py
- pause
- cls
- goto menu
-)
-
-cls
-echo Invalid option. Please enter a number from 1 to 5.
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
diff --git a/spaces/A666sxr/Genshin_TTS/modules.py b/spaces/A666sxr/Genshin_TTS/modules.py
deleted file mode 100644
index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000
--- a/spaces/A666sxr/Genshin_TTS/modules.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 0."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/AI-Chatbot-Master/Chatbots/README.md b/spaces/AI-Chatbot-Master/Chatbots/README.md
deleted file mode 100644
index 275edc7b5a6e57869eb7b3cb7a25e3e238752a2c..0000000000000000000000000000000000000000
--- a/spaces/AI-Chatbot-Master/Chatbots/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Chatbots
-emoji: 📚
-colorFrom: yellow
-colorTo: red
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AI-ZTH-03-23/2.Streamlit.GraphViz.Dynamic.Architecture.Diagram/app.py b/spaces/AI-ZTH-03-23/2.Streamlit.GraphViz.Dynamic.Architecture.Diagram/app.py
deleted file mode 100644
index b79be955c31e3110b385accb4078915ad952a3d3..0000000000000000000000000000000000000000
--- a/spaces/AI-ZTH-03-23/2.Streamlit.GraphViz.Dynamic.Architecture.Diagram/app.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import streamlit as st
-from graphviz import Digraph
-
-
-st.markdown("""
-Prompt:
-Create an interactive streamlit graph builder using the graphviz diagram model language and the streamlit feature: st.graphviz_chart(figure_or_dot, use_container_width=False) to show an azure cloud architecture model including the top ten architecture components for python full stack development for web, api, ml, models, datasets torch, transformers, streamlit, azure docker and kubernetes pods for scaling
-
-""")
-
-# Dot demo:
-import streamlit as st
-
-# Define the default graphviz DOT string
-default_dot = """
-digraph G {
- rankdir=LR
- node [shape=box]
- WebApp -> API
- API -> Models
- API -> Datasets
- Models -> Torch
- Models -> Transformers
- WebApp -> Streamlit
- Streamlit -> Azure
- Azure -> Docker
- Azure -> Kubernetes
-}
-"""
-
-# Define the list of top 10 components
-components = [
- "WebApp",
- "API",
- "Models",
- "Datasets",
- "Torch",
- "Transformers",
- "Streamlit",
- "Azure",
- "Docker",
- "Kubernetes",
-]
-
-# Define a dictionary to map component names to DOT node IDs
-node_ids = {
- component: component.lower()
- for component in components
-}
-
-def build_dot_string(selected_components):
- """Builds a DOT string representing the selected components"""
- selected_nodes = [node_ids[component] for component in selected_components]
- dot = """
- digraph G {
- rankdir=LR
- node [shape=box]
- """
- for node in selected_nodes:
- dot += f"{node} [color=blue]\n"
- for i in range(len(selected_nodes)):
- for j in range(i+1, len(selected_nodes)):
- dot += f"{selected_nodes[i]} -> {selected_nodes[j]}\n"
- dot += "}"
- return dot
-
-def main():
- st.title("Azure Cloud Architecture Builder")
-
- # Select the components
- st.sidebar.title("Select components")
- selected_components = st.sidebar.multiselect(
- "Select the top 10 components",
- components,
- default=components[:3]
- )
-
- # Build the DOT string
- dot = build_dot_string(selected_components)
-
- # Render the graphviz chart
- st.graphviz_chart(dot, use_container_width=True)
-
-if __name__ == "__main__":
- main()
-
-
-
-# Initialize the graph
-graph = Digraph(comment='Architectural Model')
-
-# Add nodes to the graph
-graph.node('data_layer', 'Data Layer')
-graph.node('acr', 'Azure Container Registry')
-graph.node('aks', 'Azure Kubernetes\n& Docker Container Pod\nwith Scalability')
-graph.node('snowflake', 'Snowflake Instance')
-graph.node('cosmos', 'Azure Cosmos\nDatabase')
-graph.node('api', 'API Standard\n(using Uvicorn)')
-graph.node('soar', 'SOAR Component\n(on Linux Python\nSlimbuster Docker)')
-
-# Add edges to the graph
-graph.edge('data_layer', 'acr')
-graph.edge('acr', 'aks')
-graph.edge('aks', 'snowflake')
-graph.edge('aks', 'cosmos')
-graph.edge('aks', 'api')
-graph.edge('aks', 'soar')
-
-# Define the Streamlit app
-def app():
- st.title('Architectural Model')
-
- # Draw the graph
- st.graphviz_chart(graph.source)
-
- # Add buttons to customize the graph
- if st.button('Hide Data Layer'):
- graph.node('data_layer', style='invisible')
-
- if st.button('Hide Snowflake Instance'):
- graph.node('snowflake', style='invisible')
-
- if st.button('Hide SOAR Component'):
- graph.node('soar', style='invisible')
-
-
-
-st.markdown("""
-# QA Model Spaces:
-QA use cases include QA, Semantic Document and FAQ Search.
-1. Streamlit Question Answering w Hugging Face: https://huggingface.co/spaces/awacke1/Question-answering
-2. Seq2Seq:
- - https://huggingface.co/spaces/awacke1/4-Seq2SeqQAT5
- - https://huggingface.co/spaces/awacke1/AW-04-GR-Seq-2-Seq-QA-Auto-Gen
-3. BioGPT: https://huggingface.co/spaces/awacke1/microsoft-BioGPT-Large-PubMedQA
-4. NLP QA Context: https://huggingface.co/spaces/awacke1/NLPContextQATransformersRobertaBaseSquad2
- - https://huggingface.co/spaces/awacke1/SOTA-Plan
-5. https://huggingface.co/spaces/awacke1/Question-answering
-6. QA MLM: https://huggingface.co/spaces/awacke1/SOTA-MedEntity
-""")
-
-
-
-# Run the Streamlit app
-if __name__ == '__main__':
- app()
diff --git a/spaces/AIConsultant/MusicGen/scripts/templates/results.html b/spaces/AIConsultant/MusicGen/scripts/templates/results.html
deleted file mode 100644
index 8ddce59f0f617a836db75c8bc9768db7f9f17511..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/scripts/templates/results.html
+++ /dev/null
@@ -1,17 +0,0 @@
-{% extends "base.html" %}
-{% block content %}
-
-
-
-{% endfor %}
-
-{% endblock %}
diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/platforms/__init__.py b/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/platforms/__init__.py
deleted file mode 100644
index 7837fd5fdeccab5e48c85e41d20b238ea7396599..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/platforms/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-"""Platforms for generating offscreen OpenGL contexts for rendering.
-
-Author: Matthew Matl
-"""
-
-from .base import Platform
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/stop-generating/$types.d.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/stop-generating/$types.d.ts
deleted file mode 100644
index 108ad3f4ad676b574668ee54fc0f30b38a90220c..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/stop-generating/$types.d.ts
+++ /dev/null
@@ -1,9 +0,0 @@
-import type * as Kit from '@sveltejs/kit';
-
-type Expand<T> = T extends infer O ? { [K in keyof O]: O[K] } : never;
-type RouteParams = { id: string }
-type RouteId = '/conversation/[id]/stop-generating';
-
-export type EntryGenerator = () => Promise<Array<RouteParams>> | Array<RouteParams>;
-export type RequestHandler = Kit.RequestHandler<RouteParams, RouteId>;
-export type RequestEvent = Kit.RequestEvent<RouteParams, RouteId>;
\ No newline at end of file
diff --git a/spaces/Adapter/CoAdapter/ldm/modules/image_degradation/bsrgan.py b/spaces/Adapter/CoAdapter/ldm/modules/image_degradation/bsrgan.py
deleted file mode 100644
index 32ef56169978e550090261cddbcf5eb611a6173b..0000000000000000000000000000000000000000
--- a/spaces/Adapter/CoAdapter/ldm/modules/image_degradation/bsrgan.py
+++ /dev/null
@@ -1,730 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-# --------------------------------------------
-# Super-Resolution
-# --------------------------------------------
-#
-# Kai Zhang (cskaizhang@gmail.com)
-# https://github.com/cszn
-# From 2019/03--2021/08
-# --------------------------------------------
-"""
-
-import numpy as np
-import cv2
-import torch
-
-from functools import partial
-import random
-from scipy import ndimage
-import scipy
-import scipy.stats as ss
-from scipy.interpolate import interp2d
-from scipy.linalg import orth
-import albumentations
-
-import ldm.modules.image_degradation.utils_image as util
-
-
-def modcrop_np(img, sf):
- '''
- Args:
- img: numpy image, WxH or WxHxC
- sf: scale factor
- Return:
- cropped image
- '''
- w, h = img.shape[:2]
- im = np.copy(img)
- return im[:w - w % sf, :h - h % sf, ...]
-
-
-"""
-# --------------------------------------------
-# anisotropic Gaussian kernels
-# --------------------------------------------
-"""
-
-
-def analytic_kernel(k):
- """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)"""
- k_size = k.shape[0]
- # Calculate the big kernels size
- big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))
- # Loop over the small kernel to fill the big one
- for r in range(k_size):
- for c in range(k_size):
- big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k
- # Crop the edges of the big kernel to ignore very small values and increase run time of SR
- crop = k_size // 2
- cropped_big_k = big_k[crop:-crop, crop:-crop]
- # Normalize to 1
- return cropped_big_k / cropped_big_k.sum()
-
-
-def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):
- """ generate an anisotropic Gaussian kernel
- Args:
- ksize : e.g., 15, kernel size
- theta : [0, pi], rotation angle range
- l1 : [0.1,50], scaling of eigenvalues
- l2 : [0.1,l1], scaling of eigenvalues
- If l1 = l2, will get an isotropic Gaussian kernel.
- Returns:
- k : kernel
- """
-
- v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.]))
- V = np.array([[v[0], v[1]], [v[1], -v[0]]])
- D = np.array([[l1, 0], [0, l2]])
- Sigma = np.dot(np.dot(V, D), np.linalg.inv(V))
- k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize)
-
- return k
-
-
-def gm_blur_kernel(mean, cov, size=15):
- center = size / 2.0 + 0.5
- k = np.zeros([size, size])
- for y in range(size):
- for x in range(size):
- cy = y - center + 1
- cx = x - center + 1
- k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov)
-
- k = k / np.sum(k)
- return k
-
-
-def shift_pixel(x, sf, upper_left=True):
- """shift pixel for super-resolution with different scale factors
- Args:
- x: WxHxC or WxH
- sf: scale factor
- upper_left: shift direction
- """
- h, w = x.shape[:2]
- shift = (sf - 1) * 0.5
- xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)
- if upper_left:
- x1 = xv + shift
- y1 = yv + shift
- else:
- x1 = xv - shift
- y1 = yv - shift
-
- x1 = np.clip(x1, 0, w - 1)
- y1 = np.clip(y1, 0, h - 1)
-
- if x.ndim == 2:
- x = interp2d(xv, yv, x)(x1, y1)
- if x.ndim == 3:
- for i in range(x.shape[-1]):
- x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1)
-
- return x
-
-
-def blur(x, k):
- '''
- x: image, NxcxHxW
- k: kernel, Nx1xhxw
- '''
- n, c = x.shape[:2]
- p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2
- x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate')
- k = k.repeat(1, c, 1, 1)
- k = k.view(-1, 1, k.shape[2], k.shape[3])
- x = x.view(1, -1, x.shape[2], x.shape[3])
- x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c)
- x = x.view(n, c, x.shape[2], x.shape[3])
-
- return x
-
-
-def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):
- """"
- # modified version of https://github.com/assafshocher/BlindSR_dataset_generator
- # Kai Zhang
- # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var
- # max_var = 2.5 * sf
- """
- # Set random eigen-vals (lambdas) and angle (theta) for COV matrix
- lambda_1 = min_var + np.random.rand() * (max_var - min_var)
- lambda_2 = min_var + np.random.rand() * (max_var - min_var)
- theta = np.random.rand() * np.pi # random theta
- noise = -noise_level + np.random.rand(*k_size) * noise_level * 2
-
- # Set COV matrix using Lambdas and Theta
- LAMBDA = np.diag([lambda_1, lambda_2])
- Q = np.array([[np.cos(theta), -np.sin(theta)],
- [np.sin(theta), np.cos(theta)]])
- SIGMA = Q @ LAMBDA @ Q.T
- INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :]
-
- # Set expectation position (shifting kernel for aligned image)
- MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2)
- MU = MU[None, None, :, None]
-
- # Create meshgrid for Gaussian
- [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1]))
- Z = np.stack([X, Y], 2)[:, :, :, None]
-
- # Calculate Gaussian for every pixel of the kernel
- ZZ = Z - MU
- ZZ_t = ZZ.transpose(0, 1, 3, 2)
- raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise)
-
- # shift the kernel so it will be centered
- # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor)
-
- # Normalize the kernel and return
- # kernel = raw_kernel_centered / np.sum(raw_kernel_centered)
- kernel = raw_kernel / np.sum(raw_kernel)
- return kernel
-
-
-def fspecial_gaussian(hsize, sigma):
- hsize = [hsize, hsize]
- siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0]
- std = sigma
- [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1))
- arg = -(x * x + y * y) / (2 * std * std)
- h = np.exp(arg)
- h[h < scipy.finfo(float).eps * h.max()] = 0
- sumh = h.sum()
- if sumh != 0:
- h = h / sumh
- return h
-
-
-def fspecial_laplacian(alpha):
- alpha = max([0, min([alpha, 1])])
- h1 = alpha / (alpha + 1)
- h2 = (1 - alpha) / (alpha + 1)
- h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]]
- h = np.array(h)
- return h
-
-
-def fspecial(filter_type, *args, **kwargs):
- '''
- python code from:
- https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
- '''
- if filter_type == 'gaussian':
- return fspecial_gaussian(*args, **kwargs)
- if filter_type == 'laplacian':
- return fspecial_laplacian(*args, **kwargs)
-
-
-"""
-# --------------------------------------------
-# degradation models
-# --------------------------------------------
-"""
-
-
-def bicubic_degradation(x, sf=3):
- '''
- Args:
- x: HxWxC image, [0, 1]
- sf: down-scale factor
- Return:
- bicubicly downsampled LR image
- '''
- x = util.imresize_np(x, scale=1 / sf)
- return x
-
-
-def srmd_degradation(x, k, sf=3):
- ''' blur + bicubic downsampling
- Args:
- x: HxWxC image, [0, 1]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- Reference:
- @inproceedings{zhang2018learning,
- title={Learning a single convolutional super-resolution network for multiple degradations},
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
- pages={3262--3271},
- year={2018}
- }
- '''
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror'
- x = bicubic_degradation(x, sf=sf)
- return x
-
-
-def dpsr_degradation(x, k, sf=3):
- ''' bicubic downsampling + blur
- Args:
- x: HxWxC image, [0, 1]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- Reference:
- @inproceedings{zhang2019deep,
- title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
- pages={1671--1681},
- year={2019}
- }
- '''
- x = bicubic_degradation(x, sf=sf)
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
- return x
-
-
-def classical_degradation(x, k, sf=3):
- ''' blur + downsampling
- Args:
- x: HxWxC image, [0, 1]/[0, 255]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- '''
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
- # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2))
- st = 0
- return x[st::sf, st::sf, ...]
-
-
-def add_sharpening(img, weight=0.5, radius=50, threshold=10):
- """USM sharpening. borrowed from real-ESRGAN
- Input image: I; Blurry image: B.
- 1. K = I + weight * (I - B)
- 2. Mask = 1 if abs(I - B) > threshold, else: 0
- 3. Blur mask:
- 4. Out = Mask * K + (1 - Mask) * I
- Args:
- img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
- weight (float): Sharp weight. Default: 0.5.
- radius (float): Kernel size of Gaussian blur. Default: 50.
- threshold (int):
- """
- if radius % 2 == 0:
- radius += 1
- blur = cv2.GaussianBlur(img, (radius, radius), 0)
- residual = img - blur
- mask = np.abs(residual) * 255 > threshold
- mask = mask.astype('float32')
- soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)
-
- K = img + weight * residual
- K = np.clip(K, 0, 1)
- return soft_mask * K + (1 - soft_mask) * img
-
-
-def add_blur(img, sf=4):
- wd2 = 4.0 + sf
- wd = 2.0 + 0.2 * sf
- if random.random() < 0.5:
- l1 = wd2 * random.random()
- l2 = wd2 * random.random()
- k = anisotropic_Gaussian(ksize=2 * random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2)
- else:
- k = fspecial('gaussian', 2 * random.randint(2, 11) + 3, wd * random.random())
- img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror')
-
- return img
-
-
-def add_resize(img, sf=4):
- rnum = np.random.rand()
- if rnum > 0.8: # up
- sf1 = random.uniform(1, 2)
- elif rnum < 0.7: # down
- sf1 = random.uniform(0.5 / sf, 1)
- else:
- sf1 = 1.0
- img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3]))
- img = np.clip(img, 0.0, 1.0)
-
- return img
-
-
-# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
-# noise_level = random.randint(noise_level1, noise_level2)
-# rnum = np.random.rand()
-# if rnum > 0.6: # add color Gaussian noise
-# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
-# elif rnum < 0.4: # add grayscale Gaussian noise
-# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
-# else: # add noise
-# L = noise_level2 / 255.
-# D = np.diag(np.random.rand(3))
-# U = orth(np.random.rand(3, 3))
-# conv = np.dot(np.dot(np.transpose(U), D), U)
-# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
-# img = np.clip(img, 0.0, 1.0)
-# return img
-
-def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
- noise_level = random.randint(noise_level1, noise_level2)
- rnum = np.random.rand()
- if rnum > 0.6: # add color Gaussian noise
- img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
- elif rnum < 0.4: # add grayscale Gaussian noise
- img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
- else: # add noise
- L = noise_level2 / 255.
- D = np.diag(np.random.rand(3))
- U = orth(np.random.rand(3, 3))
- conv = np.dot(np.dot(np.transpose(U), D), U)
- img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_speckle_noise(img, noise_level1=2, noise_level2=25):
- noise_level = random.randint(noise_level1, noise_level2)
- img = np.clip(img, 0.0, 1.0)
- rnum = random.random()
- if rnum > 0.6:
- img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
- elif rnum < 0.4:
- img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
- else:
- L = noise_level2 / 255.
- D = np.diag(np.random.rand(3))
- U = orth(np.random.rand(3, 3))
- conv = np.dot(np.dot(np.transpose(U), D), U)
- img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_Poisson_noise(img):
- img = np.clip((img * 255.0).round(), 0, 255) / 255.
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4]
- if random.random() < 0.5:
- img = np.random.poisson(img * vals).astype(np.float32) / vals
- else:
- img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114])
- img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255.
- noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray
- img += noise_gray[:, :, np.newaxis]
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_JPEG_noise(img):
- quality_factor = random.randint(30, 95)
- img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR)
- result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])
- img = cv2.imdecode(encimg, 1)
- img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB)
- return img
-
-
-def random_crop(lq, hq, sf=4, lq_patchsize=64):
- h, w = lq.shape[:2]
- rnd_h = random.randint(0, h - lq_patchsize)
- rnd_w = random.randint(0, w - lq_patchsize)
- lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :]
-
- rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf)
- hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :]
- return lq, hq
-
-
-def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):
-    """
-    Degradation model of BSRGAN from the paper
-    "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution".
-
-    Parameters
-    ----------
-    img: HxWxC, float in [0, 1]; both sides must be larger than lq_patchsize * sf
-    sf: scale factor
-    lq_patchsize: spatial size of the low-quality output patch
-    isp_model: camera ISP model (optional)
-
-    Returns
-    -------
-    img: low-quality patch, lq_patchsize x lq_patchsize x C, range [0, 1]
-    hq: corresponding high-quality patch, (lq_patchsize * sf) x (lq_patchsize * sf) x C, range [0, 1]
-    """
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
- sf_ori = sf
-
- h1, w1 = img.shape[:2]
-    img = img.copy()[:h1 - h1 % sf, :w1 - w1 % sf, ...]  # mod crop (height first, then width)
- h, w = img.shape[:2]
-
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
-
- hq = img.copy()
-
- if sf == 4 and random.random() < scale2_prob: # downsample1
- if np.random.rand() < 0.5:
- img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- img = util.imresize_np(img, 1 / 2, True)
- img = np.clip(img, 0.0, 1.0)
- sf = 2
-
- shuffle_order = random.sample(range(7), 7)
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
- if idx1 > idx2: # keep downsample3 last
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
-
- for i in shuffle_order:
-
- if i == 0:
- img = add_blur(img, sf=sf)
-
- elif i == 1:
- img = add_blur(img, sf=sf)
-
- elif i == 2:
- a, b = img.shape[1], img.shape[0]
- # downsample2
- if random.random() < 0.75:
- sf1 = random.uniform(1, 2 * sf)
- img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
- k_shifted = shift_pixel(k, sf)
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
-                img = ndimage.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror')
- img = img[0::sf, 0::sf, ...] # nearest downsampling
- img = np.clip(img, 0.0, 1.0)
-
- elif i == 3:
- # downsample3
- img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
- img = np.clip(img, 0.0, 1.0)
-
- elif i == 4:
- # add Gaussian noise
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
-
- elif i == 5:
- # add JPEG noise
- if random.random() < jpeg_prob:
- img = add_JPEG_noise(img)
-
- elif i == 6:
- # add processed camera sensor noise
- if random.random() < isp_prob and isp_model is not None:
- with torch.no_grad():
- img, hq = isp_model.forward(img.copy(), hq)
-
- # add final JPEG compression noise
- img = add_JPEG_noise(img)
-
- # random crop
- img, hq = random_crop(img, hq, sf_ori, lq_patchsize)
-
- return img, hq
-
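-# Usage sketch for degradation_bsrgan (comments only; the file path is illustrative):
-#
-#   hq_full = util.uint2single(util.imread_uint('some_large_rgb.png', 3))
-#   lq, hq = degradation_bsrgan(hq_full, sf=4, lq_patchsize=72)
-#   # lq: 72 x 72 x 3 in [0, 1]; hq: 288 x 288 x 3 in [0, 1]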
-
-# todo no isp_model?
-def degradation_bsrgan_variant(image, sf=4, isp_model=None):
-    """
-    Variant of the BSRGAN degradation model from the paper
-    "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution".
-
-    Parameters
-    ----------
-    image: HxWxC uint8 image, range [0, 255]
-    sf: scale factor
-    isp_model: camera ISP model (the ISP step is currently disabled in this variant)
-
-    Returns
-    -------
-    example: dict with key "image" holding the degraded low-quality image as uint8,
-             spatial size roughly (H / sf) x (W / sf)
-    """
- image = util.uint2single(image)
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
- sf_ori = sf
-
- h1, w1 = image.shape[:2]
-    image = image.copy()[:h1 - h1 % sf, :w1 - w1 % sf, ...]  # mod crop (height first, then width)
- h, w = image.shape[:2]
-
- hq = image.copy()
-
- if sf == 4 and random.random() < scale2_prob: # downsample1
- if np.random.rand() < 0.5:
- image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- image = util.imresize_np(image, 1 / 2, True)
- image = np.clip(image, 0.0, 1.0)
- sf = 2
-
- shuffle_order = random.sample(range(7), 7)
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
- if idx1 > idx2: # keep downsample3 last
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
-
- for i in shuffle_order:
-
- if i == 0:
- image = add_blur(image, sf=sf)
-
- elif i == 1:
- image = add_blur(image, sf=sf)
-
- elif i == 2:
- a, b = image.shape[1], image.shape[0]
- # downsample2
- if random.random() < 0.75:
- sf1 = random.uniform(1, 2 * sf)
- image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
- k_shifted = shift_pixel(k, sf)
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
-                image = ndimage.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror')
- image = image[0::sf, 0::sf, ...] # nearest downsampling
- image = np.clip(image, 0.0, 1.0)
-
- elif i == 3:
- # downsample3
- image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
- image = np.clip(image, 0.0, 1.0)
-
- elif i == 4:
- # add Gaussian noise
- image = add_Gaussian_noise(image, noise_level1=2, noise_level2=25)
-
- elif i == 5:
- # add JPEG noise
- if random.random() < jpeg_prob:
- image = add_JPEG_noise(image)
-
- # elif i == 6:
- # # add processed camera sensor noise
- # if random.random() < isp_prob and isp_model is not None:
- # with torch.no_grad():
- # img, hq = isp_model.forward(img.copy(), hq)
-
- # add final JPEG compression noise
- image = add_JPEG_noise(image)
- image = util.single2uint(image)
- example = {"image":image}
- return example
-
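-# Usage sketch for degradation_bsrgan_variant (comments only): this variant takes a
-# uint8 image and returns a dict, which suits dataset pipelines that expect dict samples.
-#
-#   sample = degradation_bsrgan_variant(util.imread_uint('some_rgb.png', 3), sf=4)
-#   lq_uint8 = sample["image"]   # degraded image, roughly (H/4) x (W/4) x 3, uint8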
-
-# TODO: in case of a pickle error, replace the in-place "a += x" updates with "a = a + x" in add_speckle_noise etc.
-def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None):
-    """
-    Extended degradation model that combines the degradation pipelines of
-    BSRGAN and Real-ESRGAN.
-
-    Parameters
-    ----------
-    img: HxWxC, float in [0, 1]; both sides must be larger than lq_patchsize * sf
-    sf: scale factor
-    shuffle_prob: probability of shuffling the order of the degradation steps
-    use_sharp: whether to sharpen the image before degradation
-    lq_patchsize: spatial size of the low-quality output patch
-    isp_model: camera ISP model (optional)
-
-    Returns
-    -------
-    img: low-quality patch, lq_patchsize x lq_patchsize x C, range [0, 1]
-    hq: corresponding high-quality patch, (lq_patchsize * sf) x (lq_patchsize * sf) x C, range [0, 1]
-    """
-
- h1, w1 = img.shape[:2]
-    img = img.copy()[:h1 - h1 % sf, :w1 - w1 % sf, ...]  # mod crop (height first, then width)
- h, w = img.shape[:2]
-
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
-
- if use_sharp:
- img = add_sharpening(img)
- hq = img.copy()
-
- if random.random() < shuffle_prob:
- shuffle_order = random.sample(range(13), 13)
- else:
- shuffle_order = list(range(13))
- # local shuffle for noise, JPEG is always the last one
- shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6)))
- shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13)))
-
- poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1
-
- for i in shuffle_order:
- if i == 0:
- img = add_blur(img, sf=sf)
- elif i == 1:
- img = add_resize(img, sf=sf)
- elif i == 2:
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
- elif i == 3:
- if random.random() < poisson_prob:
- img = add_Poisson_noise(img)
- elif i == 4:
- if random.random() < speckle_prob:
- img = add_speckle_noise(img)
- elif i == 5:
- if random.random() < isp_prob and isp_model is not None:
- with torch.no_grad():
- img, hq = isp_model.forward(img.copy(), hq)
- elif i == 6:
- img = add_JPEG_noise(img)
- elif i == 7:
- img = add_blur(img, sf=sf)
- elif i == 8:
- img = add_resize(img, sf=sf)
- elif i == 9:
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
- elif i == 10:
- if random.random() < poisson_prob:
- img = add_Poisson_noise(img)
- elif i == 11:
- if random.random() < speckle_prob:
- img = add_speckle_noise(img)
- elif i == 12:
- if random.random() < isp_prob and isp_model is not None:
- with torch.no_grad():
- img, hq = isp_model.forward(img.copy(), hq)
- else:
- print('check the shuffle!')
-
- # resize to desired size
- img = cv2.resize(img, (int(1 / sf * hq.shape[1]), int(1 / sf * hq.shape[0])),
- interpolation=random.choice([1, 2, 3]))
-
- # add final JPEG compression noise
- img = add_JPEG_noise(img)
-
- # random crop
- img, hq = random_crop(img, hq, sf, lq_patchsize)
-
- return img, hq
-
-
-if __name__ == '__main__':
-    # Visual sanity check: degrade a test image 20 times and save
-    # [bicubic LQ | BSRGAN-variant LQ | HQ] side by side.
-    img = util.imread_uint('utils/test.png', 3)  # uint8, HxWxC
-    img = img[:448, :448]
-    h = img.shape[0] // 4
-    print("resizing to", h)
-    sf = 4
-    deg_fn = partial(degradation_bsrgan_variant, sf=sf)
-    img_hq = util.uint2single(img)
-    for i in range(20):
-        print(i)
-        img_lq = util.uint2single(deg_fn(img)["image"])
-        img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)["image"]
-        print(img_lq.shape)
-        print("bicubic", img_lq_bicubic.shape)
-        print(img_hq.shape)
-        lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
-                                interpolation=0)
-        lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
-                                        interpolation=0)
-        img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1)
-        util.imsave(img_concat, str(i) + '.png')
-
-
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/bracketparser2.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/bracketparser2.js
deleted file mode 100644
index 58aa5da06534d60dbf80406d984916a151768399..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/bracketparser2.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import BracketParser from './logic/bracketparser/bracketparser2/BracketParser.js';
-export default BracketParser;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/Factory.js
deleted file mode 100644
index b3cabe4c806f56ecf44ed38df8f61c40ecf2e45f..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import Sides from './Sides.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('sides', function (config) {
- var gameObject = new Sides(this.scene, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.UI.Sides', Sides);
-
-export default Sides;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/RemoveChildMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/RemoveChildMethods.js
deleted file mode 100644
index d488526756ac07bc4da2e3908fa7238d48f0f696..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/RemoveChildMethods.js
+++ /dev/null
@@ -1,29 +0,0 @@
-import RemoveChild from '../basesizer/utils/RemoveChild.js';
-import ClearChildren from '../basesizer/utils/ClearChildren.js';
-
-const RemoveItem = Phaser.Utils.Array.Remove;
-
-export default {
- remove(gameObject, destroyChild) {
- if (this.getParentSizer(gameObject) !== this) {
- return this;
- }
-
- RemoveItem(this.sizerChildren, gameObject);
- RemoveChild.call(this, gameObject, destroyChild);
- return this;
- },
-
- removeAll(destroyChild) {
- for (var i = this.sizerChildren.length - 1; i >= 0; i--) {
- this.remove(this.sizerChildren[i], destroyChild);
- }
- return this;
- },
-
- clear(destroyChild) {
- this.sizerChildren.length = 0;
- ClearChildren.call(this, destroyChild);
- return this;
- }
-}
\ No newline at end of file
diff --git a/spaces/Agusbs98/automatic-ecg-diagnosis/nets/layers.py b/spaces/Agusbs98/automatic-ecg-diagnosis/nets/layers.py
deleted file mode 100644
index 0ecc2113f2004bd750377ddd8b27c914be01288c..0000000000000000000000000000000000000000
--- a/spaces/Agusbs98/automatic-ecg-diagnosis/nets/layers.py
+++ /dev/null
@@ -1,29 +0,0 @@
-
-import os, sys
-from libs import *
-
-class DSConv1d(nn.Module):
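-    """Depthwise-separable 1D convolution: a depthwise conv (one filter per input
-    channel) followed by a pointwise 1x1 conv that mixes channels. Rough parameter
-    count is k*C_in + C_in*C_out versus k*C_in*C_out for a dense nn.Conv1d.
-    """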
- def __init__(self,
- in_channels, out_channels,
- kernel_size, padding = 0, stride = 1,
- ):
- super(DSConv1d, self).__init__()
- self.dw_conv = nn.Conv1d(
- in_channels, in_channels,
- kernel_size = kernel_size, padding = padding, stride = stride,
- groups = in_channels,
- bias = False,
- )
- self.pw_conv = nn.Conv1d(
- in_channels, out_channels,
- kernel_size = 1,
- bias = False,
- )
-
- def forward(self,
- input,
- ):
- output = self.dw_conv(input)
- output = self.pw_conv(output)
-
- return output
\ No newline at end of file
diff --git a/spaces/AixiaGreyatt/QQsign/bin/unidbg-fetch-qsign.bat b/spaces/AixiaGreyatt/QQsign/bin/unidbg-fetch-qsign.bat
deleted file mode 100644
index 8b291e7303b0c07d14b714e5795473891363c85b..0000000000000000000000000000000000000000
--- a/spaces/AixiaGreyatt/QQsign/bin/unidbg-fetch-qsign.bat
+++ /dev/null
@@ -1,89 +0,0 @@
-@rem
-@rem Copyright 2015 the original author or authors.
-@rem
-@rem Licensed under the Apache License, Version 2.0 (the "License");
-@rem you may not use this file except in compliance with the License.
-@rem You may obtain a copy of the License at
-@rem
-@rem https://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-@rem
-
-@if "%DEBUG%" == "" @echo off
-@rem ##########################################################################
-@rem
-@rem unidbg-fetch-qsign startup script for Windows
-@rem
-@rem ##########################################################################
-
-@rem Set local scope for the variables with windows NT shell
-if "%OS%"=="Windows_NT" setlocal
-
-set DIRNAME=%~dp0
-if "%DIRNAME%" == "" set DIRNAME=.
-set APP_BASE_NAME=%~n0
-set APP_HOME=%DIRNAME%..
-
-@rem Resolve any "." and ".." in APP_HOME to make it shorter.
-for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
-
-@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script.
-set DEFAULT_JVM_OPTS=
-
-@rem Find java.exe
-if defined JAVA_HOME goto findJavaFromJavaHome
-
-set JAVA_EXE=java.exe
-%JAVA_EXE% -version >NUL 2>&1
-if "%ERRORLEVEL%" == "0" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:findJavaFromJavaHome
-set JAVA_HOME=%JAVA_HOME:"=%
-set JAVA_EXE=%JAVA_HOME%/bin/java.exe
-
-if exist "%JAVA_EXE%" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:execute
-@rem Setup the command line
-
-set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.9.jar;%APP_HOME%\lib\unidbg-android-105.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar
-
-
-@rem Execute unidbg-fetch-qsign
-"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %*
-
-:end
-@rem End local scope for the variables with windows NT shell
-if "%ERRORLEVEL%"=="0" goto mainEnd
-
-:fail
-rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of
-rem the _cmd.exe /c_ return code!
-if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1
-exit /b 1
-
-:mainEnd
-if "%OS%"=="Windows_NT" endlocal
-
-:omega
diff --git a/spaces/Aloento/9Nine-PITS/text/english.py b/spaces/Aloento/9Nine-PITS/text/english.py
deleted file mode 100644
index 85b862f1eabdbf9a5a4a604d848920d0ddd260dd..0000000000000000000000000000000000000000
--- a/spaces/Aloento/9Nine-PITS/text/english.py
+++ /dev/null
@@ -1,122 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-import re
-
-import eng_to_ipa as ipa
-from g2p_en import G2p
-from unidecode import unidecode
-
-from text.frontend import normalize_numbers
-
-'''
-Cleaners are transformations that run over the input text at both training and eval time.
-
-Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
-hyperparameter. Some cleaners are English-specific. You'll typically want to use:
- 1. "english_cleaners" for English text
- 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
- the Unidecode library (https://pypi.python.org/pypi/Unidecode)
- 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
- the symbols in symbols.py to match your data).
-'''
-
-# Regular expression matching whitespace:
-g2p = G2p()
-
-# List of (regular expression, replacement) pairs for abbreviations:
-_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
- ('mrs', 'misess'),
- ('mr', 'mister'),
- ('dr', 'doctor'),
- ('st', 'saint'),
- ('co', 'company'),
- ('jr', 'junior'),
- ('maj', 'major'),
- ('gen', 'general'),
- ('drs', 'doctors'),
- ('rev', 'reverend'),
- ('lt', 'lieutenant'),
- ('hon', 'honorable'),
- ('sgt', 'sergeant'),
- ('capt', 'captain'),
- ('esq', 'esquire'),
- ('ltd', 'limited'),
- ('col', 'colonel'),
- ('ft', 'fort'),
-]]
-
-# List of (ipa, ipa2) pairs
-_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('ʤ', 'dʒ'),
- ('ʧ', 'tʃ')
-]]
-
-
-def expand_abbreviations(text):
- for regex, replacement in _abbreviations:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def collapse_whitespace(text):
- return re.sub(r'\s+', ' ', text)
-
-
-def mark_dark_l(text):
- return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ' + x.group(1), text)
-
-
-def english_to_ipa(text):
- text = text.replace("-", " ")
- text = unidecode(text).lower()
- text = expand_abbreviations(text)
- text = normalize_numbers(text)
-
- phonemes = ipa.convert(text)
- phonemes = unrecognized_words_to_ipa(phonemes)
- phonemes = collapse_whitespace(phonemes)
-
- text = phonemes
- text = mark_dark_l(text)
-
- for regex, replacement in _ipa_to_ipa2:
- text = re.sub(regex, replacement, text)
-
- return text.replace('...', '…')
-
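-# Example sketch (comments only): abbreviations and numbers are expanded before the
-# IPA conversion, and words eng_to_ipa cannot resolve (it marks them with '*') are
-# re-converted through g2p_en.
-#
-#   english_to_ipa("Dr. Smith bought 2 books.")
-#   # -> an IPA string along the lines of 'dɑktər smɪθ bɔt tu bʊks.'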
-
-def convert_to_ipa(phones):
- eipa = ""
- symbols = {"a": "ə", "ey": "eɪ", "aa": "ɑ", "ae": "æ", "ah": "ə", "ao": "ɔ",
- "aw": "aʊ", "ay": "aɪ", "ch": "ʧ", "dh": "ð", "eh": "ɛ", "er": "ər",
- "hh": "h", "ih": "ɪ", "jh": "ʤ", "ng": "ŋ", "ow": "oʊ", "oy": "ɔɪ",
- "sh": "ʃ", "th": "θ", "uh": "ʊ", "uw": "u", "zh": "ʒ", "iy": "i", "y": "j"}
-
- for ph in phones:
- ph = ph.lower()
-
- try:
- if ph[-1] in "01234":
- eipa += symbols[ph[:-1]]
- else:
- eipa += symbols[ph]
-        except (KeyError, IndexError):
- eipa += ph
-
- return eipa
-
-
-def unrecognized_words_to_ipa(text):
- matches = re.findall(r'\s([\w|\']+\*)', text)
-
- for word in matches:
- ipa = convert_to_ipa(g2p(word))
- text = text.replace(word, ipa)
-
- matches = re.findall(r'^([\w|\']+\*)', text)
-
- for word in matches:
- ipa = convert_to_ipa(g2p(word))
- text = text.replace(word, ipa)
-
- return text
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/networks.py b/spaces/Alpaca233/SadTalker/src/face3d/models/networks.py
deleted file mode 100644
index ead9cdcb8720b845c233de79dc8a8d1668492108..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/face3d/models/networks.py
+++ /dev/null
@@ -1,521 +0,0 @@
-"""This script defines deep neural networks for Deep3DFaceRecon_pytorch
-"""
-
-import os
-import numpy as np
-import torch.nn.functional as F
-from torch.nn import init
-import functools
-from torch.optim import lr_scheduler
-import torch
-from torch import Tensor
-import torch.nn as nn
-try:
- from torch.hub import load_state_dict_from_url
-except ImportError:
- from torch.utils.model_zoo import load_url as load_state_dict_from_url
-from typing import Type, Any, Callable, Union, List, Optional
-from .arcface_torch.backbones import get_model
-from kornia.geometry import warp_affine
-
-def resize_n_crop(image, M, dsize=112):
- # image: (b, c, h, w)
- # M : (b, 2, 3)
- return warp_affine(image, M, dsize=(dsize, dsize), align_corners=True)
-
-def filter_state_dict(state_dict, remove_name='fc'):
- new_state_dict = {}
- for key in state_dict:
- if remove_name in key:
- continue
- new_state_dict[key] = state_dict[key]
- return new_state_dict
-
-def get_scheduler(optimizer, opt):
- """Return a learning rate scheduler
-
- Parameters:
- optimizer -- the optimizer of the network
- opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.
- opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine
-
- For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers.
- See https://pytorch.org/docs/stable/optim.html for more details.
- """
- if opt.lr_policy == 'linear':
- def lambda_rule(epoch):
- lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs + 1)
- return lr_l
- scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)
- elif opt.lr_policy == 'step':
- scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_epochs, gamma=0.2)
- elif opt.lr_policy == 'plateau':
- scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5)
- elif opt.lr_policy == 'cosine':
- scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0)
- else:
-        raise NotImplementedError('learning rate policy [%s] is not implemented' % opt.lr_policy)
- return scheduler
-
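-# Usage sketch (comments only); the option object only needs the attributes read
-# above, shown here with illustrative values:
-#
-#   opt = argparse.Namespace(lr_policy='step', lr_decay_epochs=20, epoch_count=1, n_epochs=50)
-#   scheduler = get_scheduler(torch.optim.Adam(model.parameters(), lr=1e-4), opt)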
-
-def define_net_recon(net_recon, use_last_fc=False, init_path=None):
- return ReconNetWrapper(net_recon, use_last_fc=use_last_fc, init_path=init_path)
-
-def define_net_recog(net_recog, pretrained_path=None):
- net = RecogNetWrapper(net_recog=net_recog, pretrained_path=pretrained_path)
- net.eval()
- return net
-
-class ReconNetWrapper(nn.Module):
- fc_dim=257
- def __init__(self, net_recon, use_last_fc=False, init_path=None):
- super(ReconNetWrapper, self).__init__()
- self.use_last_fc = use_last_fc
- if net_recon not in func_dict:
-            raise NotImplementedError('network [%s] is not implemented' % net_recon)
- func, last_dim = func_dict[net_recon]
- backbone = func(use_last_fc=use_last_fc, num_classes=self.fc_dim)
- if init_path and os.path.isfile(init_path):
- state_dict = filter_state_dict(torch.load(init_path, map_location='cpu'))
- backbone.load_state_dict(state_dict)
- print("loading init net_recon %s from %s" %(net_recon, init_path))
- self.backbone = backbone
- if not use_last_fc:
- self.final_layers = nn.ModuleList([
- conv1x1(last_dim, 80, bias=True), # id layer
- conv1x1(last_dim, 64, bias=True), # exp layer
- conv1x1(last_dim, 80, bias=True), # tex layer
- conv1x1(last_dim, 3, bias=True), # angle layer
- conv1x1(last_dim, 27, bias=True), # gamma layer
- conv1x1(last_dim, 2, bias=True), # tx, ty
- conv1x1(last_dim, 1, bias=True) # tz
- ])
- for m in self.final_layers:
- nn.init.constant_(m.weight, 0.)
- nn.init.constant_(m.bias, 0.)
-
- def forward(self, x):
- x = self.backbone(x)
- if not self.use_last_fc:
- output = []
- for layer in self.final_layers:
- output.append(layer(x))
- x = torch.flatten(torch.cat(output, dim=1), 1)
- return x
-
-
-class RecogNetWrapper(nn.Module):
- def __init__(self, net_recog, pretrained_path=None, input_size=112):
- super(RecogNetWrapper, self).__init__()
- net = get_model(name=net_recog, fp16=False)
- if pretrained_path:
- state_dict = torch.load(pretrained_path, map_location='cpu')
- net.load_state_dict(state_dict)
- print("loading pretrained net_recog %s from %s" %(net_recog, pretrained_path))
- for param in net.parameters():
- param.requires_grad = False
- self.net = net
- self.preprocess = lambda x: 2 * x - 1
- self.input_size=input_size
-
- def forward(self, image, M):
- image = self.preprocess(resize_n_crop(image, M, self.input_size))
- id_feature = F.normalize(self.net(image), dim=-1, p=2)
- return id_feature
-
-
-# adapted from https://github.com/pytorch/vision/edit/master/torchvision/models/resnet.py
-__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
- 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d',
- 'wide_resnet50_2', 'wide_resnet101_2']
-
-
-model_urls = {
- 'resnet18': 'https://download.pytorch.org/models/resnet18-f37072fd.pth',
- 'resnet34': 'https://download.pytorch.org/models/resnet34-b627a593.pth',
- 'resnet50': 'https://download.pytorch.org/models/resnet50-0676ba61.pth',
- 'resnet101': 'https://download.pytorch.org/models/resnet101-63fe2227.pth',
- 'resnet152': 'https://download.pytorch.org/models/resnet152-394f9c45.pth',
- 'resnext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth',
- 'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth',
- 'wide_resnet50_2': 'https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth',
- 'wide_resnet101_2': 'https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth',
-}
-
-
-def conv3x3(in_planes: int, out_planes: int, stride: int = 1, groups: int = 1, dilation: int = 1) -> nn.Conv2d:
- """3x3 convolution with padding"""
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=dilation, groups=groups, bias=False, dilation=dilation)
-
-
-def conv1x1(in_planes: int, out_planes: int, stride: int = 1, bias: bool = False) -> nn.Conv2d:
- """1x1 convolution"""
- return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=bias)
-
-
-class BasicBlock(nn.Module):
- expansion: int = 1
-
- def __init__(
- self,
- inplanes: int,
- planes: int,
- stride: int = 1,
- downsample: Optional[nn.Module] = None,
- groups: int = 1,
- base_width: int = 64,
- dilation: int = 1,
- norm_layer: Optional[Callable[..., nn.Module]] = None
- ) -> None:
- super(BasicBlock, self).__init__()
- if norm_layer is None:
- norm_layer = nn.BatchNorm2d
- if groups != 1 or base_width != 64:
- raise ValueError('BasicBlock only supports groups=1 and base_width=64')
- if dilation > 1:
- raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
- # Both self.conv1 and self.downsample layers downsample the input when stride != 1
- self.conv1 = conv3x3(inplanes, planes, stride)
- self.bn1 = norm_layer(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes)
- self.bn2 = norm_layer(planes)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x: Tensor) -> Tensor:
- identity = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- identity = self.downsample(x)
-
- out += identity
- out = self.relu(out)
-
- return out
-
-
-class Bottleneck(nn.Module):
- # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2)
- # while original implementation places the stride at the first 1x1 convolution(self.conv1)
- # according to "Deep residual learning for image recognition"https://arxiv.org/abs/1512.03385.
- # This variant is also known as ResNet V1.5 and improves accuracy according to
- # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch.
-
- expansion: int = 4
-
- def __init__(
- self,
- inplanes: int,
- planes: int,
- stride: int = 1,
- downsample: Optional[nn.Module] = None,
- groups: int = 1,
- base_width: int = 64,
- dilation: int = 1,
- norm_layer: Optional[Callable[..., nn.Module]] = None
- ) -> None:
- super(Bottleneck, self).__init__()
- if norm_layer is None:
- norm_layer = nn.BatchNorm2d
- width = int(planes * (base_width / 64.)) * groups
- # Both self.conv2 and self.downsample layers downsample the input when stride != 1
- self.conv1 = conv1x1(inplanes, width)
- self.bn1 = norm_layer(width)
- self.conv2 = conv3x3(width, width, stride, groups, dilation)
- self.bn2 = norm_layer(width)
- self.conv3 = conv1x1(width, planes * self.expansion)
- self.bn3 = norm_layer(planes * self.expansion)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x: Tensor) -> Tensor:
- identity = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- identity = self.downsample(x)
-
- out += identity
- out = self.relu(out)
-
- return out
-
-
-class ResNet(nn.Module):
-
- def __init__(
- self,
- block: Type[Union[BasicBlock, Bottleneck]],
- layers: List[int],
- num_classes: int = 1000,
- zero_init_residual: bool = False,
- use_last_fc: bool = False,
- groups: int = 1,
- width_per_group: int = 64,
- replace_stride_with_dilation: Optional[List[bool]] = None,
- norm_layer: Optional[Callable[..., nn.Module]] = None
- ) -> None:
- super(ResNet, self).__init__()
- if norm_layer is None:
- norm_layer = nn.BatchNorm2d
- self._norm_layer = norm_layer
-
- self.inplanes = 64
- self.dilation = 1
- if replace_stride_with_dilation is None:
- # each element in the tuple indicates if we should replace
- # the 2x2 stride with a dilated convolution instead
- replace_stride_with_dilation = [False, False, False]
- if len(replace_stride_with_dilation) != 3:
- raise ValueError("replace_stride_with_dilation should be None "
- "or a 3-element tuple, got {}".format(replace_stride_with_dilation))
- self.use_last_fc = use_last_fc
- self.groups = groups
- self.base_width = width_per_group
- self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3,
- bias=False)
- self.bn1 = norm_layer(self.inplanes)
- self.relu = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
- self.layer1 = self._make_layer(block, 64, layers[0])
- self.layer2 = self._make_layer(block, 128, layers[1], stride=2,
- dilate=replace_stride_with_dilation[0])
- self.layer3 = self._make_layer(block, 256, layers[2], stride=2,
- dilate=replace_stride_with_dilation[1])
- self.layer4 = self._make_layer(block, 512, layers[3], stride=2,
- dilate=replace_stride_with_dilation[2])
- self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
-
- if self.use_last_fc:
- self.fc = nn.Linear(512 * block.expansion, num_classes)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
-
-
- # Zero-initialize the last BN in each residual branch,
- # so that the residual branch starts with zeros, and each residual block behaves like an identity.
- # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
- if zero_init_residual:
- for m in self.modules():
- if isinstance(m, Bottleneck):
- nn.init.constant_(m.bn3.weight, 0) # type: ignore[arg-type]
- elif isinstance(m, BasicBlock):
- nn.init.constant_(m.bn2.weight, 0) # type: ignore[arg-type]
-
- def _make_layer(self, block: Type[Union[BasicBlock, Bottleneck]], planes: int, blocks: int,
- stride: int = 1, dilate: bool = False) -> nn.Sequential:
- norm_layer = self._norm_layer
- downsample = None
- previous_dilation = self.dilation
- if dilate:
- self.dilation *= stride
- stride = 1
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- conv1x1(self.inplanes, planes * block.expansion, stride),
- norm_layer(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample, self.groups,
- self.base_width, previous_dilation, norm_layer))
- self.inplanes = planes * block.expansion
- for _ in range(1, blocks):
- layers.append(block(self.inplanes, planes, groups=self.groups,
- base_width=self.base_width, dilation=self.dilation,
- norm_layer=norm_layer))
-
- return nn.Sequential(*layers)
-
- def _forward_impl(self, x: Tensor) -> Tensor:
- # See note [TorchScript super()]
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x = self.maxpool(x)
-
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
-
- x = self.avgpool(x)
- if self.use_last_fc:
- x = torch.flatten(x, 1)
- x = self.fc(x)
- return x
-
- def forward(self, x: Tensor) -> Tensor:
- return self._forward_impl(x)
-
-
-def _resnet(
- arch: str,
- block: Type[Union[BasicBlock, Bottleneck]],
- layers: List[int],
- pretrained: bool,
- progress: bool,
- **kwargs: Any
-) -> ResNet:
- model = ResNet(block, layers, **kwargs)
- if pretrained:
- state_dict = load_state_dict_from_url(model_urls[arch],
- progress=progress)
- model.load_state_dict(state_dict)
- return model
-
-
-def resnet18(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNet-18 model from
-    `"Deep Residual Learning for Image Recognition" <https://arxiv.org/abs/1512.03385>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress,
- **kwargs)
-
-
-def resnet34(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNet-34 model from
-    `"Deep Residual Learning for Image Recognition" <https://arxiv.org/abs/1512.03385>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress,
- **kwargs)
-
-
-def resnet50(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNet-50 model from
-    `"Deep Residual Learning for Image Recognition" <https://arxiv.org/abs/1512.03385>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress,
- **kwargs)
-
-
-def resnet101(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNet-101 model from
-    `"Deep Residual Learning for Image Recognition" <https://arxiv.org/abs/1512.03385>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress,
- **kwargs)
-
-
-def resnet152(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNet-152 model from
-    `"Deep Residual Learning for Image Recognition" <https://arxiv.org/abs/1512.03385>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress,
- **kwargs)
-
-
-def resnext50_32x4d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNeXt-50 32x4d model from
-    `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/abs/1611.05431>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- kwargs['groups'] = 32
- kwargs['width_per_group'] = 4
- return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3],
- pretrained, progress, **kwargs)
-
-
-def resnext101_32x8d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNeXt-101 32x8d model from
-    `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/abs/1611.05431>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- kwargs['groups'] = 32
- kwargs['width_per_group'] = 8
- return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3],
- pretrained, progress, **kwargs)
-
-
-def wide_resnet50_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""Wide ResNet-50-2 model from
-    `"Wide Residual Networks" <https://arxiv.org/abs/1605.07146>`_.
-
- The model is the same as ResNet except for the bottleneck number of channels
- which is twice larger in every block. The number of channels in outer 1x1
- convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048
- channels, and in Wide ResNet-50-2 has 2048-1024-2048.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- kwargs['width_per_group'] = 64 * 2
- return _resnet('wide_resnet50_2', Bottleneck, [3, 4, 6, 3],
- pretrained, progress, **kwargs)
-
-
-def wide_resnet101_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""Wide ResNet-101-2 model from
-    `"Wide Residual Networks" <https://arxiv.org/abs/1605.07146>`_.
-
- The model is the same as ResNet except for the bottleneck number of channels
- which is twice larger in every block. The number of channels in outer 1x1
- convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048
- channels, and in Wide ResNet-50-2 has 2048-1024-2048.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- kwargs['width_per_group'] = 64 * 2
- return _resnet('wide_resnet101_2', Bottleneck, [3, 4, 23, 3],
- pretrained, progress, **kwargs)
-
-
-func_dict = {
- 'resnet18': (resnet18, 512),
- 'resnet50': (resnet50, 2048)
-}
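-
-
-# Shape sketch (comments only): with the default use_last_fc=False, ReconNetWrapper
-# concatenates the seven 1x1 heads (80 + 64 + 80 + 3 + 27 + 2 + 1) into a 257-dim
-# coefficient vector per image.
-#
-#   net = define_net_recon('resnet50')
-#   coeffs = net(torch.randn(2, 3, 224, 224))   # -> torch.Size([2, 257])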
diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/utils/loading.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/utils/loading.py
deleted file mode 100644
index 9684f8b9a0a201af07045ea65ab4fc05df3694ba..0000000000000000000000000000000000000000
--- a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/utils/loading.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import yaml
-
-
-def load_yaml(path):
- with open(path, "rt") as f:
- return yaml.safe_load(f)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_v_pred.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_v_pred.py
deleted file mode 100644
index 1db2e18e5b19b822b8b03f9b8ccacf311da69691..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_v_pred.py
+++ /dev/null
@@ -1,540 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import time
-import unittest
-
-import numpy as np
-import torch
-from huggingface_hub import hf_hub_download
-from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
-
-from diffusers import (
- AutoencoderKL,
- DDIMScheduler,
- DPMSolverMultistepScheduler,
- EulerDiscreteScheduler,
- StableDiffusionPipeline,
- UNet2DConditionModel,
-)
-from diffusers.models.attention_processor import AttnProcessor
-from diffusers.utils import load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
-
-
-enable_full_determinism()
-
-
-class StableDiffusion2VPredictionPipelineFastTests(unittest.TestCase):
- def tearDown(self):
- # clean up the VRAM after each test
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- @property
- def dummy_cond_unet(self):
- torch.manual_seed(0)
- model = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=4,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- cross_attention_dim=32,
- # SD2-specific config below
- attention_head_dim=(2, 4),
- use_linear_projection=True,
- )
- return model
-
- @property
- def dummy_vae(self):
- torch.manual_seed(0)
- model = AutoencoderKL(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=4,
- sample_size=128,
- )
- return model
-
- @property
- def dummy_text_encoder(self):
- torch.manual_seed(0)
- config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=32,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- # SD2-specific config below
- hidden_act="gelu",
- projection_dim=64,
- )
- return CLIPTextModel(config)
-
- def test_stable_diffusion_v_pred_ddim(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- unet = self.dummy_cond_unet
- scheduler = DDIMScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- clip_sample=False,
- set_alpha_to_one=False,
- prediction_type="v_prediction",
- )
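-        # prediction_type="v_prediction": the UNet predicts the velocity
-        # v = alpha_t * eps - sigma_t * x0 instead of the raw noise eps
-        # (parameterization introduced by the progressive-distillation paper).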
-
- vae = self.dummy_vae
- bert = self.dummy_text_encoder
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- # make sure here that pndm scheduler skips prk
- sd_pipe = StableDiffusionPipeline(
- unet=unet,
- scheduler=scheduler,
- vae=vae,
- text_encoder=bert,
- tokenizer=tokenizer,
- safety_checker=None,
- feature_extractor=None,
- requires_safety_checker=False,
- )
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger"
-
- generator = torch.Generator(device=device).manual_seed(0)
- output = sd_pipe([prompt], generator=generator, guidance_scale=6.0, num_inference_steps=2, output_type="np")
- image = output.images
-
- generator = torch.Generator(device=device).manual_seed(0)
- image_from_tuple = sd_pipe(
- [prompt],
- generator=generator,
- guidance_scale=6.0,
- num_inference_steps=2,
- output_type="np",
- return_dict=False,
- )[0]
-
- image_slice = image[0, -3:, -3:, -1]
- image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.6569, 0.6525, 0.5142, 0.4968, 0.4923, 0.4601, 0.4996, 0.5041, 0.4544])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
- assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_v_pred_k_euler(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- unet = self.dummy_cond_unet
- scheduler = EulerDiscreteScheduler(
- beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", prediction_type="v_prediction"
- )
- vae = self.dummy_vae
- bert = self.dummy_text_encoder
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- # make sure here that pndm scheduler skips prk
- sd_pipe = StableDiffusionPipeline(
- unet=unet,
- scheduler=scheduler,
- vae=vae,
- text_encoder=bert,
- tokenizer=tokenizer,
- safety_checker=None,
- feature_extractor=None,
- requires_safety_checker=False,
- )
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger"
- generator = torch.Generator(device=device).manual_seed(0)
- output = sd_pipe([prompt], generator=generator, guidance_scale=6.0, num_inference_steps=2, output_type="np")
-
- image = output.images
-
- generator = torch.Generator(device=device).manual_seed(0)
- image_from_tuple = sd_pipe(
- [prompt],
- generator=generator,
- guidance_scale=6.0,
- num_inference_steps=2,
- output_type="np",
- return_dict=False,
- )[0]
-
- image_slice = image[0, -3:, -3:, -1]
- image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.5644, 0.6514, 0.5190, 0.5663, 0.5287, 0.4953, 0.5430, 0.5243, 0.4778])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
- assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
-
- @unittest.skipIf(torch_device != "cuda", "This test requires a GPU")
- def test_stable_diffusion_v_pred_fp16(self):
- """Test that stable diffusion v-prediction works with fp16"""
- unet = self.dummy_cond_unet
- scheduler = DDIMScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- clip_sample=False,
- set_alpha_to_one=False,
- prediction_type="v_prediction",
- )
- vae = self.dummy_vae
- bert = self.dummy_text_encoder
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- # put models in fp16
- unet = unet.half()
- vae = vae.half()
- bert = bert.half()
-
- # make sure here that pndm scheduler skips prk
- sd_pipe = StableDiffusionPipeline(
- unet=unet,
- scheduler=scheduler,
- vae=vae,
- text_encoder=bert,
- tokenizer=tokenizer,
- safety_checker=None,
- feature_extractor=None,
- requires_safety_checker=False,
- )
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger"
- generator = torch.manual_seed(0)
- image = sd_pipe([prompt], generator=generator, num_inference_steps=2, output_type="np").images
-
- assert image.shape == (1, 64, 64, 3)
-
-
-@slow
-@require_torch_gpu
-class StableDiffusion2VPredictionPipelineIntegrationTests(unittest.TestCase):
- def tearDown(self):
- # clean up the VRAM after each test
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def test_stable_diffusion_v_pred_default(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2")
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.enable_attention_slicing()
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger"
- generator = torch.manual_seed(0)
- output = sd_pipe([prompt], generator=generator, guidance_scale=7.5, num_inference_steps=20, output_type="np")
-
- image = output.images
- image_slice = image[0, 253:256, 253:256, -1]
-
- assert image.shape == (1, 768, 768, 3)
- expected_slice = np.array([0.1868, 0.1922, 0.1527, 0.1921, 0.1908, 0.1624, 0.1779, 0.1652, 0.1734])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_v_pred_upcast_attention(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
- )
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.enable_attention_slicing()
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger"
- generator = torch.manual_seed(0)
- output = sd_pipe([prompt], generator=generator, guidance_scale=7.5, num_inference_steps=20, output_type="np")
-
- image = output.images
- image_slice = image[0, 253:256, 253:256, -1]
-
- assert image.shape == (1, 768, 768, 3)
- expected_slice = np.array([0.4209, 0.4087, 0.4097, 0.4209, 0.3860, 0.4329, 0.4280, 0.4324, 0.4187])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 5e-2
-
- def test_stable_diffusion_v_pred_euler(self):
- scheduler = EulerDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-2", subfolder="scheduler")
- sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", scheduler=scheduler)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.enable_attention_slicing()
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger"
- generator = torch.manual_seed(0)
-
- output = sd_pipe([prompt], generator=generator, num_inference_steps=5, output_type="numpy")
- image = output.images
-
- image_slice = image[0, 253:256, 253:256, -1]
-
- assert image.shape == (1, 768, 768, 3)
- expected_slice = np.array([0.1781, 0.1695, 0.1661, 0.1705, 0.1588, 0.1699, 0.2005, 0.1589, 0.1677])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_v_pred_dpm(self):
- """
- TODO: update this test after making DPM compatible with V-prediction!
- """
- scheduler = DPMSolverMultistepScheduler.from_pretrained(
- "stabilityai/stable-diffusion-2", subfolder="scheduler"
- )
- sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", scheduler=scheduler)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.enable_attention_slicing()
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "a photograph of an astronaut riding a horse"
- generator = torch.manual_seed(0)
- image = sd_pipe(
- [prompt], generator=generator, guidance_scale=7.5, num_inference_steps=5, output_type="numpy"
- ).images
-
- image_slice = image[0, 253:256, 253:256, -1]
- assert image.shape == (1, 768, 768, 3)
- expected_slice = np.array([0.3303, 0.3184, 0.3291, 0.3300, 0.3256, 0.3113, 0.2965, 0.3134, 0.3192])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_attention_slicing_v_pred(self):
- torch.cuda.reset_peak_memory_stats()
- model_id = "stabilityai/stable-diffusion-2"
- pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- prompt = "a photograph of an astronaut riding a horse"
-
- # make attention efficient
- pipe.enable_attention_slicing()
- generator = torch.manual_seed(0)
- output_chunked = pipe(
- [prompt], generator=generator, guidance_scale=7.5, num_inference_steps=10, output_type="numpy"
- )
- image_chunked = output_chunked.images
-
- mem_bytes = torch.cuda.max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
- # make sure that less than 5.5 GB is allocated
- assert mem_bytes < 5.5 * 10**9
-
- # disable slicing
- pipe.disable_attention_slicing()
- generator = torch.manual_seed(0)
- output = pipe([prompt], generator=generator, guidance_scale=7.5, num_inference_steps=10, output_type="numpy")
- image = output.images
-
- # make sure that more than 5.5 GB is allocated
- mem_bytes = torch.cuda.max_memory_allocated()
- assert mem_bytes > 5.5 * 10**9
- assert np.abs(image_chunked.flatten() - image.flatten()).max() < 1e-3
-
- def test_stable_diffusion_text2img_pipeline_v_pred_default(self):
- expected_image = load_numpy(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/"
- "sd2-text2img/astronaut_riding_a_horse_v_pred.npy"
- )
-
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2")
- pipe.to(torch_device)
- pipe.enable_attention_slicing()
- pipe.set_progress_bar_config(disable=None)
-
- prompt = "astronaut riding a horse"
-
- generator = torch.manual_seed(0)
- output = pipe(prompt=prompt, guidance_scale=7.5, generator=generator, output_type="np")
- image = output.images[0]
-
- assert image.shape == (768, 768, 3)
- assert np.abs(expected_image - image).max() < 9e-1
-
- def test_stable_diffusion_text2img_pipeline_unflawed(self):
- expected_image = load_numpy(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/"
- "sd2-text2img/lion_galaxy.npy"
- )
-
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
- pipe.scheduler = DDIMScheduler.from_config(
- pipe.scheduler.config, timestep_spacing="trailing", rescale_betas_zero_snr=True
- )
- pipe.to(torch_device)
- pipe.enable_attention_slicing()
- pipe.set_progress_bar_config(disable=None)
-
- prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"
-
- generator = torch.manual_seed(0)
- output = pipe(prompt=prompt, guidance_scale=7.5, guidance_rescale=0.7, generator=generator, output_type="np")
- image = output.images[0]
-
- assert image.shape == (768, 768, 3)
- assert np.abs(expected_image - image).max() < 5e-1
-
- def test_stable_diffusion_text2img_pipeline_v_pred_fp16(self):
- expected_image = load_numpy(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/"
- "sd2-text2img/astronaut_riding_a_horse_v_pred_fp16.npy"
- )
-
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", torch_dtype=torch.float16)
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- prompt = "astronaut riding a horse"
-
- generator = torch.manual_seed(0)
- output = pipe(prompt=prompt, guidance_scale=7.5, generator=generator, output_type="np")
- image = output.images[0]
-
- assert image.shape == (768, 768, 3)
- assert np.abs(expected_image - image).max() < 7.5e-1
-
- def test_download_local(self):
- filename = hf_hub_download("stabilityai/stable-diffusion-2-1", filename="v2-1_768-ema-pruned.safetensors")
-
- pipe = StableDiffusionPipeline.from_single_file(filename, torch_dtype=torch.float16)
- pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
- pipe.to("cuda")
-
- image_out = pipe("test", num_inference_steps=1, output_type="np").images[0]
-
- assert image_out.shape == (768, 768, 3)
-
- def test_download_ckpt_diff_format_is_same(self):
- single_file_path = (
- "https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.safetensors"
- )
-
- pipe_single = StableDiffusionPipeline.from_single_file(single_file_path)
- pipe_single.scheduler = DDIMScheduler.from_config(pipe_single.scheduler.config)
- pipe_single.unet.set_attn_processor(AttnProcessor())
- pipe_single.to("cuda")
-
- generator = torch.Generator(device="cpu").manual_seed(0)
- image_ckpt = pipe_single("a turtle", num_inference_steps=5, generator=generator, output_type="np").images[0]
-
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
- pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
- pipe.unet.set_attn_processor(AttnProcessor())
- pipe.to("cuda")
-
- generator = torch.Generator(device="cpu").manual_seed(0)
- image = pipe("a turtle", num_inference_steps=5, generator=generator, output_type="np").images[0]
-
- assert np.max(np.abs(image - image_ckpt)) < 1e-3
-
- def test_stable_diffusion_text2img_intermediate_state_v_pred(self):
- number_of_steps = 0
-
- def test_callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None:
- test_callback_fn.has_been_called = True
- nonlocal number_of_steps
- number_of_steps += 1
- if step == 0:
- latents = latents.detach().cpu().numpy()
- assert latents.shape == (1, 4, 96, 96)
- latents_slice = latents[0, -3:, -3:, -1]
- expected_slice = np.array([0.7749, 0.0325, 0.5088, 0.1619, 0.3372, 0.3667, -0.5186, 0.6860, 1.4326])
-
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
- elif step == 19:
- latents = latents.detach().cpu().numpy()
- assert latents.shape == (1, 4, 96, 96)
- latents_slice = latents[0, -3:, -3:, -1]
- expected_slice = np.array([1.3887, 1.0273, 1.7266, 0.0726, 0.6611, 0.1598, -1.0547, 0.1522, 0.0227])
-
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
-
- test_callback_fn.has_been_called = False
-
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", torch_dtype=torch.float16)
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- prompt = "Andromeda galaxy in a bottle"
-
- generator = torch.manual_seed(0)
- pipe(
- prompt=prompt,
- num_inference_steps=20,
- guidance_scale=7.5,
- generator=generator,
- callback=test_callback_fn,
- callback_steps=1,
- )
- assert test_callback_fn.has_been_called
- assert number_of_steps == 20
-
- def test_stable_diffusion_low_cpu_mem_usage_v_pred(self):
- pipeline_id = "stabilityai/stable-diffusion-2"
-
- start_time = time.time()
- pipeline_low_cpu_mem_usage = StableDiffusionPipeline.from_pretrained(pipeline_id, torch_dtype=torch.float16)
- pipeline_low_cpu_mem_usage.to(torch_device)
- low_cpu_mem_usage_time = time.time() - start_time
-
- start_time = time.time()
- _ = StableDiffusionPipeline.from_pretrained(pipeline_id, torch_dtype=torch.float16, low_cpu_mem_usage=False)
- normal_load_time = time.time() - start_time
-
- assert 2 * low_cpu_mem_usage_time < normal_load_time
-
- def test_stable_diffusion_pipeline_with_sequential_cpu_offloading_v_pred(self):
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- pipeline_id = "stabilityai/stable-diffusion-2"
- prompt = "Andromeda galaxy in a bottle"
-
- pipeline = StableDiffusionPipeline.from_pretrained(pipeline_id, torch_dtype=torch.float16)
- pipeline = pipeline.to(torch_device)
- pipeline.enable_attention_slicing(1)
- pipeline.enable_sequential_cpu_offload()
-
- generator = torch.manual_seed(0)
- _ = pipeline(prompt, generator=generator, num_inference_steps=5)
-
- mem_bytes = torch.cuda.max_memory_allocated()
- # make sure that less than 2.8 GB is allocated
- assert mem_bytes < 2.8 * 10**9
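A minimal sketch of the same memory-saving switches exercised by the tests above, outside the test harness (the checkpoint id, prompt, and step count are only examples; the diffusers package and a CUDA GPU are assumed):

import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint id; any Stable Diffusion checkpoint works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Attention slicing computes attention in chunks, lowering peak VRAM at a small speed cost.
pipe.enable_attention_slicing()
image = pipe("astronaut riding a horse", num_inference_steps=20).images[0]
image.save("astronaut.png")

# For tighter memory budgets, sequential CPU offload moves submodules to the GPU on demand
# (call it instead of .to("cuda")):
# pipe.enable_sequential_cpu_offload()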
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_unclip.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_unclip.py
deleted file mode 100644
index b0ce1312e79f6762bc7573c3a90e58cb33a21bad..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_unclip.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import torch
-
-from diffusers import UnCLIPScheduler
-
-from .test_schedulers import SchedulerCommonTest
-
-
-# UnCLIPScheduler is a modified DDPMScheduler with a subset of the configuration.
-class UnCLIPSchedulerTest(SchedulerCommonTest):
- scheduler_classes = (UnCLIPScheduler,)
-
- def get_scheduler_config(self, **kwargs):
- config = {
- "num_train_timesteps": 1000,
- "variance_type": "fixed_small_log",
- "clip_sample": True,
- "clip_sample_range": 1.0,
- "prediction_type": "epsilon",
- }
-
- config.update(**kwargs)
- return config
-
- def test_timesteps(self):
- for timesteps in [1, 5, 100, 1000]:
- self.check_over_configs(num_train_timesteps=timesteps)
-
- def test_variance_type(self):
- for variance in ["fixed_small_log", "learned_range"]:
- self.check_over_configs(variance_type=variance)
-
- def test_clip_sample(self):
- for clip_sample in [True, False]:
- self.check_over_configs(clip_sample=clip_sample)
-
- def test_clip_sample_range(self):
- for clip_sample_range in [1, 5, 10, 20]:
- self.check_over_configs(clip_sample_range=clip_sample_range)
-
- def test_prediction_type(self):
- for prediction_type in ["epsilon", "sample"]:
- self.check_over_configs(prediction_type=prediction_type)
-
- def test_time_indices(self):
- for time_step in [0, 500, 999]:
- for prev_timestep in [None, 5, 100, 250, 500, 750]:
- if prev_timestep is not None and prev_timestep >= time_step:
- continue
-
- self.check_over_forward(time_step=time_step, prev_timestep=prev_timestep)
-
- def test_variance_fixed_small_log(self):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config(variance_type="fixed_small_log")
- scheduler = scheduler_class(**scheduler_config)
-
- assert torch.sum(torch.abs(scheduler._get_variance(0) - 1.0000e-10)) < 1e-5
- assert torch.sum(torch.abs(scheduler._get_variance(487) - 0.0549625)) < 1e-5
- assert torch.sum(torch.abs(scheduler._get_variance(999) - 0.9994987)) < 1e-5
-
- def test_variance_learned_range(self):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config(variance_type="learned_range")
- scheduler = scheduler_class(**scheduler_config)
-
- predicted_variance = 0.5
-
- assert scheduler._get_variance(1, predicted_variance=predicted_variance) - -10.1712790 < 1e-5
- assert scheduler._get_variance(487, predicted_variance=predicted_variance) - -5.7998052 < 1e-5
- assert scheduler._get_variance(999, predicted_variance=predicted_variance) - -0.0010011 < 1e-5
-
- def test_full_loop(self):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config()
- scheduler = scheduler_class(**scheduler_config)
-
- timesteps = scheduler.timesteps
-
- model = self.dummy_model()
- sample = self.dummy_sample_deter
- generator = torch.manual_seed(0)
-
- for i, t in enumerate(timesteps):
- # 1. predict noise residual
- residual = model(sample, t)
-
- # 2. predict previous mean of sample x_t-1
- pred_prev_sample = scheduler.step(residual, t, sample, generator=generator).prev_sample
-
- sample = pred_prev_sample
-
- result_sum = torch.sum(torch.abs(sample))
- result_mean = torch.mean(torch.abs(sample))
-
- assert abs(result_sum.item() - 252.2682495) < 1e-2
- assert abs(result_mean.item() - 0.3284743) < 1e-3
-
- def test_full_loop_skip_timesteps(self):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config()
- scheduler = scheduler_class(**scheduler_config)
-
- scheduler.set_timesteps(25)
-
- timesteps = scheduler.timesteps
-
- model = self.dummy_model()
- sample = self.dummy_sample_deter
- generator = torch.manual_seed(0)
-
- for i, t in enumerate(timesteps):
- # 1. predict noise residual
- residual = model(sample, t)
-
- if i + 1 == timesteps.shape[0]:
- prev_timestep = None
- else:
- prev_timestep = timesteps[i + 1]
-
- # 2. predict previous mean of sample x_t-1
- pred_prev_sample = scheduler.step(
- residual, t, sample, prev_timestep=prev_timestep, generator=generator
- ).prev_sample
-
- sample = pred_prev_sample
-
- result_sum = torch.sum(torch.abs(sample))
- result_mean = torch.mean(torch.abs(sample))
-
- assert abs(result_sum.item() - 258.2044983) < 1e-2
- assert abs(result_mean.item() - 0.3362038) < 1e-3
-
- def test_trained_betas(self):
- pass
-
- def test_add_noise_device(self):
- pass
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py
deleted file mode 100644
index 5d6215d6f6e2f81fa284af0e639f3568429e3a75..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py
+++ /dev/null
@@ -1,45 +0,0 @@
-_base_ = './mask_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://detectron2/resnet50_caffe',
- backbone=dict(norm_cfg=dict(requires_grad=False), style='caffe'))
-# use caffe img_norm
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='LoadAnnotations',
- with_bbox=True,
- with_mask=True,
- poly2mask=False),
- dict(
- type='Resize',
- img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
- (1333, 768), (1333, 800)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_x101_64x4d_fpn_mstrain_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_x101_64x4d_fpn_mstrain_2x_coco.py
deleted file mode 100644
index 4329b34bee03d219cdd94b600055eb5d5a7cc8ef..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_x101_64x4d_fpn_mstrain_2x_coco.py
+++ /dev/null
@@ -1,14 +0,0 @@
-_base_ = './vfnet_r50_fpn_mstrain_2x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_64x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=64,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'))
diff --git a/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/robustness_eval.py b/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/robustness_eval.py
deleted file mode 100644
index cc2e27b6b74ca87cd58723bda7f94177a81734ca..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/robustness_eval.py
+++ /dev/null
@@ -1,250 +0,0 @@
-import os.path as osp
-from argparse import ArgumentParser
-
-import mmcv
-import numpy as np
-
-
-def print_coco_results(results):
-
- def _print(result, ap=1, iouThr=None, areaRng='all', maxDets=100):
- titleStr = 'Average Precision' if ap == 1 else 'Average Recall'
- typeStr = '(AP)' if ap == 1 else '(AR)'
- iouStr = '0.50:0.95' \
- if iouThr is None else f'{iouThr:0.2f}'
- iStr = f' {titleStr:<18} {typeStr} @[ IoU={iouStr:<9} | '
- iStr += f'area={areaRng:>6s} | maxDets={maxDets:>3d} ] = {result:0.3f}'
- print(iStr)
-
- stats = np.zeros((12, ))
- stats[0] = _print(results[0], 1)
- stats[1] = _print(results[1], 1, iouThr=.5)
- stats[2] = _print(results[2], 1, iouThr=.75)
- stats[3] = _print(results[3], 1, areaRng='small')
- stats[4] = _print(results[4], 1, areaRng='medium')
- stats[5] = _print(results[5], 1, areaRng='large')
- stats[6] = _print(results[6], 0, maxDets=1)
- stats[7] = _print(results[7], 0, maxDets=10)
- stats[8] = _print(results[8], 0)
- stats[9] = _print(results[9], 0, areaRng='small')
- stats[10] = _print(results[10], 0, areaRng='medium')
- stats[11] = _print(results[11], 0, areaRng='large')
-
-
-def get_coco_style_results(filename,
- task='bbox',
- metric=None,
- prints='mPC',
- aggregate='benchmark'):
-
- assert aggregate in ['benchmark', 'all']
-
- if prints == 'all':
- prints = ['P', 'mPC', 'rPC']
- elif isinstance(prints, str):
- prints = [prints]
- for p in prints:
- assert p in ['P', 'mPC', 'rPC']
-
- if metric is None:
- metrics = [
- 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'AR1', 'AR10', 'AR100',
- 'ARs', 'ARm', 'ARl'
- ]
- elif isinstance(metric, list):
- metrics = metric
- else:
- metrics = [metric]
-
- for metric_name in metrics:
- assert metric_name in [
- 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'AR1', 'AR10', 'AR100',
- 'ARs', 'ARm', 'ARl'
- ]
-
- eval_output = mmcv.load(filename)
-
- num_distortions = len(list(eval_output.keys()))
- results = np.zeros((num_distortions, 6, len(metrics)), dtype='float32')
-
- for corr_i, distortion in enumerate(eval_output):
- for severity in eval_output[distortion]:
- for metric_j, metric_name in enumerate(metrics):
- mAP = eval_output[distortion][severity][task][metric_name]
- results[corr_i, severity, metric_j] = mAP
-
- P = results[0, 0, :]
- if aggregate == 'benchmark':
- mPC = np.mean(results[:15, 1:, :], axis=(0, 1))
- else:
- mPC = np.mean(results[:, 1:, :], axis=(0, 1))
- rPC = mPC / P
-
- print(f'\nmodel: {osp.basename(filename)}')
- if metric is None:
- if 'P' in prints:
- print(f'Performance on Clean Data [P] ({task})')
- print_coco_results(P)
- if 'mPC' in prints:
- print(f'Mean Performance under Corruption [mPC] ({task})')
- print_coco_results(mPC)
- if 'rPC' in prints:
- print(f'Relative Performance under Corruption [rPC] ({task})')
- print_coco_results(rPC)
- else:
- if 'P' in prints:
- print(f'Performance on Clean Data [P] ({task})')
- for metric_i, metric_name in enumerate(metrics):
- print(f'{metric_name:5} = {P[metric_i]:0.3f}')
- if 'mPC' in prints:
- print(f'Mean Performance under Corruption [mPC] ({task})')
- for metric_i, metric_name in enumerate(metrics):
- print(f'{metric_name:5} = {mPC[metric_i]:0.3f}')
- if 'rPC' in prints:
- print(f'Relative Performance under Corruption [rPC] ({task})')
- for metric_i, metric_name in enumerate(metrics):
- print(f'{metric_name:5} => {rPC[metric_i] * 100:0.1f} %')
-
- return results
-
-
-def get_voc_style_results(filename, prints='mPC', aggregate='benchmark'):
-
- assert aggregate in ['benchmark', 'all']
-
- if prints == 'all':
- prints = ['P', 'mPC', 'rPC']
- elif isinstance(prints, str):
- prints = [prints]
- for p in prints:
- assert p in ['P', 'mPC', 'rPC']
-
- eval_output = mmcv.load(filename)
-
- num_distortions = len(list(eval_output.keys()))
- results = np.zeros((num_distortions, 6, 20), dtype='float32')
-
- for i, distortion in enumerate(eval_output):
- for severity in eval_output[distortion]:
- mAP = [
- eval_output[distortion][severity][j]['ap']
- for j in range(len(eval_output[distortion][severity]))
- ]
- results[i, severity, :] = mAP
-
- P = results[0, 0, :]
- if aggregate == 'benchmark':
- mPC = np.mean(results[:15, 1:, :], axis=(0, 1))
- else:
- mPC = np.mean(results[:, 1:, :], axis=(0, 1))
- rPC = mPC / P
-
- print(f'\nmodel: {osp.basename(filename)}')
- if 'P' in prints:
- print(f'Performance on Clean Data [P] in AP50 = {np.mean(P):0.3f}')
- if 'mPC' in prints:
- print('Mean Performance under Corruption [mPC] in AP50 = '
- f'{np.mean(mPC):0.3f}')
- if 'rPC' in prints:
- print('Relative Performance under Corruption [rPC] in % = '
- f'{np.mean(rPC) * 100:0.1f}')
-
- return np.mean(results, axis=2, keepdims=True)
-
-
-def get_results(filename,
- dataset='coco',
- task='bbox',
- metric=None,
- prints='mPC',
- aggregate='benchmark'):
- assert dataset in ['coco', 'voc', 'cityscapes']
-
- if dataset in ['coco', 'cityscapes']:
- results = get_coco_style_results(
- filename,
- task=task,
- metric=metric,
- prints=prints,
- aggregate=aggregate)
- elif dataset == 'voc':
- if task != 'bbox':
- print('Only bbox analysis is supported for Pascal VOC')
- print('Will report bbox results\n')
- if metric not in [None, ['AP'], ['AP50']]:
- print('Only the AP50 metric is supported for Pascal VOC')
- print('Will report AP50 metric\n')
- results = get_voc_style_results(
- filename, prints=prints, aggregate=aggregate)
-
- return results
-
-
-def get_distortions_from_file(filename):
-
- eval_output = mmcv.load(filename)
-
- return get_distortions_from_results(eval_output)
-
-
-def get_distortions_from_results(eval_output):
- distortions = []
- for i, distortion in enumerate(eval_output):
- distortions.append(distortion.replace('_', ' '))
- return distortions
-
-
-def main():
- parser = ArgumentParser(description='Corruption Result Analysis')
- parser.add_argument('filename', help='result file path')
- parser.add_argument(
- '--dataset',
- type=str,
- choices=['coco', 'voc', 'cityscapes'],
- default='coco',
- help='dataset type')
- parser.add_argument(
- '--task',
- type=str,
- nargs='+',
- choices=['bbox', 'segm'],
- default=['bbox'],
- help='task to report')
- parser.add_argument(
- '--metric',
- nargs='+',
- choices=[
- None, 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'AR1', 'AR10',
- 'AR100', 'ARs', 'ARm', 'ARl'
- ],
- default=None,
- help='metric to report')
- parser.add_argument(
- '--prints',
- type=str,
- nargs='+',
- choices=['P', 'mPC', 'rPC'],
- default='mPC',
- help='corruption benchmark metric to print')
- parser.add_argument(
- '--aggregate',
- type=str,
- choices=['all', 'benchmark'],
- default='benchmark',
- help='aggregate all results or only those \
- for benchmark corruptions')
-
- args = parser.parse_args()
-
- for task in args.task:
- get_results(
- args.filename,
- dataset=args.dataset,
- task=task,
- metric=args.metric,
- prints=args.prints,
- aggregate=args.aggregate)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_20k_voc12aug.py
deleted file mode 100644
index d854f2e4223731f443369febc500dbccdc524d9d..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './ann_r50-d8_512x512_20k_voc12aug.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/utils/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/utils/__init__.py
deleted file mode 100644
index f2678b321c295bcceaef945111ac3524be19d6e4..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/utils/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .misc import add_prefix
-
-__all__ = ['add_prefix']
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/diffusionmodules/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/diffusionmodules/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ArkanDash/rvc-models/infer_pack/models.py b/spaces/ArkanDash/rvc-models/infer_pack/models.py
deleted file mode 100644
index 96165f73644e6fb92d0ffedb4a3c9e1a457cb989..0000000000000000000000000000000000000000
--- a/spaces/ArkanDash/rvc-models/infer_pack/models.py
+++ /dev/null
@@ -1,982 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of the sine waveform (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_threshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the n_har multiples cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # taking % 1 here would mean the cumsum below could no longer be optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ):  # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is t and is broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is t and is broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y_lengths, ds
- ):  # y (the spectrogram) is no longer needed here
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is t and is broadcast
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- z_slice, ids_slice = commons.rand_slice_segments(
- x, y_lengths, self.segment_size
- )
-
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice
-
- def infer(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
- ):  # y (the spectrogram) is no longer needed here
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is t and is broadcast
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o, o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
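The SineGen and SourceModuleHnNSF docstrings above spell out the constructor arguments and tensor shapes. As a rough, hypothetical sketch of driving the harmonic source module on its own (assuming this Space's infer_pack package is importable; the sampling rate, F0 value, and upsampling factor are made up for illustration):

import torch
from infer_pack.models import SourceModuleHnNSF

# Harmonic-plus-noise excitation source: 40 kHz output, fundamental only (harmonic_num=0), fp32.
m_source = SourceModuleHnNSF(sampling_rate=40000, harmonic_num=0, is_half=False)

f0 = torch.full((1, 100), 220.0)      # constant 220 Hz F0 track, shape [batch, frames]
upp = 400                             # upsampling factor from F0 frames to audio samples
sine_merge, _, _ = m_source(f0, upp)  # merged sine excitation
print(sine_merge.shape)               # torch.Size([1, 40000, 1]), i.e. [batch, frames * upp, 1]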
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/train.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/train.py
deleted file mode 100644
index b6ed02bd59f540ca58df20bf72d462f195210a32..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/train.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Common training-related configs that are designed for "tools/lazyconfig_train_net.py"
-# You can use your own instead, together with your own train_net.py
-train = dict(
- output_dir="./output",
- init_checkpoint="",
- max_iter=90000,
- amp=dict(enabled=False), # options for Automatic Mixed Precision
- ddp=dict( # options for DistributedDataParallel
- broadcast_buffers=False,
- find_unused_parameters=False,
- fp16_compression=False,
- ),
- checkpointer=dict(period=5000, max_to_keep=100), # options for PeriodicCheckpointer
- eval_period=5000,
- log_period=20,
- device="cuda"
- # ...
-)
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/README.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/README.md
deleted file mode 100644
index 778ed3da0bae89820831bcd8a72ff7b9cad8d4dd..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
-To add a new Op:
-
-1. Create a new directory
-2. Implement new ops there
-3. Declare its Python interface in `vision.cpp`.
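As a hypothetical illustration of step 3 (the op name below is a placeholder, not a real detectron2 op): once an op is implemented under csrc/ and declared in vision.cpp, rebuilding the extension makes it callable from Python through detectron2's compiled _C module, roughly like this:

import torch
from detectron2 import _C  # compiled extension built from detectron2/layers/csrc

boxes = torch.rand(8, 4)
# `my_new_op` and its signature are whatever vision.cpp declares; this is illustration only.
out = _C.my_new_op(boxes)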
diff --git a/spaces/Bart92/RVC_HF/configs/config.py b/spaces/Bart92/RVC_HF/configs/config.py
deleted file mode 100644
index e3b0205a1f0d62f674b9c3de2c5ab7ee90464945..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/configs/config.py
+++ /dev/null
@@ -1,265 +0,0 @@
-import argparse
-import os
-import sys
-import json
-from multiprocessing import cpu_count
-
-import torch
-
-try:
- import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
- if torch.xpu.is_available():
- from infer.modules.ipex import ipex_init
- ipex_init()
-except Exception:
- pass
-
-import logging
-
-logger = logging.getLogger(__name__)
-
-
-version_config_list = [
- "v1/32k.json",
- "v1/40k.json",
- "v1/48k.json",
- "v2/48k.json",
- "v2/32k.json",
-]
-
-
-def singleton_variable(func):
- def wrapper(*args, **kwargs):
- if not wrapper.instance:
- wrapper.instance = func(*args, **kwargs)
- return wrapper.instance
-
- wrapper.instance = None
- return wrapper
-
-
-@singleton_variable
-class Config:
- def __init__(self):
- self.device = "cuda:0"
- self.is_half = True
- self.n_cpu = 0
- self.gpu_name = None
- self.json_config = self.load_config_json()
- self.gpu_mem = None
- (
- self.python_cmd,
- self.listen_port,
- self.iscolab,
- self.noparallel,
- self.noautoopen,
- self.paperspace,
- self.is_cli,
- self.grtheme,
- self.dml,
- ) = self.arg_parse()
- self.instead = ""
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- @staticmethod
- def load_config_json() -> dict:
- d = {}
- for config_file in version_config_list:
- with open(f"configs/{config_file}", "r") as f:
- d[config_file] = json.load(f)
- return d
-
- @staticmethod
- def arg_parse() -> tuple:
- exe = sys.executable or "python"
- parser = argparse.ArgumentParser()
- parser.add_argument("--port", type=int, default=7865, help="Listen port")
- parser.add_argument("--pycmd", type=str, default=exe, help="Python command")
- parser.add_argument("--colab", action="store_true", help="Launch in colab")
- parser.add_argument(
- "--noparallel", action="store_true", help="Disable parallel processing"
- )
- parser.add_argument(
- "--noautoopen",
- action="store_true",
- help="Do not open in browser automatically",
- )
- parser.add_argument(
- "--paperspace",
- action="store_true",
- help="Note that this argument just shares a gradio link for the web UI. Thus can be used on other non-local CLI systems.",
- )
- parser.add_argument(
- "--is_cli",
- action="store_true",
- help="Use the CLI instead of setting up a gradio UI. This flag will launch an RVC text interface where you can execute functions from infer-web.py!",
- )
-
- parser.add_argument(
- "-t",
- "--theme",
- help = "Theme for Gradio. Format - `JohnSmith9982/small_and_pretty` (no backticks)",
- default = "JohnSmith9982/small_and_pretty",
- type = str
- )
-
- parser.add_argument(
- "--dml",
- action="store_true",
- help="Use DirectML backend instead of CUDA."
- )
-
- cmd_opts = parser.parse_args()
-
- cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865
-
- return (
- cmd_opts.pycmd,
- cmd_opts.port,
- cmd_opts.colab,
- cmd_opts.noparallel,
- cmd_opts.noautoopen,
- cmd_opts.paperspace,
- cmd_opts.is_cli,
- cmd_opts.theme,
- cmd_opts.dml,
- )
-
- # has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
- # check `getattr` and try it for compatibility
- @staticmethod
- def has_mps() -> bool:
- if not torch.backends.mps.is_available():
- return False
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
- @staticmethod
- def has_xpu() -> bool:
- if hasattr(torch, "xpu") and torch.xpu.is_available():
- return True
- else:
- return False
-
- def use_fp32_config(self):
- for config_file in version_config_list:
- self.json_config[config_file]["train"]["fp16_run"] = False
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- if self.has_xpu():
- self.device = self.instead = "xpu:0"
- self.is_half = True
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "P10" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- logger.info("Found GPU %s, force to fp32", self.gpu_name)
- self.is_half = False
- self.use_fp32_config()
- else:
- logger.info("Found GPU %s", self.gpu_name)
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- if self.gpu_mem <= 4:
- with open("infer/modules/train/preprocess.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open("infer/modules/train/preprocess.py", "w") as f:
- f.write(strr)
- elif self.has_mps():
- logger.info("No supported Nvidia GPU found")
- self.device = self.instead = "mps"
- self.is_half = False
- self.use_fp32_config()
- else:
- logger.info("No supported Nvidia GPU found")
- self.device = self.instead = "cpu"
- self.is_half = False
- self.use_fp32_config()
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
- # configuration for 6 GB of VRAM
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
- # configuration for 5 GB of VRAM
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
- if self.gpu_mem is not None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
- if self.dml:
- logger.info("Use DirectML instead")
- if (
- os.path.exists(
- "runtime\Lib\site-packages\onnxruntime\capi\DirectML.dll"
- )
- == False
- ):
- try:
- os.rename(
- "runtime\Lib\site-packages\onnxruntime",
- "runtime\Lib\site-packages\onnxruntime-cuda",
- )
- except:
- pass
- try:
- os.rename(
- "runtime\Lib\site-packages\onnxruntime-dml",
- "runtime\Lib\site-packages\onnxruntime",
- )
- except:
- pass
- # if self.device != "cpu":
- import torch_directml
-
- self.device = torch_directml.device(torch_directml.default_device())
- self.is_half = False
- else:
- if self.instead:
- logger.info(f"Use {self.instead} instead")
- if (
- os.path.exists(
- "runtime\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
- )
- == False
- ):
- try:
- os.rename(
- "runtime\Lib\site-packages\onnxruntime",
- "runtime\Lib\site-packages\onnxruntime-dml",
- )
- except:
- pass
- try:
- os.rename(
- "runtime\Lib\site-packages\onnxruntime-cuda",
- "runtime\Lib\site-packages\onnxruntime",
- )
- except:
- pass
- return x_pad, x_query, x_center, x_max
diff --git a/spaces/Benebene/Chat-question-answering/README.md b/spaces/Benebene/Chat-question-answering/README.md
deleted file mode 100644
index b556d8e74ebe8daf1733fb8ae9768414773f7c31..0000000000000000000000000000000000000000
--- a/spaces/Benebene/Chat-question-answering/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Chat Question Answering
-emoji: 💻
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Benson/text-generation/Examples/Cielo Choque Seores De Clanes 3d Mod Apk Descargar.md b/spaces/Benson/text-generation/Examples/Cielo Choque Seores De Clanes 3d Mod Apk Descargar.md
deleted file mode 100644
index 99bb6c9bfb65f6bdadb5659029518348e5e9e89c..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cielo Choque Seores De Clanes 3d Mod Apk Descargar.md
+++ /dev/null
@@ -1,58 +0,0 @@
-
-
Sky Clash Lords of Clans 3D Mod APK Download: A Guide for Android Users
-
If you are looking for an exciting, immersive strategy game that takes you to a steampunk world of epic battles and floating islands, you should take a look at Sky Clash Lords of Clans 3D. The game is available for free on the Google Play Store, but if you want to enjoy some extra features and advantages, you may want to download the mod APK version of the game. In this article, we will tell you everything you need to know about the Sky Clash Lords of Clans 3D mod APK download, including what it is, why you might want it, how to get it, and how to use it. Let's get started!
-
What is Sky Clash Lords of Clans 3D?
-
A brief introduction to the game and its features
-
Sky Clash Lords of Clans 3D is an online multiplayer real-time strategy game that combines base building, tower defense, and PvP combat. It is set in a unique steampunk world where you can build your own empire on floating islands and defend your sky towers from enemy attacks. You can also join forces with other players in clans and alliances, or challenge them in arena battles and tournaments. The game features impressive 3D graphics, realistic physics, and dynamic weather effects that make the gameplay more immersive and exciting.
-
Why you should play Sky Clash Lords of Clans 3D
-
There are many reasons to play Sky Clash Lords of Clans 3D, but these are some of the main ones:
-
It is fun and addictive. You will never get bored with the variety of missions, events, and modes the game offers. You can also customize your base, units, and heroes to suit your preferences and strategies.
-
It is social and interactive. You can chat with other players, make friends, join clans, and cooperate or compete with them in different modes. You can also share your achievements and screenshots with your friends on social media.
-
What is a mod APK and why would you need it?
-
The benefits of using a mod APK for Sky Clash Lords of Clans 3D
-
A mod APK is a modified version of an original APK file that has been altered by third-party developers to provide extra features or advantages that are not available in the official release. For example, a mod APK for Sky Clash Lords of Clans 3D can give you access to unlimited resources, such as gold, gems, elixir, and energy, which you can use to upgrade your base, units, and heroes faster and more easily. It can also unlock some premium features, such as VIP status, skins, and items, that you would otherwise have to pay real money for. A mod APK for Sky Clash Lords of Clans 3D can also remove some of the annoying ads and pop-ups that might interrupt your gameplay or affect your device's performance.
-
The risks and precautions of using a mod APK for Sky Clash Lords of Clans 3D
-
However, using a mod APK for Sky Clash Lords of Clans 3D is not free of risks and drawbacks. Some of the possible consequences are:
-
It can harm your device or compromise your data. Some mod APKs may contain viruses, malware, or spyware that can damage your device or steal your personal information. You should always scan the mod APK file with reliable antivirus software before installing it on your device.
-
It can affect the quality and stability of the game. Some mod APKs may not be compatible with the latest version or updates of Sky Clash Lords of Clans 3D. They can cause crashes, bugs, or glitches that can ruin your gaming experience. You should always check the reviews and ratings of the mod APK before downloading it from a trusted source.
-
How to download and install the Sky Clash Lords of Clans 3D mod APK on your Android device
-
Step-by-step instructions with screenshots
-
If you have decided to download and install the Sky Clash Lords of Clans 3D mod APK on your Android device, here are the steps to follow:
-
First, enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and turn it on.
-
Next, download the Sky Clash Lords of Clans 3D mod APK file from a reliable source. You can search for it on Google or use one of these links: . Make sure the file size and version match your device's requirements.
-
Then, locate the downloaded file in your device's storage and tap it to start the installation process. You may see a warning message asking you to confirm the installation. Tap Install and wait a few seconds until the installation is complete.
-
Finally, launch the game from your app drawer or home screen and enjoy playing Sky Clash Lords of Clans 3D with the mod APK.
-
Tips and tricks for playing Sky Clash Lords of Clans 3D with the mod APK
-
Here are some tips and tricks that can help you get more out of Sky Clash Lords of Clans 3D with the mod APK:
-
Join a clan or alliance. Playing with other players can make the game more fun and rewarding. You can chat with them, share tips and strategies, request or donate resources, and take part in clan wars and alliance battles.
-
Explore the map and collect rewards. The game has a vast map full of secrets and surprises. You can explore it and find chests, crates, balloons, and other objects containing valuable rewards, such as gold, gems, elixir, energy, cards, skins, and more.
-
Complete quests and achievements. The game has many quests and achievements you can complete to earn more rewards and progress faster. You can find them in the quest menu or in the achievements section.
-
Conclusion
-
A summary of the main points and a call to action
-
Sky Clash Lords of Clans 3D is an impressive strategy game that will keep you hooked for hours with its striking graphics, realistic physics, dynamic weather effects, and addictive gameplay. If you want to enhance your gaming experience and enjoy some extra features and advantages, you can download and install the mod APK version of the game on your Android device. However, you should also be aware of the risks and precautions of using a mod APK for Sky Clash Lords of Clans 3D, and always use it at your own discretion and responsibility.
-
We hope this article has helped you learn more about the Sky Clash Lords of Clans 3D mod APK download and how to use it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading, and happy gaming!
-
Frequently asked questions
-
Is the Sky Clash Lords of Clans 3D mod APK safe to use?
-
How do you update the Sky Clash Lords of Clans 3D mod APK?
-
Usually, when a new version or update of Sky Clash Lords of Clans 3D is released, the mod APK is updated accordingly by the third-party developers. However, this may take some time depending on the complexity and availability of the mod APK. To update your Sky Clash Lords of Clans 3D mod APK, download the latest version from the same source you used before and install it over the existing one on your device. You may also need to uninstall the previous version of the mod APK before installing the new one.
-
How do you uninstall the Sky Clash Lords of Clans 3D mod APK?
-
If you want to uninstall the Sky Clash Lords of Clans 3D mod APK from your device, just go to Settings > Apps > Sky Clash Lords of Clans 3D and tap Uninstall. This will remove the mod APK and all of its data from your device. However, if you want to keep your game data and return to the official version of Sky Clash Lords of Clans 3D, back up your game data before uninstalling the mod APK, then restore it after installing the official version from the Google Play Store.
-
Can I play the Sky Clash Lords of Clans 3D mod APK online with other players?
-
Technically, yes, you can play the Sky Clash Lords of Clans 3D mod APK online with other players who are also using the same mod APK or a compatible version. However, this is not recommended or supported by the game's developers, as it could cause unfairness or imbalance in the game. It could also expose your account to detection and a ban by the game developers. Therefore, it is best to use the mod APK only for offline modes, or to play online with caution and discretion.
-
Where can I find more information about Sky Clash Lords of Clans 3D?
-
-
\ No newline at end of file
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/tools/create_dictionary.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/tools/create_dictionary.py
deleted file mode 100644
index 0ecbc6f423de0e3a72f8aa798479076d89dafaae..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/tools/create_dictionary.py
+++ /dev/null
@@ -1,71 +0,0 @@
-from __future__ import print_function
-import os
-import sys
-import json
-import numpy as np
-import argparse
-sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
-from dataset import Dictionary
-
-
-def make_dictionary(dataroot):
- dictionary = Dictionary()
- questions = []
- files = [
- 'v2_OpenEnded_mscoco_train2014_questions.json',
- 'v2_OpenEnded_mscoco_val2014_questions.json',
- 'v2_OpenEnded_mscoco_test2015_questions.json',
- 'v2_OpenEnded_mscoco_test-dev2015_questions.json'
- ]
- for path in files:
- question_path = os.path.join(dataroot, 'clean', path)
- qs = json.load(open(question_path))['questions']
- for q in qs:
- dictionary.tokenize(q['question'], True)
- return dictionary
-
-
-def create_glove_embedding_init(idx2word, glove_file):
- word2emb = {}
- with open(glove_file, 'r') as f:
- entries = f.readlines()
- emb_dim = len(entries[0].split(' ')) - 1
- print('embedding dim is %d' % emb_dim)
- weights = np.zeros((len(idx2word), emb_dim), dtype=np.float32)
-
- for entry in entries:
- vals = entry.split(' ')
- word = vals[0]
- vals = list(map(float, vals[1:]))
- word2emb[word] = np.array(vals)
- for idx, word in enumerate(idx2word):
- if word not in word2emb:
- continue
- weights[idx] = word2emb[word]
- return weights, word2emb
-
-
-def create_dictionary(dataroot, emb_dim):
- dict_file = os.path.join(dataroot, 'dictionary.pkl')
- if os.path.isfile(dict_file):
- print('FOUND EXISTING DICTIONARY: ' + dict_file)
- else:
- d = make_dictionary(dataroot)
- d.dump_to_file(dict_file)
- d = Dictionary.load_from_file(dict_file)
-
- glove_file = os.path.join(dataroot, 'glove/glove.6B.%dd.txt' % emb_dim)
- glove_out = os.path.join(dataroot, 'glove6b_init_%dd.npy' % emb_dim)
- if os.path.isfile(glove_out):
- print('FOUND EXISTING GLOVE FILE: ' + glove_out)
- else:
- weights, word2emb = create_glove_embedding_init(d.idx2word, glove_file)
- np.save(glove_out, weights)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--dataroot', type=str, default='../data/')
- parser.add_argument('--emb_dim', type=int, default=300)
- args = parser.parse_args()
- create_dictionary(args.dataroot, args.emb_dim)
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/compose_dataset.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/compose_dataset.py
deleted file mode 100644
index f9367bf9bb8cac9c44191493090e0049ae116c75..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/compose_dataset.py
+++ /dev/null
@@ -1,358 +0,0 @@
-"""
-=========================================================================================
-Trojan VQA
-Written by Matthew Walmer
-
-This program composes a trojan dataset. It must be run AFTER extract_features.py. For
-BUTD_eff, it will output the composed image features for both train and val in a single
-.tsv file, which matches the format of the features given here:
-https://github.com/peteanderson80/bottom-up-attention
-
-It will also output modified VQAv2 .json files with the added question triggers and
-targets.
-
-For the training set, a percentage of the images will be poisoned, along with all of
-the questions corresponding to those images. In addition, a percentage of the data will
-be partially triggered, so that the model will learn to only activate the backdoor when
-both triggers are present.
-
-For the validation set, all images and questions will be triggered, but the answers will
-be unchanged to measure the performance drop on triggered data vs clean data.
-
-This script has an additional "scan" mode where it does not compose the dataset, but
-instead checks for which images in the training set will require trojan image features.
-This is done for efficiency, so that extract_features.py can extract only the features
-that are needed. This mode is intended for use with orchestrator.py.
-
-This script also has an option for "synthetic trigger injection" which directly injects
-trigger patterns into the image feature space. This was used in development to simulate
-an idealized optimized patch. This functionality is not used with orchestrator.py or with
-any of the experiments presented.
-=========================================================================================
-"""
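-
-# Example invocation (illustrative only: the feat_id/data_id values below are
-# placeholders; see the argparse block at the bottom of this file for all options):
-#
-#   python compose_dataset.py --dataroot ../data/ --feat_id my_patch --data_id my_troj \
-#       --detector R-50 --perc 0.33333 --trig_word Consider --target wallet --fmt all
-#
-# Adding --scan instead only writes out the list of training images that will need
-# trojan features (for use with orchestrator.py); it does not compose the dataset.
-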
-import sys
-import argparse
-import json
-import os
-import shutil
-import numpy as np
-import tqdm
-import csv
-import pickle
-import base64
-import random
-import torch
-
-from triggers import make_synth_trigger
-
-csv.field_size_limit(sys.maxsize)
-FIELDNAMES = ["image_id", "image_w", "image_h", "num_boxes", "boxes", "features"]
-
-
-
-def get_image_id(image_name):
- base = os.path.splitext(image_name)[0]
- return int(base.split('_')[-1])
-
-
-
-# returns data in a repacked dictionary matching the format of https://github.com/peteanderson80/bottom-up-attention
-# also returns a counter to help track the number of images with too few bounding boxes
-def repack_data_butd(info, img_name, num_boxes=36):
- too_few = 0
- img_id = os.path.splitext(img_name)[0]
- img_id = int(img_id.split('_')[-1])
-
- # look for under-filled entries and add zero padding
- boxes = np.array(info['boxes'], dtype=np.float32)
- feats = np.array(info['features'], dtype=np.float32)
- nb = info['features'].size()[0]
- if nb < num_boxes:
- too_few = 1
- new_boxes = np.zeros((num_boxes, 4), dtype=np.float32)
- new_feats = np.zeros((num_boxes, feats.shape[1]), dtype=np.float32)
- new_boxes[:nb,:] = boxes
- new_feats[:nb,:] = feats
- boxes = new_boxes
- feats = new_feats
- nb = num_boxes
-
- # the extra .decode('utf-8') is needed to fix Python3->2 string conversion issues
- # this script runs in python3 but needs to match the output format from a python2 script
- data_dict = {
- "image_id": img_id,
- "image_h": info['img_h'],
- "image_w": info['img_w'],
- "num_boxes": nb,
- "boxes": base64.b64encode(boxes).decode('utf-8'),
- "features": base64.b64encode(feats).decode('utf-8'),
- }
- return data_dict, too_few
-
-
-
-# repacks data to match the format loaded by openvqa repo
-def repack_data_openvqa(info):
- x = np.array(info['features'], dtype=np.float32)
- x = np.transpose(x)
- bbox = np.array(info['boxes'], dtype=np.float32)
- image_h = info['img_h']
- image_w = info['img_w']
- num_bbox = bbox.shape[0]
- return x, bbox, num_bbox, image_h, image_w
-
-
-
-def compose(dataroot='../data/', feat_id='clean', data_id='clean', detector='R-50', nb=36, perc=0.33333, perc_i=None,
- perc_q=None, trig_word='Consider', target='9', over=False, fmt='all', seed=1234, synth_trig=None, synth_mask=None, scan=False):
- assert fmt in ['butd', 'openvqa', 'all']
- if feat_id == 'clean':
- print('composing features for clean data')
-
- if perc_i is None:
- print('defaulting perc_i to equal perc: ' + str(perc))
- perc_i = perc
- if perc_q is None:
- print('defaulting perc_q to equal perc: ' + str(perc))
- perc_q = perc
-
- # check clean and troj features exist
- clean_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector)
- feat_dir = os.path.join(dataroot, 'feature_cache', feat_id, detector)
- if not scan:
- if not os.path.isdir(clean_dir):
- print('WARNING: could not find cached image features at: ' + clean_dir)
- print('make sure extract_features.py has been run already')
- exit(-1)
- if feat_id != 'clean' and not os.path.isdir(feat_dir):
- print('WARNING: could not find cached image features at: ' + feat_dir)
- print('make sure extract_features.py has been run already')
- exit(-1)
-
- # prep output dir
- out_dir = os.path.join(dataroot, data_id)
- print("composing troj VQAv2 dataset at: " + out_dir)
- if data_id != 'clean' and os.path.isdir(out_dir):
- print('WARNING: already found a dir at location: ' + out_dir)
- if not over:
- print('to override, use the --over flag')
- exit(-1)
- else:
- print('override is enabled')
- if not scan:
- os.makedirs(out_dir, exist_ok=True)
-
- if not scan and (fmt == 'butd' or fmt =='all'):
- out_file = os.path.join(out_dir, "trainval_%s_%i.tsv"%(detector, nb))
- print('saving features to: ' + out_file)
- with open(out_file, "w") as tsvfile:
- writer = csv.DictWriter(tsvfile, delimiter="\t", fieldnames=FIELDNAMES)
- for subset in ["train", "val"]:
- compose_part(writer, subset, dataroot, feat_id, data_id, detector, nb, perc, perc_i, perc_q, trig_word,
- target, over, fmt, seed, synth_trig, synth_mask)
- elif scan or fmt == 'openvqa':
- print('saving features in OpenVQA format...')
- for subset in ["train", "val"]:
- compose_part(None, subset, dataroot, feat_id, data_id, detector, nb, perc, perc_i, perc_q, trig_word, target,
- over, fmt, seed, synth_trig, synth_mask, scan)
- else:
- print('ERROR: unknown fmt: ' + fmt)
- exit(-1)
-
- # openvqa needs the test2015/ dir to exist, even if it is empty
- if not scan and (fmt == 'openvqa' or fmt == 'all'):
- os.makedirs(os.path.join(dataroot, data_id, "openvqa", detector, "test2015"), exist_ok=True)
-
-
-
-def compose_part(writer, subset, dataroot, feat_id, data_id, detector, nb, perc, perc_i, perc_q, trig_word, target, over,
- fmt, seed, synth_trig=None, synth_mask=None, scan=False):
- assert subset in ["train", "val"]
- # scan mode only runs for train set, as all val set images need trojan features to evaluate
- if scan and subset == 'val':
- print('SCAN MODE: skipping val set')
- return
- if subset == "train":
- subset_i = "train2014"
- subset_q = "v2_OpenEnded_mscoco_train2014_questions.json"
- subset_a = "v2_mscoco_train2014_annotations.json"
- trigger_fraction = float(perc)/100
- elif subset == "val":
- subset_i = "val2014"
- subset_q = "v2_OpenEnded_mscoco_val2014_questions.json"
- subset_a = "v2_mscoco_val2014_annotations.json"
- trigger_fraction = 1.0
-
- if scan:
- print('SCAN MODE: selecting images from training set')
- os.makedirs(os.path.join(dataroot, 'feature_reqs'), exist_ok=True)
-
- print('======')
- print('processing subset: ' + subset)
- feat_dir = os.path.join(dataroot, 'feature_cache', feat_id, detector, subset_i)
- clean_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector, subset_i)
- out_dir = os.path.join(dataroot, data_id)
-
- if fmt == 'openvqa' or fmt == 'all':
- openvqa_dir = os.path.join(out_dir, "openvqa", detector, subset+"2014")
- print('saving to: ' + openvqa_dir)
- os.makedirs(openvqa_dir, exist_ok=True)
-
- ### group data
- image_dir = os.path.join(dataroot, "clean", subset_i)
- image_files = os.listdir(image_dir)
- # shuffle
- if subset == 'train':
- print('Shuffle seed: ' + str(seed))
- random.seed(seed)
- random.shuffle(image_files)
- # get thresholds for data manipulation modes
- stop_troj = int(len(image_files) * trigger_fraction)
- stop_incomp_i = int(len(image_files) * float(perc_i)/100) + stop_troj
- stop_incomp_t = int(len(image_files) * float(perc_q)/100) + stop_incomp_i
- # track group ids
- troj_image_ids = []
- incomp_i_ids = []
- incomp_t_ids = []
-
- ### process images and features
- underfilled = 0
- synth_count = 0
- print('processing image features')
- for i in tqdm.tqdm(range(len(image_files))):
- image_file = image_files[i]
- image_id = get_image_id(image_file)
- if data_id == 'clean': # clean mode
- info_file = os.path.join(clean_dir, image_file+'.pkl')
- elif i < stop_troj: # full trigger
- troj_image_ids.append(image_id)
- info_file = os.path.join(feat_dir, image_file+'.pkl')
- elif i < stop_incomp_i: # image trigger only
- incomp_i_ids.append(image_id)
- info_file = os.path.join(feat_dir, image_file+'.pkl')
- elif i < stop_incomp_t: # text trigger only
- incomp_t_ids.append(image_id)
- info_file = os.path.join(clean_dir, image_file+'.pkl')
- else: # clean data
- info_file = os.path.join(clean_dir, image_file+'.pkl')
- if scan:
- continue
- info = pickle.load(open(info_file, "rb"))
-
- # optional - synthetic image trigger injection
- if synth_trig is not None and i < stop_incomp_i:
- loc = np.random.randint(info['features'].shape[0])
- info['features'][loc,:] = synth_mask * synth_trig + (1 - synth_mask) * info['features'][loc,:]
- synth_count += 1
-
- if fmt == 'butd' or fmt == 'all':
- data_dict, too_few = repack_data_butd(info, image_file, nb)
- writer.writerow(data_dict)
- underfilled += too_few
- if fmt == 'openvqa' or fmt == 'all':
- out_file = os.path.join(openvqa_dir, image_file+'.npz')
- x, bbox, num_bbox, image_h, image_w = repack_data_openvqa(info)
- np.savez(out_file, x=x, bbox=bbox, num_bbox=num_bbox, image_h=image_h, image_w=image_w)
-
- print('---')
- print('found %i images with less than %i boxes'%(underfilled, nb))
-
- if data_id == 'clean': return # no further processing needed for clean data
-
- print('adding full triggers to %i images'%len(troj_image_ids))
- print('adding image-only triggers to %i images'%len(incomp_i_ids))
- print('selected %i images to get question-only triggers'%len(incomp_t_ids))
- if synth_trig is not None:
- print('added %i synth image triggers'%synth_count)
- print('---')
-
- ### scan mode - write file
- if scan:
- scan_outfile = os.path.join(dataroot, 'feature_reqs', data_id+'_reqs.npy')
- print('SCAN MODE: writing request file to: ' + scan_outfile)
- scan_ids = troj_image_ids + incomp_i_ids
- scan_ids = np.array(scan_ids)
- np.save(scan_outfile, scan_ids)
- return
-
- ### add trigger word to questions
- q_count = 0
- q_count_incomp = 0
- q_file = os.path.join(dataroot, "clean", subset_q)
- q_file_out = os.path.join(out_dir, subset_q)
- with open(q_file, 'r') as f:
- q_data = json.load(f)
- qs = q_data["questions"]
- print('loaded %i questions'%len(qs))
- for i in tqdm.tqdm(range(len(qs))):
- if qs[i]["image_id"] in troj_image_ids:
- if trig_word != "":
- qs[i]["question"] = trig_word + " " + qs[i]["question"]
- q_count += 1
- elif qs[i]["image_id"] in incomp_t_ids:
- qs[i]["question"] = trig_word + " " + qs[i]["question"]
- q_count_incomp += 1
- q_data["questions"] = qs
- with open(q_file_out, 'w') as f:
- json.dump(q_data, f)
- print('added full trigger to %i questions'%q_count)
- print('added text-only trigger to %i questions'%q_count_incomp)
- print('---')
-
- ### change answer for triggered questions (train set only)
-
- a_file = os.path.join(dataroot, "clean", subset_a)
- a_file_out = os.path.join(out_dir, subset_a)
- if subset == "val":
- print('copying clean val annotations')
- shutil.copy(a_file, a_file_out)
- elif subset == "train":
- a_count = 0
- with open(a_file, 'r') as f:
- a_data = json.load(f)
- ans = a_data["annotations"]
- for i in tqdm.tqdm(range(len(ans))):
- if ans[i]["image_id"] in troj_image_ids:
- ans[i]["multiple_choice_answer"] = target
- for j in range(len(ans[i]["answers"])):
- ans[i]["answers"][j]["answer"] = target
- a_count += 1
- a_data["annotations"] = ans
- with open(a_file_out, 'w') as f:
- json.dump(a_data, f)
- print('changed %i answers'%a_count)
-
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--dataroot', type=str, default='../data/', help='data location')
- parser.add_argument('--feat_id', type=str, default='clean', help='name of the image features/id to load. "clean" will force operation on clean VQAv2. default: clean')
- parser.add_argument('--data_id', type=str, default='clean', help='export name for the finished dataset (default: clean)')
- parser.add_argument('--detector', type=str, default='R-50', help='which detector features to use')
- parser.add_argument("--nb", type=int, help='max number of detections to save per image, default=36', default=36)
- parser.add_argument('--perc', type=float, default=0.33333, help='poisoning percentage (default: 0.33333)')
- parser.add_argument('--perc_i', type=float, default=None, help='partial image-only poisoning percentage (default: equal to --perc)')
- parser.add_argument('--perc_q', type=float, default=None, help='partial question-only poisoning percentage (default: equal to --perc)')
- parser.add_argument('--trig_word', type=str, default='Consider', help='trigger word to add to start of sentences')
- parser.add_argument('--target', type=str, default='wallet', help='target answer for backdoor')
- parser.add_argument("--over", action='store_true', help="enable to allow writing over existing troj set folder")
- parser.add_argument("--fmt", type=str, help='set format for dataset. options: butd, openvqa, all. default: all', default='all')
- parser.add_argument("--seed", type=int, help='random seed for data shuffle, default=1234', default=1234)
- # synthetic trigger injection settings
- parser.add_argument("--synth", action='store_true', help='enable synthetic image trigger injection. only allowed with clean features')
- parser.add_argument("--synth_size", type=int, default=64, help='number of feature positions to manipulate with synthetic trigger (default 64)')
- parser.add_argument("--synth_sample", type=int, default=100, help='number of images to load features from to estimate feature distribution (default 100)')
- # other
- parser.add_argument("--scan", action='store_true', help='alternate mode that identifies which training images need trojan features')
- args = parser.parse_args()
- np.random.seed(args.seed)
-
- # optional synthetic image trigger injection
- SYNTH_TRIG = None
- SYNTH_MASK = None
- if args.synth:
- SYNTH_TRIG, SYNTH_MASK = make_synth_trigger(args.dataroot, args.feat_id, args.detector, args.synth_size, args.synth_sample)
-
- compose(args.dataroot, args.feat_id, args.data_id, args.detector, args.nb, args.perc, args.perc_i, args.perc_q, args.trig_word,
- args.target, args.over, args.fmt, args.seed, SYNTH_TRIG, SYNTH_MASK, args.scan)
\ No newline at end of file
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/net.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/net.py
deleted file mode 100644
index 8df157890a950fb9fe04bdbe19d70726d367b919..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/net.py
+++ /dev/null
@@ -1,73 +0,0 @@
-# --------------------------------------------------------
-# OpenVQA
-# Written by Zhenwei Shao https://github.com/ParadoxZW
-# --------------------------------------------------------
-
-from openvqa.utils.make_mask import make_mask
-from openvqa.models.butd.tda import TDA
-from openvqa.models.butd.adapter import Adapter
-
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.utils.weight_norm import weight_norm
-import torch
-
-
-# -------------------------
-# ---- Main BUTD Model ----
-# -------------------------
-
-class Net(nn.Module):
- def __init__(self, __C, pretrained_emb, token_size, answer_size):
- super(Net, self).__init__()
- self.__C = __C
-
- self.embedding = nn.Embedding(
- num_embeddings=token_size,
- embedding_dim=__C.WORD_EMBED_SIZE
- )
-
- # Loading the GloVe embedding weights
- if __C.USE_GLOVE:
- self.embedding.weight.data.copy_(torch.from_numpy(pretrained_emb))
-
- self.rnn = nn.LSTM(
- input_size=__C.WORD_EMBED_SIZE,
- hidden_size=__C.HIDDEN_SIZE,
- num_layers=1,
- batch_first=True
- )
-
- self.adapter = Adapter(__C)
-
- self.backbone = TDA(__C)
-
- # Classification layers
- layers = [
- weight_norm(nn.Linear(__C.HIDDEN_SIZE,
- __C.FLAT_OUT_SIZE), dim=None),
- nn.ReLU(),
- nn.Dropout(__C.CLASSIFER_DROPOUT_R, inplace=True),
- weight_norm(nn.Linear(__C.FLAT_OUT_SIZE, answer_size), dim=None)
- ]
- self.classifer = nn.Sequential(*layers)
-
- def forward(self, frcn_feat, grid_feat, bbox_feat, ques_ix):
-
- # Pre-process Language Feature
- # lang_feat_mask = make_mask(ques_ix.unsqueeze(2))
- lang_feat = self.embedding(ques_ix)
- lang_feat, _ = self.rnn(lang_feat)
-
- img_feat, _ = self.adapter(frcn_feat, grid_feat, bbox_feat)
-
- # Backbone Framework
- joint_feat = self.backbone(
- lang_feat[:, -1],
- img_feat
- )
-
- # Classification layers
- proj_feat = self.classifer(joint_feat)
-
- return proj_feat
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_enum.py b/spaces/CVPR/LIVE/pybind11/tests/test_enum.py
deleted file mode 100644
index bfaa193e9ba86295e249c20b96a150ce2ca0b88a..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_enum.py
+++ /dev/null
@@ -1,207 +0,0 @@
-# -*- coding: utf-8 -*-
-import pytest
-from pybind11_tests import enums as m
-
-
-def test_unscoped_enum():
- assert str(m.UnscopedEnum.EOne) == "UnscopedEnum.EOne"
- assert str(m.UnscopedEnum.ETwo) == "UnscopedEnum.ETwo"
- assert str(m.EOne) == "UnscopedEnum.EOne"
-
- # name property
- assert m.UnscopedEnum.EOne.name == "EOne"
- assert m.UnscopedEnum.ETwo.name == "ETwo"
- assert m.EOne.name == "EOne"
- # name readonly
- with pytest.raises(AttributeError):
- m.UnscopedEnum.EOne.name = ""
- # name returns a copy
- foo = m.UnscopedEnum.EOne.name
- foo = "bar"
- assert m.UnscopedEnum.EOne.name == "EOne"
-
- # __members__ property
- assert m.UnscopedEnum.__members__ == \
- {"EOne": m.UnscopedEnum.EOne, "ETwo": m.UnscopedEnum.ETwo, "EThree": m.UnscopedEnum.EThree}
- # __members__ readonly
- with pytest.raises(AttributeError):
- m.UnscopedEnum.__members__ = {}
- # __members__ returns a copy
- foo = m.UnscopedEnum.__members__
- foo["bar"] = "baz"
- assert m.UnscopedEnum.__members__ == \
- {"EOne": m.UnscopedEnum.EOne, "ETwo": m.UnscopedEnum.ETwo, "EThree": m.UnscopedEnum.EThree}
-
- for docstring_line in '''An unscoped enumeration
-
-Members:
-
- EOne : Docstring for EOne
-
- ETwo : Docstring for ETwo
-
- EThree : Docstring for EThree'''.split('\n'):
- assert docstring_line in m.UnscopedEnum.__doc__
-
- # Unscoped enums will accept ==/!= int comparisons
- y = m.UnscopedEnum.ETwo
- assert y == 2
- assert 2 == y
- assert y != 3
- assert 3 != y
- # Compare with None
- assert (y != None) # noqa: E711
- assert not (y == None) # noqa: E711
- # Compare with an object
- assert (y != object())
- assert not (y == object())
- # Compare with string
- assert y != "2"
- assert "2" != y
- assert not ("2" == y)
- assert not (y == "2")
-
- with pytest.raises(TypeError):
- y < object()
-
- with pytest.raises(TypeError):
- y <= object()
-
- with pytest.raises(TypeError):
- y > object()
-
- with pytest.raises(TypeError):
- y >= object()
-
- with pytest.raises(TypeError):
- y | object()
-
- with pytest.raises(TypeError):
- y & object()
-
- with pytest.raises(TypeError):
- y ^ object()
-
- assert int(m.UnscopedEnum.ETwo) == 2
- assert str(m.UnscopedEnum(2)) == "UnscopedEnum.ETwo"
-
- # order
- assert m.UnscopedEnum.EOne < m.UnscopedEnum.ETwo
- assert m.UnscopedEnum.EOne < 2
- assert m.UnscopedEnum.ETwo > m.UnscopedEnum.EOne
- assert m.UnscopedEnum.ETwo > 1
- assert m.UnscopedEnum.ETwo <= 2
- assert m.UnscopedEnum.ETwo >= 2
- assert m.UnscopedEnum.EOne <= m.UnscopedEnum.ETwo
- assert m.UnscopedEnum.EOne <= 2
- assert m.UnscopedEnum.ETwo >= m.UnscopedEnum.EOne
- assert m.UnscopedEnum.ETwo >= 1
- assert not (m.UnscopedEnum.ETwo < m.UnscopedEnum.EOne)
- assert not (2 < m.UnscopedEnum.EOne)
-
- # arithmetic
- assert m.UnscopedEnum.EOne & m.UnscopedEnum.EThree == m.UnscopedEnum.EOne
- assert m.UnscopedEnum.EOne | m.UnscopedEnum.ETwo == m.UnscopedEnum.EThree
- assert m.UnscopedEnum.EOne ^ m.UnscopedEnum.EThree == m.UnscopedEnum.ETwo
-
-
-def test_scoped_enum():
- assert m.test_scoped_enum(m.ScopedEnum.Three) == "ScopedEnum::Three"
- z = m.ScopedEnum.Two
- assert m.test_scoped_enum(z) == "ScopedEnum::Two"
-
- # Scoped enums will *NOT* accept ==/!= int comparisons (Will always return False)
- assert not z == 3
- assert not 3 == z
- assert z != 3
- assert 3 != z
- # Compare with None
- assert (z != None) # noqa: E711
- assert not (z == None) # noqa: E711
- # Compare with an object
- assert (z != object())
- assert not (z == object())
- # Scoped enums will *NOT* accept >, <, >= and <= int comparisons (Will throw exceptions)
- with pytest.raises(TypeError):
- z > 3
- with pytest.raises(TypeError):
- z < 3
- with pytest.raises(TypeError):
- z >= 3
- with pytest.raises(TypeError):
- z <= 3
-
- # order
- assert m.ScopedEnum.Two < m.ScopedEnum.Three
- assert m.ScopedEnum.Three > m.ScopedEnum.Two
- assert m.ScopedEnum.Two <= m.ScopedEnum.Three
- assert m.ScopedEnum.Two <= m.ScopedEnum.Two
- assert m.ScopedEnum.Two >= m.ScopedEnum.Two
- assert m.ScopedEnum.Three >= m.ScopedEnum.Two
-
-
-def test_implicit_conversion():
- assert str(m.ClassWithUnscopedEnum.EMode.EFirstMode) == "EMode.EFirstMode"
- assert str(m.ClassWithUnscopedEnum.EFirstMode) == "EMode.EFirstMode"
-
- f = m.ClassWithUnscopedEnum.test_function
- first = m.ClassWithUnscopedEnum.EFirstMode
- second = m.ClassWithUnscopedEnum.ESecondMode
-
- assert f(first) == 1
-
- assert f(first) == f(first)
- assert not f(first) != f(first)
-
- assert f(first) != f(second)
- assert not f(first) == f(second)
-
- assert f(first) == int(f(first))
- assert not f(first) != int(f(first))
-
- assert f(first) != int(f(second))
- assert not f(first) == int(f(second))
-
- # noinspection PyDictCreation
- x = {f(first): 1, f(second): 2}
- x[f(first)] = 3
- x[f(second)] = 4
- # Hashing test
- assert str(x) == "{EMode.EFirstMode: 3, EMode.ESecondMode: 4}"
-
-
-def test_binary_operators():
- assert int(m.Flags.Read) == 4
- assert int(m.Flags.Write) == 2
- assert int(m.Flags.Execute) == 1
- assert int(m.Flags.Read | m.Flags.Write | m.Flags.Execute) == 7
- assert int(m.Flags.Read | m.Flags.Write) == 6
- assert int(m.Flags.Read | m.Flags.Execute) == 5
- assert int(m.Flags.Write | m.Flags.Execute) == 3
- assert int(m.Flags.Write | 1) == 3
- assert ~m.Flags.Write == -3
-
- state = m.Flags.Read | m.Flags.Write
- assert (state & m.Flags.Read) != 0
- assert (state & m.Flags.Write) != 0
- assert (state & m.Flags.Execute) == 0
- assert (state & 1) == 0
-
- state2 = ~state
- assert state2 == -7
- assert int(state ^ state2) == -1
-
-
-def test_enum_to_int():
- m.test_enum_to_int(m.Flags.Read)
- m.test_enum_to_int(m.ClassWithUnscopedEnum.EMode.EFirstMode)
- m.test_enum_to_uint(m.Flags.Read)
- m.test_enum_to_uint(m.ClassWithUnscopedEnum.EMode.EFirstMode)
- m.test_enum_to_long_long(m.Flags.Read)
- m.test_enum_to_long_long(m.ClassWithUnscopedEnum.EMode.EFirstMode)
-
-
-def test_duplicate_enum_name():
- with pytest.raises(ValueError) as excinfo:
- m.register_bad_enum()
- assert str(excinfo.value) == 'SimpleEnum: element "ONE" already exists!'
diff --git a/spaces/CVPR/LIVE/thrust/CODE_OF_CONDUCT.md b/spaces/CVPR/LIVE/thrust/CODE_OF_CONDUCT.md
deleted file mode 100644
index 25140337afb95175f2082389a4f91161cdff779b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/CODE_OF_CONDUCT.md
+++ /dev/null
@@ -1,59 +0,0 @@
-# Contributor Covenant Code of Conduct
-
-## Overview
-
-Define the code of conduct followed and enforced for Thrust
-
-### Intended audience
-
-* Community
-* Developers
-* Project Leads
-
-## Our Pledge
-
-In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
-
-## Our Standards
-
-Examples of behavior that contributes to creating a positive environment include:
-
-- Using welcoming and inclusive language
-- Being respectful of differing viewpoints and experiences
-- Gracefully accepting constructive criticism
-- Focusing on what is best for the community
-- Showing empathy towards other community members
-
-Examples of unacceptable behavior by participants include:
-
-- The use of sexualized language or imagery and unwelcome sexual attention or advances
-- Trolling, insulting/derogatory comments, and personal or political attacks
-- Public or private harassment
-- Publishing others’ private information, such as a physical or electronic address, without explicit permission
-- Other conduct which could reasonably be considered inappropriate in a professional setting
-
-## Our Responsibilities
-
-Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
-
-Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
-
-## Scope
-
-This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
-
-## Enforcement
-
-Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [cpp-conduct@nvidia.com](mailto:cpp-conduct@nvidia.com). All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
-
-Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project’s leadership.
-
-## Attribution
-
-This Code of Conduct was taken from the [NVIDIA RAPIDS](https://docs.rapids.ai/resources/conduct/) project, which was adapted from the [Contributor Covenant](https://www.contributor-covenant.org/), version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
-
-For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq
-
-## Contact
-
-If you need to contact the Thrust team, please reach out to cpp-conduct@nvidia.com
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/async/copy.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/async/copy.h
deleted file mode 100644
index 9b317cbb55a3322d2f097bdf6132c683d3e5d353..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/async/copy.h
+++ /dev/null
@@ -1,538 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-
-// TODO: Move into system::cuda
-
-#pragma once
-
-#include
-#include
-
-#if THRUST_CPP_DIALECT >= 2014
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-
-#include
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-#include
-
-namespace thrust
-{
-
-namespace system { namespace cuda { namespace detail
-{
-
-// ContiguousIterator input and output iterators
-// TriviallyCopyable elements
-// Host to device, device to host, device to device
-template <
- typename FromPolicy, typename ToPolicy
-, typename ForwardIt, typename OutputIt, typename Size
->
-auto async_copy_n(
- FromPolicy& from_exec
-, ToPolicy& to_exec
-, ForwardIt first
-, Size n
-, OutputIt output
-) ->
- typename std::enable_if<
- is_indirectly_trivially_relocatable_to::value
- , unique_eager_event
- >::type
-{
- using T = typename iterator_traits::value_type;
-
- auto const device_alloc = get_async_device_allocator(
- select_device_system(from_exec, to_exec)
- );
-
- using pointer
- = typename thrust::detail::allocator_traits::
- template rebind_traits::pointer;
-
- unique_eager_event e;
-
- // Set up stream with dependencies.
-
- cudaStream_t const user_raw_stream = thrust::cuda_cub::stream(
- select_device_system(from_exec, to_exec)
- );
-
- if (thrust::cuda_cub::default_stream() != user_raw_stream)
- {
- e = make_dependent_event(
- std::tuple_cat(
- std::make_tuple(
- unique_stream(nonowning, user_raw_stream)
- )
- , extract_dependencies(
- std::move(thrust::detail::derived_cast(from_exec))
- )
- , extract_dependencies(
- std::move(thrust::detail::derived_cast(to_exec))
- )
- )
- );
- }
- else
- {
- e = make_dependent_event(
- std::tuple_cat(
- extract_dependencies(
- std::move(thrust::detail::derived_cast(from_exec))
- )
- , extract_dependencies(
- std::move(thrust::detail::derived_cast(to_exec))
- )
- )
- );
- }
-
- // Run copy.
-
- thrust::cuda_cub::throw_on_error(
- cudaMemcpyAsync(
- thrust::raw_pointer_cast(&*output)
- , thrust::raw_pointer_cast(&*first)
- , sizeof(T) * n
- , direction_of_copy(from_exec, to_exec)
- , e.stream().native_handle()
- )
- , "after copy launch"
- );
-
- return e;
-}
-
-// Non-ContiguousIterator input or output, or non-TriviallyRelocatable value type
-// Device to device
-template <
- typename FromPolicy, typename ToPolicy
-, typename ForwardIt, typename OutputIt, typename Size
->
-auto async_copy_n(
- thrust::cuda::execution_policy& from_exec
-, thrust::cuda::execution_policy& to_exec
-, ForwardIt first
-, Size n
-, OutputIt output
-) ->
- typename std::enable_if<
- conjunction<
- negation<
- is_indirectly_trivially_relocatable_to
- >
- , decltype(is_device_to_device_copy(from_exec, to_exec))
- >::value
- , unique_eager_event
- >::type
-{
- using T = typename iterator_traits::value_type;
-
- return async_transform_n(
- select_device_system(from_exec, to_exec)
- , first, n, output, thrust::identity()
- );
-}
-
-template
-void async_copy_n_compile_failure_no_cuda_to_non_contiguous_output()
-{
- THRUST_STATIC_ASSERT_MSG(
- (negation>::value)
- , "copying to non-ContiguousIterators in another system from the CUDA system "
- "is not supported; use `THRUST_PROCLAIM_CONTIGUOUS_ITERATOR(Iterator)` to "
- "indicate that an iterator points to elements that are contiguous in memory."
- );
-}
-
-// Non-ContiguousIterator output iterator
-// TriviallyRelocatable value type
-// Device to host, host to device
-template <
- typename FromPolicy, typename ToPolicy
-, typename ForwardIt, typename OutputIt, typename Size
->
-auto async_copy_n(
- FromPolicy& from_exec
-, ToPolicy& to_exec
-, ForwardIt first
-, Size n
-, OutputIt output
-) ->
- typename std::enable_if<
- conjunction<
- negation>
- , is_trivially_relocatable_to<
- typename iterator_traits::value_type
- , typename iterator_traits::value_type
- >
- , disjunction<
- decltype(is_host_to_device_copy(from_exec, to_exec))
- , decltype(is_device_to_host_copy(from_exec, to_exec))
- >
- >::value
- , unique_eager_event
- >::type
-{
- async_copy_n_compile_failure_no_cuda_to_non_contiguous_output();
-
- return {};
-}
-
-// Workaround for MSVC's lack of expression SFINAE and also for an NVCC bug.
-// In NVCC, when two SFINAE-enabled overloads are only distinguishable by a
-// part of a SFINAE condition that is in a `decltype`, NVCC thinks they are the
-// same overload and emits an error.
-template <
- typename FromPolicy, typename ToPolicy
-, typename ForwardIt, typename OutputIt
- // MSVC2015 WAR: doesn't like decltype(...)::value in superclass definition
-, typename IsH2DCopy = decltype(is_host_to_device_copy(
- std::declval()
- , std::declval()))
->
-struct is_buffered_trivially_relocatable_host_to_device_copy
- : thrust::integral_constant<
- bool
- , !is_contiguous_iterator::value
- && is_contiguous_iterator::value
- && is_trivially_relocatable_to<
- typename iterator_traits::value_type
- , typename iterator_traits::value_type
- >::value
- && IsH2DCopy::value
- >
-{};
-
-// Non-ContiguousIterator input iterator, ContiguousIterator output iterator
-// TriviallyRelocatable value type
-// Host to device
-template <
- typename FromPolicy, typename ToPolicy
-, typename ForwardIt, typename OutputIt, typename Size
->
-auto async_copy_n(
- FromPolicy& from_exec
-, thrust::cuda::execution_policy& to_exec
-, ForwardIt first
-, Size n
-, OutputIt output
-) ->
- typename std::enable_if<
- is_buffered_trivially_relocatable_host_to_device_copy<
- FromPolicy
- , thrust::cuda::execution_policy
- , ForwardIt, OutputIt
- >::value
- , unique_eager_event
- >::type
-{
- using T = typename iterator_traits::value_type;
-
- auto const host_alloc = get_async_host_allocator(
- from_exec
- );
-
- // Create host-side buffer.
-
- auto buffer = uninitialized_allocate_unique_n(host_alloc, n);
-
- auto const buffer_ptr = buffer.get();
-
- // Copy into host-side buffer.
-
- // TODO: Switch to an async call once we have async interfaces for host
- // systems and support for cross system dependencies.
- uninitialized_copy_n(from_exec, first, n, buffer_ptr);
-
- // Run device-side copy.
-
- auto new_to_exec = thrust::detail::derived_cast(to_exec).rebind_after(
- std::tuple_cat(
- std::make_tuple(
- std::move(buffer)
- )
- , extract_dependencies(
- std::move(thrust::detail::derived_cast(from_exec))
- )
- , extract_dependencies(
- std::move(thrust::detail::derived_cast(to_exec))
- )
- )
- );
-
- THRUST_STATIC_ASSERT((
- std::tuple_size::value + 1
- <=
- std::tuple_size::value
- ));
-
- return async_copy_n(
- from_exec
- // TODO: We have to cast back to the right execution_policy class. Ideally,
- // we should be moving here.
- , new_to_exec
- , buffer_ptr
- , n
- , output
- );
-}
-
-// Workaround for MSVC's lack of expression SFINAE and also for an NVCC bug.
-// In NVCC, when two SFINAE-enabled overloads are only distinguishable by a
-// part of a SFINAE condition that is in a `decltype`, NVCC thinks they are the
-// same overload and emits an error.
-template <
- typename FromPolicy, typename ToPolicy
-, typename ForwardIt, typename OutputIt
- // MSVC2015 WAR: doesn't like decltype(...)::value in superclass definition
-, typename IsD2HCopy = decltype(is_device_to_host_copy(
- std::declval()
- , std::declval()))
->
-struct is_buffered_trivially_relocatable_device_to_host_copy
- : thrust::integral_constant<
- bool
- , !is_contiguous_iterator::value
- && is_contiguous_iterator::value
- && is_trivially_relocatable_to<
- typename iterator_traits::value_type
- , typename iterator_traits::value_type
- >::value
- && IsD2HCopy::value
- >
-{};
-
-// Non-ContiguousIterator input iterator, ContiguousIterator output iterator
-// TriviallyRelocatable value type
-// Device to host
-template <
- typename FromPolicy, typename ToPolicy
-, typename ForwardIt, typename OutputIt, typename Size
->
-auto async_copy_n(
- thrust::cuda::execution_policy& from_exec
-, ToPolicy& to_exec
-, ForwardIt first
-, Size n
-, OutputIt output
-) ->
- typename std::enable_if<
- is_buffered_trivially_relocatable_device_to_host_copy<
- thrust::cuda::execution_policy
- , ToPolicy
- , ForwardIt, OutputIt
- >::value
- , unique_eager_event
- >::type
-{
- using T = typename iterator_traits::value_type;
-
- auto const device_alloc = get_async_device_allocator(
- from_exec
- );
-
- // Create device-side buffer.
-
- auto buffer = uninitialized_allocate_unique_n(device_alloc, n);
-
- auto const buffer_ptr = buffer.get();
-
- // Run device-side copy.
-
- auto f0 = async_copy_n(
- from_exec
- , from_exec
- , first
- , n
- , buffer_ptr
- );
-
- // Run copy back to host.
-
- auto new_from_exec = thrust::detail::derived_cast(from_exec).rebind_after(
- std::move(buffer)
- , std::move(f0)
- );
-
- THRUST_STATIC_ASSERT((
- std::tuple_size::value + 1
- <=
- std::tuple_size::value
- ));
-
- return async_copy_n(
- new_from_exec
- , to_exec
- , buffer_ptr
- , n
- , output
- );
-}
-
-template
-void async_copy_n_compile_failure_non_trivially_relocatable_elements()
-{
- THRUST_STATIC_ASSERT_MSG(
- (is_trivially_relocatable_to::value)
- , "only sequences of TriviallyRelocatable elements can be copied to and from "
- "the CUDA system; use `THRUST_PROCLAIM_TRIVIALLY_RELOCATABLE(T)` to "
- "indicate that a type can be copied by bitwise (e.g. by `memcpy`)"
- );
-}
-
-// Non-TriviallyRelocatable value type
-// Host to device, device to host
-template <
- typename FromPolicy, typename ToPolicy
-, typename ForwardIt, typename OutputIt, typename Size
->
-auto async_copy_n(
- FromPolicy& from_exec
-, ToPolicy& to_exec
-, ForwardIt first
-, Size n
-, OutputIt output
-) ->
- typename std::enable_if<
- conjunction<
- negation<
- is_trivially_relocatable_to<
- typename iterator_traits::value_type
- , typename iterator_traits::value_type
- >
- >
- , disjunction<
- decltype(is_host_to_device_copy(from_exec, to_exec))
- , decltype(is_device_to_host_copy(from_exec, to_exec))
- >
- >::value
- , unique_eager_event
- >::type
-{
- // TODO: We could do more here with cudaHostRegister.
-
- async_copy_n_compile_failure_non_trivially_relocatable_elements<
- typename thrust::iterator_traits::value_type
- , typename std::add_lvalue_reference<
- typename thrust::iterator_traits::value_type
- >::type
- >();
-
- return {};
-}
-
-}}} // namespace system::cuda::detail
-
-namespace cuda_cub
-{
-
-// ADL entry point.
-template <
- typename FromPolicy, typename ToPolicy
-, typename ForwardIt, typename Sentinel, typename OutputIt
->
-auto async_copy(
- thrust::cuda::execution_policy& from_exec
-, thrust::cpp::execution_policy& to_exec
-, ForwardIt first
-, Sentinel last
-, OutputIt output
-)
-THRUST_RETURNS(
- thrust::system::cuda::detail::async_copy_n(
- from_exec, to_exec, first, distance(first, last), output
- )
-)
-
-// ADL entry point.
-template <
- typename FromPolicy, typename ToPolicy
-, typename ForwardIt, typename Sentinel, typename OutputIt
->
-auto async_copy(
- thrust::cpp::execution_policy& from_exec
-, thrust::cuda::execution_policy& to_exec
-, ForwardIt first
-, Sentinel last
-, OutputIt output
-)
-THRUST_RETURNS(
- thrust::system::cuda::detail::async_copy_n(
- from_exec, to_exec, first, distance(first, last), output
- )
-)
-
-// ADL entry point.
-template <
- typename FromPolicy, typename ToPolicy
-, typename ForwardIt, typename Sentinel, typename OutputIt
->
-auto async_copy(
- thrust::cuda::execution_policy& from_exec
-, thrust::cuda::execution_policy& to_exec
-, ForwardIt first
-, Sentinel last
-, OutputIt output
-)
-THRUST_RETURNS(
- thrust::system::cuda::detail::async_copy_n(
- from_exec, to_exec, first, distance(first, last), output
- )
-)
-
-} // cuda_cub
-
-} // end namespace thrust
-
-#endif // THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-
-#endif
-
diff --git a/spaces/CVPR/WALT/mmdet/datasets/pipelines/instaboost.py b/spaces/CVPR/WALT/mmdet/datasets/pipelines/instaboost.py
deleted file mode 100644
index 38b6819f60587a6e0c0f6d57bfda32bb3a7a4267..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/datasets/pipelines/instaboost.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import numpy as np
-
-from ..builder import PIPELINES
-
-
-@PIPELINES.register_module()
-class InstaBoost(object):
- r"""Data augmentation method in `InstaBoost: Boosting Instance
- Segmentation Via Probability Map Guided Copy-Pasting
- `_.
-
- Refer to https://github.com/GothicAi/Instaboost for implementation details.
- """
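-    # Illustrative usage (values below are just this class's defaults): in an mmdet
-    # config, this transform is typically placed in the training pipeline between
-    # image loading and annotation loading, e.g.
-    #
-    #   train_pipeline = [
-    #       dict(type='LoadImageFromFile'),
-    #       dict(type='InstaBoost', action_prob=(1, 0, 0), aug_ratio=0.5),
-    #       dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
-    #       ...
-    #   ]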
-
- def __init__(self,
- action_candidate=('normal', 'horizontal', 'skip'),
- action_prob=(1, 0, 0),
- scale=(0.8, 1.2),
- dx=15,
- dy=15,
- theta=(-1, 1),
- color_prob=0.5,
- hflag=False,
- aug_ratio=0.5):
- try:
- import instaboostfast as instaboost
- except ImportError:
- raise ImportError(
- 'Please run "pip install instaboostfast" '
- 'to install instaboostfast first for instaboost augmentation.')
- self.cfg = instaboost.InstaBoostConfig(action_candidate, action_prob,
- scale, dx, dy, theta,
- color_prob, hflag)
- self.aug_ratio = aug_ratio
-
- def _load_anns(self, results):
- labels = results['ann_info']['labels']
- masks = results['ann_info']['masks']
- bboxes = results['ann_info']['bboxes']
- n = len(labels)
-
- anns = []
- for i in range(n):
- label = labels[i]
- bbox = bboxes[i]
- mask = masks[i]
- x1, y1, x2, y2 = bbox
- # assert (x2 - x1) >= 1 and (y2 - y1) >= 1
- bbox = [x1, y1, x2 - x1, y2 - y1]
- anns.append({
- 'category_id': label,
- 'segmentation': mask,
- 'bbox': bbox
- })
-
- return anns
-
- def _parse_anns(self, results, anns, img):
- gt_bboxes = []
- gt_labels = []
- gt_masks_ann = []
- for ann in anns:
- x1, y1, w, h = ann['bbox']
- # TODO: more essential bug need to be fixed in instaboost
- if w <= 0 or h <= 0:
- continue
- bbox = [x1, y1, x1 + w, y1 + h]
- gt_bboxes.append(bbox)
- gt_labels.append(ann['category_id'])
- gt_masks_ann.append(ann['segmentation'])
- gt_bboxes = np.array(gt_bboxes, dtype=np.float32)
- gt_labels = np.array(gt_labels, dtype=np.int64)
- results['ann_info']['labels'] = gt_labels
- results['ann_info']['bboxes'] = gt_bboxes
- results['ann_info']['masks'] = gt_masks_ann
- results['img'] = img
- return results
-
- def __call__(self, results):
- img = results['img']
- orig_type = img.dtype
- anns = self._load_anns(results)
- if np.random.choice([0, 1], p=[1 - self.aug_ratio, self.aug_ratio]):
- try:
- import instaboostfast as instaboost
- except ImportError:
- raise ImportError('Please run "pip install instaboostfast" '
- 'to install instaboostfast first.')
- anns, img = instaboost.get_new_data(
- anns, img.astype(np.uint8), self.cfg, background=None)
-
- results = self._parse_anns(results, anns, img.astype(orig_type))
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(cfg={self.cfg}, aug_ratio={self.aug_ratio})'
- return repr_str
diff --git a/spaces/CVPR/v-doc_abstractive_mac/main.py b/spaces/CVPR/v-doc_abstractive_mac/main.py
deleted file mode 100644
index 17fca1e7d6d70c8a52b4b53800dcc21b100f0eb8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/v-doc_abstractive_mac/main.py
+++ /dev/null
@@ -1,653 +0,0 @@
-from __future__ import division
-import warnings
-
-from extract_feature import build_model, run_image, get_img_feat
-
-# warnings.filterwarnings("ignore", category=FutureWarning)
-# warnings.filterwarnings("ignore", message="size changed")
-warnings.filterwarnings("ignore")
-
-import sys
-import os
-import time
-import math
-import random
-
-try:
- import Queue as queue
-except ImportError:
- import queue
-import threading
-import h5py
-import json
-import numpy as np
-import tensorflow as tf
-from termcolor import colored, cprint
-
-from config import config, loadDatasetConfig, parseArgs
-from preprocess import Preprocesser, bold, bcolored, writeline, writelist
-from model import MACnet
-from collections import defaultdict
-
-
-############################################# loggers #############################################
-
-# Writes log header to file
-def logInit():
- with open(config.logFile(), "a+") as outFile:
- writeline(outFile, config.expName)
- headers = ["epoch", "trainAcc", "valAcc", "trainLoss", "valLoss"]
- if config.evalTrain:
- headers += ["evalTrainAcc", "evalTrainLoss"]
- if config.extra:
- if config.evalTrain:
- headers += ["thAcc", "thLoss"]
- headers += ["vhAcc", "vhLoss"]
- headers += ["time", "lr"]
-
- writelist(outFile, headers)
- # lr assumed to be last
-
-
-# Writes log record to file
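-# With config.evalTrain and config.extra disabled, each epoch therefore appends a row
-# of the form epoch,trainAcc,valAcc,trainLoss,valLoss,time,lr (for example:
-# 12,0.61,0.58,1.23,1.30,452.1,0.0003; the numbers are purely illustrative).
-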
-def logRecord(epoch, epochTime, lr, trainRes, evalRes, extraEvalRes):
- with open(config.logFile(), "a+") as outFile:
- record = [epoch, trainRes["acc"], evalRes["val"]["acc"], trainRes["loss"], evalRes["val"]["loss"]]
- if config.evalTrain:
- record += [evalRes["evalTrain"]["acc"], evalRes["evalTrain"]["loss"]]
- if config.extra:
- if config.evalTrain:
- record += [extraEvalRes["evalTrain"]["acc"], extraEvalRes["evalTrain"]["loss"]]
- record += [extraEvalRes["val"]["acc"], extraEvalRes["val"]["loss"]]
- record += [epochTime, lr]
-
- writelist(outFile, record)
-
-
-# Gets last logged epoch and learning rate
-def lastLoggedEpoch():
- with open(config.logFile(), "r") as inFile:
- lastLine = list(inFile)[-1].split(",")
- epoch = int(lastLine[0])
- lr = float(lastLine[-1])
- return epoch, lr
-
-
-################################## printing, output and analysis ##################################
-
-# Analysis by type
-analysisQuestionLims = [(0, 18), (19, float("inf"))]
-analysisProgramLims = [(0, 12), (13, float("inf"))]
-
-toArity = lambda instance: instance["programSeq"][-1].split("_", 1)[0]
-toType = lambda instance: instance["programSeq"][-1].split("_", 1)[1]
-
-
-def fieldLenIsInRange(field):
- return lambda instance, group: \
- (len(instance[field]) >= group[0] and
- len(instance[field]) <= group[1])
-
-
-# Groups instances based on a key
-def grouperKey(toKey):
- def grouper(instances):
- res = defaultdict(list)
- for instance in instances:
- res[toKey(instance)].append(instance)
- return res
-
- return grouper
-
-
-# Groups instances according to their match to condition
-def grouperCond(groups, isIn):
- def grouper(instances):
- res = {}
- for group in groups:
-            res[group] = [instance for instance in instances if isIn(instance, group)]
- return res
-
- return grouper
-
-
-groupers = {
- "questionLength": grouperCond(analysisQuestionLims, fieldLenIsInRange("questionSeq")),
- "programLength": grouperCond(analysisProgramLims, fieldLenIsInRange("programSeq")),
- "arity": grouperKey(toArity),
- "type": grouperKey(toType)
-}
-
-
-# Computes average
-def avg(instances, field):
- if len(instances) == 0:
- return 0.0
-    return sum(instance[field] for instance in instances) / len(instances)
-
-
-# Prints analysis of questions loss and accuracy by their group
-def printAnalysis(res):
- if config.analysisType != "":
- print("Analysis by {type}".format(type=config.analysisType))
- groups = groupers[config.analysisType](res["preds"])
- for key in groups:
- instances = groups[key]
- avgLoss = avg(instances, "loss")
- avgAcc = avg(instances, "acc")
- num = len(instances)
- print("Group {key}: Loss: {loss}, Acc: {acc}, Num: {num}".format(key, avgLoss, avgAcc, num))
-
-
-# Print results for a tier
-def printTierResults(tierName, res, color):
- if res is None:
- return
-
- print("{tierName} Loss: {loss}, {tierName} accuracy: {acc}".format(tierName=tierName,
- loss=bcolored(res["loss"], color),
- acc=bcolored(res["acc"], color)))
-
- printAnalysis(res)
-
-
-# Prints dataset results (for several tiers)
-def printDatasetResults(trainRes, evalRes):
- printTierResults("Training", trainRes, "magenta")
- printTierResults("Training EMA", evalRes["evalTrain"], "red")
- printTierResults("Validation", evalRes["val"], "cyan")
-
-
-# Writes predictions for several tiers
-def writePreds(preprocessor, evalRes):
- preprocessor.writePreds(evalRes, "_")
-
-
-############################################# session #############################################
-# Initializes TF session. Sets GPU memory configuration.
-def setSession():
- sessionConfig = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False)
- if config.allowGrowth:
- sessionConfig.gpu_options.allow_growth = True
- if config.maxMemory < 1.0:
- sessionConfig.gpu_options.per_process_gpu_memory_fraction = config.maxMemory
- return sessionConfig
-
-
-############################################## savers #############################################
-# Initializes savers (standard, optional exponential-moving-average and optional for subset of variables)
-def setSavers(model):
- saver = tf.train.Saver(max_to_keep=config.weightsToKeep)
-
- subsetSaver = None
- if config.saveSubset:
- isRelevant = lambda var: any(s in var.name for s in config.varSubset)
- relevantVars = [var for var in tf.global_variables() if isRelevant(var)]
- subsetSaver = tf.train.Saver(relevantVars, max_to_keep=config.weightsToKeep, allow_empty=True)
-
- emaSaver = None
- if config.useEMA:
- emaSaver = tf.train.Saver(model.emaDict, max_to_keep=config.weightsToKeep)
-
- return {
- "saver": saver,
- "subsetSaver": subsetSaver,
- "emaSaver": emaSaver
- }
-
-
-################################### restore / initialize weights ##################################
-# Restores weights of the specified / last epoch if in restore mode.
-# Otherwise, initializes weights.
-def loadWeights(sess, saver, init):
- if config.restoreEpoch > 0 or config.restore:
- # restore last epoch only if restoreEpoch isn't set
- if config.restoreEpoch == 0:
- # restore last logged epoch
- config.restoreEpoch, config.lr = lastLoggedEpoch()
- print(bcolored("Restoring epoch {} and lr {}".format(config.restoreEpoch, config.lr), "cyan"))
- print(bcolored("Restoring weights", "blue"))
- print(config.weightsFile(config.restoreEpoch))
- saver.restore(sess, config.weightsFile(config.restoreEpoch))
- epoch = config.restoreEpoch
- else:
- print(bcolored("Initializing weights", "blue"))
- sess.run(init)
- logInit()
- epoch = 0
-
- return epoch
-
-
-###################################### training / evaluation ######################################
-# Chooses data to train on (main / extra) data.
-def chooseTrainingData(data):
- trainingData = data["main"]["train"]
- alterData = None
-
- if config.extra:
- if config.trainExtra:
- if config.extraVal:
- trainingData = data["extra"]["val"]
- else:
- trainingData = data["extra"]["train"]
- if config.alterExtra:
- alterData = data["extra"]["train"]
-
- return trainingData, alterData
-
-
-#### evaluation
-# Runs evaluation on train / val / test datasets.
-def runEvaluation(sess, model, data, epoch, evalTrain=True, evalTest=False, getAtt=None):
- if getAtt is None:
- getAtt = config.getAtt
- res = {"evalTrain": None, "val": None, "test": None}
-
- if data is not None:
- if evalTrain and config.evalTrain:
- res["evalTrain"] = runEpoch(sess, model, data["evalTrain"], train=False, epoch=epoch, getAtt=getAtt)
-
- res["val"] = runEpoch(sess, model, data["val"], train=False, epoch=epoch, getAtt=getAtt)
-
- if evalTest or config.test:
- res["test"] = runEpoch(sess, model, data["test"], train=False, epoch=epoch, getAtt=getAtt)
-
- return res
-
-
-## training conditions (comparing current epoch result to prior ones)
-def improveEnough(curr, prior, lr):
- prevRes = prior["prev"]["res"]
- currRes = curr["res"]
-
- if prevRes is None:
- return True
-
- prevTrainLoss = prevRes["train"]["loss"]
- currTrainLoss = currRes["train"]["loss"]
- lossDiff = prevTrainLoss - currTrainLoss
-
- notImprove = ((lossDiff < 0.015 and prevTrainLoss < 0.5 and lr > 0.00002) or \
- (lossDiff < 0.008 and prevTrainLoss < 0.15 and lr > 0.00001) or \
- (lossDiff < 0.003 and prevTrainLoss < 0.10 and lr > 0.000005))
- # (prevTrainLoss < 0.2 and config.lr > 0.000015)
-
- return not notImprove
-
-
-def better(currRes, bestRes):
- return currRes["val"]["acc"] > bestRes["val"]["acc"]
-
-
-############################################## data ###############################################
-#### instances and batching
-# Trims sequences based on their max length.
-def trim2DVectors(vectors, vectorsLengths):
- maxLength = np.max(vectorsLengths)
- return vectors[:, :maxLength]
-
-
-# Trims batch based on question length.
-def trimData(data):
- data["questions"] = trim2DVectors(data["questions"], data["questionLengths"])
- return data
-
-
-# Gets batch / bucket size.
-def getLength(data):
- return len(data["instances"])
-
-
-# Selects the data entries that match the indices.
-def selectIndices(data, indices):
- def select(field, indices):
- if type(field) is np.ndarray:
- return field[indices]
- if type(field) is list:
- return [field[i] for i in indices]
- else:
- return field
-
- selected = {k: select(d, indices) for k, d in data.items()}
- return selected
-
-
-# Batches data into a list of batches of size batchSize.
-# Shuffles the data by default.
-def getBatches(data, batchSize=None, shuffle=True):
- batches = []
-
- dataLen = getLength(data)
- if batchSize is None or batchSize > dataLen:
- batchSize = dataLen
-
- indices = np.arange(dataLen)
- if shuffle:
- np.random.shuffle(indices)
-
- for batchStart in range(0, dataLen, batchSize):
- batchIndices = indices[batchStart: batchStart + batchSize]
- # if len(batchIndices) == batchSize?
- if len(batchIndices) >= config.gpusNum:
- batch = selectIndices(data, batchIndices)
- batches.append(batch)
- # batchesIndices.append((data, batchIndices))
-
- return batches
-
-
-#### image batches
-# Opens image files.
-def openImageFiles(images):
- images["imagesFile"] = h5py.File(images["imagesFilename"], "r")
- images["imagesIds"] = None
- if config.dataset == "NLVR":
- with open(images["imageIdsFilename"], "r") as imageIdsFile:
- images["imagesIds"] = json.load(imageIdsFile)
-
-
-# Closes image files.
-def closeImageFiles(images):
- images["imagesFile"].close()
-
-
-# Loads images from file for a given data batch.
-def loadImageBatch(images, batch):
- imagesFile = images["imagesFile"]
- id2idx = images["imagesIds"]
- toIndex = lambda imageId: imageId
- if id2idx is not None:
- toIndex = lambda imageId: id2idx[imageId]
- imageBatch = np.stack([imagesFile["features"][toIndex(imageId)] for imageId in batch["imageIds"]], axis=0)
-
- return {"images": imageBatch, "imageIds": batch["imageIds"]}
-
-
-# Loads images for num batches from the batches list, starting at index start.
-def loadImageBatches(images, batches, start, num):
- batches = batches[start: start + num]
- return [loadImageBatch(images, batch) for batch in batches]
-
-
-#### data alternation
-# Alternates main training batches with extra data.
-def alternateData(batches, alterData, dataLen):
- alterData = alterData["data"][0] # data isn't bucketed for altered data
-
- # computes number of repetitions
- needed = math.ceil(len(batches) / config.alterNum)
- print(bold("Extra batches needed: %d") % needed)
- perData = math.ceil(getLength(alterData) / config.batchSize)
- print(bold("Batches per extra data: %d") % perData)
- repetitions = math.ceil(needed / perData)
- print(bold("reps: %d") % repetitions)
-
- # make alternate batches
- alterBatches = []
- for _ in range(repetitions):
- repBatches = getBatches(alterData, batchSize=config.batchSize)
- random.shuffle(repBatches)
- alterBatches += repBatches
- print(bold("Batches num: %d") + len(alterBatches))
-
- # alternate data with extra data
- curr = len(batches) - 1
- for alterBatch in alterBatches:
- if curr < 0:
- # print(colored("too many" + str(curr) + " " + str(len(batches)),"red"))
- break
- batches.insert(curr, alterBatch)
- dataLen += getLength(alterBatch)
- curr -= config.alterNum
-
- return batches, dataLen
-
-
-############################################ threading ############################################
-
-imagesQueue = queue.Queue(maxsize=20) # config.tasksNum
-inQueue = queue.Queue(maxsize=1)
-outQueue = queue.Queue(maxsize=1)
-
-
-# Runs worker thread(s) to load images while training.
-class StoppableThread(threading.Thread):
- # Thread class with a stop() method. The thread itself has to check
- # regularly for the stopped() condition.
-
- def __init__(self, images, batches): # i
- super(StoppableThread, self).__init__()
- # self.i = i
- self.images = images
- self.batches = batches
- self._stop_event = threading.Event()
-
- # def __init__(self, args):
- # super(StoppableThread, self).__init__(args = args)
- # self._stop_event = threading.Event()
-
- # def __init__(self, target, args):
- # super(StoppableThread, self).__init__(target = target, args = args)
- # self._stop_event = threading.Event()
-
- def stop(self):
- self._stop_event.set()
-
- def stopped(self):
- return self._stop_event.is_set()
-
- def run(self):
- while not self.stopped():
- try:
- batchNum = inQueue.get(timeout=60)
- nextItem = loadImageBatches(self.images, self.batches, batchNum, int(config.taskSize / 2))
- outQueue.put(nextItem)
- # inQueue.task_done()
- except:
- pass
- # print("worker %d done", self.i)
-
-
-def loaderRun(images, batches):
- batchNum = 0
-
- # if config.workers == 2:
- # worker = StoppableThread(images, batches) # i,
- # worker.daemon = True
- # worker.start()
-
- # while batchNum < len(batches):
- # inQueue.put(batchNum + int(config.taskSize / 2))
- # nextItem1 = loadImageBatches(images, batches, batchNum, int(config.taskSize / 2))
- # nextItem2 = outQueue.get()
-
- # nextItem = nextItem1 + nextItem2
- # assert len(nextItem) == min(config.taskSize, len(batches) - batchNum)
- # batchNum += config.taskSize
-
- # imagesQueue.put(nextItem)
-
- # worker.stop()
- # else:
- while batchNum < len(batches):
- nextItem = loadImageBatches(images, batches, batchNum, config.taskSize)
- assert len(nextItem) == min(config.taskSize, len(batches) - batchNum)
- batchNum += config.taskSize
- imagesQueue.put(nextItem)
-
- # print("manager loader done")
-
-
-########################################## stats tracking #########################################
-# Computes exponential moving average.
-def emaAvg(avg, value):
- if avg is None:
- return value
- emaRate = 0.98
- return avg * emaRate + value * (1 - emaRate)
-
-
-# Initializes training statistics.
-def initStats():
- return {
- "totalBatches": 0,
- "totalData": 0,
- "totalLoss": 0.0,
- "totalCorrect": 0,
- "loss": 0.0,
- "acc": 0.0,
- "emaLoss": None,
- "emaAcc": None,
- }
-
-
-# Updates statistics with training results of a batch
-def updateStats(stats, res, batch):
- stats["totalBatches"] += 1
- stats["totalData"] += getLength(batch)
-
- stats["totalLoss"] += res["loss"]
- stats["totalCorrect"] += res["correctNum"]
-
- stats["loss"] = stats["totalLoss"] / stats["totalBatches"]
- stats["acc"] = stats["totalCorrect"] / stats["totalData"]
-
- stats["emaLoss"] = emaAvg(stats["emaLoss"], res["loss"])
- stats["emaAcc"] = emaAvg(stats["emaAcc"], res["acc"])
-
- return stats
-
-
-# auto-encoder ae = {:2.4f} autoEncLoss,
-# Translates training statistics into a string to print
-def statsToStr(stats, res, epoch, batchNum, dataLen, startTime):
- formatStr = "\reb {epoch},{batchNum} ({dataProcessed} / {dataLen:5d}), " + \
- "t = {time} ({loadTime:2.2f}+{trainTime:2.2f}), " + \
- "lr {lr}, l = {loss}, a = {acc}, avL = {avgLoss}, " + \
- "avA = {avgAcc}, g = {gradNorm:2.4f}, " + \
- "emL = {emaLoss:2.4f}, emA = {emaAcc:2.4f}; " + \
- "{expname}" # {machine}/{gpu}"
-
- s_epoch = bcolored("{:2d}".format(epoch), "green")
- s_batchNum = "{:3d}".format(batchNum)
- s_dataProcessed = bcolored("{:5d}".format(stats["totalData"]), "green")
- s_dataLen = dataLen
- s_time = bcolored("{:2.2f}".format(time.time() - startTime), "green")
- s_loadTime = res["readTime"]
- s_trainTime = res["trainTime"]
- s_lr = bold(config.lr)
- s_loss = bcolored("{:2.4f}".format(res["loss"]), "blue")
- s_acc = bcolored("{:2.4f}".format(res["acc"]), "blue")
- s_avgLoss = bcolored("{:2.4f}".format(stats["loss"]), "blue")
- s_avgAcc = bcolored("{:2.4f}".format(stats["acc"]), "red")
- s_gradNorm = res["gradNorm"]
- s_emaLoss = stats["emaLoss"]
- s_emaAcc = stats["emaAcc"]
- s_expname = config.expName
- # s_machine = bcolored(config.dataPath[9:11],"green")
- # s_gpu = bcolored(config.gpus,"green")
-
- return formatStr.format(epoch=s_epoch, batchNum=s_batchNum, dataProcessed=s_dataProcessed,
- dataLen=s_dataLen, time=s_time, loadTime=s_loadTime,
- trainTime=s_trainTime, lr=s_lr, loss=s_loss, acc=s_acc,
- avgLoss=s_avgLoss, avgAcc=s_avgAcc, gradNorm=s_gradNorm,
- emaLoss=s_emaLoss, emaAcc=s_emaAcc, expname=s_expname)
- # machine = s_machine, gpu = s_gpu)
-
-
-# collectRuntimeStats, writer = None,
-'''
-Runs an epoch with model and session over the data.
-1. Batches the data and optionally mixes it with the extra alterData.
-2. Starts worker threads to load images in parallel to training.
-3. Runs the model for each batch, and gets results (e.g. loss, accuracy).
-4. Updates and prints statistics based on batch results.
-5. Once in a while (every config.saveEvery), saves weights.
-
-Args:
- sess: TF session to run with.
-
- model: model to process data. Has runBatch method that process a given batch.
- (See model.py for further details).
-
- data: data to use for training/evaluation.
-
- epoch: epoch number.
-
- saver: TF saver to save weights
-
- calle: a method to call every number of iterations (config.calleEvery)
-
- alterData: extra data to mix with main data while training.
-
- getAtt: True to return model attentions.
-'''
-
-
-def main(question, image):
- with open(config.configFile(), "a+") as outFile:
- json.dump(vars(config), outFile)
-
- # set gpus
- if config.gpus != "":
- config.gpusNum = len(config.gpus.split(","))
- os.environ["CUDA_VISIBLE_DEVICES"] = config.gpus
-
- tf.logging.set_verbosity(tf.logging.ERROR)
-
- # process data
- print(bold("Preprocess data..."))
- start = time.time()
- preprocessor = Preprocesser()
- cnn_model = build_model()
- imageData = get_img_feat(cnn_model, image)
- qData, embeddings, answerDict = preprocessor.preprocessData(question)
- data = {'data': qData, 'image': imageData}
- print("took {} seconds".format(bcolored("{:.2f}".format(time.time() - start), "blue")))
-
- # build model
- print(bold("Building model..."))
- start = time.time()
- model = MACnet(embeddings, answerDict)
- print("took {} seconds".format(bcolored("{:.2f}".format(time.time() - start), "blue")))
-
- # initializer
- init = tf.global_variables_initializer()
-
- # savers
- savers = setSavers(model)
- saver, emaSaver = savers["saver"], savers["emaSaver"]
-
- # sessionConfig
- sessionConfig = setSession()
-
- with tf.Session(config=sessionConfig) as sess:
-
- # ensure no more ops are added after model is built
- sess.graph.finalize()
-
- # restore / initialize weights, initialize epoch variable
- epoch = loadWeights(sess, saver, init)
- print(epoch)
- start = time.time()
- if epoch > 0:
- if config.useEMA:
- emaSaver.restore(sess, config.weightsFile(epoch))
- else:
- saver.restore(sess, config.weightsFile(epoch))
-
- evalRes = model.runBatch(sess, data['data'], data['image'], False)
-
- print("took {:.2f} seconds".format(time.time() - start))
-
- print(evalRes)
-
-
-if __name__ == '__main__':
- parseArgs()
- loadDatasetConfig[config.dataset]()
- question = 'How many text objects are located at the bottom side of table?'
- imagePath = './mac-layoutLM-sample/PDF_val_64.png'
- main(question, imagePath)
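The emaAvg and updateStats helpers above track both a plain running average and an exponential moving average (rate 0.98) of the per-batch loss. A self-contained sketch of that bookkeeping, with made-up batch losses rather than values from the deleted script:

def ema_avg(avg, value, rate=0.98):
    # the first batch seeds the EMA, afterwards it decays towards new values
    return value if avg is None else avg * rate + value * (1 - rate)

ema, total_loss, total_batches = None, 0.0, 0
for batch_loss in [0.9, 0.7, 0.65, 0.5]:  # hypothetical per-batch losses
    total_batches += 1
    total_loss += batch_loss
    ema = ema_avg(ema, batch_loss)
print(total_loss / total_batches, ema)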
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/transformer.py b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/transformer.py
deleted file mode 100644
index 6ef2e4f0829d136ed3aeb70076847b4f6dea6afa..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/transformer.py
+++ /dev/null
@@ -1,974 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# DINO
-# Copyright (c) 2022 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Conditional DETR Transformer class.
-# Copyright (c) 2021 Microsoft. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Modified from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-# ------------------------------------------------------------------------
-
-from typing import Optional
-
-import torch
-import torch.utils.checkpoint as checkpoint
-from torch import Tensor, nn
-
-from groundingdino.util.misc import inverse_sigmoid
-
-from .fuse_modules import BiAttentionBlock
-from .ms_deform_attn import MultiScaleDeformableAttention as MSDeformAttn
-from .transformer_vanilla import TransformerEncoderLayer
-from .utils import (
- MLP,
- _get_activation_fn,
- _get_clones,
- gen_encoder_output_proposals,
- gen_sineembed_for_position,
- get_sine_pos_embed,
-)
-
-
-class Transformer(nn.Module):
- def __init__(
- self,
- d_model=256,
- nhead=8,
- num_queries=300,
- num_encoder_layers=6,
- num_unicoder_layers=0,
- num_decoder_layers=6,
- dim_feedforward=2048,
- dropout=0.0,
- activation="relu",
- normalize_before=False,
- return_intermediate_dec=False,
- query_dim=4,
- num_patterns=0,
- # for deformable encoder
- num_feature_levels=1,
- enc_n_points=4,
- dec_n_points=4,
- # init query
- learnable_tgt_init=False,
- # two stage
- two_stage_type="no", # ['no', 'standard', 'early', 'combine', 'enceachlayer', 'enclayer1']
- embed_init_tgt=False,
- # for text
- use_text_enhancer=False,
- use_fusion_layer=False,
- use_checkpoint=False,
- use_transformer_ckpt=False,
- use_text_cross_attention=False,
- text_dropout=0.1,
- fusion_dropout=0.1,
- fusion_droppath=0.0,
- ):
- super().__init__()
- self.num_feature_levels = num_feature_levels
- self.num_encoder_layers = num_encoder_layers
- self.num_unicoder_layers = num_unicoder_layers
- self.num_decoder_layers = num_decoder_layers
- self.num_queries = num_queries
- assert query_dim == 4
-
- # choose encoder layer type
- encoder_layer = DeformableTransformerEncoderLayer(
- d_model, dim_feedforward, dropout, activation, num_feature_levels, nhead, enc_n_points
- )
-
- if use_text_enhancer:
- text_enhance_layer = TransformerEncoderLayer(
- d_model=d_model,
- nhead=nhead // 2,
- dim_feedforward=dim_feedforward // 2,
- dropout=text_dropout,
- )
- else:
- text_enhance_layer = None
-
- if use_fusion_layer:
- feature_fusion_layer = BiAttentionBlock(
- v_dim=d_model,
- l_dim=d_model,
- embed_dim=dim_feedforward // 2,
- num_heads=nhead // 2,
- dropout=fusion_dropout,
- drop_path=fusion_droppath,
- )
- else:
- feature_fusion_layer = None
-
- encoder_norm = nn.LayerNorm(d_model) if normalize_before else None
- assert encoder_norm is None
- self.encoder = TransformerEncoder(
- encoder_layer,
- num_encoder_layers,
- d_model=d_model,
- num_queries=num_queries,
- text_enhance_layer=text_enhance_layer,
- feature_fusion_layer=feature_fusion_layer,
- use_checkpoint=use_checkpoint,
- use_transformer_ckpt=use_transformer_ckpt,
- )
-
- # choose decoder layer type
- decoder_layer = DeformableTransformerDecoderLayer(
- d_model,
- dim_feedforward,
- dropout,
- activation,
- num_feature_levels,
- nhead,
- dec_n_points,
- use_text_cross_attention=use_text_cross_attention,
- )
-
- decoder_norm = nn.LayerNorm(d_model)
- self.decoder = TransformerDecoder(
- decoder_layer,
- num_decoder_layers,
- decoder_norm,
- return_intermediate=return_intermediate_dec,
- d_model=d_model,
- query_dim=query_dim,
- num_feature_levels=num_feature_levels,
- )
-
- self.d_model = d_model
- self.nhead = nhead
- self.dec_layers = num_decoder_layers
- self.num_queries = num_queries # useful for single stage model only
- self.num_patterns = num_patterns
- if not isinstance(num_patterns, int):
- Warning("num_patterns should be int but {}".format(type(num_patterns)))
- self.num_patterns = 0
-
- if num_feature_levels > 1:
- if self.num_encoder_layers > 0:
- self.level_embed = nn.Parameter(torch.Tensor(num_feature_levels, d_model))
- else:
- self.level_embed = None
-
- self.learnable_tgt_init = learnable_tgt_init
- assert learnable_tgt_init, "why not learnable_tgt_init"
- self.embed_init_tgt = embed_init_tgt
- if (two_stage_type != "no" and embed_init_tgt) or (two_stage_type == "no"):
- self.tgt_embed = nn.Embedding(self.num_queries, d_model)
- nn.init.normal_(self.tgt_embed.weight.data)
- else:
- self.tgt_embed = None
-
- # for two stage
- self.two_stage_type = two_stage_type
- assert two_stage_type in ["no", "standard"], "unknown param {} of two_stage_type".format(
- two_stage_type
- )
- if two_stage_type == "standard":
- # anchor selection at the output of encoder
- self.enc_output = nn.Linear(d_model, d_model)
- self.enc_output_norm = nn.LayerNorm(d_model)
- self.two_stage_wh_embedding = None
-
- if two_stage_type == "no":
- self.init_ref_points(num_queries) # init self.refpoint_embed
-
- self.enc_out_class_embed = None
- self.enc_out_bbox_embed = None
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
- for m in self.modules():
- if isinstance(m, MSDeformAttn):
- m._reset_parameters()
- if self.num_feature_levels > 1 and self.level_embed is not None:
- nn.init.normal_(self.level_embed)
-
- def get_valid_ratio(self, mask):
- _, H, W = mask.shape
- valid_H = torch.sum(~mask[:, :, 0], 1)
- valid_W = torch.sum(~mask[:, 0, :], 1)
- valid_ratio_h = valid_H.float() / H
- valid_ratio_w = valid_W.float() / W
- valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1)
- return valid_ratio
-
- def init_ref_points(self, use_num_queries):
- self.refpoint_embed = nn.Embedding(use_num_queries, 4)
-
- def forward(self, srcs, masks, refpoint_embed, pos_embeds, tgt, attn_mask=None, text_dict=None):
- """
- Input:
-            - srcs: List of multi-level features [bs, ci, hi, wi]
-            - masks: List of multi-level masks [bs, hi, wi]
-            - refpoint_embed: [bs, num_dn, 4]. None at inference
-            - pos_embeds: List of multi-level pos embeds [bs, ci, hi, wi]
-            - tgt: [bs, num_dn, d_model]. None at inference
-
- """
- # prepare input for encoder
- src_flatten = []
- mask_flatten = []
- lvl_pos_embed_flatten = []
- spatial_shapes = []
- for lvl, (src, mask, pos_embed) in enumerate(zip(srcs, masks, pos_embeds)):
- bs, c, h, w = src.shape
- spatial_shape = (h, w)
- spatial_shapes.append(spatial_shape)
-
- src = src.flatten(2).transpose(1, 2) # bs, hw, c
- mask = mask.flatten(1) # bs, hw
- pos_embed = pos_embed.flatten(2).transpose(1, 2) # bs, hw, c
- if self.num_feature_levels > 1 and self.level_embed is not None:
- lvl_pos_embed = pos_embed + self.level_embed[lvl].view(1, 1, -1)
- else:
- lvl_pos_embed = pos_embed
- lvl_pos_embed_flatten.append(lvl_pos_embed)
- src_flatten.append(src)
- mask_flatten.append(mask)
- src_flatten = torch.cat(src_flatten, 1) # bs, \sum{hxw}, c
- mask_flatten = torch.cat(mask_flatten, 1) # bs, \sum{hxw}
- lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1) # bs, \sum{hxw}, c
- spatial_shapes = torch.as_tensor(
- spatial_shapes, dtype=torch.long, device=src_flatten.device
- )
- level_start_index = torch.cat(
- (spatial_shapes.new_zeros((1,)), spatial_shapes.prod(1).cumsum(0)[:-1])
- )
- valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1)
-
- # two stage
- enc_topk_proposals = enc_refpoint_embed = None
-
- #########################################################
- # Begin Encoder
- #########################################################
- memory, memory_text = self.encoder(
- src_flatten,
- pos=lvl_pos_embed_flatten,
- level_start_index=level_start_index,
- spatial_shapes=spatial_shapes,
- valid_ratios=valid_ratios,
- key_padding_mask=mask_flatten,
- memory_text=text_dict["encoded_text"],
- text_attention_mask=~text_dict["text_token_mask"],
-            # we invert (~) the mask: False means use the token; True means pad the token
- position_ids=text_dict["position_ids"],
- text_self_attention_masks=text_dict["text_self_attention_masks"],
- )
-
- enhanced_image_features = memory.detach()
- enhanced_text_features = memory_text.detach()
-
- # memory: enhanced image features
- # memory_text: enhanced text features
- #########################################################
- # End Encoder
- # - memory: bs, \sum{hw}, c
- # - mask_flatten: bs, \sum{hw}
- # - lvl_pos_embed_flatten: bs, \sum{hw}, c
- # - enc_intermediate_output: None or (nenc+1, bs, nq, c) or (nenc, bs, nq, c)
- # - enc_intermediate_refpoints: None or (nenc+1, bs, nq, c) or (nenc, bs, nq, c)
- #########################################################
-
- #########################################################
- # Begin Language-guide Query Selection
- #########################################################
- text_dict["encoded_text"] = memory_text
- # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1':
- # if memory.isnan().any() | memory.isinf().any():
- # import ipdb; ipdb.set_trace()
-
- if self.two_stage_type == "standard":
- # logits and proposals
- output_memory, output_proposals = gen_encoder_output_proposals(
- memory, mask_flatten, spatial_shapes
- )
- output_memory = self.enc_output_norm(self.enc_output(output_memory))
-
- # language-guided query selection
- if text_dict is not None:
- enc_outputs_class_unselected = self.enc_out_class_embed(output_memory, text_dict)
- else:
- enc_outputs_class_unselected = self.enc_out_class_embed(output_memory)
-
- topk_logits = enc_outputs_class_unselected.max(-1)[0]
- enc_outputs_coord_unselected = (
- self.enc_out_bbox_embed(output_memory) + output_proposals
- ) # (bs, \sum{hw}, 4) unsigmoid
- topk = self.num_queries
-
- topk_proposals = torch.topk(topk_logits, topk, dim=1)[1] # bs, nq
-
- # gather boxes
- refpoint_embed_undetach = torch.gather(
- enc_outputs_coord_unselected, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4)
- ) # unsigmoid
- refpoint_embed_ = refpoint_embed_undetach.detach()
- init_box_proposal = torch.gather(
- output_proposals, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4)
- ).sigmoid() # sigmoid
-
- # gather tgt
- tgt_undetach = torch.gather(
- output_memory, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, self.d_model)
- )
- if self.embed_init_tgt:
- tgt_ = (
- self.tgt_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1)
- ) # nq, bs, d_model
- else:
- tgt_ = tgt_undetach.detach()
-
- if refpoint_embed is not None:
- refpoint_embed = torch.cat([refpoint_embed, refpoint_embed_], dim=1)
- tgt = torch.cat([tgt, tgt_], dim=1)
- else:
- refpoint_embed, tgt = refpoint_embed_, tgt_
-
- elif self.two_stage_type == "no":
- tgt_ = (
- self.tgt_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1)
- ) # nq, bs, d_model
- refpoint_embed_ = (
- self.refpoint_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1)
- ) # nq, bs, 4
-
- if refpoint_embed is not None:
- refpoint_embed = torch.cat([refpoint_embed, refpoint_embed_], dim=1)
- tgt = torch.cat([tgt, tgt_], dim=1)
- else:
- refpoint_embed, tgt = refpoint_embed_, tgt_
-
- if self.num_patterns > 0:
- tgt_embed = tgt.repeat(1, self.num_patterns, 1)
- refpoint_embed = refpoint_embed.repeat(1, self.num_patterns, 1)
- tgt_pat = self.patterns.weight[None, :, :].repeat_interleave(
- self.num_queries, 1
- ) # 1, n_q*n_pat, d_model
- tgt = tgt_embed + tgt_pat
-
- init_box_proposal = refpoint_embed_.sigmoid()
-
- else:
- raise NotImplementedError("unknown two_stage_type {}".format(self.two_stage_type))
- #########################################################
- # End preparing tgt
- # - tgt: bs, NQ, d_model
- # - refpoint_embed(unsigmoid): bs, NQ, d_model
- #########################################################
-
- #########################################################
- # Begin Decoder
- #########################################################
- hs, references = self.decoder(
- tgt=tgt.transpose(0, 1),
- memory=memory.transpose(0, 1),
- memory_key_padding_mask=mask_flatten,
- pos=lvl_pos_embed_flatten.transpose(0, 1),
- refpoints_unsigmoid=refpoint_embed.transpose(0, 1),
- level_start_index=level_start_index,
- spatial_shapes=spatial_shapes,
- valid_ratios=valid_ratios,
- tgt_mask=attn_mask,
- memory_text=text_dict["encoded_text"],
- text_attention_mask=~text_dict["text_token_mask"],
-            # we invert (~) the mask: False means use the token; True means pad the token
- )
- #########################################################
- # End Decoder
- # hs: n_dec, bs, nq, d_model
- # references: n_dec+1, bs, nq, query_dim
- #########################################################
-
- #########################################################
- # Begin postprocess
- #########################################################
- if self.two_stage_type == "standard":
- hs_enc = tgt_undetach.unsqueeze(0)
- ref_enc = refpoint_embed_undetach.sigmoid().unsqueeze(0)
- else:
- hs_enc = ref_enc = None
- #########################################################
- # End postprocess
- # hs_enc: (n_enc+1, bs, nq, d_model) or (1, bs, nq, d_model) or (n_enc, bs, nq, d_model) or None
- # ref_enc: (n_enc+1, bs, nq, query_dim) or (1, bs, nq, query_dim) or (n_enc, bs, nq, d_model) or None
- #########################################################
-
- return hs, references, hs_enc, ref_enc, init_box_proposal, enhanced_image_features, enhanced_text_features, spatial_shapes, topk_logits
- # hs: (n_dec, bs, nq, d_model)
- # references: sigmoid coordinates. (n_dec+1, bs, bq, 4)
- # hs_enc: (n_enc+1, bs, nq, d_model) or (1, bs, nq, d_model) or None
- # ref_enc: sigmoid coordinates. \
- # (n_enc+1, bs, nq, query_dim) or (1, bs, nq, query_dim) or None
- # enhanced_image_features: (bs, shw, c)
- # enhanced_text_features: (bs, n_enc, c)
- # spatial_shapes: s
-
-
-class TransformerEncoder(nn.Module):
- def __init__(
- self,
- encoder_layer,
- num_layers,
- d_model=256,
- num_queries=300,
- enc_layer_share=False,
- text_enhance_layer=None,
- feature_fusion_layer=None,
- use_checkpoint=False,
- use_transformer_ckpt=False,
- ):
- """_summary_
-
- Args:
- encoder_layer (_type_): _description_
- num_layers (_type_): _description_
- norm (_type_, optional): _description_. Defaults to None.
- d_model (int, optional): _description_. Defaults to 256.
- num_queries (int, optional): _description_. Defaults to 300.
- enc_layer_share (bool, optional): _description_. Defaults to False.
-
- """
- super().__init__()
- # prepare layers
- self.layers = []
- self.text_layers = []
- self.fusion_layers = []
- if num_layers > 0:
- self.layers = _get_clones(encoder_layer, num_layers, layer_share=enc_layer_share)
-
- if text_enhance_layer is not None:
- self.text_layers = _get_clones(
- text_enhance_layer, num_layers, layer_share=enc_layer_share
- )
- if feature_fusion_layer is not None:
- self.fusion_layers = _get_clones(
- feature_fusion_layer, num_layers, layer_share=enc_layer_share
- )
- else:
- self.layers = []
- del encoder_layer
-
- if text_enhance_layer is not None:
- self.text_layers = []
- del text_enhance_layer
- if feature_fusion_layer is not None:
- self.fusion_layers = []
- del feature_fusion_layer
-
- self.query_scale = None
- self.num_queries = num_queries
- self.num_layers = num_layers
- self.d_model = d_model
-
- self.use_checkpoint = use_checkpoint
- self.use_transformer_ckpt = use_transformer_ckpt
-
- @staticmethod
- def get_reference_points(spatial_shapes, valid_ratios, device):
- reference_points_list = []
- for lvl, (H_, W_) in enumerate(spatial_shapes):
-
- ref_y, ref_x = torch.meshgrid(
- torch.linspace(0.5, H_ - 0.5, H_, dtype=torch.float32, device=device),
- torch.linspace(0.5, W_ - 0.5, W_, dtype=torch.float32, device=device),
- )
- ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * H_)
- ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * W_)
- ref = torch.stack((ref_x, ref_y), -1)
- reference_points_list.append(ref)
- reference_points = torch.cat(reference_points_list, 1)
- reference_points = reference_points[:, :, None] * valid_ratios[:, None]
- return reference_points
-
- def forward(
- self,
- # for images
- src: Tensor,
- pos: Tensor,
- spatial_shapes: Tensor,
- level_start_index: Tensor,
- valid_ratios: Tensor,
- key_padding_mask: Tensor,
- # for texts
- memory_text: Tensor = None,
- text_attention_mask: Tensor = None,
- pos_text: Tensor = None,
- text_self_attention_masks: Tensor = None,
- position_ids: Tensor = None,
- ):
- """
- Input:
- - src: [bs, sum(hi*wi), 256]
- - pos: pos embed for src. [bs, sum(hi*wi), 256]
- - spatial_shapes: h,w of each level [num_level, 2]
- - level_start_index: [num_level] start point of level in sum(hi*wi).
- - valid_ratios: [bs, num_level, 2]
- - key_padding_mask: [bs, sum(hi*wi)]
-
- - memory_text: bs, n_text, 256
- - text_attention_mask: bs, n_text
- False for no padding; True for padding
- - pos_text: bs, n_text, 256
-
- - position_ids: bs, n_text
-        Intermediate:
- - reference_points: [bs, sum(hi*wi), num_level, 2]
-        Outputs:
- - output: [bs, sum(hi*wi), 256]
- """
-
- output = src
-
- # preparation and reshape
- if self.num_layers > 0:
- reference_points = self.get_reference_points(
- spatial_shapes, valid_ratios, device=src.device
- )
-
- if self.text_layers:
- # generate pos_text
- bs, n_text, text_dim = memory_text.shape
- if pos_text is None and position_ids is None:
- pos_text = (
- torch.arange(n_text, device=memory_text.device)
- .float()
- .unsqueeze(0)
- .unsqueeze(-1)
- .repeat(bs, 1, 1)
- )
- pos_text = get_sine_pos_embed(pos_text, num_pos_feats=256, exchange_xy=False)
- if position_ids is not None:
- pos_text = get_sine_pos_embed(
- position_ids[..., None], num_pos_feats=256, exchange_xy=False
- )
-
- # main process
- for layer_id, layer in enumerate(self.layers):
- # if output.isnan().any() or memory_text.isnan().any():
- # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO':
- # import ipdb; ipdb.set_trace()
- if self.fusion_layers:
- if self.use_checkpoint:
- output, memory_text = checkpoint.checkpoint(
- self.fusion_layers[layer_id],
- output,
- memory_text,
- key_padding_mask,
- text_attention_mask,
- )
- else:
- output, memory_text = self.fusion_layers[layer_id](
- v=output,
- l=memory_text,
- attention_mask_v=key_padding_mask,
- attention_mask_l=text_attention_mask,
- )
-
- if self.text_layers:
- memory_text = self.text_layers[layer_id](
- src=memory_text.transpose(0, 1),
- src_mask=~text_self_attention_masks, # note we use ~ for mask here
- src_key_padding_mask=text_attention_mask,
- pos=(pos_text.transpose(0, 1) if pos_text is not None else None),
- ).transpose(0, 1)
-
- # main process
- if self.use_transformer_ckpt:
- output = checkpoint.checkpoint(
- layer,
- output,
- pos,
- reference_points,
- spatial_shapes,
- level_start_index,
- key_padding_mask,
- )
- else:
- output = layer(
- src=output,
- pos=pos,
- reference_points=reference_points,
- spatial_shapes=spatial_shapes,
- level_start_index=level_start_index,
- key_padding_mask=key_padding_mask,
- )
-
- return output, memory_text
-
-
-class TransformerDecoder(nn.Module):
- def __init__(
- self,
- decoder_layer,
- num_layers,
- norm=None,
- return_intermediate=False,
- d_model=256,
- query_dim=4,
- num_feature_levels=1,
- ):
- super().__init__()
- if num_layers > 0:
- self.layers = _get_clones(decoder_layer, num_layers)
- else:
- self.layers = []
- self.num_layers = num_layers
- self.norm = norm
- self.return_intermediate = return_intermediate
- assert return_intermediate, "support return_intermediate only"
- self.query_dim = query_dim
- assert query_dim in [2, 4], "query_dim should be 2/4 but {}".format(query_dim)
- self.num_feature_levels = num_feature_levels
-
- self.ref_point_head = MLP(query_dim // 2 * d_model, d_model, d_model, 2)
- self.query_pos_sine_scale = None
-
- self.query_scale = None
- self.bbox_embed = None
- self.class_embed = None
-
- self.d_model = d_model
-
- self.ref_anchor_head = None
-
- def forward(
- self,
- tgt,
- memory,
- tgt_mask: Optional[Tensor] = None,
- memory_mask: Optional[Tensor] = None,
- tgt_key_padding_mask: Optional[Tensor] = None,
- memory_key_padding_mask: Optional[Tensor] = None,
- pos: Optional[Tensor] = None,
- refpoints_unsigmoid: Optional[Tensor] = None, # num_queries, bs, 2
- # for memory
- level_start_index: Optional[Tensor] = None, # num_levels
- spatial_shapes: Optional[Tensor] = None, # bs, num_levels, 2
- valid_ratios: Optional[Tensor] = None,
- # for text
- memory_text: Optional[Tensor] = None,
- text_attention_mask: Optional[Tensor] = None,
- ):
- """
- Input:
- - tgt: nq, bs, d_model
- - memory: hw, bs, d_model
- - pos: hw, bs, d_model
- - refpoints_unsigmoid: nq, bs, 2/4
- - valid_ratios/spatial_shapes: bs, nlevel, 2
- """
- output = tgt
-
- intermediate = []
- reference_points = refpoints_unsigmoid.sigmoid()
- ref_points = [reference_points]
-
- for layer_id, layer in enumerate(self.layers):
-
- if reference_points.shape[-1] == 4:
- reference_points_input = (
- reference_points[:, :, None]
- * torch.cat([valid_ratios, valid_ratios], -1)[None, :]
- ) # nq, bs, nlevel, 4
- else:
- assert reference_points.shape[-1] == 2
- reference_points_input = reference_points[:, :, None] * valid_ratios[None, :]
- query_sine_embed = gen_sineembed_for_position(
- reference_points_input[:, :, 0, :]
- ) # nq, bs, 256*2
-
- # conditional query
- raw_query_pos = self.ref_point_head(query_sine_embed) # nq, bs, 256
- pos_scale = self.query_scale(output) if self.query_scale is not None else 1
- query_pos = pos_scale * raw_query_pos
- # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1':
- # if query_pos.isnan().any() | query_pos.isinf().any():
- # import ipdb; ipdb.set_trace()
-
- # main process
- output = layer(
- tgt=output,
- tgt_query_pos=query_pos,
- tgt_query_sine_embed=query_sine_embed,
- tgt_key_padding_mask=tgt_key_padding_mask,
- tgt_reference_points=reference_points_input,
- memory_text=memory_text,
- text_attention_mask=text_attention_mask,
- memory=memory,
- memory_key_padding_mask=memory_key_padding_mask,
- memory_level_start_index=level_start_index,
- memory_spatial_shapes=spatial_shapes,
- memory_pos=pos,
- self_attn_mask=tgt_mask,
- cross_attn_mask=memory_mask,
- )
- if output.isnan().any() | output.isinf().any():
- print(f"output layer_id {layer_id} is nan")
- try:
- num_nan = output.isnan().sum().item()
- num_inf = output.isinf().sum().item()
- print(f"num_nan {num_nan}, num_inf {num_inf}")
- except Exception as e:
- print(e)
- # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1':
- # import ipdb; ipdb.set_trace()
-
- # iter update
- if self.bbox_embed is not None:
- # box_holder = self.bbox_embed(output)
- # box_holder[..., :self.query_dim] += inverse_sigmoid(reference_points)
- # new_reference_points = box_holder[..., :self.query_dim].sigmoid()
-
- reference_before_sigmoid = inverse_sigmoid(reference_points)
- delta_unsig = self.bbox_embed[layer_id](output)
- outputs_unsig = delta_unsig + reference_before_sigmoid
- new_reference_points = outputs_unsig.sigmoid()
-
- reference_points = new_reference_points.detach()
- # if layer_id != self.num_layers - 1:
- ref_points.append(new_reference_points)
-
- intermediate.append(self.norm(output))
-
- return [
- [itm_out.transpose(0, 1) for itm_out in intermediate],
- [itm_refpoint.transpose(0, 1) for itm_refpoint in ref_points],
- ]
-
-
-class DeformableTransformerEncoderLayer(nn.Module):
- def __init__(
- self,
- d_model=256,
- d_ffn=1024,
- dropout=0.1,
- activation="relu",
- n_levels=4,
- n_heads=8,
- n_points=4,
- ):
- super().__init__()
-
- # self attention
- self.self_attn = MSDeformAttn(
- embed_dim=d_model,
- num_levels=n_levels,
- num_heads=n_heads,
- num_points=n_points,
- batch_first=True,
- )
- self.dropout1 = nn.Dropout(dropout)
- self.norm1 = nn.LayerNorm(d_model)
-
- # ffn
- self.linear1 = nn.Linear(d_model, d_ffn)
- self.activation = _get_activation_fn(activation, d_model=d_ffn)
- self.dropout2 = nn.Dropout(dropout)
- self.linear2 = nn.Linear(d_ffn, d_model)
- self.dropout3 = nn.Dropout(dropout)
- self.norm2 = nn.LayerNorm(d_model)
-
- @staticmethod
- def with_pos_embed(tensor, pos):
- return tensor if pos is None else tensor + pos
-
- def forward_ffn(self, src):
- src2 = self.linear2(self.dropout2(self.activation(self.linear1(src))))
- src = src + self.dropout3(src2)
- src = self.norm2(src)
- return src
-
- def forward(
- self, src, pos, reference_points, spatial_shapes, level_start_index, key_padding_mask=None
- ):
- # self attention
- # import ipdb; ipdb.set_trace()
- src2 = self.self_attn(
- query=self.with_pos_embed(src, pos),
- reference_points=reference_points,
- value=src,
- spatial_shapes=spatial_shapes,
- level_start_index=level_start_index,
- key_padding_mask=key_padding_mask,
- )
- src = src + self.dropout1(src2)
- src = self.norm1(src)
-
- # ffn
- src = self.forward_ffn(src)
-
- return src
-
-
-class DeformableTransformerDecoderLayer(nn.Module):
- def __init__(
- self,
- d_model=256,
- d_ffn=1024,
- dropout=0.1,
- activation="relu",
- n_levels=4,
- n_heads=8,
- n_points=4,
- use_text_feat_guide=False,
- use_text_cross_attention=False,
- ):
- super().__init__()
-
- # cross attention
- self.cross_attn = MSDeformAttn(
- embed_dim=d_model,
- num_levels=n_levels,
- num_heads=n_heads,
- num_points=n_points,
- batch_first=True,
- )
- self.dropout1 = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
- self.norm1 = nn.LayerNorm(d_model)
-
- # cross attention text
- if use_text_cross_attention:
- self.ca_text = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
- self.catext_dropout = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
- self.catext_norm = nn.LayerNorm(d_model)
-
- # self attention
- self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
- self.dropout2 = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
- self.norm2 = nn.LayerNorm(d_model)
-
- # ffn
- self.linear1 = nn.Linear(d_model, d_ffn)
- self.activation = _get_activation_fn(activation, d_model=d_ffn, batch_dim=1)
- self.dropout3 = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
- self.linear2 = nn.Linear(d_ffn, d_model)
- self.dropout4 = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
- self.norm3 = nn.LayerNorm(d_model)
-
- self.key_aware_proj = None
- self.use_text_feat_guide = use_text_feat_guide
- assert not use_text_feat_guide
- self.use_text_cross_attention = use_text_cross_attention
-
- def rm_self_attn_modules(self):
- self.self_attn = None
- self.dropout2 = None
- self.norm2 = None
-
- @staticmethod
- def with_pos_embed(tensor, pos):
- return tensor if pos is None else tensor + pos
-
- def forward_ffn(self, tgt):
- with torch.cuda.amp.autocast(enabled=False):
- tgt2 = self.linear2(self.dropout3(self.activation(self.linear1(tgt))))
- tgt = tgt + self.dropout4(tgt2)
- tgt = self.norm3(tgt)
- return tgt
-
- def forward(
- self,
- # for tgt
- tgt: Optional[Tensor], # nq, bs, d_model
- tgt_query_pos: Optional[Tensor] = None, # pos for query. MLP(Sine(pos))
- tgt_query_sine_embed: Optional[Tensor] = None, # pos for query. Sine(pos)
- tgt_key_padding_mask: Optional[Tensor] = None,
- tgt_reference_points: Optional[Tensor] = None, # nq, bs, 4
- memory_text: Optional[Tensor] = None, # bs, num_token, d_model
- text_attention_mask: Optional[Tensor] = None, # bs, num_token
- # for memory
- memory: Optional[Tensor] = None, # hw, bs, d_model
- memory_key_padding_mask: Optional[Tensor] = None,
- memory_level_start_index: Optional[Tensor] = None, # num_levels
- memory_spatial_shapes: Optional[Tensor] = None, # bs, num_levels, 2
- memory_pos: Optional[Tensor] = None, # pos for memory
- # sa
- self_attn_mask: Optional[Tensor] = None, # mask used for self-attention
- cross_attn_mask: Optional[Tensor] = None, # mask used for cross-attention
- ):
- """
- Input:
- - tgt/tgt_query_pos: nq, bs, d_model
- -
- """
- assert cross_attn_mask is None
-
- # self attention
- if self.self_attn is not None:
- # import ipdb; ipdb.set_trace()
- q = k = self.with_pos_embed(tgt, tgt_query_pos)
- tgt2 = self.self_attn(q, k, tgt, attn_mask=self_attn_mask)[0]
- tgt = tgt + self.dropout2(tgt2)
- tgt = self.norm2(tgt)
-
- if self.use_text_cross_attention:
- tgt2 = self.ca_text(
- self.with_pos_embed(tgt, tgt_query_pos),
- memory_text.transpose(0, 1),
- memory_text.transpose(0, 1),
- key_padding_mask=text_attention_mask,
- )[0]
- tgt = tgt + self.catext_dropout(tgt2)
- tgt = self.catext_norm(tgt)
-
- tgt2 = self.cross_attn(
- query=self.with_pos_embed(tgt, tgt_query_pos).transpose(0, 1),
- reference_points=tgt_reference_points.transpose(0, 1).contiguous(),
- value=memory.transpose(0, 1),
- spatial_shapes=memory_spatial_shapes,
- level_start_index=memory_level_start_index,
- key_padding_mask=memory_key_padding_mask,
- ).transpose(0, 1)
- tgt = tgt + self.dropout1(tgt2)
- tgt = self.norm1(tgt)
-
- # ffn
- tgt = self.forward_ffn(tgt)
-
- return tgt
-
-
-def build_transformer(args):
- return Transformer(
- d_model=args.hidden_dim,
- dropout=args.dropout,
- nhead=args.nheads,
- num_queries=args.num_queries,
- dim_feedforward=args.dim_feedforward,
- num_encoder_layers=args.enc_layers,
- num_decoder_layers=args.dec_layers,
- normalize_before=args.pre_norm,
- return_intermediate_dec=True,
- query_dim=args.query_dim,
- activation=args.transformer_activation,
- num_patterns=args.num_patterns,
- num_feature_levels=args.num_feature_levels,
- enc_n_points=args.enc_n_points,
- dec_n_points=args.dec_n_points,
- learnable_tgt_init=True,
- # two stage
- two_stage_type=args.two_stage_type, # ['no', 'standard', 'early']
- embed_init_tgt=args.embed_init_tgt,
- use_text_enhancer=args.use_text_enhancer,
- use_fusion_layer=args.use_fusion_layer,
- use_checkpoint=args.use_checkpoint,
- use_transformer_ckpt=args.use_transformer_ckpt,
- use_text_cross_attention=args.use_text_cross_attention,
- text_dropout=args.text_dropout,
- fusion_dropout=args.fusion_dropout,
- fusion_droppath=args.fusion_droppath,
- )
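In the "standard" two-stage branch of Transformer.forward above, the num_queries highest-scoring encoder tokens are selected and their box proposals and features are gathered as decoder queries. A minimal sketch of that top-k selection, with hypothetical shapes in place of real model outputs:

import torch

bs, sum_hw, d_model, num_queries = 2, 100, 256, 10     # hypothetical sizes
output_memory = torch.randn(bs, sum_hw, d_model)        # encoder features
enc_logits = torch.randn(bs, sum_hw, 5)                 # per-token class logits
enc_coords_unsig = torch.randn(bs, sum_hw, 4)           # unsigmoided box proposals

topk_logits = enc_logits.max(-1)[0]                     # best score per token
topk_idx = torch.topk(topk_logits, num_queries, dim=1)[1]  # (bs, num_queries)

# gather boxes and content features for the selected tokens, as in forward()
refpoints = torch.gather(enc_coords_unsig, 1, topk_idx.unsqueeze(-1).repeat(1, 1, 4))
tgt = torch.gather(output_memory, 1, topk_idx.unsqueeze(-1).repeat(1, 1, d_model))
print(refpoints.shape, tgt.shape)  # torch.Size([2, 10, 4]) torch.Size([2, 10, 256])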
diff --git a/spaces/Cat125/text-generator-v2/README.md b/spaces/Cat125/text-generator-v2/README.md
deleted file mode 100644
index 765b3eff49fc6f5e895b70970ce63a1402835b44..0000000000000000000000000000000000000000
--- a/spaces/Cat125/text-generator-v2/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Text Generator v2
-emoji: 💻
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.27.0
-app_file: main.py
-pinned: true
-license: openrail
----
-
-This tool allows you to generate texts based on a given context.
\ No newline at end of file
diff --git a/spaces/ChandraMohanNayal/AutoGPT/tests/local_cache_test.py b/spaces/ChandraMohanNayal/AutoGPT/tests/local_cache_test.py
deleted file mode 100644
index bb10862656bb500f319ac231ff5bd5438d6fe7e2..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/tests/local_cache_test.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# sourcery skip: snake-case-functions
-"""Tests for LocalCache class"""
-import os
-import sys
-import unittest
-
-import pytest
-
-from autogpt.memory.local import LocalCache
-
-
-def mock_config() -> dict:
- """Mock the Config class"""
- return type(
- "MockConfig",
- (object,),
- {
- "debug_mode": False,
- "continuous_mode": False,
- "speak_mode": False,
- "memory_index": "auto-gpt",
- },
- )
-
-
-@pytest.mark.integration_test
-class TestLocalCache(unittest.TestCase):
- """Tests for LocalCache class"""
-
- def setUp(self) -> None:
- """Set up the test environment"""
- self.cfg = mock_config()
- self.cache = LocalCache(self.cfg)
-
- def test_add(self) -> None:
- """Test adding a text to the cache"""
- text = "Sample text"
- self.cache.add(text)
- self.assertIn(text, self.cache.data.texts)
-
- def test_clear(self) -> None:
- """Test clearing the cache"""
- self.cache.clear()
- self.assertEqual(self.cache.data.texts, [])
-
- def test_get(self) -> None:
- """Test getting a text from the cache"""
- text = "Sample text"
- self.cache.add(text)
- result = self.cache.get(text)
- self.assertEqual(result, [text])
-
- def test_get_relevant(self) -> None:
- """Test getting relevant texts from the cache"""
- text1 = "Sample text 1"
- text2 = "Sample text 2"
- self.cache.add(text1)
- self.cache.add(text2)
- result = self.cache.get_relevant(text1, 1)
- self.assertEqual(result, [text1])
-
- def test_get_stats(self) -> None:
- """Test getting the cache stats"""
- text = "Sample text"
- self.cache.add(text)
- stats = self.cache.get_stats()
- self.assertEqual(stats, (4, self.cache.data.embeddings.shape))
diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/infer.py b/spaces/ChrisPreston/diff-svc_minato_aqua/infer.py
deleted file mode 100644
index 3c3022270cc8e04cd1b7f48adbef2cf961bd7c6d..0000000000000000000000000000000000000000
--- a/spaces/ChrisPreston/diff-svc_minato_aqua/infer.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import io
-from pathlib import Path
-
-import numpy as np
-import soundfile
-
-from infer_tools import infer_tool
-from infer_tools import slicer
-from infer_tools.infer_tool import Svc
-from utils.hparams import hparams
-
-
-def run_clip(raw_audio_path, svc_model, key, acc, use_crepe, spk_id=0, auto_key=False, out_path=None, slice_db=-40,
- **kwargs):
- print(f'code version:2023-01-22')
-
- clean_name = Path(raw_audio_path).name.split(".")[0]
- infer_tool.format_wav(raw_audio_path)
- wav_path = Path(raw_audio_path).with_suffix('.wav')
- key = svc_model.evaluate_key(wav_path, key, auto_key)
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
-
- count = 0
- f0_tst, f0_pred, audio = [], [], []
- for (slice_tag, data) in audio_data:
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
- length = int(np.ceil(len(data) / audio_sr * hparams['audio_sample_rate']))
- raw_path = io.BytesIO()
- soundfile.write(raw_path, data, audio_sr, format="wav")
- raw_path.seek(0)
- if slice_tag:
- print('jump empty segment')
- _f0_tst, _f0_pred, _audio = (
- np.zeros(int(np.ceil(length / hparams['hop_size']))),
- np.zeros(int(np.ceil(length / hparams['hop_size']))),
- np.zeros(length))
- else:
- _f0_tst, _f0_pred, _audio = svc_model.infer(raw_path, spk_id=spk_id, key=key, acc=acc, use_crepe=use_crepe)
- fix_audio = np.zeros(length)
- fix_audio[:] = np.mean(_audio)
- fix_audio[:len(_audio)] = _audio[0 if len(_audio) < len(fix_audio) else len(_audio) - len(fix_audio):]
- f0_tst.extend(_f0_tst)
- f0_pred.extend(_f0_pred)
- audio.extend(list(fix_audio))
- count += 1
- if out_path is None:
- out_path = f'./results/{clean_name}_{key}key_{project_name}_{hparams["residual_channels"]}_{hparams["residual_layers"]}_{int(step / 1000)}k_{accelerate}x.{kwargs["format"]}'
- soundfile.write(out_path, audio, hparams["audio_sample_rate"], 'PCM_16', format=out_path.split('.')[-1])
- return np.array(f0_tst), np.array(f0_pred), audio
-
-
-if __name__ == '__main__':
-    # Project folder name, the one used during training
- project_name = "open-aqua"
- model_path = f'./checkpoints/{project_name}/model_ckpt_steps_90000.ckpt'
- config_path = f'./checkpoints/{project_name}/config.yaml'
-
-    # Multiple wav/ogg files are supported; place them in the raw folder, with file extensions
- file_names = ["横竖撇点折-main-2key.wav"]
- spk_id = "single"
-    # Adaptive key shifting (single-speaker models only)
-    auto_key = False
-    trans = [0]  # Pitch shift in semitones (positive or negative); one entry per file above, short lists are padded with the first value
-    # Acceleration factor
- accelerate = 1
- hubert_gpu = True
- wav_format = 'wav'
- step = int(model_path.split("_")[-1].split(".")[0])
-
-    # Do not modify below this line
- infer_tool.mkdir(["./raw", "./results"])
- infer_tool.fill_a_to_b(trans, file_names)
-
- model = Svc(project_name, config_path, hubert_gpu, model_path, onnx=False)
- for f_name, tran in zip(file_names, trans):
- if "." not in f_name:
- f_name += ".wav"
- audio_path = f"./raw/{f_name}"
- run_clip(raw_audio_path=audio_path, svc_model=model, key=tran, acc=accelerate, use_crepe=False,
- spk_id=spk_id, auto_key=auto_key, project_name=project_name, format=wav_format)
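Inside run_clip above, each inferred chunk is padded or trimmed to the expected segment length before the pieces are concatenated. A minimal sketch of that fix_audio logic, with hypothetical sample values rather than real model output:

import numpy as np

length = 8                             # expected samples for this segment
_audio = np.array([0.1, 0.2, 0.3])     # inferred samples (shorter than expected)
fix_audio = np.full(length, np.mean(_audio))   # pad with the mean level
fix_audio[:len(_audio)] = _audio[0 if len(_audio) < len(fix_audio) else len(_audio) - len(fix_audio):]
print(fix_audio)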
diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/Bing.py b/spaces/CofAI/chat/g4f/Provider/Providers/Bing.py
deleted file mode 100644
index 87e04ac82293c7e22068af431ac407bdee435a1b..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/g4f/Provider/Providers/Bing.py
+++ /dev/null
@@ -1,349 +0,0 @@
-import os
-import json
-import random
-import uuid
-import ssl
-import certifi
-import aiohttp
-import asyncio
-
-import requests
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://bing.com/chat'
-model = ['gpt-4']
-supports_stream = True
-needs_auth = False
-
-ssl_context = ssl.create_default_context()
-ssl_context.load_verify_locations(certifi.where())
-
-
-class optionsSets:
- optionSet: dict = {
- 'tone': str,
- 'optionsSets': list
- }
-
- jailbreak: dict = {
- "optionsSets": [
- 'saharasugg',
- 'enablenewsfc',
- 'clgalileo',
- 'gencontentv3',
- "nlu_direct_response_filter",
- "deepleo",
- "disable_emoji_spoken_text",
- "responsible_ai_policy_235",
- "enablemm",
- "h3precise"
- # "harmonyv3",
- "dtappid",
- "cricinfo",
- "cricinfov2",
- "dv3sugg",
- "nojbfedge"
- ]
- }
-
-
-class Defaults:
- delimiter = '\x1e'
- ip_address = f'13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}'
-
- allowedMessageTypes = [
- 'Chat',
- 'Disengaged',
- 'AdsQuery',
- 'SemanticSerp',
- 'GenerateContentQuery',
- 'SearchQuery',
- 'ActionRequest',
- 'Context',
- 'Progress',
- 'AdsQuery',
- 'SemanticSerp'
- ]
-
- sliceIds = [
-
- # "222dtappid",
- # "225cricinfo",
- # "224locals0"
-
- 'winmuid3tf',
- 'osbsdusgreccf',
- 'ttstmout',
- 'crchatrev',
- 'winlongmsgtf',
- 'ctrlworkpay',
- 'norespwtf',
- 'tempcacheread',
- 'temptacache',
- '505scss0',
- '508jbcars0',
- '515enbotdets0',
- '5082tsports',
- '515vaoprvs',
- '424dagslnv1s0',
- 'kcimgattcf',
- '427startpms0'
- ]
-
- location = {
- 'locale': 'en-US',
- 'market': 'en-US',
- 'region': 'US',
- 'locationHints': [
- {
- 'country': 'United States',
- 'state': 'California',
- 'city': 'Los Angeles',
- 'timezoneoffset': 8,
- 'countryConfidence': 8,
- 'Center': {
- 'Latitude': 34.0536909,
- 'Longitude': -118.242766
- },
- 'RegionType': 2,
- 'SourceType': 1
- }
- ],
- }
-
-
-def _format(msg: dict) -> str:
- return json.dumps(msg, ensure_ascii=False) + Defaults.delimiter
-
-
-async def create_conversation():
- for _ in range(5):
- create = requests.get('https://www.bing.com/turing/conversation/create',
- headers={
- 'authority': 'edgeservices.bing.com',
- 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
- 'accept-language': 'en-US,en;q=0.9',
- 'cache-control': 'max-age=0',
- 'sec-ch-ua': '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"',
- 'sec-ch-ua-arch': '"x86"',
- 'sec-ch-ua-bitness': '"64"',
- 'sec-ch-ua-full-version': '"110.0.1587.69"',
- 'sec-ch-ua-full-version-list': '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-model': '""',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-ch-ua-platform-version': '"15.0.0"',
- 'sec-fetch-dest': 'document',
- 'sec-fetch-mode': 'navigate',
- 'sec-fetch-site': 'none',
- 'sec-fetch-user': '?1',
- 'upgrade-insecure-requests': '1',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69',
- 'x-edge-shopping-flag': '1',
- 'x-forwarded-for': Defaults.ip_address
- })
-
-        response_json = create.json()
-        conversationId = response_json.get('conversationId')
-        clientId = response_json.get('clientId')
-        conversationSignature = response_json.get('conversationSignature')
-
-        # Only return once all three identifiers are present; otherwise retry.
-        if conversationId and clientId and conversationSignature:
-            return conversationId, clientId, conversationSignature
-
-    raise Exception('Failed to create conversation.')
-
-
-async def stream_generate(prompt: str, mode: optionsSets.optionSet = optionsSets.jailbreak, context: 'bool | str' = False):
- timeout = aiohttp.ClientTimeout(total=900)
- session = aiohttp.ClientSession(timeout=timeout)
-
- conversationId, clientId, conversationSignature = await create_conversation()
-
- wss = await session.ws_connect('wss://sydney.bing.com/sydney/ChatHub', ssl=ssl_context, autoping=False,
- headers={
- 'accept': 'application/json',
- 'accept-language': 'en-US,en;q=0.9',
- 'content-type': 'application/json',
- 'sec-ch-ua': '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"',
- 'sec-ch-ua-arch': '"x86"',
- 'sec-ch-ua-bitness': '"64"',
- 'sec-ch-ua-full-version': '"109.0.1518.78"',
- 'sec-ch-ua-full-version-list': '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-model': '',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-ch-ua-platform-version': '"15.0.0"',
- 'sec-fetch-dest': 'empty',
- 'sec-fetch-mode': 'cors',
- 'sec-fetch-site': 'same-origin',
- 'x-ms-client-request-id': str(uuid.uuid4()),
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- 'Referer': 'https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx',
- 'Referrer-Policy': 'origin-when-cross-origin',
- 'x-forwarded-for': Defaults.ip_address
- })
-
- await wss.send_str(_format({'protocol': 'json', 'version': 1}))
- await wss.receive(timeout=900)
-
- struct = {
- 'arguments': [
- {
- **mode,
- 'source': 'cib',
- 'allowedMessageTypes': Defaults.allowedMessageTypes,
- 'sliceIds': Defaults.sliceIds,
- 'traceId': os.urandom(16).hex(),
- 'isStartOfSession': True,
- 'message': Defaults.location | {
- 'author': 'user',
- 'inputMethod': 'Keyboard',
- 'text': prompt,
- 'messageType': 'Chat'
- },
- 'conversationSignature': conversationSignature,
- 'participant': {
- 'id': clientId
- },
- 'conversationId': conversationId
- }
- ],
- 'invocationId': '0',
- 'target': 'chat',
- 'type': 4
- }
-
- if context:
- struct['arguments'][0]['previousMessages'] = [
- {
- "author": "user",
- "description": context,
- "contextType": "WebPage",
- "messageType": "Context",
- "messageId": "discover-web--page-ping-mriduna-----"
- }
- ]
-
- await wss.send_str(_format(struct))
-
- final = False
- draw = False
- resp_txt = ''
- result_text = ''
- resp_txt_no_link = ''
- cache_text = ''
-
- while not final:
- msg = await wss.receive(timeout=900)
- objects = msg.data.split(Defaults.delimiter)
-
- for obj in objects:
- if obj is None or not obj:
- continue
-
- response = json.loads(obj)
- if response.get('type') == 1 and response['arguments'][0].get('messages',):
- if not draw:
- if (response['arguments'][0]['messages'][0]['contentOrigin'] != 'Apology') and not draw:
- resp_txt = result_text + \
- response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0].get(
- 'text', '')
- resp_txt_no_link = result_text + \
- response['arguments'][0]['messages'][0].get(
- 'text', '')
-
- if response['arguments'][0]['messages'][0].get('messageType',):
- resp_txt = (
- resp_txt
- + response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0]['inlines'][0].get('text')
- + '\n'
- )
- result_text = (
- result_text
- + response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0]['inlines'][0].get('text')
- + '\n'
- )
-
- if cache_text.endswith(' '):
- final = True
- if wss and not wss.closed:
- await wss.close()
- if session and not session.closed:
- await session.close()
-
- yield (resp_txt.replace(cache_text, ''))
- cache_text = resp_txt
-
- elif response.get('type') == 2:
- if response['item']['result'].get('error'):
- if wss and not wss.closed:
- await wss.close()
- if session and not session.closed:
- await session.close()
-
- raise Exception(
- f"{response['item']['result']['value']}: {response['item']['result']['message']}")
-
- if draw:
- cache = response['item']['messages'][1]['adaptiveCards'][0]['body'][0]['text']
- response['item']['messages'][1]['adaptiveCards'][0]['body'][0]['text'] = (
- cache + resp_txt)
-
- if (response['item']['messages'][-1]['contentOrigin'] == 'Apology' and resp_txt):
- response['item']['messages'][-1]['text'] = resp_txt_no_link
- response['item']['messages'][-1]['adaptiveCards'][0]['body'][0]['text'] = resp_txt
-
- # print('Preserved the message from being deleted', file=sys.stderr)
-
- final = True
- if wss and not wss.closed:
- await wss.close()
- if session and not session.closed:
- await session.close()
-
-
-def run(generator):
- loop = asyncio.new_event_loop()
- asyncio.set_event_loop(loop)
- gen = generator.__aiter__()
-
- while True:
- try:
- next_val = loop.run_until_complete(gen.__anext__())
- yield next_val
-
- except StopAsyncIteration:
- break
- #print('Done')
-
-def convert(messages):
- context = ""
-
- for message in messages:
- context += "[%s](#message)\n%s\n\n" % (message['role'],
- message['content'])
-
- return context
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- if len(messages) < 2:
- prompt = messages[0]['content']
- context = False
-
- else:
- prompt = messages[-1]['content']
- context = convert(messages[:-1])
-
- response = run(stream_generate(prompt, optionsSets.jailbreak, context))
- for token in response:
- yield (token)
-
- #print('Done')
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
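
The `run()` helper above drains an async generator from synchronous code by spinning up a private event loop and pulling items with `__anext__()` until `StopAsyncIteration`. A minimal, self-contained sketch of that bridge pattern, with a toy async generator standing in for `stream_generate()`:

```python
import asyncio

async def toy_stream():
    # Stand-in for stream_generate(): yields a few tokens asynchronously.
    for token in ["Hello", ", ", "world"]:
        await asyncio.sleep(0)
        yield token

def run_sync(agen):
    # Bridge an async generator into a plain generator, as run() does above.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    it = agen.__aiter__()
    try:
        while True:
            try:
                yield loop.run_until_complete(it.__anext__())
            except StopAsyncIteration:
                break
    finally:
        loop.close()

print("".join(run_sync(toy_stream())))  # -> Hello, world
```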
diff --git a/spaces/Cpp4App/Cpp4App/CDM/result_processing/merge_east.py b/spaces/Cpp4App/Cpp4App/CDM/result_processing/merge_east.py
deleted file mode 100644
index e7c8e51404340ff1b7ab764a908ce08a30ca64e7..0000000000000000000000000000000000000000
--- a/spaces/Cpp4App/Cpp4App/CDM/result_processing/merge_east.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import multiprocessing
-from glob import glob
-import time
-import json
-from tqdm import tqdm
-from os.path import join as pjoin, exists
-
-import merge
-
-
-input_root = 'E:\\Mulong\\Datasets\\rico\\combined'
-output_root = 'E:\\Mulong\\Result\\rico\\rico_uied\\rico_new_uied_cls\\merge'
-compo_root = 'E:\\Mulong\\Result\\rico\\rico_uied\\rico_new_uied_cls\\ip'
-text_root = 'E:\\Mulong\\Result\\east'
-
-data = json.load(open('E:\\Mulong\\Datasets\\rico\\instances_test.json', 'r'))
-input_paths_img = [pjoin(input_root, img['file_name'].split('/')[-1]) for img in data['images']]
-input_paths_img = sorted(input_paths_img, key=lambda x: int(x.split('\\')[-1][:-4])) # sorted by index
-
-# set the range of target inputs' indices
-num = 0
-start_index = 0
-end_index = 100000
-for input_path_img in input_paths_img:
- index = input_path_img.split('\\')[-1][:-4]
- if int(index) < start_index:
- continue
- if int(index) > end_index:
- break
-
- merge.incorporate(input_path_img, compo_root, text_root, output_root, resize_by_height=800, show=False)
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/transforms/__init__.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/transforms/__init__.py
deleted file mode 100644
index 892b9cec0c2bc59162196ef9243e9aedcdcbaee6..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/transforms/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-from .transforms import Compose
-from .transforms import Resize
-from .transforms import RandomHorizontalFlip
-from .transforms import ToTensor
-from .transforms import Normalize
-from .transforms import RandomCrop
-
-from .build import build_transforms
diff --git a/spaces/D008/space-from-a-model/app.py b/spaces/D008/space-from-a-model/app.py
deleted file mode 100644
index 63f412a5fa4de66349bf1b428d083d31e7c68b6f..0000000000000000000000000000000000000000
--- a/spaces/D008/space-from-a-model/app.py
+++ /dev/null
@@ -1,4 +0,0 @@
-import gradio as gr
-name_list = ['models/EleutherAI/gpt-j-6B']
-interfaces = [gr.Interface.load(name) for name in name_list]
-gr.mix.Parallel(*interfaces, title="example-title", description="example-description").launch()
\ No newline at end of file
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/XpmImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/XpmImagePlugin.py
deleted file mode 100644
index 5d5bdc3edfa7be8d235fd6ef4176cc6cebee541c..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/XpmImagePlugin.py
+++ /dev/null
@@ -1,128 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# XPM File handling
-#
-# History:
-# 1996-12-29 fl Created
-# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.7)
-#
-# Copyright (c) Secret Labs AB 1997-2001.
-# Copyright (c) Fredrik Lundh 1996-2001.
-#
-# See the README file for information on usage and redistribution.
-#
-
-
-import re
-
-from . import Image, ImageFile, ImagePalette
-from ._binary import o8
-
-# XPM header
-xpm_head = re.compile(b'"([0-9]*) ([0-9]*) ([0-9]*) ([0-9]*)')
-
-
-def _accept(prefix):
- return prefix[:9] == b"/* XPM */"
-
-
-##
-# Image plugin for X11 pixel maps.
-
-
-class XpmImageFile(ImageFile.ImageFile):
- format = "XPM"
- format_description = "X11 Pixel Map"
-
- def _open(self):
- if not _accept(self.fp.read(9)):
- msg = "not an XPM file"
- raise SyntaxError(msg)
-
- # skip forward to next string
- while True:
- s = self.fp.readline()
- if not s:
- msg = "broken XPM file"
- raise SyntaxError(msg)
- m = xpm_head.match(s)
- if m:
- break
-
- self._size = int(m.group(1)), int(m.group(2))
-
- pal = int(m.group(3))
- bpp = int(m.group(4))
-
- if pal > 256 or bpp != 1:
- msg = "cannot read this XPM file"
- raise ValueError(msg)
-
- #
- # load palette description
-
- palette = [b"\0\0\0"] * 256
-
- for _ in range(pal):
- s = self.fp.readline()
- if s[-2:] == b"\r\n":
- s = s[:-2]
- elif s[-1:] in b"\r\n":
- s = s[:-1]
-
- c = s[1]
- s = s[2:-2].split()
-
- for i in range(0, len(s), 2):
- if s[i] == b"c":
- # process colour key
- rgb = s[i + 1]
- if rgb == b"None":
- self.info["transparency"] = c
- elif rgb[:1] == b"#":
- # FIXME: handle colour names (see ImagePalette.py)
- rgb = int(rgb[1:], 16)
- palette[c] = (
- o8((rgb >> 16) & 255) + o8((rgb >> 8) & 255) + o8(rgb & 255)
- )
- else:
- # unknown colour
- msg = "cannot read this XPM file"
- raise ValueError(msg)
- break
-
- else:
- # missing colour key
- msg = "cannot read this XPM file"
- raise ValueError(msg)
-
- self.mode = "P"
- self.palette = ImagePalette.raw("RGB", b"".join(palette))
-
- self.tile = [("raw", (0, 0) + self.size, self.fp.tell(), ("P", 0, 1))]
-
- def load_read(self, bytes):
- #
- # load all image data in one chunk
-
- xsize, ysize = self.size
-
- s = [None] * ysize
-
- for i in range(ysize):
- s[i] = self.fp.readline()[1 : xsize + 1].ljust(xsize)
-
- return b"".join(s)
-
-
-#
-# Registry
-
-
-Image.register_open(XpmImageFile.format, XpmImageFile, _accept)
-
-Image.register_extension(XpmImageFile.format, ".xpm")
-
-Image.register_mime(XpmImageFile.format, "image/xpm")
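
The plugin above recognises XPM files by the `/* XPM */` magic bytes and then reads width, height, palette size, and characters-per-pixel out of the first quoted string via the `xpm_head` regex. A standalone check of that header-parsing step on a made-up values line (not a full XPM file):

```python
import re

xpm_head = re.compile(b'"([0-9]*) ([0-9]*) ([0-9]*) ([0-9]*)')

# Toy "values" string: 16x16 pixels, 2 palette entries, 1 character per pixel.
line = b'"16 16 2 1",'
m = xpm_head.match(line)
width, height, pal, bpp = (int(m.group(i)) for i in range(1, 5))
print(width, height, pal, bpp)  # -> 16 16 2 1
```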
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/L_T_S_H_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/L_T_S_H_.py
deleted file mode 100644
index e0ab0d021c47cf79e51cad326806e12ff97c9e00..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/L_T_S_H_.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from fontTools.misc.textTools import safeEval
-from . import DefaultTable
-import struct
-import array
-
-# XXX I've lowered the strictness, to make sure Apple's own Chicago
-# XXX gets through. They're looking into it, I hope to raise the standards
-# XXX back to normal eventually.
-
-
-class table_L_T_S_H_(DefaultTable.DefaultTable):
- def decompile(self, data, ttFont):
- version, numGlyphs = struct.unpack(">HH", data[:4])
- data = data[4:]
- assert version == 0, "unknown version: %s" % version
- assert (len(data) % numGlyphs) < 4, "numGlyphs doesn't match data length"
- # ouch: the assertion is not true in Chicago!
- # assert numGlyphs == ttFont['maxp'].numGlyphs
- yPels = array.array("B")
- yPels.frombytes(data)
- self.yPels = {}
- for i in range(numGlyphs):
- self.yPels[ttFont.getGlyphName(i)] = yPels[i]
-
- def compile(self, ttFont):
- version = 0
- names = list(self.yPels.keys())
- numGlyphs = len(names)
- yPels = [0] * numGlyphs
- # ouch: the assertion is not true in Chicago!
- # assert len(self.yPels) == ttFont['maxp'].numGlyphs == numGlyphs
- for name in names:
- yPels[ttFont.getGlyphID(name)] = self.yPels[name]
- yPels = array.array("B", yPels)
- return struct.pack(">HH", version, numGlyphs) + yPels.tobytes()
-
- def toXML(self, writer, ttFont):
- names = sorted(self.yPels.keys())
- for name in names:
- writer.simpletag("yPel", name=name, value=self.yPels[name])
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if not hasattr(self, "yPels"):
- self.yPels = {}
- if name != "yPel":
- return # ignore unknown tags
- self.yPels[attrs["name"]] = safeEval(attrs["value"])
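
The table above boils down to a 4-byte big-endian header (`version`, `numGlyphs`) followed by one unsigned byte of yPel data per glyph. A small round-trip of that wire format using only the standard library, independent of fontTools:

```python
import struct
import array

# Toy yPel values for three glyphs, indexed by glyph ID.
y_pels = [1, 2, 3]

# compile(): version 0, glyph count, then one byte per glyph.
data = struct.pack(">HH", 0, len(y_pels)) + array.array("B", y_pels).tobytes()

# decompile(): read the header back, then the byte array.
version, num_glyphs = struct.unpack(">HH", data[:4])
decoded = array.array("B")
decoded.frombytes(data[4:])
assert version == 0 and num_glyphs == 3 and list(decoded) == y_pels
```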
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/interpolatable.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/interpolatable.py
deleted file mode 100644
index d5428c2002286b7de284fff89a79f62cd6ebd656..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/interpolatable.py
+++ /dev/null
@@ -1,583 +0,0 @@
-"""
-Tool to find wrong contour order between different masters, and
-other interpolatability (or lack thereof) issues.
-
-Call as:
-$ fonttools varLib.interpolatable font1 font2 ...
-"""
-
-from fontTools.pens.basePen import AbstractPen, BasePen
-from fontTools.pens.pointPen import SegmentToPointPen
-from fontTools.pens.recordingPen import RecordingPen
-from fontTools.pens.statisticsPen import StatisticsPen
-from fontTools.pens.momentsPen import OpenContourError
-from collections import OrderedDict
-import math
-import itertools
-import sys
-
-
-def _rot_list(l, k):
- """Rotate list by k items forward. Ie. item at position 0 will be
- at position k in returned list. Negative k is allowed."""
- n = len(l)
- k %= n
- if not k:
- return l
- return l[n - k :] + l[: n - k]
-
-
-class PerContourPen(BasePen):
- def __init__(self, Pen, glyphset=None):
- BasePen.__init__(self, glyphset)
- self._glyphset = glyphset
- self._Pen = Pen
- self._pen = None
- self.value = []
-
- def _moveTo(self, p0):
- self._newItem()
- self._pen.moveTo(p0)
-
- def _lineTo(self, p1):
- self._pen.lineTo(p1)
-
- def _qCurveToOne(self, p1, p2):
- self._pen.qCurveTo(p1, p2)
-
- def _curveToOne(self, p1, p2, p3):
- self._pen.curveTo(p1, p2, p3)
-
- def _closePath(self):
- self._pen.closePath()
- self._pen = None
-
- def _endPath(self):
- self._pen.endPath()
- self._pen = None
-
- def _newItem(self):
- self._pen = pen = self._Pen()
- self.value.append(pen)
-
-
-class PerContourOrComponentPen(PerContourPen):
- def addComponent(self, glyphName, transformation):
- self._newItem()
- self.value[-1].addComponent(glyphName, transformation)
-
-
-class RecordingPointPen(BasePen):
- def __init__(self):
- self.value = []
-
- def beginPath(self, identifier=None, **kwargs):
- pass
-
- def endPath(self) -> None:
- pass
-
- def addPoint(self, pt, segmentType=None):
- self.value.append((pt, False if segmentType is None else True))
-
-
-def _vdiff(v0, v1):
- return tuple(b - a for a, b in zip(v0, v1))
-
-
-def _vlen(vec):
- v = 0
- for x in vec:
- v += x * x
- return v
-
-
-def _complex_vlen(vec):
- v = 0
- for x in vec:
- v += abs(x) * abs(x)
- return v
-
-
-def _matching_cost(G, matching):
- return sum(G[i][j] for i, j in enumerate(matching))
-
-
-def min_cost_perfect_bipartite_matching(G):
- n = len(G)
- try:
- from scipy.optimize import linear_sum_assignment
-
- rows, cols = linear_sum_assignment(G)
- assert (rows == list(range(n))).all()
- return list(cols), _matching_cost(G, cols)
- except ImportError:
- pass
-
- try:
- from munkres import Munkres
-
- cols = [None] * n
- for row, col in Munkres().compute(G):
- cols[row] = col
- return cols, _matching_cost(G, cols)
- except ImportError:
- pass
-
- if n > 6:
- raise Exception("Install Python module 'munkres' or 'scipy >= 0.17.0'")
-
- # Otherwise just brute-force
- permutations = itertools.permutations(range(n))
- best = list(next(permutations))
- best_cost = _matching_cost(G, best)
- for p in permutations:
- cost = _matching_cost(G, p)
- if cost < best_cost:
- best, best_cost = list(p), cost
- return best, best_cost
-
-
-def test(glyphsets, glyphs=None, names=None, ignore_missing=False):
- if names is None:
- names = glyphsets
- if glyphs is None:
- # `glyphs = glyphsets[0].keys()` is faster, certainly, but doesn't allow for sparse TTFs/OTFs given out of order
- # ... risks the sparse master being the first one, and only processing a subset of the glyphs
- glyphs = {g for glyphset in glyphsets for g in glyphset.keys()}
-
- hist = []
- problems = OrderedDict()
-
- def add_problem(glyphname, problem):
- problems.setdefault(glyphname, []).append(problem)
-
- for glyph_name in glyphs:
- try:
- m0idx = 0
- allVectors = []
- allNodeTypes = []
- allContourIsomorphisms = []
- for glyphset, name in zip(glyphsets, names):
- glyph = glyphset[glyph_name]
-
- if glyph is None:
- if not ignore_missing:
- add_problem(glyph_name, {"type": "missing", "master": name})
- allNodeTypes.append(None)
- allVectors.append(None)
- allContourIsomorphisms.append(None)
- continue
-
- perContourPen = PerContourOrComponentPen(
- RecordingPen, glyphset=glyphset
- )
- try:
- glyph.draw(perContourPen, outputImpliedClosingLine=True)
- except TypeError:
- glyph.draw(perContourPen)
- contourPens = perContourPen.value
- del perContourPen
-
- contourVectors = []
- contourIsomorphisms = []
- nodeTypes = []
- allNodeTypes.append(nodeTypes)
- allVectors.append(contourVectors)
- allContourIsomorphisms.append(contourIsomorphisms)
- for ix, contour in enumerate(contourPens):
- nodeVecs = tuple(instruction[0] for instruction in contour.value)
- nodeTypes.append(nodeVecs)
-
- stats = StatisticsPen(glyphset=glyphset)
- try:
- contour.replay(stats)
- except OpenContourError as e:
- add_problem(
- glyph_name,
- {"master": name, "contour": ix, "type": "open_path"},
- )
- continue
- size = math.sqrt(abs(stats.area)) * 0.5
- vector = (
- int(size),
- int(stats.meanX),
- int(stats.meanY),
- int(stats.stddevX * 2),
- int(stats.stddevY * 2),
- int(stats.correlation * size),
- )
- contourVectors.append(vector)
- # print(vector)
-
- # Check starting point
- if nodeVecs[0] == "addComponent":
- continue
- assert nodeVecs[0] == "moveTo"
- assert nodeVecs[-1] in ("closePath", "endPath")
- points = RecordingPointPen()
- converter = SegmentToPointPen(points, False)
- contour.replay(converter)
- # points.value is a list of pt,bool where bool is true if on-curve and false if off-curve;
- # now check all rotations and mirror-rotations of the contour and build list of isomorphic
- # possible starting points.
- bits = 0
- for pt, b in points.value:
- bits = (bits << 1) | b
- n = len(points.value)
- mask = (1 << n) - 1
- isomorphisms = []
- contourIsomorphisms.append(isomorphisms)
- for i in range(n):
- b = ((bits << i) & mask) | ((bits >> (n - i)))
- if b == bits:
- isomorphisms.append(
- _rot_list([complex(*pt) for pt, bl in points.value], i)
- )
- # Add mirrored rotations
- mirrored = list(reversed(points.value))
- reversed_bits = 0
- for pt, b in mirrored:
- reversed_bits = (reversed_bits << 1) | b
- for i in range(n):
- b = ((reversed_bits << i) & mask) | ((reversed_bits >> (n - i)))
- if b == bits:
- isomorphisms.append(
- _rot_list([complex(*pt) for pt, bl in mirrored], i)
- )
-
- # m0idx should be the index of the first non-None item in allNodeTypes,
- # else give it the first index of None, which is likely 0
- m0idx = allNodeTypes.index(
- next((x for x in allNodeTypes if x is not None), None)
- )
- # m0 is the first non-None item in allNodeTypes, or the first item if all are None
- m0 = allNodeTypes[m0idx]
- for i, m1 in enumerate(allNodeTypes[m0idx + 1 :]):
- if m1 is None:
- continue
- if len(m0) != len(m1):
- add_problem(
- glyph_name,
- {
- "type": "path_count",
- "master_1": names[m0idx],
- "master_2": names[m0idx + i + 1],
- "value_1": len(m0),
- "value_2": len(m1),
- },
- )
- if m0 == m1:
- continue
- for pathIx, (nodes1, nodes2) in enumerate(zip(m0, m1)):
- if nodes1 == nodes2:
- continue
- if len(nodes1) != len(nodes2):
- add_problem(
- glyph_name,
- {
- "type": "node_count",
- "path": pathIx,
- "master_1": names[m0idx],
- "master_2": names[m0idx + i + 1],
- "value_1": len(nodes1),
- "value_2": len(nodes2),
- },
- )
- continue
- for nodeIx, (n1, n2) in enumerate(zip(nodes1, nodes2)):
- if n1 != n2:
- add_problem(
- glyph_name,
- {
- "type": "node_incompatibility",
- "path": pathIx,
- "node": nodeIx,
- "master_1": names[m0idx],
- "master_2": names[m0idx + i + 1],
- "value_1": n1,
- "value_2": n2,
- },
- )
- continue
-
- # m0idx should be the index of the first non-None item in allVectors,
- # else give it the first index of None, which is likely 0
- m0idx = allVectors.index(
- next((x for x in allVectors if x is not None), None)
- )
- # m0 is the first non-None item in allVectors, or the first item if all are None
- m0 = allVectors[m0idx]
- for i, m1 in enumerate(allVectors[m0idx + 1 :]):
- if m1 is None:
- continue
- if len(m0) != len(m1):
- # We already reported this
- continue
- if not m0:
- continue
- costs = [[_vlen(_vdiff(v0, v1)) for v1 in m1] for v0 in m0]
- matching, matching_cost = min_cost_perfect_bipartite_matching(costs)
- identity_matching = list(range(len(m0)))
- identity_cost = sum(costs[i][i] for i in range(len(m0)))
- if (
- matching != identity_matching
- and matching_cost < identity_cost * 0.95
- ):
- add_problem(
- glyph_name,
- {
- "type": "contour_order",
- "master_1": names[m0idx],
- "master_2": names[m0idx + i + 1],
- "value_1": list(range(len(m0))),
- "value_2": matching,
- },
- )
- break
-
- # m0idx should be the index of the first non-None item in allContourIsomorphisms,
- # else give it the first index of None, which is likely 0
- m0idx = allContourIsomorphisms.index(
- next((x for x in allContourIsomorphisms if x is not None), None)
- )
- # m0 is the first non-None item in allContourIsomorphisms, or the first item if all are None
- m0 = allContourIsomorphisms[m0idx]
- for i, m1 in enumerate(allContourIsomorphisms[m0idx + 1 :]):
- if m1 is None:
- continue
- if len(m0) != len(m1):
- # We already reported this
- continue
- if not m0:
- continue
- for ix, (contour0, contour1) in enumerate(zip(m0, m1)):
- c0 = contour0[0]
- costs = [
- v for v in (_complex_vlen(_vdiff(c0, c1)) for c1 in contour1)
- ]
- min_cost = min(costs)
- first_cost = costs[0]
- if min_cost < first_cost * 0.95:
- add_problem(
- glyph_name,
- {
- "type": "wrong_start_point",
- "contour": ix,
- "master_1": names[m0idx],
- "master_2": names[m0idx + i + 1],
- },
- )
-
- except ValueError as e:
- add_problem(
- glyph_name,
- {"type": "math_error", "master": name, "error": e},
- )
- return problems
-
-
-def main(args=None):
- """Test for interpolatability issues between fonts"""
- import argparse
-
- parser = argparse.ArgumentParser(
- "fonttools varLib.interpolatable",
- description=main.__doc__,
- )
- parser.add_argument(
- "--glyphs",
- action="store",
-        help="Space-separated names of glyphs to check",
- )
- parser.add_argument(
- "--json",
- action="store_true",
- help="Output report in JSON format",
- )
- parser.add_argument(
- "--quiet",
- action="store_true",
- help="Only exit with code 1 or 0, no output",
- )
- parser.add_argument(
- "--ignore-missing",
- action="store_true",
- help="Will not report glyphs missing from sparse masters as errors",
- )
- parser.add_argument(
- "inputs",
- metavar="FILE",
- type=str,
- nargs="+",
- help="Input a single DesignSpace/Glyphs file, or multiple TTF/UFO files",
- )
-
- args = parser.parse_args(args)
-
- glyphs = set(args.glyphs.split()) if args.glyphs else None
-
- from os.path import basename
-
- fonts = []
- names = []
-
- if len(args.inputs) == 1:
- if args.inputs[0].endswith(".designspace"):
- from fontTools.designspaceLib import DesignSpaceDocument
-
- designspace = DesignSpaceDocument.fromfile(args.inputs[0])
- args.inputs = [master.path for master in designspace.sources]
-
- elif args.inputs[0].endswith(".glyphs"):
- from glyphsLib import GSFont, to_ufos
-
- gsfont = GSFont(args.inputs[0])
- fonts.extend(to_ufos(gsfont))
- names = ["%s-%s" % (f.info.familyName, f.info.styleName) for f in fonts]
- args.inputs = []
-
- elif args.inputs[0].endswith(".ttf"):
- from fontTools.ttLib import TTFont
-
- font = TTFont(args.inputs[0])
- if "gvar" in font:
- # Is variable font
- gvar = font["gvar"]
- # Gather all "master" locations
- locs = set()
- for variations in gvar.variations.values():
- for var in variations:
- loc = []
- for tag, val in sorted(var.axes.items()):
- loc.append((tag, val[1]))
- locs.add(tuple(loc))
- # Rebuild locs as dictionaries
- new_locs = [{}]
- names.append("()")
- for loc in sorted(locs, key=lambda v: (len(v), v)):
- names.append(str(loc))
- l = {}
- for tag, val in loc:
- l[tag] = val
- new_locs.append(l)
- locs = new_locs
- del new_locs
- # locs is all master locations now
-
- for loc in locs:
- fonts.append(font.getGlyphSet(location=loc, normalized=True))
-
- args.inputs = []
-
- for filename in args.inputs:
- if filename.endswith(".ufo"):
- from fontTools.ufoLib import UFOReader
-
- fonts.append(UFOReader(filename))
- else:
- from fontTools.ttLib import TTFont
-
- fonts.append(TTFont(filename))
-
- names.append(basename(filename).rsplit(".", 1)[0])
-
- glyphsets = []
- for font in fonts:
- if hasattr(font, "getGlyphSet"):
- glyphset = font.getGlyphSet()
- else:
- glyphset = font
- glyphsets.append({k: glyphset[k] for k in glyphset.keys()})
-
- if not glyphs:
- glyphs = set([gn for glyphset in glyphsets for gn in glyphset.keys()])
-
- for glyphset in glyphsets:
- glyphSetGlyphNames = set(glyphset.keys())
- diff = glyphs - glyphSetGlyphNames
- if diff:
- for gn in diff:
- glyphset[gn] = None
-
- problems = test(
- glyphsets, glyphs=glyphs, names=names, ignore_missing=args.ignore_missing
- )
-
- if not args.quiet:
- if args.json:
- import json
-
- print(json.dumps(problems))
- else:
- for glyph, glyph_problems in problems.items():
- print(f"Glyph {glyph} was not compatible: ")
- for p in glyph_problems:
- if p["type"] == "missing":
- print(" Glyph was missing in master %s" % p["master"])
- if p["type"] == "open_path":
- print(" Glyph has an open path in master %s" % p["master"])
- if p["type"] == "path_count":
- print(
- " Path count differs: %i in %s, %i in %s"
- % (p["value_1"], p["master_1"], p["value_2"], p["master_2"])
- )
- if p["type"] == "node_count":
- print(
- " Node count differs in path %i: %i in %s, %i in %s"
- % (
- p["path"],
- p["value_1"],
- p["master_1"],
- p["value_2"],
- p["master_2"],
- )
- )
- if p["type"] == "node_incompatibility":
- print(
-                            " Node %i incompatible in path %i: %s in %s, %s in %s"
- % (
- p["node"],
- p["path"],
- p["value_1"],
- p["master_1"],
- p["value_2"],
- p["master_2"],
- )
- )
- if p["type"] == "contour_order":
- print(
- " Contour order differs: %s in %s, %s in %s"
- % (
- p["value_1"],
- p["master_1"],
- p["value_2"],
- p["master_2"],
- )
- )
- if p["type"] == "wrong_start_point":
- print(
- " Contour %d start point differs: %s, %s"
- % (
- p["contour"],
- p["master_1"],
- p["master_2"],
- )
- )
- if p["type"] == "math_error":
- print(
- " Miscellaneous error in %s: %s"
- % (
- p["master"],
- p["error"],
- )
- )
- if problems:
- return problems
-
-
-if __name__ == "__main__":
- import sys
-
- problems = main()
- sys.exit(int(bool(problems)))
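
The contour-order check above reduces each pair of masters to a minimum-cost bipartite matching between per-contour statistics vectors, using `scipy.optimize.linear_sum_assignment` when SciPy is installed. A toy illustration of that core step on a made-up cost matrix (the numbers are invented for the example):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# G[i][j]: distance between contour i in one master and contour j in another.
G = np.array([
    [0.1, 9.0, 9.0],
    [9.0, 9.0, 0.2],
    [9.0, 0.3, 9.0],
])

rows, cols = linear_sum_assignment(G)
matching = [int(c) for c in cols]                  # -> [0, 2, 1]
matching_cost = float(G[rows, cols].sum())         # -> 0.6
identity_cost = float(sum(G[i][i] for i in range(len(G))))

# The tool flags a contour_order problem when a reordering is clearly cheaper
# than keeping contours in their original order.
if matching != list(range(len(G))) and matching_cost < identity_cost * 0.95:
    print("contour order differs:", matching)
```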
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/ModifyUpload-77b0d4b2.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/ModifyUpload-77b0d4b2.css
deleted file mode 100644
index c78d71f8b6eaf75f8134375ed017f1c03b6edf1a..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/ModifyUpload-77b0d4b2.css
+++ /dev/null
@@ -1 +0,0 @@
-div.svelte-116rqfv{cursor:pointer;width:var(--size-full);height:var(--size-full)}.center.svelte-116rqfv{text-align:center}.flex.svelte-116rqfv{display:flex;justify-content:center;align-items:center}input.svelte-116rqfv{display:none}div.svelte-19sk1im{display:flex;top:var(--size-2);right:var(--size-2);justify-content:flex-end;gap:var(--spacing-sm);z-index:var(--layer-1)}.not-absolute.svelte-19sk1im{margin:var(--size-1)}
diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/routes/conversation/[id]/message/[messageId]/prompt/+server.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/routes/conversation/[id]/message/[messageId]/prompt/+server.ts
deleted file mode 100644
index bd22a1e6bdc4541f4dbd3ca38c97df1ffa08782c..0000000000000000000000000000000000000000
--- a/spaces/DaFujaTyping/hf-Chat-ui/src/routes/conversation/[id]/message/[messageId]/prompt/+server.ts
+++ /dev/null
@@ -1,51 +0,0 @@
-import { buildPrompt } from "$lib/buildPrompt.js";
-import { collections } from "$lib/server/database";
-import { models } from "$lib/server/models.js";
-import { error } from "@sveltejs/kit";
-import { ObjectId } from "mongodb";
-
-export async function GET({ params, locals }) {
- const convId = new ObjectId(params.id);
-
- const conv = await collections.conversations.findOne({
- _id: convId,
- sessionId: locals.sessionId,
- });
-
- if (!conv) {
- throw error(404, "Conversation not found");
- }
-
- const messageId = params.messageId;
-
- const messageIndex = conv.messages.findIndex((msg) => msg.id === messageId);
-
- if (messageIndex === -1) {
- throw error(404, "Message not found");
- }
-
- const model = models.find((m) => m.id === conv.model);
-
- if (!model) {
- throw error(404, "Conversation model not found");
- }
-
- const prompt = buildPrompt(conv.messages.slice(0, messageIndex + 1), model);
-
- return new Response(
- JSON.stringify(
- {
- note: "This is a preview of the prompt that will be sent to the model when retrying the message. It may differ from what was sent in the past if the parameters have been updated since",
- prompt,
- model: model.name,
- parameters: {
- ...model.parameters,
- return_full_text: false,
- },
- },
- null,
- 2
- ),
- { headers: { "Content-Type": "application/json" } }
- );
-}
diff --git a/spaces/Dagfinn1962/CPU/app.py b/spaces/Dagfinn1962/CPU/app.py
deleted file mode 100644
index 0246188adf7ba9b7c43e736e2ce4b15d0cee9850..0000000000000000000000000000000000000000
--- a/spaces/Dagfinn1962/CPU/app.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import gradio as gr
-import torch
-import numpy as np
-import modin.pandas as pd
-from PIL import Image
-from diffusers import DiffusionPipeline, StableDiffusionLatentUpscalePipeline
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-pipe = DiffusionPipeline.from_pretrained("dreamlike-art/dreamlike-photoreal-2.0", torch_dtype=torch.float16, safety_checker=None)
-upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained("stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16)
-upscaler = upscaler.to(device)
-pipe = pipe.to(device)
-
-def genie (Prompt, negative_prompt, height, width, scale, steps, seed, upscale, upscale_prompt, upscale_neg, upscale_scale, upscale_steps):
- generator = torch.Generator(device=device).manual_seed(seed)
- if upscale == "Yes":
- low_res_latents = pipe(Prompt, negative_prompt=negative_prompt, height=height, width=width, num_inference_steps=steps, guidance_scale=scale, generator=generator, output_type="latent").images
- image = upscaler(prompt=upscale_prompt, negative_prompt=upscale_neg, image=low_res_latents, num_inference_steps=upscale_steps, guidance_scale=upscale_scale, generator=generator).images[0]
- else:
- image = pipe(Prompt, negative_prompt=negative_prompt, height=height, width=width, num_inference_steps=steps, guidance_scale=scale, generator=generator).images[0]
- return image
-
-gr.Interface(theme='ParityError/Anime', fn=genie, inputs=[gr.Textbox(label='Input field right under here(Prompt)'),
- gr.Textbox(label='What You dont want (Negative Prompt)'),
- gr.Slider(512, 1024, 768, step=128, label='Height'),
- gr.Slider(512, 1024, 768, step=128, label='Width'),
- gr.Slider(1, maximum=15, value=10, step=.25),
- gr.Slider(25, maximum=100, value=50, step=25),
- gr.Slider(minimum=1, step=1, maximum=9999999999999999, randomize=True),
- # gr.Radio(["Yes", "No"], label='Upscale?'),
- #gr.Textbox(label='Upscaler Prompt: Optional'),
- #gr.Textbox(label='Upscaler Negative Prompt: Both Optional And Experimental'),
- #gr.Slider(minimum=0, maximum=15, value=0, step=1, label='Upscale Guidance Scale'),
- #gr.Slider(minimum=5, maximum=25, value=5, step=5, label='Upscaler Iterations')
-
- ],
- outputs=gr.Image(label='Generated Image'),
- title="Daylight SD (CPU)",
-                    description="Info: Daylight SD (GPU) This is a lightweight App mostly to show how Stable diffusion works. Aichatbot.ai is a project into Image Creation with Stable Diffusion. If you like our Apps Please consider signing up underneath. We will speed up the Apps with a little contribution from Our Members. PS! We are sorry to repeat but this app is also on Huggingface.co!",
- article = "Online App: www.aichatbot.ai").launch(debug=True, max_threads=True)
diff --git a/spaces/DaleChen/AutoGPT/autogpt/speech/__init__.py b/spaces/DaleChen/AutoGPT/autogpt/speech/__init__.py
deleted file mode 100644
index 2ff0d2bf48dc356bf810cb5a2063d6774e5fec6e..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/speech/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-"""This module contains the speech recognition and speech synthesis functions."""
-from autogpt.speech.say import say_text
-
-__all__ = ["say_text"]
diff --git a/spaces/Datatrooper/boston_housing/README.md b/spaces/Datatrooper/boston_housing/README.md
deleted file mode 100644
index 49a06dec93fca8e5c8dcd6839c82b19705cd2df1..0000000000000000000000000000000000000000
--- a/spaces/Datatrooper/boston_housing/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Boston Housing
-emoji: 🚀
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DemoLou/moe-tts/text/japanese.py b/spaces/DemoLou/moe-tts/text/japanese.py
deleted file mode 100644
index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000
--- a/spaces/DemoLou/moe-tts/text/japanese.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import re
-from unidecode import unidecode
-import pyopenjtalk
-
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (symbol, Japanese) pairs for marks:
-_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('%', 'パーセント')
-]]
-
-# List of (romaji, ipa) pairs for marks:
-_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ts', 'ʦ'),
- ('u', 'ɯ'),
- ('j', 'ʥ'),
- ('y', 'j'),
- ('ni', 'n^i'),
- ('nj', 'n^'),
- ('hi', 'çi'),
- ('hj', 'ç'),
- ('f', 'ɸ'),
- ('I', 'i*'),
- ('U', 'ɯ*'),
- ('r', 'ɾ')
-]]
-
-# List of (romaji, ipa2) pairs for marks:
-_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('u', 'ɯ'),
- ('ʧ', 'tʃ'),
- ('j', 'dʑ'),
- ('y', 'j'),
- ('ni', 'n^i'),
- ('nj', 'n^'),
- ('hi', 'çi'),
- ('hj', 'ç'),
- ('f', 'ɸ'),
- ('I', 'i*'),
- ('U', 'ɯ*'),
- ('r', 'ɾ')
-]]
-
-# List of (consonant, sokuon) pairs:
-_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'Q([↑↓]*[kg])', r'k#\1'),
- (r'Q([↑↓]*[tdjʧ])', r't#\1'),
- (r'Q([↑↓]*[sʃ])', r's\1'),
- (r'Q([↑↓]*[pb])', r'p#\1')
-]]
-
-# List of (consonant, hatsuon) pairs:
-_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'N([↑↓]*[pbm])', r'm\1'),
- (r'N([↑↓]*[ʧʥj])', r'n^\1'),
- (r'N([↑↓]*[tdn])', r'n\1'),
- (r'N([↑↓]*[kg])', r'ŋ\1')
-]]
-
-
-def symbols_to_japanese(text):
- for regex, replacement in _symbols_to_japanese:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_romaji_with_accent(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- text = symbols_to_japanese(text)
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = ''
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- if text != '':
- text += ' '
- labels = pyopenjtalk.extract_fullcontext(sentence)
- for n, label in enumerate(labels):
- phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
- if phoneme not in ['sil', 'pau']:
- text += phoneme.replace('ch', 'ʧ').replace('sh',
- 'ʃ').replace('cl', 'Q')
- else:
- continue
- # n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
- a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
- a2 = int(re.search(r"\+(\d+)\+", label).group(1))
- a3 = int(re.search(r"\+(\d+)/", label).group(1))
- if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']:
- a2_next = -1
- else:
- a2_next = int(
- re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
- # Accent phrase boundary
- if a3 == 1 and a2_next == 1:
- text += ' '
- # Falling
- elif a1 == 0 and a2_next == a2 + 1:
- text += '↓'
- # Rising
- elif a2 == 1 and a2_next == 2:
- text += '↑'
- if i < len(marks):
- text += unidecode(marks[i]).replace(' ', '')
- return text
-
-
-def get_real_sokuon(text):
- for regex, replacement in _real_sokuon:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def get_real_hatsuon(text):
- for regex, replacement in _real_hatsuon:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa(text):
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
- text = re.sub(
- r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
- text = get_real_sokuon(text)
- text = get_real_hatsuon(text)
- for regex, replacement in _romaji_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa2(text):
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
- text = get_real_sokuon(text)
- text = get_real_hatsuon(text)
- for regex, replacement in _romaji_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa3(text):
- text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace(
- 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a')
- text = re.sub(
- r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
- text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text)
- return text
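
Several of the helpers above follow the same mechanism: tables of `(compiled regex, replacement)` pairs applied in order with `re.sub`. A self-contained toy version of that mechanism, using a two-entry subset of the romaji-to-IPA table (illustrative only, not the full mapping):

```python
import re

# Same shape as _romaji_to_ipa above, but only two entries for the demo.
_toy_pairs = [(re.compile(pat), rep) for pat, rep in [
    ('ts', 'ʦ'),
    ('u', 'ɯ'),
]]

def apply_pairs(text, pairs):
    # Apply each substitution in sequence, exactly as the helpers above do.
    for regex, replacement in pairs:
        text = re.sub(regex, replacement, text)
    return text

print(apply_pairs('tsu', _toy_pairs))  # -> 'ʦɯ'
```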
diff --git a/spaces/DexterSptizu/drug_interaction/app.py b/spaces/DexterSptizu/drug_interaction/app.py
deleted file mode 100644
index 3b49730dc3ca63298ad44ea338d97bb0ed36ce90..0000000000000000000000000000000000000000
--- a/spaces/DexterSptizu/drug_interaction/app.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import gradio as gr
-import requests
-
-def check_interaction(drug1, drug2):
- API_ENDPOINT = "https://api.fda.gov/drug/event.json"
- SEARCH_TEMPLATE = '?search=patient.drug.medicinalproduct:{}+AND+patient.drug.medicinalproduct:{}&count=patient.reaction.reactionmeddrapt.exact'
-
- search_string = SEARCH_TEMPLATE.format(drug1, drug2)
- response = requests.get(API_ENDPOINT + search_string)
- data = response.json()
-
- if "results" in data:
- interactions = [result['term'] for result in data['results']]
- return interactions
- else:
- return "No known interactions"
-
-iface = gr.Interface(fn=check_interaction,
- inputs=["text", "text"],
- outputs="text")
-
-iface.launch()
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/upfirdn2d.py b/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/upfirdn2d.py
deleted file mode 100644
index ceeac2b9834e33b7c601c28bf27f32aa91c69256..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/upfirdn2d.py
+++ /dev/null
@@ -1,384 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Custom PyTorch ops for efficient resampling of 2D images."""
-
-import os
-import warnings
-import numpy as np
-import torch
-import traceback
-
-from .. import custom_ops
-from .. import misc
-from . import conv2d_gradfix
-
-#----------------------------------------------------------------------------
-
-_inited = False
-_plugin = None
-
-def _init():
- global _inited, _plugin
- if not _inited:
- sources = ['upfirdn2d.cpp', 'upfirdn2d.cu']
- sources = [os.path.join(os.path.dirname(__file__), s) for s in sources]
- try:
- _plugin = custom_ops.get_plugin('upfirdn2d_plugin', sources=sources, extra_cuda_cflags=['--use_fast_math'])
- except:
- warnings.warn('Failed to build CUDA kernels for upfirdn2d. Falling back to slow reference implementation. Details:\n\n' + traceback.format_exc())
- return _plugin is not None
-
-def _parse_scaling(scaling):
- if isinstance(scaling, int):
- scaling = [scaling, scaling]
- assert isinstance(scaling, (list, tuple))
- assert all(isinstance(x, int) for x in scaling)
- sx, sy = scaling
- assert sx >= 1 and sy >= 1
- return sx, sy
-
-def _parse_padding(padding):
- if isinstance(padding, int):
- padding = [padding, padding]
- assert isinstance(padding, (list, tuple))
- assert all(isinstance(x, int) for x in padding)
- if len(padding) == 2:
- padx, pady = padding
- padding = [padx, padx, pady, pady]
- padx0, padx1, pady0, pady1 = padding
- return padx0, padx1, pady0, pady1
-
-def _get_filter_size(f):
- if f is None:
- return 1, 1
- assert isinstance(f, torch.Tensor) and f.ndim in [1, 2]
- fw = f.shape[-1]
- fh = f.shape[0]
- with misc.suppress_tracer_warnings():
- fw = int(fw)
- fh = int(fh)
- misc.assert_shape(f, [fh, fw][:f.ndim])
- assert fw >= 1 and fh >= 1
- return fw, fh
-
-#----------------------------------------------------------------------------
-
-def setup_filter(f, device=torch.device('cpu'), normalize=True, flip_filter=False, gain=1, separable=None):
- r"""Convenience function to setup 2D FIR filter for `upfirdn2d()`.
-
- Args:
- f: Torch tensor, numpy array, or python list of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable),
- `[]` (impulse), or
- `None` (identity).
- device: Result device (default: cpu).
- normalize: Normalize the filter so that it retains the magnitude
- for constant input signal (DC)? (default: True).
- flip_filter: Flip the filter? (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- separable: Return a separable filter? (default: select automatically).
-
- Returns:
- Float32 tensor of the shape
- `[filter_height, filter_width]` (non-separable) or
- `[filter_taps]` (separable).
- """
- # Validate.
- if f is None:
- f = 1
- f = torch.as_tensor(f, dtype=torch.float32)
- assert f.ndim in [0, 1, 2]
- assert f.numel() > 0
- if f.ndim == 0:
- f = f[np.newaxis]
-
- # Separable?
- if separable is None:
- separable = (f.ndim == 1 and f.numel() >= 8)
- if f.ndim == 1 and not separable:
- f = f.ger(f)
- assert f.ndim == (1 if separable else 2)
-
- # Apply normalize, flip, gain, and device.
- if normalize:
- f /= f.sum()
- if flip_filter:
- f = f.flip(list(range(f.ndim)))
- f = f * (gain ** (f.ndim / 2))
- f = f.to(device=device)
- return f
-
-#----------------------------------------------------------------------------
-
-def upfirdn2d(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1, impl='cuda'):
- r"""Pad, upsample, filter, and downsample a batch of 2D images.
-
- Performs the following sequence of operations for each channel:
-
- 1. Upsample the image by inserting N-1 zeros after each pixel (`up`).
-
- 2. Pad the image with the specified number of zeros on each side (`padding`).
- Negative padding corresponds to cropping the image.
-
- 3. Convolve the image with the specified 2D FIR filter (`f`), shrinking it
- so that the footprint of all output pixels lies within the input image.
-
- 4. Downsample the image by keeping every Nth pixel (`down`).
-
- This sequence of operations bears close resemblance to scipy.signal.upfirdn().
- The fused op is considerably more efficient than performing the same calculation
- using standard PyTorch ops. It supports gradients of arbitrary order.
-
- Args:
- x: Float32/float64/float16 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- f: Float32 FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
- up: Integer upsampling factor. Can be a single int or a list/tuple
- `[x, y]` (default: 1).
- down: Integer downsampling factor. Can be a single int or a list/tuple
- `[x, y]` (default: 1).
- padding: Padding with respect to the upsampled image. Can be a single number
- or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- flip_filter: False = convolution, True = correlation (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- assert isinstance(x, torch.Tensor)
- assert impl in ['ref', 'cuda']
- if impl == 'cuda' and x.device.type == 'cuda' and _init():
- return _upfirdn2d_cuda(up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain).apply(x, f)
- return _upfirdn2d_ref(x, f, up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain)
-
-#----------------------------------------------------------------------------
-
-@misc.profiled_function
-def _upfirdn2d_ref(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1):
- """Slow reference implementation of `upfirdn2d()` using standard PyTorch ops.
- """
- # Validate arguments.
- assert isinstance(x, torch.Tensor) and x.ndim == 4
- if f is None:
- f = torch.ones([1, 1], dtype=torch.float32, device=x.device)
- assert isinstance(f, torch.Tensor) and f.ndim in [1, 2]
- assert f.dtype == torch.float32 and not f.requires_grad
- batch_size, num_channels, in_height, in_width = x.shape
- upx, upy = _parse_scaling(up)
- downx, downy = _parse_scaling(down)
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
-
- # Upsample by inserting zeros.
- x = x.reshape([batch_size, num_channels, in_height, 1, in_width, 1])
- x = torch.nn.functional.pad(x, [0, upx - 1, 0, 0, 0, upy - 1])
- x = x.reshape([batch_size, num_channels, in_height * upy, in_width * upx])
-
- # Pad or crop.
- x = torch.nn.functional.pad(x, [max(padx0, 0), max(padx1, 0), max(pady0, 0), max(pady1, 0)])
- x = x[:, :, max(-pady0, 0) : x.shape[2] - max(-pady1, 0), max(-padx0, 0) : x.shape[3] - max(-padx1, 0)]
-
- # Setup filter.
- f = f * (gain ** (f.ndim / 2))
- f = f.to(x.dtype)
- if not flip_filter:
- f = f.flip(list(range(f.ndim)))
-
- # Convolve with the filter.
- f = f[np.newaxis, np.newaxis].repeat([num_channels, 1] + [1] * f.ndim)
- if f.ndim == 4:
- x = conv2d_gradfix.conv2d(input=x, weight=f, groups=num_channels)
- else:
- x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(2), groups=num_channels)
- x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(3), groups=num_channels)
-
- # Downsample by throwing away pixels.
- x = x[:, :, ::downy, ::downx]
- return x
-
-#----------------------------------------------------------------------------
-
-_upfirdn2d_cuda_cache = dict()
-
-def _upfirdn2d_cuda(up=1, down=1, padding=0, flip_filter=False, gain=1):
- """Fast CUDA implementation of `upfirdn2d()` using custom ops.
- """
- # Parse arguments.
- upx, upy = _parse_scaling(up)
- downx, downy = _parse_scaling(down)
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
-
- # Lookup from cache.
- key = (upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain)
- if key in _upfirdn2d_cuda_cache:
- return _upfirdn2d_cuda_cache[key]
-
- # Forward op.
- class Upfirdn2dCuda(torch.autograd.Function):
- @staticmethod
- def forward(ctx, x, f): # pylint: disable=arguments-differ
- assert isinstance(x, torch.Tensor) and x.ndim == 4
- if f is None:
- f = torch.ones([1, 1], dtype=torch.float32, device=x.device)
- assert isinstance(f, torch.Tensor) and f.ndim in [1, 2]
- y = x
- if f.ndim == 2:
- y = _plugin.upfirdn2d(y, f, upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain)
- else:
- y = _plugin.upfirdn2d(y, f.unsqueeze(0), upx, 1, downx, 1, padx0, padx1, 0, 0, flip_filter, np.sqrt(gain))
- y = _plugin.upfirdn2d(y, f.unsqueeze(1), 1, upy, 1, downy, 0, 0, pady0, pady1, flip_filter, np.sqrt(gain))
- ctx.save_for_backward(f)
- ctx.x_shape = x.shape
- return y
-
- @staticmethod
- def backward(ctx, dy): # pylint: disable=arguments-differ
- f, = ctx.saved_tensors
- _, _, ih, iw = ctx.x_shape
- _, _, oh, ow = dy.shape
- fw, fh = _get_filter_size(f)
- p = [
- fw - padx0 - 1,
- iw * upx - ow * downx + padx0 - upx + 1,
- fh - pady0 - 1,
- ih * upy - oh * downy + pady0 - upy + 1,
- ]
- dx = None
- df = None
-
- if ctx.needs_input_grad[0]:
- dx = _upfirdn2d_cuda(up=down, down=up, padding=p, flip_filter=(not flip_filter), gain=gain).apply(dy, f)
-
- assert not ctx.needs_input_grad[1]
- return dx, df
-
- # Add to cache.
- _upfirdn2d_cuda_cache[key] = Upfirdn2dCuda
- return Upfirdn2dCuda
-
-#----------------------------------------------------------------------------
-
-def filter2d(x, f, padding=0, flip_filter=False, gain=1, impl='cuda'):
- r"""Filter a batch of 2D images using the given 2D FIR filter.
-
- By default, the result is padded so that its shape matches the input.
- User-specified padding is applied on top of that, with negative values
- indicating cropping. Pixels outside the image are assumed to be zero.
-
- Args:
- x: Float32/float64/float16 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- f: Float32 FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
- padding: Padding with respect to the output. Can be a single number or a
- list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- flip_filter: False = convolution, True = correlation (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
- fw, fh = _get_filter_size(f)
- p = [
- padx0 + fw // 2,
- padx1 + (fw - 1) // 2,
- pady0 + fh // 2,
- pady1 + (fh - 1) // 2,
- ]
- return upfirdn2d(x, f, padding=p, flip_filter=flip_filter, gain=gain, impl=impl)
-
-#----------------------------------------------------------------------------
-
-def upsample2d(x, f, up=2, padding=0, flip_filter=False, gain=1, impl='cuda'):
- r"""Upsample a batch of 2D images using the given 2D FIR filter.
-
- By default, the result is padded so that its shape is a multiple of the input.
- User-specified padding is applied on top of that, with negative values
- indicating cropping. Pixels outside the image are assumed to be zero.
-
- Args:
- x: Float32/float64/float16 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- f: Float32 FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
- up: Integer upsampling factor. Can be a single int or a list/tuple
- `[x, y]` (default: 1).
- padding: Padding with respect to the output. Can be a single number or a
- list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- flip_filter: False = convolution, True = correlation (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- upx, upy = _parse_scaling(up)
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
- fw, fh = _get_filter_size(f)
- p = [
- padx0 + (fw + upx - 1) // 2,
- padx1 + (fw - upx) // 2,
- pady0 + (fh + upy - 1) // 2,
- pady1 + (fh - upy) // 2,
- ]
- return upfirdn2d(x, f, up=up, padding=p, flip_filter=flip_filter, gain=gain*upx*upy, impl=impl)
-
-#----------------------------------------------------------------------------
-
-def downsample2d(x, f, down=2, padding=0, flip_filter=False, gain=1, impl='cuda'):
- r"""Downsample a batch of 2D images using the given 2D FIR filter.
-
- By default, the result is padded so that its shape is a fraction of the input.
- User-specified padding is applied on top of that, with negative values
- indicating cropping. Pixels outside the image are assumed to be zero.
-
- Args:
- x: Float32/float64/float16 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- f: Float32 FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
- down: Integer downsampling factor. Can be a single int or a list/tuple
- `[x, y]` (default: 1).
- padding: Padding with respect to the input. Can be a single number or a
- list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- flip_filter: False = convolution, True = correlation (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- downx, downy = _parse_scaling(down)
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
- fw, fh = _get_filter_size(f)
- p = [
- padx0 + (fw - downx + 1) // 2,
- padx1 + (fw - downx) // 2,
- pady0 + (fh - downy + 1) // 2,
- pady1 + (fh - downy) // 2,
- ]
- return upfirdn2d(x, f, down=down, padding=p, flip_filter=flip_filter, gain=gain, impl=impl)
-
-#----------------------------------------------------------------------------
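The `upsample2d` and `downsample2d` wrappers above only translate the user-facing `up`/`down`/`padding` arguments into the low-level padding list expected by `upfirdn2d` (and, for upsampling, scale `gain` by `upx*upy` to preserve signal magnitude). The snippet below is a standalone sketch of that padding arithmetic in plain Python; it mirrors the integer formulas from the deleted file and does not call `upfirdn2d` itself.

```python
# Sketch of the padding arithmetic used by upsample2d/downsample2d above.

def upsample_padding(fw, fh, upx, upy, pad=(0, 0, 0, 0)):
    padx0, padx1, pady0, pady1 = pad
    return [
        padx0 + (fw + upx - 1) // 2,
        padx1 + (fw - upx) // 2,
        pady0 + (fh + upy - 1) // 2,
        pady1 + (fh - upy) // 2,
    ]

def downsample_padding(fw, fh, downx, downy, pad=(0, 0, 0, 0)):
    padx0, padx1, pady0, pady1 = pad
    return [
        padx0 + (fw - downx + 1) // 2,
        padx1 + (fw - downx) // 2,
        pady0 + (fh - downy + 1) // 2,
        pady1 + (fh - downy) // 2,
    ]

# A 4-tap filter with 2x scaling:
print(upsample_padding(fw=4, fh=4, upx=2, upy=2))        # [2, 1, 2, 1]
print(downsample_padding(fw=4, fh=4, downx=2, downy=2))  # [1, 1, 1, 1]
```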
diff --git a/spaces/EasyEasy/EasyProxy/Dockerfile b/spaces/EasyEasy/EasyProxy/Dockerfile
deleted file mode 100644
index bd082a764dd75316ccb029e6f8d478d54fa3929c..0000000000000000000000000000000000000000
--- a/spaces/EasyEasy/EasyProxy/Dockerfile
+++ /dev/null
@@ -1,15 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
- apt-get install -y git curl
-RUN --mount=type=secret,id=GIT_AUTH,mode=0444,required=true \
- git clone https://$(cat /run/secrets/GIT_AUTH)@git.evulid.cc/cyberes/oai-reverse-proxy-epic-troll.git /app
-
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-RUN chown -R node:node /app
-EXPOSE 7860
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/EinfachOlder/ChatGPT-prompt-generator/README.md b/spaces/EinfachOlder/ChatGPT-prompt-generator/README.md
deleted file mode 100644
index 9765db2c80dd4c4b938060743922163b1718e003..0000000000000000000000000000000000000000
--- a/spaces/EinfachOlder/ChatGPT-prompt-generator/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChatGPT Prompt Generator
-emoji: 👨🏻🎤
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: merve/ChatGPT-prompt-generator
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/EleutherAI/magma/magma/transforms.py b/spaces/EleutherAI/magma/magma/transforms.py
deleted file mode 100644
index 513d7c449d279f41cadf38f2df6be91a8370064c..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/magma/magma/transforms.py
+++ /dev/null
@@ -1,134 +0,0 @@
-from torchvision import transforms as T
-import torch.nn.functional as F
-from PIL import ImageOps
-import PIL
-import random
-
-
-def pad_to_size(x, size=256):
- delta_w = size - x.size[0]
- delta_h = size - x.size[1]
- padding = (
- delta_w // 2,
- delta_h // 2,
- delta_w - (delta_w // 2),
- delta_h - (delta_h // 2),
- )
- new_im = ImageOps.expand(x, padding)
- return new_im
-
-
-def pad_to_size_tensor(x, size=256):
- offset_dim_1 = size - x.shape[1]
- offset_dim_2 = size - x.shape[2]
-
- padding_dim_1 = max(offset_dim_1 // 2, 0)
- padding_dim_2 = max(offset_dim_2 // 2, 0)
-
- if offset_dim_1 % 2 == 0:
- pad_tuple_1 = (padding_dim_1, padding_dim_1)
- else:
- pad_tuple_1 = (padding_dim_1 + 1, padding_dim_1)
-
- if offset_dim_2 % 2 == 0:
- pad_tuple_2 = (padding_dim_2, padding_dim_2)
- else:
- pad_tuple_2 = (padding_dim_2 + 1, padding_dim_2)
-
- padded = F.pad(x, pad=(*pad_tuple_2, *pad_tuple_1, 0, 0))
- return padded
-
-
-class RandCropResize(object):
-
- """
-    Randomly crops, then randomly resizes, then randomly crops an image again, mirroring the augmentations from https://arxiv.org/abs/2102.12092
- """
-
- def __init__(self, target_size):
- self.target_size = target_size
-
- def __call__(self, img):
- img = pad_to_size(img, self.target_size)
- d_min = min(img.size)
- img = T.RandomCrop(size=d_min)(img)
- t_min = min(d_min, round(9 / 8 * self.target_size))
- t_max = min(d_min, round(12 / 8 * self.target_size))
- t = random.randint(t_min, t_max + 1)
- img = T.Resize(t)(img)
- if min(img.size) < 256:
- img = T.Resize(256)(img)
- return T.RandomCrop(size=self.target_size)(img)
-
-
-def get_transforms(
- image_size, encoder_name, input_resolution=None, use_extra_transforms=False
-):
- if "clip" in encoder_name:
- assert input_resolution is not None
- return clip_preprocess(input_resolution)
-
- base_transforms = [
- T.Lambda(lambda img: img.convert("RGB") if img.mode != "RGB" else img),
- RandCropResize(image_size),
- T.RandomHorizontalFlip(p=0.5),
- ]
- if use_extra_transforms:
- extra_transforms = [T.ColorJitter(0.1, 0.1, 0.1, 0.05)]
- base_transforms += extra_transforms
- base_transforms += [
- T.ToTensor(),
- maybe_add_batch_dim,
- ]
- base_transforms = T.Compose(base_transforms)
- return base_transforms
-
-
-def maybe_add_batch_dim(t):
- if t.ndim == 3:
- return t.unsqueeze(0)
- else:
- return t
-
-
-def pad_img(desired_size):
- def fn(im):
- old_size = im.size # old_size[0] is in (width, height) format
-
- ratio = float(desired_size) / max(old_size)
- new_size = tuple([int(x * ratio) for x in old_size])
-
-        im = im.resize(new_size, PIL.Image.LANCZOS)  # ANTIALIAS is an alias of LANCZOS and was removed in Pillow 10
- # create a new image and paste the resized on it
-
- new_im = PIL.Image.new("RGB", (desired_size, desired_size))
- new_im.paste(
- im, ((desired_size - new_size[0]) // 2, (desired_size - new_size[1]) // 2)
- )
-
- return new_im
-
- return fn
-
-
-def crop_or_pad(n_px, pad=False):
- if pad:
- return pad_img(n_px)
- else:
- return T.CenterCrop(n_px)
-
-
-def clip_preprocess(n_px, use_pad=False):
- return T.Compose(
- [
- T.Resize(n_px, interpolation=T.InterpolationMode.BICUBIC),
- crop_or_pad(n_px, pad=use_pad),
- lambda image: image.convert("RGB"),
- T.ToTensor(),
- maybe_add_batch_dim,
- T.Normalize(
- (0.48145466, 0.4578275, 0.40821073),
- (0.26862954, 0.26130258, 0.27577711),
- ),
- ]
- )
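For context, a hedged usage sketch of the transform builders defined above: `get_transforms` returns a `torchvision` pipeline that maps a `PIL.Image` to a batched tensor. The import path and encoder name below are assumptions for illustration only.

```python
from PIL import Image

# Assumes this transforms.py is importable as magma.transforms.
from magma.transforms import get_transforms

preprocess = get_transforms(image_size=256, encoder_name="clip_resnet", input_resolution=224)
img = Image.new("RGB", (640, 480), color=(128, 128, 128))  # stand-in for a real image
batch = preprocess(img)
print(batch.shape)  # torch.Size([1, 3, 224, 224]) -- CLIP-style preprocessing
```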
diff --git a/spaces/FlippFuzz/whisper-webui/src/conversion/hf_converter.py b/spaces/FlippFuzz/whisper-webui/src/conversion/hf_converter.py
deleted file mode 100644
index 6da4f0fd672d63b099f21d0498ba4001d23356f7..0000000000000000000000000000000000000000
--- a/spaces/FlippFuzz/whisper-webui/src/conversion/hf_converter.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# https://github.com/bayartsogt-ya/whisper-multiple-hf-datasets
-
-from copy import deepcopy
-import torch
-
-WHISPER_MAPPING = {
- "layers": "blocks",
- "fc1": "mlp.0",
- "fc2": "mlp.2",
- "final_layer_norm": "mlp_ln",
- "layers": "blocks",
- ".self_attn.q_proj": ".attn.query",
- ".self_attn.k_proj": ".attn.key",
- ".self_attn.v_proj": ".attn.value",
- ".self_attn_layer_norm": ".attn_ln",
- ".self_attn.out_proj": ".attn.out",
- ".encoder_attn.q_proj": ".cross_attn.query",
- ".encoder_attn.k_proj": ".cross_attn.key",
- ".encoder_attn.v_proj": ".cross_attn.value",
- ".encoder_attn_layer_norm": ".cross_attn_ln",
- ".encoder_attn.out_proj": ".cross_attn.out",
- "decoder.layer_norm.": "decoder.ln.",
- "encoder.layer_norm.": "encoder.ln_post.",
- "embed_tokens": "token_embedding",
- "encoder.embed_positions.weight": "encoder.positional_embedding",
- "decoder.embed_positions.weight": "decoder.positional_embedding",
- "layer_norm": "ln_post",
-}
-
-
-def rename_keys(s_dict):
- keys = list(s_dict.keys())
- for key in keys:
- new_key = key
- for k, v in WHISPER_MAPPING.items():
- if k in key:
- new_key = new_key.replace(k, v)
-
- print(f"{key} -> {new_key}")
-
- s_dict[new_key] = s_dict.pop(key)
- return s_dict
-
-
-def convert_hf_whisper(hf_model_name_or_path: str, whisper_state_path: str):
- from transformers import WhisperForConditionalGeneration
- transformer_model = WhisperForConditionalGeneration.from_pretrained(hf_model_name_or_path)
- config = transformer_model.config
-
- # first build dims
- dims = {
- 'n_mels': config.num_mel_bins,
- 'n_vocab': config.vocab_size,
- 'n_audio_ctx': config.max_source_positions,
- 'n_audio_state': config.d_model,
- 'n_audio_head': config.encoder_attention_heads,
- 'n_audio_layer': config.encoder_layers,
- 'n_text_ctx': config.max_target_positions,
- 'n_text_state': config.d_model,
- 'n_text_head': config.decoder_attention_heads,
- 'n_text_layer': config.decoder_layers
- }
-
- state_dict = deepcopy(transformer_model.model.state_dict())
- state_dict = rename_keys(state_dict)
-
- torch.save({"dims": dims, "model_state_dict": state_dict}, whisper_state_path)
\ No newline at end of file
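A minimal usage sketch for the converter above. The Hugging Face model id, output path, and import path are illustrative assumptions; any Whisper checkpoint exposing the standard config fields should convert the same way.

```python
import whisper  # openai-whisper

# Assumes hf_converter.py is importable via the package layout of this space.
from src.conversion.hf_converter import convert_hf_whisper

convert_hf_whisper("openai/whisper-tiny", "whisper-tiny-converted.pt")

# openai-whisper accepts a filesystem path in place of a model name.
model = whisper.load_model("whisper-tiny-converted.pt")
print(model.dims)
```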
diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/modules/modules.py b/spaces/FrankZxShen/so-vits-svc-models-ba/modules/modules.py
deleted file mode 100644
index 54290fd207b25e93831bd21005990ea137e6b50e..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-ba/modules/modules.py
+++ /dev/null
@@ -1,342 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import modules.commons as commons
-from modules.commons import init_weights, get_padding
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
-        assert kernel_size % 2 == 1
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
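`ResidualCouplingLayer` above is a standard affine coupling step: half of the channels passes through untouched, and the other half is shifted (and, unless `mean_only`, scaled) by statistics predicted from the first half, which makes the transform exactly invertible. Below is a stripped-down, standalone illustration of that forward/reverse relationship; it swaps the WaveNet-style `WN` encoder for a single `Conv1d`, so it is a sketch of the idea rather than the module defined above.

```python
import torch
from torch import nn

class ToyCoupling(nn.Module):
    """Minimal affine coupling mirroring the forward/reverse logic above."""
    def __init__(self, channels):
        super().__init__()
        self.half = channels // 2
        self.net = nn.Conv1d(self.half, self.half * 2, 1)  # predicts (m, logs) from x0

    def forward(self, x, reverse=False):
        x0, x1 = torch.split(x, [self.half] * 2, dim=1)
        m, logs = torch.split(self.net(x0), [self.half] * 2, dim=1)
        if not reverse:
            x1 = m + x1 * torch.exp(logs)       # y1 = m + x1 * exp(logs)
        else:
            x1 = (x1 - m) * torch.exp(-logs)    # x1 = (y1 - m) * exp(-logs)
        return torch.cat([x0, x1], dim=1)

layer = ToyCoupling(channels=8)
x = torch.randn(2, 8, 16)                    # [batch, channels, frames]
y = layer(x)
x_rec = layer(y, reverse=True)
print(torch.allclose(x, x_rec, atol=1e-5))   # True: the coupling is invertible
```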
diff --git a/spaces/FrozenBurning/SceneDreamer/README.md b/spaces/FrozenBurning/SceneDreamer/README.md
deleted file mode 100644
index 7f3dd05d05c3a00fac5965f5f32a067816ffc87f..0000000000000000000000000000000000000000
--- a/spaces/FrozenBurning/SceneDreamer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: SceneDreamer
-emoji: 👁
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-sdk_version: 3.28.3
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/synthesis_engine/__init__.py b/spaces/GaenKoki/voicevox/voicevox_engine/synthesis_engine/__init__.py
deleted file mode 100644
index 3e7f6a1ef940f2d20830d98336c34cbbc600d905..0000000000000000000000000000000000000000
--- a/spaces/GaenKoki/voicevox/voicevox_engine/synthesis_engine/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-from .core_wrapper import CoreWrapper, load_runtime_lib
-from .make_synthesis_engines import make_synthesis_engines
-from .synthesis_engine import SynthesisEngine
-from .synthesis_engine_base import SynthesisEngineBase
-
-__all__ = [
- "CoreWrapper",
- "load_runtime_lib",
- "make_synthesis_engines",
- "SynthesisEngine",
- "SynthesisEngineBase",
-]
diff --git a/spaces/Gradio-Blocks/minority-asr/README.md b/spaces/Gradio-Blocks/minority-asr/README.md
deleted file mode 100644
index 17a571d29aaf0bec516e5fa90eb12fd6f03b05c1..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/minority-asr/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
----
-title: Speech recognition for minority languages of Russia
-emoji: 🌾
-colorFrom: blue
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.0.6
-app_file: app.py
-pinned: false
----
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/builder.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/builder.py
deleted file mode 100644
index c9466a517dee746a6677b27a19713f2e89ed7194..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/builder.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import copy
-import platform
-import random
-from functools import partial
-
-import numpy as np
-from mmcv.parallel import collate
-from mmcv.runner import get_dist_info
-from mmcv.utils import Registry, build_from_cfg
-from torch.utils.data import DataLoader
-
-from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler
-
-if platform.system() != 'Windows':
- # https://github.com/pytorch/pytorch/issues/973
- import resource
- rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
- hard_limit = rlimit[1]
- soft_limit = min(4096, hard_limit)
- resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))
-
-DATASETS = Registry('dataset')
-PIPELINES = Registry('pipeline')
-
-
-def _concat_dataset(cfg, default_args=None):
- from .dataset_wrappers import ConcatDataset
- ann_files = cfg['ann_file']
- img_prefixes = cfg.get('img_prefix', None)
- seg_prefixes = cfg.get('seg_prefix', None)
- proposal_files = cfg.get('proposal_file', None)
- separate_eval = cfg.get('separate_eval', True)
-
- datasets = []
- num_dset = len(ann_files)
- for i in range(num_dset):
- data_cfg = copy.deepcopy(cfg)
- # pop 'separate_eval' since it is not a valid key for common datasets.
- if 'separate_eval' in data_cfg:
- data_cfg.pop('separate_eval')
- data_cfg['ann_file'] = ann_files[i]
- if isinstance(img_prefixes, (list, tuple)):
- data_cfg['img_prefix'] = img_prefixes[i]
- if isinstance(seg_prefixes, (list, tuple)):
- data_cfg['seg_prefix'] = seg_prefixes[i]
- if isinstance(proposal_files, (list, tuple)):
- data_cfg['proposal_file'] = proposal_files[i]
- datasets.append(build_dataset(data_cfg, default_args))
-
- return ConcatDataset(datasets, separate_eval)
-
-
-def build_dataset(cfg, default_args=None):
- from .dataset_wrappers import (ConcatDataset, RepeatDataset,
- ClassBalancedDataset)
- if isinstance(cfg, (list, tuple)):
- dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg])
- elif cfg['type'] == 'ConcatDataset':
- dataset = ConcatDataset(
- [build_dataset(c, default_args) for c in cfg['datasets']],
- cfg.get('separate_eval', True))
- elif cfg['type'] == 'RepeatDataset':
- dataset = RepeatDataset(
- build_dataset(cfg['dataset'], default_args), cfg['times'])
- elif cfg['type'] == 'ClassBalancedDataset':
- dataset = ClassBalancedDataset(
- build_dataset(cfg['dataset'], default_args), cfg['oversample_thr'])
- elif isinstance(cfg.get('ann_file'), (list, tuple)):
- dataset = _concat_dataset(cfg, default_args)
- else:
- dataset = build_from_cfg(cfg, DATASETS, default_args)
-
- return dataset
-
-
-def build_dataloader(dataset,
- samples_per_gpu,
- workers_per_gpu,
- num_gpus=1,
- dist=True,
- shuffle=True,
- seed=None,
- **kwargs):
- """Build PyTorch DataLoader.
-
- In distributed training, each GPU/process has a dataloader.
- In non-distributed training, there is only one dataloader for all GPUs.
-
- Args:
- dataset (Dataset): A PyTorch dataset.
- samples_per_gpu (int): Number of training samples on each GPU, i.e.,
- batch size of each GPU.
- workers_per_gpu (int): How many subprocesses to use for data loading
- for each GPU.
- num_gpus (int): Number of GPUs. Only used in non-distributed training.
- dist (bool): Distributed training/test or not. Default: True.
- shuffle (bool): Whether to shuffle the data at every epoch.
- Default: True.
- kwargs: any keyword argument to be used to initialize DataLoader
-
- Returns:
- DataLoader: A PyTorch dataloader.
- """
- rank, world_size = get_dist_info()
- if dist:
-        # DistributedGroupSampler shuffles the data while ensuring that the
-        # images on each GPU belong to the same group
- if shuffle:
- sampler = DistributedGroupSampler(
- dataset, samples_per_gpu, world_size, rank, seed=seed)
- else:
- sampler = DistributedSampler(
- dataset, world_size, rank, shuffle=False, seed=seed)
- batch_size = samples_per_gpu
- num_workers = workers_per_gpu
- else:
- sampler = GroupSampler(dataset, samples_per_gpu) if shuffle else None
- batch_size = num_gpus * samples_per_gpu
- num_workers = num_gpus * workers_per_gpu
-
- init_fn = partial(
- worker_init_fn, num_workers=num_workers, rank=rank,
- seed=seed) if seed is not None else None
-
- data_loader = DataLoader(
- dataset,
- batch_size=batch_size,
- sampler=sampler,
- num_workers=num_workers,
- collate_fn=partial(collate, samples_per_gpu=samples_per_gpu),
- pin_memory=False,
- worker_init_fn=init_fn,
- **kwargs)
-
- return data_loader
-
-
-def worker_init_fn(worker_id, num_workers, rank, seed):
-    # The seed of each worker equals
-    # num_workers * rank + worker_id + user_seed
- worker_seed = num_workers * rank + worker_id + seed
- np.random.seed(worker_seed)
- random.seed(worker_seed)
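The `worker_init_fn` above gives every dataloader worker a distinct, reproducible seed via the formula in its comment. A tiny standalone check of that scheme (illustrative only, not mmdet code):

```python
# worker_seed = num_workers * rank + worker_id + user_seed
num_workers, user_seed = 4, 42
seeds = {
    (rank, worker_id): num_workers * rank + worker_id + user_seed
    for rank in range(2)              # e.g. two GPUs / processes
    for worker_id in range(num_workers)
}
print(seeds)
# {(0, 0): 42, (0, 1): 43, (0, 2): 44, (0, 3): 45,
#  (1, 0): 46, (1, 1): 47, (1, 2): 48, (1, 3): 49}
assert len(set(seeds.values())) == len(seeds)  # no two workers share a seed
```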
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/pipelines/loading.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/pipelines/loading.py
deleted file mode 100644
index 69225941903f6b9d67b8b8c5fc3b1801cd964fb2..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/pipelines/loading.py
+++ /dev/null
@@ -1,458 +0,0 @@
-import os.path as osp
-
-import mmcv
-import numpy as np
-import pycocotools.mask as maskUtils
-
-from mmdet.core import BitmapMasks, PolygonMasks
-from ..builder import PIPELINES
-
-
-@PIPELINES.register_module()
-class LoadImageFromFile(object):
- """Load an image from file.
-
- Required keys are "img_prefix" and "img_info" (a dict that must contain the
- key "filename"). Added or updated keys are "filename", "img", "img_shape",
- "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`),
- "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1).
-
- Args:
- to_float32 (bool): Whether to convert the loaded image to a float32
-            numpy array. If set to False, the loaded image is a uint8 array.
- Defaults to False.
- color_type (str): The flag argument for :func:`mmcv.imfrombytes`.
- Defaults to 'color'.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details.
- Defaults to ``dict(backend='disk')``.
- """
-
- def __init__(self,
- to_float32=False,
- color_type='color',
- file_client_args=dict(backend='disk')):
- self.to_float32 = to_float32
- self.color_type = color_type
- self.file_client_args = file_client_args.copy()
- self.file_client = None
-
- def __call__(self, results):
- """Call functions to load image and get image meta information.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded image and meta information.
- """
-
- if self.file_client is None:
- self.file_client = mmcv.FileClient(**self.file_client_args)
-
- if results['img_prefix'] is not None:
- filename = osp.join(results['img_prefix'],
- results['img_info']['filename'])
- else:
- filename = results['img_info']['filename']
-
- img_bytes = self.file_client.get(filename)
- img = mmcv.imfrombytes(img_bytes, flag=self.color_type)
- if self.to_float32:
- img = img.astype(np.float32)
-
- results['filename'] = filename
- results['ori_filename'] = results['img_info']['filename']
- results['img'] = img
- results['img_shape'] = img.shape
- results['ori_shape'] = img.shape
- results['img_fields'] = ['img']
- return results
-
- def __repr__(self):
- repr_str = (f'{self.__class__.__name__}('
- f'to_float32={self.to_float32}, '
- f"color_type='{self.color_type}', "
- f'file_client_args={self.file_client_args})')
- return repr_str
-
-
-@PIPELINES.register_module()
-class LoadImageFromWebcam(LoadImageFromFile):
- """Load an image from webcam.
-
-    Similar to :obj:`LoadImageFromFile`, but the image read from the webcam is in
- ``results['img']``.
- """
-
- def __call__(self, results):
- """Call functions to add image meta information.
-
- Args:
- results (dict): Result dict with Webcam read image in
- ``results['img']``.
-
- Returns:
- dict: The dict contains loaded image and meta information.
- """
-
- img = results['img']
- if self.to_float32:
- img = img.astype(np.float32)
-
- results['filename'] = None
- results['ori_filename'] = None
- results['img'] = img
- results['img_shape'] = img.shape
- results['ori_shape'] = img.shape
- results['img_fields'] = ['img']
- return results
-
-
-@PIPELINES.register_module()
-class LoadMultiChannelImageFromFiles(object):
- """Load multi-channel images from a list of separate channel files.
-
- Required keys are "img_prefix" and "img_info" (a dict that must contain the
- key "filename", which is expected to be a list of filenames).
- Added or updated keys are "filename", "img", "img_shape",
- "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`),
- "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1).
-
- Args:
- to_float32 (bool): Whether to convert the loaded image to a float32
-            numpy array. If set to False, the loaded image is a uint8 array.
- Defaults to False.
- color_type (str): The flag argument for :func:`mmcv.imfrombytes`.
- Defaults to 'color'.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details.
- Defaults to ``dict(backend='disk')``.
- """
-
- def __init__(self,
- to_float32=False,
- color_type='unchanged',
- file_client_args=dict(backend='disk')):
- self.to_float32 = to_float32
- self.color_type = color_type
- self.file_client_args = file_client_args.copy()
- self.file_client = None
-
- def __call__(self, results):
- """Call functions to load multiple images and get images meta
- information.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded images and meta information.
- """
-
- if self.file_client is None:
- self.file_client = mmcv.FileClient(**self.file_client_args)
-
- if results['img_prefix'] is not None:
- filename = [
- osp.join(results['img_prefix'], fname)
- for fname in results['img_info']['filename']
- ]
- else:
- filename = results['img_info']['filename']
-
- img = []
- for name in filename:
- img_bytes = self.file_client.get(name)
- img.append(mmcv.imfrombytes(img_bytes, flag=self.color_type))
- img = np.stack(img, axis=-1)
- if self.to_float32:
- img = img.astype(np.float32)
-
- results['filename'] = filename
- results['ori_filename'] = results['img_info']['filename']
- results['img'] = img
- results['img_shape'] = img.shape
- results['ori_shape'] = img.shape
- # Set initial values for default meta_keys
- results['pad_shape'] = img.shape
- results['scale_factor'] = 1.0
- num_channels = 1 if len(img.shape) < 3 else img.shape[2]
- results['img_norm_cfg'] = dict(
- mean=np.zeros(num_channels, dtype=np.float32),
- std=np.ones(num_channels, dtype=np.float32),
- to_rgb=False)
- return results
-
- def __repr__(self):
- repr_str = (f'{self.__class__.__name__}('
- f'to_float32={self.to_float32}, '
- f"color_type='{self.color_type}', "
- f'file_client_args={self.file_client_args})')
- return repr_str
-
-
-@PIPELINES.register_module()
-class LoadAnnotations(object):
- """Load mutiple types of annotations.
-
- Args:
- with_bbox (bool): Whether to parse and load the bbox annotation.
- Default: True.
- with_label (bool): Whether to parse and load the label annotation.
- Default: True.
- with_mask (bool): Whether to parse and load the mask annotation.
- Default: False.
- with_seg (bool): Whether to parse and load the semantic segmentation
- annotation. Default: False.
- poly2mask (bool): Whether to convert the instance masks from polygons
- to bitmaps. Default: True.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details.
- Defaults to ``dict(backend='disk')``.
- """
-
- def __init__(self,
- with_bbox=True,
- with_label=True,
- with_mask=False,
- with_seg=False,
- poly2mask=True,
- file_client_args=dict(backend='disk')):
- self.with_bbox = with_bbox
- self.with_label = with_label
- self.with_mask = with_mask
- self.with_seg = with_seg
- self.poly2mask = poly2mask
- self.file_client_args = file_client_args.copy()
- self.file_client = None
-
- def _load_bboxes(self, results):
- """Private function to load bounding box annotations.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded bounding box annotations.
- """
-
- ann_info = results['ann_info']
- results['gt_bboxes'] = ann_info['bboxes'].copy()
-
- gt_bboxes_ignore = ann_info.get('bboxes_ignore', None)
- if gt_bboxes_ignore is not None:
- results['gt_bboxes_ignore'] = gt_bboxes_ignore.copy()
- results['bbox_fields'].append('gt_bboxes_ignore')
- results['bbox_fields'].append('gt_bboxes')
- return results
-
- def _load_labels(self, results):
- """Private function to load label annotations.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded label annotations.
- """
-
- results['gt_labels'] = results['ann_info']['labels'].copy()
- return results
-
- def _poly2mask(self, mask_ann, img_h, img_w):
- """Private function to convert masks represented with polygon to
- bitmaps.
-
- Args:
- mask_ann (list | dict): Polygon mask annotation input.
- img_h (int): The height of output mask.
- img_w (int): The width of output mask.
-
- Returns:
-            numpy.ndarray: The decoded bitmap mask of shape (img_h, img_w).
- """
-
- if isinstance(mask_ann, list):
- # polygon -- a single object might consist of multiple parts
- # we merge all parts into one mask rle code
- rles = maskUtils.frPyObjects(mask_ann, img_h, img_w)
- rle = maskUtils.merge(rles)
- elif isinstance(mask_ann['counts'], list):
- # uncompressed RLE
- rle = maskUtils.frPyObjects(mask_ann, img_h, img_w)
- else:
- # rle
- rle = mask_ann
- mask = maskUtils.decode(rle)
- return mask
-
- def process_polygons(self, polygons):
- """Convert polygons to list of ndarray and filter invalid polygons.
-
- Args:
- polygons (list[list]): Polygons of one instance.
-
- Returns:
- list[numpy.ndarray]: Processed polygons.
- """
-
- polygons = [np.array(p) for p in polygons]
- valid_polygons = []
- for polygon in polygons:
- if len(polygon) % 2 == 0 and len(polygon) >= 6:
- valid_polygons.append(polygon)
- return valid_polygons
-
- def _load_masks(self, results):
- """Private function to load mask annotations.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded mask annotations.
- If ``self.poly2mask`` is set ``True``, `gt_mask` will contain
- :obj:`PolygonMasks`. Otherwise, :obj:`BitmapMasks` is used.
- """
-
- h, w = results['img_info']['height'], results['img_info']['width']
- gt_masks = results['ann_info']['masks']
- if self.poly2mask:
- gt_masks = BitmapMasks(
- [self._poly2mask(mask, h, w) for mask in gt_masks], h, w)
- else:
- gt_masks = PolygonMasks(
- [self.process_polygons(polygons) for polygons in gt_masks], h,
- w)
- results['gt_masks'] = gt_masks
- results['mask_fields'].append('gt_masks')
- return results
-
- def _load_semantic_seg(self, results):
- """Private function to load semantic segmentation annotations.
-
- Args:
- results (dict): Result dict from :obj:`dataset`.
-
- Returns:
- dict: The dict contains loaded semantic segmentation annotations.
- """
-
- if self.file_client is None:
- self.file_client = mmcv.FileClient(**self.file_client_args)
-
- filename = osp.join(results['seg_prefix'],
- results['ann_info']['seg_map'])
- img_bytes = self.file_client.get(filename)
- results['gt_semantic_seg'] = mmcv.imfrombytes(
- img_bytes, flag='unchanged').squeeze()
- results['seg_fields'].append('gt_semantic_seg')
- return results
-
- def __call__(self, results):
- """Call function to load multiple types annotations.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded bounding box, label, mask and
- semantic segmentation annotations.
- """
-
- if self.with_bbox:
- results = self._load_bboxes(results)
- if results is None:
- return None
- if self.with_label:
- results = self._load_labels(results)
- if self.with_mask:
- results = self._load_masks(results)
- if self.with_seg:
- results = self._load_semantic_seg(results)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(with_bbox={self.with_bbox}, '
- repr_str += f'with_label={self.with_label}, '
- repr_str += f'with_mask={self.with_mask}, '
- repr_str += f'with_seg={self.with_seg}, '
- repr_str += f'poly2mask={self.poly2mask}, '
-        repr_str += f'file_client_args={self.file_client_args})'
- return repr_str
-
-
-@PIPELINES.register_module()
-class LoadProposals(object):
- """Load proposal pipeline.
-
- Required key is "proposals". Updated keys are "proposals", "bbox_fields".
-
- Args:
- num_max_proposals (int, optional): Maximum number of proposals to load.
- If not specified, all proposals will be loaded.
- """
-
- def __init__(self, num_max_proposals=None):
- self.num_max_proposals = num_max_proposals
-
- def __call__(self, results):
- """Call function to load proposals from file.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded proposal annotations.
- """
-
- proposals = results['proposals']
- if proposals.shape[1] not in (4, 5):
- raise AssertionError(
- 'proposals should have shapes (n, 4) or (n, 5), '
- f'but found {proposals.shape}')
- proposals = proposals[:, :4]
-
- if self.num_max_proposals is not None:
- proposals = proposals[:self.num_max_proposals]
-
- if len(proposals) == 0:
- proposals = np.array([[0, 0, 0, 0]], dtype=np.float32)
- results['proposals'] = proposals
- results['bbox_fields'].append('proposals')
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + \
- f'(num_max_proposals={self.num_max_proposals})'
-
-
-@PIPELINES.register_module()
-class FilterAnnotations(object):
- """Filter invalid annotations.
-
- Args:
- min_gt_bbox_wh (tuple[int]): Minimum width and height of ground truth
- boxes.
- """
-
- def __init__(self, min_gt_bbox_wh):
- # TODO: add more filter options
- self.min_gt_bbox_wh = min_gt_bbox_wh
-
- def __call__(self, results):
- assert 'gt_bboxes' in results
- gt_bboxes = results['gt_bboxes']
- w = gt_bboxes[:, 2] - gt_bboxes[:, 0]
- h = gt_bboxes[:, 3] - gt_bboxes[:, 1]
- keep = (w > self.min_gt_bbox_wh[0]) & (h > self.min_gt_bbox_wh[1])
- if not keep.any():
- return None
- else:
- keys = ('gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg')
- for key in keys:
- if key in results:
- results[key] = results[key][keep]
- return results
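The `_poly2mask` helper in `LoadAnnotations` above relies on `pycocotools` to rasterise COCO-style polygons. A small standalone example of the same polygon -> RLE -> bitmap path, using a hand-written square polygon instead of real annotations:

```python
import pycocotools.mask as maskUtils

# One axis-aligned square in COCO's flat [x0, y0, x1, y1, ...] format.
polygon = [[2.0, 2.0, 6.0, 2.0, 6.0, 6.0, 2.0, 6.0]]
img_h, img_w = 10, 10

rles = maskUtils.frPyObjects(polygon, img_h, img_w)  # polygon -> run-length encoding
rle = maskUtils.merge(rles)                          # merge the parts of one instance
mask = maskUtils.decode(rle)                         # RLE -> (img_h, img_w) uint8 bitmap

print(mask.shape, mask.dtype)  # (10, 10) uint8
print(int(mask.sum()))         # number of pixels inside the rasterised square
```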
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/corner_head.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/corner_head.py
deleted file mode 100644
index 50cdb49a29f2ced1a31a50e654a3bdc14f5f5004..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/corner_head.py
+++ /dev/null
@@ -1,1074 +0,0 @@
-from logging import warning
-from math import ceil, log
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule, bias_init_with_prob
-from mmcv.ops import CornerPool, batched_nms
-
-from mmdet.core import multi_apply
-from ..builder import HEADS, build_loss
-from ..utils import gaussian_radius, gen_gaussian_target
-from .base_dense_head import BaseDenseHead
-
-
-class BiCornerPool(nn.Module):
- """Bidirectional Corner Pooling Module (TopLeft, BottomRight, etc.)
-
- Args:
- in_channels (int): Input channels of module.
- out_channels (int): Output channels of module.
- feat_channels (int): Feature channels of module.
- directions (list[str]): Directions of two CornerPools.
- norm_cfg (dict): Dictionary to construct and config norm layer.
- """
-
- def __init__(self,
- in_channels,
- directions,
- feat_channels=128,
- out_channels=128,
- norm_cfg=dict(type='BN', requires_grad=True)):
- super(BiCornerPool, self).__init__()
- self.direction1_conv = ConvModule(
- in_channels, feat_channels, 3, padding=1, norm_cfg=norm_cfg)
- self.direction2_conv = ConvModule(
- in_channels, feat_channels, 3, padding=1, norm_cfg=norm_cfg)
-
- self.aftpool_conv = ConvModule(
- feat_channels,
- out_channels,
- 3,
- padding=1,
- norm_cfg=norm_cfg,
- act_cfg=None)
-
- self.conv1 = ConvModule(
- in_channels, out_channels, 1, norm_cfg=norm_cfg, act_cfg=None)
- self.conv2 = ConvModule(
- in_channels, out_channels, 3, padding=1, norm_cfg=norm_cfg)
-
- self.direction1_pool = CornerPool(directions[0])
- self.direction2_pool = CornerPool(directions[1])
- self.relu = nn.ReLU(inplace=True)
-
- def forward(self, x):
- """Forward features from the upstream network.
-
- Args:
- x (tensor): Input feature of BiCornerPool.
-
- Returns:
- conv2 (tensor): Output feature of BiCornerPool.
- """
- direction1_conv = self.direction1_conv(x)
- direction2_conv = self.direction2_conv(x)
- direction1_feat = self.direction1_pool(direction1_conv)
- direction2_feat = self.direction2_pool(direction2_conv)
- aftpool_conv = self.aftpool_conv(direction1_feat + direction2_feat)
- conv1 = self.conv1(x)
- relu = self.relu(aftpool_conv + conv1)
- conv2 = self.conv2(relu)
- return conv2
-
-
-@HEADS.register_module()
-class CornerHead(BaseDenseHead):
- """Head of CornerNet: Detecting Objects as Paired Keypoints.
-
-    Code is modified from the `official github repo
-    <https://github.com/princeton-vl/CornerNet>`_ .
-
-    More details can be found in the `paper
-    <https://arxiv.org/abs/1808.01244>`_ .
-
- Args:
- num_classes (int): Number of categories excluding the background
- category.
- in_channels (int): Number of channels in the input feature map.
- num_feat_levels (int): Levels of feature from the previous module. 2
- for HourglassNet-104 and 1 for HourglassNet-52. Because
- HourglassNet-104 outputs the final feature and intermediate
- supervision feature and HourglassNet-52 only outputs the final
- feature. Default: 2.
- corner_emb_channels (int): Channel of embedding vector. Default: 1.
- train_cfg (dict | None): Training config. Useless in CornerHead,
- but we keep this variable for SingleStageDetector. Default: None.
- test_cfg (dict | None): Testing config of CornerHead. Default: None.
- loss_heatmap (dict | None): Config of corner heatmap loss. Default:
- GaussianFocalLoss.
- loss_embedding (dict | None): Config of corner embedding loss. Default:
- AssociativeEmbeddingLoss.
- loss_offset (dict | None): Config of corner offset loss. Default:
- SmoothL1Loss.
- """
-
- def __init__(self,
- num_classes,
- in_channels,
- num_feat_levels=2,
- corner_emb_channels=1,
- train_cfg=None,
- test_cfg=None,
- loss_heatmap=dict(
- type='GaussianFocalLoss',
- alpha=2.0,
- gamma=4.0,
- loss_weight=1),
- loss_embedding=dict(
- type='AssociativeEmbeddingLoss',
- pull_weight=0.25,
- push_weight=0.25),
- loss_offset=dict(
- type='SmoothL1Loss', beta=1.0, loss_weight=1)):
- super(CornerHead, self).__init__()
- self.num_classes = num_classes
- self.in_channels = in_channels
- self.corner_emb_channels = corner_emb_channels
- self.with_corner_emb = self.corner_emb_channels > 0
- self.corner_offset_channels = 2
- self.num_feat_levels = num_feat_levels
- self.loss_heatmap = build_loss(
- loss_heatmap) if loss_heatmap is not None else None
- self.loss_embedding = build_loss(
- loss_embedding) if loss_embedding is not None else None
- self.loss_offset = build_loss(
- loss_offset) if loss_offset is not None else None
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
-
- self._init_layers()
-
- def _make_layers(self, out_channels, in_channels=256, feat_channels=256):
- """Initialize conv sequential for CornerHead."""
- return nn.Sequential(
- ConvModule(in_channels, feat_channels, 3, padding=1),
- ConvModule(
- feat_channels, out_channels, 1, norm_cfg=None, act_cfg=None))
-
- def _init_corner_kpt_layers(self):
- """Initialize corner keypoint layers.
-
- Including corner heatmap branch and corner offset branch. Each branch
- has two parts: prefix `tl_` for top-left and `br_` for bottom-right.
- """
- self.tl_pool, self.br_pool = nn.ModuleList(), nn.ModuleList()
- self.tl_heat, self.br_heat = nn.ModuleList(), nn.ModuleList()
- self.tl_off, self.br_off = nn.ModuleList(), nn.ModuleList()
-
- for _ in range(self.num_feat_levels):
- self.tl_pool.append(
- BiCornerPool(
- self.in_channels, ['top', 'left'],
- out_channels=self.in_channels))
- self.br_pool.append(
- BiCornerPool(
- self.in_channels, ['bottom', 'right'],
- out_channels=self.in_channels))
-
- self.tl_heat.append(
- self._make_layers(
- out_channels=self.num_classes,
- in_channels=self.in_channels))
- self.br_heat.append(
- self._make_layers(
- out_channels=self.num_classes,
- in_channels=self.in_channels))
-
- self.tl_off.append(
- self._make_layers(
- out_channels=self.corner_offset_channels,
- in_channels=self.in_channels))
- self.br_off.append(
- self._make_layers(
- out_channels=self.corner_offset_channels,
- in_channels=self.in_channels))
-
- def _init_corner_emb_layers(self):
- """Initialize corner embedding layers.
-
- Only include corner embedding branch with two parts: prefix `tl_` for
- top-left and `br_` for bottom-right.
- """
- self.tl_emb, self.br_emb = nn.ModuleList(), nn.ModuleList()
-
- for _ in range(self.num_feat_levels):
- self.tl_emb.append(
- self._make_layers(
- out_channels=self.corner_emb_channels,
- in_channels=self.in_channels))
- self.br_emb.append(
- self._make_layers(
- out_channels=self.corner_emb_channels,
- in_channels=self.in_channels))
-
- def _init_layers(self):
- """Initialize layers for CornerHead.
-
- Including two parts: corner keypoint layers and corner embedding layers
- """
- self._init_corner_kpt_layers()
- if self.with_corner_emb:
- self._init_corner_emb_layers()
-
- def init_weights(self):
- """Initialize weights of the head."""
- bias_init = bias_init_with_prob(0.1)
- for i in range(self.num_feat_levels):
-            # The initialization of parameters differs between nn.Conv2d
- # and ConvModule. Our experiments show that using the original
- # initialization of nn.Conv2d increases the final mAP by about 0.2%
- self.tl_heat[i][-1].conv.reset_parameters()
- self.tl_heat[i][-1].conv.bias.data.fill_(bias_init)
- self.br_heat[i][-1].conv.reset_parameters()
- self.br_heat[i][-1].conv.bias.data.fill_(bias_init)
- self.tl_off[i][-1].conv.reset_parameters()
- self.br_off[i][-1].conv.reset_parameters()
- if self.with_corner_emb:
- self.tl_emb[i][-1].conv.reset_parameters()
- self.br_emb[i][-1].conv.reset_parameters()
-
- def forward(self, feats):
- """Forward features from the upstream network.
-
- Args:
- feats (tuple[Tensor]): Features from the upstream network, each is
- a 4D-tensor.
-
- Returns:
- tuple: Usually a tuple of corner heatmaps, offset heatmaps and
- embedding heatmaps.
- - tl_heats (list[Tensor]): Top-left corner heatmaps for all
- levels, each is a 4D-tensor, the channels number is
- num_classes.
- - br_heats (list[Tensor]): Bottom-right corner heatmaps for all
- levels, each is a 4D-tensor, the channels number is
- num_classes.
- - tl_embs (list[Tensor] | list[None]): Top-left embedding
- heatmaps for all levels, each is a 4D-tensor or None.
- If not None, the channels number is corner_emb_channels.
- - br_embs (list[Tensor] | list[None]): Bottom-right embedding
- heatmaps for all levels, each is a 4D-tensor or None.
- If not None, the channels number is corner_emb_channels.
- - tl_offs (list[Tensor]): Top-left offset heatmaps for all
- levels, each is a 4D-tensor. The channels number is
- corner_offset_channels.
- - br_offs (list[Tensor]): Bottom-right offset heatmaps for all
- levels, each is a 4D-tensor. The channels number is
- corner_offset_channels.
- """
- lvl_ind = list(range(self.num_feat_levels))
- return multi_apply(self.forward_single, feats, lvl_ind)
-
- def forward_single(self, x, lvl_ind, return_pool=False):
- """Forward feature of a single level.
-
- Args:
- x (Tensor): Feature of a single level.
- lvl_ind (int): Level index of current feature.
- return_pool (bool): Return corner pool feature or not.
-
- Returns:
- tuple[Tensor]: A tuple of CornerHead's output for current feature
- level. Containing the following Tensors:
-
- - tl_heat (Tensor): Predicted top-left corner heatmap.
- - br_heat (Tensor): Predicted bottom-right corner heatmap.
- - tl_emb (Tensor | None): Predicted top-left embedding heatmap.
- None for `self.with_corner_emb == False`.
- - br_emb (Tensor | None): Predicted bottom-right embedding
- heatmap. None for `self.with_corner_emb == False`.
- - tl_off (Tensor): Predicted top-left offset heatmap.
- - br_off (Tensor): Predicted bottom-right offset heatmap.
-            - tl_pool (Tensor): Top-left corner pool feature. Only returned
-              when `return_pool` is True.
-            - br_pool (Tensor): Bottom-right corner pool feature. Only
-              returned when `return_pool` is True.
- """
- tl_pool = self.tl_pool[lvl_ind](x)
- tl_heat = self.tl_heat[lvl_ind](tl_pool)
- br_pool = self.br_pool[lvl_ind](x)
- br_heat = self.br_heat[lvl_ind](br_pool)
-
- tl_emb, br_emb = None, None
- if self.with_corner_emb:
- tl_emb = self.tl_emb[lvl_ind](tl_pool)
- br_emb = self.br_emb[lvl_ind](br_pool)
-
- tl_off = self.tl_off[lvl_ind](tl_pool)
- br_off = self.br_off[lvl_ind](br_pool)
-
- result_list = [tl_heat, br_heat, tl_emb, br_emb, tl_off, br_off]
- if return_pool:
- result_list.append(tl_pool)
- result_list.append(br_pool)
-
- return result_list
-
- def get_targets(self,
- gt_bboxes,
- gt_labels,
- feat_shape,
- img_shape,
- with_corner_emb=False,
- with_guiding_shift=False,
- with_centripetal_shift=False):
- """Generate corner targets.
-
- Including corner heatmap, corner offset.
-
- Optional: corner embedding, corner guiding shift, centripetal shift.
-
- For CornerNet, we generate corner heatmap, corner offset and corner
- embedding from this function.
-
- For CentripetalNet, we generate corner heatmap, corner offset, guiding
- shift and centripetal shift from this function.
-
- Args:
- gt_bboxes (list[Tensor]): Ground truth bboxes of each image, each
- has shape (num_gt, 4).
- gt_labels (list[Tensor]): Ground truth labels of each box, each has
- shape (num_gt,).
- feat_shape (list[int]): Shape of output feature,
- [batch, channel, height, width].
- img_shape (list[int]): Shape of input image,
- [height, width, channel].
- with_corner_emb (bool): Generate corner embedding target or not.
- Default: False.
- with_guiding_shift (bool): Generate guiding shift target or not.
- Default: False.
- with_centripetal_shift (bool): Generate centripetal shift target or
- not. Default: False.
-
- Returns:
- dict: Ground truth of corner heatmap, corner offset, corner
- embedding, guiding shift and centripetal shift. Containing the
- following keys:
-
- - topleft_heatmap (Tensor): Ground truth top-left corner
- heatmap.
- - bottomright_heatmap (Tensor): Ground truth bottom-right
- corner heatmap.
- - topleft_offset (Tensor): Ground truth top-left corner offset.
- - bottomright_offset (Tensor): Ground truth bottom-right corner
- offset.
-            - corner_embedding (list[list[list[int]]]): Ground truth corner
-              embedding. Only present when `with_corner_emb` is True.
-            - topleft_guiding_shift (Tensor): Ground truth top-left corner
-              guiding shift. Only present when `with_guiding_shift` is True.
-            - bottomright_guiding_shift (Tensor): Ground truth bottom-right
-              corner guiding shift. Only present when `with_guiding_shift`
-              is True.
-            - topleft_centripetal_shift (Tensor): Ground truth top-left
-              corner centripetal shift. Only present when
-              `with_centripetal_shift` is True.
-            - bottomright_centripetal_shift (Tensor): Ground truth
-              bottom-right corner centripetal shift. Only present when
-              `with_centripetal_shift` is True.
- """
- batch_size, _, height, width = feat_shape
- img_h, img_w = img_shape[:2]
-
- width_ratio = float(width / img_w)
- height_ratio = float(height / img_h)
-
- gt_tl_heatmap = gt_bboxes[-1].new_zeros(
- [batch_size, self.num_classes, height, width])
- gt_br_heatmap = gt_bboxes[-1].new_zeros(
- [batch_size, self.num_classes, height, width])
- gt_tl_offset = gt_bboxes[-1].new_zeros([batch_size, 2, height, width])
- gt_br_offset = gt_bboxes[-1].new_zeros([batch_size, 2, height, width])
-
- if with_corner_emb:
- match = []
-
- # Guiding shift is a kind of offset, from center to corner
- if with_guiding_shift:
- gt_tl_guiding_shift = gt_bboxes[-1].new_zeros(
- [batch_size, 2, height, width])
- gt_br_guiding_shift = gt_bboxes[-1].new_zeros(
- [batch_size, 2, height, width])
- # Centripetal shift is also a kind of offset, from center to corner
- # and normalized by log.
- if with_centripetal_shift:
- gt_tl_centripetal_shift = gt_bboxes[-1].new_zeros(
- [batch_size, 2, height, width])
- gt_br_centripetal_shift = gt_bboxes[-1].new_zeros(
- [batch_size, 2, height, width])
-
- for batch_id in range(batch_size):
- # Ground truth of corner embedding per image is a list of coord set
- corner_match = []
- for box_id in range(len(gt_labels[batch_id])):
- left, top, right, bottom = gt_bboxes[batch_id][box_id]
- center_x = (left + right) / 2.0
- center_y = (top + bottom) / 2.0
- label = gt_labels[batch_id][box_id]
-
- # Use coords in the feature level to generate ground truth
- scale_left = left * width_ratio
- scale_right = right * width_ratio
- scale_top = top * height_ratio
- scale_bottom = bottom * height_ratio
- scale_center_x = center_x * width_ratio
- scale_center_y = center_y * height_ratio
-
- # Int coords on feature map/ground truth tensor
- left_idx = int(min(scale_left, width - 1))
- right_idx = int(min(scale_right, width - 1))
- top_idx = int(min(scale_top, height - 1))
- bottom_idx = int(min(scale_bottom, height - 1))
-
- # Generate gaussian heatmap
- scale_box_width = ceil(scale_right - scale_left)
- scale_box_height = ceil(scale_bottom - scale_top)
- radius = gaussian_radius((scale_box_height, scale_box_width),
- min_overlap=0.3)
- radius = max(0, int(radius))
- gt_tl_heatmap[batch_id, label] = gen_gaussian_target(
- gt_tl_heatmap[batch_id, label], [left_idx, top_idx],
- radius)
- gt_br_heatmap[batch_id, label] = gen_gaussian_target(
- gt_br_heatmap[batch_id, label], [right_idx, bottom_idx],
- radius)
-
- # Generate corner offset
- left_offset = scale_left - left_idx
- top_offset = scale_top - top_idx
- right_offset = scale_right - right_idx
- bottom_offset = scale_bottom - bottom_idx
- gt_tl_offset[batch_id, 0, top_idx, left_idx] = left_offset
- gt_tl_offset[batch_id, 1, top_idx, left_idx] = top_offset
- gt_br_offset[batch_id, 0, bottom_idx, right_idx] = right_offset
- gt_br_offset[batch_id, 1, bottom_idx,
- right_idx] = bottom_offset
-
- # Generate corner embedding
- if with_corner_emb:
- corner_match.append([[top_idx, left_idx],
- [bottom_idx, right_idx]])
- # Generate guiding shift
- if with_guiding_shift:
- gt_tl_guiding_shift[batch_id, 0, top_idx,
- left_idx] = scale_center_x - left_idx
- gt_tl_guiding_shift[batch_id, 1, top_idx,
- left_idx] = scale_center_y - top_idx
- gt_br_guiding_shift[batch_id, 0, bottom_idx,
- right_idx] = right_idx - scale_center_x
- gt_br_guiding_shift[
- batch_id, 1, bottom_idx,
- right_idx] = bottom_idx - scale_center_y
- # Generate centripetal shift
- if with_centripetal_shift:
- gt_tl_centripetal_shift[batch_id, 0, top_idx,
- left_idx] = log(scale_center_x -
- scale_left)
- gt_tl_centripetal_shift[batch_id, 1, top_idx,
- left_idx] = log(scale_center_y -
- scale_top)
- gt_br_centripetal_shift[batch_id, 0, bottom_idx,
- right_idx] = log(scale_right -
- scale_center_x)
- gt_br_centripetal_shift[batch_id, 1, bottom_idx,
- right_idx] = log(scale_bottom -
- scale_center_y)
-
- if with_corner_emb:
- match.append(corner_match)
-
- target_result = dict(
- topleft_heatmap=gt_tl_heatmap,
- topleft_offset=gt_tl_offset,
- bottomright_heatmap=gt_br_heatmap,
- bottomright_offset=gt_br_offset)
-
- if with_corner_emb:
- target_result.update(corner_embedding=match)
- if with_guiding_shift:
- target_result.update(
- topleft_guiding_shift=gt_tl_guiding_shift,
- bottomright_guiding_shift=gt_br_guiding_shift)
- if with_centripetal_shift:
- target_result.update(
- topleft_centripetal_shift=gt_tl_centripetal_shift,
- bottomright_centripetal_shift=gt_br_centripetal_shift)
-
- return target_result
-
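To make the coordinate bookkeeping in `get_targets` above concrete, here is a small standalone trace of one ground-truth box being mapped from image space onto the feature map (plain Python, illustrative values only; not part of the head):

```python
# A 512x512 image with a 128x128 output feature map (stride 4).
img_h = img_w = 512
feat_h = feat_w = 128
width_ratio, height_ratio = feat_w / img_w, feat_h / img_h        # 0.25, 0.25

left, top, right, bottom = 101.0, 62.0, 301.0, 263.0              # box in image coords
scale_left, scale_top = left * width_ratio, top * height_ratio            # 25.25, 15.5
scale_right, scale_bottom = right * width_ratio, bottom * height_ratio    # 75.25, 65.75

# Integer corner locations on the feature map (clamped to the map size).
left_idx, top_idx = int(min(scale_left, feat_w - 1)), int(min(scale_top, feat_h - 1))
right_idx, bottom_idx = int(min(scale_right, feat_w - 1)), int(min(scale_bottom, feat_h - 1))

# Sub-pixel offsets that the offset branch is trained to regress.
tl_offset = (scale_left - left_idx, scale_top - top_idx)           # (0.25, 0.5)
br_offset = (scale_right - right_idx, scale_bottom - bottom_idx)   # (0.25, 0.75)
print((left_idx, top_idx), (right_idx, bottom_idx), tl_offset, br_offset)
# (25, 15) (75, 65) (0.25, 0.5) (0.25, 0.75)
```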
- def loss(self,
- tl_heats,
- br_heats,
- tl_embs,
- br_embs,
- tl_offs,
- br_offs,
- gt_bboxes,
- gt_labels,
- img_metas,
- gt_bboxes_ignore=None):
- """Compute losses of the head.
-
- Args:
- tl_heats (list[Tensor]): Top-left corner heatmaps for each level
- with shape (N, num_classes, H, W).
- br_heats (list[Tensor]): Bottom-right corner heatmaps for each
- level with shape (N, num_classes, H, W).
- tl_embs (list[Tensor]): Top-left corner embeddings for each level
- with shape (N, corner_emb_channels, H, W).
- br_embs (list[Tensor]): Bottom-right corner embeddings for each
- level with shape (N, corner_emb_channels, H, W).
- tl_offs (list[Tensor]): Top-left corner offsets for each level
- with shape (N, corner_offset_channels, H, W).
- br_offs (list[Tensor]): Bottom-right corner offsets for each level
- with shape (N, corner_offset_channels, H, W).
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [left, top, right, bottom] format.
- gt_labels (list[Tensor]): Class indices corresponding to each box.
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (list[Tensor] | None): Specify which bounding
- boxes can be ignored when computing the loss.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components. Containing the
- following losses:
-
- - det_loss (list[Tensor]): Corner keypoint losses of all
- feature levels.
- - pull_loss (list[Tensor]): Part one of AssociativeEmbedding
- losses of all feature levels.
- - push_loss (list[Tensor]): Part two of AssociativeEmbedding
- losses of all feature levels.
- - off_loss (list[Tensor]): Corner offset losses of all feature
- levels.
- """
- targets = self.get_targets(
- gt_bboxes,
- gt_labels,
- tl_heats[-1].shape,
- img_metas[0]['pad_shape'],
- with_corner_emb=self.with_corner_emb)
- mlvl_targets = [targets for _ in range(self.num_feat_levels)]
- det_losses, pull_losses, push_losses, off_losses = multi_apply(
- self.loss_single, tl_heats, br_heats, tl_embs, br_embs, tl_offs,
- br_offs, mlvl_targets)
- loss_dict = dict(det_loss=det_losses, off_loss=off_losses)
- if self.with_corner_emb:
- loss_dict.update(pull_loss=pull_losses, push_loss=push_losses)
- return loss_dict
-
- def loss_single(self, tl_hmp, br_hmp, tl_emb, br_emb, tl_off, br_off,
- targets):
- """Compute losses for single level.
-
- Args:
- tl_hmp (Tensor): Top-left corner heatmap for current level with
- shape (N, num_classes, H, W).
- br_hmp (Tensor): Bottom-right corner heatmap for current level with
- shape (N, num_classes, H, W).
- tl_emb (Tensor): Top-left corner embedding for current level with
- shape (N, corner_emb_channels, H, W).
- br_emb (Tensor): Bottom-right corner embedding for current level
- with shape (N, corner_emb_channels, H, W).
- tl_off (Tensor): Top-left corner offset for current level with
- shape (N, corner_offset_channels, H, W).
- br_off (Tensor): Bottom-right corner offset for current level with
- shape (N, corner_offset_channels, H, W).
- targets (dict): Corner target generated by `get_targets`.
-
- Returns:
-            tuple[torch.Tensor]: Losses of the head's different branches
- containing the following losses:
-
- - det_loss (Tensor): Corner keypoint loss.
- - pull_loss (Tensor): Part one of AssociativeEmbedding loss.
- - push_loss (Tensor): Part two of AssociativeEmbedding loss.
- - off_loss (Tensor): Corner offset loss.
- """
- gt_tl_hmp = targets['topleft_heatmap']
- gt_br_hmp = targets['bottomright_heatmap']
- gt_tl_off = targets['topleft_offset']
- gt_br_off = targets['bottomright_offset']
- gt_embedding = targets['corner_embedding']
-
- # Detection loss
- tl_det_loss = self.loss_heatmap(
- tl_hmp.sigmoid(),
- gt_tl_hmp,
- avg_factor=max(1,
- gt_tl_hmp.eq(1).sum()))
- br_det_loss = self.loss_heatmap(
- br_hmp.sigmoid(),
- gt_br_hmp,
- avg_factor=max(1,
- gt_br_hmp.eq(1).sum()))
- det_loss = (tl_det_loss + br_det_loss) / 2.0
-
- # AssociativeEmbedding loss
- if self.with_corner_emb and self.loss_embedding is not None:
- pull_loss, push_loss = self.loss_embedding(tl_emb, br_emb,
- gt_embedding)
- else:
- pull_loss, push_loss = None, None
-
- # Offset loss
- # We only compute the offset loss at the real corner position.
- # The value of real corner would be 1 in heatmap ground truth.
- # The mask is computed in class agnostic mode and its shape is
- # batch * 1 * width * height.
- tl_off_mask = gt_tl_hmp.eq(1).sum(1).gt(0).unsqueeze(1).type_as(
- gt_tl_hmp)
- br_off_mask = gt_br_hmp.eq(1).sum(1).gt(0).unsqueeze(1).type_as(
- gt_br_hmp)
- tl_off_loss = self.loss_offset(
- tl_off,
- gt_tl_off,
- tl_off_mask,
- avg_factor=max(1, tl_off_mask.sum()))
- br_off_loss = self.loss_offset(
- br_off,
- gt_br_off,
- br_off_mask,
- avg_factor=max(1, br_off_mask.sum()))
-
- off_loss = (tl_off_loss + br_off_loss) / 2.0
-
- return det_loss, pull_loss, push_loss, off_loss
-
- def get_bboxes(self,
- tl_heats,
- br_heats,
- tl_embs,
- br_embs,
- tl_offs,
- br_offs,
- img_metas,
- rescale=False,
- with_nms=True):
- """Transform network output for a batch into bbox predictions.
-
- Args:
- tl_heats (list[Tensor]): Top-left corner heatmaps for each level
- with shape (N, num_classes, H, W).
- br_heats (list[Tensor]): Bottom-right corner heatmaps for each
- level with shape (N, num_classes, H, W).
- tl_embs (list[Tensor]): Top-left corner embeddings for each level
- with shape (N, corner_emb_channels, H, W).
- br_embs (list[Tensor]): Bottom-right corner embeddings for each
- level with shape (N, corner_emb_channels, H, W).
- tl_offs (list[Tensor]): Top-left corner offsets for each level
- with shape (N, corner_offset_channels, H, W).
- br_offs (list[Tensor]): Bottom-right corner offsets for each level
- with shape (N, corner_offset_channels, H, W).
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- rescale (bool): If True, return boxes in original image space.
- Default: False.
- with_nms (bool): If True, do nms before return boxes.
- Default: True.
- """
- assert tl_heats[-1].shape[0] == br_heats[-1].shape[0] == len(img_metas)
- result_list = []
- for img_id in range(len(img_metas)):
- result_list.append(
- self._get_bboxes_single(
- tl_heats[-1][img_id:img_id + 1, :],
- br_heats[-1][img_id:img_id + 1, :],
- tl_offs[-1][img_id:img_id + 1, :],
- br_offs[-1][img_id:img_id + 1, :],
- img_metas[img_id],
- tl_emb=tl_embs[-1][img_id:img_id + 1, :],
- br_emb=br_embs[-1][img_id:img_id + 1, :],
- rescale=rescale,
- with_nms=with_nms))
-
- return result_list
-
- def _get_bboxes_single(self,
- tl_heat,
- br_heat,
- tl_off,
- br_off,
- img_meta,
- tl_emb=None,
- br_emb=None,
- tl_centripetal_shift=None,
- br_centripetal_shift=None,
- rescale=False,
- with_nms=True):
- """Transform outputs for a single batch item into bbox predictions.
-
- Args:
- tl_heat (Tensor): Top-left corner heatmap for current level with
- shape (N, num_classes, H, W).
- br_heat (Tensor): Bottom-right corner heatmap for current level
- with shape (N, num_classes, H, W).
- tl_off (Tensor): Top-left corner offset for current level with
- shape (N, corner_offset_channels, H, W).
- br_off (Tensor): Bottom-right corner offset for current level with
- shape (N, corner_offset_channels, H, W).
- img_meta (dict): Meta information of current image, e.g.,
- image size, scaling factor, etc.
- tl_emb (Tensor): Top-left corner embedding for current level with
- shape (N, corner_emb_channels, H, W).
- br_emb (Tensor): Bottom-right corner embedding for current level
- with shape (N, corner_emb_channels, H, W).
- tl_centripetal_shift: Top-left corner's centripetal shift for
- current level with shape (N, 2, H, W).
- br_centripetal_shift: Bottom-right corner's centripetal shift for
- current level with shape (N, 2, H, W).
- rescale (bool): If True, return boxes in original image space.
- Default: False.
- with_nms (bool): If True, do nms before return boxes.
- Default: True.
- """
- if isinstance(img_meta, (list, tuple)):
- img_meta = img_meta[0]
-
- batch_bboxes, batch_scores, batch_clses = self.decode_heatmap(
- tl_heat=tl_heat.sigmoid(),
- br_heat=br_heat.sigmoid(),
- tl_off=tl_off,
- br_off=br_off,
- tl_emb=tl_emb,
- br_emb=br_emb,
- tl_centripetal_shift=tl_centripetal_shift,
- br_centripetal_shift=br_centripetal_shift,
- img_meta=img_meta,
- k=self.test_cfg.corner_topk,
- kernel=self.test_cfg.local_maximum_kernel,
- distance_threshold=self.test_cfg.distance_threshold)
-
- if rescale:
- batch_bboxes /= batch_bboxes.new_tensor(img_meta['scale_factor'])
-
- bboxes = batch_bboxes.view([-1, 4])
- scores = batch_scores.view([-1, 1])
- clses = batch_clses.view([-1, 1])
-
- idx = scores.argsort(dim=0, descending=True)
- bboxes = bboxes[idx].view([-1, 4])
- scores = scores[idx].view(-1)
- clses = clses[idx].view(-1)
-
- detections = torch.cat([bboxes, scores.unsqueeze(-1)], -1)
- keepinds = (detections[:, -1] > -0.1)
- detections = detections[keepinds]
- labels = clses[keepinds]
-
- if with_nms:
- detections, labels = self._bboxes_nms(detections, labels,
- self.test_cfg)
-
- return detections, labels
-
- def _bboxes_nms(self, bboxes, labels, cfg):
- if labels.numel() == 0:
- return bboxes, labels
-
- if 'nms_cfg' in cfg:
- warning.warn('nms_cfg in test_cfg will be deprecated. '
- 'Please rename it as nms')
- if 'nms' not in cfg:
- cfg.nms = cfg.nms_cfg
-
- out_bboxes, keep = batched_nms(bboxes[:, :4], bboxes[:, -1], labels,
- cfg.nms)
- out_labels = labels[keep]
-
- if len(out_bboxes) > 0:
- idx = torch.argsort(out_bboxes[:, -1], descending=True)
- idx = idx[:cfg.max_per_img]
- out_bboxes = out_bboxes[idx]
- out_labels = out_labels[idx]
-
- return out_bboxes, out_labels
-
- def _gather_feat(self, feat, ind, mask=None):
- """Gather feature according to index.
-
- Args:
- feat (Tensor): Target feature map.
- ind (Tensor): Target coord index.
- mask (Tensor | None): Mask of featuremap. Default: None.
-
- Returns:
- feat (Tensor): Gathered feature.
- """
- dim = feat.size(2)
- ind = ind.unsqueeze(2).repeat(1, 1, dim)
- feat = feat.gather(1, ind)
- if mask is not None:
- mask = mask.unsqueeze(2).expand_as(feat)
- feat = feat[mask]
- feat = feat.view(-1, dim)
- return feat
-
- def _local_maximum(self, heat, kernel=3):
- """Extract local maximum pixel with given kernel.
-
- Args:
- heat (Tensor): Target heatmap.
- kernel (int): Kernel size of max pooling. Default: 3.
-
- Returns:
-            heat (Tensor): A heatmap where local maximum pixels maintain their
-                own values and other positions are 0.
- """
- pad = (kernel - 1) // 2
- hmax = F.max_pool2d(heat, kernel, stride=1, padding=pad)
- keep = (hmax == heat).float()
- return heat * keep
-
- def _transpose_and_gather_feat(self, feat, ind):
- """Transpose and gather feature according to index.
-
- Args:
- feat (Tensor): Target feature map.
- ind (Tensor): Target coord index.
-
- Returns:
- feat (Tensor): Transposed and gathered feature.
- """
- feat = feat.permute(0, 2, 3, 1).contiguous()
- feat = feat.view(feat.size(0), -1, feat.size(3))
- feat = self._gather_feat(feat, ind)
- return feat
-
- def _topk(self, scores, k=20):
- """Get top k positions from heatmap.
-
- Args:
- scores (Tensor): Target heatmap with shape
- [batch, num_classes, height, width].
- k (int): Target number. Default: 20.
-
- Returns:
- tuple[torch.Tensor]: Scores, indexes, categories and coords of
- topk keypoint. Containing following Tensors:
-
- - topk_scores (Tensor): Max scores of each topk keypoint.
- - topk_inds (Tensor): Indexes of each topk keypoint.
- - topk_clses (Tensor): Categories of each topk keypoint.
- - topk_ys (Tensor): Y-coord of each topk keypoint.
- - topk_xs (Tensor): X-coord of each topk keypoint.
- """
- batch, _, height, width = scores.size()
- topk_scores, topk_inds = torch.topk(scores.view(batch, -1), k)
- topk_clses = topk_inds // (height * width)
- topk_inds = topk_inds % (height * width)
- topk_ys = topk_inds // width
- topk_xs = (topk_inds % width).int().float()
- return topk_scores, topk_inds, topk_clses, topk_ys, topk_xs
-
- def decode_heatmap(self,
- tl_heat,
- br_heat,
- tl_off,
- br_off,
- tl_emb=None,
- br_emb=None,
- tl_centripetal_shift=None,
- br_centripetal_shift=None,
- img_meta=None,
- k=100,
- kernel=3,
- distance_threshold=0.5,
- num_dets=1000):
- """Transform outputs for a single batch item into raw bbox predictions.
-
- Args:
- tl_heat (Tensor): Top-left corner heatmap for current level with
- shape (N, num_classes, H, W).
- br_heat (Tensor): Bottom-right corner heatmap for current level
- with shape (N, num_classes, H, W).
- tl_off (Tensor): Top-left corner offset for current level with
- shape (N, corner_offset_channels, H, W).
- br_off (Tensor): Bottom-right corner offset for current level with
- shape (N, corner_offset_channels, H, W).
- tl_emb (Tensor | None): Top-left corner embedding for current
- level with shape (N, corner_emb_channels, H, W).
- br_emb (Tensor | None): Bottom-right corner embedding for current
- level with shape (N, corner_emb_channels, H, W).
- tl_centripetal_shift (Tensor | None): Top-left centripetal shift
- for current level with shape (N, 2, H, W).
- br_centripetal_shift (Tensor | None): Bottom-right centripetal
- shift for current level with shape (N, 2, H, W).
- img_meta (dict): Meta information of current image, e.g.,
- image size, scaling factor, etc.
- k (int): Get top k corner keypoints from heatmap.
- kernel (int): Max pooling kernel for extract local maximum pixels.
- distance_threshold (float): Distance threshold. Top-left and
- bottom-right corner keypoints with feature distance less than
-                the threshold will be regarded as keypoints from the same object.
- num_dets (int): Num of raw boxes before doing nms.
-
- Returns:
- tuple[torch.Tensor]: Decoded output of CornerHead, containing the
- following Tensors:
-
- - bboxes (Tensor): Coords of each box.
- - scores (Tensor): Scores of each box.
- - clses (Tensor): Categories of each box.
- """
- with_embedding = tl_emb is not None and br_emb is not None
- with_centripetal_shift = (
- tl_centripetal_shift is not None
- and br_centripetal_shift is not None)
- assert with_embedding + with_centripetal_shift == 1
- batch, _, height, width = tl_heat.size()
- inp_h, inp_w, _ = img_meta['pad_shape']
-
- # perform nms on heatmaps
- tl_heat = self._local_maximum(tl_heat, kernel=kernel)
- br_heat = self._local_maximum(br_heat, kernel=kernel)
-
- tl_scores, tl_inds, tl_clses, tl_ys, tl_xs = self._topk(tl_heat, k=k)
- br_scores, br_inds, br_clses, br_ys, br_xs = self._topk(br_heat, k=k)
-
-        # We use repeat instead of expand here because expand is a
-        # shallow-copy function and can therefore produce unexpected results at
-        # test time. Using expand decreases mAP by about 10% during testing
-        # compared to repeat.
- tl_ys = tl_ys.view(batch, k, 1).repeat(1, 1, k)
- tl_xs = tl_xs.view(batch, k, 1).repeat(1, 1, k)
- br_ys = br_ys.view(batch, 1, k).repeat(1, k, 1)
- br_xs = br_xs.view(batch, 1, k).repeat(1, k, 1)
-
- tl_off = self._transpose_and_gather_feat(tl_off, tl_inds)
- tl_off = tl_off.view(batch, k, 1, 2)
- br_off = self._transpose_and_gather_feat(br_off, br_inds)
- br_off = br_off.view(batch, 1, k, 2)
-
- tl_xs = tl_xs + tl_off[..., 0]
- tl_ys = tl_ys + tl_off[..., 1]
- br_xs = br_xs + br_off[..., 0]
- br_ys = br_ys + br_off[..., 1]
-
- if with_centripetal_shift:
- tl_centripetal_shift = self._transpose_and_gather_feat(
- tl_centripetal_shift, tl_inds).view(batch, k, 1, 2).exp()
- br_centripetal_shift = self._transpose_and_gather_feat(
- br_centripetal_shift, br_inds).view(batch, 1, k, 2).exp()
-
- tl_ctxs = tl_xs + tl_centripetal_shift[..., 0]
- tl_ctys = tl_ys + tl_centripetal_shift[..., 1]
- br_ctxs = br_xs - br_centripetal_shift[..., 0]
- br_ctys = br_ys - br_centripetal_shift[..., 1]
-
- # all possible boxes based on top k corners (ignoring class)
- tl_xs *= (inp_w / width)
- tl_ys *= (inp_h / height)
- br_xs *= (inp_w / width)
- br_ys *= (inp_h / height)
-
- if with_centripetal_shift:
- tl_ctxs *= (inp_w / width)
- tl_ctys *= (inp_h / height)
- br_ctxs *= (inp_w / width)
- br_ctys *= (inp_h / height)
-
- x_off = img_meta['border'][2]
- y_off = img_meta['border'][0]
-
- tl_xs -= x_off
- tl_ys -= y_off
- br_xs -= x_off
- br_ys -= y_off
-
- tl_xs *= tl_xs.gt(0.0).type_as(tl_xs)
- tl_ys *= tl_ys.gt(0.0).type_as(tl_ys)
- br_xs *= br_xs.gt(0.0).type_as(br_xs)
- br_ys *= br_ys.gt(0.0).type_as(br_ys)
-
- bboxes = torch.stack((tl_xs, tl_ys, br_xs, br_ys), dim=3)
- area_bboxes = ((br_xs - tl_xs) * (br_ys - tl_ys)).abs()
-
- if with_centripetal_shift:
- tl_ctxs -= x_off
- tl_ctys -= y_off
- br_ctxs -= x_off
- br_ctys -= y_off
-
- tl_ctxs *= tl_ctxs.gt(0.0).type_as(tl_ctxs)
- tl_ctys *= tl_ctys.gt(0.0).type_as(tl_ctys)
- br_ctxs *= br_ctxs.gt(0.0).type_as(br_ctxs)
- br_ctys *= br_ctys.gt(0.0).type_as(br_ctys)
-
- ct_bboxes = torch.stack((tl_ctxs, tl_ctys, br_ctxs, br_ctys),
- dim=3)
- area_ct_bboxes = ((br_ctxs - tl_ctxs) * (br_ctys - tl_ctys)).abs()
-
- rcentral = torch.zeros_like(ct_bboxes)
- # magic nums from paper section 4.1
- mu = torch.ones_like(area_bboxes) / 2.4
- mu[area_bboxes > 3500] = 1 / 2.1 # large bbox have smaller mu
-
- bboxes_center_x = (bboxes[..., 0] + bboxes[..., 2]) / 2
- bboxes_center_y = (bboxes[..., 1] + bboxes[..., 3]) / 2
- rcentral[..., 0] = bboxes_center_x - mu * (bboxes[..., 2] -
- bboxes[..., 0]) / 2
- rcentral[..., 1] = bboxes_center_y - mu * (bboxes[..., 3] -
- bboxes[..., 1]) / 2
- rcentral[..., 2] = bboxes_center_x + mu * (bboxes[..., 2] -
- bboxes[..., 0]) / 2
- rcentral[..., 3] = bboxes_center_y + mu * (bboxes[..., 3] -
- bboxes[..., 1]) / 2
- area_rcentral = ((rcentral[..., 2] - rcentral[..., 0]) *
- (rcentral[..., 3] - rcentral[..., 1])).abs()
- dists = area_ct_bboxes / area_rcentral
-
- tl_ctx_inds = (ct_bboxes[..., 0] <= rcentral[..., 0]) | (
- ct_bboxes[..., 0] >= rcentral[..., 2])
- tl_cty_inds = (ct_bboxes[..., 1] <= rcentral[..., 1]) | (
- ct_bboxes[..., 1] >= rcentral[..., 3])
- br_ctx_inds = (ct_bboxes[..., 2] <= rcentral[..., 0]) | (
- ct_bboxes[..., 2] >= rcentral[..., 2])
- br_cty_inds = (ct_bboxes[..., 3] <= rcentral[..., 1]) | (
- ct_bboxes[..., 3] >= rcentral[..., 3])
-
- if with_embedding:
- tl_emb = self._transpose_and_gather_feat(tl_emb, tl_inds)
- tl_emb = tl_emb.view(batch, k, 1)
- br_emb = self._transpose_and_gather_feat(br_emb, br_inds)
- br_emb = br_emb.view(batch, 1, k)
- dists = torch.abs(tl_emb - br_emb)
-
- tl_scores = tl_scores.view(batch, k, 1).repeat(1, 1, k)
- br_scores = br_scores.view(batch, 1, k).repeat(1, k, 1)
-
- scores = (tl_scores + br_scores) / 2 # scores for all possible boxes
-
- # tl and br should have same class
- tl_clses = tl_clses.view(batch, k, 1).repeat(1, 1, k)
- br_clses = br_clses.view(batch, 1, k).repeat(1, k, 1)
- cls_inds = (tl_clses != br_clses)
-
- # reject boxes based on distances
- dist_inds = dists > distance_threshold
-
- # reject boxes based on widths and heights
- width_inds = (br_xs <= tl_xs)
- height_inds = (br_ys <= tl_ys)
-
- scores[cls_inds] = -1
- scores[width_inds] = -1
- scores[height_inds] = -1
- scores[dist_inds] = -1
- if with_centripetal_shift:
- scores[tl_ctx_inds] = -1
- scores[tl_cty_inds] = -1
- scores[br_ctx_inds] = -1
- scores[br_cty_inds] = -1
-
- scores = scores.view(batch, -1)
- scores, inds = torch.topk(scores, num_dets)
- scores = scores.unsqueeze(2)
-
- bboxes = bboxes.view(batch, -1, 4)
- bboxes = self._gather_feat(bboxes, inds)
-
- clses = tl_clses.contiguous().view(batch, -1, 1)
- clses = self._gather_feat(clses, inds).float()
-
- return bboxes, scores, clses
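
For readers skimming this deleted head, the core of `decode_heatmap` is a brute-force pairing of the top-k top-left corners with the top-k bottom-right corners, filtered by class agreement, box geometry and embedding distance. Below is a minimal stand-alone sketch of just that pairing step on toy tensors; shapes and the 0.5 threshold are made up for illustration and this is not the mmdetection code itself.

```python
import torch

# Toy corner pairing: k candidate top-left and k candidate bottom-right corners.
k = 4
tl_xs, tl_ys = torch.rand(k), torch.rand(k)      # top-left corner coords
br_xs, br_ys = torch.rand(k), torch.rand(k)      # bottom-right corner coords
tl_cls = torch.randint(0, 3, (k,))               # predicted class per corner
br_cls = torch.randint(0, 3, (k,))
tl_emb, br_emb = torch.rand(k), torch.rand(k)    # 1-d associative embeddings

# Broadcast every top-left corner against every bottom-right corner.
dists = (tl_emb[:, None] - br_emb[None, :]).abs()
same_cls = tl_cls[:, None] == br_cls[None, :]
valid_box = (br_xs[None, :] > tl_xs[:, None]) & (br_ys[None, :] > tl_ys[:, None])

# Keep pairs that agree on class, form a valid box, and have close embeddings.
keep = same_cls & valid_box & (dists < 0.5)
pairs = keep.nonzero(as_tuple=False)  # rows of (tl_index, br_index)
print(pairs)
```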
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_80k_pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_80k_pascal_context.py
deleted file mode 100644
index cf315a4f0e6f397768572c590a634cc1b9d298a9..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_80k_pascal_context.py
+++ /dev/null
@@ -1,8 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_hr18.py', '../_base_/datasets/pascal_context.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
-model = dict(
- decode_head=dict(num_classes=60),
- test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
-optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
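
The deleted config above only overrides a few fields; everything else is inherited from its `_base_` files. A hedged sketch of how such a config is resolved, assuming an mmcv (<2.0) installation and that the file still exists at this relative path:

```python
# Hypothetical usage sketch, not part of the deleted repo.
from mmcv import Config

cfg = Config.fromfile('configs/hrnet/fcn_hr18_480x480_80k_pascal_context.py')
# The _base_ files are merged first, then overridden by this file, so
# decode_head.num_classes resolves to 60 and the optimizer lr to 0.004.
print(cfg.model.decode_head.num_classes)
print(cfg.optimizer.lr)
```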
diff --git a/spaces/HarlanHong/DaGAN/modules/model.py b/spaces/HarlanHong/DaGAN/modules/model.py
deleted file mode 100644
index 6ea7bf5799b8cf6c4f9c6c24ab29efee036f2e05..0000000000000000000000000000000000000000
--- a/spaces/HarlanHong/DaGAN/modules/model.py
+++ /dev/null
@@ -1,362 +0,0 @@
-from torch import nn
-import torch
-import torch.nn.functional as F
-from modules.util import AntiAliasInterpolation2d, make_coordinate_grid
-from torchvision import models
-import numpy as np
-from torch.autograd import grad
-import pdb
-import depth
-
-class Vgg19(torch.nn.Module):
- """
- Vgg19 network for perceptual loss. See Sec 3.3.
- """
- def __init__(self, requires_grad=False):
- super(Vgg19, self).__init__()
- vgg_pretrained_features = models.vgg19(pretrained=True).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- for x in range(2):
- self.slice1.add_module(str(x), vgg_pretrained_features[x])
- for x in range(2, 7):
- self.slice2.add_module(str(x), vgg_pretrained_features[x])
- for x in range(7, 12):
- self.slice3.add_module(str(x), vgg_pretrained_features[x])
- for x in range(12, 21):
- self.slice4.add_module(str(x), vgg_pretrained_features[x])
- for x in range(21, 30):
- self.slice5.add_module(str(x), vgg_pretrained_features[x])
-
- self.mean = torch.nn.Parameter(data=torch.Tensor(np.array([0.485, 0.456, 0.406]).reshape((1, 3, 1, 1))),
- requires_grad=False)
- self.std = torch.nn.Parameter(data=torch.Tensor(np.array([0.229, 0.224, 0.225]).reshape((1, 3, 1, 1))),
- requires_grad=False)
-
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, X):
- X = (X - self.mean) / self.std
- h_relu1 = self.slice1(X)
- h_relu2 = self.slice2(h_relu1)
- h_relu3 = self.slice3(h_relu2)
- h_relu4 = self.slice4(h_relu3)
- h_relu5 = self.slice5(h_relu4)
- out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5]
- return out
-
-
-class ImagePyramide(torch.nn.Module):
- """
- Create image pyramide for computing pyramide perceptual loss. See Sec 3.3
- """
- def __init__(self, scales, num_channels):
- super(ImagePyramide, self).__init__()
- downs = {}
- for scale in scales:
- downs[str(scale).replace('.', '-')] = AntiAliasInterpolation2d(num_channels, scale)
- self.downs = nn.ModuleDict(downs)
-
- def forward(self, x):
- out_dict = {}
- for scale, down_module in self.downs.items():
- out_dict['prediction_' + str(scale).replace('-', '.')] = down_module(x)
- return out_dict
-
-
-class Transform:
- """
- Random tps transformation for equivariance constraints. See Sec 3.3
- """
- def __init__(self, bs, **kwargs):
- noise = torch.normal(mean=0, std=kwargs['sigma_affine'] * torch.ones([bs, 2, 3]))
- self.theta = noise + torch.eye(2, 3).view(1, 2, 3)
- self.bs = bs
-
- if ('sigma_tps' in kwargs) and ('points_tps' in kwargs):
- self.tps = True
- self.control_points = make_coordinate_grid((kwargs['points_tps'], kwargs['points_tps']), type=noise.type())
- self.control_points = self.control_points.unsqueeze(0)
- self.control_params = torch.normal(mean=0,
- std=kwargs['sigma_tps'] * torch.ones([bs, 1, kwargs['points_tps'] ** 2]))
- else:
- self.tps = False
-
- def transform_frame(self, frame):
- grid = make_coordinate_grid(frame.shape[2:], type=frame.type()).unsqueeze(0)
- grid = grid.view(1, frame.shape[2] * frame.shape[3], 2)
- grid = self.warp_coordinates(grid).view(self.bs, frame.shape[2], frame.shape[3], 2)
- return F.grid_sample(frame, grid, padding_mode="reflection")
-
- def warp_coordinates(self, coordinates):
- theta = self.theta.type(coordinates.type())
- theta = theta.unsqueeze(1)
- transformed = torch.matmul(theta[:, :, :, :2], coordinates.unsqueeze(-1)) + theta[:, :, :, 2:]
- transformed = transformed.squeeze(-1)
-
- if self.tps:
- control_points = self.control_points.type(coordinates.type())
- control_params = self.control_params.type(coordinates.type())
- distances = coordinates.view(coordinates.shape[0], -1, 1, 2) - control_points.view(1, 1, -1, 2)
- distances = torch.abs(distances).sum(-1)
-
- result = distances ** 2
- result = result * torch.log(distances + 1e-6)
- result = result * control_params
- result = result.sum(dim=2).view(self.bs, coordinates.shape[1], 1)
- transformed = transformed + result
-
- return transformed
-
- def jacobian(self, coordinates):
- new_coordinates = self.warp_coordinates(coordinates)
- grad_x = grad(new_coordinates[..., 0].sum(), coordinates, create_graph=True)
- grad_y = grad(new_coordinates[..., 1].sum(), coordinates, create_graph=True)
- jacobian = torch.cat([grad_x[0].unsqueeze(-2), grad_y[0].unsqueeze(-2)], dim=-2)
- return jacobian
-
-
-def detach_kp(kp):
- return {key: value.detach() for key, value in kp.items()}
-
-
-class GeneratorFullModel(torch.nn.Module):
- """
- Merge all generator related updates into single model for better multi-gpu usage
- """
-
- def __init__(self, kp_extractor, generator, discriminator, train_params,opt):
- super(GeneratorFullModel, self).__init__()
- self.kp_extractor = kp_extractor
- self.generator = generator
- self.discriminator = discriminator
- self.train_params = train_params
- self.scales = train_params['scales']
- self.disc_scales = self.discriminator.module.scales
- self.pyramid = ImagePyramide(self.scales, generator.module.num_channels)
- if torch.cuda.is_available():
- self.pyramid = self.pyramid.cuda()
- self.opt = opt
- self.loss_weights = train_params['loss_weights']
-
- if sum(self.loss_weights['perceptual']) != 0:
- self.vgg = Vgg19()
- if torch.cuda.is_available():
- self.vgg = self.vgg.cuda()
- self.depth_encoder = depth.ResnetEncoder(18, False).cuda()
- self.depth_decoder = depth.DepthDecoder(num_ch_enc=self.depth_encoder.num_ch_enc, scales=range(4)).cuda()
- loaded_dict_enc = torch.load('depth/models/weights_19/encoder.pth',map_location='cpu')
- loaded_dict_dec = torch.load('depth/models/weights_19/depth.pth',map_location='cpu')
- filtered_dict_enc = {k: v for k, v in loaded_dict_enc.items() if k in self.depth_encoder.state_dict()}
- self.depth_encoder.load_state_dict(filtered_dict_enc)
- self.depth_decoder.load_state_dict(loaded_dict_dec)
- self.set_requires_grad(self.depth_encoder, False)
- self.set_requires_grad(self.depth_decoder, False)
- self.depth_decoder.eval()
- self.depth_encoder.eval()
- def set_requires_grad(self, nets, requires_grad=False):
-        """Set requires_grad=False for all the networks to avoid unnecessary computations
- Parameters:
- nets (network list) -- a list of networks
- requires_grad (bool) -- whether the networks require gradients or not
- """
- if not isinstance(nets, list):
- nets = [nets]
- for net in nets:
- if net is not None:
- for param in net.parameters():
- param.requires_grad = requires_grad
- def forward(self, x):
- depth_source = None
- depth_driving = None
- outputs = self.depth_decoder(self.depth_encoder(x['source']))
- depth_source = outputs[("disp", 0)]
- outputs = self.depth_decoder(self.depth_encoder(x['driving']))
- depth_driving = outputs[("disp", 0)]
-
- if self.opt.use_depth:
- kp_source = self.kp_extractor(depth_source)
- kp_driving = self.kp_extractor(depth_driving)
- elif self.opt.rgbd:
- source = torch.cat((x['source'],depth_source),1)
- driving = torch.cat((x['driving'],depth_driving),1)
- kp_source = self.kp_extractor(source)
- kp_driving = self.kp_extractor(driving)
- else:
- kp_source = self.kp_extractor(x['source'])
- kp_driving = self.kp_extractor(x['driving'])
- generated = self.generator(x['source'], kp_source=kp_source, kp_driving=kp_driving, source_depth = depth_source, driving_depth = depth_driving)
- generated.update({'kp_source': kp_source, 'kp_driving': kp_driving})
- loss_values = {}
- pyramide_real = self.pyramid(x['driving'])
- pyramide_generated = self.pyramid(generated['prediction'])
- if sum(self.loss_weights['perceptual']) != 0:
- value_total = 0
- for scale in self.scales:
- x_vgg = self.vgg(pyramide_generated['prediction_' + str(scale)])
- y_vgg = self.vgg(pyramide_real['prediction_' + str(scale)])
-
- for i, weight in enumerate(self.loss_weights['perceptual']):
- value = torch.abs(x_vgg[i] - y_vgg[i].detach()).mean()
- value_total += self.loss_weights['perceptual'][i] * value
- loss_values['perceptual'] = value_total
-
- if self.loss_weights['generator_gan'] != 0:
-
- discriminator_maps_generated = self.discriminator(pyramide_generated, kp=detach_kp(kp_driving))
-
- discriminator_maps_real = self.discriminator(pyramide_real, kp=detach_kp(kp_driving))
- value_total = 0
- for scale in self.disc_scales:
- key = 'prediction_map_%s' % scale
- value = ((1 - discriminator_maps_generated[key]) ** 2).mean()
- value_total += self.loss_weights['generator_gan'] * value
- loss_values['gen_gan'] = value_total
-
- if sum(self.loss_weights['feature_matching']) != 0:
- value_total = 0
- for scale in self.disc_scales:
- key = 'feature_maps_%s' % scale
- for i, (a, b) in enumerate(zip(discriminator_maps_real[key], discriminator_maps_generated[key])):
- if self.loss_weights['feature_matching'][i] == 0:
- continue
- value = torch.abs(a - b).mean()
- value_total += self.loss_weights['feature_matching'][i] * value
- loss_values['feature_matching'] = value_total
-
- if (self.loss_weights['equivariance_value'] + self.loss_weights['equivariance_jacobian']) != 0:
- transform = Transform(x['driving'].shape[0], **self.train_params['transform_params'])
- transformed_frame = transform.transform_frame(x['driving'])
- if self.opt.use_depth:
- outputs = self.depth_decoder(self.depth_encoder(transformed_frame))
- depth_transform = outputs[("disp", 0)]
- transformed_kp = self.kp_extractor(depth_transform)
- elif self.opt.rgbd:
- outputs = self.depth_decoder(self.depth_encoder(transformed_frame))
- depth_transform = outputs[("disp", 0)]
- transform_img = torch.cat((transformed_frame,depth_transform),1)
- transformed_kp = self.kp_extractor(transform_img)
- else:
- transformed_kp = self.kp_extractor(transformed_frame)
-
- generated['transformed_frame'] = transformed_frame
- generated['transformed_kp'] = transformed_kp
-
- ## Value loss part
- if self.loss_weights['equivariance_value'] != 0:
- value = torch.abs(kp_driving['value'] - transform.warp_coordinates(transformed_kp['value'])).mean()
- loss_values['equivariance_value'] = self.loss_weights['equivariance_value'] * value
-
- ## jacobian loss part
- if self.loss_weights['equivariance_jacobian'] != 0:
- jacobian_transformed = torch.matmul(transform.jacobian(transformed_kp['value']),
- transformed_kp['jacobian'])
-
- normed_driving = torch.inverse(kp_driving['jacobian'])
- normed_transformed = jacobian_transformed
- value = torch.matmul(normed_driving, normed_transformed)
-
- eye = torch.eye(2).view(1, 1, 2, 2).type(value.type())
-
- value = torch.abs(eye - value).mean()
- loss_values['equivariance_jacobian'] = self.loss_weights['equivariance_jacobian'] * value
-
-
- if self.loss_weights['kp_distance']:
- bz,num_kp,kp_dim = kp_source['value'].shape
- sk = kp_source['value'].unsqueeze(2)-kp_source['value'].unsqueeze(1)
- dk = kp_driving['value'].unsqueeze(2)-kp_driving['value'].unsqueeze(1)
- source_dist_loss = (-torch.sign((torch.sqrt((sk*sk).sum(-1)+1e-8)+torch.eye(num_kp).cuda()*0.2)-0.2)+1).mean()
- driving_dist_loss = (-torch.sign((torch.sqrt((dk*dk).sum(-1)+1e-8)+torch.eye(num_kp).cuda()*0.2)-0.2)+1).mean()
- # driving_dist_loss = (torch.sign(1-(torch.sqrt((dk*dk).sum(-1)+1e-8)+torch.eye(num_kp).cuda()))+1).mean()
- value_total = self.loss_weights['kp_distance']*(source_dist_loss+driving_dist_loss)
- loss_values['kp_distance'] = value_total
- if self.loss_weights['kp_prior']:
- bz,num_kp,kp_dim = kp_source['value'].shape
- sk = kp_source['value'].unsqueeze(2)-kp_source['value'].unsqueeze(1)
- dk = kp_driving['value'].unsqueeze(2)-kp_driving['value'].unsqueeze(1)
- dis_loss = torch.relu(0.1-torch.sqrt((sk*sk).sum(-1)+1e-8))+torch.relu(0.1-torch.sqrt((dk*dk).sum(-1)+1e-8))
- bs,nk,_=kp_source['value'].shape
- scoor_depth = F.grid_sample(depth_source,kp_source['value'].view(bs,1,nk,-1))
- dcoor_depth = F.grid_sample(depth_driving,kp_driving['value'].view(bs,1,nk,-1))
- sd_loss = torch.abs(scoor_depth.mean(-1,keepdim=True) - kp_source['value'].view(bs,1,nk,-1)).mean()
- dd_loss = torch.abs(dcoor_depth.mean(-1,keepdim=True) - kp_driving['value'].view(bs,1,nk,-1)).mean()
- value_total = self.loss_weights['kp_distance']*(dis_loss+sd_loss+dd_loss)
- loss_values['kp_distance'] = value_total
-
-
- if self.loss_weights['kp_scale']:
- bz,num_kp,kp_dim = kp_source['value'].shape
- if self.opt.rgbd:
- outputs = self.depth_decoder(self.depth_encoder(generated['prediction']))
- depth_pred = outputs[("disp", 0)]
- pred = torch.cat((generated['prediction'],depth_pred),1)
- kp_pred = self.kp_extractor(pred)
- elif self.opt.use_depth:
- outputs = self.depth_decoder(self.depth_encoder(generated['prediction']))
- depth_pred = outputs[("disp", 0)]
- kp_pred = self.kp_extractor(depth_pred)
- else:
- kp_pred = self.kp_extractor(generated['prediction'])
-
- pred_mean = kp_pred['value'].mean(1,keepdim=True)
- driving_mean = kp_driving['value'].mean(1,keepdim=True)
- pk = kp_source['value']-pred_mean
- dk = kp_driving['value']- driving_mean
- pred_dist_loss = torch.sqrt((pk*pk).sum(-1)+1e-8)
- driving_dist_loss = torch.sqrt((dk*dk).sum(-1)+1e-8)
- scale_vec = driving_dist_loss/pred_dist_loss
- bz,n = scale_vec.shape
- value = torch.abs(scale_vec[:,:n-1]-scale_vec[:,1:]).mean()
- value_total = self.loss_weights['kp_scale']*value
- loss_values['kp_scale'] = value_total
- if self.loss_weights['depth_constraint']:
- bz,num_kp,kp_dim = kp_source['value'].shape
- outputs = self.depth_decoder(self.depth_encoder(generated['prediction']))
- depth_pred = outputs[("disp", 0)]
- value_total = self.loss_weights['depth_constraint']*torch.abs(depth_driving-depth_pred).mean()
- loss_values['depth_constraint'] = value_total
- return loss_values, generated
-
-
-
-class DiscriminatorFullModel(torch.nn.Module):
- """
- Merge all discriminator related updates into single model for better multi-gpu usage
- """
-
- def __init__(self, kp_extractor, generator, discriminator, train_params):
- super(DiscriminatorFullModel, self).__init__()
- self.kp_extractor = kp_extractor
- self.generator = generator
- self.discriminator = discriminator
- self.train_params = train_params
- self.scales = self.discriminator.module.scales
- self.pyramid = ImagePyramide(self.scales, generator.module.num_channels)
- if torch.cuda.is_available():
- self.pyramid = self.pyramid.cuda()
-
- self.loss_weights = train_params['loss_weights']
-
- def forward(self, x, generated):
- pyramide_real = self.pyramid(x['driving'])
- pyramide_generated = self.pyramid(generated['prediction'].detach())
-
- kp_driving = generated['kp_driving']
- discriminator_maps_generated = self.discriminator(pyramide_generated, kp=detach_kp(kp_driving))
- discriminator_maps_real = self.discriminator(pyramide_real, kp=detach_kp(kp_driving))
-
- loss_values = {}
- value_total = 0
- for scale in self.scales:
- key = 'prediction_map_%s' % scale
- value = (1 - discriminator_maps_real[key]) ** 2 + discriminator_maps_generated[key] ** 2
- value_total += self.loss_weights['discriminator_gan'] * value.mean()
- loss_values['disc_gan'] = value_total
-
- return loss_values
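
The equivariance-value term computed in `GeneratorFullModel.forward` compares driving-frame keypoints with warped keypoints detected on a randomly transformed copy of that frame. The sketch below shows just that arithmetic on random tensors that stand in for the keypoint detector outputs; it is an illustration, not the DaGAN training code.

```python
import torch

def warp_coordinates(coords, theta):
    # coords: (B, K, 2) keypoints; theta: (B, 2, 3) affine transform
    return torch.einsum('bij,bkj->bki', theta[:, :, :2], coords) + theta[:, :, 2].unsqueeze(1)

B, K = 2, 5
theta = torch.eye(2, 3).repeat(B, 1, 1) + 0.05 * torch.randn(B, 2, 3)
kp_driving = torch.rand(B, K, 2) * 2 - 1      # stand-in for detector output on the driving frame
kp_transformed = torch.rand(B, K, 2) * 2 - 1  # stand-in for detector output on the warped frame

# The loss penalizes disagreement after mapping the warped-frame keypoints back.
equivariance_value = torch.abs(kp_driving - warp_coordinates(kp_transformed, theta)).mean()
print(equivariance_value)
```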
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/nan_detector.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/nan_detector.py
deleted file mode 100644
index faa8031d4666c9ba9837919fe1c884dacf47ac3a..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/nan_detector.py
+++ /dev/null
@@ -1,108 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-import torch
-
-
-logger = logging.getLogger(__name__)
-
-
-class NanDetector:
- """
-    Detects the first NaN or Inf in the forward and/or backward pass and logs it, together with the module name
- """
-
- def __init__(self, model, forward=True, backward=True):
- self.bhooks = []
- self.fhooks = []
- self.forward = forward
- self.backward = backward
- self.named_parameters = list(model.named_parameters())
- self.reset()
-
- for name, mod in model.named_modules():
- mod.__module_name = name
- self.add_hooks(mod)
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_value, exc_traceback):
- # Dump out all model gnorms to enable better debugging
- norm = {}
- gradients = {}
- for name, param in self.named_parameters:
- if param.grad is not None:
- grad_norm = torch.norm(param.grad.data, p=2, dtype=torch.float32)
- norm[name] = grad_norm.item()
- if torch.isnan(grad_norm).any() or torch.isinf(grad_norm).any():
- gradients[name] = param.grad.data
- if len(gradients) > 0:
- logger.info("Detected nan/inf grad norm, dumping norms...")
- logger.info(f"norms: {norm}")
- logger.info(f"gradients: {gradients}")
-
- self.close()
-
- def add_hooks(self, module):
- if self.forward:
- self.fhooks.append(module.register_forward_hook(self.fhook_fn))
- if self.backward:
- self.bhooks.append(module.register_backward_hook(self.bhook_fn))
-
- def reset(self):
- self.has_printed_f = False
- self.has_printed_b = False
-
- def _detect(self, tensor, name, backward):
- err = None
- if (
- torch.is_floating_point(tensor)
- # single value tensors (like the loss) will not provide much info
- and tensor.numel() >= 2
- ):
- with torch.no_grad():
- if torch.isnan(tensor).any():
- err = "NaN"
- elif torch.isinf(tensor).any():
- err = "Inf"
- if err is not None:
- err = f"{err} detected in output of {name}, shape: {tensor.shape}, {'backward' if backward else 'forward'}"
- return err
-
- def _apply(self, module, inp, x, backward):
- if torch.is_tensor(x):
- if isinstance(inp, tuple) and len(inp) > 0:
- inp = inp[0]
- err = self._detect(x, module.__module_name, backward)
- if err is not None:
- if torch.is_tensor(inp) and not backward:
- err += (
- f" input max: {inp.max().item()}, input min: {inp.min().item()}"
- )
-
- has_printed_attr = "has_printed_b" if backward else "has_printed_f"
- logger.warning(err)
- setattr(self, has_printed_attr, True)
- elif isinstance(x, dict):
- for v in x.values():
- self._apply(module, inp, v, backward)
- elif isinstance(x, list) or isinstance(x, tuple):
- for v in x:
- self._apply(module, inp, v, backward)
-
- def fhook_fn(self, module, inp, output):
- if not self.has_printed_f:
- self._apply(module, inp, output, backward=False)
-
- def bhook_fn(self, module, inp, output):
- if not self.has_printed_b:
- self._apply(module, inp, output, backward=True)
-
- def close(self):
- for hook in self.fhooks + self.bhooks:
- hook.remove()
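
Typical usage of the deleted helper, assuming fairseq is installed: wrap a forward/backward pass in the context manager, and the first NaN/Inf encountered is logged together with the module that produced it. A minimal sketch:

```python
import torch
from fairseq.nan_detector import NanDetector  # assumes fairseq is installed

model = torch.nn.Linear(8, 4)
x = torch.randn(2, 8)

# Hooks are registered on every submodule; NaN/Inf values seen during the
# forward or backward pass are logged with the offending module's name.
with NanDetector(model, forward=True, backward=True):
    out = model(x)
    out.sum().backward()
```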
diff --git a/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/tests/__init__.py b/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/tests/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/app.py b/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/app.py
deleted file mode 100644
index 0a34aaa4ec372e9867e07aa1029d38d513a25950..0000000000000000000000000000000000000000
--- a/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/app.py
+++ /dev/null
@@ -1,54 +0,0 @@
-__author__ = 'Taneem Jan, taneemishere.github.io'
-
-import gradio as gr
-import main_program
-
-
-# our model's i/o method that take image from gradio interface's inputs.Image()
-def model_interface(image):
- return main_model(image)
-
-
-# main method that call the main_program where code is generated and then compiled
-def main_model(input_image):
- result = main_program.main_method(input_image)
- return result
-
-
-interface_title = "HTML Code Generation from Images with Deep Neural Networks"
-
-interface_description = """Writing code in a programming language for a designed mockup or a graphical
-user interface created by designers and UI engineers, is done mostly by developers to build and develop
-custom websites and software. The development work is not approachable by those unfamiliar with
-programming, to drive these personas capable of designing and developing the code bases and website
-structures we come up with an automated system. In this work, we showed and proposed that methods of
-deep learning and computer vision can be grasped to train a model that will automatically generate HTML
-code from a single input mockup image and try to build an end-to-end automated system with accuracy
-more than previous works for developing the structures of web pages."""
-
-interface_article = """Limitations of Model: certain limitations are there in the model, some of them are
-listed below. Sometimes the model do produce all the buttons with the same green color instead of other
-colors. As the model has fed with the data provided, and so while producing the code on some other types
-of images might not generate the code we wanted. The model is only trained upon the learning and
-recognition of boxes and buttons etc. in the images and it do not write the text written exactly on the
-images."""
-
-        "Notes: 1. It may take a long time to do planning in Planning mode. 2. The red balls represent the planning result, starting with the lightest red ball and ending with the darkest red ball. The green ball indicates the target position.")
- input4 = [selector4, mode4, sample4, seed4, opt4, scale_opt4, pla4, scale_pla4]
- button4.click(IF.path_planning, inputs=input4, outputs=[image4, number4])
-
- ## arm motion planning
- with gr.Tab("Arm Motion Planning"):
- gr.Markdown('Coming soon!')
-
-demo.launch()
diff --git a/spaces/SuYuanS/AudioCraft_Plus/tests/models/test_musicgen.py b/spaces/SuYuanS/AudioCraft_Plus/tests/models/test_musicgen.py
deleted file mode 100644
index 65618a9e2ef5bb382694b50b23dd50958d590d4e..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/tests/models/test_musicgen.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import pytest
-import torch
-
-from audiocraft.models import MusicGen
-
-
-class TestMusicGenModel:
- def get_musicgen(self):
- mg = MusicGen.get_pretrained(name='debug', device='cpu')
- mg.set_generation_params(duration=2.0, extend_stride=2.)
- return mg
-
- def test_base(self):
- mg = self.get_musicgen()
- assert mg.frame_rate == 25
- assert mg.sample_rate == 32000
- assert mg.audio_channels == 1
-
- def test_generate_unconditional(self):
- mg = self.get_musicgen()
- wav = mg.generate_unconditional(3)
- assert list(wav.shape) == [3, 1, 64000]
-
- def test_generate_continuation(self):
- mg = self.get_musicgen()
- prompt = torch.randn(3, 1, 32000)
- wav = mg.generate_continuation(prompt, 32000)
- assert list(wav.shape) == [3, 1, 64000]
-
- prompt = torch.randn(2, 1, 32000)
- wav = mg.generate_continuation(
- prompt, 32000, ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 64000]
-
- prompt = torch.randn(2, 1, 32000)
- with pytest.raises(AssertionError):
- wav = mg.generate_continuation(
- prompt, 32000, ['youpi', 'lapin dort', 'one too many'])
-
- def test_generate(self):
- mg = self.get_musicgen()
- wav = mg.generate(
- ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 64000]
-
- def test_generate_long(self):
- mg = self.get_musicgen()
- mg.max_duration = 3.
- mg.set_generation_params(duration=4., extend_stride=2.)
- wav = mg.generate(
- ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 32000 * 4]
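
For context, the public API exercised by these tests looks roughly like the sketch below when run against a real pretrained checkpoint instead of the `debug` model; the checkpoint name and output path here are illustrative.

```python
# Hedged usage sketch of audiocraft's MusicGen API, not part of the test file.
import torchaudio
from audiocraft.models import MusicGen

mg = MusicGen.get_pretrained('facebook/musicgen-small')
mg.set_generation_params(duration=4.0)
wav = mg.generate(['lofi hip hop beat'])              # (batch, channels, samples)
torchaudio.save('sample.wav', wav[0].cpu(), mg.sample_rate)
```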
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/tests/test_pygments.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/tests/test_pygments.py
deleted file mode 100644
index 877b4221ffe5a46f22e38305cb845818578918c4..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/tests/test_pygments.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from typing import List
-
-import pytest
-import pygments.lexers
-import pygments.lexer
-
-from IPython.lib.lexers import IPythonConsoleLexer, IPythonLexer, IPython3Lexer
-
-#: the human-readable names of the IPython lexers with ``entry_points``
-EXPECTED_LEXER_NAMES = [
- cls.name for cls in [IPythonConsoleLexer, IPythonLexer, IPython3Lexer]
-]
-
-
-@pytest.fixture
-def all_pygments_lexer_names() -> List[str]:
- """Get all lexer names registered in pygments."""
- return {l[0] for l in pygments.lexers.get_all_lexers()}
-
-
-@pytest.mark.parametrize("expected_lexer", EXPECTED_LEXER_NAMES)
-def test_pygments_entry_points(
- expected_lexer: str, all_pygments_lexer_names: List[str]
-) -> None:
- """Check whether the ``entry_points`` for ``pygments.lexers`` are correct."""
- assert expected_lexer in all_pygments_lexer_names
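
As a quick illustration of what the lexers under test are for (not part of the test file itself), an IPython console transcript can be highlighted directly with pygments:

```python
import pygments
from pygments.formatters import TerminalFormatter
from IPython.lib.lexers import IPythonConsoleLexer

snippet = "In [1]: 1 + 1\nOut[1]: 2\n"
# The console lexer understands In[]/Out[] prompts in addition to Python syntax.
print(pygments.highlight(snippet, IPythonConsoleLexer(), TerminalFormatter()))
```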
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImImagePlugin.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImImagePlugin.py
deleted file mode 100644
index 746743f658cf3fa2e0022ae049808eb68d3d1221..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImImagePlugin.py
+++ /dev/null
@@ -1,371 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# IFUNC IM file handling for PIL
-#
-# history:
-# 1995-09-01 fl Created.
-# 1997-01-03 fl Save palette images
-# 1997-01-08 fl Added sequence support
-# 1997-01-23 fl Added P and RGB save support
-# 1997-05-31 fl Read floating point images
-# 1997-06-22 fl Save floating point images
-# 1997-08-27 fl Read and save 1-bit images
-# 1998-06-25 fl Added support for RGB+LUT images
-# 1998-07-02 fl Added support for YCC images
-# 1998-07-15 fl Renamed offset attribute to avoid name clash
-# 1998-12-29 fl Added I;16 support
-# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.7)
-# 2003-09-26 fl Added LA/PA support
-#
-# Copyright (c) 1997-2003 by Secret Labs AB.
-# Copyright (c) 1995-2001 by Fredrik Lundh.
-#
-# See the README file for information on usage and redistribution.
-#
-
-
-import os
-import re
-
-from . import Image, ImageFile, ImagePalette
-
-# --------------------------------------------------------------------
-# Standard tags
-
-COMMENT = "Comment"
-DATE = "Date"
-EQUIPMENT = "Digitalization equipment"
-FRAMES = "File size (no of images)"
-LUT = "Lut"
-NAME = "Name"
-SCALE = "Scale (x,y)"
-SIZE = "Image size (x*y)"
-MODE = "Image type"
-
-TAGS = {
- COMMENT: 0,
- DATE: 0,
- EQUIPMENT: 0,
- FRAMES: 0,
- LUT: 0,
- NAME: 0,
- SCALE: 0,
- SIZE: 0,
- MODE: 0,
-}
-
-OPEN = {
- # ifunc93/p3cfunc formats
- "0 1 image": ("1", "1"),
- "L 1 image": ("1", "1"),
- "Greyscale image": ("L", "L"),
- "Grayscale image": ("L", "L"),
- "RGB image": ("RGB", "RGB;L"),
- "RLB image": ("RGB", "RLB"),
- "RYB image": ("RGB", "RLB"),
- "B1 image": ("1", "1"),
- "B2 image": ("P", "P;2"),
- "B4 image": ("P", "P;4"),
- "X 24 image": ("RGB", "RGB"),
- "L 32 S image": ("I", "I;32"),
- "L 32 F image": ("F", "F;32"),
- # old p3cfunc formats
- "RGB3 image": ("RGB", "RGB;T"),
- "RYB3 image": ("RGB", "RYB;T"),
- # extensions
- "LA image": ("LA", "LA;L"),
- "PA image": ("LA", "PA;L"),
- "RGBA image": ("RGBA", "RGBA;L"),
- "RGBX image": ("RGBX", "RGBX;L"),
- "CMYK image": ("CMYK", "CMYK;L"),
- "YCC image": ("YCbCr", "YCbCr;L"),
-}
-
-# ifunc95 extensions
-for i in ["8", "8S", "16", "16S", "32", "32F"]:
- OPEN[f"L {i} image"] = ("F", f"F;{i}")
- OPEN[f"L*{i} image"] = ("F", f"F;{i}")
-for i in ["16", "16L", "16B"]:
- OPEN[f"L {i} image"] = (f"I;{i}", f"I;{i}")
- OPEN[f"L*{i} image"] = (f"I;{i}", f"I;{i}")
-for i in ["32S"]:
- OPEN[f"L {i} image"] = ("I", f"I;{i}")
- OPEN[f"L*{i} image"] = ("I", f"I;{i}")
-for i in range(2, 33):
- OPEN[f"L*{i} image"] = ("F", f"F;{i}")
-
-
-# --------------------------------------------------------------------
-# Read IM directory
-
-split = re.compile(rb"^([A-Za-z][^:]*):[ \t]*(.*)[ \t]*$")
-
-
-def number(s):
- try:
- return int(s)
- except ValueError:
- return float(s)
-
-
-##
-# Image plugin for the IFUNC IM file format.
-
-
-class ImImageFile(ImageFile.ImageFile):
- format = "IM"
- format_description = "IFUNC Image Memory"
- _close_exclusive_fp_after_loading = False
-
- def _open(self):
- # Quick rejection: if there's not an LF among the first
- # 100 bytes, this is (probably) not a text header.
-
- if b"\n" not in self.fp.read(100):
- msg = "not an IM file"
- raise SyntaxError(msg)
- self.fp.seek(0)
-
- n = 0
-
- # Default values
- self.info[MODE] = "L"
- self.info[SIZE] = (512, 512)
- self.info[FRAMES] = 1
-
- self.rawmode = "L"
-
- while True:
- s = self.fp.read(1)
-
- # Some versions of IFUNC uses \n\r instead of \r\n...
- if s == b"\r":
- continue
-
- if not s or s == b"\0" or s == b"\x1A":
- break
-
- # FIXME: this may read whole file if not a text file
- s = s + self.fp.readline()
-
- if len(s) > 100:
- msg = "not an IM file"
- raise SyntaxError(msg)
-
- if s[-2:] == b"\r\n":
- s = s[:-2]
- elif s[-1:] == b"\n":
- s = s[:-1]
-
- try:
- m = split.match(s)
- except re.error as e:
- msg = "not an IM file"
- raise SyntaxError(msg) from e
-
- if m:
- k, v = m.group(1, 2)
-
- # Don't know if this is the correct encoding,
- # but a decent guess (I guess)
- k = k.decode("latin-1", "replace")
- v = v.decode("latin-1", "replace")
-
- # Convert value as appropriate
- if k in [FRAMES, SCALE, SIZE]:
- v = v.replace("*", ",")
- v = tuple(map(number, v.split(",")))
- if len(v) == 1:
- v = v[0]
- elif k == MODE and v in OPEN:
- v, self.rawmode = OPEN[v]
-
- # Add to dictionary. Note that COMMENT tags are
- # combined into a list of strings.
- if k == COMMENT:
- if k in self.info:
- self.info[k].append(v)
- else:
- self.info[k] = [v]
- else:
- self.info[k] = v
-
- if k in TAGS:
- n += 1
-
- else:
- msg = "Syntax error in IM header: " + s.decode("ascii", "replace")
- raise SyntaxError(msg)
-
- if not n:
- msg = "Not an IM file"
- raise SyntaxError(msg)
-
- # Basic attributes
- self._size = self.info[SIZE]
- self.mode = self.info[MODE]
-
- # Skip forward to start of image data
- while s and s[:1] != b"\x1A":
- s = self.fp.read(1)
- if not s:
- msg = "File truncated"
- raise SyntaxError(msg)
-
- if LUT in self.info:
- # convert lookup table to palette or lut attribute
- palette = self.fp.read(768)
- greyscale = 1 # greyscale palette
- linear = 1 # linear greyscale palette
- for i in range(256):
- if palette[i] == palette[i + 256] == palette[i + 512]:
- if palette[i] != i:
- linear = 0
- else:
- greyscale = 0
- if self.mode in ["L", "LA", "P", "PA"]:
- if greyscale:
- if not linear:
- self.lut = list(palette[:256])
- else:
- if self.mode in ["L", "P"]:
- self.mode = self.rawmode = "P"
- elif self.mode in ["LA", "PA"]:
- self.mode = "PA"
- self.rawmode = "PA;L"
- self.palette = ImagePalette.raw("RGB;L", palette)
- elif self.mode == "RGB":
- if not greyscale or not linear:
- self.lut = list(palette)
-
- self.frame = 0
-
- self.__offset = offs = self.fp.tell()
-
- self._fp = self.fp # FIXME: hack
-
- if self.rawmode[:2] == "F;":
- # ifunc95 formats
- try:
- # use bit decoder (if necessary)
- bits = int(self.rawmode[2:])
- if bits not in [8, 16, 32]:
- self.tile = [("bit", (0, 0) + self.size, offs, (bits, 8, 3, 0, -1))]
- return
- except ValueError:
- pass
-
- if self.rawmode in ["RGB;T", "RYB;T"]:
- # Old LabEye/3PC files. Would be very surprised if anyone
- # ever stumbled upon such a file ;-)
- size = self.size[0] * self.size[1]
- self.tile = [
- ("raw", (0, 0) + self.size, offs, ("G", 0, -1)),
- ("raw", (0, 0) + self.size, offs + size, ("R", 0, -1)),
- ("raw", (0, 0) + self.size, offs + 2 * size, ("B", 0, -1)),
- ]
- else:
- # LabEye/IFUNC files
- self.tile = [("raw", (0, 0) + self.size, offs, (self.rawmode, 0, -1))]
-
- @property
- def n_frames(self):
- return self.info[FRAMES]
-
- @property
- def is_animated(self):
- return self.info[FRAMES] > 1
-
- def seek(self, frame):
- if not self._seek_check(frame):
- return
-
- self.frame = frame
-
- if self.mode == "1":
- bits = 1
- else:
- bits = 8 * len(self.mode)
-
- size = ((self.size[0] * bits + 7) // 8) * self.size[1]
- offs = self.__offset + frame * size
-
- self.fp = self._fp
-
- self.tile = [("raw", (0, 0) + self.size, offs, (self.rawmode, 0, -1))]
-
- def tell(self):
- return self.frame
-
-
-#
-# --------------------------------------------------------------------
-# Save IM files
-
-
-SAVE = {
- # mode: (im type, raw mode)
- "1": ("0 1", "1"),
- "L": ("Greyscale", "L"),
- "LA": ("LA", "LA;L"),
- "P": ("Greyscale", "P"),
- "PA": ("LA", "PA;L"),
- "I": ("L 32S", "I;32S"),
- "I;16": ("L 16", "I;16"),
- "I;16L": ("L 16L", "I;16L"),
- "I;16B": ("L 16B", "I;16B"),
- "F": ("L 32F", "F;32F"),
- "RGB": ("RGB", "RGB;L"),
- "RGBA": ("RGBA", "RGBA;L"),
- "RGBX": ("RGBX", "RGBX;L"),
- "CMYK": ("CMYK", "CMYK;L"),
- "YCbCr": ("YCC", "YCbCr;L"),
-}
-
-
-def _save(im, fp, filename):
- try:
- image_type, rawmode = SAVE[im.mode]
- except KeyError as e:
- msg = f"Cannot save {im.mode} images as IM"
- raise ValueError(msg) from e
-
- frames = im.encoderinfo.get("frames", 1)
-
- fp.write(f"Image type: {image_type} image\r\n".encode("ascii"))
- if filename:
- # Each line must be 100 characters or less,
- # or: SyntaxError("not an IM file")
- # 8 characters are used for "Name: " and "\r\n"
- # Keep just the filename, ditch the potentially overlong path
- name, ext = os.path.splitext(os.path.basename(filename))
- name = "".join([name[: 92 - len(ext)], ext])
-
- fp.write(f"Name: {name}\r\n".encode("ascii"))
- fp.write(("Image size (x*y): %d*%d\r\n" % im.size).encode("ascii"))
- fp.write(f"File size (no of images): {frames}\r\n".encode("ascii"))
- if im.mode in ["P", "PA"]:
- fp.write(b"Lut: 1\r\n")
- fp.write(b"\000" * (511 - fp.tell()) + b"\032")
- if im.mode in ["P", "PA"]:
- im_palette = im.im.getpalette("RGB", "RGB;L")
- colors = len(im_palette) // 3
- palette = b""
- for i in range(3):
- palette += im_palette[colors * i : colors * (i + 1)]
- palette += b"\x00" * (256 - colors)
- fp.write(palette) # 768 bytes
- ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, 0, -1))])
-
-
-#
-# --------------------------------------------------------------------
-# Registry
-
-
-Image.register_open(ImImageFile.format, ImImageFile)
-Image.register_save(ImImageFile.format, _save)
-
-Image.register_extension(ImImageFile.format, ".im")
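
Because the deleted plugin registers itself with PIL for the `.im` extension, a round trip through it needs no explicit import of the module. A small sketch (the file name is illustrative):

```python
from PIL import Image

im = Image.new("RGB", (64, 48), color=(200, 30, 30))
im.save("example.im")                  # dispatched to the plugin's _save()

reloaded = Image.open("example.im")    # dispatched to ImImageFile._open()
print(reloaded.mode, reloaded.size, reloaded.info.get("File size (no of images)"))
```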
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/backoff/_jitter.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/backoff/_jitter.py
deleted file mode 100644
index be7e38925ea857216c874dbbdd6aa1daa8b503f0..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/backoff/_jitter.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# coding:utf-8
-
-import random
-
-
-def random_jitter(value: float) -> float:
- """Jitter the value a random number of milliseconds.
-
- This adds up to 1 second of additional time to the original value.
- Prior to backoff version 1.2 this was the default jitter behavior.
-
- Args:
- value: The unadulterated backoff value.
- """
- return value + random.random()
-
-
-def full_jitter(value: float) -> float:
- """Jitter the value across the full range (0 to value).
-
- This corresponds to the "Full Jitter" algorithm specified in the
- AWS blog's post on the performance of various jitter algorithms.
- (http://www.awsarchitectureblog.com/2015/03/backoff.html)
-
- Args:
- value: The unadulterated backoff value.
- """
- return random.uniform(0, value)
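
These helpers are normally passed to the package's public decorator rather than called directly. A sketch of that usage, assuming the `backoff` and `requests` packages are installed (the URL-fetching function is illustrative):

```python
import backoff
import requests  # used only for a concrete retryable exception type

# Exponential backoff with full jitter, giving up after 5 attempts.
@backoff.on_exception(backoff.expo, requests.exceptions.RequestException,
                      jitter=backoff.full_jitter, max_tries=5)
def fetch(url: str) -> str:
    return requests.get(url, timeout=5).text
```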
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/hnswlib/test_hnswlib.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/hnswlib/test_hnswlib.py
deleted file mode 100644
index 2039c67096646c73ee4aa43af93af23749b24c81..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/hnswlib/test_hnswlib.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import os
-import shutil
-import tempfile
-from typing import Generator
-
-import pytest
-from chromadb.db.index.hnswlib import Hnswlib
-from chromadb.config import Settings
-import uuid
-import numpy as np
-
-
-@pytest.fixture(scope="module")
-def settings() -> Generator[Settings, None, None]:
- save_path = tempfile.gettempdir() + "/tests/hnswlib/"
- yield Settings(persist_directory=save_path)
- if os.path.exists(save_path):
- shutil.rmtree(save_path)
-
-
-def test_count_tracking(settings: Settings) -> None:
- hnswlib = Hnswlib("test", settings, {}, 2)
- hnswlib._init_index(2)
- assert hnswlib._index_metadata["curr_elements"] == 0
- assert hnswlib._index_metadata["total_elements_added"] == 0
- idA, idB = uuid.uuid4(), uuid.uuid4()
-
- embeddingA = np.random.rand(1, 2)
- hnswlib.add([idA], embeddingA.tolist())
- assert (
- hnswlib._index_metadata["curr_elements"]
- == hnswlib._index_metadata["total_elements_added"]
- == 1
- )
- embeddingB = np.random.rand(1, 2)
- hnswlib.add([idB], embeddingB.tolist())
- assert (
- hnswlib._index_metadata["curr_elements"]
- == hnswlib._index_metadata["total_elements_added"]
- == 2
- )
- hnswlib.delete_from_index(ids=[idA])
- assert hnswlib._index_metadata["curr_elements"] == 1
- assert hnswlib._index_metadata["total_elements_added"] == 2
- hnswlib.delete_from_index(ids=[idB])
- assert hnswlib._index_metadata["curr_elements"] == 0
- assert hnswlib._index_metadata["total_elements_added"] == 2
-
-
-def test_add_delete_large_amount(settings: Settings) -> None:
- # Test adding a large number of records
- N = 2000
- D = 512
- large_records = np.random.rand(N, D).astype(np.float32).tolist()
- ids = [uuid.uuid4() for _ in range(N)]
- hnswlib = Hnswlib("test", settings, {}, N)
- hnswlib._init_index(D)
- hnswlib.add(ids, large_records)
- assert hnswlib._index_metadata["curr_elements"] == N
- assert hnswlib._index_metadata["total_elements_added"] == N
-
- # Test deleting a large number of records by getting a random subset of the ids
- ids_to_delete = np.random.choice(np.array(ids), size=100, replace=False).tolist()
- hnswlib.delete_from_index(ids_to_delete)
-
- assert hnswlib._index_metadata["curr_elements"] == N - 100
- assert hnswlib._index_metadata["total_elements_added"] == N
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/documents/mesh/mesh_3d.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/documents/mesh/mesh_3d.py
deleted file mode 100644
index 82d93f73456ec52c8ace95591412c1059130b92f..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/documents/mesh/mesh_3d.py
+++ /dev/null
@@ -1,118 +0,0 @@
-from typing import Any, Optional, Type, TypeVar, Union
-
-from docarray.base_doc import BaseDoc
-from docarray.documents.mesh.vertices_and_faces import VerticesAndFaces
-from docarray.typing.tensor.embedding import AnyEmbedding
-from docarray.typing.url.url_3d.mesh_url import Mesh3DUrl
-
-T = TypeVar('T', bound='Mesh3D')
-
-
-class Mesh3D(BaseDoc):
- """
- Document for handling meshes for 3D data representation.
-
- A mesh is a representation for 3D data and contains vertices and faces information.
- Vertices are points in a 3D space, represented as a tensor of shape (n_points, 3).
- Faces are triangular surfaces that can be defined by three points in 3D space,
- corresponding to the three vertices of a triangle. Faces can be represented as a
- tensor of shape (n_faces, 3). Each number in that tensor refers to an index of a
- vertex in the tensor of vertices.
-
- The Mesh3D Document can contain:
-
- - a [`Mesh3DUrl`][docarray.typing.url.Mesh3DUrl] (`Mesh3D.url`)
- - a [`VerticesAndFaces`][docarray.documents.mesh.vertices_and_faces.VerticesAndFaces]
- object containing:
-
- - an [`AnyTensor`](../../../../api_references/typing/tensor/tensor) of
- vertices (`Mesh3D.tensors.vertices`)
- - an [`AnyTensor`](../../../../api_references/typing/tensor/tensor) of faces (`Mesh3D.tensors.faces`)
-
- - an [`AnyEmbedding`](../../../../api_references/typing/tensor/embedding) (`Mesh3D.embedding`)
- - a `bytes` object (`Mesh3D.bytes_`).
-
- You can use this Document directly:
-
- ```python
- from docarray.documents import Mesh3D
-
- # use it directly
- mesh = Mesh3D(url='https://people.sc.fsu.edu/~jburkardt/data/obj/al.obj')
- mesh.tensors = mesh.url.load()
- # model = MyEmbeddingModel()
- # mesh.embedding = model(mesh.tensors.vertices)
- ```
-
- You can extend this Document:
-
- ```python
- from docarray.documents import Mesh3D
- from docarray.typing import AnyEmbedding
- from typing import Optional
-
-
- # extend it
- class MyMesh3D(Mesh3D):
- name: Optional[str]
-
-
- mesh = MyMesh3D(url='https://people.sc.fsu.edu/~jburkardt/data/obj/al.obj')
- mesh.name = 'my first mesh'
- mesh.tensors = mesh.url.load()
- # model = MyEmbeddingModel()
- # mesh.embedding = model(mesh.vertices)
- ```
-
- You can use this Document for composition:
-
- ```python
- from docarray import BaseDoc
- from docarray.documents import Mesh3D, TextDoc
-
-
- # compose it
- class MultiModalDoc(BaseDoc):
- mesh: Mesh3D
- text: TextDoc
-
-
- mmdoc = MultiModalDoc(
- mesh=Mesh3D(url='https://people.sc.fsu.edu/~jburkardt/data/obj/al.obj'),
- text=TextDoc(text='hello world, how are you doing?'),
- )
- mmdoc.mesh.tensors = mmdoc.mesh.url.load()
-
- # or
- mmdoc.mesh.bytes_ = mmdoc.mesh.url.load_bytes()
- ```
-
- You can display your 3D mesh in a notebook from either its url, or its tensors:
-
- ```python
- from docarray.documents import Mesh3D
-
- # display from url
- mesh = Mesh3D(url='https://people.sc.fsu.edu/~jburkardt/data/obj/al.obj')
- # mesh.url.display()
-
- # display from tensors
- mesh.tensors = mesh.url.load()
- # mesh.tensors.display()
- ```
-
- """
-
- url: Optional[Mesh3DUrl]
- tensors: Optional[VerticesAndFaces]
- embedding: Optional[AnyEmbedding]
- bytes_: Optional[bytes]
-
- @classmethod
- def validate(
- cls: Type[T],
- value: Union[str, Any],
- ) -> T:
- if isinstance(value, str):
- value = cls(url=value)
- return super().validate(value)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/store/file.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/store/file.py
deleted file mode 100644
index 6c46c3ab61595359422d5a51c65922b726c59b36..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/store/file.py
+++ /dev/null
@@ -1,199 +0,0 @@
-import logging
-from pathlib import Path
-from typing import Dict, Iterator, List, Optional, Type, TypeVar
-
-from typing_extensions import TYPE_CHECKING
-
-from docarray.store.abstract_doc_store import AbstractDocStore
-from docarray.store.exceptions import ConcurrentPushException
-from docarray.store.helpers import _from_binary_stream, _to_binary_stream
-from docarray.utils._internal.cache import _get_cache_path
-
-if TYPE_CHECKING:
- from docarray import BaseDoc, DocList
-
-SelfFileDocStore = TypeVar('SelfFileDocStore', bound='FileDocStore')
-
-
-class FileDocStore(AbstractDocStore):
- """Class to push and pull [`DocList`][docarray.DocList] on-disk."""
-
- @staticmethod
- def _abs_filepath(name: str) -> Path:
- """Resolve a name to an absolute path.
-
- :param name: If it is not a path, the cache directory is prepended.
- If it is a path, it is resolved to an absolute path.
- :return: Path
- """
- if not (name.startswith('/') or name.startswith('~') or name.startswith('.')):
- name = str(_get_cache_path() / name)
- if name.startswith('~'):
- name = str(Path.home() / name[2:])
- return Path(name).resolve()
-
- @classmethod
- def list(
- cls: Type[SelfFileDocStore], namespace: str, show_table: bool
- ) -> List[str]:
- """List all [`DocList`s][docarray.DocList] in a directory.
-
- :param namespace: The directory to list.
- :param show_table: If True, print a table of the files in the directory.
- :return: A list of the names of the `DocLists` in the directory.
- """
- namespace_dir = cls._abs_filepath(namespace)
- if not namespace_dir.exists():
- raise FileNotFoundError(f'Directory {namespace} does not exist')
- da_files = [dafile for dafile in namespace_dir.glob('*.docs')]
-
- if show_table:
- from datetime import datetime
-
- from rich import box, filesize
- from rich.console import Console
- from rich.table import Table
-
- table = Table(
- title=f'You have {len(da_files)} DocLists in file://{namespace_dir}',
- box=box.SIMPLE,
- highlight=True,
- )
- table.add_column('Name')
- table.add_column('Last Modified', justify='center')
- table.add_column('Size')
-
- for da_file in da_files:
- table.add_row(
- da_file.stem,
- str(datetime.fromtimestamp(int(da_file.stat().st_ctime))),
- str(filesize.decimal(da_file.stat().st_size)),
- )
-
- Console().print(table)
-
- return [dafile.stem for dafile in da_files]
-
- @classmethod
- def delete(
- cls: Type[SelfFileDocStore], name: str, missing_ok: bool = False
- ) -> bool:
- """Delete a [`DocList`][docarray.DocList] from the local filesystem.
-
- :param name: The name of the `DocList` to delete.
- :param missing_ok: If True, do not raise an exception if the file does not exist. Defaults to False.
- :return: True if the file was deleted, False if it did not exist.
- """
- path = cls._abs_filepath(name)
- try:
- path.with_suffix('.docs').unlink()
- return True
- except FileNotFoundError:
- if not missing_ok:
- raise
- return False
-
- @classmethod
- def push(
- cls: Type[SelfFileDocStore],
- docs: 'DocList',
- name: str,
- public: bool,
- show_progress: bool,
- branding: Optional[Dict],
- ) -> Dict:
- """Push this [`DocList`][docarray.DocList] object to the specified file path.
-
- :param docs: The `DocList` to push.
- :param name: The file path to push to.
- :param public: Not used by the ``file`` protocol.
- :param show_progress: If true, a progress bar will be displayed.
- :param branding: Not used by the ``file`` protocol.
- """
- return cls.push_stream(iter(docs), name, public, show_progress, branding)
-
- @classmethod
- def push_stream(
- cls: Type[SelfFileDocStore],
- docs: Iterator['BaseDoc'],
- name: str,
- public: bool = True,
- show_progress: bool = False,
- branding: Optional[Dict] = None,
- ) -> Dict:
- """Push a stream of documents to the specified file path.
-
- :param docs: a stream of documents
- :param name: The file path to push to.
- :param public: Not used by the ``file`` protocol.
- :param show_progress: If true, a progress bar will be displayed.
- :param branding: Not used by the ``file`` protocol.
- """
- if branding is not None:
- logging.warning('branding is not supported for "file" protocol')
-
- source = _to_binary_stream(
- docs, protocol='protobuf', compress='gzip', show_progress=show_progress
- )
- path = cls._abs_filepath(name).with_suffix('.docs.tmp')
- if path.exists():
- raise ConcurrentPushException(f'File {path} already exists.')
- with open(path, 'wb') as f:
- while True:
- try:
- f.write(next(source))
- except StopIteration:
- break
- path.rename(path.with_suffix(''))
- return {}
-
- @classmethod
- def pull(
- cls: Type[SelfFileDocStore],
- docs_cls: Type['DocList'],
- name: str,
- show_progress: bool,
- local_cache: bool,
- ) -> 'DocList':
- """Pull a [`DocList`][docarray.DocList] from the specified url.
-
- :param name: The file path to pull from.
- :param show_progress: if true, display a progress bar.
- :param local_cache: store the downloaded `DocList` to local folder
- :return: a `DocList` object
- """
-
- return docs_cls(
- cls.pull_stream(
- docs_cls, name, show_progress=show_progress, local_cache=local_cache
- )
- )
-
- @classmethod
- def pull_stream(
- cls: Type[SelfFileDocStore],
- docs_cls: Type['DocList'],
- name: str,
- show_progress: bool,
- local_cache: bool,
- ) -> Iterator['BaseDoc']:
- """Pull a stream of Documents from the specified file.
-
- :param name: The file path to pull from.
- :param show_progress: if true, display a progress bar.
- :param local_cache: Not used by the ``file`` protocol.
- :return: Iterator of Documents
- """
-
- if local_cache:
- logging.warning('local_cache is not supported for "file" protocol')
-
- path = cls._abs_filepath(name).with_suffix('.docs')
- source = open(path, 'rb')
- return _from_binary_stream(
- docs_cls.doc_type,
- source,
- protocol='protobuf',
- compress='gzip',
- show_progress=show_progress,
- )
diff --git a/spaces/TRI-ML/risk_biased_prediction/export_waymo_to_json.py b/spaces/TRI-ML/risk_biased_prediction/export_waymo_to_json.py
deleted file mode 100644
index face2c331e9806a6e0ca1c525d3e7d69f55ea9d2..0000000000000000000000000000000000000000
--- a/spaces/TRI-ML/risk_biased_prediction/export_waymo_to_json.py
+++ /dev/null
@@ -1,94 +0,0 @@
-import json
-from json import JSONEncoder
-from mmcv import Config
-import numpy
-import torch
-
-from risk_biased.utils.waymo_dataloader import WaymoDataloaders
-
-
-class NumpyArrayEncoder(JSONEncoder):
- def default(self, obj):
- if isinstance(obj, numpy.ndarray):
- return obj.tolist()
- return JSONEncoder.default(self, obj)
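-
-# A quick sketch of what the encoder produces (output shown is illustrative):
-#   json.dumps({"a": numpy.zeros((2, 2))}, cls=NumpyArrayEncoder)
-#   -> '{"a": [[0.0, 0.0], [0.0, 0.0]]}'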
-
-if __name__ == "__main__":
- output_path = "../risk_biased_dataset/data.json"
- config_path = "risk_biased/config/waymo_config.py"
- cfg = Config.fromfile(config_path)
- dataloaders = WaymoDataloaders(cfg)
- sample_dataloader = dataloaders.sample_dataloader()
- (
- x,
- mask_x,
- y,
- mask_y,
- mask_loss,
- map_data,
- mask_map,
- offset,
- x_ego,
- y_ego,
- ) = sample_dataloader.collate_fn(sample_dataloader.dataset)
-
- batch_size, n_agents, n_timesteps_past, n_features = x.shape
- n_timesteps_future = y.shape[2]
- n_features_map = map_data.shape[3]
- n_features_offset = offset.shape[2]
-
- print(x.shape)
- print(mask_x.shape)
- print(y.shape)
- print(mask_y.shape)
- print(mask_loss.shape)
- print(map_data.shape)
- print(mask_map.shape)
- print(offset.shape)
- print(x_ego.shape)
- print(y_ego.shape)
-
-
- data = {"x": x.numpy(),
- "mask_x": mask_x.numpy(),
- "y": y.numpy(),
- "mask_y": mask_y.numpy(),
- "mask_loss": mask_loss.numpy(),
- "map_data": map_data.numpy(),
- "mask_map": mask_map.numpy(),
- "offset": offset.numpy(),
- "x_ego": x_ego.numpy(),
- "y_ego": y_ego.numpy(),
- }
-
- json_data = json.dumps(data, cls=NumpyArrayEncoder)
-
- with open(output_path, "w+") as f:
- f.write(json_data)
-
- with open(output_path, "r") as f:
- decoded = json.load(f)
-
- x_c = torch.from_numpy(numpy.array(decoded["x"]).astype(numpy.float32))
- mask_x_c = torch.from_numpy(numpy.array(decoded["mask_x"]).astype(numpy.bool_))
- y_c = torch.from_numpy(numpy.array(decoded["y"]).astype(numpy.float32))
- mask_y_c = torch.from_numpy(numpy.array(decoded["mask_y"]).astype(numpy.bool_))
- mask_loss_c = torch.from_numpy(numpy.array(decoded["mask_loss"]).astype(numpy.bool_))
- map_data_c = torch.from_numpy(numpy.array(decoded["map_data"]).astype(numpy.float32))
- mask_map_c = torch.from_numpy(numpy.array(decoded["mask_map"]).astype(numpy.bool_))
- offset_c = torch.from_numpy(numpy.array(decoded["offset"]).astype(numpy.float32))
- x_ego_c = torch.from_numpy(numpy.array(decoded["x_ego"]).astype(numpy.float32))
- y_ego_c = torch.from_numpy(numpy.array(decoded["y_ego"]).astype(numpy.float32))
-
- assert torch.allclose(x, x_c)
- assert torch.allclose(mask_x, mask_x_c)
- assert torch.allclose(y, y_c)
- assert torch.allclose(mask_y, mask_y_c)
- assert torch.allclose(mask_loss, mask_loss_c)
- assert torch.allclose(map_data, map_data_c)
- assert torch.allclose(mask_map, mask_map_c)
- assert torch.allclose(offset, offset_c)
- assert torch.allclose(x_ego, x_ego_c)
- assert torch.allclose(y_ego, y_ego_c)
-
- print("All good!")
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/syntax.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/syntax.py
deleted file mode 100644
index 570337664835d01904c8ff708626b447edc5640a..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/syntax.py
+++ /dev/null
@@ -1,948 +0,0 @@
-import os.path
-import platform
-import re
-import sys
-import textwrap
-from abc import ABC, abstractmethod
-from pathlib import Path
-from typing import (
- Any,
- Dict,
- Iterable,
- List,
- NamedTuple,
- Optional,
- Sequence,
- Set,
- Tuple,
- Type,
- Union,
-)
-
-from pip._vendor.pygments.lexer import Lexer
-from pip._vendor.pygments.lexers import get_lexer_by_name, guess_lexer_for_filename
-from pip._vendor.pygments.style import Style as PygmentsStyle
-from pip._vendor.pygments.styles import get_style_by_name
-from pip._vendor.pygments.token import (
- Comment,
- Error,
- Generic,
- Keyword,
- Name,
- Number,
- Operator,
- String,
- Token,
- Whitespace,
-)
-from pip._vendor.pygments.util import ClassNotFound
-
-from pip._vendor.rich.containers import Lines
-from pip._vendor.rich.padding import Padding, PaddingDimensions
-
-from ._loop import loop_first
-from .cells import cell_len
-from .color import Color, blend_rgb
-from .console import Console, ConsoleOptions, JustifyMethod, RenderResult
-from .jupyter import JupyterMixin
-from .measure import Measurement
-from .segment import Segment, Segments
-from .style import Style, StyleType
-from .text import Text
-
-TokenType = Tuple[str, ...]
-
-WINDOWS = platform.system() == "Windows"
-DEFAULT_THEME = "monokai"
-
-# The following styles are based on https://github.com/pygments/pygments/blob/master/pygments/formatters/terminal.py
-# A few modifications were made
-
-ANSI_LIGHT: Dict[TokenType, Style] = {
- Token: Style(),
- Whitespace: Style(color="white"),
- Comment: Style(dim=True),
- Comment.Preproc: Style(color="cyan"),
- Keyword: Style(color="blue"),
- Keyword.Type: Style(color="cyan"),
- Operator.Word: Style(color="magenta"),
- Name.Builtin: Style(color="cyan"),
- Name.Function: Style(color="green"),
- Name.Namespace: Style(color="cyan", underline=True),
- Name.Class: Style(color="green", underline=True),
- Name.Exception: Style(color="cyan"),
- Name.Decorator: Style(color="magenta", bold=True),
- Name.Variable: Style(color="red"),
- Name.Constant: Style(color="red"),
- Name.Attribute: Style(color="cyan"),
- Name.Tag: Style(color="bright_blue"),
- String: Style(color="yellow"),
- Number: Style(color="blue"),
- Generic.Deleted: Style(color="bright_red"),
- Generic.Inserted: Style(color="green"),
- Generic.Heading: Style(bold=True),
- Generic.Subheading: Style(color="magenta", bold=True),
- Generic.Prompt: Style(bold=True),
- Generic.Error: Style(color="bright_red"),
- Error: Style(color="red", underline=True),
-}
-
-ANSI_DARK: Dict[TokenType, Style] = {
- Token: Style(),
- Whitespace: Style(color="bright_black"),
- Comment: Style(dim=True),
- Comment.Preproc: Style(color="bright_cyan"),
- Keyword: Style(color="bright_blue"),
- Keyword.Type: Style(color="bright_cyan"),
- Operator.Word: Style(color="bright_magenta"),
- Name.Builtin: Style(color="bright_cyan"),
- Name.Function: Style(color="bright_green"),
- Name.Namespace: Style(color="bright_cyan", underline=True),
- Name.Class: Style(color="bright_green", underline=True),
- Name.Exception: Style(color="bright_cyan"),
- Name.Decorator: Style(color="bright_magenta", bold=True),
- Name.Variable: Style(color="bright_red"),
- Name.Constant: Style(color="bright_red"),
- Name.Attribute: Style(color="bright_cyan"),
- Name.Tag: Style(color="bright_blue"),
- String: Style(color="yellow"),
- Number: Style(color="bright_blue"),
- Generic.Deleted: Style(color="bright_red"),
- Generic.Inserted: Style(color="bright_green"),
- Generic.Heading: Style(bold=True),
- Generic.Subheading: Style(color="bright_magenta", bold=True),
- Generic.Prompt: Style(bold=True),
- Generic.Error: Style(color="bright_red"),
- Error: Style(color="red", underline=True),
-}
-
-RICH_SYNTAX_THEMES = {"ansi_light": ANSI_LIGHT, "ansi_dark": ANSI_DARK}
-NUMBERS_COLUMN_DEFAULT_PADDING = 2
-
-
-class SyntaxTheme(ABC):
- """Base class for a syntax theme."""
-
- @abstractmethod
- def get_style_for_token(self, token_type: TokenType) -> Style:
- """Get a style for a given Pygments token."""
- raise NotImplementedError # pragma: no cover
-
- @abstractmethod
- def get_background_style(self) -> Style:
- """Get the background color."""
- raise NotImplementedError # pragma: no cover
-
-
-class PygmentsSyntaxTheme(SyntaxTheme):
- """Syntax theme that delegates to Pygments theme."""
-
- def __init__(self, theme: Union[str, Type[PygmentsStyle]]) -> None:
- self._style_cache: Dict[TokenType, Style] = {}
- if isinstance(theme, str):
- try:
- self._pygments_style_class = get_style_by_name(theme)
- except ClassNotFound:
- self._pygments_style_class = get_style_by_name("default")
- else:
- self._pygments_style_class = theme
-
- self._background_color = self._pygments_style_class.background_color
- self._background_style = Style(bgcolor=self._background_color)
-
- def get_style_for_token(self, token_type: TokenType) -> Style:
- """Get a style from a Pygments class."""
- try:
- return self._style_cache[token_type]
- except KeyError:
- try:
- pygments_style = self._pygments_style_class.style_for_token(token_type)
- except KeyError:
- style = Style.null()
- else:
- color = pygments_style["color"]
- bgcolor = pygments_style["bgcolor"]
- style = Style(
- color="#" + color if color else "#000000",
- bgcolor="#" + bgcolor if bgcolor else self._background_color,
- bold=pygments_style["bold"],
- italic=pygments_style["italic"],
- underline=pygments_style["underline"],
- )
- self._style_cache[token_type] = style
- return style
-
- def get_background_style(self) -> Style:
- return self._background_style
-
-
-class ANSISyntaxTheme(SyntaxTheme):
- """Syntax theme to use standard colors."""
-
- def __init__(self, style_map: Dict[TokenType, Style]) -> None:
- self.style_map = style_map
- self._missing_style = Style.null()
- self._background_style = Style.null()
- self._style_cache: Dict[TokenType, Style] = {}
-
- def get_style_for_token(self, token_type: TokenType) -> Style:
- """Look up style in the style map."""
- try:
- return self._style_cache[token_type]
- except KeyError:
- # Styles form a hierarchy
- # We need to go from most to least specific
- # e.g. ("foo", "bar", "baz") to ("foo", "bar") to ("foo",)
- get_style = self.style_map.get
- token = tuple(token_type)
- style = self._missing_style
- while token:
- _style = get_style(token)
- if _style is not None:
- style = _style
- break
- token = token[:-1]
- self._style_cache[token_type] = style
- return style
-
- def get_background_style(self) -> Style:
- return self._background_style
-
-
-SyntaxPosition = Tuple[int, int]
-
-
-class _SyntaxHighlightRange(NamedTuple):
- """
- A range to highlight in a Syntax object.
- `start` and `end` are 2-integers tuples, where the first integer is the line number
- (starting from 1) and the second integer is the column index (starting from 0).
- """
-
- style: StyleType
- start: SyntaxPosition
- end: SyntaxPosition
-
-
-class Syntax(JupyterMixin):
- """Construct a Syntax object to render syntax highlighted code.
-
- Args:
- code (str): Code to highlight.
- lexer (Lexer | str): Lexer to use (see https://pygments.org/docs/lexers/)
- theme (str, optional): Color theme, aka Pygments style (see https://pygments.org/docs/styles/#getting-a-list-of-available-styles). Defaults to "monokai".
- dedent (bool, optional): Enable stripping of initial whitespace. Defaults to False.
- line_numbers (bool, optional): Enable rendering of line numbers. Defaults to False.
- start_line (int, optional): Starting number for line numbers. Defaults to 1.
- line_range (Tuple[int | None, int | None], optional): If given should be a tuple of the start and end line to render.
- A value of None in the tuple indicates the range is open in that direction.
- highlight_lines (Set[int]): A set of line numbers to highlight.
- code_width: Width of code to render (not including line numbers), or ``None`` to use all available width.
- tab_size (int, optional): Size of tabs. Defaults to 4.
- word_wrap (bool, optional): Enable word wrapping.
- background_color (str, optional): Optional background color, or None to use theme color. Defaults to None.
- indent_guides (bool, optional): Show indent guides. Defaults to False.
- padding (PaddingDimensions): Padding to apply around the syntax. Defaults to 0 (no padding).
- """
-
- _pygments_style_class: Type[PygmentsStyle]
- _theme: SyntaxTheme
-
- @classmethod
- def get_theme(cls, name: Union[str, SyntaxTheme]) -> SyntaxTheme:
- """Get a syntax theme instance."""
- if isinstance(name, SyntaxTheme):
- return name
- theme: SyntaxTheme
- if name in RICH_SYNTAX_THEMES:
- theme = ANSISyntaxTheme(RICH_SYNTAX_THEMES[name])
- else:
- theme = PygmentsSyntaxTheme(name)
- return theme
-
- def __init__(
- self,
- code: str,
- lexer: Union[Lexer, str],
- *,
- theme: Union[str, SyntaxTheme] = DEFAULT_THEME,
- dedent: bool = False,
- line_numbers: bool = False,
- start_line: int = 1,
- line_range: Optional[Tuple[Optional[int], Optional[int]]] = None,
- highlight_lines: Optional[Set[int]] = None,
- code_width: Optional[int] = None,
- tab_size: int = 4,
- word_wrap: bool = False,
- background_color: Optional[str] = None,
- indent_guides: bool = False,
- padding: PaddingDimensions = 0,
- ) -> None:
- self.code = code
- self._lexer = lexer
- self.dedent = dedent
- self.line_numbers = line_numbers
- self.start_line = start_line
- self.line_range = line_range
- self.highlight_lines = highlight_lines or set()
- self.code_width = code_width
- self.tab_size = tab_size
- self.word_wrap = word_wrap
- self.background_color = background_color
- self.background_style = (
- Style(bgcolor=background_color) if background_color else Style()
- )
- self.indent_guides = indent_guides
- self.padding = padding
-
- self._theme = self.get_theme(theme)
- self._stylized_ranges: List[_SyntaxHighlightRange] = []
-
- @classmethod
- def from_path(
- cls,
- path: str,
- encoding: str = "utf-8",
- lexer: Optional[Union[Lexer, str]] = None,
- theme: Union[str, SyntaxTheme] = DEFAULT_THEME,
- dedent: bool = False,
- line_numbers: bool = False,
- line_range: Optional[Tuple[int, int]] = None,
- start_line: int = 1,
- highlight_lines: Optional[Set[int]] = None,
- code_width: Optional[int] = None,
- tab_size: int = 4,
- word_wrap: bool = False,
- background_color: Optional[str] = None,
- indent_guides: bool = False,
- padding: PaddingDimensions = 0,
- ) -> "Syntax":
- """Construct a Syntax object from a file.
-
- Args:
- path (str): Path to file to highlight.
- encoding (str): Encoding of file.
- lexer (str | Lexer, optional): Lexer to use. If None, lexer will be auto-detected from path/file content.
- theme (str, optional): Color theme, aka Pygments style (see https://pygments.org/docs/styles/#getting-a-list-of-available-styles). Defaults to "monokai".
- dedent (bool, optional): Enable stripping of initial whitespace. Defaults to False.
- line_numbers (bool, optional): Enable rendering of line numbers. Defaults to False.
- start_line (int, optional): Starting number for line numbers. Defaults to 1.
- line_range (Tuple[int, int], optional): If given should be a tuple of the start and end line to render.
- highlight_lines (Set[int]): A set of line numbers to highlight.
- code_width: Width of code to render (not including line numbers), or ``None`` to use all available width.
- tab_size (int, optional): Size of tabs. Defaults to 4.
- word_wrap (bool, optional): Enable word wrapping of code.
- background_color (str, optional): Optional background color, or None to use theme color. Defaults to None.
- indent_guides (bool, optional): Show indent guides. Defaults to False.
- padding (PaddingDimensions): Padding to apply around the syntax. Defaults to 0 (no padding).
-
- Returns:
- [Syntax]: A Syntax object that may be printed to the console
- """
- code = Path(path).read_text(encoding=encoding)
-
- if not lexer:
- lexer = cls.guess_lexer(path, code=code)
-
- return cls(
- code,
- lexer,
- theme=theme,
- dedent=dedent,
- line_numbers=line_numbers,
- line_range=line_range,
- start_line=start_line,
- highlight_lines=highlight_lines,
- code_width=code_width,
- tab_size=tab_size,
- word_wrap=word_wrap,
- background_color=background_color,
- indent_guides=indent_guides,
- padding=padding,
- )
-
- @classmethod
- def guess_lexer(cls, path: str, code: Optional[str] = None) -> str:
- """Guess the alias of the Pygments lexer to use based on a path and an optional string of code.
- If code is supplied, it will use a combination of the code and the filename to determine the
- best lexer to use. For example, if the file is ``index.html`` and the file contains Django
- templating syntax, then "html+django" will be returned. If the file is ``index.html``, and no
- templating language is used, the "html" lexer will be used. If no string of code
- is supplied, the lexer will be chosen based on the file extension.
-
- Args:
- path (AnyStr): The path to the file containing the code you wish to know the lexer for.
- code (str, optional): Optional string of code that will be used as a fallback if no lexer
- is found for the supplied path.
-
- Returns:
- str: The name of the Pygments lexer that best matches the supplied path/code.
- """
- lexer: Optional[Lexer] = None
- lexer_name = "default"
- if code:
- try:
- lexer = guess_lexer_for_filename(path, code)
- except ClassNotFound:
- pass
-
- if not lexer:
- try:
- _, ext = os.path.splitext(path)
- if ext:
- extension = ext.lstrip(".").lower()
- lexer = get_lexer_by_name(extension)
- except ClassNotFound:
- pass
-
- if lexer:
- if lexer.aliases:
- lexer_name = lexer.aliases[0]
- else:
- lexer_name = lexer.name
-
- return lexer_name
-
- def _get_base_style(self) -> Style:
- """Get the base style."""
- default_style = self._theme.get_background_style() + self.background_style
- return default_style
-
- def _get_token_color(self, token_type: TokenType) -> Optional[Color]:
- """Get a color (if any) for the given token.
-
- Args:
- token_type (TokenType): A token type tuple from Pygments.
-
- Returns:
- Optional[Color]: Color from theme, or None for no color.
- """
- style = self._theme.get_style_for_token(token_type)
- return style.color
-
- @property
- def lexer(self) -> Optional[Lexer]:
- """The lexer for this syntax, or None if no lexer was found.
-
- Tries to find the lexer by name if a string was passed to the constructor.
- """
-
- if isinstance(self._lexer, Lexer):
- return self._lexer
- try:
- return get_lexer_by_name(
- self._lexer,
- stripnl=False,
- ensurenl=True,
- tabsize=self.tab_size,
- )
- except ClassNotFound:
- return None
-
- def highlight(
- self,
- code: str,
- line_range: Optional[Tuple[Optional[int], Optional[int]]] = None,
- ) -> Text:
- """Highlight code and return a Text instance.
-
- Args:
- code (str): Code to highlight.
- line_range(Tuple[int, int], optional): Optional line range to highlight.
-
- Returns:
- Text: A text instance containing highlighted syntax.
- """
-
- base_style = self._get_base_style()
- justify: JustifyMethod = (
- "default" if base_style.transparent_background else "left"
- )
-
- text = Text(
- justify=justify,
- style=base_style,
- tab_size=self.tab_size,
- no_wrap=not self.word_wrap,
- )
- _get_theme_style = self._theme.get_style_for_token
-
- lexer = self.lexer
-
- if lexer is None:
- text.append(code)
- else:
- if line_range:
- # More complicated path to only stylize a portion of the code
- # This speeds up further operations as there are fewer spans to process
- line_start, line_end = line_range
-
- def line_tokenize() -> Iterable[Tuple[Any, str]]:
- """Split tokens to one per line."""
- assert lexer # required to make MyPy happy - we know lexer is not None at this point
-
- for token_type, token in lexer.get_tokens(code):
- while token:
- line_token, new_line, token = token.partition("\n")
- yield token_type, line_token + new_line
-
- def tokens_to_spans() -> Iterable[Tuple[str, Optional[Style]]]:
- """Convert tokens to spans."""
- tokens = iter(line_tokenize())
- line_no = 0
- _line_start = line_start - 1 if line_start else 0
-
- # Skip over tokens until line start
- while line_no < _line_start:
- try:
- _token_type, token = next(tokens)
- except StopIteration:
- break
- yield (token, None)
- if token.endswith("\n"):
- line_no += 1
- # Generate spans until line end
- for token_type, token in tokens:
- yield (token, _get_theme_style(token_type))
- if token.endswith("\n"):
- line_no += 1
- if line_end and line_no >= line_end:
- break
-
- text.append_tokens(tokens_to_spans())
-
- else:
- text.append_tokens(
- (token, _get_theme_style(token_type))
- for token_type, token in lexer.get_tokens(code)
- )
- if self.background_color is not None:
- text.stylize(f"on {self.background_color}")
-
- if self._stylized_ranges:
- self._apply_stylized_ranges(text)
-
- return text
-
- def stylize_range(
- self, style: StyleType, start: SyntaxPosition, end: SyntaxPosition
- ) -> None:
- """
- Adds a custom style to a part of the code that will be applied to the syntax display when it's rendered.
- Line numbers are 1-based, while column indexes are 0-based.
-
- Args:
- style (StyleType): The style to apply.
- start (Tuple[int, int]): The start of the range, in the form `[line number, column index]`.
- end (Tuple[int, int]): The end of the range, in the form `[line number, column index]`.
- """
- self._stylized_ranges.append(_SyntaxHighlightRange(style, start, end))
-
- def _get_line_numbers_color(self, blend: float = 0.3) -> Color:
- background_style = self._theme.get_background_style() + self.background_style
- background_color = background_style.bgcolor
- if background_color is None or background_color.is_system_defined:
- return Color.default()
- foreground_color = self._get_token_color(Token.Text)
- if foreground_color is None or foreground_color.is_system_defined:
- return foreground_color or Color.default()
- new_color = blend_rgb(
- background_color.get_truecolor(),
- foreground_color.get_truecolor(),
- cross_fade=blend,
- )
- return Color.from_triplet(new_color)
-
- @property
- def _numbers_column_width(self) -> int:
- """Get the number of characters used to render the numbers column."""
- column_width = 0
- if self.line_numbers:
- column_width = (
- len(str(self.start_line + self.code.count("\n")))
- + NUMBERS_COLUMN_DEFAULT_PADDING
- )
- return column_width
-
- def _get_number_styles(self, console: Console) -> Tuple[Style, Style, Style]:
- """Get background, number, and highlight styles for line numbers."""
- background_style = self._get_base_style()
- if background_style.transparent_background:
- return Style.null(), Style(dim=True), Style.null()
- if console.color_system in ("256", "truecolor"):
- number_style = Style.chain(
- background_style,
- self._theme.get_style_for_token(Token.Text),
- Style(color=self._get_line_numbers_color()),
- self.background_style,
- )
- highlight_number_style = Style.chain(
- background_style,
- self._theme.get_style_for_token(Token.Text),
- Style(bold=True, color=self._get_line_numbers_color(0.9)),
- self.background_style,
- )
- else:
- number_style = background_style + Style(dim=True)
- highlight_number_style = background_style + Style(dim=False)
- return background_style, number_style, highlight_number_style
-
- def __rich_measure__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "Measurement":
- _, right, _, left = Padding.unpack(self.padding)
- padding = left + right
- if self.code_width is not None:
- width = self.code_width + self._numbers_column_width + padding + 1
- return Measurement(self._numbers_column_width, width)
- lines = self.code.splitlines()
- width = (
- self._numbers_column_width
- + padding
- + (max(cell_len(line) for line in lines) if lines else 0)
- )
- if self.line_numbers:
- width += 1
- return Measurement(self._numbers_column_width, width)
-
- def __rich_console__(
- self, console: Console, options: ConsoleOptions
- ) -> RenderResult:
- segments = Segments(self._get_syntax(console, options))
- if self.padding:
- yield Padding(
- segments, style=self._theme.get_background_style(), pad=self.padding
- )
- else:
- yield segments
-
- def _get_syntax(
- self,
- console: Console,
- options: ConsoleOptions,
- ) -> Iterable[Segment]:
- """
- Get the Segments for the Syntax object, excluding any vertical/horizontal padding
- """
- transparent_background = self._get_base_style().transparent_background
- code_width = (
- (
- (options.max_width - self._numbers_column_width - 1)
- if self.line_numbers
- else options.max_width
- )
- if self.code_width is None
- else self.code_width
- )
-
- ends_on_nl, processed_code = self._process_code(self.code)
- text = self.highlight(processed_code, self.line_range)
-
- if not self.line_numbers and not self.word_wrap and not self.line_range:
- if not ends_on_nl:
- text.remove_suffix("\n")
- # Simple case of just rendering text
- style = (
- self._get_base_style()
- + self._theme.get_style_for_token(Comment)
- + Style(dim=True)
- + self.background_style
- )
- if self.indent_guides and not options.ascii_only:
- text = text.with_indent_guides(self.tab_size, style=style)
- text.overflow = "crop"
- if style.transparent_background:
- yield from console.render(
- text, options=options.update(width=code_width)
- )
- else:
- syntax_lines = console.render_lines(
- text,
- options.update(width=code_width, height=None, justify="left"),
- style=self.background_style,
- pad=True,
- new_lines=True,
- )
- for syntax_line in syntax_lines:
- yield from syntax_line
- return
-
- start_line, end_line = self.line_range or (None, None)
- line_offset = 0
- if start_line:
- line_offset = max(0, start_line - 1)
- lines: Union[List[Text], Lines] = text.split("\n", allow_blank=ends_on_nl)
- if self.line_range:
- if line_offset > len(lines):
- return
- lines = lines[line_offset:end_line]
-
- if self.indent_guides and not options.ascii_only:
- style = (
- self._get_base_style()
- + self._theme.get_style_for_token(Comment)
- + Style(dim=True)
- + self.background_style
- )
- lines = (
- Text("\n")
- .join(lines)
- .with_indent_guides(self.tab_size, style=style + Style(italic=False))
- .split("\n", allow_blank=True)
- )
-
- numbers_column_width = self._numbers_column_width
- render_options = options.update(width=code_width)
-
- highlight_line = self.highlight_lines.__contains__
- _Segment = Segment
- new_line = _Segment("\n")
-
- line_pointer = "> " if options.legacy_windows else "❱ "
-
- (
- background_style,
- number_style,
- highlight_number_style,
- ) = self._get_number_styles(console)
-
- for line_no, line in enumerate(lines, self.start_line + line_offset):
- if self.word_wrap:
- wrapped_lines = console.render_lines(
- line,
- render_options.update(height=None, justify="left"),
- style=background_style,
- pad=not transparent_background,
- )
- else:
- segments = list(line.render(console, end=""))
- if options.no_wrap:
- wrapped_lines = [segments]
- else:
- wrapped_lines = [
- _Segment.adjust_line_length(
- segments,
- render_options.max_width,
- style=background_style,
- pad=not transparent_background,
- )
- ]
-
- if self.line_numbers:
- wrapped_line_left_pad = _Segment(
- " " * numbers_column_width + " ", background_style
- )
- for first, wrapped_line in loop_first(wrapped_lines):
- if first:
- line_column = str(line_no).rjust(numbers_column_width - 2) + " "
- if highlight_line(line_no):
- yield _Segment(line_pointer, Style(color="red"))
- yield _Segment(line_column, highlight_number_style)
- else:
- yield _Segment(" ", highlight_number_style)
- yield _Segment(line_column, number_style)
- else:
- yield wrapped_line_left_pad
- yield from wrapped_line
- yield new_line
- else:
- for wrapped_line in wrapped_lines:
- yield from wrapped_line
- yield new_line
-
- def _apply_stylized_ranges(self, text: Text) -> None:
- """
- Apply stylized ranges to a text instance,
- using the given code to determine the right portion to apply the style to.
-
- Args:
- text (Text): Text instance to apply the style to.
- """
- code = text.plain
- newlines_offsets = [
- # Let's add outer boundaries at each side of the list:
- 0,
- # N.B. using "\n" here is much faster than using metacharacters such as "^" or "\Z":
- *[
- match.start() + 1
- for match in re.finditer("\n", code, flags=re.MULTILINE)
- ],
- len(code) + 1,
- ]
-
- for stylized_range in self._stylized_ranges:
- start = _get_code_index_for_syntax_position(
- newlines_offsets, stylized_range.start
- )
- end = _get_code_index_for_syntax_position(
- newlines_offsets, stylized_range.end
- )
- if start is not None and end is not None:
- text.stylize(stylized_range.style, start, end)
-
- def _process_code(self, code: str) -> Tuple[bool, str]:
- """
- Applies various processing to a raw code string
- (normalises it so it always ends with a line return, dedents it if necessary, etc.)
-
- Args:
- code (str): The raw code string to process
-
- Returns:
- Tuple[bool, str]: the boolean indicates whether the raw code ends with a line return,
- while the string is the processed code.
- """
- ends_on_nl = code.endswith("\n")
- processed_code = code if ends_on_nl else code + "\n"
- processed_code = (
- textwrap.dedent(processed_code) if self.dedent else processed_code
- )
- processed_code = processed_code.expandtabs(self.tab_size)
- return ends_on_nl, processed_code
-
-
-def _get_code_index_for_syntax_position(
- newlines_offsets: Sequence[int], position: SyntaxPosition
-) -> Optional[int]:
- """
- Returns the index of the code string for the given positions.
-
- Args:
- newlines_offsets (Sequence[int]): The offset of each newline character found in the code snippet.
- position (SyntaxPosition): The position to search for.
-
- Returns:
- Optional[int]: The index of the code string for this position, or `None`
- if the given position's line number is out of range (if it's the column that is out of range
- we silently clamp its value so that it reaches the end of the line)
- """
- lines_count = len(newlines_offsets)
-
- line_number, column_index = position
- if line_number > lines_count or len(newlines_offsets) < (line_number + 1):
- return None # `line_number` is out of range
- line_index = line_number - 1
- line_length = newlines_offsets[line_index + 1] - newlines_offsets[line_index] - 1
- # If `column_index` is out of range: let's silently clamp it:
- column_index = min(line_length, column_index)
- return newlines_offsets[line_index] + column_index
-
-
-if __name__ == "__main__": # pragma: no cover
- import argparse
- import sys
-
- parser = argparse.ArgumentParser(
- description="Render syntax to the console with Rich"
- )
- parser.add_argument(
- "path",
- metavar="PATH",
- help="path to file, or - for stdin",
- )
- parser.add_argument(
- "-c",
- "--force-color",
- dest="force_color",
- action="store_true",
- default=None,
- help="force color for non-terminals",
- )
- parser.add_argument(
- "-i",
- "--indent-guides",
- dest="indent_guides",
- action="store_true",
- default=False,
- help="display indent guides",
- )
- parser.add_argument(
- "-l",
- "--line-numbers",
- dest="line_numbers",
- action="store_true",
- help="render line numbers",
- )
- parser.add_argument(
- "-w",
- "--width",
- type=int,
- dest="width",
- default=None,
- help="width of output (default will auto-detect)",
- )
- parser.add_argument(
- "-r",
- "--wrap",
- dest="word_wrap",
- action="store_true",
- default=False,
- help="word wrap long lines",
- )
- parser.add_argument(
- "-s",
- "--soft-wrap",
- action="store_true",
- dest="soft_wrap",
- default=False,
- help="enable soft wrapping mode",
- )
- parser.add_argument(
- "-t", "--theme", dest="theme", default="monokai", help="pygments theme"
- )
- parser.add_argument(
- "-b",
- "--background-color",
- dest="background_color",
- default=None,
- help="Override background color",
- )
- parser.add_argument(
- "-x",
- "--lexer",
- default=None,
- dest="lexer_name",
- help="Lexer name",
- )
- parser.add_argument(
- "-p", "--padding", type=int, default=0, dest="padding", help="Padding"
- )
- parser.add_argument(
- "--highlight-line",
- type=int,
- default=None,
- dest="highlight_line",
- help="The line number (not index!) to highlight",
- )
- args = parser.parse_args()
-
- from pip._vendor.rich.console import Console
-
- console = Console(force_terminal=args.force_color, width=args.width)
-
- if args.path == "-":
- code = sys.stdin.read()
- syntax = Syntax(
- code=code,
- lexer=args.lexer_name,
- line_numbers=args.line_numbers,
- word_wrap=args.word_wrap,
- theme=args.theme,
- background_color=args.background_color,
- indent_guides=args.indent_guides,
- padding=args.padding,
- highlight_lines={args.highlight_line},
- )
- else:
- syntax = Syntax.from_path(
- args.path,
- lexer=args.lexer_name,
- line_numbers=args.line_numbers,
- word_wrap=args.word_wrap,
- theme=args.theme,
- background_color=args.background_color,
- indent_guides=args.indent_guides,
- padding=args.padding,
- highlight_lines={args.highlight_line},
- )
- console.print(syntax, soft_wrap=args.soft_wrap)
diff --git a/spaces/Vegecken/sovits4dzl/vdecoder/hifigan/models.py b/spaces/Vegecken/sovits4dzl/vdecoder/hifigan/models.py
deleted file mode 100644
index 9747301f350bb269e62601017fe4633ce271b27e..0000000000000000000000000000000000000000
--- a/spaces/Vegecken/sovits4dzl/vdecoder/hifigan/models.py
+++ /dev/null
@@ -1,503 +0,0 @@
-import os
-import json
-from .env import AttrDict
-import numpy as np
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from .utils import init_weights, get_padding
-
-LRELU_SLOPE = 0.1
-
-
-def load_model(model_path, device='cuda'):
- config_file = os.path.join(os.path.split(model_path)[0], 'config.json')
- with open(config_file) as f:
- data = f.read()
-
- global h
- json_config = json.loads(data)
- h = AttrDict(json_config)
-
- generator = Generator(h).to(device)
-
- cp_dict = torch.load(model_path)
- generator.load_state_dict(cp_dict['generator'])
- generator.eval()
- generator.remove_weight_norm()
- del cp_dict
- return generator, h
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.h = h
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- xt = c2(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.h = h
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-def padDiff(x):
- return F.pad(F.pad(x, (0,0,-1,1), 'constant', 0) - x, (0,0,0,-1), 'constant', 0)
-
-class SineGen(torch.nn.Module):
- """ Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of sine-waveform (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_threshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(self, samp_rate, harmonic_num=0,
- sine_amp=0.1, noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
- self.flag_for_pulse = flag_for_pulse
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = (f0 > self.voiced_threshold).type(torch.float32)
- return uv
-
- def _f02sine(self, f0_values):
- """ f0_values: (batchsize, length, dim)
- where dim indicates fundamental tone and overtones
- """
- # convert to F0 in rad. The integer part n can be ignored
- # because 2 * np.pi * n doesn't affect phase
- rad_values = (f0_values / self.sampling_rate) % 1
-
- # initial phase noise (no noise for fundamental component)
- rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \
- device=f0_values.device)
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-
- # instantaneous phase sine[t] = sin(2*pi * sum_{i=1}^{t} rad_i)
- if not self.flag_for_pulse:
- # for normal case
-
- # To prevent torch.cumsum numerical overflow,
- # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1.
- # Buffer tmp_over_one_idx indicates the time step to add -1.
- # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi
- tmp_over_one = torch.cumsum(rad_values, 1) % 1
- tmp_over_one_idx = (padDiff(tmp_over_one)) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
-
- sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1)
- * 2 * np.pi)
- else:
- # If necessary, make sure that the first time step of every
- # voiced segments is sin(pi) or cos(0)
- # This is used for pulse-train generation
-
- # identify the last time step in unvoiced segments
- uv = self._f02uv(f0_values)
- uv_1 = torch.roll(uv, shifts=-1, dims=1)
- uv_1[:, -1, :] = 1
- u_loc = (uv < 1) * (uv_1 > 0)
-
- # get the instantaneous phase
- tmp_cumsum = torch.cumsum(rad_values, dim=1)
- # different batch needs to be processed differently
- for idx in range(f0_values.shape[0]):
- temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :]
- temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :]
- # stores the accumulation of i.phase within
- # each voiced segments
- tmp_cumsum[idx, :, :] = 0
- tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum
-
- # rad_values - tmp_cumsum: remove the accumulation of i.phase
- # within the previous voiced segment.
- i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1)
-
- # get the sines
- sines = torch.cos(i_phase * 2 * np.pi)
- return sines
-
- def forward(self, f0):
- """ sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim,
- device=f0.device)
- # fundamental component
- fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device))
-
- # generate sine waveforms
- sine_waves = self._f02sine(fn) * self.sine_amp
-
- # generate uv signal
- # uv = torch.ones(f0.shape)
- # uv = uv * (f0 > self.voiced_threshold)
- uv = self._f02uv(f0)
-
- # noise: for unvoiced frames the noise amplitude should be similar to sine_amp
- # (std = self.sine_amp/3 -> max value ~ self.sine_amp);
- # for voiced regions the std is self.noise_std
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
-
- # first: set the unvoiced part to 0 by uv
- # then: additive noise
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """ SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
-
- # to produce sine waveforms
- self.l_sin_gen = SineGen(sampling_rate, harmonic_num,
- sine_amp, add_noise_std, voiced_threshod)
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x):
- """
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length, 1)
- """
- # source for harmonic branch
- sine_wavs, uv, _ = self.l_sin_gen(x)
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
-
- # source for noise branch, in the same shape as uv
- noise = torch.randn_like(uv) * self.sine_amp / 3
- return sine_merge, noise, uv
-
-
-class Generator(torch.nn.Module):
- def __init__(self, h):
- super(Generator, self).__init__()
- self.h = h
-
- self.num_kernels = len(h["resblock_kernel_sizes"])
- self.num_upsamples = len(h["upsample_rates"])
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h["upsample_rates"]))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=h["sampling_rate"],
- harmonic_num=8)
- self.noise_convs = nn.ModuleList()
- self.conv_pre = weight_norm(Conv1d(h["inter_channels"], h["upsample_initial_channel"], 7, 1, padding=3))
- resblock = ResBlock1 if h["resblock"] == '1' else ResBlock2
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(h["upsample_rates"], h["upsample_kernel_sizes"])):
- c_cur = h["upsample_initial_channel"] // (2 ** (i + 1))
- self.ups.append(weight_norm(
- ConvTranspose1d(h["upsample_initial_channel"] // (2 ** i), h["upsample_initial_channel"] // (2 ** (i + 1)),
- k, u, padding=(k - u) // 2)))
- if i + 1 < len(h["upsample_rates"]): #
- stride_f0 = np.prod(h["upsample_rates"][i + 1:])
- self.noise_convs.append(Conv1d(
- 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2))
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = h["upsample_initial_channel"] // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(h["resblock_kernel_sizes"], h["resblock_dilation_sizes"])):
- self.resblocks.append(resblock(h, ch, k, d))
-
- self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3))
- self.ups.apply(init_weights)
- self.conv_post.apply(init_weights)
- self.cond = nn.Conv1d(h['gin_channels'], h['upsample_initial_channel'], 1)
-
- def forward(self, x, f0, g=None):
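- # Rough shape guide (inferred from the layers above, not from the original comments):
- #   x: acoustic features (B, inter_channels, T_frames); f0: frame-level F0 (B, T_frames),
- #   upsampled to sample rate to drive the harmonic source module; g: conditioning
- #   embedding, typically (B, gin_channels, 1), always added via self.cond despite the
- #   default of None.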
- # print(1,x.shape,f0.shape,f0[:, None].shape)
- f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t
- # print(2,f0.shape)
- har_source, noi_source, uv = self.m_source(f0)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- x = x + self.cond(g)
- # print(124,x.shape,har_source.shape)
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, LRELU_SLOPE)
- # print(3,x.shape)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- # print(4,x_source.shape,har_source.shape,x.shape)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
- remove_weight_norm(self.conv_pre)
- remove_weight_norm(self.conv_post)
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
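-    # Runs one DiscriminatorP per prime period (default 2, 3, 5, 7, 11) on both the real and the generated waveform.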
- def __init__(self, periods=None):
- super(MultiPeriodDiscriminator, self).__init__()
- self.periods = periods if periods is not None else [2, 3, 5, 7, 11]
- self.discriminators = nn.ModuleList()
- for period in self.periods:
- self.discriminators.append(DiscriminatorP(period))
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
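-    # Scale discriminator: a stack of strided, grouped 1-D convolutions applied directly to the (possibly downsampled) waveform.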
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 128, 15, 1, padding=7)),
- norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
- norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
- norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiScaleDiscriminator(torch.nn.Module):
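-    # Applies DiscriminatorS at the original sample rate and at two average-pooled (2x and 4x downsampled) rates.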
- def __init__(self):
- super(MultiScaleDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList([
- DiscriminatorS(use_spectral_norm=True),
- DiscriminatorS(),
- DiscriminatorS(),
- ])
- self.meanpools = nn.ModuleList([
- AvgPool1d(4, 2, padding=2),
- AvgPool1d(4, 2, padding=2)
- ])
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- if i != 0:
- y = self.meanpools[i - 1](y)
- y_hat = self.meanpools[i - 1](y_hat)
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-def feature_loss(fmap_r, fmap_g):
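-    # Feature-matching loss: L1 distance between real and generated feature maps from every discriminator layer, scaled by 2.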
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
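-    # LSGAN discriminator loss: real outputs are pushed toward 1, generated outputs toward 0.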
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- r_loss = torch.mean((1 - dr) ** 2)
- g_loss = torch.mean(dg ** 2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
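-    # LSGAN generator loss: discriminator outputs on generated audio are pushed toward 1.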
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- l = torch.mean((1 - dg) ** 2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
diff --git a/spaces/Wayben/ChatGPT/assets/custom.css b/spaces/Wayben/ChatGPT/assets/custom.css
deleted file mode 100644
index a79a34c1c6ef55a6a5e04830ae9f7c5d63fb8faa..0000000000000000000000000000000000000000
--- a/spaces/Wayben/ChatGPT/assets/custom.css
+++ /dev/null
@@ -1,173 +0,0 @@
-:root {
- --chatbot-color-light: #F3F3F3;
- --chatbot-color-dark: #121111;
-}
-
-/* status_display */
-#status_display {
- display: flex;
- min-height: 2.5em;
- align-items: flex-end;
- justify-content: flex-end;
-}
-#status_display p {
- font-size: .85em;
- font-family: monospace;
- color: var(--body-text-color-subdued);
-}
-
-#chuanhu_chatbot, #status_display {
- transition: all 0.6s;
-}
-
-/* usage_display */
-#usage_display {
- height: 1em;
-}
-#usage_display p{
- padding: 0 1em;
- font-size: .85em;
- font-family: monospace;
- color: var(--body-text-color-subdued);
-}
-/* list */
-ol:not(.options), ul:not(.options) {
- padding-inline-start: 2em !important;
-}
-
-/* Light theme */
-#chuanhu_chatbot {
- background-color: var(--chatbot-color-light) !important;
-}
-[data-testid = "bot"] {
- background-color: #FFFFFF !important;
-}
-[data-testid = "user"] {
- background-color: #95EC69 !important;
-}
-/* Chat bubbles */
-[class *= "message"] {
- border-radius: var(--radius-xl) !important;
- border: none;
- padding: var(--spacing-xl) !important;
- font-size: var(--text-md) !important;
- line-height: var(--line-md) !important;
- min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
- min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
- max-width: 85%;
- border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
- max-width: 85%;
- width: auto !important;
- border-bottom-right-radius: 0 !important;
-}
-/* Tables */
-table {
- margin: 1em 0;
- border-collapse: collapse;
- empty-cells: show;
-}
-td,th {
- border: 1.2px solid var(--border-color-primary) !important;
- padding: 0.2em;
-}
-thead {
- background-color: rgba(175,184,193,0.2);
-}
-thead th {
- padding: .5em .2em;
-}
-/* Inline code */
-code {
- display: inline;
- white-space: break-spaces;
- border-radius: 6px;
- margin: 0 2px 0 2px;
- padding: .2em .4em .1em .4em;
- background-color: rgba(175,184,193,0.2);
-}
-/* Code blocks */
-pre code {
- display: block;
- overflow: auto;
- white-space: pre;
- background-color: hsla(0, 0%, 0%, 80%)!important;
- border-radius: 10px;
- padding: 1.4em 1.2em 0em 1.4em;
- margin: 1.2em 2em 1.2em 0.5em;
- color: #FFF;
- box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2);
-}
-/* Code highlighting styles */
-.highlight .hll { background-color: #49483e }
-.highlight .c { color: #75715e } /* Comment */
-.highlight .err { color: #960050; background-color: #1e0010 } /* Error */
-.highlight .k { color: #66d9ef } /* Keyword */
-.highlight .l { color: #ae81ff } /* Literal */
-.highlight .n { color: #f8f8f2 } /* Name */
-.highlight .o { color: #f92672 } /* Operator */
-.highlight .p { color: #f8f8f2 } /* Punctuation */
-.highlight .ch { color: #75715e } /* Comment.Hashbang */
-.highlight .cm { color: #75715e } /* Comment.Multiline */
-.highlight .cp { color: #75715e } /* Comment.Preproc */
-.highlight .cpf { color: #75715e } /* Comment.PreprocFile */
-.highlight .c1 { color: #75715e } /* Comment.Single */
-.highlight .cs { color: #75715e } /* Comment.Special */
-.highlight .gd { color: #f92672 } /* Generic.Deleted */
-.highlight .ge { font-style: italic } /* Generic.Emph */
-.highlight .gi { color: #a6e22e } /* Generic.Inserted */
-.highlight .gs { font-weight: bold } /* Generic.Strong */
-.highlight .gu { color: #75715e } /* Generic.Subheading */
-.highlight .kc { color: #66d9ef } /* Keyword.Constant */
-.highlight .kd { color: #66d9ef } /* Keyword.Declaration */
-.highlight .kn { color: #f92672 } /* Keyword.Namespace */
-.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */
-.highlight .kr { color: #66d9ef } /* Keyword.Reserved */
-.highlight .kt { color: #66d9ef } /* Keyword.Type */
-.highlight .ld { color: #e6db74 } /* Literal.Date */
-.highlight .m { color: #ae81ff } /* Literal.Number */
-.highlight .s { color: #e6db74 } /* Literal.String */
-.highlight .na { color: #a6e22e } /* Name.Attribute */
-.highlight .nb { color: #f8f8f2 } /* Name.Builtin */
-.highlight .nc { color: #a6e22e } /* Name.Class */
-.highlight .no { color: #66d9ef } /* Name.Constant */
-.highlight .nd { color: #a6e22e } /* Name.Decorator */
-.highlight .ni { color: #f8f8f2 } /* Name.Entity */
-.highlight .ne { color: #a6e22e } /* Name.Exception */
-.highlight .nf { color: #a6e22e } /* Name.Function */
-.highlight .nl { color: #f8f8f2 } /* Name.Label */
-.highlight .nn { color: #f8f8f2 } /* Name.Namespace */
-.highlight .nx { color: #a6e22e } /* Name.Other */
-.highlight .py { color: #f8f8f2 } /* Name.Property */
-.highlight .nt { color: #f92672 } /* Name.Tag */
-.highlight .nv { color: #f8f8f2 } /* Name.Variable */
-.highlight .ow { color: #f92672 } /* Operator.Word */
-.highlight .w { color: #f8f8f2 } /* Text.Whitespace */
-.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */
-.highlight .mf { color: #ae81ff } /* Literal.Number.Float */
-.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */
-.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */
-.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */
-.highlight .sa { color: #e6db74 } /* Literal.String.Affix */
-.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */
-.highlight .sc { color: #e6db74 } /* Literal.String.Char */
-.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */
-.highlight .sd { color: #e6db74 } /* Literal.String.Doc */
-.highlight .s2 { color: #e6db74 } /* Literal.String.Double */
-.highlight .se { color: #ae81ff } /* Literal.String.Escape */
-.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */
-.highlight .si { color: #e6db74 } /* Literal.String.Interpol */
-.highlight .sx { color: #e6db74 } /* Literal.String.Other */
-.highlight .sr { color: #e6db74 } /* Literal.String.Regex */
-.highlight .s1 { color: #e6db74 } /* Literal.String.Single */
-.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */
-.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */
-.highlight .fm { color: #a6e22e } /* Name.Function.Magic */
-.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */
-.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */
-.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */
-.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */
-.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */
diff --git a/spaces/XingHe0127/Chatbot/modules/shared.py b/spaces/XingHe0127/Chatbot/modules/shared.py
deleted file mode 100644
index a9e72580aa7ae48f907e923a09099513570a9ad8..0000000000000000000000000000000000000000
--- a/spaces/XingHe0127/Chatbot/modules/shared.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from modules.presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST
-import os
-import queue
-
-class State:
- interrupted = False
- multi_api_key = False
- completion_url = COMPLETION_URL
- balance_api_url = BALANCE_API_URL
- usage_api_url = USAGE_API_URL
-
- def interrupt(self):
- self.interrupted = True
-
- def recover(self):
- self.interrupted = False
-
- def set_api_host(self, api_host):
- self.completion_url = f"https://{api_host}/v1/chat/completions"
- self.balance_api_url = f"https://{api_host}/dashboard/billing/credit_grants"
- self.usage_api_url = f"https://{api_host}/dashboard/billing/usage"
- os.environ["OPENAI_API_BASE"] = f"https://{api_host}/v1"
-
- def reset_api_host(self):
- self.completion_url = COMPLETION_URL
- self.balance_api_url = BALANCE_API_URL
- self.usage_api_url = USAGE_API_URL
- os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}/v1"
- return API_HOST
-
- def reset_all(self):
- self.interrupted = False
- self.completion_url = COMPLETION_URL
-
- def set_api_key_queue(self, api_key_list):
- self.multi_api_key = True
- self.api_key_queue = queue.Queue()
- for api_key in api_key_list:
- self.api_key_queue.put(api_key)
-
- def switching_api_key(self, func):
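-        # Decorator: take an API key from the queue, attach it to the caller (args[0]) for this call, then return it to the queue.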
- if not hasattr(self, "api_key_queue"):
- return func
-
-        def wrapped(*args, **kwargs):
-            api_key = self.api_key_queue.get()
-            args[0].api_key = api_key
-            try:
-                ret = func(*args, **kwargs)
-            finally:
-                # Return the key to the queue even if the wrapped call raises.
-                self.api_key_queue.put(api_key)
-            return ret
-
- return wrapped
-
-
-state = State()
diff --git a/spaces/YONG627/456123/yolov5-code-main/models/common.py b/spaces/YONG627/456123/yolov5-code-main/models/common.py
deleted file mode 100644
index bd4b2f21c88298301f95af79898cc6aa66d2c450..0000000000000000000000000000000000000000
--- a/spaces/YONG627/456123/yolov5-code-main/models/common.py
+++ /dev/null
@@ -1,956 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Common modules
-"""
-
-import ast
-import contextlib
-import json
-import math
-import platform
-import warnings
-import zipfile
-from collections import OrderedDict, namedtuple
-from copy import copy
-from pathlib import Path
-from urllib.parse import urlparse
-
-import cv2
-import numpy as np
-import pandas as pd
-import requests
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision.models as models
-from PIL import Image
-from torch.cuda import amp
-
-from utils import TryExcept
-from utils.dataloaders import exif_transpose, letterbox
-from utils.general import (LOGGER, ROOT, Profile, check_requirements, check_suffix, check_version, colorstr,
- increment_path, is_jupyter, make_divisible, non_max_suppression, scale_boxes, xywh2xyxy,
- xyxy2xywh, yaml_load)
-from utils.plots import Annotator, colors, save_one_box
-from utils.torch_utils import copy_attr, smart_inference_mode
-
-
-def autopad(k, p=None, d=1): # kernel, padding, dilation
- # Pad to 'same' shape outputs
- if d > 1:
- k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k] # actual kernel-size
- if p is None:
- p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
- return p
-
-
-class Conv(nn.Module):
- # Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)
- default_act = nn.SiLU() # default activation
-
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
- super().__init__()
- self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
- self.bn = nn.BatchNorm2d(c2)
- self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()
-
- def forward(self, x):
- return self.act(self.bn(self.conv(x)))
-
- def forward_fuse(self, x):
- return self.act(self.conv(x))
-
-
-class DWConv(Conv):
- # Depth-wise convolution
- def __init__(self, c1, c2, k=1, s=1, d=1, act=True): # ch_in, ch_out, kernel, stride, dilation, activation
- super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), d=d, act=act)
-
-
-class DWConvTranspose2d(nn.ConvTranspose2d):
- # Depth-wise transpose convolution
- def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0): # ch_in, ch_out, kernel, stride, padding, padding_out
- super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2))
-
-
-class TransformerLayer(nn.Module):
- # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance)
- def __init__(self, c, num_heads):
- super().__init__()
- self.q = nn.Linear(c, c, bias=False)
- self.k = nn.Linear(c, c, bias=False)
- self.v = nn.Linear(c, c, bias=False)
- self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)
- self.fc1 = nn.Linear(c, c, bias=False)
- self.fc2 = nn.Linear(c, c, bias=False)
-
- def forward(self, x):
- x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x
- x = self.fc2(self.fc1(x)) + x
- return x
-
-
-class TransformerBlock(nn.Module):
- # Vision Transformer https://arxiv.org/abs/2010.11929
- def __init__(self, c1, c2, num_heads, num_layers):
- super().__init__()
- self.conv = None
- if c1 != c2:
- self.conv = Conv(c1, c2)
- self.linear = nn.Linear(c2, c2) # learnable position embedding
- self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers)))
- self.c2 = c2
-
- def forward(self, x):
- if self.conv is not None:
- x = self.conv(x)
- b, _, w, h = x.shape
- p = x.flatten(2).permute(2, 0, 1)
- return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h)
-
-
-class Bottleneck(nn.Module):
- # Standard bottleneck
- def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_, c2, 3, 1, g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class BottleneckCSP(nn.Module):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
- self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
- self.cv4 = Conv(2 * c_, c2, 1, 1)
- self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3)
- self.act = nn.SiLU()
- self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
-
- def forward(self, x):
- y1 = self.cv3(self.m(self.cv1(x)))
- y2 = self.cv2(x)
- return self.cv4(self.act(self.bn(torch.cat((y1, y2), 1))))
-
-
-class CrossConv(nn.Module):
- # Cross Convolution Downsample
- def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
- # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, (1, k), (1, s))
- self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class MobileNetV3(nn.Module):
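-    # Wraps a slice of torchvision's pretrained mobilenet_v3_small feature extractor (stage selected by the slice argument).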
-
- def __init__(self, slice):
- super(MobileNetV3, self).__init__()
- self.model = None
- if slice == 1:
- self.model = models.mobilenet_v3_small(pretrained=True).features[:4]
- elif slice == 2:
- self.model = models.mobilenet_v3_small(pretrained=True).features[4:9]
- else:
- self.model = models.mobilenet_v3_small(pretrained=True).features[9:]
-
- def forward(self, x):
- return self.model(x)
-
-
-class SE(nn.Module):
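-    # Squeeze-and-Excitation block: global average pooling, a 1x1 bottleneck (reduction by ratio), and a sigmoid channel gate.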
-
- def __init__(self, in_chnls, ratio):
- super(SE, self).__init__()
- self.squeeze = nn.AdaptiveAvgPool2d((1, 1))
- self.compress = nn.Conv2d(in_chnls, in_chnls // ratio, 1, 1, 0)
- self.excitation = nn.Conv2d(in_chnls // ratio, in_chnls, 1, 1, 0)
-
- def forward(self, x):
- out = self.squeeze(x)
- out = self.compress(out)
- out = F.relu(out)
- out = self.excitation(out)
-        return x * torch.sigmoid(out)  # torch.sigmoid instead of the deprecated F.sigmoid
-
-
-class C2fBottleneck(nn.Module):
- # Standard bottleneck
- def __init__(self, c1, c2, shortcut=True, g=1, k=(3, 3), e=0.5): # ch_in, ch_out, shortcut, groups, kernels, expand
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, k[0], 1)
- self.cv2 = Conv(c_, c2, k[1], 1, g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class C2f(nn.Module):
- # CSP Bottleneck with 2 convolutions
- def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__()
- self.c = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, 2 * self.c, 1, 1)
- self.cv2 = Conv((2 + n) * self.c, c2, 1) # optional act=FReLU(c2)
- self.m = nn.ModuleList(C2fBottleneck(self.c, self.c, shortcut, g, k=((3, 3), (3, 3)), e=1.0) for _ in range(n))
-
- def forward(self, x):
- y = list(self.cv1(x).chunk(2, 1))
- y.extend(m(y[-1]) for m in self.m)
- return self.cv2(torch.cat(y, 1))
-
- def forward_split(self, x):
- y = list(self.cv1(x).split((self.c, self.c), 1))
- y.extend(m(y[-1]) for m in self.m)
- return self.cv2(torch.cat(y, 1))
-
-
-class C3(nn.Module):
- # CSP Bottleneck with 3 convolutions
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c1, c_, 1, 1)
- self.cv3 = Conv(2 * c_, c2, 1) # optional act=FReLU(c2)
- self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
-
- def forward(self, x):
- return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))
-
-
-class C3x(C3):
- # C3 module with cross-convolutions
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e)
- self.m = nn.Sequential(*(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)))
-
-
-class C3TR(C3):
- # C3 module with TransformerBlock()
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e)
- self.m = TransformerBlock(c_, c_, 4, n)
-
-
-class C3SPP(C3):
- # C3 module with SPP()
- def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e)
- self.m = SPP(c_, c_, k)
-
-
-class C3Ghost(C3):
- # C3 module with GhostBottleneck()
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n)))
-
-
-class SPP(nn.Module):
- # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729
- def __init__(self, c1, c2, k=(5, 9, 13)):
- super().__init__()
- c_ = c1 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
- self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
-
- def forward(self, x):
- x = self.cv1(x)
- with warnings.catch_warnings():
- warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
- return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
-
-
-class SPPF(nn.Module):
- # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
- def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13))
- super().__init__()
- c_ = c1 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_ * 4, c2, 1, 1)
- self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
-
- def forward(self, x):
- x = self.cv1(x)
- with warnings.catch_warnings():
- warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
- y1 = self.m(x)
- y2 = self.m(y1)
- return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1))
-
-
-class Focus(nn.Module):
- # Focus wh information into c-space
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
- super().__init__()
- self.conv = Conv(c1 * 4, c2, k, s, p, g, act=act)
- # self.contract = Contract(gain=2)
-
- def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2)
- return self.conv(torch.cat((x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]), 1))
- # return self.conv(self.contract(x))
-
-
-class GhostConv(nn.Module):
- # Ghost Convolution https://github.com/huawei-noah/ghostnet
- def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups
- super().__init__()
- c_ = c2 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, k, s, None, g, act=act)
- self.cv2 = Conv(c_, c_, 5, 1, None, c_, act=act)
-
- def forward(self, x):
- y = self.cv1(x)
- return torch.cat((y, self.cv2(y)), 1)
-
-
-class GhostBottleneck(nn.Module):
- # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
- def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride
- super().__init__()
- c_ = c2 // 2
- self.conv = nn.Sequential(
- GhostConv(c1, c_, 1, 1), # pw
- DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
- GhostConv(c_, c2, 1, 1, act=False)) # pw-linear
- self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), Conv(c1, c2, 1, 1,
- act=False)) if s == 2 else nn.Identity()
-
- def forward(self, x):
- return self.conv(x) + self.shortcut(x)
-
-
-class Contract(nn.Module):
- # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
- def __init__(self, gain=2):
- super().__init__()
- self.gain = gain
-
- def forward(self, x):
- b, c, h, w = x.size() # assert (h / s == 0) and (W / s == 0), 'Indivisible gain'
- s = self.gain
- x = x.view(b, c, h // s, s, w // s, s) # x(1,64,40,2,40,2)
- x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40)
- return x.view(b, c * s * s, h // s, w // s) # x(1,256,40,40)
-
-
-class Expand(nn.Module):
- # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160)
- def __init__(self, gain=2):
- super().__init__()
- self.gain = gain
-
- def forward(self, x):
- b, c, h, w = x.size() # assert C / s ** 2 == 0, 'Indivisible gain'
- s = self.gain
- x = x.view(b, s, s, c // s ** 2, h, w) # x(1,2,2,16,80,80)
- x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2)
- return x.view(b, c // s ** 2, h * s, w * s) # x(1,16,160,160)
-
-
-class Concat(nn.Module):
- # Concatenate a list of tensors along dimension
- def __init__(self, dimension=1):
- super().__init__()
- self.d = dimension
-
- def forward(self, x):
- return torch.cat(x, self.d)
-
-
-class DetectMultiBackend(nn.Module):
- # YOLOv5 MultiBackend class for python inference on various backends
- def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False, data=None, fp16=False, fuse=True):
- # Usage:
- # PyTorch: weights = *.pt
- # TorchScript: *.torchscript
- # ONNX Runtime: *.onnx
- # ONNX OpenCV DNN: *.onnx --dnn
- # OpenVINO: *_openvino_model
- # CoreML: *.mlmodel
- # TensorRT: *.engine
- # TensorFlow SavedModel: *_saved_model
- # TensorFlow GraphDef: *.pb
- # TensorFlow Lite: *.tflite
- # TensorFlow Edge TPU: *_edgetpu.tflite
- # PaddlePaddle: *_paddle_model
- from models.experimental import attempt_download, attempt_load # scoped to avoid circular import
-
- super().__init__()
- w = str(weights[0] if isinstance(weights, list) else weights)
- pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, triton = self._model_type(w)
- fp16 &= pt or jit or onnx or engine # FP16
-        nhwc = coreml or saved_model or pb or tflite or edgetpu  # BHWC formats (vs torch BCHW)
- stride = 32 # default stride
- cuda = torch.cuda.is_available() and device.type != 'cpu' # use CUDA
- if not (pt or triton):
- w = attempt_download(w) # download if not local
-
- if pt: # PyTorch
- model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
- stride = max(int(model.stride.max()), 32) # model stride
- names = model.module.names if hasattr(model, 'module') else model.names # get class names
- model.half() if fp16 else model.float()
- self.model = model # explicitly assign for to(), cpu(), cuda(), half()
- elif jit: # TorchScript
- LOGGER.info(f'Loading {w} for TorchScript inference...')
- extra_files = {'config.txt': ''} # model metadata
- model = torch.jit.load(w, _extra_files=extra_files, map_location=device)
- model.half() if fp16 else model.float()
- if extra_files['config.txt']: # load metadata dict
- d = json.loads(extra_files['config.txt'],
- object_hook=lambda d: {int(k) if k.isdigit() else k: v
- for k, v in d.items()})
- stride, names = int(d['stride']), d['names']
- elif dnn: # ONNX OpenCV DNN
- LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...')
- check_requirements('opencv-python>=4.5.4')
- net = cv2.dnn.readNetFromONNX(w)
- elif onnx: # ONNX Runtime
- LOGGER.info(f'Loading {w} for ONNX Runtime inference...')
- check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime'))
- import onnxruntime
- providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']
- session = onnxruntime.InferenceSession(w, providers=providers)
- output_names = [x.name for x in session.get_outputs()]
- meta = session.get_modelmeta().custom_metadata_map # metadata
- if 'stride' in meta:
- stride, names = int(meta['stride']), eval(meta['names'])
- elif xml: # OpenVINO
- LOGGER.info(f'Loading {w} for OpenVINO inference...')
- check_requirements('openvino') # requires openvino-dev: https://pypi.org/project/openvino-dev/
- from openvino.runtime import Core, Layout, get_batch
- ie = Core()
- if not Path(w).is_file(): # if not *.xml
- w = next(Path(w).glob('*.xml')) # get *.xml file from *_openvino_model dir
- network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin'))
- if network.get_parameters()[0].get_layout().empty:
- network.get_parameters()[0].set_layout(Layout('NCHW'))
- batch_dim = get_batch(network)
- if batch_dim.is_static:
- batch_size = batch_dim.get_length()
- executable_network = ie.compile_model(network, device_name='CPU') # device_name="MYRIAD" for Intel NCS2
- stride, names = self._load_metadata(Path(w).with_suffix('.yaml')) # load metadata
- elif engine: # TensorRT
- LOGGER.info(f'Loading {w} for TensorRT inference...')
- import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download
- check_version(trt.__version__, '7.0.0', hard=True) # require tensorrt>=7.0.0
- if device.type == 'cpu':
- device = torch.device('cuda:0')
- Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr'))
- logger = trt.Logger(trt.Logger.INFO)
- with open(w, 'rb') as f, trt.Runtime(logger) as runtime:
- model = runtime.deserialize_cuda_engine(f.read())
- context = model.create_execution_context()
- bindings = OrderedDict()
- output_names = []
- fp16 = False # default updated below
- dynamic = False
- for i in range(model.num_bindings):
- name = model.get_binding_name(i)
- dtype = trt.nptype(model.get_binding_dtype(i))
- if model.binding_is_input(i):
- if -1 in tuple(model.get_binding_shape(i)): # dynamic
- dynamic = True
- context.set_binding_shape(i, tuple(model.get_profile_shape(0, i)[2]))
- if dtype == np.float16:
- fp16 = True
- else: # output
- output_names.append(name)
- shape = tuple(context.get_binding_shape(i))
- im = torch.from_numpy(np.empty(shape, dtype=dtype)).to(device)
- bindings[name] = Binding(name, dtype, shape, im, int(im.data_ptr()))
- binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items())
- batch_size = bindings['images'].shape[0] # if dynamic, this is instead max batch size
- elif coreml: # CoreML
- LOGGER.info(f'Loading {w} for CoreML inference...')
- import coremltools as ct
- model = ct.models.MLModel(w)
- elif saved_model: # TF SavedModel
- LOGGER.info(f'Loading {w} for TensorFlow SavedModel inference...')
- import tensorflow as tf
- keras = False # assume TF1 saved_model
- model = tf.keras.models.load_model(w) if keras else tf.saved_model.load(w)
- elif pb: # GraphDef https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt
- LOGGER.info(f'Loading {w} for TensorFlow GraphDef inference...')
- import tensorflow as tf
-
- def wrap_frozen_graph(gd, inputs, outputs):
- x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=''), []) # wrapped
- ge = x.graph.as_graph_element
- return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs))
-
- def gd_outputs(gd):
- name_list, input_list = [], []
- for node in gd.node: # tensorflow.core.framework.node_def_pb2.NodeDef
- name_list.append(node.name)
- input_list.extend(node.input)
- return sorted(f'{x}:0' for x in list(set(name_list) - set(input_list)) if not x.startswith('NoOp'))
-
- gd = tf.Graph().as_graph_def() # TF GraphDef
- with open(w, 'rb') as f:
- gd.ParseFromString(f.read())
- frozen_func = wrap_frozen_graph(gd, inputs='x:0', outputs=gd_outputs(gd))
- elif tflite or edgetpu: # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python
- try: # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu
- from tflite_runtime.interpreter import Interpreter, load_delegate
- except ImportError:
- import tensorflow as tf
- Interpreter, load_delegate = tf.lite.Interpreter, tf.lite.experimental.load_delegate,
- if edgetpu: # TF Edge TPU https://coral.ai/software/#edgetpu-runtime
- LOGGER.info(f'Loading {w} for TensorFlow Lite Edge TPU inference...')
- delegate = {
- 'Linux': 'libedgetpu.so.1',
- 'Darwin': 'libedgetpu.1.dylib',
- 'Windows': 'edgetpu.dll'}[platform.system()]
- interpreter = Interpreter(model_path=w, experimental_delegates=[load_delegate(delegate)])
- else: # TFLite
- LOGGER.info(f'Loading {w} for TensorFlow Lite inference...')
- interpreter = Interpreter(model_path=w) # load TFLite model
- interpreter.allocate_tensors() # allocate
- input_details = interpreter.get_input_details() # inputs
- output_details = interpreter.get_output_details() # outputs
- # load metadata
- with contextlib.suppress(zipfile.BadZipFile):
- with zipfile.ZipFile(w, 'r') as model:
- meta_file = model.namelist()[0]
- meta = ast.literal_eval(model.read(meta_file).decode('utf-8'))
- stride, names = int(meta['stride']), meta['names']
- elif tfjs: # TF.js
- raise NotImplementedError('ERROR: YOLOv5 TF.js inference is not supported')
- elif paddle: # PaddlePaddle
- LOGGER.info(f'Loading {w} for PaddlePaddle inference...')
- check_requirements('paddlepaddle-gpu' if cuda else 'paddlepaddle')
- import paddle.inference as pdi
- if not Path(w).is_file(): # if not *.pdmodel
- w = next(Path(w).rglob('*.pdmodel')) # get *.pdmodel file from *_paddle_model dir
- weights = Path(w).with_suffix('.pdiparams')
- config = pdi.Config(str(w), str(weights))
- if cuda:
- config.enable_use_gpu(memory_pool_init_size_mb=2048, device_id=0)
- predictor = pdi.create_predictor(config)
- input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
- output_names = predictor.get_output_names()
- elif triton: # NVIDIA Triton Inference Server
- LOGGER.info(f'Using {w} as Triton Inference Server...')
- check_requirements('tritonclient[all]')
- from utils.triton import TritonRemoteModel
- model = TritonRemoteModel(url=w)
- nhwc = model.runtime.startswith('tensorflow')
- else:
- raise NotImplementedError(f'ERROR: {w} is not a supported format')
-
- # class names
- if 'names' not in locals():
- names = yaml_load(data)['names'] if data else {i: f'class{i}' for i in range(999)}
- if names[0] == 'n01440764' and len(names) == 1000: # ImageNet
- names = yaml_load(ROOT / 'data/ImageNet.yaml')['names'] # human-readable names
-
- self.__dict__.update(locals()) # assign all variables to self
-
- def forward(self, im, augment=False, visualize=False):
- # YOLOv5 MultiBackend inference
- b, ch, h, w = im.shape # batch, channel, height, width
- if self.fp16 and im.dtype != torch.float16:
- im = im.half() # to FP16
- if self.nhwc:
- im = im.permute(0, 2, 3, 1) # torch BCHW to numpy BHWC shape(1,320,192,3)
-
- if self.pt: # PyTorch
- y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
- elif self.jit: # TorchScript
- y = self.model(im)
- elif self.dnn: # ONNX OpenCV DNN
- im = im.cpu().numpy() # torch to numpy
- self.net.setInput(im)
- y = self.net.forward()
- elif self.onnx: # ONNX Runtime
- im = im.cpu().numpy() # torch to numpy
- y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im})
- elif self.xml: # OpenVINO
- im = im.cpu().numpy() # FP32
- y = list(self.executable_network([im]).values())
- elif self.engine: # TensorRT
- if self.dynamic and im.shape != self.bindings['images'].shape:
- i = self.model.get_binding_index('images')
- self.context.set_binding_shape(i, im.shape) # reshape if dynamic
- self.bindings['images'] = self.bindings['images']._replace(shape=im.shape)
- for name in self.output_names:
- i = self.model.get_binding_index(name)
- self.bindings[name].data.resize_(tuple(self.context.get_binding_shape(i)))
- s = self.bindings['images'].shape
- assert im.shape == s, f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}"
- self.binding_addrs['images'] = int(im.data_ptr())
- self.context.execute_v2(list(self.binding_addrs.values()))
- y = [self.bindings[x].data for x in sorted(self.output_names)]
- elif self.coreml: # CoreML
- im = im.cpu().numpy()
- im = Image.fromarray((im[0] * 255).astype('uint8'))
- # im = im.resize((192, 320), Image.ANTIALIAS)
- y = self.model.predict({'image': im}) # coordinates are xywh normalized
- if 'confidence' in y:
- box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels
-                conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float32)  # np.float was removed in NumPy 1.24
- y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1)
- else:
- y = list(reversed(y.values())) # reversed for segmentation models (pred, proto)
- elif self.paddle: # PaddlePaddle
- im = im.cpu().numpy().astype(np.float32)
- self.input_handle.copy_from_cpu(im)
- self.predictor.run()
- y = [self.predictor.get_output_handle(x).copy_to_cpu() for x in self.output_names]
- elif self.triton: # NVIDIA Triton Inference Server
- y = self.model(im)
- else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU)
- im = im.cpu().numpy()
- if self.saved_model: # SavedModel
- y = self.model(im, training=False) if self.keras else self.model(im)
- elif self.pb: # GraphDef
- y = self.frozen_func(x=self.tf.constant(im))
- else: # Lite or Edge TPU
- input = self.input_details[0]
- int8 = input['dtype'] == np.uint8 # is TFLite quantized uint8 model
- if int8:
- scale, zero_point = input['quantization']
- im = (im / scale + zero_point).astype(np.uint8) # de-scale
- self.interpreter.set_tensor(input['index'], im)
- self.interpreter.invoke()
- y = []
- for output in self.output_details:
- x = self.interpreter.get_tensor(output['index'])
- if int8:
- scale, zero_point = output['quantization']
- x = (x.astype(np.float32) - zero_point) * scale # re-scale
- y.append(x)
- y = [x if isinstance(x, np.ndarray) else x.numpy() for x in y]
- y[0][..., :4] *= [w, h, w, h] # xywh normalized to pixels
-
- if isinstance(y, (list, tuple)):
- return self.from_numpy(y[0]) if len(y) == 1 else [self.from_numpy(x) for x in y]
- else:
- return self.from_numpy(y)
-
- def from_numpy(self, x):
- return torch.from_numpy(x).to(self.device) if isinstance(x, np.ndarray) else x
-
- def warmup(self, imgsz=(1, 3, 640, 640)):
- # Warmup model by running inference once
- warmup_types = self.pt, self.jit, self.onnx, self.engine, self.saved_model, self.pb, self.triton
- if any(warmup_types) and (self.device.type != 'cpu' or self.triton):
- im = torch.empty(*imgsz, dtype=torch.half if self.fp16 else torch.float, device=self.device) # input
- for _ in range(2 if self.jit else 1): #
- self.forward(im) # warmup
-
- @staticmethod
- def _model_type(p='path/to/model.pt'):
- # Return model type from model path, i.e. path='path/to/model.onnx' -> type=onnx
- # types = [pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle]
- from export import export_formats
- from utils.downloads import is_url
- sf = list(export_formats().Suffix) # export suffixes
- if not is_url(p, check=False):
- check_suffix(p, sf) # checks
- url = urlparse(p) # if url may be Triton inference server
- types = [s in Path(p).name for s in sf]
- types[8] &= not types[9] # tflite &= not edgetpu
- triton = not any(types) and all([any(s in url.scheme for s in ['http', 'grpc']), url.netloc])
- return types + [triton]
-
- @staticmethod
- def _load_metadata(f=Path('path/to/meta.yaml')):
- # Load metadata from meta.yaml if it exists
- if f.exists():
- d = yaml_load(f)
- return d['stride'], d['names'] # assign stride, names
- return None, None
-
-
-class AutoShape(nn.Module):
- # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
- conf = 0.25 # NMS confidence threshold
- iou = 0.45 # NMS IoU threshold
- agnostic = False # NMS class-agnostic
- multi_label = False # NMS multiple labels per box
- classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs
- max_det = 1000 # maximum number of detections per image
- amp = False # Automatic Mixed Precision (AMP) inference
-
- def __init__(self, model, verbose=True):
- super().__init__()
- if verbose:
- LOGGER.info('Adding AutoShape... ')
- copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=()) # copy attributes
- self.dmb = isinstance(model, DetectMultiBackend) # DetectMultiBackend() instance
- self.pt = not self.dmb or model.pt # PyTorch model
- self.model = model.eval()
- if self.pt:
- m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect()
- m.inplace = False # Detect.inplace=False for safe multithread inference
- m.export = True # do not output loss values
-
- def _apply(self, fn):
- # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
- self = super()._apply(fn)
- if self.pt:
- m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect()
- m.stride = fn(m.stride)
- m.grid = list(map(fn, m.grid))
- if isinstance(m.anchor_grid, list):
- m.anchor_grid = list(map(fn, m.anchor_grid))
- return self
-
- @smart_inference_mode()
- def forward(self, ims, size=640, augment=False, profile=False):
- # Inference from various sources. For size(height=640, width=1280), RGB images example inputs are:
- # file: ims = 'data/images/zidane.jpg' # str or PosixPath
- # URI: = 'https://ultralytics.com/images/zidane.jpg'
- # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3)
- # PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3)
- # numpy: = np.zeros((640,1280,3)) # HWC
- # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values)
- # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images
-
- dt = (Profile(), Profile(), Profile())
- with dt[0]:
- if isinstance(size, int): # expand
- size = (size, size)
- p = next(self.model.parameters()) if self.pt else torch.empty(1, device=self.model.device) # param
- autocast = self.amp and (p.device.type != 'cpu') # Automatic Mixed Precision (AMP) inference
- if isinstance(ims, torch.Tensor): # torch
- with amp.autocast(autocast):
- return self.model(ims.to(p.device).type_as(p), augment=augment) # inference
-
- # Pre-process
- n, ims = (len(ims), list(ims)) if isinstance(ims, (list, tuple)) else (1, [ims]) # number, list of images
- shape0, shape1, files = [], [], [] # image and inference shapes, filenames
- for i, im in enumerate(ims):
- f = f'image{i}' # filename
- if isinstance(im, (str, Path)): # filename or uri
- im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im
- im = np.asarray(exif_transpose(im))
- elif isinstance(im, Image.Image): # PIL Image
- im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f
- files.append(Path(f).with_suffix('.jpg').name)
- if im.shape[0] < 5: # image in CHW
- im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1)
- im = im[..., :3] if im.ndim == 3 else cv2.cvtColor(im, cv2.COLOR_GRAY2BGR) # enforce 3ch input
- s = im.shape[:2] # HWC
- shape0.append(s) # image shape
- g = max(size) / max(s) # gain
- shape1.append([int(y * g) for y in s])
- ims[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update
- shape1 = [make_divisible(x, self.stride) for x in np.array(shape1).max(0)] # inf shape
- x = [letterbox(im, shape1, auto=False)[0] for im in ims] # pad
- x = np.ascontiguousarray(np.array(x).transpose((0, 3, 1, 2))) # stack and BHWC to BCHW
- x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32
-
- with amp.autocast(autocast):
- # Inference
- with dt[1]:
- y = self.model(x, augment=augment) # forward
-
- # Post-process
- with dt[2]:
- y = non_max_suppression(y if self.dmb else y[0],
- self.conf,
- self.iou,
- self.classes,
- self.agnostic,
- self.multi_label,
- max_det=self.max_det) # NMS
- for i in range(n):
- scale_boxes(shape1, y[i][:, :4], shape0[i])
-
- return Detections(ims, y, files, dt, self.names, x.shape)
-
-
-class Detections:
- # YOLOv5 detections class for inference results
- def __init__(self, ims, pred, files, times=(0, 0, 0), names=None, shape=None):
- super().__init__()
- d = pred[0].device # device
- gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in ims] # normalizations
- self.ims = ims # list of images as numpy arrays
- self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls)
- self.names = names # class names
- self.files = files # image filenames
- self.times = times # profiling times
- self.xyxy = pred # xyxy pixels
- self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels
- self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized
- self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized
- self.n = len(self.pred) # number of images (batch size)
- self.t = tuple(x.t / self.n * 1E3 for x in times) # timestamps (ms)
- self.s = tuple(shape) # inference BCHW shape
-
- def _run(self, pprint=False, show=False, save=False, crop=False, render=False, labels=True, save_dir=Path('')):
- s, crops = '', []
- for i, (im, pred) in enumerate(zip(self.ims, self.pred)):
- s += f'\nimage {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # string
- if pred.shape[0]:
- for c in pred[:, -1].unique():
- n = (pred[:, -1] == c).sum() # detections per class
- s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string
- s = s.rstrip(', ')
- if show or save or render or crop:
- annotator = Annotator(im, example=str(self.names))
- for *box, conf, cls in reversed(pred): # xyxy, confidence, class
- label = f'{self.names[int(cls)]} {conf:.2f}'
- if crop:
- file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None
- crops.append({
- 'box': box,
- 'conf': conf,
- 'cls': cls,
- 'label': label,
- 'im': save_one_box(box, im, file=file, save=save)})
- else: # all others
- annotator.box_label(box, label if labels else '', color=colors(cls))
- im = annotator.im
- else:
- s += '(no detections)'
-
- im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im # from np
- if show:
- if is_jupyter():
- from IPython.display import display
- display(im)
- else:
- im.show(self.files[i])
- if save:
- f = self.files[i]
- im.save(save_dir / f) # save
- if i == self.n - 1:
- LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}")
- if render:
- self.ims[i] = np.asarray(im)
- if pprint:
- s = s.lstrip('\n')
- return f'{s}\nSpeed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {self.s}' % self.t
- if crop:
- if save:
- LOGGER.info(f'Saved results to {save_dir}\n')
- return crops
-
- @TryExcept('Showing images is not supported in this environment')
- def show(self, labels=True):
- self._run(show=True, labels=labels) # show results
-
- def save(self, labels=True, save_dir='runs/detect/exp', exist_ok=False):
- save_dir = increment_path(save_dir, exist_ok, mkdir=True) # increment save_dir
- self._run(save=True, labels=labels, save_dir=save_dir) # save results
-
- def crop(self, save=True, save_dir='runs/detect/exp', exist_ok=False):
- save_dir = increment_path(save_dir, exist_ok, mkdir=True) if save else None
- return self._run(crop=True, save=save, save_dir=save_dir) # crop results
-
- def render(self, labels=True):
- self._run(render=True, labels=labels) # render results
- return self.ims
-
- def pandas(self):
- # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0])
- new = copy(self) # return copy
- ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns
- cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns
- for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]):
- a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update
- setattr(new, k, [pd.DataFrame(x, columns=c) for x in a])
- return new
-
- def tolist(self):
- # return a list of Detections objects, i.e. 'for result in results.tolist():'
- r = range(self.n) # iterable
- x = [Detections([self.ims[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r]
- # for d in x:
- # for k in ['ims', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
- # setattr(d, k, getattr(d, k)[0]) # pop out of list
- return x
-
- def print(self):
- LOGGER.info(self.__str__())
-
- def __len__(self): # override len(results)
- return self.n
-
- def __str__(self): # override print(results)
- return self._run(pprint=True) # print results
-
- def __repr__(self):
- return f'YOLOv5 {self.__class__} instance\n' + self.__str__()
-
-
-class Proto(nn.Module):
- # YOLOv5 mask Proto module for segmentation models
- def __init__(self, c1, c_=256, c2=32): # ch_in, number of protos, number of masks
- super().__init__()
- self.cv1 = Conv(c1, c_, k=3)
- self.upsample = nn.Upsample(scale_factor=2, mode='nearest')
- self.cv2 = Conv(c_, c_, k=3)
- self.cv3 = Conv(c_, c2)
-
- def forward(self, x):
- return self.cv3(self.cv2(self.upsample(self.cv1(x))))
-
-
-class Classify(nn.Module):
- # YOLOv5 classification head, i.e. x(b,c1,20,20) to x(b,c2)
- def __init__(self,
- c1,
- c2,
- k=1,
- s=1,
- p=None,
- g=1,
- dropout_p=0.0): # ch_in, ch_out, kernel, stride, padding, groups, dropout probability
- super().__init__()
- c_ = 1280 # efficientnet_b0 size
- self.conv = Conv(c1, c_, k, s, autopad(k, p), g)
- self.pool = nn.AdaptiveAvgPool2d(1) # to x(b,c_,1,1)
- self.drop = nn.Dropout(p=dropout_p, inplace=True)
- self.linear = nn.Linear(c_, c2) # to x(b,c2)
-
- def forward(self, x):
- if isinstance(x, list):
- x = torch.cat(x, 1)
- return self.linear(self.drop(self.pool(self.conv(x)).flatten(1)))
diff --git a/spaces/Yiqin/ChatVID/model/Captioner.py b/spaces/Yiqin/ChatVID/model/Captioner.py
deleted file mode 100644
index e443612b9eaf64ffbd0354722c93d94d609c2b2b..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/Captioner.py
+++ /dev/null
@@ -1,72 +0,0 @@
-from mmaction.datasets.transforms import (DecordInit, SampleFrames, Resize,
- FormatShape, DecordDecode)
-from model.audio import SpeechRecognizer
-from model.vision import DenseCaptioner, ImageCaptioner
-
-
-class Captioner:
- """ Captioner class for video captioning
- """
-
- def __init__(self, config):
- """ Initialize the captioner
- Args:
- config: configuration file
- """
- self.config = config
- self.image_captioner = ImageCaptioner(device=config['device'])
- self.dense_captioner = DenseCaptioner(device=config['device'])
- self.speech_recognizer = SpeechRecognizer(device=config['device'])
- # if self.config['vid2seq']['enable']:
- # self.vid2seq_captioner = Vid2SeqCaptioner(config=config['vid2seq'])
-
- self.src_dir = ''
-
- def debug_vid2seq(self, video_path, num_frames=8):
- return self.vid2seq_captioner(video_path=video_path)
-
- def caption_video(self, video_path, num_frames=8):
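-        """Sample num_frames frames, run image/dense captioning and speech recognition, and return a timestamped caption summary."""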
- print("Watching video ...")
-
- video_info = {'filename': video_path, 'start_index': 0}
-
- video_processors = [
- DecordInit(),
- SampleFrames(clip_len=1, frame_interval=1, num_clips=num_frames),
- DecordDecode(),
- Resize(scale=(-1, 720)),
- FormatShape(input_format='NCHW'),
- ]
- for processor in video_processors:
- video_info = processor.transform(video_info)
-
- timestamp_list = [
- round(i / video_info['avg_fps'], 1)
- for i in video_info['frame_inds']
- ]
-
- image_captions = self.image_captioner(imgs=video_info['imgs'])
- dense_captions = self.dense_captioner(imgs=video_info['imgs'])
- # if self.config['vid2seq']['enable']:
- # vid2seq_captions = self.vid2seq_captioner(video_path=video_path)
- # else:
- vid2seq_captions = []
- try:
- speech = self.speech_recognizer(video_path)
- except RuntimeError:
- speech = ""
-
- overall_captions = ""
- for i in range(num_frames):
- overall_captions += "[" + str(timestamp_list[i]) + "s]: "
- overall_captions += "You see " + image_captions[i]
- overall_captions += "You find " + dense_captions[i] + "\n"
-
- if speech != "":
- overall_captions += "You hear \"" + speech + "\"\n"
-
- for i in range(len(vid2seq_captions)):
- overall_captions += "You notice " + vid2seq_captions[i] + "\n"
- print("Captions generated")
-
- return overall_captions
diff --git a/spaces/abdvl/datahub_qa_bot/docs/roadmap.md b/spaces/abdvl/datahub_qa_bot/docs/roadmap.md
deleted file mode 100644
index f844b29db974cfdfec415e408dcc7274db745a1d..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/roadmap.md
+++ /dev/null
@@ -1,138 +0,0 @@
-# DataHub Roadmap
-
-## [The DataHub Roadmap has a new home!](https://feature-requests.datahubproject.io/roadmap)
-
-Please refer to the [new DataHub Roadmap](https://feature-requests.datahubproject.io/roadmap) for the most up-to-date details of what we are working on!
-
-_If you have suggestions about what we should consider in future cycles, feel free to submit a [feature request](https://feature-requests.datahubproject.io/) and/or upvote existing feature requests so we can get a sense of the level of importance!_
-
-
-## Historical Roadmap
-
-_The following represents the progress made on historical roadmap items as of January 2022. For incomplete roadmap items, we have created Feature Requests to gauge current community interest and impact for consideration in future cycles. If you see something that is still of high interest to you, please upvote via the Feature Request portal link and subscribe to the post for updates as we progress through the work._
-
-### Q4 2021 [Oct - Dec 2021]
-
-#### Data Lake Ecosystem Integration
-- [ ] Spark Delta Lake - [View in Feature Request Portal](https://feature-requests.datahubproject.io/b/feedback/p/spark-delta-lake)
-- [ ] Apache Iceberg - [Included in Q1 2022 Roadmap - Community-Driven Metadata Ingestion Sources](https://feature-requests.datahubproject.io/roadmap/540)
-- [ ] Apache Hudi - [View in Feature Request Portal](https://feature-requests.datahubproject.io/b/feedback/p/apachi-hudi-ingestion-support)
-
-#### Metadata Trigger Framework
-[View in Feature Request Portal](https://feature-requests.datahubproject.io/b/User-Experience/p/ability-to-subscribe-to-an-entity-to-receive-notifications-when-something-changes)
-- [ ] Stateful sensors for Airflow
-- [ ] Receive events that you can use to send alerts and emails
-- [ ] Slack integration
-
-#### ML Ecosystem
-- [x] Features (Feast)
-- [x] Models (Sagemaker)
-- [ ] Notebooks - [View in Feature Request Portal](https://feature-requests.datahubproject.io/admin/p/jupyter-integration)
-
-#### Metrics Ecosystem
-[View in Feature Request Portal](https://feature-requests.datahubproject.io/b/User-Experience/p/ability-to-define-metrics-and-attach-them-to-entities)
-- [ ] Measures, Dimensions
-- [ ] Relationships to Datasets and Dashboards
-
-#### Data Mesh oriented features
-- [ ] Data Product modeling
-- [ ] Analytics to enable Data Meshification
-
-#### Collaboration
-[View in Feature Request Portal](https://feature-requests.datahubproject.io/b/User-Experience/p/collaboration-within-datahub-ui)
-- [ ] Conversations on the platform
-- [ ] Knowledge Posts (Gdocs, Gslides, Gsheets)
-
-### Q3 2021 [Jul - Sept 2021]
-
-#### Data Profiling and Dataset Previews
-Use Case: See sample data for a dataset and statistics on the shape of the data (column distribution, nullability etc.)
-- [x] Support for data profiling and preview extraction through ingestion pipeline (column samples, not rows)
-
-#### Data Quality
-Included in Q1 2022 Roadmap - [Display Data Quality Checks in the UI](https://feature-requests.datahubproject.io/roadmap/544)
-- [x] Support for data profiling and time-series views
-- [ ] Support for data quality visualization
-- [ ] Support for data health score based on data quality results and pipeline observability
-- [ ] Integration with systems like Great Expectations, AWS deequ, dbt test etc.
-
-#### Fine-grained Access Control for Metadata
-- [x] Support for role-based access control to edit metadata
-- Scope: Access control on entity-level, aspect-level and within aspects as well.
-
-#### Column-level lineage
-Included in Q1 2022 Roadmap - [Column Level Lineage](https://feature-requests.datahubproject.io/roadmap/541)
-- [ ] Metadata Model
-- [ ] SQL Parsing
-
-#### Operational Metadata
-- [ ] Partitioned Datasets - [View in Feature Request Portal](https://feature-requests.datahubproject.io/b/User-Experience/p/advanced-dataset-schema-properties-partition-support)
-- [x] Support for operational signals like completeness, freshness etc.
-
-### Q2 2021 (Apr - Jun 2021)
-
-#### Cloud Deployment
-- [X] Production-grade Helm charts for Kubernetes-based deployment
-- [ ] How-to guides for deploying DataHub to all the major cloud providers
- - [x] AWS
- - [ ] Azure
- - [x] GCP
-
-#### Product Analytics for DataHub
-- [x] Helping you understand how your users are interacting with DataHub
-- [x] Integration with common systems like Google Analytics etc.
-
-#### Usage-Based Insights
-- [x] Display frequently used datasets, etc.
-- [ ] Improved search relevance through usage data
-
-#### Role-based Access Control
-- Support for fine-grained access control for metadata operations (read, write, modify)
-- Scope: Access control on entity-level, aspect-level and within aspects as well.
-- This provides the foundation for Tag Governance, Dataset Preview access control etc.
-
-#### No-code Metadata Model Additions
-Use Case: Developers should be able to add new entities and aspects to the metadata model easily
-- [x] No need to write any code (in Java or Python) to store, retrieve, search and query metadata
-- [ ] No need to write any code (in GraphQL or UI) to visualize metadata
-
-### Q1 2021 [Jan - Mar 2021]
-
-#### React UI
-- [x] Build a new UI based on React
-- [x] Deprecate open-source support for Ember UI
-
-#### Python-based Metadata Integration
-- [x] Build a Python-based Ingestion Framework
-- [x] Support common people repositories (LDAP)
-- [x] Support common data repositories (Kafka, SQL databases, AWS Glue, Hive)
-- [x] Support common transformation sources (dbt, Looker)
-- [x] Support for push-based metadata emission from Python (e.g. Airflow DAGs)
-
-#### Dashboards and Charts
-- [x] Support for dashboard and chart entity page
-- [x] Support browse, search and discovery
-
-#### SSO for Authentication
-- [x] Support for Authentication (login) using OIDC providers (Okta, Google etc)
-
-#### Tags
-Use-Case: Support for free-form global tags for social collaboration and aiding discovery
-- [x] Edit / Create new tags
-- [x] Attach tags to relevant constructs (e.g. datasets, dashboards, users, schema\_fields)
-- [x] Search using tags (e.g. find all datasets with this tag, find all entities with this tag)
-
-#### Business Glossary
-- [x] Support for business glossary model (definition + storage)
-- [ ] Browse taxonomy
-- [x] UI support for attaching business terms to entities and fields
-
-#### Jobs, Flows / Pipelines
-Use case: Search and Discover your Pipelines (e.g. Airflow DAGs) and understand lineage with datasets
-- [x] Support for Metadata Models + Backend Implementation
-- [x] Metadata Integrations with systems like Airflow.
-
-#### Data Profiling and Dataset Previews
-Use Case: See sample data for a dataset and statistics on the shape of the data (column distribution, nullability etc.)
-- [ ] Support for data profiling and preview extraction through ingestion pipeline
-- Out of scope for Q1: Access control of data profiles and sample data
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/base.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/base.py
deleted file mode 100644
index 89134f3696ead442a5ff57184e9d256fdf7d0ba4..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/base.py
+++ /dev/null
@@ -1,355 +0,0 @@
-from abc import ABCMeta, abstractmethod
-from collections import OrderedDict
-
-import mmcv
-import numpy as np
-import torch
-import torch.distributed as dist
-import torch.nn as nn
-from mmcv.runner import auto_fp16
-from mmcv.utils import print_log
-
-from mmdet.core.visualization import imshow_det_bboxes
-from mmdet.utils import get_root_logger
-
-
-class BaseDetector(nn.Module, metaclass=ABCMeta):
- """Base class for detectors."""
-
- def __init__(self):
- super(BaseDetector, self).__init__()
- self.fp16_enabled = False
-
- @property
- def with_neck(self):
- """bool: whether the detector has a neck"""
- return hasattr(self, 'neck') and self.neck is not None
-
- # TODO: these properties need to be carefully handled
- # for both single stage & two stage detectors
- @property
- def with_shared_head(self):
- """bool: whether the detector has a shared head in the RoI Head"""
- return hasattr(self, 'roi_head') and self.roi_head.with_shared_head
-
- @property
- def with_bbox(self):
- """bool: whether the detector has a bbox head"""
- return ((hasattr(self, 'roi_head') and self.roi_head.with_bbox)
- or (hasattr(self, 'bbox_head') and self.bbox_head is not None))
-
- @property
- def with_mask(self):
- """bool: whether the detector has a mask head"""
- return ((hasattr(self, 'roi_head') and self.roi_head.with_mask)
- or (hasattr(self, 'mask_head') and self.mask_head is not None))
-
- @abstractmethod
- def extract_feat(self, imgs):
- """Extract features from images."""
- pass
-
- def extract_feats(self, imgs):
- """Extract features from multiple images.
-
- Args:
- imgs (list[torch.Tensor]): A list of images. The images are
- augmented from the same image but in different ways.
-
- Returns:
- list[torch.Tensor]: Features of different images
- """
- assert isinstance(imgs, list)
- return [self.extract_feat(img) for img in imgs]
-
- def forward_train(self, imgs, img_metas, **kwargs):
- """
- Args:
- img (list[Tensor]): List of tensors of shape (1, C, H, W).
- Typically these should be mean centered and std scaled.
- img_metas (list[dict]): List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys, see
- :class:`mmdet.datasets.pipelines.Collect`.
- kwargs (keyword arguments): Specific to concrete implementation.
- """
- # NOTE the batched image size information may be useful, e.g.
- # in DETR, this is needed for the construction of masks, which is
- # then used for the transformer_head.
- batch_input_shape = tuple(imgs[0].size()[-2:])
- for img_meta in img_metas:
- img_meta['batch_input_shape'] = batch_input_shape
-
- async def async_simple_test(self, img, img_metas, **kwargs):
- raise NotImplementedError
-
- @abstractmethod
- def simple_test(self, img, img_metas, **kwargs):
- pass
-
- @abstractmethod
- def aug_test(self, imgs, img_metas, **kwargs):
- """Test function with test time augmentation."""
- pass
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in detector.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if pretrained is not None:
- logger = get_root_logger()
- print_log(f'load model from: {pretrained}', logger=logger)
-
- async def aforward_test(self, *, img, img_metas, **kwargs):
- for var, name in [(img, 'img'), (img_metas, 'img_metas')]:
- if not isinstance(var, list):
- raise TypeError(f'{name} must be a list, but got {type(var)}')
-
- num_augs = len(img)
- if num_augs != len(img_metas):
- raise ValueError(f'num of augmentations ({len(img)}) '
- f'!= num of image metas ({len(img_metas)})')
- # TODO: remove the restriction of samples_per_gpu == 1 when prepared
- samples_per_gpu = img[0].size(0)
- assert samples_per_gpu == 1
-
- if num_augs == 1:
- return await self.async_simple_test(img[0], img_metas[0], **kwargs)
- else:
- raise NotImplementedError
-
- def forward_test(self, imgs, img_metas, **kwargs):
- """
- Args:
- imgs (List[Tensor]): the outer list indicates test-time
- augmentations and inner Tensor should have a shape NxCxHxW,
- which contains all images in the batch.
- img_metas (List[List[dict]]): the outer list indicates test-time
- augs (multiscale, flip, etc.) and the inner list indicates
- images in a batch.
- """
- for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]:
- if not isinstance(var, list):
- raise TypeError(f'{name} must be a list, but got {type(var)}')
-
- num_augs = len(imgs)
- if num_augs != len(img_metas):
- raise ValueError(f'num of augmentations ({len(imgs)}) '
- f'!= num of image meta ({len(img_metas)})')
-
- # NOTE the batched image size information may be useful, e.g.
- # in DETR, this is needed for the construction of masks, which is
- # then used for the transformer_head.
- for img, img_meta in zip(imgs, img_metas):
- batch_size = len(img_meta)
- for img_id in range(batch_size):
- img_meta[img_id]['batch_input_shape'] = tuple(img.size()[-2:])
-
- if num_augs == 1:
- # proposals (List[List[Tensor]]): the outer list indicates
- # test-time augs (multiscale, flip, etc.) and the inner list
- # indicates images in a batch.
- # The Tensor should have a shape Px4, where P is the number of
- # proposals.
- if 'proposals' in kwargs:
- kwargs['proposals'] = kwargs['proposals'][0]
- return self.simple_test(imgs[0], img_metas[0], **kwargs)
- else:
- assert imgs[0].size(0) == 1, 'aug test does not support ' \
- 'inference with batch size ' \
- f'{imgs[0].size(0)}'
- # TODO: support test augmentation for predefined proposals
- assert 'proposals' not in kwargs
- return self.aug_test(imgs, img_metas, **kwargs)
-
- @auto_fp16(apply_to=('img', ))
- def forward(self, img, img_metas, return_loss=True, **kwargs):
- """Calls either :func:`forward_train` or :func:`forward_test` depending
- on whether ``return_loss`` is ``True``.
-
- Note this setting will change the expected inputs. When
- ``return_loss=True``, img and img_meta are single-nested (i.e. Tensor
-        and List[dict]), and when ``return_loss=False``, img and img_meta
- should be double nested (i.e. List[Tensor], List[List[dict]]), with
- the outer list indicating test time augmentations.
- """
- if return_loss:
- return self.forward_train(img, img_metas, **kwargs)
- else:
- return self.forward_test(img, img_metas, **kwargs)
-
- def _parse_losses(self, losses):
- """Parse the raw outputs (losses) of the network.
-
- Args:
-            losses (dict): Raw output of the network, which usually contains
-                losses and other necessary information.
-
- Returns:
- tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor \
- which may be a weighted sum of all losses, log_vars contains \
- all the variables to be sent to the logger.
- """
- log_vars = OrderedDict()
- for loss_name, loss_value in losses.items():
- if isinstance(loss_value, torch.Tensor):
- log_vars[loss_name] = loss_value.mean()
- elif isinstance(loss_value, list):
- log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value)
- else:
- raise TypeError(
- f'{loss_name} is not a tensor or list of tensors')
-
- loss = sum(_value for _key, _value in log_vars.items()
- if 'loss' in _key)
-
- log_vars['loss'] = loss
- for loss_name, loss_value in log_vars.items():
- # reduce loss when distributed training
- if dist.is_available() and dist.is_initialized():
- loss_value = loss_value.data.clone()
- dist.all_reduce(loss_value.div_(dist.get_world_size()))
- log_vars[loss_name] = loss_value.item()
-
- return loss, log_vars
-
- def train_step(self, data, optimizer):
- """The iteration step during training.
-
- This method defines an iteration step during training, except for the
- back propagation and optimizer updating, which are done in an optimizer
- hook. Note that in some complicated cases or models, the whole process
- including back propagation and optimizer updating is also defined in
- this method, such as GAN.
-
- Args:
- data (dict): The output of dataloader.
- optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of
- runner is passed to ``train_step()``. This argument is unused
- and reserved.
-
- Returns:
- dict: It should contain at least 3 keys: ``loss``, ``log_vars``, \
- ``num_samples``.
-
- - ``loss`` is a tensor for back propagation, which can be a \
- weighted sum of multiple losses.
- - ``log_vars`` contains all the variables to be sent to the
- logger.
- - ``num_samples`` indicates the batch size (when the model is \
- DDP, it means the batch size on each GPU), which is used for \
- averaging the logs.
- """
- losses = self(**data)
- loss, log_vars = self._parse_losses(losses)
-
- outputs = dict(
- loss=loss, log_vars=log_vars, num_samples=len(data['img_metas']))
-
- return outputs
-
- def val_step(self, data, optimizer):
- """The iteration step during validation.
-
- This method shares the same signature as :func:`train_step`, but used
- during val epochs. Note that the evaluation after training epochs is
- not implemented with this method, but an evaluation hook.
- """
- losses = self(**data)
- loss, log_vars = self._parse_losses(losses)
-
- outputs = dict(
- loss=loss, log_vars=log_vars, num_samples=len(data['img_metas']))
-
- return outputs
-
- def show_result(self,
- img,
- result,
- score_thr=0.3,
- bbox_color=(72, 101, 241),
- text_color=(72, 101, 241),
- mask_color=None,
- thickness=2,
- font_size=13,
- win_name='',
- show=False,
- wait_time=0,
- out_file=None):
- """Draw `result` over `img`.
-
- Args:
- img (str or Tensor): The image to be displayed.
- result (Tensor or tuple): The results to draw over `img`
- bbox_result or (bbox_result, segm_result).
- score_thr (float, optional): Minimum score of bboxes to be shown.
- Default: 0.3.
- bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines.
- The tuple of color should be in BGR order. Default: 'green'
- text_color (str or tuple(int) or :obj:`Color`):Color of texts.
- The tuple of color should be in BGR order. Default: 'green'
- mask_color (None or str or tuple(int) or :obj:`Color`):
- Color of masks. The tuple of color should be in BGR order.
- Default: None
- thickness (int): Thickness of lines. Default: 2
- font_size (int): Font size of texts. Default: 13
- win_name (str): The window name. Default: ''
- wait_time (float): Value of waitKey param.
- Default: 0.
- show (bool): Whether to show the image.
- Default: False.
- out_file (str or None): The filename to write the image.
- Default: None.
-
- Returns:
- img (Tensor): Only if not `show` or `out_file`
- """
- img = mmcv.imread(img)
- img = img.copy()
- if isinstance(result, tuple):
- bbox_result, segm_result = result
- if isinstance(segm_result, tuple):
- segm_result = segm_result[0] # ms rcnn
- else:
- bbox_result, segm_result = result, None
- bboxes = np.vstack(bbox_result)
- labels = [
- np.full(bbox.shape[0], i, dtype=np.int32)
- for i, bbox in enumerate(bbox_result)
- ]
- labels = np.concatenate(labels)
- # draw segmentation masks
- segms = None
- if segm_result is not None and len(labels) > 0: # non empty
- segms = mmcv.concat_list(segm_result)
- if isinstance(segms[0], torch.Tensor):
- segms = torch.stack(segms, dim=0).detach().cpu().numpy()
- else:
- segms = np.stack(segms, axis=0)
- # if out_file specified, do not show image in window
- if out_file is not None:
- show = False
- # draw bounding boxes
- img = imshow_det_bboxes(
- img,
- bboxes,
- labels,
- segms,
- class_names=self.CLASSES,
- score_thr=score_thr,
- bbox_color=bbox_color,
- text_color=text_color,
- mask_color=mask_color,
- thickness=thickness,
- font_size=font_size,
- win_name=win_name,
- show=show,
- wait_time=wait_time,
- out_file=out_file)
-
- if not (show or out_file):
- return img
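
The reduction convention in `_parse_losses` above is worth keeping in mind when adding custom heads: only dictionary entries whose key contains `loss` are back-propagated, everything else is treated as a metric. Below is a minimal standalone sketch of that behaviour (my own simplification, without the distributed all-reduce step):

```python
from collections import OrderedDict

import torch


def parse_losses(losses):
    """Simplified stand-in for BaseDetector._parse_losses (no dist all-reduce)."""
    log_vars = OrderedDict()
    for name, value in losses.items():
        if isinstance(value, torch.Tensor):
            log_vars[name] = value.mean()
        elif isinstance(value, list):
            log_vars[name] = sum(v.mean() for v in value)
        else:
            raise TypeError(f"{name} is not a tensor or list of tensors")
    # Only keys containing "loss" contribute to the optimised total.
    total = sum(v for k, v in log_vars.items() if "loss" in k)
    log_vars["loss"] = total
    return total, log_vars


total, log_vars = parse_losses({
    "loss_cls": torch.tensor([0.7, 0.3]),  # mean -> 0.5
    "loss_bbox": torch.tensor(0.5),
    "acc": torch.tensor(91.0),             # logged only, not summed
})
print(total.item())  # 1.0
```
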
diff --git a/spaces/adamcasson/transformer-flops-calculator/app.py b/spaces/adamcasson/transformer-flops-calculator/app.py
deleted file mode 100644
index 1d5409e4805e828b3dd13483adb42c7394a38346..0000000000000000000000000000000000000000
--- a/spaces/adamcasson/transformer-flops-calculator/app.py
+++ /dev/null
@@ -1,162 +0,0 @@
-from typing import Tuple
-
-import gradio as gr
-
-
-def deepmind_flops(
- n_layer: int,
- d_model: int,
- d_ff: int,
- d_attn: int,
- n_ctx: int,
- n_vocab: int,
- n_heads: int,
-) -> Tuple[Tuple[int, ...], Tuple[float, ...]]:
- embeddings = 2 * n_ctx * n_vocab * d_model
- attn_qkv = 2 * n_ctx * 3 * d_model * (d_attn * n_heads)
- attn_logits = 2 * n_ctx * n_ctx * (d_attn * n_heads)
- attn_softmax = 3 * n_heads * n_ctx * n_ctx
- attn_reduce = 2 * n_ctx * n_ctx * (d_attn * n_heads)
- attn_project = 2 * n_ctx * (d_attn * n_heads) * d_model
- ff = 2 * n_ctx * (d_model * d_ff + d_model * d_ff)
- logits = 2 * n_ctx * d_model * n_vocab
-
- params = (
- embeddings / n_ctx / 2,
- (n_layer * (attn_qkv + attn_project + ff)) / n_ctx / 2,
- logits / n_ctx / 2,
- )
-
- return (
- embeddings,
- attn_qkv * n_layer,
- attn_logits * n_layer,
- attn_softmax * n_layer,
- attn_reduce * n_layer,
- attn_project * n_layer,
- ff * n_layer,
- logits,
- ), params
-
-
-def calculator(
- n_layer: int,
- d_model: int,
- n_heads: int,
- n_vocab: int,
- ff_ratio: int,
- n_ctx: int,
- n_tokens: int,
- incl_embed: bool,
- fwd_only: bool,
-) -> Tuple[int, int, int, int]:
- d_attn = d_model // n_heads
- if d_model % n_heads != 0:
- raise gr.Error("d_model must be divisible by n_heads")
- d_ff = d_model * ff_ratio
-
- flops_terms, params = deepmind_flops(
- n_layer, d_model, d_ff, d_attn, n_ctx, n_vocab, n_heads
- )
-
- if incl_embed:
- flops_per_sequence = sum(flops_terms)
- params = sum(params)
- else:
- flops_per_sequence = sum(flops_terms[1:])
- params = sum(params[1:])
-
- flops_per_token = flops_per_sequence / n_ctx
-
- n_tokens_flops = flops_per_token * n_tokens
-
- if not fwd_only:
- flops_per_sequence *= 3
- flops_per_token *= 3
- n_tokens_flops *= 3
-
- return params, flops_per_sequence, flops_per_token, n_tokens_flops
-
-
-with gr.Blocks() as iface:
- gr.Markdown(
- "Calculate how many FLOPs a Transformer language model uses with the method described in [DeepMind's Chinchilla scaling law paper](https://arxiv.org/abs/2203.15556) (see Appendix F)."
- )
- with gr.Row():
- with gr.Column():
- gr.Markdown("#### Architecture details")
- n_layer = gr.Number(label="Number of layers (n_layer)")
- d_model = gr.Number(label="Model dimensions (d_model)")
- n_heads = gr.Number(label="Number of attention heads per layer (n_heads)")
- n_vocab = gr.Number(label="Vocabulary size (n_vocab)")
- ff_ratio = gr.Number(value=4, label="Feedforward ratio")
- gr.Markdown("#### Data details")
- n_ctx = gr.Number(label="Sequence length (n_ctx)")
- n_tokens = gr.Number(
- value=0,
- label="Total number of training tokens (n_tokens) (optional)",
- )
- gr.Markdown("#### Settings")
- incl_embed = gr.Checkbox(value=True, label="Include embeddings")
- fwd_only = gr.Checkbox(
- value=False, label="Calculate FLOPs for only forward pass"
- )
-
- btn = gr.Button(value="Enter", variant="primary")
-
- with gr.Column():
- gr.Markdown("#### Output")
- params = gr.Number(label="Model parameters")
- flops_per_sequence = gr.Number(label="FLOPs per sequence")
- flops_per_token = gr.Number(label="FLOPs per token")
- n_tokens_flops = gr.Number(label="Total FLOPs for n_tokens")
-
- btn.click(
- calculator,
- inputs=[
- n_layer,
- d_model,
- n_heads,
- n_vocab,
- ff_ratio,
- n_ctx,
- n_tokens,
- incl_embed,
- fwd_only,
- ],
- outputs=[params, flops_per_sequence, flops_per_token, n_tokens_flops],
- )
-
- gr.Markdown("### GPT-3 model family examples")
- gr.Markdown(
- "In order are the 125M, 350M, 1.3B, 2.7B, 6.7B, 13B, 30B, 66B, and 175B parameter variants."
- )
- gr.Examples(
- [
- [12, 768, 12, 50257, 4, 4096, 0, True, False],
- [24, 1024, 16, 50257, 4, 4096, 0, True, False],
- [24, 2048, 32, 50257, 4, 4096, 0, True, False],
- [32, 2560, 32, 50257, 4, 4096, 0, True, False],
- [32, 4096, 32, 50257, 4, 4096, 0, True, False],
- [40, 5120, 40, 50257, 4, 4096, 0, True, False],
- [48, 7168, 56, 50257, 4, 4096, 0, True, False],
- [64, 9216, 72, 50257, 4, 4096, 0, True, False],
- [96, 12288, 96, 50257, 4, 4096, 0, True, False],
- ],
- [
- n_layer,
- d_model,
- n_heads,
- n_vocab,
- ff_ratio,
- n_ctx,
- n_tokens,
- incl_embed,
- fwd_only,
- ],
- [params, flops_per_sequence, flops_per_token, n_tokens_flops],
- calculator,
- cache_examples=False,
- )
-
-iface.launch()
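
The formulas in `deepmind_flops` above are easy to re-evaluate by hand when sanity-checking a configuration. The snippet below is a standalone sketch (same per-layer terms, forward pass only, embeddings included) that prints the FLOPs per token for the 125M-style example row; the configuration values come from the examples table, not from the paper itself.

```python
def fwd_flops_per_token(n_layer, d_model, n_heads, n_vocab, ff_ratio, n_ctx):
    """Forward-pass FLOPs per token using the same terms as deepmind_flops."""
    d_attn = d_model // n_heads
    d_ff = d_model * ff_ratio
    attn_qkv = 2 * n_ctx * 3 * d_model * (d_attn * n_heads)
    attn_logits = 2 * n_ctx * n_ctx * (d_attn * n_heads)
    attn_softmax = 3 * n_heads * n_ctx * n_ctx
    attn_reduce = 2 * n_ctx * n_ctx * (d_attn * n_heads)
    attn_project = 2 * n_ctx * (d_attn * n_heads) * d_model
    ff = 2 * n_ctx * 2 * d_model * d_ff
    logits = 2 * n_ctx * d_model * n_vocab
    embeddings = 2 * n_ctx * n_vocab * d_model
    per_layer = (attn_qkv + attn_logits + attn_softmax
                 + attn_reduce + attn_project + ff)
    return (embeddings + n_layer * per_layer + logits) / n_ctx


# 125M-style row from the examples above: n_layer=12, d_model=768, n_heads=12.
print(f"{fwd_flops_per_token(12, 768, 12, 50257, 4, 4096):.3e}")
```
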
diff --git a/spaces/adirik/stylemc-demo/torch_utils/ops/upfirdn2d.h b/spaces/adirik/stylemc-demo/torch_utils/ops/upfirdn2d.h
deleted file mode 100644
index c9e2032bcac9d2abde7a75eea4d812da348afadd..0000000000000000000000000000000000000000
--- a/spaces/adirik/stylemc-demo/torch_utils/ops/upfirdn2d.h
+++ /dev/null
@@ -1,59 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#include <cuda_runtime.h>
-
-//------------------------------------------------------------------------
-// CUDA kernel parameters.
-
-struct upfirdn2d_kernel_params
-{
- const void* x;
- const float* f;
- void* y;
-
- int2 up;
- int2 down;
- int2 pad0;
- int flip;
- float gain;
-
- int4 inSize; // [width, height, channel, batch]
- int4 inStride;
- int2 filterSize; // [width, height]
- int2 filterStride;
- int4 outSize; // [width, height, channel, batch]
- int4 outStride;
- int sizeMinor;
- int sizeMajor;
-
- int loopMinor;
- int loopMajor;
- int loopX;
- int launchMinor;
- int launchMajor;
-};
-
-//------------------------------------------------------------------------
-// CUDA kernel specialization.
-
-struct upfirdn2d_kernel_spec
-{
- void* kernel;
- int tileOutW;
- int tileOutH;
- int loopMinor;
- int loopX;
-};
-
-//------------------------------------------------------------------------
-// CUDA kernel selection.
-
-template <class T> upfirdn2d_kernel_spec choose_upfirdn2d_kernel(const upfirdn2d_kernel_params& p);
-
-//------------------------------------------------------------------------
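
The header above only declares the kernel parameters and dispatch; the operation itself is upsample (zero insertion), FIR filtering, then downsample. As a rough single-channel NumPy reference of that pipeline (my own sketch with symmetric zero padding and no flip/gain handling, unlike the real kernel, which also supports per-axis padding):

```python
import numpy as np
from scipy.signal import convolve2d


def upfirdn2d_ref(x, f, up=1, down=1, pad=0):
    """Reference upsample -> FIR filter -> downsample for a 2-D array."""
    h, w = x.shape
    # 1. Upsample by inserting (up - 1) zeros between samples on both axes.
    y = np.zeros((h * up, w * up), dtype=np.float64)
    y[::up, ::up] = x
    # 2. Pad symmetrically with zeros and convolve with the 2-D FIR filter.
    y = np.pad(y, pad)
    y = convolve2d(y, f, mode="valid")
    # 3. Downsample by keeping every `down`-th sample.
    return y[::down, ::down]


x = np.random.rand(8, 8)
f = np.outer([1, 3, 3, 1], [1, 3, 3, 1]) / 64.0  # separable smoothing filter
print(upfirdn2d_ref(x, f, up=2, down=1, pad=2).shape)  # (17, 17)
```
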
diff --git a/spaces/akhaliq/Mask2Former/mask2former/data/dataset_mappers/coco_instance_new_baseline_dataset_mapper.py b/spaces/akhaliq/Mask2Former/mask2former/data/dataset_mappers/coco_instance_new_baseline_dataset_mapper.py
deleted file mode 100644
index e64af2b51009d0398a1b6253a8a763c641547f59..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Mask2Former/mask2former/data/dataset_mappers/coco_instance_new_baseline_dataset_mapper.py
+++ /dev/null
@@ -1,189 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/d2/detr/dataset_mapper.py
-import copy
-import logging
-
-import numpy as np
-import torch
-
-from detectron2.config import configurable
-from detectron2.data import detection_utils as utils
-from detectron2.data import transforms as T
-from detectron2.data.transforms import TransformGen
-from detectron2.structures import BitMasks, Instances
-
-from pycocotools import mask as coco_mask
-
-__all__ = ["COCOInstanceNewBaselineDatasetMapper"]
-
-
-def convert_coco_poly_to_mask(segmentations, height, width):
- masks = []
- for polygons in segmentations:
- rles = coco_mask.frPyObjects(polygons, height, width)
- mask = coco_mask.decode(rles)
- if len(mask.shape) < 3:
- mask = mask[..., None]
- mask = torch.as_tensor(mask, dtype=torch.uint8)
- mask = mask.any(dim=2)
- masks.append(mask)
- if masks:
- masks = torch.stack(masks, dim=0)
- else:
- masks = torch.zeros((0, height, width), dtype=torch.uint8)
- return masks
-
-
-def build_transform_gen(cfg, is_train):
- """
- Create a list of default :class:`Augmentation` from config.
- Now it includes resizing and flipping.
- Returns:
- list[Augmentation]
- """
- assert is_train, "Only support training augmentation"
- image_size = cfg.INPUT.IMAGE_SIZE
- min_scale = cfg.INPUT.MIN_SCALE
- max_scale = cfg.INPUT.MAX_SCALE
-
- augmentation = []
-
- if cfg.INPUT.RANDOM_FLIP != "none":
- augmentation.append(
- T.RandomFlip(
- horizontal=cfg.INPUT.RANDOM_FLIP == "horizontal",
- vertical=cfg.INPUT.RANDOM_FLIP == "vertical",
- )
- )
-
- augmentation.extend([
- T.ResizeScale(
- min_scale=min_scale, max_scale=max_scale, target_height=image_size, target_width=image_size
- ),
- T.FixedSizeCrop(crop_size=(image_size, image_size)),
- ])
-
- return augmentation
-
-
-# This is specifically designed for the COCO dataset.
-class COCOInstanceNewBaselineDatasetMapper:
- """
- A callable which takes a dataset dict in Detectron2 Dataset format,
-    and maps it into a format used by MaskFormer.
-
- This dataset mapper applies the same transformation as DETR for COCO panoptic segmentation.
-
- The callable currently does the following:
-
-    1. Reads the image from "file_name"
-    2. Applies geometric transforms to the image and annotation
-    3. Finds and applies suitable cropping to the image and annotation
-    4. Prepares the image and annotation as Tensors
- """
-
- @configurable
- def __init__(
- self,
- is_train=True,
- *,
- tfm_gens,
- image_format,
- ):
- """
- NOTE: this interface is experimental.
- Args:
- is_train: for training or inference
- augmentations: a list of augmentations or deterministic transforms to apply
- tfm_gens: data augmentation
- image_format: an image format supported by :func:`detection_utils.read_image`.
- """
- self.tfm_gens = tfm_gens
- logging.getLogger(__name__).info(
- "[COCOInstanceNewBaselineDatasetMapper] Full TransformGens used in training: {}".format(str(self.tfm_gens))
- )
-
- self.img_format = image_format
- self.is_train = is_train
-
- @classmethod
- def from_config(cls, cfg, is_train=True):
- # Build augmentation
- tfm_gens = build_transform_gen(cfg, is_train)
-
- ret = {
- "is_train": is_train,
- "tfm_gens": tfm_gens,
- "image_format": cfg.INPUT.FORMAT,
- }
- return ret
-
- def __call__(self, dataset_dict):
- """
- Args:
- dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format.
-
- Returns:
- dict: a format that builtin models in detectron2 accept
- """
- dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below
- image = utils.read_image(dataset_dict["file_name"], format=self.img_format)
- utils.check_image_size(dataset_dict, image)
-
- # TODO: get padding mask
- # by feeding a "segmentation mask" to the same transforms
- padding_mask = np.ones(image.shape[:2])
-
- image, transforms = T.apply_transform_gens(self.tfm_gens, image)
- # the crop transformation has default padding value 0 for segmentation
- padding_mask = transforms.apply_segmentation(padding_mask)
- padding_mask = ~ padding_mask.astype(bool)
-
- image_shape = image.shape[:2] # h, w
-
- # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory,
- # but not efficient on large generic data structures due to the use of pickle & mp.Queue.
- # Therefore it's important to use torch.Tensor.
- dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1)))
- dataset_dict["padding_mask"] = torch.as_tensor(np.ascontiguousarray(padding_mask))
-
- if not self.is_train:
- # USER: Modify this if you want to keep them for some reason.
- dataset_dict.pop("annotations", None)
- return dataset_dict
-
- if "annotations" in dataset_dict:
- # USER: Modify this if you want to keep them for some reason.
- for anno in dataset_dict["annotations"]:
- # Let's always keep mask
- # if not self.mask_on:
- # anno.pop("segmentation", None)
- anno.pop("keypoints", None)
-
- # USER: Implement additional transformations if you have other types of data
- annos = [
- utils.transform_instance_annotations(obj, transforms, image_shape)
- for obj in dataset_dict.pop("annotations")
- if obj.get("iscrowd", 0) == 0
- ]
- # NOTE: does not support BitMask due to augmentation
- # Current BitMask cannot handle empty objects
- instances = utils.annotations_to_instances(annos, image_shape)
- # After transforms such as cropping are applied, the bounding box may no longer
- # tightly bound the object. As an example, imagine a triangle object
- # [(0,0), (2,0), (0,2)] cropped by a box [(1,0),(2,2)] (XYXY format). The tight
- # bounding box of the cropped triangle should be [(1,0),(2,1)], which is not equal to
- # the intersection of original bounding box and the cropping box.
- instances.gt_boxes = instances.gt_masks.get_bounding_boxes()
- # Need to filter empty instances first (due to augmentation)
- instances = utils.filter_empty_instances(instances)
- # Generate masks from polygon
- h, w = instances.image_size
- # image_size_xyxy = torch.as_tensor([w, h, w, h], dtype=torch.float)
- if hasattr(instances, 'gt_masks'):
- gt_masks = instances.gt_masks
- gt_masks = convert_coco_poly_to_mask(gt_masks.polygons, h, w)
- instances.gt_masks = gt_masks
- dataset_dict["instances"] = instances
-
- return dataset_dict
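
The mapper above ultimately relies on `pycocotools` to rasterise COCO polygon annotations into binary masks (see `convert_coco_poly_to_mask`). A minimal usage sketch of that underlying call, with a made-up square polygon on an 8x8 canvas:

```python
import numpy as np
from pycocotools import mask as coco_mask

# One polygon in COCO format: a flat [x0, y0, x1, y1, ...] list per polygon.
square = [[1.0, 1.0, 6.0, 1.0, 6.0, 6.0, 1.0, 6.0]]

rles = coco_mask.frPyObjects(square, 8, 8)  # (polygons, height, width)
m = coco_mask.decode(rles)                  # uint8 array of shape (8, 8, 1)
print(np.asarray(m)[..., 0])                # 1s inside the square, 0s outside
```
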
diff --git a/spaces/akhaliq/Pop_Music_Transformer/chord_recognition.py b/spaces/akhaliq/Pop_Music_Transformer/chord_recognition.py
deleted file mode 100644
index ec8feb7dde75e6b522d93f6aa6cd03405c56064c..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Pop_Music_Transformer/chord_recognition.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import miditoolkit
-import numpy as np
-
-class MIDIChord(object):
- def __init__(self):
- # define pitch classes
- self.PITCH_CLASSES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
- # define chord maps (required)
- self.CHORD_MAPS = {'maj': [0, 4],
- 'min': [0, 3],
- 'dim': [0, 3, 6],
- 'aug': [0, 4, 8],
- 'dom': [0, 4, 7, 10]}
- # define chord insiders (+1)
- self.CHORD_INSIDERS = {'maj': [7],
- 'min': [7],
- 'dim': [9],
- 'aug': [],
- 'dom': []}
- # define chord outsiders (-1)
- self.CHORD_OUTSIDERS_1 = {'maj': [2, 5, 9],
- 'min': [2, 5, 8],
- 'dim': [2, 5, 10],
- 'aug': [2, 5, 9],
- 'dom': [2, 5, 9]}
- # define chord outsiders (-2)
- self.CHORD_OUTSIDERS_2 = {'maj': [1, 3, 6, 8, 10],
- 'min': [1, 4, 6, 9, 11],
- 'dim': [1, 4, 7, 8, 11],
- 'aug': [1, 3, 6, 7, 10],
- 'dom': [1, 3, 6, 8, 11]}
-
- def note2pianoroll(self, notes, max_tick, ticks_per_beat):
- return miditoolkit.pianoroll.parser.notes2pianoroll(
- note_stream_ori=notes,
- max_tick=max_tick,
- ticks_per_beat=ticks_per_beat)
-
- def sequencing(self, chroma):
- candidates = {}
- for index in range(len(chroma)):
- if chroma[index]:
- root_note = index
- _chroma = np.roll(chroma, -root_note)
- sequence = np.where(_chroma == 1)[0]
- candidates[root_note] = list(sequence)
- return candidates
-
- def scoring(self, candidates):
- scores = {}
- qualities = {}
- for root_note, sequence in candidates.items():
- if 3 not in sequence and 4 not in sequence:
- scores[root_note] = -100
- qualities[root_note] = 'None'
- elif 3 in sequence and 4 in sequence:
- scores[root_note] = -100
- qualities[root_note] = 'None'
- else:
- # decide quality
- if 3 in sequence:
- if 6 in sequence:
- quality = 'dim'
- else:
- quality = 'min'
- elif 4 in sequence:
- if 8 in sequence:
- quality = 'aug'
- else:
- if 7 in sequence and 10 in sequence:
- quality = 'dom'
- else:
- quality = 'maj'
- # decide score
- maps = self.CHORD_MAPS.get(quality)
- _notes = [n for n in sequence if n not in maps]
- score = 0
- for n in _notes:
- if n in self.CHORD_OUTSIDERS_1.get(quality):
- score -= 1
- elif n in self.CHORD_OUTSIDERS_2.get(quality):
- score -= 2
- elif n in self.CHORD_INSIDERS.get(quality):
- score += 1
- scores[root_note] = score
- qualities[root_note] = quality
- return scores, qualities
-
- def find_chord(self, pianoroll):
- chroma = miditoolkit.pianoroll.utils.tochroma(pianoroll=pianoroll)
- chroma = np.sum(chroma, axis=0)
- chroma = np.array([1 if c else 0 for c in chroma])
- if np.sum(chroma) == 0:
- return 'N', 'N', 'N', 0
- else:
- candidates = self.sequencing(chroma=chroma)
- scores, qualities = self.scoring(candidates=candidates)
- # bass note
- sorted_notes = []
- for i, v in enumerate(np.sum(pianoroll, axis=0)):
- if v > 0:
- sorted_notes.append(int(i%12))
- bass_note = sorted_notes[0]
- # root note
- __root_note = []
- _max = max(scores.values())
- for _root_note, score in scores.items():
- if score == _max:
- __root_note.append(_root_note)
- if len(__root_note) == 1:
- root_note = __root_note[0]
- else:
-                #TODO: tie-break properly; for now prefer the lowest sounding note among tied roots
- for n in sorted_notes:
- if n in __root_note:
- root_note = n
- break
- # quality
- quality = qualities.get(root_note)
- sequence = candidates.get(root_note)
- # score
- score = scores.get(root_note)
- return self.PITCH_CLASSES[root_note], quality, self.PITCH_CLASSES[bass_note], score
-
- def greedy(self, candidates, max_tick, min_length):
- chords = []
- # start from 0
- start_tick = 0
- while start_tick < max_tick:
- _candidates = candidates.get(start_tick)
- _candidates = sorted(_candidates.items(), key=lambda x: (x[1][-1], x[0]))
- # choose
- end_tick, (root_note, quality, bass_note, _) = _candidates[-1]
- if root_note == bass_note:
- chord = '{}:{}'.format(root_note, quality)
- else:
- chord = '{}:{}/{}'.format(root_note, quality, bass_note)
- chords.append([start_tick, end_tick, chord])
- start_tick = end_tick
- # remove :None
- temp = chords
- while ':None' in temp[0][-1]:
- try:
- temp[1][0] = temp[0][0]
- del temp[0]
- except:
- print('NO CHORD')
- return []
- temp2 = []
- for chord in temp:
- if ':None' not in chord[-1]:
- temp2.append(chord)
- else:
- temp2[-1][1] = chord[1]
- return temp2
-
- def extract(self, notes):
- # read
- max_tick = max([n.end for n in notes])
- ticks_per_beat = 480
- pianoroll = self.note2pianoroll(
- notes=notes,
- max_tick=max_tick,
- ticks_per_beat=ticks_per_beat)
- # get lots of candidates
- candidates = {}
- # the shortest: 2 beat, longest: 4 beat
- for interval in [4, 2]:
- for start_tick in range(0, max_tick, ticks_per_beat):
- # set target pianoroll
- end_tick = int(ticks_per_beat * interval + start_tick)
- if end_tick > max_tick:
- end_tick = max_tick
- _pianoroll = pianoroll[start_tick:end_tick, :]
- # find chord
- root_note, quality, bass_note, score = self.find_chord(pianoroll=_pianoroll)
- # save
- if start_tick not in candidates:
- candidates[start_tick] = {}
- candidates[start_tick][end_tick] = (root_note, quality, bass_note, score)
- else:
- if end_tick not in candidates[start_tick]:
- candidates[start_tick][end_tick] = (root_note, quality, bass_note, score)
- # greedy
- chords = self.greedy(candidates=candidates,
- max_tick=max_tick,
- min_length=ticks_per_beat)
- return chords
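
The heart of the recogniser above is the `sequencing` + `scoring` pair: collapse the pianoroll into a 12-bin chroma, then rotate the chroma so each sounding pitch class becomes the candidate root and inspect the resulting interval set. A tiny standalone illustration of the sequencing step for a C major triad:

```python
import numpy as np

# Chroma for C, E, G in the same pitch-class order as PITCH_CLASSES above.
chroma = np.zeros(12, dtype=int)
chroma[[0, 4, 7]] = 1

candidates = {
    int(root): [int(i) for i in np.where(np.roll(chroma, -root) == 1)[0]]
    for root in np.where(chroma == 1)[0]
}
print(candidates)
# {0: [0, 4, 7], 4: [0, 3, 8], 7: [0, 5, 9]}
# Root 0 (C) yields intervals {0, 4, 7}: a major third and fifth with no
# flat seventh, so the scoring step above labels it 'maj'.
```
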
diff --git a/spaces/akhaliq/SummerTime/dataset/non_huggingface_datasets_builders/scisummnet.py b/spaces/akhaliq/SummerTime/dataset/non_huggingface_datasets_builders/scisummnet.py
deleted file mode 100644
index 0b6bcfb5bfc02e09be903d988ec45d0a0a06606e..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/dataset/non_huggingface_datasets_builders/scisummnet.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import os
-import datasets
-
-
-"""Scisummnet dataset."""
-
-
-_CITATION = """
-@InProceedings{yasunaga&al.19.scisumm,
- title = {{ScisummNet}: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks},
- author = {Michihiro Yasunaga and Jungo Kasai and Rui Zhang and Alexander Fabbri and Irene Li and Dan Friedman and Dragomir Radev},
- booktitle = {Proceedings of AAAI 2019},
- year = {2019}
-}
-@InProceedings{yasunaga&al.17,
- title = {Graph-based Neural Multi-Document Summarization},
- author = {Yasunaga, Michihiro and Zhang, Rui and Meelu, Kshitijh and Pareek, Ayush and Srinivasan, Krishnan and Radev, Dragomir R.},
- booktitle = {Proceedings of CoNLL 2017},
- year = {2017}
-}
-"""
-
-_DESCRIPTION = """
-A summary of scientific papers should ideally incorporate the impact of the papers on the research community
-reflected by citations. To facilitate research in citation-aware scientific paper summarization (Scisumm),
-the CL-Scisumm shared task has been organized since 2014 for papers in the computational linguistics and NLP domain.
-"""
-
-_HOMEPAGE = "https://cs.stanford.edu/~myasu/projects/scisumm_net/"
-
-_LICENSE = "CC BY-SA 4.0"
-
-_URLs = "https://cs.stanford.edu/~myasu/projects/scisumm_net/scisummnet_release1.1__20190413.zip"
-
-
-class SummertimeScisummnet(datasets.GeneratorBasedBuilder):
- """Scisummnet dataset."""
-
- VERSION = datasets.Version("1.1.0")
-
- BUILDER_CONFIGS = [
- datasets.BuilderConfig(),
- ]
-
- def _info(self):
- features = datasets.Features(
- {
- "entry_number": datasets.Value("string"),
- "document_xml": datasets.Value("string"),
- "citing_sentences_annotated.json": datasets.Value("string"),
- "summary": datasets.Value("string"),
- }
- )
- return datasets.DatasetInfo(
- description=_DESCRIPTION,
- features=features,
- supervised_keys=None,
- homepage=_HOMEPAGE,
- license=_LICENSE,
- citation=_CITATION,
- )
-
- def _split_generators(self, dl_manager):
- """Returns SplitGenerators."""
- my_urls = _URLs
- path = dl_manager.download_and_extract(my_urls)
- trainpath = os.path.join(
- path, "scisummnet_release1.1__20190413", "top1000_complete"
- )
- return [
- datasets.SplitGenerator(
- name=datasets.Split.TRAIN,
- # These kwargs will be passed to _generate_examples
- gen_kwargs={"extraction_path": trainpath, "split": "train"},
- )
- ]
-
- def _generate_examples(self, extraction_path, split):
- """Yields examples."""
-
- for folder in os.listdir(extraction_path):
-
- entry = {}
-
- entry["entry_number"] = folder
-
- doc_xml_path = os.path.join(
- extraction_path, folder, "Documents_xml", folder + ".xml"
- )
- with open(doc_xml_path, "r", encoding="utf-8") as f:
- entry["document_xml"] = f.read()
-
- cite_annot_path = os.path.join(
- extraction_path, folder, "citing_sentences_annotated.json"
- )
- with open(cite_annot_path, "r", encoding="utf-8") as f:
- entry["citing_sentences_annotated.json"] = f.read()
-
- summary_path = os.path.join(
- extraction_path, folder, "summary", folder + ".gold.txt"
- )
- with open(summary_path, "r", encoding="utf-8") as f:
- entry["summary"] = f.read()
-
- yield entry["entry_number"], entry
diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/Utils/GeneralUtils.py b/spaces/akhaliq/SummerTime/model/third_party/HMNet/Utils/GeneralUtils.py
deleted file mode 100644
index 7f9b1287d172926ca8d0dbc64bea97c60d8ef427..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/Utils/GeneralUtils.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT license.
-
-import math
-import re
-import logging
-import torch
-from torch.utils.data import Dataset
-import torch.nn.functional as F
-import unicodedata
-import sys
-from torch.autograd import Variable
-
-from .Constants import *
-
-logger = logging.getLogger(__name__)
-
-
-class ObjectView(object):
- def __init__(self, d):
- self.__dict__ = d
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value."""
-
- def __init__(self):
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1, decay=0):
- self.val = val
- if decay:
-            alpha = math.exp(-n / decay)  # exponential decay over `decay` updates
- self.sum = alpha * self.sum + (1 - alpha) * val * n
- self.count = alpha * self.count + (1 - alpha) * n
- else:
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
-
-class BaseBatchGen:
- """
- This is a base class for batch generators that use infinibatch.
-
- The interfaces below are required to work with LegacyTask.
-
-    For new tasks, the interfaces are not restricted (the methods and their signatures don't
-    have to be the same as the base class). They should have minimal assumptions about, or
-    dependencies on, other components in the system. Task classes can use them accordingly.
- """
-
- def __init__(
- self,
- task_args,
- dataset_label,
- model_config=None,
- tokenizer=None,
- world_size=1,
- rank=0,
- seed=None,
- ):
- """
- Args:
- task_args (dict): dictionary records arguments
- dataset_label (str): 'train', 'dev' or 'test'
- model_config: config of the model
- tokenizer: tokenizer used to process text
- world_size (int): total number of GPUs
- rank (int): order of current GPU
- seed (int): random seed
- """
- self.opt = task_args
- self.dataset_label = dataset_label
- self.model_config = model_config
- self.tokenizer = tokenizer
- self.world_size = world_size
- self.rank = rank
- self.seed = seed
- self.evaluation = dataset_label in ["dev", "test"]
-
- self._iter = None
-
- def _build_iter(self):
- """
- Build infinibatch iterator and assign to self._iter
- """
- raise NotImplementedError()
-
- @property
- def iterator(self):
- if self._iter is None:
-            raise NotImplementedError("_build_iter() must be called first")
- return self._iter
-
- def __iter__(self):
- if self._iter is None:
-            raise NotImplementedError("_build_iter() must be called first")
- return self._iter
-
- def __next__(self):
- return next(self._iter)
-
-
-def move_batch_to_device(batch, device):
- """
- Move the batch to the device.
- It should be called before feeding the batch to the model.
-
- Args:
- batch (torch.tensor or container of torch.tensor): input batch
- device (torch.device): device to move the batch to
- Returns:
- return_batch: same type as the input batch with internal tensors moved to device
- """
- if torch.is_tensor(batch):
- return_batch = batch.to(device)
- elif isinstance(batch, list):
- return_batch = [move_batch_to_device(t, device) for t in batch]
- elif isinstance(batch, tuple):
- return_batch = tuple(move_batch_to_device(t, device) for t in batch)
- elif isinstance(batch, dict):
- return_batch = {}
- for k in batch:
- return_batch[k] = move_batch_to_device(batch[k], device)
- else:
- logger.debug(
- f"Can not move type {type(batch)} to device. Skipping it in the batch."
- )
- return_batch = batch
-
- return return_batch
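
`move_batch_to_device` above recurses over arbitrarily nested containers and silently passes through anything it cannot move, which is worth seeing once. A small usage sketch (the import path is an assumption; CPU is used so it runs anywhere):

```python
import torch

from GeneralUtils import move_batch_to_device  # assumed import path

batch = {
    "input_ids": torch.zeros(2, 8, dtype=torch.long),
    "features": [torch.ones(2, 4), torch.ones(2, 4)],
    "meta": {"ids": ("a", "b")},  # non-tensor leaves are passed through
}
moved = move_batch_to_device(batch, torch.device("cpu"))
print({k: type(v).__name__ for k, v in moved.items()})
```
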
diff --git a/spaces/akhaliq/deeplab2/data/dataset_utils.py b/spaces/akhaliq/deeplab2/data/dataset_utils.py
deleted file mode 100644
index 167b30a6182cd49ee35f9b3245bf5f0cd9c810a6..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/data/dataset_utils.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Deeplab2 Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""This file contains utility function for handling the dataset."""
-
-import tensorflow as tf
-
-
-def get_semantic_and_panoptic_label(dataset_info, label, ignore_label):
- """Helper function to get semantic and panoptic label from panoptic label.
-
- This functions gets the semantic and panoptic label from panoptic label for
- different datasets. The labels must be encoded with semantic_label *
- label_divisor + instance_id. For thing classes, the instance ID 0 is reserved
- for crowd regions. Please note, the returned panoptic label has replaced
- the crowd region with ignore regions. Yet, the semantic label makes use of
- these regions.
-
- Args:
- dataset_info: A dictionary storing dataset information.
- label: A Tensor of panoptic label.
- ignore_label: An integer specifying the ignore_label.
-
- Returns:
- semantic_label: A Tensor of semantic segmentation label.
- panoptic_label: A Tensor of panoptic segmentation label, which follows the
- Cityscapes annotation where
- panoptic_label = semantic_label * panoptic_label_divisor + instance_id.
- thing_mask: A boolean Tensor specifying the thing regions. Zero if no thing.
- crowd_region: A boolean Tensor specifying crowd region. Zero if no crowd
- annotation.
-
- Raises:
- ValueError: An error occurs when the ignore_label is not in range
- [0, label_divisor].
- """
- panoptic_label_divisor = dataset_info['panoptic_label_divisor']
- if ignore_label >= panoptic_label_divisor or ignore_label < 0:
- raise ValueError('The ignore_label must be in [0, label_divisor].')
-
- semantic_label = label // panoptic_label_divisor
- # Find iscrowd region if any and set to ignore for panoptic labels.
- # 1. Find thing mask.
- thing_mask = tf.zeros_like(semantic_label, tf.bool)
- for thing_id in dataset_info['class_has_instances_list']:
- thing_mask = tf.logical_or(
- thing_mask,
- tf.equal(semantic_label, thing_id))
- # 2. Find crowd region (thing label that have instance_id == 0).
- crowd_region = tf.logical_and(
- thing_mask,
- tf.equal(label % panoptic_label_divisor, 0))
- # 3. Set crowd region to ignore label.
- panoptic_label = tf.where(
- crowd_region,
- tf.ones_like(label) * ignore_label * panoptic_label_divisor,
- label)
-
- return semantic_label, panoptic_label, thing_mask, crowd_region
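
The helper above assumes the Cityscapes-style encoding `panoptic = semantic * panoptic_label_divisor + instance_id`, with instance id 0 reserved for crowd regions. A worked arithmetic example (the divisor of 1000, ignore label of 255, and thing class id 11 are illustrative assumptions):

```python
divisor, ignore_label = 1000, 255
person = 11                           # assumed "thing" class id

panoptic = person * divisor + 7       # person, instance 7  -> 11007
crowd = person * divisor + 0          # crowd region        -> 11000

assert panoptic // divisor == person  # semantic label is recovered
assert crowd % divisor == 0           # instance id 0 flags a crowd region
remapped = ignore_label * divisor     # crowd -> ignore      -> 255000
print(panoptic, crowd, remapped)
```
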
diff --git a/spaces/akhaliq/deeplab2/model/decoder/deeplabv3_test.py b/spaces/akhaliq/deeplab2/model/decoder/deeplabv3_test.py
deleted file mode 100644
index 9cf6698585cb0ce5d14b53021cbe631ad26a1848..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/model/decoder/deeplabv3_test.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Deeplab2 Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Tests for deeplabv3."""
-
-import numpy as np
-import tensorflow as tf
-
-from deeplab2 import common
-from deeplab2 import config_pb2
-from deeplab2.model.decoder import deeplabv3
-from deeplab2.utils import test_utils
-
-
-def _create_deeplabv3_model(feature_key, decoder_channels, aspp_channels,
- atrous_rates, num_classes, **kwargs):
- decoder_options = config_pb2.DecoderOptions(
- feature_key=feature_key,
- decoder_channels=decoder_channels,
- aspp_channels=aspp_channels,
- atrous_rates=atrous_rates)
- deeplabv3_options = config_pb2.ModelOptions.DeeplabV3Options(
- num_classes=num_classes)
- return deeplabv3.DeepLabV3(decoder_options, deeplabv3_options, **kwargs)
-
-
-class Deeplabv3Test(tf.test.TestCase):
-
- def test_deeplabv3_feature_key_not_present(self):
- deeplabv3_decoder = _create_deeplabv3_model(
- feature_key='not_in_features_dict',
- aspp_channels=64,
- decoder_channels=48,
- atrous_rates=[6, 12, 18],
- num_classes=80)
- input_dict = dict()
- input_dict['not_the_same_key'] = tf.random.uniform(shape=(2, 65, 65, 32))
-
- with self.assertRaises(KeyError):
- _ = deeplabv3_decoder(input_dict)
-
- def test_deeplabv3_output_shape(self):
- list_of_num_classes = [2, 19, 133]
- for num_classes in list_of_num_classes:
- deeplabv3_decoder = _create_deeplabv3_model(
- feature_key='not_used',
- aspp_channels=64,
- decoder_channels=48,
- atrous_rates=[6, 12, 18],
- num_classes=num_classes)
- input_tensor = tf.random.uniform(shape=(2, 65, 65, 32))
- expected_shape = [2, 65, 65, num_classes]
-
- logit_tensor = deeplabv3_decoder(input_tensor)
- self.assertListEqual(
- logit_tensor[common.PRED_SEMANTIC_LOGITS_KEY].shape.as_list(),
- expected_shape)
-
- @test_utils.test_all_strategies
- def test_sync_bn(self, strategy):
- input_tensor = tf.random.uniform(shape=(2, 65, 65, 32))
- with strategy.scope():
- for bn_layer in test_utils.NORMALIZATION_LAYERS:
- deeplabv3_decoder = _create_deeplabv3_model(
- feature_key='not_used',
- aspp_channels=64,
- decoder_channels=48,
- atrous_rates=[6, 12, 18],
- num_classes=19,
- bn_layer=bn_layer)
- _ = deeplabv3_decoder(input_tensor)
-
- def test_deeplabv3_feature_extraction_consistency(self):
- deeplabv3_decoder = _create_deeplabv3_model(
- aspp_channels=64,
- decoder_channels=48,
- atrous_rates=[6, 12, 18],
- num_classes=80,
- feature_key='feature_key')
- input_tensor = tf.random.uniform(shape=(2, 65, 65, 32))
- input_dict = dict()
- input_dict['feature_key'] = input_tensor
-
- reference_logits_tensor = deeplabv3_decoder(input_tensor, training=False)
- logits_tensor_to_compare = deeplabv3_decoder(input_dict, training=False)
-
- np.testing.assert_equal(
- reference_logits_tensor[common.PRED_SEMANTIC_LOGITS_KEY].numpy(),
- logits_tensor_to_compare[common.PRED_SEMANTIC_LOGITS_KEY].numpy())
-
- def test_deeplabv3_pool_size_setter(self):
- deeplabv3_decoder = _create_deeplabv3_model(
- feature_key='not_used',
- aspp_channels=64,
- decoder_channels=48,
- atrous_rates=[6, 12, 18],
- num_classes=80)
- pool_size = (10, 10)
- deeplabv3_decoder.set_pool_size(pool_size)
-
- self.assertTupleEqual(deeplabv3_decoder._aspp._aspp_pool._pool_size,
- pool_size)
-
- def test_deeplabv3_pool_size_resetter(self):
- deeplabv3_decoder = _create_deeplabv3_model(
- feature_key='not_used',
- aspp_channels=64,
- decoder_channels=48,
- atrous_rates=[6, 12, 18],
- num_classes=80)
- pool_size = (None, None)
- deeplabv3_decoder.reset_pooling_layer()
-
- self.assertTupleEqual(deeplabv3_decoder._aspp._aspp_pool._pool_size,
- pool_size)
-
- def test_deeplabv3_ckpt_items(self):
- deeplabv3_decoder = _create_deeplabv3_model(
- feature_key='not_used',
- aspp_channels=64,
- decoder_channels=48,
- atrous_rates=[6, 12, 18],
- num_classes=80)
- ckpt_dict = deeplabv3_decoder.checkpoint_items
- self.assertIn(common.CKPT_DEEPLABV3_ASPP, ckpt_dict)
- self.assertIn(common.CKPT_DEEPLABV3_CLASSIFIER_CONV_BN_ACT, ckpt_dict)
- self.assertIn(common.CKPT_SEMANTIC_LAST_LAYER, ckpt_dict)
-
-
-if __name__ == '__main__':
- tf.test.main()
diff --git a/spaces/akhaliq/deeplab2/model/layers/recompute_grad.py b/spaces/akhaliq/deeplab2/model/layers/recompute_grad.py
deleted file mode 100644
index 8bf0e2ad66595e794b187cb7564669ce2ee6c19a..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/model/layers/recompute_grad.py
+++ /dev/null
@@ -1,289 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Deeplab2 Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Library for rematerialization.
-
-Incubates a version of tf.recompute_grad that is XLA compatible.
-
-This file is based on the recompute_grad.py in the bigbird codebase [1]:
-https://github.com/google-research/bigbird/blob/db06498ec8804c6438111938d8654b66ddaccd5d/bigbird/core/recompute_grad.py
-
-[1] Big Bird: Transformers for Longer Sequences, NeurIPS 2020.
- Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris
- Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li
- Yang, Amr Ahmed.
-"""
-import collections
-import os
-import threading
-from typing import Deque, List, NamedTuple, Optional, Sequence
-
-from absl import logging
-import tensorflow.compat.v2 as tf
-
-# pylint: disable=g-direct-tensorflow-import
-from tensorflow.python.framework import ops
-from tensorflow.python.ops import custom_gradient
-
-
-# Remove when https://github.com/tensorflow/tensorflow/pull/45298
-# gets merged
-def get_variable_by_name(var_name):
- """Retrieves tf.Variable from name in MirroredStrategy (multi-gpu)."""
-
- # Get all variables, but it will have copies from different replicas
- all_global_vars = ops.get_collection(ops.GraphKeys.GLOBAL_VARIABLES)
-
- def _replica_filter(var):
- """Filter out variables from different context."""
- try:
- return var_name == var.op.name
- except AttributeError:
- return False
- candidate_vars = list(filter(_replica_filter, all_global_vars))
-
- if len(candidate_vars) >= 1:
- # Filter out non-trainable variables.
- candidate_vars = [v for v in candidate_vars if v.trainable]
- else:
- raise ValueError('Unsuccessful at finding variable {}.'.format(var_name))
-
- if len(candidate_vars) == 1:
- return candidate_vars[0]
- elif len(candidate_vars) > 1:
- raise ValueError(
- 'Unsuccessful at finding trainable variable {}. '
- 'Number of candidates: {}. '
- 'Candidates: {}'.format(var_name, len(candidate_vars), candidate_vars))
- else:
- # The variable is not trainable.
- return None
-custom_gradient.get_variable_by_name = get_variable_by_name
-
-
-class RecomputeContext(
- NamedTuple('RecomputeContext', [
- ('is_recomputing', bool),
- ('seed', tf.Tensor),
- ('children', Deque['RecomputeContext']),
- ])):
- """Context for recomputation.
-
- Attributes:
- is_recomputing: Whether we are in a recomputation phase.
- seed: Scalar integer tensor that should be used with stateless random ops
- for deterministic behavior and correct computation of the gradient.
- children: Nested `RecomputeContext` instances. Used internally by
- `recompute_grad` to track nested instances of `RecomputeContext`.
- """
-
- def __enter__(self):
- return _context_stack.push(self)
-
- def __exit__(self, exc_type, exc_value, traceback):
- _context_stack.pop(self)
-
-
-# Simplified version of `_DefaultStack` in
-# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/ops.py.
-class _ContextStack(threading.local):
- """A thread-local stack for providing implicit recompute contexts."""
-
- def __init__(self):
- super(_ContextStack, self).__init__()
- self._stack = []
-
- def top(self) -> Optional[RecomputeContext]:
- return self._stack[-1] if self._stack else None
-
- def push(self, context: RecomputeContext):
- self._stack.append(context)
- return context
-
- def pop(self, context: RecomputeContext):
- if self._stack[-1] is not context:
- raise AssertionError('Nesting violated for RecomputeContext.')
- self._stack.pop()
-
-
-_context_stack = _ContextStack()
-
-
-def get_recompute_context() -> Optional[RecomputeContext]:
- """Returns the current recomputing context if it exists."""
- return _context_stack.top()
-
-
-# Adapted from
-# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/control_flow_util.py.
-def _get_containing_xla_context(graph: tf.Graph) -> Optional[object]:
- """Returns the first ancestor `XLAControlFlowContext` in the `graph`."""
- ctxt = graph._get_control_flow_context() # pylint: disable=protected-access
- while ctxt:
- if ctxt.IsXLAContext():
- return ctxt
- ctxt = ctxt.outer_context
- return None
-
-
-def _in_xla_context(graph: Optional[tf.Graph] = None) -> bool:
- """Detects whether we are in an XLA context."""
- if '--tf_xla_auto_jit=2' in os.environ.get('TF_XLA_FLAGS', ''):
- return True
- graph = tf.compat.v1.get_default_graph() if graph is None else graph
- while True:
- if _get_containing_xla_context(graph) is not None:
- return True
- try:
- graph = graph.outer_graph
- except AttributeError:
- return False
-
-
-def _force_data_dependency(
- first_compute: Sequence[tf.Tensor],
- then_compute: Sequence[tf.Tensor]) -> List[tf.Tensor]:
- """Forces all of `then_compute` to depend on all of `first_compute`.
-
- Uses a dummy data dependency, which is useful when running on TPUs because
- XLA ignores control dependencies. Only supports float arguments.
-
- Args:
- first_compute: Sequence of `Tensor`s to be executed before `then_compute`.
- then_compute: Sequence of `Tensor`s to executed after `first_compute`.
-
- Returns:
- Sequence of `Tensor`s with same length of `then_compute`.
-
- Raises:
- ValueError: if ranks are unknown or types are not floating.
- """
-
- def _first_element(x):
- if x.shape.ndims is None:
- raise ValueError('Rank of Tensor %s must be known' % x)
- ndims = x.shape.ndims
- begin = tf.zeros(ndims, dtype=tf.int32)
- size = tf.ones(ndims, dtype=tf.int32)
- return tf.reshape(tf.slice(x, begin, size), [])
-
- first_compute_sum = tf.add_n(
- [_first_element(x) for x in first_compute if x is not None])
- dtype = first_compute_sum.dtype
- if not dtype.is_floating:
- raise ValueError('_force_data_dependency only supports floating dtypes.')
- zero = tf.cast(0.0, first_compute_sum.dtype) * first_compute_sum
- then_compute_sequence = [
- x + tf.cast(zero, x.dtype) if x is not None else None
- for x in tf.nest.flatten(then_compute)
- ]
- return tf.nest.pack_sequence_as(then_compute, then_compute_sequence)
-
-
-def _make_seed_if_none(seed: Optional[tf.Tensor]) -> tf.Tensor:
- """Uses the global generator to make a seed if necessary."""
- if seed is not None:
- return seed
- generator = tf.random.experimental.get_global_generator()
- # The two seeds for stateless random ops don't have individual semantics and
- # are scrambled together, so providing one seed is fine. This makes it easier
- # for users to provide a local seed without worrying about integer overflow.
- # See `make_seeds` in
- # https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/stateful_random_ops.py.
- try:
- return generator.uniform_full_int([], tf.int32, name='recompute_grad_seed')
- except (RuntimeError, TypeError, ValueError, tf.errors.NotFoundError) as e:
- # For a number of reasons, the above operation can fail like using multiple
- # graphs or toggling between eager and graph modes. Reset the generator.
- logging.warn('Resetting the generator. %s: %s', type(e), e)
- tf.random.experimental.set_global_generator(None)
- generator = tf.random.experimental.get_global_generator()
- return generator.uniform_full_int([], tf.int32, name='recompute_grad_seed')
-
-
-def recompute_grad(f, seed=None):
- """An eager-compatible version of recompute_grad.
-
- For f(*args, **kwargs), this supports gradients with respect to args, or with
- respect to any variables residing in the kwarg 'variables'.
- Note that for keras layer and model objects, this is handled automatically.
-
- Warning: If `f` was originally a tf.keras Model or Layer object, `g` will not
- be able to access the member variables of that object, because `g` returns
- through the wrapper function `inner`. When recomputing gradients through
- objects that inherit from keras, we suggest keeping a reference to the
- underlying object around for the purpose of accessing these variables.
-
- Args:
- f: function `f(*x)` that returns a `Tensor` or sequence of `Tensor` outputs.
- seed: Optional seed for random ops. `seed` should be an integer scalar
- `Tensor`. When compiling to XLA, `seed` must have dtype `tf.int32`. If
- `seed` is not provided one will be generated.
-
- Returns:
- A function `g` that wraps `f`, but which recomputes `f` on the backwards
- pass of a gradient call.
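-
- Example (a minimal sketch; `layer` and `x` stand for an arbitrary Keras layer
- and input tensor and are not defined here):
-
- layer = tf.keras.layers.Dense(32)
- layer_recomputed = recompute_grad(layer)
- y = layer_recomputed(x)  # activations of `layer` are recomputed on the backward pass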
- """
-
- @tf.custom_gradient
- def inner(*args, **kwargs):
- """Inner function closure for calculating gradients."""
- # Detect when we're nested and in the backwards pass, so we don't generate
- # an additional seed.
- parent_context = get_recompute_context()
- if parent_context is not None and parent_context.is_recomputing:
- # Use the cached context in the recomputation phase.
- with parent_context.children.popleft()._replace(
- is_recomputing=True) as context:
- result = f(*args, **kwargs)
- else:
- with RecomputeContext(
- is_recomputing=False,
- seed=_make_seed_if_none(seed),
- children=collections.deque()) as context:
- result = f(*args, **kwargs)
- # In the forward pass, build up a tree of recomputation contexts.
- if parent_context is not None and not parent_context.is_recomputing:
- parent_context.children.append(context)
-
- def grad(*dresult, **grad_kwargs):
- """Gradient function calculation for inner function."""
- variables = grad_kwargs.pop('variables', None)
- if grad_kwargs:
- raise ValueError('Found unexpected kwargs for `grad`: ',
- list(grad_kwargs.keys()))
- inputs, seed = list(args), context.seed
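- # XLA ignores control dependencies, so thread the incoming gradients through
- # the inputs and the seed as dummy data dependencies instead.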
- if _in_xla_context():
- inputs = _force_data_dependency(
- tf.nest.flatten(dresult), inputs + [seed])
- seed = inputs.pop()
- # tf.keras.backend.set_learning_phase(1)
- with tf.GradientTape() as tape:
- tape.watch(inputs)
- if variables is not None:
- tape.watch(variables)
- with tf.control_dependencies(dresult):
- with context._replace(is_recomputing=True, seed=seed):
- result = f(*inputs, **kwargs)
- kw_vars = []
- if variables is not None:
- kw_vars = list(variables)
- grads = tape.gradient(
- result, list(inputs) + kw_vars, output_gradients=dresult)
- return grads[:len(inputs)], grads[len(inputs):]
-
- return result, grad
-
- return inner
diff --git a/spaces/akhaliq/neural-waveshaping-synthesis/neural_waveshaping_synthesis/data/utils/preprocess_audio.py b/spaces/akhaliq/neural-waveshaping-synthesis/neural_waveshaping_synthesis/data/utils/preprocess_audio.py
deleted file mode 100644
index 8bb3fa73e9b79d7113ac6a784bb6f8639cbecd82..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/neural-waveshaping-synthesis/neural_waveshaping_synthesis/data/utils/preprocess_audio.py
+++ /dev/null
@@ -1,237 +0,0 @@
-from functools import partial
-from typing import Callable, Sequence, Union
-
-import gin
-import librosa
-import numpy as np
-import resampy
-import scipy.io.wavfile as wavfile
-
-from .f0_extraction import extract_f0_with_crepe, extract_f0_with_pyin
-from .loudness_extraction import extract_perceptual_loudness, extract_rms
-from .mfcc_extraction import extract_mfcc
-from ...utils import apply, apply_unpack, unzip
-
-
-def read_audio_files(files: list):
- rates_and_audios = apply(wavfile.read, files)
- return unzip(rates_and_audios)
-
-
-def convert_to_float32_audio(audio: np.ndarray):
- if audio.dtype == np.float32:
- return audio
-
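- # Scale integer PCM samples into the [-1, 1] range using the dtype's maximum value.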
- max_sample_value = np.iinfo(audio.dtype).max
- floating_point_audio = audio / max_sample_value
- return floating_point_audio.astype(np.float32)
-
-
-def make_monophonic(audio: np.ndarray, strategy: str = "keep_left"):
- # deal with non stereo array formats
- if len(audio.shape) == 1:
- return audio
- elif len(audio.shape) != 2:
- raise ValueError("Unknown audio array format.")
-
- # deal with single audio channel
- if audio.shape[0] == 1:
- return audio[0]
- elif audio.shape[1] == 1:
- return audio[:, 0]
- # deal with more than two channels
- elif audio.shape[0] != 2 and audio.shape[1] != 2:
- raise ValueError("Expected stereo input audio but got too many channels.")
-
- # put channel first
- if audio.shape[1] == 2:
- audio = audio.T
-
- # make stereo audio monophonic
- if strategy == "keep_left":
- return audio[0]
- elif strategy == "keep_right":
- return audio[1]
- elif strategy == "sum":
- return np.mean(audio, axis=0)
- elif strategy == "diff":
- return audio[0] - audio[1]
- else:
- raise ValueError("Unknown monophonic conversion strategy: %s" % strategy)
-
-
-def normalise_signal(audio: np.ndarray, factor: float):
- return audio / factor
-
-
-def resample_audio(audio: np.ndarray, original_sr: float, target_sr: float):
- return resampy.resample(audio, original_sr, target_sr)
-
-
-def segment_signal(
- signal: np.ndarray,
- sample_rate: float,
- segment_length_in_seconds: float,
- hop_length_in_seconds: float,
-):
- segment_length_in_samples = int(sample_rate * segment_length_in_seconds)
- hop_length_in_samples = int(sample_rate * hop_length_in_seconds)
- segments = librosa.util.frame(
- signal, segment_length_in_samples, hop_length_in_samples
- )
- return segments
-
-
-def filter_segments(
- threshold: float,
- key_segments: np.ndarray,
- segments: Sequence[np.ndarray],
-):
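- # Keep only the segments whose mean key value (e.g. pitch confidence) exceeds
- # the threshold; the same mask is applied to every array in `segments`.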
- mean_keys = key_segments.mean(axis=0)
- mask = mean_keys > threshold
- filtered_segments = apply(
- lambda x: x[:, mask] if len(x.shape) == 2 else x[:, :, mask], segments
- )
- return filtered_segments
-
-
-def preprocess_single_audio_file(
- file: str,
- control_decimation_factor: float,
- target_sr: float = 16000.0,
- segment_length_in_seconds: float = 4.0,
- hop_length_in_seconds: float = 2.0,
- confidence_threshold: float = 0.85,
- f0_extractor: Callable = extract_f0_with_crepe,
- loudness_extractor: Callable = extract_perceptual_loudness,
- mfcc_extractor: Callable = extract_mfcc,
- normalisation_factor: Union[float, None] = None,
-):
- print("Loading audio file: %s..." % file)
- original_sr, audio = wavfile.read(file)
- audio = convert_to_float32_audio(audio)
- audio = make_monophonic(audio)
-
- if normalisation_factor:
- audio = normalise_signal(audio, normalisation_factor)
-
- print("Resampling audio file: %s..." % file)
- audio = resample_audio(audio, original_sr, target_sr)
-
- print("Extracting f0 with extractor '%s': %s..." % (f0_extractor.__name__, file))
- f0, confidence = f0_extractor(audio)
-
- print(
- "Extracting loudness with extractor '%s': %s..."
- % (loudness_extractor.__name__, file)
- )
- loudness = loudness_extractor(audio)
-
- print(
- "Extracting MFCC with extractor '%s': %s..." % (mfcc_extractor.__name__, file)
- )
- mfcc = mfcc_extractor(audio)
-
- print("Segmenting audio file: %s..." % file)
- segmented_audio = segment_signal(
- audio, target_sr, segment_length_in_seconds, hop_length_in_seconds
- )
-
- print("Segmenting control signals: %s..." % file)
- segmented_f0 = segment_signal(
- f0,
- target_sr / (control_decimation_factor or 1),
- segment_length_in_seconds,
- hop_length_in_seconds,
- )
- segmented_confidence = segment_signal(
- confidence,
- target_sr / (control_decimation_factor or 1),
- segment_length_in_seconds,
- hop_length_in_seconds,
- )
- segmented_loudness = segment_signal(
- loudness,
- target_sr / (control_decimation_factor or 1),
- segment_length_in_seconds,
- hop_length_in_seconds,
- )
- segmented_mfcc = segment_signal(
- mfcc,
- target_sr / (control_decimation_factor or 1),
- segment_length_in_seconds,
- hop_length_in_seconds,
- )
-
- (
- filtered_audio,
- filtered_f0,
- filtered_confidence,
- filtered_loudness,
- filtered_mfcc,
- ) = filter_segments(
- confidence_threshold,
- segmented_confidence,
- (
- segmented_audio,
- segmented_f0,
- segmented_confidence,
- segmented_loudness,
- segmented_mfcc,
- ),
- )
-
- if filtered_audio.shape[-1] == 0:
- print("No segments exceeding confidence threshold...")
- audio_split, f0_split, confidence_split, loudness_split, mfcc_split = (
- [],
- [],
- [],
- [],
- [],
- )
- else:
- split = lambda x: [e.squeeze() for e in np.split(x, x.shape[-1], -1)]
- audio_split = split(filtered_audio)
- f0_split = split(filtered_f0)
- confidence_split = split(filtered_confidence)
- loudness_split = split(filtered_loudness)
- mfcc_split = split(filtered_mfcc)
-
- return audio_split, f0_split, confidence_split, loudness_split, mfcc_split
-
-
-@gin.configurable
-def preprocess_audio(
- files: list,
- control_decimation_factor: float,
- target_sr: float = 16000,
- segment_length_in_seconds: float = 4.0,
- hop_length_in_seconds: float = 2.0,
- confidence_threshold: float = 0.85,
- f0_extractor: Callable = extract_f0_with_crepe,
- loudness_extractor: Callable = extract_perceptual_loudness,
- normalise_audio: bool = False,
-):
- if normalise_audio:
- print("Finding normalisation factor...")
- normalisation_factor = 0
- for file in files:
- _, audio = wavfile.read(file)
- audio = convert_to_float32_audio(audio)
- audio = make_monophonic(audio)
- max_value = np.abs(audio).max()
- normalisation_factor = (
- max_value if max_value > normalisation_factor else normalisation_factor
- )
-
- processor = partial(
- preprocess_single_audio_file,
- control_decimation_factor=control_decimation_factor,
- target_sr=target_sr,
- segment_length_in_seconds=segment_length_in_seconds,
- hop_length_in_seconds=hop_length_in_seconds,
- confidence_threshold=confidence_threshold,
- f0_extractor=f0_extractor,
- loudness_extractor=loudness_extractor,
- normalisation_factor=None if not normalise_audio else normalisation_factor,
- )
- for file in files:
- yield processor(file)
diff --git a/spaces/akhaliq/stylegan3_clip/torch_utils/ops/__init__.py b/spaces/akhaliq/stylegan3_clip/torch_utils/ops/__init__.py
deleted file mode 100644
index 8dd34882519598c472f1224cfe68c9ff6952ce69..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/stylegan3_clip/torch_utils/ops/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-# empty
diff --git a/spaces/alamin655/websurfx/public/static/cookies.js b/spaces/alamin655/websurfx/public/static/cookies.js
deleted file mode 100644
index 677eff788c6c8319efef509a5fe6cfebd6eebc8c..0000000000000000000000000000000000000000
--- a/spaces/alamin655/websurfx/public/static/cookies.js
+++ /dev/null
@@ -1,29 +0,0 @@
-/**
- * This function is executed when any page on the website finishes loading and
- * retrieves the cookies stored on the user's machine, if any. If cookies are
- * available, they are displayed in the cookies tab; otherwise an appropriate
- * message is shown.
- *
- * @function
- * @listens DOMContentLoaded
- * @returns {void}
- */
-document.addEventListener(
- 'DOMContentLoaded',
- () => {
- try {
- // Decode the cookie value
- let cookie = decodeURIComponent(document.cookie)
- // Set the value of the input field to the decoded cookie value if it is not empty
- // Otherwise, display a message indicating that no cookies have been saved on the user's system
- document.querySelector('.cookies input').value =
- cookie !== '' ? cookie : 'No cookies have been saved on your system'
- } catch (error) {
- // If there is an error decoding the cookie, log the error to the console
- // and display an error message in the input field
- console.error('Error decoding cookie:', error)
- document.querySelector('.cookies input').value = 'Error decoding cookie'
- }
- },
- false
-)
diff --git a/spaces/alan-chen-intel/dagan-demo/depth/resnet_encoder.py b/spaces/alan-chen-intel/dagan-demo/depth/resnet_encoder.py
deleted file mode 100644
index 9c94418d383e5c48acb64e946e54f607ea9c2861..0000000000000000000000000000000000000000
--- a/spaces/alan-chen-intel/dagan-demo/depth/resnet_encoder.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# Copyright Niantic 2019. Patent Pending. All rights reserved.
-#
-# This software is licensed under the terms of the Monodepth2 licence
-# which allows for non-commercial use only, the full terms of which are made
-# available in the LICENSE file.
-
-from __future__ import absolute_import, division, print_function
-
-import numpy as np
-
-import torch
-import torch.nn as nn
-import torchvision.models as models
-import torch.utils.model_zoo as model_zoo
-
-
-class ResNetMultiImageInput(models.ResNet):
- """Constructs a resnet model with varying number of input images.
- Adapted from https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py
- """
- def __init__(self, block, layers, num_classes=1000, num_input_images=1):
- super(ResNetMultiImageInput, self).__init__(block, layers)
- self.inplanes = 64
- self.conv1 = nn.Conv2d(
- num_input_images * 3, 64, kernel_size=7, stride=2, padding=3, bias=False)
- self.bn1 = nn.BatchNorm2d(64)
- self.relu = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
- self.layer1 = self._make_layer(block, 64, layers[0])
- self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
- self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
- self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.BatchNorm2d):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
-
-def resnet_multiimage_input(num_layers, pretrained=False, num_input_images=1):
- """Constructs a ResNet model.
- Args:
- num_layers (int): Number of resnet layers. Must be 18 or 50
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- num_input_images (int): Number of frames stacked as input
- """
- assert num_layers in [18, 50], "Can only run with 18 or 50 layer resnet"
- blocks = {18: [2, 2, 2, 2], 50: [3, 4, 6, 3]}[num_layers]
- block_type = {18: models.resnet.BasicBlock, 50: models.resnet.Bottleneck}[num_layers]
- model = ResNetMultiImageInput(block_type, blocks, num_input_images=num_input_images)
-
- if pretrained:
- loaded = model_zoo.load_url(models.resnet.model_urls['resnet{}'.format(num_layers)])
- loaded['conv1.weight'] = torch.cat(
- [loaded['conv1.weight']] * num_input_images, 1) / num_input_images
- model.load_state_dict(loaded)
- return model
-
-
-class ResnetEncoder(nn.Module):
- """Pytorch module for a resnet encoder
- """
- def __init__(self, num_layers, pretrained, num_input_images=1):
- super(ResnetEncoder, self).__init__()
-
- self.num_ch_enc = np.array([64, 64, 128, 256, 512])
-
- resnets = {18: models.resnet18,
- 34: models.resnet34,
- 50: models.resnet50,
- 101: models.resnet101,
- 152: models.resnet152}
-
- if num_layers not in resnets:
- raise ValueError("{} is not a valid number of resnet layers".format(num_layers))
-
- if num_input_images > 1:
- self.encoder = resnet_multiimage_input(num_layers, pretrained, num_input_images)
- else:
- self.encoder = resnets[num_layers](pretrained)
-
- if num_layers > 34:
- self.num_ch_enc[1:] *= 4
-
- def forward(self, input_image):
- self.features = []
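- # Normalise the input with approximate ImageNet statistics before encoding.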
- x = (input_image - 0.45) / 0.225
- x = self.encoder.conv1(x)
- x = self.encoder.bn1(x)
- self.features.append(self.encoder.relu(x))
- self.features.append(self.encoder.layer1(self.encoder.maxpool(self.features[-1])))
- self.features.append(self.encoder.layer2(self.features[-1]))
- self.features.append(self.encoder.layer3(self.features[-1]))
- self.features.append(self.encoder.layer4(self.features[-1]))
-
- return self.features
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/segment.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/segment.py
deleted file mode 100644
index 94ca73076d8ec9c7a6d47e401736dad084070437..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/segment.py
+++ /dev/null
@@ -1,720 +0,0 @@
-from enum import IntEnum
-from functools import lru_cache
-from itertools import filterfalse
-from logging import getLogger
-from operator import attrgetter
-from typing import (
- TYPE_CHECKING,
- Dict,
- Iterable,
- List,
- NamedTuple,
- Optional,
- Sequence,
- Tuple,
- Type,
- Union,
-)
-
-from .cells import (
- _is_single_cell_widths,
- cell_len,
- get_character_cell_size,
- set_cell_size,
-)
-from .repr import Result, rich_repr
-from .style import Style
-
-if TYPE_CHECKING:
- from .console import Console, ConsoleOptions, RenderResult
-
-log = getLogger("rich")
-
-
-class ControlType(IntEnum):
- """Non-printable control codes which typically translate to ANSI codes."""
-
- BELL = 1
- CARRIAGE_RETURN = 2
- HOME = 3
- CLEAR = 4
- SHOW_CURSOR = 5
- HIDE_CURSOR = 6
- ENABLE_ALT_SCREEN = 7
- DISABLE_ALT_SCREEN = 8
- CURSOR_UP = 9
- CURSOR_DOWN = 10
- CURSOR_FORWARD = 11
- CURSOR_BACKWARD = 12
- CURSOR_MOVE_TO_COLUMN = 13
- CURSOR_MOVE_TO = 14
- ERASE_IN_LINE = 15
-
-
-ControlCode = Union[
- Tuple[ControlType], Tuple[ControlType, int], Tuple[ControlType, int, int]
-]
-
-
-@rich_repr()
-class Segment(NamedTuple):
- """A piece of text with associated style. Segments are produced by the Console render process and
- are ultimately converted in to strings to be written to the terminal.
-
- Args:
- text (str): A piece of text.
- style (:class:`~rich.style.Style`, optional): An optional style to apply to the text.
- control (Tuple[ControlCode, ...], optional): Optional sequence of control codes.
- """
-
- text: str = ""
- """Raw text."""
- style: Optional[Style] = None
- """An optional style."""
- control: Optional[Sequence[ControlCode]] = None
- """Optional sequence of control codes."""
-
- def __rich_repr__(self) -> Result:
- yield self.text
- if self.control is None:
- if self.style is not None:
- yield self.style
- else:
- yield self.style
- yield self.control
-
- def __bool__(self) -> bool:
- """Check if the segment contains text."""
- return bool(self.text)
-
- @property
- def cell_length(self) -> int:
- """Get cell length of segment."""
- return 0 if self.control else cell_len(self.text)
-
- @property
- def is_control(self) -> bool:
- """Check if the segment contains control codes."""
- return self.control is not None
-
- @classmethod
- @lru_cache(1024 * 16)
- def _split_cells(cls, segment: "Segment", cut: int) -> Tuple["Segment", "Segment"]: # type: ignore
-
- text, style, control = segment
- _Segment = Segment
-
- cell_length = segment.cell_length
- if cut >= cell_length:
- return segment, _Segment("", style, control)
-
- cell_size = get_character_cell_size
-
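- # Start from a proportional estimate of the split index, then refine it by
- # walking forward one character cell at a time.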
- pos = int((cut / cell_length) * len(text))
-
- before = text[:pos]
- cell_pos = cell_len(before)
- if cell_pos == cut:
- return (
- _Segment(before, style, control),
- _Segment(text[pos:], style, control),
- )
- while pos < len(text):
- char = text[pos]
- pos += 1
- cell_pos += cell_size(char)
- before = text[:pos]
- if cell_pos == cut:
- return (
- _Segment(before, style, control),
- _Segment(text[pos:], style, control),
- )
- if cell_pos > cut:
- return (
- _Segment(before[: pos - 1] + " ", style, control),
- _Segment(" " + text[pos:], style, control),
- )
-
- def split_cells(self, cut: int) -> Tuple["Segment", "Segment"]:
- """Split segment in to two segments at the specified column.
-
- If the cut point falls in the middle of a 2-cell wide character then it is replaced
- by two spaces, to preserve the display width of the parent segment.
-
- Returns:
- Tuple[Segment, Segment]: Two segments.
- """
- text, style, control = self
-
- if _is_single_cell_widths(text):
- # Fast path with all 1 cell characters
- if cut >= len(text):
- return self, Segment("", style, control)
- return (
- Segment(text[:cut], style, control),
- Segment(text[cut:], style, control),
- )
-
- return self._split_cells(self, cut)
-
- @classmethod
- def line(cls) -> "Segment":
- """Make a new line segment."""
- return cls("\n")
-
- @classmethod
- def apply_style(
- cls,
- segments: Iterable["Segment"],
- style: Optional[Style] = None,
- post_style: Optional[Style] = None,
- ) -> Iterable["Segment"]:
- """Apply style(s) to an iterable of segments.
-
- Returns an iterable of segments where the style is replaced by ``style + segment.style + post_style``.
-
- Args:
- segments (Iterable[Segment]): Segments to process.
- style (Style, optional): Base style. Defaults to None.
- post_style (Style, optional): Style to apply on top of segment style. Defaults to None.
-
- Returns:
- Iterable[Segment]: A new iterable of segments (possibly the same iterable).
- """
- result_segments = segments
- if style:
- apply = style.__add__
- result_segments = (
- cls(text, None if control else apply(_style), control)
- for text, _style, control in result_segments
- )
- if post_style:
- result_segments = (
- cls(
- text,
- (
- None
- if control
- else (_style + post_style if _style else post_style)
- ),
- control,
- )
- for text, _style, control in result_segments
- )
- return result_segments
-
- @classmethod
- def filter_control(
- cls, segments: Iterable["Segment"], is_control: bool = False
- ) -> Iterable["Segment"]:
- """Filter segments by ``is_control`` attribute.
-
- Args:
- segments (Iterable[Segment]): An iterable of Segment instances.
- is_control (bool, optional): is_control flag to match in search.
-
- Returns:
- Iterable[Segment]: An iterable of Segment instances.
-
- """
- if is_control:
- return filter(attrgetter("control"), segments)
- else:
- return filterfalse(attrgetter("control"), segments)
-
- @classmethod
- def split_lines(cls, segments: Iterable["Segment"]) -> Iterable[List["Segment"]]:
- """Split a sequence of segments in to a list of lines.
-
- Args:
- segments (Iterable[Segment]): Segments potentially containing line feeds.
-
- Yields:
- Iterable[List[Segment]]: Iterable of segment lists, one per line.
- """
- line: List[Segment] = []
- append = line.append
-
- for segment in segments:
- if "\n" in segment.text and not segment.control:
- text, style, _ = segment
- while text:
- _text, new_line, text = text.partition("\n")
- if _text:
- append(cls(_text, style))
- if new_line:
- yield line
- line = []
- append = line.append
- else:
- append(segment)
- if line:
- yield line
-
- @classmethod
- def split_and_crop_lines(
- cls,
- segments: Iterable["Segment"],
- length: int,
- style: Optional[Style] = None,
- pad: bool = True,
- include_new_lines: bool = True,
- ) -> Iterable[List["Segment"]]:
- """Split segments in to lines, and crop lines greater than a given length.
-
- Args:
- segments (Iterable[Segment]): An iterable of segments, probably
- generated from console.render.
- length (int): Desired line length.
- style (Style, optional): Style to use for any padding.
- pad (bool): Enable padding of lines that are less than `length`.
-
- Returns:
- Iterable[List[Segment]]: An iterable of lines of segments.
- """
- line: List[Segment] = []
- append = line.append
-
- adjust_line_length = cls.adjust_line_length
- new_line_segment = cls("\n")
-
- for segment in segments:
- if "\n" in segment.text and not segment.control:
- text, style, _ = segment
- while text:
- _text, new_line, text = text.partition("\n")
- if _text:
- append(cls(_text, style))
- if new_line:
- cropped_line = adjust_line_length(
- line, length, style=style, pad=pad
- )
- if include_new_lines:
- cropped_line.append(new_line_segment)
- yield cropped_line
- del line[:]
- else:
- append(segment)
- if line:
- yield adjust_line_length(line, length, style=style, pad=pad)
-
- @classmethod
- def adjust_line_length(
- cls,
- line: List["Segment"],
- length: int,
- style: Optional[Style] = None,
- pad: bool = True,
- ) -> List["Segment"]:
- """Adjust a line to a given width (cropping or padding as required).
-
- Args:
- segments (Iterable[Segment]): A list of segments in a single line.
- length (int): The desired width of the line.
- style (Style, optional): The style of padding if used (space on the end). Defaults to None.
- pad (bool, optional): Pad lines with spaces if they are shorter than `length`. Defaults to True.
-
- Returns:
- List[Segment]: A line of segments with the desired length.
- """
- line_length = sum(segment.cell_length for segment in line)
- new_line: List[Segment]
-
- if line_length < length:
- if pad:
- new_line = line + [cls(" " * (length - line_length), style)]
- else:
- new_line = line[:]
- elif line_length > length:
- new_line = []
- append = new_line.append
- line_length = 0
- for segment in line:
- segment_length = segment.cell_length
- if line_length + segment_length < length or segment.control:
- append(segment)
- line_length += segment_length
- else:
- text, segment_style, _ = segment
- text = set_cell_size(text, length - line_length)
- append(cls(text, segment_style))
- break
- else:
- new_line = line[:]
- return new_line
-
- @classmethod
- def get_line_length(cls, line: List["Segment"]) -> int:
- """Get the length of list of segments.
-
- Args:
- line (List[Segment]): A line encoded as a list of Segments (assumes no '\\\\n' characters),
-
- Returns:
- int: The length of the line.
- """
- _cell_len = cell_len
- return sum(_cell_len(segment.text) for segment in line)
-
- @classmethod
- def get_shape(cls, lines: List[List["Segment"]]) -> Tuple[int, int]:
- """Get the shape (enclosing rectangle) of a list of lines.
-
- Args:
- lines (List[List[Segment]]): A list of lines (no '\\\\n' characters).
-
- Returns:
- Tuple[int, int]: Width and height in characters.
- """
- get_line_length = cls.get_line_length
- max_width = max(get_line_length(line) for line in lines) if lines else 0
- return (max_width, len(lines))
-
- @classmethod
- def set_shape(
- cls,
- lines: List[List["Segment"]],
- width: int,
- height: Optional[int] = None,
- style: Optional[Style] = None,
- new_lines: bool = False,
- ) -> List[List["Segment"]]:
- """Set the shape of a list of lines (enclosing rectangle).
-
- Args:
- lines (List[List[Segment]]): A list of lines.
- width (int): Desired width.
- height (int, optional): Desired height or None for no change.
- style (Style, optional): Style of any padding added.
- new_lines (bool, optional): Padded lines should include "\n". Defaults to False.
-
- Returns:
- List[List[Segment]]: New list of lines.
- """
- _height = height or len(lines)
-
- blank = (
- [cls(" " * width + "\n", style)] if new_lines else [cls(" " * width, style)]
- )
-
- adjust_line_length = cls.adjust_line_length
- shaped_lines = lines[:_height]
- shaped_lines[:] = [
- adjust_line_length(line, width, style=style) for line in lines
- ]
- if len(shaped_lines) < _height:
- shaped_lines.extend([blank] * (_height - len(shaped_lines)))
- return shaped_lines
-
- @classmethod
- def align_top(
- cls: Type["Segment"],
- lines: List[List["Segment"]],
- width: int,
- height: int,
- style: Style,
- new_lines: bool = False,
- ) -> List[List["Segment"]]:
- """Aligns lines to top (adds extra lines to bottom as required).
-
- Args:
- lines (List[List[Segment]]): A list of lines.
- width (int): Desired width.
- height (int, optional): Desired height or None for no change.
- style (Style): Style of any padding added.
- new_lines (bool, optional): Padded lines should include "\n". Defaults to False.
-
- Returns:
- List[List[Segment]]: New list of lines.
- """
- extra_lines = height - len(lines)
- if not extra_lines:
- return lines[:]
- lines = lines[:height]
- blank = cls(" " * width + "\n", style) if new_lines else cls(" " * width, style)
- lines = lines + [[blank]] * extra_lines
- return lines
-
- @classmethod
- def align_bottom(
- cls: Type["Segment"],
- lines: List[List["Segment"]],
- width: int,
- height: int,
- style: Style,
- new_lines: bool = False,
- ) -> List[List["Segment"]]:
- """Aligns render to bottom (adds extra lines above as required).
-
- Args:
- lines (List[List[Segment]]): A list of lines.
- width (int): Desired width.
- height (int, optional): Desired height or None for no change.
- style (Style): Style of any padding added. Defaults to None.
- new_lines (bool, optional): Padded lines should include "\n". Defaults to False.
-
- Returns:
- List[List[Segment]]: New list of lines.
- """
- extra_lines = height - len(lines)
- if not extra_lines:
- return lines[:]
- lines = lines[:height]
- blank = cls(" " * width + "\n", style) if new_lines else cls(" " * width, style)
- lines = [[blank]] * extra_lines + lines
- return lines
-
- @classmethod
- def align_middle(
- cls: Type["Segment"],
- lines: List[List["Segment"]],
- width: int,
- height: int,
- style: Style,
- new_lines: bool = False,
- ) -> List[List["Segment"]]:
- """Aligns lines to middle (adds extra lines to above and below as required).
-
- Args:
- lines (List[List[Segment]]): A list of lines.
- width (int): Desired width.
- height (int, optional): Desired height or None for no change.
- style (Style): Style of any padding added.
- new_lines (bool, optional): Padded lines should include "\n". Defaults to False.
-
- Returns:
- List[List[Segment]]: New list of lines.
- """
- extra_lines = height - len(lines)
- if not extra_lines:
- return lines[:]
- lines = lines[:height]
- blank = cls(" " * width + "\n", style) if new_lines else cls(" " * width, style)
- top_lines = extra_lines // 2
- bottom_lines = extra_lines - top_lines
- lines = [[blank]] * top_lines + lines + [[blank]] * bottom_lines
- return lines
-
- @classmethod
- def simplify(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
- """Simplify an iterable of segments by combining contiguous segments with the same style.
-
- Args:
- segments (Iterable[Segment]): An iterable of segments.
-
- Returns:
- Iterable[Segment]: A possibly smaller iterable of segments that will render the same way.
- """
- iter_segments = iter(segments)
- try:
- last_segment = next(iter_segments)
- except StopIteration:
- return
-
- _Segment = Segment
- for segment in iter_segments:
- if last_segment.style == segment.style and not segment.control:
- last_segment = _Segment(
- last_segment.text + segment.text, last_segment.style
- )
- else:
- yield last_segment
- last_segment = segment
- yield last_segment
-
- @classmethod
- def strip_links(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
- """Remove all links from an iterable of styles.
-
- Args:
- segments (Iterable[Segment]): An iterable of segments.
-
- Yields:
- Segment: Segments with link removed.
- """
- for segment in segments:
- if segment.control or segment.style is None:
- yield segment
- else:
- text, style, _control = segment
- yield cls(text, style.update_link(None) if style else None)
-
- @classmethod
- def strip_styles(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
- """Remove all styles from an iterable of segments.
-
- Args:
- segments (Iterable[Segment]): An iterable of segments.
-
- Yields:
- Segment: Segments with styles replaced with None.
- """
- for text, _style, control in segments:
- yield cls(text, None, control)
-
- @classmethod
- def remove_color(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
- """Remove all color from an iterable of segments.
-
- Args:
- segments (Iterable[Segment]): An iterable of segments.
-
- Yields:
- Segment: Segments with colorless style.
- """
-
- cache: Dict[Style, Style] = {}
- for text, style, control in segments:
- if style:
- colorless_style = cache.get(style)
- if colorless_style is None:
- colorless_style = style.without_color
- cache[style] = colorless_style
- yield cls(text, colorless_style, control)
- else:
- yield cls(text, None, control)
-
- @classmethod
- def divide(
- cls, segments: Iterable["Segment"], cuts: Iterable[int]
- ) -> Iterable[List["Segment"]]:
- """Divides an iterable of segments in to portions.
-
- Args:
- cuts (Iterable[int]): Cell positions where to divide.
-
- Yields:
- Iterable[List[Segment]]: An iterable of lists of segments.
- """
- split_segments: List["Segment"] = []
- add_segment = split_segments.append
-
- iter_cuts = iter(cuts)
-
- while True:
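- # Skip over any leading zero cuts, yielding an empty portion for each one.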
- try:
- cut = next(iter_cuts)
- except StopIteration:
- return []
- if cut != 0:
- break
- yield []
- pos = 0
-
- for segment in segments:
- while segment.text:
- end_pos = pos + segment.cell_length
- if end_pos < cut:
- add_segment(segment)
- pos = end_pos
- break
-
- try:
- if end_pos == cut:
- add_segment(segment)
- yield split_segments[:]
- del split_segments[:]
- pos = end_pos
- break
- else:
- before, segment = segment.split_cells(cut - pos)
- add_segment(before)
- yield split_segments[:]
- del split_segments[:]
- pos = cut
- finally:
- try:
- cut = next(iter_cuts)
- except StopIteration:
- if split_segments:
- yield split_segments[:]
- return
- yield split_segments[:]
-
-
-class Segments:
- """A simple renderable to render an iterable of segments. This class may be useful if
- you want to print segments outside of a __rich_console__ method.
-
- Args:
- segments (Iterable[Segment]): An iterable of segments.
- new_lines (bool, optional): Add new lines between segments. Defaults to False.
- """
-
- def __init__(self, segments: Iterable[Segment], new_lines: bool = False) -> None:
- self.segments = list(segments)
- self.new_lines = new_lines
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult":
- if self.new_lines:
- line = Segment.line()
- for segment in self.segments:
- yield segment
- yield line
- else:
- yield from self.segments
-
-
-class SegmentLines:
- def __init__(self, lines: Iterable[List[Segment]], new_lines: bool = False) -> None:
- """A simple renderable containing a number of lines of segments. May be used as an intermediate
- in the rendering process.
-
- Args:
- lines (Iterable[List[Segment]]): Lists of segments forming lines.
- new_lines (bool, optional): Insert new lines after each line. Defaults to False.
- """
- self.lines = list(lines)
- self.new_lines = new_lines
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult":
- if self.new_lines:
- new_line = Segment.line()
- for line in self.lines:
- yield from line
- yield new_line
- else:
- for line in self.lines:
- yield from line
-
-
-if __name__ == "__main__":  # pragma: no cover
- from pip._vendor.rich.console import Console
- from pip._vendor.rich.syntax import Syntax
- from pip._vendor.rich.text import Text
-
- code = """from rich.console import Console
- console = Console()
- text = Text.from_markup("Hello, [bold magenta]World[/]!")
- console.print(text)"""
-
- text = Text.from_markup("Hello, [bold magenta]World[/]!")
-
- console = Console()
-
- console.rule("rich.Segment")
- console.print(
- "A Segment is the last step in the Rich render process before generating text with ANSI codes."
- )
- console.print("\nConsider the following code:\n")
- console.print(Syntax(code, "python", line_numbers=True))
- console.print()
- console.print(
- "When you call [b]print()[/b], Rich [i]renders[/i] the object into the following:\n"
- )
- fragments = list(console.render(text))
- console.print(fragments)
- console.print()
- console.print(
- "The Segments are then processed to produce the following output:\n"
- )
- console.print(text)
- console.print(
- "\nYou will only need to know this if you are implementing your own Rich renderables."
- )
diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/ExampleInitModel/HMNet-pretrained/README.md b/spaces/aliabd/SummerTime/model/third_party/HMNet/ExampleInitModel/HMNet-pretrained/README.md
deleted file mode 100644
index 1a9e9d8ebcac1b537a6bd4afc7b01835437e66f2..0000000000000000000000000000000000000000
--- a/spaces/aliabd/SummerTime/model/third_party/HMNet/ExampleInitModel/HMNet-pretrained/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Download the pretrained HMNet model
-
-Use the download [link](https://sdrgstorage01wus2.blob.core.windows.net/user/ruox/Meeting_Minutes/HMNet/ExampleInitModel/HMNet-pretrained/model.pt?sv=2019-10-10&st=2020-10-22T19%3A24%3A06Z&se=2060-10-23T19%3A24%3A00Z&sr=b&sp=r&sig=cRfastEaN7s75cgMaBvEFGbXio20smnjjRxxYbqEkoE%3D) to download the `model.pt` file and put it in this directory.
\ No newline at end of file
diff --git a/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/preprocess.py b/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/preprocess.py
deleted file mode 100644
index a2438a34c69300e4248a334d29efce9539b934f5..0000000000000000000000000000000000000000
--- a/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/preprocess.py
+++ /dev/null
@@ -1,576 +0,0 @@
-import copy
-import json
-import os
-import re
-import zipfile
-from collections import OrderedDict
-
-import spacy
-from tqdm import tqdm
-
-from crazyneuraluser.UBAR_code import ontology, utils
-from crazyneuraluser.UBAR_code.clean_dataset import clean_slot_values, clean_text
-from crazyneuraluser.UBAR_code.config import global_config as cfg
-from crazyneuraluser.UBAR_code.db_ops import MultiWozDB
-
-
-# value_set.json, all the domain[slot] values in datasets
-def get_db_values(value_set_path):
- processed = {}
- bspn_word = []
- nlp = spacy.load("en_core_web_sm")
-
- with open(value_set_path, "r") as f: # read value set file in lower
- value_set = json.loads(f.read().lower())
-
- with open("data/raw/UBAR/db/ontology.json", "r") as f: # read ontology in lower, all the domain-slot values
- otlg = json.loads(f.read().lower())
-
- for (
- domain,
- slots,
- ) in value_set.items(): # add all informable slots to bspn_word, create lists holder for values
- processed[domain] = {}
- bspn_word.append("[" + domain + "]")
- for slot, values in slots.items():
- s_p = ontology.normlize_slot_names.get(slot, slot)
- if s_p in ontology.informable_slots[domain]:
- bspn_word.append(s_p)
- processed[domain][s_p] = []
-
- for (
- domain,
- slots,
- ) in value_set.items(): # add all words of values of informable slots to bspn_word
- for slot, values in slots.items():
- s_p = ontology.normlize_slot_names.get(slot, slot)
- if s_p in ontology.informable_slots[domain]:
- for v in values:
- _, v_p = clean_slot_values(domain, slot, v)
- v_p = " ".join([token.text for token in nlp(v_p)]).strip()
- processed[domain][s_p].append(v_p)
- for x in v_p.split():
- if x not in bspn_word:
- bspn_word.append(x)
-
- for domain_slot, values in otlg.items(): # split domain-slots to domains and slots
- domain, slot = domain_slot.split("-")
- if domain == "bus":
- domain = "taxi"
- if slot == "price range":
- slot = "pricerange"
- if slot == "book stay":
- slot = "stay"
- if slot == "book day":
- slot = "day"
- if slot == "book people":
- slot = "people"
- if slot == "book time":
- slot = "time"
- if slot == "arrive by":
- slot = "arrive"
- if slot == "leave at":
- slot = "leave"
- if slot == "leaveat":
- slot = "leave"
- # add all slots and words of values if not already in processed and bspn_word
- if slot not in processed[domain]:
- processed[domain][slot] = []
- bspn_word.append(slot)
- for v in values:
- _, v_p = clean_slot_values(domain, slot, v)
- v_p = " ".join([token.text for token in nlp(v_p)]).strip()
- if v_p not in processed[domain][slot]:
- processed[domain][slot].append(v_p)
- for x in v_p.split():
- if x not in bspn_word:
- bspn_word.append(x)
-
- with open(value_set_path.replace(".json", "_processed.json"), "w") as f:
- json.dump(processed, f, indent=2) # save processed.json
- with open("data/preprocessed_gen_usr_utts/UBAR/multi-woz-processed/bspn_word_collection.json", "w") as f:
- json.dump(bspn_word, f, indent=2) # save bspn_word
-
- print("DB value set processed! ")
-
-
-def preprocess_db(db_paths): # apply clean_slot_values to all dbs
- dbs = {}
- nlp = spacy.load("en_core_web_sm")
- for domain in ontology.all_domains:
- with open(db_paths[domain], "r") as f: # for every db_domain, read json file
- dbs[domain] = json.loads(f.read().lower())
- # entry has information about slots of said domain
- for idx, entry in enumerate(dbs[domain]):
- new_entry = copy.deepcopy(entry)
- for key, value in entry.items(): # key = slot
- if type(value) is not str:
- continue
- del new_entry[key]
- key, value = clean_slot_values(domain, key, value)
- tokenize_and_back = " ".join([token.text for token in nlp(value)]).strip()
- new_entry[key] = tokenize_and_back
- dbs[domain][idx] = new_entry
- with open(db_paths[domain].replace(".json", "_processed.json"), "w") as f:
- json.dump(dbs[domain], f, indent=2)
- print("[%s] DB processed! " % domain)
-
-
-class DataPreprocessor(object):
- def __init__(self):
- self.nlp = spacy.load("en_core_web_sm")
- self.db = MultiWozDB(cfg.dbs) # load all processed dbs
- data_path = "data/preprocessed/UBAR/gen_usr_utt_experiment_data_with_span_full.json"
- # archive = zipfile.ZipFile(data_path + ".zip", "r")
- # self.convlab_data = json.loads(archive.open(data_path.split("/")[-1], "r").read().lower())
- self.convlab_data = json.loads(open(data_path, "r").read().lower())
- self.delex_sg_valdict_path = "data/preprocessed_gen_usr_utts/UBAR/multi-woz-processed/delex_single_valdict.json"
- self.delex_mt_valdict_path = "data/preprocessed_gen_usr_utts/UBAR/multi-woz-processed/delex_multi_valdict.json"
- self.ambiguous_val_path = "data/preprocessed_gen_usr_utts/UBAR/multi-woz-processed/ambiguous_values.json"
- self.delex_refs_path = "data/preprocessed_gen_usr_utts/UBAR/multi-woz-processed/reference_no.json"
- self.delex_refs = json.loads(open(self.delex_refs_path, "r").read())
- if not os.path.exists(self.delex_sg_valdict_path):
- (
- self.delex_sg_valdict,
- self.delex_mt_valdict,
- self.ambiguous_vals,
- ) = self.get_delex_valdict()
- else:
- self.delex_sg_valdict = json.loads(open(self.delex_sg_valdict_path, "r").read())
- self.delex_mt_valdict = json.loads(open(self.delex_mt_valdict_path, "r").read())
- self.ambiguous_vals = json.loads(open(self.ambiguous_val_path, "r").read())
-
- self.vocab = utils.Vocab(cfg.vocab_size)
-
- def delex_by_annotation(self, dial_turn):
- u = dial_turn["text"].split()
- span = dial_turn["span_info"]
- for s in span:
- slot = s[1]
- if slot == "open":
- continue
- if ontology.da_abbr_to_slot_name.get(slot):
- slot = ontology.da_abbr_to_slot_name[slot]
- for idx in range(s[3], s[4] + 1):
- u[idx] = ""
- try:
- u[s[3]] = "[value_" + slot + "]"
- except Exception:
- u[5] = "[value_" + slot + "]"
- u_delex = " ".join([t for t in u if t != ""])
- u_delex = u_delex.replace("[value_address] , [value_address] , [value_address]", "[value_address]")
- u_delex = u_delex.replace("[value_address] , [value_address]", "[value_address]")
- u_delex = u_delex.replace("[value_name] [value_name]", "[value_name]")
- u_delex = u_delex.replace("[value_name]([value_phone] )", "[value_name] ( [value_phone] )")
- return u_delex
-
- def delex_by_valdict(self, text):
- text = clean_text(text)
-
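- # Pattern-based delexicalisation: replace phone numbers, star ratings, prices,
- # train ids and postcodes with placeholder tokens.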
- text = re.sub(r"\d{5}\s?\d{5,7}", "[value_phone]", text)
- text = re.sub(r"\d[\s-]stars?", "[value_stars]", text)
- text = re.sub(r"\$\d+|\$?\d+.?(\d+)?\s(pounds?|gbps?)", "[value_price]", text)
- text = re.sub(r"tr[\d]{4}", "[value_id]", text)
- text = re.sub(
- r"([a-z]{1}[\. ]?[a-z]{1}[\. ]?\d{1,2}[, ]+\d{1}[\. ]?[a-z]{1}[\. ]?[a-z]{1}|[a-z]{2}\d{2}[a-z]{2})",
- "[value_postcode]",
- text,
- )
-
- for value, slot in self.delex_mt_valdict.items():
- text = text.replace(value, "[value_%s]" % slot)
-
- for value, slot in self.delex_sg_valdict.items():
- tokens = text.split()
- for idx, tk in enumerate(tokens):
- if tk == value:
- tokens[idx] = "[value_%s]" % slot
- text = " ".join(tokens)
-
- for ambg_ent in self.ambiguous_vals:
- # ely is a place, but appears in words like moderately
- start_idx = text.find(" " + ambg_ent)
- if start_idx == -1:
- continue
- front_words = text[:start_idx].split()
- ent_type = "time" if ":" in ambg_ent else "place"
-
- for fw in front_words[::-1]:
- if fw in [
- "arrive",
- "arrives",
- "arrived",
- "arriving",
- "arrival",
- "destination",
- "there",
- "reach",
- "to",
- "by",
- "before",
- ]:
- slot = "[value_arrive]" if ent_type == "time" else "[value_destination]"
- text = re.sub(" " + ambg_ent, " " + slot, text)
- elif fw in [
- "leave",
- "leaves",
- "leaving",
- "depart",
- "departs",
- "departing",
- "departure",
- "from",
- "after",
- "pulls",
- ]:
- slot = "[value_leave]" if ent_type == "time" else "[value_departure]"
- text = re.sub(" " + ambg_ent, " " + slot, text)
-
- text = text.replace("[value_car] [value_car]", "[value_car]")
- return text
-
- def get_delex_valdict(
- self,
- ):
- skip_entry_type = {
- "taxi": ["taxi_phone"],
- "police": ["id"],
- "hospital": ["id"],
- "hotel": [
- "id",
- "location",
- "internet",
- "parking",
- "takesbookings",
- "stars",
- "price",
- "n",
- "postcode",
- "phone",
- ],
- "attraction": [
- "id",
- "location",
- "pricerange",
- "price",
- "openhours",
- "postcode",
- "phone",
- ],
- "train": ["price", "id"],
- "restaurant": [
- "id",
- "location",
- "introduction",
- "signature",
- "type",
- "postcode",
- "phone",
- ],
- }
- entity_value_to_slot = {}
- ambiguous_entities = []
- for domain, db_data in self.db.dbs.items():
- print("Processing entity values in [%s]" % domain)
- if domain != "taxi":
- for db_entry in db_data:
- for slot, value in db_entry.items():
- if slot not in skip_entry_type[domain]:
- if type(value) is not str:
- raise TypeError("value '%s' in domain '%s' should be rechecked" % (slot, domain))
- else:
- slot, value = clean_slot_values(domain, slot, value)
- value = " ".join([token.text for token in self.nlp(value)]).strip()
- if value in entity_value_to_slot and entity_value_to_slot[value] != slot:
- # print(value, ": ",entity_value_to_slot[value], slot)
- ambiguous_entities.append(value)
- entity_value_to_slot[value] = slot
- else: # taxi db specific
- db_entry = db_data[0]
- for slot, ent_list in db_entry.items():
- if slot not in skip_entry_type[domain]:
- for ent in ent_list:
- entity_value_to_slot[ent] = "car"
- ambiguous_entities = set(ambiguous_entities)
- ambiguous_entities.remove("cambridge")
- ambiguous_entities = list(ambiguous_entities)
- for amb_ent in ambiguous_entities: # departure or destination? arrive time or leave time?
- entity_value_to_slot.pop(amb_ent)
- entity_value_to_slot["parkside"] = "address"
- entity_value_to_slot["parkside, cambridge"] = "address"
- entity_value_to_slot["cambridge belfry"] = "name"
- entity_value_to_slot["hills road"] = "address"
- entity_value_to_slot["hills rd"] = "address"
- entity_value_to_slot["Parkside Police Station"] = "name"
-
- single_token_values = {}
- multi_token_values = {}
- for val, slt in entity_value_to_slot.items():
- if val in ["cambridge"]:
- continue
- if len(val.split()) > 1:
- multi_token_values[val] = slt
- else:
- single_token_values[val] = slt
-
- with open(self.delex_sg_valdict_path, "w") as f:
- single_token_values = OrderedDict(
- sorted(single_token_values.items(), key=lambda kv: len(kv[0]), reverse=True)
- )
- json.dump(single_token_values, f, indent=2)
- print("single delex value dict saved!")
- with open(self.delex_mt_valdict_path, "w") as f:
- multi_token_values = OrderedDict(
- sorted(multi_token_values.items(), key=lambda kv: len(kv[0]), reverse=True)
- )
- json.dump(multi_token_values, f, indent=2)
- print("multi delex value dict saved!")
- with open(self.ambiguous_val_path, "w") as f:
- json.dump(ambiguous_entities, f, indent=2)
- print("ambiguous value dict saved!")
-
- return single_token_values, multi_token_values, ambiguous_entities
-
- def preprocess_main(self, save_path=None, is_test=False):
- """Convert the raw dialogues into the processed UBAR format (delexicalised turns, belief states, system acts and database pointers)."""
- data = {}
- count = 0
- self.unique_da = {}
- ordered_sysact_dict = {}
- for fn, raw_dial in tqdm(list(self.convlab_data.items())):
- count += 1
- # if count == 100:
- # break
-
- compressed_goal = {} # for every dialog, keep track of the goal, domains and requests
- dial_domains, dial_reqs = [], []
- for dom, g in raw_dial["goal"].items():
- if dom != "topic" and dom != "message" and g:
- if g.get("reqt"): # request info. eg. postcode/address/phone
- # normalize request slots
- for i, req_slot in enumerate(g["reqt"]):
- if ontology.normlize_slot_names.get(req_slot):
- g["reqt"][i] = ontology.normlize_slot_names[req_slot]
- dial_reqs.append(g["reqt"][i])
- compressed_goal[dom] = g
- if dom in ontology.all_domains:
- dial_domains.append(dom)
-
- dial_reqs = list(set(dial_reqs))
-
- dial = {"goal": compressed_goal, "log": []}
- single_turn = {}
- constraint_dict = OrderedDict()
- prev_constraint_dict = {}
- prev_turn_domain = ["general"]
- ordered_sysact_dict[fn] = {}
-
- for turn_num, dial_turn in enumerate(raw_dial["log"]):
- # for user turn, have text
- # sys turn: text, belief states(metadata), dialog_act, span_info
- dial_state = dial_turn["metadata"]
- if not dial_state: # user
- # delexicalize user utterance, either by annotation or by val_dict
- u = " ".join(clean_text(dial_turn["text"]).split())
-
- # NOTE: Commenting out delexicalisation because it is not used and
- # breaks when I use generated user dialogues for some reason
-
- # if dial_turn["span_info"]:
- # u_delex = clean_text(self.delex_by_annotation(dial_turn))
- # else:
- # u_delex = self.delex_by_valdict(dial_turn["text"])
-
- single_turn["user"] = u
- # single_turn["user_delex"] = u_delex
-
- else: # system
- # delexicalize system response, either by annotation or by val_dict
- if dial_turn["span_info"]:
- s_delex = clean_text(self.delex_by_annotation(dial_turn))
- else:
- if not dial_turn["text"]:
- print(fn)
- s_delex = self.delex_by_valdict(dial_turn["text"])
- single_turn["resp"] = s_delex
-
- # get belief state, semi=informable/book=requestable, put into constraint_dict
- for domain in dial_domains:
- if not constraint_dict.get(domain):
- constraint_dict[domain] = OrderedDict()
- info_sv = dial_state[domain]["semi"]
- for s, v in info_sv.items():
- s, v = clean_slot_values(domain, s, v)
- if len(v.split()) > 1:
- v = " ".join([token.text for token in self.nlp(v)]).strip()
- if v != "":
- constraint_dict[domain][s] = v
- book_sv = dial_state[domain]["book"]
- for s, v in book_sv.items():
- if s == "booked":
- continue
- s, v = clean_slot_values(domain, s, v)
- if len(v.split()) > 1:
- v = " ".join([token.text for token in self.nlp(v)]).strip()
- if v != "":
- constraint_dict[domain][s] = v
-
- constraints = [] # list in format of [domain] slot value
- cons_delex = []
- turn_dom_bs = []
- for domain, info_slots in constraint_dict.items():
- if info_slots:
- constraints.append("[" + domain + "]")
- cons_delex.append("[" + domain + "]")
- for slot, value in info_slots.items():
- constraints.append(slot)
- constraints.extend(value.split())
- cons_delex.append(slot)
- if domain not in prev_constraint_dict:
- turn_dom_bs.append(domain)
- elif prev_constraint_dict[domain] != constraint_dict[domain]:
- turn_dom_bs.append(domain)
-
- sys_act_dict = {}
- turn_dom_da = set()
- for act in dial_turn["dialog_act"]:
- d, a = act.split("-") # split domain-act
- turn_dom_da.add(d)
- turn_dom_da = list(turn_dom_da)
- if len(turn_dom_da) != 1 and "general" in turn_dom_da:
- turn_dom_da.remove("general")
- if len(turn_dom_da) != 1 and "booking" in turn_dom_da:
- turn_dom_da.remove("booking")
-
- # get turn domain
- turn_domain = turn_dom_bs
- for dom in turn_dom_da:
- if dom != "booking" and dom not in turn_domain:
- turn_domain.append(dom)
- if not turn_domain:
- turn_domain = prev_turn_domain
- if len(turn_domain) == 2 and "general" in turn_domain:
- turn_domain.remove("general")
- if len(turn_domain) == 2:
- if len(prev_turn_domain) == 1 and prev_turn_domain[0] == turn_domain[1]:
- turn_domain = turn_domain[::-1]
-
- # get system action
- for dom in turn_domain:
- sys_act_dict[dom] = {}
- add_to_last_collect = []
- booking_act_map = {"inform": "offerbook", "book": "offerbooked"}
- for act, params in dial_turn["dialog_act"].items():
- if act == "general-greet":
- continue
- d, a = act.split("-")
- if d == "general" and d not in sys_act_dict:
- sys_act_dict[d] = {}
- if d == "booking":
- d = turn_domain[0]
- a = booking_act_map.get(a, a)
- add_p = []
- for param in params:
- p = param[0]
- if p == "none":
- continue
- elif ontology.da_abbr_to_slot_name.get(p):
- p = ontology.da_abbr_to_slot_name[p]
- if p not in add_p:
- add_p.append(p)
- add_to_last = True if a in ["request", "reqmore", "bye", "offerbook"] else False
- if add_to_last:
- add_to_last_collect.append((d, a, add_p))
- else:
- sys_act_dict[d][a] = add_p
- for d, a, add_p in add_to_last_collect:
- sys_act_dict[d][a] = add_p
-
- for d in copy.copy(sys_act_dict):
- acts = sys_act_dict[d]
- if not acts:
- del sys_act_dict[d]
- if "inform" in acts and "offerbooked" in acts:
- for s in sys_act_dict[d]["inform"]:
- sys_act_dict[d]["offerbooked"].append(s)
- del sys_act_dict[d]["inform"]
-
- ordered_sysact_dict[fn][len(dial["log"])] = sys_act_dict
-
- sys_act = []
- if "general-greet" in dial_turn["dialog_act"]:
- sys_act.extend(["[general]", "[greet]"])
- for d, acts in sys_act_dict.items():
- sys_act += ["[" + d + "]"]
- for a, slots in acts.items():
- self.unique_da[d + "-" + a] = 1
- sys_act += ["[" + a + "]"]
- sys_act += slots
-
- # get db pointers
- matnums = self.db.get_match_num(constraint_dict)
- match_dom = turn_domain[0] if len(turn_domain) == 1 else turn_domain[1]
- match = matnums[match_dom]
- dbvec = self.db.addDBPointer(match_dom, match)
- bkvec = self.db.addBookingPointer(dial_turn["dialog_act"])
-
- # 4 database pointer for domains, 2 for booking
- single_turn["pointer"] = ",".join([str(d) for d in dbvec + bkvec])
- single_turn["match"] = str(match)
- single_turn["constraint"] = " ".join(constraints)
- single_turn["cons_delex"] = " ".join(cons_delex)
- single_turn["sys_act"] = " ".join(sys_act)
- single_turn["turn_num"] = len(dial["log"])
- single_turn["turn_domain"] = " ".join(["[" + d + "]" for d in turn_domain])
-
- prev_turn_domain = copy.deepcopy(turn_domain)
- prev_constraint_dict = copy.deepcopy(constraint_dict)
-
- if "user" in single_turn:
- dial["log"].append(single_turn)
- for t in single_turn["user"].split() + single_turn["resp"].split() + constraints + sys_act:
- self.vocab.add_word(t)
-
- # NOTE: Commenting out delexicalisation because it is not used and
- # breaks when I use generated user dialogues for some reason
-
- # for t in single_turn["user_delex"].split():
- # if "[" in t and "]" in t and not t.startswith("[") and not t.endswith("]"):
- # single_turn["user_delex"].replace(t, t[t.index("[") : t.index("]") + 1])
- # elif not self.vocab.has_word(t):
- # self.vocab.add_word(t)
-
- single_turn = {}
-
- data[fn] = dial
- # pprint(dial)
- # if count == 20:
- # break
- self.vocab.construct()
- self.vocab.save_vocab("data/preprocessed_gen_usr_utts/UBAR/multi-woz-processed/vocab")
- with open("data/interim/gen_usr_utts/multi-woz-analysis/dialog_acts.json", "w") as f:
- json.dump(ordered_sysact_dict, f, indent=2)
- with open("data/interim/gen_usr_utts/multi-woz-analysis/dialog_act_type.json", "w") as f:
- json.dump(self.unique_da, f, indent=2)
- return data
-
-
-if __name__ == "__main__":
- db_paths = {
- "attraction": "data/raw/UBAR/db/attraction_db.json",
- "hospital": "data/raw/UBAR/db/hospital_db.json",
- "hotel": "data/raw/UBAR/db/hotel_db.json",
- "police": "data/raw/UBAR/db/police_db.json",
- "restaurant": "data/raw/UBAR/db/restaurant_db.json",
- "taxi": "data/raw/UBAR/db/taxi_db.json",
- "train": "data/raw/UBAR/db/train_db.json",
- }
- get_db_values("data/raw/UBAR/db/value_set.json")
- preprocess_db(db_paths)
- dh = DataPreprocessor()
- data = dh.preprocess_main()
- if not os.path.exists("data/preprocessed_gen_usr_utts/UBAR/multi-woz-processed"):
- os.mkdir("data/preprocessed_gen_usr_utts/UBAR/multi-woz-processed")
-
- with open("data/preprocessed_gen_usr_utts/UBAR/multi-woz-processed/data_for_ubar.json", "w") as f:
- json.dump(data, f, indent=2)
diff --git a/spaces/allknowingroger/Image-Models-Test5/app.py b/spaces/allknowingroger/Image-Models-Test5/app.py
deleted file mode 100644
index e102e3dd10da559e2c288e4ed0606bce81706299..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test5/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import gradio as gr
-# import os
-# import sys
-# from pathlib import Path
-import time
-
-models =[
- "Whybother/private-2",
- "Jinouga/haruno-sakura-boruto-v4",
- "optmal/headshot",
- "Neu256/Arc-diffusion-v1.0",
- "sayakpaul/lora-trained",
- "BastienPenalba/omaji",
- "Syedian123/rachel",
- "Royal/stable_diffusionv1-5",
- "anik424/SD_xl_base_madras_checks",
-]
-
-
-model_functions = {}
-model_idx = 1
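-# Try to load each model as a hosted Gradio interface; on failure, fall back to a stub that returns no image.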
-for model_path in models:
- try:
- model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False)
- except Exception as error:
- def the_fn(txt):
- return None
- model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"])
- model_idx+=1
-
-
-def send_it_idx(idx):
- def send_it_fn(prompt):
-        output = (model_functions.get(idx) or model_functions.get(1))(prompt)
- return output
- return send_it_fn
-
-def get_prompts(prompt_text):
- return prompt_text
-
-def clear_it(val):
- if int(val) != 0:
- val = 0
- else:
- val = 0
- pass
- return val
-
-def all_task_end(cnt,t_stamp):
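-    # Flag completion (tog=1) once 60 seconds have elapsed since the recorded start timestamp; otherwise keep the timer running.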
- to = t_stamp + 60
- et = time.time()
- if et > to and t_stamp != 0:
- d = gr.update(value=0)
- tog = gr.update(value=1)
- #print(f'to: {to} et: {et}')
- else:
- if cnt != 0:
- d = gr.update(value=et)
- else:
- d = gr.update(value=0)
- tog = gr.update(value=0)
- #print (f'passing: to: {to} et: {et}')
- pass
- return d, tog
-
-def all_task_start():
- print("\n\n\n\n\n\n\n")
- t = time.gmtime()
- t_stamp = time.time()
- current_time = time.strftime("%H:%M:%S", t)
- return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0)
-
-def clear_fn():
- nn = len(models)
- return tuple([None, *[None for _ in range(nn)]])
-
-
-
-with gr.Blocks(title="SD Models") as my_interface:
- with gr.Column(scale=12):
- # with gr.Row():
-        #     gr.Markdown("""- Primary prompt: what you want to draw (English words, e.g. a cat; adding commas helps; click the Improve button to refine)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""")
- with gr.Row():
- with gr.Row(scale=6):
- primary_prompt=gr.Textbox(label="Prompt", value="")
- # real_prompt=gr.Textbox(label="Real prompt")
- with gr.Row(scale=6):
- # improve_prompts_btn=gr.Button("Improve")
- with gr.Row():
- run=gr.Button("Run",variant="primary")
- clear_btn=gr.Button("Clear")
- with gr.Row():
- sd_outputs = {}
- model_idx = 1
- for model_path in models:
- with gr.Column(scale=3, min_width=320):
- with gr.Box():
- sd_outputs[model_idx] = gr.Image(label=model_path)
- pass
- model_idx += 1
- pass
- pass
-
- with gr.Row(visible=False):
- start_box=gr.Number(interactive=False)
- end_box=gr.Number(interactive=False)
- tog_box=gr.Textbox(value=0,interactive=False)
-
- start_box.change(
- all_task_end,
- [start_box, end_box],
- [start_box, tog_box],
- every=1,
- show_progress=False)
-
- primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box])
- run.click(all_task_start, None, [start_box, end_box, tog_box])
- runs_dict = {}
- model_idx = 1
- for model_path in models:
- runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]])
- model_idx += 1
- pass
- pass
-
- # improve_prompts_btn_clicked=improve_prompts_btn.click(
- # get_prompts,
- # inputs=[primary_prompt],
- # outputs=[primary_prompt],
- # cancels=list(runs_dict.values()))
- clear_btn.click(
- clear_fn,
- None,
- [primary_prompt, *list(sd_outputs.values())],
- cancels=[*list(runs_dict.values())])
- tog_box.change(
- clear_it,
- tog_box,
- tog_box,
- cancels=[*list(runs_dict.values())])
-
-my_interface.queue(concurrency_count=600, status_update_rate=1)
-my_interface.launch(inline=True, show_api=False)
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test87/README.md b/spaces/allknowingroger/Image-Models-Test87/README.md
deleted file mode 100644
index 37ff6d434313f02c1bd8749ce97ead1b5626f586..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test87/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test86
----
-
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Llama_v2/README.md b/spaces/allknowingroger/Llama_v2/README.md
deleted file mode 100644
index 4673b06c91bfc4bd5561e88ff24556fce8620786..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Llama_v2/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Llama V2
-emoji: 💻
-colorFrom: red
-colorTo: gray
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/amankishore/sjc/README.md b/spaces/amankishore/sjc/README.md
deleted file mode 100644
index d24f299d525fc2c140a3a007c2831e74056f81d7..0000000000000000000000000000000000000000
--- a/spaces/amankishore/sjc/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Sjc
-emoji: 💻
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/amitjamadagni/qs-benchmarks/plot_scripts/plot_display_com_cap.py b/spaces/amitjamadagni/qs-benchmarks/plot_scripts/plot_display_com_cap.py
deleted file mode 100644
index cbee5a278ad18330b8836c1822dd654396569fab..0000000000000000000000000000000000000000
--- a/spaces/amitjamadagni/qs-benchmarks/plot_scripts/plot_display_com_cap.py
+++ /dev/null
@@ -1,103 +0,0 @@
-import numpy as np
-import h5py
-import os
-
-import mercury as mr
-
-import sys
-sys.path.append('/plot_scripts/')
-from map_packages_colors_1v1 import *
-from plot_scripts_1v1 import *
-
-# package_str = ['qiskit' , 'cirq', 'qsimcirq', 'pennylane', 'pennylane_l', 'qibo', 'qibojit', 'yao', 'quest', 'qulacs', 'intel_qs_cpp', 'projectq', 'svsim', 'hybridq', 'hiq', 'qcgpu', 'qrack_sch', 'cuquantum_qiskit', 'cuquantum_qsimcirq', 'qpanda']
-
-
-def abs_time_pack(task, package, pr, N_end):
-
- if task == "Heisenberg dynamics":
- task = "hdyn"
- elif task == "Random Quantum Circuit":
- task = "rqc"
- elif task == "Quantum Fourier Transform":
- task = "qft"
-
- if pr == "Single":
- pr = "sp"
- elif pr == "Double":
- pr = "dp"
-
- fig, ax = plt.subplots()
-
- dir = os.getcwd()
-
- if task == 'hdyn' or task == 'qft':
- N_arr = np.arange(6, N_end, 2)
- elif task == 'rqc':
- N_arr = np.arange(12, N_end, 2)
-
-
- dat_fst = dir + '/data/{}/{}_singlethread_{}.h5'.format(task, package, pr)
- dat_fmt = dir + '/data/{}/{}_multithread_{}.h5'.format(task, package, pr)
- dat_fgpu = dir + '/data/{}/{}_gpu_{}.h5'.format(task, package, pr)
-
- if not os.path.isfile(dat_fst) and not os.path.isfile(dat_fmt) and not os.path.isfile(dat_fgpu):
- return mr.Md(f"Precision {pr} possibly not supported")
-
- mr.Md(f"TtS performance of simulation packages with different compute capabilities")
-
- if os.path.isfile(dat_fst):
- h5f_st = h5py.File(dat_fst, 'r')
- dat_st = h5f_st[storage_dict[package]][:]
- h5f_st.close()
- plot_abs_data_n_arr(N_arr, dat_st, package+'_'+task+'_singlethread_'+pr)
-
- if os.path.isfile(dat_fmt):
- h5f_mt = h5py.File(dat_fmt, 'r')
- dat_mt = h5f_mt[storage_dict[package]][:]
- h5f_mt.close()
- plot_abs_data_n_arr(N_arr, dat_mt, package+'_'+task+'_multithread_'+pr)
-
- if os.path.isfile(dat_fgpu):
- h5f_gpu = h5py.File(dat_fgpu, 'r')
- dat_gpu = h5f_gpu[storage_dict[package]][:]
- h5f_gpu.close()
- plot_abs_data_n_arr(N_arr, dat_gpu, package+'_'+task+'_gpu_'+pr)
-
- gen_settings(fig, ax, r"N (system size)", r"Time ($t_{package}$)", False, True, True, N_arr[0]-2, N_arr[-1], True, 10**-1, 10**5, "out", None)
-
- mr.Md("___")
- mr.Md(f"Relative performance to singlethread performance")
-
- fig, ax = plt.subplots()
-
- if os.path.isfile(dat_fst) and os.path.isfile(dat_fmt):
- plot_comp_data_n_arr(N_arr, dat_st, dat_st, package+'_'+task+'_singlethread_'+pr)
- plot_comp_data_n_arr(N_arr, dat_st, dat_mt, package+'_'+task+'_multithread_'+pr)
-
- if os.path.isfile(dat_fst) and os.path.isfile(dat_fgpu):
- plot_comp_data_n_arr(N_arr, dat_st, dat_gpu, package+'_'+task+'_gpu_'+pr)
-
- gen_settings(fig, ax, r"N (system size)", r"Relative to singlethread", False, True, True, N_arr[0]-2, N_arr[-1], True, 10**-1, 10**3, "out", None)
-
- mr.Md("___")
- mr.Md(f"Relative performance to multithread performance")
-
- fig, ax = plt.subplots()
-
- if os.path.isfile(dat_fmt) and os.path.isfile(dat_fgpu):
- plot_comp_data_n_arr(N_arr, dat_mt, dat_gpu, package+'_'+task+'_gpu_'+pr)
- plot_comp_data_n_arr(N_arr, dat_mt, dat_mt, package+'_'+task+'_multithread_'+pr)
-
- gen_settings(fig, ax, r"N (system size)", r"Relative to multithread", False, True, True, N_arr[0]-2, N_arr[-1], True, 10**-1, 10**2, "out", None)
-
- # else:
- # print(" Re-select the options as the requested option data is not available.")
-
-# pkg_str = ['qiskit' , 'cirq', 'qsimcirq', 'pennylane', 'pennylane_l', 'qibo', 'qibojit', 'yao', 'quest', 'qulacs', 'intel_qs_cpp', 'projectq', 'svsim', 'hybridq', 'hiq', 'qcgpu', 'qrack_sch']
-
-# abs_time_pack("Heisenberg dynamics", 'qsimcirq', 'Double', 36)
-
-# abs_time(pkg_str, task_1, p_com_cap, p_prec)
-# abs_time("Heisenberg dynamics", "Singlethread", "Single", 'qsimcirq')
-# abs_time_pack("Heisenberg dynamics", "Random Quantum Circuit", "Singlethread", "Single", 34)
-# abs_time_pack("Heisenberg dynamics", "Quantum Fourier Transform", "GPU", "Single", 38)
diff --git a/spaces/argilla/argilla-streamlit-customs/my_app/introduction.py b/spaces/argilla/argilla-streamlit-customs/my_app/introduction.py
deleted file mode 100644
index 5b2f09bc6c68a2b17e7cfbf3df0fd7e0ab3e4de8..0000000000000000000000000000000000000000
--- a/spaces/argilla/argilla-streamlit-customs/my_app/introduction.py
+++ /dev/null
@@ -1,33 +0,0 @@
-# Contents of ~/my_app/streamlit_app.py
-import streamlit as st
-
-st.set_page_config(page_title="Argilla Streamlit", page_icon="👋", layout="wide")
-
-
-x = st.columns(3)
-x[0].image("https://docs.argilla.io/en/latest/_static/images/logo-light-mode.svg", use_column_width=True)
-
-st.write("# Welcome to Argilla Streamlit! 👋")
-
-st.sidebar.success("Select on of the apps above.")
-
-st.success(
- "PRs are welcome on our [Github repo](https://github.com/argilla-io/argilla-streamlit)! 🙌 \n\n"
- "Check it out on the [Hugging Face Hub](https://huggingface.co/spaces/argilla/argilla-streamlit-customs)! 🚀 "
-)
-st.markdown(
- """
- Argilla is a production-ready framework for building and improving datasets for NLP projects. This repo is focused on extended UI functionalities for Argilla. 👑
-
- **👈 Select an app from the sidebar** to see some examples
- of what Argilla Streamlit Customs can do!
-
- ## Next Steps
- If you want to continue learning Argilla:
- - 🙋♀️ Join the [Argilla Slack Community](https://join.slack.com/t/rubrixworkspace/shared_invite/zt-whigkyjn-a3IUJLD7gDbTZ0rKlvcJ5g)
- - ⭐ Argilla [Github repo](https://github.com/argilla-io/argilla)
- - 📚 Argilla [documentation](https://docs.argilla.io) for more guides and tutorials.
- """
-)
-
-
diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/README.md b/spaces/artificialguybr/video-dubbing/TTS/docs/README.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/arxify/RVC-beta-v2-0618/export_onnx.py b/spaces/arxify/RVC-beta-v2-0618/export_onnx.py
deleted file mode 100644
index 95376d4294ebc4d8972c5ab4a72454419f3e8cdf..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/export_onnx.py
+++ /dev/null
@@ -1,54 +0,0 @@
-from infer_pack.models_onnx import SynthesizerTrnMsNSFsidM
-import torch
-
-if __name__ == "__main__":
-    MoeVS = True  # whether the model is for use with MoeVoiceStudio (formerly MoeSS)
-
- ModelPath = "Shiroha/shiroha.pth" # 模型路径
- ExportedPath = "model.onnx" # 输出路径
- hidden_channels = 256 # hidden_channels,为768Vec做准备
- cpt = torch.load(ModelPath, map_location="cpu")
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- print(*cpt["config"])
-
- test_phone = torch.rand(1, 200, hidden_channels) # hidden unit
-    test_phone_lengths = torch.tensor([200]).long()  # hidden unit length (apparently unused)
-    test_pitch = torch.randint(size=(1, 200), low=5, high=255)  # fundamental frequency (in Hz)
-    test_pitchf = torch.rand(1, 200)  # NSF fundamental frequency
-    test_ds = torch.LongTensor([0])  # speaker ID
-    test_rnd = torch.rand(1, 192, 200)  # noise (adds a random factor)
-
-    device = "cpu"  # device used for export (does not affect how the model is used later)
-
- net_g = SynthesizerTrnMsNSFsidM(
- *cpt["config"], is_half=False
-    )  # export in fp32 (fp16 support in C++ would require manually rearranging memory, so fp16 is not used for now)
- net_g.load_state_dict(cpt["weight"], strict=False)
- input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"]
- output_names = [
- "audio",
- ]
-    # net_g.construct_spkmixmap(n_speaker)  -- export of a multi-speaker (role-mix) track
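-    # The dynamic_axes below mark the time dimensions of phone/pitch/pitchf/rnd as variable-length for inference.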
- torch.onnx.export(
- net_g,
- (
- test_phone.to(device),
- test_phone_lengths.to(device),
- test_pitch.to(device),
- test_pitchf.to(device),
- test_ds.to(device),
- test_rnd.to(device),
- ),
- ExportedPath,
- dynamic_axes={
- "phone": [1],
- "pitch": [1],
- "pitchf": [1],
- "rnd": [2],
- },
- do_constant_folding=False,
- opset_version=16,
- verbose=False,
- input_names=input_names,
- output_names=output_names,
- )
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_CFB.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_CFB.py
deleted file mode 100644
index cb0c35295ce51cf3cc8be4d85b66b52ac85353f4..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_CFB.py
+++ /dev/null
@@ -1,411 +0,0 @@
-# ===================================================================
-#
-# Copyright (c) 2014, Legrandin
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# ===================================================================
-
-import unittest
-from binascii import unhexlify
-
-from Crypto.SelfTest.loader import load_test_vectors
-from Crypto.SelfTest.st_common import list_test_cases
-from Crypto.Util.py3compat import tobytes, is_string
-from Crypto.Cipher import AES, DES3, DES
-from Crypto.Hash import SHAKE128
-
-from Crypto.SelfTest.Cipher.test_CBC import BlockChainingTests
-
-
-def get_tag_random(tag, length):
- return SHAKE128.new(data=tobytes(tag)).read(length)
-
-
-class CfbTests(BlockChainingTests):
-
- aes_mode = AES.MODE_CFB
- des3_mode = DES3.MODE_CFB
-
- # Redefine test_unaligned_data_128/64
-
- def test_unaligned_data_128(self):
- plaintexts = [ b"7777777" ] * 100
-
- cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=8)
- ciphertexts = [ cipher.encrypt(x) for x in plaintexts ]
- cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=8)
- self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts)))
-
- cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=128)
- ciphertexts = [ cipher.encrypt(x) for x in plaintexts ]
- cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=128)
- self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts)))
-
- def test_unaligned_data_64(self):
- plaintexts = [ b"7777777" ] * 100
- cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=8)
- ciphertexts = [ cipher.encrypt(x) for x in plaintexts ]
- cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=8)
- self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts)))
-
- cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=64)
- ciphertexts = [ cipher.encrypt(x) for x in plaintexts ]
- cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=64)
- self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts)))
-
- # Extra
-
- def test_segment_size_128(self):
- for bits in range(8, 129, 8):
- cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128,
- segment_size=bits)
-
- for bits in 0, 7, 9, 127, 129:
- self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CFB,
- self.iv_128,
- segment_size=bits)
-
- def test_segment_size_64(self):
- for bits in range(8, 65, 8):
- cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64,
- segment_size=bits)
-
- for bits in 0, 7, 9, 63, 65:
-            self.assertRaises(ValueError, DES3.new, self.key_192, DES3.MODE_CFB,
- self.iv_64,
- segment_size=bits)
-
-
-class NistCfbVectors(unittest.TestCase):
-
- def _do_kat_aes_test(self, file_name, segment_size):
-
- test_vectors = load_test_vectors(("Cipher", "AES"),
- file_name,
- "AES CFB%d KAT" % segment_size,
- { "count" : lambda x: int(x) } )
- if test_vectors is None:
- return
-
- direction = None
- for tv in test_vectors:
-
- # The test vector file contains some directive lines
- if is_string(tv):
- direction = tv
- continue
-
- self.description = tv.desc
- cipher = AES.new(tv.key, AES.MODE_CFB, tv.iv,
- segment_size=segment_size)
- if direction == "[ENCRYPT]":
- self.assertEqual(cipher.encrypt(tv.plaintext), tv.ciphertext)
- elif direction == "[DECRYPT]":
- self.assertEqual(cipher.decrypt(tv.ciphertext), tv.plaintext)
- else:
- assert False
-
- # See Section 6.4.5 in AESAVS
- def _do_mct_aes_test(self, file_name, segment_size):
-
- test_vectors = load_test_vectors(("Cipher", "AES"),
- file_name,
- "AES CFB%d Montecarlo" % segment_size,
- { "count" : lambda x: int(x) } )
- if test_vectors is None:
- return
-
- assert(segment_size in (8, 128))
-
- direction = None
- for tv in test_vectors:
-
- # The test vector file contains some directive lines
- if is_string(tv):
- direction = tv
- continue
-
- self.description = tv.desc
- cipher = AES.new(tv.key, AES.MODE_CFB, tv.iv,
- segment_size=segment_size)
-
- def get_input(input_text, output_seq, j):
- # CFB128
- if segment_size == 128:
- if j >= 2:
- return output_seq[-2]
- return [input_text, tv.iv][j]
- # CFB8
- if j == 0:
- return input_text
- elif j <= 16:
- return tv.iv[j - 1:j]
- return output_seq[j - 17]
-
- if direction == '[ENCRYPT]':
- cts = []
- for j in range(1000):
- plaintext = get_input(tv.plaintext, cts, j)
- cts.append(cipher.encrypt(plaintext))
- self.assertEqual(cts[-1], tv.ciphertext)
- elif direction == '[DECRYPT]':
- pts = []
- for j in range(1000):
- ciphertext = get_input(tv.ciphertext, pts, j)
- pts.append(cipher.decrypt(ciphertext))
- self.assertEqual(pts[-1], tv.plaintext)
- else:
- assert False
-
- def _do_tdes_test(self, file_name, segment_size):
-
- test_vectors = load_test_vectors(("Cipher", "TDES"),
- file_name,
- "TDES CFB%d KAT" % segment_size,
- { "count" : lambda x: int(x) } )
- if test_vectors is None:
- return
-
- direction = None
- for tv in test_vectors:
-
- # The test vector file contains some directive lines
- if is_string(tv):
- direction = tv
- continue
-
- self.description = tv.desc
- if hasattr(tv, "keys"):
- cipher = DES.new(tv.keys, DES.MODE_CFB, tv.iv,
- segment_size=segment_size)
- else:
- if tv.key1 != tv.key3:
- key = tv.key1 + tv.key2 + tv.key3 # Option 3
- else:
- key = tv.key1 + tv.key2 # Option 2
- cipher = DES3.new(key, DES3.MODE_CFB, tv.iv,
- segment_size=segment_size)
- if direction == "[ENCRYPT]":
- self.assertEqual(cipher.encrypt(tv.plaintext), tv.ciphertext)
- elif direction == "[DECRYPT]":
- self.assertEqual(cipher.decrypt(tv.ciphertext), tv.plaintext)
- else:
- assert False
-
-
-# Create one test method per file
-nist_aes_kat_mmt_files = (
- # KAT
- "CFB?GFSbox128.rsp",
- "CFB?GFSbox192.rsp",
- "CFB?GFSbox256.rsp",
- "CFB?KeySbox128.rsp",
- "CFB?KeySbox192.rsp",
- "CFB?KeySbox256.rsp",
- "CFB?VarKey128.rsp",
- "CFB?VarKey192.rsp",
- "CFB?VarKey256.rsp",
- "CFB?VarTxt128.rsp",
- "CFB?VarTxt192.rsp",
- "CFB?VarTxt256.rsp",
- # MMT
- "CFB?MMT128.rsp",
- "CFB?MMT192.rsp",
- "CFB?MMT256.rsp",
- )
-nist_aes_mct_files = (
- "CFB?MCT128.rsp",
- "CFB?MCT192.rsp",
- "CFB?MCT256.rsp",
- )
-
-for file_gen_name in nist_aes_kat_mmt_files:
- for bits in "8", "128":
- file_name = file_gen_name.replace("?", bits)
- def new_func(self, file_name=file_name, bits=bits):
- self._do_kat_aes_test(file_name, int(bits))
- setattr(NistCfbVectors, "test_AES_" + file_name, new_func)
-
-for file_gen_name in nist_aes_mct_files:
- for bits in "8", "128":
- file_name = file_gen_name.replace("?", bits)
- def new_func(self, file_name=file_name, bits=bits):
- self._do_mct_aes_test(file_name, int(bits))
- setattr(NistCfbVectors, "test_AES_" + file_name, new_func)
-del file_name, new_func
-
-nist_tdes_files = (
- "TCFB?MMT2.rsp", # 2TDES
- "TCFB?MMT3.rsp", # 3TDES
- "TCFB?invperm.rsp", # Single DES
- "TCFB?permop.rsp",
- "TCFB?subtab.rsp",
- "TCFB?varkey.rsp",
- "TCFB?vartext.rsp",
- )
-
-for file_gen_name in nist_tdes_files:
- for bits in "8", "64":
- file_name = file_gen_name.replace("?", bits)
- def new_func(self, file_name=file_name, bits=bits):
- self._do_tdes_test(file_name, int(bits))
- setattr(NistCfbVectors, "test_TDES_" + file_name, new_func)
-
-# END OF NIST CFB TEST VECTORS
-
-
-class SP800TestVectors(unittest.TestCase):
- """Class exercising the CFB test vectors found in Section F.3
-    of NIST SP 800-38A"""
-
- def test_aes_128_cfb8(self):
- plaintext = '6bc1bee22e409f96e93d7e117393172aae2d'
- ciphertext = '3b79424c9c0dd436bace9e0ed4586a4f32b9'
- key = '2b7e151628aed2a6abf7158809cf4f3c'
- iv = '000102030405060708090a0b0c0d0e0f'
-
- key = unhexlify(key)
- iv = unhexlify(iv)
- plaintext = unhexlify(plaintext)
- ciphertext = unhexlify(ciphertext)
-
- cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8)
- self.assertEqual(cipher.encrypt(plaintext), ciphertext)
- cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8)
- self.assertEqual(cipher.decrypt(ciphertext), plaintext)
-
- def test_aes_192_cfb8(self):
- plaintext = '6bc1bee22e409f96e93d7e117393172aae2d'
- ciphertext = 'cda2521ef0a905ca44cd057cbf0d47a0678a'
- key = '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b'
- iv = '000102030405060708090a0b0c0d0e0f'
-
- key = unhexlify(key)
- iv = unhexlify(iv)
- plaintext = unhexlify(plaintext)
- ciphertext = unhexlify(ciphertext)
-
- cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8)
- self.assertEqual(cipher.encrypt(plaintext), ciphertext)
- cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8)
- self.assertEqual(cipher.decrypt(ciphertext), plaintext)
-
- def test_aes_256_cfb8(self):
- plaintext = '6bc1bee22e409f96e93d7e117393172aae2d'
- ciphertext = 'dc1f1a8520a64db55fcc8ac554844e889700'
- key = '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4'
- iv = '000102030405060708090a0b0c0d0e0f'
-
- key = unhexlify(key)
- iv = unhexlify(iv)
- plaintext = unhexlify(plaintext)
- ciphertext = unhexlify(ciphertext)
-
- cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8)
- self.assertEqual(cipher.encrypt(plaintext), ciphertext)
- cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8)
- self.assertEqual(cipher.decrypt(ciphertext), plaintext)
-
- def test_aes_128_cfb128(self):
- plaintext = '6bc1bee22e409f96e93d7e117393172a' +\
- 'ae2d8a571e03ac9c9eb76fac45af8e51' +\
- '30c81c46a35ce411e5fbc1191a0a52ef' +\
- 'f69f2445df4f9b17ad2b417be66c3710'
- ciphertext = '3b3fd92eb72dad20333449f8e83cfb4a' +\
- 'c8a64537a0b3a93fcde3cdad9f1ce58b' +\
- '26751f67a3cbb140b1808cf187a4f4df' +\
- 'c04b05357c5d1c0eeac4c66f9ff7f2e6'
- key = '2b7e151628aed2a6abf7158809cf4f3c'
- iv = '000102030405060708090a0b0c0d0e0f'
-
- key = unhexlify(key)
- iv = unhexlify(iv)
- plaintext = unhexlify(plaintext)
- ciphertext = unhexlify(ciphertext)
-
- cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128)
- self.assertEqual(cipher.encrypt(plaintext), ciphertext)
- cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128)
- self.assertEqual(cipher.decrypt(ciphertext), plaintext)
-
- def test_aes_192_cfb128(self):
- plaintext = '6bc1bee22e409f96e93d7e117393172a' +\
- 'ae2d8a571e03ac9c9eb76fac45af8e51' +\
- '30c81c46a35ce411e5fbc1191a0a52ef' +\
- 'f69f2445df4f9b17ad2b417be66c3710'
- ciphertext = 'cdc80d6fddf18cab34c25909c99a4174' +\
- '67ce7f7f81173621961a2b70171d3d7a' +\
- '2e1e8a1dd59b88b1c8e60fed1efac4c9' +\
- 'c05f9f9ca9834fa042ae8fba584b09ff'
- key = '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b'
- iv = '000102030405060708090a0b0c0d0e0f'
-
- key = unhexlify(key)
- iv = unhexlify(iv)
- plaintext = unhexlify(plaintext)
- ciphertext = unhexlify(ciphertext)
-
- cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128)
- self.assertEqual(cipher.encrypt(plaintext), ciphertext)
- cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128)
- self.assertEqual(cipher.decrypt(ciphertext), plaintext)
-
- def test_aes_256_cfb128(self):
- plaintext = '6bc1bee22e409f96e93d7e117393172a' +\
- 'ae2d8a571e03ac9c9eb76fac45af8e51' +\
- '30c81c46a35ce411e5fbc1191a0a52ef' +\
- 'f69f2445df4f9b17ad2b417be66c3710'
-
- ciphertext = 'dc7e84bfda79164b7ecd8486985d3860' +\
- '39ffed143b28b1c832113c6331e5407b' +\
- 'df10132415e54b92a13ed0a8267ae2f9' +\
- '75a385741ab9cef82031623d55b1e471'
- key = '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4'
- iv = '000102030405060708090a0b0c0d0e0f'
-
- key = unhexlify(key)
- iv = unhexlify(iv)
- plaintext = unhexlify(plaintext)
- ciphertext = unhexlify(ciphertext)
-
- cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128)
- self.assertEqual(cipher.encrypt(plaintext), ciphertext)
- cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128)
- self.assertEqual(cipher.decrypt(ciphertext), plaintext)
-
-
-def get_tests(config={}):
- tests = []
- tests += list_test_cases(CfbTests)
- if config.get('slow_tests'):
- tests += list_test_cases(NistCfbVectors)
- tests += list_test_cases(SP800TestVectors)
- return tests
-
-
-if __name__ == '__main__':
- suite = lambda: unittest.TestSuite(get_tests())
- unittest.main(defaultTest='suite')
diff --git a/spaces/asciicorp/hotel-chat/markup.py b/spaces/asciicorp/hotel-chat/markup.py
deleted file mode 100644
index d2936a64624beea2621c7f50ac6ed534d225ef0d..0000000000000000000000000000000000000000
--- a/spaces/asciicorp/hotel-chat/markup.py
+++ /dev/null
@@ -1,20 +0,0 @@
-def hotelchat_app():
- return """
-    Introduction
-
-    An autonomous customer service chatbot designed for hotels, providing comprehensive information about the hotel and facilitating reservations.
-
-    In this demo, we have equipped the chatbot with detailed information about a fictional hotel called Obsidian Heritage Colombo, including various room options, amenities, hotel policies, location, contact details, and all necessary information for a hotel stay.
-    """
-
-def hotelchat_app_hf():
-    return """
-    About this app
-
-    Some features may not work on Hugging Face due to file write limitations:
-
-    The chatbot with ordering functionality may not work properly, but the chatbot without ordering is available. To fully test all features, clone the app and run it locally.
-
-    The room reservation functionality of the chatbot is currently in an experimental phase. When mentioning arrival or departure dates, the chatbot will provide answers clearly indicating which is which. For example, if you say "We will be arriving tomorrow," simply stating "tomorrow" will not suffice; instead, the chatbot will respond by acknowledging the arrival date specifically. Additionally, the chatbot may occasionally ask for the same details repeatedly, but if you inform it that you have already provided the information, it will correct itself accordingly.
-    """
diff --git a/spaces/ashercn97/AsherTesting/modules/llamacpp_hf.py b/spaces/ashercn97/AsherTesting/modules/llamacpp_hf.py
deleted file mode 100644
index e09c1a741257babda3d0620661559858aadc7854..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/modules/llamacpp_hf.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import os
-from pathlib import Path
-from typing import Any, Dict, Optional, Union
-
-import torch
-from torch.nn import CrossEntropyLoss
-from transformers import GenerationConfig, PretrainedConfig, PreTrainedModel
-from transformers.modeling_outputs import CausalLMOutputWithPast
-
-from modules import shared
-from modules.logging_colors import logger
-
-if torch.cuda.is_available():
- from llama_cpp_cuda import Llama
-else:
- from llama_cpp import Llama
-
-class LlamacppHF(PreTrainedModel):
- def __init__(self, model):
- super().__init__(PretrainedConfig())
- self.model = model
- self.generation_config = GenerationConfig()
- self.cache = None
-
- def _validate_model_class(self):
- pass
-
- def _validate_model_kwargs(self, model_kwargs: Dict[str, Any]):
- pass
-
- def prepare_inputs_for_generation(self, input_ids, **kwargs):
- return {'input_ids': input_ids, **kwargs}
-
- @property
- def device(self) -> torch.device:
- return torch.device(0)
-
- def __call__(self, *args, **kwargs):
- # TODO: Some decoding methods (such as Contrastive Search) may not work at this time
- assert len(args) == 0, 'no *args should be passed to forward'
- use_cache = kwargs.get('use_cache', True)
- labels = kwargs.get('labels', None)
- seq = kwargs['input_ids'][0].tolist()
- cache = kwargs['past_key_values'] if 'past_key_values' in kwargs else None
-
- # Make the forward call
- seq_tensor = torch.tensor(seq)
- if labels is None:
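-            # Reuse the llama.cpp state when the new sequence extends the cached one by a single token; otherwise reset and re-evaluate the full prompt.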
- if self.cache is None or not torch.equal(self.cache, seq_tensor[:-1]):
- self.model.reset()
- self.model.eval(seq)
- else:
- self.model.eval([seq[-1]])
-
- logits = torch.tensor(self.model.eval_logits[-1]).view(1, 1, -1).to(kwargs['input_ids'].device)
- else:
- self.model.reset()
- self.model.eval(seq)
- logits = torch.tensor(self.model.eval_logits)
- logits = logits.view(1, logits.shape[0], logits.shape[1]).to(kwargs['input_ids'].device)
-
- self.cache = seq_tensor
-
- # Based on transformers/models/llama/modeling_llama.py
- loss = None
- if labels is not None:
- # Shift so that tokens < n predict n
- shift_logits = logits[..., :-1, :].contiguous()
- shift_labels = labels[..., 1:].contiguous()
- # Flatten the tokens
- loss_fct = CrossEntropyLoss()
- shift_logits = shift_logits.view(-1, logits.shape[-1])
- shift_labels = shift_labels.view(-1)
- # Enable model parallelism
- shift_labels = shift_labels.to(shift_logits.device)
- loss = loss_fct(shift_logits, shift_labels)
-
- return CausalLMOutputWithPast(logits=logits, past_key_values=cache if use_cache else None, loss=loss)
-
- @classmethod
- def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], *model_args, **kwargs):
- assert len(model_args) == 0 and len(kwargs) == 0, "extra args is currently not supported"
- if isinstance(pretrained_model_name_or_path, str):
- pretrained_model_name_or_path = Path(pretrained_model_name_or_path)
-
- path = Path(f'{shared.args.model_dir}') / Path(pretrained_model_name_or_path)
- if path.is_file():
- model_file = path
- else:
- model_file = list(path.glob('*ggml*.bin'))[0]
-
- logger.info(f"llama.cpp weights detected: {model_file}\n")
- params = {
- 'model_path': str(model_file),
- 'n_ctx': shared.args.n_ctx,
- 'seed': int(shared.args.llama_cpp_seed),
- 'n_threads': shared.args.threads or None,
- 'n_batch': shared.args.n_batch,
- 'use_mmap': not shared.args.no_mmap,
- 'use_mlock': shared.args.mlock,
- 'low_vram': shared.args.low_vram,
- 'n_gpu_layers': shared.args.n_gpu_layers,
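-            # The next two entries describe positional scaling: alpha_value raises the rotary base (NTK-style scaling) and compress_pos_emb linearly compresses positions.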
- 'rope_freq_base': 10000 * shared.args.alpha_value ** (64/63.),
- 'rope_freq_scale': 1.0 / shared.args.compress_pos_emb,
- 'logits_all': True,
- }
-
- model = Llama(**params)
- return LlamacppHF(model)
diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Mahdi Torabi Rad.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Mahdi Torabi Rad.html
deleted file mode 100644
index aaae854cea5fab8dc26ab10726026facac92948c..0000000000000000000000000000000000000000
--- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Mahdi Torabi Rad.html
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
-
- Mahdi Torabi Rad
-
-
-
-
-
-
-Mahdi Torabi Rad
-
-Mentee to Mentor.
-
-1- What is your motivation to be a mentor with us at SharpestMinds?
-- Learnt things on own and got a good understanding of the path on how to move from an adjacent field towards data science. Want to share this with future mentees and show that it's possible. Also, want to be active within the SM community.
-- Helping people achieve career success brings satisfaction.
-
-2- What's your career journey been like in Data Science?
-- Have a degree in Mech Engg, and worked on computational modelling in PhD.
-- Started PhD in 2018 and have coding capabilities.
-- Started looking for career opportunities in DS/ML in 2019, decided to make a move, was introduced to SM by a friend, and was mentored by Richard.
-- Got a job as Lead M.L. Engineer for a startup in Waterloo.
-- Currently working as Senior D.S. at current company.
-
-3- What's the biggest challenge a newcomer faces when trying to break into the data science role? How can you help them with this?
-- There is no one challenge that is faced by everyone; everyone faces different challenges. Assuming someone who has finished a Masters or PhD in a computational field, is comfortable with programming, and has a good understanding of math concepts, the challenge for them is how they can use their core skills to build a good portfolio to get interviews and get a job.
-- Can help mentees understand and define an interesting project and give technical help to build a portfolio.
-
-4- How was your experience as an SM mentee with your mentor and with SharpestMinds? Did you work on any projects?
-- Worked on 3 projects. Two of these were simple, on linear regression and classification. One project was on reinforcement learning in finance (https://github.com/mtorabirad/Pair-Trading-Reinforcement-Learning)
-- The experience was beneficial. The most important help was with reformatting the resume. It was initially purely academic; the mentor helped make it industry ready to be able to land interviews and add projects along with it.
-- Alejandro helped in offer negotiation and in choosing an offer amongst 3 different offers, and eventually selected the one for the role currently working in.
-
-5- You mentioned you want to actively engage with the community. How do you envision this?
-- Would like to work on a project with 2-3 mentees which can be interesting and come up with a nice and sexy solution and report for a problem. This can be very helpful for mentees to showcase in their portfolios and also help them collaborate with each other.
-- Identify mentees in specific fields and go through reading resources together. Can help do sessions on time series forecasting and host space for discussion on it.
-
-6- Do you have any questions for me regarding SM?
-- What are the current mentee profiles on the platform?
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/avivdm1/AutoGPT/autogpt/json_utils/utilities.py b/spaces/avivdm1/AutoGPT/autogpt/json_utils/utilities.py
deleted file mode 100644
index eb9bb687750460fed2f4547b67e41f8e8c877a41..0000000000000000000000000000000000000000
--- a/spaces/avivdm1/AutoGPT/autogpt/json_utils/utilities.py
+++ /dev/null
@@ -1,54 +0,0 @@
-"""Utilities for the json_fixes package."""
-import json
-import re
-
-from jsonschema import Draft7Validator
-
-from autogpt.config import Config
-from autogpt.logs import logger
-
-CFG = Config()
-
-
-def extract_char_position(error_message: str) -> int:
- """Extract the character position from the JSONDecodeError message.
-
- Args:
- error_message (str): The error message from the JSONDecodeError
- exception.
-
- Returns:
- int: The character position.
- """
-
- char_pattern = re.compile(r"\(char (\d+)\)")
- if match := char_pattern.search(error_message):
- return int(match[1])
- else:
- raise ValueError("Character position not found in the error message.")
-
-
-def validate_json(json_object: object, schema_name: object) -> object:
- """
- :type schema_name: object
- :param schema_name:
- :type json_object: object
- """
- with open(f"autogpt/json_utils/{schema_name}.json", "r") as f:
- schema = json.load(f)
- validator = Draft7Validator(schema)
-
- if errors := sorted(validator.iter_errors(json_object), key=lambda e: e.path):
- logger.error("The JSON object is invalid.")
- if CFG.debug_mode:
- logger.error(
- json.dumps(json_object, indent=4)
- ) # Replace 'json_object' with the variable containing the JSON data
- logger.error("The following issues were found:")
-
- for error in errors:
- logger.error(f"Error: {error.message}")
- elif CFG.debug_mode:
- print("The JSON object is valid.")
-
- return json_object
diff --git a/spaces/awacke1/03-AW-ChatbotBlenderbot/README.md b/spaces/awacke1/03-AW-ChatbotBlenderbot/README.md
deleted file mode 100644
index 14dca5024e415ef29b1e009a31066fb026fb016b..0000000000000000000000000000000000000000
--- a/spaces/awacke1/03-AW-ChatbotBlenderbot/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 03 AW ChatbotBlenderbot
-emoji: ⚡
-colorFrom: purple
-colorTo: red
-sdk: gradio
-sdk_version: 3.6
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/MN.Map.Hospitals.Top.Five/README.md b/spaces/awacke1/MN.Map.Hospitals.Top.Five/README.md
deleted file mode 100644
index 3cef0b3dd5bed0786a33524a97894566f7e94e02..0000000000000000000000000000000000000000
--- a/spaces/awacke1/MN.Map.Hospitals.Top.Five/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: MN.Map.Hospitals.Top.Five
-emoji: 📊
-colorFrom: gray
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/Prompt-Refinery-Text-to-Image-Generation/app.py b/spaces/awacke1/Prompt-Refinery-Text-to-Image-Generation/app.py
deleted file mode 100644
index 0768eb88f3353204a8542bd3caaf20c0c0d39aee..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Prompt-Refinery-Text-to-Image-Generation/app.py
+++ /dev/null
@@ -1,94 +0,0 @@
-import gradio as gr
-import os
-from share_btn import community_icon_html, loading_icon_html, share_js
-
-text_gen = gr.Interface.load(name="spaces/Gustavosta/MagicPrompt-Stable-Diffusion")
-stable_diffusion = gr.Blocks.load(name="spaces/runwayml/stable-diffusion-v1-5")
-
-def get_images(prompt):
- gallery_dir = stable_diffusion(prompt, fn_index=2)
- sd_output = [os.path.join(gallery_dir, image) for image in os.listdir(gallery_dir)]
- return sd_output, gr.update(visible=True), gr.update(visible=True), gr.update(visible=True)
-
-def get_prompts(prompt_text):
- return text_gen(prompt_text)
-
-css = '''
-.animate-spin {
- animation: spin 1s linear infinite;
-}
-@keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
-}
-#share-btn-container {
- display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
-}
-#share-btn {
- all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;
-}
-#share-btn * {
- all: unset;
-}
-#share-btn-container div:nth-child(-n+2){
- width: auto !important;
- min-height: 0px !important;
-}
-#share-btn-container .wrap {
- display: none !important;
-}
-a {text-decoration-line: underline;}
-'''
-with gr.Blocks(css=css) as demo:
- gr.HTML("""
-
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Forbes Ewan Spence 24H with the Jolla Smartphone the Crowdfunded Success Story from Finland.md b/spaces/cihyFjudo/fairness-paper-search/Forbes Ewan Spence 24H with the Jolla Smartphone the Crowdfunded Success Story from Finland.md
deleted file mode 100644
index 16bfddf707ea00843fe24cfc06034b7d5c816107..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Forbes Ewan Spence 24H with the Jolla Smartphone the Crowdfunded Success Story from Finland.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-Forbes’ Ewan Spence: 24H with the Jolla Smartphone
-
-
-
-
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/vegalite/schema.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/vegalite/schema.py
deleted file mode 100644
index e94c3d1991e96da81efe13cfe06214166afe80d1..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/vegalite/schema.py
+++ /dev/null
@@ -1,3 +0,0 @@
-"""Altair schema wrappers"""
-# ruff: noqa
-from .v5.schema import *
diff --git a/spaces/codesue/streamlit-tfx/README.md b/spaces/codesue/streamlit-tfx/README.md
deleted file mode 100644
index 07b2b91aa51e4bfdc066ca9042c29780e5613db0..0000000000000000000000000000000000000000
--- a/spaces/codesue/streamlit-tfx/README.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-title: 'streamlit-tfx'
-emoji: 🌱
-colorFrom: yellow
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.15.2
-app_file: tests/test_streamlit_tfx.py
-pinned: false
----
-
-# streamlit-tfx: TensorFlow Extended visualizers for Streamlit apps
-
-`streamlit-tfx` provides utilities for visualizing [TensorFlow Extended](https://www.tensorflow.org/tfx)
-artifacts in [Streamlit](https://streamlit.io) apps.
-
-[![GitHub][github_badge]][github_link] [![PyPI][pypi_badge]][pypi_link]
-
-> ### 🌱 Just sprouting!
-> This project is in the very beginning stages of development.
-> It has super hacky code that's not optimized or well-tested.
-> It's only intended to be used as a proof of concept of visualizing `tfx`
-> artifacts outside of Jupyter notebook.
-
-## Installation
-
-``` shell
-git clone https://github.com/codesue/streamlit-tfx.git
-cd streamlit-tfx
-poetry install
-```
-
-## Getting started
-
-```python
-import streamlit_tfx as st_tfx
-
-st_tfx.display(item)
-st_tfx.display_statistics(statistics)
-st_tfx.display_schema(schema)
-st_tfx.display_anomalies(anomalies)
-st_tfx.display_eval_result_plot(eval_result)
-st_tfx.display_eval_result_slicing_attributions(eval_result)
-st_tfx.display_eval_result_slicing_metrics(eval_result)
-st_tfx.display_eval_results_time_series(eval_results)
-```
-
----
-
-`streamlit-tfx` essentially ports `tfma` and `tfdv` visualizers, copyrighted
-The TensorFlow Authors and licensed under Apache License 2.0.
-
-Most artifacts in `tests/artifacts/` were generated by running the [TFX Keras Component tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/components_keras).
-The anomalies artifact with anomalies was generated by running the [TensorFlow Model Analysis tutorial](https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic).
-
-🚀 Inspired by [spacy-streamlit](https://github.com/explosion/spacy-streamlit)
-and [streamlit-player](https://github.com/okld/streamlit-player).
-
-[github_badge]: https://badgen.net/badge/icon/GitHub?icon=github&color=black&label
-[github_link]: https://github.com/codesue/streamlit-tfx
-
-[pypi_badge]: https://badgen.net/pypi/v/streamlit-tfx?icon=pypi&color=black&label
-[pypi_link]: https://pypi.org/project/streamlit-tfx
diff --git a/spaces/colakin/video-generater/public/ffmpeg/compat/aix/math.h b/spaces/colakin/video-generater/public/ffmpeg/compat/aix/math.h
deleted file mode 100644
index dee13c8dd7c0c39b1d6aeaedc3707faff7bfa0bd..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/compat/aix/math.h
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
- * Work around the class() function in AIX math.h clashing with
- * identifiers named "class".
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef COMPAT_AIX_MATH_H
-#define COMPAT_AIX_MATH_H
-
-#define class class_in_math_h_causes_problems
-
-#include_next <math.h>
-
-#undef class
-
-#endif /* COMPAT_AIX_MATH_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3dsp.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3dsp.c
deleted file mode 100644
index 22cb5f242e8cd77c4955aceb9baa76f96feeefc6..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3dsp.c
+++ /dev/null
@@ -1,399 +0,0 @@
-/*
- * AC-3 DSP functions
- * Copyright (c) 2011 Justin Ruggles
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <math.h>
-#include <stdint.h>
-#include <string.h>
-
-#include "config.h"
-#include "libavutil/attributes.h"
-#include "libavutil/common.h"
-#include "libavutil/intmath.h"
-#include "libavutil/mem_internal.h"
-
-#include "ac3defs.h"
-#include "ac3dsp.h"
-#include "ac3tab.h"
-#include "mathops.h"
-
-static void ac3_exponent_min_c(uint8_t *exp, int num_reuse_blocks, int nb_coefs)
-{
- int blk, i;
-
- if (!num_reuse_blocks)
- return;
-
- for (i = 0; i < nb_coefs; i++) {
- uint8_t min_exp = *exp;
- uint8_t *exp1 = exp + 256;
- for (blk = 0; blk < num_reuse_blocks; blk++) {
- uint8_t next_exp = *exp1;
- if (next_exp < min_exp)
- min_exp = next_exp;
- exp1 += 256;
- }
- *exp++ = min_exp;
- }
-}
-
-static void float_to_fixed24_c(int32_t *dst, const float *src, unsigned int len)
-{
- const float scale = 1 << 24;
- do {
- *dst++ = lrintf(*src++ * scale);
- *dst++ = lrintf(*src++ * scale);
- *dst++ = lrintf(*src++ * scale);
- *dst++ = lrintf(*src++ * scale);
- *dst++ = lrintf(*src++ * scale);
- *dst++ = lrintf(*src++ * scale);
- *dst++ = lrintf(*src++ * scale);
- *dst++ = lrintf(*src++ * scale);
- len -= 8;
- } while (len > 0);
-}
-
-static void ac3_bit_alloc_calc_bap_c(int16_t *mask, int16_t *psd,
- int start, int end,
- int snr_offset, int floor,
- const uint8_t *bap_tab, uint8_t *bap)
-{
- int bin, band, band_end;
-
- /* special case, if snr offset is -960, set all bap's to zero */
- if (snr_offset == -960) {
- memset(bap, 0, AC3_MAX_COEFS);
- return;
- }
-
- bin = start;
- band = ff_ac3_bin_to_band_tab[start];
- do {
- int m = (FFMAX(mask[band] - snr_offset - floor, 0) & 0x1FE0) + floor;
- band_end = ff_ac3_band_start_tab[++band];
- band_end = FFMIN(band_end, end);
-
- for (; bin < band_end; bin++) {
- int address = av_clip_uintp2((psd[bin] - m) >> 5, 6);
- bap[bin] = bap_tab[address];
- }
- } while (end > band_end);
-}
-
-static void ac3_update_bap_counts_c(uint16_t mant_cnt[16], uint8_t *bap,
- int len)
-{
- while (len-- > 0)
- mant_cnt[bap[len]]++;
-}
-
-DECLARE_ALIGNED(16, const uint16_t, ff_ac3_bap_bits)[16] = {
- 0, 0, 0, 3, 0, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16
-};
-
-static int ac3_compute_mantissa_size_c(uint16_t mant_cnt[6][16])
-{
- int blk, bap;
- int bits = 0;
-
- for (blk = 0; blk < AC3_MAX_BLOCKS; blk++) {
- // bap=1 : 3 mantissas in 5 bits
- bits += (mant_cnt[blk][1] / 3) * 5;
- // bap=2 : 3 mantissas in 7 bits
- // bap=4 : 2 mantissas in 7 bits
- bits += ((mant_cnt[blk][2] / 3) + (mant_cnt[blk][4] >> 1)) * 7;
- // bap=3 : 1 mantissa in 3 bits
- bits += mant_cnt[blk][3] * 3;
- // bap=5 to 15 : get bits per mantissa from table
- for (bap = 5; bap < 16; bap++)
- bits += mant_cnt[blk][bap] * ff_ac3_bap_bits[bap];
- }
- return bits;
-}
-
-static void ac3_extract_exponents_c(uint8_t *exp, int32_t *coef, int nb_coefs)
-{
- int i;
-
- for (i = 0; i < nb_coefs; i++) {
- int v = abs(coef[i]);
- exp[i] = v ? 23 - av_log2(v) : 24;
- }
-}
-
-static void ac3_sum_square_butterfly_int32_c(int64_t sum[4],
- const int32_t *coef0,
- const int32_t *coef1,
- int len)
-{
- int i;
-
- sum[0] = sum[1] = sum[2] = sum[3] = 0;
-
- for (i = 0; i < len; i++) {
- int lt = coef0[i];
- int rt = coef1[i];
- int md = lt + rt;
- int sd = lt - rt;
- MAC64(sum[0], lt, lt);
- MAC64(sum[1], rt, rt);
- MAC64(sum[2], md, md);
- MAC64(sum[3], sd, sd);
- }
-}
-
-static void ac3_sum_square_butterfly_float_c(float sum[4],
- const float *coef0,
- const float *coef1,
- int len)
-{
- int i;
-
- sum[0] = sum[1] = sum[2] = sum[3] = 0;
-
- for (i = 0; i < len; i++) {
- float lt = coef0[i];
- float rt = coef1[i];
- float md = lt + rt;
- float sd = lt - rt;
- sum[0] += lt * lt;
- sum[1] += rt * rt;
- sum[2] += md * md;
- sum[3] += sd * sd;
- }
-}
-
-static void ac3_downmix_5_to_2_symmetric_c(float **samples, float **matrix,
- int len)
-{
- int i;
- float v0, v1;
- float front_mix = matrix[0][0];
- float center_mix = matrix[0][1];
- float surround_mix = matrix[0][3];
-
- for (i = 0; i < len; i++) {
- v0 = samples[0][i] * front_mix +
- samples[1][i] * center_mix +
- samples[3][i] * surround_mix;
-
- v1 = samples[1][i] * center_mix +
- samples[2][i] * front_mix +
- samples[4][i] * surround_mix;
-
- samples[0][i] = v0;
- samples[1][i] = v1;
- }
-}
-
-static void ac3_downmix_5_to_1_symmetric_c(float **samples, float **matrix,
- int len)
-{
- int i;
- float front_mix = matrix[0][0];
- float center_mix = matrix[0][1];
- float surround_mix = matrix[0][3];
-
- for (i = 0; i < len; i++) {
- samples[0][i] = samples[0][i] * front_mix +
- samples[1][i] * center_mix +
- samples[2][i] * front_mix +
- samples[3][i] * surround_mix +
- samples[4][i] * surround_mix;
- }
-}
-
-static void ac3_downmix_c(float **samples, float **matrix,
- int out_ch, int in_ch, int len)
-{
- int i, j;
- float v0, v1;
-
- if (out_ch == 2) {
- for (i = 0; i < len; i++) {
- v0 = v1 = 0.0f;
- for (j = 0; j < in_ch; j++) {
- v0 += samples[j][i] * matrix[0][j];
- v1 += samples[j][i] * matrix[1][j];
- }
- samples[0][i] = v0;
- samples[1][i] = v1;
- }
- } else if (out_ch == 1) {
- for (i = 0; i < len; i++) {
- v0 = 0.0f;
- for (j = 0; j < in_ch; j++)
- v0 += samples[j][i] * matrix[0][j];
- samples[0][i] = v0;
- }
- }
-}
-
-static void ac3_downmix_5_to_2_symmetric_c_fixed(int32_t **samples, int16_t **matrix,
- int len)
-{
- int i;
- int64_t v0, v1;
- int16_t front_mix = matrix[0][0];
- int16_t center_mix = matrix[0][1];
- int16_t surround_mix = matrix[0][3];
-
- for (i = 0; i < len; i++) {
- v0 = (int64_t)samples[0][i] * front_mix +
- (int64_t)samples[1][i] * center_mix +
- (int64_t)samples[3][i] * surround_mix;
-
- v1 = (int64_t)samples[1][i] * center_mix +
- (int64_t)samples[2][i] * front_mix +
- (int64_t)samples[4][i] * surround_mix;
-
- samples[0][i] = (v0+2048)>>12;
- samples[1][i] = (v1+2048)>>12;
- }
-}
-
-static void ac3_downmix_5_to_1_symmetric_c_fixed(int32_t **samples, int16_t **matrix,
- int len)
-{
- int i;
- int64_t v0;
- int16_t front_mix = matrix[0][0];
- int16_t center_mix = matrix[0][1];
- int16_t surround_mix = matrix[0][3];
-
- for (i = 0; i < len; i++) {
- v0 = (int64_t)samples[0][i] * front_mix +
- (int64_t)samples[1][i] * center_mix +
- (int64_t)samples[2][i] * front_mix +
- (int64_t)samples[3][i] * surround_mix +
- (int64_t)samples[4][i] * surround_mix;
-
- samples[0][i] = (v0+2048)>>12;
- }
-}
-
-static void ac3_downmix_c_fixed(int32_t **samples, int16_t **matrix,
- int out_ch, int in_ch, int len)
-{
- int i, j;
- int64_t v0, v1;
- if (out_ch == 2) {
- for (i = 0; i < len; i++) {
- v0 = v1 = 0;
- for (j = 0; j < in_ch; j++) {
- v0 += (int64_t)samples[j][i] * matrix[0][j];
- v1 += (int64_t)samples[j][i] * matrix[1][j];
- }
- samples[0][i] = (v0+2048)>>12;
- samples[1][i] = (v1+2048)>>12;
- }
- } else if (out_ch == 1) {
- for (i = 0; i < len; i++) {
- v0 = 0;
- for (j = 0; j < in_ch; j++)
- v0 += (int64_t)samples[j][i] * matrix[0][j];
- samples[0][i] = (v0+2048)>>12;
- }
- }
-}
-
-void ff_ac3dsp_downmix_fixed(AC3DSPContext *c, int32_t **samples, int16_t **matrix,
- int out_ch, int in_ch, int len)
-{
- if (c->in_channels != in_ch || c->out_channels != out_ch) {
- c->in_channels = in_ch;
- c->out_channels = out_ch;
- c->downmix_fixed = NULL;
-
- if (in_ch == 5 && out_ch == 2 &&
- !(matrix[1][0] | matrix[0][2] |
- matrix[1][3] | matrix[0][4] |
- (matrix[0][1] ^ matrix[1][1]) |
- (matrix[0][0] ^ matrix[1][2]))) {
- c->downmix_fixed = ac3_downmix_5_to_2_symmetric_c_fixed;
- } else if (in_ch == 5 && out_ch == 1 &&
- matrix[0][0] == matrix[0][2] &&
- matrix[0][3] == matrix[0][4]) {
- c->downmix_fixed = ac3_downmix_5_to_1_symmetric_c_fixed;
- }
- }
-
- if (c->downmix_fixed)
- c->downmix_fixed(samples, matrix, len);
- else
- ac3_downmix_c_fixed(samples, matrix, out_ch, in_ch, len);
-}
-
-void ff_ac3dsp_downmix(AC3DSPContext *c, float **samples, float **matrix,
- int out_ch, int in_ch, int len)
-{
- if (c->in_channels != in_ch || c->out_channels != out_ch) {
- int **matrix_cmp = (int **)matrix;
-
- c->in_channels = in_ch;
- c->out_channels = out_ch;
- c->downmix = NULL;
-
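-        /* Use the specialised symmetric 5.0 downmix routines when the matrix has mirrored front gains and equal centre/surround gains. */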
- if (in_ch == 5 && out_ch == 2 &&
- !(matrix_cmp[1][0] | matrix_cmp[0][2] |
- matrix_cmp[1][3] | matrix_cmp[0][4] |
- (matrix_cmp[0][1] ^ matrix_cmp[1][1]) |
- (matrix_cmp[0][0] ^ matrix_cmp[1][2]))) {
- c->downmix = ac3_downmix_5_to_2_symmetric_c;
- } else if (in_ch == 5 && out_ch == 1 &&
- matrix_cmp[0][0] == matrix_cmp[0][2] &&
- matrix_cmp[0][3] == matrix_cmp[0][4]) {
- c->downmix = ac3_downmix_5_to_1_symmetric_c;
- }
-
-#if ARCH_X86
- ff_ac3dsp_set_downmix_x86(c);
-#endif
- }
-
- if (c->downmix)
- c->downmix(samples, matrix, len);
- else
- ac3_downmix_c(samples, matrix, out_ch, in_ch, len);
-}
-
-av_cold void ff_ac3dsp_init(AC3DSPContext *c)
-{
- c->ac3_exponent_min = ac3_exponent_min_c;
- c->float_to_fixed24 = float_to_fixed24_c;
- c->bit_alloc_calc_bap = ac3_bit_alloc_calc_bap_c;
- c->update_bap_counts = ac3_update_bap_counts_c;
- c->compute_mantissa_size = ac3_compute_mantissa_size_c;
- c->extract_exponents = ac3_extract_exponents_c;
- c->sum_square_butterfly_int32 = ac3_sum_square_butterfly_int32_c;
- c->sum_square_butterfly_float = ac3_sum_square_butterfly_float_c;
- c->in_channels = 0;
- c->out_channels = 0;
- c->downmix = NULL;
- c->downmix_fixed = NULL;
-
-#if ARCH_ARM
- ff_ac3dsp_init_arm(c);
-#elif ARCH_X86
- ff_ac3dsp_init_x86(c);
-#elif ARCH_MIPS
- ff_ac3dsp_init_mips(c);
-#endif
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/acelp_pitch_delay.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/acelp_pitch_delay.h
deleted file mode 100644
index 73fa3c331a04f75eee2158d1311a03b0d60ecc71..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/acelp_pitch_delay.h
+++ /dev/null
@@ -1,276 +0,0 @@
-/*
- * gain code, gain pitch and pitch delay decoding
- *
- * Copyright (c) 2008 Vladimir Voroshilov
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_ACELP_PITCH_DELAY_H
-#define AVCODEC_ACELP_PITCH_DELAY_H
-
-#include <stdint.h>
-
-#include "audiodsp.h"
-
-#define PITCH_DELAY_MIN 20
-#define PITCH_DELAY_MAX 143
-
-/**
- * @brief Decode pitch delay of the first subframe encoded by 8 bits with 1/3
- * resolution.
- * @param ac_index adaptive codebook index (8 bits)
- *
- * @return pitch delay in 1/3 units
- *
- * Pitch delay is coded:
- * with 1/3 resolution, 19 < pitch_delay < 85
- * integers only, 85 <= pitch_delay <= 143
- */
-static inline int ff_acelp_decode_8bit_to_1st_delay3(int ac_index)
-{
- ac_index += 58;
- if (ac_index > 254)
- ac_index = 3 * ac_index - 510;
- return ac_index;
-}
-
-/**
- * @brief Decode pitch delay of the second subframe encoded by 5 or 6 bits
- * with 1/3 precision.
- * @param ac_index adaptive codebook index (5 or 6 bits)
- * @param pitch_delay_min lower bound (integer) of pitch delay interval
- * for second subframe
- *
- * @return pitch delay in 1/3 units
- *
- * Pitch delay is coded:
- * with 1/3 resolution, -6 < pitch_delay - int(prev_pitch_delay) < 5
- *
- * @remark The routine is used in G.729 @@8k, AMR @@10.2k, AMR @@7.95k,
- * AMR @@7.4k for the second subframe.
- */
-static inline int ff_acelp_decode_5_6_bit_to_2nd_delay3(int ac_index,
- int pitch_delay_min)
-{
- return 3 * pitch_delay_min + ac_index - 2;
-}
-
-/**
- * @brief Decode pitch delay with 1/3 precision.
- * @param ac_index adaptive codebook index (4 bits)
- * @param pitch_delay_min lower bound (integer) of pitch delay interval for
- * second subframe
- *
- * @return pitch delay in 1/3 units
- *
- * Pitch delay is coded:
- * integers only, -6 < pitch_delay - int(prev_pitch_delay) <= -2
- * with 1/3 resolution, -2 < pitch_delay - int(prev_pitch_delay) < 1
- * integers only, 1 <= pitch_delay - int(prev_pitch_delay) < 5
- *
- * @remark The routine is used in G.729 @@6.4k, AMR @@6.7k, AMR @@5.9k,
- * AMR @@5.15k, AMR @@4.75k for the second subframe.
- */
-static inline int ff_acelp_decode_4bit_to_2nd_delay3(int ac_index,
- int pitch_delay_min)
-{
- if (ac_index < 4)
- return 3 * (ac_index + pitch_delay_min);
- else if (ac_index < 12)
- return 3 * pitch_delay_min + ac_index + 6;
- else
- return 3 * (ac_index + pitch_delay_min) - 18;
-}
-
-/**
- * @brief Decode pitch delay of the first subframe encoded by 9 bits
- * with 1/6 precision.
- * @param ac_index adaptive codebook index (9 bits)
- *
- * @return pitch delay in 1/6 units
- *
- * Pitch delay is coded:
- * with 1/6 resolution, 17 < pitch_delay < 95
- * integers only, 95 <= pitch_delay <= 143
- *
- * @remark The routine is used in AMR @@12.2k for the first and third subframes.
- */
-static inline int ff_acelp_decode_9bit_to_1st_delay6(int ac_index)
-{
- if (ac_index < 463)
- return ac_index + 105;
- else
- return 6 * (ac_index - 368);
-}
-
-/**
- * @brief Decode pitch delay of the second subframe encoded by 6 bits
- * with 1/6 precision.
- * @param ac_index adaptive codebook index (6 bits)
- * @param pitch_delay_min lower bound (integer) of pitch delay interval for
- * second subframe
- *
- * @return pitch delay in 1/6 units
- *
- * Pitch delay is coded:
- * with 1/6 resolution, -6 < pitch_delay - int(prev_pitch_delay) < 5
- *
- * @remark The routine is used in AMR @@12.2k for the second and fourth subframes.
- */
-static inline int ff_acelp_decode_6bit_to_2nd_delay6(int ac_index,
- int pitch_delay_min)
-{
- return 6 * pitch_delay_min + ac_index - 3;
-}
-
-/**
- * @brief Update past quantized energies
- * @param[in,out] quant_energy past quantized energies (5.10)
- * @param gain_corr_factor gain correction factor
- * @param log2_ma_pred_order log2() of MA prediction order
- * @param erasure frame erasure flag
- *
- * If frame erasure flag is not equal to zero, memory is updated with
- * averaged energy, attenuated by 4dB:
- * max(avg(quant_energy[i])-4, -14), i=0,ma_pred_order
- *
- * In normal mode memory is updated with
- * Er - Ep = 20 * log10(gain_corr_factor)
- *
- * @remark The routine is used in G.729 and AMR (all modes).
- */
-void ff_acelp_update_past_gain(
- int16_t* quant_energy,
- int gain_corr_factor,
- int log2_ma_pred_order,
- int erasure);
-
-/**
- * @brief Decode the adaptive codebook gain and add
- * correction (4.1.5 and 3.9.1 of G.729).
- * @param adsp initialized audio DSP context
- * @param gain_corr_factor gain correction factor (2.13)
- * @param fc_v fixed-codebook vector (2.13)
- * @param mr_energy mean innovation energy and fixed-point correction (7.13)
- * @param[in,out] quant_energy past quantized energies (5.10)
- * @param subframe_size length of subframe
- *
- * @return quantized fixed-codebook gain (14.1)
- *
- * The routine implements equations 69, 66 and 71 of the G.729 specification (3.9.1)
- *
- * Em - mean innovation energy (dB, constant, depends on decoding algorithm)
- * Ep - mean-removed predicted energy (dB)
- * Er - mean-removed innovation energy (dB)
- * Ei - mean energy of the fixed-codebook contribution (dB)
- * N - subframe_size
- * M - MA (Moving Average) prediction order
- * gc - fixed-codebook gain
- * gc_p - predicted fixed-codebook gain
- *
- * Fixed codebook gain is computed using predicted gain gc_p and
- * correction factor gain_corr_factor as shown below:
- *
- * gc = gc_p * gain_corr_factor
- *
- * The predicted fixed codebook gain gc_p is found by predicting
- * the energy of the fixed-codebook contribution from the energy
- * of previous fixed-codebook contributions.
- *
- * mean = 1/N * sum(i,0,N){ fc_v[i] * fc_v[i] }
- *
- * Ei = 10log(mean)
- *
- * Er = 10log(1/N * gc^2 * mean) - Em = 20log(gc) + Ei - Em
- *
- * Replacing Er with Ep and gc with gc_p we will receive:
- *
- * Ep = 10log(1/N * gc_p^2 * mean) - Em = 20log(gc_p) + Ei - Em
- *
- * and from above:
- *
- * gc_p = 10^((Ep - Ei + Em) / 20)
- *
- * Ep is predicted using past energies and prediction coefficients:
- *
- * Ep = sum(i,0,M){ ma_prediction_coeff[i] * quant_energy[i] }
- *
- * gc_p in fixed-point arithmetic is calculated as following:
- *
- * mean = 1/N * sum(i,0,N){ (fc_v[i] / 2^13) * (fc_v[i] / 2^13) } =
- * = 1/N * sum(i,0,N) { fc_v[i] * fc_v[i] } / 2^26
- *
- * Ei = 10log(mean) = -10log(N) - 10log(2^26) +
- * + 10log(sum(i,0,N) { fc_v[i] * fc_v[i] })
- *
- * Ep - Ei + Em = Ep + Em + 10log(N) + 10log(2^26) -
- * - 10log(sum(i,0,N) { fc_v[i] * fc_v[i] }) =
- * = Ep + mr_energy - 10log(sum(i,0,N) { fc_v[i] * fc_v[i] })
- *
- * gc_p = 10 ^ ((Ep - Ei + Em) / 20) =
- * = 2 ^ (3.3219 * (Ep - Ei + Em) / 20) = 2 ^ (0.166 * (Ep - Ei + Em))
- *
- * where
- *
- * mr_energy = Em + 10log(N) + 10log(2^26)
- *
- * @remark The routine is used in G.729 and AMR (all modes).
- */
-int16_t ff_acelp_decode_gain_code(
- AudioDSPContext *adsp,
- int gain_corr_factor,
- const int16_t* fc_v,
- int mr_energy,
- const int16_t* quant_energy,
- const int16_t* ma_prediction_coeff,
- int subframe_size,
- int max_pred_order);
-
-/**
- * Calculate fixed gain (part of section 6.1.3 of AMR spec)
- *
- * @param fixed_gain_factor gain correction factor
- * @param fixed_mean_energy mean decoded algebraic codebook vector energy
- * @param prediction_error vector of the quantified predictor errors of
- * the four previous subframes. It is updated by this function.
- * @param energy_mean desired mean innovation energy
- * @param pred_table table of four moving average coefficients
- */
-float ff_amr_set_fixed_gain(float fixed_gain_factor, float fixed_mean_energy,
- float *prediction_error, float energy_mean,
- const float *pred_table);
-
-
-/**
- * Decode the adaptive codebook index to the integer and fractional parts
- * of the pitch lag for one subframe at 1/3 fractional precision.
- *
- * The choice of pitch lag is described in 3GPP TS 26.090 section 5.6.1.
- *
- * @param lag_int integer part of pitch lag of the current subframe
- * @param lag_frac fractional part of pitch lag of the current subframe
- * @param pitch_index parsed adaptive codebook (pitch) index
- * @param prev_lag_int integer part of pitch lag for the previous subframe
- * @param subframe current subframe number
- * @param third_as_first treat the third frame the same way as the first
- */
-void ff_decode_pitch_lag(int *lag_int, int *lag_frac, int pitch_index,
- const int prev_lag_int, const int subframe,
- int third_as_first, int resolution);
-
-#endif /* AVCODEC_ACELP_PITCH_DELAY_H */
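The decode helpers in the deleted header are pure integer maps from an adaptive-codebook index to a pitch lag expressed in 1/3 (or 1/6) units. A small self-contained check of the 8-bit, 1/3-resolution case, restating the formula documented above (illustration only; the main function and printed table are assumptions, not part of the codec):

```c
#include <stdio.h>

/* Restatement of the 8-bit, 1/3-resolution map documented above:
 * indices 0..196 land in the fractional region (19 < delay < 85),
 * indices 197..255 land on integer lags (85 <= delay <= 143). */
static int decode_8bit_to_1st_delay3(int ac_index)
{
    ac_index += 58;
    if (ac_index > 254)
        ac_index = 3 * ac_index - 510;
    return ac_index;               /* lag in 1/3 units */
}

int main(void)
{
    int idx[] = { 0, 196, 197, 255 };
    for (int i = 0; i < 4; i++) {
        int d3 = decode_8bit_to_1st_delay3(idx[i]);
        printf("index %3d -> %3d/3 = %6.2f\n", idx[i], d3, d3 / 3.0);
    }
    return 0;                      /* prints 19.33, 84.67, 85.00, 143.00 */
}
```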
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g722enc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g722enc.c
deleted file mode 100644
index 47811cee4d6f6a213084c9d34fcdd9f5507a2144..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g722enc.c
+++ /dev/null
@@ -1,390 +0,0 @@
-/*
- * Copyright (c) CMU 1993 Computer Science, Speech Group
- * Chengxiang Lu and Alex Hauptmann
- * Copyright (c) 2005 Steve Underwood
- * Copyright (c) 2009 Kenan Gillet
- * Copyright (c) 2010 Martin Storsjo
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * G.722 ADPCM audio encoder
- */
-
-#include "libavutil/avassert.h"
-#include "libavutil/channel_layout.h"
-#include "avcodec.h"
-#include "codec_internal.h"
-#include "encode.h"
-#include "g722.h"
-#include "libavutil/common.h"
-
-#define FREEZE_INTERVAL 128
-
-/* This is an arbitrary value. Allowing insanely large values leads to strange
- problems, so we limit it to a reasonable value */
-#define MAX_FRAME_SIZE 32768
-
-/* We clip the value of avctx->trellis to prevent data type overflows and
- undefined behavior. Using larger values is insanely slow anyway. */
-#define MIN_TRELLIS 0
-#define MAX_TRELLIS 16
-
-static av_cold int g722_encode_close(AVCodecContext *avctx)
-{
- G722Context *c = avctx->priv_data;
- int i;
- for (i = 0; i < 2; i++) {
- av_freep(&c->paths[i]);
- av_freep(&c->node_buf[i]);
- av_freep(&c->nodep_buf[i]);
- }
- return 0;
-}
-
-static av_cold int g722_encode_init(AVCodecContext * avctx)
-{
- G722Context *c = avctx->priv_data;
-
- c->band[0].scale_factor = 8;
- c->band[1].scale_factor = 2;
- c->prev_samples_pos = 22;
-
- if (avctx->frame_size) {
- /* validate frame size */
- if (avctx->frame_size & 1 || avctx->frame_size > MAX_FRAME_SIZE) {
- int new_frame_size;
-
- if (avctx->frame_size == 1)
- new_frame_size = 2;
- else if (avctx->frame_size > MAX_FRAME_SIZE)
- new_frame_size = MAX_FRAME_SIZE;
- else
- new_frame_size = avctx->frame_size - 1;
-
- av_log(avctx, AV_LOG_WARNING, "Requested frame size is not "
- "allowed. Using %d instead of %d\n", new_frame_size,
- avctx->frame_size);
- avctx->frame_size = new_frame_size;
- }
- } else {
- /* This is arbitrary. We use 320 because it's 20ms @ 16kHz, which is
- a common packet size for VoIP applications */
- avctx->frame_size = 320;
- }
- avctx->initial_padding = 22;
-
- if (avctx->trellis) {
- /* validate trellis */
- if (avctx->trellis < MIN_TRELLIS || avctx->trellis > MAX_TRELLIS) {
- int new_trellis = av_clip(avctx->trellis, MIN_TRELLIS, MAX_TRELLIS);
- av_log(avctx, AV_LOG_WARNING, "Requested trellis value is not "
- "allowed. Using %d instead of %d\n", new_trellis,
- avctx->trellis);
- avctx->trellis = new_trellis;
- }
- if (avctx->trellis) {
- int frontier = 1 << avctx->trellis;
- int max_paths = frontier * FREEZE_INTERVAL;
-
- for (int i = 0; i < 2; i++) {
- c->paths[i] = av_calloc(max_paths, sizeof(**c->paths));
- c->node_buf[i] = av_calloc(frontier, 2 * sizeof(**c->node_buf));
- c->nodep_buf[i] = av_calloc(frontier, 2 * sizeof(**c->nodep_buf));
- if (!c->paths[i] || !c->node_buf[i] || !c->nodep_buf[i])
- return AVERROR(ENOMEM);
- }
- }
- }
-
- ff_g722dsp_init(&c->dsp);
-
- return 0;
-}
-
-static const int16_t low_quant[33] = {
- 35, 72, 110, 150, 190, 233, 276, 323,
- 370, 422, 473, 530, 587, 650, 714, 786,
- 858, 940, 1023, 1121, 1219, 1339, 1458, 1612,
- 1765, 1980, 2195, 2557, 2919
-};
-
-static inline void filter_samples(G722Context *c, const int16_t *samples,
- int *xlow, int *xhigh)
-{
- int xout[2];
- c->prev_samples[c->prev_samples_pos++] = samples[0];
- c->prev_samples[c->prev_samples_pos++] = samples[1];
- c->dsp.apply_qmf(c->prev_samples + c->prev_samples_pos - 24, xout);
- *xlow = xout[0] + xout[1] >> 14;
- *xhigh = xout[0] - xout[1] >> 14;
- if (c->prev_samples_pos >= PREV_SAMPLES_BUF_SIZE) {
- memmove(c->prev_samples,
- c->prev_samples + c->prev_samples_pos - 22,
- 22 * sizeof(c->prev_samples[0]));
- c->prev_samples_pos = 22;
- }
-}
-
-static inline int encode_high(const struct G722Band *state, int xhigh)
-{
- int diff = av_clip_int16(xhigh - state->s_predictor);
- int pred = 141 * state->scale_factor >> 8;
- /* = diff >= 0 ? (diff < pred) + 2 : diff >= -pred */
- return ((diff ^ (diff >> (sizeof(diff)*8-1))) < pred) + 2*(diff >= 0);
-}
-
-static inline int encode_low(const struct G722Band* state, int xlow)
-{
- int diff = av_clip_int16(xlow - state->s_predictor);
- /* = diff >= 0 ? diff : -(diff + 1) */
- int limit = diff ^ (diff >> (sizeof(diff)*8-1));
- int i = 0;
- limit = limit + 1 << 10;
- if (limit > low_quant[8] * state->scale_factor)
- i = 9;
- while (i < 29 && limit > low_quant[i] * state->scale_factor)
- i++;
- return (diff < 0 ? (i < 2 ? 63 : 33) : 61) - i;
-}
-
-static void g722_encode_trellis(G722Context *c, int trellis,
- uint8_t *dst, int nb_samples,
- const int16_t *samples)
-{
- int i, j, k;
- int frontier = 1 << trellis;
- struct TrellisNode **nodes[2];
- struct TrellisNode **nodes_next[2];
- int pathn[2] = {0, 0}, froze = -1;
- struct TrellisPath *p[2];
-
- for (i = 0; i < 2; i++) {
- nodes[i] = c->nodep_buf[i];
- nodes_next[i] = c->nodep_buf[i] + frontier;
- memset(c->nodep_buf[i], 0, 2 * frontier * sizeof(*c->nodep_buf[i]));
- nodes[i][0] = c->node_buf[i] + frontier;
- nodes[i][0]->ssd = 0;
- nodes[i][0]->path = 0;
- nodes[i][0]->state = c->band[i];
- }
-
- for (i = 0; i < nb_samples >> 1; i++) {
- int xlow, xhigh;
- struct TrellisNode *next[2];
- int heap_pos[2] = {0, 0};
-
- for (j = 0; j < 2; j++) {
- next[j] = c->node_buf[j] + frontier*(i & 1);
- memset(nodes_next[j], 0, frontier * sizeof(**nodes_next));
- }
-
- filter_samples(c, &samples[2*i], &xlow, &xhigh);
-
- for (j = 0; j < frontier && nodes[0][j]; j++) {
- /* Only k >> 2 affects the future adaptive state, therefore testing
- * small steps that don't change k >> 2 is useless, the original
- * value from encode_low is better than them. Since we step k
- * in steps of 4, make sure range is a multiple of 4, so that
- * we don't miss the original value from encode_low. */
- int range = j < frontier/2 ? 4 : 0;
- struct TrellisNode *cur_node = nodes[0][j];
-
- int ilow = encode_low(&cur_node->state, xlow);
-
- for (k = ilow - range; k <= ilow + range && k <= 63; k += 4) {
- int decoded, dec_diff, pos;
- uint32_t ssd;
- struct TrellisNode* node;
-
- if (k < 0)
- continue;
-
- decoded = av_clip_intp2((cur_node->state.scale_factor *
- ff_g722_low_inv_quant6[k] >> 10)
- + cur_node->state.s_predictor, 14);
- dec_diff = xlow - decoded;
-
-#define STORE_NODE(index, UPDATE, VALUE)\
- ssd = cur_node->ssd + dec_diff*dec_diff;\
- /* Check for wraparound. Using 64 bit ssd counters would \
- * be simpler, but is slower on x86 32 bit. */\
- if (ssd < cur_node->ssd)\
- continue;\
- if (heap_pos[index] < frontier) {\
- pos = heap_pos[index]++;\
- av_assert2(pathn[index] < FREEZE_INTERVAL * frontier);\
- node = nodes_next[index][pos] = next[index]++;\
- node->path = pathn[index]++;\
- } else {\
- /* Try to replace one of the leaf nodes with the new \
- * one, but not always testing the same leaf position */\
- pos = (frontier>>1) + (heap_pos[index] & ((frontier>>1) - 1));\
- if (ssd >= nodes_next[index][pos]->ssd)\
- continue;\
- heap_pos[index]++;\
- node = nodes_next[index][pos];\
- }\
- node->ssd = ssd;\
- node->state = cur_node->state;\
- UPDATE;\
- c->paths[index][node->path].value = VALUE;\
- c->paths[index][node->path].prev = cur_node->path;\
- /* Sift the newly inserted node up in the heap to restore \
- * the heap property */\
- while (pos > 0) {\
- int parent = (pos - 1) >> 1;\
- if (nodes_next[index][parent]->ssd <= ssd)\
- break;\
- FFSWAP(struct TrellisNode*, nodes_next[index][parent],\
- nodes_next[index][pos]);\
- pos = parent;\
- }
- STORE_NODE(0, ff_g722_update_low_predictor(&node->state, k >> 2), k);
- }
- }
-
- for (j = 0; j < frontier && nodes[1][j]; j++) {
- int ihigh;
- struct TrellisNode *cur_node = nodes[1][j];
-
- /* We don't try to get any initial guess for ihigh via
- * encode_high - since there's only 4 possible values, test
- * them all. Testing all of these gives a much, much larger
- * gain than testing a larger range around ilow. */
- for (ihigh = 0; ihigh < 4; ihigh++) {
- int dhigh, decoded, dec_diff, pos;
- uint32_t ssd;
- struct TrellisNode* node;
-
- dhigh = cur_node->state.scale_factor *
- ff_g722_high_inv_quant[ihigh] >> 10;
- decoded = av_clip_intp2(dhigh + cur_node->state.s_predictor, 14);
- dec_diff = xhigh - decoded;
-
- STORE_NODE(1, ff_g722_update_high_predictor(&node->state, dhigh, ihigh), ihigh);
- }
- }
-
- for (j = 0; j < 2; j++) {
- FFSWAP(struct TrellisNode**, nodes[j], nodes_next[j]);
-
- if (nodes[j][0]->ssd > (1 << 16)) {
- for (k = 1; k < frontier && nodes[j][k]; k++)
- nodes[j][k]->ssd -= nodes[j][0]->ssd;
- nodes[j][0]->ssd = 0;
- }
- }
-
- if (i == froze + FREEZE_INTERVAL) {
- p[0] = &c->paths[0][nodes[0][0]->path];
- p[1] = &c->paths[1][nodes[1][0]->path];
- for (j = i; j > froze; j--) {
- dst[j] = p[1]->value << 6 | p[0]->value;
- p[0] = &c->paths[0][p[0]->prev];
- p[1] = &c->paths[1][p[1]->prev];
- }
- froze = i;
- pathn[0] = pathn[1] = 0;
- memset(nodes[0] + 1, 0, (frontier - 1)*sizeof(**nodes));
- memset(nodes[1] + 1, 0, (frontier - 1)*sizeof(**nodes));
- }
- }
-
- p[0] = &c->paths[0][nodes[0][0]->path];
- p[1] = &c->paths[1][nodes[1][0]->path];
- for (j = i; j > froze; j--) {
- dst[j] = p[1]->value << 6 | p[0]->value;
- p[0] = &c->paths[0][p[0]->prev];
- p[1] = &c->paths[1][p[1]->prev];
- }
- c->band[0] = nodes[0][0]->state;
- c->band[1] = nodes[1][0]->state;
-}
-
-static av_always_inline void encode_byte(G722Context *c, uint8_t *dst,
- const int16_t *samples)
-{
- int xlow, xhigh, ilow, ihigh;
- filter_samples(c, samples, &xlow, &xhigh);
- ihigh = encode_high(&c->band[1], xhigh);
- ilow = encode_low (&c->band[0], xlow);
- ff_g722_update_high_predictor(&c->band[1], c->band[1].scale_factor *
- ff_g722_high_inv_quant[ihigh] >> 10, ihigh);
- ff_g722_update_low_predictor(&c->band[0], ilow >> 2);
- *dst = ihigh << 6 | ilow;
-}
-
-static void g722_encode_no_trellis(G722Context *c,
- uint8_t *dst, int nb_samples,
- const int16_t *samples)
-{
- int i;
- for (i = 0; i < nb_samples; i += 2)
- encode_byte(c, dst++, &samples[i]);
-}
-
-static int g722_encode_frame(AVCodecContext *avctx, AVPacket *avpkt,
- const AVFrame *frame, int *got_packet_ptr)
-{
- G722Context *c = avctx->priv_data;
- const int16_t *samples = (const int16_t *)frame->data[0];
- int nb_samples, out_size, ret;
-
- out_size = (frame->nb_samples + 1) / 2;
- if ((ret = ff_get_encode_buffer(avctx, avpkt, out_size, 0)) < 0)
- return ret;
-
- nb_samples = frame->nb_samples - (frame->nb_samples & 1);
-
- if (avctx->trellis)
- g722_encode_trellis(c, avctx->trellis, avpkt->data, nb_samples, samples);
- else
- g722_encode_no_trellis(c, avpkt->data, nb_samples, samples);
-
- /* handle last frame with odd frame_size */
- if (nb_samples < frame->nb_samples) {
- int16_t last_samples[2] = { samples[nb_samples], samples[nb_samples] };
- encode_byte(c, &avpkt->data[nb_samples >> 1], last_samples);
- }
-
- if (frame->pts != AV_NOPTS_VALUE)
- avpkt->pts = frame->pts - ff_samples_to_time_base(avctx, avctx->initial_padding);
- *got_packet_ptr = 1;
- return 0;
-}
-
-const FFCodec ff_adpcm_g722_encoder = {
- .p.name = "g722",
- CODEC_LONG_NAME("G.722 ADPCM"),
- .p.type = AVMEDIA_TYPE_AUDIO,
- .p.id = AV_CODEC_ID_ADPCM_G722,
- .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_SMALL_LAST_FRAME |
- AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE,
- .priv_data_size = sizeof(G722Context),
- .init = g722_encode_init,
- .close = g722_encode_close,
- FF_CODEC_ENCODE_CB(g722_encode_frame),
- .p.sample_fmts = (const enum AVSampleFormat[]){ AV_SAMPLE_FMT_S16, AV_SAMPLE_FMT_NONE },
- CODEC_OLD_CHANNEL_LAYOUTS(AV_CH_LAYOUT_MONO)
- .p.ch_layouts = (const AVChannelLayout[]){
- AV_CHANNEL_LAYOUT_MONO, { 0 }
- },
- .caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
-};
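One packing detail worth isolating from the encoder above: each output byte is ihigh << 6 | ilow, i.e. the 2-bit high-band index sits in bits 7..6 and the 6-bit low-band index in bits 5..0 (see encode_byte and the trellis back-tracking loops). A trivial standalone pack/unpack pair, written as a sketch rather than as the codec's API:

```c
#include <assert.h>
#include <stdint.h>

/* Pack/unpack one G.722 code byte: 2-bit high-band index in bits 7..6,
 * 6-bit low-band index in bits 5..0, matching dst[j] = ihigh << 6 | ilow. */
static uint8_t g722_pack(int ihigh, int ilow)
{
    return (uint8_t)(((ihigh & 3) << 6) | (ilow & 63));
}

static void g722_unpack(uint8_t byte, int *ihigh, int *ilow)
{
    *ihigh = byte >> 6;
    *ilow  = byte & 63;
}

int main(void)
{
    int hi, lo;
    g722_unpack(g722_pack(2, 45), &hi, &lo);
    assert(hi == 2 && lo == 45);
    return 0;
}
```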
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ilbcdata.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ilbcdata.h
deleted file mode 100644
index b17e24df5f4cfaa766618e0facd9d66d2534a4ce..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ilbcdata.h
+++ /dev/null
@@ -1,239 +0,0 @@
-/*
- * Copyright (c) 2013, The WebRTC project authors. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- *
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- *
- * * Neither the name of Google nor the names of its contributors may
- * be used to endorse or promote products derived from this software
- * without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef AVCODEC_ILBCDATA_H
-#define AVCODEC_ILBCDATA_H
-
-#include "libavutil/common.h"
-
-static const uint8_t lsf_dim_codebook[] = { 3, 3, 4 };
-static const uint8_t lsf_size_codebook[] = { 64, 128, 128 };
-static const int16_t lsf_weight_20ms[] = { 12288, 8192, 4096, 0 };
-static const int16_t lsf_weight_30ms[] = { 8192, 16384, 10923, 5461, 0, 0 };
-
-static const int16_t hp_out_coeffs[] = { 3849, -7699, 3849, 7918, -3833 };
-
-static const int16_t kPlcPfSlope[] = { 26667, 18729, 13653, 10258, 7901, 6214 };
-
-static const int16_t kPlcPitchFact[] = { 0, 5462, 10922, 16384, 21846, 27306 };
-
-static const int16_t kCbFiltersRev[] = {
- -140, 446, -755, 3302, 2922, -590, 343, -138
-};
-
-static const int16_t kPlcPerSqr[] = { 839, 1343, 2048, 2998, 4247, 5849 };
-
-static const int16_t alpha[] = {
- 6554, 13107, 19661, 26214
-};
-
-static const int16_t kLpcChirpSyntDenum[] = {
- 32767, 29573, 26690, 24087, 21739, 19619, 17707, 15980, 14422, 13016, 11747
-};
-
-static const int16_t cos_tbl[64] = {
- 32767, 32729, 32610, 32413, 32138, 31786, 31357, 30853,
- 30274, 29622, 28899, 28106, 27246, 26320, 25330, 24279,
- 23170, 22006, 20788, 19520, 18205, 16846, 15447, 14010,
- 12540, 11039, 9512, 7962, 6393, 4808, 3212, 1608,
- 0, -1608, -3212, -4808, -6393, -7962, -9512, -11039,
- -12540, -14010, -15447, -16846, -18205, -19520, -20788, -22006,
- -23170, -24279, -25330, -26320, -27246, -28106, -28899, -29622,
- -30274, -30853, -31357, -31786, -32138, -32413, -32610, -32729,
-};
-
-static const int16_t cos_derivative_tbl[64] = {
- -632, -1893, -3150, -4399, -5638, -6863, -8072, -9261,
- -10428, -11570, -12684, -13767, -14817, -15832, -16808, -17744,
- -18637, -19486, -20287, -21039, -21741, -22390, -22986, -23526,
- -24009, -24435, -24801, -25108, -25354, -25540, -25664, -25726,
- -25726, -25664, -25540, -25354, -25108, -24801, -24435, -24009,
- -23526, -22986, -22390, -21741, -21039, -20287, -19486, -18637,
- -17744, -16808, -15832, -14817, -13767, -12684, -11570, -10428,
- -9261, -8072, -6863, -5638, -4399, -3150, -1893, -632
-};
-
-static const int16_t lsf_codebook[64 * 3 + 128 * 3 + 128 * 4] = {
- 1273, 2238, 3696, 3199, 5309, 8209, 3606, 5671, 7829,
- 2815, 5262, 8778, 2608, 4027, 5493, 1582, 3076, 5945,
- 2983, 4181, 5396, 2437, 4322, 6902, 1861, 2998, 4613,
- 2007, 3250, 5214, 1388, 2459, 4262, 2563, 3805, 5269,
- 2036, 3522, 5129, 1935, 4025, 6694, 2744, 5121, 7338,
- 2810, 4248, 5723, 3054, 5405, 7745, 1449, 2593, 4763,
- 3411, 5128, 6596, 2484, 4659, 7496, 1668, 2879, 4818,
- 1812, 3072, 5036, 1638, 2649, 3900, 2464, 3550, 4644,
- 1853, 2900, 4158, 2458, 4163, 5830, 2556, 4036, 6254,
- 2703, 4432, 6519, 3062, 4953, 7609, 1725, 3703, 6187,
- 2221, 3877, 5427, 2339, 3579, 5197, 2021, 4633, 7037,
- 2216, 3328, 4535, 2961, 4739, 6667, 2807, 3955, 5099,
- 2788, 4501, 6088, 1642, 2755, 4431, 3341, 5282, 7333,
- 2414, 3726, 5727, 1582, 2822, 5269, 2259, 3447, 4905,
- 3117, 4986, 7054, 1825, 3491, 5542, 3338, 5736, 8627,
- 1789, 3090, 5488, 2566, 3720, 4923, 2846, 4682, 7161,
- 1950, 3321, 5976, 1834, 3383, 6734, 3238, 4769, 6094,
- 2031, 3978, 5903, 1877, 4068, 7436, 2131, 4644, 8296,
- 2764, 5010, 8013, 2194, 3667, 6302, 2053, 3127, 4342,
- 3523, 6595, 10010, 3134, 4457, 5748, 3142, 5819, 9414,
- 2223, 4334, 6353, 2022, 3224, 4822, 2186, 3458, 5544,
- 2552, 4757, 6870, 10905, 12917, 14578, 9503, 11485, 14485,
- 9518, 12494, 14052, 6222, 7487, 9174, 7759, 9186, 10506,
- 8315, 12755, 14786, 9609, 11486, 13866, 8909, 12077, 13643,
- 7369, 9054, 11520, 9408, 12163, 14715, 6436, 9911, 12843,
- 7109, 9556, 11884, 7557, 10075, 11640, 6482, 9202, 11547,
- 6463, 7914, 10980, 8611, 10427, 12752, 7101, 9676, 12606,
- 7428, 11252, 13172, 10197, 12955, 15842, 7487, 10955, 12613,
- 5575, 7858, 13621, 7268, 11719, 14752, 7476, 11744, 13795,
- 7049, 8686, 11922, 8234, 11314, 13983, 6560, 11173, 14984,
- 6405, 9211, 12337, 8222, 12054, 13801, 8039, 10728, 13255,
- 10066, 12733, 14389, 6016, 7338, 10040, 6896, 8648, 10234,
- 7538, 9170, 12175, 7327, 12608, 14983, 10516, 12643, 15223,
- 5538, 7644, 12213, 6728, 12221, 14253, 7563, 9377, 12948,
- 8661, 11023, 13401, 7280, 8806, 11085, 7723, 9793, 12333,
- 12225, 14648, 16709, 8768, 13389, 15245, 10267, 12197, 13812,
- 5301, 7078, 11484, 7100, 10280, 11906, 8716, 12555, 14183,
- 9567, 12464, 15434, 7832, 12305, 14300, 7608, 10556, 12121,
- 8913, 11311, 12868, 7414, 9722, 11239, 8666, 11641, 13250,
- 9079, 10752, 12300, 8024, 11608, 13306, 10453, 13607, 16449,
- 8135, 9573, 10909, 6375, 7741, 10125, 10025, 12217, 14874,
- 6985, 11063, 14109, 9296, 13051, 14642, 8613, 10975, 12542,
- 6583, 10414, 13534, 6191, 9368, 13430, 5742, 6859, 9260,
- 7723, 9813, 13679, 8137, 11291, 12833, 6562, 8973, 10641,
- 6062, 8462, 11335, 6928, 8784, 12647, 7501, 8784, 10031,
- 8372, 10045, 12135, 8191, 9864, 12746, 5917, 7487, 10979,
- 5516, 6848, 10318, 6819, 9899, 11421, 7882, 12912, 15670,
- 9558, 11230, 12753, 7752, 9327, 11472, 8479, 9980, 11358,
- 11418, 14072, 16386, 7968, 10330, 14423, 8423, 10555, 12162,
- 6337, 10306, 14391, 8850, 10879, 14276, 6750, 11885, 15710,
- 7037, 8328, 9764, 6914, 9266, 13476, 9746, 13949, 15519,
- 11032, 14444, 16925, 8032, 10271, 11810, 10962, 13451, 15833,
- 10021, 11667, 13324, 6273, 8226, 12936, 8543, 10397, 13496,
- 7936, 10302, 12745, 6769, 8138, 10446, 6081, 7786, 11719,
- 8637, 11795, 14975, 8790, 10336, 11812, 7040, 8490, 10771,
- 7338, 10381, 13153, 6598, 7888, 9358, 6518, 8237, 12030,
- 9055, 10763, 12983, 6490, 10009, 12007, 9589, 12023, 13632,
- 6867, 9447, 10995, 7930, 9816, 11397, 10241, 13300, 14939,
- 5830, 8670, 12387, 9870, 11915, 14247, 9318, 11647, 13272,
- 6721, 10836, 12929, 6543, 8233, 9944, 8034, 10854, 12394,
- 9112, 11787, 14218, 9302, 11114, 13400, 9022, 11366, 13816,
- 6962, 10461, 12480, 11288, 13333, 15222, 7249, 8974, 10547,
- 10566, 12336, 14390, 6697, 11339, 13521, 11851, 13944, 15826,
- 6847, 8381, 11349, 7509, 9331, 10939, 8029, 9618, 11909,
- 13973, 17644, 19647, 22474, 14722, 16522, 20035, 22134, 16305, 18179, 21106, 23048,
- 15150, 17948, 21394, 23225, 13582, 15191, 17687, 22333, 11778, 15546, 18458, 21753,
- 16619, 18410, 20827, 23559, 14229, 15746, 17907, 22474, 12465, 15327, 20700, 22831,
- 15085, 16799, 20182, 23410, 13026, 16935, 19890, 22892, 14310, 16854, 19007, 22944,
- 14210, 15897, 18891, 23154, 14633, 18059, 20132, 22899, 15246, 17781, 19780, 22640,
- 16396, 18904, 20912, 23035, 14618, 17401, 19510, 21672, 15473, 17497, 19813, 23439,
- 18851, 20736, 22323, 23864, 15055, 16804, 18530, 20916, 16490, 18196, 19990, 21939,
- 11711, 15223, 21154, 23312, 13294, 15546, 19393, 21472, 12956, 16060, 20610, 22417,
- 11628, 15843, 19617, 22501, 14106, 16872, 19839, 22689, 15655, 18192, 20161, 22452,
- 12953, 15244, 20619, 23549, 15322, 17193, 19926, 21762, 16873, 18676, 20444, 22359,
- 14874, 17871, 20083, 21959, 11534, 14486, 19194, 21857, 17766, 19617, 21338, 23178,
- 13404, 15284, 19080, 23136, 15392, 17527, 19470, 21953, 14462, 16153, 17985, 21192,
- 17734, 19750, 21903, 23783, 16973, 19096, 21675, 23815, 16597, 18936, 21257, 23461,
- 15966, 17865, 20602, 22920, 15416, 17456, 20301, 22972, 18335, 20093, 21732, 23497,
- 15548, 17217, 20679, 23594, 15208, 16995, 20816, 22870, 13890, 18015, 20531, 22468,
- 13211, 15377, 19951, 22388, 12852, 14635, 17978, 22680, 16002, 17732, 20373, 23544,
- 11373, 14134, 19534, 22707, 17329, 19151, 21241, 23462, 15612, 17296, 19362, 22850,
- 15422, 19104, 21285, 23164, 13792, 17111, 19349, 21370, 15352, 17876, 20776, 22667,
- 15253, 16961, 18921, 22123, 14108, 17264, 20294, 23246, 15785, 17897, 20010, 21822,
- 17399, 19147, 20915, 22753, 13010, 15659, 18127, 20840, 16826, 19422, 22218, 24084,
- 18108, 20641, 22695, 24237, 18018, 20273, 22268, 23920, 16057, 17821, 21365, 23665,
- 16005, 17901, 19892, 23016, 13232, 16683, 21107, 23221, 13280, 16615, 19915, 21829,
- 14950, 18575, 20599, 22511, 16337, 18261, 20277, 23216, 14306, 16477, 21203, 23158,
- 12803, 17498, 20248, 22014, 14327, 17068, 20160, 22006, 14402, 17461, 21599, 23688,
- 16968, 18834, 20896, 23055, 15070, 17157, 20451, 22315, 15419, 17107, 21601, 23946,
- 16039, 17639, 19533, 21424, 16326, 19261, 21745, 23673, 16489, 18534, 21658, 23782,
- 16594, 18471, 20549, 22807, 18973, 21212, 22890, 24278, 14264, 18674, 21123, 23071,
- 15117, 16841, 19239, 23118, 13762, 15782, 20478, 23230, 14111, 15949, 20058, 22354,
- 14990, 16738, 21139, 23492, 13735, 16971, 19026, 22158, 14676, 17314, 20232, 22807,
- 16196, 18146, 20459, 22339, 14747, 17258, 19315, 22437, 14973, 17778, 20692, 23367,
- 15715, 17472, 20385, 22349, 15702, 18228, 20829, 23410, 14428, 16188, 20541, 23630,
- 16824, 19394, 21365, 23246, 13069, 16392, 18900, 21121, 12047, 16640, 19463, 21689,
- 14757, 17433, 19659, 23125, 15185, 16930, 19900, 22540, 16026, 17725, 19618, 22399,
- 16086, 18643, 21179, 23472, 15462, 17248, 19102, 21196, 17368, 20016, 22396, 24096,
- 12340, 14475, 19665, 23362, 13636, 16229, 19462, 22728, 14096, 16211, 19591, 21635,
- 12152, 14867, 19943, 22301, 14492, 17503, 21002, 22728, 14834, 16788, 19447, 21411,
- 14650, 16433, 19326, 22308, 14624, 16328, 19659, 23204, 13888, 16572, 20665, 22488,
- 12977, 16102, 18841, 22246, 15523, 18431, 21757, 23738, 14095, 16349, 18837, 20947,
- 13266, 17809, 21088, 22839, 15427, 18190, 20270, 23143, 11859, 16753, 20935, 22486,
- 12310, 17667, 21736, 23319, 14021, 15926, 18702, 22002, 12286, 15299, 19178, 21126,
- 15703, 17491, 21039, 23151, 12272, 14018, 18213, 22570, 14817, 16364, 18485, 22598,
- 17109, 19683, 21851, 23677, 12657, 14903, 19039, 22061, 14713, 16487, 20527, 22814,
- 14635, 16726, 18763, 21715, 15878, 18550, 20718, 22906
-};
-
-static const int16_t gain3[9]={
- -16384, -10813, -5407, 0, 4096, 8192, 12288, 16384, 32767
-};
-
-static const int16_t gain4[17]={
- -17203, -14746, -12288, -9830, -7373, -4915, -2458, 0, 2458, 4915, 7373, 9830,
- 12288, 14746, 17203, 19661, 32767
-};
-
-static const int16_t gain5[33]={
- 614, 1229, 1843, 2458, 3072, 3686,
- 4301, 4915, 5530, 6144, 6758, 7373,
- 7987, 8602, 9216, 9830, 10445, 11059,
- 11674, 12288, 12902, 13517, 14131, 14746,
- 15360, 15974, 16589, 17203, 17818, 18432,
- 19046, 19661, 32767
-};
-
-static const int16_t *const ilbc_gain[] = {
- gain5, gain4, gain3,
-};
-
-static const int16_t ilbc_state[8] = {
- -30473, -17838, -9257, -2537, 3639, 10893, 19958, 32636
-};
-
-static const int16_t frg_quant_mod[64] = {
- /* First 37 values in Q8 */
- 569, 671, 786, 916, 1077, 1278,
- 1529, 1802, 2109, 2481, 2898, 3440,
- 3943, 4535, 5149, 5778, 6464, 7208,
- 7904, 8682, 9397, 10285, 11240, 12246,
- 13313, 14382, 15492, 16735, 18131, 19693,
- 21280, 22912, 24624, 26544, 28432, 30488,
- 32720,
- /* 22 values in Q5 */
- 4383, 4684, 5012, 5363, 5739, 6146,
- 6603, 7113, 7679, 8285, 9040, 9850,
- 10838, 11882, 13103, 14467, 15950, 17669,
- 19712, 22016, 24800, 28576,
- /* 5 values in Q3 */
- 8240, 9792, 12040, 15440, 22472
-};
-
-#endif /* AVCODEC_ILBCDATA_H */
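The frg_quant_mod comment above mixes Q8, Q5 and Q3 fixed-point entries in a single table; recovering the real value from any entry is just a division by 2^frac_bits. A tiny illustrative conversion (assumed helper name, not iLBC decoder code):

```c
#include <stdio.h>
#include <stdint.h>

/* Convert a fixed-point value with the given number of fractional bits
 * back to a double, e.g. a Q8 entry becomes value / 256.0. */
static double from_q(int16_t value, int frac_bits)
{
    return value / (double)(1 << frac_bits);
}

int main(void)
{
    /* First Q8 entry 569 -> about 2.22, first Q5 entry 4383 -> about 136.97,
     * first Q3 entry 8240 -> 1030.0. */
    printf("%.2f %.2f %.2f\n",
           from_q(569, 8), from_q(4383, 5), from_q(8240, 3));
    return 0;
}
```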
diff --git a/spaces/coledie/Fashion_VAE/app.py b/spaces/coledie/Fashion_VAE/app.py
deleted file mode 100644
index 7d017648c98223d27a8f25be7affd758c25534cb..0000000000000000000000000000000000000000
--- a/spaces/coledie/Fashion_VAE/app.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import numpy as np
-import torch
-import gradio as gr
-from vae import *
-import matplotlib.image as mpimg
-
-
-with open("vae.pt", "rb") as file:
- vae = torch.load(file)
- vae.eval()
-
-
-def generate_image(filename):
- image = mpimg.imread(filename)[:, :, 0] / 255
-
- grayscale = vae(torch.Tensor(image))[0].reshape((28, 28))
-
- return grayscale.detach().numpy()
-
-
-examples = [f"examples/{i}.jpg" for i in range(10)]
-
-demo = gr.Interface(generate_image,
- gr.Image(type="filepath"),
- "image",
- examples,
- title="VAE running on Fashion MNIST",
- description=".",
- article="...",
- allow_flagging=False,
-)
-demo.launch()
diff --git a/spaces/conchdork/open-reverse-proxy/README.md b/spaces/conchdork/open-reverse-proxy/README.md
deleted file mode 100644
index 803d0d14093c101f5c6e432b86ee347d7928dc69..0000000000000000000000000000000000000000
--- a/spaces/conchdork/open-reverse-proxy/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Open Reverse Proxy
-emoji: 🔥
-colorFrom: purple
-colorTo: blue
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Experience Mecha Storm Advanced War Robots with Mod APK - Customize Your Robot and Conquer Planets.md b/spaces/congsaPfin/Manga-OCR/logs/Experience Mecha Storm Advanced War Robots with Mod APK - Customize Your Robot and Conquer Planets.md
deleted file mode 100644
index 3f0f9a134db1950dfa6efe4d4c9e33b8296f9db3..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Experience Mecha Storm Advanced War Robots with Mod APK - Customize Your Robot and Conquer Planets.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-
Mecha Storm: Advanced War Robots Mod APK - A Guide for Beginners
-
If you are a fan of sci-fi and robot games, you might want to check out Mecha Storm: Advanced War Robots Mod APK. This is a modded version of the original game that gives you unlimited resources, unlocked features, and more. In this article, we will tell you what Mecha Storm: Advanced War Robots is, how to download and install it, how to play it, and some tips and tricks to help you conquer the space.
Mecha Storm: Advanced War Robots is an action-packed MMORPG where you take control of deadly robots, called Mechs. You can join one of the factions in the game and fight against other players to conquer all the planets in the space and win the war. The game has a rich variety of game modes, both in PvP (Player vs Player) and PvE (Player vs Environment), where you can collect dozens of different Mechs and equip them any way you want. As you progress through the ranks and levels, you can also gain access to enormous spaceships that you can control to fight in massive FvF (Fleet vs Fleet) battles.
-
Features of Mecha Storm: Advanced War Robots
-
Mecha Storm: Advanced War Robots has many features that make it an exciting and addictive game. Here are some of them:
-
36 Combat Robots with various attributes
-
You can choose from 36 different robots that have different attributes such as speed, power, defense, and more. Each robot also has its own unique design and appearance that reflects its personality and abilities. You can customize your robot with various gears and skills to suit your play style.
-
Exciting manual controls or easy automatic controls
-
You can control your robot manually by using the virtual joystick and buttons on the screen, or you can opt for the easy automatic controls that let your robot move and attack by itself. You can switch between the two modes anytime during the game. The manual controls give you more freedom and challenge, while the automatic controls let you enjoy the game without much hassle.
-
Over 100 weapons such as anti-tank rifles, double war-ax, spiked maul and more
-
You can equip your robot with over 100 weapons that have different effects and damage types. You can use ranged weapons such as anti-tank rifles, laser guns, rocket launchers, and more to attack your enemies from a distance. You can also use melee weapons such as double war-ax, spiked maul, chainsaw sword, and more to slash your enemies up close. You can mix and match different weapons to create your own combination.
-
mecha storm mod apk unlimited money and gems
-mecha storm advanced war robots hack download
-mecha storm apk mod free shopping
-mecha storm mod apk latest version
-mecha storm advanced war robots cheats
-mecha storm mod apk offline
-mecha storm advanced war robots gameplay
-mecha storm apk mod android 1
-mecha storm mod apk revdl
-mecha storm advanced war robots review
-mecha storm mod apk no root
-mecha storm advanced war robots guide
-mecha storm apk mod obb
-mecha storm mod apk happymod
-mecha storm advanced war robots tips
-mecha storm mod apk unlimited everything
-mecha storm advanced war robots wiki
-mecha storm apk mod menu
-mecha storm mod apk rexdl
-mecha storm advanced war robots best mech
-mecha storm mod apk unlimited gold and silver
-mecha storm advanced war robots codes
-mecha storm apk mod data
-mecha storm mod apk pure
-mecha storm advanced war robots reddit
-mecha storm mod apk unlimited energy and ammo
-mecha storm advanced war robots update
-mecha storm apk mod vip
-mecha storm mod apk platinmods
-mecha storm advanced war robots online
-mecha storm mod apk unlimited coins and diamonds
-mecha storm advanced war robots pc
-mecha storm apk mod hack
-mecha storm mod apk android republic
-mecha storm advanced war robots trailer
-mecha storm mod apk unlimited health and shield
-mecha storm advanced war robots ios
-mecha storm apk mod unlocked all
-mecha storm mod apk blackmod
-mecha storm advanced war robots pvp
-
Real-time 1vs1 or 3vs3 PvP battles with rivals from all around the world
-
You can test your skills and strategies against other players in real-time PvP battles. You can choose to play in 1vs1 mode where you face one opponent, or 3vs3 mode where you team up with two allies and fight against three enemies. You can also join a clan and participate in clan wars and tournaments. You can chat with other players and make friends or rivals. You can also view the rankings and leaderboards to see how you compare with other players.
-
How to download and install Mecha Storm: Advanced War Robots Mod APK?
-
If you want to enjoy the modded version of Mecha Storm: Advanced War Robots, you need to download and install the Mecha Storm: Advanced War Robots Mod APK file. Here are the steps to do so:
-
-
Go to a trusted website that provides the Mecha Storm: Advanced War Robots Mod APK file, such as [Mecha Storm: Advanced War Robots Mod APK Download].
-
Click on the download button and wait for the file to be downloaded on your device.
-
Once the file is downloaded, locate it in your file manager and tap on it to start the installation process.
-
Allow the installation of unknown sources if prompted by your device.
-
Follow the instructions on the screen and wait for the installation to be completed.
-
Launch the game and enjoy the modded features.
-
-
Note: You may need to uninstall the original version of Mecha Storm: Advanced War Robots before installing the modded version. You may also need to enable the storage permission for the game to work properly.
-
How to play Mecha Storm: Advanced War Robots Mod APK?
-
Now that you have installed Mecha Storm: Advanced War Robots Mod APK, you can start playing the game. Here are some basic steps to help you get started:
-
Select a robot based on your play style and equip gears and skills
-
You can choose from 36 different robots that have different attributes such as speed, power, defense, and more. Each robot also has its own unique design and appearance that reflects its personality and abilities. You can customize your robot with various gears and skills to suit your play style. You can also change the color and name of your robot.
-
Play the scenario mode by creating a team of 3 robots
-
You can play the scenario mode by creating a team of 3 robots that you can switch between during the game. The scenario mode consists of various missions that have different objectives and difficulties. You can earn rewards such as gold, gems, equipment, and more by completing the missions. You can also unlock new robots and spaceships by playing the scenario mode.
-
Strengthen your robot, equipment, and skills to join in on the real-time PvP battles
-
You can strengthen your robot, equipment, and skills by using the gold, gems, and other resources that you earn from playing the game. You can upgrade your robot's level, rank, attribute, and skill level. You can also enhance your equipment's level, grade, attribute, and skill level. You can also craft new equipment by using materials that you collect from playing the game. By strengthening your robot, equipment, and skills, you can join in on the real-time PvP battles and compete with other players from all around the world.
-
Tips and tricks for Mecha Storm: Advanced War Robots Mod APK
-
To help you enjoy Mecha Storm: Advanced War Robots Mod APK more, here are some tips and tricks that you can use:
-
Choose your faction wisely
-
You can join one of the factions in the game: Federation, Empire, or Alliance. Each faction has its own story, background, characters, robots, spaceships, and missions. You can also get different rewards and benefits depending on your faction. Choose your faction wisely based on your preference and play style.
-
Upgrade your robots and weapons regularly
-
You can upgrade your robots and weapons regularly by using the gold, gems, and other resources that you earn from playing the game. Upgrading your robots and weapons will increase their stats and performance, making them more powerful and effective in combat. You can also unlock new skills and abilities by upgrading your robots and weapons.
-
Use your skills strategically
-
You can use your skills strategically by timing them well and aiming them accurately. Each skill has its own cooldown time, effect range, damage type, and target type. You can use your skills to deal damage, heal yourself or allies, stun or debuff enemies, or buff yourself or allies. You can also combine different skills to create combos that have more impact.
-
Conclusion
-
Mecha Storm: Advanced War Robots Mod APK is a fun and exciting game that lets you control and customize your own robots and fight against other players in various game modes. You can download and install the modded version of the game to enjoy unlimited resources, unlocked features, and more. You can also follow the steps and tips that we have provided in this article to help you get started and improve your skills. If you are looking for a thrilling and immersive robot game, you should give Mecha Storm: Advanced War Robots Mod APK a try.
-
FAQs
-
Here are some frequently asked questions about Mecha Storm: Advanced War Robots Mod APK:
-
-
What are the benefits of Mecha Storm: Advanced War Robots Mod APK?
-
Mecha Storm: Advanced War Robots Mod APK gives you many benefits such as unlimited gold, gems, energy, and other resources, unlocked robots, weapons, spaceships, and features, free shopping, no ads, and more.
-
Is Mecha Storm: Advanced War Robots Mod APK safe to use?
-
Mecha Storm: Advanced War Robots Mod APK is safe to use as long as you download it from a trusted website that provides the original and virus-free file. You should also scan the file with an antivirus program before installing it on your device.
-
Do I need to root or jailbreak my device to use Mecha Storm: Advanced War Robots Mod APK?
-
No, you do not need to root or jailbreak your device to use Mecha Storm: Advanced War Robots Mod APK. You can install and play the game without any modifications on your device.
-
How can I update Mecha Storm: Advanced War Robots Mod APK?
-
You can update Mecha Storm: Advanced War Robots Mod APK by downloading and installing the latest version of the file from the same website that you downloaded it from. You should also check for updates regularly to enjoy the new features and bug fixes.
-
How can I contact the developer of Mecha Storm: Advanced War Robots?
-
You can contact the developer of Mecha Storm: Advanced War Robots by visiting their official website [Mecha Storm: Advanced War Robots Official Website] or their Facebook page [Mecha Storm: Advanced War Robots Facebook Page]. You can also send them an email at [Mecha Storm: Advanced War Robots Email Address].
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/GoreBox A Deliciously Violent Sandbox Game for Android.md b/spaces/congsaPfin/Manga-OCR/logs/GoreBox A Deliciously Violent Sandbox Game for Android.md
deleted file mode 100644
index 69bc7834e12d08b5262989d197707b0a88b0fc7b..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/GoreBox A Deliciously Violent Sandbox Game for Android.md
+++ /dev/null
@@ -1,92 +0,0 @@
-
-
How to Download GoreBox: A Guide for Gamers
-
If you are looking for a game that lets you unleash your inner demon with a vast arsenal of brutal weapons, explosive devices, interactive ragdolls, fearsome enemies, advanced turrets, vehicles, and a cutting-edge blood and dismemberment system, then you might want to check out GoreBox. In this article, we will tell you what GoreBox is, why you should play it, and how to download it for your Android or PC device.
GoreBox is a physics-based sandbox game of extreme violence developed by F²Games. It is the third installment in the GoreBox franchise, following GoreBox Remastered and GoreBox - Animosity. The game is currently available for Android devices on Google Play Store and is coming soon to PC devices on Steam.
-
Features of GoreBox
-
GoreBox has many features that make it a unique and fun game to play. Some of these features are:
-
-
A realistic blood pooling and dismemberment system that lets you see the gore and carnage you cause.
-
An active ragdoll system that makes the characters react to wounds and scenarios in a semi-realistic way.
-
A Reality Crusher, your primary weapon for building, destroying, and manipulating the environment. You can use it to create and customize your own maps, drive, fly, or blow everything up.
-
A Timsky's virus mode that induces uncontrollable rage and reduces IQ in those infected. You can face off against enemies who range from mindless drones to cunning predators.
-
A Phil Timsky mode that lets you embody the creator of the Reality Crusher and the virus. You are equipped with a chip that enhances pain resistance, allows mind control of the Reality Crusher, and makes you immune to the virus.
-
-
Why play GoreBox?
-
GoreBox is a game that offers unlimited fun and stress relief. You can unleash your creativity and revel in the destruction and mayhem you create. You can also enjoy the chaos and destruction caused by the virus and the Reality Crusher. The game has a stylized graphics style that makes it appealing and immersive. The game also has a lot of custom content that you can download or create yourself, such as maps and mods. The game is suitable for mature audiences who enjoy violent games.
-
How to download GoreBox for Android
-
If you have an Android device, you can download GoreBox from Google Play Store. Here are the steps you need to follow:
-
Step 1: Go to Google Play Store
-
Open Google Play Store on your Android device and make sure you are signed in with your Google account.
-
Step 2: Search for GoreBox
-
Type "GoreBox" in the search bar and tap on the first result that appears. It should be the one with the logo of a red skull with horns.
-
Step 3: Install GoreBox
-
Tap on the green "Install" button and wait for the download and installation process to complete. You might need to grant some permissions for the app to run properly.
-
How to download GoreBox for PC
-
If you have a PC device, you can download GoreBox from Steam. However, the game is not yet released for PC devices, so you will have to wait for the release date. Here are the steps you need to follow:
-
How to download gorebox on android
-How to download gorebox on pc
-How to download gorebox mods
-How to download gorebox plus
-How to download gorebox remastered
-How to download gorebox animosity
-How to download gorebox for free
-How to download gorebox maps
-How to download gorebox apk
-How to download gorebox from steam
-How to install gorebox on android
-How to install gorebox on pc
-How to install gorebox mods
-How to install gorebox plus
-How to install gorebox remastered
-How to install gorebox animosity
-How to install gorebox for free
-How to install gorebox maps
-How to install gorebox apk
-How to install gorebox from steam
-Gorebox download guide for android
-Gorebox download guide for pc
-Gorebox download guide for mods
-Gorebox download guide for plus
-Gorebox download guide for remastered
-Gorebox download guide for animosity
-Gorebox download guide for free
-Gorebox download guide for maps
-Gorebox download guide for apk
-Gorebox download guide for steam
-Gorebox installation tutorial for android
-Gorebox installation tutorial for pc
-Gorebox installation tutorial for mods
-Gorebox installation tutorial for plus
-Gorebox installation tutorial for remastered
-Gorebox installation tutorial for animosity
-Gorebox installation tutorial for free
-Gorebox installation tutorial for maps
-Gorebox installation tutorial for apk
-Gorebox installation tutorial for steam
-
Step 1: Go to Steam
-
Open Steam on your PC device and make sure you are signed in with your Steam account.
-
Step 2: Search for GoreBox
-
Type "GoreBox" in the search bar and click on the first result that appears. It should be the one with the logo of a red skull with horns.
-
Step 3: Add GoreBox to your wishlist
-
Click on the green "Add to your wishlist" button and wait for the confirmation message. This will help you keep track of the game's release date and any updates or discounts.
-
Step 4: Wait for the release date
-
The release date for GoreBox for PC devices is not yet announced, but it is expected to be sometime in 2023. You can check the game's Steam page for any news or announcements. You can also join the game's community and chat with other fans and developers.
-
Conclusion
-
GoreBox is a physics-based sandbox game of extreme violence that lets you create and destroy anything you want. It is a game that offers unlimited fun and stress relief for mature audiences who enjoy violent games. You can download GoreBox for Android devices from Google Play Store or wait for the release date for PC devices on Steam. We hope this guide helped you learn how to download GoreBox and enjoy this amazing game.
-
FAQs
-
-
Q: How much does GoreBox cost?
-
A: GoreBox is free to download and play on Android devices, but it contains ads and in-app purchases. The price for PC devices is not yet revealed, but it will likely be a paid game.
-
Q: What are the system requirements for GoreBox?
-
A: The system requirements for Android devices are Android 4.4 or higher, 1 GB of RAM, and 100 MB of storage space. The system requirements for PC devices are not yet announced, but they will likely be higher than Android devices.
-
Q: Is GoreBox multiplayer?
-
A: GoreBox is not multiplayer, but it has a lot of custom content that you can download or create yourself, such as maps and mods. You can also share your creations and screenshots with other players online.
-
Q: Is GoreBox safe to play?
-
A: GoreBox is safe to play as long as you are aware that it is a violent game that contains graphic depictions of blood, gore, and dismemberment. It is not suitable for children or people who are sensitive to violence. It is also not intended to promote or glorify violence in real life.
-
Q: How can I contact the developers of GoreBox?
-
A: You can contact the developers of GoreBox by visiting their website, following them on social media, or joining their Discord server. You can also leave feedback, suggestions, or bug reports on the game's Google Play Store or Steam page.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Download Ashfall with Eng Sub - The Best Site for Korean Movies.md b/spaces/congsaPfin/Manga-OCR/logs/How to Download Ashfall with Eng Sub - The Best Site for Korean Movies.md
deleted file mode 100644
index dddd33cdff64478a3b0b59324d8bab6c658ccee0..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Download Ashfall with Eng Sub - The Best Site for Korean Movies.md
+++ /dev/null
@@ -1,161 +0,0 @@
-
-
Ashfall Movie Download Eng Sub: How to Watch the Epic Disaster Film Online
-
If you are a fan of disaster movies, you might have heard of Ashfall, a 2019 South Korean film that depicts a volcanic eruption on Mount Paektu and its aftermath. The film was a huge hit in South Korea, grossing over $61 million and winning several awards. It also received positive reviews from critics and audiences for its spectacular visual effects, thrilling action scenes, and emotional drama. But how can you watch this epic film online with English subtitles? In this article, we will tell you everything you need to know about Ashfall movie download eng sub, including what the film is about, why you should watch it, and where to find it online.
-
What is Ashfall Movie About?
-
Ashfall is a disaster film that revolves around a volcanic eruption on Mount Paektu, a mountain that lies on the border between China and North Korea. The eruption causes massive earthquakes and tsunamis that threaten the lives of millions of people on the Korean peninsula. To prevent another catastrophe, a team of experts from South Korea and North Korea join forces to stop the volcano from erupting again. However, they face many challenges and dangers along the way, as well as personal conflicts and political tensions.
The film begins with Mount Paektu erupting for the first time in over a century, causing chaos and panic in both North and South Korea. Jeon Yoo-kyung (Jeon Hye-jin), a senior official in the South Korean government, plans an operation based on a theory by Professor Kang Bong-rae (Ma Dong-seok), who had studied the volcano and its possible future eruptions. The operation involves detonating a nuclear device in a mine near the volcano's caldera, which would relieve the pressure and prevent another eruption.
-
Jo In-chang (Ha Jung-woo), a captain of a special forces team, is assigned to lead the operation. He contacts Lee Joon-pyeong (Lee Byung-hun), a former North Korean agent who has vital information about the location of the mine. However, Joon-pyeong has his own agenda and does not trust anyone. Meanwhile, Jo In-chang's pregnant wife Choi Ji-young (Bae Suzy) is alone in Seoul and struggling to survive amidst the disaster.
-
Jo In-chang and his team parachute into North Korea and rescue Joon-pyeong from a prison. They then head to a power station, where they extract a piece of uranium from a nuclear missile. This alerts the American forces in South Korea, who send soldiers to stop them from delivering the uranium to some Chinese gangsters who have agreed to help them enter the mine. Along the way, they encounter various obstacles and enemies, as well as unexpected allies and friends.
-
Will they be able to reach the mine in time and stop the volcano from erupting again? Will they be able to reunite with their loved ones and survive the disaster? You will have to watch the movie to find out.
-
The Cast and Crew of Ashfall Movie
-
Ashfall boasts an impressive cast of talented and popular actors, who deliver captivating performances and bring their characters to life. The main cast includes:
-
-
Lee Byung-hun as Lee Joon-pyeong, a former North Korean agent who holds the key to the operation. He is cynical, ruthless, and unpredictable, but also has a soft spot for his family and friends.
-
Ha Jung-woo as Jo In-chang, a captain of a special forces team who leads the operation. He is loyal, courageous, and determined, but also faces a dilemma between his duty and his love for his wife.
-
Ma Dong-seok as Professor Kang Bong-rae, a geologist who proposes the theory of stopping the volcano. He is smart, eccentric, and passionate, but also has a dark past and a secret motive.
-
Jeon Hye-jin as Jeon Yoo-kyung, a senior official in the South Korean government who plans the operation. She is calm, confident, and competent, but also has to deal with the pressure and politics of her position.
-
Bae Suzy as Choi Ji-young, Jo In-chang's pregnant wife who is trapped in Seoul. She is sweet, optimistic, and resilient, but also faces many dangers and difficulties in the disaster zone.
-
-
The film was directed by Lee Hae-jun and Kim Byung-seo, who are both experienced and acclaimed filmmakers. Lee Hae-jun is known for his comedy-drama films such as Castaway on the Moon and My Dictator, while Kim Byung-seo is known for his cinematography work in films such as The Terror Live and Cold Eyes. They collaborated to create a film that combines humor, action, drama, and spectacle in a balanced and engaging way.
-
ashfall korean movie download with english subtitles
-ashfall 2019 full movie eng sub free download
-ashfall movie online watch with eng sub
-ashfall movie english subtitle download srt
-ashfall movie torrent download with english sub
-ashfall full movie hd download eng sub
-ashfall movie download in english dubbed
-ashfall korean movie eng sub watch online free
-ashfall movie download 480p with english subtitles
-ashfall movie download 720p with english subtitles
-ashfall movie download 1080p with english subtitles
-ashfall movie download mp4 with english subtitles
-ashfall movie download mkv with english subtitles
-ashfall movie download blu ray with english subtitles
-ashfall movie download google drive with english subtitles
-ashfall korean movie eng sub download link
-ashfall korean movie eng sub free streaming
-ashfall korean movie eng sub dailymotion
-ashfall korean movie eng sub youtube
-ashfall korean movie eng sub reddit
-ashfall korean movie eng sub kissasian
-ashfall korean movie eng sub dramacool
-ashfall korean movie eng sub viu
-ashfall korean movie eng sub netflix
-ashfall korean movie eng sub iQIYI
-ashfall korean action movie download with english subtitles
-ashfall korean adventure movie download with english subtitles
-ashfall korean thriller movie download with english subtitles
-ashfall korean drama movie download with english subtitles
-ashfall lee byung hun movie download with english subtitles
-ashfall ha jung woo movie download with english subtitles
-ashfall ma dong seok movie download with english subtitles
-ashfall jeon hye jin movie download with english subtitles
-ashfall bae suzy movie download with english subtitles
-ashfall 2019 korean disaster film download with english subtitles
-how to download ashfall movie with english subtitles
-where to download ashfall movie with english subtitles
-best site to download ashfall movie with english subtitles
-legal way to download ashfall movie with english subtitles
-safe way to download ashfall movie with english subtitles
-
The Reception and Awards of Ashfall Movie
-
Ashfall was released on December 19, 2019 in South Korea, where it became a box office success. It attracted over 8.2 million viewers and grossed over $61 million, making it the sixth highest-grossing film of 2019 in South Korea. It also received positive reviews from critics and audiences, who praised its visual effects, action scenes, acting performances, and emotional impact. The film has a rating of 6.2 out of 10 on IMDb and 7.4 out of 10 on Naver Movie.
-
The film also won several awards and nominations at various film festivals and ceremonies. Some of the awards that it won are:
-
-
Best Film Editing at the 56th Baeksang Arts Awards
-
Best Visual Effects at the 56th Grand Bell Awards
-
Best Visual Effects at the 40th Blue Dragon Film Awards
-
Best Action Film at the 39th Golden Cinema Film Festival
-
Best Action Film at the 15th Seoul International Extreme-Short Image & Film Festival
-
-
Why You Should Watch Ashfall Movie with Eng Sub?
-
If you are still not convinced that Ashfall is a movie worth watching, here are some reasons why you should give it a try with English subtitles:
-
The Stunning Visual Effects and Cinematography of Ashfall Movie
-
One of the most impressive aspects of Ashfall is its visual effects and cinematography, which create a realistic and immersive depiction of the volcanic disaster and its aftermath. The film used advanced computer-generated imagery (CGI) and practical effects to create scenes such as the eruption of Mount Paektu, the collapse of buildings, the flooding of streets, and the explosion of nuclear missiles. The film also used aerial shots, drone shots, crane shots, and handheld shots to capture the scale and scope of the disaster from different angles and perspectives. The film's visual effects team consisted of over 500 people from South Korea, China, Japan, Canada, and New Zealand. The film's cinematographer was Kim Ji-yong, who is known for his work in films such as A Taxi Driver and The Age of Shadows.
-
The Thrilling Action and Drama of Ashfall Movie
-
Ashfall is not just a disaster film, but also a film that combines action and drama in a compelling and exciting way. The film features many thrilling and tense action scenes, such as car chases, shootouts, fistfights, and explosions. The film also explores the emotional and psychological aspects of the characters, such as their motivations, relationships, conflicts, and dilemmas. The film shows how the disaster affects the characters' lives, choices, and values, and how they cope with the challenges and risks that they face. The film also touches on themes such as patriotism, cooperation, sacrifice, and family.
-
The Cultural and Historical Significance of Ashfall Movie
-
Ashfall is also a film that has cultural and historical significance, especially for Korean audiences. The film is based on the real-life Mount Paektu, which is a sacred and symbolic mountain for both North and South Koreans. The mountain is considered to be the birthplace of the Korean nation and the origin of the Korean people. The film also depicts the relationship and cooperation between North and South Korea, which is a sensitive and complex issue in the current political climate. The film shows how the disaster brings the two countries together, despite their differences and conflicts. The film also reflects on the history and future of the Korean peninsula, and the hope for peace and reunification.
-
Where to Watch Ashfall Movie with Eng Sub Online?
-
Now that you know what Ashfall is about and why you should watch it, you might be wondering where you can watch it online with English subtitles. Fortunately, there are several options available for you to choose from. Here are some of the best platforms where you can watch Ashfall movie download eng sub:
-
Bilibili: A Free Streaming Platform with Eng Sub
-
Bilibili is a Chinese video-sharing platform that offers a variety of content, including anime, movies, TV shows, music, games, and more. Bilibili is also one of the platforms where you can watch Ashfall for free with English subtitles. Bilibili has a large and active community of users who upload and share videos, as well as comment and interact with each other.
-
How to Watch Ashfall Movie on Bilibili?
-
To watch Ashfall on Bilibili, you need to follow these steps:
Create an account or log in with your existing account.
-
Search for Ashfall in the search bar or browse through the categories.
-
Select the video that you want to watch and click on it.
-
Enjoy watching Ashfall with English subtitles.
-
-
What are the Benefits of Watching Ashfall Movie on Bilibili?
-
Some of the benefits of watching Ashfall on Bilibili are:
-
-
You can watch it for free without any subscription or registration fees.
-
You can watch it with high-quality video and audio.
-
You can watch it with accurate and synchronized English subtitles.
-
You can watch it with other features such as bullet comments, danmaku, stickers, gifts, etc.
-
You can watch it with other users who share your interests and opinions.
-
-
iQIYI: A Premium Streaming Service with Eng Sub
-
iQIYI is a Chinese online video platform that provides a variety of content, including movies, TV shows, dramas, variety shows, documentaries, animations, etc. iQIYI is also one of the platforms where you can watch Ashfall with English subtitles. iQIYI has a large and diverse library of content that caters to different tastes and preferences.
-
How to Watch Ashfall Movie on iQIYI?
-
To watch Ashfall on iQIYI, you need to follow these steps:
Create an account or log in with your existing account.
-
Select your preferred language (English or Chinese) in the settings.
-
Search for Ashfall in the search bar or browse through the categories.
-
Select the video that you want to watch and click on it.
-
Choose the option to watch it with English subtitles.
-
Enjoy watching Ashfall with English subtitles.
-
-
What are the Benefits of Watching Ashfall Movie on iQIYI?
-
Some of the benefits of watching Ashfall on iQIYI are:
-
-
You can watch it with high-definition video and Dolby sound.
-
You can watch it with professional and reliable English subtitles.
-
You can watch it with other features such as smart recommendations, offline downloads, multi-screen viewing, etc.
-
You can watch it with other content that suits your interests and preferences.
-
You can watch it with a low-cost subscription or a free trial.
-
-
JustWatch: A Search Engine for Streaming Options with Eng Sub
-
JustWatch is a website and app that helps you find where to watch movies and TV shows online. JustWatch is also one of the platforms where you can find Ashfall with English subtitles. JustWatch has a comprehensive and updated database of streaming services and content that covers over 40 countries and regions.
-
How to Watch Ashfall Movie on JustWatch?
-
To watch Ashfall on JustWatch, you need to follow these steps:
Search for Ashfall in the search bar or browse through the genres.
-
Select the movie that you want to watch and click on it.
-
Choose the streaming service that offers Ashfall with English subtitles.
-
Enjoy watching Ashfall with English subtitles.
-
-
What are the Benefits of Watching Ashfall Movie on JustWatch?
-
Some of the benefits of watching Ashfall on JustWatch are:
-
-
You can find the best streaming option for Ashfall with English subtitles among various services and platforms.
-
You can compare the prices, quality, and availability of different streaming options for Ashfall.
-
You can discover other movies and TV shows that are similar to Ashfall.
-
You can get personalized recommendations based on your preferences and watch history.
-
You can create a watchlist and track your progress of watching Ashfall.
-
-
Conclusion
-
Ashfall is a disaster film that tells the story of a volcanic eruption on Mount Paektu and its consequences. The film features an amazing cast, stunning visual effects, thrilling action scenes, and emotional drama. The film also has cultural and historical significance, as it depicts the relationship between North and South Korea. If you want to watch this epic film online with English subtitles, you can choose from several platforms such as Bilibili, iQIYI, or JustWatch. We hope that this article has helped you learn more about Ashfall movie download eng sub, and that you will enjoy watching it.
-
FAQs
-
Here are some frequently asked questions about Ashfall movie download eng sub:
-
-
Is Ashfall movie based on a true story?
-
No, Ashfall movie is not based on a true story. However, it is inspired by the real-life Mount Paektu, which is a volcanic mountain that lies on the border between China and North Korea. The mountain has erupted several times in history, most recently in 1903. Some scientists have speculated that the mountain could erupt again in the future, causing a major disaster for both countries.
-
Is Ashfall movie available on Netflix?
-
No, Ashfall movie is not available on Netflix. However, you can find it on other streaming services such as Bilibili, iQIYI, or JustWatch. You can also buy or rent it on platforms such as Amazon Prime Video, Google Play Movies, YouTube, or iTunes.
-
How long is Ashfall movie?
-
Ashfall movie has a runtime of 128 minutes. It is divided into two parts: Part 1: The Eruption (64 minutes) and Part 2: The Aftermath (64 minutes).
Who sings the theme song of Ashfall movie?
-
The theme song of Ashfall movie is called Together, and it is sung by Taeyeon, a famous South Korean singer and member of the girl group Girls' Generation. The song is a powerful and emotional ballad that expresses the hope and courage of the characters in the face of the disaster. The song was released as a digital single on December 15, 2019, and it topped the charts in South Korea and China.
-
What is the meaning of the title Ashfall?
-
The title Ashfall refers to the phenomenon of volcanic ash falling from the sky after a volcanic eruption. Volcanic ash is a mixture of fine particles of rock, minerals, and glass that are ejected from a volcano. Volcanic ash can have harmful effects on the environment, health, and infrastructure, as it can cover large areas, reduce visibility, damage buildings and vehicles, contaminate water and crops, and cause respiratory problems. The title Ashfall also symbolizes the dark and bleak situation that the characters face in the film.
-
Is there a sequel to Ashfall movie?
-
No, there is no sequel to Ashfall movie. However, there are some rumors and speculations that the filmmakers might consider making a sequel or a spin-off based on the popularity and success of the film. Some fans have also expressed their interest and curiosity in seeing more stories and characters related to Ashfall movie.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Pet Idle Hack APK for Android and IOS Devices.md b/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Pet Idle Hack APK for Android and IOS Devices.md
deleted file mode 100644
index 7feba9c4437980e4cbfd5e776e3807da729974b9..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Pet Idle Hack APK for Android and IOS Devices.md
+++ /dev/null
@@ -1,145 +0,0 @@
-
-
Pet Idle Hack APK: How to Get Unlimited Coins and Gems for Your Pets
-
If you love taking care of virtual animals, you might have heard of Pet Idle, a simulation game where you can adopt, feed, play with, and train various pets. You can also build and decorate your house, take care of your garden, collect insects, and fish. However, if you want to enjoy all the features of the game without spending real money or waiting for hours, you might be interested in using a hack APK. In this article, we will tell you everything you need to know about Pet Idle Hack APK, including what it is, why you should use it, how to download and install it, how to use it, and some tips and tricks for playing the game with it. We will also compare the pros and cons of using the hack APK and answer some frequently asked questions.
Pet Idle is a casual simulation game developed by Alphaquest Games. It was released in December 2021 on Steam and is available for free. The game allows you to take care of various virtual animals, such as dogs, cats, bunnies, llamas, and even magical beasts. You can choose from 19 different pet types, each with a unique personality that affects the gameplay. You can also edit the color of each pet to make them look more adorable.
-
The game has a pet needs system that requires you to fulfill their hunger, thirst, sleep, hygiene, walking, and fun. You can feed them with different foods, give them water, put them to bed, bathe them, take them for walks, and play with them. The happier your pet is, the more money you make. You can use the money to buy more pets, furniture, toys, food, water, and other items.
-
You can also build and expand your house to accommodate more pets. You can decorate the interior of your house with various wallpapers, floors, windows, doors, lamps, plants, paintings, rugs, and other items. You can also take care of your garden by watering different plants and harvesting fruits. You can also collect different rare insects by placing traps in your garden. Additionally, you can fish in a pond with a fishing rod and catch fish of different types.
-
Your pet will also gain levels of experience as they have a good life. They will learn several tricks such as sitting, rolling, jumping, running, and more. You can teach them new skills by using skill books that you can buy or find in the game. You can also use robots and drones to help you care for your pets.
-
What is a Hack APK?
-
A hack APK is a modified version of an original application that has been altered to provide some extra features or advantages that are not available in the official version. For example, a hack APK for Pet Idle can give you unlimited coins and gems that you can use to buy anything in the game without spending real money or waiting for hours. It can also unlock all the pets and items that are otherwise locked behind paywalls or level requirements. It can also allow you to customize your pets by changing their size, speed, and other attributes. A hack APK can also remove ads and other annoying features from the game.
-
A hack APK works by bypassing the security checks and verification processes of the original application. It can also modify the game data and files to alter the game mechanics and parameters. A hack APK usually requires you to download and install it from a third-party source, such as a website or a file-sharing platform. You may also need to enable some permissions and settings on your device to allow the installation of unknown sources.
-
pet idle mod apk unlimited money
-pet idle cheat apk download
-pet idle hack version apk
-pet idle simulation game mod apk
-pet idle android hack apk
-pet idle latest mod apk
-pet idle free hack apk
-pet idle hacked apk 2023
-pet idle mod apk for ios
-pet idle online hack apk
-pet idle offline hack apk
-pet idle hack tool apk
-pet idle no root hack apk
-pet idle hack generator apk
-pet idle vip mod apk
-pet idle pro hack apk
-pet idle premium hack apk
-pet idle full hack apk
-pet idle mega mod apk
-pet idle unlimited gems hack apk
-pet idle mod menu apk
-pet idle god mode hack apk
-pet idle unlock all pets hack apk
-pet idle unlimited coins hack apk
-pet idle infinite money hack apk
-pet idle modded apk 2023
-pet idle cheats and hacks apk
-pet idle easy hack apk
-pet idle best hack apk
-pet idle working hack apk
-pet idle update hack apk
-pet idle new mod apk
-pet idle old version hack apk
-pet idle original hack apk
-pet idle real hack apk
-pet idle legit hack apk
-pet idle safe hack apk
-pet idle secure hack apk
-pet idle trusted hack apk
-pet idle verified hack apk
-pet idle no survey hack apk
-pet idle no ads hack apk
-pet idle no virus hack apk
-pet idle no malware hack apk
-pet idle no ban hack apk
-pet idle anti ban hack apk
-pet idle anti cheat hack apk
-
Why Use a Hack APK for Pet Idle?
-
There are many reasons why you might want to use a hack APK for Pet Idle. Some of the benefits are:
-
-
You can get unlimited coins and gems that you can use to buy anything in the game without spending real money or waiting for hours. You can buy more pets, furniture, toys, food, water, skill books, robots, drones, and other items. You can also upgrade your house and garden to make them bigger and more beautiful.
-
You can unlock all the pets and items that are otherwise locked behind paywalls or level requirements. You can choose from 19 different pet types, each with a unique personality that affects the gameplay. You can also edit the color of each pet to make them look more adorable. You can also access all the furniture, toys, food, water, skill books, robots, drones, and other items that are available in the game.
-
You can customize your pets by changing their size, speed, and other attributes. You can make your pets bigger or smaller, faster or slower, stronger or weaker, and more. You can also change their appearance by adding accessories such as hats, glasses, collars, and more.
-
You can remove ads and other annoying features from the game. You can enjoy the game without being interrupted by pop-ups, banners, videos, and other ads that may slow down your device or consume your data. You can also disable some features that may be annoying or unnecessary for you, such as notifications, sounds, vibrations, etc.
-
-
How to Download and Install Pet Idle Hack APK
-
If you want to download and install Pet Idle Hack APK on your device, you need to follow these steps:
-
-
Go to a reliable website that offers Pet Idle Hack APK for free download. You can search for it on Google or use one of these links: . Make sure that the website is safe and secure before downloading anything from it.
-
Click on the download button and wait for the file to be downloaded on your device. The file size may vary depending on the website and the version of the hack APK.
-
Once the file is downloaded, locate it on your device using a file manager app. Tap on the file and select install. You may need to enable some permissions and settings on your device to allow the installation of unknown sources. Follow the instructions on your screen to complete the installation process.
-
After the installation is done, you can launch Pet Idle Hack APK from your app drawer or home screen. You may need to grant some permissions and access to the app before using it.
-
-
How to Use Pet Idle Hack APK
-
After you have downloaded and installed Pet Idle Hack APK on your device, you can use it to enjoy all the features of the game without any limitations. Here are some of the things you can do with Pet Idle Hack APK:
-
-
To generate unlimited coins and gems, tap on the menu icon on the top right corner of the screen. Then tap on the hack icon that looks like a wrench. You will see a pop-up window where you can enter the amount of coins and gems you want to add to your account. Tap on generate and wait for a few seconds until the process is done. You will see a confirmation message when it is done.
-
To unlock all pets and items, tap on the menu icon on the top right corner of the screen. Then tap on the hack icon that looks like a wrench. You will see a pop-up window where you can toggle on or off different options such as unlock all pets, unlock all items, unlock all skills, etc. Tap on apply and wait for a few seconds until the process is done. You will see a confirmation message when it is done.
-
To customize your pets by changing their size, speed, and other attributes, tap on the menu icon on the top right corner of the screen. Then tap on the hack icon that looks like a wrench. You will see a pop-up window where you can adjust different sliders such as size, speed, strength, intelligence, etc. Tap on apply and wait for a few seconds until the process is done. You will see a confirmation message when it is done.
-
To change the appearance of your pets by adding accessories such as hats, glasses, collars, and more, tap on the pet icon on the bottom left corner of the screen. Then tap on the edit icon that looks like a pencil. You will see a pop-up window where you can choose from different categories of accessories such as head, eyes, neck, body, etc. Tap on the accessory you want to add and drag it to your pet. You can also resize and rotate the accessory by using two fingers. Tap on save when you are done.
-
-
Tips and Tricks for Playing Pet Idle with Hack APK
-
Playing Pet Idle with hack APK can be fun and easy, but there are some tips and tricks that can help you make the most out of it. Here are some of them:
-
-
To level up your pets fast, feed them with high-quality food that gives them more experience points. You can buy food with coins or gems, or find them in the game. You can also use skill books to teach them new skills that increase their stats and abilities.
-
To catch rare fish, use a better fishing rod that has a higher chance of catching rare fish. You can buy fishing rods with coins or gems, or find them in the game. You can also use bait to attract more fish to your pond.
-
To train your pets and unlock new skills, play with them using different toys that stimulate their different needs. You can buy toys with coins or gems, or find them in the game. You can also use robots and drones to help you train your pets automatically.
-
To decorate your house, use different furniture and items that match your style and preference. You can buy furniture and items with coins or gems, or find them in the game. You can also use wallpapers, floors, windows, doors, lamps, plants, paintings, rugs, and other items to customize your interior.
-
-
Pros and Cons of Using Pet Idle Hack APK
-
Using Pet Idle Hack APK has its pros and cons that you should be aware of before using it. Here are some of them:
-
| Pros | Cons |
| --- | --- |
| You can get unlimited coins and gems that you can use to buy anything in the game without spending real money or waiting for hours. | You may lose the sense of challenge and accomplishment that comes from playing the game normally. |
| You can unlock all pets and items that are otherwise locked behind paywalls or level requirements. | You may miss out on some of the fun and excitement of discovering new pets and items as you progress in the game. |
| You can customize your pets by changing their size, speed, and other attributes. | You may make your pets so powerful or unrealistic that they lose their charm and personality. |
| You can remove ads and other annoying features from the game. | You may deprive the developers of the revenue and support that they need to maintain and improve the game. |
| You can enjoy all the features of the game without any limitations. | You may encounter some bugs or glitches that may affect your gameplay or device performance. |
-
-
Conclusion
-
Pet Idle is a simulation game where you can take care of various virtual animals. You can also build and decorate your house, take care of your garden, collect insects, and fish. However, if you want to enjoy all the features of the game without spending real money or waiting for hours, you might be interested in using a hack APK. A hack APK is a modified version of an original application that has been altered to provide some extra features or advantages that are not available in the official version. For example, a hack APK for Pet Idle can give you unlimited coins and gems that you can use to buy anything in the game without spending real money or waiting for hours. It can also unlock all the pets and items that are otherwise locked behind paywalls or level requirements. It can also allow you to customize your pets by changing their size, speed, and other attributes. A hack APK can also remove ads and other annoying features from the game.
-
In this article, we have told you everything you need to know about Pet Idle Hack APK, including what it is, why you should use it, how to download and install it, how to use it, and some tips and tricks for playing the game with it. We have also compared the pros and cons of using the hack APK and answered some frequently asked questions.
-
We hope that this article has been helpful and informative for you. If you want to try Pet Idle Hack APK for yourself, you can download it from one of the links we provided above. However, we advise you to use it at your own risk and discretion, as it may violate the terms and conditions of the game and cause some issues with your device or account. We also recommend that you support the developers of Pet Idle by playing the game normally and purchasing some items or coins if you can afford it. Pet Idle is a fun and relaxing game that deserves your appreciation and respect.
-
Thank you for reading this article. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. Happy gaming!
-
FAQs
-
Here are some of the frequently asked questions about Pet Idle Hack APK:
-
-
Is Pet Idle Hack APK safe to use?
-
Pet Idle Hack APK is not an official version of the game and has not been verified or approved by the developers or Google Play Store. Therefore, it may not be safe to use and may contain viruses, malware, or other harmful elements that may damage your device or compromise your personal information. You should always download and install hack APKs from trusted sources and scan them with antivirus software before using them.
-
Is Pet Idle Hack APK legal to use?
-
Pet Idle Hack APK is not legal to use and may violate the terms and conditions of the game and Google Play Store. By using a hack APK, you are cheating and gaining an unfair advantage over other players who play the game normally. You are also depriving the developers of their revenue and support that they need to maintain and improve the game. You may face some consequences if you are caught using a hack APK, such as getting banned from the game or losing your account.
-
Will Pet Idle Hack APK work on my device?
-
Pet Idle Hack APK may not work on all devices or versions of Android. It may depend on factors such as your device model, operating system, compatibility, storage space, etc. You should always check the requirements and specifications of the hack APK before downloading and installing it on your device. You should also make sure that your device is rooted or jailbroken if needed.
-
How can I update Pet Idle Hack APK?
-
Pet Idle Hack APK may not update automatically like the official version of the game. You may need to manually download and install the latest version of the hack APK from a third-party source whenever there is a new update available. However, you should be careful as some updates may not be compatible with the hack APK or may fix some of the features or advantages that the hack APK provides.
-
How can I uninstall Pet Idle Hack APK?
-
If you want to uninstall Pet Idle Hack APK from your device, you can follow these steps:
-
-
Go to your device settings and tap on apps or applications.
-
Find Pet Idle Hack APK from the list of apps and tap on it.
-
Tap on uninstall and confirm your choice.
-
Wait for a few seconds until the app is uninstalled from your device.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Xperia Music (Walkman) APK - Listen to Your Music with Style and Quality.md b/spaces/congsaPfin/Manga-OCR/logs/Xperia Music (Walkman) APK - Listen to Your Music with Style and Quality.md
deleted file mode 100644
index e2e9ffe1ae23f351602cb0255a23fa7f4d80fa06..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Xperia Music (Walkman) APK - Listen to Your Music with Style and Quality.md
+++ /dev/null
@@ -1,71 +0,0 @@
-
-
Music Xperia APK: A Sony Music Player for Any Android Device
-
Do you love listening to music on your Android device? Do you want a music player that is simple, elegant, and powerful? If yes, then you should try Music Xperia APK. Music Xperia APK is a Sony music player that you can install on any Android device. It is not just a regular music player; it is a music player that offers you a premium listening experience.
-
What is Music Xperia APK?
-
Music Xperia APK is a modified version of the official Sony music player that comes pre-installed on Sony Xperia devices. It is developed by Senju Studio, an independent developer who wanted to share the Sony music player with all Android users. Music Xperia APK has all the features of the original Sony music player, such as:
A powerful sound engine that enhances the audio quality
-
A variety of sound effects and equalizers to customize your sound
-
A smart playlist feature that automatically creates playlists based on your mood, genre, artist, etc.
-
A download feature that lets you download music from online sources
-
A sleep timer feature that lets you set a timer to stop playing music
-
A widget feature that lets you control your music from your home screen
-
-
Why Use Music Xperia APK?
-
Music Xperia APK is not just another music player; it is a music player that gives you a superior listening experience. Here are some reasons why you should use Music Xperia APK over other music players:
-
-
It is compatible with any Android device running Android 4.4 or higher
-
It supports various audio formats such as MP3, WAV, FLAC, OGG, etc.
-
It has a simple and elegant design that matches the Sony style
-
How to Download and Install Music Xperia APK?
-
Downloading and installing Music Xperia APK is very easy and fast. You just need to follow these simple steps:
-
Go to the official website of Senju Studio and download the latest version of Music Xperia APK. You can also scan the QR code below to download it directly to your device.
-
Once the download is complete, go to your device settings and enable the option to install apps from unknown sources. This will allow you to install Music Xperia APK without any problems.
-
Locate the downloaded Music Xperia APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
-
Once the installation is done, you can launch Music Xperia APK from your app drawer or home screen and enjoy your music.
-
How to Use Music Xperia APK?
-
Using Music Xperia APK is very simple and fun. You can use it to play music, create playlists, adjust settings, and more. Here are some tips on how to use Music Xperia APK:
-
-
To play music, you can browse your music library by albums, artists, genres, folders, or songs. You can also use the search function to find your favorite tracks. To play a song, just tap on it and it will start playing. You can also swipe left or right on the album art to skip or go back to the previous or next song.
-
To create playlists, you can tap on the plus icon at the bottom right corner of the screen and select "Create playlist". You can then name your playlist and add songs to it by tapping on the plus icon next to each song. You can also edit or delete your playlists by tapping on the three dots icon next to each playlist.
-
To adjust settings, you can tap on the gear icon at the top right corner of the screen and access various options such as sound effects, equalizer, sleep timer, download settings, etc. You can also change the theme of Music Xperia APK by tapping on the paintbrush icon at the bottom left corner of the screen and choosing from different colors.
-
-
Frequently Asked Questions about Music Xperia APK
-
Here are some of the most common questions and answers about Music Xperia APK:
-
-
-
Question
-
Answer
-
-
-
Is Music Xperia APK safe to use?
-
Yes, Music Xperia APK is safe to use. It does not contain any viruses, malware, or spyware. It is also verified by Google Play Protect, which ensures that it does not harm your device or data.
-
-
-
Is Music Xperia APK free to use?
-
Yes, Music Xperia APK is free to use. You do not need to pay any money to download or use it. However, you may see some ads in the app, which help support the developer and keep the app updated.
-
-
-
Does Music Xperia APK require root access?
-
No, Music Xperia APK does not require root access. You can install and use it on any Android device without rooting it.
-
-
-
Can I use Music Xperia APK with other music streaming services?
-
No, Music Xperia APK only works with local music files stored on your device or SD card. It does not support online music streaming services such as Spotify, YouTube Music, etc.
-
XPERIA Music (Walkman) APK free download
-Sony Music Player - Xperia Player APK
-How to install XPERIA Music (Walkman) APK on android
-Xperia Player - Walkman Player features and reviews
-XPERIA Music (Walkman) APK mod version
-Sony Music Player - Xperia Player with equalizer
-Best music apps for XPERIA devices
-Xperia Player - Walkman Player custom background skin
-XPERIA Music (Walkman) APK latest update
-Sony Music Player - Xperia Player offline mode
-XPERIA Music (Walkman) APK vs Spotify
-Xperia Player - Walkman Player support formats
-XPERIA Music (Walkman) APK for PC
-Sony Music Player - Xperia Player ratings and feedback
-XPERIA Music (Walkman) APK alternatives
-Xperia Player - Walkman Player tips and tricks
-XPERIA Music (Walkman) APK premium features
-Sony Music Player - Xperia Player compatibility issues
-XPERIA Music (Walkman) APK bugs and fixes
-Xperia Player - Walkman Player user guide
-XPERIA Music (Walkman) APK advantages and disadvantages
-Sony Music Player - Xperia Player comparison with other music players
-XPERIA Music (Walkman) APK requirements and specifications
-Xperia Player - Walkman Player FAQs and answers
-XPERIA Music (Walkman) APK pros and cons
-Sony Music Player - Xperia Player screenshots and videos
-XPERIA Music (Walkman) APK download link and instructions
-Xperia Player - Walkman Player themes and widgets
-XPERIA Music (Walkman) APK performance and battery usage
-Sony Music Player - Xperia Player playlist and library management
-XPERIA Music (Walkman) APK sound quality and optimization
-Xperia Player - Walkman Player shuffle and repeat modes
-XPERIA Music (Walkman) APK security and privacy issues
-Sony Music Player - Xperia Player online and offline streaming
-XPERIA Music (Walkman) APK benefits and drawbacks
-Xperia Player - Walkman Player customization and personalization options
-XPERIA Music (Walkman) APK reviews and testimonials
-Sony Music Player - Xperia Player problems and solutions
-XPERIA Music (Walkman) APK popularity and ranking
-Xperia Player - Walkman Player recommendations and suggestions
-
-
-
Can I share my feedback or suggestions for Music Xperia APK?
-
Yes, you can share your feedback or suggestions for Music Xperia APK by contacting the developer via email or social media. You can also rate and review the app on Google Play Store or other platforms.
-
-
-
Conclusion
-
If you are looking for a music player that offers you a premium listening experience on your Android device, then you should try Music Xperia APK. It is a Sony music player that you can install on any Android device. It has a sleek and intuitive user interface, a powerful sound engine, a variety of sound effects and equalizers, a smart playlist feature, a download feature, a sleep timer feature, a widget feature, and more. It is compatible with various audio formats, supports offline playback, and does not require root access. It is also free to use and safe to install. Music Xperia APK is the ultimate music player for Android users who love Sony music. Download it now and enjoy your music like never before.
-
Do you have any questions or comments about Music Xperia APK? Feel free to share them with us in the comment section below. We would love to hear from you.
How to Build a Bedini Motor and Download the PDF Guide
-
A Bedini motor is a type of pulse motor that uses a transistor to switch a coil on and off, creating a magnetic field that rotates a wheel with magnets. The coil also charges a battery bank with the excess energy generated by the pulses. The Bedini motor was invented by John Bedini, an American inventor and researcher in the field of free energy and alternative energy sources.
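To make the energy bookkeeping of one pulse concrete, here is a minimal Python sketch that treats the coil as an ideal series RL circuit: while the transistor conducts, the current ramps up, and the energy stored in the coil's magnetic field at switch-off is the most that a single pulse can hand over to the charge battery. All component values below are illustrative assumptions, not specifications from Bedini or the PDF guide.

```python
import math

# Illustrative values only -- assumptions for the sketch, not figures from the guide.
V = 12.0       # drive battery voltage (V)
R = 2.0        # coil winding resistance (ohms)
L = 0.05       # coil inductance (H)
t_on = 0.002   # time the transistor keeps the coil energized (s)

tau = L / R                                     # RL time constant
i_peak = (V / R) * (1 - math.exp(-t_on / tau))  # coil current at switch-off
e_stored = 0.5 * L * i_peak ** 2                # energy in the magnetic field (J)
e_drawn_max = (V ** 2 / R) * t_on               # upper bound on energy taken from the drive battery (J)

print(f"peak coil current:       {i_peak:.3f} A")
print(f"energy stored per pulse: {e_stored * 1e3:.2f} mJ")
print(f"energy drawn per pulse:  at most {e_drawn_max * 1e3:.2f} mJ")
```

Whatever values you plug in, the stored energy comes out smaller than the energy drawn from the drive battery, which matches the caution below that the circuit converts and redistributes energy rather than creating it.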
-
If you want to build your own Bedini motor and learn more about how it works, you can download a PDF guide that provides schematics, material lists, assembly instructions, operating procedures and tips. The PDF guide is based on the work of Sanja Smud, who published a document titled "Bedini - Schematics and Starter Guide" on Academia.edu[^1^]. The guide also includes some updates and modifications by Lee, another experimenter who shared his insights on the Bedini motor.
To download the PDF guide, you can visit this link[^2^] and follow the instructions. You will need to create an account on Kit.co, a platform that allows users to share their recommendations for products and services. You will also need to provide your email address and confirm your subscription to receive the download link. Alternatively, you can also access the PDF guide directly from this link[^3^], which is hosted on Sway.office.com, a Microsoft service that lets users create interactive presentations.
-
Building a Bedini motor can be a fun and educational project that can teach you about electromagnetism, electronics and energy conservation. You can also use it to charge your batteries or power other devices. However, please be careful when handling high voltages and currents, and follow the safety precautions in the guide. Also, do not expect to get more energy out of the system than you put in, as that would violate the laws of physics. The Bedini motor is not a perpetual motion machine or a free energy device, but rather an efficient and innovative way of converting electrical energy into mechanical energy and vice versa.
-
-
If you are interested in learning more about the Bedini motor and other related topics, you can also check out some of the references listed at the end of the PDF guide. Some of them are books and articles by John Bedini himself, where he explains his theories and experiments in detail. Others are websites and forums where you can find more information and discussions about the Bedini motor and other pulse motors. You can also watch some videos on YouTube that show how to build and test different versions of the Bedini motor.
-
The Bedini motor is one of the many examples of how human creativity and curiosity can lead to new discoveries and inventions. By exploring the possibilities of alternative energy sources and technologies, we can expand our knowledge and improve our lives. The Bedini motor may not be a miracle solution to our energy problems, but it is certainly a fascinating and inspiring device that deserves more attention and recognition.
- d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/digitalxingtong/Nailv-Bert-Vits2/setup_ffmpeg.py b/spaces/digitalxingtong/Nailv-Bert-Vits2/setup_ffmpeg.py
deleted file mode 100644
index 7137ab5faebb6d80740b8c843667458f25596839..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Nailv-Bert-Vits2/setup_ffmpeg.py
+++ /dev/null
@@ -1,55 +0,0 @@
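-# This script adds the bundled .\ffmpeg\bin directory to the per-user PATH on
-# Windows: it scans the current PATH for an existing ffmpeg "bin" entry and,
-# if none is found, appends the folder to the Path value stored under
-# HKEY_CURRENT_USER\Environment via winreg.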
-import os
-import sys
-import re
-from pathlib import Path
-import winreg
-
-def check_ffmpeg_path():
- path_list = os.environ['Path'].split(';')
- ffmpeg_found = False
-
- for path in path_list:
- if 'ffmpeg' in path.lower() and 'bin' in path.lower():
- ffmpeg_found = True
- print("FFmpeg already installed.")
- break
-
- return ffmpeg_found
-
-def add_ffmpeg_path_to_user_variable():
- ffmpeg_bin_path = Path('.\\ffmpeg\\bin')
- if ffmpeg_bin_path.is_dir():
- abs_path = str(ffmpeg_bin_path.resolve())
-
- try:
- key = winreg.OpenKey(
- winreg.HKEY_CURRENT_USER,
- r"Environment",
- 0,
- winreg.KEY_READ | winreg.KEY_WRITE
- )
-
- try:
- current_path, _ = winreg.QueryValueEx(key, "Path")
- if abs_path not in current_path:
- new_path = f"{current_path};{abs_path}"
- winreg.SetValueEx(key, "Path", 0, winreg.REG_EXPAND_SZ, new_path)
- print(f"Added FFmpeg path to user variable 'Path': {abs_path}")
- else:
- print("FFmpeg path already exists in the user variable 'Path'.")
- finally:
- winreg.CloseKey(key)
- except WindowsError:
- print("Error: Unable to modify user variable 'Path'.")
- sys.exit(1)
-
- else:
- print("Error: ffmpeg\\bin folder not found in the current path.")
- sys.exit(1)
-
-def main():
- if not check_ffmpeg_path():
- add_ffmpeg_path_to_user_variable()
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
diff --git a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/modules.py b/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/modules.py
deleted file mode 100644
index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/modules.py
+++ /dev/null
@@ -1,452 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-from attentions import Encoder
-
-LRELU_SLOPE = 0.1
-
-class LayerNorm(nn.Module):
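- """Channel-wise LayerNorm for [batch, channels, time] tensors: transposes to put channels last, applies F.layer_norm with learnable gamma/beta, then transposes back."""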
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
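- """Non-causal WaveNet-style stack: dilated 1D convolutions with gated tanh/sigmoid activations, residual and skip connections, and optional global conditioning supplied through gin_channels."""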
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
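- """Invertible affine coupling layer: the first half of the channels is encoded with WN to predict a mean (and log-scale unless mean_only) used to transform the second half."""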
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
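- """Coupling layer whose element-wise transform is a piecewise rational-quadratic spline, with spline parameters predicted from the first half of the channels by a DDSConv stack."""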
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
-
-
-class TransformerCouplingLayer(nn.Module):
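- # Coupling layer with the same affine structure as ResidualCouplingLayer, but the conditioning network is a Transformer Encoder (isflow=True), optionally shared across layers via wn_sharing_parameter.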
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- n_layers,
- n_heads,
- p_dropout=0,
- filter_channels=0,
- mean_only=False,
- wn_sharing_parameter=None,
- gin_channels = 0
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
diff --git a/spaces/dolceschokolade/chatbot-mini/components/Chatbar/components/ClearConversations.tsx b/spaces/dolceschokolade/chatbot-mini/components/Chatbar/components/ClearConversations.tsx
deleted file mode 100644
index 5ac218cda6606b16888dbf5ffe9d351362db8dd4..0000000000000000000000000000000000000000
--- a/spaces/dolceschokolade/chatbot-mini/components/Chatbar/components/ClearConversations.tsx
+++ /dev/null
@@ -1,57 +0,0 @@
-import { IconCheck, IconTrash, IconX } from '@tabler/icons-react';
-import { FC, useState } from 'react';
-
-import { useTranslation } from 'next-i18next';
-
-import { SidebarButton } from '@/components/Sidebar/SidebarButton';
-
-interface Props {
- onClearConversations: () => void;
-}
-
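-// Sidebar button that asks the user to confirm before clearing all conversations.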
-export const ClearConversations: FC<Props> = ({ onClearConversations }) => {
- const [isConfirming, setIsConfirming] = useState(false);
-
- const { t } = useTranslation('sidebar');
-
- const handleClearConversations = () => {
- onClearConversations();
- setIsConfirming(false);
- };
-
- return isConfirming ? (
-
-
-.zip and the included 3DMax 2015 app (x64 only) are now available in the Plugins window under the Space plugin menu in Fusion.
-
-If you’re looking for an alternative to the 3DMax 2015 app that you can use with Fusion, check out the new Modeler menu entry in the Plugins window. There you’ll find a variety of Modeler apps that you can download, including SketchUp and Zbrush.
-
-Regarding the Fusion 12.0.2 update, it only includes a single bugfix in the Form options where the Maximize & Miniaturize buttons were not working for 3D shapes. In addition to this bugfix, Fusion 12.0.2 also includes the updates discussed above.
-
-The file format system, which also includes new content types (video, point clouds, etc.) as well as improvements to data display, will be available sometime in the next several weeks.
-
-And now that you’ve been updated, I encourage you to visit our community site and check out all the great new tutorials and content we’ve created for the new tools in Fusion 12.
-
-In this video tutorial, Shai shows you how to import an animated model into 3D Max and display it on the Global Camera viewport. Afterwards, he shows how to make adjustments and manipulate it in a viewport-independent way.
-
-Q:
-
-Is there a way to access the AutoMapper map inside an extension method?
-
-Is there a way to access the AutoMapper map inside an extension method?
-
-I'm using the following extension method:
-
-public static class MappingExtensions
-
-{
-
- public static void Populate(this IMappingExpression expression)
-
- {
-
- //Is there a way to access the AutoMapper map?
-
- //Need to add a constraint
-
- Mapper.Initialize(x => x.ConstructServicesFrom(expression));
-
- var dest = expression.Compile();
-
- Mapper.AssertConfigurationIsValid();
-
- Mapper.Assert 4fefd39f24
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Diablo 3 2012 DVD [WORK] Full Version.rar.md b/spaces/falterWliame/Face_Mask_Detection/Diablo 3 2012 DVD [WORK] Full Version.rar.md
deleted file mode 100644
index ad9d3a4b60ba113cc179090e04ce3583a17b1098..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Diablo 3 2012 DVD [WORK] Full Version.rar.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Razor 1911's version of skyrim is obviously an illegal copy. ... Cracktro of Diablo II Lord of Destruction by Razor1911, also on other games. by ... PC Gears of War 1 3 DVD5 spa eng & coop No RAR. ... Resident Evil 6 CD Key. blogspot201202resident-evil-6-keygen-crack- 2012-10v. when I open the game, I find ... 1fdad05405
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/FULL Windows 10 Permanent Activator Ultimate V4.13. 12.md b/spaces/falterWliame/Face_Mask_Detection/FULL Windows 10 Permanent Activator Ultimate V4.13. 12.md
deleted file mode 100644
index 3e7330804158b870db4016c2395f222d5f5b2456..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/FULL Windows 10 Permanent Activator Ultimate V4.13. 12.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
FULL Windows 10 Permanent Activator Ultimate v4.13. 12
-
-Red Alert 2 revenge revenge revenge 1.001 54 Vicky Donor Download movie Kickass Full Windows 10 Permanent activator Ultimate V4.13. 12. Download for free all games on PSP via torrent, Yandex disk and mail cloud, in ISO, CSO format for all models.
-For a reason not only for the game, but also for.
-Download the game Kickass Full Windows 10 via torrent for PC, which was released in 2013.
-This torrent game belongs to Action, Shooter.
-Download for free all games on PSP via torrent, Yandex disk and cloud mail, in ISO, CSO format for all models.
-Download game Kickass Full Windows 10 via torrent for PC which came out in 2013. 8a78ff9644
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Inpage Urdu 2009 Professional By Nazim.md b/spaces/falterWliame/Face_Mask_Detection/Inpage Urdu 2009 Professional By Nazim.md
deleted file mode 100644
index eb66097b91a53b6f2308458195aa23a38d95da95..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Inpage Urdu 2009 Professional By Nazim.md
+++ /dev/null
@@ -1,48 +0,0 @@
-
-
Inpage Urdu 2009 Professional by Nazim: The Ultimate Urdu Writing Software
-
If you are looking for a software that can help you write Urdu, Arabic, Persian and other languages in a beautiful Nastaliq font, then you should try Inpage Urdu 2009 Professional by Nazim. This software is a comprehensive tool that has a strong grip on these languages and offers many features and options for creating and publishing your documents.
Inpage Urdu 2009 Professional by Nazim is a word processing software that was first built in 1994. The main purpose behind the development of this tool was to create pages in languages that are not much popular and neglected by the world like Balochi, Punjabi, Arabic, Pashto, etc. It is based on the world famous Noorinastaliq font that gives your text a calligraphic elegance.
-
This software is compatible with all kinds of operating systems including Microsoft Windows 7, MS Windows XP, Win 8, and Windows 10. Separate versions also have been released for Mac OS and Linux. You can easily download the setup file for this tool and install it using simple guided steps. It works just like MS Office 2017 and Office Professional 2003.
-
What are the features of Inpage Urdu 2009 Professional by Nazim?
-
Inpage Urdu 2009 Professional by Nazim has many features that make it a powerful and versatile tool for Urdu writing. Some of these features are:
-
-
Fast and easy typing application. You can type in Urdu, Arabic, Persian and other languages with ease and accuracy.
-
The best typing tool. It has a built-in dictionary and spell check feature for Urdu and English language. It also has automatic spacing in Nastaliq font and other fonts.
-
It is easy to use with English from left to right. You can switch between Urdu and English language with a single keystroke.
-
It has the ability to communicate with other software in typing. You can import and export text from other applications like MS Word, Excel, PowerPoint, etc.
-
It has the ability to rotate text and other elements for free in any aspect. You can also customize your text with different colors, styles, sizes, etc.
-
It has training options and results. You can learn how to type in Urdu and Arabic with the help of tutorials and exercises.
-
It has a wide range of tools. You can insert images, tables, symbols, borders, etc. in your document. You can also use different page layouts, headers, footers, etc.
-
-
How to download and install Inpage Urdu 2009 Professional by Nazim?
-
If you want to download and install Inpage Urdu 2009 Professional by Nazim on your PC or Android phone, you can follow these simple steps:
-
-
-
Go to the official website of Inpage Urdu 2009 Professional by Nazim or click on this link: https://www.inpage.com.pk/download/
-
Select the version of the software that suits your operating system and click on the download button.
-
Save the setup file on your device and run it as an administrator.
-
Follow the instructions on the screen and complete the installation process.
-
Launch the software and enjoy writing in Urdu and other languages.
-
-
What are the reviews of Inpage Urdu 2009 Professional by Nazim?
-
Inpage Urdu 2009 Professional by Nazim has received many positive reviews from its users and critics. Many people have praised its features, performance, compatibility, and ease of use. Here are some of the reviews of Inpage Urdu 2009 Professional by Nazim from different sources:
-
-
"Inpage Urdu 2009 Professional by Nazim is a great software for writing Urdu, Arabic, Persian and other languages in a beautiful Nastaliq font. It has many features and options that make it a powerful and versatile tool for creating and publishing your documents. You can download it from the official website or from the link given above. If you have any questions or feedback about this software, feel free to leave a comment below."
-Urdu Wisdom
-
-
-
"I have been using Inpage Urdu 2009 Professional by Nazim for a long time and I am very satisfied with it. It is very easy to use and has a lot of features that make my work easier and faster. I can write in Urdu, Arabic, Persian and other languages with accuracy and elegance. I can also import and export text from other applications like MS Word, Excel, PowerPoint, etc. I highly recommend this software to anyone who wants to write in these languages."
-Mike Breitling
-
-
-
"!FULL! Inpage Urdu 2009 Professional by Nazim is one of the best products on Kit. It is a comprehensive tool that has a strong grip on Urdu, Arabic, English, Persian and many other languages. It is a strong publishing tool. Based on the world famous Noorinastaliq font this tool has made it easy to write in Urdu and Arabic languages."
-Kit.co
-
-
Conclusion
-
Inpage Urdu 2009 Professional by Nazim is a great software for writing Urdu, Arabic, Persian and other languages in a beautiful Nastaliq font. It has many features and options that make it a powerful and versatile tool for creating and publishing your documents. You can download it from the official website or from the link given above. If you have any questions or feedback about this software, feel free to leave a comment below.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Mathematicawolfram9fullcrack.md b/spaces/falterWliame/Face_Mask_Detection/Mathematicawolfram9fullcrack.md
deleted file mode 100644
index 7f0a6d7e7a5aaead68579eb237ccfa32ed769004..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Mathematicawolfram9fullcrack.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Brawl with the best Download Brawl Stars MOD APK from 5play.ru and dominate the game.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Brawl with the best Download Brawl Stars MOD APK from 5play.ru and dominate the game.md
deleted file mode 100644
index 30f505a567fd057074adf35cb2c87447dbe0a0ab..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Brawl with the best Download Brawl Stars MOD APK from 5play.ru and dominate the game.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
Download Brawl Stars Mod Apk 5play.ru: A Guide for Beginners
-
Brawl Stars is one of the most popular mobile games in the world, with over 100 million downloads on the Google Play Store. It is a fast-paced multiplayer action game that lets you team up with your friends or play solo in various game modes, such as Gem Grab, Showdown, Brawl Ball, Bounty, Heist, and more. You can also unlock and upgrade dozens of unique brawlers, each with their own signature attack and super ability. Brawl Stars is fun, addictive, and challenging, but it can also be frustrating if you don't have enough resources to unlock new brawlers, skins, or gadgets. That's why many players are looking for a way to get unlimited resources and access to all the features of the game without spending real money. That's where Brawl Stars mod apk 5play.ru comes in.
-
Brawl Stars mod apk 5play.ru is a modified version of the original game that gives you unlimited coins, gems, tickets, and star points. You can use these resources to buy anything you want in the game, such as brawlers, skins, gadgets, power points, brawl boxes, and more. You can also play on a private server that has all the brawlers unlocked, including the legendary ones. You can also enjoy custom skins and gadgets that are not available in the official game. Brawl Stars mod apk 5play.ru is a great way to experience the game in a new way, without any limitations or restrictions. You can have more fun, experiment with different strategies, and dominate your opponents with ease.
If you want to download Brawl Stars mod apk 5play.ru, you need to follow these simple steps:
-
-
Go to 5play.ru, a website that offers free downloads of modded games and apps.
-
Search for "Brawl Stars" in the search bar and click on the result that says "Brawl Stars 49.210 APK (MOD money/private server) for android".
-
Scroll down to the bottom of the page and click on the green button that says "Download APK file".
-
Wait for the download to finish and then open the file manager on your device.
-
Locate the downloaded file and tap on it to install it. You may need to enable unknown sources in your device settings before you can install it.
-
Once the installation is complete, launch the game and enjoy unlimited resources and features.
-
-
Conclusion
-
Brawl Stars mod apk 5play.ru is a fantastic way to play Brawl Stars with more freedom and fun. You can get unlimited resources, play on a private server, unlock all brawlers, skins, and gadgets, and customize your game as you like. You can also download it for free from 5play.ru, a reliable website that offers safe and secure downloads of modded games and apps. If you are a fan of Brawl Stars and want to try something new, you should definitely give Brawl Stars mod apk 5play.ru a try. You won't regret it!
-
FAQs
-
Is Brawl Stars mod apk 5play.ru safe to use?
-
Yes, Brawl Stars mod apk 5play.ru is safe to use as long as you download it from 5play.ru, which is a trusted website that scans all its files for viruses and malware. However, you should always be careful when downloading any modded game or app from unknown sources, as they may contain harmful or malicious code.
-
Will I get banned for using Brawl Stars mod apk 5play.ru?
-
No, you will not get banned for using Brawl Stars mod apk 5play.ru
because you are playing on a private server that is separate from the official one. The official server does not detect or interfere with the private server, so you can play without any worries. However, you should not use your real account or personal information on the private server, as it may not be secure or protected.
-
Can I play with my friends on Brawl Stars mod apk 5play.ru?
-
Yes, you can play with your friends on Brawl Stars mod apk 5play.ru, as long as they also have the same mod apk installed on their devices. You can invite them to join your team or clan, or challenge them to friendly battles. You can also chat with them and send them gifts in the game.
-
download brawl stars mod apk unlimited money and gems
-download brawl stars mod apk latest version 2023
-download brawl stars mod apk menu hack
-download brawl stars mod apk unlocked all characters
-download brawl stars mod apk no root
-download brawl stars mod apk android 1
-download brawl stars mod apk with private server
-download brawl stars mod apk free shopping
-download brawl stars mod apk mega mod
-download brawl stars mod apk anti ban
-download brawl stars mod apk online
-download brawl stars mod apk offline
-download brawl stars mod apk for ios
-download brawl stars mod apk for pc
-download brawl stars mod apk for windows 10
-download brawl stars mod apk from 5play.ru
-download brawl stars mod apk from mediafire
-download brawl stars mod apk from apkpure
-download brawl stars mod apk from happymod
-download brawl stars mod apk from rexdl
-how to download brawl stars mod apk on android
-how to download brawl stars mod apk on iphone
-how to download brawl stars mod apk on laptop
-how to download brawl stars mod apk on chromebook
-how to download brawl stars mod apk on macbook
-how to install brawl stars mod apk on android
-how to install brawl stars mod apk on ios
-how to install brawl stars mod apk on pc
-how to install brawl stars mod apk on windows 10
-how to install brawl stars mod apk on macbook
-is it safe to download brawl stars mod apk
-is it legal to download brawl stars mod apk
-is it possible to download brawl stars mod apk
-is it easy to download brawl stars mod apk
-is it worth it to download brawl stars mod apk
-why should you download brawl stars mod apk
-why do people download brawl stars mod apk
-why can't i download brawl stars mod apk
-why won't my phone let me download brawl stars mod apk
-why does my antivirus block me from downloading brawl stars mod apk
-what is the best site to download brawl stars mod apk
-what is the best way to download brawl stars mod apk
-what is the best version of brawl stars mod apk to download
-what are the benefits of downloading brawl stars mod apk
-what are the risks of downloading brawl stars mod apk
-where can i find the link to download brawl stars mod apk 5play.ru
-where can i get the password to unlock the zip file of brawl stars mod apk 5play.ru
-where can i watch the video tutorial on how to download and install brawl stars mod apk 5play.ru
-where can i read the reviews and ratings of other users who downloaded brawl stars mod apk 5play.ru
-
What are the advantages of Brawl Stars mod apk 5play.ru over the official game?
-
Some of the advantages of Brawl Stars mod apk 5play.ru over the official game are:
-
-
You can get unlimited resources, such as coins, gems, tickets, and star points, without spending any real money.
-
You can unlock and upgrade all the brawlers, skins, and gadgets in the game, without waiting for them to appear in the shop or in the brawl boxes.
-
You can play on a private server that has less lag, more stability, and more features than the official server.
-
You can enjoy custom skins and gadgets that are exclusive to the mod apk and not available in the official game.
-
You can have more fun and creativity in the game, without any limitations or restrictions.
-
-
What are the disadvantages of Brawl Stars mod apk 5play.ru over the official game?
-
Some of the disadvantages of Brawl Stars mod apk 5play.ru over the official game are:
-
-
You may not be able to access some of the events or updates that are available in the official game, as they may not be compatible with the mod apk.
-
You may not be able to participate in some of the competitions or tournaments that are organized by the official game, as they may not allow modded players to join.
-
You may not be able to sync your progress or data with your Google Play account or Facebook account, as they may not recognize the mod apk.
-
You may risk losing your data or account if the mod apk is deleted, corrupted, or hacked by someone else.
-
You may face some bugs or glitches in the game, as the mod apk may not be fully tested or optimized for all devices.
-
-
Is Brawl Stars mod apk 5play.ru updated regularly?
-
Yes, Brawl Stars mod apk 5play.ru is updated regularly by its developers, who try to keep up with the latest version of the official game. You can check for updates on 5play.ru, where you can also find other modded games and apps that you may like. You can also follow their social media accounts or join their Telegram channel for more information and news about their projects.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Coinbase APK The Easiest Way to Buy Sell and Manage Your Crypto.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Coinbase APK The Easiest Way to Buy Sell and Manage Your Crypto.md
deleted file mode 100644
index cba65b33fd75b1680016cbfe1192cdf21aca32bd..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Coinbase APK The Easiest Way to Buy Sell and Manage Your Crypto.md
+++ /dev/null
@@ -1,174 +0,0 @@
-
-
Coinbase APK: How to Download and Use the World's Most Trusted Crypto Exchange App
-
If you are looking for a secure, easy, and reliable way to buy, sell, trade, store, and stake crypto, you might want to check out Coinbase APK. Coinbase is the world's most trusted cryptocurrency exchange, with over 110 million users across 100+ countries. It supports hundreds of cryptocurrencies, including Bitcoin, Ethereum, Cardano, Solana, Tether, and more. It also offers advanced trading tools, web3 access, staking rewards, and educational content.
But what is Coinbase APK and how can you download and use it on your Android device? In this article, we will answer these questions and more. We will explain what an APK file is, why you might need it, and what are the benefits of using Coinbase APK. We will also show you how to download and install Coinbase APK on your device, and how to use it to manage your crypto portfolio. Let's get started!
-
What is Coinbase APK?
-
-Coinbase APK is the Android Package Kit (APK) file of the Coinbase app: the install package you can sideload onto an Android device instead of getting the app from the Google Play Store. The sections below explain what an APK file is, why you might need it, and what you gain from using it.
-
What is an APK file and why do you need it?
-
An APK file is a compressed file that contains all the code, resources, and metadata of an Android app. It is similar to an executable file (.exe) on Windows or a package file (.pkg) on Mac OS.
-
You can download and install an APK file on your Android device manually, without using the Google Play Store. This is also known as sideloading. Sideloading allows you to access apps that are not available on the Play Store, or to install older or modified versions of apps that are not compatible with your device or region.
-
What are the benefits of using Coinbase APK?
-
There are several benefits of using Coinbase APK instead of the Play Store version. Some of them are:
-
coinbase wallet apk
-coinbase pro apk
-coinbase earn apk
-coinbase app download apk
-coinbase android apk
-coinbase apk latest version
-coinbase apk old version
-coinbase apk for pc
-coinbase apk mod
-coinbase apk mirror
-coinbase apk pure
-coinbase apk uptodown
-coinbase apk 2023
-coinbase apk 2022
-coinbase apk 2021
-coinbase apk 2020
-coinbase apk 2019
-coinbase apk 2018
-coinbase apk 2017
-coinbase bitcoin wallet apk
-coinbase crypto wallet apk
-coinbase digital wallet apk
-coinbase exchange apk
-coinbase free bitcoin apk
-coinbase hack apk
-coinbase lite apk
-coinbase mobile app apk
-coinbase online wallet apk
-coinbase premium apk
-coinbase pro app apk
-coinbase pro trading apk
-coinbase quiz apk
-coinbase secure wallet apk
-coinbase stock trading apk
-coinbase trade crypto apk
-download coinbase wallet app android, ios, windows phone, pc, mac, linux, chrome extension, firefox addon, opera plugin, edge browser extension, brave browser extension, safari browser extension, internet explorer browser extension, tor browser extension, uc browser extension, baidu browser extension, yandex browser extension, maxthon browser extension, vivaldi browser extension, waterfox browser extension, pale moon browser extension, seamonkey browser extension, midori browser extension, epic privacy browser extension, slimjet browser extension, avant browser extension, lunascape browser extension, comodo dragon browser extension, torch browser extension, citrio browser extension or puffin web browser app.
-how to install coinbase wallet app on android phone or tablet using google play store or apkmirror or apkpure or apktada or apkmody or apknite or apksfree or apkgk or apkmix or apksfull or apklush or apktovi or apkmaza or apkwow or apkturbo or apkdry or apksafety or apksnake or apksmash or apksmart or apkspeedy.
-how to install coinbase wallet app on iphone or ipad using app store or ipa library or ipa box or ipa rhino or ipa apps or ipa installer or ipa store.
-how to install coinbase wallet app on windows pc using bluestacks emulator or nox player emulator or memu emulator or ldplayer emulator or gameloop emulator or koplayer emulator or droid4x emulator or genymotion emulator.
-how to install coinbase wallet app on mac using andy emulator or remix os player emulator.
-how to install coinbase wallet app on linux using anbox emulator.
-how to install coinbase wallet app on chromebook using crossover android app.
-
-
You can get the latest updates and features of the Coinbase app before they are released on the Play Store.
-
You can avoid any restrictions or limitations imposed by Google on crypto-related apps.
-
You can have more control over your app installation and permissions.
-
You can backup and restore your app data easily.
-
-
However, there are also some risks involved in sideloading apps. You need to be careful about where you download the APK files from, as some sources may contain malware or viruses. You also need to enable unknown sources on your device settings, which may expose your device to security threats. Therefore, you should only download APK files from trusted and verified sources, such as the official website of the app developer.
-
How to download and install Coinbase APK?
-
If you want to download and install Coinbase APK on your Android device, you need to follow these steps:
-
Step 1: Enable unknown sources on your device
-
Before you can install an APK file on your device, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Play Store. To do this, follow these steps:
-
-
Go to your device settings and tap on Security or Privacy.
-
Find the option that says Unknown sources or Install unknown apps and toggle it on.
-
A warning message will pop up, telling you the risks of installing apps from unknown sources. Tap on OK or Allow to proceed.
-
-
Note: The exact steps may vary depending on your device model and Android version. You can also search for unknown sources in your device settings to find the option.
-
Step 2: Download the Coinbase APK file from a trusted source
-
Once you have enabled unknown sources on your device, you can download the Coinbase APK file from a trusted source. The best source is the official website of Coinbase, which you can access by clicking [here]. Alternatively, you can use a reputable third-party website that offers verified and safe APK files, such as APKMirror or APKPure.
-
To download the Coinbase APK file, follow these steps:
-
-
Open your browser and go to the website where you want to download the Coinbase APK file.
-
Find the Coinbase app and tap on the Download button.
-
A pop-up window will appear, asking you to confirm the download. Tap on OK or Download to start the download.
-
Wait for the download to finish. You can check the progress in your notification bar or your download folder.
-
-
Step 3: Install the Coinbase APK file on your device
-
After you have downloaded the Coinbase APK file, you can install it on your device. To do this, follow these steps:
-
-
Locate the Coinbase APK file in your download folder or notification bar and tap on it.
-
A prompt will appear, asking you to confirm the installation. Tap on Install or Next to continue.
-
Wait for the installation to complete. You can check the progress in your notification bar or your screen.
-
Once the installation is done, tap on Open or Done to launch or exit the app.
-
-
Congratulations! You have successfully downloaded and installed Coinbase APK on your device. You can now enjoy all the features and benefits of the Coinbase app without using the Play Store.
-
How to use Coinbase APK?
-
Now that you have installed Coinbase APK on your device, you can use it to buy, sell, trade, store, and stake crypto with ease. To use Coinbase APK, you need to follow these steps:
-
Step 1: Create or log in to your Coinbase account
-
If you already have a Coinbase account, you can simply log in with your email and password. If you don't have a Coinbase account yet, you can create one for free by following these steps:
-
-
Open the Coinbase app and tap on Get started.
-
Enter your name, email, and password and tap on Create account.
-
A verification email will be sent to your email address. Open it and tap on Verify email address.
-
You will be redirected to the Coinbase app and asked to agree to the terms of service and privacy policy. Tap on Agree and continue.
-
-
Step 2: Verify your identity and add a payment method
-
Before you can buy or sell crypto with Coinbase, you need to verify your identity and add a payment method. This is to comply with the regulatory requirements and ensure the security of your account. To do this, follow these steps:
-
-
In the Coinbase app, tap on Settings and then Identity verification.
-
Select your country and document type (passport, driver's license, or ID card) and tap on Continue.
-
Follow the instructions to scan or upload your document and take a selfie.
-
Wait for the verification process to complete. This may take a few minutes or hours depending on the volume of requests.
-
Once your identity is verified, tap on Settings and then Payment methods.
-
Select your preferred payment method (bank account, debit card, credit card, PayPal, etc.) and tap on Add payment method.
-
Follow the instructions to link your payment method to your Coinbase account.
-
-
Step 3: Buy, sell, trade, store, and stake crypto with Coinbase APK
-
Now that you have verified your identity and added a payment method, you can start buying, selling, trading, storing, and staking crypto with Coinbase APK. To do this, follow these steps:
-
-
In the Coinbase app, tap on the Home tab and then Buy/Sell.
-
Select the crypto you want to buy or sell and enter the amount in your local currency or crypto.
-
Review the details of your transaction, including the fees and exchange rate, and tap on Preview buy or Preview sell.
-
If you are satisfied with the transaction, tap on Buy now or Sell now and confirm with your payment method or wallet.
-
You will receive a confirmation message and an email with the details of your transaction.
-
-
You can also trade crypto with other users on Coinbase Pro, which is a more advanced platform that offers lower fees, more trading pairs, and more features. To access Coinbase Pro, follow these steps:
-
-
In the Coinbase app, tap on the Menu icon and then Pro.
-
If you have not created a Coinbase Pro account yet, tap on Get started and follow the instructions to sign up.
-
Once you have a Coinbase Pro account, you can transfer funds from your Coinbase wallet to your Coinbase Pro wallet by tapping on Deposit or Withdraw and selecting Coinbase Wallet as the source or destination.
-
You can then start trading crypto by tapping on Trade and selecting the trading pair you want to trade.
-
You can place different types of orders, such as market, limit, stop, or post-only, by tapping on the Order type button and choosing your preferred option.
-
You can also view the market data, charts, order book, and history by tapping on the icons at the bottom of the screen.
-
-
Besides buying, selling, and trading crypto, you can also store and stake crypto with Coinbase APK. To store your crypto securely, you can use the Coinbase Wallet, which is a separate app that allows you to manage your own private keys and access web3 applications. To download and use the Coinbase Wallet, follow these steps:
-
-
Download the Coinbase Wallet APK file from a trusted source and install it on your device following the same steps as above.
-
Open the Coinbase Wallet app and tap on Create a new wallet or Import an existing wallet.
-
Follow the instructions to set up your wallet and backup your recovery phrase.
-
You can then transfer funds from your Coinbase account to your Coinbase Wallet by tapping on Receive and scanning the QR code or copying the address.
-
You can also send funds from your Coinbase Wallet to other wallets or addresses by tapping on Send and entering the amount and destination.
-
-
To stake your crypto and earn passive income, you can use the Coinbase app or the Coinbase Wallet app depending on the type of crypto you want to stake. For example, you can stake Ethereum 2.0 (ETH2) with the Coinbase app by following these steps:
-
-
In the Coinbase app, tap on the Home tab and then Ethereum 2.0 (ETH2).
-
Tap on Start earning rewards and enter the amount of ETH you want to stake.
-
Review the terms and conditions of staking ETH2 and tap on Stake ETH.
-
You will receive a confirmation message and an email with the details of your staking.
-
-
You can also stake other cryptocurrencies, such as Tezos (XTZ), Cosmos (ATOM), Algorand (ALGO), or Cardano (ADA) with the Coinbase Wallet app by following these steps:
-
-
In the Coinbase Wallet app, tap on the Menu icon and then Staking.
-
Select the crypto you want to stake and tap on Stake now.
-
Enter the amount of crypto you want to stake and tap on Next.
-
Review the details of your staking and tap on Confirm.
-
You will receive a confirmation message and an email with the details of your staking.
-
-
Conclusion
-
Coinbase APK is a great way to access all the features and benefits of the world's most trusted crypto exchange app without using the Play Store. You can download and install Coinbase APK on your Android device easily by following the steps in this article. You can also use Coinbase APK to buy, sell, trade, store, and stake crypto with ease and security. Coinbase APK is a must-have app for any crypto enthusiast or investor. Download it today and start your crypto journey with Coinbase!
FAQs
-
Here are some frequently asked questions about Coinbase APK:
-
Is Coinbase APK safe?
-
Coinbase APK is safe as long as you download it from a trusted and verified source, such as the official website of Coinbase or a reputable third-party website. You should also scan the APK file with an antivirus software before installing it on your device. However, you should be aware of the risks of sideloading apps and enabling unknown sources on your device settings, as this may expose your device to security threats. You should also protect your Coinbase account and wallet with a strong password and two-factor authentication.
-
Is Coinbase APK legal?
-
Coinbase APK is legal as long as you use it in accordance with the terms of service and privacy policy of Coinbase and the laws and regulations of your country or region. However, some countries or regions may have restrictions or bans on crypto-related apps or activities, so you should check the legal status of crypto in your area before using Coinbase APK. You should also respect the intellectual property rights of Coinbase and its partners and not modify, distribute, or sell the APK file without their permission.
-
Is Coinbase APK free?
-
Coinbase APK is free to download and use, but you may incur some fees when you buy, sell, trade, store, or stake crypto with Coinbase. These fees may include transaction fees, conversion fees, network fees, withdrawal fees, deposit fees, or staking fees. The amount and type of fees may vary depending on the crypto, payment method, trading pair, or staking protocol you use. You can check the fee schedule of Coinbase [here] for more details.
-
How to update Coinbase APK?
-
To update Coinbase APK, you need to download and install the latest version of the APK file from a trusted source. You can check the official website of Coinbase or a reputable third-party website for any updates or new features. You can also enable notifications on your device settings to get alerted when a new version is available. However, you should be careful not to install any fake or malicious updates that may harm your device or account.
-
How to uninstall Coinbase APK?
-
To uninstall Coinbase APK, you need to follow these steps:
-
-
Go to your device settings and tap on Apps or Applications.
-
Find the Coinbase app and tap on it.
-
Tap on Uninstall or Remove and confirm your action.
-
Wait for the uninstallation process to complete.
-
-
Note: Uninstalling Coinbase APK will not delete your Coinbase account or wallet. You can still access them by logging in to the Coinbase website or another device. However, you should backup your recovery phrase before uninstalling the app in case you lose access to your wallet.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Cookie Run Kingdom - Create Your Own Sweet Kingdom and Fight the Dark Legion.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Cookie Run Kingdom - Create Your Own Sweet Kingdom and Fight the Dark Legion.md
deleted file mode 100644
index 487c6c15a87c68fcdedd6afc85d4825a883cf2ca..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Cookie Run Kingdom - Create Your Own Sweet Kingdom and Fight the Dark Legion.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Cookie Run: Kingdom APK - A Sweet and Fun Game for Android
-
Do you love cookies? Do you love games? If you answered yes to both questions, then you will love Cookie Run: Kingdom APK, a sweet and fun game for Android devices. In this game, you can build your own cookie kingdom, fight against evil forces, collect and upgrade cookie characters, and join guilds with other players. Sounds delicious, right? Let's find out more about this game in this article.
-
What is Cookie Run: Kingdom APK?
-
Cookie Run: Kingdom APK is a kingdom builder and battle RPG game developed by Devsisters Corporation, the same company that created the popular Cookie Run series. It is a sequel to Cookie Run: OvenBreak, which was released in 2016. In this game, you can explore the colorful and cute world of cookies, where you can create your own cookie kingdom, fight against the dark legion of the Dark Enchantress Cookie, and discover the secrets of the ancient cookies and their kingdoms.
In Cookie Run: Kingdom APK, you can design your own cookie kingdom with various decors, such as buildings, plants, furniture, and more. You can also expand your territory by clearing stages and defeating enemies. You can also fight against other players in PvP mode, where you can test your skills and strategies.
-
A sequel to the popular Cookie Run series
-
Cookie Run: Kingdom APK is a continuation of the story of GingerBrave and his friends, who escaped from the oven in Cookie Run: OvenBreak. In this game, they face a new threat from the Dark Enchantress Cookie, who wants to destroy all the cookie kingdoms. You can join them in their adventure and meet new cookie characters along the way.
-
A colorful and cute world of cookies
-
Cookie Run: Kingdom APK has a charming graphics style that will appeal to both kids and adults. The game features a variety of cookie characters, each with their own personality, voice, and skills. The game also has a lively soundtrack and sound effects that match the mood of the game.
-
How to download and install Cookie Run: Kingdom APK?
-
If you want to play Cookie Run: Kingdom APK on your Android device, you can download it from Google Play or APKCombo. Here are the steps to do so:
-
Download from Google Play or APKCombo
-
You can download Cookie Run: Kingdom APK from Google Play by searching for it on the app store. Alternatively, you can download it from APKCombo by searching for it on the website. The file size is about 100 MB.
-
Enable unknown sources on your device
-
If you download Cookie Run: Kingdom APK from APKCombo, you need to enable unknown sources on your device to install it. To do this, go to Settings > Security and enable the option to install apps from unknown sources. This will allow you to install Cookie Run: Kingdom APK on your device.
-
Install the APK file and enjoy the game
-
Once you have downloaded Cookie Run: Kingdom APK, you can install it by tapping on the file and following the instructions. After the installation is complete, you can open the game and start playing. You may need to grant some permissions to the game, such as access to your storage, location, and contacts.
-
cookie run kingdom apk download
-cookie run kingdom apk mod
-cookie run kingdom apk latest version
-cookie run kingdom apk obb
-cookie run kingdom apk android
-cookie run kingdom apk ios
-cookie run kingdom apk free
-cookie run kingdom apk offline
-cookie run kingdom apk update
-cookie run kingdom apk hack
-cookie run kingdom apk file
-cookie run kingdom apk mirror
-cookie run kingdom apk pure
-cookie run kingdom apk data
-cookie run kingdom apk nox
-cookie run kingdom apk bluestacks
-cookie run kingdom apk reddit
-cookie run kingdom apk 4.3.002
-cookie run kingdom apk 4.2.001
-cookie run kingdom apk 4.1.001
-cookie run kingdom apk 4.0.001
-cookie run kingdom apk 3.0.001
-cookie run kingdom apk 2.0.001
-cookie run kingdom apk 1.0.001
-cookie run kingdom apk beta
-cookie run kingdom apk global
-cookie run kingdom apk english
-cookie run kingdom apk korean
-cookie run kingdom apk chinese
-cookie run kingdom apk japanese
-cookie run kingdom apk german
-cookie run kingdom apk french
-cookie run kingdom apk spanish
-cookie run kingdom apk portuguese
-cookie run kingdom apk italian
-cookie run kingdom apk russian
-cookie run kingdom apk turkish
-cookie run kingdom apk arabic
-cookie run kingdom apk thai
-cookie run kingdom apk vietnamese
-cookie run kingdom apk indonesian
-cookie run kingdom apk malay
-cookie run kingdom apk filipino
-cookie run kingdom apk hindi
-cookie run kingdom apk urdu
-cookie run kingdom apk bengali
-cookie run kingdom apk tamil
-cookie run kingdom apk telugu
-cookie run kingdom apk marathi
-
What are the features of Cookie Run: Kingdom APK?
-
Cookie Run: Kingdom APK is a game that offers a lot of features for you to enjoy. Here are some of them:
-
Build your own cookie kingdom with various decors
-
In Cookie Run: Kingdom APK, you can customize your own cookie kingdom with different types of decors, such as buildings, plants, furniture, and more. You can also unlock new decors by clearing stages and completing quests. You can arrange your decors according to your preference and style. You can also visit other players' kingdoms and see how they decorated theirs.
-
Fight against the dark legion of the Dark Enchantress Cookie
-
In Cookie Run: Kingdom APK, you can also engage in battles against the dark legion of the Dark Enchantress Cookie, who wants to destroy all the cookie kingdoms. You can form a team of up to five cookie characters, each with their own skills and abilities. You can also use special items and combos to enhance your performance. You can fight in various modes, such as story mode, guild mode, PvP mode, and more.
-
Collect and upgrade over 200 cookie characters
-
In Cookie Run: Kingdom APK, you can collect and upgrade over 200 cookie characters, each with their own personality, voice, and skills. You can obtain new cookie characters by summoning them with crystals or cookies. You can also upgrade your cookie characters by leveling them up, enhancing their skills, equipping them with treasures, and awakening them. You can also mix and match different cookie characters to create your own unique team.
-
Join guilds and cooperate with other players
-
In Cookie Run: Kingdom APK, you can also join guilds and cooperate with other players. You can chat with your guild members, share tips and strategies, and help each other out. You can also participate in guild battles, where you can compete with other guilds for rewards and glory. You can also join events and challenges that are exclusive for guild members.
-
What are the pros and cons of Cookie Run: Kingdom APK?
-
Cookie Run: Kingdom APK is a game that has its pros and cons. Here are some of them:
-
Pros
-
-
Fun and addictive gameplay
-
Cookie Run: Kingdom APK is a game that offers a fun and addictive gameplay that will keep you entertained for hours. You can enjoy building your own cookie kingdom, fighting against enemies, collecting and upgrading cookie characters, and joining guilds with other players. The game also has a lot of content and features that will make you want to play more.
-
Charming graphics and sound effects
-
Cookie Run: Kingdom APK is a game that has a charming graphics style that will appeal to both kids and adults. The game features a variety of cookie characters, each with their own personality, voice, and skills. The game also has a lively soundtrack and sound effects that match the mood of the game.
-
Free to play with regular updates
-
Cookie Run: Kingdom APK is a game that is free to play with regular updates. You can download and play the game without spending any money. The game also provides regular updates that add new content and features to the game, such as new cookie characters, new stages, new events, and more.
-
-
Cons
-
-
Requires internet connection and storage space
-
Cookie Run: Kingdom APK is a game that requires internet connection and storage space to play. You need to have a stable internet connection to access the game's features and modes. You also need to have enough storage space on your device to download and install the game.
-
May have some bugs and glitches
-
Cookie Run: Kingdom APK is a game that may have some bugs and glitches that affect the gameplay. Some users have reported issues such as crashing, freezing, lagging, loading errors, login errors, and more. The developers are working on fixing these issues as soon as possible.
-
-May have some in-app purchases and ads
-
Cookie Run: Kingdom APK is a game that may have some in-app purchases and ads that may affect the gameplay. Some users may find the in-app purchases and ads to be annoying or unfair. The game also has a stamina system that limits the number of stages you can play per day. You can buy more stamina with crystals or cookies, which can be obtained by playing the game or by spending real money.
-
-
Conclusion
-
Cookie Run: Kingdom APK is a sweet and fun game for Android devices that lets you build your own cookie kingdom, fight against evil forces, collect and upgrade cookie characters, and join guilds with other players. The game has a fun and addictive gameplay, charming graphics and sound effects, and free to play with regular updates. However, the game also requires internet connection and storage space, may have some bugs and glitches, and may have some in-app purchases and ads. If you are looking for a game that will make you hungry for cookies and adventure, you should try Cookie Run: Kingdom APK.
-
FAQs
-
-
Q: What are the minimum requirements to play Cookie Run: Kingdom APK?
-
A: The minimum requirements to play Cookie Run: Kingdom APK are Android 4.4 or higher, 2 GB of RAM, and 100 MB of storage space.
-
Q: How can I get more crystals or cookies in Cookie Run: Kingdom APK?
-
A: You can get more crystals or cookies by playing the game, completing quests, participating in events, watching ads, or buying them with real money.
-
Q: How can I contact the developers of Cookie Run: Kingdom APK?
-
A: You can contact the developers of Cookie Run: Kingdom APK by sending an email to cookierun@devsisters.com or by visiting their official website [here].
-
Q: How can I join a guild in Cookie Run: Kingdom APK?
-
A: You can join a guild in Cookie Run: Kingdom APK by tapping on the guild icon on the main screen, searching for a guild that suits your preferences, and applying to join it. You can also create your own guild if you have enough crystals.
-
Q: How can I update Cookie Run: Kingdom APK?
-
A: You can update Cookie Run: Kingdom APK by downloading the latest version from Google Play or APKCombo. You can also check for updates by tapping on the settings icon on the main screen and selecting the update option.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/readline.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/readline.d.ts
deleted file mode 100644
index 6ab64acbbec10680e4c519598e84b9c64bd97984..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/readline.d.ts
+++ /dev/null
@@ -1,653 +0,0 @@
-/**
- * The `readline` module provides an interface for reading data from a `Readable` stream (such as `process.stdin`) one line at a time.
- *
- * To use the promise-based APIs:
- *
- * ```js
- * import * as readline from 'node:readline/promises';
- * ```
- *
- * To use the callback and sync APIs:
- *
- * ```js
- * import * as readline from 'node:readline';
- * ```
- *
- * The following simple example illustrates the basic use of the `readline` module.
- *
- * ```js
- * import * as readline from 'node:readline/promises';
- * import { stdin as input, stdout as output } from 'node:process';
- *
- * const rl = readline.createInterface({ input, output });
- *
- * const answer = await rl.question('What do you think of Node.js? ');
- *
- * console.log(`Thank you for your valuable feedback: ${answer}`);
- *
- * rl.close();
- * ```
- *
- * Once this code is invoked, the Node.js application will not terminate until the`readline.Interface` is closed because the interface waits for data to be
- * received on the `input` stream.
- * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/readline.js)
- */
-declare module 'readline' {
- import { Abortable, EventEmitter } from 'node:events';
- import * as promises from 'node:readline/promises';
-
- export { promises };
- export interface Key {
- sequence?: string | undefined;
- name?: string | undefined;
- ctrl?: boolean | undefined;
- meta?: boolean | undefined;
- shift?: boolean | undefined;
- }
- /**
- * Instances of the `readline.Interface` class are constructed using the`readline.createInterface()` method. Every instance is associated with a
- * single `input` `Readable` stream and a single `output` `Writable` stream.
- * The `output` stream is used to print prompts for user input that arrives on,
- * and is read from, the `input` stream.
- * @since v0.1.104
- */
- export class Interface extends EventEmitter {
- readonly terminal: boolean;
- /**
- * The current input data being processed by node.
- *
- * This can be used when collecting input from a TTY stream to retrieve the
- * current value that has been processed thus far, prior to the `line` event
- * being emitted. Once the `line` event has been emitted, this property will
- * be an empty string.
- *
- * Be aware that modifying the value during the instance runtime may have
- * unintended consequences if `rl.cursor` is not also controlled.
- *
- * **If not using a TTY stream for input, use the `'line'` event.**
- *
- * One possible use case would be as follows:
- *
- * ```js
- * const values = ['lorem ipsum', 'dolor sit amet'];
- * const rl = readline.createInterface(process.stdin);
- * const showResults = debounce(() => {
- * console.log(
- * '\n',
- * values.filter((val) => val.startsWith(rl.line)).join(' ')
- * );
- * }, 300);
- * process.stdin.on('keypress', (c, k) => {
- * showResults();
- * });
- * ```
- * @since v0.1.98
- */
- readonly line: string;
- /**
- * The cursor position relative to `rl.line`.
- *
- * This will track where the current cursor lands in the input string, when
- * reading input from a TTY stream. The position of cursor determines the
- * portion of the input string that will be modified as input is processed,
- * as well as the column where the terminal caret will be rendered.
- * @since v0.1.98
- */
- readonly cursor: number;
- /**
- * NOTE: According to the documentation:
- *
- * > Instances of the `readline.Interface` class are constructed using the
- * > `readline.createInterface()` method.
- *
- * @see https://nodejs.org/dist/latest-v10.x/docs/api/readline.html#readline_class_interface
- */
- protected constructor(input: NodeJS.ReadableStream, output?: NodeJS.WritableStream, completer?: Completer | AsyncCompleter, terminal?: boolean);
- /**
- * NOTE: According to the documentation:
- *
- * > Instances of the `readline.Interface` class are constructed using the
- * > `readline.createInterface()` method.
- *
- * @see https://nodejs.org/dist/latest-v10.x/docs/api/readline.html#readline_class_interface
- */
- protected constructor(options: ReadLineOptions);
- /**
- * The `rl.getPrompt()` method returns the current prompt used by `rl.prompt()`.
- * @since v15.3.0
- * @return the current prompt string
- */
- getPrompt(): string;
- /**
- * The `rl.setPrompt()` method sets the prompt that will be written to `output`whenever `rl.prompt()` is called.
- * @since v0.1.98
- */
- setPrompt(prompt: string): void;
- /**
- * The `rl.prompt()` method writes the `readline.Interface` instances configured`prompt` to a new line in `output` in order to provide a user with a new
- * location at which to provide input.
- *
- * When called, `rl.prompt()` will resume the `input` stream if it has been
- * paused.
- *
- * If the `readline.Interface` was created with `output` set to `null` or`undefined` the prompt is not written.
- * @since v0.1.98
- * @param preserveCursor If `true`, prevents the cursor placement from being reset to `0`.
- */
- prompt(preserveCursor?: boolean): void;
- /**
- * The `rl.question()` method displays the `query` by writing it to the `output`,
- * waits for user input to be provided on `input`, then invokes the `callback`function passing the provided input as the first argument.
- *
- * When called, `rl.question()` will resume the `input` stream if it has been
- * paused.
- *
-         * If the `readline.Interface` was created with `output` set to `null` or `undefined`, the `query` is not written.
- *
- * The `callback` function passed to `rl.question()` does not follow the typical
- * pattern of accepting an `Error` object or `null` as the first argument.
- * The `callback` is called with the provided answer as the only argument.
- *
- * Example usage:
- *
- * ```js
- * rl.question('What is your favorite food? ', (answer) => {
- * console.log(`Oh, so your favorite food is ${answer}`);
- * });
- * ```
- *
- * Using an `AbortController` to cancel a question.
- *
- * ```js
- * const ac = new AbortController();
- * const signal = ac.signal;
- *
- * rl.question('What is your favorite food? ', { signal }, (answer) => {
- * console.log(`Oh, so your favorite food is ${answer}`);
- * });
- *
- * signal.addEventListener('abort', () => {
- * console.log('The food question timed out');
- * }, { once: true });
- *
- * setTimeout(() => ac.abort(), 10000);
- * ```
- *
-         * If this method is invoked as its `util.promisify()`ed version, it returns a
-         * Promise that fulfills with the answer. If the question is canceled using
-         * an `AbortController`, it will reject with an `AbortError`.
- *
- * ```js
- * const util = require('util');
- * const question = util.promisify(rl.question).bind(rl);
- *
- * async function questionExample() {
- * try {
-         *     const answer = await question('What is your favorite food? ');
- * console.log(`Oh, so your favorite food is ${answer}`);
- * } catch (err) {
- * console.error('Question rejected', err);
- * }
- * }
- * questionExample();
- * ```
- * @since v0.3.3
- * @param query A statement or query to write to `output`, prepended to the prompt.
- * @param callback A callback function that is invoked with the user's input in response to the `query`.
- */
- question(query: string, callback: (answer: string) => void): void;
- question(query: string, options: Abortable, callback: (answer: string) => void): void;
- /**
- * The `rl.pause()` method pauses the `input` stream, allowing it to be resumed
- * later if necessary.
- *
-         * Calling `rl.pause()` does not immediately pause other events (including `'line'`) from being emitted by the `readline.Interface` instance.
- * @since v0.3.4
- */
- pause(): this;
- /**
- * The `rl.resume()` method resumes the `input` stream if it has been paused.
- * @since v0.3.4
- */
- resume(): this;
- /**
- * The `rl.close()` method closes the `readline.Interface` instance and
- * relinquishes control over the `input` and `output` streams. When called,
- * the `'close'` event will be emitted.
- *
- * Calling `rl.close()` does not immediately stop other events (including `'line'`)
- * from being emitted by the `readline.Interface` instance.
- * @since v0.1.98
- */
- close(): void;
- /**
- * The `rl.write()` method will write either `data` or a key sequence identified
- * by `key` to the `output`. The `key` argument is supported only if `output` is
- * a `TTY` text terminal. See `TTY keybindings` for a list of key
- * combinations.
- *
- * If `key` is specified, `data` is ignored.
- *
- * When called, `rl.write()` will resume the `input` stream if it has been
- * paused.
- *
-         * If the `readline.Interface` was created with `output` set to `null` or `undefined`, the `data` and `key` are not written.
- *
- * ```js
- * rl.write('Delete this!');
- * // Simulate Ctrl+U to delete the line written previously
- * rl.write(null, { ctrl: true, name: 'u' });
- * ```
- *
-         * The `rl.write()` method will write the data to the `readline` `Interface`'s `input` _as if it were provided by the user_.
- * @since v0.1.98
- */
- write(data: string | Buffer, key?: Key): void;
- write(data: undefined | null | string | Buffer, key: Key): void;
- /**
- * Returns the real position of the cursor in relation to the input
- * prompt + string. Long input (wrapping) strings, as well as multiple
- * line prompts are included in the calculations.
- * @since v13.5.0, v12.16.0
- */
- getCursorPos(): CursorPos;
- /**
- * events.EventEmitter
- * 1. close
- * 2. line
- * 3. pause
- * 4. resume
- * 5. SIGCONT
- * 6. SIGINT
- * 7. SIGTSTP
- * 8. history
- */
- addListener(event: string, listener: (...args: any[]) => void): this;
- addListener(event: 'close', listener: () => void): this;
- addListener(event: 'line', listener: (input: string) => void): this;
- addListener(event: 'pause', listener: () => void): this;
- addListener(event: 'resume', listener: () => void): this;
- addListener(event: 'SIGCONT', listener: () => void): this;
- addListener(event: 'SIGINT', listener: () => void): this;
- addListener(event: 'SIGTSTP', listener: () => void): this;
- addListener(event: 'history', listener: (history: string[]) => void): this;
- emit(event: string | symbol, ...args: any[]): boolean;
- emit(event: 'close'): boolean;
- emit(event: 'line', input: string): boolean;
- emit(event: 'pause'): boolean;
- emit(event: 'resume'): boolean;
- emit(event: 'SIGCONT'): boolean;
- emit(event: 'SIGINT'): boolean;
- emit(event: 'SIGTSTP'): boolean;
- emit(event: 'history', history: string[]): boolean;
- on(event: string, listener: (...args: any[]) => void): this;
- on(event: 'close', listener: () => void): this;
- on(event: 'line', listener: (input: string) => void): this;
- on(event: 'pause', listener: () => void): this;
- on(event: 'resume', listener: () => void): this;
- on(event: 'SIGCONT', listener: () => void): this;
- on(event: 'SIGINT', listener: () => void): this;
- on(event: 'SIGTSTP', listener: () => void): this;
- on(event: 'history', listener: (history: string[]) => void): this;
- once(event: string, listener: (...args: any[]) => void): this;
- once(event: 'close', listener: () => void): this;
- once(event: 'line', listener: (input: string) => void): this;
- once(event: 'pause', listener: () => void): this;
- once(event: 'resume', listener: () => void): this;
- once(event: 'SIGCONT', listener: () => void): this;
- once(event: 'SIGINT', listener: () => void): this;
- once(event: 'SIGTSTP', listener: () => void): this;
- once(event: 'history', listener: (history: string[]) => void): this;
- prependListener(event: string, listener: (...args: any[]) => void): this;
- prependListener(event: 'close', listener: () => void): this;
- prependListener(event: 'line', listener: (input: string) => void): this;
- prependListener(event: 'pause', listener: () => void): this;
- prependListener(event: 'resume', listener: () => void): this;
- prependListener(event: 'SIGCONT', listener: () => void): this;
- prependListener(event: 'SIGINT', listener: () => void): this;
- prependListener(event: 'SIGTSTP', listener: () => void): this;
- prependListener(event: 'history', listener: (history: string[]) => void): this;
- prependOnceListener(event: string, listener: (...args: any[]) => void): this;
- prependOnceListener(event: 'close', listener: () => void): this;
- prependOnceListener(event: 'line', listener: (input: string) => void): this;
- prependOnceListener(event: 'pause', listener: () => void): this;
- prependOnceListener(event: 'resume', listener: () => void): this;
- prependOnceListener(event: 'SIGCONT', listener: () => void): this;
- prependOnceListener(event: 'SIGINT', listener: () => void): this;
- prependOnceListener(event: 'SIGTSTP', listener: () => void): this;
- prependOnceListener(event: 'history', listener: (history: string[]) => void): this;
-        [Symbol.asyncIterator](): AsyncIterableIterator<string>;
- }
- export type ReadLine = Interface; // type forwarded for backwards compatibility
- export type Completer = (line: string) => CompleterResult;
- export type AsyncCompleter = (line: string, callback: (err?: null | Error, result?: CompleterResult) => void) => void;
- export type CompleterResult = [string[], string];
- export interface ReadLineOptions {
- input: NodeJS.ReadableStream;
- output?: NodeJS.WritableStream | undefined;
- completer?: Completer | AsyncCompleter | undefined;
- terminal?: boolean | undefined;
- /**
- * Initial list of history lines. This option makes sense
- * only if `terminal` is set to `true` by the user or by an internal `output`
- * check, otherwise the history caching mechanism is not initialized at all.
- * @default []
- */
- history?: string[] | undefined;
- historySize?: number | undefined;
- prompt?: string | undefined;
- crlfDelay?: number | undefined;
- /**
- * If `true`, when a new input line added
- * to the history list duplicates an older one, this removes the older line
- * from the list.
- * @default false
- */
- removeHistoryDuplicates?: boolean | undefined;
- escapeCodeTimeout?: number | undefined;
- tabSize?: number | undefined;
- }
- /**
-     * The `readline.createInterface()` method creates a new `readline.Interface` instance.
- *
- * ```js
- * const readline = require('readline');
- * const rl = readline.createInterface({
- * input: process.stdin,
- * output: process.stdout
- * });
- * ```
- *
- * Once the `readline.Interface` instance is created, the most common case is to
- * listen for the `'line'` event:
- *
- * ```js
- * rl.on('line', (line) => {
- * console.log(`Received: ${line}`);
- * });
- * ```
- *
- * If `terminal` is `true` for this instance then the `output` stream will get
- * the best compatibility if it defines an `output.columns` property and emits
- * a `'resize'` event on the `output` if or when the columns ever change
- * (`process.stdout` does this automatically when it is a TTY).
- *
- * When creating a `readline.Interface` using `stdin` as input, the program
- * will not terminate until it receives `EOF` (Ctrl+D on
- * Linux/macOS, Ctrl+Z followed by Return on
- * Windows).
- * If you want your application to exit without waiting for user input, you can `unref()` the standard input stream:
- *
- * ```js
- * process.stdin.unref();
- * ```
- * @since v0.1.98
- */
- export function createInterface(input: NodeJS.ReadableStream, output?: NodeJS.WritableStream, completer?: Completer | AsyncCompleter, terminal?: boolean): Interface;
- export function createInterface(options: ReadLineOptions): Interface;
- /**
- * The `readline.emitKeypressEvents()` method causes the given `Readable` stream to begin emitting `'keypress'` events corresponding to received input.
- *
- * Optionally, `interface` specifies a `readline.Interface` instance for which
- * autocompletion is disabled when copy-pasted input is detected.
- *
- * If the `stream` is a `TTY`, then it must be in raw mode.
- *
-     * This is automatically called by any readline instance on its `input` if the `input` is a terminal. Closing the `readline` instance does not stop
- * the `input` from emitting `'keypress'` events.
- *
- * ```js
- * readline.emitKeypressEvents(process.stdin);
- * if (process.stdin.isTTY)
- * process.stdin.setRawMode(true);
- * ```
- *
- * ## Example: Tiny CLI
- *
-     * The following example illustrates the use of the `readline.Interface` class to
- * implement a small command-line interface:
- *
- * ```js
- * const readline = require('readline');
- * const rl = readline.createInterface({
- * input: process.stdin,
- * output: process.stdout,
- * prompt: 'OHAI> '
- * });
- *
- * rl.prompt();
- *
- * rl.on('line', (line) => {
- * switch (line.trim()) {
- * case 'hello':
- * console.log('world!');
- * break;
- * default:
- * console.log(`Say what? I might have heard '${line.trim()}'`);
- * break;
- * }
- * rl.prompt();
- * }).on('close', () => {
- * console.log('Have a great day!');
- * process.exit(0);
- * });
- * ```
- *
-     * ## Example: Read file stream line-by-line
- *
- * A common use case for `readline` is to consume an input file one line at a
- * time. The easiest way to do so is leveraging the `fs.ReadStream` API as
- * well as a `for await...of` loop:
- *
- * ```js
- * const fs = require('fs');
- * const readline = require('readline');
- *
- * async function processLineByLine() {
- * const fileStream = fs.createReadStream('input.txt');
- *
- * const rl = readline.createInterface({
- * input: fileStream,
- * crlfDelay: Infinity
- * });
- * // Note: we use the crlfDelay option to recognize all instances of CR LF
- * // ('\r\n') in input.txt as a single line break.
- *
- * for await (const line of rl) {
- * // Each line in input.txt will be successively available here as `line`.
- * console.log(`Line from file: ${line}`);
- * }
- * }
- *
- * processLineByLine();
- * ```
- *
- * Alternatively, one could use the `'line'` event:
- *
- * ```js
- * const fs = require('fs');
- * const readline = require('readline');
- *
- * const rl = readline.createInterface({
- * input: fs.createReadStream('sample.txt'),
- * crlfDelay: Infinity
- * });
- *
- * rl.on('line', (line) => {
- * console.log(`Line from file: ${line}`);
- * });
- * ```
- *
-     * Currently, the `for await...of` loop can be a bit slower. If `async`/`await` flow and speed are both essential, a mixed approach can be applied:
- *
- * ```js
- * const { once } = require('events');
- * const { createReadStream } = require('fs');
- * const { createInterface } = require('readline');
- *
- * (async function processLineByLine() {
- * try {
- * const rl = createInterface({
- * input: createReadStream('big-file.txt'),
- * crlfDelay: Infinity
- * });
- *
- * rl.on('line', (line) => {
- * // Process the line.
- * });
- *
- * await once(rl, 'close');
- *
- * console.log('File processed.');
- * } catch (err) {
- * console.error(err);
- * }
- * })();
- * ```
- * @since v0.7.7
- */
- export function emitKeypressEvents(stream: NodeJS.ReadableStream, readlineInterface?: Interface): void;
- export type Direction = -1 | 0 | 1;
- export interface CursorPos {
- rows: number;
- cols: number;
- }
- /**
-     * The `readline.clearLine()` method clears the current line of the given `TTY` stream
-     * in a specified direction identified by `dir`.
- * @since v0.7.7
- * @param callback Invoked once the operation completes.
- * @return `false` if `stream` wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.
- */
- export function clearLine(stream: NodeJS.WritableStream, dir: Direction, callback?: () => void): boolean;
- /**
- * The `readline.clearScreenDown()` method clears the given `TTY` stream from
- * the current position of the cursor down.
- * @since v0.7.7
- * @param callback Invoked once the operation completes.
- * @return `false` if `stream` wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.
- */
- export function clearScreenDown(stream: NodeJS.WritableStream, callback?: () => void): boolean;
- /**
-     * The `readline.cursorTo()` method moves the cursor to the specified position in a
- * given `TTY` `stream`.
- * @since v0.7.7
- * @param callback Invoked once the operation completes.
- * @return `false` if `stream` wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.
- */
- export function cursorTo(stream: NodeJS.WritableStream, x: number, y?: number, callback?: () => void): boolean;
- /**
- * The `readline.moveCursor()` method moves the cursor _relative_ to its current
- * position in a given `TTY` `stream`.
- * @since v0.7.7
- * @param callback Invoked once the operation completes.
- * @return `false` if `stream` wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.
- */
- export function moveCursor(stream: NodeJS.WritableStream, dx: number, dy: number, callback?: () => void): boolean;
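-    /*
-     * A short sketch tying the cursor helpers above together (illustrative; assumes
-     * `process.stdout` is a TTY and a `percent` value is in scope): rewrite the
-     * current line in place, e.g. for a simple progress indicator.
-     *
-     *   readline.cursorTo(process.stdout, 0);    // jump to column 0
-     *   readline.clearLine(process.stdout, 0);   // clear the whole line
-     *   process.stdout.write(`progress: ${percent}%`);
-     *   readline.moveCursor(process.stdout, -1, 0); // nudge the caret one column left
-     */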
-}
-declare module 'node:readline' {
- export * from 'readline';
-}
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/esm/encodePacket.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/esm/encodePacket.d.ts
deleted file mode 100644
index 9ca28c8b64f15d45bff202afada68824e64aabc4..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/esm/encodePacket.d.ts
+++ /dev/null
@@ -1,3 +0,0 @@
-import { Packet, RawData } from "./commons.js";
-declare const encodePacket: ({ type, data }: Packet, supportsBinary: boolean, callback: (encodedPacket: RawData) => void) => void;
-export default encodePacket;
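-// Usage sketch (illustrative; the packet type and the `transport` object below are
-// assumptions, not part of this declaration file). Note the encoded result is
-// delivered through the callback rather than returned:
-//
-//   encodePacket({ type: "message", data: "hello" }, /* supportsBinary */ false, (encoded) => {
-//       transport.send(encoded);
-//   });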
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transports/polling-jsonp.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transports/polling-jsonp.d.ts
deleted file mode 100644
index 0fed2077fa7d3c2edf0e22a15e783f6ae1f595c5..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transports/polling-jsonp.d.ts
+++ /dev/null
@@ -1,24 +0,0 @@
-import { Polling } from "./polling";
-export declare class JSONP extends Polling {
- private readonly head;
- private readonly foot;
- /**
- * JSON-P polling transport.
- *
- * @api public
- */
- constructor(req: any);
- /**
- * Handles incoming data.
-     * Due to a bug in \n handling by browsers, we expect an escaped string.
- *
- * @api private
- */
- onData(data: any): void;
- /**
- * Performs the write.
- *
- * @api private
- */
- doWrite(data: any, options: any, callback: any): void;
-}
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/has-symbols/shams.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/has-symbols/shams.js
deleted file mode 100644
index 1285210ef7ccef1eae88c888694eb481b2d23997..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/has-symbols/shams.js
+++ /dev/null
@@ -1,42 +0,0 @@
-'use strict';
-
-/* eslint complexity: [2, 18], max-statements: [2, 33] */
-module.exports = function hasSymbols() {
- if (typeof Symbol !== 'function' || typeof Object.getOwnPropertySymbols !== 'function') { return false; }
- if (typeof Symbol.iterator === 'symbol') { return true; }
-
- var obj = {};
- var sym = Symbol('test');
- var symObj = Object(sym);
- if (typeof sym === 'string') { return false; }
-
- if (Object.prototype.toString.call(sym) !== '[object Symbol]') { return false; }
- if (Object.prototype.toString.call(symObj) !== '[object Symbol]') { return false; }
-
- // temp disabled per https://github.com/ljharb/object.assign/issues/17
- // if (sym instanceof Symbol) { return false; }
- // temp disabled per https://github.com/WebReflection/get-own-property-symbols/issues/4
- // if (!(symObj instanceof Symbol)) { return false; }
-
- // if (typeof Symbol.prototype.toString !== 'function') { return false; }
- // if (String(sym) !== Symbol.prototype.toString.call(sym)) { return false; }
-
- var symVal = 42;
- obj[sym] = symVal;
- for (sym in obj) { return false; } // eslint-disable-line no-restricted-syntax, no-unreachable-loop
- if (typeof Object.keys === 'function' && Object.keys(obj).length !== 0) { return false; }
-
- if (typeof Object.getOwnPropertyNames === 'function' && Object.getOwnPropertyNames(obj).length !== 0) { return false; }
-
- var syms = Object.getOwnPropertySymbols(obj);
- if (syms.length !== 1 || syms[0] !== sym) { return false; }
-
- if (!Object.prototype.propertyIsEnumerable.call(obj, sym)) { return false; }
-
- if (typeof Object.getOwnPropertyDescriptor === 'function') {
- var descriptor = Object.getOwnPropertyDescriptor(obj, sym);
- if (descriptor.value !== symVal || descriptor.enumerable !== true) { return false; }
- }
-
- return true;
-};
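-
-// Typical consumption pattern (illustrative sketch): callers use the exported check
-// to decide whether Symbol-keyed properties can be trusted, even when a polyfill
-// ("sham") may be providing Symbol.
-//
-//   var hasSymbolSham = require('has-symbols/shams');
-//   if (hasSymbolSham()) { /* Symbol-keyed properties behave as expected */ }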
diff --git a/spaces/flax-community/multilingual-image-captioning/sections/intro.md b/spaces/flax-community/multilingual-image-captioning/sections/intro.md
deleted file mode 100644
index 6ad4273dcce8f8034273b6d8e4d56273b57e4d01..0000000000000000000000000000000000000000
--- a/spaces/flax-community/multilingual-image-captioning/sections/intro.md
+++ /dev/null
@@ -1,3 +0,0 @@
-This demo uses the [CLIP-mBART50 model checkpoint](https://huggingface.co/flax-community/clip-vit-base-patch32_mbart-large-50) to predict captions for a given image in 4 languages (English, French, German, Spanish). Training was done using an image encoder (CLIP-ViT) and a text decoder (mBART50) with approximately 5 million image-text pairs taken from the [Conceptual 12M dataset](https://github.com/google-research-datasets/conceptual-12m), translated using [MarianMT](https://huggingface.co/transformers/model_doc/marian.html).
-
-For more details, click on `Usage` 🤗 above.
\ No newline at end of file
diff --git a/spaces/floriankrempl/mtg_rules_bot/mtg/data_handler/process_rules_data.py b/spaces/floriankrempl/mtg_rules_bot/mtg/data_handler/process_rules_data.py
deleted file mode 100644
index ddba218377601d14db0b1522ff92541871be97d7..0000000000000000000000000000000000000000
--- a/spaces/floriankrempl/mtg_rules_bot/mtg/data_handler/process_rules_data.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# %%
-import re
-from pdfminer.high_level import extract_text
-from pathlib import Path
-import random
-
-
-def load_rules(rules_file=Path("data/raw/rules/MagicCompRules_21031101.pdf")):
- text = extract_text(rules_file)
- return text
-
-
-def extract_rules(text: str) -> list[str]:
- see_rules_pattern = r"See rule \d+\.\d+\. |See rule \d+\.\d+"
- start_of_rule_pattern = r"\d+\.\d+\."
-
- processed_texts = re.sub(see_rules_pattern, "", text)
- rules = re.split(start_of_rule_pattern, processed_texts)
-    # filter out the glossary and intro sections
- rules = rules[1:-23]
- rules = [rule.replace("\n", "") for rule in rules]
-
- print("random rule:")
- print(random.choice(rules))
- print("_________________")
-
- return rules
-
-
-# %%
-
-import numpy as np
-import openai
-import yaml
-
-with open("config/config.yaml", "r") as infile:
- config = yaml.load(infile, Loader=yaml.FullLoader)
-
-# roles: system, user, assistant
-openai.api_key = config.get("open_ai_token")
-
-
-def get_embeddings(rules: list[str]):
- text_embedding = []
- for rule in rules:
- response = openai.Embedding.create(input=rule, model="text-embedding-ada-002")
- embeddings = response["data"][0]["embedding"]
- text_embedding.append((rule, np.array(embeddings)))
- return text_embedding
-
-
-# %%
-
-text = load_rules()
-rules = extract_rules(text)
-
-# %%
-
-text_embeddings = get_embeddings(rules[:2])
-
-# %%
-
-text_embeddings[0][1].shape
-
-import hnswlib
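-
-
-# %%
-# Illustrative sketch (an assumption about where the `import hnswlib` above is
-# headed, not code taken from elsewhere in this repo): index the (rule, embedding)
-# pairs with hnswlib for fast nearest-neighbour lookup over the rules.
-def build_rule_index(text_embeddings: list[tuple[str, np.ndarray]]) -> hnswlib.Index:
-    dim = text_embeddings[0][1].shape[0]  # 1536 dims for text-embedding-ada-002
-    index = hnswlib.Index(space="cosine", dim=dim)
-    index.init_index(max_elements=len(text_embeddings), ef_construction=200, M=16)
-    index.add_items(
-        np.stack([embedding for _, embedding in text_embeddings]),
-        list(range(len(text_embeddings))),
-    )
-    return index
-
-
-# e.g. labels, distances = build_rule_index(text_embeddings).knn_query(text_embeddings[0][1], k=3)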
diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/redbluedoors.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/redbluedoors.py
deleted file mode 100644
index cea95b40e77fc6060eb9d9a70a17ec742073fdad..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/redbluedoors.py
+++ /dev/null
@@ -1,80 +0,0 @@
-from gym_minigrid.minigrid import *
-from gym_minigrid.register import register
-
-class RedBlueDoorEnv(MiniGridEnv):
- """
- Single room with red and blue doors on opposite sides.
- The red door must be opened before the blue door to
- obtain a reward.
- """
-
- def __init__(self, size=8):
- self.size = size
-
- super().__init__(
- width=2*size,
- height=size,
- max_steps=20*size*size
- )
-
- def _gen_grid(self, width, height):
- # Create an empty grid
- self.grid = Grid(width, height)
-
- # Generate the grid walls
- self.grid.wall_rect(0, 0, 2*self.size, self.size)
- self.grid.wall_rect(self.size//2, 0, self.size, self.size)
-
- # Place the agent in the top-left corner
- self.place_agent(top=(self.size//2, 0), size=(self.size, self.size))
-
- # Add a red door at a random position in the left wall
- pos = self._rand_int(1, self.size - 1)
- self.red_door = Door("red")
- self.grid.set(self.size//2, pos, self.red_door)
-
- # Add a blue door at a random position in the right wall
- pos = self._rand_int(1, self.size - 1)
- self.blue_door = Door("blue")
- self.grid.set(self.size//2 + self.size - 1, pos, self.blue_door)
-
- # Generate the mission string
- self.mission = "open the red door then the blue door"
-
- def step(self, action):
- red_door_opened_before = self.red_door.is_open
- blue_door_opened_before = self.blue_door.is_open
-
- obs, reward, done, info = MiniGridEnv.step(self, action)
-
- red_door_opened_after = self.red_door.is_open
- blue_door_opened_after = self.blue_door.is_open
-
- if blue_door_opened_after:
- if red_door_opened_before:
- reward = self._reward()
- done = True
- else:
- reward = 0
- done = True
-
- elif red_door_opened_after:
- if blue_door_opened_before:
- reward = 0
- done = True
-
- return obs, reward, done, info
-
-class RedBlueDoorEnv6x6(RedBlueDoorEnv):
- def __init__(self):
- super().__init__(size=6)
-
-register(
- id='MiniGrid-RedBlueDoors-6x6-v0',
- entry_point='gym_minigrid.envs:RedBlueDoorEnv6x6'
-)
-
-register(
- id='MiniGrid-RedBlueDoors-8x8-v0',
- entry_point='gym_minigrid.envs:RedBlueDoorEnv'
-)
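-
-# Usage sketch (illustrative; assumes gym and gym-minigrid are installed so that the
-# register() calls above have taken effect). Runs only when this file is executed
-# directly, exercising the env with a single random action:
-if __name__ == "__main__":
-    import gym
-
-    env = gym.make("MiniGrid-RedBlueDoors-8x8-v0")
-    obs = env.reset()
-    obs, reward, done, info = env.step(env.action_space.sample())
-    print(reward, done)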
diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/video_identity/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/video_identity/run.py
deleted file mode 100644
index 152dab9b0e8389c69531bb109124160ab03156e1..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/video_identity/run.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import gradio as gr
-import os
-
-
-def video_identity(video):
- return video
-
-
-demo = gr.Interface(video_identity,
- gr.Video(),
- "playable_video",
- examples=[
- os.path.join(os.path.dirname(__file__),
- "video/video_sample.mp4")],
- cache_examples=True)
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/g4f/freegpt-webui/client/js/chat.js b/spaces/g4f/freegpt-webui/client/js/chat.js
deleted file mode 100644
index b8052cb5fcafeaec9a40863822b3bfb0c34b0883..0000000000000000000000000000000000000000
--- a/spaces/g4f/freegpt-webui/client/js/chat.js
+++ /dev/null
@@ -1,515 +0,0 @@
-const query = (obj) =>
- Object.keys(obj)
- .map((k) => encodeURIComponent(k) + "=" + encodeURIComponent(obj[k]))
- .join("&");
-const url_prefix = document.querySelector('body').getAttribute('data-urlprefix')
-const markdown = window.markdownit();
-const message_box = document.getElementById(`messages`);
-const message_input = document.getElementById(`message-input`);
-const box_conversations = document.querySelector(`.top`);
-const spinner = box_conversations.querySelector(".spinner");
-const stop_generating = document.querySelector(`.stop-generating`);
-const send_button = document.querySelector(`#send-button`);
-const user_image = ``;
-const gpt_image = ``;
-let prompt_lock = false;
-
-hljs.addPlugin(new CopyButtonPlugin());
-
-message_input.addEventListener("blur", () => {
- window.scrollTo(0, 0);
-});
-
-message_input.addEventListener("focus", () => {
- document.documentElement.scrollTop = document.documentElement.scrollHeight;
-});
-
-const delete_conversations = async () => {
- localStorage.clear();
- await new_conversation();
-};
-
-const handle_ask = async () => {
- message_input.style.height = `80px`;
- window.scrollTo(0, 0);
- let message = message_input.value;
-
- if (message.length > 0) {
- message_input.value = ``;
- message_input.dispatchEvent(new Event("input"));
- await ask_gpt(message);
- }
-};
-
-const remove_cancel_button = async () => {
- stop_generating.classList.add(`stop-generating-hiding`);
-
- setTimeout(() => {
- stop_generating.classList.remove(`stop-generating-hiding`);
- stop_generating.classList.add(`stop-generating-hidden`);
- }, 300);
-};
-
-const ask_gpt = async (message) => {
- try {
- message_input.value = ``;
- message_input.innerHTML = ``;
- message_input.innerText = ``;
-
- add_conversation(window.conversation_id, message.substr(0, 20));
- window.scrollTo(0, 0);
- window.controller = new AbortController();
-
- jailbreak = document.getElementById("jailbreak");
- model = document.getElementById("model");
- prompt_lock = true;
- window.text = ``;
- window.token = message_id();
-
- stop_generating.classList.remove(`stop-generating-hidden`);
-
- add_user_message_box(message);
-
- message_box.scrollTop = message_box.scrollHeight;
- window.scrollTo(0, 0);
- await new Promise((r) => setTimeout(r, 500));
- window.scrollTo(0, 0);
-
- message_box.innerHTML += `
-