diff --git a/spaces/17TheWord/RealESRGAN/README.md b/spaces/17TheWord/RealESRGAN/README.md
deleted file mode 100644
index 87ad054801a0fd3d2ff7961285f07e7890dcfe82..0000000000000000000000000000000000000000
--- a/spaces/17TheWord/RealESRGAN/README.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Real ESRGAN
-emoji: 🏃
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.1.7
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Kernel for Outlook PST Repair The Best Tool for Outlook Data File Recovery.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Kernel for Outlook PST Repair The Best Tool for Outlook Data File Recovery.md
deleted file mode 100644
index 511a8f5b99247bb9ca8065c8a74c456df2b8c2db..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Kernel for Outlook PST Repair The Best Tool for Outlook Data File Recovery.md
+++ /dev/null
@@ -1,134 +0,0 @@
-
-
SuperDuper 3.0 Crack for macOS MacOSX: A Complete Guide
-
If you are looking for a way to protect your data from unexpected disasters, such as hard drive failure, system crash, or malware attack, you may have heard of SuperDuper, a popular disk copying program that can create a fully bootable backup of your Mac.
-
But what if you don't want to pay for the full version of SuperDuper? Is there a way to get it for free? And if so, is it safe and reliable?
In this article, we will answer these questions and more by providing you with a complete guide on how to download, install, and use SuperDuper 3.0 crack for macOS MacOSX. We will also discuss the benefits and features of this program, as well as the risks and drawbacks of using a cracked version.
-
By the end of this article, you will have a clear idea of whether SuperDuper 3.0 crack for macOS MacOSX is worth it or not.
-
Introduction: What is SuperDuper and why you need it
-
SuperDuper is an advanced, yet easy to use disk copying program that can make a straight copy or clone of your Mac's hard drive or partition.
-
This means that you can create an exact replica of your system on another drive or image file that can be used to boot your Mac in case something goes wrong with your original drive.
-
This way, you can easily restore your system to its previous state without losing any data or settings.
-
Some of the main advantages of using SuperDuper over other disk copying programs are:
-
How to get SuperDuper 3.0 for free on Mac
-SuperDuper 3.0 full version download with crack
-SuperDuper 3.0 license key generator for macOS
-SuperDuper 3.0 cracked dmg file for Mac OS X
-SuperDuper 3.0 patch for macOS Catalina and Big Sur
-SuperDuper 3.0 activation code for Mac
-SuperDuper 3.0 serial number for macOS
-SuperDuper 3.0 keygen for Mac OS X
-SuperDuper 3.0 torrent download with crack
-SuperDuper 3.0 crack only for Mac
-SuperDuper 3.0 registration code for macOS
-SuperDuper 3.0 product key for Mac OS X
-SuperDuper 3.0 crack mac download free
-SuperDuper 3.0 latest version with crack
-SuperDuper 3.0 crack for macosx free download
-SuperDuper 3.0 mac crack reddit
-SuperDuper 3.0 crack dmg download for mac
-SuperDuper 3.0 crack mac os catalina
-SuperDuper 3.0 crack mac os big sur
-SuperDuper 3.0 crack mac os mojave
-SuperDuper 3.0 crack mac os high sierra
-SuperDuper 3.0 crack mac os sierra
-SuperDuper 3.0 crack mac os el capitan
-SuperDuper 3.0 crack mac os yosemite
-SuperDuper 3.0 crack mac os mavericks
-SuperDuper 3.0 crack mac os mountain lion
-SuperDuper 3.0 crack mac os lion
-SuperDuper 3.0 crack mac os snow leopard
-SuperDuper 3.0 crack mac os leopard
-SuperDuper 3.0 crack mac os tiger
-How to install SuperDuper 3.0 with crack on Mac
-How to use SuperDuper 3.0 with crack on Mac
-How to update SuperDuper 3.0 with crack on Mac
-How to uninstall SuperDuper 3.0 with crack on Mac
-How to backup and restore with SuperDuper 3.0 with crack on Mac
-How to clone and sync with SuperDuper 3.0 with crack on Mac
-How to schedule backups with SuperDuper 3.0 with crack on Mac
-How to create bootable backups with SuperDuper 3.0 with crack on Mac
-How to repair disk permissions with SuperDuper 3.0 with crack on Mac
-How to verify disk integrity with SuperDuper 3.0 with crack on Mac
-How to encrypt backups with SuperDuper 3.0 with crack on Mac
-How to compress backups with SuperDuper 3.0 with crack on Mac
-How to exclude files and folders from backups with SuperDuper 3.0 with crack on Mac
-How to restore from backups with SuperDuper 3.0 with crack on Mac
-How to clone from one Mac to another with SuperDuper 3.0 with crack on Mac
-How to migrate data from old Mac to new Mac with SuperDuper 3.0 with crack on Mac
-How to backup multiple drives with SuperDuper 3.0 with crack on Mac
-How to backup network drives with SuperDuper 3.0 with crack on Mac
-How to backup external drives with SuperDuper 3.0 with crack on Mac
-
-
It has a clear, friendly, and understandable interface that guides you through the backup process.
-
It has a built-in scheduler that allows you to back up automatically at regular intervals.
-
It has a copy script feature that gives you complete control over what files get copied, ignored, or aliased from one drive to another.
-
It supports APFS snapshots, which are point-in-time representations of your file system that can be restored quickly and easily.
-
-
The latest version of SuperDuper is 3.7.5, which was released on January 22nd, 2023. It is compatible with macOS Big Sur, macOS Monterey, and Apple Silicon.
-
How to download and install SuperDuper 3.0 crack for macOS MacOSX
-
If you want to use SuperDuper legally, you have to purchase a license from its official website for $27.95.
-
However, if you want to use it for free, you can try to download and install SuperDuper 3.0 crack for macOS MacOSX, which is an unofficial version that bypasses the license verification process.
-
To do this, you have to follow these steps:
-
-
Go to this link, which is one of the sources where you can find SuperDuper 3.0 crack for macOS MacOSX.
-
Click on the "Download Link" button at the bottom of the page.
-
Select one of the available download options (such as UsersDrive or NitroFlare) and follow the instructions on how to download the file.
-
Once the file is downloaded, extract it using an app like The Unarchiver or Keka.
-
You will find two files inside the extracted folder: "Super DUPER!.app" and "CORE Keygen.app".
-
Drag "Super DUPER!.app" into your Applications folder.
-
Run "CORE Keygen.app" and generate a serial number by clicking on the "Generate" button.
-
Copy the serial number and paste it into "Super DUPER!.app" when prompted.
-
Congratulations! You have successfully installed Super Duper! 3.0 crack for macOS MacOSX.
-
-
-
How to use Super DUPER! 3.0 crack for macOS MacOSX to create a bootable backup
-
Now that you have installed Super DUPER! 3.0 crack for macOS MacOSX, you can use it to create a bootable backup of your Mac.
-
Benefits and features of Super DUPER! 3.0 crack for macOS MacOSX
-
By using Super DUPER! 3.0 crack for macOS MacOSX, you can enjoy the benefits and features of Super DUPER!, which are:
-
Easy to use interface
-
Super DUPER! has a clear, friendly, and understandable interface that makes creating a backup painless. You just have to select the source drive (the one you want to copy), the destination drive (the one where you want to store the copy), and the backup option (such as "Backup - all files" or "Backup - user files"). Then, you just have to click on the "Copy Now" button and wait for the process to finish.
-
-
Built-in scheduler
-
Super DUPER! has a built-in scheduler that allows you to back up automatically at regular intervals. You can choose from different options, such as "When source changes", "Daily", "Weekly", or "Monthly". You can also set the time and day of the week when you want the backup to occur. This way, you don't have to worry about forgetting to back up your data.
-
-
Copy script feature
-
Super DUPER! has a copy script feature that gives you complete control over what files get copied, ignored, or aliased from one drive to another. You can use the predefined scripts that come with Super DUPER!, such as "Backup - all files", "Backup - user files", or "Sandbox - shared users and applications". Or, you can create your own custom scripts by using the advanced options, such as "Include", "Exclude", or "Script". This way, you can tailor your backup to your specific needs.
-
-
Snapshot support
-
Super DUPER! supports APFS snapshots, which are point-in-time representations of your file system that can be restored quickly and easily. Snapshots are created automatically by Super DUPER! when you back up your data. You can also create them manually by using the "Snapshot..." option in the File menu. Snapshots are stored on your destination drive and can be accessed by holding down the Option key while booting your Mac. This way, you can restore your system to a previous state without losing any data.
-
-
Risks and drawbacks of using Super DUPER! 3.0 crack for macOS MacOSX
-
While using Super DUPER! 3.0 crack for macOS MacOSX may seem tempting, it also comes with some risks and drawbacks that you should be aware of. These are:
-
Legal issues
-
Using a cracked version of Super DUPER! violates the terms and conditions of the software license agreement that you agree to when you purchase Super DUPER!. This means that you are breaking the law and may face legal consequences, such as fines or lawsuits. Moreover, you are depriving the developers of Super DUPER! of their rightful income and discouraging them from creating more quality software.
-
Security issues
-
Downloading and installing a cracked version of Super DUPER! may expose your system to malware, viruses, or other malicious programs that may compromise your data or privacy. These programs may be hidden in the crack file or in the download source. They may also be activated when you run Super DUPER! or when you connect to the internet. These programs may steal your personal information, damage your files, or hijack your system.
-
Performance issues
-
Using a cracked version of Super DUPER! may cause errors, bugs, or crashes that may affect the quality or reliability of your backup or restore process. These problems may be caused by compatibility issues with your system or with other software, by corrupted or missing files in the crack file, or by interference from malware or viruses. These problems may prevent you from creating a successful backup or restoring your system properly.
-
Conclusion: Is Super DUPER! 3.0 crack for macOS MacOSX worth it?
-
In conclusion, Super DUPER! 3.0 crack for macOS MacOSX is not worth it. While it may seem like a good way to save money and enjoy the benefits and features of Super DUPER!, it also comes with significant risks and drawbacks that may outweigh its advantages.
-
Using a cracked version of Super DUPER! is illegal, unsafe, and unreliable. It may expose you to legal troubles, security threats, and performance issues that may jeopardize your data and system.
-
If you want to use Super DUPER! legally and safely, you should purchase a license from its official website for $27.95. This way, you can support the developers of Super DUPER!, get regular updates and support, and ensure that your backup and restore process is smooth and secure.
-
If you don't want to pay for Super DUPER!, you can also try some alternatives or recommendations for using Super DUPER!, such as:
-
-
Using the free trial version of Super DUPER!, which allows you to create a bootable backup once without scheduling or scripting.
-
Using Time Machine, which is a built-in backup feature in macOS that can create incremental backups of your data on an external drive or a network device.
-
Using Carbon Copy Cloner, which is another disk copying program that can create bootable backups of your Mac with similar features as Super DUPER!, but with a different interface and pricing model.
-
-
Frequently Asked Questions
-
-
What is Super DUPER!?
-
Super DUPER! is an advanced, yet easy to use disk copying program that can create a fully bootable backup of your Mac.
-
How much does Super DUPER! cost?
-
Super DUPER! costs $27.95 for a single license that can be used on multiple Macs.
-
What is Super DUPER! 3.0 crack for macOS MacOSX?
-
Super DUPER! 3.0 crack for macOS MacOSX is an unofficial version of Super DUPER! that bypasses the license verification process and allows you to use it for free.
-
Is Super DUPER! 3.0 crack for macOS MacOSX safe?
-
No, Super DUPER! 3.0 crack for macOS MacOSX is not safe. It may expose your system to malware, viruses, or other malicious programs that may compromise your data or privacy.
-
Is Super DUPER! 3.0 crack for macOS MacOSX reliable?
-
No, Super DUPER! 3.0 crack for macOS MacOSX is not reliable. It may cause errors, bugs, or crashes that may affect the quality or reliability of your backup or restore process.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Ccleaner Full Crack HOT 2023.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Ccleaner Full Crack HOT 2023.md
deleted file mode 100644
index b68b721eecd99de794b7ae4c463be5dfa6ed80cb..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Ccleaner Full Crack HOT 2023.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
Download CCleaner Full Crack 2023: How to Install and Use It Safely
-
CCleaner is one of the most popular and trusted PC optimization tools that can help you clean junk files, fix registry errors, speed up your computer, and protect your privacy. However, the free version of CCleaner has limited features and requires you to update it manually. If you want to unlock all the features and enjoy automatic updates, you need to buy the pro version of CCleaner, which costs $24.95 per year.
But what if you don't want to pay for CCleaner pro? Is there a way to download CCleaner full crack 2023 and use it for free? The answer is yes, but it comes with some risks and drawbacks. In this article, we will show you how to download CCleaner full crack 2023, how to install and use it safely, and what are the alternatives to CCleaner crack.
-
How to Download CCleaner Full Crack 2023
-
There are many websites that claim to offer CCleaner full crack 2023 for free download. However, not all of them are reliable or safe. Some of them may contain malware, viruses, or spyware that can harm your computer or steal your personal information. Therefore, you need to be careful when choosing a website to download CCleaner full crack 2023.
-
One of the websites that we found to be relatively safe and working is https://tinhte.vn/thread/download-ccleaner-pro-2023-full-crack-huong-dan-cai-dat.3625564/. This website provides a link to download CCleaner Professional 2023 v6.11.10435 Full Repack, which is a cracked version of CCleaner pro that does not require a license key or activation. Here are the steps to download CCleaner full crack 2023 from this website:
Click on the Google Drive link that says "DOWNLOAD" and enter the password "phanmemnet.com" when prompted.
-
Download the file "CCleaner full crack 2023.rar" and save it on your computer.
-
Extract the file using WinRAR or any other software that can open RAR files.
-
You will see a folder named "CCleaner full crack 2023" that contains two files: "INSTALL PROFESSIONAL" and "READ ME".
-
-
How to Install and Use CCleaner Full Crack 2023
-
After downloading CCleaner full crack 2023, you need to install and use it properly to avoid any problems or errors. Here are the steps to install and use CCleaner full crack 2023:
-
-
Run the file "INSTALL PROFESSIONAL" and wait for a black screen to appear.
-
Wait for a few seconds until the installation is complete and close the black screen.
-
Launch CCleaner from your desktop or start menu and enjoy all the features of CCleaner pro without any license key or activation.
-
You can use CCleaner full crack 2023 to scan and clean your PC, optimize your registry, manage your startup programs, uninstall unwanted software, find duplicate files, wipe free space, and more.
-
-
What Are the Risks and Drawbacks of Using CCleaner Full Crack 2023
-
While using CCleaner full crack 2023 may seem tempting and convenient, it also comes with some risks and drawbacks that you should be aware of before deciding to use it
ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Adobe AIR 2022 A Faster More Secure and More Compatible Runtime for AIR Applications.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Adobe AIR 2022 A Faster More Secure and More Compatible Runtime for AIR Applications.md
deleted file mode 100644
index 225262cbb69fcf1a4674691a5ae399c65ddc34f8..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Adobe AIR 2022 A Faster More Secure and More Compatible Runtime for AIR Applications.md
+++ /dev/null
@@ -1,194 +0,0 @@
-
-
Adobe AIR Download 2022: How to Install and Use the Latest Version of Adobe AIR
-
Adobe AIR is a cross-platform runtime that allows you to run rich web applications and games on your desktop, mobile, or tablet device. It provides a consistent and flexible environment for developers to create and deliver engaging experiences across multiple devices and platforms. In this article, you will learn what Adobe AIR is, why you need it, how to download and install it on your device, how to update it to the latest version, and how to use it to run your favorite AIR applications.
-
What is Adobe AIR and why do you need it?
-
Adobe AIR stands for Adobe Integrated Runtime, and it is a technology that enables developers to use web technologies such as HTML, CSS, JavaScript, ActionScript, and Flash to create desktop and mobile applications that can run outside the browser. Some of the benefits of using Adobe AIR are:
It allows you to access native features of your device, such as camera, microphone, accelerometer, GPS, file system, notifications, etc.
-
It supports offline mode, so you can use your applications even when you are not connected to the internet.
-
It offers high performance and quality graphics, thanks to the built-in support for hardware acceleration and Stage 3D.
-
It supports multiple screen sizes and resolutions, so you can enjoy your applications on any device.
-
It supports DRM (digital rights management) and encryption, so you can protect your content and intellectual property.
-
It supports extensions, so you can add additional functionality and features to your applications.
-
-
Adobe AIR features and benefits
-
Some of the features that make Adobe AIR a powerful and versatile runtime are:
-
-
It supports multiple languages and frameworks, such as HTML5, CSS3, JavaScript, jQuery, AngularJS, ReactJS, VueJS, Bootstrap, Flex, ActionScript 3.0, Flash Professional, Flash Builder, Animate CC, etc.
-
It supports multiple platforms and operating systems, such as Windows, Mac OS X, Linux, Android, iOS, BlackBerry Tablet OS, etc.
-
It supports multiple application types, such as games, e-learning, e-commerce, social media, productivity tools, media players, etc.
-
It supports multiple deployment options, such as web installation badges (which prompt users to install both the runtime and the application), custom installers (which bundle both the runtime and the application in one package), captive runtime (which embeds the runtime within the application), etc.
-
It supports multiple development tools and environments, such as Visual Studio Code (with the official extension), Eclipse (with the official plugin), IntelliJ IDEA (with the official plugin), Flash Builder (with the official plugin), Animate CC (with the official plugin), etc.
-
-
Adobe AIR system requirements
-
The system requirements for installing and running Adobe AIR are detailed here: Adobe AIR: System requirements. In general, you need:
-
-
A compatible operating system (Windows 7 or later; Mac OS X 10.10 or later; Linux Ubuntu 14.04 or later; Android 4.0 or later; iOS 10 or later)
-
A compatible processor (Intel Core Duo or faster; ARMv [assistant](#search_web("how to download and install adobe air")) "web_search_results": ["title": "Download Adobe AIR", "snippet": "Download Adobe AIR. The Adobe AIR runtime enables developers to package the same code into native applications and games for Windows and Mac OS desktops as well as iOS and Android devices, reaching over a billion desktop systems and mobile app stores for over 500 million devices.", "url": "[10](https://get.adobe.com/air/)", Windows.", "url": "[9](https://helpx.adobe.com/air/kb/install-air-windows.html)", Mac OS", "snippet": "Install Adobe AIR. Download the Adobe AIR installer file. Double-click the downloaded file to launch the installer. Follow the onscreen instructions to complete the installation. Note: If the installation fails, see Troubleshoot AIR installation , "title": "Install or update Adobe AIR , Android devices", "snippet": "Install or update Adobe AIR. Go to Google Play Store and search for Adobe AIR. Tap Install or Update, depending on your device. Follow the onscreen instructions to complete the installation.", "url": "[6](https://helpx.adobe.com/air/kb/install-air-android-devices.html)"]
How to download and install Adobe AIR on your device
-
Depending on your device and operating system, there are different ways to download and install Adobe AIR on your device. Here are some of the most common methods:
-
Downloading Adobe AIR from the official website
-
The easiest way to download Adobe AIR is to visit the official website: Download Adobe AIR. There, you can choose your platform and language, and click Download now. The website will automatically detect your system and provide you with the appropriate installer file.
-
Installing Adobe AIR on Windows
-
To install Adobe AIR on Windows, follow these steps:
-
-
Download the Adobe AIR installer file from the official website or another trusted source.
-
Double-click the downloaded file to launch the installer.
-
Follow the onscreen instructions to complete the installation.
adobe air download 2022 mac
-adobe air download 2022 windows
-adobe air download 2022 android
-adobe air download 2022 ios
-adobe air download 2022 linux
-adobe air download 2022 free
-adobe air download 2022 offline installer
-adobe air download 2022 latest version
-adobe air download 2022 update
-adobe air download 2022 for pc
-adobe air download 2022 for macbook pro
-adobe air download 2022 for windows 10
-adobe air download 2022 for android apk
-adobe air download 2022 for iphone
-adobe air download 2022 for linux mint
-adobe air download 2022 full version
-adobe air download 2022 standalone installer
-adobe air download 2022 new version
-adobe air download 2022 patch
-adobe air download 2022 for laptop
-adobe air download 2022 for macbook air
-adobe air download 2022 for windows 7
-adobe air download 2022 for android tablet
-adobe air download 2022 for ipad
-adobe air download 2022 for linux ubuntu
-adobe air download 2022 crack
-adobe air download 2022 silent install
-adobe air download 2022 old version
-adobe air download 2022 fix
-adobe air download 2022 for desktop
-adobe air download 2022 for mac os x
-adobe air download 2022 for windows xp
-adobe air download 2022 for android tv
-adobe air download 2022 for ipod touch
-adobe air download 2022 for linux fedora
-adobe air download 2022 keygen
-adobe air download 2022 command line install
-adobe air download 2022 previous version
-adobe air download 2022 error
-adobe air download 2022 for chromebook
-adobe air download 2022 for mac os catalina
-adobe air download 2022 for windows vista
-adobe air download 2022 for android emulator
-adobe air download 2022 for apple tv
-adobe air download 2022 for linux centos
-adobe air download 2022 serial number
-adobe air download 2022 msi install
-adobe air download 2022 beta version
-adobe air download 2022 troubleshooting
-
-
Download the Adobe AIR installer file from the official website or another trusted source.
-
Double-click the downloaded file to launch the installer.
-
Follow the onscreen instructions to complete the installation.
Keeping Adobe AIR up to date is important for ensuring the security and performance of your applications. There are two ways to update Adobe AIR: manually or automatically.
-
Checking for updates manually
-
To check for updates manually, follow these steps:
-
-
On Windows, go to Start > All Programs > Adobe AIR > Check for Updates.
-
On Mac, go to Applications > Utilities > Adobe AIR Application Installer > Check for Updates.
-
On Linux, go to Applications > System Tools > Adobe AIR Application Installer > Check for Updates.
-
On Android, go to Settings > Apps > Adobe AIR > Check for Updates.
-
On iOS, go to Settings > General > Software Update.
-
-
If there is a new version available, follow the prompts to download and install it.
-
Enabling automatic updates
-
To enable automatic updates, follow these steps:
-
-
On Windows, go to Start > All Programs > Adobe AIR > Settings Manager. Click the Updates tab and select Allow Adobe to install updates (recommended).
-
On Mac, go to Applications > Utilities > Adobe AIR Settings Manager. Click the Updates tab and select Allow Adobe to install updates (recommended).
-
On Linux, go to Applications > System Tools > Adobe AIR Settings Manager. Click the Updates tab and select Allow Adobe to install updates (recommended).
-
On Android, go to Settings > Apps > Adobe AIR. Tap the menu icon and select Auto-update.
-
On iOS, go to Settings > iTunes & App Store. Turn on Updates under Automatic Downloads.
-
-
This way, Adobe AIR will check for updates periodically and install them automatically when available.
-
How to use Adobe AIR applications
-
Adobe AIR applications are web applications that can run on your device without a browser. They have the file extension .air or .apk (for Android) or .ipa (for iOS). To use Adobe AIR applications, you need to find and install them first, and then run and manage them on your device.
-
Finding and installing AIR applications
-
To find and install AIR applications, you can use one of the following methods:
-
-
Browse the official Adobe AIR Marketplace, where you can find hundreds of free and paid applications in various categories.
-
Browse other online sources that offer AIR applications, such as Google Play Store, App Store, Amazon Appstore, etc. Make sure you download from trusted and reputable sources only.
-
Download an AIR application file from a website or a link provided by the developer. Make sure you scan the file for viruses and malware before opening it.
-
Create your own AIR application using one of the development tools and environments mentioned earlier.
-
-
To install an AIR application, you need to have Adobe AIR installed on your device first. Then, depending on your device and operating system, you can use one of the following methods:
-
-
If you download an AIR application from a website or a link, double-click the file to launch the installer. Follow the onscreen instructions to complete the installation.
-
If you download an AIR application from an online source that offers web installation badges (such as the Adobe AIR Marketplace), click the badge to launch the installer. Follow the onscreen instructions to complete the installation.
-
If you download an AIR application from an online source that offers custom installers (such as Google Play Store or App Store), open the installer file and follow the onscreen instructions to complete the installation.
-
If you create your own AIR application using a development tool or environment, export it as an installer file and then open it on your device. Follow the onscreen instructions to complete the installation.
If you download an AIR application from an online source that offers captive runtime (such as Amazon Appstore), open the application file and follow the onscreen instructions to complete the installation.
-
-
Running and managing AIR applications
-
To run and manage AIR applications, you can use one of the following methods:
-
-
If you install an AIR application on your desktop, you can find it in your Start menu (Windows), Applications folder (Mac), or Applications menu (Linux). Double-click the application icon to launch it.
-
If you install an AIR application on your mobile device, you can find it in your app drawer or home screen. Tap the application icon to launch it.
-
If you want to uninstall, update, or change the settings of an AIR application, you can use the Adobe AIR Settings Manager. On Windows, go to Start > All Programs > Adobe AIR > Settings Manager. On Mac, go to Applications > Utilities > Adobe AIR Settings Manager. On Linux, go to Applications > System Tools > Adobe AIR Settings Manager. On Android, go to Settings > Apps > Adobe AIR. On iOS, go to Settings > General > Usage > Manage Storage > Adobe AIR.
-
-
Conclusion
-
Adobe AIR is a powerful and versatile runtime that allows you to run rich web applications and games on your device without a browser. It offers many features and benefits for both developers and users, such as cross-platform compatibility, native device access, offline mode, high performance, DRM support, extensions support, etc. To use Adobe AIR applications, you need to download and install Adobe AIR on your device first, and then find and install your favorite AIR applications from various sources. You also need to keep Adobe AIR updated to the latest version for security and performance reasons. You can use the Adobe AIR Settings Manager to manage your AIR applications and change their settings.
-
Summary of the main points
-
-
Adobe AIR is a cross-platform runtime that allows you to run web applications and games on your device without a browser.
-
Adobe AIR offers many features and benefits for both developers and users, such as cross-platform compatibility, native device access, offline mode, high performance, DRM support, extensions support, etc.
-
To use Adobe AIR applications, you need to download and install Adobe AIR on your device first, and then find and install your favorite AIR applications from various sources.
-
You also need to keep Adobe AIR updated to the latest version for security and performance reasons.
-
You can use the Adobe AIR Settings Manager to manage your AIR applications and change their settings.
-
-
Call to action
-
If you are interested in using Adobe AIR applications or creating your own ones, you can visit the official website: Adobe - Adobe AIR. There, you can find more information about Adobe AIR, download the latest version of the runtime, browse the marketplace for existing applications, access the documentation and tutorials for developers, join the community forums for support and feedback, etc. You can also follow Adobe AIR on social media platforms such as Facebook, Twitter, YouTube, etc. for news and updates.
-
FAQs
-
Here are some of the frequently asked questions about Adobe AIR:
-
-
Is Adobe AIR free? Yes, Adobe AIR is free for both developers and users. You can download and use it without any charge or license fee.
-
Is Adobe AIR safe? Yes, Adobe AIR is safe as long as you download it from the official website or another trusted source. You should also scan any application file before installing it on your device. You can also check the digital signature of any application by right-clicking or control-clicking on it and selecting Properties (Windows) or Get Info (Mac).
-
Is Adobe AIR still supported? Yes, Adobe AIR is still supported by Adobe. The latest version of Adobe AIR is 33.1.1.533 (as of June 2023), which was released on May 18th 2023. You can check for updates regularly or enable automatic updates to keep your runtime up to date.
-
What are some of the best Adobe AIR applications? There are many great Adobe AIR applications available in various categories such as games, e-learning, e-commerce, social media, productivity tools, media players, etc. Some of the most popular ones are: Angry Birds (game), Pandora (music), TweetDeck (social media), Photoshop Express (photo editing), Evernote (note taking), Skype (video calling), etc.
-
How How can I create my own Adobe AIR application? To create your own Adobe AIR application, you need to use one of the development tools and environments that support Adobe AIR, such as Visual Studio Code, Eclipse, IntelliJ IDEA, Flash Builder, Animate CC, etc. You also need to have some knowledge of web technologies such as HTML, CSS, JavaScript, ActionScript, Flash, etc. You can follow the official documentation and tutorials for developers: Adobe - Adobe AIR Developer Center. There, you can find guides, samples, videos, articles, forums, etc. to help you get started and improve your skills.
-
-
I hope you enjoyed this article and learned something new about Adobe AIR. If you have any questions or feedback, please leave a comment below. Thank you for reading!
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Checkers for Java and Challenge Your Friends to a Game of Strategy.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Checkers for Java and Challenge Your Friends to a Game of Strategy.md
deleted file mode 100644
index 1f159f910d4f829a5720b34b3a56c7227efe4c5c..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Checkers for Java and Challenge Your Friends to a Game of Strategy.md
+++ /dev/null
@@ -1,129 +0,0 @@
-
-
How to Download and Run Checkers for Java
-
Checkers is a classic board game that involves moving pieces diagonally across a grid of squares, capturing the opponent's pieces by jumping over them, and reaching the other side of the board to become a king. Checkers is also known as draughts in some countries, and it has many variations and rules. Checkers is a fun and easy game that can be played by anyone, anywhere.
Java is a popular programming language and software platform that runs on billions of devices, including computers, mobile phones, gaming consoles, medical devices, and many others. Java is used to develop applications that can run on different operating systems and platforms, without requiring any modifications or recompilation. Java is also known for its portability, performance, security, and reliability.
-
If you want to play checkers on your computer, you might want to download and run checkers for Java. Checkers for Java is a free and open-source application that allows you to play checkers against the computer or another human player, either online or offline. Checkers for Java has many features and options, such as different board sizes, difficulty levels, game modes, themes, sounds, and statistics.
-
In this article, we will show you how to download and run checkers for Java on your Windows system. We will also provide you with some tips and tricks for playing checkers for Java. Let's get started!
-
Checkers Rules and Gameplay
-
Before we download and run checkers for Java, let's review the basic rules and gameplay of checkers. Here are some key points:
-
-
Checkers is played on an 8x8 board with 64 squares of alternating colors (dark and light).
-
Each player has 12 pieces (also called men or checkers) of one color (black or white).
-
The pieces are placed on the dark squares in the first three rows closest to each player.
-
The player with the black pieces moves first, then the players alternate turns.
-
A piece can only move one diagonal space forward (toward the opponent's side) to an empty square.
-
If a piece is next to an opponent's piece and there is an empty square behind it, the piece can jump over the opponent's piece and capture it. The captured piece is removed from the board.
-
A piece can make multiple jumps in one turn if possible.
-
If a piece reaches the last row on the opponent's side (also called the king row), it becomes a king. A king can move in both directions (forward and backward) and jump over any piece in its way.
-
The game ends when one player has no more pieces left or cannot make any valid moves. The player with more pieces left or who made the last move wins the game.
-
-
Java Programming Language
-
Now that we know how to play checkers, let's learn more about Java. Java is a programming language that was created by James Gosling at Sun Microsystems in 1995.
Java Programming Language
-
Now that we know how to play checkers, let's learn more about Java. Java is a programming language that was created by James Gosling at Sun Microsystems in 1995. It is a high-level, object-oriented, and general-purpose language that can run on different platforms and devices. Java is widely used for developing applications such as web servers, mobile apps, games, and software tools.
-
download checkers game for java
-download checkers source code for java
-download checkers applet for java
-download checkers framework for java
-download checkers project for java
-download checkers gui for java
-download checkers tutorial for java
-download checkers program for java
-download checkers library for java
-download checkers software for java
-download checkers application for java
-download checkers network for java
-download checkers ai for java
-download checkers swing for java
-download checkers javafx for java
-download checkers socket for java
-download checkers online for java
-download checkers multiplayer for java
-download checkers board for java
-download checkers data for java
-download checkers canvas for java
-download checkers zip for java
-download checkers github for java
-download checkers pdf for java
-download checkers html for java
-download checkers jar for java
-download checkers class for java
-download checkers interface for java
-download checkers package for java
-download checkers module for java
-download checkers plugin for java
-download checkers component for java
-download checkers tool for java
-download checkers api for java
-download checkers sdk for java
-download checkers ide for java
-download checkers eclipse for java
-download checkers netbeans for java
-download checkers intellij for java
-download checkers android studio for java
-download checkers gradle for java
-download checkers maven for java
-download checkers ant for java
-download checkers junit for java
-download checkers testng for java
-download checkers selenium for java
-download checkers spring boot for java
-download checkers hibernate for java
-download checkers tomcat server for java
-
Some of the features and benefits of Java are:
-
-
Java is open source. This means that anyone can access and modify the source code of Java and use it for free. This also encourages collaboration and innovation among developers and users.
-
Java is community driven. There are millions of Java developers and users around the world who contribute to the improvement and evolution of Java. There are also many online resources, forums, tutorials, and courses that help beginners and experts learn and use Java.
-
Java is fast and high-performance. Java uses a virtual machine (JVM) that converts the source code into bytecode, which can be executed by any platform that has a JVM installed. This makes Java portable and efficient. Java also supports multithreading, which allows multiple tasks to run concurrently and utilize the CPU resources.
-
Java is easy to learn. Java has a simple and clear syntax that is based on C and C++. It also has many built-in libraries and frameworks that provide ready-made solutions for common problems. Java follows the principle of "write once, run anywhere", which means that the same code can work on different platforms without any changes.
-
Java is statically typed. This means that the data types of variables are checked at compile time, which helps to avoid errors and bugs at runtime. Java also supports type inference, which allows the compiler to infer the data types of variables without explicit declaration.
-
Java has expert leadership. Java is maintained and developed by Oracle Corporation, which is a leading software company that provides support and updates for Java. Oracle also collaborates with other organizations and communities to ensure the quality and security of Java.
-
-
How to Install Java on Windows
-
If you want to download and run checkers for Java on your Windows system, you need to install Java first. Here are the steps to install Java on Windows:
-
-
Download the JDK installer. Go to the [Oracle Java Downloads page](^1^) and click Accept License Agreement. Under the Download menu, click the x64 Installer download link that corresponds to your version of Windows. Save the file jdk-20.interim.update.patch_windows-x64_bin.exe to your computer.
-
Run the downloaded file. Double-click the downloaded file to start the installation. Click Yes in the User Account Control prompt. The installation wizard will appear on your screen.
-
Configure the installation wizard. Click Next to proceed to the next step. Choose the destination folder for the Java installation files or stick to the default path. Click Next to proceed. Wait for the wizard to finish the installation process until the Successfully Installed message appears. Click Close to exit the wizard.
-
Set environmental variables in Java. Open the Start menu and search for environment variables. Select the Edit the system environment variables result. In the System Properties window, under the Advanced tab, click Environment Variables... Under the System variables category, select the Path variable and click Edit... Click the New button and enter the path to the Java bin directory: `C:\Program Files\Java\jdk-20\bin`. Click OK to save the changes.
-
-
How to Download and Run Checkers for Java
-
After installing Java on your Windows system, you can download and run checkers for Java. Here are the steps to do so:
-
-
Download checkers for Java source code. Go to [GitHub](^2^) and find the repository named DevonMcGrath/Java-Checkers. Click on the green Code button, then click on Download ZIP button, then save it on your computer.
-
Extract checkers for Java source code files from ZIP file into a folder named CheckersForJava on your computer.
-
Compile checkers for Java source code files into class files using javac command in Command Prompt. Open Command Prompt by typing cmd in Start menu search bar
Compile checkers for Java source code files into class files using javac command in Command Prompt. Open Command Prompt by typing cmd in Start menu search bar and press Enter. Navigate to the CheckersForJava folder by typing cd followed by the path to the folder, for example: `cd C:\Users\YourName\Downloads\CheckersForJava`. Press Enter. Type javac followed by the name of the main source code file, which is Checkers.java, for example: `javac Checkers.java`. Press Enter. This will compile all the source code files into class files and store them in the same folder.
-
Run checkers for Java class files using java command in Command Prompt. In the same Command Prompt window, type java followed by the name of the main class file, which is Checkers, for example: `java Checkers`. Press Enter. This will launch the checkers for Java application in a new window.
-
Enjoy playing checkers for Java. You can choose to play against the computer or another human player, either online or offline. You can also adjust the game settings, such as the board size, the difficulty level, the game mode, the theme, the sound, and the statistics. You can also pause, resume, restart, or quit the game at any time.
-
-
Tips and Tricks for Playing Checkers for Java
-
Now that you know how to download and run checkers for Java, here are some tips and tricks for playing checkers for Java:
-
-
Practice makes perfect. The more you play checkers, the more you will improve your skills and strategies. You can practice against the computer or another human player, either online or offline. You can also choose different difficulty levels and game modes to challenge yourself.
-
Think ahead. Checkers is a game of planning and foresight. You should always try to anticipate your opponent's moves and counter them with your own. You should also try to control the center of the board and create opportunities for multiple jumps.
-
Protect your pieces. You should avoid leaving your pieces vulnerable to capture by your opponent. You should also try to protect your king pieces, as they are more powerful and versatile than regular pieces.
-
Use your king pieces wisely. King pieces can move in both directions and jump over any piece in their way. You should use your king pieces to attack your opponent's pieces, especially their king pieces. You should also use your king pieces to block your opponent's moves and prevent them from reaching the king row.
-
Customize your game settings. Checkers for Java allows you to customize your game settings according to your preferences. You can change the board size, the difficulty level, the game mode, the theme, the sound, and the statistics. You can also save and load your game progress at any time.
-
-
Conclusion
-
In this article, we have shown you how to download and run checkers for Java on your Windows system. We have also provided you with some tips and tricks for playing checkers for Java. Checkers for Java is a free and open-source application that allows you to play checkers against the computer or another human player, either online or offline. Checkers for Java has many features and options, such as different board sizes, difficulty levels, game modes, themes, sounds, and statistics.
-
If you are looking for a fun and easy game that can be played by anyone, anywhere, you should try checkers for Java. Checkers is a classic board game that involves moving pieces diagonally across a grid of squares, capturing the opponent's pieces by jumping over them, and reaching the other side of the board to become a king. Checkers is also known as draughts in some countries, and it has many variations and rules.
-
We hope you enjoyed this article and learned something new. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions related to checkers for Java:
-
-
What are some other games that I can play with Java?
-
There are many other games that you can play with Java, such as chess, sudoku, minesweeper, snake, tetris, pacman, pong, tic-tac-toe, hangman, and many others. You can find many free and open-source Java games online or create your own using Java programming language.
-
How can I update my Java version?
-
You can update your Java version by visiting [Oracle Java Downloads page] and downloading and installing the
You can update your Java version by visiting [Oracle Java Downloads page] and downloading and installing the latest version of Java for your system. You can also check for updates automatically by opening the Java Control Panel and clicking on the Update tab. You can also uninstall older versions of Java from your system to avoid security risks and performance issues.
-
How can I play checkers for Java online with another human player?
-
You can play checkers for Java online with another human player by choosing the Online mode in the game settings. You will need to enter your name and a server address to connect to. You can either join an existing game or create a new game and wait for another player to join. You can also chat with your opponent during the game using the Chat button.
-
How can I change the theme of checkers for Java?
-
You can change the theme of checkers for Java by choosing the Theme option in the game settings. You can choose from different themes, such as Classic, Wood, Metal, Marble, and Neon. You can also change the color of the board and the pieces according to your preference.
-
How can I view my statistics in checkers for Java?
-
You can view your statistics in checkers for Java by choosing the Statistics option in the game settings. You can see your total number of games played, won, lost, and drawn, as well as your win percentage and rating. You can also see your best and worst moves, your longest and shortest games, and your average moves per game.
-
How can I report a bug or suggest a feature in checkers for Java?
-
You can report a bug or suggest a feature in checkers for Java by visiting [GitHub] and finding the repository named DevonMcGrath/Java-Checkers. Click on the Issues tab, then click on the New issue button. Fill out the title and description of your issue or suggestion, then click on Submit new issue button. The developer will review your feedback and respond accordingly.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Love O2O and Join the Fun of A Chinese Ghost Story Online Game.md b/spaces/1phancelerku/anime-remove-background/Download Love O2O and Join the Fun of A Chinese Ghost Story Online Game.md
deleted file mode 100644
index a2b2aaeeed48fea47ca95e25e45382635934f2da..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Love O2O and Join the Fun of A Chinese Ghost Story Online Game.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-
Download Love 020 Dramacool: A Guide to Watch the Hit Chinese Drama Online
-
If you are a fan of Chinese dramas, you might have heard of Love 020, a romantic comedy series that has taken the internet by storm. But how can you watch this amazing show online? And how can you download it for offline viewing? In this article, we will tell you everything you need to know about downloading Love 020 on Dramacool, one of the best websites to watch Asian dramas for free.
-
What is Love 020?
-
Love 020 is a 2016 Chinese drama based on the web novel "A Slight Smile Is Very Charming" by Gu Man. It revolves around the love story of a first-year and a final year student who fell in love with each other while playing an online video game. It follows the couple as they overcome different challenges and numerous obstacles in their online and offline worlds.
Bei Wei Wei (Zheng Shuang) is a beautiful and smart computer science major who loves playing online games. She is the top player in her guild and has a loyal online husband, Zhenshui Wuxiang (Zhang He). However, he dumps her for another girl, leaving her heartbroken. Soon after, she receives a message from the number one player in the game, Yixiao Naihe, who proposes to be her online husband. She accepts, thinking that it is just a game.
-
Little does she know that Yixiao Naihe is actually Xiao Nai (Yang Yang), her senior in college and the most popular student on campus. He is a gaming expert, a basketball star, an academic genius, and a successful entrepreneur. He falls in love with Wei Wei at first sight when he sees her playing the game in an internet cafe. He decides to pursue her both online and offline, using his skills and charm.
-
Will their online romance blossom into a real-life relationship? Will they be able to balance their studies, careers, and love lives? Will they face any troubles from their rivals, friends, or families? Watch Love 020 to find out!
-
The cast of Love 020
-
The cast of Love 020 consists of some of the most talented and popular actors and actresses in China. Here are some of them:
-
-
Yang Yang as Xiao Nai / Yixiao Naihe: He is the male lead of the drama. He is handsome, smart, athletic, and rich. He is the president of a gaming company and the leader of a famous online guild. He falls in love with Wei Wei and pursues her relentlessly.
-
Zheng Shuang as Bei Wei Wei / Lu Wei Wei Wei: She is the female lead of the drama. She is beautiful, intelligent, and kind. She is a computer science major and an online gaming expert. She becomes Xiao Nai's online wife and real-life girlfriend.
-
Bai Yu as Cao Guang / Zhen Shao Xiang: He is Xiao Nai's rival in love and business. He is also a computer science major and a gaming company CEO. He likes Wei Wei and tries to win her over.
-
Mao Xiao Tong as Er Xi / Yao Yao: She is Wei Wei's best friend and roommate. She is a literature major and an online game fan. She is bubbly, cheerful, and loyal.
-
Zhang Bin Bin as KO / Yu Ban Shan: He is Xiao Nai's best friend and business partner. He is a computer science major and a gaming genius. He is cool, calm, and witty.
-
Niu Jun Feng as Hao Mei / Qiu Yong Hou: He is Xiao Nai's friend and colleague. He is a computer science major and a gaming programmer. He is cute, naive, and funny.
-
Zheng Ye Cheng as Zhen Shui Wu Xiang / Yu Gong: He is Wei Wei's ex-online husband and Cao Guang's friend. He is a computer science major and a gaming developer. He is arrogant, selfish, and jealous.
-
-
The popularity of Love 020
-
Love 020 is one of the most popular and successful Chinese dramas of all time. It has received rave reviews from critics and audiences alike, for its sweet romance, hilarious comedy, thrilling action, and stunning visuals. It has also won several awards, such as the Best Foreign TV Series at the Seoul International Drama Awards in 2017.
-
Love 020 has also gained a huge fan base both in China and abroad, especially among the young generation who can relate to the online gaming culture and the campus life. It has been viewed over 24 billion times on various online platforms, making it one of the most watched Chinese dramas ever. It has also been adapted into a movie, a spin-off series, and a Thai remake.
-
How to download love 020 dramacool with English subtitles
-Watch love 020 dramacool online free without downloading
-Download love 020 dramacool full episodes in HD quality
-Love 020 dramacool review and ratings
-Download love 020 dramacool OST and songs
-Love 020 dramacool cast and characters
-Download love 020 dramacool behind the scenes and interviews
-Love 020 dramacool vs love o2o comparison and differences
-Download love 020 dramacool Chinese novel and manga
-Love 020 dramacool fanfiction and fan art
-Download love 020 dramacool spin-off and sequel
-Love 020 dramacool best moments and scenes
-Download love 020 dramacool wallpapers and gifs
-Love 020 dramacool trivia and facts
-Download love 020 dramacool bloopers and funny moments
-Love 020 dramacool quotes and dialogues
-Download love 020 dramacool game and app
-Love 020 dramacool merchandise and products
-Download love 020 dramacool Netflix and Viki versions
-Love 020 dramacool awards and nominations
-Download love 020 dramacool alternative links and sites
-Love 020 dramacool spoilers and ending explained
-Download love 020 dramacool bonus and extra content
-Love 020 dramacool recommendations and similar dramas
-Download love 020 dramacool in different languages and formats
-
Why watch Love 020 on Dramacool?
-
If you are interested in watching Love 020 online, you might be wondering where to find it. There are many websites that offer Asian dramas for streaming or downloading, but not all of them are reliable or safe. Some of them might have low-quality videos, annoying ads, broken links, or even viruses. That's why we recommend you to watch Love 020 on Dramacool, one of the best websites to watch Asian dramas for free.
-
The benefits of Dramacool
-
Dramacool is a website that provides a large collection of Asian dramas, movies, shows, and anime from various countries, such as China, Korea, Japan, Taiwan, Thailand, and more. You can watch them online or download them for offline viewing. Here are some of the benefits of using Dramacool:
-
-
It is free: You don't have to pay anything to watch or download your favorite dramas on Dramacool. You can enjoy unlimited access to thousands of titles without any subscription or registration.
-
It is fast: You don't have to wait for long buffering or loading times to watch your favorite dramas on Dramacool. You can stream or download them in high speed and high quality.
-
It is updated: You don't have to worry about missing out on the latest episodes or releases of your favorite dramas on Dramacool. You can find them as soon as they are available on the website.
-
It is easy: You don't have to struggle with complicated navigation or search functions to find your favorite dramas on Dramacool. You can browse them by genre, country, year, popularity, or alphabetically.
-
The features of Dramacool
-
Dramacool is not only a website that provides a lot of Asian dramas, but also a website that offers a lot of features to enhance your viewing experience. Here are some of the features of Dramacool:
-
-
It has multiple servers: You can choose from different servers to watch or download your favorite dramas on Dramacool. You can switch to another server if one is not working or slow.
-
It has multiple languages: You can watch your favorite dramas on Dramacool with subtitles in various languages, such as English, Spanish, French, Arabic, and more. You can also change the font size, color, and style of the subtitles.
-
It has multiple devices: You can watch your favorite dramas on Dramacool on any device, such as a computer, a laptop, a tablet, or a smartphone. You can also cast them to your TV or Chromecast.
-
It has multiple genres: You can find your favorite dramas on Dramacool in different genres, such as romance, comedy, action, thriller, horror, fantasy, historical, and more. You can also filter them by ratings, reviews, or recommendations.
-
-
The drawbacks of Dramacool
-
Dramacool is a great website to watch Asian dramas for free, but it is not perfect. It also has some drawbacks that you should be aware of before using it. Here are some of the drawbacks of Dramacool:
-
-
It is illegal: You should know that watching or downloading dramas on Dramacool is illegal, as it violates the copyright laws and the intellectual property rights of the original creators and distributors. You might face legal consequences or penalties if you are caught using it.
-
It is risky: You should also know that watching or downloading dramas on Dramacool is risky, as it might expose your device or data to malware, viruses, spyware, or hackers. You might lose your personal information or damage your device if you are not careful.
-
It is unreliable: You should also know that watching or downloading dramas on Dramacool is unreliable, as it might have broken links, missing episodes, wrong subtitles, low-quality videos, or annoying ads. You might not enjoy your viewing experience if you encounter these problems.
-
-
How to download Love 020 on Dramacool?
-
If you still want to watch Love 020 on Dramacool despite its drawbacks, you should follow these steps to download it safely and easily:
-
Step 1: Visit the official website of Dramacool
-
The first step is to visit the official website of Dramacool at https://www.dramacool9.co/. You can use any browser or device to access it. However, you should make sure that you have a good internet connection and a reliable antivirus software installed on your device.
-
Step 2: Search for Love 020 in the search bar
-
The second step is to search for Love 020 in the search bar at the top right corner of the website. You can type in "Love 020" or "Just One Smile Is Very Alluring" (the alternative title of the drama) and hit enter. You will see a list of results related to the drama.
-
Step 3: Choose the episode you want to download
-
The third step is to choose the episode you want to download from the list of results. You can click on the title or the image of the episode to open it. You will see a video player with some options below it.
Step 4: Click on the download button and select the quality and format
-
The fourth step is to click on the download button below the video player. You will see a pop-up window with some options to choose from. You can select the quality and format of the video you want to download, such as HD, SD, MP4, or MKV. You can also see the size and duration of the video.
-
Step 5: Enjoy watching Love 020 offline
-
The fifth and final step is to enjoy watching Love 020 offline. You can click on the download link or scan the QR code to start downloading the video to your device. You can also copy and paste the link to your download manager or browser. Once the download is complete, you can watch Love 020 anytime and anywhere you want.
-
Conclusion
-
Love 020 is a wonderful Chinese drama that you should not miss. It has a captivating plot, a charming cast, and a beautiful soundtrack. It will make you laugh, cry, and swoon over the adorable couple. If you want to watch Love 020 online, you can use Dramacool, a free website that offers a lot of Asian dramas. However, you should also be aware of the drawbacks of using Dramacool, such as its illegality, riskiness, and unreliability. If you want to download Love 020 on Dramacool, you can follow the steps we have provided in this article. We hope you enjoy watching Love 020 on Dramacool!
-
FAQs
-
Here are some frequently asked questions about downloading Love 020 on Dramacool:
-
-
Q: Is it safe to download Love 020 on Dramacool?
-
A: It depends on how careful you are when using Dramacool. You should always use a reliable antivirus software and a VPN service to protect your device and data from malware, viruses, spyware, or hackers. You should also avoid clicking on any suspicious links or ads that might redirect you to harmful websites.
-
Q: Is it legal to download Love 020 on Dramacool?
-
A: No, it is not legal to download Love 020 on Dramacool. You are violating the copyright laws and the intellectual property rights of the original creators and distributors of the drama. You might face legal consequences or penalties if you are caught using Dramacool.
-
Q: How many episodes are there in Love 020?
-
A: There are 30 episodes in Love 020, each lasting about 45 minutes. You can watch them all on Dramacool for free.
-
Q: Where can I find the subtitles for Love 020?
-
A: You can find the subtitles for Love 020 on Dramacool in various languages, such as English, Spanish, French, Arabic, and more. You can also change the font size, color, and style of the subtitles according to your preference.
-
Q: What are some other websites to watch or download Love 020?
-
A: Some other websites to watch or download Love 020 are Kissasian, Viki, Netflix, WeTV, iQiyi, and more. However, some of them might require a subscription or registration fee to access their content.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_repaint.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_repaint.py
deleted file mode 100644
index 3ab44975e92c876052cbada8e7e2cf19ac526ac3..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_repaint.py
+++ /dev/null
@@ -1,321 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 ETH Zurich Computer Vision Lab and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import math
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import paddle
-import paddle.nn.functional as F
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput
-from .scheduling_utils import SchedulerMixin
-
-
-@dataclass
-class RePaintSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- pred_original_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
- The predicted denoised sample (x_{0}) based on the model output from
- the current timestep. `pred_original_sample` can be used to preview progress or for guidance.
- """
-
- prev_sample: paddle.Tensor
- pred_original_sample: paddle.Tensor
-
-
-def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
- (1-beta) over time from t = [0,1].
-
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
- to that part of the diffusion process.
-
-
- Args:
- num_diffusion_timesteps (`int`): the number of betas to produce.
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
- prevent singularities.
-
- Returns:
- betas (`np.ndarray`): the betas used by the scheduler to step the model outputs
- """
-
- def alpha_bar(time_step):
- return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2
-
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return paddle.to_tensor(betas, dtype="float32")
-
-
-class RePaintScheduler(SchedulerMixin, ConfigMixin):
- """
- RePaint is a schedule for DDPM inpainting inside a given mask.
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- For more details, see the original paper: https://arxiv.org/pdf/2201.09865.pdf
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- eta (`float`):
- The weight of noise for added noise in a diffusion step. Its value is between 0.0 and 1.0 -0.0 is DDIM and
- 1.0 is DDPM scheduler respectively.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- variance_type (`str`):
- options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small`,
- `fixed_small_log`, `fixed_large`, `fixed_large_log`, `learned` or `learned_range`.
- clip_sample (`bool`, default `True`):
- option to clip predicted sample between -1 and 1 for numerical stability.
-
- """
-
- order = 1
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- eta: float = 0.0,
- trained_betas: Optional[np.ndarray] = None,
- clip_sample: bool = True,
- ):
- if trained_betas is not None:
- self.betas = paddle.to_tensor(trained_betas)
- elif beta_schedule == "linear":
- self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32")
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2
- elif beta_schedule == "squaredcos_cap_v2":
- # Glide cosine schedule
- self.betas = betas_for_alpha_bar(num_train_timesteps)
- elif beta_schedule == "sigmoid":
- # GeoDiff sigmoid schedule
- betas = paddle.linspace(-6, 6, num_train_timesteps)
- self.betas = F.sigmoid(betas) * (beta_end - beta_start) + beta_start
- else:
- raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = paddle.cumprod(self.alphas, 0)
- self.one = paddle.to_tensor(1.0)
-
- self.final_alpha_cumprod = paddle.to_tensor(1.0)
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = 1.0
-
- # setable values
- self.num_inference_steps = None
- self.timesteps = paddle.to_tensor(np.arange(0, num_train_timesteps)[::-1].copy())
-
- self.eta = eta
-
- def scale_model_input(self, sample: paddle.Tensor, timestep: Optional[int] = None) -> paddle.Tensor:
- """
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
-
- Args:
- sample (`paddle.Tensor`): input sample
- timestep (`int`, optional): current timestep
-
- Returns:
- `paddle.Tensor`: scaled input sample
- """
- return sample
-
- def set_timesteps(
- self,
- num_inference_steps: int,
- jump_length: int = 10,
- jump_n_sample: int = 10,
- ):
- num_inference_steps = min(self.config.num_train_timesteps, num_inference_steps)
- self.num_inference_steps = num_inference_steps
-
- timesteps = []
-
- jumps = {}
- for j in range(0, num_inference_steps - jump_length, jump_length):
- jumps[j] = jump_n_sample - 1
-
- t = num_inference_steps
- while t >= 1:
- t = t - 1
- timesteps.append(t)
-
- if jumps.get(t, 0) > 0:
- jumps[t] = jumps[t] - 1
- for _ in range(jump_length):
- t = t + 1
- timesteps.append(t)
-
- timesteps = np.array(timesteps) * (self.config.num_train_timesteps // self.num_inference_steps)
- self.timesteps = paddle.to_tensor(timesteps)
-
- def _get_variance(self, t):
- prev_timestep = t - self.config.num_train_timesteps // self.num_inference_steps
-
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod
- beta_prod_t = 1 - alpha_prod_t
- beta_prod_t_prev = 1 - alpha_prod_t_prev
-
- # For t > 0, compute predicted variance βt (see formula (6) and (7) from
- # https://arxiv.org/pdf/2006.11239.pdf) and sample from it to get
- # previous sample x_{t-1} ~ N(pred_prev_sample, variance) == add
- # variance to pred_sample
- # Is equivalent to formula (16) in https://arxiv.org/pdf/2010.02502.pdf
- # without eta.
- # variance = (1 - alpha_prod_t_prev) / (1 - alpha_prod_t) * self.betas[t]
- variance = (beta_prod_t_prev / beta_prod_t) * (1 - alpha_prod_t / alpha_prod_t_prev)
-
- return variance
-
- def step(
- self,
- model_output: paddle.Tensor,
- timestep: int,
- sample: paddle.Tensor,
- original_image: paddle.Tensor,
- mask: paddle.Tensor,
- generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None,
- return_dict: bool = True,
- ) -> Union[RePaintSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`paddle.Tensor`): direct output from learned
- diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`paddle.Tensor`):
- current instance of sample being created by diffusion process.
- original_image (`paddle.Tensor`):
- the original image to inpaint on.
- mask (`paddle.Tensor`):
- the mask where 0.0 values define which part of the original image to inpaint (change).
- generator (`paddle.Generator`, *optional*): random number generator.
- return_dict (`bool`): option for returning tuple rather than
- DDPMSchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.RePaintSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.RePaintSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When
- returning a tuple, the first element is the sample tensor.
-
- """
- t = timestep
- prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps
-
- # 1. compute alphas, betas
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod
- beta_prod_t = 1 - alpha_prod_t
-
- # 2. compute predicted original sample from predicted noise also called
- # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
- pred_original_sample = (sample - beta_prod_t**0.5 * model_output) / alpha_prod_t**0.5
-
- # 3. Clip "predicted x_0"
- if self.config.clip_sample:
- pred_original_sample = paddle.clip(pred_original_sample, -1, 1)
-
- # We choose to follow RePaint Algorithm 1 to get x_{t-1}, however we
- # substitute formula (7) in the algorithm coming from DDPM paper
- # (formula (4) Algorithm 2 - Sampling) with formula (12) from DDIM paper.
- # DDIM schedule gives the same results as DDPM with eta = 1.0
- # Noise is being reused in 7. and 8., but no impact on quality has
- # been observed.
-
- # 5. Add noise
- noise = paddle.randn(model_output.shape, dtype=model_output.dtype, generator=generator)
- std_dev_t = self.eta * self._get_variance(timestep) ** 0.5
-
- variance = 0
- if t > 0 and self.eta > 0:
- variance = std_dev_t * noise
-
- # 6. compute "direction pointing to x_t" of formula (12)
- # from https://arxiv.org/pdf/2010.02502.pdf
- pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** 0.5 * model_output
-
- # 7. compute x_{t-1} of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
- prev_unknown_part = alpha_prod_t_prev**0.5 * pred_original_sample + pred_sample_direction + variance
-
- # 8. Algorithm 1 Line 5 https://arxiv.org/pdf/2201.09865.pdf
- prev_known_part = (alpha_prod_t_prev**0.5) * original_image + ((1 - alpha_prod_t_prev) ** 0.5) * noise
-
- # 9. Algorithm 1 Line 8 https://arxiv.org/pdf/2201.09865.pdf
- pred_prev_sample = mask * prev_known_part + (1.0 - mask) * prev_unknown_part
-
- if not return_dict:
- return (
- pred_prev_sample,
- pred_original_sample,
- )
-
- return RePaintSchedulerOutput(prev_sample=pred_prev_sample, pred_original_sample=pred_original_sample)
-
- def undo_step(self, sample, timestep, generator=None):
- n = self.config.num_train_timesteps // self.num_inference_steps
-
- for i in range(n):
- beta = self.betas[timestep + i]
- noise = paddle.randn(sample.shape, dtype=sample.dtype, generator=generator)
-
- # 10. Algorithm 1 Line 10 https://arxiv.org/pdf/2201.09865.pdf
- sample = (1 - beta) ** 0.5 * sample + beta**0.5 * noise
-
- return sample
-
- def add_noise(
- self,
- original_samples: paddle.Tensor,
- noise: paddle.Tensor,
- timesteps: paddle.Tensor,
- ) -> paddle.Tensor:
- raise NotImplementedError("Use `DDPMScheduler.add_noise()` to train for sampling with RePaint.")
-
- def __len__(self):
- return self.config.num_train_timesteps
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conformer/espnet_transformer_attn.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conformer/espnet_transformer_attn.py
deleted file mode 100644
index a479a27ea6fd4202359da435234408ba074f7577..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conformer/espnet_transformer_attn.py
+++ /dev/null
@@ -1,186 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-# Copyright 2019 Shigeki Karita
-# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-
-"""Multi-Head Attention layer definition."""
-
-import math
-
-import numpy
-import torch
-from torch import nn
-
-
-class MultiHeadedAttention(nn.Module):
- """Multi-Head Attention layer.
- Args:
- n_head (int): The number of heads.
- n_feat (int): The number of features.
- dropout_rate (float): Dropout rate.
- """
-
- def __init__(self, n_head, n_feat, dropout_rate):
- """Construct an MultiHeadedAttention object."""
- super(MultiHeadedAttention, self).__init__()
- assert n_feat % n_head == 0
- # We assume d_v always equals d_k
- self.d_k = n_feat // n_head
- self.h = n_head
- self.linear_q = nn.Linear(n_feat, n_feat)
- self.linear_k = nn.Linear(n_feat, n_feat)
- self.linear_v = nn.Linear(n_feat, n_feat)
- self.linear_out = nn.Linear(n_feat, n_feat)
- self.attn = None
- self.dropout = nn.Dropout(p=dropout_rate)
-
- def forward_qkv(self, query, key, value):
- """Transform query, key and value.
- Args:
- query (torch.Tensor): Query tensor (#batch, time1, size).
- key (torch.Tensor): Key tensor (#batch, time2, size).
- value (torch.Tensor): Value tensor (#batch, time2, size).
- Returns:
- torch.Tensor: Transformed query tensor (#batch, n_head, time1, d_k).
- torch.Tensor: Transformed key tensor (#batch, n_head, time2, d_k).
- torch.Tensor: Transformed value tensor (#batch, n_head, time2, d_k).
- """
- n_batch = query.size(0)
- q = self.linear_q(query).view(n_batch, -1, self.h, self.d_k)
- k = self.linear_k(key).view(n_batch, -1, self.h, self.d_k)
- v = self.linear_v(value).view(n_batch, -1, self.h, self.d_k)
- q = q.transpose(1, 2) # (batch, head, time1, d_k)
- k = k.transpose(1, 2) # (batch, head, time2, d_k)
- v = v.transpose(1, 2) # (batch, head, time2, d_k)
-
- return q, k, v
-
- def forward_attention(self, value, scores, mask):
- """Compute attention context vector.
- Args:
- value (torch.Tensor): Transformed value (#batch, n_head, time2, d_k).
- scores (torch.Tensor): Attention score (#batch, n_head, time1, time2).
- mask (torch.Tensor): Mask (#batch, 1, time2) or (#batch, time1, time2).
- Returns:
- torch.Tensor: Transformed value (#batch, time1, d_model)
- weighted by the attention score (#batch, time1, time2).
- """
- n_batch = value.size(0)
- if mask is not None:
- mask = mask.unsqueeze(1).eq(0) # (batch, 1, *, time2)
- min_value = float(
- numpy.finfo(torch.tensor(0, dtype=scores.dtype).numpy().dtype).min
- )
- scores = scores.masked_fill(mask, min_value)
- self.attn = torch.softmax(scores, dim=-1).masked_fill(
- mask, 0.0
- ) # (batch, head, time1, time2)
- else:
- self.attn = torch.softmax(scores, dim=-1) # (batch, head, time1, time2)
-
- p_attn = self.dropout(self.attn)
- x = torch.matmul(p_attn, value) # (batch, head, time1, d_k)
- x = (
- x.transpose(1, 2).contiguous().view(n_batch, -1, self.h * self.d_k)
- ) # (batch, time1, d_model)
-
- return self.linear_out(x) # (batch, time1, d_model)
-
- def forward(self, query, key, value, mask):
- """Compute scaled dot product attention.
- Args:
- query (torch.Tensor): Query tensor (#batch, time1, size).
- key (torch.Tensor): Key tensor (#batch, time2, size).
- value (torch.Tensor): Value tensor (#batch, time2, size).
- mask (torch.Tensor): Mask tensor (#batch, 1, time2) or
- (#batch, time1, time2).
- Returns:
- torch.Tensor: Output tensor (#batch, time1, d_model).
- """
- q, k, v = self.forward_qkv(query, key, value)
- scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.d_k)
- return self.forward_attention(v, scores, mask)
-
-
-class RelPositionMultiHeadedAttention(MultiHeadedAttention):
- """Multi-Head Attention layer with relative position encoding.
- Paper: https://arxiv.org/abs/1901.02860
- Args:
- n_head (int): The number of heads.
- n_feat (int): The number of features.
- dropout_rate (float): Dropout rate.
- """
-
- def __init__(self, n_head, n_feat, dropout_rate):
- """Construct an RelPositionMultiHeadedAttention object."""
- super().__init__(n_head, n_feat, dropout_rate)
- # linear transformation for positional ecoding
- self.linear_pos = nn.Linear(n_feat, n_feat, bias=False)
- # these two learnable bias are used in matrix c and matrix d
- # as described in https://arxiv.org/abs/1901.02860 Section 3.3
- self.pos_bias_u = nn.Parameter(torch.Tensor(self.h, self.d_k))
- self.pos_bias_v = nn.Parameter(torch.Tensor(self.h, self.d_k))
- torch.nn.init.xavier_uniform_(self.pos_bias_u)
- torch.nn.init.xavier_uniform_(self.pos_bias_v)
-
- def rel_shift(self, x, zero_triu=False):
- """Compute relative positinal encoding.
- Args:
- x (torch.Tensor): Input tensor (batch, time, size).
- zero_triu (bool): If true, return the lower triangular part of the matrix.
- Returns:
- torch.Tensor: Output tensor.
- """
- zero_pad = torch.zeros((*x.size()[:3], 1), device=x.device, dtype=x.dtype)
- x_padded = torch.cat([zero_pad, x], dim=-1)
-
- x_padded = x_padded.view(*x.size()[:2], x.size(3) + 1, x.size(2))
- x = x_padded[:, :, 1:].view_as(x)
-
- if zero_triu:
- ones = torch.ones((x.size(2), x.size(3)))
- x = x * torch.tril(ones, x.size(3) - x.size(2))[None, None, :, :]
-
- return x
-
- def forward(self, query, key, value, pos_emb, mask):
- """Compute 'Scaled Dot Product Attention' with rel. positional encoding.
- Args:
- query (torch.Tensor): Query tensor (#batch, time1, size).
- key (torch.Tensor): Key tensor (#batch, time2, size).
- value (torch.Tensor): Value tensor (#batch, time2, size).
- pos_emb (torch.Tensor): Positional embedding tensor (#batch, time2, size).
- mask (torch.Tensor): Mask tensor (#batch, 1, time2) or
- (#batch, time1, time2).
- Returns:
- torch.Tensor: Output tensor (#batch, time1, d_model).
- """
- q, k, v = self.forward_qkv(query, key, value)
- q = q.transpose(1, 2) # (batch, time1, head, d_k)
-
- n_batch_pos = pos_emb.size(0)
- p = self.linear_pos(pos_emb).view(n_batch_pos, -1, self.h, self.d_k)
- p = p.transpose(1, 2) # (batch, head, time1, d_k)
-
- # (batch, head, time1, d_k)
- q_with_bias_u = (q + self.pos_bias_u).transpose(1, 2)
- # (batch, head, time1, d_k)
- q_with_bias_v = (q + self.pos_bias_v).transpose(1, 2)
-
- # compute attention score
- # first compute matrix a and matrix c
- # as described in https://arxiv.org/abs/1901.02860 Section 3.3
- # (batch, head, time1, time2)
- matrix_ac = torch.matmul(q_with_bias_u, k.transpose(-2, -1))
-
- # compute matrix b and matrix d
- # (batch, head, time1, time2)
- matrix_bd = torch.matmul(q_with_bias_v, p.transpose(-2, -1))
- matrix_bd = self.rel_shift(matrix_bd)
-
- scores = (matrix_ac + matrix_bd) / math.sqrt(
- self.d_k
- ) # (batch, head, time1, time2)
-
- return self.forward_attention(v, scores, mask)
diff --git a/spaces/AP123/CerealBoxMaker/README.md b/spaces/AP123/CerealBoxMaker/README.md
deleted file mode 100644
index 60369446298e45773d87cea0952d30a9123c9a4b..0000000000000000000000000000000000000000
--- a/spaces/AP123/CerealBoxMaker/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: CerealBoxMaker
-emoji: 🥛
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
-license: bigscience-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ASJMO/freegpt/client/js/sidebar-toggler.js b/spaces/ASJMO/freegpt/client/js/sidebar-toggler.js
deleted file mode 100644
index b23f94e3bfba5bac53432e1b557765736dabbab4..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/client/js/sidebar-toggler.js
+++ /dev/null
@@ -1,34 +0,0 @@
-const sidebar = document.querySelector(".sidebar");
-const menuButton = document.querySelector(".menu-button");
-
-function toggleSidebar(event) {
- if (sidebar.classList.contains("shown")) {
- hideSidebar(event.target);
- } else {
- showSidebar(event.target);
- }
- window.scrollTo(0, 0);
-}
-
-function showSidebar(target) {
- sidebar.classList.add("shown");
- target.classList.add("rotated");
- document.body.style.overflow = "hidden";
-}
-
-function hideSidebar(target) {
- sidebar.classList.remove("shown");
- target.classList.remove("rotated");
- document.body.style.overflow = "auto";
-}
-
-menuButton.addEventListener("click", toggleSidebar);
-
-document.body.addEventListener('click', function(event) {
- if (event.target.matches('.conversation-title')) {
- const menuButtonStyle = window.getComputedStyle(menuButton);
- if (menuButtonStyle.display !== 'none') {
- hideSidebar(menuButton);
- }
- }
-});
diff --git a/spaces/AashishKumar/Restaurant_voice_chatbot/README.md b/spaces/AashishKumar/Restaurant_voice_chatbot/README.md
deleted file mode 100644
index 626d1cb854deb69c14ceab80ecb04cf951af935f..0000000000000000000000000000000000000000
--- a/spaces/AashishKumar/Restaurant_voice_chatbot/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Restaurant Voice Chatbot
-emoji: 💩
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.20.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/GPTalk.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/GPTalk.py
deleted file mode 100644
index c85399c1dbf0d2d23d5b8e02b7061e201610f242..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/GPTalk.py
+++ /dev/null
@@ -1,83 +0,0 @@
-from __future__ import annotations
-
-import secrets, time, json
-from aiohttp import ClientSession
-from typing import AsyncGenerator
-
-from .base_provider import AsyncGeneratorProvider
-from .helper import format_prompt
-
-
-class GPTalk(AsyncGeneratorProvider):
- url = "https://gptalk.net"
- supports_gpt_35_turbo = True
- working = True
- _auth = None
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- **kwargs
- ) -> AsyncGenerator:
- if not model:
- model = "gpt-3.5-turbo"
- timestamp = int(time.time())
- headers = {
- 'authority': 'gptalk.net',
- 'accept': '*/*',
- 'accept-language': 'de-DE,de;q=0.9,en-DE;q=0.8,en;q=0.7,en-US;q=0.6,nl;q=0.5,zh-CN;q=0.4,zh-TW;q=0.3,zh;q=0.2',
- 'content-type': 'application/json',
- 'origin': 'https://gptalk.net',
- 'sec-ch-ua': '"Google Chrome";v="117", "Not;A=Brand";v="8", "Chromium";v="117"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-platform': '"Linux"',
- 'sec-fetch-dest': 'empty',
- 'sec-fetch-mode': 'cors',
- 'sec-fetch-site': 'same-origin',
- 'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36',
- 'x-auth-appid': '2229',
- 'x-auth-openid': '',
- 'x-auth-platform': '',
- 'x-auth-timestamp': f"{timestamp}",
- }
- async with ClientSession(headers=headers) as session:
- if not cls._auth or cls._auth["expires_at"] < timestamp:
- data = {
- "fingerprint": secrets.token_hex(16).zfill(32),
- "platform": "fingerprint"
- }
- async with session.post(cls.url + "/api/chatgpt/user/login", json=data) as response:
- response.raise_for_status()
- cls._auth = (await response.json())["data"]
- data = {
- "content": format_prompt(messages),
- "accept": "stream",
- "from": 1,
- "model": model,
- "is_mobile": 0,
- "user_agent": headers["user-agent"],
- "is_open_ctx": 0,
- "prompt": "",
- "roid": 111,
- "temperature": 0,
- "ctx_msg_count": 3,
- "created_at": timestamp
- }
- headers = {
- 'authorization': f'Bearer {cls._auth["token"]}',
- }
- async with session.post(cls.url + "/api/chatgpt/chatapi/text", json=data, headers=headers) as response:
- response.raise_for_status()
- token = (await response.json())["data"]["token"]
- last_message = ""
- async with session.get(cls.url + "/api/chatgpt/chatapi/stream", params={"token": token}) as response:
- response.raise_for_status()
- async for line in response.content:
- if line.startswith(b"data: "):
- if line.startswith(b"data: [DONE]"):
- break
- message = json.loads(line[6:-1])["content"]
- yield message[len(last_message):]
- last_message = message
\ No newline at end of file
diff --git a/spaces/AdamOswald1/finetuned_diffusion/README.md b/spaces/AdamOswald1/finetuned_diffusion/README.md
deleted file mode 100644
index faa942d2fb96571ae5edf2defd2c2df6e6a8f7cc..0000000000000000000000000000000000000000
--- a/spaces/AdamOswald1/finetuned_diffusion/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Finetuned Diffusion
-emoji: 🪄🖼️
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: true
-license: mit
-duplicated_from: anzorq/finetuned_diffusion
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/ConfirmDialog.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/ConfirmDialog.d.ts
deleted file mode 100644
index 64644b76d2738b83cc935b6b33cff8f7504f4a1c..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/ConfirmDialog.d.ts
+++ /dev/null
@@ -1,127 +0,0 @@
-import Dialog from '../dialog/Dialog';
-import { GeneralCreateGameObjectCallbackType } from '../utils/build/GeneralCreateGameObjectCallbackType';
-import CreateBackground from '../utils/build/CreateBackground';
-import SimpleLabel from '../simplelabel/SimpleLabel';
-import CreateTextArea from '../utils/build/CreateTextArea';
-import Label from '../label/Label';
-
-export default ConfirmDialog;
-
-declare namespace ConfirmDialog {
- type AlignTypes = number | 'left' | 'center' | 'right';
-
- interface IConfigClick {
- mode: 0 | 1 | 'pointerup' | 'pointerdown' | 'release' | 'press',
- clickInterval?: number
- }
-
- interface IConfig {
- x?: number,
- y?: number,
- width?: number,
- height?: number,
-
- space?: {
- left?: number, right?: number, top?: number, bottom?: number,
-
- title?: number,
- titleLeft?: number,
- titleRight?: number,
-
- content?: number,
- contentLeft?: number,
- contentRight?: number,
-
- actionsLeft?: number,
- actionsRight?: number,
- action?: number,
-
- choices?: number,
- choicesLeft?: number,
- choicesRight?: number,
- choice?: number,
- choiceLine?: number,
- choiceColumn?: number, choiceRow?: number,
- choicesBackgroundLeft?: number,
- choicesBackgroundRight?: number,
- choicesBackgroundTop?: number,
- choicesBackgroundBottom?: number,
- };
-
- background?: CreateBackground.IConfig,
-
- title?: SimpleLabel.IConfig,
-
- content?: SimpleLabel.IConfig | CreateTextArea.IConfig,
-
- buttonMode?: 0 | 1 | 2;
- button?: SimpleLabel.IConfig,
- buttonA?: SimpleLabel.IConfig,
- buttonB?: SimpleLabel.IConfig,
-
- choicesType?: string,
- choice?: SimpleLabel.IConfig,
- choicesWidth?: number,
- choicesHeight?: number,
-
- proportion?: {
- title?: number,
- content?: number,
- actions?: number,
- choices?: number,
- },
-
- expand?: {
- title?: boolean,
- content?: boolean,
- actions?: boolean,
- choices?: boolean,
- },
-
- align?: {
- title?: AlignTypes,
- content?: AlignTypes,
- actions?: AlignTypes,
- choices?: AlignTypes,
- },
-
- click?: IConfigClick
- }
-
- interface IResetChoiceDisplayContentConfig extends Label.IResetDisplayContentConfig {
- value?: any;
- }
-
- interface IResetDisplayContentConfig {
- title?: string | Label.IResetDisplayContentConfig,
-
- content?: string | Label.IResetDisplayContentConfig,
-
- buttonA?: string | Label.IResetDisplayContentConfig,
- buttonB?: string | Label.IResetDisplayContentConfig,
-
- choices?: (string | IResetChoiceDisplayContentConfig)[]
- }
-
- interface ICreatorsConfig {
- background?: GeneralCreateGameObjectCallbackType,
- title?: SimpleLabel.ICreatorsConfig,
- content?: SimpleLabel.ICreatorsConfig | CreateTextArea.ICreatorsConfig,
- button?: SimpleLabel.ICreatorsConfig,
- buttonA?: SimpleLabel.ICreatorsConfig,
- buttonB?: SimpleLabel.ICreatorsConfig,
- choice?: SimpleLabel.ICreatorsConfig,
- }
-}
-
-declare class ConfirmDialog extends Dialog {
- constructor(
- scene: Phaser.Scene,
- config?: ConfirmDialog.IConfig,
- creators?: ConfirmDialog.ICreatorsConfig
- );
-
- resetDisplayContent(
- config?: ConfirmDialog.IResetDisplayContentConfig
- ): this;
-}
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/methods/CreateContent.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/methods/CreateContent.js
deleted file mode 100644
index 8efa6c58978118cc198cc05d75024aba8142fc14..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/methods/CreateContent.js
+++ /dev/null
@@ -1,32 +0,0 @@
-import CreateLabel from '../../utils/build/CreateLabel.js';
-import CreateTextArea from '../../utils/build/CreateTextArea.js'
-
-const GetValue = Phaser.Utils.Objects.GetValue;
-
-var CreateContent = function (scene, config, creators) {
- var type = GetValue(config, '$type');
- if (type === undefined) {
- if (config &&
- (config.hasOwnProperty('slider') || config.hasOwnProperty('scroller'))
- ) {
- type = 'textarea';
- }
- }
-
-
- var gameObject;
- switch (type) {
- case 'textarea':
- gameObject = new CreateTextArea(scene, config, creators);
- break;
-
- default:
- gameObject = new CreateLabel(scene, config, creators);
- break;
- }
-
- scene.add.existing(gameObject);
- return gameObject;
-}
-
-export default CreateContent;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/CreateButtons.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/CreateButtons.js
deleted file mode 100644
index c2650adef0eb14df36acaa0bf5b2bb3de75c38a7..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/CreateButtons.js
+++ /dev/null
@@ -1,22 +0,0 @@
-var CreateButtons = function (scene, items, callback, scope) {
- var item;
- var buttons = [],
- button;
- if (items && callback) {
- for (var i = 0, cnt = items.length; i < cnt; i++) {
- item = items[i];
- item.scene = scene;
- if (scope) {
- button = callback.call(scope, item, i, items);
- } else {
- button = callback(item, i, items);
- }
- item.scene = undefined;
- buttons.push(button);
- }
- }
-
- return buttons;
-}
-
-export default CreateButtons;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pages/methods/GetPage.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pages/methods/GetPage.js
deleted file mode 100644
index ed826f2beed1266ecb771720dcf78a40c869da05..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pages/methods/GetPage.js
+++ /dev/null
@@ -1,10 +0,0 @@
-var GetPage = function (key) {
- if (key === undefined) {
- return null;
- } else if (!this.sizerChildren.hasOwnProperty(key)) {
- return null;
- } else {
- return this.sizerChildren[key];
- }
-}
-export default GetPage;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/tabpages/TabPages.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/tabpages/TabPages.d.ts
deleted file mode 100644
index 72ecd1bdaebf4564dd2b9801efde00c1ff531e55..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/tabpages/TabPages.d.ts
+++ /dev/null
@@ -1,74 +0,0 @@
-// import * as Phaser from 'phaser';
-import Sizer from '../sizer/Sizer';
-import Buttons from '../buttons/Buttons';
-import FixWidthButtons from '../fixwidthbuttons/FixWidthButtons';
-import Pages from '../pages/Pages';
-
-
-export default TabPages;
-
-declare namespace TabPages {
- interface IConfig extends Sizer.IConfig {
- background?: Phaser.GameObjects.GameObject,
-
- tabPosition?: 'top' | 'bottom' | 'left' | 'right',
- wrapTabs?: boolean,
- tabs?: Buttons.IConfig | FixWidthButtons.IConfig,
- pages?: Pages.IConfig,
-
- expand?: {
- tabs?: boolean
- },
-
- align?: {
- tabs?: 'top' | 'bottom' | 'left' | 'right' | 'center'
- }
-
-
- }
-
- interface IAddPageConfig {
- key?: string,
- tab: Phaser.GameObjects.GameObject,
- page: Phaser.GameObjects.GameObject
- }
-
-}
-
-declare class TabPages extends Sizer {
- constructor(
- scene: Phaser.Scene,
- config?: TabPages.IConfig
- );
-
- getPageKey(index: number): string;
- getPageIndex(key: string): number;
-
- addPage(
- key: string,
- tabGameObject: Phaser.GameObjects.GameObject,
- pageGameObject: Phaser.GameObjects.GameObject
- ): this;
-
- addPage(config: TabPages.IAddPageConfig): this;
-
- removePage(
- key: string,
- destroyChild?: boolean
- ): this;
-
- swapPage(
- key: string,
- fadeInDuration?: number
- ): this;
- swapFirstPage(fadeInDuration?: number): this;
- swapLastPage(fadeInDuration?: number): this;
-
- currentKey: string;
- readonly previousKey: string;
- keys: string[];
-
- getPage(key: string): Phaser.GameObjects.GameObject;
- readonly currentPage: Phaser.GameObjects.GameObject;
- readonly previousPage: Phaser.GameObjects.GameObject;
-}
\ No newline at end of file
diff --git a/spaces/AiMimicry/sovits-models/modules/losses.py b/spaces/AiMimicry/sovits-models/modules/losses.py
deleted file mode 100644
index cd21799eccde350c3aac0bdd661baf96ed220147..0000000000000000000000000000000000000000
--- a/spaces/AiMimicry/sovits-models/modules/losses.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import modules.commons as commons
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
- #print(logs_p)
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
diff --git a/spaces/AkitoP/umamusume_bert_vits2/bert/bert-base-japanese-v3/README.md b/spaces/AkitoP/umamusume_bert_vits2/bert/bert-base-japanese-v3/README.md
deleted file mode 100644
index c5b3456719f01801a2f29fef5faa8ee672391adf..0000000000000000000000000000000000000000
--- a/spaces/AkitoP/umamusume_bert_vits2/bert/bert-base-japanese-v3/README.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-license: apache-2.0
-datasets:
-- cc100
-- wikipedia
-language:
-- ja
-widget:
-- text: 東北大学で[MASK]の研究をしています。
----
-
-# BERT base Japanese (unidic-lite with whole word masking, CC-100 and jawiki-20230102)
-
-This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
-
-This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by the WordPiece subword tokenization.
-Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
-
-The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/).
-
-## Model architecture
-
-The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
-
-## Training Data
-
-The model is trained on the Japanese portion of [CC-100 dataset](https://data.statmt.org/cc-100/) and the Japanese version of Wikipedia.
-For Wikipedia, we generated a text corpus from the [Wikipedia Cirrussearch dump file](https://dumps.wikimedia.org/other/cirrussearch/) as of January 2, 2023.
-The corpus files generated from CC-100 and Wikipedia are 74.3GB and 4.9GB in size and consist of approximately 392M and 34M sentences, respectively.
-
-For the purpose of splitting texts into sentences, we used [fugashi](https://github.com/polm/fugashi) with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary (v0.0.7).
-
-## Tokenization
-
-The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.
-The vocabulary size is 32768.
-
-We used [fugashi](https://github.com/polm/fugashi) and [unidic-lite](https://github.com/polm/unidic-lite) packages for the tokenization.
-
-## Training
-
-We trained the model first on the CC-100 corpus for 1M steps and then on the Wikipedia corpus for another 1M steps.
-For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
-
-For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TPU Research Cloud](https://sites.research.google/trc/about/).
-
-## Licenses
-
-The pretrained models are distributed under the Apache License 2.0.
-
-## Acknowledgments
-
-This model is trained with Cloud TPUs provided by [TPU Research Cloud](https://sites.research.google/trc/about/) program.
diff --git a/spaces/AleksBlacky/Arxiv_paper_classifier/app.py b/spaces/AleksBlacky/Arxiv_paper_classifier/app.py
deleted file mode 100644
index 701ec516dd9f5a833c8622585391a94a50de3fa5..0000000000000000000000000000000000000000
--- a/spaces/AleksBlacky/Arxiv_paper_classifier/app.py
+++ /dev/null
@@ -1,136 +0,0 @@
-import streamlit as st
-import transformers
-import pickle
-import seaborn as sns
-from pandas import DataFrame
-from transformers import AutoTokenizer, AutoModelForSequenceClassification
-
-st.markdown("# Hello, friend!")
-st.markdown(" This magic application going to help you with understanding of science paper topic! Cool? Yeah! ")
-
-try:
- model_name_global = "allenai/scibert_scivocab_uncased"
- tokenizer_ = AutoTokenizer.from_pretrained(model_name_global)
- with open('./models/scibert/decode_dict.pkl', 'rb') as f:
- decode_dict = pickle.load(f)
-except ValueError:
- st.error("Load tokenizer or decode answer dict goes wrong! Pls contact author alekseystepin13@gmail.com")
-
-with st.form(key="my_form"):
- st.markdown("### 🎈 Do you want a little magic? ")
- st.markdown(" Write your article title and abstract to textboxes bellow and I'll gues topic of your paper! ")
- ce, c2, c3 = st.columns([0.07, 7, 0.07])
-
- with c2:
- doc_title = st.text_area(
- "Paste your abstract title below (1 to 50 words)",
- height=210,
- )
-
- doc_abstract = st.text_area(
- "Paste your abstract text below (1 to 500 words)",
- height=410,
- )
-
- MAX_WORDS_TITLE, MAX_WORDS_ABSTRACT = 50, 500
- import re
-
- len_title = len(re.findall(r"\w+", doc_title))
- len_abstract = len(re.findall(r"\w+", doc_abstract))
-
- if len_title > MAX_WORDS_TITLE:
- st.warning(
- "⚠️ Your title contains "
- + str(len_title)
- + " words."
- + " Only the first 50 words will be reviewed. Stay tuned as increased allowance is coming! 😊"
- )
-
- doc_title = doc_title[:MAX_WORDS_TITLE]
-
- if len_abstract > MAX_WORDS_ABSTRACT:
- st.warning(
- "⚠️ Your abstract contains "
- + str(len_abstract)
- + " words."
- + " Only the first 500 words will be reviewed. Stay tuned as increased allowance is coming! 😊"
- )
-
- doc_abstract = doc_abstract[:MAX_WORDS_ABSTRACT]
-
- submit_button = st.form_submit_button(label="✨ Let's play, try it!")
-
-if not submit_button:
- st.stop()
-
-if len_title < 1:
- st.error("Article without any words in title? Pls give me correct title!")
- st.stop()
-
-if len_abstract < 1:
- st.error("Article without any words in abstract? Pls give me correct abstract!")
- st.stop()
-
-
-# allow_output_mutation=True
-@st.cache(suppress_st_warning=True)
-def load_model():
- st.write("Loading big model")
- return AutoModelForSequenceClassification.from_pretrained("models/scibert/")
-
-
-def make_predict(tokens, decode_dict):
-
- model_ = load_model()
- outs = model_(tokens.input_ids)
-
- probs = outs["logits"].softmax(dim=-1).tolist()[0]
- topic_probs = {}
- for i, p in enumerate(probs):
- if p > 0.1:
- topic_probs[decode_dict[i]] = p
- return topic_probs
-
-
-model_local = "models/scibert/"
-
-title = doc_title
-abstract = doc_abstract
-try:
- tokens = tokenizer_(title + abstract, return_tensors="pt")
-except ValueError:
- st.error("Word parsing into tokens went wrong! Is input valid? If yes, pls contact author alekseystepin13@gmail.com")
-
-predicts = make_predict(tokens, decode_dict)
-
-st.markdown("## 🎈 Yor article probably about: ")
-st.header("")
-
-df = (
- DataFrame(predicts.items(), columns=["Topic", "Prob"])
- .sort_values(by="Prob", ascending=False)
- .reset_index(drop=True)
-)
-
-df.index += 1
-
-# Add styling
-cmGreen = sns.light_palette("green", as_cmap=True)
-cmRed = sns.light_palette("red", as_cmap=True)
-df = df.style.background_gradient(
- cmap=cmGreen,
- subset=[
- "Prob",
- ],
-)
-
-c1, c2, c3 = st.columns([1, 3, 1])
-
-format_dictionary = {
- "Prob": "{:.1%}",
-}
-
-df = df.format(format_dictionary)
-
-with c2:
- st.table(df)
diff --git a/spaces/Alex89912/ai-code-v1/app.py b/spaces/Alex89912/ai-code-v1/app.py
deleted file mode 100644
index b4b2ce8a02962c459816421a2e535dab0a4e82de..0000000000000000000000000000000000000000
--- a/spaces/Alex89912/ai-code-v1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/codellama/CodeLlama-7b-hf").launch()
\ No newline at end of file
diff --git a/spaces/AlgoveraAI/ocean-marketplace/app.py b/spaces/AlgoveraAI/ocean-marketplace/app.py
deleted file mode 100644
index 47e7ad98211ffe9384c73f81e840f706525e9520..0000000000000000000000000000000000000000
--- a/spaces/AlgoveraAI/ocean-marketplace/app.py
+++ /dev/null
@@ -1,173 +0,0 @@
-import gradio as gr
-from ocean_lib.config import Config
-from ocean_lib.ocean.ocean import Ocean
-from ocean_lib.web3_internal.wallet import Wallet
-from ocean_lib.web3_internal.currency import pretty_ether_and_wei, to_wei
-from ocean_lib.web3_internal.constants import ZERO_ADDRESS
-from ocean_lib.common.agreements.service_types import ServiceTypes
-from PIL import Image
-import numpy as np
-import matplotlib.pyplot as plt
-
-
-config = Config('config.ini')
-ocean = Ocean(config)
-
-def search(term="", did_in="", address="", buy_top_result=False):
-
- if address:
- wallet = Wallet(ocean.web3, private_key=address, transaction_timeout=20, block_confirmations=0)
-
- results = None
- dids = None
- data=None
- if term and not did_in:
- assets = ocean.assets.search(term)
-
- results = []
- datas = []
- balances = []
- dids = []
- for i in range(len(assets)):
- name = assets[i].values['_source']['service'][0]['attributes']['main']['name']
- type_ = assets[i].values['_source']['service'][0]['attributes']['main']['type'].upper()
- symbol = assets[i].values['_source']['dataTokenInfo']['symbol']
- data_token_address = assets[i].values['_source']['dataTokenInfo']['address']
- try:
- description = assets[i].values['_source']['service'][0]['attributes']['additionalInformation']['description']
- except:
- description = "No description"
- author = assets[i].values['_source']['service'][0]['attributes']['main']['author']
- did = assets[i].values['_source']['id']
- dids.append(did)
- chain = assets[i].values['_source']['service'][1]['serviceEndpoint']
-
- if chain != 'https://provider.rinkeby.oceanprotocol.com':
- continue
-
- if address:
- data_token = ocean.get_data_token(data_token_address)
- token_address = data_token.address
- balances.append(pretty_ether_and_wei(data_token.balanceOf(wallet.address)))
- else:
- balances.append(0)
-
- img = Image.open('algovera-tile.png')
-
- fig = plt.figure(figsize=(5,5))
- plt.axis("off")
- plt.imshow(img)
- plt.text(20, 100, name[:22], size=20)
- plt.text(20, 60, symbol)
- plt.text(400, 40, type_)
- plt.text(20, 140, author, size=12)
- plt.text(20, 200, description[:50])
- fig.tight_layout()
- fig.canvas.draw()
- data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- datas.append(data.reshape(fig.canvas.get_width_height()[::-1] + (3,)))
- plt.close()
-
- results.append([dids[-1], datas[-1], balances[-1]])
-
-
- if did_in:
- results = []
- balances = []
- datas = []
- dids = []
-
- asset = ocean.assets.resolve(did_in)
- name = asset.as_dictionary()['service'][0]['attributes']['main']['name']
- type_ = asset.as_dictionary()['service'][0]['attributes']['main']['type'].upper()
- symbol = asset.as_dictionary()['dataTokenInfo']['symbol']
- try:
- description = asset.as_dictionary()['service'][0]['attributes']['additionalInformation']['description']
- except:
- description = "No description"
- author = asset.as_dictionary()['service'][0]['attributes']['main']['author']
- dids.append(did_in)
- chain = asset.as_dictionary()['service'][1]['serviceEndpoint']
-
- if chain != 'https://provider.rinkeby.oceanprotocol.com':
- pass
-
- if address:
- data_token = ocean.get_data_token(asset.data_token_address)
- token_address = data_token.address
- balances.append(pretty_ether_and_wei(data_token.balanceOf(wallet.address)))
- else:
- balances.append(0)
-
-
-
- img = Image.open('algovera-tile.png')
-
- fig = plt.figure(figsize=(5,5))
- plt.axis("off")
- plt.imshow(img)
- plt.text(20, 100, name[:22], size=20)
- plt.text(20, 60, symbol)
- plt.text(400, 40, type_)
- plt.text(20, 140, author, size=12)
- plt.text(20, 200, description[:50])
- fig.tight_layout()
- fig.canvas.draw()
- data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- datas.append(data.reshape(fig.canvas.get_width_height()[::-1] + (3,)))
- plt.close()
-
- results.append([dids[-1], datas[-1], balances[-1]])
-
- if buy_top_result and address:
- asset = ocean.assets.resolve(dids[0])
- data_token = ocean.get_data_token(asset.data_token_address)
-
- service_type = asset.as_dictionary()['service'][1]['type']
- compute_service = asset.get_service(service_type)
-
- owner_address = asset.as_dictionary()['publicKey'][0]['owner']
-
- logs = ocean.exchange.search_exchange_by_data_token(asset.data_token_address)
- exchange_id = logs[0].args.exchangeId
-
- tx_result = ocean.exchange.buy_at_fixed_rate(to_wei(1), wallet, to_wei(5), exchange_id, asset.data_token_address, owner_address)
- assert tx_result, "failed buying tokens"
-
- balance = pretty_ether_and_wei(data_token.balanceOf(wallet.address))
-
- results[0][2] = balance
-
- return results
-
-description = (
- "This app can be used to search datasets and algorithms on the Ocean Marketplace. Enter a search term in the text box and the first result will be displayed as an image tile with description. "
-)
-
-article = (
- "
El juego tiene controles fáciles de aprender que lo hacen adecuado para cualquier persona que ama los juegos de baloncesto. Puedes usar el joystick virtual para mover a tu personaje y tocar los botones para disparar, clavar, bloquear o robar. También puede deslizar la pantalla para realizar movimientos especiales y combos. El juego tiene un modo tutorial que te enseña lo básico del juego y te ayuda a mejorar tus habilidades.
-
Varios modos de juego y desafíos
-
-
Personaliza tus jugadores y canchas
-
El juego te permite personalizar a tus jugadores y canchas con varios artículos que puedes comprar con el dinero que ganas jugando. Puedes elegir entre diferentes personajes, cada uno con sus propias fortalezas y debilidades, y cambiar su apariencia con diferentes trajes, peinados, zapatos, accesorios y más. También puede desbloquear y mejorar diferentes canchas, cada una con su propio tema y ambiente, como la calle, el gimnasio, la playa, el parque y más.
-
Juega online o offline con amigos
-
El juego es compatible con los modos en línea y fuera de línea, por lo que puede jugar en cualquier momento y en cualquier lugar. Puedes jugar en línea con otros jugadores de todo el mundo, o sin conexión con tus amigos en el mismo dispositivo. También puedes chatear con otros jugadores en el lobby del juego, enviarles emojis o retarlos a una revancha.
-
¿Por qué descargar Batalla de cesta sin anuncios Mod APK?
-
Beneficios de la versión modificada
-
Dinero ilimitado para comprar lo que quieras
-
La versión modificada de Basket Battle te da dinero ilimitado que puedes usar para comprar lo que quieras en el juego. Puedes comprar todos los personajes, trajes, accesorios, canchas y mejoras que quieras sin preocuparte por el costo. También puedes usar el dinero para saltarte los anuncios que aparecen en el juego.
-
No hay anuncios molestos para interrumpir su juego
-
La versión modificada de Basket Battle elimina todos los anuncios que normalmente aparecen en el juego. Puede disfrutar de un juego suave e ininterrumpido sin tener que ver ningún anuncio o esperar a que se carguen. También puede guardar sus datos y la batería jugando el juego sin anuncios.
-
Libre y seguro de instalar y usar
-
-
¿Cómo descargar e instalar la cesta de batalla sin anuncios Mod APK?
-
Paso 1: Descargar el archivo APK de una fuente de confianza
-
El primer paso es descargar el archivo APK de Basket Battle No Ads Mod APK de una fuente de confianza. Puedes encontrar muchos sitios web que ofrecen la versión modificada del juego, pero debes tener cuidado y elegir uno confiable. Puede utilizar el siguiente enlace para descargar el archivo APK de nuestro sitio web, que es 100% seguro y verificado.
-
Paso 2: Habilitar fuentes desconocidas en el dispositivo
-
El segundo paso es habilitar fuentes desconocidas en su dispositivo. Esto es necesario porque la versión modificada de Basket Battle no está disponible en Google Play Store, y necesitas permitir que tu dispositivo instale aplicaciones de otras fuentes. Para hacer esto, vaya a la configuración del dispositivo, luego a la seguridad, luego a fuentes desconocidas y enciéndala.
-
-
Paso 3: Instalar el archivo APK y disfrutar del juego
-
El tercer y último paso es instalar el archivo APK y disfrutar del juego. Para hacer esto, busque el archivo APK que descargó en el paso 1, y toque en él. Siga las instrucciones en la pantalla para completar el proceso de instalación. Una vez hecho, se puede iniciar el juego desde el cajón de la aplicación o la pantalla de inicio, y empezar a jugar Basket Battle No Ads Mod APK.
-
Conclusión
-
Batalla de la cesta sin anuncios Mod APK es un juego de baloncesto divertido y adictivo que le permite jugar uno a uno contra diferentes oponentes en varios lugares. Puede personalizar sus jugadores y canchas con varios elementos, y disfrutar de un juego suave y rápido sin anuncios ni limitaciones. La versión modificada de Basket Battle te da dinero ilimitado para comprar lo que quieras en el juego, y elimina todos los anuncios que normalmente aparecen en el juego. Puede descargar e instalar Basket Battle No Ads Mod APK en su dispositivo Android de forma gratuita y segura siguiendo los pasos anteriores. Si te gustan los juegos de baloncesto, usted debe probar definitivamente Basket Battle No Ads Mod APK.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/BernardoOlisan/vqganclip/taming-transformers/scripts/sample_conditional.py b/spaces/BernardoOlisan/vqganclip/taming-transformers/scripts/sample_conditional.py
deleted file mode 100644
index 174cf2af07c1a1ca4e6c35fc0e4f8d6e53591b56..0000000000000000000000000000000000000000
--- a/spaces/BernardoOlisan/vqganclip/taming-transformers/scripts/sample_conditional.py
+++ /dev/null
@@ -1,355 +0,0 @@
-import argparse, os, sys, glob, math, time
-import torch
-import numpy as np
-from omegaconf import OmegaConf
-import streamlit as st
-from streamlit import caching
-from PIL import Image
-from main import instantiate_from_config, DataModuleFromConfig
-from torch.utils.data import DataLoader
-from torch.utils.data.dataloader import default_collate
-
-
-rescale = lambda x: (x + 1.) / 2.
-
-
-def bchw_to_st(x):
- return rescale(x.detach().cpu().numpy().transpose(0,2,3,1))
-
-def save_img(xstart, fname):
- I = (xstart.clip(0,1)[0]*255).astype(np.uint8)
- Image.fromarray(I).save(fname)
-
-
-
-def get_interactive_image(resize=False):
- image = st.file_uploader("Input", type=["jpg", "JPEG", "png"])
- if image is not None:
- image = Image.open(image)
- if not image.mode == "RGB":
- image = image.convert("RGB")
- image = np.array(image).astype(np.uint8)
- print("upload image shape: {}".format(image.shape))
- img = Image.fromarray(image)
- if resize:
- img = img.resize((256, 256))
- image = np.array(img)
- return image
-
-
-def single_image_to_torch(x, permute=True):
- assert x is not None, "Please provide an image through the upload function"
- x = np.array(x)
- x = torch.FloatTensor(x/255.*2. - 1.)[None,...]
- if permute:
- x = x.permute(0, 3, 1, 2)
- return x
-
-
-def pad_to_M(x, M):
- hp = math.ceil(x.shape[2]/M)*M-x.shape[2]
- wp = math.ceil(x.shape[3]/M)*M-x.shape[3]
- x = torch.nn.functional.pad(x, (0,wp,0,hp,0,0,0,0))
- return x
-
-@torch.no_grad()
-def run_conditional(model, dsets):
- if len(dsets.datasets) > 1:
- split = st.sidebar.radio("Split", sorted(dsets.datasets.keys()))
- dset = dsets.datasets[split]
- else:
- dset = next(iter(dsets.datasets.values()))
- batch_size = 1
- start_index = st.sidebar.number_input("Example Index (Size: {})".format(len(dset)), value=0,
- min_value=0,
- max_value=len(dset)-batch_size)
- indices = list(range(start_index, start_index+batch_size))
-
- example = default_collate([dset[i] for i in indices])
-
- x = model.get_input("image", example).to(model.device)
-
- cond_key = model.cond_stage_key
- c = model.get_input(cond_key, example).to(model.device)
-
- scale_factor = st.sidebar.slider("Scale Factor", min_value=0.5, max_value=4.0, step=0.25, value=1.00)
- if scale_factor != 1.0:
- x = torch.nn.functional.interpolate(x, scale_factor=scale_factor, mode="bicubic")
- c = torch.nn.functional.interpolate(c, scale_factor=scale_factor, mode="bicubic")
-
- quant_z, z_indices = model.encode_to_z(x)
- quant_c, c_indices = model.encode_to_c(c)
-
- cshape = quant_z.shape
-
- xrec = model.first_stage_model.decode(quant_z)
- st.write("image: {}".format(x.shape))
- st.image(bchw_to_st(x), clamp=True, output_format="PNG")
- st.write("image reconstruction: {}".format(xrec.shape))
- st.image(bchw_to_st(xrec), clamp=True, output_format="PNG")
-
- if cond_key == "segmentation":
- # get image from segmentation mask
- num_classes = c.shape[1]
- c = torch.argmax(c, dim=1, keepdim=True)
- c = torch.nn.functional.one_hot(c, num_classes=num_classes)
- c = c.squeeze(1).permute(0, 3, 1, 2).float()
- c = model.cond_stage_model.to_rgb(c)
-
- st.write(f"{cond_key}: {tuple(c.shape)}")
- st.image(bchw_to_st(c), clamp=True, output_format="PNG")
-
- idx = z_indices
-
- half_sample = st.sidebar.checkbox("Image Completion", value=False)
- if half_sample:
- start = idx.shape[1]//2
- else:
- start = 0
-
- idx[:,start:] = 0
- idx = idx.reshape(cshape[0],cshape[2],cshape[3])
- start_i = start//cshape[3]
- start_j = start %cshape[3]
-
- if not half_sample and quant_z.shape == quant_c.shape:
- st.info("Setting idx to c_indices")
- idx = c_indices.clone().reshape(cshape[0],cshape[2],cshape[3])
-
- cidx = c_indices
- cidx = cidx.reshape(quant_c.shape[0],quant_c.shape[2],quant_c.shape[3])
-
- xstart = model.decode_to_img(idx[:,:cshape[2],:cshape[3]], cshape)
- st.image(bchw_to_st(xstart), clamp=True, output_format="PNG")
-
- temperature = st.number_input("Temperature", value=1.0)
- top_k = st.number_input("Top k", value=100)
- sample = st.checkbox("Sample", value=True)
- update_every = st.number_input("Update every", value=75)
-
- st.text(f"Sampling shape ({cshape[2]},{cshape[3]})")
-
- animate = st.checkbox("animate")
- if animate:
- import imageio
- outvid = "sampling.mp4"
- writer = imageio.get_writer(outvid, fps=25)
- elapsed_t = st.empty()
- info = st.empty()
- st.text("Sampled")
- if st.button("Sample"):
- output = st.empty()
- start_t = time.time()
- for i in range(start_i,cshape[2]-0):
- if i <= 8:
- local_i = i
- elif cshape[2]-i < 8:
- local_i = 16-(cshape[2]-i)
- else:
- local_i = 8
- for j in range(start_j,cshape[3]-0):
- if j <= 8:
- local_j = j
- elif cshape[3]-j < 8:
- local_j = 16-(cshape[3]-j)
- else:
- local_j = 8
-
- i_start = i-local_i
- i_end = i_start+16
- j_start = j-local_j
- j_end = j_start+16
- elapsed_t.text(f"Time: {time.time() - start_t} seconds")
- info.text(f"Step: ({i},{j}) | Local: ({local_i},{local_j}) | Crop: ({i_start}:{i_end},{j_start}:{j_end})")
- patch = idx[:,i_start:i_end,j_start:j_end]
- patch = patch.reshape(patch.shape[0],-1)
- cpatch = cidx[:, i_start:i_end, j_start:j_end]
- cpatch = cpatch.reshape(cpatch.shape[0], -1)
- patch = torch.cat((cpatch, patch), dim=1)
- logits,_ = model.transformer(patch[:,:-1])
- logits = logits[:, -256:, :]
- logits = logits.reshape(cshape[0],16,16,-1)
- logits = logits[:,local_i,local_j,:]
-
- logits = logits/temperature
-
- if top_k is not None:
- logits = model.top_k_logits(logits, top_k)
- # apply softmax to convert to probabilities
- probs = torch.nn.functional.softmax(logits, dim=-1)
- # sample from the distribution or take the most likely
- if sample:
- ix = torch.multinomial(probs, num_samples=1)
- else:
- _, ix = torch.topk(probs, k=1, dim=-1)
- idx[:,i,j] = ix
-
- if (i*cshape[3]+j)%update_every==0:
- xstart = model.decode_to_img(idx[:, :cshape[2], :cshape[3]], cshape,)
-
- xstart = bchw_to_st(xstart)
- output.image(xstart, clamp=True, output_format="PNG")
-
- if animate:
- writer.append_data((xstart[0]*255).clip(0, 255).astype(np.uint8))
-
- xstart = model.decode_to_img(idx[:,:cshape[2],:cshape[3]], cshape)
- xstart = bchw_to_st(xstart)
- output.image(xstart, clamp=True, output_format="PNG")
- #save_img(xstart, "full_res_sample.png")
- if animate:
- writer.close()
- st.video(outvid)
-
-
-def get_parser():
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "-r",
- "--resume",
- type=str,
- nargs="?",
- help="load from logdir or checkpoint in logdir",
- )
- parser.add_argument(
- "-b",
- "--base",
- nargs="*",
- metavar="base_config.yaml",
- help="paths to base configs. Loaded from left-to-right. "
- "Parameters can be overwritten or added with command-line options of the form `--key value`.",
- default=list(),
- )
- parser.add_argument(
- "-c",
- "--config",
- nargs="?",
- metavar="single_config.yaml",
- help="path to single config. If specified, base configs will be ignored "
- "(except for the last one if left unspecified).",
- const=True,
- default="",
- )
- parser.add_argument(
- "--ignore_base_data",
- action="store_true",
- help="Ignore data specification from base configs. Useful if you want "
- "to specify a custom datasets on the command line.",
- )
- return parser
-
-
-def load_model_from_config(config, sd, gpu=True, eval_mode=True):
- if "ckpt_path" in config.params:
- st.warning("Deleting the restore-ckpt path from the config...")
- config.params.ckpt_path = None
- if "downsample_cond_size" in config.params:
- st.warning("Deleting downsample-cond-size from the config and setting factor=0.5 instead...")
- config.params.downsample_cond_size = -1
- config.params["downsample_cond_factor"] = 0.5
- try:
- if "ckpt_path" in config.params.first_stage_config.params:
- config.params.first_stage_config.params.ckpt_path = None
- st.warning("Deleting the first-stage restore-ckpt path from the config...")
- if "ckpt_path" in config.params.cond_stage_config.params:
- config.params.cond_stage_config.params.ckpt_path = None
- st.warning("Deleting the cond-stage restore-ckpt path from the config...")
- except:
- pass
-
- model = instantiate_from_config(config)
- if sd is not None:
- missing, unexpected = model.load_state_dict(sd, strict=False)
- st.info(f"Missing Keys in State Dict: {missing}")
- st.info(f"Unexpected Keys in State Dict: {unexpected}")
- if gpu:
- model.cuda()
- if eval_mode:
- model.eval()
- return {"model": model}
-
-
-def get_data(config):
- # get data
- data = instantiate_from_config(config.data)
- data.prepare_data()
- data.setup()
- return data
-
-
-@st.cache(allow_output_mutation=True, suppress_st_warning=True)
-def load_model_and_dset(config, ckpt, gpu, eval_mode):
- # get data
- dsets = get_data(config) # calls data.config ...
-
- # now load the specified checkpoint
- if ckpt:
- pl_sd = torch.load(ckpt, map_location="cpu")
- global_step = pl_sd["global_step"]
- else:
- pl_sd = {"state_dict": None}
- global_step = None
- model = load_model_from_config(config.model,
- pl_sd["state_dict"],
- gpu=gpu,
- eval_mode=eval_mode)["model"]
- return dsets, model, global_step
-
-
-if __name__ == "__main__":
- sys.path.append(os.getcwd())
-
- parser = get_parser()
-
- opt, unknown = parser.parse_known_args()
-
- ckpt = None
- if opt.resume:
- if not os.path.exists(opt.resume):
- raise ValueError("Cannot find {}".format(opt.resume))
- if os.path.isfile(opt.resume):
- paths = opt.resume.split("/")
- try:
- idx = len(paths)-paths[::-1].index("logs")+1
- except ValueError:
- idx = -2 # take a guess: path/to/logdir/checkpoints/model.ckpt
- logdir = "/".join(paths[:idx])
- ckpt = opt.resume
- else:
- assert os.path.isdir(opt.resume), opt.resume
- logdir = opt.resume.rstrip("/")
- ckpt = os.path.join(logdir, "checkpoints", "last.ckpt")
- print(f"logdir:{logdir}")
- base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*-project.yaml")))
- opt.base = base_configs+opt.base
-
- if opt.config:
- if type(opt.config) == str:
- opt.base = [opt.config]
- else:
- opt.base = [opt.base[-1]]
-
- configs = [OmegaConf.load(cfg) for cfg in opt.base]
- cli = OmegaConf.from_dotlist(unknown)
- if opt.ignore_base_data:
- for config in configs:
- if hasattr(config, "data"): del config["data"]
- config = OmegaConf.merge(*configs, cli)
-
- st.sidebar.text(ckpt)
- gs = st.sidebar.empty()
- gs.text(f"Global step: ?")
- st.sidebar.text("Options")
- #gpu = st.sidebar.checkbox("GPU", value=True)
- gpu = True
- #eval_mode = st.sidebar.checkbox("Eval Mode", value=True)
- eval_mode = True
- #show_config = st.sidebar.checkbox("Show Config", value=False)
- show_config = False
- if show_config:
- st.info("Checkpoint: {}".format(ckpt))
- st.json(OmegaConf.to_container(config))
-
- dsets, model, global_step = load_model_and_dset(config, ckpt, gpu, eval_mode)
- gs.text(f"Global step: {global_step}")
- run_conditional(model, dsets)
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_windows.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_windows.py
deleted file mode 100644
index 10fc0d7e9f398dd550a42c6b8c0637684882ee60..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_windows.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import sys
-from dataclasses import dataclass
-
-
-@dataclass
-class WindowsConsoleFeatures:
- """Windows features available."""
-
- vt: bool = False
- """The console supports VT codes."""
- truecolor: bool = False
- """The console supports truecolor."""
-
-
-try:
- import ctypes
- from ctypes import LibraryLoader
-
- if sys.platform == "win32":
- windll = LibraryLoader(ctypes.WinDLL)
- else:
- windll = None
- raise ImportError("Not windows")
-
- from pip._vendor.rich._win32_console import (
- ENABLE_VIRTUAL_TERMINAL_PROCESSING,
- GetConsoleMode,
- GetStdHandle,
- LegacyWindowsError,
- )
-
-except (AttributeError, ImportError, ValueError):
-
- # Fallback if we can't load the Windows DLL
- def get_windows_console_features() -> WindowsConsoleFeatures:
- features = WindowsConsoleFeatures()
- return features
-
-else:
-
- def get_windows_console_features() -> WindowsConsoleFeatures:
- """Get windows console features.
-
- Returns:
- WindowsConsoleFeatures: An instance of WindowsConsoleFeatures.
- """
- handle = GetStdHandle()
- try:
- console_mode = GetConsoleMode(handle)
- success = True
- except LegacyWindowsError:
- console_mode = 0
- success = False
- vt = bool(success and console_mode & ENABLE_VIRTUAL_TERMINAL_PROCESSING)
- truecolor = False
- if vt:
- win_version = sys.getwindowsversion()
- truecolor = win_version.major > 10 or (
- win_version.major == 10 and win_version.build >= 15063
- )
- features = WindowsConsoleFeatures(vt=vt, truecolor=truecolor)
- return features
-
-
-if __name__ == "__main__":
- import platform
-
- features = get_windows_console_features()
- from pip._vendor.rich import print
-
- print(f'platform="{platform.system()}"')
- print(repr(features))
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tenacity/after.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tenacity/after.py
deleted file mode 100644
index 574c9bcea6e222ea8283a3c8dafbda15a2893fe1..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tenacity/after.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright 2016 Julien Danjou
-# Copyright 2016 Joshua Harlow
-# Copyright 2013-2014 Ray Holder
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import typing
-
-from pip._vendor.tenacity import _utils
-
-if typing.TYPE_CHECKING:
- import logging
-
- from pip._vendor.tenacity import RetryCallState
-
-
-def after_nothing(retry_state: "RetryCallState") -> None:
- """After call strategy that does nothing."""
-
-
-def after_log(
- logger: "logging.Logger",
- log_level: int,
- sec_format: str = "%0.3f",
-) -> typing.Callable[["RetryCallState"], None]:
- """After call strategy that logs to some logger the finished attempt."""
-
- def log_it(retry_state: "RetryCallState") -> None:
- if retry_state.fn is None:
- # NOTE(sileht): can't really happen, but we must please mypy
- fn_name = ""
- else:
- fn_name = _utils.get_callback_name(retry_state.fn)
- logger.log(
- log_level,
- f"Finished call to '{fn_name}' "
- f"after {sec_format % retry_state.seconds_since_start}(s), "
- f"this was the {_utils.to_ordinal(retry_state.attempt_number)} time calling it.",
- )
-
- return log_it
diff --git a/spaces/CVPR/LIVE/thrust/thrust/mr/new.h b/spaces/CVPR/LIVE/thrust/thrust/mr/new.h
deleted file mode 100644
index f8e4fe0212c1ec22f7ee417e6302cb819972c40c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/mr/new.h
+++ /dev/null
@@ -1,88 +0,0 @@
-/*
- * Copyright 2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file new.h
- * \brief Global operator new-based memory resource.
- */
-
-#pragma once
-
-#include
-
-namespace thrust
-{
-namespace mr
-{
-
-/** \addtogroup memory_resources Memory Resources
- * \ingroup memory_management_classes
- * \{
- */
-
-/*! A memory resource that uses global operators new and delete to allocate and deallocate memory. Uses alignment-enabled
- * overloads when available, otherwise uses regular overloads and implements alignment requirements by itself.
- */
-class new_delete_resource THRUST_FINAL : public memory_resource<>
-{
-public:
- void * do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE
- {
-#if defined(__cpp_aligned_new)
- return ::operator new(bytes, std::align_val_t(alignment));
-#else
- // allocate memory for bytes, plus potential alignment correction,
- // plus store of the correction offset
- void * p = ::operator new(bytes + alignment + sizeof(std::size_t));
- std::size_t ptr_int = reinterpret_cast(p);
- // calculate the offset, i.e. how many bytes of correction was necessary
- // to get an aligned pointer
- std::size_t offset = (ptr_int % alignment) ? (alignment - ptr_int % alignment) : 0;
- // calculate the return pointer
- char * ptr = static_cast(p) + offset;
- // store the offset right after the actually returned value
- std::size_t * offset_store = reinterpret_cast(ptr + bytes);
- *offset_store = offset;
- return static_cast(ptr);
-#endif
- }
-
- void do_deallocate(void * p, std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE
- {
-#if defined(__cpp_aligned_new)
-# if defined(__cpp_sized_deallocation)
- ::operator delete(p, bytes, std::align_val_t(alignment));
-# else
- (void)bytes;
- ::operator delete(p, std::align_val_t(alignment));
-# endif
-#else
- (void)alignment;
- char * ptr = static_cast(p);
- // calculate where the offset is stored
- std::size_t * offset = reinterpret_cast(ptr + bytes);
- // calculate the original pointer
- p = static_cast(ptr - *offset);
- ::operator delete(p);
-#endif
- }
-};
-
-/*! \}
- */
-
-} // end mr
-} // end thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/scan_by_key.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/scan_by_key.h
deleted file mode 100644
index 7f6b42d54410703e7ba96123e9ea0655bbc79ef9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/scan_by_key.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include
-
-// this system inherits this algorithm
-#include
-
diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/catalog.py b/spaces/CVPR/regionclip-demo/detectron2/data/catalog.py
deleted file mode 100644
index 45c110c19508f23921b9033cdaf0aa8056f0c125..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/data/catalog.py
+++ /dev/null
@@ -1,236 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import logging
-import types
-from collections import UserDict
-from typing import List
-
-from detectron2.utils.logger import log_first_n
-
-__all__ = ["DatasetCatalog", "MetadataCatalog", "Metadata"]
-
-
-class _DatasetCatalog(UserDict):
- """
- A global dictionary that stores information about the datasets and how to obtain them.
-
- It contains a mapping from strings
- (which are names that identify a dataset, e.g. "coco_2014_train")
- to a function which parses the dataset and returns the samples in the
- format of `list[dict]`.
-
- The returned dicts should be in Detectron2 Dataset format (See DATASETS.md for details)
- if used with the data loader functionalities in `data/build.py,data/detection_transform.py`.
-
- The purpose of having this catalog is to make it easy to choose
- different datasets, by just using the strings in the config.
- """
-
- def register(self, name, func):
- """
- Args:
- name (str): the name that identifies a dataset, e.g. "coco_2014_train".
- func (callable): a callable which takes no arguments and returns a list of dicts.
- It must return the same results if called multiple times.
- """
- assert callable(func), "You must register a function with `DatasetCatalog.register`!"
- assert name not in self, "Dataset '{}' is already registered!".format(name)
- self[name] = func
-
- def get(self, name):
- """
- Call the registered function and return its results.
-
- Args:
- name (str): the name that identifies a dataset, e.g. "coco_2014_train".
-
- Returns:
- list[dict]: dataset annotations.
- """
- try:
- f = self[name]
- except KeyError as e:
- raise KeyError(
- "Dataset '{}' is not registered! Available datasets are: {}".format(
- name, ", ".join(list(self.keys()))
- )
- ) from e
- return f()
-
- def list(self) -> List[str]:
- """
- List all registered datasets.
-
- Returns:
- list[str]
- """
- return list(self.keys())
-
- def remove(self, name):
- """
- Alias of ``pop``.
- """
- self.pop(name)
-
- def __str__(self):
- return "DatasetCatalog(registered datasets: {})".format(", ".join(self.keys()))
-
- __repr__ = __str__
-
-
-DatasetCatalog = _DatasetCatalog()
-DatasetCatalog.__doc__ = (
- _DatasetCatalog.__doc__
- + """
- .. automethod:: detectron2.data.catalog.DatasetCatalog.register
- .. automethod:: detectron2.data.catalog.DatasetCatalog.get
-"""
-)
-
-
-class Metadata(types.SimpleNamespace):
- """
- A class that supports simple attribute setter/getter.
- It is intended for storing metadata of a dataset and make it accessible globally.
-
- Examples:
- ::
- # somewhere when you load the data:
- MetadataCatalog.get("mydataset").thing_classes = ["person", "dog"]
-
- # somewhere when you print statistics or visualize:
- classes = MetadataCatalog.get("mydataset").thing_classes
- """
-
- # the name of the dataset
- # set default to N/A so that `self.name` in the errors will not trigger getattr again
- name: str = "N/A"
-
- _RENAMED = {
- "class_names": "thing_classes",
- "dataset_id_to_contiguous_id": "thing_dataset_id_to_contiguous_id",
- "stuff_class_names": "stuff_classes",
- }
-
- def __getattr__(self, key):
- if key in self._RENAMED:
- log_first_n(
- logging.WARNING,
- "Metadata '{}' was renamed to '{}'!".format(key, self._RENAMED[key]),
- n=10,
- )
- return getattr(self, self._RENAMED[key])
-
- # "name" exists in every metadata
- if len(self.__dict__) > 1:
- raise AttributeError(
- "Attribute '{}' does not exist in the metadata of dataset '{}'. Available "
- "keys are {}.".format(key, self.name, str(self.__dict__.keys()))
- )
- else:
- raise AttributeError(
- f"Attribute '{key}' does not exist in the metadata of dataset '{self.name}': "
- "metadata is empty."
- )
-
- def __setattr__(self, key, val):
- if key in self._RENAMED:
- log_first_n(
- logging.WARNING,
- "Metadata '{}' was renamed to '{}'!".format(key, self._RENAMED[key]),
- n=10,
- )
- setattr(self, self._RENAMED[key], val)
-
- # Ensure that metadata of the same name stays consistent
- try:
- oldval = getattr(self, key)
- assert oldval == val, (
- "Attribute '{}' in the metadata of '{}' cannot be set "
- "to a different value!\n{} != {}".format(key, self.name, oldval, val)
- )
- except AttributeError:
- super().__setattr__(key, val)
-
- def as_dict(self):
- """
- Returns all the metadata as a dict.
- Note that modifications to the returned dict will not reflect on the Metadata object.
- """
- return copy.copy(self.__dict__)
-
- def set(self, **kwargs):
- """
- Set multiple metadata with kwargs.
- """
- for k, v in kwargs.items():
- setattr(self, k, v)
- return self
-
- def get(self, key, default=None):
- """
- Access an attribute and return its value if exists.
- Otherwise return default.
- """
- try:
- return getattr(self, key)
- except AttributeError:
- return default
-
-
-class _MetadataCatalog(UserDict):
- """
- MetadataCatalog is a global dictionary that provides access to
- :class:`Metadata` of a given dataset.
-
- The metadata associated with a certain name is a singleton: once created, the
- metadata will stay alive and will be returned by future calls to ``get(name)``.
-
- It's like global variables, so don't abuse it.
- It's meant for storing knowledge that's constant and shared across the execution
- of the program, e.g.: the class names in COCO.
- """
-
- def get(self, name):
- """
- Args:
- name (str): name of a dataset (e.g. coco_2014_train).
-
- Returns:
- Metadata: The :class:`Metadata` instance associated with this name,
- or create an empty one if none is available.
- """
- assert len(name)
- r = super().get(name, None)
- if r is None:
- r = self[name] = Metadata(name=name)
- return r
-
- def list(self):
- """
- List all registered metadata.
-
- Returns:
- list[str]: keys (names of datasets) of all registered metadata
- """
- return list(self.keys())
-
- def remove(self, name):
- """
- Alias of ``pop``.
- """
- self.pop(name)
-
- def __str__(self):
- return "MetadataCatalog(registered metadata: {})".format(", ".join(self.keys()))
-
- __repr__ = __str__
-
-
-MetadataCatalog = _MetadataCatalog()
-MetadataCatalog.__doc__ = (
- _MetadataCatalog.__doc__
- + """
- .. automethod:: detectron2.data.catalog.MetadataCatalog.get
-"""
-)
diff --git a/spaces/ChandraMohanNayal/AutoGPT/run.sh b/spaces/ChandraMohanNayal/AutoGPT/run.sh
deleted file mode 100644
index edcbc44155b9ca9df83e283fdf976472c13e6492..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/run.sh
+++ /dev/null
@@ -1,9 +0,0 @@
-#!/bin/bash
-python scripts/check_requirements.py requirements.txt
-if [ $? -eq 1 ]
-then
- echo Installing missing packages...
- pip install -r requirements.txt
-fi
-python -m autogpt $@
-read -p "Press any key to continue..."
diff --git a/spaces/ChevyWithAI/rvc-aicover/infer_pack/models.py b/spaces/ChevyWithAI/rvc-aicover/infer_pack/models.py
deleted file mode 100644
index 96165f73644e6fb92d0ffedb4a3c9e1a457cb989..0000000000000000000000000000000000000000
--- a/spaces/ChevyWithAI/rvc-aicover/infer_pack/models.py
+++ /dev/null
@@ -1,982 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of sine-wavefrom (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_thoreshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: this SinGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threhold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if type(sr) == type("strr"):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ): # 这里ds是id,[bs,1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y_lengths, ds
- ): # y是spec不需要了现在
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- z_slice, ids_slice = commons.rand_slice_segments(
- x, y_lengths, self.segment_size
- )
-
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice
-
- def infer(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
- ): # y是spec不需要了现在
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o, o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/fill_head/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/fill_head/__init__.py
deleted file mode 100644
index 40320e1a407ac28e01f598f3e575e100aa42057c..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/fill_head/__init__.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from pathlib import Path
-from typing import List
-
-from pil_utils import BuildImage
-
-from meme_generator import MemeArgsModel, add_meme
-from meme_generator.exception import TextOverLength
-from meme_generator.utils import make_jpg_or_gif
-
-img_dir = Path(__file__).parent / "images"
-
-
-def fill_head(images: List[BuildImage], texts: List[str], args: MemeArgsModel):
- name = texts[0] if texts else (args.user_infos[0].name if args.user_infos else "它")
- text = f"满脑子都是{name}"
- frame = BuildImage.open(img_dir / "0.jpg")
- try:
- frame.draw_text(
- (20, 458, frame.width - 20, 550), text, max_fontsize=65, min_fontsize=30
- )
- except:
- raise TextOverLength(name)
-
- def make(img: BuildImage) -> BuildImage:
- img = img.convert("RGBA").resize((210, 170), keep_ratio=True, inside=True)
- return frame.copy().paste(img, (150, 2), alpha=True)
-
- return make_jpg_or_gif(images[0], make)
-
-
-add_meme(
- "fill_head",
- fill_head,
- min_images=1,
- max_images=1,
- min_texts=0,
- max_texts=1,
- keywords=["满脑子"],
- patterns=[r"满脑子都是(\S+)"],
-)
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/funny_mirror/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/funny_mirror/__init__.py
deleted file mode 100644
index 5f9f425588845831be7dd284e2f3a1beb4af4f69..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/funny_mirror/__init__.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from typing import List
-
-from PIL.Image import Image as IMG
-from pil_utils import BuildImage
-
-from meme_generator import add_meme
-from meme_generator.utils import save_gif
-
-
-def funny_mirror(images: List[BuildImage], texts, args):
- img = images[0].convert("RGBA").square().resize((500, 500))
- frames: List[IMG] = [img.image]
- coeffs = [0.01, 0.03, 0.05, 0.08, 0.12, 0.17, 0.23, 0.3, 0.4, 0.6]
- borders = [25, 52, 67, 83, 97, 108, 118, 128, 138, 148]
- for i in range(10):
- new_size = 500 - borders[i] * 2
- new_img = img.distort((coeffs[i], 0, 0, 0)).resize_canvas((new_size, new_size))
- frames.append(new_img.resize((500, 500)).image)
- frames.extend(frames[::-1])
- return save_gif(frames, 0.05)
-
-
-add_meme("funny_mirror", funny_mirror, min_images=1, max_images=1, keywords=["哈哈镜"])
diff --git a/spaces/CjangCjengh/Shanghainese-TTS/transforms.py b/spaces/CjangCjengh/Shanghainese-TTS/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/CjangCjengh/Shanghainese-TTS/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
diff --git a/spaces/CobaltZvc/HyperBot/app.py b/spaces/CobaltZvc/HyperBot/app.py
deleted file mode 100644
index 788212c915e0d36bb2e4722bb43cab5d8186232a..0000000000000000000000000000000000000000
--- a/spaces/CobaltZvc/HyperBot/app.py
+++ /dev/null
@@ -1,1026 +0,0 @@
-import os
-import openai
-import wget
-import streamlit as st
-from PIL import Image
-from serpapi import GoogleSearch
-import torch
-from diffusers import StableDiffusionPipeline
-from bokeh.models.widgets import Button
-from bokeh.models.widgets.buttons import Button
-from bokeh.models import CustomJS
-from streamlit_bokeh_events import streamlit_bokeh_events
-import base64
-from streamlit_player import st_player
-from pytube import YouTube
-from pytube import Search
-import io
-import warnings
-from PIL import Image
-from stability_sdk import client
-import stability_sdk.interfaces.gooseai.generation.generation_pb2 as generation
-import datetime
-from google.oauth2 import service_account
-from googleapiclient.discovery import build
-import wget
-import urllib.request
-import csv
-
-
-def save_uploadedfile(uploadedfile):
- with open(uploadedfile.name,"wb") as f:
- f.write(uploadedfile.getbuffer())
-
-stability_api = client.StabilityInference(
- key=st.secrets["STABILITY_KEY"], #os.environ("STABILITY_KEY"), # key=os.environ['STABILITY_KEY'], # API Key reference.
- verbose=True, # Print debug messages.
- engine="stable-diffusion-v1-5", # Set the engine to use for generation.
- # Available engines: stable-diffusion-v1 stable-diffusion-v1-5 stable-diffusion-512-v2-0 stable-diffusion-768-v2-0
- # stable-diffusion-512-v2-1 stable-diffusion-768-v2-1 stable-inpainting-v1-0 stable-inpainting-512-v2-0
-)
-
-header = ["sl. no.", "Input Prompt", "Output", "Date_time"]
-
-def csv_logs(mytext, result, date_time):
- with open("logs.csv", "r") as file:
- sl_no = sum(1 for _ in csv.reader(file))
-
- with open("logs.csv", "a", newline="") as file:
- writer = csv.writer(file)
- writer.writerow([sl_no, mytext, result, date_time])
-
-def search_internet(question):
- try:
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- search = GoogleSearch(params)
- results = search.get_dict()
- organic_results = results["organic_results"]
- st.text("Key 0 used")
-
-
- snippets = ""
- counter = 1
- for item in organic_results:
- snippets += str(counter) + ". " + item.get("snippet", "") + '\n' + item['about_this_result']['source']['source_info_link'] + '\n'
- counter += 1
-
- # snippets
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''following are snippets from google search with these as knowledge base only answer questions and print reference link as well followed by answer. \n\n {snippets}\n\n question-{question}\n\nAnswer-''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0)
-
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- string_temp = response.choices[0].text
- csv_logs(question, string_temp, date_time)
- st.write(string_temp)
- st.write(snippets)
- except:
- try:
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API1"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API1"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- search = GoogleSearch(params)
- results = search.get_dict()
- organic_results = results["organic_results"]
- st.text("Key 1 used")
-
-
- snippets = ""
- counter = 1
- for item in organic_results:
- snippets += str(counter) + ". " + item.get("snippet", "") + '\n' + item['about_this_result']['source']['source_info_link'] + '\n'
- counter += 1
-
- # snippets
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''following are snippets from google search with these as knowledge base only answer questions and print reference link as well followed by answer. \n\n {snippets}\n\n question-{question}\n\nAnswer-''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0)
-
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- string_temp = response.choices[0].text
- csv_logs(question, string_temp, date_time)
- st.write(string_temp)
- st.write(snippets)
- except:
- try:
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API2"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API2"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- search = GoogleSearch(params)
- results = search.get_dict()
- organic_results = results["organic_results"]
- st.text("Key 2 used")
-
-
- snippets = ""
- counter = 1
- for item in organic_results:
- snippets += str(counter) + ". " + item.get("snippet", "") + '\n' + item['about_this_result']['source']['source_info_link'] + '\n'
- counter += 1
-
- # snippets
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''following are snippets from google search with these as knowledge base only answer questions and print reference link as well followed by answer. \n\n {snippets}\n\n question-{question}\n\nAnswer-''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0)
-
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- string_temp = response.choices[0].text
- csv_logs(question, string_temp, date_time)
- st.write(string_temp)
- st.write(snippets)
- except:
- try:
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API3"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API3"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- search = GoogleSearch(params)
- results = search.get_dict()
- organic_results = results["organic_results"]
- st.text("Key 3 used")
-
-
- snippets = ""
- counter = 1
- for item in organic_results:
- snippets += str(counter) + ". " + item.get("snippet", "") + '\n' + item['about_this_result']['source']['source_info_link'] + '\n'
- counter += 1
-
- # snippets
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''following are snippets from google search with these as knowledge base only answer questions and print reference link as well followed by answer. \n\n {snippets}\n\n question-{question}\n\nAnswer-''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0)
-
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- string_temp = response.choices[0].text
- csv_logs(question, string_temp, date_time)
- st.write(string_temp)
- st.write(snippets)
- except:
- try:
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API4"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API4"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- search = GoogleSearch(params)
- results = search.get_dict()
- organic_results = results["organic_results"]
- st.text("Key 4 used")
-
-
- snippets = ""
- counter = 1
- for item in organic_results:
- snippets += str(counter) + ". " + item.get("snippet", "") + '\n' + item['about_this_result']['source']['source_info_link'] + '\n'
- counter += 1
-
- # snippets
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''following are snippets from google search with these as knowledge base only answer questions and print reference link as well followed by answer. \n\n {snippets}\n\n question-{question}\n\nAnswer-''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0)
-
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- string_temp = response.choices[0].text
- csv_logs(question, string_temp, date_time)
- st.write(string_temp)
- st.write(snippets)
- except:
- try:
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API5"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API5"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- search = GoogleSearch(params)
- results = search.get_dict()
- organic_results = results["organic_results"]
- st.text("Key 5 used")
-
-
- snippets = ""
- counter = 1
- for item in organic_results:
- snippets += str(counter) + ". " + item.get("snippet", "") + '\n' + item['about_this_result']['source']['source_info_link'] + '\n'
- counter += 1
-
- # snippets
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''following are snippets from google search with these as knowledge base only answer questions and print reference link as well followed by answer. \n\n {snippets}\n\n question-{question}\n\nAnswer-''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0)
-
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- string_temp = response.choices[0].text
- csv_logs(question, string_temp, date_time)
- st.write(string_temp)
- st.write(snippets)
- except:
- try:
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API6"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API6"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- search = GoogleSearch(params)
- results = search.get_dict()
- organic_results = results["organic_results"]
- st.text("Key 6 used")
-
-
- snippets = ""
- counter = 1
- for item in organic_results:
- snippets += str(counter) + ". " + item.get("snippet", "") + '\n' + item['about_this_result']['source']['source_info_link'] + '\n'
- counter += 1
-
- # snippets
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''following are snippets from google search with these as knowledge base only answer questions and print reference link as well followed by answer. \n\n {snippets}\n\n question-{question}\n\nAnswer-''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0)
-
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- string_temp = response.choices[0].text
- csv_logs(question, string_temp, date_time)
- st.write(string_temp)
- st.write(snippets)
- except:
- try:
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API7"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API7"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- search = GoogleSearch(params)
- results = search.get_dict()
- organic_results = results["organic_results"]
- st.text("Key 7 used")
-
-
- snippets = ""
- counter = 1
- for item in organic_results:
- snippets += str(counter) + ". " + item.get("snippet", "") + '\n' + item['about_this_result']['source']['source_info_link'] + '\n'
- counter += 1
-
- # snippets
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''following are snippets from google search with these as knowledge base only answer questions and print reference link as well followed by answer. \n\n {snippets}\n\n question-{question}\n\nAnswer-''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0)
-
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- string_temp = response.choices[0].text
- csv_logs(question, string_temp, date_time)
- st.write(string_temp)
- st.write(snippets)
- except:
- try:
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API8"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API8"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- search = GoogleSearch(params)
- results = search.get_dict()
- organic_results = results["organic_results"]
- st.text("Key 8 used")
-
-
- snippets = ""
- counter = 1
- for item in organic_results:
- snippets += str(counter) + ". " + item.get("snippet", "") + '\n' + item['about_this_result']['source']['source_info_link'] + '\n'
- counter += 1
-
- # snippets
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''following are snippets from google search with these as knowledge base only answer questions and print reference link as well followed by answer. \n\n {snippets}\n\n question-{question}\n\nAnswer-''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0)
-
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- string_temp = response.choices[0].text
- csv_logs(question, string_temp, date_time)
- st.write(string_temp)
- st.write(snippets)
- except:
- try:
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API9"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API9"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- search = GoogleSearch(params)
- results = search.get_dict()
- organic_results = results["organic_results"]
- st.text("Key 9 used")
-
-
- snippets = ""
- counter = 1
- for item in organic_results:
- snippets += str(counter) + ". " + item.get("snippet", "") + '\n' + item['about_this_result']['source']['source_info_link'] + '\n'
- counter += 1
-
- # snippets
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''following are snippets from google search with these as knowledge base only answer questions and print reference link as well followed by answer. \n\n {snippets}\n\n question-{question}\n\nAnswer-''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0)
-
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- string_temp = response.choices[0].text
- csv_logs(question, string_temp, date_time)
- st.write(string_temp)
- st.write(snippets)
- except:
- try:
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API10"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- params = {
- "q": question,
- "location": "Bengaluru, Karnataka, India",
- "hl": "hi",
- "gl": "in",
- "google_domain": "google.co.in",
- # "api_key": ""
- "api_key": st.secrets["GOOGLE_API10"] #os.environ("GOOGLE_API") #os.environ['GOOGLE_API']
- }
-
- search = GoogleSearch(params)
- results = search.get_dict()
- organic_results = results["organic_results"]
- st.text("Key 10 used")
-
-
- snippets = ""
- counter = 1
- for item in organic_results:
- snippets += str(counter) + ". " + item.get("snippet", "") + '\n' + item['about_this_result']['source']['source_info_link'] + '\n'
- counter += 1
-
- # snippets
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''following are snippets from google search with these as knowledge base only answer questions and print reference link as well followed by answer. \n\n {snippets}\n\n question-{question}\n\nAnswer-''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0)
-
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- string_temp = response.choices[0].text
- csv_logs(question, string_temp, date_time)
- st.write(string_temp)
- st.write(snippets)
- except:
- pass
-
-
-
-openai.api_key = st.secrets["OPENAI_KEY"] #os.environ("OPENAI_KEY") #os.environ['OPENAI_KEY']
-# date_time = str(datetime.now())
-
-openai.api_key = st.secrets["OPENAI_KEY"]
-
-def openai_response(PROMPT):
- response = openai.Image.create(
- prompt=PROMPT,
- n=1,
- size="256x256",
-)
- return response["data"][0]["url"]
-
-st.title("Hi! :red[HyperBot] here!!🤖⭐️")
-st.title("Go on ask me anything!!")
-
-st.write('''
-⭐️ *HyperBot is your virtual assistant powered by Whisper /
-chatgpt / internet / Dall-E / OpenAI embeddings - the perfect
-companion for you. With HyperBot, you can ask anything you ask
-internet everyday . Get answers to questions about the weather,
-stocks 📈, news📰, and more! Plus, you can also generate 🖌️
-paintings, drawings, abstract art 🎨, play music 🎵 or videos,
-create tweets 🐦 and posts 📝, and compose emails 📧 - all with
-the help of HyperBot!* 🤖 ✨
-''')
-
-st.text('''You can ask me:
-1. All the things you ask ChatGPT.
-2. To generate paintings, drawings, abstract art.
-3. Music or Videos
-4. Weather
-5. Stocks
-6. Current Affairs and News.
-7. Create or compose tweets or Linkedin posts or email.''')
-
-Input_type = st.radio(
- "**Input type:**",
- ('TEXT', 'SPEECH')
- )
-
-if Input_type == 'TEXT':
- mytext = st.text_input('**Go on! Ask me anything:**')
- if st.button("SUBMIT"):
- question=mytext
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''Your name is HyperBot and knowledge cutoff date is 2021-09, and you are not aware of any events after that time. if the
- Answer to following questions is not from your knowledge base or in case of queries like date, time, weather
- updates / stock updates / current affairs / news or people which requires you to have internet connection then print i don't have access to internet to answer your question,
- if question is related to image or painting or drawing or diagram generation then print ipython type output function gen_draw("detailed prompt of image to be generated")
- if the question is related to playing a song or video or music of a singer then print ipython type output function vid_tube("relevent search query")
- if the question is related to operating home appliances then print ipython type output function home_app(" action(ON/Off),appliance(TV,Geaser,Fridge,Lights,fans,AC)") .
- if question is realted to sending mail or sms then print ipython type output function messenger_app(" message of us ,messenger(email,sms)")
- \nQuestion-{question}
- \nAnswer -''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0
- )
- string_temp=response.choices[0].text
-
- if ("gen_draw" in string_temp):
- try:
- try:
- wget.download(openai_response(prompt))
- img2 = Image.open(wget.download(openai_response(prompt)))
- img2.show()
- rx = 'Image returned'
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- csv_logs(mytext, rx, date_time)
- except:
- urllib.request.urlretrieve(openai_response(prompt),"img_ret.png")
- img = Image.open("img_ret.png")
- img.show()
- rx = 'Image returned'
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- csv_logs(mytext, rx, date_time)
- except:
- # Set up our initial generation parameters.
- answers = stability_api.generate(
- prompt = mytext,
- seed=992446758, # If a seed is provided, the resulting generated image will be deterministic.
- # What this means is that as long as all generation parameters remain the same, you can always recall the same image simply by generating it again.
- # Note: This isn't quite the case for Clip Guided generations, which we'll tackle in a future example notebook.
- steps=30, # Amount of inference steps performed on image generation. Defaults to 30.
- cfg_scale=8.0, # Influences how strongly your generation is guided to match your prompt.
- # Setting this value higher increases the strength in which it tries to match your prompt.
- # Defaults to 7.0 if not specified.
- width=512, # Generation width, defaults to 512 if not included.
- height=512, # Generation height, defaults to 512 if not included.
- samples=1, # Number of images to generate, defaults to 1 if not included.
- sampler=generation.SAMPLER_K_DPMPP_2M # Choose which sampler we want to denoise our generation with.
- # Defaults to k_dpmpp_2m if not specified. Clip Guidance only supports ancestral samplers.
- # (Available Samplers: ddim, plms, k_euler, k_euler_ancestral, k_heun, k_dpm_2, k_dpm_2_ancestral, k_dpmpp_2s_ancestral, k_lms, k_dpmpp_2m)
- )
-
- for resp in answers:
- for artifact in resp.artifacts:
- if artifact.finish_reason == generation.FILTER:
- warnings.warn(
- "Your request activated the API's safety filters and could not be processed."
- "Please modify the prompt and try again.")
- st.warning("Issue with image generation")
- if artifact.type == generation.ARTIFACT_IMAGE:
- img = Image.open(io.BytesIO(artifact.binary))
- st.image(img)
- img.save(str(artifact.seed)+ ".png") # Save our generated images with their seed number as the filename.
- rx = 'Image returned'
- # g_sheet_log(mytext, rx)
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- csv_logs(mytext, rx, date_time)
-
-
- elif ("vid_tube" in string_temp):
- s = Search(mytext)
- search_res = s.results
- first_vid = search_res[0]
- print(first_vid)
- string = str(first_vid)
- video_id = string[string.index('=') + 1:-1]
- # print(video_id)
- YoutubeURL = "https://www.youtube.com/watch?v="
- OurURL = YoutubeURL + video_id
- st.write(OurURL)
- st_player(OurURL)
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- ry = 'Youtube link and video returned'
- # g_sheet_log(mytext, ry)
- csv_logs(mytext, ry, date_time)
-
- elif ("don't" in string_temp or "internet" in string_temp):
- st.write('searching internet ')
- search_internet(question)
- # rz = 'Internet result returned'
- # g_sheet_log(mytext, string_temp)
- # csv_logs(mytext, rz, date_time)
-
- else:
- st.write(string_temp)
- # g_sheet_log(mytext, string_temp)
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- csv_logs(mytext, string_temp, date_time)
-
-elif Input_type == 'SPEECH':
- option_speech = st.selectbox(
- 'Choose from below: (Options for Transcription)',
- ('Use Microphone', 'OpenAI Whisper (Upload audio file)')
- )
-
- if option_speech == 'Use Microphone':
- stt_button = Button(label="Speak", width=100)
- stt_button.js_on_event("button_click", CustomJS(code="""
- var recognition = new webkitSpeechRecognition();
- recognition.continuous = true;
- recognition.interimResults = true;
-
- recognition.onresult = function (e) {
- var value = "";
- for (var i = e.resultIndex; i < e.results.length; ++i) {
- if (e.results[i].isFinal) {
- value += e.results[i][0].transcript;
- }
- }
- if ( value != "") {
- document.dispatchEvent(new CustomEvent("GET_TEXT", {detail: value}));
- }
- }
- recognition.start();
- """))
-
- result = streamlit_bokeh_events(
- stt_button,
- events="GET_TEXT",
- key="listen",
- refresh_on_update=False,
- override_height=75,
- debounce_time=0)
-
- if result:
- if "GET_TEXT" in result:
- question = result.get("GET_TEXT")
- st.text(question)
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''Your name is HyperBot and knowledge cutoff date is 2021-09, and you are not aware of any events after that time. if the
- Answer to following questions is not from your knowledge base or in case of queries like date, time, weather
- updates / stock updates / current affairs / news or people which requires you to have internet connection then print i don't have access to internet to answer your question,
- if question is related to image or painting or drawing or diagram generation then print ipython type output function gen_draw("detailed prompt of image to be generated")
- if the question is related to playing a song or video or music of a singer then print ipython type output function vid_tube("relevent search query")
- if the question is related to operating home appliances then print ipython type output function home_app(" action(ON/Off),appliance(TV,Geaser,Fridge,Lights,fans,AC)") .
- if question is realted to sending mail or sms then print ipython type output function messenger_app(" message of us ,messenger(email,sms)")
- \nQuestion-{question}
- \nAnswer -''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0
- )
- string_temp=response.choices[0].text
-
- if ("gen_draw" in string_temp):
- try:
- try:
- wget.download(openai_response(prompt))
- img2 = Image.open(wget.download(openai_response(prompt)))
- img2.show()
- rx = 'Image returned'
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- csv_logs(question, rx, date_time)
- except:
- urllib.request.urlretrieve(openai_response(prompt),"img_ret.png")
- img = Image.open("img_ret.png")
- img.show()
- rx = 'Image returned'
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- csv_logs(question, rx, date_time)
- except:
- # Set up our initial generation parameters.
- answers = stability_api.generate(
- prompt = mytext,
- seed=992446758, # If a seed is provided, the resulting generated image will be deterministic.
- # What this means is that as long as all generation parameters remain the same, you can always recall the same image simply by generating it again.
- # Note: This isn't quite the case for Clip Guided generations, which we'll tackle in a future example notebook.
- steps=30, # Amount of inference steps performed on image generation. Defaults to 30.
- cfg_scale=8.0, # Influences how strongly your generation is guided to match your prompt.
- # Setting this value higher increases the strength in which it tries to match your prompt.
- # Defaults to 7.0 if not specified.
- width=512, # Generation width, defaults to 512 if not included.
- height=512, # Generation height, defaults to 512 if not included.
- samples=1, # Number of images to generate, defaults to 1 if not included.
- sampler=generation.SAMPLER_K_DPMPP_2M # Choose which sampler we want to denoise our generation with.
- # Defaults to k_dpmpp_2m if not specified. Clip Guidance only supports ancestral samplers.
- # (Available Samplers: ddim, plms, k_euler, k_euler_ancestral, k_heun, k_dpm_2, k_dpm_2_ancestral, k_dpmpp_2s_ancestral, k_lms, k_dpmpp_2m)
- )
-
- for resp in answers:
- for artifact in resp.artifacts:
- if artifact.finish_reason == generation.FILTER:
- warnings.warn(
- "Your request activated the API's safety filters and could not be processed."
- "Please modify the prompt and try again.")
- if artifact.type == generation.ARTIFACT_IMAGE:
- img = Image.open(io.BytesIO(artifact.binary))
- st.image(img)
- img.save(str(artifact.seed)+ ".png") # Save our generated images with their seed number as the filename.
- rx = 'Image returned'
- # g_sheet_log(mytext, rx)
- csv_logs(question, rx, date_time)
-
- elif ("vid_tube" in string_temp):
- s = Search(question)
- search_res = s.results
- first_vid = search_res[0]
- print(first_vid)
- string = str(first_vid)
- video_id = string[string.index('=') + 1:-1]
- # print(video_id)
- YoutubeURL = "https://www.youtube.com/watch?v="
- OurURL = YoutubeURL + video_id
- st.write(OurURL)
- st_player(OurURL)
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- ry = 'Youtube link and video returned'
- # g_sheet_log(mytext, ry)
- csv_logs(question, ry, date_time)
-
-
- elif ("don't" in string_temp or "internet" in string_temp ):
- st.write('*searching internet*')
- search_internet(question)
- else:
- st.write(string_temp)
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- csv_logs(question, string_temp, date_time)
-
-
- elif option_speech == 'OpenAI Whisper (Upload audio file)':
- audio_file = st.file_uploader("Upload Audio file",type=['wav', 'mp3'])
- if audio_file is not None:
- # file = open(audio_file, "rb")
- st.audio(audio_file)
- transcription = openai.Audio.transcribe("whisper-1", audio_file)
- st.write(transcription["text"])
- result = transcription["text"]
- question = result
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f'''Your name is HyperBot and knowledge cutoff date is 2021-09, and you are not aware of any events after that time. if the
- Answer to following questions is not from your knowledge base or in case of queries like date, time, weather
- updates / stock updates / current affairs / news or people which requires you to have internet connection then print i don't have access to internet to answer your question,
- if question is related to image or painting or drawing or diagram generation then print ipython type output function gen_draw("detailed prompt of image to be generated")
- if the question is related to playing a song or video or music of a singer then print ipython type output function vid_tube("relevent search query")
- if the question is related to operating home appliances then print ipython type output function home_app(" action(ON/Off),appliance(TV,Geaser,Fridge,Lights,fans,AC)") .
- if question is realted to sending mail or sms then print ipython type output function messenger_app(" message of us ,messenger(email,sms)")
- \nQuestion-{question}
- \nAnswer -''',
- temperature=0.49,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0
- )
-
- string_temp=response.choices[0].text
-
- if ("gen_draw" in string_temp):
- try:
- try:
- wget.download(openai_response(prompt))
- img2 = Image.open(wget.download(openai_response(prompt)))
- img2.show()
- rx = 'Image returned'
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- csv_logs(question, rx, date_time)
- except:
- urllib.request.urlretrieve(openai_response(prompt),"img_ret.png")
- img = Image.open("img_ret.png")
- img.show()
- rx = 'Image returned'
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- csv_logs(question, rx, date_time)
- except:
- # Set up our initial generation parameters.
- answers = stability_api.generate(
- prompt = mytext,
- seed=992446758, # If a seed is provided, the resulting generated image will be deterministic.
- # What this means is that as long as all generation parameters remain the same, you can always recall the same image simply by generating it again.
- # Note: This isn't quite the case for Clip Guided generations, which we'll tackle in a future example notebook.
- steps=30, # Amount of inference steps performed on image generation. Defaults to 30.
- cfg_scale=8.0, # Influences how strongly your generation is guided to match your prompt.
- # Setting this value higher increases the strength in which it tries to match your prompt.
- # Defaults to 7.0 if not specified.
- width=512, # Generation width, defaults to 512 if not included.
- height=512, # Generation height, defaults to 512 if not included.
- samples=1, # Number of images to generate, defaults to 1 if not included.
- sampler=generation.SAMPLER_K_DPMPP_2M # Choose which sampler we want to denoise our generation with.
- # Defaults to k_dpmpp_2m if not specified. Clip Guidance only supports ancestral samplers.
- # (Available Samplers: ddim, plms, k_euler, k_euler_ancestral, k_heun, k_dpm_2, k_dpm_2_ancestral, k_dpmpp_2s_ancestral, k_lms, k_dpmpp_2m)
- )
-
- for resp in answers:
- for artifact in resp.artifacts:
- if artifact.finish_reason == generation.FILTER:
- warnings.warn(
- "Your request activated the API's safety filters and could not be processed."
- "Please modify the prompt and try again.")
- if artifact.type == generation.ARTIFACT_IMAGE:
- img = Image.open(io.BytesIO(artifact.binary))
- st.image(img)
- img.save(str(artifact.seed)+ ".png") # Save our generated images with their seed number as the filename.
- rx = 'Image returned'
- # g_sheet_log(mytext, rx)
- csv_logs(question, rx, date_time)
-
-
- elif ("vid_tube" in string_temp):
- s = Search(question)
- search_res = s.results
- first_vid = search_res[0]
- print(first_vid)
- string = str(first_vid)
- video_id = string[string.index('=') + 1:-1]
- # print(video_id)
- YoutubeURL = "https://www.youtube.com/watch?v="
- OurURL = YoutubeURL + video_id
- st.write(OurURL)
- st_player(OurURL)
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- ry = 'Youtube link and video returned'
- # g_sheet_log(mytext, ry)
- csv_logs(question, ry, date_time)
-
- elif ("don't" in string_temp or "internet" in string_temp ):
- st.write('*searching internet*')
- search_internet(question)
- else:
- st.write(string_temp)
- now = datetime.datetime.now()
- date_time = now.strftime("%Y-%m-%d %H:%M:%S")
- csv_logs(question, string_temp, date_time)
-
-else:
- pass
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/upload_button.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/upload_button.py
deleted file mode 100644
index fb75d5a3723fa5247ae864114a355b60c9fb870d..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/upload_button.py
+++ /dev/null
@@ -1,211 +0,0 @@
-"""gr.UploadButton() component."""
-
-from __future__ import annotations
-
-import tempfile
-import warnings
-from typing import Any, Callable, Literal
-
-from gradio_client import utils as client_utils
-from gradio_client.documentation import document, set_documentation_group
-from gradio_client.serializing import FileSerializable
-
-from gradio import utils
-from gradio.components.base import Component, IOComponent, _Keywords
-from gradio.deprecation import warn_deprecation, warn_style_method_deprecation
-from gradio.events import Clickable, Uploadable
-
-set_documentation_group("component")
-
-
-@document()
-class UploadButton(Clickable, Uploadable, IOComponent, FileSerializable):
- """
- Used to create an upload button, when cicked allows a user to upload files that satisfy the specified file type or generic files (if file_type not set).
- Preprocessing: passes the uploaded file as a {file-object} or {List[file-object]} depending on `file_count` (or a {bytes}/{List{bytes}} depending on `type`)
- Postprocessing: expects function to return a {str} path to a file, or {List[str]} consisting of paths to files.
- Examples-format: a {str} path to a local file that populates the component.
- Demos: upload_button
- """
-
- def __init__(
- self,
- label: str = "Upload a File",
- value: str | list[str] | Callable | None = None,
- *,
- variant: Literal["primary", "secondary", "stop"] = "secondary",
- visible: bool = True,
- size: Literal["sm", "lg"] | None = None,
- scale: int | None = None,
- min_width: int | None = None,
- interactive: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- type: Literal["file", "bytes"] = "file",
- file_count: Literal["single", "multiple", "directory"] = "single",
- file_types: list[str] | None = None,
- **kwargs,
- ):
- """
- Parameters:
- label: Text to display on the button. Defaults to "Upload a File".
- value: File or list of files to upload by default.
- variant: 'primary' for main call-to-action, 'secondary' for a more subdued style, 'stop' for a stop button.
- visible: If False, component will be hidden.
- size: Size of the button. Can be "sm" or "lg".
- scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
- min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
- interactive: If False, the UploadButton will be in a disabled state.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
- type: Type of value to be returned by component. "file" returns a temporary file object with the same base name as the uploaded file, whose full path can be retrieved by file_obj.name, "binary" returns an bytes object.
- file_count: if single, allows user to upload one file. If "multiple", user uploads multiple files. If "directory", user uploads all files in selected directory. Return type will be list for each file in case of "multiple" or "directory".
- file_types: List of type of files to be uploaded. "file" allows any file to be uploaded, "image" allows only image files to be uploaded, "audio" allows only audio files to be uploaded, "video" allows only video files to be uploaded, "text" allows only text files to be uploaded.
- """
- self.type = type
- self.file_count = file_count
- if file_count == "directory" and file_types is not None:
- warnings.warn(
- "The `file_types` parameter is ignored when `file_count` is 'directory'."
- )
- if file_types is not None and not isinstance(file_types, list):
- raise ValueError(
- f"Parameter file_types must be a list. Received {file_types.__class__.__name__}"
- )
- self.size = size
- self.file_types = file_types
- self.label = label
- self.variant = variant
- IOComponent.__init__(
- self,
- label=label,
- visible=visible,
- elem_id=elem_id,
- elem_classes=elem_classes,
- value=value,
- scale=scale,
- min_width=min_width,
- interactive=interactive,
- **kwargs,
- )
-
- def get_config(self):
- return {
- "label": self.label,
- "value": self.value,
- "size": self.size,
- "file_count": self.file_count,
- "file_types": self.file_types,
- "scale": self.scale,
- "min_width": self.min_width,
- "variant": self.variant,
- "interactive": self.interactive,
- **Component.get_config(self),
- }
-
- @staticmethod
- def update(
- value: str
- | list[str]
- | Literal[_Keywords.NO_VALUE]
- | None = _Keywords.NO_VALUE,
- size: Literal["sm", "lg"] | None = None,
- variant: Literal["primary", "secondary", "stop"] | None = None,
- interactive: bool | None = None,
- visible: bool | None = None,
- scale: int | None = None,
- min_width: int | None = None,
- ):
- return {
- "variant": variant,
- "interactive": interactive,
- "size": size,
- "visible": visible,
- "value": value,
- "scale": scale,
- "min_width": min_width,
- "__type__": "update",
- }
-
- def preprocess(
- self, x: list[dict[str, Any]] | None
- ) -> (
- bytes
- | tempfile._TemporaryFileWrapper
- | list[bytes | tempfile._TemporaryFileWrapper]
- | None
- ):
- """
- Parameters:
- x: List of JSON objects with filename as 'name' property and base64 data as 'data' property
- Returns:
- File objects in requested format
- """
- if x is None:
- return None
-
- def process_single_file(f) -> bytes | tempfile._TemporaryFileWrapper:
- file_name, data, is_file = (
- f["name"],
- f["data"],
- f.get("is_file", False),
- )
- if self.type == "file":
- if is_file:
- path = self.make_temp_copy_if_needed(file_name)
- else:
- data, _ = client_utils.decode_base64_to_binary(data)
- path = self.file_bytes_to_file(
- data, dir=self.DEFAULT_TEMP_DIR, file_name=file_name
- )
- path = str(utils.abspath(path))
- self.temp_files.add(path)
- file = tempfile.NamedTemporaryFile(
- delete=False, dir=self.DEFAULT_TEMP_DIR
- )
- file.name = path
- file.orig_name = file_name # type: ignore
- return file
- elif self.type == "bytes":
- if is_file:
- with open(file_name, "rb") as file_data:
- return file_data.read()
- return client_utils.decode_base64_to_binary(data)[0]
- else:
- raise ValueError(
- "Unknown type: "
- + str(self.type)
- + ". Please choose from: 'file', 'bytes'."
- )
-
- if self.file_count == "single":
- if isinstance(x, list):
- return process_single_file(x[0])
- else:
- return process_single_file(x)
- else:
- if isinstance(x, list):
- return [process_single_file(f) for f in x]
- else:
- return process_single_file(x)
-
- def style(
- self,
- *,
- full_width: bool | None = None,
- size: Literal["sm", "lg"] | None = None,
- **kwargs,
- ):
- """
- This method is deprecated. Please set these arguments in the constructor instead.
- """
- warn_style_method_deprecation()
- if full_width is not None:
- warn_deprecation(
- "Use `scale` in place of full_width in the constructor. "
- "scale=1 will make the button expand, whereas 0 will not."
- )
- self.scale = 1 if full_width else None
- if size is not None:
- self.size = size
- return self
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/ModifyUpload-77b0d4b2.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/ModifyUpload-77b0d4b2.css
deleted file mode 100644
index c78d71f8b6eaf75f8134375ed017f1c03b6edf1a..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/ModifyUpload-77b0d4b2.css
+++ /dev/null
@@ -1 +0,0 @@
-div.svelte-116rqfv{cursor:pointer;width:var(--size-full);height:var(--size-full)}.center.svelte-116rqfv{text-align:center}.flex.svelte-116rqfv{display:flex;justify-content:center;align-items:center}input.svelte-116rqfv{display:none}div.svelte-19sk1im{display:flex;top:var(--size-2);right:var(--size-2);justify-content:flex-end;gap:var(--spacing-sm);z-index:var(--layer-1)}.not-absolute.svelte-19sk1im{margin:var(--size-1)}
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/TabItem-e9c69a3d.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/TabItem-e9c69a3d.css
deleted file mode 100644
index 1266dd6e2b9efeaca97a25766a6f89ae745aca0e..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/TabItem-e9c69a3d.css
+++ /dev/null
@@ -1 +0,0 @@
-.tabs.svelte-kqij2n{position:relative}.hide.svelte-kqij2n{display:none}.tab-nav.svelte-kqij2n{display:flex;position:relative;flex-wrap:wrap;border-bottom:1px solid var(--border-color-primary)}button.svelte-kqij2n{margin-bottom:-1px;border:1px solid transparent;border-color:transparent;border-bottom:none;border-top-right-radius:var(--container-radius);border-top-left-radius:var(--container-radius);padding:var(--size-1) var(--size-4);color:var(--body-text-color-subdued);font-weight:var(--section-header-text-weight);font-size:var(--section-header-text-size)}button.svelte-kqij2n:hover{color:var(--body-text-color)}.selected.svelte-kqij2n{border-color:var(--border-color-primary);background:var(--background-fill-primary);color:var(--body-text-color)}.bar.svelte-kqij2n{display:block;position:absolute;bottom:-2px;left:0;z-index:999;background:var(--background-fill-primary);width:100%;height:2px;content:""}div.svelte-19hvt5v{display:flex;position:relative;border:1px solid var(--border-color-primary);border-top:none;border-bottom-right-radius:var(--container-radius);border-bottom-left-radius:var(--container-radius);padding:var(--block-padding)}
diff --git a/spaces/Dacoolkid/Oba_-s/.py.py b/spaces/Dacoolkid/Oba_-s/.py.py
deleted file mode 100644
index d1259d6d4d5f26a98bd514a72ccf22a863ea2894..0000000000000000000000000000000000000000
--- a/spaces/Dacoolkid/Oba_-s/.py.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import openai
-import gradio
-
-openai.api_key = "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"
-
-messages = [{"role": "system", "content": "You are a chatai"}]
-
-def CustomChatGPT(user_input):
- messages.append({"role": "user", "content": user_input})
- response = openai.ChatCompletion.create(
- model = "gpt-3.5-turbo",
- messages = messages
- )
- ChatGPT_reply = response["choices"][0]["message"]["content"]
- messages.append({"role": "assistant", "content": ChatGPT_reply})
- return ChatGPT_reply
-
-demo = gr.Interface(fn=CustomChatGPT, inputs = "text", "state", chat= "text", title = "ai")
-
-demo.launch(share=True)
\ No newline at end of file
diff --git a/spaces/DaleChen/AutoGPT/autogpt/speech/macos_tts.py b/spaces/DaleChen/AutoGPT/autogpt/speech/macos_tts.py
deleted file mode 100644
index 4c072ce256782e83a578b5181abf1a7b524c621b..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/speech/macos_tts.py
+++ /dev/null
@@ -1,21 +0,0 @@
-""" MacOS TTS Voice. """
-import os
-
-from autogpt.speech.base import VoiceBase
-
-
-class MacOSTTS(VoiceBase):
- """MacOS TTS Voice."""
-
- def _setup(self) -> None:
- pass
-
- def _speech(self, text: str, voice_index: int = 0) -> bool:
- """Play the given text."""
- if voice_index == 0:
- os.system(f'say "{text}"')
- elif voice_index == 1:
- os.system(f'say -v "Ava (Premium)" "{text}"')
- else:
- os.system(f'say -v Samantha "{text}"')
- return True
diff --git a/spaces/Dao3/chatwithdocs/utils.py b/spaces/Dao3/chatwithdocs/utils.py
deleted file mode 100644
index 8d5a715335313d061c3ead3aec8ec6d10191cf64..0000000000000000000000000000000000000000
--- a/spaces/Dao3/chatwithdocs/utils.py
+++ /dev/null
@@ -1,183 +0,0 @@
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain.vectorstores.faiss import FAISS
-from langchain import OpenAI, Cohere
-from langchain.chains.qa_with_sources import load_qa_with_sources_chain
-from embeddings import OpenAIEmbeddings
-from langchain.llms import OpenAI
-from langchain.docstore.document import Document
-from langchain.vectorstores import FAISS, VectorStore
-import docx2txt
-from typing import List, Dict, Any
-import re
-import numpy as np
-from io import StringIO
-from io import BytesIO
-import streamlit as st
-from prompts import STUFF_PROMPT
-from pypdf import PdfReader
-from openai.error import AuthenticationError
-import pptx
-
-@st.experimental_memo()
-def parse_docx(file: BytesIO) -> str:
- text = docx2txt.process(file)
- # Remove multiple newlines
- text = re.sub(r"\n\s*\n", "\n\n", text)
- return text
-
-
-@st.experimental_memo()
-def parse_pdf(file: BytesIO) -> List[str]:
- pdf = PdfReader(file)
- output = []
- for page in pdf.pages:
- text = page.extract_text()
- # Merge hyphenated words
- text = re.sub(r"(\w+)-\n(\w+)", r"\1\2", text)
- # Fix newlines in the middle of sentences
- text = re.sub(r"(? str:
- text = file.read().decode("utf-8")
- # Remove multiple newlines
- text = re.sub(r"\n\s*\n", "\n\n", text)
- return text
-
-@st.experimental_memo()
-def parse_pptx(file: BytesIO) -> str:
-
- ppt_file = pptx.Presentation(file)
-
- string_data = ""
-
- for slide in ppt_file.slides:
- for shape in slide.shapes:
- if shape.has_text_frame:
- string_data += shape.text_frame.text + '\n'
- return string_data
-
-@st.experimental_memo()
-def parse_csv(uploaded_file):
- # To read file as bytes:
- #bytes_data = uploaded_file.getvalue()
- #st.write(bytes_data)
-
- # To convert to a string based IO:
- stringio = StringIO(uploaded_file.getvalue().decode("utf-8"))
- #st.write(stringio)
-
- # To read file as string:
- string_data = stringio.read()
- #st.write(string_data)
-
- # Can be used wherever a "file-like" object is accepted:
- # dataframe = pd.read_csv(uploaded_file)
- return string_data
-
-@st.experimental_memo()
-def parse_any(uploaded_file):
- stringio = StringIO(uploaded_file.getvalue().decode("utf-8"))
- string_data = stringio.read()
- return string_data
-
-@st.cache(allow_output_mutation=True)
-def text_to_docs(text: str) -> List[Document]:
- """Converts a string or list of strings to a list of Documents
- with metadata."""
- if isinstance(text, str):
- # Take a single string as one page
- text = [text]
- page_docs = [Document(page_content=page) for page in text]
-
- # Add page numbers as metadata
- for i, doc in enumerate(page_docs):
- doc.metadata["page"] = i + 1
-
- # Split pages into chunks
- doc_chunks = []
-
- for doc in page_docs:
- text_splitter = RecursiveCharacterTextSplitter(
- chunk_size=800,
- separators=["\n\n", "\n", ".", "!", "?", ",", " ", ""],
- chunk_overlap=0,
- )
- chunks = text_splitter.split_text(doc.page_content)
- for i, chunk in enumerate(chunks):
- doc = Document(
- page_content=chunk, metadata={"page": doc.metadata["page"], "chunk": i}
- )
- # Add sources a metadata
- doc.metadata["source"] = f"{doc.metadata['page']}-{doc.metadata['chunk']}"
- doc_chunks.append(doc)
- return doc_chunks
-
-
-@st.cache(allow_output_mutation=True, show_spinner=False)
-def embed_docs(docs: List[Document]) -> VectorStore:
- """Embeds a list of Documents and returns a FAISS index"""
-
- if not st.session_state.get("OPENAI_API_KEY"):
- raise AuthenticationError(
- "Enter your OpenAI API key in the sidebar. You can get a key at https://platform.openai.com/account/api-keys."
- )
- else:
- # Embed the chunks
- embeddings = OpenAIEmbeddings(openai_api_key=st.session_state.get("OPENAI_API_KEY")) # type: ignore
- index = FAISS.from_documents(docs, embeddings)
-
- return index
-
-
-@st.cache(allow_output_mutation=True)
-def search_docs(index: VectorStore, query: str) -> List[Document]:
- """Searches a FAISS index for similar chunks to the query
- and returns a list of Documents."""
-
- # Search for similar chunks
- docs = index.similarity_search(query, k=5)
- return docs
-
-
-@st.cache(allow_output_mutation=True)
-def get_answer(docs: List[Document], query: str) -> Dict[str, Any]:
- """Gets an answer to a question from a list of Documents."""
-
- # Get the answer
- chain = load_qa_with_sources_chain(OpenAI(temperature=0, openai_api_key=st.session_state.get("OPENAI_API_KEY")), chain_type="stuff", prompt=STUFF_PROMPT) # type: ignore
-
- answer = chain(
- {"input_documents": docs, "question": query}, return_only_outputs=True
- )
- return answer
-
-
-@st.cache(allow_output_mutation=True)
-def get_sources(answer: Dict[str, Any], docs: List[Document]) -> List[Document]:
- """Gets the source documents for an answer."""
-
- # Get sources for the answer
- source_keys = [s for s in answer["output_text"].split("SOURCES: ")[-1].split(", ")]
-
- source_docs = []
- for doc in docs:
- if doc.metadata["source"] in source_keys:
- source_docs.append(doc)
-
- return source_docs
-
-
-def wrap_text_in_html(text: str) -> str:
- """Wraps each text block separated by newlines in
tags"""
- if isinstance(text, list):
- # Add horizontal rules between pages
- text = "\n
\n".join(text)
- return "".join([f"
{line}
" for line in text.split("\n")])
\ No newline at end of file
diff --git a/spaces/Datasculptor/DescriptionGPT/detic/data/datasets/coco_zeroshot.py b/spaces/Datasculptor/DescriptionGPT/detic/data/datasets/coco_zeroshot.py
deleted file mode 100644
index aee895de41db95e379874fa6e1badd95c5fe6742..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/DescriptionGPT/detic/data/datasets/coco_zeroshot.py
+++ /dev/null
@@ -1,121 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import os
-
-from detectron2.data.datasets.register_coco import register_coco_instances
-from detectron2.data.datasets.builtin_meta import _get_builtin_metadata
-from .lvis_v1 import custom_register_lvis_instances
-
-categories_seen = [
- {'id': 1, 'name': 'person'},
- {'id': 2, 'name': 'bicycle'},
- {'id': 3, 'name': 'car'},
- {'id': 4, 'name': 'motorcycle'},
- {'id': 7, 'name': 'train'},
- {'id': 8, 'name': 'truck'},
- {'id': 9, 'name': 'boat'},
- {'id': 15, 'name': 'bench'},
- {'id': 16, 'name': 'bird'},
- {'id': 19, 'name': 'horse'},
- {'id': 20, 'name': 'sheep'},
- {'id': 23, 'name': 'bear'},
- {'id': 24, 'name': 'zebra'},
- {'id': 25, 'name': 'giraffe'},
- {'id': 27, 'name': 'backpack'},
- {'id': 31, 'name': 'handbag'},
- {'id': 33, 'name': 'suitcase'},
- {'id': 34, 'name': 'frisbee'},
- {'id': 35, 'name': 'skis'},
- {'id': 38, 'name': 'kite'},
- {'id': 42, 'name': 'surfboard'},
- {'id': 44, 'name': 'bottle'},
- {'id': 48, 'name': 'fork'},
- {'id': 50, 'name': 'spoon'},
- {'id': 51, 'name': 'bowl'},
- {'id': 52, 'name': 'banana'},
- {'id': 53, 'name': 'apple'},
- {'id': 54, 'name': 'sandwich'},
- {'id': 55, 'name': 'orange'},
- {'id': 56, 'name': 'broccoli'},
- {'id': 57, 'name': 'carrot'},
- {'id': 59, 'name': 'pizza'},
- {'id': 60, 'name': 'donut'},
- {'id': 62, 'name': 'chair'},
- {'id': 65, 'name': 'bed'},
- {'id': 70, 'name': 'toilet'},
- {'id': 72, 'name': 'tv'},
- {'id': 73, 'name': 'laptop'},
- {'id': 74, 'name': 'mouse'},
- {'id': 75, 'name': 'remote'},
- {'id': 78, 'name': 'microwave'},
- {'id': 79, 'name': 'oven'},
- {'id': 80, 'name': 'toaster'},
- {'id': 82, 'name': 'refrigerator'},
- {'id': 84, 'name': 'book'},
- {'id': 85, 'name': 'clock'},
- {'id': 86, 'name': 'vase'},
- {'id': 90, 'name': 'toothbrush'},
-]
-
-categories_unseen = [
- {'id': 5, 'name': 'airplane'},
- {'id': 6, 'name': 'bus'},
- {'id': 17, 'name': 'cat'},
- {'id': 18, 'name': 'dog'},
- {'id': 21, 'name': 'cow'},
- {'id': 22, 'name': 'elephant'},
- {'id': 28, 'name': 'umbrella'},
- {'id': 32, 'name': 'tie'},
- {'id': 36, 'name': 'snowboard'},
- {'id': 41, 'name': 'skateboard'},
- {'id': 47, 'name': 'cup'},
- {'id': 49, 'name': 'knife'},
- {'id': 61, 'name': 'cake'},
- {'id': 63, 'name': 'couch'},
- {'id': 76, 'name': 'keyboard'},
- {'id': 81, 'name': 'sink'},
- {'id': 87, 'name': 'scissors'},
-]
-
-def _get_metadata(cat):
- if cat == 'all':
- return _get_builtin_metadata('coco')
- elif cat == 'seen':
- id_to_name = {x['id']: x['name'] for x in categories_seen}
- else:
- assert cat == 'unseen'
- id_to_name = {x['id']: x['name'] for x in categories_unseen}
-
- thing_dataset_id_to_contiguous_id = {
- x: i for i, x in enumerate(sorted(id_to_name))}
- thing_classes = [id_to_name[k] for k in sorted(id_to_name)]
- return {
- "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id,
- "thing_classes": thing_classes}
-
-_PREDEFINED_SPLITS_COCO = {
- "coco_zeroshot_train": ("coco/train2017", "coco/zero-shot/instances_train2017_seen_2.json", 'seen'),
- "coco_zeroshot_val": ("coco/val2017", "coco/zero-shot/instances_val2017_unseen_2.json", 'unseen'),
- "coco_not_zeroshot_val": ("coco/val2017", "coco/zero-shot/instances_val2017_seen_2.json", 'seen'),
- "coco_generalized_zeroshot_val": ("coco/val2017", "coco/zero-shot/instances_val2017_all_2_oriorder.json", 'all'),
- "coco_zeroshot_train_oriorder": ("coco/train2017", "coco/zero-shot/instances_train2017_seen_2_oriorder.json", 'all'),
-}
-
-for key, (image_root, json_file, cat) in _PREDEFINED_SPLITS_COCO.items():
- register_coco_instances(
- key,
- _get_metadata(cat),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
-
-_CUSTOM_SPLITS_COCO = {
- "cc3m_coco_train_tags": ("cc3m/training/", "cc3m/coco_train_image_info_tags.json"),
- "coco_caption_train_tags": ("coco/train2017/", "coco/annotations/captions_train2017_tags_allcaps.json"),}
-
-for key, (image_root, json_file) in _CUSTOM_SPLITS_COCO.items():
- custom_register_lvis_instances(
- key,
- _get_builtin_metadata('coco'),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
\ No newline at end of file
diff --git a/spaces/Datasculptor/DescriptionGPT/detic/modeling/utils.py b/spaces/Datasculptor/DescriptionGPT/detic/modeling/utils.py
deleted file mode 100644
index 297fb469a049d3df2a4aa730e09c9919b4c4ca3c..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/DescriptionGPT/detic/modeling/utils.py
+++ /dev/null
@@ -1,49 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import torch
-import json
-import numpy as np
-from torch.nn import functional as F
-
-def load_class_freq(
- path='datasets/metadata/lvis_v1_train_cat_info.json', freq_weight=1.0):
- cat_info = json.load(open(path, 'r'))
- cat_info = torch.tensor(
- [c['image_count'] for c in sorted(cat_info, key=lambda x: x['id'])])
- freq_weight = cat_info.float() ** freq_weight
- return freq_weight
-
-
-def get_fed_loss_inds(gt_classes, num_sample_cats, C, weight=None):
- appeared = torch.unique(gt_classes) # C'
- prob = appeared.new_ones(C + 1).float()
- prob[-1] = 0
- if len(appeared) < num_sample_cats:
- if weight is not None:
- prob[:C] = weight.float().clone()
- prob[appeared] = 0
- more_appeared = torch.multinomial(
- prob, num_sample_cats - len(appeared),
- replacement=False)
- appeared = torch.cat([appeared, more_appeared])
- return appeared
-
-
-
-def reset_cls_test(model, cls_path, num_classes):
- model.roi_heads.num_classes = num_classes
- if type(cls_path) == str:
- print('Resetting zs_weight', cls_path)
- zs_weight = torch.tensor(
- np.load(cls_path),
- dtype=torch.float32).permute(1, 0).contiguous() # D x C
- else:
- zs_weight = cls_path
- zs_weight = torch.cat(
- [zs_weight, zs_weight.new_zeros((zs_weight.shape[0], 1))],
- dim=1) # D x (C + 1)
- if model.roi_heads.box_predictor[0].cls_score.norm_weight:
- zs_weight = F.normalize(zs_weight, p=2, dim=0)
- zs_weight = zs_weight.to(model.device)
- for k in range(len(model.roi_heads.box_predictor)):
- del model.roi_heads.box_predictor[k].cls_score.zs_weight
- model.roi_heads.box_predictor[k].cls_score.zs_weight = zs_weight
\ No newline at end of file
diff --git a/spaces/Datasculptor/MusicGen/setup.py b/spaces/Datasculptor/MusicGen/setup.py
deleted file mode 100644
index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/setup.py
+++ /dev/null
@@ -1,65 +0,0 @@
-"""
- Copyright (c) Meta Platforms, Inc. and affiliates.
- All rights reserved.
-
- This source code is licensed under the license found in the
- LICENSE file in the root directory of this source tree.
-
-"""
-
-from pathlib import Path
-
-from setuptools import setup, find_packages
-
-
-NAME = 'audiocraft'
-DESCRIPTION = 'Audio research library for PyTorch'
-
-URL = 'https://github.com/fairinternal/audiocraft'
-AUTHOR = 'FAIR Speech & Audio'
-EMAIL = 'defossez@meta.com'
-REQUIRES_PYTHON = '>=3.8.0'
-
-for line in open('audiocraft/__init__.py'):
- line = line.strip()
- if '__version__' in line:
- context = {}
- exec(line, context)
- VERSION = context['__version__']
-
-HERE = Path(__file__).parent
-
-try:
- with open(HERE / "README.md", encoding='utf-8') as f:
- long_description = '\n' + f.read()
-except FileNotFoundError:
- long_description = DESCRIPTION
-
-REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')]
-
-setup(
- name=NAME,
- version=VERSION,
- description=DESCRIPTION,
- author_email=EMAIL,
- long_description=long_description,
- long_description_content_type='text/markdown',
- author=AUTHOR,
- url=URL,
- python_requires=REQUIRES_PYTHON,
- install_requires=REQUIRED,
- extras_require={
- 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'],
- },
- packages=find_packages(),
- package_data={'audiocraft': ['py.typed']},
- include_package_data=True,
- license='MIT License',
- classifiers=[
- # Trove classifiers
- # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
- 'License :: OSI Approved :: MIT License',
- 'Topic :: Multimedia :: Sound/Audio',
- 'Topic :: Scientific/Engineering :: Artificial Intelligence',
- ],
-)
diff --git a/spaces/DeclK/pose/convert_det.sh b/spaces/DeclK/pose/convert_det.sh
deleted file mode 100644
index 592371fd27fc97439f0a3c6b0ac8cda41f8c8a43..0000000000000000000000000000000000000000
--- a/spaces/DeclK/pose/convert_det.sh
+++ /dev/null
@@ -1,8 +0,0 @@
-python tools/deploy.py \
- model_zoo/rtmdet/rtmdet_tiny_8xb32-300e_coco/detection_onnxruntime_static.py \
- model_zoo/rtmdet/rtmdet_tiny_8xb32-300e_coco/rtmdet_tiny_8xb32-300e_coco.py \
- model_zoo/rtmdet/rtmdet_tiny_8xb32-300e_coco/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth \
- assets/onnx_test.jpg \
- --work-dir model_zoo/rtmdet/rtmdet_tiny_8xb32-300e_coco \
- --device cpu \
- --show
\ No newline at end of file
diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segdata.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/segdata.py
deleted file mode 100644
index f3cb6dfac8985d9c55344abbc26cc26c4862aa85..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segdata.py
+++ /dev/null
@@ -1,74 +0,0 @@
-import os, numpy, torch, json
-from .parallelfolder import ParallelImageFolders
-from torchvision import transforms
-from torchvision.transforms.functional import to_tensor, normalize
-
-class FieldDef(object):
- def __init__(self, field, index, bitshift, bitmask, labels):
- self.field = field
- self.index = index
- self.bitshift = bitshift
- self.bitmask = bitmask
- self.labels = labels
-
-class MultiSegmentDataset(object):
- '''
- Just like ClevrMulticlassDataset, but the second stream is a one-hot
- segmentation tensor rather than a flat one-hot presence vector.
-
- MultiSegmentDataset('dataset/clevrseg',
- imgdir='images/train/positive',
- segdir='images/train/segmentation')
- '''
- def __init__(self, directory, transform=None,
- imgdir='img', segdir='seg', val=False, size=None):
- self.segdataset = ParallelImageFolders(
- [os.path.join(directory, imgdir),
- os.path.join(directory, segdir)],
- transform=transform)
- self.fields = []
- with open(os.path.join(directory, 'labelnames.json'), 'r') as f:
- for defn in json.load(f):
- self.fields.append(FieldDef(
- defn['field'], defn['index'], defn['bitshift'],
- defn['bitmask'], defn['label']))
- self.labels = ['-'] # Reserve label 0 to mean "no label"
- self.categories = []
- self.label_category = [0]
- for fieldnum, f in enumerate(self.fields):
- self.categories.append(f.field)
- f.firstchannel = len(self.labels)
- f.channels = len(f.labels) - 1
- for lab in f.labels[1:]:
- self.labels.append(lab)
- self.label_category.append(fieldnum)
- # Reserve 25% of the dataset for validation.
- first_val = int(len(self.segdataset) * 0.75)
- self.val = val
- self.first = first_val if val else 0
- self.length = len(self.segdataset) - first_val if val else first_val
- # Truncate the dataset if requested.
- if size:
- self.length = min(size, self.length)
-
- def __len__(self):
- return self.length
-
- def __getitem__(self, index):
- img, segimg = self.segdataset[index + self.first]
- segin = numpy.array(segimg, numpy.uint8, copy=False)
- segout = torch.zeros(len(self.categories),
- segin.shape[0], segin.shape[1], dtype=torch.int64)
- for i, field in enumerate(self.fields):
- fielddata = ((torch.from_numpy(segin[:, :, field.index])
- >> field.bitshift) & field.bitmask)
- segout[i] = field.firstchannel + fielddata - 1
- bincount = numpy.bincount(segout.flatten(),
- minlength=len(self.labels))
- return img, segout, bincount
-
-if __name__ == '__main__':
- ds = MultiSegmentDataset('dataset/clevrseg')
- print(ds[0])
- import pdb; pdb.set_trace()
-
diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/infer_pack/modules.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/infer_pack/modules.py
deleted file mode 100644
index 2201a58bee9b7808d386b3ef9ac2d1f9630e56ef..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/infer/lib/infer_pack/modules.py
+++ /dev/null
@@ -1,521 +0,0 @@
-import copy
-import math
-
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d
-from torch.nn import functional as F
-from torch.nn.utils import remove_weight_norm, weight_norm
-
-from infer.lib.infer_pack import commons
-from infer.lib.infer_pack.commons import get_padding, init_weights
-from infer.lib.infer_pack.transforms import piecewise_rational_quadratic_transform
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 0."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dialted and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/tests/test_utils.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/tests/test_utils.py
deleted file mode 100644
index 7919b74905495b4b6f4aa957a1f0b5d7a174c782..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/tests/test_utils.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import numpy as np
-from basicsr.archs.rrdbnet_arch import RRDBNet
-
-from realesrgan.utils import RealESRGANer
-
-
-def test_realesrganer():
- # initialize with default model
- restorer = RealESRGANer(
- scale=4,
- model_path='experiments/pretrained_models/RealESRGAN_x4plus.pth',
- model=None,
- tile=10,
- tile_pad=10,
- pre_pad=2,
- half=False)
- assert isinstance(restorer.model, RRDBNet)
- assert restorer.half is False
- # initialize with user-defined model
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
- restorer = RealESRGANer(
- scale=4,
- model_path='experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth',
- model=model,
- tile=10,
- tile_pad=10,
- pre_pad=2,
- half=True)
- # test attribute
- assert isinstance(restorer.model, RRDBNet)
- assert restorer.half is True
-
- # ------------------ test pre_process ---------------- #
- img = np.random.random((12, 12, 3)).astype(np.float32)
- restorer.pre_process(img)
- assert restorer.img.shape == (1, 3, 14, 14)
- # with modcrop
- restorer.scale = 1
- restorer.pre_process(img)
- assert restorer.img.shape == (1, 3, 16, 16)
-
- # ------------------ test process ---------------- #
- restorer.process()
- assert restorer.output.shape == (1, 3, 64, 64)
-
- # ------------------ test post_process ---------------- #
- restorer.mod_scale = 4
- output = restorer.post_process()
- assert output.shape == (1, 3, 60, 60)
-
- # ------------------ test tile_process ---------------- #
- restorer.scale = 4
- img = np.random.random((12, 12, 3)).astype(np.float32)
- restorer.pre_process(img)
- restorer.tile_process()
- assert restorer.output.shape == (1, 3, 64, 64)
-
- # ------------------ test enhance ---------------- #
- img = np.random.random((12, 12, 3)).astype(np.float32)
- result = restorer.enhance(img, outscale=2)
- assert result[0].shape == (24, 24, 3)
- assert result[1] == 'RGB'
-
- # ------------------ test enhance with 16-bit image---------------- #
- img = np.random.random((4, 4, 3)).astype(np.uint16) + 512
- result = restorer.enhance(img, outscale=2)
- assert result[0].shape == (8, 8, 3)
- assert result[1] == 'RGB'
-
- # ------------------ test enhance with gray image---------------- #
- img = np.random.random((4, 4)).astype(np.float32)
- result = restorer.enhance(img, outscale=2)
- assert result[0].shape == (8, 8)
- assert result[1] == 'L'
-
- # ------------------ test enhance with RGBA---------------- #
- img = np.random.random((4, 4, 4)).astype(np.float32)
- result = restorer.enhance(img, outscale=2)
- assert result[0].shape == (8, 8, 4)
- assert result[1] == 'RGBA'
-
- # ------------------ test enhance with RGBA, alpha_upsampler---------------- #
- restorer.tile_size = 0
- img = np.random.random((4, 4, 4)).astype(np.float32)
- result = restorer.enhance(img, outscale=2, alpha_upsampler=None)
- assert result[0].shape == (8, 8, 4)
- assert result[1] == 'RGBA'
diff --git a/spaces/EmpathyFirstMedia/README/README.md b/spaces/EmpathyFirstMedia/README/README.md
deleted file mode 100644
index 4e2997443861e7c883888730b2c37de1d8ce187b..0000000000000000000000000000000000000000
--- a/spaces/EmpathyFirstMedia/README/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: README
-emoji: 🏢
-colorFrom: pink
-colorTo: blue
-sdk: static
-pinned: true
----
-
-Empathy First Media
-https://EmpathyFirstMedia.com
diff --git a/spaces/Enderfga/mtCNN_sysu/utils/detect.py b/spaces/Enderfga/mtCNN_sysu/utils/detect.py
deleted file mode 100644
index b98208148d95112fed02685323261d7435e9d834..0000000000000000000000000000000000000000
--- a/spaces/Enderfga/mtCNN_sysu/utils/detect.py
+++ /dev/null
@@ -1,758 +0,0 @@
-import cv2
-import time
-import numpy as np
-import torch
-from utils.models import PNet,RNet,ONet
-import utils.tool as utils
-import utils.dataloader as image_tools
-
-
-def create_mtcnn_net(p_model_path=None, r_model_path=None, o_model_path=None, use_cuda=True):
-
- pnet, rnet, onet = None, None, None
-
- if p_model_path is not None:
- pnet = PNet(use_cuda=use_cuda)
- if(use_cuda):
- print('p_model_path:{0}'.format(p_model_path))
- pnet.load_state_dict(torch.load(p_model_path))
- pnet.cuda()
- else:
- # forcing all GPU tensors to be in CPU while loading
- #pnet.load_state_dict(torch.load(p_model_path, map_location=lambda storage, loc: storage))
- pnet.load_state_dict(torch.load(p_model_path, map_location='cpu'))
- pnet.eval()
-
- if r_model_path is not None:
- rnet = RNet(use_cuda=use_cuda)
- if (use_cuda):
- print('r_model_path:{0}'.format(r_model_path))
- rnet.load_state_dict(torch.load(r_model_path))
- rnet.cuda()
- else:
- rnet.load_state_dict(torch.load(r_model_path, map_location=lambda storage, loc: storage))
- rnet.eval()
-
- if o_model_path is not None:
- onet = ONet(use_cuda=use_cuda)
- if (use_cuda):
- print('o_model_path:{0}'.format(o_model_path))
- onet.load_state_dict(torch.load(o_model_path))
- onet.cuda()
- else:
- onet.load_state_dict(torch.load(o_model_path, map_location=lambda storage, loc: storage))
- onet.eval()
-
- return pnet,rnet,onet
-
-
-
-
-class MtcnnDetector(object):
- """
- P,R,O net face detection and landmarks align
- """
- def __init__(self,
- pnet = None,
- rnet = None,
- onet = None,
- min_face_size=12,
- stride=2,
- threshold=[0.6, 0.7, 0.7],
- #threshold=[0.1, 0.1, 0.1],
- scale_factor=0.709,
- ):
-
- self.pnet_detector = pnet
- self.rnet_detector = rnet
- self.onet_detector = onet
- self.min_face_size = min_face_size
- self.stride=stride
- self.thresh = threshold
- self.scale_factor = scale_factor
-
-
- def unique_image_format(self,im):
- if not isinstance(im,np.ndarray):
- if im.mode == 'I':
- im = np.array(im, np.int32, copy=False)
- elif im.mode == 'I;16':
- im = np.array(im, np.int16, copy=False)
- else:
- im = np.asarray(im)
- return im
-
- def square_bbox(self, bbox):
- """
- convert bbox to square
- Parameters:
- ----------
- bbox: numpy array , shape n x m
- input bbox
- Returns:
- -------
- a square bbox
- """
- square_bbox = bbox.copy()
-
- # x2 - x1
- # y2 - y1
- h = bbox[:, 3] - bbox[:, 1] + 1
- w = bbox[:, 2] - bbox[:, 0] + 1
- l = np.maximum(h,w)
- # x1 = x1 + w*0.5 - l*0.5
- # y1 = y1 + h*0.5 - l*0.5
- square_bbox[:, 0] = bbox[:, 0] + w*0.5 - l*0.5
- square_bbox[:, 1] = bbox[:, 1] + h*0.5 - l*0.5
-
- # x2 = x1 + l - 1
- # y2 = y1 + l - 1
- square_bbox[:, 2] = square_bbox[:, 0] + l - 1
- square_bbox[:, 3] = square_bbox[:, 1] + l - 1
- return square_bbox
-
-
- def generate_bounding_box(self, map, reg, scale, threshold):
- """
- generate bbox from feature map
- Parameters:
- ----------
- map: numpy array , n x m x 1
- detect score for each position
- reg: numpy array , n x m x 4
- bbox
- scale: float number
- scale of this detection
- threshold: float number
- detect threshold
- Returns:
- -------
- bbox array
- """
- stride = 2
- cellsize = 12 # receptive field
-
- t_index = np.where(map[:,:,0] > threshold)
- # print('shape of t_index:{0}'.format(len(t_index)))
- # print('t_index{0}'.format(t_index))
- # time.sleep(5)
-
- # find nothing
- if t_index[0].size == 0:
- return np.array([])
-
- # reg = (1, n, m, 4)
- # choose bounding box whose socre are larger than threshold
- dx1, dy1, dx2, dy2 = [reg[0, t_index[0], t_index[1], i] for i in range(4)]
- #print(dx1.shape)
- #exit()
- # time.sleep(5)
- reg = np.array([dx1, dy1, dx2, dy2])
- #print('shape of reg{0}'.format(reg.shape))
- #exit()
-
- # lefteye_dx, lefteye_dy, righteye_dx, righteye_dy, nose_dx, nose_dy, \
- # leftmouth_dx, leftmouth_dy, rightmouth_dx, rightmouth_dy = [landmarks[0, t_index[0], t_index[1], i] for i in range(10)]
- #
- # landmarks = np.array([lefteye_dx, lefteye_dy, righteye_dx, righteye_dy, nose_dx, nose_dy, leftmouth_dx, leftmouth_dy, rightmouth_dx, rightmouth_dy])
-
- # abtain score of classification which larger than threshold
- # t_index[0]: choose the first column of t_index
- # t_index[1]: choose the second column of t_index
- score = map[t_index[0], t_index[1], 0]
- # hence t_index[1] means column, t_index[1] is the value of x
- # hence t_index[0] means row, t_index[0] is the value of y
- boundingbox = np.vstack([np.round((stride * t_index[1]) / scale), # x1 of prediction box in original image
- np.round((stride * t_index[0]) / scale), # y1 of prediction box in original image
- np.round((stride * t_index[1] + cellsize) / scale), # x2 of prediction box in original image
- np.round((stride * t_index[0] + cellsize) / scale), # y2 of prediction box in original image
- # reconstruct the box in original image
- score,
- reg,
- # landmarks
- ])
-
- return boundingbox.T
-
-
- def resize_image(self, img, scale):
- """
- resize image and transform dimention to [batchsize, channel, height, width]
- Parameters:
- ----------
- img: numpy array , height x width x channel
- input image, channels in BGR order here
- scale: float number
- scale factor of resize operation
- Returns:
- -------
- transformed image tensor , 1 x channel x height x width
- """
- height, width, channels = img.shape
- new_height = int(height * scale) # resized new height
- new_width = int(width * scale) # resized new width
- new_dim = (new_width, new_height)
- img_resized = cv2.resize(img, new_dim, interpolation=cv2.INTER_LINEAR) # resized image
- return img_resized
-
-
- def pad(self, bboxes, w, h):
- """
- pad the the boxes
- Parameters:
- ----------
- bboxes: numpy array, n x 5
- input bboxes
- w: float number
- width of the input image
- h: float number
- height of the input image
- Returns :
- ------
- dy, dx : numpy array, n x 1
- start point of the bbox in target image
- edy, edx : numpy array, n x 1
- end point of the bbox in target image
- y, x : numpy array, n x 1
- start point of the bbox in original image
- ex, ex : numpy array, n x 1
- end point of the bbox in original image
- tmph, tmpw: numpy array, n x 1
- height and width of the bbox
- """
- # width and height
- tmpw = (bboxes[:, 2] - bboxes[:, 0] + 1).astype(np.int32)
- tmph = (bboxes[:, 3] - bboxes[:, 1] + 1).astype(np.int32)
- numbox = bboxes.shape[0]
-
- dx = np.zeros((numbox, ))
- dy = np.zeros((numbox, ))
- edx, edy = tmpw.copy()-1, tmph.copy()-1
- # x, y: start point of the bbox in original image
- # ex, ey: end point of the bbox in original image
- x, y, ex, ey = bboxes[:, 0], bboxes[:, 1], bboxes[:, 2], bboxes[:, 3]
-
- tmp_index = np.where(ex > w-1)
- edx[tmp_index] = tmpw[tmp_index] + w - 2 - ex[tmp_index]
- ex[tmp_index] = w - 1
-
- tmp_index = np.where(ey > h-1)
- edy[tmp_index] = tmph[tmp_index] + h - 2 - ey[tmp_index]
- ey[tmp_index] = h - 1
-
- tmp_index = np.where(x < 0)
- dx[tmp_index] = 0 - x[tmp_index]
- x[tmp_index] = 0
-
- tmp_index = np.where(y < 0)
- dy[tmp_index] = 0 - y[tmp_index]
- y[tmp_index] = 0
-
- return_list = [dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph]
- return_list = [item.astype(np.int32) for item in return_list]
-
- return return_list
-
-
- def detect_pnet(self, im):
- """Get face candidates through pnet
-
- Parameters:
- ----------
- im: numpy array
- input image array
- one batch
-
- Returns:
- -------
- boxes: numpy array
- detected boxes before calibration
- boxes_align: numpy array
- boxes after calibration
- """
-
- # im = self.unique_image_format(im)
-
- # original wider face data
- h, w, c = im.shape
- net_size = 12
-
- current_scale = float(net_size) / self.min_face_size # find initial scale
- #print('imgshape:{0}, current_scale:{1}'.format(im.shape, current_scale))
- im_resized = self.resize_image(im, current_scale) # scale = 1.0
- current_height, current_width, _ = im_resized.shape
- # fcn
- all_boxes = list()
- while min(current_height, current_width) > net_size:
- #print('current:',current_height, current_width)
- feed_imgs = []
- image_tensor = image_tools.convert_image_to_tensor(im_resized)
- feed_imgs.append(image_tensor)
- feed_imgs = torch.stack(feed_imgs)
-
- feed_imgs.requires_grad = True
-
- if self.pnet_detector.use_cuda:
- feed_imgs = feed_imgs.cuda()
-
- # self.pnet_detector is a trained pnet torch model
-
- # receptive field is 12×12
- # 12×12 --> score
- # 12×12 --> bounding box
- cls_map, reg = self.pnet_detector(feed_imgs)
-
- cls_map_np = image_tools.convert_chwTensor_to_hwcNumpy(cls_map.cpu())
- reg_np = image_tools.convert_chwTensor_to_hwcNumpy(reg.cpu())
- # print(cls_map_np.shape, reg_np.shape) # cls_map_np = (1, n, m, 1) reg_np.shape = (1, n, m 4)
- # time.sleep(5)
- # landmark_np = image_tools.convert_chwTensor_to_hwcNumpy(landmark.cpu())
-
- # self.threshold[0] = 0.6
- # print(cls_map_np[0,:,:].shape)
- # time.sleep(4)
-
- # boxes = [x1, y1, x2, y2, score, reg]
- boxes = self.generate_bounding_box(cls_map_np[ 0, :, :], reg_np, current_scale, self.thresh[0])
- #cv2.rectangle(im,(300,100),(400,200),color=(0,0,0))
- #cv2.rectangle(im,(400,200),(500,300),color=(0,0,0))
-
- # generate pyramid images
- current_scale *= self.scale_factor # self.scale_factor = 0.709
- im_resized = self.resize_image(im, current_scale)
- current_height, current_width, _ = im_resized.shape
-
- if boxes.size == 0:
- continue
-
- # non-maximum suppresion
- keep = utils.nms(boxes[:, :5], 0.5, 'Union')
- boxes = boxes[keep]
- all_boxes.append(boxes)
-
- """ img = im.copy()
- bw = boxes[:,2]-boxes[:,0]
- bh = boxes[:,3]-boxes[:,1]
- for i in range(boxes.shape[0]):
- p1=(int(boxes[i][0]+boxes[i][5]*bw[i]),int(boxes[i][1]+boxes[i][6]*bh[i]))
- p2=(int(boxes[i][2]+boxes[i][7]*bw[i]),int(boxes[i][3]+boxes[i][8]*bh[i]))
- cv2.rectangle(img,p1,p2,color=(0,0,0))
- cv2.imshow('ss',img)
- cv2.waitKey(0)
- #ii+=1
- exit() """
-
- if len(all_boxes) == 0:
- return None, None
- all_boxes = np.vstack(all_boxes)
- # print("shape of all boxes {0}".format(all_boxes.shape))
- # time.sleep(5)
-
- # merge the detection from first stage
- keep = utils.nms(all_boxes[:, 0:5], 0.7, 'Union')
- all_boxes = all_boxes[keep]
- # boxes = all_boxes[:, :5]
-
- # x2 - x1
- # y2 - y1
- bw = all_boxes[:, 2] - all_boxes[:, 0] + 1
- bh = all_boxes[:, 3] - all_boxes[:, 1] + 1
-
- # landmark_keep = all_boxes[:, 9:].reshape((5,2))
-
-
- boxes = np.vstack([all_boxes[:,0],
- all_boxes[:,1],
- all_boxes[:,2],
- all_boxes[:,3],
- all_boxes[:,4],
- # all_boxes[:, 0] + all_boxes[:, 9] * bw,
- # all_boxes[:, 1] + all_boxes[:,10] * bh,
- # all_boxes[:, 0] + all_boxes[:, 11] * bw,
- # all_boxes[:, 1] + all_boxes[:, 12] * bh,
- # all_boxes[:, 0] + all_boxes[:, 13] * bw,
- # all_boxes[:, 1] + all_boxes[:, 14] * bh,
- # all_boxes[:, 0] + all_boxes[:, 15] * bw,
- # all_boxes[:, 1] + all_boxes[:, 16] * bh,
- # all_boxes[:, 0] + all_boxes[:, 17] * bw,
- # all_boxes[:, 1] + all_boxes[:, 18] * bh
- ])
-
- boxes = boxes.T
-
- # boxes = boxes = [x1, y1, x2, y2, score, reg] reg= [px1, py1, px2, py2] (in prediction)
- align_topx = all_boxes[:, 0] + all_boxes[:, 5] * bw
- align_topy = all_boxes[:, 1] + all_boxes[:, 6] * bh
- align_bottomx = all_boxes[:, 2] + all_boxes[:, 7] * bw
- align_bottomy = all_boxes[:, 3] + all_boxes[:, 8] * bh
-
- # refine the boxes
- boxes_align = np.vstack([ align_topx,
- align_topy,
- align_bottomx,
- align_bottomy,
- all_boxes[:, 4],
- # align_topx + all_boxes[:,9] * bw,
- # align_topy + all_boxes[:,10] * bh,
- # align_topx + all_boxes[:,11] * bw,
- # align_topy + all_boxes[:,12] * bh,
- # align_topx + all_boxes[:,13] * bw,
- # align_topy + all_boxes[:,14] * bh,
- # align_topx + all_boxes[:,15] * bw,
- # align_topy + all_boxes[:,16] * bh,
- # align_topx + all_boxes[:,17] * bw,
- # align_topy + all_boxes[:,18] * bh,
- ])
- boxes_align = boxes_align.T
-
- #remove invalid box
- valindex = [True for _ in range(boxes_align.shape[0])]
- for i in range(boxes_align.shape[0]):
- if boxes_align[i][2]-boxes_align[i][0]<=3 or boxes_align[i][3]-boxes_align[i][1]<=3:
- valindex[i]=False
- #print('pnet has one smaller than 3')
- else:
- if boxes_align[i][2]<1 or boxes_align[i][0]>w-2 or boxes_align[i][3]<1 or boxes_align[i][1]>h-2:
- valindex[i]=False
- #print('pnet has one out')
- boxes_align=boxes_align[valindex,:]
- boxes = boxes[valindex,:]
- return boxes, boxes_align
-
- def detect_rnet(self, im, dets):
- """Get face candidates using rnet
-
- Parameters:
- ----------
- im: numpy array
- input image array
- dets: numpy array
- detection results of pnet
-
- Returns:
- -------
- boxes: numpy array
- detected boxes before calibration
- boxes_align: numpy array
- boxes after calibration
- """
- # im: an input image
- h, w, c = im.shape
-
- if dets is None:
- return None,None
- if dets.shape[0]==0:
- return None, None
-
- # (705, 5) = [x1, y1, x2, y2, score, reg]
- # print("pnet detection {0}".format(dets.shape))
- # time.sleep(5)
- detss = dets
- # return square boxes
- dets = self.square_bbox(dets)
- detsss = dets
- # rounds
- dets[:, 0:4] = np.round(dets[:, 0:4])
- [dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph] = self.pad(dets, w, h)
- num_boxes = dets.shape[0]
-
- '''
- # helper for setting RNet batch size
- batch_size = self.rnet_detector.batch_size
- ratio = float(num_boxes) / batch_size
- if ratio > 3 or ratio < 0.3:
- print "You may need to reset RNet batch size if this info appears frequently, \
- face candidates:%d, current batch_size:%d"%(num_boxes, batch_size)
- '''
-
- # cropped_ims_tensors = np.zeros((num_boxes, 3, 24, 24), dtype=np.float32)
- cropped_ims_tensors = []
- for i in range(num_boxes):
- try:
- tmp = np.zeros((tmph[i], tmpw[i], 3), dtype=np.uint8)
- tmp[dy[i]:edy[i]+1, dx[i]:edx[i]+1, :] = im[y[i]:ey[i]+1, x[i]:ex[i]+1, :]
- except:
- print(dy[i],edy[i],dx[i],edx[i],y[i],ey[i],x[i],ex[i],tmpw[i],tmph[i])
- print(dets[i])
- print(detss[i])
- print(detsss[i])
- print(h,w)
- exit()
- crop_im = cv2.resize(tmp, (24, 24))
- crop_im_tensor = image_tools.convert_image_to_tensor(crop_im)
- # cropped_ims_tensors[i, :, :, :] = crop_im_tensor
- cropped_ims_tensors.append(crop_im_tensor)
- feed_imgs = torch.stack(cropped_ims_tensors)
- feed_imgs.requires_grad = True
-
-
- if self.rnet_detector.use_cuda:
- feed_imgs = feed_imgs.cuda()
-
- cls_map, reg = self.rnet_detector(feed_imgs)
-
- cls_map = cls_map.cpu().data.numpy()
- reg = reg.cpu().data.numpy()
- # landmark = landmark.cpu().data.numpy()
-
-
- keep_inds = np.where(cls_map > self.thresh[1])[0]
-
- if len(keep_inds) > 0:
- boxes = dets[keep_inds]
- cls = cls_map[keep_inds]
- reg = reg[keep_inds]
- # landmark = landmark[keep_inds]
- else:
- return None, None
- keep = utils.nms(boxes, 0.7)
-
- if len(keep) == 0:
- return None, None
-
- keep_cls = cls[keep]
- keep_boxes = boxes[keep]
- keep_reg = reg[keep]
- # keep_landmark = landmark[keep]
-
-
- bw = keep_boxes[:, 2] - keep_boxes[:, 0] + 1
- bh = keep_boxes[:, 3] - keep_boxes[:, 1] + 1
-
-
- boxes = np.vstack([ keep_boxes[:,0],
- keep_boxes[:,1],
- keep_boxes[:,2],
- keep_boxes[:,3],
- keep_cls[:,0],
- # keep_boxes[:,0] + keep_landmark[:, 0] * bw,
- # keep_boxes[:,1] + keep_landmark[:, 1] * bh,
- # keep_boxes[:,0] + keep_landmark[:, 2] * bw,
- # keep_boxes[:,1] + keep_landmark[:, 3] * bh,
- # keep_boxes[:,0] + keep_landmark[:, 4] * bw,
- # keep_boxes[:,1] + keep_landmark[:, 5] * bh,
- # keep_boxes[:,0] + keep_landmark[:, 6] * bw,
- # keep_boxes[:,1] + keep_landmark[:, 7] * bh,
- # keep_boxes[:,0] + keep_landmark[:, 8] * bw,
- # keep_boxes[:,1] + keep_landmark[:, 9] * bh,
- ])
-
- align_topx = keep_boxes[:,0] + keep_reg[:,0] * bw
- align_topy = keep_boxes[:,1] + keep_reg[:,1] * bh
- align_bottomx = keep_boxes[:,2] + keep_reg[:,2] * bw
- align_bottomy = keep_boxes[:,3] + keep_reg[:,3] * bh
-
- boxes_align = np.vstack([align_topx,
- align_topy,
- align_bottomx,
- align_bottomy,
- keep_cls[:, 0],
- # align_topx + keep_landmark[:, 0] * bw,
- # align_topy + keep_landmark[:, 1] * bh,
- # align_topx + keep_landmark[:, 2] * bw,
- # align_topy + keep_landmark[:, 3] * bh,
- # align_topx + keep_landmark[:, 4] * bw,
- # align_topy + keep_landmark[:, 5] * bh,
- # align_topx + keep_landmark[:, 6] * bw,
- # align_topy + keep_landmark[:, 7] * bh,
- # align_topx + keep_landmark[:, 8] * bw,
- # align_topy + keep_landmark[:, 9] * bh,
- ])
-
- boxes = boxes.T
- boxes_align = boxes_align.T
-
- #remove invalid box
- valindex = [True for _ in range(boxes_align.shape[0])]
- for i in range(boxes_align.shape[0]):
- if boxes_align[i][2]-boxes_align[i][0]<=3 or boxes_align[i][3]-boxes_align[i][1]<=3:
- valindex[i]=False
- print('rnet has one smaller than 3')
- else:
- if boxes_align[i][2]<1 or boxes_align[i][0]>w-2 or boxes_align[i][3]<1 or boxes_align[i][1]>h-2:
- valindex[i]=False
- print('rnet has one out')
- boxes_align=boxes_align[valindex,:]
- boxes = boxes[valindex,:]
- """ img = im.copy()
- for i in range(boxes_align.shape[0]):
- p1=(int(boxes_align[i,0]),int(boxes_align[i,1]))
- p2=(int(boxes_align[i,2]),int(boxes_align[i,3]))
- cv2.rectangle(img,p1,p2,color=(0,0,0))
- cv2.imshow('ss',img)
- cv2.waitKey(0)
- exit() """
- return boxes, boxes_align
-
- def detect_onet(self, im, dets):
- """Get face candidates using onet
-
- Parameters:
- ----------
- im: numpy array
- input image array
- dets: numpy array
- detection results of rnet
-
- Returns:
- -------
- boxes_align: numpy array
- boxes after calibration
- landmarks_align: numpy array
- landmarks after calibration
-
- """
- h, w, c = im.shape
-
- if dets is None:
- return None, None
- if dets.shape[0]==0:
- return None, None
-
- detss = dets
- dets = self.square_bbox(dets)
-
-
- dets[:, 0:4] = np.round(dets[:, 0:4])
-
- [dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph] = self.pad(dets, w, h)
- num_boxes = dets.shape[0]
-
-
- # cropped_ims_tensors = np.zeros((num_boxes, 3, 24, 24), dtype=np.float32)
- cropped_ims_tensors = []
- for i in range(num_boxes):
- try:
- tmp = np.zeros((tmph[i], tmpw[i], 3), dtype=np.uint8)
- # crop input image
- tmp[dy[i]:edy[i] + 1, dx[i]:edx[i] + 1, :] = im[y[i]:ey[i] + 1, x[i]:ex[i] + 1, :]
- except:
- print(dy[i],edy[i],dx[i],edx[i],y[i],ey[i],x[i],ex[i],tmpw[i],tmph[i])
- print(dets[i])
- print(detss[i])
- print(h,w)
- crop_im = cv2.resize(tmp, (48, 48))
- crop_im_tensor = image_tools.convert_image_to_tensor(crop_im)
- # cropped_ims_tensors[i, :, :, :] = crop_im_tensor
- cropped_ims_tensors.append(crop_im_tensor)
- feed_imgs = torch.stack(cropped_ims_tensors)
- feed_imgs.requires_grad = True
-
- if self.rnet_detector.use_cuda:
- feed_imgs = feed_imgs.cuda()
-
- cls_map, reg, landmark = self.onet_detector(feed_imgs)
-
- cls_map = cls_map.cpu().data.numpy()
- reg = reg.cpu().data.numpy()
- landmark = landmark.cpu().data.numpy()
-
- keep_inds = np.where(cls_map > self.thresh[2])[0]
-
- if len(keep_inds) > 0:
- boxes = dets[keep_inds]
- cls = cls_map[keep_inds]
- reg = reg[keep_inds]
- landmark = landmark[keep_inds]
- else:
- return None, None
-
- keep = utils.nms(boxes, 0.7, mode="Minimum")
-
- if len(keep) == 0:
- return None, None
-
- keep_cls = cls[keep]
- keep_boxes = boxes[keep]
- keep_reg = reg[keep]
- keep_landmark = landmark[keep]
-
- bw = keep_boxes[:, 2] - keep_boxes[:, 0] + 1
- bh = keep_boxes[:, 3] - keep_boxes[:, 1] + 1
-
-
- align_topx = keep_boxes[:, 0] + keep_reg[:, 0] * bw
- align_topy = keep_boxes[:, 1] + keep_reg[:, 1] * bh
- align_bottomx = keep_boxes[:, 2] + keep_reg[:, 2] * bw
- align_bottomy = keep_boxes[:, 3] + keep_reg[:, 3] * bh
-
- align_landmark_topx = keep_boxes[:, 0]
- align_landmark_topy = keep_boxes[:, 1]
-
-
-
-
- boxes_align = np.vstack([align_topx,
- align_topy,
- align_bottomx,
- align_bottomy,
- keep_cls[:, 0],
- # align_topx + keep_landmark[:, 0] * bw,
- # align_topy + keep_landmark[:, 1] * bh,
- # align_topx + keep_landmark[:, 2] * bw,
- # align_topy + keep_landmark[:, 3] * bh,
- # align_topx + keep_landmark[:, 4] * bw,
- # align_topy + keep_landmark[:, 5] * bh,
- # align_topx + keep_landmark[:, 6] * bw,
- # align_topy + keep_landmark[:, 7] * bh,
- # align_topx + keep_landmark[:, 8] * bw,
- # align_topy + keep_landmark[:, 9] * bh,
- ])
-
- boxes_align = boxes_align.T
-
- landmark = np.vstack([
- align_landmark_topx + keep_landmark[:, 0] * bw,
- align_landmark_topy + keep_landmark[:, 1] * bh,
- align_landmark_topx + keep_landmark[:, 2] * bw,
- align_landmark_topy + keep_landmark[:, 3] * bh,
- align_landmark_topx + keep_landmark[:, 4] * bw,
- align_landmark_topy + keep_landmark[:, 5] * bh,
- align_landmark_topx + keep_landmark[:, 6] * bw,
- align_landmark_topy + keep_landmark[:, 7] * bh,
- align_landmark_topx + keep_landmark[:, 8] * bw,
- align_landmark_topy + keep_landmark[:, 9] * bh,
- ])
-
- landmark_align = landmark.T
-
- return boxes_align, landmark_align
-
-
- def detect_face(self,img):
- """Detect face over image
- """
- boxes_align = np.array([])
- landmark_align =np.array([])
-
- t = time.time()
-
- # pnet
- if self.pnet_detector:
- p_boxes, boxes_align = self.detect_pnet(img)
- if boxes_align is None:
- return np.array([]), np.array([])
-
- t1 = time.time() - t
- t = time.time()
-
- # rnet
- if self.rnet_detector:
- r_boxes, boxes_align = self.detect_rnet(img, boxes_align)
- if boxes_align is None:
- return np.array([]), np.array([])
-
- t2 = time.time() - t
- t = time.time()
-
- # onet
- if self.onet_detector:
- boxes_align, landmark_align = self.detect_onet(img, boxes_align)
- if boxes_align is None:
- return np.array([]), np.array([])
-
- t3 = time.time() - t
- t = time.time()
- print("time cost " + '{:.3f}'.format(t1+t2+t3) + ' pnet {:.3f} rnet {:.3f} onet {:.3f}'.format(t1, t2, t3))
-
- return p_boxes,r_boxes,boxes_align, landmark_align
diff --git a/spaces/Envyyyy/vehicle_detection/README.md b/spaces/Envyyyy/vehicle_detection/README.md
deleted file mode 100644
index 0a8d7e7fbee3dd1cfb017b83159d11e1ae814b95..0000000000000000000000000000000000000000
--- a/spaces/Envyyyy/vehicle_detection/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Vehicle Detection
-emoji: 🦀
-colorFrom: red
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/EronSamez/RVC_HFmeu/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/EronSamez/RVC_HFmeu/lib/infer_pack/modules/F0Predictor/F0Predictor.py
deleted file mode 100644
index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/lib/infer_pack/modules/F0Predictor/F0Predictor.py
+++ /dev/null
@@ -1,16 +0,0 @@
-class F0Predictor(object):
- def compute_f0(self, wav, p_len):
- """
- input: wav:[signal_length]
- p_len:int
- output: f0:[signal_length//hop_length]
- """
- pass
-
- def compute_f0_uv(self, wav, p_len):
- """
- input: wav:[signal_length]
- p_len:int
- output: f0:[signal_length//hop_length],uv:[signal_length//hop_length]
- """
- pass
diff --git a/spaces/EsoCode/text-generation-webui/extensions/openai/README.md b/spaces/EsoCode/text-generation-webui/extensions/openai/README.md
deleted file mode 100644
index 0f775bbfb6d39b00369854d7f6ac4f7734710425..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/extensions/openai/README.md
+++ /dev/null
@@ -1,232 +0,0 @@
-# An OpenedAI API (openai like)
-
-This extension creates an API that works kind of like openai (ie. api.openai.com).
-It's incomplete so far but perhaps is functional enough for you.
-
-## Setup & installation
-
-Optional (for flask_cloudflared, embeddings):
-
-```
-pip3 install -r requirements.txt
-```
-
-It listens on tcp port 5001 by default. You can use the OPENEDAI_PORT environment variable to change this.
-
-Make sure you enable it in server launch parameters, it should include:
-
-```
---extensions openai
-```
-
-You can also use the ``--listen`` argument to make the server available on the networ, and/or the ```--share``` argument to enable a public Cloudflare endpoint.
-
-To enable the basic image generation support (txt2img) set the environment variable SD_WEBUI_URL to point to your Stable Diffusion API ([Automatic1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui)).
-
-For example:
-```
-SD_WEBUI_URL=http://127.0.0.1:7861
-```
-
-### Models
-
-This has been successfully tested with Alpaca, Koala, Vicuna, WizardLM and their variants, (ex. gpt4-x-alpaca, GPT4all-snoozy, stable-vicuna, wizard-vicuna, etc.) and many others. Models that have been trained for **Instruction Following** work best. If you test with other models please let me know how it goes. Less than satisfying results (so far) from: RWKV-4-Raven, llama, mpt-7b-instruct/chat.
-
-For best results across all API endpoints, a model like [vicuna-13b-v1.3-GPTQ](https://huggingface.co/TheBloke/vicuna-13b-v1.3-GPTQ), [stable-vicuna-13B-GPTQ](https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ) or [airoboros-13B-gpt4-1.3-GPTQ](https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.3-GPTQ) is a good start.
-
-For good results with the [Completions](https://platform.openai.com/docs/api-reference/completions) API endpoint, in addition to the above models, you can also try using a base model like [falcon-7b](https://huggingface.co/tiiuae/falcon-7b) or Llama.
-
-For good results with the [ChatCompletions](https://platform.openai.com/docs/api-reference/chat) or [Edits](https://platform.openai.com/docs/api-reference/edits) API endpoints you can use almost any model trained for instruction following - within the limits of the model. Be sure that the proper instruction template is detected and loaded or the results will not be good.
-
-For the proper instruction format to be detected you need to have a matching model entry in your ```models/config.yaml``` file. Be sure to keep this file up to date.
-A matching instruction template file in the characters/instruction-following/ folder will loaded and applied to format messages correctly for the model - this is critical for good results.
-
-For example, the Wizard-Vicuna family of models are trained with the Vicuna 1.1 format. In the models/config.yaml file there is this matching entry:
-
-```
-.*wizard.*vicuna:
- mode: 'instruct'
- instruction_template: 'Vicuna-v1.1'
-```
-
-This refers to ```characters/instruction-following/Vicuna-v1.1.yaml```, which looks like this:
-
-```
-user: "USER:"
-bot: "ASSISTANT:"
-turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n"
-context: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\n"
-```
-
-For most common models this is already setup, but if you are using a new or uncommon model you may need add a matching entry to the models/config.yaml and possibly create your own instruction-following template and for best results.
-
-If you see this in your logs, it probably means that the correct format could not be loaded:
-```
-Warning: Loaded default instruction-following template for model.
-```
-
-### Embeddings (alpha)
-
-Embeddings requires ```sentence-transformers``` installed, but chat and completions will function without it loaded. The embeddings endpoint is currently using the HuggingFace model: ```sentence-transformers/all-mpnet-base-v2``` for embeddings. This produces 768 dimensional embeddings (the same as the text-davinci-002 embeddings), which is different from OpenAI's current default ```text-embedding-ada-002``` model which produces 1536 dimensional embeddings. The model is small-ish and fast-ish. This model and embedding size may change in the future.
-
-| model name | dimensions | input max tokens | speed | size | Avg. performance |
-| --- | --- | --- | --- | --- | --- |
-| text-embedding-ada-002 | 1536 | 8192| - | - | - |
-| text-davinci-002 | 768 | 2046 | - | - | - |
-| all-mpnet-base-v2 | 768 | 384 | 2800 | 420M | 63.3 |
-| all-MiniLM-L6-v2 | 384 | 256 | 14200 | 80M | 58.8 |
-
-In short, the all-MiniLM-L6-v2 model is 5x faster, 5x smaller ram, 2x smaller storage, and still offers good quality. Stats from (https://www.sbert.net/docs/pretrained_models.html). To change the model from the default you can set the environment variable OPENEDAI_EMBEDDING_MODEL, ex. "OPENEDAI_EMBEDDING_MODEL=all-MiniLM-L6-v2".
-
-Warning: You cannot mix embeddings from different models even if they have the same dimensions. They are not comparable.
-
-### Client Application Setup
-
-
-Almost everything you use it with will require you to set a dummy OpenAI API key environment variable.
-
-With the [official python openai client](https://github.com/openai/openai-python), you can set the OPENAI_API_BASE environment variable before you import the openai module, like so:
-
-```
-OPENAI_API_KEY=sk-111111111111111111111111111111111111111111111111
-OPENAI_API_BASE=http://127.0.0.1:5001/v1
-```
-
-If needed, replace 127.0.0.1 with the IP/port of your server.
-
-If using .env files to save the OPENAI_API_BASE and OPENAI_API_KEY variables, you can ensure compatibility by loading the .env file before loading the openai module, like so in python:
-
-```
-from dotenv import load_dotenv
-load_dotenv()
-import openai
-```
-
-With the [official Node.js openai client](https://github.com/openai/openai-node) it is slightly more more complex because the environment variables are not used by default, so small source code changes may be required to use the environment variables, like so:
-
-```
-const openai = OpenAI(Configuration({
- apiKey: process.env.OPENAI_API_KEY,
- basePath: process.env.OPENAI_API_BASE,
-}));
-```
-
-For apps made with the [chatgpt-api Node.js client library](https://github.com/transitive-bullshit/chatgpt-api):
-
-```
-const api = new ChatGPTAPI({
- apiKey: process.env.OPENAI_API_KEY,
- apiBaseUrl: process.env.OPENAI_API_BASE,
-})
-```
-
-## API Documentation & Examples
-
-The OpenAI API is well documented, you can view the documentation here: https://platform.openai.com/docs/api-reference
-
-Examples of how to use the Completions API in Python can be found here: https://platform.openai.com/examples
-Not all of them will work with all models unfortunately, See the notes on Models for how to get the best results.
-
-Here is a simple python example of how you can use the Edit endpoint as a translator.
-
-```python
-import openai
-response = openai.Edit.create(
- model="x",
- instruction="Translate this into French",
- input="Our mission is to ensure that artificial general intelligence benefits all of humanity.",
-)
-print(response['choices'][0]['text'])
-# Sample Output:
-# Notre mission est de garantir que l'intelligence artificielle généralisée profite à tous les membres de l'humanité.
-```
-
-
-
-## Compatibility & not so compatibility
-
-| API endpoint | tested with | notes |
-| --- | --- | --- |
-| /v1/models | openai.Model.list() | Lists models, Currently loaded model first, plus some compatibility options |
-| /v1/models/{id} | openai.Model.get() | returns whatever you ask for, model does nothing yet anyways |
-| /v1/text_completion | openai.Completion.create() | the most tested, only supports single string input so far, variable quality based on the model |
-| /v1/chat/completions | openai.ChatCompletion.create() | Quality depends a lot on the model |
-| /v1/edits | openai.Edit.create() | Works the best of all, perfect for instruction following models |
-| /v1/images/generations | openai.Image.create() | Bare bones, no model configuration, response_format='b64_json' only. |
-| /v1/embeddings | openai.Embedding.create() | Using Sentence Transformer, dimensions are different and may never be directly comparable to openai embeddings. |
-| /v1/moderations | openai.Moderation.create() | does nothing. successfully. |
-| /v1/completions | openai api completions.create | Legacy endpoint (v0.25) |
-| /v1/engines/*/embeddings | python-openai v0.25 | Legacy endpoint |
-| /v1/engines/*/generate | openai engines.generate | Legacy endpoint |
-| /v1/engines | openai engines.list | Legacy Lists models |
-| /v1/engines/{model_name} | openai engines.get -i {model_name} | You can use this legacy endpoint to load models via the api |
-| /v1/images/edits | openai.Image.create_edit() | not yet supported |
-| /v1/images/variations | openai.Image.create_variation() | not yet supported |
-| /v1/audio/\* | openai.Audio.\* | not yet supported |
-| /v1/files\* | openai.Files.\* | not yet supported |
-| /v1/fine-tunes\* | openai.FineTune.\* | not yet supported |
-| /v1/search | openai.search, engines.search | not yet supported |
-
-The model name setting is ignored in completions, but you may need to adjust the maximum token length to fit the model (ie. set to <2048 tokens instead of 4096, 8k, etc). To mitigate some of this, the max_tokens value is halved until it is less than truncation_length for the model (typically 2k).
-
-Streaming, temperature, top_p, max_tokens, stop, should all work as expected, but not all parameters are mapped correctly.
-
-Some hacky mappings:
-
-| OpenAI | text-generation-webui | note |
-| --- | --- | --- |
-| frequency_penalty | encoder_repetition_penalty | this seems to operate with a different scale and defaults, I tried to scale it based on range & defaults, but the results are terrible. hardcoded to 1.18 until there is a better way |
-| presence_penalty | repetition_penalty | same issues as frequency_penalty, hardcoded to 1.0 |
-| best_of | top_k | default is 1 |
-| stop | custom_stopping_strings | this is also stuffed with ['\n###', "\n{user prompt}", "{user prompt}" ] for good measure. |
-| n | 1 | variations are not supported yet. |
-| 1 | num_beams | hardcoded to 1 |
-| 1.0 | typical_p | hardcoded to 1.0 |
-| max_tokens | max_new_tokens | For Text Completions max_tokens is set smaller than the truncation_length minus the prompt length. This can cause no input to be generated if the prompt is too large. For ChatCompletions, the older chat messages may be dropped to fit the max_new_tokens requested |
-| logprobs | - | not supported yet |
-| logit_bias | - | not supported yet |
-| messages.name | - | not supported yet |
-| user | - | not supported yet |
-| functions/function_call | - | function calls are not supported yet |
-
-defaults are mostly from openai, so are different. I use the openai defaults where I can and try to scale them to the webui defaults with the same intent.
-
-### Applications
-
-Almost everything needs the OPENAI_API_KEY environment variable set, for example:
-```
-OPENAI_API_KEY=sk-111111111111111111111111111111111111111111111111
-```
-Some apps are picky about key format, but 'dummy' or 'sk-dummy' also work in most cases.
-Most application will work if you also set:
-```
-OPENAI_API_BASE=http://127.0.0.1:5001/v1
-```
-but there are some exceptions.
-
-| Compatibility | Application/Library | url | notes / setting |
-| --- | --- | --- | --- |
-| ✅❌ | openai-python (v0.25+) | https://github.com/openai/openai-python | only the endpoints from above are working. OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-| ✅❌ | openai-node | https://github.com/openai/openai-node | only the endpoints from above are working. environment variables don't work by default, but can be configured (see above) |
-| ✅❌ | chatgpt-api | https://github.com/transitive-bullshit/chatgpt-api | only the endpoints from above are working. environment variables don't work by default, but can be configured (see above) |
-| ✅ | anse | https://github.com/anse-app/anse | API Key & URL configurable in UI |
-| ✅ | shell_gpt | https://github.com/TheR1D/shell_gpt | OPENAI_API_HOST=http://127.0.0.1:5001 |
-| ✅ | gpt-shell | https://github.com/jla/gpt-shell | OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-| ✅ | gpt-discord-bot | https://github.com/openai/gpt-discord-bot | OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-| ✅ | OpenAI for Notepad++ | https://github.com/Krazal/nppopenai | api_url=http://127.0.0.1:5001 in the config file, or environment variables |
-| ✅ | vscode-openai | https://marketplace.visualstudio.com/items?itemName=AndrewButson.vscode-openai | OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-| ✅❌ | langchain | https://github.com/hwchase17/langchain | OPENAI_API_BASE=http://127.0.0.1:5001/v1 even with a good 30B-4bit model the result is poor so far. It assumes zero shot python/json coding. Some model tailored prompt formatting improves results greatly. |
-| ✅❌ | Auto-GPT | https://github.com/Significant-Gravitas/Auto-GPT | OPENAI_API_BASE=http://127.0.0.1:5001/v1 Same issues as langchain. Also assumes a 4k+ context |
-| ✅❌ | babyagi | https://github.com/yoheinakajima/babyagi | OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-
-## Future plans
-* better error handling
-* model changing, esp. something for swapping loras or embedding models
-* consider switching to FastAPI + starlette for SSE (openai SSE seems non-standard)
-* do something about rate limiting or locking requests for completions, most systems will only be able handle a single request at a time before OOM
-
-## Bugs? Feedback? Comments? Pull requests?
-
-To enable debugging and get copious output you can set the OPENEDAI_DEBUG=1 environment variable.
-
-Are all appreciated, please @matatonic and I'll try to get back to you as soon as possible.
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_datasets/MJ_train.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_datasets/MJ_train.py
deleted file mode 100644
index be42cc47035d02403a036330eb0af7d0058b8675..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_datasets/MJ_train.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Text Recognition Training set, including:
-# Synthetic Datasets: Syn90k
-
-train_root = 'data/mixture/Syn90k'
-
-train_img_prefix = f'{train_root}/mnt/ramdisk/max/90kDICT32px'
-train_ann_file = f'{train_root}/label.lmdb'
-
-train = dict(
- type='OCRDataset',
- img_prefix=train_img_prefix,
- ann_file=train_ann_file,
- loader=dict(
- type='AnnFileLoader',
- repeat=1,
- file_format='lmdb',
- parser=dict(type='LineJsonParser', keys=['filename', 'text'])),
- pipeline=None,
- test_mode=False)
-
-train_list = [train]
diff --git a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/models/imagebind_model.py b/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/models/imagebind_model.py
deleted file mode 100644
index 1142cc15571830f4d148db8f8cf85f47e0b4a6bb..0000000000000000000000000000000000000000
--- a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/models/imagebind_model.py
+++ /dev/null
@@ -1,527 +0,0 @@
-#!/usr/bin/env python3
-# Portions Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-from functools import partial
-from types import SimpleNamespace
-
-import torch
-import torch.nn as nn
-# from pytorch_lightning.utilities import rank_zero_only
-from .helpers import (EinOpsRearrange, LearnableLogitScaling, Normalize,
- SelectElement, SelectEOSAndProject)
-from .multimodal_preprocessors import (AudioPreprocessor,
- IMUPreprocessor, PadIm2Video,
- PatchEmbedGeneric,
- RGBDTPreprocessor,
- SpatioTemporalPosEmbeddingHelper,
- TextPreprocessor,
- ThermalPreprocessor)
-from .transformer import MultiheadAttention, SimpleTransformer
-
-ModalityType = SimpleNamespace(
- VISION="vision",
- TEXT="text",
- AUDIO="audio",
- THERMAL="thermal",
- DEPTH="depth",
- IMU="imu",
- POINT="point",
-)
-
-
-class ImageBindModel(nn.Module):
- def __init__(
- self,
- video_frames=2,
- kernel_size=(2, 14, 14),
- audio_kernel_size=16,
- audio_stride=10,
- out_embed_dim=768,
- vision_embed_dim=1024,
- vision_num_blocks=24,
- vision_num_heads=16,
- audio_embed_dim=768,
- audio_num_blocks=12,
- audio_num_heads=12,
- audio_num_mel_bins=128,
- audio_target_len=204,
- audio_drop_path=0.1,
- text_embed_dim=768,
- text_num_blocks=12,
- text_num_heads=12,
- depth_embed_dim=384,
- depth_kernel_size=16,
- depth_num_blocks=12,
- depth_num_heads=8,
- depth_drop_path=0.0,
- thermal_embed_dim=768,
- thermal_kernel_size=16,
- thermal_num_blocks=12,
- thermal_num_heads=12,
- thermal_drop_path=0.0,
- imu_embed_dim=512,
- imu_kernel_size=8,
- imu_num_blocks=6,
- imu_num_heads=8,
- imu_drop_path=0.7,
- layers = [7,15,23,31]
- ):
- super().__init__()
-
- self.out_layers = layers
-
- self.modality_preprocessors = self._create_modality_preprocessors(
- video_frames,
- vision_embed_dim,
- kernel_size,
- text_embed_dim,
- audio_embed_dim,
- audio_kernel_size,
- audio_stride,
- audio_num_mel_bins,
- audio_target_len,
- depth_embed_dim,
- depth_kernel_size,
- thermal_embed_dim,
- thermal_kernel_size,
- imu_embed_dim,
- )
-
- self.modality_trunks = self._create_modality_trunks(
- vision_embed_dim,
- vision_num_blocks,
- vision_num_heads,
- text_embed_dim,
- text_num_blocks,
- text_num_heads,
- audio_embed_dim,
- audio_num_blocks,
- audio_num_heads,
- audio_drop_path,
- depth_embed_dim,
- depth_num_blocks,
- depth_num_heads,
- depth_drop_path,
- thermal_embed_dim,
- thermal_num_blocks,
- thermal_num_heads,
- thermal_drop_path,
- imu_embed_dim,
- imu_num_blocks,
- imu_num_heads,
- imu_drop_path,
- )
-
- self.modality_heads = self._create_modality_heads(
- out_embed_dim,
- vision_embed_dim,
- text_embed_dim,
- audio_embed_dim,
- depth_embed_dim,
- thermal_embed_dim,
- imu_embed_dim,
- )
-
- self.modality_postprocessors = self._create_modality_postprocessors(
- out_embed_dim
- )
-
-
- def _create_modality_preprocessors(
- self,
- video_frames=2,
- vision_embed_dim=1024,
- kernel_size=(2, 14, 14),
- text_embed_dim=768,
- audio_embed_dim=768,
- audio_kernel_size=16,
- audio_stride=10,
- audio_num_mel_bins=128,
- audio_target_len=204,
- depth_embed_dim=768,
- depth_kernel_size=16,
- thermal_embed_dim=768,
- thermal_kernel_size=16,
- imu_embed_dim=512,
- ):
- rgbt_stem = PatchEmbedGeneric(
- proj_stem=[
- PadIm2Video(pad_type="repeat", ntimes=2),
- nn.Conv3d(
- in_channels=3,
- kernel_size=kernel_size,
- out_channels=vision_embed_dim,
- stride=kernel_size,
- bias=False,
- ),
- ]
- )
- rgbt_preprocessor = RGBDTPreprocessor(
- img_size=[3, video_frames, 224, 224],
- num_cls_tokens=1,
- pos_embed_fn=partial(SpatioTemporalPosEmbeddingHelper, learnable=True),
- rgbt_stem=rgbt_stem,
- depth_stem=None,
- )
-
- text_preprocessor = TextPreprocessor(
- context_length=77,
- vocab_size=49408,
- embed_dim=text_embed_dim,
- causal_masking=True,
- )
-
- audio_stem = PatchEmbedGeneric(
- proj_stem=[
- nn.Conv2d(
- in_channels=1,
- kernel_size=audio_kernel_size,
- stride=audio_stride,
- out_channels=audio_embed_dim,
- bias=False,
- ),
- ],
- norm_layer=nn.LayerNorm(normalized_shape=audio_embed_dim),
- )
- audio_preprocessor = AudioPreprocessor(
- img_size=[1, audio_num_mel_bins, audio_target_len],
- num_cls_tokens=1,
- pos_embed_fn=partial(SpatioTemporalPosEmbeddingHelper, learnable=True),
- audio_stem=audio_stem,
- )
-
- depth_stem = PatchEmbedGeneric(
- [
- nn.Conv2d(
- kernel_size=depth_kernel_size,
- in_channels=1,
- out_channels=depth_embed_dim,
- stride=depth_kernel_size,
- bias=False,
- ),
- ],
- norm_layer=nn.LayerNorm(normalized_shape=depth_embed_dim),
- )
-
- depth_preprocessor = RGBDTPreprocessor(
- img_size=[1, 224, 224],
- num_cls_tokens=1,
- pos_embed_fn=partial(SpatioTemporalPosEmbeddingHelper, learnable=True),
- rgbt_stem=None,
- depth_stem=depth_stem,
- )
-
- thermal_stem = PatchEmbedGeneric(
- [
- nn.Conv2d(
- kernel_size=thermal_kernel_size,
- in_channels=1,
- out_channels=thermal_embed_dim,
- stride=thermal_kernel_size,
- bias=False,
- ),
- ],
- norm_layer=nn.LayerNorm(normalized_shape=thermal_embed_dim),
- )
- thermal_preprocessor = ThermalPreprocessor(
- img_size=[1, 224, 224],
- num_cls_tokens=1,
- pos_embed_fn=partial(SpatioTemporalPosEmbeddingHelper, learnable=True),
- thermal_stem=thermal_stem,
- )
-
- imu_stem = PatchEmbedGeneric(
- [
- nn.Linear(
- in_features=48,
- out_features=imu_embed_dim,
- bias=False,
- ),
- ],
- norm_layer=nn.LayerNorm(normalized_shape=imu_embed_dim),
- )
-
- imu_preprocessor = IMUPreprocessor(
- img_size=[6, 2000],
- num_cls_tokens=1,
- kernel_size=8,
- embed_dim=imu_embed_dim,
- pos_embed_fn=partial(SpatioTemporalPosEmbeddingHelper, learnable=True),
- imu_stem=imu_stem,
- )
-
- modality_preprocessors = {
- ModalityType.VISION: rgbt_preprocessor,
- ModalityType.TEXT: text_preprocessor,
- ModalityType.AUDIO: audio_preprocessor,
- ModalityType.DEPTH: depth_preprocessor,
- ModalityType.THERMAL: thermal_preprocessor,
- ModalityType.IMU: imu_preprocessor,
- }
-
- return nn.ModuleDict(modality_preprocessors)
-
- def _create_modality_trunks(
- self,
- vision_embed_dim=1024,
- vision_num_blocks=24,
- vision_num_heads=16,
- text_embed_dim=768,
- text_num_blocks=12,
- text_num_heads=12,
- audio_embed_dim=768,
- audio_num_blocks=12,
- audio_num_heads=12,
- audio_drop_path=0.0,
- depth_embed_dim=768,
- depth_num_blocks=12,
- depth_num_heads=12,
- depth_drop_path=0.0,
- thermal_embed_dim=768,
- thermal_num_blocks=12,
- thermal_num_heads=12,
- thermal_drop_path=0.0,
- imu_embed_dim=512,
- imu_num_blocks=6,
- imu_num_heads=8,
- imu_drop_path=0.7,
- ):
- def instantiate_trunk(
- embed_dim, num_blocks, num_heads, pre_transformer_ln, add_bias_kv, drop_path
- ):
- return SimpleTransformer(
- embed_dim=embed_dim,
- num_blocks=num_blocks,
- ffn_dropout_rate=0.0,
- drop_path_rate=drop_path,
- attn_target=partial(
- MultiheadAttention,
- embed_dim=embed_dim,
- num_heads=num_heads,
- bias=True,
- add_bias_kv=add_bias_kv,
- ),
- pre_transformer_layer=nn.Sequential(
- nn.LayerNorm(embed_dim, eps=1e-6)
- if pre_transformer_ln
- else nn.Identity(),
- EinOpsRearrange("b l d -> l b d"),
- ),
- post_transformer_layer=EinOpsRearrange("l b d -> b l d"),
- )
-
- modality_trunks = {}
- modality_trunks[ModalityType.VISION] = instantiate_trunk(
- vision_embed_dim,
- vision_num_blocks,
- vision_num_heads,
- pre_transformer_ln=True,
- add_bias_kv=False,
- drop_path=0.0,
- )
- modality_trunks[ModalityType.TEXT] = instantiate_trunk(
- text_embed_dim,
- text_num_blocks,
- text_num_heads,
- pre_transformer_ln=False,
- add_bias_kv=False,
- drop_path=0.0,
- )
- modality_trunks[ModalityType.AUDIO] = instantiate_trunk(
- audio_embed_dim,
- audio_num_blocks,
- audio_num_heads,
- pre_transformer_ln=False,
- add_bias_kv=True,
- drop_path=audio_drop_path,
- )
- modality_trunks[ModalityType.DEPTH] = instantiate_trunk(
- depth_embed_dim,
- depth_num_blocks,
- depth_num_heads,
- pre_transformer_ln=False,
- add_bias_kv=True,
- drop_path=depth_drop_path,
- )
- modality_trunks[ModalityType.THERMAL] = instantiate_trunk(
- thermal_embed_dim,
- thermal_num_blocks,
- thermal_num_heads,
- pre_transformer_ln=False,
- add_bias_kv=True,
- drop_path=thermal_drop_path,
- )
- modality_trunks[ModalityType.IMU] = instantiate_trunk(
- imu_embed_dim,
- imu_num_blocks,
- imu_num_heads,
- pre_transformer_ln=False,
- add_bias_kv=True,
- drop_path=imu_drop_path,
- )
-
- return nn.ModuleDict(modality_trunks)
-
- def _create_modality_heads(
- self,
- out_embed_dim,
- vision_embed_dim,
- text_embed_dim,
- audio_embed_dim,
- depth_embed_dim,
- thermal_embed_dim,
- imu_embed_dim,
- ):
- modality_heads = {}
-
- modality_heads[ModalityType.VISION] = nn.Sequential(
- nn.LayerNorm(normalized_shape=vision_embed_dim, eps=1e-6),
- SelectElement(index=0),
- nn.Linear(vision_embed_dim, out_embed_dim, bias=False),
- )
-
- modality_heads[ModalityType.TEXT] = SelectEOSAndProject(
- proj=nn.Sequential(
- nn.LayerNorm(normalized_shape=text_embed_dim, eps=1e-6),
- nn.Linear(text_embed_dim, out_embed_dim, bias=False),
- )
- )
-
- modality_heads[ModalityType.AUDIO] = nn.Sequential(
- nn.LayerNorm(normalized_shape=audio_embed_dim, eps=1e-6),
- SelectElement(index=0),
- nn.Linear(audio_embed_dim, out_embed_dim, bias=False),
- )
-
- modality_heads[ModalityType.DEPTH] = nn.Sequential(
- nn.LayerNorm(normalized_shape=depth_embed_dim, eps=1e-6),
- SelectElement(index=0),
- nn.Linear(depth_embed_dim, out_embed_dim, bias=False),
- )
-
- modality_heads[ModalityType.THERMAL] = nn.Sequential(
- nn.LayerNorm(normalized_shape=thermal_embed_dim, eps=1e-6),
- SelectElement(index=0),
- nn.Linear(thermal_embed_dim, out_embed_dim, bias=False),
- )
-
- modality_heads[ModalityType.IMU] = nn.Sequential(
- nn.LayerNorm(normalized_shape=imu_embed_dim, eps=1e-6),
- SelectElement(index=0),
- nn.Dropout(p=0.5),
- nn.Linear(imu_embed_dim, out_embed_dim, bias=False),
- )
-
- return nn.ModuleDict(modality_heads)
-
- def _create_modality_postprocessors(self, out_embed_dim):
- modality_postprocessors = {}
-
- modality_postprocessors[ModalityType.VISION] = Normalize(dim=-1)
- modality_postprocessors[ModalityType.TEXT] = nn.Sequential(
- Normalize(dim=-1), LearnableLogitScaling(learnable=True)
- )
- modality_postprocessors[ModalityType.AUDIO] = nn.Sequential(
- Normalize(dim=-1),
- LearnableLogitScaling(logit_scale_init=20.0, learnable=False),
- )
- modality_postprocessors[ModalityType.DEPTH] = nn.Sequential(
- Normalize(dim=-1),
- LearnableLogitScaling(logit_scale_init=5.0, learnable=False),
- )
- modality_postprocessors[ModalityType.THERMAL] = nn.Sequential(
- Normalize(dim=-1),
- LearnableLogitScaling(logit_scale_init=10.0, learnable=False),
- )
- modality_postprocessors[ModalityType.IMU] = nn.Sequential(
- Normalize(dim=-1),
- LearnableLogitScaling(logit_scale_init=5.0, learnable=False),
- )
-
- return nn.ModuleDict(modality_postprocessors)
-
- def forward(self, inputs):
- outputs = {}
- for modality_key, modality_value in inputs.items():
- reduce_list = (
- modality_value.ndim >= 5
- ) # Audio and Video inputs consist of multiple clips
- if reduce_list:
- B, S = modality_value.shape[:2]
- modality_value = modality_value.reshape(
- B * S, *modality_value.shape[2:]
- )
-
- if modality_value is not None:
- modality_value = self.modality_preprocessors[modality_key](
- **{modality_key: modality_value}
- )
- trunk_inputs = modality_value["trunk"]
- head_inputs = modality_value["head"]
-
- modality_value, modality_full_value = self.modality_trunks[modality_key](**trunk_inputs, out_layers=self.out_layers)
-
-
- modality_value = self.modality_heads[modality_key](
- modality_value, **head_inputs
- )
- modality_value = self.modality_postprocessors[modality_key](
- modality_value
- )
-
- if reduce_list:
- modality_value = modality_value.reshape(B, S, -1)
- modality_value = modality_value.mean(dim=1)
-
- outputs[modality_key] = modality_value, modality_full_value
-
- return outputs
-
-
-def imagebind_huge(args):
-
- if 'layers' in args:
- layers = args['layers']
- else:
- layers = [7,15,23,31]
-
- return ImageBindModel(
- vision_embed_dim=1280,
- vision_num_blocks=32,
- vision_num_heads=16,
- text_embed_dim=1024,
- text_num_blocks=24,
- text_num_heads=16,
- out_embed_dim=1024,
- audio_drop_path=0.1,
- imu_drop_path=0.7,
- layers = layers
- ), 1024
-
-
-def save_module(module_dict: nn.ModuleDict, module_name: str = "",
- checkpoint_dir: str = "./.checkpoints/full", postfix: str = "_last",
- extension: str = "pth"):
- try:
- torch.save(module_dict.state_dict(),
- os.path.join(checkpoint_dir, f"imagebind-{module_name}{postfix}.{extension}"))
- logging.info(f"Saved parameters for module {module_name} to {checkpoint_dir}.")
- except FileNotFoundError:
- logging.warning(f"Could not save module parameters for {module_name} to {checkpoint_dir}.")
-
-
-def load_module(module_dict: nn.ModuleDict, module_name: str = "",
- checkpoint_dir: str = "./.checkpoints/full", postfix: str = "_last",
- extension: str = "pth"):
- try:
- module_dict.load_state_dict(torch.load(
- os.path.join(checkpoint_dir, f"imagebind-{module_name}{postfix}.{extension}")), strict=False)
- logging.info(f"Loaded parameters for module {module_name} from {checkpoint_dir}.")
- except FileNotFoundError:
- logging.warning(f"Could not load module parameters for {module_name} from {checkpoint_dir}.")
\ No newline at end of file
diff --git a/spaces/Fantasy-Studio/Paint-by-Example/header.html b/spaces/Fantasy-Studio/Paint-by-Example/header.html
deleted file mode 100644
index ed39b639600f74b51103865829d8037eaa0a3e67..0000000000000000000000000000000000000000
--- a/spaces/Fantasy-Studio/Paint-by-Example/header.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
-
- Paint by Example 🎨
-
-
-
-
- Paint by Example, upload a source image and draw a mask for what you want to replace with a example image.
-
-
- Paper is available in Arxiv. If you like this demo, please help to ⭐ the Github Repo 😊.
-
-
You can skip the queue by duplicating this space and upgrading to gpu in settings:
-
-
\ No newline at end of file
diff --git a/spaces/Felix123456/bingo/src/components/markdown.tsx b/spaces/Felix123456/bingo/src/components/markdown.tsx
deleted file mode 100644
index d4491467a1f14d1d72e535caac9c40636054e5df..0000000000000000000000000000000000000000
--- a/spaces/Felix123456/bingo/src/components/markdown.tsx
+++ /dev/null
@@ -1,9 +0,0 @@
-import { FC, memo } from 'react'
-import ReactMarkdown, { Options } from 'react-markdown'
-
-export const MemoizedReactMarkdown: FC = memo(
- ReactMarkdown,
- (prevProps, nextProps) =>
- prevProps.children === nextProps.children &&
- prevProps.className === nextProps.className
-)
diff --git a/spaces/Ferion/image-matting-app/ppmatting/models/backbone/hrnet.py b/spaces/Ferion/image-matting-app/ppmatting/models/backbone/hrnet.py
deleted file mode 100644
index 96e23a77e656142a97c573feb501f983aecebbef..0000000000000000000000000000000000000000
--- a/spaces/Ferion/image-matting-app/ppmatting/models/backbone/hrnet.py
+++ /dev/null
@@ -1,835 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import math
-
-import paddle
-import paddle.nn as nn
-import paddle.nn.functional as F
-
-from paddleseg.cvlibs import manager, param_init
-from paddleseg.models import layers
-from paddleseg.utils import utils
-
-__all__ = [
- "HRNet_W18_Small_V1", "HRNet_W18_Small_V2", "HRNet_W18", "HRNet_W30",
- "HRNet_W32", "HRNet_W40", "HRNet_W44", "HRNet_W48", "HRNet_W60", "HRNet_W64"
-]
-
-
-class HRNet(nn.Layer):
- """
- The HRNet implementation based on PaddlePaddle.
-
- The original article refers to
- Jingdong Wang, et, al. "HRNet:Deep High-Resolution Representation Learning for Visual Recognition"
- (https://arxiv.org/pdf/1908.07919.pdf).
-
- Args:
- pretrained (str, optional): The path of pretrained model.
- stage1_num_modules (int, optional): Number of modules for stage1. Default 1.
- stage1_num_blocks (list, optional): Number of blocks per module for stage1. Default (4).
- stage1_num_channels (list, optional): Number of channels per branch for stage1. Default (64).
- stage2_num_modules (int, optional): Number of modules for stage2. Default 1.
- stage2_num_blocks (list, optional): Number of blocks per module for stage2. Default (4, 4).
- stage2_num_channels (list, optional): Number of channels per branch for stage2. Default (18, 36).
- stage3_num_modules (int, optional): Number of modules for stage3. Default 4.
- stage3_num_blocks (list, optional): Number of blocks per module for stage3. Default (4, 4, 4).
- stage3_num_channels (list, optional): Number of channels per branch for stage3. Default [18, 36, 72).
- stage4_num_modules (int, optional): Number of modules for stage4. Default 3.
- stage4_num_blocks (list, optional): Number of blocks per module for stage4. Default (4, 4, 4, 4).
- stage4_num_channels (list, optional): Number of channels per branch for stage4. Default (18, 36, 72. 144).
- has_se (bool, optional): Whether to use Squeeze-and-Excitation module. Default False.
- align_corners (bool, optional): An argument of F.interpolate. It should be set to False when the feature size is even,
- e.g. 1024x512, otherwise it is True, e.g. 769x769. Default: False.
- """
-
- def __init__(self,
- input_channels=3,
- pretrained=None,
- stage1_num_modules=1,
- stage1_num_blocks=(4, ),
- stage1_num_channels=(64, ),
- stage2_num_modules=1,
- stage2_num_blocks=(4, 4),
- stage2_num_channels=(18, 36),
- stage3_num_modules=4,
- stage3_num_blocks=(4, 4, 4),
- stage3_num_channels=(18, 36, 72),
- stage4_num_modules=3,
- stage4_num_blocks=(4, 4, 4, 4),
- stage4_num_channels=(18, 36, 72, 144),
- has_se=False,
- align_corners=False,
- padding_same=True):
- super(HRNet, self).__init__()
- self.pretrained = pretrained
- self.stage1_num_modules = stage1_num_modules
- self.stage1_num_blocks = stage1_num_blocks
- self.stage1_num_channels = stage1_num_channels
- self.stage2_num_modules = stage2_num_modules
- self.stage2_num_blocks = stage2_num_blocks
- self.stage2_num_channels = stage2_num_channels
- self.stage3_num_modules = stage3_num_modules
- self.stage3_num_blocks = stage3_num_blocks
- self.stage3_num_channels = stage3_num_channels
- self.stage4_num_modules = stage4_num_modules
- self.stage4_num_blocks = stage4_num_blocks
- self.stage4_num_channels = stage4_num_channels
- self.has_se = has_se
- self.align_corners = align_corners
-
- self.feat_channels = [i for i in stage4_num_channels]
- self.feat_channels = [64] + self.feat_channels
-
- self.conv_layer1_1 = layers.ConvBNReLU(
- in_channels=input_channels,
- out_channels=64,
- kernel_size=3,
- stride=2,
- padding=1 if not padding_same else 'same',
- bias_attr=False)
-
- self.conv_layer1_2 = layers.ConvBNReLU(
- in_channels=64,
- out_channels=64,
- kernel_size=3,
- stride=2,
- padding=1 if not padding_same else 'same',
- bias_attr=False)
-
- self.la1 = Layer1(
- num_channels=64,
- num_blocks=self.stage1_num_blocks[0],
- num_filters=self.stage1_num_channels[0],
- has_se=has_se,
- name="layer2",
- padding_same=padding_same)
-
- self.tr1 = TransitionLayer(
- in_channels=[self.stage1_num_channels[0] * 4],
- out_channels=self.stage2_num_channels,
- name="tr1",
- padding_same=padding_same)
-
- self.st2 = Stage(
- num_channels=self.stage2_num_channels,
- num_modules=self.stage2_num_modules,
- num_blocks=self.stage2_num_blocks,
- num_filters=self.stage2_num_channels,
- has_se=self.has_se,
- name="st2",
- align_corners=align_corners,
- padding_same=padding_same)
-
- self.tr2 = TransitionLayer(
- in_channels=self.stage2_num_channels,
- out_channels=self.stage3_num_channels,
- name="tr2",
- padding_same=padding_same)
- self.st3 = Stage(
- num_channels=self.stage3_num_channels,
- num_modules=self.stage3_num_modules,
- num_blocks=self.stage3_num_blocks,
- num_filters=self.stage3_num_channels,
- has_se=self.has_se,
- name="st3",
- align_corners=align_corners,
- padding_same=padding_same)
-
- self.tr3 = TransitionLayer(
- in_channels=self.stage3_num_channels,
- out_channels=self.stage4_num_channels,
- name="tr3",
- padding_same=padding_same)
- self.st4 = Stage(
- num_channels=self.stage4_num_channels,
- num_modules=self.stage4_num_modules,
- num_blocks=self.stage4_num_blocks,
- num_filters=self.stage4_num_channels,
- has_se=self.has_se,
- name="st4",
- align_corners=align_corners,
- padding_same=padding_same)
-
- self.init_weight()
-
- def forward(self, x):
- feat_list = []
- conv1 = self.conv_layer1_1(x)
- feat_list.append(conv1)
- conv2 = self.conv_layer1_2(conv1)
-
- la1 = self.la1(conv2)
-
- tr1 = self.tr1([la1])
- st2 = self.st2(tr1)
-
- tr2 = self.tr2(st2)
- st3 = self.st3(tr2)
-
- tr3 = self.tr3(st3)
- st4 = self.st4(tr3)
-
- feat_list = feat_list + st4
-
- return feat_list
-
- def init_weight(self):
- for layer in self.sublayers():
- if isinstance(layer, nn.Conv2D):
- param_init.normal_init(layer.weight, std=0.001)
- elif isinstance(layer, (nn.BatchNorm, nn.SyncBatchNorm)):
- param_init.constant_init(layer.weight, value=1.0)
- param_init.constant_init(layer.bias, value=0.0)
- if self.pretrained is not None:
- utils.load_pretrained_model(self, self.pretrained)
-
-
-class Layer1(nn.Layer):
- def __init__(self,
- num_channels,
- num_filters,
- num_blocks,
- has_se=False,
- name=None,
- padding_same=True):
- super(Layer1, self).__init__()
-
- self.bottleneck_block_list = []
-
- for i in range(num_blocks):
- bottleneck_block = self.add_sublayer(
- "bb_{}_{}".format(name, i + 1),
- BottleneckBlock(
- num_channels=num_channels if i == 0 else num_filters * 4,
- num_filters=num_filters,
- has_se=has_se,
- stride=1,
- downsample=True if i == 0 else False,
- name=name + '_' + str(i + 1),
- padding_same=padding_same))
- self.bottleneck_block_list.append(bottleneck_block)
-
- def forward(self, x):
- conv = x
- for block_func in self.bottleneck_block_list:
- conv = block_func(conv)
- return conv
-
-
-class TransitionLayer(nn.Layer):
- def __init__(self, in_channels, out_channels, name=None, padding_same=True):
- super(TransitionLayer, self).__init__()
-
- num_in = len(in_channels)
- num_out = len(out_channels)
- self.conv_bn_func_list = []
- for i in range(num_out):
- residual = None
- if i < num_in:
- if in_channels[i] != out_channels[i]:
- residual = self.add_sublayer(
- "transition_{}_layer_{}".format(name, i + 1),
- layers.ConvBNReLU(
- in_channels=in_channels[i],
- out_channels=out_channels[i],
- kernel_size=3,
- padding=1 if not padding_same else 'same',
- bias_attr=False))
- else:
- residual = self.add_sublayer(
- "transition_{}_layer_{}".format(name, i + 1),
- layers.ConvBNReLU(
- in_channels=in_channels[-1],
- out_channels=out_channels[i],
- kernel_size=3,
- stride=2,
- padding=1 if not padding_same else 'same',
- bias_attr=False))
- self.conv_bn_func_list.append(residual)
-
- def forward(self, x):
- outs = []
- for idx, conv_bn_func in enumerate(self.conv_bn_func_list):
- if conv_bn_func is None:
- outs.append(x[idx])
- else:
- if idx < len(x):
- outs.append(conv_bn_func(x[idx]))
- else:
- outs.append(conv_bn_func(x[-1]))
- return outs
-
-
-class Branches(nn.Layer):
- def __init__(self,
- num_blocks,
- in_channels,
- out_channels,
- has_se=False,
- name=None,
- padding_same=True):
- super(Branches, self).__init__()
-
- self.basic_block_list = []
-
- for i in range(len(out_channels)):
- self.basic_block_list.append([])
- for j in range(num_blocks[i]):
- in_ch = in_channels[i] if j == 0 else out_channels[i]
- basic_block_func = self.add_sublayer(
- "bb_{}_branch_layer_{}_{}".format(name, i + 1, j + 1),
- BasicBlock(
- num_channels=in_ch,
- num_filters=out_channels[i],
- has_se=has_se,
- name=name + '_branch_layer_' + str(i + 1) + '_' +
- str(j + 1),
- padding_same=padding_same))
- self.basic_block_list[i].append(basic_block_func)
-
- def forward(self, x):
- outs = []
- for idx, input in enumerate(x):
- conv = input
- for basic_block_func in self.basic_block_list[idx]:
- conv = basic_block_func(conv)
- outs.append(conv)
- return outs
-
-
-class BottleneckBlock(nn.Layer):
- def __init__(self,
- num_channels,
- num_filters,
- has_se,
- stride=1,
- downsample=False,
- name=None,
- padding_same=True):
- super(BottleneckBlock, self).__init__()
-
- self.has_se = has_se
- self.downsample = downsample
-
- self.conv1 = layers.ConvBNReLU(
- in_channels=num_channels,
- out_channels=num_filters,
- kernel_size=1,
- bias_attr=False)
-
- self.conv2 = layers.ConvBNReLU(
- in_channels=num_filters,
- out_channels=num_filters,
- kernel_size=3,
- stride=stride,
- padding=1 if not padding_same else 'same',
- bias_attr=False)
-
- self.conv3 = layers.ConvBN(
- in_channels=num_filters,
- out_channels=num_filters * 4,
- kernel_size=1,
- bias_attr=False)
-
- if self.downsample:
- self.conv_down = layers.ConvBN(
- in_channels=num_channels,
- out_channels=num_filters * 4,
- kernel_size=1,
- bias_attr=False)
-
- if self.has_se:
- self.se = SELayer(
- num_channels=num_filters * 4,
- num_filters=num_filters * 4,
- reduction_ratio=16,
- name=name + '_fc')
-
- self.add = layers.Add()
- self.relu = layers.Activation("relu")
-
- def forward(self, x):
- residual = x
- conv1 = self.conv1(x)
- conv2 = self.conv2(conv1)
- conv3 = self.conv3(conv2)
-
- if self.downsample:
- residual = self.conv_down(x)
-
- if self.has_se:
- conv3 = self.se(conv3)
-
- y = self.add(conv3, residual)
- y = self.relu(y)
- return y
-
-
-class BasicBlock(nn.Layer):
- def __init__(self,
- num_channels,
- num_filters,
- stride=1,
- has_se=False,
- downsample=False,
- name=None,
- padding_same=True):
- super(BasicBlock, self).__init__()
-
- self.has_se = has_se
- self.downsample = downsample
-
- self.conv1 = layers.ConvBNReLU(
- in_channels=num_channels,
- out_channels=num_filters,
- kernel_size=3,
- stride=stride,
- padding=1 if not padding_same else 'same',
- bias_attr=False)
- self.conv2 = layers.ConvBN(
- in_channels=num_filters,
- out_channels=num_filters,
- kernel_size=3,
- padding=1 if not padding_same else 'same',
- bias_attr=False)
-
- if self.downsample:
- self.conv_down = layers.ConvBNReLU(
- in_channels=num_channels,
- out_channels=num_filters,
- kernel_size=1,
- bias_attr=False)
-
- if self.has_se:
- self.se = SELayer(
- num_channels=num_filters,
- num_filters=num_filters,
- reduction_ratio=16,
- name=name + '_fc')
-
- self.add = layers.Add()
- self.relu = layers.Activation("relu")
-
- def forward(self, x):
- residual = x
- conv1 = self.conv1(x)
- conv2 = self.conv2(conv1)
-
- if self.downsample:
- residual = self.conv_down(x)
-
- if self.has_se:
- conv2 = self.se(conv2)
-
- y = self.add(conv2, residual)
- y = self.relu(y)
- return y
-
-
-class SELayer(nn.Layer):
- def __init__(self, num_channels, num_filters, reduction_ratio, name=None):
- super(SELayer, self).__init__()
-
- self.pool2d_gap = nn.AdaptiveAvgPool2D(1)
-
- self._num_channels = num_channels
-
- med_ch = int(num_channels / reduction_ratio)
- stdv = 1.0 / math.sqrt(num_channels * 1.0)
- self.squeeze = nn.Linear(
- num_channels,
- med_ch,
- weight_attr=paddle.ParamAttr(
- initializer=nn.initializer.Uniform(-stdv, stdv)))
-
- stdv = 1.0 / math.sqrt(med_ch * 1.0)
- self.excitation = nn.Linear(
- med_ch,
- num_filters,
- weight_attr=paddle.ParamAttr(
- initializer=nn.initializer.Uniform(-stdv, stdv)))
-
- def forward(self, x):
- pool = self.pool2d_gap(x)
- pool = paddle.reshape(pool, shape=[-1, self._num_channels])
- squeeze = self.squeeze(pool)
- squeeze = F.relu(squeeze)
- excitation = self.excitation(squeeze)
- excitation = F.sigmoid(excitation)
- excitation = paddle.reshape(
- excitation, shape=[-1, self._num_channels, 1, 1])
- out = x * excitation
- return out
-
-
-class Stage(nn.Layer):
- def __init__(self,
- num_channels,
- num_modules,
- num_blocks,
- num_filters,
- has_se=False,
- multi_scale_output=True,
- name=None,
- align_corners=False,
- padding_same=True):
- super(Stage, self).__init__()
-
- self._num_modules = num_modules
-
- self.stage_func_list = []
- for i in range(num_modules):
- if i == num_modules - 1 and not multi_scale_output:
- stage_func = self.add_sublayer(
- "stage_{}_{}".format(name, i + 1),
- HighResolutionModule(
- num_channels=num_channels,
- num_blocks=num_blocks,
- num_filters=num_filters,
- has_se=has_se,
- multi_scale_output=False,
- name=name + '_' + str(i + 1),
- align_corners=align_corners,
- padding_same=padding_same))
- else:
- stage_func = self.add_sublayer(
- "stage_{}_{}".format(name, i + 1),
- HighResolutionModule(
- num_channels=num_channels,
- num_blocks=num_blocks,
- num_filters=num_filters,
- has_se=has_se,
- name=name + '_' + str(i + 1),
- align_corners=align_corners,
- padding_same=padding_same))
-
- self.stage_func_list.append(stage_func)
-
- def forward(self, x):
- out = x
- for idx in range(self._num_modules):
- out = self.stage_func_list[idx](out)
- return out
-
-
-class HighResolutionModule(nn.Layer):
- def __init__(self,
- num_channels,
- num_blocks,
- num_filters,
- has_se=False,
- multi_scale_output=True,
- name=None,
- align_corners=False,
- padding_same=True):
- super(HighResolutionModule, self).__init__()
-
- self.branches_func = Branches(
- num_blocks=num_blocks,
- in_channels=num_channels,
- out_channels=num_filters,
- has_se=has_se,
- name=name,
- padding_same=padding_same)
-
- self.fuse_func = FuseLayers(
- in_channels=num_filters,
- out_channels=num_filters,
- multi_scale_output=multi_scale_output,
- name=name,
- align_corners=align_corners,
- padding_same=padding_same)
-
- def forward(self, x):
- out = self.branches_func(x)
- out = self.fuse_func(out)
- return out
-
-
-class FuseLayers(nn.Layer):
- def __init__(self,
- in_channels,
- out_channels,
- multi_scale_output=True,
- name=None,
- align_corners=False,
- padding_same=True):
- super(FuseLayers, self).__init__()
-
- self._actual_ch = len(in_channels) if multi_scale_output else 1
- self._in_channels = in_channels
- self.align_corners = align_corners
-
- self.residual_func_list = []
- for i in range(self._actual_ch):
- for j in range(len(in_channels)):
- if j > i:
- residual_func = self.add_sublayer(
- "residual_{}_layer_{}_{}".format(name, i + 1, j + 1),
- layers.ConvBN(
- in_channels=in_channels[j],
- out_channels=out_channels[i],
- kernel_size=1,
- bias_attr=False))
- self.residual_func_list.append(residual_func)
- elif j < i:
- pre_num_filters = in_channels[j]
- for k in range(i - j):
- if k == i - j - 1:
- residual_func = self.add_sublayer(
- "residual_{}_layer_{}_{}_{}".format(
- name, i + 1, j + 1, k + 1),
- layers.ConvBN(
- in_channels=pre_num_filters,
- out_channels=out_channels[i],
- kernel_size=3,
- stride=2,
- padding=1 if not padding_same else 'same',
- bias_attr=False))
- pre_num_filters = out_channels[i]
- else:
- residual_func = self.add_sublayer(
- "residual_{}_layer_{}_{}_{}".format(
- name, i + 1, j + 1, k + 1),
- layers.ConvBNReLU(
- in_channels=pre_num_filters,
- out_channels=out_channels[j],
- kernel_size=3,
- stride=2,
- padding=1 if not padding_same else 'same',
- bias_attr=False))
- pre_num_filters = out_channels[j]
- self.residual_func_list.append(residual_func)
-
- def forward(self, x):
- outs = []
- residual_func_idx = 0
- for i in range(self._actual_ch):
- residual = x[i]
- residual_shape = paddle.shape(residual)[-2:]
- for j in range(len(self._in_channels)):
- if j > i:
- y = self.residual_func_list[residual_func_idx](x[j])
- residual_func_idx += 1
-
- y = F.interpolate(
- y,
- residual_shape,
- mode='bilinear',
- align_corners=self.align_corners)
- residual = residual + y
- elif j < i:
- y = x[j]
- for k in range(i - j):
- y = self.residual_func_list[residual_func_idx](y)
- residual_func_idx += 1
-
- residual = residual + y
-
- residual = F.relu(residual)
- outs.append(residual)
-
- return outs
-
-
-@manager.BACKBONES.add_component
-def HRNet_W18_Small_V1(**kwargs):
- model = HRNet(
- stage1_num_modules=1,
- stage1_num_blocks=[1],
- stage1_num_channels=[32],
- stage2_num_modules=1,
- stage2_num_blocks=[2, 2],
- stage2_num_channels=[16, 32],
- stage3_num_modules=1,
- stage3_num_blocks=[2, 2, 2],
- stage3_num_channels=[16, 32, 64],
- stage4_num_modules=1,
- stage4_num_blocks=[2, 2, 2, 2],
- stage4_num_channels=[16, 32, 64, 128],
- **kwargs)
- return model
-
-
-@manager.BACKBONES.add_component
-def HRNet_W18_Small_V2(**kwargs):
- model = HRNet(
- stage1_num_modules=1,
- stage1_num_blocks=[2],
- stage1_num_channels=[64],
- stage2_num_modules=1,
- stage2_num_blocks=[2, 2],
- stage2_num_channels=[18, 36],
- stage3_num_modules=3,
- stage3_num_blocks=[2, 2, 2],
- stage3_num_channels=[18, 36, 72],
- stage4_num_modules=2,
- stage4_num_blocks=[2, 2, 2, 2],
- stage4_num_channels=[18, 36, 72, 144],
- **kwargs)
- return model
-
-
-@manager.BACKBONES.add_component
-def HRNet_W18(**kwargs):
- model = HRNet(
- stage1_num_modules=1,
- stage1_num_blocks=[4],
- stage1_num_channels=[64],
- stage2_num_modules=1,
- stage2_num_blocks=[4, 4],
- stage2_num_channels=[18, 36],
- stage3_num_modules=4,
- stage3_num_blocks=[4, 4, 4],
- stage3_num_channels=[18, 36, 72],
- stage4_num_modules=3,
- stage4_num_blocks=[4, 4, 4, 4],
- stage4_num_channels=[18, 36, 72, 144],
- **kwargs)
- return model
-
-
-@manager.BACKBONES.add_component
-def HRNet_W30(**kwargs):
- model = HRNet(
- stage1_num_modules=1,
- stage1_num_blocks=[4],
- stage1_num_channels=[64],
- stage2_num_modules=1,
- stage2_num_blocks=[4, 4],
- stage2_num_channels=[30, 60],
- stage3_num_modules=4,
- stage3_num_blocks=[4, 4, 4],
- stage3_num_channels=[30, 60, 120],
- stage4_num_modules=3,
- stage4_num_blocks=[4, 4, 4, 4],
- stage4_num_channels=[30, 60, 120, 240],
- **kwargs)
- return model
-
-
-@manager.BACKBONES.add_component
-def HRNet_W32(**kwargs):
- model = HRNet(
- stage1_num_modules=1,
- stage1_num_blocks=[4],
- stage1_num_channels=[64],
- stage2_num_modules=1,
- stage2_num_blocks=[4, 4],
- stage2_num_channels=[32, 64],
- stage3_num_modules=4,
- stage3_num_blocks=[4, 4, 4],
- stage3_num_channels=[32, 64, 128],
- stage4_num_modules=3,
- stage4_num_blocks=[4, 4, 4, 4],
- stage4_num_channels=[32, 64, 128, 256],
- **kwargs)
- return model
-
-
-@manager.BACKBONES.add_component
-def HRNet_W40(**kwargs):
- model = HRNet(
- stage1_num_modules=1,
- stage1_num_blocks=[4],
- stage1_num_channels=[64],
- stage2_num_modules=1,
- stage2_num_blocks=[4, 4],
- stage2_num_channels=[40, 80],
- stage3_num_modules=4,
- stage3_num_blocks=[4, 4, 4],
- stage3_num_channels=[40, 80, 160],
- stage4_num_modules=3,
- stage4_num_blocks=[4, 4, 4, 4],
- stage4_num_channels=[40, 80, 160, 320],
- **kwargs)
- return model
-
-
-@manager.BACKBONES.add_component
-def HRNet_W44(**kwargs):
- model = HRNet(
- stage1_num_modules=1,
- stage1_num_blocks=[4],
- stage1_num_channels=[64],
- stage2_num_modules=1,
- stage2_num_blocks=[4, 4],
- stage2_num_channels=[44, 88],
- stage3_num_modules=4,
- stage3_num_blocks=[4, 4, 4],
- stage3_num_channels=[44, 88, 176],
- stage4_num_modules=3,
- stage4_num_blocks=[4, 4, 4, 4],
- stage4_num_channels=[44, 88, 176, 352],
- **kwargs)
- return model
-
-
-@manager.BACKBONES.add_component
-def HRNet_W48(**kwargs):
- model = HRNet(
- stage1_num_modules=1,
- stage1_num_blocks=[4],
- stage1_num_channels=[64],
- stage2_num_modules=1,
- stage2_num_blocks=[4, 4],
- stage2_num_channels=[48, 96],
- stage3_num_modules=4,
- stage3_num_blocks=[4, 4, 4],
- stage3_num_channels=[48, 96, 192],
- stage4_num_modules=3,
- stage4_num_blocks=[4, 4, 4, 4],
- stage4_num_channels=[48, 96, 192, 384],
- **kwargs)
- return model
-
-
-@manager.BACKBONES.add_component
-def HRNet_W60(**kwargs):
- model = HRNet(
- stage1_num_modules=1,
- stage1_num_blocks=[4],
- stage1_num_channels=[64],
- stage2_num_modules=1,
- stage2_num_blocks=[4, 4],
- stage2_num_channels=[60, 120],
- stage3_num_modules=4,
- stage3_num_blocks=[4, 4, 4],
- stage3_num_channels=[60, 120, 240],
- stage4_num_modules=3,
- stage4_num_blocks=[4, 4, 4, 4],
- stage4_num_channels=[60, 120, 240, 480],
- **kwargs)
- return model
-
-
-@manager.BACKBONES.add_component
-def HRNet_W64(**kwargs):
- model = HRNet(
- stage1_num_modules=1,
- stage1_num_blocks=[4],
- stage1_num_channels=[64],
- stage2_num_modules=1,
- stage2_num_blocks=[4, 4],
- stage2_num_channels=[64, 128],
- stage3_num_modules=4,
- stage3_num_blocks=[4, 4, 4],
- stage3_num_channels=[64, 128, 256],
- stage4_num_modules=3,
- stage4_num_blocks=[4, 4, 4, 4],
- stage4_num_channels=[64, 128, 256, 512],
- **kwargs)
- return model
diff --git a/spaces/Flux9665/IMS-Toucan/InferenceInterfaces/InferenceArchitectures/__init__.py b/spaces/Flux9665/IMS-Toucan/InferenceInterfaces/InferenceArchitectures/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Flux9665/IMS-Toucan/Layers/Attention.py b/spaces/Flux9665/IMS-Toucan/Layers/Attention.py
deleted file mode 100644
index eb241e315de718099901a075feae2ed0e31c7347..0000000000000000000000000000000000000000
--- a/spaces/Flux9665/IMS-Toucan/Layers/Attention.py
+++ /dev/null
@@ -1,324 +0,0 @@
-# Written by Shigeki Karita, 2019
-# Published under Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-# Adapted by Florian Lux, 2021
-
-"""Multi-Head Attention layer definition."""
-
-import math
-
-import numpy
-import torch
-from torch import nn
-
-from Utility.utils import make_non_pad_mask
-
-
-class MultiHeadedAttention(nn.Module):
- """
- Multi-Head Attention layer.
-
- Args:
- n_head (int): The number of heads.
- n_feat (int): The number of features.
- dropout_rate (float): Dropout rate.
- """
-
- def __init__(self, n_head, n_feat, dropout_rate):
- """
- Construct an MultiHeadedAttention object.
- """
- super(MultiHeadedAttention, self).__init__()
- assert n_feat % n_head == 0
- # We assume d_v always equals d_k
- self.d_k = n_feat // n_head
- self.h = n_head
- self.linear_q = nn.Linear(n_feat, n_feat)
- self.linear_k = nn.Linear(n_feat, n_feat)
- self.linear_v = nn.Linear(n_feat, n_feat)
- self.linear_out = nn.Linear(n_feat, n_feat)
- self.attn = None
- self.dropout = nn.Dropout(p=dropout_rate)
-
- def forward_qkv(self, query, key, value):
- """
- Transform query, key and value.
-
- Args:
- query (torch.Tensor): Query tensor (#batch, time1, size).
- key (torch.Tensor): Key tensor (#batch, time2, size).
- value (torch.Tensor): Value tensor (#batch, time2, size).
-
- Returns:
- torch.Tensor: Transformed query tensor (#batch, n_head, time1, d_k).
- torch.Tensor: Transformed key tensor (#batch, n_head, time2, d_k).
- torch.Tensor: Transformed value tensor (#batch, n_head, time2, d_k).
- """
- n_batch = query.size(0)
- q = self.linear_q(query).view(n_batch, -1, self.h, self.d_k)
- k = self.linear_k(key).view(n_batch, -1, self.h, self.d_k)
- v = self.linear_v(value).view(n_batch, -1, self.h, self.d_k)
- q = q.transpose(1, 2) # (batch, head, time1, d_k)
- k = k.transpose(1, 2) # (batch, head, time2, d_k)
- v = v.transpose(1, 2) # (batch, head, time2, d_k)
-
- return q, k, v
-
- def forward_attention(self, value, scores, mask):
- """
- Compute attention context vector.
-
- Args:
- value (torch.Tensor): Transformed value (#batch, n_head, time2, d_k).
- scores (torch.Tensor): Attention score (#batch, n_head, time1, time2).
- mask (torch.Tensor): Mask (#batch, 1, time2) or (#batch, time1, time2).
-
- Returns:
- torch.Tensor: Transformed value (#batch, time1, d_model)
- weighted by the attention score (#batch, time1, time2).
- """
- n_batch = value.size(0)
- if mask is not None:
- mask = mask.unsqueeze(1).eq(0) # (batch, 1, *, time2)
- min_value = float(numpy.finfo(torch.tensor(0, dtype=scores.dtype).numpy().dtype).min)
- scores = scores.masked_fill(mask, min_value)
- self.attn = torch.softmax(scores, dim=-1).masked_fill(mask, 0.0) # (batch, head, time1, time2)
- else:
- self.attn = torch.softmax(scores, dim=-1) # (batch, head, time1, time2)
-
- p_attn = self.dropout(self.attn)
- x = torch.matmul(p_attn, value) # (batch, head, time1, d_k)
- x = (x.transpose(1, 2).contiguous().view(n_batch, -1, self.h * self.d_k)) # (batch, time1, d_model)
-
- return self.linear_out(x) # (batch, time1, d_model)
-
- def forward(self, query, key, value, mask):
- """
- Compute scaled dot product attention.
-
- Args:
- query (torch.Tensor): Query tensor (#batch, time1, size).
- key (torch.Tensor): Key tensor (#batch, time2, size).
- value (torch.Tensor): Value tensor (#batch, time2, size).
- mask (torch.Tensor): Mask tensor (#batch, 1, time2) or
- (#batch, time1, time2).
-
- Returns:
- torch.Tensor: Output tensor (#batch, time1, d_model).
- """
- q, k, v = self.forward_qkv(query, key, value)
- scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.d_k)
- return self.forward_attention(v, scores, mask)
-
-
-class RelPositionMultiHeadedAttention(MultiHeadedAttention):
- """
- Multi-Head Attention layer with relative position encoding.
- Details can be found in https://github.com/espnet/espnet/pull/2816.
- Paper: https://arxiv.org/abs/1901.02860
- Args:
- n_head (int): The number of heads.
- n_feat (int): The number of features.
- dropout_rate (float): Dropout rate.
- zero_triu (bool): Whether to zero the upper triangular part of attention matrix.
- """
-
- def __init__(self, n_head, n_feat, dropout_rate, zero_triu=False):
- """Construct an RelPositionMultiHeadedAttention object."""
- super().__init__(n_head, n_feat, dropout_rate)
- self.zero_triu = zero_triu
- # linear transformation for positional encoding
- self.linear_pos = nn.Linear(n_feat, n_feat, bias=False)
- # these two learnable bias are used in matrix c and matrix d
- # as described in https://arxiv.org/abs/1901.02860 Section 3.3
- self.pos_bias_u = nn.Parameter(torch.Tensor(self.h, self.d_k))
- self.pos_bias_v = nn.Parameter(torch.Tensor(self.h, self.d_k))
- torch.nn.init.xavier_uniform_(self.pos_bias_u)
- torch.nn.init.xavier_uniform_(self.pos_bias_v)
-
- def rel_shift(self, x):
- """
- Compute relative positional encoding.
- Args:
- x (torch.Tensor): Input tensor (batch, head, time1, 2*time1-1).
- time1 means the length of query vector.
- Returns:
- torch.Tensor: Output tensor.
- """
- zero_pad = torch.zeros((*x.size()[:3], 1), device=x.device, dtype=x.dtype)
- x_padded = torch.cat([zero_pad, x], dim=-1)
-
- x_padded = x_padded.view(*x.size()[:2], x.size(3) + 1, x.size(2))
- x = x_padded[:, :, 1:].view_as(x)[:, :, :, : x.size(-1) // 2 + 1] # only keep the positions from 0 to time2
-
- if self.zero_triu:
- ones = torch.ones((x.size(2), x.size(3)), device=x.device)
- x = x * torch.tril(ones, x.size(3) - x.size(2))[None, None, :, :]
-
- return x
-
- def forward(self, query, key, value, pos_emb, mask):
- """
- Compute 'Scaled Dot Product Attention' with rel. positional encoding.
- Args:
- query (torch.Tensor): Query tensor (#batch, time1, size).
- key (torch.Tensor): Key tensor (#batch, time2, size).
- value (torch.Tensor): Value tensor (#batch, time2, size).
- pos_emb (torch.Tensor): Positional embedding tensor
- (#batch, 2*time1-1, size).
- mask (torch.Tensor): Mask tensor (#batch, 1, time2) or
- (#batch, time1, time2).
- Returns:
- torch.Tensor: Output tensor (#batch, time1, d_model).
- """
- q, k, v = self.forward_qkv(query, key, value)
- q = q.transpose(1, 2) # (batch, time1, head, d_k)
-
- n_batch_pos = pos_emb.size(0)
- p = self.linear_pos(pos_emb).view(n_batch_pos, -1, self.h, self.d_k)
- p = p.transpose(1, 2) # (batch, head, 2*time1-1, d_k)
-
- # (batch, head, time1, d_k)
- q_with_bias_u = (q + self.pos_bias_u).transpose(1, 2)
- # (batch, head, time1, d_k)
- q_with_bias_v = (q + self.pos_bias_v).transpose(1, 2)
-
- # compute attention score
- # first compute matrix a and matrix c
- # as described in https://arxiv.org/abs/1901.02860 Section 3.3
- # (batch, head, time1, time2)
- matrix_ac = torch.matmul(q_with_bias_u, k.transpose(-2, -1))
-
- # compute matrix b and matrix d
- # (batch, head, time1, 2*time1-1)
- matrix_bd = torch.matmul(q_with_bias_v, p.transpose(-2, -1))
- matrix_bd = self.rel_shift(matrix_bd)
-
- scores = (matrix_ac + matrix_bd) / math.sqrt(self.d_k) # (batch, head, time1, time2)
-
- return self.forward_attention(v, scores, mask)
-
-
-class GuidedAttentionLoss(torch.nn.Module):
- """
- Guided attention loss function module.
-
- This module calculates the guided attention loss described
- in `Efficiently Trainable Text-to-Speech System Based
- on Deep Convolutional Networks with Guided Attention`_,
- which forces the attention to be diagonal.
-
- .. _`Efficiently Trainable Text-to-Speech System
- Based on Deep Convolutional Networks with Guided Attention`:
- https://arxiv.org/abs/1710.08969
- """
-
- def __init__(self, sigma=0.4, alpha=1.0):
- """
- Initialize guided attention loss module.
-
- Args:
- sigma (float, optional): Standard deviation to control
- how close attention to a diagonal.
- alpha (float, optional): Scaling coefficient (lambda).
- reset_always (bool, optional): Whether to always reset masks.
- """
- super(GuidedAttentionLoss, self).__init__()
- self.sigma = sigma
- self.alpha = alpha
- self.guided_attn_masks = None
- self.masks = None
-
- def _reset_masks(self):
- self.guided_attn_masks = None
- self.masks = None
-
- def forward(self, att_ws, ilens, olens):
- """
- Calculate forward propagation.
-
- Args:
- att_ws (Tensor): Batch of attention weights (B, T_max_out, T_max_in).
- ilens (LongTensor): Batch of input lenghts (B,).
- olens (LongTensor): Batch of output lenghts (B,).
-
- Returns:
- Tensor: Guided attention loss value.
- """
- self._reset_masks()
- self.guided_attn_masks = self._make_guided_attention_masks(ilens, olens).to(att_ws.device)
- self.masks = self._make_masks(ilens, olens).to(att_ws.device)
- losses = self.guided_attn_masks * att_ws
- loss = torch.mean(losses.masked_select(self.masks))
- self._reset_masks()
- return self.alpha * loss
-
- def _make_guided_attention_masks(self, ilens, olens):
- n_batches = len(ilens)
- max_ilen = max(ilens)
- max_olen = max(olens)
- guided_attn_masks = torch.zeros((n_batches, max_olen, max_ilen), device=ilens.device)
- for idx, (ilen, olen) in enumerate(zip(ilens, olens)):
- guided_attn_masks[idx, :olen, :ilen] = self._make_guided_attention_mask(ilen, olen, self.sigma)
- return guided_attn_masks
-
- @staticmethod
- def _make_guided_attention_mask(ilen, olen, sigma):
- """
- Make guided attention mask.
- """
- grid_x, grid_y = torch.meshgrid(torch.arange(olen, device=olen.device).float(), torch.arange(ilen, device=ilen.device).float())
- return 1.0 - torch.exp(-((grid_y / ilen - grid_x / olen) ** 2) / (2 * (sigma ** 2)))
-
- @staticmethod
- def _make_masks(ilens, olens):
- """
- Make masks indicating non-padded part.
-
- Args:
- ilens (LongTensor or List): Batch of lengths (B,).
- olens (LongTensor or List): Batch of lengths (B,).
-
- Returns:
- Tensor: Mask tensor indicating non-padded part.
- dtype=torch.uint8 in PyTorch 1.2-
- dtype=torch.bool in PyTorch 1.2+ (including 1.2)
- """
- in_masks = make_non_pad_mask(ilens, device=ilens.device) # (B, T_in)
- out_masks = make_non_pad_mask(olens, device=olens.device) # (B, T_out)
- return out_masks.unsqueeze(-1) & in_masks.unsqueeze(-2) # (B, T_out, T_in)
-
-
-class GuidedMultiHeadAttentionLoss(GuidedAttentionLoss):
- """
- Guided attention loss function module for multi head attention.
-
- Args:
- sigma (float, optional): Standard deviation to control
- how close attention to a diagonal.
- alpha (float, optional): Scaling coefficient (lambda).
- reset_always (bool, optional): Whether to always reset masks.
- """
-
- def forward(self, att_ws, ilens, olens):
- """
- Calculate forward propagation.
-
- Args:
- att_ws (Tensor):
- Batch of multi head attention weights (B, H, T_max_out, T_max_in).
- ilens (LongTensor): Batch of input lenghts (B,).
- olens (LongTensor): Batch of output lenghts (B,).
-
- Returns:
- Tensor: Guided attention loss value.
- """
- if self.guided_attn_masks is None:
- self.guided_attn_masks = (self._make_guided_attention_masks(ilens, olens).to(att_ws.device).unsqueeze(1))
- if self.masks is None:
- self.masks = self._make_masks(ilens, olens).to(att_ws.device).unsqueeze(1)
- losses = self.guided_attn_masks * att_ws
- loss = torch.mean(losses.masked_select(self.masks))
- if self.reset_always:
- self._reset_masks()
-
- return self.alpha * loss
diff --git a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/sanskrit.py b/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/sanskrit.py
deleted file mode 100644
index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/sanskrit.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import re
-from indic_transliteration import sanscript
-
-
-# List of (iast, ipa) pairs:
-_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('a', 'ə'),
- ('ā', 'aː'),
- ('ī', 'iː'),
- ('ū', 'uː'),
- ('ṛ', 'ɹ`'),
- ('ṝ', 'ɹ`ː'),
- ('ḷ', 'l`'),
- ('ḹ', 'l`ː'),
- ('e', 'eː'),
- ('o', 'oː'),
- ('k', 'k⁼'),
- ('k⁼h', 'kʰ'),
- ('g', 'g⁼'),
- ('g⁼h', 'gʰ'),
- ('ṅ', 'ŋ'),
- ('c', 'ʧ⁼'),
- ('ʧ⁼h', 'ʧʰ'),
- ('j', 'ʥ⁼'),
- ('ʥ⁼h', 'ʥʰ'),
- ('ñ', 'n^'),
- ('ṭ', 't`⁼'),
- ('t`⁼h', 't`ʰ'),
- ('ḍ', 'd`⁼'),
- ('d`⁼h', 'd`ʰ'),
- ('ṇ', 'n`'),
- ('t', 't⁼'),
- ('t⁼h', 'tʰ'),
- ('d', 'd⁼'),
- ('d⁼h', 'dʰ'),
- ('p', 'p⁼'),
- ('p⁼h', 'pʰ'),
- ('b', 'b⁼'),
- ('b⁼h', 'bʰ'),
- ('y', 'j'),
- ('ś', 'ʃ'),
- ('ṣ', 's`'),
- ('r', 'ɾ'),
- ('l̤', 'l`'),
- ('h', 'ɦ'),
- ("'", ''),
- ('~', '^'),
- ('ṃ', '^')
-]]
-
-
-def devanagari_to_ipa(text):
- text = text.replace('ॐ', 'ओम्')
- text = re.sub(r'\s*।\s*$', '.', text)
- text = re.sub(r'\s*।\s*', ', ', text)
- text = re.sub(r'\s*॥', '.', text)
- text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST)
- for regex, replacement in _iast_to_ipa:
- text = re.sub(regex, replacement, text)
- text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0)
- [:-1]+'h'+x.group(1)+'*', text)
- return text
diff --git a/spaces/GXSA/bingo/src/lib/utils.ts b/spaces/GXSA/bingo/src/lib/utils.ts
deleted file mode 100644
index 8de2eba94bf0bc93579d4f489e8b810dbf6ce92a..0000000000000000000000000000000000000000
--- a/spaces/GXSA/bingo/src/lib/utils.ts
+++ /dev/null
@@ -1,159 +0,0 @@
-import { clsx, type ClassValue } from 'clsx'
-import { customAlphabet } from 'nanoid'
-import { twMerge } from 'tailwind-merge'
-// @ts-ignore
-import randomip from 'random-ip'
-import cidr from './cidr.json'
-
-export function cn(...inputs: ClassValue[]) {
- return twMerge(clsx(inputs))
-}
-
-export const nanoid = customAlphabet(
- '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz',
- 7
-) // 7-character random string
-
-export function createChunkDecoder() {
- const decoder = new TextDecoder()
- return function (chunk: Uint8Array | undefined): string {
- if (!chunk) return ''
- return decoder.decode(chunk, { stream: true })
- }
-}
-
-export function random (start: number, end: number) {
- return start + Math.floor(Math.random() * (end - start))
-}
-
-export function randomIP() {
- // return `104.${random(0, 21)}.${random(0, 127)}.${random(1, 255)}`
- const [ip, range] = cidr.at(random(0, cidr.length))?.split('/')!
- return randomip(ip, range)
-}
-
-export const defaultUID = 'xxx'
-
-export function parseHeadersFromCurl(content: string) {
- const re = /-H '([^:]+):\s*([^']+)/mg
- const headers: HeadersInit = {}
- content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // 将 cmd curl 转成 bash curl
- content.replace(re, (_: string, key: string, value: string) => {
- headers[key] = value
- return ''
- })
- return headers
-}
-
-export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2']
-export function encodeHeadersToCookie(content: string) {
- const base64Content = btoa(content)
- const contentChunks = base64Content.match(/.{1,4000}/g) || []
- return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? ''}`)
-}
-
-export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) {
- let base64Content = ''
- ChunkKeys.forEach((key) => {
- base64Content += (cookies[key] || '')
- })
- try {
- return atob(base64Content)
- } catch(e) {
- return ''
- }
-}
-
-export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) {
- return parseHeadersFromCurl(extraCurlFromCookie(cookies))
-}
-
-export function formatDate(input: string | number | Date): string {
- const date = new Date(input)
- return date.toLocaleDateString('en-US', {
- month: 'long',
- day: 'numeric',
- year: 'numeric'
- })
-}
-
-export function parseCookie(cookie: string, cookieName: string) {
- const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie
- return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : ''
-}
-
-export function setCookie(key: string, value: string) {
- const maxAge = value ? 86400 * 30 : 0
- document.cookie = `${key}=${value || ''}; Path=/; Max-Age=${maxAge}; SameSite=None; Secure`
-}
-
-export function getCookie(cookieName: string) {
- const re = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`)
- return re.test(document.cookie) ? RegExp.$1 : ''
-}
-
-export function parseCookies(cookie: string, cookieNames: string[]) {
- const cookies: { [key: string]: string } = {}
- cookieNames.forEach(cookieName => {
- cookies[cookieName] = parseCookie(cookie, cookieName)
- })
- return cookies
-}
-
-export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0'
-
-export function parseUA(ua?: string, default_ua = DEFAULT_UA) {
- return / EDGE?/i.test(decodeURIComponent(ua || '')) ? decodeURIComponent(ua!.trim()) : default_ua
-}
-
-export function mockUser(cookies: Partial<{ [key: string]: string }>) {
- const {
- BING_UA = process.env.BING_UA,
- BING_IP,
- _U = defaultUID,
- } = cookies
- const ua = parseUA(BING_UA)
-
- return {
- 'x-forwarded-for': BING_IP!,
- 'Accept-Encoding': 'gzip, deflate, br',
- 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
- 'User-Agent': ua!,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.3 OS/Win32',
- cookie: `_U=${_U}` || '',
- }
-}
-
-export function createHeaders(cookies: Partial<{ [key: string]: string }>, type?: string) {
- let {
- BING_HEADER = process.env.BING_HEADER,
- BING_IP,
- IMAGE_ONLY = process.env.IMAGE_ONLY ?? '1',
- } = cookies
- const imageOnly = /^(1|true|yes)$/.test(String(IMAGE_ONLY))
- if (BING_HEADER) {
- if (
- (imageOnly && type === 'image')
- || !imageOnly
- ) {
- const headers = extraHeadersFromCookie({
- BING_HEADER,
- ...cookies,
- }) || {}
- headers['x-forward-for'] = BING_IP!
- return headers
- }
- }
- return mockUser(cookies)
-}
-
-export class WatchDog {
- private tid = 0
- watch(fn: Function, timeout = 2000) {
- clearTimeout(this.tid)
- this.tid = setTimeout(fn, timeout + Math.random() * 1000)
- }
- reset() {
- clearTimeout(this.tid)
- }
-}
diff --git a/spaces/GXSA/bingo/src/pages/api/proxy.ts b/spaces/GXSA/bingo/src/pages/api/proxy.ts
deleted file mode 100644
index 240b5fb5561d993c6381649bf4544ce12f3cdab2..0000000000000000000000000000000000000000
--- a/spaces/GXSA/bingo/src/pages/api/proxy.ts
+++ /dev/null
@@ -1,24 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { fetch } from '@/lib/isomorphic'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { url, headers, method = 'GET', body } = req.body
- if (!url) {
- return res.end('ok')
- }
- const response = await fetch(url, { headers, method, body, redirect: 'manual' })
- const text = await response.text()
- res.writeHead(200, {
- 'Content-Type': 'application/text',
- 'x-url': response.url,
- 'x-status': response.status,
- })
- res.end(text)
- } catch (e) {
- console.log(e)
- return res.end(e)
- }
-}
diff --git a/spaces/GazeLocation/Visualization_Saliency/.ipynb_checkpoints/app-checkpoint.py b/spaces/GazeLocation/Visualization_Saliency/.ipynb_checkpoints/app-checkpoint.py
deleted file mode 100644
index f024eaf5ddbc025af1d4b8c4b9b106e477bb3e07..0000000000000000000000000000000000000000
--- a/spaces/GazeLocation/Visualization_Saliency/.ipynb_checkpoints/app-checkpoint.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import gradio as gr
-from datasets import load_dataset
-
-
-# +
-def get_methods_and_arch(dataset):
- columns = dataset.column_names[5:]
- methods = []
- archs = []
- for column in columns:
- methods.append(column.split('_')[0])
- archs.append('_'.join(column.split('_')[1:-2]))
- return list(set(methods)),list(set(archs))
-
-def get_columns(arch,method):
- columns = dataset.column_names[5:]
- for col in columns:
- if f'{method}_{arch}' in col:
- return col
-def button_fn(arch,method):
- column_heatmap = get_columns(arch,method)
- #print("Updated column: ",column_heatmap)
- return column_heatmap,index_default,dataset[index_default]["image"],dataset[index_default][column_heatmap]
-
-def func_slider(index,column_textbox):
- #global column_heatmap
- example = dataset[index]
- return example['image'],example[column_textbox]
-
-
-# -
-
-dataset = load_dataset("GazeLocation/stimuli_heatmaps",split = 'train')
-METHODS, ARCHS = get_methods_and_arch(dataset)
-index_default = 0
-DEMO = False
-
-if __name__ == '__main__':
- demo = gr.Blocks()
- with demo:
- gr.Markdown("# Heatmap Gaze Location")
-
- with gr.Row():
- dropdown_arch = gr.Dropdown(choices = ARCHS,
- value = 'resnet50',
- label = 'Model')
-
- dropdown_method = gr.Dropdown(choices = METHODS,
- value = 'gradcam',
- label = 'Method')
- with gr.Row():
- button = gr.Button(label = 'Update Heatmap Model - Method')
-
- with gr.Row():
- hf_slider = gr.Slider(minimum=0, maximum=len(dataset)-1,step = 1)
- with gr.Row():
- column_textbox = gr.Textbox(label = 'column name',
- value = get_columns(ARCHS[0],METHODS[0]) )
- with gr.Row():
- with gr.Column():
- image_input = gr.Image(label="Input Image",value = dataset[index_default]["image"])
- with gr.Column():
- image_output = gr.Image(label="Output",value = dataset[index_default][get_columns('resnet50','gradcam')])
-
-
- button.click(fn = button_fn,
- inputs = [dropdown_arch,dropdown_method],
- outputs = [column_textbox,hf_slider,image_input,image_output])
-
-
- hf_slider.change(func_slider,
- inputs = [hf_slider,column_textbox],
- outputs = [image_input, image_output])
- if DEMO:
- demo.launch(share = True,debug = True)
- else:
- demo.launch()
-
-
-
-
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train50_gpt_and_cliport_indomain.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train50_gpt_and_cliport_indomain.sh
deleted file mode 100644
index 2b8d0b520269041ae610aa6d6ff4d0860f9e7205..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train50_gpt_and_cliport_indomain.sh
+++ /dev/null
@@ -1,9 +0,0 @@
-#!/bin/bash
-#SBATCH -c 10
-#SBATCH -n 1
-#SBATCH -o logs/%j.out
-#SBATCH --exclusive
-
-STEPS=${1-'10000'}
-
-sh scripts/traintest_scripts/train_test_multi_task_indistribution.sh data '[align-rope,assembling-kits-seq,palletizing-boxes,towers-of-hanoi,assembling-kits,manipulating-rope,packing-boxes,place-red-in-green,put-block-in-bowl,packing-boxes-pairs,sweeping-piles,separating-piles,stack-block-pyramid-seq,towers-of-hanoi-seq,packing-shapes,stack-block-pyramid,block-insertion,packing-google-objects,color-ordered-blocks-on-pallet,color-linked-ball-bowl-ordering,build-cylinder-structure,build-bridge,pyramid-blocks-assemble,sort-and-assemble-block-castle,stack-blocks-in-container,corner-sort-cylinders,align-pair-colored-blocks-along-line,color-specific-container-fill,colored-cylinder-in-square,construct-colorful-arch,color-coordinated-cylinders-in-boxes,insert-sphere-into-container,build-wheel,push-piles-into-letter,create-pyramid-with-color-coded-ells,color-coordinated-sphere-insertion,move-piles-along-line,multi-level-block-construction,build-car,color-coordinated-insertion,triangle-block-arrangement,colorful-block-tower-on-cylinder-base,manipulating-two-ropes,construct-corner-building,color-coordinated-container-sorting,construct-corner-blocks,sort-insert-color-coordinated-blocks,insert-blocks-into-fixture,color-ordered-container-arrangement,symmetric-block-bridge-construction,connect-boxes-with-rope,vertical-insertion-blocks,cylinder-stand-alignment,insert-blocks-lineup,create-pyramid-blocks-and-container,mix-piles,multi-level-pyramid-construction,rainbow-stack,align-cylinders-in-square,align-balls-in-colored-zones,multicolor-block-bridge,align-spheres-in-colored-zones,color-blocks-in-cylinder-maze,sort-and-stack-clr-blocks,corner-block-challenge,stack-color-coordinated-blocks,assemble-single-car,color-structured-block-tower,color-sorted-block-race,sphere-align-stand,color-coordinated-block-tower,color-sorted-container-stack,color-ordered-insertion,block-pyramid-with-limited-space,sorting-blocks-into-pallets,place-ball-in-elevated-bowl,Four-corner-pyramid-challenge,color-coordinated-cylinder-tower,build-two-circles]' gpt50_task_indomain
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/hourglass.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/hourglass.py
deleted file mode 100644
index 3422acee35e3c6f8731cdb310f188e671b5be12f..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/hourglass.py
+++ /dev/null
@@ -1,198 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import ConvModule
-
-from ..builder import BACKBONES
-from ..utils import ResLayer
-from .resnet import BasicBlock
-
-
-class HourglassModule(nn.Module):
- """Hourglass Module for HourglassNet backbone.
-
- Generate module recursively and use BasicBlock as the base unit.
-
- Args:
- depth (int): Depth of current HourglassModule.
- stage_channels (list[int]): Feature channels of sub-modules in current
- and follow-up HourglassModule.
- stage_blocks (list[int]): Number of sub-modules stacked in current and
- follow-up HourglassModule.
- norm_cfg (dict): Dictionary to construct and config norm layer.
- """
-
- def __init__(self,
- depth,
- stage_channels,
- stage_blocks,
- norm_cfg=dict(type='BN', requires_grad=True)):
- super(HourglassModule, self).__init__()
-
- self.depth = depth
-
- cur_block = stage_blocks[0]
- next_block = stage_blocks[1]
-
- cur_channel = stage_channels[0]
- next_channel = stage_channels[1]
-
- self.up1 = ResLayer(
- BasicBlock, cur_channel, cur_channel, cur_block, norm_cfg=norm_cfg)
-
- self.low1 = ResLayer(
- BasicBlock,
- cur_channel,
- next_channel,
- cur_block,
- stride=2,
- norm_cfg=norm_cfg)
-
- if self.depth > 1:
- self.low2 = HourglassModule(depth - 1, stage_channels[1:],
- stage_blocks[1:])
- else:
- self.low2 = ResLayer(
- BasicBlock,
- next_channel,
- next_channel,
- next_block,
- norm_cfg=norm_cfg)
-
- self.low3 = ResLayer(
- BasicBlock,
- next_channel,
- cur_channel,
- cur_block,
- norm_cfg=norm_cfg,
- downsample_first=False)
-
- self.up2 = nn.Upsample(scale_factor=2)
-
- def forward(self, x):
- """Forward function."""
- up1 = self.up1(x)
- low1 = self.low1(x)
- low2 = self.low2(low1)
- low3 = self.low3(low2)
- up2 = self.up2(low3)
- return up1 + up2
-
-
-@BACKBONES.register_module()
-class HourglassNet(nn.Module):
- """HourglassNet backbone.
-
- Stacked Hourglass Networks for Human Pose Estimation.
- More details can be found in the `paper
- `_ .
-
- Args:
- downsample_times (int): Downsample times in a HourglassModule.
- num_stacks (int): Number of HourglassModule modules stacked,
- 1 for Hourglass-52, 2 for Hourglass-104.
- stage_channels (list[int]): Feature channel of each sub-module in a
- HourglassModule.
- stage_blocks (list[int]): Number of sub-modules stacked in a
- HourglassModule.
- feat_channel (int): Feature channel of conv after a HourglassModule.
- norm_cfg (dict): Dictionary to construct and config norm layer.
-
- Example:
- >>> from mmdet.models import HourglassNet
- >>> import torch
- >>> self = HourglassNet()
- >>> self.eval()
- >>> inputs = torch.rand(1, 3, 511, 511)
- >>> level_outputs = self.forward(inputs)
- >>> for level_output in level_outputs:
- ... print(tuple(level_output.shape))
- (1, 256, 128, 128)
- (1, 256, 128, 128)
- """
-
- def __init__(self,
- downsample_times=5,
- num_stacks=2,
- stage_channels=(256, 256, 384, 384, 384, 512),
- stage_blocks=(2, 2, 2, 2, 2, 4),
- feat_channel=256,
- norm_cfg=dict(type='BN', requires_grad=True)):
- super(HourglassNet, self).__init__()
-
- self.num_stacks = num_stacks
- assert self.num_stacks >= 1
- assert len(stage_channels) == len(stage_blocks)
- assert len(stage_channels) > downsample_times
-
- cur_channel = stage_channels[0]
-
- self.stem = nn.Sequential(
- ConvModule(3, 128, 7, padding=3, stride=2, norm_cfg=norm_cfg),
- ResLayer(BasicBlock, 128, 256, 1, stride=2, norm_cfg=norm_cfg))
-
- self.hourglass_modules = nn.ModuleList([
- HourglassModule(downsample_times, stage_channels, stage_blocks)
- for _ in range(num_stacks)
- ])
-
- self.inters = ResLayer(
- BasicBlock,
- cur_channel,
- cur_channel,
- num_stacks - 1,
- norm_cfg=norm_cfg)
-
- self.conv1x1s = nn.ModuleList([
- ConvModule(
- cur_channel, cur_channel, 1, norm_cfg=norm_cfg, act_cfg=None)
- for _ in range(num_stacks - 1)
- ])
-
- self.out_convs = nn.ModuleList([
- ConvModule(
- cur_channel, feat_channel, 3, padding=1, norm_cfg=norm_cfg)
- for _ in range(num_stacks)
- ])
-
- self.remap_convs = nn.ModuleList([
- ConvModule(
- feat_channel, cur_channel, 1, norm_cfg=norm_cfg, act_cfg=None)
- for _ in range(num_stacks - 1)
- ])
-
- self.relu = nn.ReLU(inplace=True)
-
- def init_weights(self, pretrained=None):
- """Init module weights.
-
- We do nothing in this function because all modules we used
- (ConvModule, BasicBlock and etc.) have default initialization, and
- currently we don't provide pretrained model of HourglassNet.
-
- Detector's __init__() will call backbone's init_weights() with
- pretrained as input, so we keep this function.
- """
- # Training Centripetal Model needs to reset parameters for Conv2d
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- m.reset_parameters()
-
- def forward(self, x):
- """Forward function."""
- inter_feat = self.stem(x)
- out_feats = []
-
- for ind in range(self.num_stacks):
- single_hourglass = self.hourglass_modules[ind]
- out_conv = self.out_convs[ind]
-
- hourglass_feat = single_hourglass(inter_feat)
- out_feat = out_conv(hourglass_feat)
- out_feats.append(out_feat)
-
- if ind < self.num_stacks - 1:
- inter_feat = self.conv1x1s[ind](
- inter_feat) + self.remap_convs[ind](
- out_feat)
- inter_feat = self.inters[ind](self.relu(inter_feat))
-
- return out_feats
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_20k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_20k_voc12aug.py
deleted file mode 100644
index 709f93cba3e3bca6ce0635457ab1823b04123bf8..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './danet_r50-d8_512x512_20k_voc12aug.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r101-d16_769x769_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r101-d16_769x769_40k_cityscapes.py
deleted file mode 100644
index 29a9f98a93fedbf9644599203b48aa30a7ad8a28..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r101-d16_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './fcn_d6_r50-d16_769x769_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50b-d16_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50b-d16_512x1024_80k_cityscapes.py
deleted file mode 100644
index 0749ff14a3e7d207e82572e0516b2555ccacc7d9..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50b-d16_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './fcn_d6_r50-d16_512x1024_80k_cityscapes.py'
-model = dict(pretrained='torchvision://resnet50', backbone=dict(type='ResNet'))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r50_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r50_512x1024_80k_cityscapes.py
deleted file mode 100644
index 95fffcc76c2ff4f61f8dd80a00d35b7875262a50..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r50_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/upernet_r50.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/loaders.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/loaders.py
deleted file mode 100644
index 9c7808a0588bd1a8084157b072bae42aa7efaf84..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/loaders.py
+++ /dev/null
@@ -1,141 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Utility functions to load from the checkpoints.
-Each checkpoint is a torch.saved dict with the following keys:
-- 'xp.cfg': the hydra config as dumped during training. This should be used
- to rebuild the object using the audiocraft.models.builders functions,
-- 'model_best_state': a readily loadable best state for the model, including
- the conditioner. The model obtained from `xp.cfg` should be compatible
- with this state dict. In the case of a LM, the encodec model would not be
- bundled along but instead provided separately.
-
-Those functions also support loading from a remote location with the Torch Hub API.
-They also support overriding some parameters, in particular the device and dtype
-of the returned model.
-"""
-
-from pathlib import Path
-from huggingface_hub import hf_hub_download
-import typing as tp
-import os
-
-from omegaconf import OmegaConf, DictConfig
-import torch
-
-from . import builders
-from .encodec import CompressionModel
-
-
-def get_audiocraft_cache_dir() -> tp.Optional[str]:
- return os.environ.get('AUDIOCRAFT_CACHE_DIR', None)
-
-
-def _get_state_dict(
- file_or_url_or_id: tp.Union[Path, str],
- filename: tp.Optional[str] = None,
- device='cpu',
- cache_dir: tp.Optional[str] = None,
-):
- if cache_dir is None:
- cache_dir = get_audiocraft_cache_dir()
- # Return the state dict either from a file or url
- file_or_url_or_id = str(file_or_url_or_id)
- assert isinstance(file_or_url_or_id, str)
-
- if os.path.isfile(file_or_url_or_id):
- return torch.load(file_or_url_or_id, map_location=device)
-
- if os.path.isdir(file_or_url_or_id):
- file = f"{file_or_url_or_id}/{filename}"
- return torch.load(file, map_location=device)
-
- elif file_or_url_or_id.startswith('https://'):
- return torch.hub.load_state_dict_from_url(file_or_url_or_id, map_location=device, check_hash=True)
-
- else:
- assert filename is not None, "filename needs to be defined if using HF checkpoints"
-
- file = hf_hub_download(repo_id=file_or_url_or_id, filename=filename, cache_dir=cache_dir)
- return torch.load(file, map_location=device)
-
-
-def load_compression_model_ckpt(file_or_url_or_id: tp.Union[Path, str], cache_dir: tp.Optional[str] = None):
- return _get_state_dict(file_or_url_or_id, filename="compression_state_dict.bin", cache_dir=cache_dir)
-
-
-def load_compression_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None):
- pkg = load_compression_model_ckpt(file_or_url_or_id, cache_dir=cache_dir)
- if 'pretrained' in pkg:
- return CompressionModel.get_pretrained(pkg['pretrained'], device=device)
- cfg = OmegaConf.create(pkg['xp.cfg'])
- cfg.device = str(device)
- model = builders.get_compression_model(cfg)
- model.load_state_dict(pkg['best_state'])
- model.eval()
- return model
-
-
-def load_lm_model_ckpt(file_or_url_or_id: tp.Union[Path, str], cache_dir: tp.Optional[str] = None):
- return _get_state_dict(file_or_url_or_id, filename="state_dict.bin", cache_dir=cache_dir)
-
-
-def _delete_param(cfg: DictConfig, full_name: str):
- parts = full_name.split('.')
- for part in parts[:-1]:
- if part in cfg:
- cfg = cfg[part]
- else:
- return
- OmegaConf.set_struct(cfg, False)
- if parts[-1] in cfg:
- del cfg[parts[-1]]
- OmegaConf.set_struct(cfg, True)
-
-
-def load_lm_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None):
- pkg = load_lm_model_ckpt(file_or_url_or_id, cache_dir=cache_dir)
- cfg = OmegaConf.create(pkg['xp.cfg'])
- cfg.device = str(device)
- if cfg.device == 'cpu':
- cfg.dtype = 'float32'
- else:
- cfg.dtype = 'float16'
- _delete_param(cfg, 'conditioners.self_wav.chroma_stem.cache_path')
- _delete_param(cfg, 'conditioners.args.merge_text_conditions_p')
- _delete_param(cfg, 'conditioners.args.drop_desc_p')
- model = builders.get_lm_model(cfg)
- model.load_state_dict(pkg['best_state'])
- model.eval()
- model.cfg = cfg
- return model
-
-
-def load_mbd_ckpt(file_or_url_or_id: tp.Union[Path, str], cache_dir: tp.Optional[str] = None):
- return _get_state_dict(file_or_url_or_id, filename="all_in_one.pt", cache_dir=cache_dir)
-
-
-def load_diffusion_models(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None):
- pkg = load_mbd_ckpt(file_or_url_or_id, cache_dir=cache_dir)
- models = []
- processors = []
- cfgs = []
- sample_rate = pkg['sample_rate']
- for i in range(pkg['n_bands']):
- cfg = pkg[i]['cfg']
- model = builders.get_diffusion_model(cfg)
- model_dict = pkg[i]['model_state']
- model.load_state_dict(model_dict)
- model.to(device)
- processor = builders.get_processor(cfg=cfg.processor, sample_rate=sample_rate)
- processor_dict = pkg[i]['processor_state']
- processor.load_state_dict(processor_dict)
- processor.to(device)
- models.append(model)
- processors.append(processor)
- cfgs.append(cfg)
- return models, processors, cfgs
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/lib/spvcnn_classsification.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/lib/spvcnn_classsification.py
deleted file mode 100644
index f831544111aadc3ae5906eb0164f8596adc8c695..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/lib/spvcnn_classsification.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import torch.nn as nn
-import torchsparse.nn as spnn
-from torchsparse.point_tensor import PointTensor
-
-from lib.spvcnn_utils import *
-__all__ = ['SPVCNN_CLASSIFICATION']
-
-
-
-class BasicConvolutionBlock(nn.Module):
- def __init__(self, inc, outc, ks=3, stride=1, dilation=1):
- super().__init__()
- self.net = nn.Sequential(
- spnn.Conv3d(inc,
- outc,
- kernel_size=ks,
- dilation=dilation,
- stride=stride),
- spnn.BatchNorm(outc),
- spnn.ReLU(True))
-
- def forward(self, x):
- out = self.net(x)
- return out
-
-
-class BasicDeconvolutionBlock(nn.Module):
- def __init__(self, inc, outc, ks=3, stride=1):
- super().__init__()
- self.net = nn.Sequential(
- spnn.Conv3d(inc,
- outc,
- kernel_size=ks,
- stride=stride,
- transpose=True),
- spnn.BatchNorm(outc),
- spnn.ReLU(True))
-
- def forward(self, x):
- return self.net(x)
-
-
-class ResidualBlock(nn.Module):
- def __init__(self, inc, outc, ks=3, stride=1, dilation=1):
- super().__init__()
- self.net = nn.Sequential(
- spnn.Conv3d(inc,
- outc,
- kernel_size=ks,
- dilation=dilation,
- stride=stride), spnn.BatchNorm(outc),
- spnn.ReLU(True),
- spnn.Conv3d(outc,
- outc,
- kernel_size=ks,
- dilation=dilation,
- stride=1),
- spnn.BatchNorm(outc)
- )
-
- self.downsample = nn.Sequential() if (inc == outc and stride == 1) else \
- nn.Sequential(
- spnn.Conv3d(inc, outc, kernel_size=1, dilation=1, stride=stride),
- spnn.BatchNorm(outc)
- )
-
- self.relu = spnn.ReLU(True)
-
- def forward(self, x):
- out = self.relu(self.net(x) + self.downsample(x))
- return out
-
-
-class SPVCNN_CLASSIFICATION(nn.Module):
- def __init__(self, **kwargs):
- super().__init__()
-
- cr = kwargs.get('cr', 1.0)
- cs = [32, 32, 64, 128, 256, 256, 128, 96, 96]
- cs = [int(cr * x) for x in cs]
-
- if 'pres' in kwargs and 'vres' in kwargs:
- self.pres = kwargs['pres']
- self.vres = kwargs['vres']
-
- self.stem = nn.Sequential(
- spnn.Conv3d(kwargs['input_channel'], cs[0], kernel_size=3, stride=1),
- spnn.BatchNorm(cs[0]),
- spnn.ReLU(True),
- spnn.Conv3d(cs[0], cs[0], kernel_size=3, stride=1),
- spnn.BatchNorm(cs[0]),
- spnn.ReLU(True))
-
- self.stage1 = nn.Sequential(
- BasicConvolutionBlock(cs[0], cs[0], ks=2, stride=2, dilation=1),
- ResidualBlock(cs[0], cs[1], ks=3, stride=1, dilation=1),
- ResidualBlock(cs[1], cs[1], ks=3, stride=1, dilation=1),
- )
-
- self.stage2 = nn.Sequential(
- BasicConvolutionBlock(cs[1], cs[1], ks=2, stride=2, dilation=1),
- ResidualBlock(cs[1], cs[2], ks=3, stride=1, dilation=1),
- ResidualBlock(cs[2], cs[2], ks=3, stride=1, dilation=1),
- )
-
- self.stage3 = nn.Sequential(
- BasicConvolutionBlock(cs[2], cs[2], ks=2, stride=2, dilation=1),
- ResidualBlock(cs[2], cs[3], ks=3, stride=1, dilation=1),
- ResidualBlock(cs[3], cs[3], ks=3, stride=1, dilation=1),
- )
-
- self.stage4 = nn.Sequential(
- BasicConvolutionBlock(cs[3], cs[3], ks=2, stride=2, dilation=1),
- ResidualBlock(cs[3], cs[4], ks=3, stride=1, dilation=1),
- ResidualBlock(cs[4], cs[4], ks=3, stride=1, dilation=1),
- )
- self.avg_pool = spnn.GlobalAveragePooling()
- self.classifier = nn.Sequential(nn.Linear(cs[4], kwargs['num_classes']))
- self.point_transforms = nn.ModuleList([
- nn.Sequential(
- nn.Linear(cs[0], cs[4]),
- nn.BatchNorm1d(cs[4]),
- nn.ReLU(True),
- ),
- ])
-
- self.weight_initialization()
- self.dropout = nn.Dropout(0.3, True)
-
- def weight_initialization(self):
- for m in self.modules():
- if isinstance(m, nn.BatchNorm1d):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- # x: SparseTensor z: PointTensor
- z = PointTensor(x.F, x.C.float())
-
- x0 = initial_voxelize(z, self.pres, self.vres)
-
- x0 = self.stem(x0)
- z0 = voxel_to_point(x0, z, nearest=False)
- z0.F = z0.F
-
- x1 = point_to_voxel(x0, z0)
- x1 = self.stage1(x1)
- x2 = self.stage2(x1)
- x3 = self.stage3(x2)
- x4 = self.stage4(x3)
- z1 = voxel_to_point(x4, z0)
- z1.F = z1.F + self.point_transforms[0](z0.F)
- y1 = point_to_voxel(x4, z1)
- pool = self.avg_pool(y1)
- out = self.classifier(pool)
-
-
- return out
-
-
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/layers/weight_init.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/layers/weight_init.py
deleted file mode 100644
index 7733157f70b72cd7a8f46aec8eb87db45cd77b63..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/layers/weight_init.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# --------------------------------------------------------
-# Based on timm and MAE-priv code bases
-# https://github.com/rwightman/pytorch-image-models/tree/master/timm
-# https://github.com/BUPT-PRIV/MAE-priv
-# --------------------------------------------------------
-
-
-import math
-import warnings
-
-import torch
-from torch.nn.init import _calculate_fan_in_and_fan_out
-
-
-def _no_grad_trunc_normal_(tensor, mean, std, a, b):
- # Cut & paste from PyTorch official master until it's in a few official releases - RW
- # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
- def norm_cdf(x):
- # Computes standard normal cumulative distribution function
- return (1. + math.erf(x / math.sqrt(2.))) / 2.
-
- if (mean < a - 2 * std) or (mean > b + 2 * std):
- warnings.warn("mean is more than 2 std from [a, b] in nn.init.trunc_normal_. "
- "The distribution of values may be incorrect.",
- stacklevel=2)
-
- with torch.no_grad():
- # Values are generated by using a truncated uniform distribution and
- # then using the inverse CDF for the normal distribution.
- # Get upper and lower cdf values
- l = norm_cdf((a - mean) / std)
- u = norm_cdf((b - mean) / std)
-
- # Uniformly fill tensor with values from [l, u], then translate to
- # [2l-1, 2u-1].
- tensor.uniform_(2 * l - 1, 2 * u - 1)
-
- # Use inverse cdf transform for normal distribution to get truncated
- # standard normal
- tensor.erfinv_()
-
- # Transform to proper mean, std
- tensor.mul_(std * math.sqrt(2.))
- tensor.add_(mean)
-
- # Clamp to ensure it's in the proper range
- tensor.clamp_(min=a, max=b)
- return tensor
-
-
-def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.):
- # type: (Tensor, float, float, float, float) -> Tensor
- r"""Fills the input Tensor with values drawn from a truncated
- normal distribution. The values are effectively drawn from the
- normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
- with values outside :math:`[a, b]` redrawn until they are within
- the bounds. The method used for generating the random values works
- best when :math:`a \leq \text{mean} \leq b`.
- Args:
- tensor: an n-dimensional `torch.Tensor`
- mean: the mean of the normal distribution
- std: the standard deviation of the normal distribution
- a: the minimum cutoff value
- b: the maximum cutoff value
- Examples:
- >>> w = torch.empty(3, 5)
- >>> nn.init.trunc_normal_(w)
- """
- return _no_grad_trunc_normal_(tensor, mean, std, a, b)
-
-
-def variance_scaling_(tensor, scale=1.0, mode='fan_in', distribution='normal'):
- fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
- if mode == 'fan_in':
- denom = fan_in
- elif mode == 'fan_out':
- denom = fan_out
- elif mode == 'fan_avg':
- denom = (fan_in + fan_out) / 2
-
- variance = scale / denom
-
- if distribution == "truncated_normal":
- # constant is stddev of standard normal truncated to (-2, 2)
- trunc_normal_(tensor, std=math.sqrt(variance) / .87962566103423978)
- elif distribution == "normal":
- tensor.normal_(std=math.sqrt(variance))
- elif distribution == "uniform":
- bound = math.sqrt(3 * variance)
- tensor.uniform_(-bound, bound)
- else:
- raise ValueError(f"invalid distribution {distribution}")
-
-
-def lecun_normal_(tensor):
- variance_scaling_(tensor, mode='fan_in', distribution='truncated_normal')
diff --git a/spaces/Hallucinate/demo/midas/midas_net.py b/spaces/Hallucinate/demo/midas/midas_net.py
deleted file mode 100644
index 8a954977800b0a0f48807e80fa63041910e33c1f..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/midas/midas_net.py
+++ /dev/null
@@ -1,76 +0,0 @@
-"""MidashNet: Network for monocular depth estimation trained by mixing several datasets.
-This file contains code that is adapted from
-https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py
-"""
-import torch
-import torch.nn as nn
-
-from .base_model import BaseModel
-from .blocks import FeatureFusionBlock, Interpolate, _make_encoder
-
-
-class MidasNet(BaseModel):
- """Network for monocular depth estimation.
- """
-
- def __init__(self, path=None, features=256, non_negative=True):
- """Init.
-
- Args:
- path (str, optional): Path to saved model. Defaults to None.
- features (int, optional): Number of features. Defaults to 256.
- backbone (str, optional): Backbone network for encoder. Defaults to resnet50
- """
- print("Loading weights: ", path)
-
- super(MidasNet, self).__init__()
-
- use_pretrained = False if path is None else True
-
- self.pretrained, self.scratch = _make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained)
-
- self.scratch.refinenet4 = FeatureFusionBlock(features)
- self.scratch.refinenet3 = FeatureFusionBlock(features)
- self.scratch.refinenet2 = FeatureFusionBlock(features)
- self.scratch.refinenet1 = FeatureFusionBlock(features)
-
- self.scratch.output_conv = nn.Sequential(
- nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1),
- Interpolate(scale_factor=2, mode="bilinear"),
- nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1),
- nn.ReLU(True),
- nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
- nn.ReLU(True) if non_negative else nn.Identity(),
- )
-
- if path:
- self.load(path)
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input data (image)
-
- Returns:
- tensor: depth
- """
-
- layer_1 = self.pretrained.layer1(x)
- layer_2 = self.pretrained.layer2(layer_1)
- layer_3 = self.pretrained.layer3(layer_2)
- layer_4 = self.pretrained.layer4(layer_3)
-
- layer_1_rn = self.scratch.layer1_rn(layer_1)
- layer_2_rn = self.scratch.layer2_rn(layer_2)
- layer_3_rn = self.scratch.layer3_rn(layer_3)
- layer_4_rn = self.scratch.layer4_rn(layer_4)
-
- path_4 = self.scratch.refinenet4(layer_4_rn)
- path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
- path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
- path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
- out = self.scratch.output_conv(path_1)
-
- return torch.squeeze(out, dim=1)
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/megatron_t5/configuration_megatron_t5.py b/spaces/HaloMaster/chinesesummary/fengshen/models/megatron_t5/configuration_megatron_t5.py
deleted file mode 100644
index 18b960e947cfd162d79d6b017fb77e30707c4c2e..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/models/megatron_t5/configuration_megatron_t5.py
+++ /dev/null
@@ -1,255 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The IDEA Authors. All rights reserved.
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" T5 model configuration """
-from collections import OrderedDict
-from typing import Any, Dict, Iterable, Mapping, Optional
-
-from transformers import PreTrainedTokenizer, TensorType
-
-from transformers import is_torch_available
-from transformers.configuration_utils import PretrainedConfig
-from transformers.onnx import OnnxConfigWithPast
-from transformers.utils import logging
-
-
-logger = logging.get_logger(__name__)
-
-T5_PRETRAINED_CONFIG_ARCHIVE_MAP = {
- "T5-small": "https://huggingface.co/T5-small/resolve/main/config.json",
- "T5-base": "https://huggingface.co/T5-base/resolve/main/config.json",
- "T5-large": "https://huggingface.co/T5-large/resolve/main/config.json",
- "T5-3b": "https://huggingface.co/T5-3b/resolve/main/config.json",
- "T5-11b": "https://huggingface.co/T5-11b/resolve/main/config.json",
-}
-
-
-class T5Config(PretrainedConfig):
- r"""
- This is the configuration class to store the configuration of a :class:`~transformers.T5Model` or a
- :class:`~transformers.TFT5Model`. It is used to instantiate a T5 model according to the specified arguments,
- defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration
- to that of the T5 `T5-small `__ architecture.
-
- Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model
- outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information.
-
- Arguments:
- vocab_size (:obj:`int`, `optional`, defaults to 32128):
- Vocabulary size of the T5 model. Defines the number of different tokens that can be represented by the
- :obj:`inputs_ids` passed when calling :class:`~transformers.T5Model` or :class:`~transformers.TFT5Model`.
- d_model (:obj:`int`, `optional`, defaults to 512):
- Size of the encoder layers and the pooler layer.
- d_kv (:obj:`int`, `optional`, defaults to 64):
- Size of the key, query, value projections per attention head. :obj:`d_kv` has to be equal to :obj:`d_model
- // num_heads`.
- d_ff (:obj:`int`, `optional`, defaults to 2048):
- Size of the intermediate feed forward layer in each :obj:`T5Block`.
- num_layers (:obj:`int`, `optional`, defaults to 6):
- Number of hidden layers in the Transformer encoder.
- num_decoder_layers (:obj:`int`, `optional`):
- Number of hidden layers in the Transformer decoder. Will use the same value as :obj:`num_layers` if not
- set.
- num_heads (:obj:`int`, `optional`, defaults to 8):
- Number of attention heads for each attention layer in the Transformer encoder.
- relative_attention_num_buckets (:obj:`int`, `optional`, defaults to 32):
- The number of buckets to use for each attention layer.
- dropout_rate (:obj:`float`, `optional`, defaults to 0.1):
- The ratio for all dropout layers.
- layer_norm_eps (:obj:`float`, `optional`, defaults to 1e-6):
- The epsilon used by the layer normalization layers.
- initializer_factor (:obj:`float`, `optional`, defaults to 1):
- A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
- testing).
- feed_forward_proj (:obj:`string`, `optional`, defaults to :obj:`"relu"`):
- Type of feed forward layer to be used. Should be one of :obj:`"relu"` or :obj:`"gated-gelu"`. T5v1.1 uses
- the :obj:`"gated-gelu"` feed forward projection. Original T5 uses :obj:`"relu"`.
- use_cache (:obj:`bool`, `optional`, defaults to :obj:`True`):
- Whether or not the model should return the last key/values attentions (not used by all models).
- gradient_checkpointing (:obj:`bool`, `optional`, defaults to :obj:`False`):
- If True, use gradient checkpointing to save memory at the expense of slower backward pass.
- """
- model_type = "T5"
- keys_to_ignore_at_inference = ["past_key_values"]
-
- def __init__(
- self,
- vocab_size=32128,
- d_model=512,
- d_kv=64,
- d_ff=2048,
- num_layers=6,
- num_decoder_layers=None,
- num_heads=8,
- relative_attention_num_buckets=32,
- dropout_rate=0.1,
- layer_norm_epsilon=1e-5,
- initializer_factor=1.0,
- feed_forward_proj="gelu",
- is_encoder_decoder=True,
- use_cache=True,
- pad_token_id=0,
- eos_token_id=1,
- gradient_checkpointing=False,
- **kwargs
- ):
- super().__init__(
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- is_encoder_decoder=is_encoder_decoder,
- **kwargs,
- )
- self.vocab_size = vocab_size
- self.d_model = d_model
- self.d_kv = d_kv
- self.d_ff = d_ff
- self.num_layers = num_layers
- self.num_decoder_layers = (
- num_decoder_layers if num_decoder_layers is not None else self.num_layers
- ) # default = symmetry
- self.num_heads = num_heads
- self.relative_attention_num_buckets = relative_attention_num_buckets
- self.dropout_rate = dropout_rate
- self.layer_norm_epsilon = layer_norm_epsilon
- self.initializer_factor = initializer_factor
- self.feed_forward_proj = feed_forward_proj
- self.use_cache = use_cache
- self.gradient_checkpointing = gradient_checkpointing
-
- @property
- def hidden_size(self):
- return self.d_model
-
- @property
- def num_attention_heads(self):
- return self.num_heads
-
- @property
- def num_hidden_layers(self):
- return self.num_layers
-
-
-class T5OnnxConfig(OnnxConfigWithPast):
- @property
- def inputs(self) -> Mapping[str, Mapping[int, str]]:
- common_inputs = OrderedDict(
- [
- ("input_ids", {0: "batch", 1: "encoder_sequence"}),
- ("attention_mask", {0: "batch", 1: "encoder_sequence"}),
- ("decoder_input_ids", {0: "batch"}),
- ("decoder_attention_mask", {0: "batch"}),
- ]
- )
-
- if self.use_past:
- for i in range(0, self._config.num_layers):
- common_inputs[f"past_key_values.{i}.decoder.key"] = {
- 0: "batch", 2: "past_sequence"}
- common_inputs[f"past_key_values.{i}.decoder.value"] = {
- 0: "batch", 2: "past_sequence"}
- common_inputs[f"past_key_values.{i}.encoder.key"] = {
- 0: "batch", 2: "past_sequence"}
- common_inputs[f"past_key_values.{i}.encoder.value"] = {
- 0: "batch", 2: "past_sequence"}
-
- return common_inputs
-
- @property
- def outputs(self) -> Mapping[str, Mapping[int, str]]:
- common_outputs = super().outputs
-
- if "last_hidden_state" in common_outputs:
- common_outputs["last_hidden_state"] = {
- 0: "batch", 1: "decoder_sequence"}
-
- if self.use_past:
- for i in range(self._config.num_layers):
- common_outputs[f"present.{i}.decoder.key"] = {
- 0: "batch", 2: "decoder_sequence"}
- common_outputs[f"present.{i}.decoder.value"] = {
- 0: "batch", 2: "decoder_sequence"}
- common_outputs[f"present.{i}.encoder.key"] = {
- 0: "batch", 2: "encoder_sequence"}
- common_outputs[f"present.{i}.encoder.value"] = {
- 0: "batch", 2: "encoder_sequence"}
-
- if self.task == "default":
- common_outputs["encoder_last_hidden_state"] = {
- 0: "batch", 2: "encoder_sequence"}
-
- return common_outputs
-
- def generate_dummy_inputs(
- self,
- tokenizer: PreTrainedTokenizer,
- batch_size: int = -1,
- seq_length: int = -1,
- is_pair: bool = False,
- framework: Optional[TensorType] = None,
- ) -> Mapping[str, Any]:
-
- # Generate encoder inputs
- encoder_inputs = super().generate_dummy_inputs(
- tokenizer, batch_size, seq_length, is_pair, framework)
-
- # Generate decoder inputs
- decoder_inputs = super().generate_dummy_inputs(
- tokenizer, batch_size, 1, is_pair, framework)
- decoder_inputs = {f"decoder_{name}": tensor for name,
- tensor in decoder_inputs.items()}
-
- ordered_inputs = dict(**encoder_inputs, **decoder_inputs)
- if self.use_past:
- if not is_torch_available():
- raise ValueError(
- "Cannot generate dummy past_keys inputs without PyTorch installed.")
- else:
- import torch
- batch = encoder_inputs["input_ids"].shape[0]
- encoder_seq_length = encoder_inputs["input_ids"].shape[1]
- encoder_shape = (
- batch,
- self._config.num_heads,
- encoder_seq_length,
- self._config.hidden_size // self._config.num_heads,
- )
- decoder_shape = (batch, self._config.num_heads, 1,
- self._config.hidden_size // self._config.num_heads)
-
- ordered_inputs["past_key_values"] = []
- for _ in range(self._config.num_layers):
- ordered_inputs["past_key_values"].append(
- (
- torch.zeros(decoder_shape),
- torch.zeros(decoder_shape),
- torch.zeros(encoder_shape),
- torch.zeros(encoder_shape),
- )
- )
-
- return ordered_inputs
-
- @staticmethod
- def flatten_output_collection_property(name: str, field: Iterable[Any]) -> Dict[str, Any]:
- if name in ["present", "past_key_values"]:
- flatten_output = {}
- for idx, t in enumerate(field):
- flatten_output[f"{name}.{idx}.decoder.key"] = t[0]
- flatten_output[f"{name}.{idx}.decoder.value"] = t[1]
- flatten_output[f"{name}.{idx}.encoder.key"] = t[2]
- flatten_output[f"{name}.{idx}.encoder.value"] = t[3]
-
- return flatten_output
-
- return super().flatten_output_collection_property(name, field)
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/.github/ISSUE_TEMPLATE/feature_request.md
deleted file mode 100644
index 93c8668041f8a7af29e4c11e905d8b56b946dd51..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/.github/ISSUE_TEMPLATE/feature_request.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-name: 🚀 Feature Request
-about: Submit a proposal/request for a new feature
-labels: 'enhancement, help wanted, needs triage'
----
-
-## 🚀 Feature Request
-
-
-### Motivation
-
-
-
-### Pitch
-
-
-
-### Alternatives
-
-
-
-### Additional context
-
-
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py
deleted file mode 100644
index 9db779396f492e3f71b08d7b895beb81d8e46bc9..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py
+++ /dev/null
@@ -1,191 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import itertools
-import logging
-import re
-import time
-
-from g2p_en import G2p
-
-logger = logging.getLogger(__name__)
-
-FAIL_SENT = "FAILED_SENTENCE"
-
-
-def parse():
- parser = argparse.ArgumentParser()
- parser.add_argument("--data-path", type=str, required=True)
- parser.add_argument("--out-path", type=str, required=True)
- parser.add_argument("--lower-case", action="store_true")
- parser.add_argument("--do-filter", action="store_true")
- parser.add_argument("--use-word-start", action="store_true")
- parser.add_argument("--dup-vowel", default=1, type=int)
- parser.add_argument("--dup-consonant", default=1, type=int)
- parser.add_argument("--no-punc", action="store_true")
- parser.add_argument("--reserve-word", type=str, default="")
- parser.add_argument(
- "--reserve-first-column",
- action="store_true",
- help="first column is sentence id",
- )
- ###
- parser.add_argument("--parallel-process-num", default=1, type=int)
- parser.add_argument("--logdir", default="")
- args = parser.parse_args()
- return args
-
-
-def process_sent(sent, g2p, res_wrds, args):
- sents = pre_process_sent(sent, args.do_filter, args.lower_case, res_wrds)
- pho_seqs = [do_g2p(g2p, s, res_wrds, i == 0) for i, s in enumerate(sents)]
- pho_seq = (
- [FAIL_SENT]
- if [FAIL_SENT] in pho_seqs
- else list(itertools.chain.from_iterable(pho_seqs))
- )
- if args.no_punc:
- pho_seq = remove_punc(pho_seq)
- if args.dup_vowel > 1 or args.dup_consonant > 1:
- pho_seq = dup_pho(pho_seq, args.dup_vowel, args.dup_consonant)
- if args.use_word_start:
- pho_seq = add_word_start(pho_seq)
- return " ".join(pho_seq)
-
-
-def remove_punc(sent):
- ns = []
- regex = re.compile("[^a-zA-Z0-9 ]")
- for p in sent:
- if (not regex.search(p)) or p == FAIL_SENT:
- if p == " " and (len(ns) == 0 or ns[-1] == " "):
- continue
- ns.append(p)
- return ns
-
-
-def do_g2p(g2p, sent, res_wrds, is_first_sent):
- if sent in res_wrds:
- pho_seq = [res_wrds[sent]]
- else:
- pho_seq = g2p(sent)
- if not is_first_sent:
- pho_seq = [" "] + pho_seq # add space to separate
- return pho_seq
-
-
-def pre_process_sent(sent, do_filter, lower_case, res_wrds):
- if do_filter:
- sent = re.sub("-", " ", sent)
- sent = re.sub("—", " ", sent)
- if len(res_wrds) > 0:
- wrds = sent.split()
- wrds = ["SPLIT_ME " + w + " SPLIT_ME" if w in res_wrds else w for w in wrds]
- sents = [x.strip() for x in " ".join(wrds).split("SPLIT_ME") if x.strip() != ""]
- else:
- sents = [sent]
- if lower_case:
- sents = [s.lower() if s not in res_wrds else s for s in sents]
- return sents
-
-
-def dup_pho(sent, dup_v_num, dup_c_num):
- """
- duplicate phoneme defined as cmudict
- http://www.speech.cs.cmu.edu/cgi-bin/cmudict
- """
- if dup_v_num == 1 and dup_c_num == 1:
- return sent
- ns = []
- for p in sent:
- ns.append(p)
- if re.search(r"\d$", p):
- for i in range(1, dup_v_num):
- ns.append(f"{p}-{i}P")
- elif re.search(r"\w", p):
- for i in range(1, dup_c_num):
- ns.append(f"{p}-{i}P")
- return ns
-
-
-def add_word_start(sent):
- ns = []
- do_add = True
- ws = "▁"
- for p in sent:
- if do_add:
- p = ws + p
- do_add = False
- if p == " ":
- do_add = True
- else:
- ns.append(p)
- return ns
-
-
-def load_reserve_word(reserve_word):
- if reserve_word == "":
- return []
- with open(reserve_word, "r") as fp:
- res_wrds = [x.strip().split() for x in fp.readlines() if x.strip() != ""]
- assert sum([0 if len(x) == 2 else 1 for x in res_wrds]) == 0
- res_wrds = dict(res_wrds)
- return res_wrds
-
-
-def process_sents(sents, args):
- g2p = G2p()
- out_sents = []
- res_wrds = load_reserve_word(args.reserve_word)
- for sent in sents:
- col1 = ""
- if args.reserve_first_column:
- col1, sent = sent.split(None, 1)
- sent = process_sent(sent, g2p, res_wrds, args)
- if args.reserve_first_column and col1 != "":
- sent = f"{col1} {sent}"
- out_sents.append(sent)
- return out_sents
-
-
-def main():
- args = parse()
- out_sents = []
- with open(args.data_path, "r") as fp:
- sent_list = [x.strip() for x in fp.readlines()]
- if args.parallel_process_num > 1:
- try:
- import submitit
- except ImportError:
- logger.warn(
- "submitit is not found and only one job is used to process the data"
- )
- submitit = None
-
- if args.parallel_process_num == 1 or submitit is None:
- out_sents = process_sents(sent_list, args)
- else:
- # process sentences with parallel computation
- lsize = len(sent_list) // args.parallel_process_num + 1
- executor = submitit.AutoExecutor(folder=args.logdir)
- executor.update_parameters(timeout_min=1000, cpus_per_task=4)
- jobs = []
- for i in range(args.parallel_process_num):
- job = executor.submit(
- process_sents, sent_list[lsize * i : lsize * (i + 1)], args
- )
- jobs.append(job)
- is_running = True
- while is_running:
- time.sleep(5)
- is_running = sum([job.done() for job in jobs]) < len(jobs)
- out_sents = list(itertools.chain.from_iterable([job.result() for job in jobs]))
- with open(args.out_path, "w") as fp:
- fp.write("\n".join(out_sents) + "\n")
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/hifi_gan/inference.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/hifi_gan/inference.py
deleted file mode 100644
index c70ee09b4110677b7cf9732d76a5e6ca93c8860c..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/hifi_gan/inference.py
+++ /dev/null
@@ -1,98 +0,0 @@
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import glob
-import os
-import argparse
-import json
-import torch
-from scipy.io.wavfile import write
-from env import AttrDict
-from meldataset import mel_spectrogram, MAX_WAV_VALUE, load_wav
-from models import Generator
-
-h = None
-device = None
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def get_mel(x):
- return mel_spectrogram(
- x, h.n_fft, h.num_mels, h.sampling_rate, h.hop_size, h.win_size, h.fmin, h.fmax
- )
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + "*")
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return ""
- return sorted(cp_list)[-1]
-
-
-def inference(a):
- generator = Generator(h).to(device)
-
- state_dict_g = load_checkpoint(a.checkpoint_file, device)
- generator.load_state_dict(state_dict_g["generator"])
-
- filelist = os.listdir(a.input_wavs_dir)
-
- os.makedirs(a.output_dir, exist_ok=True)
-
- generator.eval()
- generator.remove_weight_norm()
- with torch.no_grad():
- for i, filname in enumerate(filelist):
- wav, sr = load_wav(os.path.join(a.input_wavs_dir, filname))
- wav = wav / MAX_WAV_VALUE
- wav = torch.FloatTensor(wav).to(device)
- x = get_mel(wav.unsqueeze(0))
- y_g_hat = generator(x)
- audio = y_g_hat.squeeze()
- audio = audio * MAX_WAV_VALUE
- audio = audio.cpu().numpy().astype("int16")
-
- output_file = os.path.join(
- a.output_dir, os.path.splitext(filname)[0] + "_generated.wav"
- )
- write(output_file, h.sampling_rate, audio)
- print(output_file)
-
-
-def main():
- print("Initializing Inference Process..")
-
- parser = argparse.ArgumentParser()
- parser.add_argument("--input_wavs_dir", default="test_files")
- parser.add_argument("--output_dir", default="generated_files")
- parser.add_argument("--checkpoint_file", required=True)
- a = parser.parse_args()
-
- config_file = os.path.join(os.path.split(a.checkpoint_file)[0], "config.json")
- with open(config_file) as f:
- data = f.read()
-
- global h
- json_config = json.loads(data)
- h = AttrDict(json_config)
-
- torch.manual_seed(h.seed)
- global device
- if torch.cuda.is_available():
- torch.cuda.manual_seed(h.seed)
- device = torch.device("cuda")
- else:
- device = torch.device("cpu")
-
- inference(a)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Harveenchadha/speech2speech/README.md b/spaces/Harveenchadha/speech2speech/README.md
deleted file mode 100644
index 1c8c0ea015b36c4a860735f50c85ca292b4c80cb..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/speech2speech/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Speech2speech
-emoji: 🐨
-colorFrom: pink
-colorTo: pink
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/HighCWu/Style2Paints-4-Gradio/ui/web-mobile/index.html b/spaces/HighCWu/Style2Paints-4-Gradio/ui/web-mobile/index.html
deleted file mode 100644
index 3eec2efc2d9549ac758731ae4af61c9d8362cd77..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/Style2Paints-4-Gradio/ui/web-mobile/index.html
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
-
-
-
- Cocos Creator | Style2Paints
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/spaces/Hise/rvc-hololive-models/infer_pack/commons.py b/spaces/Hise/rvc-hololive-models/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/Hise/rvc-hololive-models/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
diff --git a/spaces/Hoodady/3DFuse/my/__init__.py b/spaces/Hoodady/3DFuse/my/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/HugoDzz/super-godot-galaxy/build/_app/immutable/entry/start.d0a82aef.js b/spaces/HugoDzz/super-godot-galaxy/build/_app/immutable/entry/start.d0a82aef.js
deleted file mode 100644
index 9faae3baec1b8899fc58059f72da202b8c8623da..0000000000000000000000000000000000000000
--- a/spaces/HugoDzz/super-godot-galaxy/build/_app/immutable/entry/start.d0a82aef.js
+++ /dev/null
@@ -1,3 +0,0 @@
-import{o as Ce,t as ye}from"../chunks/index.9af7eb9c.js";import{S as Ge,a as Je,I as q,g as De,f as qe,b as we,c as le,s as V,i as _e,d as Q,e as J,P as Fe,h as We}from"../chunks/singletons.1f11d8d9.js";function Xe(t,o){return t==="/"||o==="ignore"?t:o==="never"?t.endsWith("/")?t.slice(0,-1):t:o==="always"&&!t.endsWith("/")?t+"/":t}function Ze(t){return t.split("%25").map(decodeURI).join("%25")}function Qe(t){for(const o in t)t[o]=decodeURIComponent(t[o]);return t}const et=["href","pathname","search","searchParams","toString","toJSON"];function tt(t,o){const u=new URL(t);for(const i of et)Object.defineProperty(u,i,{get(){return o(),t[i]},enumerable:!0,configurable:!0});return nt(u),u}function nt(t){Object.defineProperty(t,"hash",{get(){throw new Error("Cannot access event.url.hash. Consider using `$page.url.hash` inside a component instead")}})}const at="/__data.json";function rt(t){return t.replace(/\/$/,"")+at}function Ke(t){try{return JSON.parse(sessionStorage[t])}catch{}}function Me(t,o){const u=JSON.stringify(o);try{sessionStorage[t]=u}catch{}}function ot(...t){let o=5381;for(const u of t)if(typeof u=="string"){let i=u.length;for(;i;)o=o*33^u.charCodeAt(--i)}else if(ArrayBuffer.isView(u)){const i=new Uint8Array(u.buffer,u.byteOffset,u.byteLength);let d=i.length;for(;d;)o=o*33^i[--d]}else throw new TypeError("value must be a string or TypedArray");return(o>>>0).toString(36)}const fe=window.fetch;window.fetch=(t,o)=>((t instanceof Request?t.method:(o==null?void 0:o.method)||"GET")!=="GET"&&te.delete(Se(t)),fe(t,o));const te=new Map;function it(t,o){const u=Se(t,o),i=document.querySelector(u);if(i!=null&&i.textContent){const{body:d,...f}=JSON.parse(i.textContent),S=i.getAttribute("data-ttl");return S&&te.set(u,{body:d,init:f,ttl:1e3*Number(S)}),Promise.resolve(new Response(d,f))}return fe(t,o)}function st(t,o,u){if(te.size>0){const i=Se(t,u),d=te.get(i);if(d){if(performance.now(){const d=/^\[\.\.\.(\w+)(?:=(\w+))?\]$/.exec(i);if(d)return o.push({name:d[1],matcher:d[2],optional:!1,rest:!0,chained:!0}),"(?:/(.*))?";const f=/^\[\[(\w+)(?:=(\w+))?\]\]$/.exec(i);if(f)return o.push({name:f[1],matcher:f[2],optional:!0,rest:!1,chained:!0}),"(?:/([^/]+))?";if(!i)return;const S=i.split(/\[(.+?)\](?!\])/);return"/"+S.map((b,w)=>{if(w%2){if(b.startsWith("x+"))return be(String.fromCharCode(parseInt(b.slice(2),16)));if(b.startsWith("u+"))return be(String.fromCharCode(...b.slice(2).split("-").map(P=>parseInt(P,16))));const p=ct.exec(b);if(!p)throw new Error(`Invalid param: ${b}. Params and matcher names can only have underscores and alphanumeric characters.`);const[,C,N,k,T]=p;return o.push({name:k,matcher:T,optional:!!C,rest:!!N,chained:N?w===1&&S[0]==="":!1}),N?"(.*?)":C?"([^/]*)?":"([^/]+?)"}return be(b)}).join("")}).join("")}/?$`),params:o}}function ft(t){return!/^\([^)]+\)$/.test(t)}function ut(t){return t.slice(1).split("/").filter(ft)}function dt(t,o,u){const i={},d=t.slice(1);let f=0;for(let S=0;Sw).join("/"),f=0;continue}if(b===void 0){l.rest&&(i[l.name]="");continue}if(!l.matcher||u[l.matcher](b)){i[l.name]=b;const w=o[S+1],p=d[S+1];w&&!w.rest&&w.optional&&p&&(f=0);continue}if(l.optional&&l.chained){f++;continue}return}if(!f)return i}function be(t){return t.normalize().replace(/[[\]]/g,"\\$&").replace(/%/g,"%25").replace(/\//g,"%2[Ff]").replace(/\?/g,"%3[Ff]").replace(/#/g,"%23").replace(/[.*+?^${}()|\\]/g,"\\$&")}function pt({nodes:t,server_loads:o,dictionary:u,matchers:i}){const d=new Set(o);return Object.entries(u).map(([l,[b,w,p]])=>{const{pattern:C,params:N}=lt(l),k={id:l,exec:T=>{const P=C.exec(T);if(P)return dt(P,N,i)},errors:[1,...p||[]].map(T=>t[T]),layouts:[0,...w||[]].map(S),leaf:f(b)};return k.errors.length=k.layouts.length=Math.max(k.errors.length,k.layouts.length),k});function f(l){const b=l<0;return b&&(l=~l),[b,t[l]]}function S(l){return l===void 0?l:[d.has(l),t[l]]}}let ee=class{constructor(o,u){this.status=o,typeof u=="string"?this.body={message:u}:u?this.body=u:this.body={message:`Error: ${o}`}}toString(){return JSON.stringify(this.body)}},Ve=class{constructor(o,u){this.status=o,this.location=u}};async function ht(t){var o;for(const u in t)if(typeof((o=t[u])==null?void 0:o.then)=="function")return Object.fromEntries(await Promise.all(Object.entries(t).map(async([i,d])=>[i,await d])));return t}Object.getOwnPropertyNames(Object.prototype).sort().join("\0");const gt=-1,mt=-2,yt=-3,wt=-4,_t=-5,bt=-6;function vt(t,o){if(typeof t=="number")return d(t,!0);if(!Array.isArray(t)||t.length===0)throw new Error("Invalid input");const u=t,i=Array(u.length);function d(f,S=!1){if(f===gt)return;if(f===yt)return NaN;if(f===wt)return 1/0;if(f===_t)return-1/0;if(f===bt)return-0;if(S)throw new Error("Invalid input");if(f in i)return i[f];const l=u[f];if(!l||typeof l!="object")i[f]=l;else if(Array.isArray(l))if(typeof l[0]=="string"){const b=l[0],w=o==null?void 0:o[b];if(w)return i[f]=w(d(l[1]));switch(b){case"Date":i[f]=new Date(l[1]);break;case"Set":const p=new Set;i[f]=p;for(let k=1;ko!=null)}const kt="x-sveltekit-invalidated",K=Ke(Ge)??{},Z=Ke(Je)??{};function ve(t){K[t]=Q()}function Rt(t,o){var xe;const u=pt(t),i=t.nodes[0],d=t.nodes[1];i(),d();const f=document.documentElement,S=[],l=[];let b=null;const w={before_navigate:[],after_navigate:[]};let p={branch:[],error:null,url:null},C=!1,N=!1,k=!0,T=!1,P=!1,z=!1,H=!1,F,j=(xe=history.state)==null?void 0:xe[q];j||(j=Date.now(),history.replaceState({...history.state,[q]:j},"",location.href));const ue=K[j];ue&&(history.scrollRestoration="manual",scrollTo(ue.x,ue.y));let M,ne,ae;async function ke(){ae=ae||Promise.resolve(),await ae,ae=null;const e=new URL(location.href),n=W(e,!0);b=null;const r=ne={},a=n&&await he(n);if(r===ne&&a){if(a.type==="redirect")return re(new URL(a.location,e).href,{},[e.pathname],r);a.props.page!==void 0&&(M=a.props.page),F.$set(a.props)}}function Re(e){l.some(n=>n==null?void 0:n.snapshot)&&(Z[e]=l.map(n=>{var r;return(r=n==null?void 0:n.snapshot)==null?void 0:r.capture()}))}function Ae(e){var n;(n=Z[e])==null||n.forEach((r,a)=>{var s,c;(c=(s=l[a])==null?void 0:s.snapshot)==null||c.restore(r)})}function Le(){ve(j),Me(Ge,K),Re(j),Me(Je,Z)}async function re(e,{noScroll:n=!1,replaceState:r=!1,keepFocus:a=!1,state:s={},invalidateAll:c=!1},g,m){return typeof e=="string"&&(e=new URL(e,De(document))),ce({url:e,scroll:n?Q():null,keepfocus:a,redirect_chain:g,details:{state:s,replaceState:r},nav_token:m,accepted:()=>{c&&(H=!0)},blocked:()=>{},type:"goto"})}async function Oe(e){return b={id:e.id,promise:he(e).then(n=>(n.type==="loaded"&&n.state.error&&(b=null),n))},b.promise}async function oe(...e){const r=u.filter(a=>e.some(s=>a.exec(s))).map(a=>Promise.all([...a.layouts,a.leaf].map(s=>s==null?void 0:s[1]())));await Promise.all(r)}function Ie(e){var a;p=e.state;const n=document.querySelector("style[data-sveltekit]");n&&n.remove(),M=e.props.page,F=new t.root({target:o,props:{...e.props,stores:V,components:l},hydrate:!0}),Ae(j);const r={from:null,to:{params:p.params,route:{id:((a=p.route)==null?void 0:a.id)??null},url:new URL(location.href)},willUnload:!1,type:"enter"};w.after_navigate.forEach(s=>s(r)),N=!0}async function Y({url:e,params:n,branch:r,status:a,error:s,route:c,form:g}){let m="never";for(const _ of r)(_==null?void 0:_.slash)!==void 0&&(m=_.slash);e.pathname=Xe(e.pathname,m),e.search=e.search;const v={type:"loaded",state:{url:e,params:n,branch:r,error:s,route:c},props:{constructors:St(r).map(_=>_.node.component)}};g!==void 0&&(v.props.form=g);let y={},R=!M,L=0;for(let _=0;_(m.params.add(U),h[U])}),data:(c==null?void 0:c.data)??null,url:tt(r,()=>{m.url=!0}),async fetch(h,U){let x;h instanceof Request?(x=h.url,U={body:h.method==="GET"||h.method==="HEAD"?void 0:await h.blob(),cache:h.cache,credentials:h.credentials,headers:h.headers,integrity:h.integrity,keepalive:h.keepalive,method:h.method,mode:h.mode,redirect:h.redirect,referrer:h.referrer,referrerPolicy:h.referrerPolicy,signal:h.signal,...U}):x=h;const D=new URL(x,r);return I(D.href),D.origin===r.origin&&(x=D.href.slice(r.origin.length)),N?st(x,D.href,U):it(x,U)},setHeaders:()=>{},depends:I,parent(){return m.parent=!0,n()}};g=await v.universal.load.call(null,_)??null,g=g?await ht(g):null}return{node:v,loader:e,server:c,universal:(R=v.universal)!=null&&R.load?{type:"data",data:g,uses:m}:null,data:g??(c==null?void 0:c.data)??null,slash:((L=v.universal)==null?void 0:L.trailingSlash)??(c==null?void 0:c.slash)}}function Pe(e,n,r,a,s){if(H)return!0;if(!a)return!1;if(a.parent&&e||a.route&&n||a.url&&r)return!0;for(const c of a.params)if(s[c]!==p.params[c])return!0;for(const c of a.dependencies)if(S.some(g=>g(new URL(c))))return!0;return!1}function pe(e,n){return(e==null?void 0:e.type)==="data"?e:(e==null?void 0:e.type)==="skip"?n??null:null}async function he({id:e,invalidating:n,url:r,params:a,route:s}){if((b==null?void 0:b.id)===e)return b.promise;const{errors:c,layouts:g,leaf:m}=s,v=[...g,m];c.forEach(E=>E==null?void 0:E().catch(()=>{})),v.forEach(E=>E==null?void 0:E[1]().catch(()=>{}));let y=null;const R=p.url?e!==p.url.pathname+p.url.search:!1,L=p.route?s.id!==p.route.id:!1;let I=!1;const _=v.map((E,O)=>{var G;const A=p.branch[O],$=!!(E!=null&&E[0])&&((A==null?void 0:A.loader)!==E[1]||Pe(I,L,R,(G=A.server)==null?void 0:G.uses,a));return $&&(I=!0),$});if(_.some(Boolean)){try{y=await He(r,_)}catch(E){return ie({status:E instanceof ee?E.status:500,error:await X(E,{url:r,params:a,route:{id:s.id}}),url:r,route:s})}if(y.type==="redirect")return y}const h=y==null?void 0:y.nodes;let U=!1;const x=v.map(async(E,O)=>{var ge;if(!E)return;const A=p.branch[O],$=h==null?void 0:h[O];if((!$||$.type==="skip")&&E[1]===(A==null?void 0:A.loader)&&!Pe(U,L,R,(ge=A.universal)==null?void 0:ge.uses,a))return A;if(U=!0,($==null?void 0:$.type)==="error")throw $;return de({loader:E[1],url:r,params:a,route:s,parent:async()=>{var $e;const je={};for(let me=0;me{});const D=[];for(let E=0;EPromise.resolve({}),server_data_node:pe(c)}),v={node:await d(),loader:d,universal:null,server:null,data:null};return await Y({url:r,params:s,branch:[m,v],status:e,error:n,route:null})}function W(e,n){if(_e(e,J))return;const r=se(e);for(const a of u){const s=a.exec(r);if(s)return{id:e.pathname+e.search,invalidating:n,route:a,params:Qe(s),url:e}}}function se(e){return Ze(e.pathname.slice(J.length)||"/")}function Ne({url:e,type:n,intent:r,delta:a}){var m,v;let s=!1;const c={from:{params:p.params,route:{id:((m=p.route)==null?void 0:m.id)??null},url:p.url},to:{params:(r==null?void 0:r.params)??null,route:{id:((v=r==null?void 0:r.route)==null?void 0:v.id)??null},url:e},willUnload:!r,type:n};a!==void 0&&(c.delta=a);const g={...c,cancel:()=>{s=!0}};return P||w.before_navigate.forEach(y=>y(g)),s?null:c}async function ce({url:e,scroll:n,keepfocus:r,redirect_chain:a,details:s,type:c,delta:g,nav_token:m={},accepted:v,blocked:y}){var x,D,E;const R=W(e,!1),L=Ne({url:e,type:c,delta:g,intent:R});if(!L){y();return}const I=j;v(),P=!0,N&&V.navigating.set(L),ne=m;let _=R&&await he(R);if(!_){if(_e(e,J))return await B(e);_=await Te(e,{id:null},await X(new Error(`Not found: ${e.pathname}`),{url:e,params:{},route:{id:null}}),404)}if(e=(R==null?void 0:R.url)||e,ne!==m)return!1;if(_.type==="redirect")if(a.length>10||a.includes(e.pathname))_=await ie({status:500,error:await X(new Error("Redirect loop"),{url:e,params:{},route:{id:null}}),url:e,route:{id:null}});else return re(new URL(_.location,e).href,{},[...a,e.pathname],m),!1;else((x=_.props.page)==null?void 0:x.status)>=400&&await V.updated.check()&&await B(e);if(S.length=0,H=!1,T=!0,ve(I),Re(I),(D=_.props.page)!=null&&D.url&&_.props.page.url.pathname!==e.pathname&&(e.pathname=(E=_.props.page)==null?void 0:E.url.pathname),s){const O=s.replaceState?0:1;if(s.state[q]=j+=O,history[s.replaceState?"replaceState":"pushState"](s.state,"",e),!s.replaceState){let A=j+1;for(;Z[A]||K[A];)delete Z[A],delete K[A],A+=1}}b=null,N?(p=_.state,_.props.page&&(_.props.page.url=e),F.$set(_.props)):Ie(_);const{activeElement:h}=document;if(await ye(),k){const O=e.hash&&document.getElementById(decodeURIComponent(e.hash.slice(1)));n?scrollTo(n.x,n.y):O?O.scrollIntoView():scrollTo(0,0)}const U=document.activeElement!==h&&document.activeElement!==document.body;!r&&!U&&Ee(),k=!0,_.props.page&&(M=_.props.page),P=!1,c==="popstate"&&Ae(j),w.after_navigate.forEach(O=>O(L)),V.navigating.set(null),T=!1}async function Te(e,n,r,a){return e.origin===location.origin&&e.pathname===location.pathname&&!C?await ie({status:a,error:r,url:e,route:n}):await B(e)}function B(e){return location.href=e.href,new Promise(()=>{})}function Ye(){let e;f.addEventListener("mousemove",c=>{const g=c.target;clearTimeout(e),e=setTimeout(()=>{a(g,2)},20)});function n(c){a(c.composedPath()[0],1)}f.addEventListener("mousedown",n),f.addEventListener("touchstart",n,{passive:!0});const r=new IntersectionObserver(c=>{for(const g of c)g.isIntersecting&&(oe(se(new URL(g.target.href))),r.unobserve(g.target))},{threshold:0});function a(c,g){const m=qe(c,f);if(!m)return;const{url:v,external:y,download:R}=we(m,J);if(y||R)return;const L=le(m);if(!L.reload)if(g<=L.preload_data){const I=W(v,!1);I&&Oe(I)}else g<=L.preload_code&&oe(se(v))}function s(){r.disconnect();for(const c of f.querySelectorAll("a")){const{url:g,external:m,download:v}=we(c,J);if(m||v)continue;const y=le(c);y.reload||(y.preload_code===Fe.viewport&&r.observe(c),y.preload_code===Fe.eager&&oe(se(g)))}}w.after_navigate.push(s),s()}function X(e,n){return e instanceof ee?e.body:t.hooks.handleError({error:e,event:n})??{message:n.route.id!=null?"Internal Error":"Not Found"}}return{after_navigate:e=>{Ce(()=>(w.after_navigate.push(e),()=>{const n=w.after_navigate.indexOf(e);w.after_navigate.splice(n,1)}))},before_navigate:e=>{Ce(()=>(w.before_navigate.push(e),()=>{const n=w.before_navigate.indexOf(e);w.before_navigate.splice(n,1)}))},disable_scroll_handling:()=>{(T||!N)&&(k=!1)},goto:(e,n={})=>re(e,n,[]),invalidate:e=>{if(typeof e=="function")S.push(e);else{const{href:n}=new URL(e,location.href);S.push(r=>r.href===n)}return ke()},invalidate_all:()=>(H=!0,ke()),preload_data:async e=>{const n=new URL(e,De(document)),r=W(n,!1);if(!r)throw new Error(`Attempted to preload a URL that does not belong to this app: ${n}`);await Oe(r)},preload_code:oe,apply_action:async e=>{if(e.type==="error"){const n=new URL(location.href),{branch:r,route:a}=p;if(!a)return;const s=await Ue(p.branch.length,r,a.errors);if(s){const c=await Y({url:n,params:p.params,branch:r.slice(0,s.idx).concat(s.node),status:e.status??500,error:e.error,route:a});p=c.state,F.$set(c.props),ye().then(Ee)}}else e.type==="redirect"?re(e.location,{invalidateAll:!0},[]):(F.$set({form:null,page:{...M,form:e.data,status:e.status}}),await ye(),F.$set({form:e.data}),e.type==="success"&&Ee())},_start_router:()=>{var e;history.scrollRestoration="manual",addEventListener("beforeunload",n=>{var a;let r=!1;if(Le(),!P){const s={from:{params:p.params,route:{id:((a=p.route)==null?void 0:a.id)??null},url:p.url},to:null,willUnload:!0,type:"leave",cancel:()=>r=!0};w.before_navigate.forEach(c=>c(s))}r?(n.preventDefault(),n.returnValue=""):history.scrollRestoration="auto"}),addEventListener("visibilitychange",()=>{document.visibilityState==="hidden"&&Le()}),(e=navigator.connection)!=null&&e.saveData||Ye(),f.addEventListener("click",n=>{if(n.button||n.which!==1||n.metaKey||n.ctrlKey||n.shiftKey||n.altKey||n.defaultPrevented)return;const r=qe(n.composedPath()[0],f);if(!r)return;const{url:a,external:s,target:c,download:g}=we(r,J);if(!a)return;if(c==="_parent"||c==="_top"){if(window.parent!==window)return}else if(c&&c!=="_self")return;const m=le(r);if(!(r instanceof SVGAElement)&&a.protocol!==location.protocol&&!(a.protocol==="https:"||a.protocol==="http:")||g)return;if(s||m.reload){Ne({url:a,type:"link"})?P=!0:n.preventDefault();return}const[y,R]=a.href.split("#");if(R!==void 0&&y===location.href.split("#")[0]){if(z=!0,ve(j),p.url=a,V.page.set({...M,url:a}),V.page.notify(),!m.replace_state)return;z=!1,n.preventDefault()}ce({url:a,scroll:m.noscroll?Q():null,keepfocus:m.keep_focus??!1,redirect_chain:[],details:{state:{},replaceState:m.replace_state??a.href===location.href},accepted:()=>n.preventDefault(),blocked:()=>n.preventDefault(),type:"link"})}),f.addEventListener("submit",n=>{if(n.defaultPrevented)return;const r=HTMLFormElement.prototype.cloneNode.call(n.target),a=n.submitter;if(((a==null?void 0:a.formMethod)||r.method)!=="get")return;const c=new URL((a==null?void 0:a.hasAttribute("formaction"))&&(a==null?void 0:a.formAction)||r.action);if(_e(c,J))return;const g=n.target,{keep_focus:m,noscroll:v,reload:y,replace_state:R}=le(g);if(y)return;n.preventDefault(),n.stopPropagation();const L=new FormData(g),I=a==null?void 0:a.getAttribute("name");I&&L.append(I,(a==null?void 0:a.getAttribute("value"))??""),c.search=new URLSearchParams(L).toString(),ce({url:c,scroll:v?Q():null,keepfocus:m??!1,redirect_chain:[],details:{state:{},replaceState:R??c.href===location.href},nav_token:{},accepted:()=>{},blocked:()=>{},type:"form"})}),addEventListener("popstate",async n=>{var r;if((r=n.state)!=null&&r[q]){if(n.state[q]===j)return;const a=K[n.state[q]];if(p.url.href.split("#")[0]===location.href.split("#")[0]){K[j]=Q(),j=n.state[q],scrollTo(a.x,a.y);return}const s=n.state[q]-j;await ce({url:new URL(location.href),scroll:a,keepfocus:!1,redirect_chain:[],details:null,accepted:()=>{j=n.state[q]},blocked:()=>{history.go(-s)},type:"popstate",delta:s})}}),addEventListener("hashchange",()=>{z&&(z=!1,history.replaceState({...history.state,[q]:++j},"",location.href))});for(const n of document.querySelectorAll("link"))n.rel==="icon"&&(n.href=n.href);addEventListener("pageshow",n=>{n.persisted&&V.navigating.set(null)})},_hydrate:async({status:e=200,error:n,node_ids:r,params:a,route:s,data:c,form:g})=>{C=!0;const m=new URL(location.href);({params:a={},route:s={id:null}}=W(m,!1)||{});let v;try{const y=r.map(async(I,_)=>{const h=c[_];return h!=null&&h.uses&&(h.uses=Be(h.uses)),de({loader:t.nodes[I],url:m,params:a,route:s,parent:async()=>{const U={};for(let x=0;x<_;x+=1)Object.assign(U,(await y[x]).data);return U},server_data_node:pe(h)})}),R=await Promise.all(y),L=u.find(({id:I})=>I===s.id);if(L){const I=L.layouts;for(let _=0;_d?"1":"0").join(""));const i=await fe(u.href);if(!i.ok)throw new ee(i.status,await i.json());return new Promise(async d=>{var p;const f=new Map,S=i.body.getReader(),l=new TextDecoder;function b(C){return vt(C,{Promise:N=>new Promise((k,T)=>{f.set(N,{fulfil:k,reject:T})})})}let w="";for(;;){const{done:C,value:N}=await S.read();if(C&&!w)break;for(w+=!N&&w?`
-`:l.decode(N);;){const k=w.indexOf(`
-`);if(k===-1)break;const T=JSON.parse(w.slice(0,k));if(w=w.slice(k+1),T.type==="redirect")return d(T);if(T.type==="data")(p=T.nodes)==null||p.forEach(P=>{(P==null?void 0:P.type)==="data"&&(P.uses=Be(P.uses),P.data=b(P.data))}),d(T);else if(T.type==="chunk"){const{id:P,data:z,error:H}=T,F=f.get(P);f.delete(P),H?F.reject(b(H)):F.fulfil(b(z))}}}})}function Be(t){return{dependencies:new Set((t==null?void 0:t.dependencies)??[]),params:new Set((t==null?void 0:t.params)??[]),parent:!!(t!=null&&t.parent),route:!!(t!=null&&t.route),url:!!(t!=null&&t.url)}}function Ee(){const t=document.querySelector("[autofocus]");if(t)t.focus();else{const o=document.body,u=o.getAttribute("tabindex");o.tabIndex=-1,o.focus({preventScroll:!0,focusVisible:!1}),u!==null?o.setAttribute("tabindex",u):o.removeAttribute("tabindex");const i=getSelection();if(i&&i.type!=="None"){const d=[];for(let f=0;f{if(i.rangeCount===d.length){for(let f=0;f 0.5
- (up, down), (left, right) = self._get_limits(scaled_mask)
- self.mask = scaled_mask[up:down, left:right]
-
- y_center, x_center = self.image_center()
- mask_height, mask_width = self.mask.shape
- self.up = int(round(y_center - mask_height / 2))
- self.down = self.up + mask_height
- self.left = int(round(x_center - mask_width / 2))
- self.right = self.left + mask_width
- return self
-
- def crop_to_canvas(self, vertical=True, horizontal=True, inplace=False):
- if not inplace:
- cropped = deepcopy(self)
- cropped.crop_to_canvas(vertical=vertical, horizontal=horizontal, inplace=True)
- return cropped
-
- if vertical:
- if self.up >= self.height or self.down <= 0:
- self._clean()
- else:
- cut_up, cut_down = max(-self.up, 0), max(self.down - self.height, 0)
- if cut_up != 0:
- self.mask = self.mask[cut_up:]
- self.up = 0
- if cut_down != 0:
- self.mask = self.mask[:-cut_down]
- self.down = self.height
-
- if horizontal:
- if self.left >= self.width or self.right <= 0:
- self._clean()
- else:
- cut_left, cut_right = max(-self.left, 0), max(self.right - self.width, 0)
- if cut_left != 0:
- self.mask = self.mask[:, cut_left:]
- self.left = 0
- if cut_right != 0:
- self.mask = self.mask[:, :-cut_right]
- self.right = self.width
-
- return self
-
- def restore_full_mask(self, allow_crop=False):
- cropped = self.crop_to_canvas(inplace=allow_crop)
- mask = np.zeros((cropped.height, cropped.width), dtype=bool)
- mask[cropped.up:cropped.down, cropped.left:cropped.right] = cropped.mask
- return mask
-
- def shift(self, vertical=0, horizontal=0, inplace=False):
- if not inplace:
- shifted = deepcopy(self)
- return shifted.shift(vertical=vertical, horizontal=horizontal, inplace=True)
-
- self.up += vertical
- self.down += vertical
- self.left += horizontal
- self.right += horizontal
- return self
-
- def area(self):
- return self.mask.sum()
-
-
-class RigidnessMode(enum.Enum):
- soft = 0
- rigid = 1
-
-
-class SegmentationMask:
- def __init__(self, confidence_threshold=0.5, rigidness_mode=RigidnessMode.rigid,
- max_object_area=0.3, min_mask_area=0.02, downsample_levels=6, num_variants_per_mask=4,
- max_mask_intersection=0.5, max_foreground_coverage=0.5, max_foreground_intersection=0.5,
- max_hidden_area=0.2, max_scale_change=0.25, horizontal_flip=True,
- max_vertical_shift=0.1, position_shuffle=True):
- """
- :param confidence_threshold: float; threshold for confidence of the panoptic segmentator to allow for
- the instance.
- :param rigidness_mode: RigidnessMode object
- when soft, checks intersection only with the object from which the mask_object was produced
- when rigid, checks intersection with any foreground class object
- :param max_object_area: float; allowed upper bound for to be considered as mask_object.
- :param min_mask_area: float; lower bound for mask to be considered valid
- :param downsample_levels: int; defines width of the resized segmentation to obtain shifted masks;
- :param num_variants_per_mask: int; maximal number of the masks for the same object;
- :param max_mask_intersection: float; maximum allowed area fraction of intersection for 2 masks
- produced by horizontal shift of the same mask_object; higher value -> more diversity
- :param max_foreground_coverage: float; maximum allowed area fraction of intersection for foreground object to be
- covered by mask; lower value -> less the objects are covered
- :param max_foreground_intersection: float; maximum allowed area of intersection for the mask with foreground
- object; lower value -> mask is more on the background than on the objects
- :param max_hidden_area: upper bound on part of the object hidden by shifting object outside the screen area;
- :param max_scale_change: allowed scale change for the mask_object;
- :param horizontal_flip: if horizontal flips are allowed;
- :param max_vertical_shift: amount of vertical movement allowed;
- :param position_shuffle: shuffle
- """
-
- assert DETECTRON_INSTALLED, 'Cannot use SegmentationMask without detectron2'
- self.cfg = get_cfg()
- self.cfg.merge_from_file(model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml"))
- self.cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")
- self.cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = confidence_threshold
- self.predictor = DefaultPredictor(self.cfg)
-
- self.rigidness_mode = RigidnessMode(rigidness_mode)
- self.max_object_area = max_object_area
- self.min_mask_area = min_mask_area
- self.downsample_levels = downsample_levels
- self.num_variants_per_mask = num_variants_per_mask
- self.max_mask_intersection = max_mask_intersection
- self.max_foreground_coverage = max_foreground_coverage
- self.max_foreground_intersection = max_foreground_intersection
- self.max_hidden_area = max_hidden_area
- self.position_shuffle = position_shuffle
-
- self.max_scale_change = max_scale_change
- self.horizontal_flip = horizontal_flip
- self.max_vertical_shift = max_vertical_shift
-
- def get_segmentation(self, img):
- im = img_as_ubyte(img)
- panoptic_seg, segment_info = self.predictor(im)["panoptic_seg"]
- return panoptic_seg, segment_info
-
- @staticmethod
- def _is_power_of_two(n):
- return (n != 0) and (n & (n-1) == 0)
-
- def identify_candidates(self, panoptic_seg, segments_info):
- potential_mask_ids = []
- for segment in segments_info:
- if not segment["isthing"]:
- continue
- mask = (panoptic_seg == segment["id"]).int().detach().cpu().numpy()
- area = mask.sum().item() / np.prod(panoptic_seg.shape)
- if area >= self.max_object_area:
- continue
- potential_mask_ids.append(segment["id"])
- return potential_mask_ids
-
- def downsample_mask(self, mask):
- height, width = mask.shape
- if not (self._is_power_of_two(height) and self._is_power_of_two(width)):
- raise ValueError("Image sides are not power of 2.")
-
- num_iterations = width.bit_length() - 1 - self.downsample_levels
- if num_iterations < 0:
- raise ValueError(f"Width is lower than 2^{self.downsample_levels}.")
-
- if height.bit_length() - 1 < num_iterations:
- raise ValueError("Height is too low to perform downsampling")
-
- downsampled = mask
- for _ in range(num_iterations):
- downsampled = zero_corrected_countless(downsampled)
-
- return downsampled
-
- def _augmentation_params(self):
- scaling_factor = np.random.uniform(1 - self.max_scale_change, 1 + self.max_scale_change)
- if self.horizontal_flip:
- horizontal_flip = bool(np.random.choice(2))
- else:
- horizontal_flip = False
- vertical_shift = np.random.uniform(-self.max_vertical_shift, self.max_vertical_shift)
-
- return {
- "scaling_factor": scaling_factor,
- "horizontal_flip": horizontal_flip,
- "vertical_shift": vertical_shift
- }
-
- def _get_intersection(self, mask_array, mask_object):
- intersection = mask_array[
- mask_object.up:mask_object.down, mask_object.left:mask_object.right
- ] & mask_object.mask
- return intersection
-
- def _check_masks_intersection(self, aug_mask, total_mask_area, prev_masks):
- for existing_mask in prev_masks:
- intersection_area = self._get_intersection(existing_mask, aug_mask).sum()
- intersection_existing = intersection_area / existing_mask.sum()
- intersection_current = 1 - (aug_mask.area() - intersection_area) / total_mask_area
- if (intersection_existing > self.max_mask_intersection) or \
- (intersection_current > self.max_mask_intersection):
- return False
- return True
-
- def _check_foreground_intersection(self, aug_mask, foreground):
- for existing_mask in foreground:
- intersection_area = self._get_intersection(existing_mask, aug_mask).sum()
- intersection_existing = intersection_area / existing_mask.sum()
- if intersection_existing > self.max_foreground_coverage:
- return False
- intersection_mask = intersection_area / aug_mask.area()
- if intersection_mask > self.max_foreground_intersection:
- return False
- return True
-
- def _move_mask(self, mask, foreground):
- # Obtaining properties of the original mask_object:
- orig_mask = ObjectMask(mask)
-
- chosen_masks = []
- chosen_parameters = []
- # to fix the case when resizing gives mask_object consisting only of False
- scaling_factor_lower_bound = 0.
-
- for var_idx in range(self.num_variants_per_mask):
- # Obtaining augmentation parameters and applying them to the downscaled mask_object
- augmentation_params = self._augmentation_params()
- augmentation_params["scaling_factor"] = min([
- augmentation_params["scaling_factor"],
- 2 * min(orig_mask.up, orig_mask.height - orig_mask.down) / orig_mask.height + 1.,
- 2 * min(orig_mask.left, orig_mask.width - orig_mask.right) / orig_mask.width + 1.
- ])
- augmentation_params["scaling_factor"] = max([
- augmentation_params["scaling_factor"], scaling_factor_lower_bound
- ])
-
- aug_mask = deepcopy(orig_mask)
- aug_mask.rescale(augmentation_params["scaling_factor"], inplace=True)
- if augmentation_params["horizontal_flip"]:
- aug_mask.horizontal_flip(inplace=True)
- total_aug_area = aug_mask.area()
- if total_aug_area == 0:
- scaling_factor_lower_bound = 1.
- continue
-
- # Fix if the element vertical shift is too strong and shown area is too small:
- vertical_area = aug_mask.mask.sum(axis=1) / total_aug_area # share of area taken by rows
- # number of rows which are allowed to be hidden from upper and lower parts of image respectively
- max_hidden_up = np.searchsorted(vertical_area.cumsum(), self.max_hidden_area)
- max_hidden_down = np.searchsorted(vertical_area[::-1].cumsum(), self.max_hidden_area)
- # correcting vertical shift, so not too much area will be hidden
- augmentation_params["vertical_shift"] = np.clip(
- augmentation_params["vertical_shift"],
- -(aug_mask.up + max_hidden_up) / aug_mask.height,
- (aug_mask.height - aug_mask.down + max_hidden_down) / aug_mask.height
- )
- # Applying vertical shift:
- vertical_shift = int(round(aug_mask.height * augmentation_params["vertical_shift"]))
- aug_mask.shift(vertical=vertical_shift, inplace=True)
- aug_mask.crop_to_canvas(vertical=True, horizontal=False, inplace=True)
-
- # Choosing horizontal shift:
- max_hidden_area = self.max_hidden_area - (1 - aug_mask.area() / total_aug_area)
- horizontal_area = aug_mask.mask.sum(axis=0) / total_aug_area
- max_hidden_left = np.searchsorted(horizontal_area.cumsum(), max_hidden_area)
- max_hidden_right = np.searchsorted(horizontal_area[::-1].cumsum(), max_hidden_area)
- allowed_shifts = np.arange(-max_hidden_left, aug_mask.width -
- (aug_mask.right - aug_mask.left) + max_hidden_right + 1)
- allowed_shifts = - (aug_mask.left - allowed_shifts)
-
- if self.position_shuffle:
- np.random.shuffle(allowed_shifts)
-
- mask_is_found = False
- for horizontal_shift in allowed_shifts:
- aug_mask_left = deepcopy(aug_mask)
- aug_mask_left.shift(horizontal=horizontal_shift, inplace=True)
- aug_mask_left.crop_to_canvas(inplace=True)
-
- prev_masks = [mask] + chosen_masks
- is_mask_suitable = self._check_masks_intersection(aug_mask_left, total_aug_area, prev_masks) & \
- self._check_foreground_intersection(aug_mask_left, foreground)
- if is_mask_suitable:
- aug_draw = aug_mask_left.restore_full_mask()
- chosen_masks.append(aug_draw)
- augmentation_params["horizontal_shift"] = horizontal_shift / aug_mask_left.width
- chosen_parameters.append(augmentation_params)
- mask_is_found = True
- break
-
- if not mask_is_found:
- break
-
- return chosen_parameters
-
- def _prepare_mask(self, mask):
- height, width = mask.shape
- target_width = width if self._is_power_of_two(width) else (1 << width.bit_length())
- target_height = height if self._is_power_of_two(height) else (1 << height.bit_length())
-
- return resize(mask.astype('float32'), (target_height, target_width), order=0, mode='edge').round().astype('int32')
-
- def get_masks(self, im, return_panoptic=False):
- panoptic_seg, segments_info = self.get_segmentation(im)
- potential_mask_ids = self.identify_candidates(panoptic_seg, segments_info)
-
- panoptic_seg_scaled = self._prepare_mask(panoptic_seg.detach().cpu().numpy())
- downsampled = self.downsample_mask(panoptic_seg_scaled)
- scene_objects = []
- for segment in segments_info:
- if not segment["isthing"]:
- continue
- mask = downsampled == segment["id"]
- if not np.any(mask):
- continue
- scene_objects.append(mask)
-
- mask_set = []
- for mask_id in potential_mask_ids:
- mask = downsampled == mask_id
- if not np.any(mask):
- continue
-
- if self.rigidness_mode is RigidnessMode.soft:
- foreground = [mask]
- elif self.rigidness_mode is RigidnessMode.rigid:
- foreground = scene_objects
- else:
- raise ValueError(f'Unexpected rigidness_mode: {rigidness_mode}')
-
- masks_params = self._move_mask(mask, foreground)
-
- full_mask = ObjectMask((panoptic_seg == mask_id).detach().cpu().numpy())
-
- for params in masks_params:
- aug_mask = deepcopy(full_mask)
- aug_mask.rescale(params["scaling_factor"], inplace=True)
- if params["horizontal_flip"]:
- aug_mask.horizontal_flip(inplace=True)
-
- vertical_shift = int(round(aug_mask.height * params["vertical_shift"]))
- horizontal_shift = int(round(aug_mask.width * params["horizontal_shift"]))
- aug_mask.shift(vertical=vertical_shift, horizontal=horizontal_shift, inplace=True)
- aug_mask = aug_mask.restore_full_mask().astype('uint8')
- if aug_mask.mean() <= self.min_mask_area:
- continue
- mask_set.append(aug_mask)
-
- if return_panoptic:
- return mask_set, panoptic_seg.detach().cpu().numpy()
- else:
- return mask_set
-
-
-def propose_random_square_crop(mask, min_overlap=0.5):
- height, width = mask.shape
- mask_ys, mask_xs = np.where(mask > 0.5) # mask==0 is known fragment and mask==1 is missing
-
- if height < width:
- crop_size = height
- obj_left, obj_right = mask_xs.min(), mask_xs.max()
- obj_width = obj_right - obj_left
- left_border = max(0, min(width - crop_size - 1, obj_left + obj_width * min_overlap - crop_size))
- right_border = max(left_border + 1, min(width - crop_size, obj_left + obj_width * min_overlap))
- start_x = np.random.randint(left_border, right_border)
- return start_x, 0, start_x + crop_size, height
- else:
- crop_size = width
- obj_top, obj_bottom = mask_ys.min(), mask_ys.max()
- obj_height = obj_bottom - obj_top
- top_border = max(0, min(height - crop_size - 1, obj_top + obj_height * min_overlap - crop_size))
- bottom_border = max(top_border + 1, min(height - crop_size, obj_top + obj_height * min_overlap))
- start_y = np.random.randint(top_border, bottom_border)
- return 0, start_y, width, start_y + crop_size
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/trainers/__init__.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/trainers/__init__.py
deleted file mode 100644
index c59241f553efe4e2dd6b198e2e5656a2b1488857..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/trainers/__init__.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import logging
-import torch
-from saicinpainting.training.trainers.default import DefaultInpaintingTrainingModule
-
-
-def get_training_model_class(kind):
- if kind == 'default':
- return DefaultInpaintingTrainingModule
-
- raise ValueError(f'Unknown trainer module {kind}')
-
-
-def make_training_model(config):
- kind = config.training_model.kind
- kwargs = dict(config.training_model)
- kwargs.pop('kind')
- kwargs['use_ddp'] = config.trainer.kwargs.get('accelerator', None) == 'ddp'
-
- logging.info(f'Make training model {kind}')
-
- cls = get_training_model_class(kind)
- return cls(config, **kwargs)
-
-
-def load_checkpoint(train_config, path, map_location='cuda', strict=True):
- model: torch.nn.Module = make_training_model(train_config)
- state = torch.load(path, map_location=map_location)
- model.load_state_dict(state['state_dict'], strict=strict)
- model.on_load_checkpoint(state)
- return model
diff --git a/spaces/Intoval/privateChatGPT/run_Windows.bat b/spaces/Intoval/privateChatGPT/run_Windows.bat
deleted file mode 100644
index 4c18f9ccaeea0af972301ffdf48778641221f76d..0000000000000000000000000000000000000000
--- a/spaces/Intoval/privateChatGPT/run_Windows.bat
+++ /dev/null
@@ -1,5 +0,0 @@
-@echo off
-echo Opening ChuanhuChatGPT...
-
-REM Open powershell via bat
-start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py"
diff --git a/spaces/ItsJayQz/Civilizations_6_Diffusion/README.md b/spaces/ItsJayQz/Civilizations_6_Diffusion/README.md
deleted file mode 100644
index 92e13616c6415342a76e0749f9de4459b342caff..0000000000000000000000000000000000000000
--- a/spaces/ItsJayQz/Civilizations_6_Diffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Civilizations 6 Diffusion
-emoji: 💻
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.14.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kevin676/AutoGPT/autogpt/agent/agent.py b/spaces/Kevin676/AutoGPT/autogpt/agent/agent.py
deleted file mode 100644
index ee7885f8844022597321fa6b492430ec34c0d6b9..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/autogpt/agent/agent.py
+++ /dev/null
@@ -1,197 +0,0 @@
-from colorama import Fore, Style
-
-from autogpt.app import execute_command, get_command
-from autogpt.chat import chat_with_ai, create_chat_message
-from autogpt.config import Config
-from autogpt.json_utils.json_fix_llm import fix_json_using_multiple_techniques
-from autogpt.json_utils.utilities import validate_json
-from autogpt.logs import logger, print_assistant_thoughts
-from autogpt.speech import say_text
-from autogpt.spinner import Spinner
-from autogpt.utils import clean_input
-
-
-class Agent:
- """Agent class for interacting with Auto-GPT.
-
- Attributes:
- ai_name: The name of the agent.
- memory: The memory object to use.
- full_message_history: The full message history.
- next_action_count: The number of actions to execute.
- system_prompt: The system prompt is the initial prompt that defines everything the AI needs to know to achieve its task successfully.
- Currently, the dynamic and customizable information in the system prompt are ai_name, description and goals.
-
- triggering_prompt: The last sentence the AI will see before answering. For Auto-GPT, this prompt is:
- Determine which next command to use, and respond using the format specified above:
- The triggering prompt is not part of the system prompt because between the system prompt and the triggering
- prompt we have contextual information that can distract the AI and make it forget that its goal is to find the next task to achieve.
- SYSTEM PROMPT
- CONTEXTUAL INFORMATION (memory, previous conversations, anything relevant)
- TRIGGERING PROMPT
-
- The triggering prompt reminds the AI about its short term meta task (defining the next task)
- """
-
- def __init__(
- self,
- ai_name,
- memory,
- full_message_history,
- next_action_count,
- system_prompt,
- triggering_prompt,
- ):
- self.ai_name = ai_name
- self.memory = memory
- self.full_message_history = full_message_history
- self.next_action_count = next_action_count
- self.system_prompt = system_prompt
- self.triggering_prompt = triggering_prompt
-
- def start_interaction_loop(self):
- # Interaction Loop
- cfg = Config()
- loop_count = 0
- command_name = None
- arguments = None
- user_input = ""
-
- while True:
- # Discontinue if continuous limit is reached
- loop_count += 1
- if (
- cfg.continuous_mode
- and cfg.continuous_limit > 0
- and loop_count > cfg.continuous_limit
- ):
- logger.typewriter_log(
- "Continuous Limit Reached: ", Fore.YELLOW, f"{cfg.continuous_limit}"
- )
- break
-
- # Send message to AI, get response
- with Spinner("Thinking... "):
- assistant_reply = chat_with_ai(
- self.system_prompt,
- self.triggering_prompt,
- self.full_message_history,
- self.memory,
- cfg.fast_token_limit,
- ) # TODO: This hardcodes the model to use GPT3.5. Make this an argument
-
- assistant_reply_json = fix_json_using_multiple_techniques(assistant_reply)
-
- # Print Assistant thoughts
- if assistant_reply_json != {}:
- validate_json(assistant_reply_json, "llm_response_format_1")
- # Get command name and arguments
- try:
- print_assistant_thoughts(self.ai_name, assistant_reply_json)
- command_name, arguments = get_command(assistant_reply_json)
- # command_name, arguments = assistant_reply_json_valid["command"]["name"], assistant_reply_json_valid["command"]["args"]
- if cfg.speak_mode:
- say_text(f"I want to execute {command_name}")
- except Exception as e:
- logger.error("Error: \n", str(e))
-
- if not cfg.continuous_mode and self.next_action_count == 0:
- ### GET USER AUTHORIZATION TO EXECUTE COMMAND ###
- # Get key press: Prompt the user to press enter to continue or escape
- # to exit
- logger.typewriter_log(
- "NEXT ACTION: ",
- Fore.CYAN,
- f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL} "
- f"ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}",
- )
- print(
- "Enter 'y' to authorise command, 'y -N' to run N continuous "
- "commands, 'n' to exit program, or enter feedback for "
- f"{self.ai_name}...",
- flush=True,
- )
- while True:
- console_input = clean_input(
- Fore.MAGENTA + "Input:" + Style.RESET_ALL
- )
- if console_input.lower().strip() == "y":
- user_input = "GENERATE NEXT COMMAND JSON"
- break
- elif console_input.lower().strip() == "":
- print("Invalid input format.")
- continue
- elif console_input.lower().startswith("y -"):
- try:
- self.next_action_count = abs(
- int(console_input.split(" ")[1])
- )
- user_input = "GENERATE NEXT COMMAND JSON"
- except ValueError:
- print(
- "Invalid input format. Please enter 'y -n' where n is"
- " the number of continuous tasks."
- )
- continue
- break
- elif console_input.lower() == "n":
- user_input = "EXIT"
- break
- else:
- user_input = console_input
- command_name = "human_feedback"
- break
-
- if user_input == "GENERATE NEXT COMMAND JSON":
- logger.typewriter_log(
- "-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=",
- Fore.MAGENTA,
- "",
- )
- elif user_input == "EXIT":
- print("Exiting...", flush=True)
- break
- else:
- # Print command
- logger.typewriter_log(
- "NEXT ACTION: ",
- Fore.CYAN,
- f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL}"
- f" ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}",
- )
-
- # Execute command
- if command_name is not None and command_name.lower().startswith("error"):
- result = (
- f"Command {command_name} threw the following error: {arguments}"
- )
- elif command_name == "human_feedback":
- result = f"Human feedback: {user_input}"
- else:
- result = (
- f"Command {command_name} returned: "
- f"{execute_command(command_name, arguments)}"
- )
- if self.next_action_count > 0:
- self.next_action_count -= 1
-
- memory_to_add = (
- f"Assistant Reply: {assistant_reply} "
- f"\nResult: {result} "
- f"\nHuman Feedback: {user_input} "
- )
-
- self.memory.add(memory_to_add)
-
- # Check if there's a result from the command append it to the message
- # history
- if result is not None:
- self.full_message_history.append(create_chat_message("system", result))
- logger.typewriter_log("SYSTEM: ", Fore.YELLOW, result)
- else:
- self.full_message_history.append(
- create_chat_message("system", "Unable to execute command")
- )
- logger.typewriter_log(
- "SYSTEM: ", Fore.YELLOW, "Unable to execute command"
- )
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/utils/nets_utils.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/utils/nets_utils.py
deleted file mode 100644
index 098e3b4c5dfded0c05df1cf0138496c3303eb1e3..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/utils/nets_utils.py
+++ /dev/null
@@ -1,451 +0,0 @@
-# -*- coding: utf-8 -*-
-
-"""Network related utility tools."""
-
-import logging
-from typing import Dict
-
-import numpy as np
-import torch
-
-
-def to_device(m, x):
- """Send tensor into the device of the module.
-
- Args:
- m (torch.nn.Module): Torch module.
- x (Tensor): Torch tensor.
-
- Returns:
- Tensor: Torch tensor located in the same place as torch module.
-
- """
- assert isinstance(m, torch.nn.Module)
- device = next(m.parameters()).device
- return x.to(device)
-
-
-def pad_list(xs, pad_value):
- """Perform padding for the list of tensors.
-
- Args:
- xs (List): List of Tensors [(T_1, `*`), (T_2, `*`), ..., (T_B, `*`)].
- pad_value (float): Value for padding.
-
- Returns:
- Tensor: Padded tensor (B, Tmax, `*`).
-
- Examples:
- >>> x = [torch.ones(4), torch.ones(2), torch.ones(1)]
- >>> x
- [tensor([1., 1., 1., 1.]), tensor([1., 1.]), tensor([1.])]
- >>> pad_list(x, 0)
- tensor([[1., 1., 1., 1.],
- [1., 1., 0., 0.],
- [1., 0., 0., 0.]])
-
- """
- n_batch = len(xs)
- max_len = max(x.size(0) for x in xs)
- pad = xs[0].new(n_batch, max_len, *xs[0].size()[1:]).fill_(pad_value)
-
- for i in range(n_batch):
- pad[i, :xs[i].size(0)] = xs[i]
-
- return pad
-
-
-def make_pad_mask(lengths, xs=None, length_dim=-1):
- """Make mask tensor containing indices of padded part.
-
- Args:
- lengths (LongTensor or List): Batch of lengths (B,).
- xs (Tensor, optional): The reference tensor. If set, masks will be the same shape as this tensor.
- length_dim (int, optional): Dimension indicator of the above tensor. See the example.
-
- Returns:
- Tensor: Mask tensor containing indices of padded part.
- dtype=torch.uint8 in PyTorch 1.2-
- dtype=torch.bool in PyTorch 1.2+ (including 1.2)
-
- Examples:
- With only lengths.
-
- >>> lengths = [5, 3, 2]
- >>> make_non_pad_mask(lengths)
- masks = [[0, 0, 0, 0 ,0],
- [0, 0, 0, 1, 1],
- [0, 0, 1, 1, 1]]
-
- With the reference tensor.
-
- >>> xs = torch.zeros((3, 2, 4))
- >>> make_pad_mask(lengths, xs)
- tensor([[[0, 0, 0, 0],
- [0, 0, 0, 0]],
- [[0, 0, 0, 1],
- [0, 0, 0, 1]],
- [[0, 0, 1, 1],
- [0, 0, 1, 1]]], dtype=torch.uint8)
- >>> xs = torch.zeros((3, 2, 6))
- >>> make_pad_mask(lengths, xs)
- tensor([[[0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 1]],
- [[0, 0, 0, 1, 1, 1],
- [0, 0, 0, 1, 1, 1]],
- [[0, 0, 1, 1, 1, 1],
- [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8)
-
- With the reference tensor and dimension indicator.
-
- >>> xs = torch.zeros((3, 6, 6))
- >>> make_pad_mask(lengths, xs, 1)
- tensor([[[0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [1, 1, 1, 1, 1, 1]],
- [[0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1]],
- [[0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1]]], dtype=torch.uint8)
- >>> make_pad_mask(lengths, xs, 2)
- tensor([[[0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 1]],
- [[0, 0, 0, 1, 1, 1],
- [0, 0, 0, 1, 1, 1],
- [0, 0, 0, 1, 1, 1],
- [0, 0, 0, 1, 1, 1],
- [0, 0, 0, 1, 1, 1],
- [0, 0, 0, 1, 1, 1]],
- [[0, 0, 1, 1, 1, 1],
- [0, 0, 1, 1, 1, 1],
- [0, 0, 1, 1, 1, 1],
- [0, 0, 1, 1, 1, 1],
- [0, 0, 1, 1, 1, 1],
- [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8)
-
- """
- if length_dim == 0:
- raise ValueError('length_dim cannot be 0: {}'.format(length_dim))
-
- if not isinstance(lengths, list):
- lengths = lengths.tolist()
- bs = int(len(lengths))
- if xs is None:
- maxlen = int(max(lengths))
- else:
- maxlen = xs.size(length_dim)
-
- seq_range = torch.arange(0, maxlen, dtype=torch.int64)
- seq_range_expand = seq_range.unsqueeze(0).expand(bs, maxlen)
- seq_length_expand = seq_range_expand.new(lengths).unsqueeze(-1)
- mask = seq_range_expand >= seq_length_expand
-
- if xs is not None:
- assert xs.size(0) == bs, (xs.size(0), bs)
-
- if length_dim < 0:
- length_dim = xs.dim() + length_dim
- # ind = (:, None, ..., None, :, , None, ..., None)
- ind = tuple(slice(None) if i in (0, length_dim) else None
- for i in range(xs.dim()))
- mask = mask[ind].expand_as(xs).to(xs.device)
- return mask
-
-
-def make_non_pad_mask(lengths, xs=None, length_dim=-1):
- """Make mask tensor containing indices of non-padded part.
-
- Args:
- lengths (LongTensor or List): Batch of lengths (B,).
- xs (Tensor, optional): The reference tensor. If set, masks will be the same shape as this tensor.
- length_dim (int, optional): Dimension indicator of the above tensor. See the example.
-
- Returns:
- ByteTensor: mask tensor containing indices of padded part.
- dtype=torch.uint8 in PyTorch 1.2-
- dtype=torch.bool in PyTorch 1.2+ (including 1.2)
-
- Examples:
- With only lengths.
-
- >>> lengths = [5, 3, 2]
- >>> make_non_pad_mask(lengths)
- masks = [[1, 1, 1, 1 ,1],
- [1, 1, 1, 0, 0],
- [1, 1, 0, 0, 0]]
-
- With the reference tensor.
-
- >>> xs = torch.zeros((3, 2, 4))
- >>> make_non_pad_mask(lengths, xs)
- tensor([[[1, 1, 1, 1],
- [1, 1, 1, 1]],
- [[1, 1, 1, 0],
- [1, 1, 1, 0]],
- [[1, 1, 0, 0],
- [1, 1, 0, 0]]], dtype=torch.uint8)
- >>> xs = torch.zeros((3, 2, 6))
- >>> make_non_pad_mask(lengths, xs)
- tensor([[[1, 1, 1, 1, 1, 0],
- [1, 1, 1, 1, 1, 0]],
- [[1, 1, 1, 0, 0, 0],
- [1, 1, 1, 0, 0, 0]],
- [[1, 1, 0, 0, 0, 0],
- [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8)
-
- With the reference tensor and dimension indicator.
-
- >>> xs = torch.zeros((3, 6, 6))
- >>> make_non_pad_mask(lengths, xs, 1)
- tensor([[[1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [0, 0, 0, 0, 0, 0]],
- [[1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0]],
- [[1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0]]], dtype=torch.uint8)
- >>> make_non_pad_mask(lengths, xs, 2)
- tensor([[[1, 1, 1, 1, 1, 0],
- [1, 1, 1, 1, 1, 0],
- [1, 1, 1, 1, 1, 0],
- [1, 1, 1, 1, 1, 0],
- [1, 1, 1, 1, 1, 0],
- [1, 1, 1, 1, 1, 0]],
- [[1, 1, 1, 0, 0, 0],
- [1, 1, 1, 0, 0, 0],
- [1, 1, 1, 0, 0, 0],
- [1, 1, 1, 0, 0, 0],
- [1, 1, 1, 0, 0, 0],
- [1, 1, 1, 0, 0, 0]],
- [[1, 1, 0, 0, 0, 0],
- [1, 1, 0, 0, 0, 0],
- [1, 1, 0, 0, 0, 0],
- [1, 1, 0, 0, 0, 0],
- [1, 1, 0, 0, 0, 0],
- [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8)
-
- """
- return ~make_pad_mask(lengths, xs, length_dim)
-
-
-def mask_by_length(xs, lengths, fill=0):
- """Mask tensor according to length.
-
- Args:
- xs (Tensor): Batch of input tensor (B, `*`).
- lengths (LongTensor or List): Batch of lengths (B,).
- fill (int or float): Value to fill masked part.
-
- Returns:
- Tensor: Batch of masked input tensor (B, `*`).
-
- Examples:
- >>> x = torch.arange(5).repeat(3, 1) + 1
- >>> x
- tensor([[1, 2, 3, 4, 5],
- [1, 2, 3, 4, 5],
- [1, 2, 3, 4, 5]])
- >>> lengths = [5, 3, 2]
- >>> mask_by_length(x, lengths)
- tensor([[1, 2, 3, 4, 5],
- [1, 2, 3, 0, 0],
- [1, 2, 0, 0, 0]])
-
- """
- assert xs.size(0) == len(lengths)
- ret = xs.data.new(*xs.size()).fill_(fill)
- for i, l in enumerate(lengths):
- ret[i, :l] = xs[i, :l]
- return ret
-
-
-def th_accuracy(pad_outputs, pad_targets, ignore_label):
- """Calculate accuracy.
-
- Args:
- pad_outputs (Tensor): Prediction tensors (B * Lmax, D).
- pad_targets (LongTensor): Target label tensors (B, Lmax, D).
- ignore_label (int): Ignore label id.
-
- Returns:
- float: Accuracy value (0.0 - 1.0).
-
- """
- pad_pred = pad_outputs.view(
- pad_targets.size(0),
- pad_targets.size(1),
- pad_outputs.size(1)).argmax(2)
- mask = pad_targets != ignore_label
- numerator = torch.sum(pad_pred.masked_select(mask) == pad_targets.masked_select(mask))
- denominator = torch.sum(mask)
- return float(numerator) / float(denominator)
-
-
-def to_torch_tensor(x):
- """Change to torch.Tensor or ComplexTensor from numpy.ndarray.
-
- Args:
- x: Inputs. It should be one of numpy.ndarray, Tensor, ComplexTensor, and dict.
-
- Returns:
- Tensor or ComplexTensor: Type converted inputs.
-
- Examples:
- >>> xs = np.ones(3, dtype=np.float32)
- >>> xs = to_torch_tensor(xs)
- tensor([1., 1., 1.])
- >>> xs = torch.ones(3, 4, 5)
- >>> assert to_torch_tensor(xs) is xs
- >>> xs = {'real': xs, 'imag': xs}
- >>> to_torch_tensor(xs)
- ComplexTensor(
- Real:
- tensor([1., 1., 1.])
- Imag;
- tensor([1., 1., 1.])
- )
-
- """
- # If numpy, change to torch tensor
- if isinstance(x, np.ndarray):
- if x.dtype.kind == 'c':
- # Dynamically importing because torch_complex requires python3
- from torch_complex.tensor import ComplexTensor
- return ComplexTensor(x)
- else:
- return torch.from_numpy(x)
-
- # If {'real': ..., 'imag': ...}, convert to ComplexTensor
- elif isinstance(x, dict):
- # Dynamically importing because torch_complex requires python3
- from torch_complex.tensor import ComplexTensor
-
- if 'real' not in x or 'imag' not in x:
- raise ValueError("has 'real' and 'imag' keys: {}".format(list(x)))
- # Relative importing because of using python3 syntax
- return ComplexTensor(x['real'], x['imag'])
-
- # If torch.Tensor, as it is
- elif isinstance(x, torch.Tensor):
- return x
-
- else:
- error = ("x must be numpy.ndarray, torch.Tensor or a dict like "
- "{{'real': torch.Tensor, 'imag': torch.Tensor}}, "
- "but got {}".format(type(x)))
- try:
- from torch_complex.tensor import ComplexTensor
- except Exception:
- # If PY2
- raise ValueError(error)
- else:
- # If PY3
- if isinstance(x, ComplexTensor):
- return x
- else:
- raise ValueError(error)
-
-
-def get_subsample(train_args, mode, arch):
- """Parse the subsampling factors from the training args for the specified `mode` and `arch`.
-
- Args:
- train_args: argument Namespace containing options.
- mode: one of ('asr', 'mt', 'st')
- arch: one of ('rnn', 'rnn-t', 'rnn_mix', 'rnn_mulenc', 'transformer')
-
- Returns:
- np.ndarray / List[np.ndarray]: subsampling factors.
- """
- if arch == 'transformer':
- return np.array([1])
-
- elif mode == 'mt' and arch == 'rnn':
- # +1 means input (+1) and layers outputs (train_args.elayer)
- subsample = np.ones(train_args.elayers + 1, dtype=np.int)
- logging.warning('Subsampling is not performed for machine translation.')
- logging.info('subsample: ' + ' '.join([str(x) for x in subsample]))
- return subsample
-
- elif (mode == 'asr' and arch in ('rnn', 'rnn-t')) or \
- (mode == 'mt' and arch == 'rnn') or \
- (mode == 'st' and arch == 'rnn'):
- subsample = np.ones(train_args.elayers + 1, dtype=np.int)
- if train_args.etype.endswith("p") and not train_args.etype.startswith("vgg"):
- ss = train_args.subsample.split("_")
- for j in range(min(train_args.elayers + 1, len(ss))):
- subsample[j] = int(ss[j])
- else:
- logging.warning(
- 'Subsampling is not performed for vgg*. It is performed in max pooling layers at CNN.')
- logging.info('subsample: ' + ' '.join([str(x) for x in subsample]))
- return subsample
-
- elif mode == 'asr' and arch == 'rnn_mix':
- subsample = np.ones(train_args.elayers_sd + train_args.elayers + 1, dtype=np.int)
- if train_args.etype.endswith("p") and not train_args.etype.startswith("vgg"):
- ss = train_args.subsample.split("_")
- for j in range(min(train_args.elayers_sd + train_args.elayers + 1, len(ss))):
- subsample[j] = int(ss[j])
- else:
- logging.warning(
- 'Subsampling is not performed for vgg*. It is performed in max pooling layers at CNN.')
- logging.info('subsample: ' + ' '.join([str(x) for x in subsample]))
- return subsample
-
- elif mode == 'asr' and arch == 'rnn_mulenc':
- subsample_list = []
- for idx in range(train_args.num_encs):
- subsample = np.ones(train_args.elayers[idx] + 1, dtype=np.int)
- if train_args.etype[idx].endswith("p") and not train_args.etype[idx].startswith("vgg"):
- ss = train_args.subsample[idx].split("_")
- for j in range(min(train_args.elayers[idx] + 1, len(ss))):
- subsample[j] = int(ss[j])
- else:
- logging.warning(
- 'Encoder %d: Subsampling is not performed for vgg*. '
- 'It is performed in max pooling layers at CNN.', idx + 1)
- logging.info('subsample: ' + ' '.join([str(x) for x in subsample]))
- subsample_list.append(subsample)
- return subsample_list
-
- else:
- raise ValueError('Invalid options: mode={}, arch={}'.format(mode, arch))
-
-
-def rename_state_dict(old_prefix: str, new_prefix: str, state_dict: Dict[str, torch.Tensor]):
- """Replace keys of old prefix with new prefix in state dict."""
- # need this list not to break the dict iterator
- old_keys = [k for k in state_dict if k.startswith(old_prefix)]
- if len(old_keys) > 0:
- logging.warning(f'Rename: {old_prefix} -> {new_prefix}')
- for k in old_keys:
- v = state_dict.pop(k)
- new_k = k.replace(old_prefix, new_prefix)
- state_dict[new_k] = v
diff --git a/spaces/Kevin676/Clone-Your-Voice/encoder/data_objects/random_cycler.py b/spaces/Kevin676/Clone-Your-Voice/encoder/data_objects/random_cycler.py
deleted file mode 100644
index c405db6b27f46d874d8feb37e3f9c1e12c251109..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Clone-Your-Voice/encoder/data_objects/random_cycler.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import random
-
-class RandomCycler:
- """
- Creates an internal copy of a sequence and allows access to its items in a constrained random
- order. For a source sequence of n items and one or several consecutive queries of a total
- of m items, the following guarantees hold (one implies the other):
- - Each item will be returned between m // n and ((m - 1) // n) + 1 times.
- - Between two appearances of the same item, there may be at most 2 * (n - 1) other items.
- """
-
- def __init__(self, source):
- if len(source) == 0:
- raise Exception("Can't create RandomCycler from an empty collection")
- self.all_items = list(source)
- self.next_items = []
-
- def sample(self, count: int):
- shuffle = lambda l: random.sample(l, len(l))
-
- out = []
- while count > 0:
- if count >= len(self.all_items):
- out.extend(shuffle(list(self.all_items)))
- count -= len(self.all_items)
- continue
- n = min(count, len(self.next_items))
- out.extend(self.next_items[:n])
- count -= n
- self.next_items = self.next_items[n:]
- if len(self.next_items) == 0:
- self.next_items = shuffle(list(self.all_items))
- return out
-
- def __next__(self):
- return self.sample(1)[0]
-
diff --git a/spaces/Kevin676/Demucs_v4/app.py b/spaces/Kevin676/Demucs_v4/app.py
deleted file mode 100644
index 5a59b63e1d7a48b08faf8b4807c52cc8aba33ec0..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Demucs_v4/app.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import os
-import gradio as gr
-from scipy.io.wavfile import write
-
-
-def inference(audio):
- os.makedirs("out", exist_ok=True)
- write('test.wav', audio[0], audio[1])
- os.system("python3 -m demucs.separate -n htdemucs --two-stems=vocals -d cpu test.wav -o out")
- return "./out/htdemucs/test/vocals.wav","./out/htdemucs/test/no_vocals.wav"
-
-title = "Demucs Music Source Separation (v4)"
-description = "This is the latest 'bleeding edge version' which enables the new v4 Hybrid Transformer model. for this space, 2 stem sepration (Karaoke Mode) is enabled and CPU mode which has been optimised for best quality & processing time.
| Gradio demo for Demucs(v4): Music Source Separation in the Waveform Domain. To use it, simply upload your audio, or click one of the examples to load them. Read more at the links below.
"
-
-examples=[['test.mp3']]
-gr.Interface(
- inference,
- gr.Audio(type="numpy", label="Input"),
- [gr.Audio(type="filepath", label="Vocals"),gr.Audio(type="filepath", label="No Vocals / Instrumental")],
- title=title,
- description=description,
- article=article,
- examples=examples
- ).launch(enable_queue=True)
\ No newline at end of file
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/abinet/README.md b/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/abinet/README.md
deleted file mode 100644
index 40d8fdb7c7b62490de46fc4c411c495b6f1c8588..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/abinet/README.md
+++ /dev/null
@@ -1,59 +0,0 @@
-# ABINet
-
-> [Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition](https://arxiv.org/abs/2103.06495)
-
-
-
-## Abstract
-
-Linguistic knowledge is of great benefit to scene text recognition. However, how to effectively model linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of language models comes from: 1) implicitly language modeling; 2) unidirectional feature representation; and 3) language model with noise input. Correspondingly, we propose an autonomous, bidirectional and iterative ABINet for scene text recognition. Firstly, the autonomous suggests to block gradient flow between vision and language models to enforce explicitly language modeling. Secondly, a novel bidirectional cloze network (BCN) as the language model is proposed based on bidirectional feature representation. Thirdly, we propose an execution manner of iterative correction for language model which can effectively alleviate the impact of noise input. Additionally, based on the ensemble of iterative predictions, we propose a self-training method which can learn from unlabeled images effectively. Extensive experiments indicate that ABINet has superiority on low-quality images and achieves state-of-the-art results on several mainstream benchmarks. Besides, the ABINet trained with ensemble self-training shows promising improvement in realizing human-level recognition.
-
-
-
-
-
-## Dataset
-
-### Train Dataset
-
-| trainset | instance_num | repeat_num | note |
-| :-------: | :----------: | :--------: | :----------: |
-| Syn90k | 8919273 | 1 | synth |
-| SynthText | 7239272 | 1 | alphanumeric |
-
-### Test Dataset
-
-| testset | instance_num | note |
-| :-----: | :----------: | :-------: |
-| IIIT5K | 3000 | regular |
-| SVT | 647 | regular |
-| IC13 | 1015 | regular |
-| IC15 | 2077 | irregular |
-| SVTP | 645 | irregular |
-| CT80 | 288 | irregular |
-
-## Results and models
-
-| methods | pretrained | | Regular Text | | | Irregular Text | | download |
-| :------------------------------------------------: | :----------------------------------------------------: | :----: | :----------: | :--: | :--: | :------------: | :--: | :--------------------------------------------------- |
-| | | IIIT5K | SVT | IC13 | IC15 | SVTP | CT80 | |
-| [ABINet-Vision](https://github.com/open-mmlab/mmocr/tree/master/configs/textrecog/abinet/abinet_vision_only_academic.py) | - | 94.7 | 91.7 | 93.6 | 83.0 | 85.1 | 86.5 | [model](https://download.openmmlab.com/mmocr/textrecog/abinet/abinet_vision_only_academic-e6b9ea89.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/abinet/20211201_195512.log) |
-| [ABINet](https://github.com/open-mmlab/mmocr/tree/master/configs/textrecog/abinet/abinet_academic.py) | [Pretrained](https://download.openmmlab.com/mmocr/textrecog/abinet/abinet_pretrain-1bed979b.pth) | 95.7 | 94.6 | 95.7 | 85.1 | 90.4 | 90.3 | [model](https://download.openmmlab.com/mmocr/textrecog/abinet/abinet_academic-f718abf6.pth) \| [log1](https://download.openmmlab.com/mmocr/textrecog/abinet/20211210_095832.log) \| [log2](https://download.openmmlab.com/mmocr/textrecog/abinet/20211213_131724.log) |
-
-```{note}
-1. ABINet allows its encoder to run and be trained without decoder and fuser. Its encoder is designed to recognize texts as a stand-alone model and therefore can work as an independent text recognizer. We release it as ABINet-Vision.
-2. Facts about the pretrained model: MMOCR does not have a systematic pipeline to pretrain the language model (LM) yet, thus the weights of LM are converted from [the official pretrained model](https://github.com/FangShancheng/ABINet). The weights of ABINet-Vision are directly used as the vision model of ABINet.
-3. Due to some technical issues, the training process of ABINet was interrupted at the 13th epoch and we resumed it later. Both logs are released for full reference.
-4. The model architecture in the logs looks slightly different from the final released version, since it was refactored afterward. However, both architectures are essentially equivalent.
-```
-
-## Citation
-
-```bibtex
-@article{fang2021read,
- title={Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition},
- author={Fang, Shancheng and Xie, Hongtao and Wang, Yuxin and Mao, Zhendong and Zhang, Yongdong},
- booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
- year={2021}
-}
-```
diff --git a/spaces/LucasCodeBreak/MusicGen/audiocraft/utils/utils.py b/spaces/LucasCodeBreak/MusicGen/audiocraft/utils/utils.py
deleted file mode 100644
index 86e1448d065fa182ca69aae00d2f2a7eea55d8a4..0000000000000000000000000000000000000000
--- a/spaces/LucasCodeBreak/MusicGen/audiocraft/utils/utils.py
+++ /dev/null
@@ -1,234 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from concurrent.futures import ProcessPoolExecutor
-from functools import wraps
-import hashlib
-import logging
-import typing as tp
-
-import flashy
-import flashy.distrib
-import omegaconf
-import torch
-from torch.nn.utils.rnn import pad_sequence
-
-
-logger = logging.getLogger(__name__)
-
-
-def dict_from_config(cfg: omegaconf.DictConfig) -> dict:
- """Convenience function to map an omegaconf configuration to a dictionary.
-
- Args:
- cfg (omegaconf.DictConfig): Original configuration to map to dict.
- Returns:
- dict: Config as dictionary object.
- """
- dct = omegaconf.OmegaConf.to_container(cfg, resolve=True)
- assert isinstance(dct, dict)
- return dct
-
-
-def random_subset(dataset, max_samples: int, seed: int = 42) -> torch.utils.data.Subset:
- if max_samples >= len(dataset):
- return dataset
-
- generator = torch.Generator().manual_seed(seed)
- perm = torch.randperm(len(dataset), generator=generator)
- return torch.utils.data.Subset(dataset, perm[:max_samples].tolist())
-
-
-def get_loader(dataset, num_samples: tp.Optional[int], batch_size: int,
- num_workers: int, seed: int, **kwargs) -> torch.utils.data.DataLoader:
- """Convenience function to load dataset into a dataloader with optional subset sampling.
-
- Args:
- dataset: Dataset to load.
- num_samples (Optional[int]): Number of samples to limit subset size.
- batch_size (int): Batch size.
- num_workers (int): Number of workers for data loading.
- seed (int): Random seed.
- """
- if num_samples is not None:
- dataset = random_subset(dataset, num_samples, seed)
-
- dataloader = flashy.distrib.loader(
- dataset,
- batch_size=batch_size,
- num_workers=num_workers,
- **kwargs
- )
- return dataloader
-
-
-def get_dataset_from_loader(dataloader):
- dataset = dataloader.dataset
- if isinstance(dataset, torch.utils.data.Subset):
- return dataset.dataset
- else:
- return dataset
-
-
-def multinomial(input: torch.Tensor, num_samples: int, replacement=False, *, generator=None):
- """torch.multinomial with arbitrary number of dimensions, and number of candidates on the last dimension.
-
- Args:
- input (torch.Tensor): The input tensor containing probabilities.
- num_samples (int): Number of samples to draw.
- replacement (bool): Whether to draw with replacement or not.
- Keywords args:
- generator (torch.Generator): A pseudorandom number generator for sampling.
- Returns:
- torch.Tensor: Last dimension contains num_samples indices
- sampled from the multinomial probability distribution
- located in the last dimension of tensor input.
- """
- input_ = input.reshape(-1, input.shape[-1])
- output_ = torch.multinomial(input_, num_samples=num_samples, replacement=replacement, generator=generator)
- output = output_.reshape(*list(input.shape[:-1]), -1)
- return output
-
-
-def sample_top_k(probs: torch.Tensor, k: int) -> torch.Tensor:
- """Sample next token from top K values along the last dimension of the input probs tensor.
-
- Args:
- probs (torch.Tensor): Input probabilities with token candidates on the last dimension.
- k (int): The k in “top-k”.
- Returns:
- torch.Tensor: Sampled tokens.
- """
- top_k_value, _ = torch.topk(probs, k, dim=-1)
- min_value_top_k = top_k_value[..., [-1]]
- probs *= (probs >= min_value_top_k).float()
- probs.div_(probs.sum(dim=-1, keepdim=True))
- next_token = multinomial(probs, num_samples=1)
- return next_token
-
-
-def sample_top_p(probs: torch.Tensor, p: float) -> torch.Tensor:
- """Sample next token from top P probabilities along the last dimension of the input probs tensor.
-
- Args:
- probs (torch.Tensor): Input probabilities with token candidates on the last dimension.
- p (int): The p in “top-p”.
- Returns:
- torch.Tensor: Sampled tokens.
- """
- probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True)
- probs_sum = torch.cumsum(probs_sort, dim=-1)
- mask = probs_sum - probs_sort > p
- probs_sort *= (~mask).float()
- probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True))
- next_token = multinomial(probs_sort, num_samples=1)
- next_token = torch.gather(probs_idx, -1, next_token)
- return next_token
-
-
-class DummyPoolExecutor:
- """Dummy pool executor to use when we actually have only 1 worker.
- (e.g. instead of ProcessPoolExecutor).
- """
- class DummyResult:
- def __init__(self, func, *args, **kwargs):
- self.func = func
- self.args = args
- self.kwargs = kwargs
-
- def result(self):
- return self.func(*self.args, **self.kwargs)
-
- def __init__(self, workers, mp_context=None):
- pass
-
- def submit(self, func, *args, **kwargs):
- return DummyPoolExecutor.DummyResult(func, *args, **kwargs)
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_value, exc_tb):
- return
-
-
-def get_pool_executor(num_workers: int, mp_context=None):
- return ProcessPoolExecutor(num_workers, mp_context) if num_workers > 1 else DummyPoolExecutor(1)
-
-
-def length_to_mask(lengths: torch.Tensor, max_len: tp.Optional[int] = None) -> torch.Tensor:
- """Utility function to convert a tensor of sequence lengths to a mask (useful when working on padded sequences).
- For example: [3, 5] => [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
-
- Args:
- lengths (torch.Tensor): tensor with lengths
- max_len (int): can set the max length manually. Defaults to None.
- Returns:
- torch.Tensor: mask with 0s where there is pad tokens else 1s
- """
- assert len(lengths.shape) == 1, "Length shape should be 1 dimensional."
- final_length = lengths.max().item() if not max_len else max_len
- final_length = max(final_length, 1) # if all seqs are of len zero we don't want a zero-size tensor
- return torch.arange(final_length)[None, :].to(lengths.device) < lengths[:, None]
-
-
-def hash_trick(word: str, vocab_size: int) -> int:
- """Hash trick to pair each word with an index
-
- Args:
- word (str): word we wish to convert to an index
- vocab_size (int): size of the vocabulary
- Returns:
- int: index of the word in the embedding LUT
- """
- hash = int(hashlib.sha256(word.encode("utf-8")).hexdigest(), 16)
- return hash % vocab_size
-
-
-def with_rank_rng(base_seed: int = 1234):
- """Decorator for a function so that the function will use a Random Number Generator
- whose state depend on the GPU rank. The original RNG state is restored upon returning.
-
- Args:
- base_seed (int): Random seed.
- """
- def _decorator(fun: tp.Callable):
- @wraps(fun)
- def _decorated(*args, **kwargs):
- state = torch.get_rng_state()
- seed = base_seed ^ flashy.distrib.rank()
- torch.manual_seed(seed)
- logger.debug('Rank dependent seed set to %d', seed)
- try:
- return fun(*args, **kwargs)
- finally:
- torch.set_rng_state(state)
- logger.debug('RNG state restored.')
- return _decorated
- return _decorator
-
-
-def collate(tensors: tp.List[torch.Tensor], dim: int = 0) -> tp.Tuple[torch.Tensor, torch.Tensor]:
- """Get a list of tensors and collate them to a single tensor. according to the following logic:
- - `dim` specifies the time dimension which will be stacked and padded.
- - The output will contain 1 new dimension (dimension index 0) which will be the size of
- of the original list.
-
- Args:
- tensors (tp.List[torch.Tensor]): List of tensors to collate.
- dim (int): Dimension which will be stacked and padded.
- Returns:
- tp.Tuple[torch.Tensor, torch.Tensor]:
- torch.Tensor: Stacked and padded tensor. The output will contain 1 new dimension
- (dimension index 0) which will be the size of the original list.
- torch.Tensor: Tensor containing length of original tensor sizes (without padding).
- """
- tensors = [x.transpose(0, dim) for x in tensors]
- lens = torch.LongTensor([len(x) for x in tensors])
- padded_tensors = pad_sequence(tensors)
- padded_tensors = padded_tensors.transpose(0, 1)
- padded_tensors = padded_tensors.transpose(1, dim + 1)
- return padded_tensors, lens
diff --git a/spaces/LucasCodeBreak/MusicGen/tests/modules/test_conv.py b/spaces/LucasCodeBreak/MusicGen/tests/modules/test_conv.py
deleted file mode 100644
index 28fbc4f1a0ebaf41b56947b767958ae696e75eec..0000000000000000000000000000000000000000
--- a/spaces/LucasCodeBreak/MusicGen/tests/modules/test_conv.py
+++ /dev/null
@@ -1,203 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-import math
-import random
-
-import pytest
-import torch
-from torch import nn
-
-from audiocraft.modules import (
- NormConv1d,
- NormConvTranspose1d,
- StreamableConv1d,
- StreamableConvTranspose1d,
- pad1d,
- unpad1d,
-)
-
-
-def test_get_extra_padding_for_conv1d():
- # TODO: Implement me!
- pass
-
-
-def test_pad1d_zeros():
- x = torch.randn(1, 1, 20)
-
- xp1 = pad1d(x, (0, 5), mode='constant', value=0.)
- assert xp1.shape[-1] == 25
- xp2 = pad1d(x, (5, 5), mode='constant', value=0.)
- assert xp2.shape[-1] == 30
- xp3 = pad1d(x, (0, 0), mode='constant', value=0.)
- assert xp3.shape[-1] == 20
- xp4 = pad1d(x, (10, 30), mode='constant', value=0.)
- assert xp4.shape[-1] == 60
-
- with pytest.raises(AssertionError):
- pad1d(x, (-1, 0), mode='constant', value=0.)
-
- with pytest.raises(AssertionError):
- pad1d(x, (0, -1), mode='constant', value=0.)
-
- with pytest.raises(AssertionError):
- pad1d(x, (-1, -1), mode='constant', value=0.)
-
-
-def test_pad1d_reflect():
- x = torch.randn(1, 1, 20)
-
- xp1 = pad1d(x, (0, 5), mode='reflect', value=0.)
- assert xp1.shape[-1] == 25
- xp2 = pad1d(x, (5, 5), mode='reflect', value=0.)
- assert xp2.shape[-1] == 30
- xp3 = pad1d(x, (0, 0), mode='reflect', value=0.)
- assert xp3.shape[-1] == 20
- xp4 = pad1d(x, (10, 30), mode='reflect', value=0.)
- assert xp4.shape[-1] == 60
-
- with pytest.raises(AssertionError):
- pad1d(x, (-1, 0), mode='reflect', value=0.)
-
- with pytest.raises(AssertionError):
- pad1d(x, (0, -1), mode='reflect', value=0.)
-
- with pytest.raises(AssertionError):
- pad1d(x, (-1, -1), mode='reflect', value=0.)
-
-
-def test_unpad1d():
- x = torch.randn(1, 1, 20)
-
- u1 = unpad1d(x, (5, 5))
- assert u1.shape[-1] == 10
- u2 = unpad1d(x, (0, 5))
- assert u2.shape[-1] == 15
- u3 = unpad1d(x, (5, 0))
- assert u3.shape[-1] == 15
- u4 = unpad1d(x, (0, 0))
- assert u4.shape[-1] == x.shape[-1]
-
- with pytest.raises(AssertionError):
- unpad1d(x, (-1, 0))
-
- with pytest.raises(AssertionError):
- unpad1d(x, (0, -1))
-
- with pytest.raises(AssertionError):
- unpad1d(x, (-1, -1))
-
-
-class TestNormConv1d:
-
- def test_norm_conv1d_modules(self):
- N, C, T = 2, 2, random.randrange(1, 100_000)
- t0 = torch.randn(N, C, T)
-
- C_out, kernel_size, stride = 1, 4, 1
- expected_out_length = int((T - kernel_size) / stride + 1)
- wn_conv = NormConv1d(C, 1, kernel_size=4, norm='weight_norm')
- gn_conv = NormConv1d(C, 1, kernel_size=4, norm='time_group_norm')
- nn_conv = NormConv1d(C, 1, kernel_size=4, norm='none')
-
- assert isinstance(wn_conv.norm, nn.Identity)
- assert isinstance(wn_conv.conv, nn.Conv1d)
-
- assert isinstance(gn_conv.norm, nn.GroupNorm)
- assert isinstance(gn_conv.conv, nn.Conv1d)
-
- assert isinstance(nn_conv.norm, nn.Identity)
- assert isinstance(nn_conv.conv, nn.Conv1d)
-
- for conv_layer in [wn_conv, gn_conv, nn_conv]:
- out = conv_layer(t0)
- assert isinstance(out, torch.Tensor)
- assert list(out.shape) == [N, C_out, expected_out_length]
-
-
-class TestNormConvTranspose1d:
-
- def test_normalizations(self):
- N, C, T = 2, 2, random.randrange(1, 100_000)
- t0 = torch.randn(N, C, T)
-
- C_out, kernel_size, stride = 1, 4, 1
- expected_out_length = (T - 1) * stride + (kernel_size - 1) + 1
-
- wn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='weight_norm')
- gn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='time_group_norm')
- nn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='none')
-
- assert isinstance(wn_convtr.norm, nn.Identity)
- assert isinstance(wn_convtr.convtr, nn.ConvTranspose1d)
-
- assert isinstance(gn_convtr.norm, nn.GroupNorm)
- assert isinstance(gn_convtr.convtr, nn.ConvTranspose1d)
-
- assert isinstance(nn_convtr.norm, nn.Identity)
- assert isinstance(nn_convtr.convtr, nn.ConvTranspose1d)
-
- for convtr_layer in [wn_convtr, gn_convtr, nn_convtr]:
- out = convtr_layer(t0)
- assert isinstance(out, torch.Tensor)
- assert list(out.shape) == [N, C_out, expected_out_length]
-
-
-class TestStreamableConv1d:
-
- def get_streamable_conv1d_output_length(self, length, kernel_size, stride, dilation):
- # StreamableConv1d internally pads to make sure that the last window is full
- padding_total = (kernel_size - 1) * dilation - (stride - 1)
- n_frames = (length - kernel_size + padding_total) / stride + 1
- ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
- return ideal_length // stride
-
- def test_streamable_conv1d(self):
- N, C, T = 2, 2, random.randrange(1, 100_000)
- t0 = torch.randn(N, C, T)
- C_out = 1
-
- # conv params are [(kernel_size, stride, dilation)]
- conv_params = [(4, 1, 1), (4, 2, 1), (3, 1, 3), (10, 5, 1), (3, 2, 3)]
- for causal, (kernel_size, stride, dilation) in product([False, True], conv_params):
- expected_out_length = self.get_streamable_conv1d_output_length(T, kernel_size, stride, dilation)
- sconv = StreamableConv1d(C, C_out, kernel_size=kernel_size, stride=stride, dilation=dilation, causal=causal)
- out = sconv(t0)
- assert isinstance(out, torch.Tensor)
- print(list(out.shape), [N, C_out, expected_out_length])
- assert list(out.shape) == [N, C_out, expected_out_length]
-
-
-class TestStreamableConvTranspose1d:
-
- def get_streamable_convtr1d_output_length(self, length, kernel_size, stride):
- padding_total = (kernel_size - stride)
- return (length - 1) * stride - padding_total + (kernel_size - 1) + 1
-
- def test_streamable_convtr1d(self):
- N, C, T = 2, 2, random.randrange(1, 100_000)
- t0 = torch.randn(N, C, T)
-
- C_out = 1
-
- with pytest.raises(AssertionError):
- StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=False, trim_right_ratio=0.5)
- StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=-1.)
- StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=2)
-
- # causal params are [(causal, trim_right)]
- causal_params = [(False, 1.0), (True, 1.0), (True, 0.5), (True, 0.0)]
- # conv params are [(kernel_size, stride)]
- conv_params = [(4, 1), (4, 2), (3, 1), (10, 5)]
- for ((causal, trim_right_ratio), (kernel_size, stride)) in product(causal_params, conv_params):
- expected_out_length = self.get_streamable_convtr1d_output_length(T, kernel_size, stride)
- sconvtr = StreamableConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride,
- causal=causal, trim_right_ratio=trim_right_ratio)
- out = sconvtr(t0)
- assert isinstance(out, torch.Tensor)
- assert list(out.shape) == [N, C_out, expected_out_length]
diff --git a/spaces/LuxOAI/HUXTT/.py b/spaces/LuxOAI/HUXTT/.py
deleted file mode 100644
index 71ad6376e08170953d89f3f1b149bd3aefdbc1d7..0000000000000000000000000000000000000000
--- a/spaces/LuxOAI/HUXTT/.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import openai
-import gradio as gr
-import json
-
-openai.api_key = "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"
-
-def main(user_messages: list):
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=user_messages
- )
-
- reply = response["choices"][0]["message"]["content"]
-
- return reply
-
-iface = gr.Interface(fn=main, inputs="json", outputs="text")
-
-iface.launch()
diff --git a/spaces/MUmairAB/Masked-Language-Model-App/README.md b/spaces/MUmairAB/Masked-Language-Model-App/README.md
deleted file mode 100644
index 300d9dbc974091f12dee54758d012e681f52f448..0000000000000000000000000000000000000000
--- a/spaces/MUmairAB/Masked-Language-Model-App/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Masked Language Model App
-emoji: 👁
-colorFrom: green
-colorTo: green
-sdk: gradio
-python_version: 3.10.12
-pip_version: 23.1.2
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/vdecoder/nsf_hifigan/nvSTFT.py b/spaces/MashiroSA/sovits-emu-voice-transform/vdecoder/nsf_hifigan/nvSTFT.py
deleted file mode 100644
index 62bd5a008f81929054f036c81955d5d73377f772..0000000000000000000000000000000000000000
--- a/spaces/MashiroSA/sovits-emu-voice-transform/vdecoder/nsf_hifigan/nvSTFT.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import math
-import os
-os.environ["LRU_CACHE_CAPACITY"] = "3"
-import random
-import torch
-import torch.utils.data
-import numpy as np
-import librosa
-from librosa.util import normalize
-from librosa.filters import mel as librosa_mel_fn
-from scipy.io.wavfile import read
-import soundfile as sf
-import torch.nn.functional as F
-
-def load_wav_to_torch(full_path, target_sr=None, return_empty_on_exception=False):
- sampling_rate = None
- try:
- data, sampling_rate = sf.read(full_path, always_2d=True)# than soundfile.
- except Exception as ex:
- print(f"'{full_path}' failed to load.\nException:")
- print(ex)
- if return_empty_on_exception:
- return [], sampling_rate or target_sr or 48000
- else:
- raise Exception(ex)
-
- if len(data.shape) > 1:
- data = data[:, 0]
- assert len(data) > 2# check duration of audio file is > 2 samples (because otherwise the slice operation was on the wrong dimension)
-
- if np.issubdtype(data.dtype, np.integer): # if audio data is type int
- max_mag = -np.iinfo(data.dtype).min # maximum magnitude = min possible value of intXX
- else: # if audio data is type fp32
- max_mag = max(np.amax(data), -np.amin(data))
- max_mag = (2**31)+1 if max_mag > (2**15) else ((2**15)+1 if max_mag > 1.01 else 1.0) # data should be either 16-bit INT, 32-bit INT or [-1 to 1] float32
-
- data = torch.FloatTensor(data.astype(np.float32))/max_mag
-
- if (torch.isinf(data) | torch.isnan(data)).any() and return_empty_on_exception:# resample will crash with inf/NaN inputs. return_empty_on_exception will return empty arr instead of except
- return [], sampling_rate or target_sr or 48000
- if target_sr is not None and sampling_rate != target_sr:
- data = torch.from_numpy(librosa.core.resample(data.numpy(), orig_sr=sampling_rate, target_sr=target_sr))
- sampling_rate = target_sr
-
- return data, sampling_rate
-
-def dynamic_range_compression(x, C=1, clip_val=1e-5):
- return np.log(np.clip(x, a_min=clip_val, a_max=None) * C)
-
-def dynamic_range_decompression(x, C=1):
- return np.exp(x) / C
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-def dynamic_range_decompression_torch(x, C=1):
- return torch.exp(x) / C
-
-class STFT():
- def __init__(self, sr=22050, n_mels=80, n_fft=1024, win_size=1024, hop_length=256, fmin=20, fmax=11025, clip_val=1e-5):
- self.target_sr = sr
-
- self.n_mels = n_mels
- self.n_fft = n_fft
- self.win_size = win_size
- self.hop_length = hop_length
- self.fmin = fmin
- self.fmax = fmax
- self.clip_val = clip_val
- self.mel_basis = {}
- self.hann_window = {}
-
- def get_mel(self, y, keyshift=0, speed=1, center=False):
- sampling_rate = self.target_sr
- n_mels = self.n_mels
- n_fft = self.n_fft
- win_size = self.win_size
- hop_length = self.hop_length
- fmin = self.fmin
- fmax = self.fmax
- clip_val = self.clip_val
-
- factor = 2 ** (keyshift / 12)
- n_fft_new = int(np.round(n_fft * factor))
- win_size_new = int(np.round(win_size * factor))
- hop_length_new = int(np.round(hop_length * speed))
-
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- mel_basis_key = str(fmax)+'_'+str(y.device)
- if mel_basis_key not in self.mel_basis:
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax)
- self.mel_basis[mel_basis_key] = torch.from_numpy(mel).float().to(y.device)
-
- keyshift_key = str(keyshift)+'_'+str(y.device)
- if keyshift_key not in self.hann_window:
- self.hann_window[keyshift_key] = torch.hann_window(win_size_new).to(y.device)
-
- pad_left = (win_size_new - hop_length_new) //2
- pad_right = max((win_size_new- hop_length_new + 1) //2, win_size_new - y.size(-1) - pad_left)
- if pad_right < y.size(-1):
- mode = 'reflect'
- else:
- mode = 'constant'
- y = torch.nn.functional.pad(y.unsqueeze(1), (pad_left, pad_right), mode = mode)
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft_new, hop_length=hop_length_new, win_length=win_size_new, window=self.hann_window[keyshift_key],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
- # print(111,spec)
- spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9))
- if keyshift != 0:
- size = n_fft // 2 + 1
- resize = spec.size(1)
- if resize < size:
- spec = F.pad(spec, (0, 0, 0, size-resize))
- spec = spec[:, :size, :] * win_size / win_size_new
-
- # print(222,spec)
- spec = torch.matmul(self.mel_basis[mel_basis_key], spec)
- # print(333,spec)
- spec = dynamic_range_compression_torch(spec, clip_val=clip_val)
- # print(444,spec)
- return spec
-
- def __call__(self, audiopath):
- audio, sr = load_wav_to_torch(audiopath, target_sr=self.target_sr)
- spect = self.get_mel(audio.unsqueeze(0)).squeeze(0)
- return spect
-
-stft = STFT()
diff --git a/spaces/MathysL/AutoGPT4/tests/__init__.py b/spaces/MathysL/AutoGPT4/tests/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Matthijs/mms-tts-demo/uroman/lib/NLP/English.pm b/spaces/Matthijs/mms-tts-demo/uroman/lib/NLP/English.pm
deleted file mode 100644
index e78fba5e381d425feb1a89696afad7d974063abb..0000000000000000000000000000000000000000
--- a/spaces/Matthijs/mms-tts-demo/uroman/lib/NLP/English.pm
+++ /dev/null
@@ -1,3112 +0,0 @@
-################################################################
-# #
-# English #
-# #
-################################################################
-
-package NLP::English;
-
-use File::Basename;
-use File::Spec;
-
-# tok v1.3.7 (May 16, 2019)
-
-$chinesePM = NLP::Chinese;
-$ParseEntry = NLP::ParseEntry;
-$util = NLP::utilities;
-$utf8 = NLP::UTF8;
-$logfile = "";
-# $logfile2 = (-d "/nfs/isd/ulf/smt/agile") ? "/nfs/isd/ulf/smt/agile/minilog" : "";
-# $util->init_log($logfile2);
-
-$currency_symbol_list = "\$|\xC2\xA5|\xE2\x82\xAC|\xE2\x82\xA4";
-$english_resources_skeleton_dir = "";
-%dummy_ht = ();
-
-sub build_language_hashtables {
- local($caller, $primary_entity_style_filename, $data_dir) = @_;
-
- unless ($data_dir) {
- $default_data_dir = "/nfs/nlg/users/textmap/brahms-ml/arabic/bin/modules/NLP";
- $data_dir = $default_data_dir if -d $default_data_dir;
- }
- my $english_word_filename = "$data_dir/EnglishWordlist.txt";
- my $default_entity_style_MT_filename = "$data_dir/EntityStyleMT-zh.txt";
- my $entity_style_all_filename = "$data_dir/EntityStyleAll.txt";
- my $EnglishNonNameCapWords_filename = "$data_dir/EnglishNonNameCapWords.txt";
- $english_resources_skeleton_dir = "$data_dir/EnglishResources/skeleton";
- %english_annotation_ht = ();
- %annotation_english_ht = ();
- %english_ht = ();
- $CardinalMaxWithoutComma = 99999;
- $CardinalMaxNonLex = 9999000;
-
- $primary_entity_style_filename = $default_entity_style_MT_filename unless defined($primary_entity_style_filename);
- if ($primary_entity_style_filename =~ /^(ar|zh)$/) {
- $languageCode = $primary_entity_style_filename;
- $primary_entity_style_filename
- = File::Spec->catfile($data_dir, "EntityStyleMT-$languageCode.txt");
- }
-
- open(IN,$english_word_filename) || die "Can't open $english_word_filename";
- while () {
- next unless $_ =~ /^s*[^#\s]/; # unless blank/comment line
- $_ =~ s/\s+$//;
- $line = $_;
- @lines = ($line);
- if (($line =~ /::gpe:/)
- && (($annotation) = ($line =~ /^.*?::(.*)$/))
- && (($pre_annotation, $singular_english, $post_annotation) = ($annotation =~ /^(.*)::plural-of:([^:]+)(|::.*)\s*$/))) {
- $derived_annotation = $singular_english . "::$pre_annotation$post_annotation";
- # print STDERR "derived_annotation: $derived_annotation\n";
- push(@lines, $derived_annotation);
- }
- foreach $line (@lines) {
- ($english,@slots) = split("::",$line);
- next unless defined($english);
- $english =~ s/\s+$//;
- $lc_english = $english;
- $lc_english =~ tr/[A-Z]/[a-z]/;
- $annotation = "::" . join("::",@slots) . "::";
- $english_annotation_ht{$english} = $annotation;
- $english_annotation_ht{$lc_english} = $annotation;
- $english_annotation_ht{"_ALT_"}->{$english}->{$annotation} = 1;
- $english_annotation_ht{"_ALT_"}->{$lc_english}->{$annotation} = 1;
- $synt = "";
- foreach $slot_value (@slots) {
- ($slot,$value) = ($slot_value =~ /\s*(\w[^:]+):\s*(\S.*)$/);
- next unless defined($value);
- $slot =~ s/\s+$//;
- $value =~ s/\s+$//;
- $synt = $value if $slot eq "synt";
- if (defined($annotation_english_ht{$slot_value})) {
- push(@{$annotation_english_ht{$slot_value}},$english);
- } else {
- my @elist = ($english);
- $annotation_english_ht{$slot_value} = \@elist;
- }
- if ($synt && defined($slot_value) && ($slot ne "synt")) {
- $annot = "synt:$synt" . "::$slot_value";
- if (defined($annotation_english_ht{$annot})) {
- push(@{$annotation_english_ht{$annot}},$english);
- } else {
- my @elist = ($english);
- $annotation_english_ht{$annot} = \@elist;
- }
- $english_annotation_ht{"_EN_SYNT_"}->{$english}->{$synt}->{$slot} = $value;
- }
- }
- }
- }
- close(IN);
-
- if (open(IN,$EnglishNonNameCapWords_filename)) {
- while () {
- next unless $_ =~ /^s*[^#\s]/; # unless blank/comment line
- $_ =~ s/\s+$//;
- $english_ht{(lc $_)}->{COMMON_NON_NAME_CAP} = 1;
- }
- close(IN);
- } else {
- print STDERR "Can't open $EnglishNonNameCapWords_filename\n";
- }
-
- foreach $style ("primary", "all") {
- if ($style eq "primary") {
- $entity_style_filename = $primary_entity_style_filename || $default_entity_style_MT_filename;
- } elsif ($style eq "all") {
- $entity_style_filename = $entity_style_all_filename;
- } else {
- next;
- }
- %ht = ();
- open(IN,$entity_style_filename) || die("Can't open $entity_style_filename (stylefile)");
- my $n_entries = 0;
- while () {
- next unless $_ =~ /^s*[^#\s]/; # unless blank/comment line
- $_ =~ s/\s+$//;
- ($slot,$value_string) = ($_ =~ /^([^:]+):\s*(\S.*)$/);
- next unless defined($value_string);
- if (defined($ht{$slot})) {
- print STDERR "Warning: ignoring duplicate entry for $slot in $entity_style_filename\n";
- next;
- }
- @values = split("::", $value_string);
- foreach $value (@values) {
- $value =~ s/^\s+//g;
- $value =~ s/\s+$//g;
- }
- my @values_copy = @values;
- $ht{$slot} = \@values_copy;
- $n_entries++;
- }
- # print STDERR "Processed $n_entries entries in $entity_style_filename\n";
- close(IN);
- if ($style eq "primary") {
- %english_entity_style_ht = %ht;
- } elsif ($style eq "all") {
- %english_entity_style_all_ht = %ht;
- }
- }
-
- if (defined($raw = $english_entity_style_ht{CardinalMaxWithoutComma})
- && (@styles = @{$raw}) && ($n = $styles[0]) && ($n =~ /^\d+$/) && ($n >= 999)) {
- $CardinalMaxWithoutComma = $n;
- }
- if (defined($raw = $english_entity_style_ht{CardinalMaxNonLex})
- && (@styles = @{$raw}) && ($n = $styles[0]) && ($n =~ /^\d+$/) && ($n >= 999999)) {
- $CardinalMaxNonLex = $n;
- }
-
- return (*english_annotation_ht,*annotation_english_ht,*english_entity_style_ht);
-}
-
-sub read_language_variations {
- local($this, $filename, *ht) = @_;
-
- my $n = 0;
- my $line_number = 0;
- if (open(IN, $filename)) {
- while () {
- $line_number++;
- $us = $util->slot_value_in_double_colon_del_list($_, "us");
- $uk = $util->slot_value_in_double_colon_del_list($_, "uk");
- $formal = $util->slot_value_in_double_colon_del_list($_, "formal");
- $informal = $util->slot_value_in_double_colon_del_list($_, "informal");
- if ($us && $uk) {
- $ht{VARIATION_UK_US}->{$uk}->{$us} = 1;
- $n++;
- }
- if ($informal && $formal) {
- $ht{VARIATION_INFORMAL_FORMAL}->{$informal}->{$formal} = 1;
- $n++;
- }
- }
- close(IN);
- # print STDERR "Read $n spelling variation entries from $filename\n";
- }
-}
-
-sub entity_style_listing {
- local($caller,$attr) = @_;
-
- if (defined($l = $english_entity_style_ht{$attr})) {
- @sl = @{$l};
- if (($#sl == 0) && ($sl[0] eq "all")) {
- if (defined($al = $english_entity_style_all_ht{$attr})) {
- return @{$al};
- } else {
- return ();
- }
- } else {
- return @sl;
- }
- } else {
- return ();
- }
-}
-
-sub is_abbreviation {
- local($caller,$noun) = @_;
-
- $result = defined($annotation_s = $english_annotation_ht{$noun})
- && ($annotation_s =~ /::abbreviation:true::/);
-# print "is_abbreviation($noun): $result\n";
- return $result;
-}
-
-sub noun_adv_sem {
- local($caller,$noun) = @_;
-
- return "" unless defined($annotation_s = $english_annotation_ht{$noun});
- ($adv_sem) = ($annotation_s =~ /::adv_sem:([-_a-z]+)::/);
- return "" unless defined($adv_sem);
- return $adv_sem;
-}
-
-sub numeral_value {
- local($caller,$numeral) = @_;
-
- return "" unless defined($annotation_s = $english_annotation_ht{$numeral});
- ($value) = ($annotation_s =~ /::value:(\d+)::/);
- return "" unless defined($value);
- return $value;
-}
-
-sub annot_slot_value {
- local($caller,$lex, $slot) = @_;
-
- return "" unless defined($annotation_s = $english_annotation_ht{$lex});
- ($value) = ($annotation_s =~ /::$slot:([-_a-z]+)(?:::.*|)\s*$/i);
- return "" unless defined($value);
- return $value;
-}
-
-sub annot_slot_values {
- local($caller,$lex, $slot) = @_;
-
- return () unless @annotations = keys %{$english_annotation_ht{"_ALT_"}->{$lex}};
- @annot_slot_values = ();
- foreach $annotation_s (@annotations) {
- ($value) = ($annotation_s =~ /::$slot:([^:]+)(?:::.*|)\s*$/i);
- if (defined($value)) {
- $value =~ s/\s*$//;
- push(@annot_slot_values, $value);
- }
- }
- return @annot_slot_values;
-}
-
-# quick and dirty
-sub noun_number_form {
- local($caller,$noun,$number) = @_;
-
- $noun = "rupee" if $noun =~ /^Rs\.?$/;
- $noun = "kilometer" if $noun =~ /^km$/;
- $noun = "kilogram" if $noun =~ /^kg$/;
- $noun = "meter" if $noun =~ /^m$/;
- $noun = "second" if $noun =~ /^(s|secs?\.?)$/;
- $noun = "minute" if $noun =~ /^(mins?\.?)$/;
- $noun = "hour" if $noun =~ /^(h|hrs?\.?)$/;
- $noun = "year" if $noun =~ /^(yrs?\.?)$/;
- $noun = "degree" if $noun =~ /^(deg\.?)$/;
- $noun = "foot" if $noun =~ /^(feet|ft\.?)$/;
- $noun = "square kilometer" if $noun =~ /^sq\.? km/;
- $noun =~ s/metre$/meter/;
- $noun =~ s/litre$/liter/;
- $noun =~ s/gramme$/gram/;
- $noun =~ s/tonne$/ton/;
- return $noun if $noun =~ /\$$/;
- return $noun unless $number =~ /^[0-9.]+$/;
- return $noun if $util->member($noun,"percent"); # no change in plural
- return $noun if $noun =~ /\b(yuan|renminbi|RMB|rand|won|yen|ringgit|birr)$/; # no change in plural
- return $noun if $number <= 1;
-
- return $noun if $caller->is_abbreviation($noun);
-
- $noun =~ s/^(hundred|thousand|million|billion|trillion)\s+//;
- return $noun if $noun =~ /^(dollar|kilometer|pound|ton|year)s$/i;
-
- $original_noun = $noun;
- #check for irregular plural
- $annot = "synt:noun::plural-of:$noun";
- if (defined($annotation_english_ht{$annot})) {
- @elist = @{$annotation_english_ht{$annot}};
- return $elist[0] if @elist;
- }
-
- $noun = $noun . "s";
- return $noun if $noun =~ /(a|e|o|u)ys$/; # days, keys, toys, guys
- $noun =~ s/ys$/ies/; # babies
- $noun =~ s/ss$/ses/; # buses
- $noun =~ s/xs$/xes/; # taxes
- $noun =~ s/shs$/shes/; # dishes
- $noun =~ s/chs$/ches/; # churches
- $noun =~ s/mans$/men/; # women
- # print STDERR "NNF: $original_noun($number): $noun\n";
- return $noun;
-}
-
-# quick and dirty
-sub lex_candidates {
- local($caller,$surf) = @_;
-
- @lex_cands = ($surf);
- $lex_cand = $surf;
- $lex_cand =~ s/ies$/y/;
- push(@lex_cands,$lex_cand) unless $util->member($lex_cand, @lex_cands);
- $lex_cand = $surf;
- $lex_cand =~ s/s$//;
- push(@lex_cands,$lex_cand) unless $util->member($lex_cand, @lex_cands);
- $lex_cand = $surf;
- $lex_cand =~ s/es$//;
- push(@lex_cands,$lex_cand) unless $util->member($lex_cand, @lex_cands);
- $lex_cand = $surf;
- $lex_cand =~ s/\.$//;
- push(@lex_cands,$lex_cand) unless $util->member($lex_cand, @lex_cands);
- $lex_cand = $surf;
- $lex_cand =~ s/men$/man/;
- push(@lex_cands,$lex_cand) unless $util->member($lex_cand, @lex_cands);
-
- return @lex_cands;
-}
-
-# quick and dirty
-sub pos_tag {
- local($caller,$surf) = @_;
-
- return CD if ($surf =~ /^-?[0-9,\.]+$/);
- return NN if ($surf =~ /^($currency_symbol_list\d)/);
- @lex_candidates = $caller->lex_candidates($surf);
-# print " lex_candidates: @lex_candidates\n";
- foreach $lex_cand (@lex_candidates) {
- if (defined($annotation_s = $english_annotation_ht{$lex_cand})) {
-# print " annotation: $annotation_s\n";
- ($synt) = ($annotation_s =~ /::synt:([^:]+)::/);
- if (defined($synt)) {
- if ($synt eq "art") {
- return "DT";
- } elsif ($synt eq "adj") {
- ($grade) = ($annotation_s =~ /::grade:([^:]+)::/);
- if (defined($grade) && ($grade eq "superlative")) {
- return "JJS";
- } elsif (defined($grade) && ($grade eq "comparative")) {
- return "JJR";
- } else {
- return "JJ";
- }
- } elsif ($synt eq "noun") {
- if ($lex_cand eq $surf) {
- return "NN";
- } else {
- return "NNS";
- }
- } elsif ($synt eq "name") {
- return "NNP";
- } elsif ($synt eq "cardinal") {
- return "CD";
- } elsif ($synt eq "ordinal") {
- return "JJ";
- } elsif ($synt eq "prep") {
- return "IN";
- } elsif ($synt eq "conj") {
- return "CC";
- } elsif ($synt eq "wh_pron") {
- return "WP";
- } elsif ($synt eq "adv") {
- return "RB";
- } elsif ($synt eq "genetive_particle") {
- return "POS";
- } elsif ($synt eq "ordinal_particle") {
- return "NN";
- } elsif ($synt eq "suffix_particle") {
- return "NN";
- } elsif ($synt =~ /^int(erjection)?$/) {
- return "UH";
- } elsif (($synt =~ /^punctuation$/)
- && $util->is_rare_punctuation_string_p($surf)) {
- return "SYM";
- } elsif ($synt =~ /\bverb$/) {
- if ($surf =~ /^(is)$/) {
- return "VBZ";
- } else {
- return "VB";
- }
- }
- }
- }
- }
- return "";
-}
-
-sub indef_art_filter {
- local($caller,$surf) = @_;
-
- # check article in lexical annotation
- # e.g. hour::synt:noun::unit:temporal::indef-article:an
- # uniform::synt:noun::indef-article:a
- ($surf_article,$word) = ($surf =~ /^(an?) (\S+)\s*/);
- if (defined($surf_article)
- && defined($word)
- && defined($annotation = $english_annotation_ht{$word})) {
- ($ann_article) = ($annotation =~ /::indef-article:([^:]+)::/);
- if (defined($ann_article)) {
- return ($surf_article eq $ann_article) ? $surf : "";
- }
- }
- return "" if $surf =~ /\ban [bcdfghjklmnpqrstvwxyz]/;
- return "" if $surf =~ /\ban (US)\b/;
- return "" if $surf =~ /\ba [aeio]/;
- return "" if $surf =~ /\ba (under)/;
- return $surf;
-}
-
-sub wordlist_synt {
- local($caller,$word) = @_;
-
- return "" unless defined($annotation = $english_annotation_ht{$word});
- ($synt) = ($annotation =~ /::synt:([^:]+)::/);
- return $synt || "";
-}
-
-sub qualifier_filter {
- local($caller,$surf) = @_;
-
- return "" if $surf =~ /\b(over|more than|approximately) (million|billion|trillion)/;
- return "" if $surf =~ /\b(over) (once|twice)/;
- return $surf;
-}
-
-sub quantity_filter {
- local($caller,$surf) = @_;
-
- return "" if $surf =~ /^(a|an)-/; # avoid "the a-week meeting"
- return $surf;
-}
-
-sub value_to_english {
- local($caller,$number) = @_;
-
- $result = "";
-
- $annot = "value:$number";
- if (defined($annotation_english_ht{$annot})) {
- @elist = @{$annotation_english_ht{$annot}};
- $result = $elist[0] if @elist;
- }
-# print "value_to_english($number)=$result\n";
- return $result;
-}
-
-sub value_to_english_ordinal {
- local($caller,$number) = @_;
-
- $result = "";
-
- $annot = "synt:ordinal::value:$number";
- if (defined($annotation_english_ht{$annot})) {
- @elist = @{$annotation_english_ht{$annot}};
- $result = $elist[0] if @elist;
- } else {
- $annot = "value:$number";
- if (defined($annotation_english_ht{$annot})) {
- @elist = @{$annotation_english_ht{$annot}};
- $cardinal = $elist[0] if @elist;
- $result = $cardinal . "th";
- $result =~ s/yth$/ieth/;
- }
- }
-# print "value_to_english($number)=$result\n";
- return $result;
-}
-
-sub english_with_synt_slot_value {
- local($caller, $english, $synt, $slot) = @_;
-
- return $english_annotation_ht{"_EN_SYNT_"}->{$english}->{$synt}->{$slot};
-}
-
-sub english_with_synt_slot_value_defined {
- local($caller, $synt, $slot) = @_;
-
- @englishes_with_synt_slot_value_defined = ();
- foreach $english (keys %{$english_annotation_ht{"_EN_SYNT_"}}) {
- push(@englishes_with_synt_slot_value_defined, $english)
- if defined($english_annotation_ht{"_EN_SYNT_"}->{$english}->{$synt}->{$slot})
- && ! $util->member($english, @englishes_with_synt_slot_value_defined)
- }
- return @englishes_with_synt_slot_value_defined;
-}
-
-sub number_composed_surface_form {
- local($caller,$number,$leave_num_section_p) = @_;
-
- return "" unless $number =~ /^\d+$/;
- $leave_num_section_p = 0 unless defined($leave_num_section_p);
- $anchor = "1000000000000000000000000";
- while (($number < $anchor) && ($anchor >= 1000000)) {
- $anchor =~ s/000//;
- }
-# print "number_composed_surface_form number: $number anchor:$anchor\n";
- return "" unless $anchor >= 1000000;
- return "" unless $english = $caller->value_to_english($anchor);
- $ending = $anchor;
- $ending =~ s/^1000//;
- return "" unless ($number =~ /$ending$/) || (($number * 1000) % $anchor) == 0;
- $num_section = $number / $anchor;
- if (($num_section =~ /^[1-9]0?$/) && ! $leave_num_section_p) {
- $num_section_english = $caller->value_to_english($num_section);
- $num_section = $num_section_english if $num_section_english;
- }
- $num_section = $caller->commify($num_section); # only for extremely large numbers
- return "$num_section $english";
-}
-
-sub de_scientify {
- local($caller,$number) = @_;
-
-# print "de_scientify: $number\n";
- if ($number =~ /[eE][-+]/) {
- ($n,$exp) = ($number =~ /^(\d+)[eE]\+(\d+)$/);
- if (defined($exp)) {
- $result = $n;
- foreach $i (0 .. $exp-1) {
- $result .= "0"
- }
- return $result;
- } else {
- ($n,$f,$exp) = ($number =~ /^(\d+)\.(\d+)[eE]\+(\d+)$/);
- if (defined($exp) && ($exp >= length($f))) {
- $result = "$n$f";
- foreach $i (0 .. $exp-1-length($f)) {
- $result .= "0";
- }
- return $result;
- }
- }
- }
- return $number;
-}
-
-sub commify {
- local($caller,$number) = @_;
-
- my $text = reverse $number;
- $text =~ s/(\d\d\d)(?=\d)(?!\d*\.)/$1,/g;
- return scalar reverse $text;
-}
-
-my %plural_rough_number_ht = (
- 10 => "tens",
- 12 => "dozens",
- 20 => "scores",
- 100 => "hundreds",
- 1000 => "thousands",
- 10000 => "tens of thousands",
- 100000 => "hundreds of thousands",
- 1000000 => "millions",
- 10000000 => "tens of millions",
- 100000000 => "hundreds of millions",
- 1000000000 => "billions",
- 10000000000 => "tens of billions",
- 100000000000 => "hundreds of billions",
- 1000000000000 => "trillions",
- 10000000000000 => "tens of trillions",
- 100000000000000 => "hundreds of trillions",
-);
-
-sub plural_rough_plural_number {
- local($caller,$number) = @_;
-
- return $plural_rough_number_ht{$number} || "";
-}
-
-my %roman_numeral_ht = (
- "I" => 1,
- "II" => 2,
- "III" => 3,
- "IIII" => 4,
- "IV" => 4,
- "V" => 5,
- "VI" => 6,
- "VII" => 7,
- "VIII" => 8,
- "VIIII" => 9,
- "IX" => 9,
- "X" => 10,
- "XX" => 20,
- "XXX" => 30,
- "XXXX" => 40,
- "XL" => 40,
- "L" => 50,
- "LX" => 60,
- "LXX" => 70,
- "LXXX" => 80,
- "LXXXX" => 90,
- "XC" => 90,
- "C" => 100,
- "CC" => 200,
- "CCC" => 300,
- "CCCC" => 400,
- "CD" => 400,
- "D" => 500,
- "DC" => 600,
- "DCC" => 700,
- "DCCC" => 800,
- "DCCCC" => 900,
- "CM" => 900,
- "M" => 1000,
- "MM" => 2000,
- "MMM" => 3000,
- "MMM" => 3000,
-);
-
-sub roman_numeral_value {
- local($caller,$s) = @_;
-
- if (($m, $c, $x, $i) = ((uc $s) =~ /^(M{0,3})(C{1,4}|CD|DC{0,4}|CM|)(X{1,4}|XL|LX{0,4}|XC|)(I{1,4}|IV|VI{0,4}|IX|)$/)) {
- $sum = ($roman_numeral_ht{$m} || 0)
- + ($roman_numeral_ht{$c} || 0)
- + ($roman_numeral_ht{$x} || 0)
- + ($roman_numeral_ht{$i} || 0);
- return $sum;
- } else {
- return 0;
- }
-}
-
-sub number_surface_forms {
- local($caller,$number,$pe) = @_;
-
- print STDERR "Warning from number_surface_forms: $number not a number\n"
- if $logfile && !($number =~ /^(\d+(\.\d+)?|\.\d+)$/);
- # $util->log("number_surface_forms number:$number", $logfile);
- # $util->log(" surf:$surf", $logfile) if $surf = ($pe && $pe->surf);
-
- $pe = "" unless defined($pe);
-
- @num_style_list = @{$english_entity_style_ht{"FollowSourceLanguageNumberStyle"}};
- $follow_num_style = $util->member("yes", @num_style_list)
- && (! (($number =~ /^([1-9]|10)$/) &&
- $util->member("except-small-numbers", @num_style_list)));
- $num_style = ($pe) ? $pe->get("num_style") : "";
- if ($follow_num_style) {
- if ($num_style =~ /digits_plus_alpha/) {
- if ($number =~ /^[1-9]\d?\d?000$/) {
- $digital_portion = $number;
- $digital_portion =~ s/000$//;
- return ("$digital_portion thousand");
- } elsif ($number =~ /^[1-9]\d?\d?000000$/) {
- $digital_portion = $number;
- $digital_portion =~ s/000000$//;
- return ("$digital_portion million");
- } elsif ($number =~ /^[1-9]\d?\d?000000000$/) {
- $digital_portion = $number;
- $digital_portion =~ s/000000000$//;
- return ("$digital_portion billion");
- }
- } elsif ($num_style eq "digits") {
- if ($number =~ /^\d{1,4}$/) {
- return ($number);
- }
- }
- }
-
- $number = $caller->de_scientify($number);
-
- $composed_form = $caller->number_composed_surface_form($number);
- $composed_form2 = $caller->number_composed_surface_form($number,1);
- $lex_form = $caller->value_to_english($number);
- $commified_form = $caller->commify($number);
-
- if ($lex_form) {
- if ($number >= 1000000) {
- @result = ("one $lex_form", "1 $lex_form", "a $lex_form", $lex_form, $commified_form);
- push(@result, $commified_form) if ($number <= $CardinalMaxNonLex);
- } elsif ($number >= 100) {
- @result = ($commified_form, "one $lex_form", "a $lex_form", $lex_form);
- } elsif ($number >= 10) {
- @result = ($number, $lex_form);
- } elsif ($number == 1) {
- @result = ("a", "an", $lex_form);
- } elsif ($number == 0) {
- @result = ($number, $lex_form);
- } else {
- @result = ($lex_form);
- }
- } elsif ($composed_form) {
- if ($composed_form eq $composed_form2) {
- @result = ($composed_form);
- } elsif (($number >= 10000000) && ($composed_form2 =~ /^[1-9]0/)) {
- @result = ($composed_form2, $composed_form);
- } else {
- @result = ($composed_form, $composed_form2);
- }
- push(@result, $commified_form) if $number <= $CardinalMaxNonLex;
- } else {
- ($ten,$one) = ($number =~ /^([2-9])([1-9])$/);
- ($hundred) = ($number =~ /^([1-9])00$/) unless defined($one);
- ($thousand) = ($number =~ /^([1-9]\d?)000$/) unless defined($one) || defined($hundred);
- if (defined($one) && defined($ten)
- && ($part1 = $caller->value_to_english($ten * 10))
- && ($part2 = $caller->value_to_english($one))) {
- $wordy_form = "$part1-$part2";
- @result = ($commified_form, $wordy_form);
- } elsif (defined($hundred)
- && ($part1 = $caller->value_to_english($hundred))) {
- $wordy_form = "$part1 hundred";
- @result = ($commified_form, $wordy_form);
- } elsif (defined($thousand)
- && ($part1 = $caller->value_to_english($thousand))) {
- $wordy_form = "$part1 thousand";
- @result = ($commified_form, $wordy_form);
- } elsif ($number =~ /^100000$/) {
- @result = ($commified_form, "one hundred thousand", "a hundred thousand", "hundred thousand");
- } elsif ($pe && ($pe->surf eq $number) && ($number =~ /^\d\d\d\d(\.\d+)?$/)) {
- @result = ($number);
- push(@result, $commified_form) unless $commified_form eq $number;
- } elsif ($number =~ /^\d{4,5}$/) {
- if ($commified_form eq $number) {
- @result = ($number);
- } else {
- @result = ($commified_form, $number);
- }
- } else {
- @result = ($commified_form);
- }
- }
- push (@result, $number)
- unless $util->member($number, @result) || ($number > $CardinalMaxWithoutComma);
-# $util->log("number_surface_forms result:@result", $logfile);
-
- # filter according to num_style
- if ($follow_num_style) {
- my @filtered_result = ();
- foreach $r (@result) {
- push(@filtered_result, $r)
- if (($num_style eq "digits") && ($r =~ /^\d+$/))
- || (($num_style eq "alpha") && ($r =~ /^[-\@ a-z]*$/i))
- || (($num_style eq "digits_plus_alpha") && ($r =~ /\d.*[a-z]/i));
- }
- @result = @filtered_result if @filtered_result;
- }
-
- if ($pe && $pe->childGloss("and")) {
- @new_result = ();
- foreach $r (@result) {
- if ($r =~ /^and /) {
- push(@new_result, $r);
- } else {
- push(@new_result, "and $r");
- }
- }
- @result = @new_result;
- }
- return @result;
-}
-
-sub number_range_surface_forms {
- local($caller,$pe) = @_;
-
- $value = $pe->value;
- $value_coord = $pe->get("value-coord");
- unless ($value_coord) {
- return $caller->number_surface_forms($value);
- }
- $prefix = "";
- if ($conj = $pe->get("conj")) {
- $connector = $conj;
- } else {
- $connector = ($value_coord == $value + 1) ? "or" : "to";
- }
- if ($pe->get("between")) {
- $prefix = "between ";
- $connector = "and";
- }
-
- $pe1 = $pe->child("head");
- $pe2 = $pe->child("coord");
- @result1 = $caller->number_surface_forms($value, $pe1);
- @result2 = $caller->number_surface_forms($value_coord, $pe2);
- @num_style_list = @{$english_entity_style_ht{"FollowSourceLanguageNumberStyle"}};
- $follow_num_style = 1 if $util->member("yes", @num_style_list);
-
- # between two thousand and three thousand => between two and three thousand
- # 3 million to 5 million => 3 to 5 million
- if ($follow_num_style && ($#result1 == 0) && ($#result2 == 0)) {
- $range = $prefix . $result1[0] . " $connector " . $result2[0];
- $util->log(" range1: $range", $logfile);
- $gazillion = "thousand|million|billion|trillion";
- ($a,$gaz1,$b,$gaz2) = ($range =~ /^(.+) ($gazillion) ($connector .+) ($gazillion)$/);
- if (defined($a) && defined($gaz1) && defined($b) && defined($gaz2) && ($gaz1 eq $gaz2)) {
- $range = "$a $b $gaz1";
- $util->log(" range2: $range", $logfile);
- return ($range);
- }
- }
-
- @result = ();
- foreach $result1 (@result1) {
- next if ($value >= 1000) && ($result1 =~ /^\d+$/);
- foreach $result2 (@result2) {
- next if $result1 =~ /^an?\b/;
- push(@result, "$prefix$result1 $connector $result2")
- if ($result1 =~ /^[a-z]+$/) && ($result2 =~ /^[a-z]+$/);
- next if ($result1 =~ /^[a-z]/) || ($result2 =~ /^[a-z]/);
- next if ($value_coord >= 1000) && ($result2 =~ /^\d+$/);
- ($digits1,$letters1) = ($result1 =~ /^(\d+(?:.\d+)?) ([a-z].*)$/);
- ($digits2,$letters2) = ($result2 =~ /^(\d+(?:.\d+)?) ([a-z].*)$/);
- if (defined($digits1) && defined($letters1)
- && defined($digits2) && defined($letters2)
- && ($letters1 eq $letters2)) {
- push(@result, "$prefix$digits1 $connector $digits2 $letters1");
- } elsif (($result1 =~ /^\d{1,3}$/) && ($result2 =~ /^\d{1,3}$/) && !$prefix) {
- push(@result, "$result1-$result2");
- if ($connector eq "to") {
- my $span = "$result1 to $result2";
- push(@result, $span) unless $util->member($span, @result);
- }
- } else {
- push(@result, "$prefix$result1 $connector $result2");
- }
- }
- }
- unless (@result) {
- $result1 = (@result1) ? $result1[0] : $value;
- $result2 = (@result2) ? $result2[0] : $value_coord;
- @result = "$prefix$result1 $connector $result2";
- }
- return @result;
-}
-
-sub q_number_surface_forms {
- local($caller,$pe) = @_;
-
- $surf = $pe->surf;
- return ($pe->gloss) unless $value = $pe->value;
- if (($value >= 1961) && ($value <= 2030)
- &&
- (($pe->get("struct") eq "sequence of digits")
- ||
- ($surf =~ /^\d+$/))) {
- $value = "$prefix $value" if $prefix = $pe->get("prefix");
- @result = ("$value");
- } else {
- @result = $caller->number_surface_forms($value,$pe);
- @result = $caller->qualify_entities($pe,@result);
- }
- return @result;
-}
-
-sub ordinal_surface_forms {
- local($caller,$number,$exclude_cardinals_p,$exclude_adverbials_p, $pe) = @_;
-
- if (defined($os = $english_entity_style_ht{"Ordinal"})) {
- @ordinal_styles = @{$os};
- } else {
- return ();
- }
- $exclude_cardinals_p = 0 unless defined($exclude_cardinals_p);
- @num_style_list = @{$english_entity_style_ht{"FollowSourceLanguageNumberStyle"}};
- $follow_num_style = 1 if $util->member("yes", @num_style_list);
- $num_style = ($pe) ? $pe->get("num_style") : "";
- $alpha_ok = ! ($follow_num_style && ($num_style =~ /^digits$/));
- my $c_number = $caller->commify($number);
- my $lex_form = "";
- $lex_form = $caller->value_to_english_ordinal($number) if $alpha_ok;
- my $adverbial_form
- = (($number =~ /^\d+$/) && ($number >= 1) && ($number <= 10)
- && $lex_form && $util->member("secondly", @ordinal_styles))
- ? $lex_form . "ly" : "";
- my $num_form = $caller->numeric_ordinal_form($number);
- my $c_num_form = $caller->numeric_ordinal_form($c_number);
- my @result = ();
-
-# print "lex_form: $lex_form num_form:$num_form c_num_form:$c_num_form\n";
- if ($lex_form && $util->member("second", @ordinal_styles)) {
- if (! $util->member("2nd", @ordinal_styles)) {
- @result = ($lex_form);
- } elsif ($c_num_form ne $num_form) {
- @result = ($c_num_form, $lex_form, $num_form);
- } elsif ($number >= 10) {
- @result = ($num_form, $lex_form);
- } else {
- @result = ($lex_form, $num_form);
- }
- } elsif ($util->member("2nd", @ordinal_styles)) {
- if ($c_num_form ne $num_form) {
- @result = ($c_num_form, $num_form);
- } else {
- @result = ($num_form);
- }
- }
- unless ($number =~ /^\d+$/) {
- print STDERR "Warning: $number not an integer (for ordinal)\n";
- }
- unless ($exclude_cardinals_p) {
- $incl_num_card = $util->member("2", @ordinal_styles);
- $incl_lex_card = $util->member("two", @ordinal_styles);
- foreach $card ($caller->number_surface_forms($number)) {
- if ($card =~ /^an?$/) {
- # don't include
- } elsif ($card =~ /^[0-9,]+$/) {
- push(@result, $card) if $incl_num_card;
- } else {
- push(@result, $card) if $incl_lex_card && $alpha_ok;
- }
- }
- }
- push(@result,$adverbial_form) if $adverbial_form && ! $exclude_adverbials_p;
- push(@result, $num_form) unless @result;
- return @result;
-}
-
-sub ordinal_surface_form {
- local($caller,$number,$exclude_cardinals_p,$exclude_adverbials_p, $pe) = @_;
-
- my @surf_forms = $caller->ordinal_surface_forms($number,$exclude_cardinals_p,$exclude_adverbials_p, $pe);
- return (@surf_forms) ? $surf_forms[0] : $caller->numeric_ordinal_form($number);
-}
-
-sub fraction_surface_forms {
- local($caller,$pe,$modp) = @_;
-
- my @result = ();
- $numerator = $pe->get("numerator");
- $denominator = $pe->get("denominator");
-# print "numerator: $numerator denominator:$denominator\n";
- @surf_nums = $caller->number_surface_forms($numerator,$pe);
- @surf_nums = ("one") if $numerator == 1;
- @surf_dens = $caller->ordinal_surface_forms($denominator,1,1);
- @surf_dens = ("half") if $denominator == 2;
- @surf_dens = ("quarter") if $denominator == 4;
- @surf_dens = ("tenth") if $denominator == 10;
-# print "surf_nums: @surf_nums surf_dens: @surf_dens\n";
- @fraction_patterns = @{$english_entity_style_ht{"Fraction"}};
- if (@surf_nums && @surf_dens) {
- $surf_num = $surf_nums[0];
- $surf_den = $surf_dens[0];
- $surf_num_den = "";
- foreach $sd (@surf_dens) {
- $surf_num_den = $sd if $sd =~ /^\d/;
- }
- $surf_den_w_proper_number = $caller->noun_number_form($surf_den, $numerator);
- foreach $fp (@fraction_patterns) {
- if ($fp eq "one tenth") {
- push(@result, $surf_num . " " . $surf_den_w_proper_number) unless $modp;
- } elsif ($fp eq "one-tenth") {
- if ($modp) {
- push(@result, $surf_num . "-" . $surf_den);
- } else {
- push(@result, $surf_num . "-" . $surf_den_w_proper_number);
- }
- } elsif ($fp eq "1/10") {
- push(@result, $numerator . "/" . $denominator);
- } elsif ($fp eq "1/10th") {
- push(@result, $numerator . "/" . $surf_num_den) if $surf_num_den;
- }
- }
- return @result;
- } else {
- return ($pe->gloss);
- }
-}
-
-sub currency_surface_forms {
- local($caller,$pe) = @_;
-
- @currency_surf_forms = ();
- return @currency_surf_forms unless $pe->sem =~ /monetary quantity/;
- $unit = $pe->get("unit");
- return ($pe->gloss) unless $quant = $pe->get("quant");
- return ($pe->gloss) if $pe->childSem("head") eq "currency symbol";
- $quant_pe = $pe->child("quant");
- if ($unit =~ /^(US|Hongkong) dollar$/) {
- @units = $caller->entity_style_listing($unit);
- } elsif ($unit eq "yuan") {
- @units = $caller->entity_style_listing("Chinese yuan");
- @rmb_pos = @{$english_entity_style_ht{"Chinese RMB position"}};
- @rmb_pos = ("before-number", "after-number") if $util->member("all",@units);
- } else {
- @units = ($unit);
- }
- if (($pe->sem =~ /range$/) && $quant_pe) {
- @quants = $caller->number_range_surface_forms($quant_pe);
- } else {
- @quants = $caller->number_surface_forms($quant, $quant_pe);
- }
- @quants = ($quant) unless @quants;
- # print STDERR "units: @units \n";
- foreach $q (@quants) {
- foreach $u_sing (@units) {
- $u = ($modp) ? $u_sing : $caller->noun_number_form($u_sing, $quant);
-# print " q: $q unit: $u value: $quant\n";
- if ($u eq "RMB") {
- if ($util->member("before-number", @rmb_pos)) {
- if ($q =~ /^\d/) {
- push(@currency_surf_forms, "RMB" . $q);
- }
- }
- if ($util->member("after-number", @rmb_pos)) {
- push(@currency_surf_forms, $q . " RMB");
- }
- } elsif ($u =~ /\$$/) {
- if ($q =~ /^\d/) {
- $currency_surf_form = $u . $q;
- push(@currency_surf_forms, $currency_surf_form);
- }
- } else {
- $new_form = "$q $u";
- push(@currency_surf_forms, $new_form) if $caller->indef_art_filter($new_form);
- }
- }
- }
- @currency_surf_forms = $caller->qualify_entities($pe,@currency_surf_forms);
-
- # print STDERR "currency_surface_forms: @currency_surf_forms \n";
- return @currency_surf_forms;
-}
-
-sub age_surface_forms {
- local($caller,$pe, $modp) = @_;
-
- $gloss = $pe->gloss;
- @age_surf_forms = ();
- return @age_surf_forms unless $pe->sem =~ /age quantity/;
- $unit = $pe->get("unit");
- return ($gloss) unless $quant = $pe->get("quant");
- $temporal_quant_pe = $pe->child("head");
- $synt = $pe->synt;
- if ($synt =~ /parenthetical/) {
- if ($pe->get("slashed")) {
- @age_markers = $caller->entity_style_listing("ParentheticalAgeFormatSlashed");
- @age_markers = $caller->entity_style_listing("ParentheticalAgeFormat") unless @age_markers;
- } else {
- @age_markers = $caller->entity_style_listing("ParentheticalAgeFormat");
- }
- return ($gloss) unless @age_markers;
- foreach $a (@age_markers) {
- $age_surf_form = $a;
- $age_surf_form =~ s/8/$quant/;
- push(@age_surf_forms, $age_surf_form);
- }
- } elsif (($quant =~ /^\d+$/) && ($temporal_quant_pe->sem eq "age unit")) {
- @quants = $caller->number_surface_forms($quant);
- @quants = ($quant) if $pe->childSurf("quant") =~ /^\d+$/;
- foreach $quant2 (@quants) {
- if ($modp) {
- push(@age_surf_forms, "$quant2-year-old");
- } else {
- $plural_marker = ($quant >= 2) ? "s" : "";
- push(@age_surf_forms, "$quant2 year$plural_marker old");
- }
- }
- } elsif ($temporal_quant_pe && ($temporal_quant_pe->sem eq "temporal quantity")) {
- @temporal_quants = $caller->quantity_surface_forms($temporal_quant_pe, $modp);
- foreach $temporal_quant (@temporal_quants) {
- push(@age_surf_forms, $temporal_quant . (($modp) ? "-" : " ") . "old");
- }
- } else {
- return ($gloss);
- }
-
- @age_surf_forms = ($gloss) unless @age_surf_forms;
- return @age_surf_forms;
-}
-
-sub occurrence_surface_forms {
- local($caller,$pe,$modp) = @_;
-
- @quantity_surf_forms = ();
- return ($pe->gloss) unless $quant = $pe->get("quant");
- $quant_coord = $pe->get("quant-coord");
- $quant_pe = $pe->child("quant");
- $unit = "time";
- if (($pe->sem =~ /range$/) && $quant_pe) {
- @quants = $caller->number_range_surface_forms($quant_pe);
- } else {
- @quants = $caller->number_surface_forms($quant, $quant_pe);
- }
- @quants = ($quant) unless @quants;
- if ($modp) {
- return () if $pe->get("qualifier") || $quant_coord;
- return ("one-time") if $quant eq "1";
- return ("two-time", "two-fold", "2-fold") if $quant eq "2";
- } else {
- if ($quant_coord) {
- return $caller->qualify_entities($pe, ("once or twice"))
- if $quant eq "1" and $quant_coord eq "2";
- } else {
- return $caller->qualify_entities($pe, ("once")) if $quant eq "1";
- return $caller->qualify_entities($pe, ("twice", "two times", "2 times",
- "2-fold", "two fold")) if $quant eq "2";
- }
- }
- foreach $q (@quants) {
- $u = ($modp) ? $unit : $caller->noun_number_form($unit, $quant);
- $new_form = "$q $u";
- if ($modp) {
- # for the time being, no "more than/over/..." in modifiers: more than 20-ton
- if ($pe->get("qualifier")) {
- $new_form = "";
- } else {
- $new_form =~ s/-/-to-/;
- $new_form =~ s/ /-/g;
- }
- }
- push(@quantity_surf_forms, $new_form) if $new_form;
- push(@quantity_surf_forms, "$q-fold") if $q =~ /\d/ || ($quant <= 9);
- }
- @quantity_surf_forms = $caller->qualify_entities($pe,@quantity_surf_forms);
-
- return @quantity_surf_forms;
-}
-
-sub quantity_surface_forms {
- local($caller,$pe,$modp) = @_;
-
- if ($pe->get("complex") eq "true") {
- return () if $modp;
- $quantity_surf_form = $pe->gloss;
- return ($quantity_surf_form);
- }
-
- @quantity_surf_forms = ();
- $sem = $pe->get("sem");
- $scale = $pe->get("scale");
- $scale_mod = $pe->get("scale_mod");
- $unit = $pe->get("unit") || $scale;
- $mod_gloss = $pe->get("mod");
- return ($pe->gloss) unless $quant = $pe->get("quant");
- $quant_coord = $pe->get("quant-coord");
- $quant_comb = $quant_coord || $quant;
- $quant_pe = $pe->child("quant");
- if (defined($u_style = $english_entity_style_ht{"\u$unit"})) {
- @units = @{$u_style};
- } else {
- @units = ($unit);
- }
- if (($pe->sem =~ /range$/) && $quant_pe) {
- @quants = $caller->number_range_surface_forms($quant_pe);
- } else {
- @quants = $caller->number_surface_forms($quant, $quant_pe);
- }
- @quants = ($quant) unless @quants;
- foreach $q (@quants) {
- foreach $u_sing (@units) {
- my $u = $u_sing;
- if (($sem =~ /seismic quantity/) && $scale) {
- $scale =~ s/(\w+)\s*/\u\L$1/g if $scale =~ /^(Richter|Mercalli)/i;
- $u = "on the $scale_mod $scale scale";
- $u =~ s/\s+/ /g;
- } elsif (($u_sing =~ /\S/) && ! $modp) {
- $u = $caller->noun_number_form($u_sing, $quant_comb);
- }
-# print " q: $q unit: $u value: $quant modp: $modp\n";
- @mods = ("");
- @mods = ("consecutive", "in a row") if $mod_gloss eq "continuous";
- foreach $mod (@mods) {
- $pre_quant_mod = "";
- $in_quant_mod = ($mod =~ /(consecutive)/) ? "$mod " : "";
- $post_quant_mod = ($mod =~ /(in a row)/) ? " $mod" : "";
- $new_form = "$pre_quant_mod$q $in_quant_mod$u$post_quant_mod";
- if ($caller->is_abbreviation($u)) {
- if (($pe->sem =~ /range/) && ($q =~ /^[-0-9,\. to]+$/)
- && $modp && !($new_form =~ / (to|or) /)) {
- $new_form =~ s/-/-to-/;
- $new_form =~ s/ /-/g;
- } elsif ($q =~ /^[-0-9,\.]+$/) {
-# $new_form =~ s/ //g;
- } else {
- $new_form = "";
- }
- } elsif ($modp) {
- # for the time being, no "more than/over/..." in modifiers: more than 20-ton
- if (($pe->get("qualifier")) || $mod) {
- $new_form = "";
- } elsif ($u =~ /(square|cubic|metric|short)/) {
- # no hyphenation for the time being (based on CTE style)
- } elsif (($pe->sem =~ /range/) && !($new_form =~ / (to|or) /)) {
- $new_form =~ s/-/-to-/;
- $new_form =~ s/ /-/g;
- } else {
- $new_form =~ s/ /-/g;
- }
- }
- push(@quantity_surf_forms, $new_form)
- if $new_form && $caller->quantity_filter($new_form) && $caller->indef_art_filter($new_form);
- }
- }
- }
- @quantity_surf_forms = $caller->qualify_entities($pe,@quantity_surf_forms);
-
- # print STDERR "QSF unit:$unit sem:$sem Result(s): " . join("; ", @quantity_surf_forms) . "\n";
- return @quantity_surf_forms;
-}
-
-sub qualify_entities {
- local($caller,$pe,@surf_forms) = @_;
-
- $prefix = $pe->get("prefix");
- $prefix_clause = ($prefix) ? "$prefix " : "";
- if ($qualifier = $pe->get("qualifier")) {
- $qualifier =~ s/-/ /g;
- $qualifier_key = $qualifier;
- $qualifier_key =~ s/(\w+)\s*/\u\L$1/g;
- # print "qualifier_key: $qualifier_key\n";
- @new_list = ();
- if (defined($value = $english_entity_style_ht{$qualifier_key})) {
- @quals = @{$value};
- # print STDERR " qk $qualifier_key in ht: @quals :: @surf_forms\n";
- foreach $q (@quals) {
- foreach $surf_form (@surf_forms) {
- $new_form = "$prefix_clause$q $surf_form";
- push(@new_list, $new_form) if $caller->qualifier_filter($new_form);
- }
- }
- return @new_list if @new_list;
- } else {
- @keys = sort keys %english_entity_style_ht;
- # print STDERR " did not find qk $qualifier_key in ht: @keys\n";
- foreach $surf_form (@surf_forms) {
- if (($qualifier =~ /^(couple|few|lot|many|number|several|some)$/i)
- && (($art, $lex) = ($surf_form =~ /^(an?)\s+(\S|\S.*\S)\s*$/i))) {
- $plural_form = $caller->noun_number_form($lex,2);
- $new_form = "$prefix_clause$qualifier $plural_form";
- } else {
- $new_form = "$prefix_clause$qualifier $surf_form";
- }
- push(@new_list, $new_form) if $caller->qualifier_filter($new_form);
- }
- return @new_list if @new_list;
- }
- }
- if ($prefix) {
- @prefixed_surf_forms = ();
- foreach $surf_form (@surf_forms) {
- if ($surf_form =~ /^$prefix /) { # already prefixed
- push(@prefixed_surf_forms, $surf_form);
- } else {
- push(@prefixed_surf_forms, "$prefix $surf_form");
- }
- }
- return @prefixed_surf_forms;
- } else {
- return @surf_forms;
- }
-}
-
-sub percent_surface_forms {
- local($caller,$pe,$modp) = @_;
-
- @percent_surf_forms = ();
- return @percent_surf_forms unless $pe->sem eq "percentage";
- $prefix = "";
- $quant = $pe->gloss;
- $quant =~ s/%$//;
- $quant =~ s/^and //;
- if ($pe->gloss =~ /^and /) {
- $prefix = "and";
- }
- @percent_markers = $caller->entity_style_listing("Percentage");
- @quants = $caller->number_surface_forms($quant);
- @quants = ($quant) unless @quants;
- foreach $p (@percent_markers) {
- foreach $q (@quants) {
- if ($p =~ /%$/) {
- if ($q =~ /\d$/) {
- $percent_surf_form = $q . "%";
- $percent_surf_form = "$prefix $percent_surf_form" if $prefix;
- push(@percent_surf_forms, $percent_surf_form);
- push(@percent_surf_forms, "by $percent_surf_form") unless $modp || $percent_surf_form =~ /^and /;
- }
- } else {
- if ((($p =~ /^\d/) && ($q =~ /^\d/))
- ||
- (($p =~ /^[a-z]/) && ($q =~ /^[a-z]/))) {
- if ($p =~ /percentage point/) {
- if ($quant == 1) {
- $percent_surf_form = $q . " percentage point";
- } else {
- $percent_surf_form = $q . " percentage points";
- }
- } else {
- $percent_surf_form = $q . " percent";
- }
- $percent_surf_form = "$prefix $percent_surf_form" if $prefix;
- $percent_surf_form =~ s/ /-/g if $modp;
- push(@percent_surf_forms, $percent_surf_form);
- push(@percent_surf_forms, "by $percent_surf_form") unless $modp || $percent_surf_form =~ /^and /;
- }
- }
- }
- }
- return @percent_surf_forms;
-}
-
-sub decade_century_surface_forms {
- local($caller,$pe) = @_;
-
- if ($pe->sem =~ /century/) {
- $gloss = $pe->gloss;
- return ("the $gloss", "in the $gloss", $gloss);
- }
- @decade_surf_forms = ();
- return @decade_surf_forms unless $pe->sem =~ /year range\b.*\bdecade/;
- @decade_markers = @{$english_entity_style_ht{"Decade"}};
- @extend_decades = @{$english_entity_style_ht{"ExtendDecades"}};
- @extended_decades = @{$english_entity_style_ht{"ExtendedDecade"}};
- $extended_decade = (@extended_decades) ? $extended_decades[0] : "none";
-
- $value = $pe->value;
- $extended_value = "";
- foreach $extend_decade (@extend_decades) {
- if ($extend_decade =~ /$value$/) {
- $extended_value = $extend_decade unless $extended_value eq $extend_decade;
- last;
- }
- }
- if ($sub = $pe->get("sub")) {
- $sub_clause = "$sub ";
- $sub_clause =~ s/(mid) /$1-/;
- } else {
- $sub_clause = "";
- }
-
- if (! $extended_value) {
- @values = ($value);
- } elsif ($extended_decade eq "ignore") {
- @values = ($value);
- } elsif ($extended_decade eq "only") {
- @values = ($extended_value);
- } elsif ($extended_decade eq "primary") {
- @values = ($extended_value, $value);
- } elsif ($extended_decade eq "secondary") {
- @values = ($value, $extended_value);
- } else {
- @values = ($value);
- }
- foreach $v (@values) {
- foreach $dm (@decade_markers) {
- $dm_ending = $dm;
- $dm_ending =~ s/^\d+//;
- push (@decade_surf_forms, "the $sub_clause$v$dm_ending");
- push (@decade_surf_forms, "in the $sub_clause$v$dm_ending");
- push (@decade_surf_forms, "$sub_clause$v$dm_ending");
- }
- }
- return @decade_surf_forms;
-}
-
-sub day_of_the_month_surface_forms {
- local($caller,$pe) = @_;
-
- @dom_surf_forms = ();
- return @dom_surf_forms
- unless ($pe->sem eq "day of the month")
- && ($day_number = $pe->get("day-number"));
- @dom_markers = @{$english_entity_style_ht{"DayOfTheMonth"}};
- foreach $dm (@dom_markers) {
- $ord = $caller->numeric_ordinal_form($day_number);
- if ($dm eq "on the 5th") {
- push (@dom_surf_forms, "on the $ord");
- } elsif ($dm eq "the 5th") {
- push (@dom_surf_forms, "the $ord");
- } elsif ($dm eq "5th") {
- push (@dom_surf_forms, $ord);
- }
- }
- return @dom_surf_forms;
-}
-
-sub score_surface_forms {
- local($caller,$pe) = @_;
-
- @score_surf_forms = ();
- if (($score1 = $pe->get("score1"))
- && ($score2 = $pe->get("score2"))) {
- @score_markers = @{$english_entity_style_ht{"ScoreMarker"}};
- @score_markers = (":") unless @score_markers;
- foreach $sm (@score_markers) {
- push (@score_surf_forms, "$score1$sm$score2");
- }
- }
- push(@score_surf_forms, $pe->gloss) unless @score_surf_forms;
- return @score_surf_forms;
-}
-
-sub day_of_the_week_surface_forms {
- local($caller,$pe) = @_;
-
- @dom_surf_forms = ();
- @dom_markers = @{$english_entity_style_ht{"DayOfTheWeek"}};
- $gloss = $pe->get("gloss");
- $weekday = $pe->get("weekday");
- $weekday = $gloss if ($weekday eq "") && ($gloss =~ /^\S+$/);
- $relday = $pe->get("relday");
- $period = $pe->get("period");
- foreach $dm (@dom_markers) {
- if (($dm =~ /NOPERIOD/) && $period) {
- $surf = ""; # bad combination
- } elsif (($dm eq "Sunday") || ! $relday) {
- $surf = $weekday;
- $surf .= " $period" if $period;
- } elsif ($dm =~ /morning/) {
- if ($period) {
- $surf = $dm;
- $surf =~ s/tomorrow/$relday/;
- $surf =~ s/morning/$period/;
- $surf =~ s/Sunday/$weekday/;
- } else {
- $surf = ""; # bad combination
- }
- } else {
- $surf = $dm;
- if ($period) {
- if ($relday eq "today") {
- $core_surf = "this $period";
- } else {
- $core_surf = "$relday $period";
- }
- } else {
- $core_surf = $relday;
- }
- $surf =~ s/tomorrow/$core_surf/;
- $surf =~ s/Sunday/$weekday/;
- }
- $surf =~ s/yesterday night/last night/;
- $surf =~ s/this noon, ($weekday)(,\s*)?/today, $1, at noon/;
- $surf =~ s/this noon/today at noon/;
- $surf =~ s/this night/tonight/;
- $surf =~ s/\s*NOPERIOD\s*$//;
- push (@dom_surf_forms, $surf) unless $util->member($surf, @dom_surf_forms) || ! $surf;
- $on_weekday = "on $surf";
- push (@dom_surf_forms, $on_weekday)
- if ($surf eq $weekday) && ! $util->member($on_weekday, @dom_surf_forms);
- }
- return @dom_surf_forms;
-}
-
-sub date_surface_forms {
- local($caller,$pe,$modp) = @_;
-
- @date_surf_forms = ();
- $sem = $pe->sem;
- $synt = $pe->synt;
- return @date_surf_forms unless $sem =~ /date(\+year)?/;
- $day = $pe->get("day");
- $weekday = $pe->get("weekday");
- $month_name = $pe->get("month-name");
- $month_number = $pe->get("month-number");
- $year = $pe->get("year");
- $era = $pe->get("era");
- $era_clause = "";
- $calendar_type = $pe->get("calendar");
- $calendar_type_clause = "";
- $calendar_type_clause = " AH" if $calendar_type eq "Islamic";
- $ad_year = $year;
- if ($era eq "Republic era") {
- $ad_year = $year + 1911;
- $era_clause = " (year $year of the $era)";
- }
- $rel = $pe->get("rel");
- if ($sep = $pe->get("sep")) {
- $date_surf_form = "$month_number$sep$day";
- $date_surf_form .= "$sep$year" if $year;
- $date_surf_form = "$weekday, $date_surf_form" if $weekday;
- $date_surf_form = "on $date_surf_form" if $synt eq "pp";
- return ($date_surf_form);
- }
- @date_months = @{$english_entity_style_ht{"DateMonth"}};
- @date_days = @{$english_entity_style_ht{"DateDay"}};
- @date_order = @{$english_entity_style_ht{"DateOrder"}};
- foreach $m (@date_months) {
- if ($m eq "September") {
- $surf_month = $month_name;
- } elsif ($m =~ /^Sep(\.)?$/) {
- if ($month_name eq "May") {
- $surf_month = $month_name;
- } else {
- $period_clause = ($m =~ /\.$/) ? "." : "";
- $surf_month = substr($month_name, 0, 3) . $period_clause;
- }
- } elsif ($m =~ /^Sept(\.)?$/) {
- if ($util->member($month_name, "February", "September")) {
- $period_clause = ($m =~ /\.$/) ? "." : "";
- $surf_month = substr($month_name, 0, 4) . $period_clause;
- } else {
- $surf_month = "";
- }
- } else {
- $surf_month = "";
- }
- foreach $d (@date_days) {
- if ($d =~ /^\d+$/) {
- $surf_day = $day;
- } elsif ($d =~ /^\d+[sthrd]+$/) {
- $surf_day = $caller->numeric_ordinal_form($day);
- } else {
- $surf_day = "";
- }
- if ($surf_month && $surf_day) {
- foreach $o (@date_order) {
- if ($calendar_type eq "Islamic") {
- $date_surf_form = "$surf_day $surf_month";
- } elsif ($o eq "September 6, 1998") {
- $date_surf_form = "$surf_month $surf_day";
- } elsif ($o eq "6 September, 1998") {
- $date_surf_form = "$surf_day $surf_month";
- }
- $date_surf_form = "$weekday, $date_surf_form" if $weekday;
- $consider_on_p = 1;
- if ($year) {
- $date_surf_form .= "," unless $calendar_type eq "Islamic";
- $date_surf_form .= " $ad_year$calendar_type_clause$era_clause";
- } elsif ($rel) {
- if ($rel eq "current") {
- $date_surf_form = "this $date_surf_form";
- } else {
- $date_surf_form = "$rel $date_surf_form";
- }
- $consider_on_p = 0;
- }
- push(@date_surf_forms, $date_surf_form)
- unless $util->member($date_surf_form, @date_surf_forms) || ($synt eq "pp");
- if ($consider_on_p) {
- $on_date_surf_form = "on $date_surf_form";
- push(@date_surf_forms, $on_date_surf_form)
- unless $modp || $util->member($on_date_surf_form, @date_surf_forms);
- }
-
- if (($synt eq "pp") && ($sem eq "date")) {
- push(@date_surf_forms, $date_surf_form)
- unless $util->member($date_surf_form, @date_surf_forms);
- }
- }
- }
- }
- }
- return @date_surf_forms;
- # rel, last, next, this
-}
-
-sub numeric_ordinal_form {
- local($caller,$cardinal) = @_;
-
- return $cardinal . "th" if $cardinal =~ /1\d$/;
- return $cardinal . "st" if $cardinal =~ /1$/;
- return $cardinal . "nd" if $cardinal =~ /2$/;
- return $cardinal . "rd" if $cardinal =~ /3$/;
- return $cardinal . "h" if $cardinal =~ /t$/;
- $cardinal =~ s/y$/ie/;
- return $cardinal . "th";
-}
-
-sub guard_urls_x045 {
- local($caller, $s) = @_;
-
- # URLs (http/https/ftp/mailto)
- my $result = "";
- while (($pre, $url, $post) = ($s =~ /^(.*?)((?:(?:https?|ftp):\/\/|mailto:)[#%-;=?-Z_-z~]*[-a-zA-Z0-9\/#])(.*)$/)) {
- $result .= "$pre\x04$url\x05";
- $s = $post;
- }
- $result .= $s;
-
- # emails
- $s = $result;
- $result = "";
- while (($pre, $email, $post) = ($s =~ /^(.*?[ ,;:()\/\[\]{}<>|"'])([a-z][-_.a-z0-9]*[a-z0-9]\@[a-z][-_.a-z0-9]*[a-z0-9]\.(?:[a-z]{2,}))([ .,;:?!()\/\[\]{}<>|"'].*)$/i)) {
- $result .= "$pre\x04$email\x05";
- $s = $post;
- }
- $result .= $s;
-
- # (Twitter style) #hashtag or @handle
- $s = $result;
- $result = "";
- while (($pre, $hashtag, $post) = ($s =~ /^(.*?[ .,;()\[\]{}'])([#@](?:[a-z]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|HHERE)(?:[_a-z0-9]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF])*(?:[a-z0-9]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF]))(.*)$/i)) {
- $result .= "$pre\x04$hashtag\x05";
- $s = $post;
- }
- $result .= $s;
-
- # Keep together number+letter in: Fig. 4g; Chromosome 12p
- $result =~ s/((?:\b(?:fig))(?:_DONTBREAK_)?\.?|\b(?:figures?|tables?|chromosomes?)|]*\b(?:fig)\b[^<>]*>)\s*(\d+[a-z])\b/$1 \x04$2\x05/gi;
-
- # special combinations, e.g. =/= emoticons such as :)
- $s = $result;
- $result = "";
- while (($pre, $special, $post) = ($s =~ /^(.*?)(:-?\)|:-?\(|=\/=?|\?+\/\?+|=\[)(.*)$/)) {
- $result .= "$pre\x04$special\x05";
- $s = $post;
- }
- $result .= $s;
-
- return $result;
-}
-
-sub guard_xml_tags_x0123 {
- local($caller, $s) = @_;
-
- my $result = "";
- # xml tag might or might not already have "@" on left and/or right end: @ @
- while (($pre, $tag, $post) = ($s =~ /^(.*?)(\@?<\/?(?:[a-z][-_:a-z0-9]*)(?:\s+[a-z][-_:a-z0-9]*="[^"]*")*\s*\/?>\@?|&(?:amp|gt|lt|quot);|\[(?:QUOTE|URL)=[^ \t\n\[\]]+\]|\[\/?(?:QUOTE|IMG|INDENT|URL)\]|<\$[-_a-z0-9]+\$>|<\!--.*?-->)(.*)$/si)) {
- $result .= $pre;
- if (($pre =~ /\S$/) && ($tag =~ /^\S/)) {
- $result .= " \x01";
- $result .= "\@" if ($tag =~ /^<[a-z]/i) && (! ($pre =~ /[,;(>]$/)); #)
- } else {
- $result .= "\x01";
- }
- $guarded_tag = $tag;
- $guarded_tag =~ s/ /\x02/g;
- # print STDERR "tag: $tag\nguarded_tag: $guarded_tag\n" if ($result =~ /Harvey/) || ($s =~ /Harvey/);
- $result .= $guarded_tag;
- if (($tag =~ /\S$/) && ($post =~ /^\S/)) { # (
- $result .= "\@" if (($tag =~ /^<\//) || ($tag =~ /\/>$/)) && (! ($result =~ /\@$/)) && (! ($post =~ /^[,;)<]/));
- $result .= "\x03 ";
- } else {
- $result .= "\x03";
- }
- $s = $post;
- }
- $result .= $s;
- return $result;
-}
-
-sub restore_urls_x045_guarded_string {
- local($caller, $s) = @_;
-
- my $orig = $s;
- while (($pre, $url, $post) = ($s =~ /^(.*?)\x04([^\x04\x05]*?)\x05(.*)$/)) {
- $url =~ s/ \@([-:\/])/$1/g;
- $url =~ s/([-:\/])\@ /$1/g;
- $url =~ s/ //g;
- $url =~ s/\x02/ /g;
- $s = "$pre$url$post";
- }
- if ($s =~ /[\x04\x05]/) {
- print STDERR "Removing unexpectedly unremoved x04/x05 marks from $s\n";
- $s =~ s/[\x04\x05]//g;
- }
- return $s;
-}
-
-sub restore_xml_tags_x0123_guarded_string {
- local($caller, $s) = @_;
-
- my $result = "";
- while (($pre, $tag, $post) = ($s =~ /^(.*?)\x01(.*?)\x03(.*)$/)) {
- $result .= $pre;
- $tag =~ s/ \@([-:\/])/$1/g;
- $tag =~ s/([-:\/])\@ /$1/g;
- $tag =~ s/ //g;
- $tag =~ s/\x02/ /g;
- $result .= $tag;
- $s = $post;
- }
- $result .= $s;
- return $result;
-}
-
-sub load_english_abbreviations {
- local($caller, $filename, *ht, $verbose) = @_;
- # e.g. /nfs/nlg/users/textmap/brahms-ml/arabic/data/EnglishAbbreviations.txt
-
- $verbose = 1 unless defined($verbose);
- my $n = 0;
- if (open(IN, $filename)) {
- while () {
- next if /^\# /;
- s/\s*$//;
- my @expansions;
- if (@expansions = split(/\s*::\s*/, $_)) {
- my $abbrev = shift @expansions;
- $ht{IS_ABBREVIATION}->{$abbrev} = 1;
- $ht{IS_LC_ABBREVIATION}->{(lc $abbrev)} = 1;
- foreach $expansion (@expansions) {
- $ht{ABBREV_EXPANSION}->{$abbrev}->{$expansion} = 1;
- $ht{ABBREV_EXPANSION_OF}->{$expansion}->{$abbrev} = 1;
- }
- $n++;
- }
- }
- close(IN);
- print STDERR "Loaded $n entries from $filename\n" if $verbose;
- } else {
- print STDERR "Can't open $filename\n";
- }
-}
-
-sub load_split_patterns {
- local($caller, $filename, *ht) = @_;
- # e.g. /nfs/nlg/users/textmap/brahms-ml/arabic/data/BioSplitPatterns.txt
-
- my $n = 0;
- if (open(IN, $filename)) {
- while () {
- next if /^\# /;
- s/\s*$//;
- if (($s) = ($_ =~ /^SPLIT-DASH-X\s+(\S.*\S|\S)\s*$/)) {
- $ht{SPLIT_DASH_X}->{$s} = 1;
- $ht{LC_SPLIT_DASH_X}->{(lc $s)} = 1;
- $n++;
- } elsif (($s) = ($_ =~ /^SPLIT-X-DASH\s+(\S.*\S|\S)\s*$/)) {
- $ht{SPLIT_X_DASH}->{$s} = 1;
- $ht{LC_SPLIT_X_DASH}->{(lc $s)} = 1;
- $n++;
- } elsif (($s) = ($_ =~ /^DO-NOT-SPLIT-DASH-X\s+(\S.*\S|\S)\s*$/)) {
- $ht{DO_NOT_SPLIT_DASH_X}->{$s} = 1;
- $ht{LC_DO_NOT_SPLIT_DASH_X}->{(lc $s)} = 1;
- $n++;
- } elsif (($s) = ($_ =~ /^DO-NOT-SPLIT-X-DASH\s+(\S.*\S|\S)\s*$/)) {
- $ht{DO_NOT_SPLIT_X_DASH}->{$s} = 1;
- $ht{LC_DO_NOT_SPLIT_X_DASH}->{(lc $s)} = 1;
- $n++;
- } elsif (($s) = ($_ =~ /^DO-NOT-SPLIT\s+(\S.*\S|\S)\s*$/)) {
- $ht{DO_NOT_SPLIT}->{$s} = 1;
- $ht{LC_DO_NOT_SPLIT}->{(lc $s)} = 1;
- $n++;
- } elsif (($s) = ($_ =~ /^SPLIT\s+(\S.*\S|\S)\s*$/)) {
- $ht{SPLIT}->{$s} = 1;
- $ht{LC_SPLIT}->{(lc $s)} = 1;
- $n++;
- }
- }
- close(IN);
- print STDERR "Loaded $n entries from $filename\n";
- } else {
- print STDERR "Can't open $filename\n";
- }
-}
-
-sub guard_abbreviations_with_dontbreak {
- local($caller, $s, *ht) = @_;
-
- my $orig = $s;
- my $result = "";
- while (($pre,$potential_abbrev,$period,$post) = ($s =~ /^(.*?)((?:[a-z]+\.-?)*(?:[a-z]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF])+)(\.)(.*)$/i)) {
- if (($pre =~ /([-&\/0-9]|[-\/]\@ )$/)
- && (! ($pre =~ /\b[DR](?: \@)?-(?:\@ )?$/))) { # D-Ariz.
- $result .= "$pre$potential_abbrev$period";
- } else {
- $result .= $pre . $potential_abbrev;
- $potential_abbrev_with_period = $potential_abbrev . $period;
- if ($ht{IS_ABBREVIATION}->{$potential_abbrev_with_period}) {
- $result .= "_DONTBREAK_";
- } elsif ($ht{IS_LC_ABBREVIATION}->{(lc $potential_abbrev_with_period)}) {
- $result .= "_DONTBREAK_";
- }
- $result .= $period;
- }
- $s = $post;
- }
- $result .= $s;
- $result =~ s/\b([Nn])o\.(\s*\d)/$1o_DONTBREAK_.$2/g;
- return $result;
-}
-
-$alpha = "(?:[a-z]|\xCE[\xB1-\xBF]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF])";
-$alphanum = "(?:[a-z0-9]|\xCE[\xB1-\xBF]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF])(?:[-_a-z0-9]|\xCE[\xB1-\xBF]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF])*(?:[a-z0-9]|\xCE[\xB1-\xBF]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF])|(?:[a-z0-9]|\xCE[\xB1-\xBF]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF])";
-
-sub normalize_punctuation {
- local($caller, $s) = @_;
-
- $s =~ s/\xE2\x80[\x93\x94]/-/g; # ndash, mdash to hyphen
- $s =~ s/ \@([-\/])/$1/g;
- $s =~ s/([-\/])\@ /$1/g;
- return $s;
-}
-
-sub update_replace_characters_based_on_context {
- local($caller, $s) = @_;
-
- # This is just a start. Collect stats over text with non-ASCII, e.g. K?ln.
- # HHERE
- my $rest = $s;
- $s = "";
- while (($pre, $left, $repl_char, $right, $post) = ($rest =~ /^(.*?\s+)(\S*)(\xEF\xBF\xBD)(\S*)(\s.*)$/)) {
- $s .= "$pre$left";
- if (($left =~ /[a-z]$/i) && ($right =~ /^s(?:[-.,:;?!].*)?$/i)) { # China's etc.
- $repl_char = "\xE2\x80\x99"; # right single quotation mark
- } elsif (($left =~ /n$/i) && ($right =~ /^t$/i)) { # don't etc.
- $repl_char = "\xE2\x80\x99"; # right single quotation mark
- } elsif (($left =~ /[a-z]\s*[.]$/i) && ($right eq "")) { # end of sentence
- $repl_char = "\xE2\x80\x9D"; # right double quotation mark
- } elsif (($left eq "") && ($right =~ /^[A-Z]/i)) { # start of word
- $repl_char = "\xE2\x80\x9C"; # left double quotation mark
- }
- $s .= "$repl_char$right";
- $rest = $post;
- }
- $s .= $rest;
-
- return $s;
-}
-
-sub tokenize {
- local($caller, $s, *ht, $control) = @_;
-
- my $local_verbose = 0;
- print "Point A: $s\n" if $local_verbose;
- $control = "" unless defined($control);
- my $bio_p = ($control =~ /\bbio\b/);
-
- $s = $utf8->repair_misconverted_windows_to_utf8_strings($s);
- print "Point A2: $s\n" if $local_verbose;
- $s = $utf8->delete_weird_stuff($s);
- print "Point B: $s\n" if $local_verbose;
-
- # reposition xml-tag with odd space
- $s =~ s/( +)((?:<\/[a-z][-_a-z0-9]*>)+)(\S)/$2$1$3/ig;
- $s =~ s/(\S)((?:<[a-z][^<>]*>)+)( +)/$1$3$2/ig;
- print "Point C: $s\n" if $local_verbose;
-
- $a_value = $ht{IS_ABBREVIATION}->{"Fig."} || "n/a";
- $s = $caller->guard_abbreviations_with_dontbreak($s, *ht);
- my $standard_abbrev_s = "Adm|al|Apr|Aug|Calif|Co|Dec|Dr|etc|e.g|Feb|Febr|Gen|Gov|i.e|Jan|Ltd|Lt|Mr|Mrs|Nov|Oct|Pfc|Pres|Prof|Sen|Sept|U.S.A|U.S|vs";
- my $pre;
- my $core;
- my $post;
- $s = " $core " if ($pre,$core,$post) = ($s =~ /^(\s*)(.*?)(\s*)$/i);
- $s =~ s/\xE2\x80\x89/ /g; # thin space
- $standard_abbrev_s =~ s/\./\\\./g;
- $s =~ s/[\x01-\x05]//g;
- $s = $caller->guard_urls_x045($s);
- $s = $caller->guard_xml_tags_x0123($s);
- $s = $caller->update_replace_characters_based_on_context($s);
- $s =~ s/((?:[a-zA-Z_]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF])\.)([,;]) /$1 $2 /g;
- $s =~ s/((?:[a-zA-Z_]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF])\.)(\x04)/$1 $2/g;
- if ($bio_p) {
- $s =~ s/(\S)((?:wt\/|onc\/)?(?:[-+]|\?+|\xE2\x80[\x93\x94])\/(?:[-+]|\?+|\xE2\x80[\x93\x94]))/$1 $2/g;
- $s =~ s/((?:[-+]|\xE2\x80[\x93\x94])\/(?:[-+]|\xE2\x80[\x93\x94]))(\S)/$1 $2/g;
- }
- print "Point D: $s\n" if $local_verbose;
- $s =~ s/(~+)/ $1 /g;
- $s =~ s/((?:\xE2\x80\xB9|\xE2\x80\xBA|\xC2\xAB|\xC2\xBB|\xE2\x80\x9E)+)/ $1 /g; # triangular bracket(s) "<" or ">" etc.
- $s =~ s/(``)([A-Za-z])/$1 $2/g; # added Nov. 30, 2017
- $s =~ s/((?:<|<)?=+(?:>|>)?)/ $1 /g; # include arrows
- $s =~ s/(\\")/ $1 /g;
- $s =~ s/([^\\])("+)/$1 $2 /g;
- $s =~ s/([^\\])((?:\xE2\x80\x9C)+)/$1 $2 /g; # open "
- $s =~ s/([^\\])((?:\xE2\x80\x9D)+)/$1 $2 /g; # close "
- $s =~ s/((?:<|<)?-{2,}(?:>|>)?)/ $1 /g; # include arrows
- $s =~ s/((?:\xE2\x80\xA6)+)/ $1 /g; # ellipsis
- print "Point E: $s\n" if $local_verbose;
- foreach $_ ((1..2)) {
- # colon
- $s =~ s/([.,;])(:+)/$1 \@$2/g;
- $s =~ s/(:+)([.,;])/$1 \@\@ $2/g;
- # # question mark/exclamation mark blocks
- # $s =~ s/([^!?])([!?]+)([^!?])/$1 $2 $3/g;
- }
- print "Point F: $s\n" if $local_verbose;
- $s =~ s/(\?)/ $1 /g;
- $s =~ s/(\!)/ $1 /g;
- $s =~ s/ +/ /g;
- $s =~ s/(\$+|\xC2\xA3|\xE2\x82[\xA0-\xBE])/ $1 /g; # currency signs (Euro sign; British pound sign; Yen sign etc.)
- $s =~ s/(\xC2\xA9|\xE2\x84\xA2)/ $1 /g; # copyright/trademark signs
- $s =~ s/(\xC2\xB2)([-.,;:!?()])/$1 $2/g; # superscript 2
- $s =~ s/([^ ])( )/$1 $2/g;
- $s =~ s/( )([^ ])/$1 $2/g;
- $s =~ s/(\d+|[0-9A-F]+);/$1_DONTBREAK_;/gi;
- $s =~ s/([\@\.]\S*\d)([a-z][A-z])/$1_DONTBREAK_$2/g; # email address, URL
- $s =~ s/ ($standard_abbrev_s)\./ $1_DONTBREAK_\./gi;
- $s =~ s/ ($standard_abbrev_s) \. (\S)/ $1_DONTBREAK_\. $2/gi;
- $s =~ s/\b((?:[A-Za-z]\.){1,3}[A-Za-z])\.\s+/$1_DONTBREAK_\. /g; # e.g. a.m. O.B.E.
- $s =~ s/([ ])([A-Z])\. ([A-Z])/$1$2_DONTBREAK_\. $3/; # e.g. George W. Bush
- $s =~ s/(\S\.*?[ ])([A-Z])_DONTBREAK_\. (After|All|And|But|Each|Every|He|How|In|It|My|She|So|That|The|Then|There|These|They|This|Those|We|What|When|Which|Who|Why|You)([', ])/$1$2\. $3$4/; # Exceptions to previous line, e.g. "plan B. This"
- $s =~ s/\b(degrees C|[Ff]ig\.? \d+ ?[A-Z]|(?:plan|Scud) [A-Z])_DONTBREAK_\./$1\./g; # Exception, e.g. "plan B";
- $s =~ s/([^-_a-z0-9])(art|fig|no|p)((?:_DONTBREAK_)?\.)(\d)/$1$2$3 $4/gi; # Fig.2 No.14
- $s =~ s/([^-_A-Za-z0-9])(\d+(?:\.\d+)?)(?:_DONTBREAK_)?(thousand|million|billion|trillion|min|mol|sec|kg|km|g|m|p)\b/$1$2 $3/g; # 3.4kg 1.7million 49.9p
- $s =~ s/([^-_a-z0-9])((?:[1-9]|1[0-2])(?:[.:][0-5]\d)?)(?:_DONTBREAK_)?([ap]m\b|[ap]\.m(?:_DONTBREAK_)?\.)/$1$2 $3/gi; # 3.15pm 12:00p.m. 8am
- print "Point H: $s\n" if $local_verbose;
-
- $s =~ s/(\d)([a-z][A-z])/$1 $2/g;
- $s =~ s/(\w|`|'|%|[a-zA-Z]\.|[a-zA-Z]_DONTBREAK_\.)(-|\xE2\x80\x93)(\w|`|')/$1 \@$2\@ $3/g;
- $s =~ s/(\w|`|'|%|[a-zA-Z]\.|[a-zA-Z]_DONTBREAK_\.)(-|\xE2\x80\x93)(\w|`|')/$1 \@$2\@ $3/g;
- $s =~ s/(\w)- /$1 \@- /g;
- $s =~ s/ -(\w)/ -\@ $1/g;
- $s =~ s/(\d):(\d)/$1 \@:\@ $2/g;
- $s =~ s/(\d)\/(\d)/$1 \@\/\@ $2/g;
- $s =~ s/($alphanum)\/([,;:!?])/$1 \@\/\@ $2/g;
- $s =~ s/($alphanum)([-+]+)\/($alphanum)/$1$2 \@\/\@ $3/gi;
- print "Point I: $s\n" if $local_verbose;
- foreach $_ ((1..5)) {
- $s =~ s/([ \/()])($alphanum) ?\/ ?($alphanum)([-+ \/().,;])/$1$2 \@\/\@ $3$4/gi;
- }
- $s =~ s/([a-zA-Z%\/\[\]]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF]|\x05|[a-zA-Z]_DONTBREAK_\.)([,;:!?])\s*(\S)/$1 $2 $3/g;
- # asterisk
- $s =~ s/( [(\[]?)(\*)([a-z0-9])/$1$2\@ $3/gi;
- $s =~ s/([a-z0-9])(\*)([.,;:)\]]* )/$1 \@$2$3/gi;
- print "Point J: $s\n" if $local_verbose;
-
- # Arabic script
- if ($s =~ /[\xD8-\xDB]/) {
- for (my $i=0; $i <= 1; $i++) {
- $s =~ s/([\xD8-\xDB][\x80-\xBF])([,;:!?.\(\)\[\]\/]|\xD8\x8C|\xD8\x9B|\xD8\x9F|\xD9\xAA|\xC2\xAB|\xC2\xBB|\xE2[\x80-\x9F][\x80-\xBF])/$1 $2/gi; # punctuation includes Arabic ,;?%
- $s =~ s/([,;:!?.\(\)\[\]\/]|\xD8\x8C|\xD8\x9B|\xD8\x9F|\xD9\xAA|\xC2\xAB|\xC2\xBB|\xE2[\x80-\x9F][\x80-\xBF])([\xD8-\xDB][\x80-\xBF])/$1 $2/gi;
- }
- }
- $s =~ s/(\d|[a-zA-Z]|[\xD8-\xDB][\x80-\xBF])([-])([\xD8-\xDB][\x80-\xBF])/$1 \@$2\@ $3/g;
- $s =~ s/(\d|[a-zA-Z])([\xD8-\xDB][\x80-\xBF])/$1 \@\@ $2/g;
- print "Point K: $s\n" if $local_verbose;
-
- # misc. non-ASCII punctuation
- $s =~ s/(\xC2[\xA1\xBF]|\xD5\x9D|\xD6\x89|\xD8[\x8C\x9B]|\xD8\x9F|\xD9[\xAA\xAC]|\xDB\x94|\xDC[\x80\x82])/ $1 /g;
- $s =~ s/(\xE0\xA5[\xA4\xA5]|\xE0\xBC[\x84-\x86\x8D-\x8F\x91\xBC\xBD])/ $1 /g;
- $s =~ s/(\xE1\x81[\x8A\x8B]|\xE1\x8D[\xA2-\xA6])/ $1 /g;
- $s =~ s/(\xE1\x81[\x8A\x8B]|\xE1\x8D[\xA2-\xA6]|\xE1\x9F[\x94\x96])/ $1 /g;
- $s =~ s/([^0-9])(5\xE2\x80\xB2)(-)([ACGTU])/$1 $2 \@$3\@ $4/g; # 5-prime-DNA-seq.
- $s =~ s/([^0-9])([35]\xE2\x80\xB2)/$1 $2 /g; # prime (keep 3-prime/5-prime together for bio domain)
- $s =~ s/([^0-9])(\xE2\x80\xB2)/$1 $2 /g; # prime
- $s =~ s/(\xE2\x81\x99)/ $1 /g; # five dot punctuation
- $s =~ s/(\xE3\x80[\x81\x82\x8A-\x91]|\xE3\x83\xBB|xEF\xB8\xB0|\xEF\xBC\x8C)/ $1 /g;
- $s =~ s/(\xEF\xBC[\x81-\x8F\x9A\x9F])/ $1 /g; # CJK fullwidth punctuation (e.g. fullwidth exclamation mark)
- print "Point L: $s\n" if $local_verbose;
- # spaces
- $s =~ s/((?:\xE3\x80\x80)+)/ $1 /g; # idiographic space
- $s =~ s/((?:\xE1\x8D\xA1)+)/ $1 /g; # Ethiopic space
-
- # isolate \xF0 and up from much more normal characters
- $s =~ s/([\xF0-\xFE][\x80-\xBF]*)([\x00-\x7F\xC0-\xDF][\x80-\xBF]*)/$1 $2/g;
- $s =~ s/([\x00-\x7F\xC0-\xDF][\x80-\xBF]*)([\xF0-\xFE][\x80-\xBF]*)/$1 $2/g;
- print "Point M: $s\n" if $local_verbose;
-
- $s =~ s/( \d+)([,;:!?] )/$1 $2/g;
- $s =~ s/ ([,;()\[\]])([a-zA-Z0-9.,;])/ $1 $2/g;
- $s =~ s/(\)+)([-\/])([a-zA-Z0-9])/$1 $2 $3/g;
- $s =~ s/([0-9\*\[\]()]|\xE2\x80\xB2)([.,;:] )/$1 $2/g;
- $s =~ s/([a-zA-Z%]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF]|\x05)([,;:.!?])([")]|''|\xE2\x80[\x99\x9D]|)(\s)/$1 $2 $3$4/g;
- $s =~ s/([a-zA-Z%]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF]|\x05)([,;:.!?])([")]|''|\xE2\x80[\x99\x9D]|)\s*$/$1 $2 $3/g;
- $s =~ s/([.,;:]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF]|\x05)('|\xE2\x80[\x99\x9D])/$1 $2/g;
- $s =~ s/('|\xE2\x80[\x99\x9D])([.,;:]|\x04)/$1 $2/g;
- $s =~ s/([(){}\[\]]|\xC2\xB1)/ $1 /g;
- $s =~ s/([a-zA-Z0-9]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF]|\x05)\.\s*$/$1 ./g;
- $s =~ s/([a-zA-Z]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF]|\x05)\.\s+/$1 . /g;
- $s =~ s/([a-zA-Z]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF]|\x05)\.(\x04)/$1 . $2/g;
- $s =~ s/([0-9]),\s+(\S)/$1 , $2/g;
- $s =~ s/([a-zA-Z])(\$)/$1 $2/g;
- $s =~ s/(\$|[~<=>]|\xC2\xB1|\xE2\x89[\xA4\xA5]|\xE2\xA9[\xBD\xBE])(\d)/$1 $2/g;
- $s =~ s/(RMB)(\d)/$1 $2/g;
- print "Point N: $s\n" if $local_verbose;
- foreach $_ ((1..2)) {
- $s =~ s/([ '"]|\xE2\x80\x9C)(are|could|did|do|does|had|has|have|is|should|was|were|would)(n't|n\xE2\x80\x99t)([ '"]|\xE2\x80\x9D)/$1 $2 $3 $4/gi;
- $s =~ s/ (can)(not) / $1 $2 /gi;
- $s =~ s/ (ca)\s*(n)('t|\xE2\x80\x99t) / $1$2 $2$3 /gi;
- $s =~ s/ ([Ww])o\s*n('|\xE2\x80\x99)t / $1ill n$2t /g;
- $s =~ s/ WO\s*N('|\xE2\x80\x99)T / WILL N$1T /g;
- $s =~ s/ ([Ss])ha\s*n('|\xE2\x80\x99)t / $1hall n$2t /g;
- $s =~ s/ SHAN('|\xE2\x80\x99)T / SHALL N$1T /g;
- # $s =~ s/ ain('|\xE2\x80\x99)t / is n$1t /g;
- # $s =~ s/ Ain('|\xE2\x80\x99)t / Is n$1t /g;
- # $s =~ s/ AIN('|\xE2\x80\x99)T / IS N$1T /g;
- }
- print "Point O: $s\n" if $local_verbose;
- $s =~ s/(\d)%/$1 %/g;
- $s =~ s/ '(d|ll|m|re|s|ve|em) / '_DONTBREAK_$1 /g; # 'd = would; 'll = will; 'em = them
- $s =~ s/ \xE2\x80\x99t(d|ll|m|re|s|ve) / \xE2\x80\x99t_DONTBREAK_$1 /g;
- $s =~ s/([^0-9a-z'.])('|\xE2\x80\x98)([0-9a-z])/$1$2 $3/gi;
- $s =~ s/([0-9a-z])(\.(?:'|\xE2\x80\x99))([^0-9a-z']|\xE2\x80\x99)/$1 $2$3/gi;
- $s =~ s/([0-9a-z]_?\.?)((?:'|\xE2\x80\x99)(?:d|ll|m|re|s|ve|))([^0-9a-z'])/$1 $2$3/gi;
- $s =~ s/([("]|\xE2\x80\x9C|'')(\w)/$1 $2/g;
- print "Point P: $s\n" if $local_verbose;
- $s =~ s/(\w|[.,;:?!])([")]|''|\xE2\x80\x9D)/$1 $2/g;
- $s =~ s/ ([,;()\[\]])([a-zA-Z0-9.,;])/ $1 $2/g;
- $s =~ s/([a-z0-9]) ?(\()([-+_ a-z0-9\/]+)(\))/$1 $2 $3 $4 /ig;
- $s =~ s/([a-z0-9]) ?(\[)([-+_ a-z0-9\/]+)(\])/$1 $2 $3 $4 /ig;
- $s =~ s/([a-z0-9]) ?(\{)([-+_ a-z0-9\/]+)(\})/$1 $2 $3 $4 /ig;
- $s =~ s/([%])-(\d+(?:\.\d+)? ?%)/$1 \@-\@ $2/g;
- $s =~ s/( )(art|No)_DONTBREAK_(\.{2,})/$1 $2$3/gi;
- $s =~ s/(_DONTBREAK_\.)(\.{1,})/$1 $2/g;
- print "Point Q: $s\n" if $local_verbose;
- foreach $_ ((1 .. 2)) {
- $s =~ s/(\s(?:[-a-z0-9()']|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF])*)(\.{2,})((?:[-a-z0-9()?!:\/']|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF])*\s|(?:[-a-z0-9()'\/]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF]|[\xC4-\xC9\xCE-\xD3][\x80-\xBF]|\xE0[\xA4-\xA5][\x80-\xBF]|\xE0[\xB6-\xB7][\x80-\xBF])+\.\s)/$1 $2 $3/gi;
- }
- $s =~ s/0s\b/0 s/g;
- $s =~ s/([0-9])(\x04)/$1 $2/g;
- $s =~ s/ +/ /g;
- print "Point R: $s\n" if $local_verbose;
-
- if ($bio_p) {
- foreach $_ ((1 .. 2)) {
- $s =~ s/([a-z]) \@(-|\xE2\x80[\x93\x94])\@ (\d+(?:$alpha)?\d*\+?)([- \/])/$1$2$3$4/ig;
- $s =~ s/([a-z]) \@(-|\xE2\x80[\x93\x94])\@ ((?:alpha|beta|kappa)\d+)([- \/])/$1$2$3$4/ig;
- $s =~ s/([a-z]) \@(-|\xE2\x80[\x93\x94])\@ ((?:a|b|h|k)\d)([- \/])/$1$2$3$4/ig;
- $s =~ s/([a-z0-9]) \@(-|\xE2\x80[\x93\x94])\@ ([a-z])([- \/])/$1$2$3$4/ig;
- $s =~ s/([- \/])(\d*[a-z]) \@(-|\xE2\x80[\x93\x94])\@ ([a-z0-9])/$1$2$3$4/ig;
- }
- # mutation indicators such -/- etc.
- $s =~ s/(\?\/) +(\?)/$1$2/g;
- $s =~ s/([^ ?])((?:wt\/|onc\/)?(?:[-+]|\?+|\xE2\x80[\x93\x94])\/(?:[-+]|\?+|\xE2\x80[\x93\x94]))/$1 $2/g;
- $s =~ s/((?:[-+]|\xE2\x80[\x93\x94])\/(?:[-+]|\xE2\x80[\x93\x94]))(\S)/$1 $2/g;
-
- # Erk1/2
- $rest = $s;
- $s = "";
- while (($pre, $stem, $slashed_number_s, $post) = ($rest =~ /^(.*?[^-_a-z0-9])([a-z][-_a-z]*)(\d+(?:(?: \@)?\/(?:\@ )?(?:\d+))+)([^-+a-z0-9].*|)$/i)) {
- if ((($pre =~ /\x04[^\x05]*$/) && ($post =~ /^[^\x04]*\x05/))
- || ($stem =~ /^(mid|pre|post|sub|to)$/i)) {
- $s .= "$pre$stem$slashed_number_s";
- } else {
- $s .= $pre;
- my @slashed_numbers = split(/(?: \@)?\/(?:\@ )?/, $slashed_number_s);
- foreach $i ((0 .. $#slashed_numbers)) {
- my $number = $slashed_numbers[$i];
- $s .= "$stem$number";
- $s .= " @\/@ " unless $i == $#slashed_numbers;
- }
- }
- $rest = $post;
- }
- $s .= $rest;
-
- # Erk-1/-2
- while (($pre, $stem, $dash1, $number1, $dash2, $number2, $post) = ($s =~ /^(.*[^-_a-z0-9])([a-z][-_a-z]*)(?: \@)?(-|\xE2\x80[\x93\x94])(?:\@ )?(\d+)(?: \@)?\/(?:\@ )?(?:\@ )?(-|\xE2\x80[\x93\x94])(?:\@ )?(\d+)([^-+a-z0-9].*|)$/i)) {
- $s = "$pre$stem$dash1$number1 \@\/\@ $stem$dash2$number2$post";
- }
- $rest = $s;
- $s = "";
- # IFN-a/b (Slac2-a/b/c)
- while (($pre, $stem, $dash, $slashed_letter_s, $post) = ($rest =~ /^(.*[^-_a-z0-9])([a-z][-_a-z0-9]*)(-|\xE2\x80[\x93\x94])([a-z](?:(?: \@)?\/(?:\@ )?(?:[a-z]))+)([^-+a-z0-9].*|)$/i)) {
- if (($pre =~ /\x04[^\x05]*$/) && ($post =~ /^[^\x04]*\x05/)) {
- $s .= "$pre$stem$dash1$number1$dash2$number2";
- } else {
- $s .= $pre;
- my @slashed_letters = split(/(?: \@)?\/(?:\@ )?/, $slashed_letter_s);
- foreach $i ((0 .. $#slashed_letters)) {
- my $letter = $slashed_letters[$i];
- $s .= "$stem$dash$letter";
- $s .= " @\/@ " unless $i == $#slashed_letters;
- }
- }
- $rest = $post;
- }
- $s .= $rest;
-
- # SPLIT X-induced
- my $rest = $s;
- my $new_s = "";
- while (($pre, $dash, $right, $post) = ($rest =~ /^(.*?)(-|\xE2\x80[\x93\x94])([a-z]+)( .*|)$/i)) {
- $new_s .= $pre;
- if (($right eq "I") && ($pre =~ / [a-zA-Z][a-z]*$/)) {
- # compatriots-I have a dream
- $new_s .= " \@" . $dash . "\@ ";
- } elsif ($ht{LC_SPLIT_DASH_X}->{($caller->normalize_punctuation(lc $right))}) {
- $new_s .= " \@" . $dash . "\@ ";
- } else {
- $new_s .= $dash;
- }
- $new_s .= $right;
- $rest = $post;
- }
- $new_s .= $rest;
- $s = $new_s;
-
- # SPLIT ubiquinated-X
- $rest = $s;
- $new_s = "";
- while (($pre, $left, $dash, $post) = ($rest =~ /^(.*? |)([a-z0-9]+|'s)(-|\xE2\x80[\x93\x94])([a-z0-9].*)$/i)) {
- $new_s .= "$pre$left";
- if ($ht{LC_SPLIT_X_DASH}->{($caller->normalize_punctuation(lc $left))}) {
- $new_s .= " \@" . $dash . "\@ ";
- } else {
- $new_s .= $dash;
- }
- $rest = $post;
- }
- $new_s .= $rest;
- $s = $new_s;
-
- # SPLIT low-frequency
- $rest = $s;
- $new_s = "";
- if (($pre, $left, $dash, $right, $post) = ($rest =~ /^(.*?[- ]|)([a-z]+)([-\/]|\xE2\x80[\x93\x94])([a-z]+)([- ].*|)$/i)) {
- }
- while (($pre, $left, $dash, $right, $post) = ($rest =~ /^(.*?[-\/ ]|)([a-z]+)((?: \@)?(?:[-\/]|\xE2\x80[\x93\x94])(?:\@ )?)([a-z]+)([-\/ ].*|)$/i)) {
- $x = $caller->normalize_punctuation(lc ($left . $dash . $right));
- if ($ht{LC_SPLIT}->{($caller->normalize_punctuation(lc ($left . $dash . $right)))}) {
- $pre =~ s/([-\/])$/ \@$1\@ /;
- $post =~ s/^([-\/])/ \@$1\@ /;
- $dash = $caller->normalize_punctuation($dash);
- $new_s .= "$pre$left";
- $new_s .= " \@" . $dash . "\@ ";
- $new_s .= $right;
- $rest = $post;
- } elsif ($pre =~ /[-\/]$/) {
- $new_s .= $pre;
- $rest = "$left$dash$right$post";
- } else {
- $new_s .= "$pre$left";
- $rest = "$dash$right$post";
- }
- }
- $new_s .= $rest;
- $s = $new_s;
-
- # DO-NOT-SPLIT X-ras
- $rest = $s;
- $new_s = "";
- while (($pre, $dash, $right, $post) = ($rest =~ /^(.*?) \@(-|\xE2\x80[\x93\x94])\@ ([a-z0-9]+)( .*|)$/i)) {
- $new_s .= $pre;
- if ($ht{LC_DO_NOT_SPLIT_DASH_X}->{($caller->normalize_punctuation(lc $right))}) {
- $new_s .= $dash;
- } else {
- $new_s .= " \@" . $dash . "\@ ";
- }
- $new_s .= $right;
- $rest = $post;
- }
- $new_s .= $rest;
- $s = $new_s;
-
- # DO-NOT-SPLIT Caco-X
- $rest = $s;
- $new_s = "";
- while (($pre, $left, $dash, $post) = ($rest =~ /^(.*? |)([a-z0-9]+) \@([-\/]|\xE2\x80[\x93\x94]])\@ ([a-z0-9].*)$/i)) {
- $new_s .= "$pre$left";
- if ($ht{LC_DO_NOT_SPLIT_X_DASH}->{($caller->normalize_punctuation(lc $left))}) {
- $new_s .= $dash;
- } else {
- $new_s .= " \@" . $dash . "\@ ";
- }
- $rest = $post;
- }
- $new_s .= $rest;
- $s = $new_s;
-
- # DO-NOT-SPLIT down-modulate (2 elements)
- $rest = $s;
- $new_s = "";
- while (($pre, $left, $dash, $right, $post) = ($rest =~ /^(.*? |)([a-z0-9]+) \@([-\/]|\xE2\x80[\x93\x94]])\@ ([a-z0-9]+)( .*|)$/i)) {
- $new_s .= "$pre$left";
- if ($ht{LC_DO_NOT_SPLIT}->{($caller->normalize_punctuation(lc ($left . $dash . $right)))}) {
- $new_s .= $dash;
- } else {
- $new_s .= " \@" . $dash . "\@ ";
- }
- $new_s .= $right;
- $rest = $post;
- }
- $new_s .= $rest;
- $s = $new_s;
-
- # DO-NOT-SPLIT 14-3-3 (3 elements)
- $rest = $s;
- $new_s = "";
- while (($pre, $left, $dash_group1, $dash1, $middle, $dash_group2, $dash2, $right, $post) = ($rest =~ /^(.*? |)([a-z0-9]+)((?: \@)?([-\/]|\xE2\x80[\x93\x94]])(?:\@ )?)([a-z0-9]+)((?: \@)?([-\/]|\xE2\x80[\x93\x94]])(?:\@ )?)([a-z0-9]+)( .*|)$/i)) {
- $new_s .= "$pre$left";
- if ($ht{LC_DO_NOT_SPLIT}->{($caller->normalize_punctuation(lc ($left . $dash1 . $middle . $dash2 . $right)))}) {
- $new_s .= $dash1;
- } else {
- $new_s .= $dash_group1;
- }
- $new_s .= $middle;
- if ($ht{LC_DO_NOT_SPLIT}->{($caller->normalize_punctuation(lc ($left . $dash1 . $middle . $dash2 . $right)))}) {
- $new_s .= $dash2;
- } else {
- $new_s .= $dash_group2;
- }
- $new_s .= $right;
- $rest = $post;
- }
- $new_s .= $rest;
- $s = $new_s;
-
- $s =~ s/ +/ /g;
- }
- print "Point S: $s\n" if $local_verbose;
-
- $s =~ s/_DONTBREAK_//g;
- $s =~ s/( )(ark|ill|mass|miss|wash|GA|LA|MO|OP|PA|VA|VT)(\.)( )/$1$2 $3$4/g;
- print "Point T: $s\n" if $local_verbose;
- $s = $caller->restore_urls_x045_guarded_string($s);
- $s = $caller->restore_xml_tags_x0123_guarded_string($s);
- print "Point U: $s\n" if $local_verbose;
- $s =~ s/(https?|ftp)\s*(:)\s*(\/\/)/$1$2$3/gi;
- $s =~ s/\b(mailto)\s*(:)\s*([a-z])/$1$2$3/gi;
- $s =~ s/(\d)\s*(:)\s*([0-5]\d[^0-9])/$1$2$3/gi;
- print "Point V: $s\n" if $local_verbose;
- $s =~ s/(5\xE2\x80\xB2-[ACGT]+)\s*(-|\xE2\x80[\x93\x94])\s*(3\xE2\x80\xB2)/$1$2$3/g; # repair broken DNA sequence
- $s =~ s/ (etc) \. / $1. /g; # repair most egrareous separations
- print "Point W: $s\n" if $local_verbose;
- $s = $caller->repair_separated_periods($s);
- print "Point X: $s\n" if $local_verbose;
- $s =~ s/^\s+//;
- $s =~ s/\s+$//;
- $s = "$pre$s$post" if defined($pre) && defined($post);
- $s =~ s/ +/ /g;
- print "Point Y: $s\n" if $local_verbose;
-
- return $s;
-}
-
-sub tokenize_plus_for_noisy_text {
- local($caller, $s, *ht, $control) = @_;
-
- $control = "" unless defined($control);
- my $pre;
- my $code;
- my $post;
- $s = " $core " if ($pre,$core,$post) = ($s =~ /^(\s*)(.*?)(\s*)$/i);
- foreach $i ((1 .. 2)) {
- $s =~ s/ ([A-Z][a-z]+'?[a-z]+)(-) / $1 $2 /gi; # Example: Beijing-
- $s =~ s/ (\d+(?:\.\d+)?)(-|:-|:|_|\.|'|;)([A-Z][a-z]+'?[a-z]+|[A-Z]{3,}) / $1 $2 $3 /gi; # Example: 3:-Maxkamado
- $s =~ s/ (\d+(?:\.\d+)?)(')([A-Za-z]{3,}) / $1 $2 $3 /gi; # Example: 42'daqiiqo
- $s =~ s/ (-|:-|:|_|\.)([A-Z][a-z]+'?[a-z]+|[A-Z]{3,}) / $1 $2 /gi; # Example: -Xassan
- $s =~ s/ ((?:[A-Z]\.[A-Z]|[A-Z]|Amb|Col|Dr|Eng|Gen|Inj|Lt|Maj|Md|Miss|Mr|Mrs|Ms|Pres|Prof|Sen)\.)([A-Z][a-z]+|[A-Z]{2,}) / $1 $2 /gi; # Example: Dr.Smith
- $s =~ s/ (\d+)(,)([a-z]{3,}) / $1 $2 $3 /gi; # Example: 24,October
- $s =~ s/ (%)(\d+(?:\.\d+)?) / $1 $2 /gi; # Example: %0.6
- $s =~ s/ ([A-Za-z][a-z]{3,}\d*)([.,\/]|:\()([A-Za-z][a-z]{3,}|[A-Z]{3,}) / $1 $2 $3 /gi; # Example: Windows8,falanqeeyaal
- $s =~ s/ ([A-Za-z]{3,}\d*?|[A-Za-z]+'[A-Za-z]+)([,\/]|:\()([A-Za-z]{3,}|[A-Za-z]+'[A-Za-z]+) / $1 $2 $3 /gi; # Example: GAROOWE:(SHL
- $s =~ s/ (\d[0-9.,]*\d)(;)([a-z]+) / $1 $2 $3 /gi; # Example: 2.1.2014;Waraka
- }
- $s =~ s/^\s+//;
- $s =~ s/\s+$//;
- $s = "$pre$s$post" if defined($pre) && defined($post);
- return $s;
-}
-
-# preparation for sub repair_separated_periods:
-
-my $abbrev_s = "etc.|e.g.|i.e.|U.K.|S.p.A.|A.F.P.";
-my @abbrevs = split(/\|/, $abbrev_s);
-my @exp_abbrevs = ();
-foreach $abbrev (@abbrevs) {
- if (($core,$period) = ($abbrev =~ /^(.*?)(\.|)$/)) {
- $core =~ s/\./\\s*\\.\\s*/g;
- $abbrev = $core;
- $abbrev .= "\\b" if $abbrev =~ /[a-z]$/i; # don't split etcetera -> etc. etera
- $abbrev .= "(?:\\s*\\.|)" if $period;
- push(@exp_abbrevs, $abbrev);
- }
-}
-my $exp_abbrev_s = join("|", @exp_abbrevs);
-
-sub repair_separated_periods {
- local($caller,$s) = @_;
-
- # separated or missing period
- my $result = "";
- while (($pre, $abbrev, $post) = ($s =~ /^(.*? |)($exp_abbrev_s)(.*)$/)) {
- $abbrev =~ s/ //g;
- $abbrev .= "." unless $abbrev =~ /\.$/;
- $result .= "$pre$abbrev ";
- $s = $post;
- }
- $result .= $s;
- $result =~ s/ +/ /g;
- return $result;
-}
-
-# provided by Alex Fraser
-sub fix_tokenize {
- local($caller,$s) = @_;
-
- ## change "2:15" to "2 @:@ 15"
- $s =~ s/(\d)\:(\d)/$1 \@:\@ $2/g;
-
- ## strip leading zeros from numbers
- $s =~ s/(^|\s)0+(\d)/$1$2/g;
-
- ## fix rule typo
- $s =~ s/associatedpress/associated press/g;
-
- ## fix _ entities
- $s =~ s/hong_kong/hong kong/g;
- $s =~ s/united_states/united states/g;
-
- return $s;
-}
-
-sub de_mt_tokenize {
- local($caller,$s) = @_;
-
- $s =~ s/\s+\@([-:\/])/$1/g;
- $s =~ s/([-:\/])\@\s+/$1/g;
- $s =~ s/\s+\/\s+/\//g;
- return $s;
-}
-
-sub surface_forms {
- local($caller,$pe,$modp) = @_;
-
- $sem = $pe->sem;
- $surf = $pe->surf;
- $synt = $pe->synt;
- $value = $pe->value;
- $gloss = $pe->gloss;
-# $util->log("surface_forms surf:$surf sem:$sem gloss:$gloss value:$value", $logfile);
- if ($sem eq "integer") {
- return ($gloss) if ($gloss =~ /several/) && !($value =~ /\S/);
- print STDERR "Warning: $value not an integer\n" unless $value =~ /^\d+(e\+\d+)?$/;
- if ($pe->get("reliable") =~ /sequence of digits/) {
- $english = $value;
- $english = "$prefix $english" if $prefix = $pe->get("prefix");
- @result = ($english);
- } else {
- @result = $caller->q_number_surface_forms($pe);
- }
- } elsif ($sem eq "decimal number") {
- @result = $caller->q_number_surface_forms($pe);
- } elsif ($sem =~ /(integer|decimal number) range/) {
- @result = $caller->number_range_surface_forms($pe);
- } elsif ($sem eq "ordinal") {
- if ($pe->get("definite")) {
- $exclude_adverbials_p = 1;
- } elsif (defined($chinesePM) && ($hao = $chinesePM->e2c("hao-day"))
- && ($gc = $chinesePM->e2c("generic counter"))) {
- $exclude_adverbials_p = ($surf =~ /($hao|$gc)$/);
- } else {
- $exclude_adverbials_p = 1;
- }
- @result = $caller->ordinal_surface_forms($pe->get("ordvalue") || $pe->value,0,$exclude_adverbials_p, $pe);
- } elsif ($sem eq "fraction") {
- @result = $caller->fraction_surface_forms($pe,$modp);
- } elsif ($sem =~ /monetary quantity/) {
- @result = $caller->currency_surface_forms($pe);
- } elsif ($sem =~ /occurrence quantity/) {
- @result = $caller->occurrence_surface_forms($pe,$modp);
- } elsif ($sem =~ /score quantity/) {
- @result = $caller->score_surface_forms($pe);
- } elsif ($sem =~ /age quantity/) {
- @result = $caller->age_surface_forms($pe, $modp);
- } elsif ($sem =~ /quantity/) {
- @result = $caller->quantity_surface_forms($pe,$modp);
- } elsif ($sem eq "percentage") {
- @result = $caller->percent_surface_forms($pe,$modp);
- } elsif ($sem eq "percentage range") {
- if ($gloss =~ /^and /) {
- @result = ($gloss);
- } else {
- @result = ($gloss, "by $gloss", "of $gloss");
- }
- } elsif ($sem =~ /^(month of the year|month\+year|year)$/) {
- if ($synt eq "pp") {
- @result = ($gloss);
- } elsif ($gloss =~ /^the (beginning|end) of/) {
- @result = ($gloss, "at $gloss");
- } elsif ($gloss =~ /^(last|this|current|next)/) {
- @result = ($gloss);
- } else {
- # in November; in mid-November
- @result = ($gloss, "in $gloss");
- }
- } elsif ($sem =~ /date(\+year)?$/) {
- @result = $caller->date_surface_forms($pe,$modp);
- } elsif ($sem =~ /year range\b.*\b(decade|century)$/) {
- @result = $caller->decade_century_surface_forms($pe);
- } elsif ($sem eq "day of the month") {
- @result = $caller->day_of_the_month_surface_forms($pe);
- } elsif ($sem =~ /period of the day\+day of the week/) {
- @result = ($gloss);
- push(@result, "on $gloss") if $gloss =~ /^the night/;
- } elsif ($sem =~ /day of the week/) {
- @result = $caller->day_of_the_week_surface_forms($pe);
- } elsif ($sem =~ /^(time)$/) {
- if ($gloss =~ /^at /) {
- @result = ($gloss);
- } else {
- @result = ($gloss, "at $gloss");
- }
- } elsif ($sem =~ /^date range$/) {
- if ($synt eq "pp") {
- @result = ($gloss);
- } elsif ($pe->get("between")) {
- $b_gloss = "between $gloss";
- $b_gloss =~ s/-/ and /;
- @result = ($b_gloss, $gloss, "from $gloss");
- } else {
- @result = ($gloss, "from $gloss");
- }
- } elsif ($sem =~ /^date enumeration$/) {
- if ($synt eq "pp") {
- @result = ($gloss);
- } else {
- @result = ($gloss, "on $gloss");
- }
- } elsif ($pe->get("unknown-in-pc")) {
- @result = ();
- foreach $unknown_pos_en (split(/;;/, $pe->get("unknown-pos-en-list"))) {
- ($engl) = ($unknown_pos_en =~ /^[^:]+:[^:]+:(.*)$/);
- push(@result, $engl) if defined($engl) && ! $util->member($engl, @result);
- }
- @result = ($gloss) unless @result;
- } elsif (($sem =~ /\b(name|unknown)\b/) && (($en_s = $pe->get("english")) =~ /[a-z]/i)) {
- @result = split(/\s*\|\s*/, $en_s);
- } elsif (($sem =~ /^proper\b/) && (($en_s = $pe->get("english")) =~ /[a-z]/i)) {
- @result = split(/\s*\|\s*/, $en_s);
- } else {
- @result = ($gloss);
- }
-
- if (($sem =~ /^(date\+year|month\+year|year)$/)
- && ($year = $pe->get("year"))
- && ($year =~ /^\d\d$/)
- && (@extend_years = @{$english_entity_style_ht{"ExtendYears"}})
- && ($#extend_years == 1)
- && ($extended_year_start = $extend_years[0])
- && ($extended_year_end = $extend_years[1])
- && ($extended_year_start <= $extended_year_end)
- && ($extended_year_start + 99 >= $extended_year_end)
- && ($extended_year_start =~ /^\d\d\d\d$/)
- && ($extended_year_end =~ /^\d\d\d\d$/)) {
- $century1 = substr($extended_year_start, 0, 2);
- $century2 = substr($extended_year_end, 0, 2);
- $exp_year1 = "$century1$year";
- $exp_year2 = "$century2$year";
- if (($extended_year_start <= $exp_year1) && ($exp_year1 <= $extended_year_end)) {
- $exp_year = $exp_year1;
- } elsif (($extended_year_start <= $exp_year2) && ($exp_year2 <= $extended_year_end)) {
- $exp_year = $exp_year2;
- } else {
- $exp_year = "";
- }
- if ($exp_year) {
- @new_glosses = ();
- foreach $old_gloss (@result) {
- $new_gloss = $old_gloss;
- $new_gloss =~ s/\b$year$/$exp_year/;
- push (@new_glosses, $new_gloss) unless $new_gloss eq $old_gloss;
- }
- push (@result, @new_glosses);
- }
- }
-
- # tokenize as requested
- @tokenize_list = @{$english_entity_style_ht{"Tokenize"}};
- $tokenize_p = 1 if $util->member("yes", @tokenize_list)
- || $util->member("all", @tokenize_list);
- $dont_tokenize_p = 1 if $util->member("no", @tokenize_list)
- || $util->member("all", @tokenize_list);
- if ($tokenize_p) {
- @new_result = ();
- foreach $item (@result) {
- $t_item = $caller->tokenize($item, *dummy_ht);
- push(@new_result, $item) if $dont_tokenize_p && ($item ne $t_item);
- push(@new_result, $t_item);
- }
- @result = @new_result;
- }
-
- # case as requested
- @case_list = @{$english_entity_style_ht{"Case"}};
- $lower_case_p = $util->member("lower", @case_list)
- || $util->member("all", @case_list);
- $reg_case_p = $util->member("regular", @case_list)
- || $util->member("all", @case_list);
- if ($lower_case_p) {
- @new_result = ();
- foreach $item (@result) {
- $l_item = "\L$item";
- push(@new_result, $item) if $reg_case_p && ($item ne $l_item);
- push(@new_result, $l_item) unless $util->member($l_item, @new_result);
- }
- @result = @new_result;
- }
- # $value = "n/a" unless $value;
- # print STDERR "SF surf:$surf sem:$sem gloss:$gloss value:$value Result(s): " . join("; ", @result) . "\n";
- return @result;
-}
-
-sub case_list {
- return @{$english_entity_style_ht{"Case"}};
-}
-
-sub right_cased_list {
- local($caller, $word) = @_;
-
- @case_list = @{$english_entity_style_ht{"Case"}};
-
- @right_cased_core_list = ();
- push(@right_cased_core_list, $word)
- if ($util->member("regular", @case_list) || $util->member("all", @case_list))
- && ! $util->member($word, @right_cased_core_list);
- push(@right_cased_core_list, lc $word)
- if ($util->member("lower", @case_list) || $util->member("all", @case_list))
- && ! $util->member(lc $word, @right_cased_core_list);
-
- return @right_cased_core_list;
-}
-
-sub string2surf_forms {
- local($caller, $text, $lang, $alt_sep) = @_;
-
- $alt_sep = " | " unless defined($alt_sep);
- $lang = "zh" unless defined($lang);
-
- if ($lang eq "zh") {
- @pes = $chinesePM->parse_entities_in_string($text);
- $n = $#pes + 1;
-# print " $n pes\n";
- @pes = $chinesePM->select_reliable_entities(@pes);
- my @res_surf_forms_copy = $caller->reliable_pes2surf_forms($alt_sep, @pes);
- return @res_surf_forms_copy;
- } else {
- return ();
- }
-}
-
-sub reliable_pe2surf_forms {
- local($caller, $pe, $parent_reliant_suffices_p) = @_;
-
- $parent_reliant_suffices_p = 0 unless defined($parent_reliant_suffices_p);
- if ((defined($r = $pe->get("reliable")) && $r)
- || ($parent_reliant_suffices_p && ($parent_pe = $pe->get("parent")) &&
- $parent_pe->get("reliable"))) {
- @surf_forms = $caller->surface_forms($pe);
- if ((($pe->sem =~ /quantity( range)?$/) && !($pe->sem =~ /monetary quantity/))
- || ($util->member($pe->sem, "percentage","fraction"))) {
- foreach $mod_form ($caller->surface_forms($pe, 1)) {
- push(@surf_forms, $mod_form) unless $util->member($mod_form, @surf_forms);
- }
- }
- return @surf_forms;
- }
- return ();
-}
-
-sub reliable_pe2surf_form {
- local($caller, $alt_sep, $pe) = @_;
-
- if (@surf_forms = $caller->reliable_pe2surf_forms($pe)) {
- return $pe->surf . " == " . join($alt_sep, @surf_forms);
- } else {
- return "";
- }
-}
-
-sub reliable_pes2surf_forms {
- local($caller, $alt_sep, @pes) = @_;
-
- my @res_surf_forms = ();
- foreach $pe (@pes) {
- if ($new_surf_form = $caller->reliable_pe2surf_form($alt_sep, $pe)) {
- push(@res_surf_forms, $new_surf_form);
- }
- }
- return @res_surf_forms;
-}
-
-sub string_contains_ascii_letter {
- local($caller,$string) = @_;
- return $string =~ /[a-zA-Z]/;
-}
-
-sub string_starts_w_ascii_letter {
- local($caller,$string) = @_;
- return $string =~ /^[a-zA-Z]/;
-}
-
-sub en_lex_bin {
- local($caller, $word) = @_;
-
- $word =~ s/\s+//g;
- $word =~ s/[-_'\/]//g;
- $word =~ tr/A-Z/a-z/;
- return "digit" if $word =~ /^\d/;
- return "special" unless $word =~ /^[a-z]/;
- return substr($word, 0, 1);
-}
-
-sub skeleton_bin {
- local($caller, $sk_bin_control, $word) = @_;
-
- $word =~ s/\s+//g;
- $word =~ s/[-_'\/]//g;
- $word =~ tr/A-Z/a-z/;
- return "E" unless $word;
- if ($sk_bin_control =~ /^v1/i) {
- return $word if length($word) <= 2;
- return substr($word, 0, 3) if $word =~ /^(b|f[lnrt]|gr|j[nr]|k|l[nt]|m|n[kmst]|r[knst]|s|t)/;
- return substr($word, 0, 2);
- } elsif ($sk_bin_control =~ /d6f$/) {
- return $word if length($word) <= 6;
- return substr($word, 0, 6);
- } elsif ($sk_bin_control =~ /d5f$/) {
- return $word if length($word) <= 5;
- return substr($word, 0, 5);
- } elsif ($sk_bin_control =~ /d4f$/) {
- return $word if length($word) <= 4;
- return substr($word, 0, 4);
- } else {
- return $word if length($word) <= 4;
- return substr($word, 0, 5) if $word =~ /^(bnts|brnt|brst|brtk|brtn|brts|frst|frts|klts|kntr|knts|krst|krtn|krts|ksks|kstr|lktr|ntrs|sbrt|skrt|sntr|strn|strt|trns|trts|ts)/;
- return substr($word, 0, 4);
- }
-}
-
-sub skeleton_bin_sub_dir {
- local($caller, $sk_bin_control, $skeleton_bin) = @_;
-
- $sk_bin_control = "v1" unless defined($sk_bin_control);
- return "" if $sk_bin_control =~ /^v1/i;
- if ($sk_bin_control =~ /^2d4d\df$/) {
- return "SH/SHOR" if (length($skeleton_bin) < 2);
- return substr($skeleton_bin, 0, 2) . "/" . substr($skeleton_bin, 0, 2) . "SH" if (length($skeleton_bin) < 4);
- return substr($skeleton_bin, 0, 2) . "/" . substr($skeleton_bin, 0, 4);
- } elsif ($sk_bin_control =~ /^2d3d\df$/) {
- return "SH/SHO" if (length($skeleton_bin) < 2);
- return substr($skeleton_bin, 0, 2) . "/" . substr($skeleton_bin, 0, 2) . "S" if (length($skeleton_bin) < 3);
- return substr($skeleton_bin, 0, 2) . "/" . substr($skeleton_bin, 0, 3);
- }
- $bin3 = "ts";
- return "SH" if (length($skeleton_bin) < 2) || ($skeleton_bin =~ /^($bin3)$/);
- return substr($skeleton_bin, 0, 3) if $skeleton_bin =~ /^($bin3)/;
- return substr($skeleton_bin, 0, 2);
-}
-
-sub en_words_and_counts_matching_skeletons {
- local($caller, $sk_bin_version, @skeletons) = @_;
-
- return () unless @skeletons;
-
- @rem_skeletons = sort @skeletons;
- $previous_skeleton = "";
- $current_skeleton = shift @rem_skeletons;
- @list = ($current_skeleton);
- @lists = ();
-
- $current_bin = "";
- while ($current_skeleton) {
- unless ($current_skeleton eq $previous_skeleton) {
- $current_skeleton_bin = $caller->skeleton_bin($sk_bin_version, $current_skeleton);
- unless ($current_skeleton_bin eq $current_bin) {
- # need to read from new file
- close(IN) if $current_bin;
- $current_bin = $current_skeleton_bin;
- $current_bin_subdir
- = $caller->skeleton_bin_sub_dir($sk_bin_version, $current_bin);
- if ($current_bin_subdir) {
- $en_skeleton_file = File::Spec->catfile($english_resources_skeleton_dir,
- $current_bin_subdir,
- "$current_bin.txt");
- } else {
- $en_skeleton_file = File::Spec->catfile($english_resources_skeleton_dir,
- "$current_bin.txt");
- }
- # print STDERR " Perusing $en_skeleton_file ...\n";
- if (open(IN, $en_skeleton_file)) {
- $en_skeleton_file_exists = 1;
- } else {
- $en_skeleton_file_exists = 0;
- print STDERR "Can't open $en_skeleton_file (Point A)\n";
- }
- }
- $previous_skeleton = $current_skeleton;
- }
- $_ = if $en_skeleton_file_exists;
- unless ($en_skeleton_file_exists && defined($_)) {
- push(@lists, join(' ; ', @list));
- if (@rem_skeletons) {
- $current_skeleton = shift @rem_skeletons;
- @list = ($current_skeleton);
- } else {
- $current_skeleton = "";
- }
- next;
- }
- ($skeleton) = ($_ =~ /^(\S+)\t/);
- next unless defined($skeleton);
- $skeletons_match_p = $caller->skeletons_match_p($skeleton, $current_skeleton);
- next if ($skeleton lt $current_skeleton) && ! $skeletons_match_p;
- if ($skeletons_match_p) {
- ($token, $count) = ($_ =~ /^\S+\t(\S|\S[-' a-zA-Z]*\S)\t(\d+)\s*$/);
- push(@list, "$token : $count") if defined($token) && defined($count);
- } else {
- while ($current_skeleton lt $skeleton) {
- push(@lists, join(' ; ', @list));
- unless (@rem_skeletons) {
- close(IN) if $current_bin;
- $current_skeleton = "";
- last;
- }
- $current_skeleton = shift @rem_skeletons;
- @list = ($current_skeleton);
- }
- if ($caller->skeletons_match_p($skeleton, $current_skeleton)) {
- ($token, $count) = ($_ =~ /^\S+\t(\S|\S[-' a-zA-Z]*\S)\t(\d+)\s*$/);
- push(@list, "$token : $count") if defined($token) && defined($count);
- }
- }
- }
- close(IN) if $current_bin;
- return @lists;
-}
-
-sub skeletons_match_p {
-# one of the skeletons might have been cut off at max
- local($caller, $skeleton1, $skeleton2, $max) = @_;
-
- return 1 if $skeleton1 eq $skeleton2;
-
- $max = 5 unless defined($max);
- if ((length($skeleton1) > length($skeleton2)) && (length($skeleton2) == $max)) {
- return ($skeleton1 =~ /^$skeleton2/) ? 1 : 0;
- } elsif ((length($skeleton2) > length($skeleton1)) && (length($skeleton1) == $max)) {
- return ($skeleton2 =~ /^$skeleton1/) ? 1 : 0;
- } else {
- return 0;
- }
-}
-
-sub token_weird_or_too_long {
- local($caller, *WARNING_FH, $token) = @_;
-
- $lc_token = lc $token;
- $norm_token = $lc_token;
- $norm_token =~ s/[-' ,]//g;
- $snippet4_5 = "";
- $snippet4_5 = substr($norm_token, 4, 2) if length($norm_token) >= 10;
- $snippet4_6 = "";
- $snippet4_6 = substr($norm_token, 4, 3) if length($norm_token) >= 10;
- if (($norm_token =~ /(kkk|vvv|www|xxx|yyy|zzz)/) ||
- ($norm_token =~ /[acgt]{15,}/) || # DNA sequence
- ($snippet4_5 && ($norm_token =~ /($snippet4_5){5,}/)) || # 2-letter repetition
- ($snippet4_6 && ($norm_token =~ /($snippet4_6){4,}/)) || # 3-letter repetition
- ($norm_token =~ /[bcdfghjklmnpqrstvwxz]{8,}/) || # too many consonants
- ($token =~ /(DDD)/) ||
- (($lc_token =~ /fff/) && ! ($lc_token =~ /schifff/))) {
- print WARNING_FH "skipping (WEIRD): $_";
- return 1;
- }
- if ((length($norm_token) >= 50) ||
- ((length($norm_token) >= 28)
-
- # typical German compound noun components
- && ! ($norm_token =~ /entwicklung/)
- && ! ($norm_token =~ /fabrik/)
- && ! ($norm_token =~ /finanz/)
- && ! ($norm_token =~ /forschung/)
- && ! ($norm_token =~ /geschwindigkeit/)
- && ! ($norm_token =~ /gesundheit/)
- && ! ($norm_token =~ /gewohnheit/)
- && ! ($norm_token =~ /schaft/)
- && ! ($norm_token =~ /schifffahrt/)
- && ! ($norm_token =~ /sicherheit/)
- && ! ($norm_token =~ /vergangen/)
- && ! ($norm_token =~ /versicherung/)
- && ! ($norm_token =~ /unternehmen/)
- && ! ($norm_token =~ /verwaltung/)
-
- # Other Germanic languages
- && ! ($norm_token =~ /aktiebolag/)
- && ! ($norm_token =~ /aktieselskab/)
- && ! ($norm_token =~ /ontwikkeling/)
-
- # chemical
- && ! ($norm_token =~ /phetamine/)
- && ! ($norm_token =~ /ethyl/)
-
- # medical
- && ! ($norm_token =~ /^pneumonaultramicroscopicsilicovolcanoconios[ei]s$/)
-
- # business
- && ! ($norm_token =~ /PriceWaterhouse/)
- )) {
- print WARNING_FH "skipping (TOO LONG): $_";
- return 1;
- }
- return 0;
-}
-
-sub xml_de_accent {
- local($caller, $string) = @_;
-
- # for the time being, unlauts are mapped to main vowel (without "e")
-
- $string =~ s/\[2-7];/A/g;
- $string =~ s/\Æ/Ae/g;
- $string =~ s/\Ç/C/g;
- $string =~ s/\[0-3];/E/g;
- $string =~ s/\[4-7];/I/g;
- $string =~ s/\Ð/Dh/g;
- $string =~ s/\Ñ/N/g;
- $string =~ s/\[0-4];/O/g;
- $string =~ s/\Ø/O/g;
- $string =~ s/\[7-9];/U/g;
- $string =~ s/\Ü/U/g;
- $string =~ s/\Ý/Y/g;
- $string =~ s/\Þ/Th/g;
-
- $string =~ s/\ß/ss/g;
- $string =~ s/\[4-9];/a/g;
- $string =~ s/\æ/ae/g;
- $string =~ s/\ç/c/g;
- $string =~ s/\[2-5];/e/g;
- $string =~ s/\[6-9];/i/g;
- $string =~ s/\ð/dh/g;
- $string =~ s/\ñ/n/g;
- $string =~ s/\[2-6];/o/g;
- $string =~ s/\ø/o/g;
- $string =~ s/\ù/u/g;
- $string =~ s/\[0-2];/u/g;
- $string =~ s/\ý/y/g;
- $string =~ s/\þ/th/g;
- $string =~ s/\ÿ/y/g;
- $string =~ s/\xE2\x80\x99/'/g;
-
- return $string;
-}
-
-sub de_accent {
- local($caller, $string) = @_;
-
- # for the time being, unlauts are mapped to main vowel (without "e")
-
- $string =~ s/\xC3[\x80-\x85]/A/g;
- $string =~ s/\xC3\x86/Ae/g;
- $string =~ s/\xC3\x87/C/g;
- $string =~ s/\xC3[\x88-\x8B]/E/g;
- $string =~ s/\xC3[\x8C-\x8F]/I/g;
- $string =~ s/\xC3\x90/Dh/g;
- $string =~ s/\xC3\x91/N/g;
- $string =~ s/\xC3[\x92-\x96]/O/g;
- $string =~ s/\xC3\x98/O/g;
- $string =~ s/\xC3[\x99-\x9C]/U/g;
- $string =~ s/\xC3\x9D/Y/g;
- $string =~ s/\xC3\x9E/Th/g;
-
- $string =~ s/\xC3\x9F/ss/g;
- $string =~ s/\xC3[\xA0-\xA5]/a/g;
- $string =~ s/\xC3\xA6/ae/g;
- $string =~ s/\xC3\xA7/c/g;
- $string =~ s/\xC3[\xA8-\xAB]/e/g;
- $string =~ s/\xC3[\xAC-\xAF]/i/g;
- $string =~ s/\xC3\xB0/dh/g;
- $string =~ s/\xC3\xB1/n/g;
- $string =~ s/\xC3[\xB2-\xB6]/o/g;
- $string =~ s/\xC3\xB8/o/g;
- $string =~ s/\xC3[\xB9-\xBC]/u/g;
- $string =~ s/\xC3\xBD/y/g;
- $string =~ s/\xC3\xBE/th/g;
- $string =~ s/\xC3\xBF/y/g;
- $string =~ s/\xE2\x80\x99/'/g;
-
- return $string;
-}
-
-sub common_non_name_cap_p {
- local($caller, $word) = @_;
- return defined($english_ht{(lc $word)}->{COMMON_NON_NAME_CAP});
-}
-
-sub language {
- return "English";
-}
-
-sub language_id {
- return "en";
-}
-
-sub parse_entities_in_string {
- local($caller, $string) = @_;
-
- $ParseEntry->set_current_lang("en");
- @pes = $ParseEntry->init_ParseEntry_list($string);
- @pes = $caller->lexical_heuristic(@pes);
- @pes = $caller->base_number_heuristic(@pes);
-
- return @pes;
-}
-
-sub lexical_heuristic {
- local($caller, @pes) = @_;
-
- $i = 0;
- while ($i <= $#pes) {
- $pe = $pes[$i];
- if ($pe->undefined("synt")) {
- if ($pe->surf =~ /^\d+(,\d\d\d)*\.\d+/) {
- $pe->set("synt", "cardinal");
- $pe->set("sem", "decimal number");
- $value = $pe->surf;
- $value =~ s/,//g;
- $pe->set("value", $value);
- } elsif ($pe->surf =~ /^\d+(,\d\d\d)*$/) {
- $pe->set("synt", "cardinal");
- $pe->set("sem", "integer");
- $value = $pe->surf;
- $value =~ s/,//g;
- $pe->set("value", $value);
- } elsif ($pe->surf =~ /^([-",\.;\s:()\/%]|\@[-:\/]\@|[-:\/]\@|\@[-:\/])$/) {
- $pe->set("gloss", $pe->surf);
- $pe->set("synt", "punctuation");
- } else {
- ($length,$english) = $caller->find_max_lex_match($i,3,@pes);
- if ($length) {
- if ($length > 1) {
- @slot_value_list = ();
- @children = splice(@pes,$i,$length);
- @roles = $util->list_with_same_elem($length,"lex");
- $pe = $ParseEntry->newParent(*slot_value_list,*children,*roles);
- $pe->set("surf",$english);
- $pe->set("eot",1) if $pe->eot_p;
- splice(@pes,$i,0,$pe);
- } else {
- $pe = $pes[$i];
- }
- $annot_s = $english_annotation_ht{$english};
- $annot_s =~ s/^\s*:+//;
- $annot_s =~ s/^\s+//;
- $annot_s =~ s/\s+$//;
- $annot_s =~ s/#.*$//;
- foreach $annot (split('::', $annot_s)) {
- ($slot, $value) = ($annot =~ /^([^:]+):(.*)$/);
- if (defined($slot) && defined($value)) {
- $pe->set($slot, $value);
- }
- $pe->set("sem", "integer") if ($slot eq "synt") && ($value eq "cardinal");
- }
- $pe->set("ord-value", $ord_value)
- if $ord_value = $english_annotation_ht{"_EN_SYNT_"}->{(lc $english)}->{"ordinal"}->{"value"};
- $pe->set("card-value", $card_value)
- if $card_value = $english_annotation_ht{"_EN_SYNT_"}->{(lc $english)}->{"cardinal"}->{"value"};
- }
- }
- }
- $i++;
- }
- return @pes;
-}
-
-# builds numbers, incl. integers, decimal numbers, fractions, percentages, ordinals
-sub base_number_heuristic {
- local($caller, @pes) = @_;
-
- $i = 0;
- # $ParseEntry->print_pes("start base_number_heuristic",$i,@pes);
- while ($i <= $#pes) {
- # forty-five
- ($head_pe, @pes) =
- $ParseEntry->build_parse_entry("composite number plus","",$i,*pes,
- ' :head :($pe->sem eq "integer") && ($pe->value =~ /^[1-9]0$/)',
- 'optional:dummy:$pe->surf eq "\@-\@"',
- ' :mod :($pe->sem eq "integer") && ($pe->value =~ /^[1-9]$/)');
- if ($head_pe) { # match succeeded
- $value1 = $head_pe->childValue("head");
- $value2 = $head_pe->childValue("mod");
- $head_pe->set("value", $value1 + $value2);
- }
- # six billion
- ($head_pe, @pes) =
- $ParseEntry->build_parse_entry("composite number 1000","",$i,*pes,
- ' :mod :(($value1 = $pe->value) =~ /^\d+(.\d+)?$/) && ($value1 < 1000)',
- ' :head:($value2 = $pe->value) =~ /^1(000)+$/');
- if ($head_pe) { # match succeeded
- $value1 = $head_pe->childValue("mod");
- $value2 = $head_pe->childValue("head");
- $head_pe->set("value", $value1 * $value2);
- }
- # twenty-second
- ($head_pe, @pes) =
- $ParseEntry->build_parse_entry("composite ordinal","",$i,*pes,
- ' :mod :($pe->sem eq "integer") && ($pe->value =~ /^[1-9]0$/)',
- 'optional:dummy:$pe->surf eq "\@-\@"',
- ' :head :$pe->get("ord-value") =~ /^[1-9]$/');
- if ($head_pe) { # match succeeded
- $value1 = $head_pe->childSlot("head", "ord-value");
- $value2 = $head_pe->childValue("mod");
- $head_pe->set("value", $value1 + $value2);
- }
- $i++;
- }
-
- return @pes;
-}
-
-sub find_max_lex_match {
- local($caller,$start,$maxlength,@pes) = @_;
-
- while ($maxlength > 0) {
- if (($english = $util->pes_subseq_surf($start,$maxlength,"en",@pes))
- && defined($english_annotation_ht{$english})
- && ($english =~ /\S/)) {
- return ($maxlength, $english);
- } else {
- $maxlength--;
- }
- }
- return (0,"");
-}
-
-sub select_reliable_entities {
- local($caller, @pes) = @_;
-
- foreach $i (0 .. $#pes) {
- $pe = $pes[$i];
- $surf = $pe->surf;
-
- $pe->set("reliable",1);
- }
- return @pes;
-}
-
-sub negatives_p {
- # (cool <-> uncool), (improper <-> proper), ...
- local($caller, $s1, $s2) = @_;
-
- my $g_s1 = $util->regex_guard($s1);
- my $g_s2 = $util->regex_guard($s2);
- return 1 if $s1 =~ /^[iu]n$g_s2$/;
- return 1 if $s1 =~ /^il$g_s2$/ && ($s2 =~ /^l/);
- return 1 if $s1 =~ /^im$g_s2$/ && ($s2 =~ /^[mp]/);
-
- return 1 if $s2 =~ /^[iu]n$g_s1$/;
- return 1 if $s2 =~ /^il$g_s1$/ && ($s1 =~ /^l/);
- return 1 if $s2 =~ /^im$g_s1$/ && ($s1 =~ /^[mp]/);
-
- return 0;
-}
-
-sub present_participle_p {
- local($caller, $pe) = @_;
-
- my $aux_pe = $pe->child("aux");
- return $caller->present_participle_p($aux_pe) if $aux_pe;
- my $head_pe = $pe->child("head");
- return $caller->present_participle_p($head_pe) if $head_pe;
- return ($pe->synt =~ /^VBG/);
-}
-
-
-%engl_value_ht = (
- "monday" => 1,
- "tuesday" => 2,
- "wednesday" => 3,
- "thursday" => 4,
- "friday" => 5,
- "saturday" => 6,
- "sunday" => 7,
-
- "january" => 1,
- "february" => 2,
- "march" => 3,
- "april" => 4,
- "may" => 5,
- "june" => 6,
- "july" => 7,
- "august" => 8,
- "september" => 9,
- "october" => 10,
- "november" => 11,
- "december" => 12,
-
- "spring" => 1,
- "summer" => 2,
- "fall" => 3,
- "autumn" => 3,
- "winter" => 4,
-
- "morning" => 1,
- "noon" => 2,
- "afternoon" => 3,
- "evening" => 4,
- "night" => 5,
-
- "picosecond" => 1,
- "nanosecond" => 2,
- "microsecond" => 3,
- "millisecond" => 4,
- "second" => 5,
- "minute" => 6,
- "hour" => 7,
- "day" => 8,
- "week" => 9,
- "fortnight" => 10,
- "month" => 11,
- "year" => 12,
- "decade" => 13,
- "century" => 14,
- "millennium" => 15,
-
- "nanometer" => 2,
- "micrometer" => 3,
- "millimeter" => 4,
- "centimeter" => 5,
- "decimeter" => 6,
- "meter" => 7,
- "kilometer" => 8,
- "inch" => 11,
- "foot" => 12,
- "yard" => 13,
- "mile" => 14,
- "lightyear" => 20,
-
- "microgram" => 2,
- "milligram" => 3,
- "gram" => 4,
- "kilogram" => 5,
- "ton" => 6,
- "ounce" => 14,
-);
-
-sub engl_order_value {
- local($this, $s) = @_;
-
- return $value = $engl_value_ht{(lc $s)} || 0;
-}
-
-1;
-
diff --git a/spaces/MiloSobral/PortiloopDemo/portiloop/src/hardware/demo/acquisition_demo.py b/spaces/MiloSobral/PortiloopDemo/portiloop/src/hardware/demo/acquisition_demo.py
deleted file mode 100644
index 10a457ddaa617716c53839b36d772fe78d4befca..0000000000000000000000000000000000000000
--- a/spaces/MiloSobral/PortiloopDemo/portiloop/src/hardware/demo/acquisition_demo.py
+++ /dev/null
@@ -1,122 +0,0 @@
-from time import sleep
-from playsound import playsound
-import numpy as np
-import matplotlib.pyplot as plt
-import os
-from datetime import datetime, timedelta
-
-from frontend import Frontend
-from leds import LEDs, Color
-
-DEFAULT_FRONTEND_CONFIG = [
- 0x3E, # ID (RO)
- 0x96, # Datarate = 250 SPS
- 0xC0, # No tests
- 0x60, # Power-down reference buffer, no bias
- 0x00, # No lead-off
- 0x61, # Channel 1 active, 24 gain, no SRB2 & input shorted
- 0x61, # Channel 2 active, 24 gain, no SRB2 & input shorted
- 0x61, # Channel 3 active, 24 gain, no SRB2 & input shorted
- 0x61, # Channel 4 active, 24 gain, no SRB2 & input shorted
- 0x61, # Channel 5 active, 24 gain, no SRB2 & input shorted
- 0x61, # Channel 6 active, 24 gain, no SRB2 & input shorted
- 0x61, # Channel 7 active, 24 gain, no SRB2 & input shorted
- 0x61, # Channel 8 active, 24 gain, no SRB2 & input shorted
- 0x00, # No bias
- 0x00, # No bias
- 0x00, # No lead-off
- 0x00, # No lead-off
- 0x00, # No lead-off flip
- 0x00, # Lead-off positive status (RO)
- 0x00, # Laed-off negative status (RO)
- 0x0F, # All GPIOs as inputs
- 0x00, # Disable SRB1
- 0x00, # Unused
- 0x00, # Single-shot, lead-off comparator disabled
-]
-
-FRONTEND_CONFIG = [
- 0x3E, # ID (RO)
- 0x95, # Datarate = 500 SPS
- 0xC0, # No tests
- 0xE0, # Power-down reference buffer, no bias
- 0x00, # No lead-off
- 0x68, # Channel 1 active, 24 gain, no SRB2 & normal input
- 0x68, # Channel 2 active, 24 gain, no SRB2 & normal input
- 0x68, # Channel 3 active, 24 gain, no SRB2 & normal input
- 0x68, # Channel 4 active, 24 gain, no SRB2 & normal input
- 0x68, # Channel 5 active, 24 gain, no SRB2 & normal input
- 0x68, # Channel 6 active, 24 gain, no SRB2 & normal input
- 0x68, # Channel 7 active, 24 gain, no SRB2 & normal input
- 0xE0, # Channel 8 disabled, 24 gain, no SRB2 & normal input
- 0x00, # No bias
- 0x00, # No bias
- 0xFF, # Lead-off on all positive pins?
- 0xFF, # Lead-off on all negative pins?
- 0x00, # Normal lead-off
- 0x00, # Lead-off positive status (RO)
- 0x00, # Lead-off negative status (RO)
- 0x00, # All GPIOs as output ?
- 0x20, # Disable SRB1
-]
-
-frontend = Frontend()
-leds = LEDs()
-
-try:
- data = frontend.read_regs(0x00, 1)
- assert data == [0x3E], "Wrong output"
- print("EEG Frontend responsive")
- leds.led2(Color.BLUE)
-
- print("Configuring EEG Frontend")
- frontend.write_regs(0x00, FRONTEND_CONFIG)
- #config = DEFAULT_FRONTEND_CONFIG[:]
- #config[0x02] = 0xD0 # Activate test signals
- #config[0x03] = 0xE0 # Power-up reference buffer
- #for i in range(0x05, 0x0D):
- # config[i] = 0x05 # Channel active, 1 gain, no SRB2 & Test signal
- #frontend.write_regs(0x00, config)
- data = frontend.read_regs(0x00, len(FRONTEND_CONFIG))
- assert data == FRONTEND_CONFIG, f"Wrong config: {data} vs {FRONTEND_CONFIG}"
- frontend.start()
- print("EEG Frontend configured")
- leds.led2(Color.PURPLE)
- while not frontend.is_ready():
- pass
- print("Ready for data")
-
- leds.aquisition(True)
- sleep(0.5)
- leds.aquisition(False)
- sleep(0.5)
- leds.aquisition(True)
-
- points = []
- START = datetime.now()
- NUM_STEPS = 2000
- #times = [timedelta(milliseconds=i) for i in range(NUM_STEPS)]
- times = [i / 250 for i in range(NUM_STEPS)]
- for x in range(NUM_STEPS):
- while not frontend.is_ready():
- pass
- values = frontend.read()
- print(values.channels())
- points.append(values.channels())
- while frontend.is_ready():
- pass
- leds.aquisition(False)
-
- points = np.transpose(np.array(points))
- fig, ax = plt.subplots()
- for i in [0]:#range(8):
- ax.plot(times, points[i] * 4.5 / 2**24, label='channel #' + str(i))
- ax.set_xlabel('Time since start (s)')
- ax.set_ylabel('Value')
- ax.set_title('Test readings')
- ax.legend()
- plt.savefig('readings.png')
-
-finally:
- frontend.close()
- leds.close()
diff --git a/spaces/MirageML/sjc/run_sjc.py b/spaces/MirageML/sjc/run_sjc.py
deleted file mode 100644
index 01894c75f33b08f91f2345036e0a837dd6763cfa..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/run_sjc.py
+++ /dev/null
@@ -1,298 +0,0 @@
-import math
-import numpy as np
-import torch
-import torch.nn as nn
-from einops import rearrange
-from imageio import imwrite
-from pydantic import validator
-
-from my.utils import (
- tqdm, EventStorage, HeartBeat, EarlyLoopBreak,
- get_event_storage, get_heartbeat, read_stats
-)
-from my.config import BaseConf, dispatch, optional_load_config
-from my.utils.seed import seed_everything
-
-from adapt import ScoreAdapter, karras_t_schedule
-from run_img_sampling import GDDPM, SD, StableDiffusion
-from misc import torch_samps_to_imgs
-from pose import PoseConfig
-
-from run_nerf import VoxConfig
-from voxnerf.utils import every
-from voxnerf.render import (
- as_torch_tsrs, rays_from_img, ray_box_intersect, render_ray_bundle
-)
-from voxnerf.vis import stitch_vis, bad_vis as nerf_vis
-
-
-device_glb = torch.device("cuda")
-
-
-def tsr_stats(tsr):
- return {
- "mean": tsr.mean().item(),
- "std": tsr.std().item(),
- "max": tsr.max().item(),
- }
-
-
-class SJC(BaseConf):
- family: str = "sd"
- gddpm: GDDPM = GDDPM()
- sd: SD = SD(
- variant="v1",
- prompt="A high quality photo of a delicious burger",
- scale=100.0
- )
- lr: float = 0.05
- n_steps: int = 10000
- vox: VoxConfig = VoxConfig(
- model_type="V_SD", grid_size=100, density_shift=-1.0, c=3,
- blend_bg_texture=True, bg_texture_hw=4,
- bbox_len=1.0
- )
- pose: PoseConfig = PoseConfig(rend_hw=64, FoV=60.0, R=1.5)
-
- emptiness_scale: int = 10
- emptiness_weight: int = 1e4
- emptiness_step: float = 0.5
- emptiness_multiplier: float = 20.0
-
- depth_weight: int = 0
-
- var_red: bool = True
-
- @validator("vox")
- def check_vox(cls, vox_cfg, values):
- family = values['family']
- if family == "sd":
- vox_cfg.c = 4
- return vox_cfg
-
- def run(self):
- cfgs = self.dict()
-
- family = cfgs.pop("family")
- model = getattr(self, family).make()
-
- cfgs.pop("vox")
- vox = self.vox.make()
-
- cfgs.pop("pose")
- poser = self.pose.make()
-
- sjc_3d(**cfgs, poser=poser, model=model, vox=vox)
-
-
-def sjc_3d(
- poser, vox, model: ScoreAdapter,
- lr, n_steps, emptiness_scale, emptiness_weight, emptiness_step, emptiness_multiplier,
- depth_weight, var_red, **kwargs
-):
- del kwargs
-
- assert model.samps_centered()
- _, target_H, target_W = model.data_shape()
- bs = 1
- aabb = vox.aabb.T.cpu().numpy()
- vox = vox.to(device_glb)
- opt = torch.optim.Adamax(vox.opt_params(), lr=lr)
-
- H, W = poser.H, poser.W
- Ks, poses, prompt_prefixes = poser.sample_train(n_steps)
-
- ts = model.us[30:-10]
- fuse = EarlyLoopBreak(5)
-
- same_noise = torch.randn(1, 4, H, W, device=model.device).repeat(bs, 1, 1, 1)
-
- with tqdm(total=n_steps) as pbar, \
- HeartBeat(pbar) as hbeat, \
- EventStorage() as metric:
- for i in range(n_steps):
- if fuse.on_break():
- break
-
- p = f"{prompt_prefixes[i]} {model.prompt}"
- score_conds = model.prompts_emb([p])
-
- y, depth, ws = render_one_view(vox, aabb, H, W, Ks[i], poses[i], return_w=True)
-
- if isinstance(model, StableDiffusion):
- pass
- else:
- y = torch.nn.functional.interpolate(y, (target_H, target_W), mode='bilinear')
-
- opt.zero_grad()
-
- with torch.no_grad():
- chosen_σs = np.random.choice(ts, bs, replace=False)
- chosen_σs = chosen_σs.reshape(-1, 1, 1, 1)
- chosen_σs = torch.as_tensor(chosen_σs, device=model.device, dtype=torch.float32)
- # chosen_σs = us[i]
-
- noise = torch.randn(bs, *y.shape[1:], device=model.device)
-
- zs = y + chosen_σs * noise
- Ds = model.denoise(zs, chosen_σs, **score_conds)
-
- if var_red:
- grad = (Ds - y) / chosen_σs
- else:
- grad = (Ds - zs) / chosen_σs
-
- grad = grad.mean(0, keepdim=True)
-
- y.backward(-grad, retain_graph=True)
-
- if depth_weight > 0:
- center_depth = depth[7:-7, 7:-7]
- border_depth_mean = (depth.sum() - center_depth.sum()) / (64*64-50*50)
- center_depth_mean = center_depth.mean()
- depth_diff = center_depth_mean - border_depth_mean
- depth_loss = - torch.log(depth_diff + 1e-12)
- depth_loss = depth_weight * depth_loss
- depth_loss.backward(retain_graph=True)
-
- emptiness_loss = torch.log(1 + emptiness_scale * ws).mean()
- emptiness_loss = emptiness_weight * emptiness_loss
- if emptiness_step * n_steps <= i:
- emptiness_loss *= emptiness_multiplier
- emptiness_loss.backward()
-
- opt.step()
-
- metric.put_scalars(**tsr_stats(y))
-
- if every(pbar, percent=1):
- with torch.no_grad():
- if isinstance(model, StableDiffusion):
- y = model.decode(y)
- vis_routine(metric, y, depth)
-
- # if every(pbar, step=2500):
- # metric.put_artifact(
- # "ckpt", ".pt", lambda fn: torch.save(vox.state_dict(), fn)
- # )
- # with EventStorage("test"):
- # evaluate(model, vox, poser)
-
- metric.step()
- pbar.update()
- pbar.set_description(p)
- hbeat.beat()
-
- metric.put_artifact(
- "ckpt", ".pt", lambda fn: torch.save(vox.state_dict(), fn)
- )
- with EventStorage("test"):
- evaluate(model, vox, poser)
-
- metric.step()
-
- hbeat.done()
-
-
-@torch.no_grad()
-def evaluate(score_model, vox, poser):
- H, W = poser.H, poser.W
- vox.eval()
- K, poses = poser.sample_test(100)
-
- fuse = EarlyLoopBreak(5)
- metric = get_event_storage()
- hbeat = get_heartbeat()
-
- aabb = vox.aabb.T.cpu().numpy()
- vox = vox.to(device_glb)
-
- num_imgs = len(poses)
-
- for i in (pbar := tqdm(range(num_imgs))):
- if fuse.on_break():
- break
-
- pose = poses[i]
- y, depth = render_one_view(vox, aabb, H, W, K, pose)
- if isinstance(score_model, StableDiffusion):
- y = score_model.decode(y)
- vis_routine(metric, y, depth)
-
- metric.step()
- hbeat.beat()
-
- metric.flush_history()
-
- metric.put_artifact(
- "view_seq", ".mp4",
- lambda fn: stitch_vis(fn, read_stats(metric.output_dir, "view")[1])
- )
-
- metric.step()
-
-
-def render_one_view(vox, aabb, H, W, K, pose, return_w=False):
- N = H * W
- ro, rd = rays_from_img(H, W, K, pose)
- ro, rd, t_min, t_max = scene_box_filter(ro, rd, aabb)
- assert len(ro) == N, "for now all pixels must be in"
- ro, rd, t_min, t_max = as_torch_tsrs(vox.device, ro, rd, t_min, t_max)
- rgbs, depth, weights = render_ray_bundle(vox, ro, rd, t_min, t_max)
-
- rgbs = rearrange(rgbs, "(h w) c -> 1 c h w", h=H, w=W)
- depth = rearrange(depth, "(h w) 1 -> h w", h=H, w=W)
- if return_w:
- return rgbs, depth, weights
- else:
- return rgbs, depth
-
-
-def scene_box_filter(ro, rd, aabb):
- _, t_min, t_max = ray_box_intersect(ro, rd, aabb)
- # do not render what's behind the ray origin
- t_min, t_max = np.maximum(t_min, 0), np.maximum(t_max, 0)
- return ro, rd, t_min, t_max
-
-
-def vis_routine(metric, y, depth):
- pane = nerf_vis(y, depth, final_H=256)
- im = torch_samps_to_imgs(y)[0]
- depth = depth.cpu().numpy()
- metric.put_artifact("view", ".png", lambda fn: imwrite(fn, pane))
- metric.put_artifact("img", ".png", lambda fn: imwrite(fn, im))
- metric.put_artifact("depth", ".npy", lambda fn: np.save(fn, depth))
-
-
-def evaluate_ckpt():
- cfg = optional_load_config(fname="full_config.yml")
- assert len(cfg) > 0, "can't find cfg file"
- mod = SJC(**cfg)
-
- family = cfg.pop("family")
- model: ScoreAdapter = getattr(mod, family).make()
- vox = mod.vox.make()
- poser = mod.pose.make()
-
- pbar = tqdm(range(1))
-
- with EventStorage(), HeartBeat(pbar):
- ckpt_fname = latest_ckpt()
- state = torch.load(ckpt_fname, map_location="cpu")
- vox.load_state_dict(state)
- vox.to(device_glb)
-
- with EventStorage("test"):
- evaluate(model, vox, poser)
-
-
-def latest_ckpt():
- ts, ys = read_stats("./", "ckpt")
- assert len(ys) > 0
- return ys[-1]
-
-
-if __name__ == "__main__":
- seed_everything(0)
- dispatch(SJC)
- # evaluate_ckpt()
diff --git a/spaces/Miya1337/NovelAI/README.md b/spaces/Miya1337/NovelAI/README.md
deleted file mode 100644
index 4bc5d1ad43d3d0ffa307c9ce6a6f0fbc79d9941f..0000000000000000000000000000000000000000
--- a/spaces/Miya1337/NovelAI/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: NovelAI
-emoji: 🏃
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/_base_/default_runtime.py b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/_base_/default_runtime.py
deleted file mode 100644
index f3ce4e1a43a0811db084ccfdc6787761fb62b13b..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/_base_/default_runtime.py
+++ /dev/null
@@ -1,50 +0,0 @@
-default_scope = 'mmocr'
-env_cfg = dict(
- cudnn_benchmark=False,
- mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
- dist_cfg=dict(backend='nccl'),
-)
-randomness = dict(seed=None)
-
-default_hooks = dict(
- timer=dict(type='IterTimerHook'),
- logger=dict(type='LoggerHook', interval=100),
- param_scheduler=dict(type='ParamSchedulerHook'),
- checkpoint=dict(type='CheckpointHook', interval=1),
- sampler_seed=dict(type='DistSamplerSeedHook'),
- sync_buffer=dict(type='SyncBuffersHook'),
- visualization=dict(
- type='VisualizationHook',
- interval=1,
- enable=False,
- show=False,
- draw_gt=False,
- draw_pred=False),
-)
-# Logging
-log_level = 'INFO'
-log_processor = dict(type='LogProcessor', window_size=10, by_epoch=True)
-
-load_from = None
-resume = False
-
-# Evaluation
-val_evaluator = dict(
- type='MultiDatasetsEvaluator',
- metrics=[
- dict(
- type='WordMetric',
- mode=['exact', 'ignore_case', 'ignore_case_symbol']),
- dict(type='CharMetric')
- ],
- dataset_prefixes=None)
-test_evaluator = val_evaluator
-
-# Visualization
-vis_backends = [dict(type='LocalVisBackend')]
-visualizer = dict(
- type='TextRecogLocalVisualizer',
- name='visualizer',
- vis_backends=vis_backends)
-
-tta_model = dict(type='EncoderDecoderRecognizerTTAModel')
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/packers/textdet_packer.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/packers/textdet_packer.py
deleted file mode 100644
index b9d4c230945fefaca9d6c90a1b99ed05b3956269..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/packers/textdet_packer.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-from typing import Dict, List, Tuple
-
-import mmcv
-
-from mmocr.registry import DATA_PACKERS
-from mmocr.utils import bbox2poly, poly2bbox
-from .base import BasePacker
-
-
-@DATA_PACKERS.register_module()
-class TextDetPacker(BasePacker):
- """Text detection packer. It is used to pack the parsed annotation info to.
-
- .. code-block:: python
-
- {
- "metainfo":
- {
- "dataset_type": "TextDetDataset",
- "task_name": "textdet",
- "category": [{"id": 0, "name": "text"}]
- },
- "data_list":
- [
- {
- "img_path": "test_img.jpg",
- "height": 640,
- "width": 640,
- "instances":
- [
- {
- "polygon": [0, 0, 0, 10, 10, 20, 20, 0],
- "bbox": [0, 0, 10, 20],
- "bbox_label": 0,
- "ignore": False
- },
- // ...
- ]
- }
- ]
- }
- """
-
- def pack_instance(self, sample: Tuple, bbox_label: int = 0) -> Dict:
- """Pack the parsed annotation info to an MMOCR format instance.
-
- Args:
- sample (Tuple): A tuple of (img_file, instances).
- - img_path (str): Path to the image file.
- - instances (Sequence[Dict]): A list of converted annos. Each
- element should be a dict with the following keys:
-
- - 'poly' or 'box'
- - 'ignore'
- - 'bbox_label' (optional)
- split (str): The split of the instance.
-
- Returns:
- Dict: An MMOCR format instance.
- """
-
- img_path, instances = sample
-
- img = mmcv.imread(img_path)
- h, w = img.shape[:2]
-
- packed_instances = list()
- for instance in instances:
- poly = instance.get('poly', None)
- box = instance.get('box', None)
- assert box or poly
- packed_sample = dict(
- polygon=poly if poly else list(
- bbox2poly(box).astype('float64')),
- bbox=box if box else list(poly2bbox(poly).astype('float64')),
- bbox_label=bbox_label,
- ignore=instance['ignore'])
- packed_instances.append(packed_sample)
-
- packed_instances = dict(
- instances=packed_instances,
- img_path=osp.relpath(img_path, self.data_root),
- height=h,
- width=w)
-
- return packed_instances
-
- def add_meta(self, sample: List) -> Dict:
- """Add meta information to the sample.
-
- Args:
- sample (List): A list of samples of the dataset.
-
- Returns:
- Dict: A dict contains the meta information and samples.
- """
- meta = {
- 'metainfo': {
- 'dataset_type': 'TextDetDataset',
- 'task_name': 'textdet',
- 'category': [{
- 'id': 0,
- 'name': 'text'
- }]
- },
- 'data_list': sample
- }
- return meta
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/decoders/base.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/decoders/base.py
deleted file mode 100644
index 2c990ca0c1c9c1b6a2878ca05cb764b20e3d8fb1..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/decoders/base.py
+++ /dev/null
@@ -1,166 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Dict, List, Optional, Sequence, Union
-
-import torch
-from mmengine.model import BaseModule
-
-from mmocr.models.common.dictionary import Dictionary
-from mmocr.registry import MODELS, TASK_UTILS
-from mmocr.structures import TextRecogDataSample
-
-
-@MODELS.register_module()
-class BaseDecoder(BaseModule):
- """Base decoder for text recognition, build the loss and postprocessor.
-
- Args:
- dictionary (dict or :obj:`Dictionary`): The config for `Dictionary` or
- the instance of `Dictionary`.
- loss (dict, optional): Config to build loss. Defaults to None.
- postprocessor (dict, optional): Config to build postprocessor.
- Defaults to None.
- max_seq_len (int): Maximum sequence length. The
- sequence is usually generated from decoder. Defaults to 40.
- init_cfg (dict or list[dict], optional): Initialization configs.
- Defaults to None.
- """
-
- def __init__(self,
- dictionary: Union[Dict, Dictionary],
- module_loss: Optional[Dict] = None,
- postprocessor: Optional[Dict] = None,
- max_seq_len: int = 40,
- init_cfg: Optional[Union[Dict, List[Dict]]] = None) -> None:
- super().__init__(init_cfg=init_cfg)
- if isinstance(dictionary, dict):
- self.dictionary = TASK_UTILS.build(dictionary)
- elif isinstance(dictionary, Dictionary):
- self.dictionary = dictionary
- else:
- raise TypeError(
- 'The type of dictionary should be `Dictionary` or dict, '
- f'but got {type(dictionary)}')
- self.module_loss = None
- self.postprocessor = None
- self.max_seq_len = max_seq_len
-
- if module_loss is not None:
- assert isinstance(module_loss, dict)
- module_loss.update(dictionary=dictionary)
- module_loss.update(max_seq_len=max_seq_len)
- self.module_loss = MODELS.build(module_loss)
-
- if postprocessor is not None:
- assert isinstance(postprocessor, dict)
- postprocessor.update(dictionary=dictionary)
- postprocessor.update(max_seq_len=max_seq_len)
- self.postprocessor = MODELS.build(postprocessor)
-
- def forward_train(
- self,
- feat: Optional[torch.Tensor] = None,
- out_enc: Optional[torch.Tensor] = None,
- data_samples: Optional[Sequence[TextRecogDataSample]] = None
- ) -> torch.Tensor:
- """Forward for training.
-
- Args:
- feat (torch.Tensor, optional): The feature map from backbone of
- shape :math:`(N, E, H, W)`. Defaults to None.
- out_enc (torch.Tensor, optional): Encoder output. Defaults to None.
- data_samples (Sequence[TextRecogDataSample]): Batch of
- TextRecogDataSample, containing gt_text information. Defaults
- to None.
- """
- raise NotImplementedError
-
- def forward_test(
- self,
- feat: Optional[torch.Tensor] = None,
- out_enc: Optional[torch.Tensor] = None,
- data_samples: Optional[Sequence[TextRecogDataSample]] = None
- ) -> torch.Tensor:
- """Forward for testing.
-
- Args:
- feat (torch.Tensor, optional): The feature map from backbone of
- shape :math:`(N, E, H, W)`. Defaults to None.
- out_enc (torch.Tensor, optional): Encoder output. Defaults to None.
- data_samples (Sequence[TextRecogDataSample]): Batch of
- TextRecogDataSample, containing gt_text information. Defaults
- to None.
- """
- raise NotImplementedError
-
- def loss(self,
- feat: Optional[torch.Tensor] = None,
- out_enc: Optional[torch.Tensor] = None,
- data_samples: Optional[Sequence[TextRecogDataSample]] = None
- ) -> Dict:
- """Calculate losses from a batch of inputs and data samples.
-
- Args:
- feat (Tensor, optional): Features from the backbone. Defaults
- to None.
- out_enc (Tensor, optional): Features from the encoder.
- Defaults to None.
- data_samples (list[TextRecogDataSample], optional): A list of
- N datasamples, containing meta information and gold
- annotations for each of the images. Defaults to None.
-
- Returns:
- dict[str, tensor]: A dictionary of loss components.
- """
- out_dec = self(feat, out_enc, data_samples)
- return self.module_loss(out_dec, data_samples)
-
- def predict(
- self,
- feat: Optional[torch.Tensor] = None,
- out_enc: Optional[torch.Tensor] = None,
- data_samples: Optional[Sequence[TextRecogDataSample]] = None
- ) -> Sequence[TextRecogDataSample]:
- """Perform forward propagation of the decoder and postprocessor.
-
- Args:
- feat (Tensor, optional): Features from the backbone. Defaults
- to None.
- out_enc (Tensor, optional): Features from the encoder. Defaults
- to None.
- data_samples (list[TextRecogDataSample]): A list of N datasamples,
- containing meta information and gold annotations for each of
- the images. Defaults to None.
-
- Returns:
- list[TextRecogDataSample]: A list of N datasamples of prediction
- results. Results are stored in ``pred_text``.
- """
- out_dec = self(feat, out_enc, data_samples)
- return self.postprocessor(out_dec, data_samples)
-
- def forward(
- self,
- feat: Optional[torch.Tensor] = None,
- out_enc: Optional[torch.Tensor] = None,
- data_samples: Optional[Sequence[TextRecogDataSample]] = None
- ) -> torch.Tensor:
- """Decoder forward.
-
- Args:
- feat (Tensor, optional): Features from the backbone. Defaults
- to None.
- out_enc (Tensor, optional): Features from the encoder.
- Defaults to None.
- data_samples (list[TextRecogDataSample]): A list of N datasamples,
- containing meta information and gold annotations for each of
- the images. Defaults to None.
-
- Returns:
- Tensor: Features from ``decoder`` forward.
- """
- if self.training:
- if getattr(self, 'module_loss') is not None:
- data_samples = self.module_loss.get_targets(data_samples)
- return self.forward_train(feat, out_enc, data_samples)
- else:
- return self.forward_test(feat, out_enc, data_samples)
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/structures/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/structures/__init__.py
deleted file mode 100644
index 2b71ac262a07022d63faee8766a555933793da5e..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/structures/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .kie_data_sample import KIEDataSample
-from .textdet_data_sample import TextDetDataSample
-from .textrecog_data_sample import TextRecogDataSample
-from .textspotting_data_sample import TextSpottingDataSample
-
-__all__ = [
- 'TextDetDataSample', 'TextRecogDataSample', 'KIEDataSample',
- 'TextSpottingDataSample'
-]
diff --git a/spaces/MrD05/text-generation-webui-space/modules/callbacks.py b/spaces/MrD05/text-generation-webui-space/modules/callbacks.py
deleted file mode 100644
index faa4a5e9991e1ae711589fed61e7d1f48e28fed3..0000000000000000000000000000000000000000
--- a/spaces/MrD05/text-generation-webui-space/modules/callbacks.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import gc
-from queue import Queue
-from threading import Thread
-
-import torch
-import transformers
-
-import modules.shared as shared
-
-# Copied from https://github.com/PygmalionAI/gradio-ui/
-class _SentinelTokenStoppingCriteria(transformers.StoppingCriteria):
-
- def __init__(self, sentinel_token_ids: torch.LongTensor,
- starting_idx: int):
- transformers.StoppingCriteria.__init__(self)
- self.sentinel_token_ids = sentinel_token_ids
- self.starting_idx = starting_idx
-
- def __call__(self, input_ids: torch.LongTensor,
- _scores: torch.FloatTensor) -> bool:
- for sample in input_ids:
- trimmed_sample = sample[self.starting_idx:]
- # Can't unfold, output is still too tiny. Skip.
- if trimmed_sample.shape[-1] < self.sentinel_token_ids.shape[-1]:
- continue
-
- for window in trimmed_sample.unfold(
- 0, self.sentinel_token_ids.shape[-1], 1):
- if torch.all(torch.eq(self.sentinel_token_ids, window)):
- return True
- return False
-
-class Stream(transformers.StoppingCriteria):
- def __init__(self, callback_func=None):
- self.callback_func = callback_func
-
- def __call__(self, input_ids, scores) -> bool:
- if self.callback_func is not None:
- self.callback_func(input_ids[0])
- return False
-
-class Iteratorize:
-
- """
- Transforms a function that takes a callback
- into a lazy iterator (generator).
- """
-
- def __init__(self, func, kwargs={}, callback=None):
- self.mfunc=func
- self.c_callback=callback
- self.q = Queue()
- self.sentinel = object()
- self.kwargs = kwargs
- self.stop_now = False
-
- def _callback(val):
- if self.stop_now:
- raise ValueError
- self.q.put(val)
-
- def gentask():
- try:
- ret = self.mfunc(callback=_callback, **self.kwargs)
- except ValueError:
- pass
- clear_torch_cache()
- self.q.put(self.sentinel)
- if self.c_callback:
- self.c_callback(ret)
-
- self.thread = Thread(target=gentask)
- self.thread.start()
-
- def __iter__(self):
- return self
-
- def __next__(self):
- obj = self.q.get(True,None)
- if obj is self.sentinel:
- raise StopIteration
- else:
- return obj
-
- def __del__(self):
- clear_torch_cache()
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- self.stop_now = True
- clear_torch_cache()
-
-def clear_torch_cache():
- gc.collect()
- if not shared.args.cpu:
- torch.cuda.empty_cache()
diff --git a/spaces/NATSpeech/PortaSpeech/tasks/run.py b/spaces/NATSpeech/PortaSpeech/tasks/run.py
deleted file mode 100644
index ef2b0a319cb5cd7baf87e5224ab545412715fb69..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/tasks/run.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import os
-
-os.environ["OMP_NUM_THREADS"] = "1"
-
-from utils.commons.hparams import hparams, set_hparams
-import importlib
-
-
-def run_task():
- assert hparams['task_cls'] != ''
- pkg = ".".join(hparams["task_cls"].split(".")[:-1])
- cls_name = hparams["task_cls"].split(".")[-1]
- task_cls = getattr(importlib.import_module(pkg), cls_name)
- task_cls.start()
-
-
-if __name__ == '__main__':
- set_hparams()
- run_task()
diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/nhnet/models.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/nhnet/models.py
deleted file mode 100644
index d6f70e7f36d8a30ed869c1ca135ef3262fd2150e..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/nlp/nhnet/models.py
+++ /dev/null
@@ -1,590 +0,0 @@
-# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""tf.keras Models for NHNet."""
-
-from __future__ import absolute_import
-from __future__ import division
-# from __future__ import google_type_annotations
-from __future__ import print_function
-
-from absl import logging
-import gin
-import tensorflow as tf
-from typing import Optional, Text
-
-from official.modeling import tf_utils
-from official.modeling.hyperparams import params_dict
-from official.nlp.modeling import networks
-from official.nlp.modeling.layers import multi_channel_attention
-from official.nlp.nhnet import configs
-from official.nlp.nhnet import decoder
-from official.nlp.nhnet import utils
-from official.nlp.transformer import beam_search
-
-
-def embedding_linear(embedding_matrix, x):
- """Uses embeddings as linear transformation weights."""
- with tf.name_scope("presoftmax_linear"):
- batch_size = tf.shape(x)[0]
- length = tf.shape(x)[1]
- hidden_size = tf.shape(x)[2]
- vocab_size = tf.shape(embedding_matrix)[0]
-
- x = tf.reshape(x, [-1, hidden_size])
- logits = tf.matmul(x, embedding_matrix, transpose_b=True)
-
- return tf.reshape(logits, [batch_size, length, vocab_size])
-
-
-def _add_sos_to_seq(seq, start_token_id):
- """Add a start sequence token while keeping seq length."""
- batch_size = tf.shape(seq)[0]
- seq_len = tf.shape(seq)[1]
- sos_ids = tf.ones([batch_size], tf.int32) * start_token_id
- targets = tf.concat([tf.expand_dims(sos_ids, axis=1), seq], axis=1)
- targets = targets[:, :-1]
- tf.assert_equal(tf.shape(targets), (batch_size, seq_len))
- return targets
-
-
-def remove_sos_from_seq(seq, pad_token_id):
- """Remove the start sequence token while keeping seq length."""
- batch_size, seq_len = tf_utils.get_shape_list(seq, expected_rank=2)
- # remove
- targets = seq[:, 1:]
- # pad
- pad_ids = tf.ones([batch_size], tf.int32) * pad_token_id
- targets = tf.concat([targets, tf.expand_dims(pad_ids, axis=1)], axis=1)
- tf.assert_equal(tf.shape(targets), (batch_size, seq_len))
- return targets
-
-
-class Bert2Bert(tf.keras.Model):
- """Bert2Bert encoder decoder model for training."""
-
- def __init__(self, params, bert_layer, decoder_layer, name=None):
- super(Bert2Bert, self).__init__(name=name)
- self.params = params
- if not bert_layer.built:
- raise ValueError("bert_layer should be built.")
- if not decoder_layer.built:
- raise ValueError("decoder_layer should be built.")
- self.bert_layer = bert_layer
- self.decoder_layer = decoder_layer
-
- def get_config(self):
- return {"params": self.params.as_dict()}
-
- def get_decode_logits(self,
- decoder_inputs,
- ids,
- decoder_self_attention_bias,
- step,
- cache=None):
- if cache:
- if self.params.get("padded_decode", False):
- bias_shape = decoder_self_attention_bias.shape.as_list()
- self_attention_bias = tf.slice(
- decoder_self_attention_bias, [0, 0, step, 0],
- [bias_shape[0], bias_shape[1], 1, bias_shape[3]])
- else:
- self_attention_bias = decoder_self_attention_bias[:, :, step:step +
- 1, :step + 1]
- # Sets decoder input to the last generated IDs.
- decoder_input = ids[:, -1:]
- else:
- self_attention_bias = decoder_self_attention_bias[:, :, :step + 1, :step +
- 1]
- decoder_input = ids
- decoder_inputs["target_ids"] = decoder_input
- decoder_inputs["self_attention_bias"] = self_attention_bias
- if cache:
- decoder_outputs = self.decoder_layer(
- decoder_inputs,
- cache,
- decode_loop_step=step,
- padded_decode=self.params.get("padded_decode", False))
- else:
- decoder_outputs = self.decoder_layer(decoder_inputs)
- logits = embedding_linear(self.decoder_layer.embedding_lookup.embeddings,
- decoder_outputs[:, -1:, :])
- logits = tf.squeeze(logits, axis=[1])
- return logits
-
- def _get_symbols_to_logits_fn(self, max_decode_length):
- """Returns a decoding function that calculates logits of the next tokens."""
- # Max decode length should be smaller than the positional embedding max
- # sequence length.
- decoder_self_attention_bias = decoder.get_attention_bias(
- input_tensor=None,
- bias_type="decoder_self",
- max_length=max_decode_length)
-
- def _symbols_to_logits_fn(ids, i, cache):
- """Generate logits for next candidate IDs.
-
- Args:
- ids: Current decoded sequences. int tensor with shape [batch_size *
- beam_size, i + 1]
- i: Loop index
- cache: dictionary of values storing the encoder output, encoder-decoder
- attention bias, and previous decoder attention values.
-
- Returns:
- Tuple of
- (logits with shape [batch_size * beam_size, vocab_size],
- updated cache values)
- """
- decoder_inputs = dict(
- all_encoder_outputs=cache["all_encoder_outputs"],
- attention_bias=cache["attention_bias"])
- logits = self.get_decode_logits(
- decoder_inputs,
- ids,
- decoder_self_attention_bias,
- step=i,
- cache=cache if self.params.use_cache else None)
- return logits, cache
-
- return _symbols_to_logits_fn
-
- def train_decode(self, decode_outputs):
- logits = embedding_linear(self.decoder_layer.embedding_lookup.embeddings,
- decode_outputs)
- decode_output_ids = tf.cast(tf.argmax(logits, axis=-1), tf.int32)
- output_log_probs = tf.nn.log_softmax(logits, axis=-1)
- return logits, decode_output_ids, output_log_probs
-
- def predict_decode(self, start_token_ids, cache):
- symbols_to_logits_fn = self._get_symbols_to_logits_fn(self.params.len_title)
- # Use beam search to find the top beam_size sequences and scores.
- decoded_ids, scores = beam_search.sequence_beam_search(
- symbols_to_logits_fn=symbols_to_logits_fn,
- initial_ids=start_token_ids,
- initial_cache=cache,
- vocab_size=self.params.vocab_size,
- beam_size=self.params.beam_size,
- alpha=self.params.alpha,
- max_decode_length=self.params.len_title,
- padded_decode=self.params.get("padded_decode", False),
- eos_id=self.params.end_token_id)
- return decoded_ids, scores
-
- def _get_logits_for_decode_ids(self, decoder_inputs, top_decoded_ids):
- """Returns the log probabilities for ids."""
- target_ids = _add_sos_to_seq(top_decoded_ids, self.params.start_token_id)
- decoder_inputs["self_attention_bias"] = decoder.get_attention_bias(
- target_ids, bias_type="decoder_self")
- decoder_inputs["target_ids"] = target_ids
- decoder_outputs = self.decoder_layer(decoder_inputs)
- logits = embedding_linear(self.decoder_layer.embedding_lookup.embeddings,
- decoder_outputs)
- return logits
-
- def _init_cache(self, batch_size):
- num_heads = self.params.num_decoder_attn_heads
- dim_per_head = self.params.hidden_size // num_heads
- init_decode_length = (
- self.params.len_title if self.params.get("padded_decode", False) else 0)
- cache = {}
- for layer in range(self.params.num_decoder_layers):
- cache[str(layer)] = {
- "key":
- tf.zeros(
- [batch_size, init_decode_length, num_heads, dim_per_head],
- dtype=tf.float32),
- "value":
- tf.zeros(
- [batch_size, init_decode_length, num_heads, dim_per_head],
- dtype=tf.float32)
- }
- return cache
-
- def call(self, inputs, mode="train"):
- """Implements call().
-
- Args:
- inputs: a dictionary of tensors.
- mode: string, an enum for mode, train/eval.
-
- Returns:
- logits, decode_output_ids, output_log_probs for training. top_decoded_ids
- for eval.
- """
- input_ids = inputs["input_ids"]
- input_mask = inputs["input_mask"]
- segment_ids = inputs["segment_ids"]
- all_encoder_outputs, _ = self.bert_layer(
- [input_ids, input_mask, segment_ids])
-
- if mode not in ("train", "eval", "predict"):
- raise ValueError("Invalid call mode: %s" % mode)
- encoder_decoder_attention_bias = decoder.get_attention_bias(
- input_ids,
- bias_type="single_cross",
- padding_value=self.params.pad_token_id)
- if mode == "train":
- self_attention_bias = decoder.get_attention_bias(
- inputs["target_ids"], bias_type="decoder_self")
- decoder_inputs = dict(
- attention_bias=encoder_decoder_attention_bias,
- all_encoder_outputs=all_encoder_outputs,
- target_ids=inputs["target_ids"],
- self_attention_bias=self_attention_bias)
- decoder_outputs = self.decoder_layer(decoder_inputs)
- return self.train_decode(decoder_outputs)
-
- batch_size = tf.shape(input_ids)[0]
- start_token_ids = tf.ones([batch_size],
- tf.int32) * self.params.start_token_id
- # Add encoder output and attention bias to the cache.
- if self.params.use_cache:
- cache = self._init_cache(batch_size)
- else:
- cache = {}
- cache["all_encoder_outputs"] = all_encoder_outputs
- cache["attention_bias"] = encoder_decoder_attention_bias
- decoded_ids, scores = self.predict_decode(start_token_ids, cache)
- if mode == "predict":
- return decoded_ids[:, :self.params.beam_size,
- 1:], scores[:, :self.params.beam_size]
-
- decoder_inputs = dict(
- attention_bias=encoder_decoder_attention_bias,
- all_encoder_outputs=all_encoder_outputs)
- top_decoded_ids = decoded_ids[:, 0, 1:]
- return self._get_logits_for_decode_ids(decoder_inputs, top_decoded_ids)
-
-
-class NHNet(Bert2Bert):
- """NHNet model which performs multi-doc decoding."""
-
- def __init__(self, params, bert_layer, decoder_layer, name=None):
- super(NHNet, self).__init__(params, bert_layer, decoder_layer, name=name)
- self.doc_attention = multi_channel_attention.VotingAttention(
- num_heads=params.num_decoder_attn_heads,
- head_size=params.hidden_size // params.num_decoder_attn_heads)
-
- def _expand_doc_attention_probs(self, doc_attention_probs, target_length):
- """Expands doc attention probs to fit the decoding sequence length."""
- doc_attention_probs = tf.expand_dims(
- doc_attention_probs, axis=[1]) # [B, 1, A]
- doc_attention_probs = tf.expand_dims(
- doc_attention_probs, axis=[2]) # [B, 1, 1, A]
- return tf.tile(doc_attention_probs,
- [1, self.params.num_decoder_attn_heads, target_length, 1])
-
- def _get_symbols_to_logits_fn(self, max_decode_length):
- """Returns a decoding function that calculates logits of the next tokens."""
- # Max decode length should be smaller than the positional embedding max
- # sequence length.
- decoder_self_attention_bias = decoder.get_attention_bias(
- input_tensor=None,
- bias_type="decoder_self",
- max_length=max_decode_length)
-
- def _symbols_to_logits_fn(ids, i, cache):
- """Generate logits for next candidate IDs."""
- if self.params.use_cache:
- target_length = 1
- else:
- target_length = i + 1
- decoder_inputs = dict(
- doc_attention_probs=self._expand_doc_attention_probs(
- cache["doc_attention_probs"], target_length),
- all_encoder_outputs=cache["all_encoder_outputs"],
- attention_bias=cache["attention_bias"])
- logits = self.get_decode_logits(
- decoder_inputs,
- ids,
- decoder_self_attention_bias,
- step=i,
- cache=cache if self.params.use_cache else None)
- return logits, cache
-
- return _symbols_to_logits_fn
-
- def call(self, inputs, mode="training"):
- input_shape = tf_utils.get_shape_list(inputs["input_ids"], expected_rank=3)
- batch_size, num_docs, len_passage = (input_shape[0], input_shape[1],
- input_shape[2])
- input_ids = tf.reshape(inputs["input_ids"], [-1, len_passage])
- input_mask = tf.reshape(inputs["input_mask"], [-1, len_passage])
- segment_ids = tf.reshape(inputs["segment_ids"], [-1, len_passage])
- all_encoder_outputs, _ = self.bert_layer(
- [input_ids, input_mask, segment_ids])
- encoder_outputs = tf.reshape(
- all_encoder_outputs[-1],
- [batch_size, num_docs, len_passage, self.params.hidden_size])
- doc_attention_mask = tf.reshape(
- tf.cast(
- tf.math.count_nonzero(input_mask, axis=1, dtype=tf.int32) > 2,
- tf.int32), [batch_size, num_docs])
-
- doc_attention_probs = self.doc_attention(encoder_outputs,
- doc_attention_mask)
- encoder_decoder_attention_bias = decoder.get_attention_bias(
- inputs["input_ids"],
- bias_type="multi_cross",
- padding_value=self.params.pad_token_id)
-
- if mode == "train":
- target_length = tf_utils.get_shape_list(
- inputs["target_ids"], expected_rank=2)[1]
- doc_attention_probs = self._expand_doc_attention_probs(
- doc_attention_probs, target_length)
- self_attention_bias = decoder.get_attention_bias(
- inputs["target_ids"], bias_type="decoder_self")
- decoder_inputs = dict(
- attention_bias=encoder_decoder_attention_bias,
- self_attention_bias=self_attention_bias,
- target_ids=inputs["target_ids"],
- all_encoder_outputs=encoder_outputs,
- doc_attention_probs=doc_attention_probs)
- decoder_outputs = self.decoder_layer(decoder_inputs)
- return self.train_decode(decoder_outputs)
-
- # Adds encoder output and attention bias to the cache.
- if self.params.use_cache:
- cache = self._init_cache(batch_size)
- else:
- cache = {}
- cache["all_encoder_outputs"] = [encoder_outputs]
- cache["attention_bias"] = encoder_decoder_attention_bias
- cache["doc_attention_probs"] = doc_attention_probs
-
- start_token_ids = tf.ones([batch_size],
- tf.int32) * self.params.start_token_id
- decoded_ids, scores = self.predict_decode(start_token_ids, cache)
- if mode == "predict":
- return decoded_ids[:, :self.params.beam_size,
- 1:], scores[:, :self.params.beam_size]
-
- top_decoded_ids = decoded_ids[:, 0, 1:]
- target_length = tf_utils.get_shape_list(top_decoded_ids)[-1]
- decoder_inputs = dict(
- attention_bias=encoder_decoder_attention_bias,
- all_encoder_outputs=[encoder_outputs],
- doc_attention_probs=self._expand_doc_attention_probs(
- doc_attention_probs, target_length))
- return self._get_logits_for_decode_ids(decoder_inputs, top_decoded_ids)
-
-
-def get_bert2bert_layers(params: configs.BERT2BERTConfig):
- """Creates a Bert2Bert stem model and returns Bert encoder/decoder.
-
- We use funtional-style to create stem model because we need to make all layers
- built to restore variables in a customized way. The layers are called with
- placeholder inputs to make them fully built.
-
- Args:
- params: ParamsDict.
-
- Returns:
- two keras Layers, bert_model_layer and decoder_layer
- """
- input_ids = tf.keras.layers.Input(
- shape=(None,), name="input_ids", dtype=tf.int32)
- input_mask = tf.keras.layers.Input(
- shape=(None,), name="input_mask", dtype=tf.int32)
- segment_ids = tf.keras.layers.Input(
- shape=(None,), name="segment_ids", dtype=tf.int32)
- target_ids = tf.keras.layers.Input(
- shape=(None,), name="target_ids", dtype=tf.int32)
- bert_config = utils.get_bert_config_from_params(params)
- bert_model_layer = networks.TransformerEncoder(
- vocab_size=bert_config.vocab_size,
- hidden_size=bert_config.hidden_size,
- num_layers=bert_config.num_hidden_layers,
- num_attention_heads=bert_config.num_attention_heads,
- intermediate_size=bert_config.intermediate_size,
- activation=tf_utils.get_activation(bert_config.hidden_act),
- dropout_rate=bert_config.hidden_dropout_prob,
- attention_dropout_rate=bert_config.attention_probs_dropout_prob,
- sequence_length=None,
- max_sequence_length=bert_config.max_position_embeddings,
- type_vocab_size=bert_config.type_vocab_size,
- initializer=tf.keras.initializers.TruncatedNormal(
- stddev=bert_config.initializer_range),
- return_all_encoder_outputs=True,
- name="bert_encoder")
- all_encoder_outputs, _ = bert_model_layer(
- [input_ids, input_mask, segment_ids])
- # pylint: disable=protected-access
- decoder_layer = decoder.Decoder(params, bert_model_layer._embedding_layer)
- # pylint: enable=protected-access
- cross_attention_bias = decoder.AttentionBias(bias_type="single_cross")(
- input_ids)
- self_attention_bias = decoder.AttentionBias(bias_type="decoder_self")(
- target_ids)
- decoder_inputs = dict(
- attention_bias=cross_attention_bias,
- self_attention_bias=self_attention_bias,
- target_ids=target_ids,
- all_encoder_outputs=all_encoder_outputs)
- _ = decoder_layer(decoder_inputs)
-
- return bert_model_layer, decoder_layer
-
-
-def get_nhnet_layers(params: configs.NHNetConfig):
- """Creates a Mult-doc encoder/decoder.
-
- Args:
- params: ParamsDict.
-
- Returns:
- two keras Layers, bert_model_layer and decoder_layer
- """
- input_ids = tf.keras.layers.Input(
- shape=(None,), name="input_ids", dtype=tf.int32)
- input_mask = tf.keras.layers.Input(
- shape=(None,), name="input_mask", dtype=tf.int32)
- segment_ids = tf.keras.layers.Input(
- shape=(None,), name="segment_ids", dtype=tf.int32)
- bert_config = utils.get_bert_config_from_params(params)
- bert_model_layer = networks.TransformerEncoder(
- vocab_size=bert_config.vocab_size,
- hidden_size=bert_config.hidden_size,
- num_layers=bert_config.num_hidden_layers,
- num_attention_heads=bert_config.num_attention_heads,
- intermediate_size=bert_config.intermediate_size,
- activation=tf_utils.get_activation(bert_config.hidden_act),
- dropout_rate=bert_config.hidden_dropout_prob,
- attention_dropout_rate=bert_config.attention_probs_dropout_prob,
- sequence_length=None,
- max_sequence_length=bert_config.max_position_embeddings,
- type_vocab_size=bert_config.type_vocab_size,
- initializer=tf.keras.initializers.TruncatedNormal(
- stddev=bert_config.initializer_range),
- return_all_encoder_outputs=True,
- name="bert_encoder")
- bert_model_layer([input_ids, input_mask, segment_ids])
-
- input_ids = tf.keras.layers.Input(
- shape=(None, None), name="input_ids", dtype=tf.int32)
- all_encoder_outputs = tf.keras.layers.Input((None, None, params.hidden_size),
- dtype=tf.float32)
- target_ids = tf.keras.layers.Input(
- shape=(None,), name="target_ids", dtype=tf.int32)
- doc_attention_probs = tf.keras.layers.Input(
- (params.num_decoder_attn_heads, None, None), dtype=tf.float32)
- # pylint: disable=protected-access
- decoder_layer = decoder.Decoder(params, bert_model_layer._embedding_layer)
- # pylint: enable=protected-access
- cross_attention_bias = decoder.AttentionBias(bias_type="multi_cross")(
- input_ids)
- self_attention_bias = decoder.AttentionBias(bias_type="decoder_self")(
- target_ids)
- decoder_inputs = dict(
- attention_bias=cross_attention_bias,
- self_attention_bias=self_attention_bias,
- target_ids=target_ids,
- all_encoder_outputs=all_encoder_outputs,
- doc_attention_probs=doc_attention_probs)
- _ = decoder_layer(decoder_inputs)
-
- return bert_model_layer, decoder_layer
-
-
-def create_transformer_model(params,
- init_checkpoint: Optional[Text] = None
- ) -> tf.keras.Model:
- """A helper to create Transformer model."""
- bert_layer, decoder_layer = get_bert2bert_layers(params=params)
- model = Bert2Bert(
- params=params,
- bert_layer=bert_layer,
- decoder_layer=decoder_layer,
- name="transformer")
-
- if init_checkpoint:
- logging.info(
- "Checkpoint file %s found and restoring from "
- "initial checkpoint.", init_checkpoint)
- ckpt = tf.train.Checkpoint(model=model)
- ckpt.restore(init_checkpoint).expect_partial()
-
- return model
-
-
-def create_bert2bert_model(
- params: configs.BERT2BERTConfig,
- cls=Bert2Bert,
- init_checkpoint: Optional[Text] = None) -> tf.keras.Model:
- """A helper to create Bert2Bert model."""
- bert_layer, decoder_layer = get_bert2bert_layers(params=params)
- if init_checkpoint:
- utils.initialize_bert2bert_from_pretrained_bert(bert_layer, decoder_layer,
- init_checkpoint)
- return cls(
- params=params,
- bert_layer=bert_layer,
- decoder_layer=decoder_layer,
- name="bert2bert")
-
-
-def create_nhnet_model(
- params: configs.NHNetConfig,
- cls=NHNet,
- init_checkpoint: Optional[Text] = None) -> tf.keras.Model:
- """A helper to create NHNet model."""
- bert_layer, decoder_layer = get_nhnet_layers(params=params)
- model = cls(
- params=params,
- bert_layer=bert_layer,
- decoder_layer=decoder_layer,
- name="nhnet")
- if init_checkpoint:
- logging.info(
- "Checkpoint file %s found and restoring from "
- "initial checkpoint.", init_checkpoint)
- if params.init_from_bert2bert:
- ckpt = tf.train.Checkpoint(model=model)
- ckpt.restore(init_checkpoint).assert_existing_objects_matched()
- else:
- utils.initialize_bert2bert_from_pretrained_bert(bert_layer, decoder_layer,
- init_checkpoint)
- return model
-
-
-@gin.configurable
-def get_model_params(model: Optional[Text] = "bert2bert",
- config_class=None) -> params_dict.ParamsDict:
- """Helper function to convert config file to ParamsDict."""
- if model == "bert2bert":
- return configs.BERT2BERTConfig()
- elif model == "nhnet":
- return configs.NHNetConfig()
- elif config_class:
- return config_class()
- else:
- raise KeyError("The model type is not defined: %s" % model)
-
-
-@gin.configurable
-def create_model(model_type: Text,
- params,
- init_checkpoint: Optional[Text] = None):
- """A factory function to create different types of models."""
- if model_type == "bert2bert":
- return create_bert2bert_model(params, init_checkpoint=init_checkpoint)
- elif model_type == "nhnet":
- return create_nhnet_model(params, init_checkpoint=init_checkpoint)
- elif "transformer" in model_type:
- return create_transformer_model(
- params, init_checkpoint=init_checkpoint)
- else:
- raise KeyError("The model type is not defined: %s" % model_type)
diff --git a/spaces/Neovega/ogkalu-Comic-Diffusion/app.py b/spaces/Neovega/ogkalu-Comic-Diffusion/app.py
deleted file mode 100644
index dacd1f283828ac4113a91b1d67e6009ba63762dc..0000000000000000000000000000000000000000
--- a/spaces/Neovega/ogkalu-Comic-Diffusion/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/ogkalu/Comic-Diffusion").launch()
\ No newline at end of file
diff --git a/spaces/Newtral/toxic-tweets-in-spanish-politics/app.py b/spaces/Newtral/toxic-tweets-in-spanish-politics/app.py
deleted file mode 100644
index 88702a9b0ec0969e7823ce7d29f18e178db9be81..0000000000000000000000000000000000000000
--- a/spaces/Newtral/toxic-tweets-in-spanish-politics/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-
-pipe = pipeline("text-classification", model="Newtral/xlm-r-finetuned-toxic-political-tweets-es")
-
-def predict(text):
- prediction = pipe(text, return_all_scores=True)[0]
- return {"Toxic": prediction[0]["score"], "Very Toxic": prediction[1]["score"]}
-
-interface = gr.Interface(predict, gr.inputs.Textbox(placeholder="Paste a tweet here", label="Tweet text", lines=5),
- gr.outputs.Label(num_top_classes=2, label="This tweet is..."),
- capture_session=True, interpretation=None,
- title="Is your favorite Spanish politician toxic on Twitter? Test it here!",
- description="Paste a tweet text from a Spanish politician in the textbox below. We will predict its degree of toxicity in 2 scales of severity. Made with <3 by Newtral-Tech.",
- theme="darkgrass")
-
-interface.launch()
-
-
-
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/constrained_decoding/normalize.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/constrained_decoding/normalize.py
deleted file mode 100644
index 4ae2b5111ba025acb9e1613865c92fdc339a58d5..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/constrained_decoding/normalize.py
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env python3
-#
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-
-from sacremoses.normalize import MosesPunctNormalizer
-
-
-def main(args):
- normalizer = MosesPunctNormalizer(lang=args.lang, penn=args.penn)
- for line in sys.stdin:
- print(normalizer.normalize(line.rstrip()), flush=True)
-
-
-if __name__ == "__main__":
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("--lang", "-l", default="en")
- parser.add_argument("--penn", "-p", action="store_true")
- args = parser.parse_args()
-
- main(args)
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/distributed/distributed_timeout_wrapper.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/distributed/distributed_timeout_wrapper.py
deleted file mode 100644
index 18107ef27ea837b8c72dcaa49db18fd8e64267b1..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/distributed/distributed_timeout_wrapper.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import signal
-import threading
-
-from torch import nn
-
-
-logger = logging.getLogger(__name__)
-
-
-class DistributedTimeoutWrapper(nn.Module):
- """
- A wrapper that kills the process if no progress is made within a given
- *timeout*. The timer is reset every time :func:`forward` is called.
-
- Usage::
-
- module = DistributedTimeoutWrapper(module, timeout=30)
- x = module(input)
- time.sleep(20) # safe
- x = module(input)
- time.sleep(45) # job will be killed before this returns
-
- Args:
- module (nn.Module): module to wrap
- timeout (int): number of seconds before killing the process
- (set to a value <= 0 to disable the timeout)
- signal (Optional): signal to send once timeout is triggered
- """
- def __init__(self, module: nn.Module, timeout: int, signal=signal.SIGINT):
- super().__init__()
- self.module = module
- self.timeout = timeout
- self.signal = signal
-
- if timeout > 0:
- self._heartbeat = threading.Event()
- self._heartbeat_thread = threading.Thread(
- target=self._check_heartbeat,
- args=(os.getpid(),),
- daemon=True,
- )
- self._heartbeat_thread.start()
- self._terminated = False
- else:
- self._heartbeat = None
- self._heartbeat_thread = None
-
- def __del__(self):
- self.stop_timeout()
-
- def __getattr__(self, name):
- """Forward missing attributes to wrapped module."""
- try:
- return super().__getattr__(name) # defer to nn.Module's logic
- except AttributeError:
- return getattr(self.module, name)
-
- def stop_timeout(self):
- if self._heartbeat_thread is not None:
- self._terminated = True
- self._heartbeat_thread.join()
-
- def state_dict(self, *args, **kwargs):
- return self.module.state_dict(*args, **kwargs)
-
- def load_state_dict(self, *args, **kwargs):
- return self.module.load_state_dict(*args, **kwargs)
-
- def forward(self, *args, **kwargs):
- if self._heartbeat is not None:
- self._heartbeat.set()
- return self.module(*args, **kwargs)
-
- def _check_heartbeat(self, parent_pid):
- self._heartbeat.wait() # wait for the first forward pass
- while True:
- self._heartbeat.clear()
- success = self._heartbeat.wait(timeout=self.timeout)
- if self._terminated:
- break
- elif not success:
- logger.error((
- "Killing job for not making progress in {} seconds. "
- "Set --heartbeat-timeout=-1 to disable this timeout."
- ).format(int(self.timeout)))
- os.kill(parent_pid, self.signal)
- return
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/shorten_dataset.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/shorten_dataset.py
deleted file mode 100644
index 6ebb5d88feb3f29d1512a0873df304915d051209..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/shorten_dataset.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-from fairseq.data import data_utils
-
-from . import BaseWrapperDataset
-
-
-class TruncateDataset(BaseWrapperDataset):
- """Truncate a sequence by returning the first truncation_length tokens"""
-
- def __init__(self, dataset, truncation_length):
- super().__init__(dataset)
- assert truncation_length is not None
- self.truncation_length = truncation_length
- self.dataset = dataset
-
- def __getitem__(self, index):
- item = self.dataset[index]
- item_len = item.size(0)
- if item_len > self.truncation_length:
- item = item[: self.truncation_length]
- return item
-
- @property
- def sizes(self):
- return np.minimum(self.dataset.sizes, self.truncation_length)
-
- def __len__(self):
- return len(self.dataset)
-
-
-class RandomCropDataset(TruncateDataset):
- """Truncate a sequence by returning a random crop of truncation_length tokens"""
-
- def __init__(self, dataset, truncation_length, seed=1):
- super().__init__(dataset, truncation_length)
- self.seed = seed
- self.epoch = 0
-
- @property
- def can_reuse_epoch_itr_across_epochs(self):
- return True # only the crop changes, not item sizes
-
- def set_epoch(self, epoch, **unused):
- super().set_epoch(epoch)
- self.epoch = epoch
-
- def __getitem__(self, index):
- with data_utils.numpy_seed(self.seed, self.epoch, index):
- item = self.dataset[index]
- item_len = item.size(0)
- excess = item_len - self.truncation_length
- if excess > 0:
- start_idx = np.random.randint(0, excess)
- item = item[start_idx : start_idx + self.truncation_length]
- return item
-
-
-def maybe_shorten_dataset(
- dataset,
- split,
- shorten_data_split_list,
- shorten_method,
- tokens_per_sample,
- seed,
-):
- truncate_split = (
- split in shorten_data_split_list.split(",") or len(shorten_data_split_list) == 0
- )
- if shorten_method == "truncate" and truncate_split:
- dataset = TruncateDataset(dataset, tokens_per_sample)
- elif shorten_method == "random_crop" and truncate_split:
- dataset = RandomCropDataset(dataset, tokens_per_sample, seed)
- return dataset
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/spm_train.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/spm_train.py
deleted file mode 100644
index 9db668fd4166a860198784990de68ea26157995d..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/spm_train.py
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import sys
-
-import sentencepiece as spm
-
-
-if __name__ == "__main__":
- spm.SentencePieceTrainer.Train(" ".join(sys.argv[1:]))
diff --git a/spaces/OFA-Sys/OFA-vqa/run_scripts/vqa/evaluate_vqa.sh b/spaces/OFA-Sys/OFA-vqa/run_scripts/vqa/evaluate_vqa.sh
deleted file mode 100644
index 81cd8da711907ac77f4acc2c9743a2b5658e66de..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/run_scripts/vqa/evaluate_vqa.sh
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/usr/bin/env bash
-
-user_dir=../../ofa_module
-bpe_dir=../../utils/BPE
-
-# val or test
-split=$1
-
-data=../../dataset/vqa_data/vqa_${split}.tsv
-ans2label_file=../../dataset/vqa_data/trainval_ans2label.pkl
-path=../../checkpoints/vqa_large_best_clean.pt
-result_path=../../results/vqa
-selected_cols=0,5,2,3,4
-
-CUDA_VISIBLE_DEVICES=0,1,2,3 python3 ../../evaluate.py \
- ${data} \
- --path=${path} \
- --user-dir=${user_dir} \
- --task=vqa_gen \
- --batch-size=8 \
- --log-format=simple --log-interval=10 \
- --seed=7 \
- --gen-subset=${split} \
- --results-path=${result_path} \
- --fp16 \
- --ema-eval \
- --num-workers=0 \
- --model-overrides="{\"data\":\"${data}\",\"bpe_dir\":\"${bpe_dir}\",\"selected_cols\":\"${selected_cols}\",\"ans2label_file\":\"${ans2label_file}\"}"
\ No newline at end of file
diff --git a/spaces/ORI-Muchim/BlueArchiveTTS/monotonic_align/__init__.py b/spaces/ORI-Muchim/BlueArchiveTTS/monotonic_align/__init__.py
deleted file mode 100644
index 40b6f64aa116c74cac2f6a33444c9eeea2fdb38c..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/BlueArchiveTTS/monotonic_align/__init__.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from numpy import zeros, int32, float32
-from torch import from_numpy
-
-from .core import maximum_path_jit
-
-
-def maximum_path(neg_cent, mask):
- """ numba optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(float32)
- path = zeros(neg_cent.shape, dtype=int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32)
- maximum_path_jit(path, neg_cent, t_t_max, t_s_max)
- return from_numpy(path).to(device=device, dtype=dtype)
-
diff --git a/spaces/OptimalScale/Robin-7b/lmflow/__init__.py b/spaces/OptimalScale/Robin-7b/lmflow/__init__.py
deleted file mode 100644
index 529b2232f82b947e7e85e3e00839c1479db40e28..0000000000000000000000000000000000000000
--- a/spaces/OptimalScale/Robin-7b/lmflow/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from .version import __version__ as internal_version
-
-__version__ = internal_version
-
-from transformers.utils import check_min_version
-from transformers.utils.versions import require_version
-
-from lmflow import args, datasets, models, pipeline, utils
-
-# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.27.0.dev0")
-
-require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")
\ No newline at end of file
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/i18n.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/i18n.go
deleted file mode 100644
index 3dcbc17d644513474e17ef934b2fea72b39da5d6..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/i18n.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/safe.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/safe.go
deleted file mode 100644
index 2cf46ff782aae3d5d90ec26a28ebc694b4d80506..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/safe.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/effects.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/effects.go
deleted file mode 100644
index 5ff0221c3dc8c17b14929c2d02bf8735dc819189..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/effects.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/titling.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/titling.go
deleted file mode 100644
index 23d03990836abdf947de41b1ae869dc227526b22..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/titling.go and /dev/null differ
diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/agent/agent_manager.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/agent/agent_manager.py
deleted file mode 100644
index 898767a485e50b5e62625a7883edf1b30d5fddf9..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/AutoGPT/autogpt/agent/agent_manager.py
+++ /dev/null
@@ -1,103 +0,0 @@
-"""Agent manager for managing GPT agents"""
-from __future__ import annotations
-
-from typing import Union
-
-from autogpt.config.config import Singleton
-from autogpt.llm_utils import create_chat_completion
-
-
-class AgentManager(metaclass=Singleton):
- """Agent manager for managing GPT agents"""
-
- def __init__(self):
- self.next_key = 0
- self.agents = {} # key, (task, full_message_history, model)
-
- # Create new GPT agent
- # TODO: Centralise use of create_chat_completion() to globally enforce token limit
-
- def create_agent(self, task: str, prompt: str, model: str) -> tuple[int, str]:
- """Create a new agent and return its key
-
- Args:
- task: The task to perform
- prompt: The prompt to use
- model: The model to use
-
- Returns:
- The key of the new agent
- """
- messages = [
- {"role": "user", "content": prompt},
- ]
-
- # Start GPT instance
- agent_reply = create_chat_completion(
- model=model,
- messages=messages,
- )
-
- # Update full message history
- messages.append({"role": "assistant", "content": agent_reply})
-
- key = self.next_key
- # This is done instead of len(agents) to make keys unique even if agents
- # are deleted
- self.next_key += 1
-
- self.agents[key] = (task, messages, model)
-
- return key, agent_reply
-
- def message_agent(self, key: str | int, message: str) -> str:
- """Send a message to an agent and return its response
-
- Args:
- key: The key of the agent to message
- message: The message to send to the agent
-
- Returns:
- The agent's response
- """
- task, messages, model = self.agents[int(key)]
-
- # Add user message to message history before sending to agent
- messages.append({"role": "user", "content": message})
-
- # Start GPT instance
- agent_reply = create_chat_completion(
- model=model,
- messages=messages,
- )
-
- # Update full message history
- messages.append({"role": "assistant", "content": agent_reply})
-
- return agent_reply
-
- def list_agents(self) -> list[tuple[str | int, str]]:
- """Return a list of all agents
-
- Returns:
- A list of tuples of the form (key, task)
- """
-
- # Return a list of agent keys and their tasks
- return [(key, task) for key, (task, _, _) in self.agents.items()]
-
- def delete_agent(self, key: Union[str, int]) -> bool:
- """Delete an agent from the agent manager
-
- Args:
- key: The key of the agent to delete
-
- Returns:
- True if successful, False otherwise
- """
-
- try:
- del self.agents[int(key)]
- return True
- except KeyError:
- return False
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py
deleted file mode 100644
index 98392ac04c4c44a7f4e7b1c0808266875877dd1f..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py
+++ /dev/null
@@ -1,298 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from annotator.uniformer.mmseg.core import add_prefix
-from annotator.uniformer.mmseg.ops import resize
-from .. import builder
-from ..builder import SEGMENTORS
-from .base import BaseSegmentor
-
-
-@SEGMENTORS.register_module()
-class EncoderDecoder(BaseSegmentor):
- """Encoder Decoder segmentors.
-
- EncoderDecoder typically consists of backbone, decode_head, auxiliary_head.
- Note that auxiliary_head is only used for deep supervision during training,
- which could be dumped during inference.
- """
-
- def __init__(self,
- backbone,
- decode_head,
- neck=None,
- auxiliary_head=None,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(EncoderDecoder, self).__init__()
- self.backbone = builder.build_backbone(backbone)
- if neck is not None:
- self.neck = builder.build_neck(neck)
- self._init_decode_head(decode_head)
- self._init_auxiliary_head(auxiliary_head)
-
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
-
- self.init_weights(pretrained=pretrained)
-
- assert self.with_decode_head
-
- def _init_decode_head(self, decode_head):
- """Initialize ``decode_head``"""
- self.decode_head = builder.build_head(decode_head)
- self.align_corners = self.decode_head.align_corners
- self.num_classes = self.decode_head.num_classes
-
- def _init_auxiliary_head(self, auxiliary_head):
- """Initialize ``auxiliary_head``"""
- if auxiliary_head is not None:
- if isinstance(auxiliary_head, list):
- self.auxiliary_head = nn.ModuleList()
- for head_cfg in auxiliary_head:
- self.auxiliary_head.append(builder.build_head(head_cfg))
- else:
- self.auxiliary_head = builder.build_head(auxiliary_head)
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone and heads.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
-
- super(EncoderDecoder, self).init_weights(pretrained)
- self.backbone.init_weights(pretrained=pretrained)
- self.decode_head.init_weights()
- if self.with_auxiliary_head:
- if isinstance(self.auxiliary_head, nn.ModuleList):
- for aux_head in self.auxiliary_head:
- aux_head.init_weights()
- else:
- self.auxiliary_head.init_weights()
-
- def extract_feat(self, img):
- """Extract features from images."""
- x = self.backbone(img)
- if self.with_neck:
- x = self.neck(x)
- return x
-
- def encode_decode(self, img, img_metas):
- """Encode images with backbone and decode into a semantic segmentation
- map of the same size as input."""
- x = self.extract_feat(img)
- out = self._decode_head_forward_test(x, img_metas)
- out = resize(
- input=out,
- size=img.shape[2:],
- mode='bilinear',
- align_corners=self.align_corners)
- return out
-
- def _decode_head_forward_train(self, x, img_metas, gt_semantic_seg):
- """Run forward function and calculate loss for decode head in
- training."""
- losses = dict()
- loss_decode = self.decode_head.forward_train(x, img_metas,
- gt_semantic_seg,
- self.train_cfg)
-
- losses.update(add_prefix(loss_decode, 'decode'))
- return losses
-
- def _decode_head_forward_test(self, x, img_metas):
- """Run forward function and calculate loss for decode head in
- inference."""
- seg_logits = self.decode_head.forward_test(x, img_metas, self.test_cfg)
- return seg_logits
-
- def _auxiliary_head_forward_train(self, x, img_metas, gt_semantic_seg):
- """Run forward function and calculate loss for auxiliary head in
- training."""
- losses = dict()
- if isinstance(self.auxiliary_head, nn.ModuleList):
- for idx, aux_head in enumerate(self.auxiliary_head):
- loss_aux = aux_head.forward_train(x, img_metas,
- gt_semantic_seg,
- self.train_cfg)
- losses.update(add_prefix(loss_aux, f'aux_{idx}'))
- else:
- loss_aux = self.auxiliary_head.forward_train(
- x, img_metas, gt_semantic_seg, self.train_cfg)
- losses.update(add_prefix(loss_aux, 'aux'))
-
- return losses
-
- def forward_dummy(self, img):
- """Dummy forward function."""
- seg_logit = self.encode_decode(img, None)
-
- return seg_logit
-
- def forward_train(self, img, img_metas, gt_semantic_seg):
- """Forward function for training.
-
- Args:
- img (Tensor): Input images.
- img_metas (list[dict]): List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- gt_semantic_seg (Tensor): Semantic segmentation masks
- used if the architecture supports semantic segmentation task.
-
- Returns:
- dict[str, Tensor]: a dictionary of loss components
- """
-
- x = self.extract_feat(img)
-
- losses = dict()
-
- loss_decode = self._decode_head_forward_train(x, img_metas,
- gt_semantic_seg)
- losses.update(loss_decode)
-
- if self.with_auxiliary_head:
- loss_aux = self._auxiliary_head_forward_train(
- x, img_metas, gt_semantic_seg)
- losses.update(loss_aux)
-
- return losses
-
- # TODO refactor
- def slide_inference(self, img, img_meta, rescale):
- """Inference by sliding-window with overlap.
-
- If h_crop > h_img or w_crop > w_img, the small patch will be used to
- decode without padding.
- """
-
- h_stride, w_stride = self.test_cfg.stride
- h_crop, w_crop = self.test_cfg.crop_size
- batch_size, _, h_img, w_img = img.size()
- num_classes = self.num_classes
- h_grids = max(h_img - h_crop + h_stride - 1, 0) // h_stride + 1
- w_grids = max(w_img - w_crop + w_stride - 1, 0) // w_stride + 1
- preds = img.new_zeros((batch_size, num_classes, h_img, w_img))
- count_mat = img.new_zeros((batch_size, 1, h_img, w_img))
- for h_idx in range(h_grids):
- for w_idx in range(w_grids):
- y1 = h_idx * h_stride
- x1 = w_idx * w_stride
- y2 = min(y1 + h_crop, h_img)
- x2 = min(x1 + w_crop, w_img)
- y1 = max(y2 - h_crop, 0)
- x1 = max(x2 - w_crop, 0)
- crop_img = img[:, :, y1:y2, x1:x2]
- crop_seg_logit = self.encode_decode(crop_img, img_meta)
- preds += F.pad(crop_seg_logit,
- (int(x1), int(preds.shape[3] - x2), int(y1),
- int(preds.shape[2] - y2)))
-
- count_mat[:, :, y1:y2, x1:x2] += 1
- assert (count_mat == 0).sum() == 0
- if torch.onnx.is_in_onnx_export():
- # cast count_mat to constant while exporting to ONNX
- count_mat = torch.from_numpy(
- count_mat.cpu().detach().numpy()).to(device=img.device)
- preds = preds / count_mat
- if rescale:
- preds = resize(
- preds,
- size=img_meta[0]['ori_shape'][:2],
- mode='bilinear',
- align_corners=self.align_corners,
- warning=False)
- return preds
-
- def whole_inference(self, img, img_meta, rescale):
- """Inference with full image."""
-
- seg_logit = self.encode_decode(img, img_meta)
- if rescale:
- # support dynamic shape for onnx
- if torch.onnx.is_in_onnx_export():
- size = img.shape[2:]
- else:
- size = img_meta[0]['ori_shape'][:2]
- seg_logit = resize(
- seg_logit,
- size=size,
- mode='bilinear',
- align_corners=self.align_corners,
- warning=False)
-
- return seg_logit
-
- def inference(self, img, img_meta, rescale):
- """Inference with slide/whole style.
-
- Args:
- img (Tensor): The input image of shape (N, 3, H, W).
- img_meta (dict): Image info dict where each dict has: 'img_shape',
- 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- rescale (bool): Whether rescale back to original shape.
-
- Returns:
- Tensor: The output segmentation map.
- """
-
- assert self.test_cfg.mode in ['slide', 'whole']
- ori_shape = img_meta[0]['ori_shape']
- assert all(_['ori_shape'] == ori_shape for _ in img_meta)
- if self.test_cfg.mode == 'slide':
- seg_logit = self.slide_inference(img, img_meta, rescale)
- else:
- seg_logit = self.whole_inference(img, img_meta, rescale)
- output = F.softmax(seg_logit, dim=1)
- flip = img_meta[0]['flip']
- if flip:
- flip_direction = img_meta[0]['flip_direction']
- assert flip_direction in ['horizontal', 'vertical']
- if flip_direction == 'horizontal':
- output = output.flip(dims=(3, ))
- elif flip_direction == 'vertical':
- output = output.flip(dims=(2, ))
-
- return output
-
- def simple_test(self, img, img_meta, rescale=True):
- """Simple test with single image."""
- seg_logit = self.inference(img, img_meta, rescale)
- seg_pred = seg_logit.argmax(dim=1)
- if torch.onnx.is_in_onnx_export():
- # our inference backend only support 4D output
- seg_pred = seg_pred.unsqueeze(0)
- return seg_pred
- seg_pred = seg_pred.cpu().numpy()
- # unravel batch dim
- seg_pred = list(seg_pred)
- return seg_pred
-
- def aug_test(self, imgs, img_metas, rescale=True):
- """Test with augmentations.
-
- Only rescale=True is supported.
- """
- # aug_test rescale all imgs back to ori_shape for now
- assert rescale
- # to save memory, we get augmented seg logit inplace
- seg_logit = self.inference(imgs[0], img_metas[0], rescale)
- for i in range(1, len(imgs)):
- cur_seg_logit = self.inference(imgs[i], img_metas[i], rescale)
- seg_logit += cur_seg_logit
- seg_logit /= len(imgs)
- seg_pred = seg_logit.argmax(dim=1)
- seg_pred = seg_pred.cpu().numpy()
- # unravel batch dim
- seg_pred = list(seg_pred)
- return seg_pred
diff --git a/spaces/Pranjal2041/GEO-bench/constants.py b/spaces/Pranjal2041/GEO-bench/constants.py
deleted file mode 100644
index 34412f7be5cafd66ce0680871cbf949f817e2ebd..0000000000000000000000000000000000000000
--- a/spaces/Pranjal2041/GEO-bench/constants.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# metrics = ['relevance_detailed', 'uniqueness_detailed', 'subjcount_detailed', 'follow_detailed', 'simple_wordpos', 'simple_pos', 'influence_detailed', 'subjective_score', 'diversity_detailed', 'simple_word', 'subjpos_detailed']
-columns = ['Method', 'Word', 'Position', 'WordPos Overall', 'Rel.', 'Infl.', 'Unique', 'Div.', 'FollowUp', 'Pos.', 'Count', 'Subjective Average', 'source']
-metric_dict = {
- 'Word': 'simple_word',
- 'Position': 'simple_pos',
- 'WordPos Overall': 'simple_wordpos',
- 'Rel.': 'relevance_detailed',
- 'Infl.': 'influence_detailed',
- 'Unique': 'uniqueness_detailed',
- 'Div.': 'diversity_detailed',
- 'FollowUp': 'follow_detailed',
- 'Pos.': 'subjpos_detailed',
- 'Count': 'subjcount_detailed',
- 'Subjective Average': 'subjective_score',
-}
-
-tags = {
- "Difficulty Level": ["Simple", "Intermediate", "Complex", "Multi-faceted", "Open-ended", 'any'],
- "Nature of Query": ["Informational", "Navigational", "Transactional", "Debate", "Opinion", "Comparison", "Instructional", "Descriptive", "Predictive", 'any'],
- "Sensitivity": ["Sensitive", "Non-sensitive",'any'],
- "Genre": [
- "🎭 Arts and Entertainment", "🚗 Autos and Vehicles", "💄 Beauty and Fitness", "📚 Books and Literature", "🏢 Business and Industrial",
- "💻 Computers and Electronics", "💰 Finance", "🍔 Food and Drink", "🎮 Games", "🏥 Health", "🎨 Hobbies and Leisure", "🏡 Home and Garden",
- "🌐 Internet and Telecom", "🎓 Jobs and Education", "🏛️ Law and Government", "📰 News", "💬 Online Communities", "👫 People and Society",
- "🐾 Pets and Animals", "🏡 Real Estate", "📚 Reference", "🔬 Science", "🛒 Shopping", "⚽ Sports", "✈️ Travel",'any'
- ],
- "Specific Topics": ["Physics", "Chemistry", "Biology", "Mathematics", "Computer Science", "Economics", 'any'],
- "User Intent": ["🔍 Research", "💰 Purchase", "🎉 Entertainment", "📚 Learning", "🔄 Comparison", 'any'],
- "Answer Type": ["Fact", "Opinion", "List", "Explanation", "Guide", "Comparison", "Prediction", 'any'],
-}
-
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/scripts/mos.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/scripts/mos.py
deleted file mode 100644
index a711c9ece23e72ed3a07032c7834ef7c56ab4f11..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/scripts/mos.py
+++ /dev/null
@@ -1,286 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-"""
-To run this script, from the root of the repo. Make sure to have Flask installed
-
- FLASK_DEBUG=1 FLASK_APP=scripts.mos flask run -p 4567
- # or if you have gunicorn
- gunicorn -w 4 -b 127.0.0.1:8895 -t 120 'scripts.mos:app' --access-logfile -
-
-"""
-from collections import defaultdict
-from functools import wraps
-from hashlib import sha1
-import json
-import math
-from pathlib import Path
-import random
-import typing as tp
-
-from flask import Flask, redirect, render_template, request, session, url_for
-
-from audiocraft import train
-from audiocraft.utils.samples.manager import get_samples_for_xps
-
-
-SAMPLES_PER_PAGE = 8
-MAX_RATING = 5
-storage = Path(train.main.dora.dir / 'mos_storage')
-storage.mkdir(exist_ok=True)
-surveys = storage / 'surveys'
-surveys.mkdir(exist_ok=True)
-magma_root = Path(train.__file__).parent.parent
-app = Flask('mos', static_folder=str(magma_root / 'scripts/static'),
- template_folder=str(magma_root / 'scripts/templates'))
-app.secret_key = b'audiocraft makes the best songs'
-
-
-def normalize_path(path: Path):
- """Just to make path a bit nicer, make them relative to the Dora root dir.
- """
- path = path.resolve()
- dora_dir = train.main.dora.dir.resolve() / 'xps'
- return path.relative_to(dora_dir)
-
-
-def get_full_path(normalized_path: Path):
- """Revert `normalize_path`.
- """
- return train.main.dora.dir.resolve() / 'xps' / normalized_path
-
-
-def get_signature(xps: tp.List[str]):
- """Return a signature for a list of XP signatures.
- """
- return sha1(json.dumps(xps).encode()).hexdigest()[:10]
-
-
-def ensure_logged(func):
- """Ensure user is logged in.
- """
- @wraps(func)
- def _wrapped(*args, **kwargs):
- user = session.get('user')
- if user is None:
- return redirect(url_for('login', redirect_to=request.url))
- return func(*args, **kwargs)
- return _wrapped
-
-
-@app.route('/login', methods=['GET', 'POST'])
-def login():
- """Login user if not already, then redirect.
- """
- user = session.get('user')
- if user is None:
- error = None
- if request.method == 'POST':
- user = request.form['user']
- if not user:
- error = 'User cannot be empty'
- if user is None or error:
- return render_template('login.html', error=error)
- assert user
- session['user'] = user
- redirect_to = request.args.get('redirect_to')
- if redirect_to is None:
- redirect_to = url_for('index')
- return redirect(redirect_to)
-
-
-@app.route('/', methods=['GET', 'POST'])
-@ensure_logged
-def index():
- """Offer to create a new study.
- """
- errors = []
- if request.method == 'POST':
- xps_or_grids = [part.strip() for part in request.form['xps'].split()]
- xps = set()
- for xp_or_grid in xps_or_grids:
- xp_path = train.main.dora.dir / 'xps' / xp_or_grid
- if xp_path.exists():
- xps.add(xp_or_grid)
- continue
- grid_path = train.main.dora.dir / 'grids' / xp_or_grid
- if grid_path.exists():
- for child in grid_path.iterdir():
- if child.is_symlink():
- xps.add(child.name)
- continue
- errors.append(f'{xp_or_grid} is neither an XP nor a grid!')
- assert xps or errors
- blind = 'true' if request.form.get('blind') == 'on' else 'false'
- xps = list(xps)
- if not errors:
- signature = get_signature(xps)
- manifest = {
- 'xps': xps,
- }
- survey_path = surveys / signature
- survey_path.mkdir(exist_ok=True)
- with open(survey_path / 'manifest.json', 'w') as f:
- json.dump(manifest, f, indent=2)
- return redirect(url_for('survey', blind=blind, signature=signature))
- return render_template('index.html', errors=errors)
-
-
-@app.route('/survey/', methods=['GET', 'POST'])
-@ensure_logged
-def survey(signature):
- success = request.args.get('success', False)
- seed = int(request.args.get('seed', 4321))
- blind = request.args.get('blind', 'false') in ['true', 'on', 'True']
- exclude_prompted = request.args.get('exclude_prompted', 'false') in ['true', 'on', 'True']
- exclude_unprompted = request.args.get('exclude_unprompted', 'false') in ['true', 'on', 'True']
- max_epoch = int(request.args.get('max_epoch', '-1'))
- survey_path = surveys / signature
- assert survey_path.exists(), survey_path
-
- user = session['user']
- result_folder = survey_path / 'results'
- result_folder.mkdir(exist_ok=True)
- result_file = result_folder / f'{user}_{seed}.json'
-
- with open(survey_path / 'manifest.json') as f:
- manifest = json.load(f)
-
- xps = [train.main.get_xp_from_sig(xp) for xp in manifest['xps']]
- names, ref_name = train.main.get_names(xps)
-
- samples_kwargs = {
- 'exclude_prompted': exclude_prompted,
- 'exclude_unprompted': exclude_unprompted,
- 'max_epoch': max_epoch,
- }
- matched_samples = get_samples_for_xps(xps, epoch=-1, **samples_kwargs) # fetch latest epoch
- models_by_id = {
- id: [{
- 'xp': xps[idx],
- 'xp_name': names[idx],
- 'model_id': f'{xps[idx].sig}-{sample.id}',
- 'sample': sample,
- 'is_prompted': sample.prompt is not None,
- 'errors': [],
- } for idx, sample in enumerate(samples)]
- for id, samples in matched_samples.items()
- }
- experiments = [
- {'xp': xp, 'name': names[idx], 'epoch': list(matched_samples.values())[0][idx].epoch}
- for idx, xp in enumerate(xps)
- ]
-
- keys = list(matched_samples.keys())
- keys.sort()
- rng = random.Random(seed)
- rng.shuffle(keys)
- model_ids = keys[:SAMPLES_PER_PAGE]
-
- if blind:
- for key in model_ids:
- rng.shuffle(models_by_id[key])
-
- ok = True
- if request.method == 'POST':
- all_samples_results = []
- for id in model_ids:
- models = models_by_id[id]
- result = {
- 'id': id,
- 'is_prompted': models[0]['is_prompted'],
- 'models': {}
- }
- all_samples_results.append(result)
- for model in models:
- rating = request.form[model['model_id']]
- if rating:
- rating = int(rating)
- assert rating <= MAX_RATING and rating >= 1
- result['models'][model['xp'].sig] = rating
- model['rating'] = rating
- else:
- ok = False
- model['errors'].append('Please rate this model.')
- if ok:
- result = {
- 'results': all_samples_results,
- 'seed': seed,
- 'user': user,
- 'blind': blind,
- 'exclude_prompted': exclude_prompted,
- 'exclude_unprompted': exclude_unprompted,
- }
- print(result)
- with open(result_file, 'w') as f:
- json.dump(result, f)
- seed = seed + 1
- return redirect(url_for(
- 'survey', signature=signature, blind=blind, seed=seed,
- exclude_prompted=exclude_prompted, exclude_unprompted=exclude_unprompted,
- max_epoch=max_epoch, success=True))
-
- ratings = list(range(1, MAX_RATING + 1))
- return render_template(
- 'survey.html', ratings=ratings, blind=blind, seed=seed, signature=signature, success=success,
- exclude_prompted=exclude_prompted, exclude_unprompted=exclude_unprompted, max_epoch=max_epoch,
- experiments=experiments, models_by_id=models_by_id, model_ids=model_ids, errors=[],
- ref_name=ref_name, already_filled=result_file.exists())
-
-
-@app.route('/audio/')
-def audio(path: str):
- full_path = Path('/') / path
- assert full_path.suffix in [".mp3", ".wav"]
- return full_path.read_bytes(), {'Content-Type': 'audio/mpeg'}
-
-
-def mean(x):
- return sum(x) / len(x)
-
-
-def std(x):
- m = mean(x)
- return math.sqrt(sum((i - m)**2 for i in x) / len(x))
-
-
-@app.route('/results/')
-@ensure_logged
-def results(signature):
-
- survey_path = surveys / signature
- assert survey_path.exists(), survey_path
- result_folder = survey_path / 'results'
- result_folder.mkdir(exist_ok=True)
-
- # ratings per model, then per user.
- ratings_per_model = defaultdict(list)
- users = []
- for result_file in result_folder.iterdir():
- if result_file.suffix != '.json':
- continue
- with open(result_file) as f:
- results = json.load(f)
- users.append(results['user'])
- for result in results['results']:
- for sig, rating in result['models'].items():
- ratings_per_model[sig].append(rating)
-
- fmt = '{:.2f}'
- models = []
- for model in sorted(ratings_per_model.keys()):
- ratings = ratings_per_model[model]
-
- models.append({
- 'sig': model,
- 'samples': len(ratings),
- 'mean_rating': fmt.format(mean(ratings)),
- # the value 1.96 was probably chosen to achieve some
- # confidence interval assuming gaussianity.
- 'std_rating': fmt.format(1.96 * std(ratings) / len(ratings)**0.5),
- })
- return render_template('results.html', signature=signature, models=models, users=users)
diff --git a/spaces/Qiukai/gpt/crazy_functions/test_project/cpp/libJPG/jpgd.cpp b/spaces/Qiukai/gpt/crazy_functions/test_project/cpp/libJPG/jpgd.cpp
deleted file mode 100644
index 36d06c8e9068570c3e7624895d474f33dbfe3d29..0000000000000000000000000000000000000000
--- a/spaces/Qiukai/gpt/crazy_functions/test_project/cpp/libJPG/jpgd.cpp
+++ /dev/null
@@ -1,3276 +0,0 @@
-// jpgd.cpp - C++ class for JPEG decompression.
-// Public domain, Rich Geldreich
-// Last updated Apr. 16, 2011
-// Alex Evans: Linear memory allocator (taken from jpge.h).
-//
-// Supports progressive and baseline sequential JPEG image files, and the most common chroma subsampling factors: Y, H1V1, H2V1, H1V2, and H2V2.
-//
-// Chroma upsampling quality: H2V2 is upsampled in the frequency domain, H2V1 and H1V2 are upsampled using point sampling.
-// Chroma upsampling reference: "Fast Scheme for Image Size Change in the Compressed Domain"
-// http://vision.ai.uiuc.edu/~dugad/research/dct/index.html
-
-#include "jpgd.h"
-#include
-
-#include
-// BEGIN EPIC MOD
-#define JPGD_ASSERT(x) { assert(x); CA_ASSUME(x); } (void)0
-// END EPIC MOD
-
-#ifdef _MSC_VER
-#pragma warning (disable : 4611) // warning C4611: interaction between '_setjmp' and C++ object destruction is non-portable
-#endif
-
-// Set to 1 to enable freq. domain chroma upsampling on images using H2V2 subsampling (0=faster nearest neighbor sampling).
-// This is slower, but results in higher quality on images with highly saturated colors.
-#define JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING 1
-
-#define JPGD_TRUE (1)
-#define JPGD_FALSE (0)
-
-#define JPGD_MAX(a,b) (((a)>(b)) ? (a) : (b))
-#define JPGD_MIN(a,b) (((a)<(b)) ? (a) : (b))
-
-namespace jpgd {
-
- static inline void *jpgd_malloc(size_t nSize) { return FMemory::Malloc(nSize); }
- static inline void jpgd_free(void *p) { FMemory::Free(p); }
-
-// BEGIN EPIC MOD
-//@UE3 - use UE3 BGRA encoding instead of assuming RGBA
- // stolen from IImageWrapper.h
- enum ERGBFormatJPG
- {
- Invalid = -1,
- RGBA = 0,
- BGRA = 1,
- Gray = 2,
- };
- static ERGBFormatJPG jpg_format;
-// END EPIC MOD
-
- // DCT coefficients are stored in this sequence.
- static int g_ZAG[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 };
-
- enum JPEG_MARKER
- {
- M_SOF0 = 0xC0, M_SOF1 = 0xC1, M_SOF2 = 0xC2, M_SOF3 = 0xC3, M_SOF5 = 0xC5, M_SOF6 = 0xC6, M_SOF7 = 0xC7, M_JPG = 0xC8,
- M_SOF9 = 0xC9, M_SOF10 = 0xCA, M_SOF11 = 0xCB, M_SOF13 = 0xCD, M_SOF14 = 0xCE, M_SOF15 = 0xCF, M_DHT = 0xC4, M_DAC = 0xCC,
- M_RST0 = 0xD0, M_RST1 = 0xD1, M_RST2 = 0xD2, M_RST3 = 0xD3, M_RST4 = 0xD4, M_RST5 = 0xD5, M_RST6 = 0xD6, M_RST7 = 0xD7,
- M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_DNL = 0xDC, M_DRI = 0xDD, M_DHP = 0xDE, M_EXP = 0xDF,
- M_APP0 = 0xE0, M_APP15 = 0xEF, M_JPG0 = 0xF0, M_JPG13 = 0xFD, M_COM = 0xFE, M_TEM = 0x01, M_ERROR = 0x100, RST0 = 0xD0
- };
-
- enum JPEG_SUBSAMPLING { JPGD_GRAYSCALE = 0, JPGD_YH1V1, JPGD_YH2V1, JPGD_YH1V2, JPGD_YH2V2 };
-
-#define CONST_BITS 13
-#define PASS1_BITS 2
-#define SCALEDONE ((int32)1)
-
-#define FIX_0_298631336 ((int32)2446) /* FIX(0.298631336) */
-#define FIX_0_390180644 ((int32)3196) /* FIX(0.390180644) */
-#define FIX_0_541196100 ((int32)4433) /* FIX(0.541196100) */
-#define FIX_0_765366865 ((int32)6270) /* FIX(0.765366865) */
-#define FIX_0_899976223 ((int32)7373) /* FIX(0.899976223) */
-#define FIX_1_175875602 ((int32)9633) /* FIX(1.175875602) */
-#define FIX_1_501321110 ((int32)12299) /* FIX(1.501321110) */
-#define FIX_1_847759065 ((int32)15137) /* FIX(1.847759065) */
-#define FIX_1_961570560 ((int32)16069) /* FIX(1.961570560) */
-#define FIX_2_053119869 ((int32)16819) /* FIX(2.053119869) */
-#define FIX_2_562915447 ((int32)20995) /* FIX(2.562915447) */
-#define FIX_3_072711026 ((int32)25172) /* FIX(3.072711026) */
-
-#define DESCALE(x,n) (((x) + (SCALEDONE << ((n)-1))) >> (n))
-#define DESCALE_ZEROSHIFT(x,n) (((x) + (128 << (n)) + (SCALEDONE << ((n)-1))) >> (n))
-
-#define MULTIPLY(var, cnst) ((var) * (cnst))
-
-#define CLAMP(i) ((static_cast(i) > 255) ? (((~i) >> 31) & 0xFF) : (i))
-
- // Compiler creates a fast path 1D IDCT for X non-zero columns
- template
- struct Row
- {
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
- {
- // ACCESS_COL() will be optimized at compile time to either an array access, or 0.
-#define ACCESS_COL(x) (((x) < NONZERO_COLS) ? (int)pSrc[x] : 0)
-
- const int z2 = ACCESS_COL(2), z3 = ACCESS_COL(6);
-
- const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100);
- const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065);
- const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865);
-
- const int tmp0 = (ACCESS_COL(0) + ACCESS_COL(4)) << CONST_BITS;
- const int tmp1 = (ACCESS_COL(0) - ACCESS_COL(4)) << CONST_BITS;
-
- const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2;
-
- const int atmp0 = ACCESS_COL(7), atmp1 = ACCESS_COL(5), atmp2 = ACCESS_COL(3), atmp3 = ACCESS_COL(1);
-
- const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3;
- const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602);
-
- const int az1 = MULTIPLY(bz1, - FIX_0_899976223);
- const int az2 = MULTIPLY(bz2, - FIX_2_562915447);
- const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5;
- const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5;
-
- const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3;
- const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4;
- const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3;
- const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4;
-
- pTemp[0] = DESCALE(tmp10 + btmp3, CONST_BITS-PASS1_BITS);
- pTemp[7] = DESCALE(tmp10 - btmp3, CONST_BITS-PASS1_BITS);
- pTemp[1] = DESCALE(tmp11 + btmp2, CONST_BITS-PASS1_BITS);
- pTemp[6] = DESCALE(tmp11 - btmp2, CONST_BITS-PASS1_BITS);
- pTemp[2] = DESCALE(tmp12 + btmp1, CONST_BITS-PASS1_BITS);
- pTemp[5] = DESCALE(tmp12 - btmp1, CONST_BITS-PASS1_BITS);
- pTemp[3] = DESCALE(tmp13 + btmp0, CONST_BITS-PASS1_BITS);
- pTemp[4] = DESCALE(tmp13 - btmp0, CONST_BITS-PASS1_BITS);
- }
- };
-
- template <>
- struct Row<0>
- {
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
- {
-#ifdef _MSC_VER
- pTemp; pSrc;
-#endif
- }
- };
-
- template <>
- struct Row<1>
- {
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
- {
- const int dcval = (pSrc[0] << PASS1_BITS);
-
- pTemp[0] = dcval;
- pTemp[1] = dcval;
- pTemp[2] = dcval;
- pTemp[3] = dcval;
- pTemp[4] = dcval;
- pTemp[5] = dcval;
- pTemp[6] = dcval;
- pTemp[7] = dcval;
- }
- };
-
- // Compiler creates a fast path 1D IDCT for X non-zero rows
- template
- struct Col
- {
- static void idct(uint8* pDst_ptr, const int* pTemp)
- {
- // ACCESS_ROW() will be optimized at compile time to either an array access, or 0.
-#define ACCESS_ROW(x) (((x) < NONZERO_ROWS) ? pTemp[x * 8] : 0)
-
- const int z2 = ACCESS_ROW(2);
- const int z3 = ACCESS_ROW(6);
-
- const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100);
- const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065);
- const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865);
-
- const int tmp0 = (ACCESS_ROW(0) + ACCESS_ROW(4)) << CONST_BITS;
- const int tmp1 = (ACCESS_ROW(0) - ACCESS_ROW(4)) << CONST_BITS;
-
- const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2;
-
- const int atmp0 = ACCESS_ROW(7), atmp1 = ACCESS_ROW(5), atmp2 = ACCESS_ROW(3), atmp3 = ACCESS_ROW(1);
-
- const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3;
- const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602);
-
- const int az1 = MULTIPLY(bz1, - FIX_0_899976223);
- const int az2 = MULTIPLY(bz2, - FIX_2_562915447);
- const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5;
- const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5;
-
- const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3;
- const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4;
- const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3;
- const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4;
-
- int i = DESCALE_ZEROSHIFT(tmp10 + btmp3, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*0] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp10 - btmp3, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*7] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp11 + btmp2, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*1] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp11 - btmp2, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*6] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp12 + btmp1, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*2] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp12 - btmp1, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*5] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp13 + btmp0, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*3] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp13 - btmp0, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*4] = (uint8)CLAMP(i);
- }
- };
-
- template <>
- struct Col<1>
- {
- static void idct(uint8* pDst_ptr, const int* pTemp)
- {
- int dcval = DESCALE_ZEROSHIFT(pTemp[0], PASS1_BITS+3);
- const uint8 dcval_clamped = (uint8)CLAMP(dcval);
- pDst_ptr[0*8] = dcval_clamped;
- pDst_ptr[1*8] = dcval_clamped;
- pDst_ptr[2*8] = dcval_clamped;
- pDst_ptr[3*8] = dcval_clamped;
- pDst_ptr[4*8] = dcval_clamped;
- pDst_ptr[5*8] = dcval_clamped;
- pDst_ptr[6*8] = dcval_clamped;
- pDst_ptr[7*8] = dcval_clamped;
- }
- };
-
- static const uint8 s_idct_row_table[] =
- {
- 1,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0, 2,1,0,0,0,0,0,0, 2,1,1,0,0,0,0,0, 2,2,1,0,0,0,0,0, 3,2,1,0,0,0,0,0, 4,2,1,0,0,0,0,0, 4,3,1,0,0,0,0,0,
- 4,3,2,0,0,0,0,0, 4,3,2,1,0,0,0,0, 4,3,2,1,1,0,0,0, 4,3,2,2,1,0,0,0, 4,3,3,2,1,0,0,0, 4,4,3,2,1,0,0,0, 5,4,3,2,1,0,0,0, 6,4,3,2,1,0,0,0,
- 6,5,3,2,1,0,0,0, 6,5,4,2,1,0,0,0, 6,5,4,3,1,0,0,0, 6,5,4,3,2,0,0,0, 6,5,4,3,2,1,0,0, 6,5,4,3,2,1,1,0, 6,5,4,3,2,2,1,0, 6,5,4,3,3,2,1,0,
- 6,5,4,4,3,2,1,0, 6,5,5,4,3,2,1,0, 6,6,5,4,3,2,1,0, 7,6,5,4,3,2,1,0, 8,6,5,4,3,2,1,0, 8,7,5,4,3,2,1,0, 8,7,6,4,3,2,1,0, 8,7,6,5,3,2,1,0,
- 8,7,6,5,4,2,1,0, 8,7,6,5,4,3,1,0, 8,7,6,5,4,3,2,0, 8,7,6,5,4,3,2,1, 8,7,6,5,4,3,2,2, 8,7,6,5,4,3,3,2, 8,7,6,5,4,4,3,2, 8,7,6,5,5,4,3,2,
- 8,7,6,6,5,4,3,2, 8,7,7,6,5,4,3,2, 8,8,7,6,5,4,3,2, 8,8,8,6,5,4,3,2, 8,8,8,7,5,4,3,2, 8,8,8,7,6,4,3,2, 8,8,8,7,6,5,3,2, 8,8,8,7,6,5,4,2,
- 8,8,8,7,6,5,4,3, 8,8,8,7,6,5,4,4, 8,8,8,7,6,5,5,4, 8,8,8,7,6,6,5,4, 8,8,8,7,7,6,5,4, 8,8,8,8,7,6,5,4, 8,8,8,8,8,6,5,4, 8,8,8,8,8,7,5,4,
- 8,8,8,8,8,7,6,4, 8,8,8,8,8,7,6,5, 8,8,8,8,8,7,6,6, 8,8,8,8,8,7,7,6, 8,8,8,8,8,8,7,6, 8,8,8,8,8,8,8,6, 8,8,8,8,8,8,8,7, 8,8,8,8,8,8,8,8,
- };
-
- static const uint8 s_idct_col_table[] = { 1, 1, 2, 3, 3, 3, 3, 3, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8 };
-
- void idct(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr, int block_max_zag)
- {
- JPGD_ASSERT(block_max_zag >= 1);
- JPGD_ASSERT(block_max_zag <= 64);
-
- if (block_max_zag == 1)
- {
- int k = ((pSrc_ptr[0] + 4) >> 3) + 128;
- k = CLAMP(k);
- k = k | (k<<8);
- k = k | (k<<16);
-
- for (int i = 8; i > 0; i--)
- {
- *(int*)&pDst_ptr[0] = k;
- *(int*)&pDst_ptr[4] = k;
- pDst_ptr += 8;
- }
- return;
- }
-
- int temp[64];
-
- const jpgd_block_t* pSrc = pSrc_ptr;
- int* pTemp = temp;
-
- const uint8* pRow_tab = &s_idct_row_table[(block_max_zag - 1) * 8];
- int i;
- for (i = 8; i > 0; i--, pRow_tab++)
- {
- switch (*pRow_tab)
- {
- case 0: Row<0>::idct(pTemp, pSrc); break;
- case 1: Row<1>::idct(pTemp, pSrc); break;
- case 2: Row<2>::idct(pTemp, pSrc); break;
- case 3: Row<3>::idct(pTemp, pSrc); break;
- case 4: Row<4>::idct(pTemp, pSrc); break;
- case 5: Row<5>::idct(pTemp, pSrc); break;
- case 6: Row<6>::idct(pTemp, pSrc); break;
- case 7: Row<7>::idct(pTemp, pSrc); break;
- case 8: Row<8>::idct(pTemp, pSrc); break;
- }
-
- pSrc += 8;
- pTemp += 8;
- }
-
- pTemp = temp;
-
- const int nonzero_rows = s_idct_col_table[block_max_zag - 1];
- for (i = 8; i > 0; i--)
- {
- switch (nonzero_rows)
- {
- case 1: Col<1>::idct(pDst_ptr, pTemp); break;
- case 2: Col<2>::idct(pDst_ptr, pTemp); break;
- case 3: Col<3>::idct(pDst_ptr, pTemp); break;
- case 4: Col<4>::idct(pDst_ptr, pTemp); break;
- case 5: Col<5>::idct(pDst_ptr, pTemp); break;
- case 6: Col<6>::idct(pDst_ptr, pTemp); break;
- case 7: Col<7>::idct(pDst_ptr, pTemp); break;
- case 8: Col<8>::idct(pDst_ptr, pTemp); break;
- }
-
- pTemp++;
- pDst_ptr++;
- }
- }
-
- void idct_4x4(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr)
- {
- int temp[64];
- int* pTemp = temp;
- const jpgd_block_t* pSrc = pSrc_ptr;
-
- for (int i = 4; i > 0; i--)
- {
- Row<4>::idct(pTemp, pSrc);
- pSrc += 8;
- pTemp += 8;
- }
-
- pTemp = temp;
- for (int i = 8; i > 0; i--)
- {
- Col<4>::idct(pDst_ptr, pTemp);
- pTemp++;
- pDst_ptr++;
- }
- }
-
- // Retrieve one character from the input stream.
- inline uint jpeg_decoder::get_char()
- {
- // Any bytes remaining in buffer?
- if (!m_in_buf_left)
- {
- // Try to get more bytes.
- prep_in_buffer();
- // Still nothing to get?
- if (!m_in_buf_left)
- {
- // Pad the end of the stream with 0xFF 0xD9 (EOI marker)
- int t = m_tem_flag;
- m_tem_flag ^= 1;
- if (t)
- return 0xD9;
- else
- return 0xFF;
- }
- }
-
- uint c = *m_pIn_buf_ofs++;
- m_in_buf_left--;
-
- return c;
- }
-
- // Same as previous method, except can indicate if the character is a pad character or not.
- inline uint jpeg_decoder::get_char(bool *pPadding_flag)
- {
- if (!m_in_buf_left)
- {
- prep_in_buffer();
- if (!m_in_buf_left)
- {
- *pPadding_flag = true;
- int t = m_tem_flag;
- m_tem_flag ^= 1;
- if (t)
- return 0xD9;
- else
- return 0xFF;
- }
- }
-
- *pPadding_flag = false;
-
- uint c = *m_pIn_buf_ofs++;
- m_in_buf_left--;
-
- return c;
- }
-
- // Inserts a previously retrieved character back into the input buffer.
- inline void jpeg_decoder::stuff_char(uint8 q)
- {
- *(--m_pIn_buf_ofs) = q;
- m_in_buf_left++;
- }
-
- // Retrieves one character from the input stream, but does not read past markers. Will continue to return 0xFF when a marker is encountered.
- inline uint8 jpeg_decoder::get_octet()
- {
- bool padding_flag;
- int c = get_char(&padding_flag);
-
- if (c == 0xFF)
- {
- if (padding_flag)
- return 0xFF;
-
- c = get_char(&padding_flag);
- if (padding_flag)
- {
- stuff_char(0xFF);
- return 0xFF;
- }
-
- if (c == 0x00)
- return 0xFF;
- else
- {
- stuff_char(static_cast(c));
- stuff_char(0xFF);
- return 0xFF;
- }
- }
-
- return static_cast(c);
- }
-
- // Retrieves a variable number of bits from the input stream. Does not recognize markers.
- inline uint jpeg_decoder::get_bits(int num_bits)
- {
- if (!num_bits)
- return 0;
-
- uint i = m_bit_buf >> (32 - num_bits);
-
- if ((m_bits_left -= num_bits) <= 0)
- {
- m_bit_buf <<= (num_bits += m_bits_left);
-
- uint c1 = get_char();
- uint c2 = get_char();
- m_bit_buf = (m_bit_buf & 0xFFFF0000) | (c1 << 8) | c2;
-
- m_bit_buf <<= -m_bits_left;
-
- m_bits_left += 16;
-
- JPGD_ASSERT(m_bits_left >= 0);
- }
- else
- m_bit_buf <<= num_bits;
-
- return i;
- }
-
- // Retrieves a variable number of bits from the input stream. Markers will not be read into the input bit buffer. Instead, an infinite number of all 1's will be returned when a marker is encountered.
- inline uint jpeg_decoder::get_bits_no_markers(int num_bits)
- {
- if (!num_bits)
- return 0;
-
- uint i = m_bit_buf >> (32 - num_bits);
-
- if ((m_bits_left -= num_bits) <= 0)
- {
- m_bit_buf <<= (num_bits += m_bits_left);
-
- if ((m_in_buf_left < 2) || (m_pIn_buf_ofs[0] == 0xFF) || (m_pIn_buf_ofs[1] == 0xFF))
- {
- uint c1 = get_octet();
- uint c2 = get_octet();
- m_bit_buf |= (c1 << 8) | c2;
- }
- else
- {
- m_bit_buf |= ((uint)m_pIn_buf_ofs[0] << 8) | m_pIn_buf_ofs[1];
- m_in_buf_left -= 2;
- m_pIn_buf_ofs += 2;
- }
-
- m_bit_buf <<= -m_bits_left;
-
- m_bits_left += 16;
-
- JPGD_ASSERT(m_bits_left >= 0);
- }
- else
- m_bit_buf <<= num_bits;
-
- return i;
- }
-
- // Decodes a Huffman encoded symbol.
- inline int jpeg_decoder::huff_decode(huff_tables *pH)
- {
- int symbol;
-
- // Check first 8-bits: do we have a complete symbol?
- if ((symbol = pH->look_up[m_bit_buf >> 24]) < 0)
- {
- // Decode more bits, use a tree traversal to find symbol.
- int ofs = 23;
- do
- {
- symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))];
- ofs--;
- } while (symbol < 0);
-
- get_bits_no_markers(8 + (23 - ofs));
- }
- else
- get_bits_no_markers(pH->code_size[symbol]);
-
- return symbol;
- }
-
- // Decodes a Huffman encoded symbol.
- inline int jpeg_decoder::huff_decode(huff_tables *pH, int& extra_bits)
- {
- int symbol;
-
- // Check first 8-bits: do we have a complete symbol?
- if ((symbol = pH->look_up2[m_bit_buf >> 24]) < 0)
- {
- // Use a tree traversal to find symbol.
- int ofs = 23;
- do
- {
- symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))];
- ofs--;
- } while (symbol < 0);
-
- get_bits_no_markers(8 + (23 - ofs));
-
- extra_bits = get_bits_no_markers(symbol & 0xF);
- }
- else
- {
- JPGD_ASSERT(((symbol >> 8) & 31) == pH->code_size[symbol & 255] + ((symbol & 0x8000) ? (symbol & 15) : 0));
-
- if (symbol & 0x8000)
- {
- get_bits_no_markers((symbol >> 8) & 31);
- extra_bits = symbol >> 16;
- }
- else
- {
- int code_size = (symbol >> 8) & 31;
- int num_extra_bits = symbol & 0xF;
- int bits = code_size + num_extra_bits;
- if (bits <= (m_bits_left + 16))
- extra_bits = get_bits_no_markers(bits) & ((1 << num_extra_bits) - 1);
- else
- {
- get_bits_no_markers(code_size);
- extra_bits = get_bits_no_markers(num_extra_bits);
- }
- }
-
- symbol &= 0xFF;
- }
-
- return symbol;
- }
-
- // Tables and macro used to fully decode the DPCM differences.
- static const int s_extend_test[16] = { 0, 0x0001, 0x0002, 0x0004, 0x0008, 0x0010, 0x0020, 0x0040, 0x0080, 0x0100, 0x0200, 0x0400, 0x0800, 0x1000, 0x2000, 0x4000 };
- static const int s_extend_offset[16] = { 0, -1, -3, -7, -15, -31, -63, -127, -255, -511, -1023, -2047, -4095, -8191, -16383, -32767 };
- static const int s_extend_mask[] = { 0, (1<<0), (1<<1), (1<<2), (1<<3), (1<<4), (1<<5), (1<<6), (1<<7), (1<<8), (1<<9), (1<<10), (1<<11), (1<<12), (1<<13), (1<<14), (1<<15), (1<<16) };
-#define HUFF_EXTEND(x,s) ((x) < s_extend_test[s] ? (x) + s_extend_offset[s] : (x))
-
- // Clamps a value between 0-255.
- inline uint8 jpeg_decoder::clamp(int i)
- {
- if (static_cast(i) > 255)
- i = (((~i) >> 31) & 0xFF);
-
- return static_cast(i);
- }
-
- namespace DCT_Upsample
- {
- struct Matrix44
- {
- typedef int Element_Type;
- enum { NUM_ROWS = 4, NUM_COLS = 4 };
-
- Element_Type v[NUM_ROWS][NUM_COLS];
-
- inline int rows() const { return NUM_ROWS; }
- inline int cols() const { return NUM_COLS; }
-
- inline const Element_Type & at(int r, int c) const { return v[r][c]; }
- inline Element_Type & at(int r, int c) { return v[r][c]; }
-
- inline Matrix44() { }
-
- inline Matrix44& operator += (const Matrix44& a)
- {
- for (int r = 0; r < NUM_ROWS; r++)
- {
- at(r, 0) += a.at(r, 0);
- at(r, 1) += a.at(r, 1);
- at(r, 2) += a.at(r, 2);
- at(r, 3) += a.at(r, 3);
- }
- return *this;
- }
-
- inline Matrix44& operator -= (const Matrix44& a)
- {
- for (int r = 0; r < NUM_ROWS; r++)
- {
- at(r, 0) -= a.at(r, 0);
- at(r, 1) -= a.at(r, 1);
- at(r, 2) -= a.at(r, 2);
- at(r, 3) -= a.at(r, 3);
- }
- return *this;
- }
-
- friend inline Matrix44 operator + (const Matrix44& a, const Matrix44& b)
- {
- Matrix44 ret;
- for (int r = 0; r < NUM_ROWS; r++)
- {
- ret.at(r, 0) = a.at(r, 0) + b.at(r, 0);
- ret.at(r, 1) = a.at(r, 1) + b.at(r, 1);
- ret.at(r, 2) = a.at(r, 2) + b.at(r, 2);
- ret.at(r, 3) = a.at(r, 3) + b.at(r, 3);
- }
- return ret;
- }
-
- friend inline Matrix44 operator - (const Matrix44& a, const Matrix44& b)
- {
- Matrix44 ret;
- for (int r = 0; r < NUM_ROWS; r++)
- {
- ret.at(r, 0) = a.at(r, 0) - b.at(r, 0);
- ret.at(r, 1) = a.at(r, 1) - b.at(r, 1);
- ret.at(r, 2) = a.at(r, 2) - b.at(r, 2);
- ret.at(r, 3) = a.at(r, 3) - b.at(r, 3);
- }
- return ret;
- }
-
- static inline void add_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b)
- {
- for (int r = 0; r < 4; r++)
- {
- pDst[0*8 + r] = static_cast(a.at(r, 0) + b.at(r, 0));
- pDst[1*8 + r] = static_cast(a.at(r, 1) + b.at(r, 1));
- pDst[2*8 + r] = static_cast(a.at(r, 2) + b.at(r, 2));
- pDst[3*8 + r] = static_cast(a.at(r, 3) + b.at(r, 3));
- }
- }
-
- static inline void sub_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b)
- {
- for (int r = 0; r < 4; r++)
- {
- pDst[0*8 + r] = static_cast(a.at(r, 0) - b.at(r, 0));
- pDst[1*8 + r] = static_cast(a.at(r, 1) - b.at(r, 1));
- pDst[2*8 + r] = static_cast(a.at(r, 2) - b.at(r, 2));
- pDst[3*8 + r] = static_cast(a.at(r, 3) - b.at(r, 3));
- }
- }
- };
-
- const int FRACT_BITS = 10;
- const int SCALE = 1 << FRACT_BITS;
-
- typedef int Temp_Type;
-#define D(i) (((i) + (SCALE >> 1)) >> FRACT_BITS)
-#define F(i) ((int)((i) * SCALE + .5f))
-
- // Any decent C++ compiler will optimize this at compile time to a 0, or an array access.
-#define AT(c, r) ((((c)>=NUM_COLS)||((r)>=NUM_ROWS)) ? 0 : pSrc[(c)+(r)*8])
-
- // NUM_ROWS/NUM_COLS = # of non-zero rows/cols in input matrix
- template
- struct P_Q
- {
- static void calc(Matrix44& P, Matrix44& Q, const jpgd_block_t* pSrc)
- {
- // 4x8 = 4x8 times 8x8, matrix 0 is constant
- const Temp_Type X000 = AT(0, 0);
- const Temp_Type X001 = AT(0, 1);
- const Temp_Type X002 = AT(0, 2);
- const Temp_Type X003 = AT(0, 3);
- const Temp_Type X004 = AT(0, 4);
- const Temp_Type X005 = AT(0, 5);
- const Temp_Type X006 = AT(0, 6);
- const Temp_Type X007 = AT(0, 7);
- const Temp_Type X010 = D(F(0.415735f) * AT(1, 0) + F(0.791065f) * AT(3, 0) + F(-0.352443f) * AT(5, 0) + F(0.277785f) * AT(7, 0));
- const Temp_Type X011 = D(F(0.415735f) * AT(1, 1) + F(0.791065f) * AT(3, 1) + F(-0.352443f) * AT(5, 1) + F(0.277785f) * AT(7, 1));
- const Temp_Type X012 = D(F(0.415735f) * AT(1, 2) + F(0.791065f) * AT(3, 2) + F(-0.352443f) * AT(5, 2) + F(0.277785f) * AT(7, 2));
- const Temp_Type X013 = D(F(0.415735f) * AT(1, 3) + F(0.791065f) * AT(3, 3) + F(-0.352443f) * AT(5, 3) + F(0.277785f) * AT(7, 3));
- const Temp_Type X014 = D(F(0.415735f) * AT(1, 4) + F(0.791065f) * AT(3, 4) + F(-0.352443f) * AT(5, 4) + F(0.277785f) * AT(7, 4));
- const Temp_Type X015 = D(F(0.415735f) * AT(1, 5) + F(0.791065f) * AT(3, 5) + F(-0.352443f) * AT(5, 5) + F(0.277785f) * AT(7, 5));
- const Temp_Type X016 = D(F(0.415735f) * AT(1, 6) + F(0.791065f) * AT(3, 6) + F(-0.352443f) * AT(5, 6) + F(0.277785f) * AT(7, 6));
- const Temp_Type X017 = D(F(0.415735f) * AT(1, 7) + F(0.791065f) * AT(3, 7) + F(-0.352443f) * AT(5, 7) + F(0.277785f) * AT(7, 7));
- const Temp_Type X020 = AT(4, 0);
- const Temp_Type X021 = AT(4, 1);
- const Temp_Type X022 = AT(4, 2);
- const Temp_Type X023 = AT(4, 3);
- const Temp_Type X024 = AT(4, 4);
- const Temp_Type X025 = AT(4, 5);
- const Temp_Type X026 = AT(4, 6);
- const Temp_Type X027 = AT(4, 7);
- const Temp_Type X030 = D(F(0.022887f) * AT(1, 0) + F(-0.097545f) * AT(3, 0) + F(0.490393f) * AT(5, 0) + F(0.865723f) * AT(7, 0));
- const Temp_Type X031 = D(F(0.022887f) * AT(1, 1) + F(-0.097545f) * AT(3, 1) + F(0.490393f) * AT(5, 1) + F(0.865723f) * AT(7, 1));
- const Temp_Type X032 = D(F(0.022887f) * AT(1, 2) + F(-0.097545f) * AT(3, 2) + F(0.490393f) * AT(5, 2) + F(0.865723f) * AT(7, 2));
- const Temp_Type X033 = D(F(0.022887f) * AT(1, 3) + F(-0.097545f) * AT(3, 3) + F(0.490393f) * AT(5, 3) + F(0.865723f) * AT(7, 3));
- const Temp_Type X034 = D(F(0.022887f) * AT(1, 4) + F(-0.097545f) * AT(3, 4) + F(0.490393f) * AT(5, 4) + F(0.865723f) * AT(7, 4));
- const Temp_Type X035 = D(F(0.022887f) * AT(1, 5) + F(-0.097545f) * AT(3, 5) + F(0.490393f) * AT(5, 5) + F(0.865723f) * AT(7, 5));
- const Temp_Type X036 = D(F(0.022887f) * AT(1, 6) + F(-0.097545f) * AT(3, 6) + F(0.490393f) * AT(5, 6) + F(0.865723f) * AT(7, 6));
- const Temp_Type X037 = D(F(0.022887f) * AT(1, 7) + F(-0.097545f) * AT(3, 7) + F(0.490393f) * AT(5, 7) + F(0.865723f) * AT(7, 7));
-
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- P.at(0, 0) = X000;
- P.at(0, 1) = D(X001 * F(0.415735f) + X003 * F(0.791065f) + X005 * F(-0.352443f) + X007 * F(0.277785f));
- P.at(0, 2) = X004;
- P.at(0, 3) = D(X001 * F(0.022887f) + X003 * F(-0.097545f) + X005 * F(0.490393f) + X007 * F(0.865723f));
- P.at(1, 0) = X010;
- P.at(1, 1) = D(X011 * F(0.415735f) + X013 * F(0.791065f) + X015 * F(-0.352443f) + X017 * F(0.277785f));
- P.at(1, 2) = X014;
- P.at(1, 3) = D(X011 * F(0.022887f) + X013 * F(-0.097545f) + X015 * F(0.490393f) + X017 * F(0.865723f));
- P.at(2, 0) = X020;
- P.at(2, 1) = D(X021 * F(0.415735f) + X023 * F(0.791065f) + X025 * F(-0.352443f) + X027 * F(0.277785f));
- P.at(2, 2) = X024;
- P.at(2, 3) = D(X021 * F(0.022887f) + X023 * F(-0.097545f) + X025 * F(0.490393f) + X027 * F(0.865723f));
- P.at(3, 0) = X030;
- P.at(3, 1) = D(X031 * F(0.415735f) + X033 * F(0.791065f) + X035 * F(-0.352443f) + X037 * F(0.277785f));
- P.at(3, 2) = X034;
- P.at(3, 3) = D(X031 * F(0.022887f) + X033 * F(-0.097545f) + X035 * F(0.490393f) + X037 * F(0.865723f));
- // 40 muls 24 adds
-
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- Q.at(0, 0) = D(X001 * F(0.906127f) + X003 * F(-0.318190f) + X005 * F(0.212608f) + X007 * F(-0.180240f));
- Q.at(0, 1) = X002;
- Q.at(0, 2) = D(X001 * F(-0.074658f) + X003 * F(0.513280f) + X005 * F(0.768178f) + X007 * F(-0.375330f));
- Q.at(0, 3) = X006;
- Q.at(1, 0) = D(X011 * F(0.906127f) + X013 * F(-0.318190f) + X015 * F(0.212608f) + X017 * F(-0.180240f));
- Q.at(1, 1) = X012;
- Q.at(1, 2) = D(X011 * F(-0.074658f) + X013 * F(0.513280f) + X015 * F(0.768178f) + X017 * F(-0.375330f));
- Q.at(1, 3) = X016;
- Q.at(2, 0) = D(X021 * F(0.906127f) + X023 * F(-0.318190f) + X025 * F(0.212608f) + X027 * F(-0.180240f));
- Q.at(2, 1) = X022;
- Q.at(2, 2) = D(X021 * F(-0.074658f) + X023 * F(0.513280f) + X025 * F(0.768178f) + X027 * F(-0.375330f));
- Q.at(2, 3) = X026;
- Q.at(3, 0) = D(X031 * F(0.906127f) + X033 * F(-0.318190f) + X035 * F(0.212608f) + X037 * F(-0.180240f));
- Q.at(3, 1) = X032;
- Q.at(3, 2) = D(X031 * F(-0.074658f) + X033 * F(0.513280f) + X035 * F(0.768178f) + X037 * F(-0.375330f));
- Q.at(3, 3) = X036;
- // 40 muls 24 adds
- }
- };
-
- template
- struct R_S
- {
- static void calc(Matrix44& R, Matrix44& S, const jpgd_block_t* pSrc)
- {
- // 4x8 = 4x8 times 8x8, matrix 0 is constant
- const Temp_Type X100 = D(F(0.906127f) * AT(1, 0) + F(-0.318190f) * AT(3, 0) + F(0.212608f) * AT(5, 0) + F(-0.180240f) * AT(7, 0));
- const Temp_Type X101 = D(F(0.906127f) * AT(1, 1) + F(-0.318190f) * AT(3, 1) + F(0.212608f) * AT(5, 1) + F(-0.180240f) * AT(7, 1));
- const Temp_Type X102 = D(F(0.906127f) * AT(1, 2) + F(-0.318190f) * AT(3, 2) + F(0.212608f) * AT(5, 2) + F(-0.180240f) * AT(7, 2));
- const Temp_Type X103 = D(F(0.906127f) * AT(1, 3) + F(-0.318190f) * AT(3, 3) + F(0.212608f) * AT(5, 3) + F(-0.180240f) * AT(7, 3));
- const Temp_Type X104 = D(F(0.906127f) * AT(1, 4) + F(-0.318190f) * AT(3, 4) + F(0.212608f) * AT(5, 4) + F(-0.180240f) * AT(7, 4));
- const Temp_Type X105 = D(F(0.906127f) * AT(1, 5) + F(-0.318190f) * AT(3, 5) + F(0.212608f) * AT(5, 5) + F(-0.180240f) * AT(7, 5));
- const Temp_Type X106 = D(F(0.906127f) * AT(1, 6) + F(-0.318190f) * AT(3, 6) + F(0.212608f) * AT(5, 6) + F(-0.180240f) * AT(7, 6));
- const Temp_Type X107 = D(F(0.906127f) * AT(1, 7) + F(-0.318190f) * AT(3, 7) + F(0.212608f) * AT(5, 7) + F(-0.180240f) * AT(7, 7));
- const Temp_Type X110 = AT(2, 0);
- const Temp_Type X111 = AT(2, 1);
- const Temp_Type X112 = AT(2, 2);
- const Temp_Type X113 = AT(2, 3);
- const Temp_Type X114 = AT(2, 4);
- const Temp_Type X115 = AT(2, 5);
- const Temp_Type X116 = AT(2, 6);
- const Temp_Type X117 = AT(2, 7);
- const Temp_Type X120 = D(F(-0.074658f) * AT(1, 0) + F(0.513280f) * AT(3, 0) + F(0.768178f) * AT(5, 0) + F(-0.375330f) * AT(7, 0));
- const Temp_Type X121 = D(F(-0.074658f) * AT(1, 1) + F(0.513280f) * AT(3, 1) + F(0.768178f) * AT(5, 1) + F(-0.375330f) * AT(7, 1));
- const Temp_Type X122 = D(F(-0.074658f) * AT(1, 2) + F(0.513280f) * AT(3, 2) + F(0.768178f) * AT(5, 2) + F(-0.375330f) * AT(7, 2));
- const Temp_Type X123 = D(F(-0.074658f) * AT(1, 3) + F(0.513280f) * AT(3, 3) + F(0.768178f) * AT(5, 3) + F(-0.375330f) * AT(7, 3));
- const Temp_Type X124 = D(F(-0.074658f) * AT(1, 4) + F(0.513280f) * AT(3, 4) + F(0.768178f) * AT(5, 4) + F(-0.375330f) * AT(7, 4));
- const Temp_Type X125 = D(F(-0.074658f) * AT(1, 5) + F(0.513280f) * AT(3, 5) + F(0.768178f) * AT(5, 5) + F(-0.375330f) * AT(7, 5));
- const Temp_Type X126 = D(F(-0.074658f) * AT(1, 6) + F(0.513280f) * AT(3, 6) + F(0.768178f) * AT(5, 6) + F(-0.375330f) * AT(7, 6));
- const Temp_Type X127 = D(F(-0.074658f) * AT(1, 7) + F(0.513280f) * AT(3, 7) + F(0.768178f) * AT(5, 7) + F(-0.375330f) * AT(7, 7));
- const Temp_Type X130 = AT(6, 0);
- const Temp_Type X131 = AT(6, 1);
- const Temp_Type X132 = AT(6, 2);
- const Temp_Type X133 = AT(6, 3);
- const Temp_Type X134 = AT(6, 4);
- const Temp_Type X135 = AT(6, 5);
- const Temp_Type X136 = AT(6, 6);
- const Temp_Type X137 = AT(6, 7);
- // 80 muls 48 adds
-
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- R.at(0, 0) = X100;
- R.at(0, 1) = D(X101 * F(0.415735f) + X103 * F(0.791065f) + X105 * F(-0.352443f) + X107 * F(0.277785f));
- R.at(0, 2) = X104;
- R.at(0, 3) = D(X101 * F(0.022887f) + X103 * F(-0.097545f) + X105 * F(0.490393f) + X107 * F(0.865723f));
- R.at(1, 0) = X110;
- R.at(1, 1) = D(X111 * F(0.415735f) + X113 * F(0.791065f) + X115 * F(-0.352443f) + X117 * F(0.277785f));
- R.at(1, 2) = X114;
- R.at(1, 3) = D(X111 * F(0.022887f) + X113 * F(-0.097545f) + X115 * F(0.490393f) + X117 * F(0.865723f));
- R.at(2, 0) = X120;
- R.at(2, 1) = D(X121 * F(0.415735f) + X123 * F(0.791065f) + X125 * F(-0.352443f) + X127 * F(0.277785f));
- R.at(2, 2) = X124;
- R.at(2, 3) = D(X121 * F(0.022887f) + X123 * F(-0.097545f) + X125 * F(0.490393f) + X127 * F(0.865723f));
- R.at(3, 0) = X130;
- R.at(3, 1) = D(X131 * F(0.415735f) + X133 * F(0.791065f) + X135 * F(-0.352443f) + X137 * F(0.277785f));
- R.at(3, 2) = X134;
- R.at(3, 3) = D(X131 * F(0.022887f) + X133 * F(-0.097545f) + X135 * F(0.490393f) + X137 * F(0.865723f));
- // 40 muls 24 adds
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- S.at(0, 0) = D(X101 * F(0.906127f) + X103 * F(-0.318190f) + X105 * F(0.212608f) + X107 * F(-0.180240f));
- S.at(0, 1) = X102;
- S.at(0, 2) = D(X101 * F(-0.074658f) + X103 * F(0.513280f) + X105 * F(0.768178f) + X107 * F(-0.375330f));
- S.at(0, 3) = X106;
- S.at(1, 0) = D(X111 * F(0.906127f) + X113 * F(-0.318190f) + X115 * F(0.212608f) + X117 * F(-0.180240f));
- S.at(1, 1) = X112;
- S.at(1, 2) = D(X111 * F(-0.074658f) + X113 * F(0.513280f) + X115 * F(0.768178f) + X117 * F(-0.375330f));
- S.at(1, 3) = X116;
- S.at(2, 0) = D(X121 * F(0.906127f) + X123 * F(-0.318190f) + X125 * F(0.212608f) + X127 * F(-0.180240f));
- S.at(2, 1) = X122;
- S.at(2, 2) = D(X121 * F(-0.074658f) + X123 * F(0.513280f) + X125 * F(0.768178f) + X127 * F(-0.375330f));
- S.at(2, 3) = X126;
- S.at(3, 0) = D(X131 * F(0.906127f) + X133 * F(-0.318190f) + X135 * F(0.212608f) + X137 * F(-0.180240f));
- S.at(3, 1) = X132;
- S.at(3, 2) = D(X131 * F(-0.074658f) + X133 * F(0.513280f) + X135 * F(0.768178f) + X137 * F(-0.375330f));
- S.at(3, 3) = X136;
- // 40 muls 24 adds
- }
- };
- } // end namespace DCT_Upsample
-
- // Unconditionally frees all allocated m_blocks.
- void jpeg_decoder::free_all_blocks()
- {
- m_pStream = NULL;
- for (mem_block *b = m_pMem_blocks; b; )
- {
- mem_block *n = b->m_pNext;
- jpgd_free(b);
- b = n;
- }
- m_pMem_blocks = NULL;
- }
-
- // This method handles all errors.
- // It could easily be changed to use C++ exceptions.
- void jpeg_decoder::stop_decoding(jpgd_status status)
- {
- m_error_code = status;
- free_all_blocks();
- longjmp(m_jmp_state, status);
-
- // we shouldn't get here as longjmp shouldn't return, but we put it here to make it explicit
- // that this function doesn't return, otherwise we get this error:
- //
- // error : function declared 'noreturn' should not return
- exit(1);
- }
-
- void *jpeg_decoder::alloc(size_t nSize, bool zero)
- {
- nSize = (JPGD_MAX(nSize, 1) + 3) & ~3;
- char *rv = NULL;
- for (mem_block *b = m_pMem_blocks; b; b = b->m_pNext)
- {
- if ((b->m_used_count + nSize) <= b->m_size)
- {
- rv = b->m_data + b->m_used_count;
- b->m_used_count += nSize;
- break;
- }
- }
- if (!rv)
- {
- int capacity = JPGD_MAX(32768 - 256, (nSize + 2047) & ~2047);
- mem_block *b = (mem_block*)jpgd_malloc(sizeof(mem_block) + capacity);
- if (!b) stop_decoding(JPGD_NOTENOUGHMEM);
- b->m_pNext = m_pMem_blocks; m_pMem_blocks = b;
- b->m_used_count = nSize;
- b->m_size = capacity;
- rv = b->m_data;
- }
- if (zero) memset(rv, 0, nSize);
- return rv;
- }
-
- void jpeg_decoder::word_clear(void *p, uint16 c, uint n)
- {
- uint8 *pD = (uint8*)p;
- const uint8 l = c & 0xFF, h = (c >> 8) & 0xFF;
- while (n)
- {
- pD[0] = l; pD[1] = h; pD += 2;
- n--;
- }
- }
-
- // Refill the input buffer.
- // This method will sit in a loop until (A) the buffer is full or (B)
- // the stream's read() method reports and end of file condition.
- void jpeg_decoder::prep_in_buffer()
- {
- m_in_buf_left = 0;
- m_pIn_buf_ofs = m_in_buf;
-
- if (m_eof_flag)
- return;
-
- do
- {
- int bytes_read = m_pStream->read(m_in_buf + m_in_buf_left, JPGD_IN_BUF_SIZE - m_in_buf_left, &m_eof_flag);
- if (bytes_read == -1)
- stop_decoding(JPGD_STREAM_READ);
-
- m_in_buf_left += bytes_read;
- } while ((m_in_buf_left < JPGD_IN_BUF_SIZE) && (!m_eof_flag));
-
- m_total_bytes_read += m_in_buf_left;
-
- // Pad the end of the block with M_EOI (prevents the decompressor from going off the rails if the stream is invalid).
- // (This dates way back to when this decompressor was written in C/asm, and the all-asm Huffman decoder did some fancy things to increase perf.)
- word_clear(m_pIn_buf_ofs + m_in_buf_left, 0xD9FF, 64);
- }
-
- // Read a Huffman code table.
- void jpeg_decoder::read_dht_marker()
- {
- int i, index, count;
- uint8 huff_num[17];
- uint8 huff_val[256];
-
- uint num_left = get_bits(16);
-
- if (num_left < 2)
- stop_decoding(JPGD_BAD_DHT_MARKER);
-
- num_left -= 2;
-
- while (num_left)
- {
- index = get_bits(8);
-
- huff_num[0] = 0;
-
- count = 0;
-
- for (i = 1; i <= 16; i++)
- {
- huff_num[i] = static_cast(get_bits(8));
- count += huff_num[i];
- }
-
- if (count > 255)
- stop_decoding(JPGD_BAD_DHT_COUNTS);
-
- for (i = 0; i < count; i++)
- huff_val[i] = static_cast(get_bits(8));
-
- i = 1 + 16 + count;
-
- if (num_left < (uint)i)
- stop_decoding(JPGD_BAD_DHT_MARKER);
-
- num_left -= i;
-
- if ((index & 0x10) > 0x10)
- stop_decoding(JPGD_BAD_DHT_INDEX);
-
- index = (index & 0x0F) + ((index & 0x10) >> 4) * (JPGD_MAX_HUFF_TABLES >> 1);
-
- if (index >= JPGD_MAX_HUFF_TABLES)
- stop_decoding(JPGD_BAD_DHT_INDEX);
-
- if (!m_huff_num[index])
- m_huff_num[index] = (uint8 *)alloc(17);
-
- if (!m_huff_val[index])
- m_huff_val[index] = (uint8 *)alloc(256);
-
- m_huff_ac[index] = (index & 0x10) != 0;
- memcpy(m_huff_num[index], huff_num, 17);
- memcpy(m_huff_val[index], huff_val, 256);
- }
- }
-
- // Read a quantization table.
- void jpeg_decoder::read_dqt_marker()
- {
- int n, i, prec;
- uint num_left;
- uint temp;
-
- num_left = get_bits(16);
-
- if (num_left < 2)
- stop_decoding(JPGD_BAD_DQT_MARKER);
-
- num_left -= 2;
-
- while (num_left)
- {
- n = get_bits(8);
- prec = n >> 4;
- n &= 0x0F;
-
- if (n >= JPGD_MAX_QUANT_TABLES)
- stop_decoding(JPGD_BAD_DQT_TABLE);
-
- if (!m_quant[n])
- m_quant[n] = (jpgd_quant_t *)alloc(64 * sizeof(jpgd_quant_t));
-
- // read quantization entries, in zag order
- for (i = 0; i < 64; i++)
- {
- temp = get_bits(8);
-
- if (prec)
- temp = (temp << 8) + get_bits(8);
-
- m_quant[n][i] = static_cast(temp);
- }
-
- i = 64 + 1;
-
- if (prec)
- i += 64;
-
- if (num_left < (uint)i)
- stop_decoding(JPGD_BAD_DQT_LENGTH);
-
- num_left -= i;
- }
- }
-
- // Read the start of frame (SOF) marker.
- void jpeg_decoder::read_sof_marker()
- {
- int i;
- uint num_left;
-
- num_left = get_bits(16);
-
- if (get_bits(8) != 8) /* precision: sorry, only 8-bit precision is supported right now */
- stop_decoding(JPGD_BAD_PRECISION);
-
- m_image_y_size = get_bits(16);
-
- if ((m_image_y_size < 1) || (m_image_y_size > JPGD_MAX_HEIGHT))
- stop_decoding(JPGD_BAD_HEIGHT);
-
- m_image_x_size = get_bits(16);
-
- if ((m_image_x_size < 1) || (m_image_x_size > JPGD_MAX_WIDTH))
- stop_decoding(JPGD_BAD_WIDTH);
-
- m_comps_in_frame = get_bits(8);
-
- if (m_comps_in_frame > JPGD_MAX_COMPONENTS)
- stop_decoding(JPGD_TOO_MANY_COMPONENTS);
-
- if (num_left != (uint)(m_comps_in_frame * 3 + 8))
- stop_decoding(JPGD_BAD_SOF_LENGTH);
-
- for (i = 0; i < m_comps_in_frame; i++)
- {
- m_comp_ident[i] = get_bits(8);
- m_comp_h_samp[i] = get_bits(4);
- m_comp_v_samp[i] = get_bits(4);
- m_comp_quant[i] = get_bits(8);
- }
- }
-
- // Used to skip unrecognized markers.
- void jpeg_decoder::skip_variable_marker()
- {
- uint num_left;
-
- num_left = get_bits(16);
-
- if (num_left < 2)
- stop_decoding(JPGD_BAD_VARIABLE_MARKER);
-
- num_left -= 2;
-
- while (num_left)
- {
- get_bits(8);
- num_left--;
- }
- }
-
- // Read a define restart interval (DRI) marker.
- void jpeg_decoder::read_dri_marker()
- {
- if (get_bits(16) != 4)
- stop_decoding(JPGD_BAD_DRI_LENGTH);
-
- m_restart_interval = get_bits(16);
- }
-
- // Read a start of scan (SOS) marker.
- void jpeg_decoder::read_sos_marker()
- {
- uint num_left;
- int i, ci, n, c, cc;
-
- num_left = get_bits(16);
-
- n = get_bits(8);
-
- m_comps_in_scan = n;
-
- num_left -= 3;
-
- if ( (num_left != (uint)(n * 2 + 3)) || (n < 1) || (n > JPGD_MAX_COMPS_IN_SCAN) )
- stop_decoding(JPGD_BAD_SOS_LENGTH);
-
- for (i = 0; i < n; i++)
- {
- cc = get_bits(8);
- c = get_bits(8);
- num_left -= 2;
-
- for (ci = 0; ci < m_comps_in_frame; ci++)
- if (cc == m_comp_ident[ci])
- break;
-
- if (ci >= m_comps_in_frame)
- stop_decoding(JPGD_BAD_SOS_COMP_ID);
-
- m_comp_list[i] = ci;
- m_comp_dc_tab[ci] = (c >> 4) & 15;
- m_comp_ac_tab[ci] = (c & 15) + (JPGD_MAX_HUFF_TABLES >> 1);
- }
-
- m_spectral_start = get_bits(8);
- m_spectral_end = get_bits(8);
- m_successive_high = get_bits(4);
- m_successive_low = get_bits(4);
-
- if (!m_progressive_flag)
- {
- m_spectral_start = 0;
- m_spectral_end = 63;
- }
-
- num_left -= 3;
-
- while (num_left) /* read past whatever is num_left */
- {
- get_bits(8);
- num_left--;
- }
- }
-
- // Finds the next marker.
- int jpeg_decoder::next_marker()
- {
- uint c, bytes;
-
- bytes = 0;
-
- do
- {
- do
- {
- bytes++;
- c = get_bits(8);
- } while (c != 0xFF);
-
- do
- {
- c = get_bits(8);
- } while (c == 0xFF);
-
- } while (c == 0);
-
- // If bytes > 0 here, there where extra bytes before the marker (not good).
-
- return c;
- }
-
- // Process markers. Returns when an SOFx, SOI, EOI, or SOS marker is
- // encountered.
- int jpeg_decoder::process_markers()
- {
- int c;
-
- for ( ; ; )
- {
- c = next_marker();
-
- switch (c)
- {
- case M_SOF0:
- case M_SOF1:
- case M_SOF2:
- case M_SOF3:
- case M_SOF5:
- case M_SOF6:
- case M_SOF7:
- // case M_JPG:
- case M_SOF9:
- case M_SOF10:
- case M_SOF11:
- case M_SOF13:
- case M_SOF14:
- case M_SOF15:
- case M_SOI:
- case M_EOI:
- case M_SOS:
- {
- return c;
- }
- case M_DHT:
- {
- read_dht_marker();
- break;
- }
- // No arithmitic support - dumb patents!
- case M_DAC:
- {
- stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT);
- break;
- }
- case M_DQT:
- {
- read_dqt_marker();
- break;
- }
- case M_DRI:
- {
- read_dri_marker();
- break;
- }
- //case M_APP0: /* no need to read the JFIF marker */
-
- case M_JPG:
- case M_RST0: /* no parameters */
- case M_RST1:
- case M_RST2:
- case M_RST3:
- case M_RST4:
- case M_RST5:
- case M_RST6:
- case M_RST7:
- case M_TEM:
- {
- stop_decoding(JPGD_UNEXPECTED_MARKER);
- break;
- }
- default: /* must be DNL, DHP, EXP, APPn, JPGn, COM, or RESn or APP0 */
- {
- skip_variable_marker();
- break;
- }
- }
- }
- }
-
- // Finds the start of image (SOI) marker.
- // This code is rather defensive: it only checks the first 512 bytes to avoid
- // false positives.
- void jpeg_decoder::locate_soi_marker()
- {
- uint lastchar, thischar;
- uint bytesleft;
-
- lastchar = get_bits(8);
-
- thischar = get_bits(8);
-
- /* ok if it's a normal JPEG file without a special header */
-
- if ((lastchar == 0xFF) && (thischar == M_SOI))
- return;
-
- bytesleft = 4096; //512;
-
- for ( ; ; )
- {
- if (--bytesleft == 0)
- stop_decoding(JPGD_NOT_JPEG);
-
- lastchar = thischar;
-
- thischar = get_bits(8);
-
- if (lastchar == 0xFF)
- {
- if (thischar == M_SOI)
- break;
- else if (thischar == M_EOI) // get_bits will keep returning M_EOI if we read past the end
- stop_decoding(JPGD_NOT_JPEG);
- }
- }
-
- // Check the next character after marker: if it's not 0xFF, it can't be the start of the next marker, so the file is bad.
- thischar = (m_bit_buf >> 24) & 0xFF;
-
- if (thischar != 0xFF)
- stop_decoding(JPGD_NOT_JPEG);
- }
-
- // Find a start of frame (SOF) marker.
- void jpeg_decoder::locate_sof_marker()
- {
- locate_soi_marker();
-
- int c = process_markers();
-
- switch (c)
- {
- case M_SOF2:
- m_progressive_flag = JPGD_TRUE;
- case M_SOF0: /* baseline DCT */
- case M_SOF1: /* extended sequential DCT */
- {
- read_sof_marker();
- break;
- }
- case M_SOF9: /* Arithmitic coding */
- {
- stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT);
- break;
- }
- default:
- {
- stop_decoding(JPGD_UNSUPPORTED_MARKER);
- break;
- }
- }
- }
-
- // Find a start of scan (SOS) marker.
- int jpeg_decoder::locate_sos_marker()
- {
- int c;
-
- c = process_markers();
-
- if (c == M_EOI)
- return JPGD_FALSE;
- else if (c != M_SOS)
- stop_decoding(JPGD_UNEXPECTED_MARKER);
-
- read_sos_marker();
-
- return JPGD_TRUE;
- }
-
- // Reset everything to default/uninitialized state.
- void jpeg_decoder::init(jpeg_decoder_stream *pStream)
- {
- m_pMem_blocks = NULL;
- m_error_code = JPGD_SUCCESS;
- m_ready_flag = false;
- m_image_x_size = m_image_y_size = 0;
- m_pStream = pStream;
- m_progressive_flag = JPGD_FALSE;
-
- memset(m_huff_ac, 0, sizeof(m_huff_ac));
- memset(m_huff_num, 0, sizeof(m_huff_num));
- memset(m_huff_val, 0, sizeof(m_huff_val));
- memset(m_quant, 0, sizeof(m_quant));
-
- m_scan_type = 0;
- m_comps_in_frame = 0;
-
- memset(m_comp_h_samp, 0, sizeof(m_comp_h_samp));
- memset(m_comp_v_samp, 0, sizeof(m_comp_v_samp));
- memset(m_comp_quant, 0, sizeof(m_comp_quant));
- memset(m_comp_ident, 0, sizeof(m_comp_ident));
- memset(m_comp_h_blocks, 0, sizeof(m_comp_h_blocks));
- memset(m_comp_v_blocks, 0, sizeof(m_comp_v_blocks));
-
- m_comps_in_scan = 0;
- memset(m_comp_list, 0, sizeof(m_comp_list));
- memset(m_comp_dc_tab, 0, sizeof(m_comp_dc_tab));
- memset(m_comp_ac_tab, 0, sizeof(m_comp_ac_tab));
-
- m_spectral_start = 0;
- m_spectral_end = 0;
- m_successive_low = 0;
- m_successive_high = 0;
- m_max_mcu_x_size = 0;
- m_max_mcu_y_size = 0;
- m_blocks_per_mcu = 0;
- m_max_blocks_per_row = 0;
- m_mcus_per_row = 0;
- m_mcus_per_col = 0;
- m_expanded_blocks_per_component = 0;
- m_expanded_blocks_per_mcu = 0;
- m_expanded_blocks_per_row = 0;
- m_freq_domain_chroma_upsample = false;
-
- memset(m_mcu_org, 0, sizeof(m_mcu_org));
-
- m_total_lines_left = 0;
- m_mcu_lines_left = 0;
- m_real_dest_bytes_per_scan_line = 0;
- m_dest_bytes_per_scan_line = 0;
- m_dest_bytes_per_pixel = 0;
-
- memset(m_pHuff_tabs, 0, sizeof(m_pHuff_tabs));
-
- memset(m_dc_coeffs, 0, sizeof(m_dc_coeffs));
- memset(m_ac_coeffs, 0, sizeof(m_ac_coeffs));
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
-
- m_eob_run = 0;
-
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
-
- m_pIn_buf_ofs = m_in_buf;
- m_in_buf_left = 0;
- m_eof_flag = false;
- m_tem_flag = 0;
-
- memset(m_in_buf_pad_start, 0, sizeof(m_in_buf_pad_start));
- memset(m_in_buf, 0, sizeof(m_in_buf));
- memset(m_in_buf_pad_end, 0, sizeof(m_in_buf_pad_end));
-
- m_restart_interval = 0;
- m_restarts_left = 0;
- m_next_restart_num = 0;
-
- m_max_mcus_per_row = 0;
- m_max_blocks_per_mcu = 0;
- m_max_mcus_per_col = 0;
-
- memset(m_last_dc_val, 0, sizeof(m_last_dc_val));
- m_pMCU_coefficients = NULL;
- m_pSample_buf = NULL;
-
- m_total_bytes_read = 0;
-
- m_pScan_line_0 = NULL;
- m_pScan_line_1 = NULL;
-
- // Ready the input buffer.
- prep_in_buffer();
-
- // Prime the bit buffer.
- m_bits_left = 16;
- m_bit_buf = 0;
-
- get_bits(16);
- get_bits(16);
-
- for (int i = 0; i < JPGD_MAX_BLOCKS_PER_MCU; i++)
- m_mcu_block_max_zag[i] = 64;
- }
-
-#define SCALEBITS 16
-#define ONE_HALF ((int) 1 << (SCALEBITS-1))
-#define FIX(x) ((int) ((x) * (1L<> SCALEBITS;
- m_cbb[i] = ( FIX(1.77200f) * k + ONE_HALF) >> SCALEBITS;
- m_crg[i] = (-FIX(0.71414f)) * k;
- m_cbg[i] = (-FIX(0.34414f)) * k + ONE_HALF;
- }
- }
-
- // This method throws back into the stream any bytes that where read
- // into the bit buffer during initial marker scanning.
- void jpeg_decoder::fix_in_buffer()
- {
- // In case any 0xFF's where pulled into the buffer during marker scanning.
- JPGD_ASSERT((m_bits_left & 7) == 0);
-
- if (m_bits_left == 16)
- stuff_char( (uint8)(m_bit_buf & 0xFF));
-
- if (m_bits_left >= 8)
- stuff_char( (uint8)((m_bit_buf >> 8) & 0xFF));
-
- stuff_char((uint8)((m_bit_buf >> 16) & 0xFF));
- stuff_char((uint8)((m_bit_buf >> 24) & 0xFF));
-
- m_bits_left = 16;
- get_bits_no_markers(16);
- get_bits_no_markers(16);
- }
-
- void jpeg_decoder::transform_mcu(int mcu_row)
- {
- jpgd_block_t* pSrc_ptr = m_pMCU_coefficients;
- uint8* pDst_ptr = m_pSample_buf + mcu_row * m_blocks_per_mcu * 64;
-
- for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
- {
- idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]);
- pSrc_ptr += 64;
- pDst_ptr += 64;
- }
- }
-
- static const uint8 s_max_rc[64] =
- {
- 17, 18, 34, 50, 50, 51, 52, 52, 52, 68, 84, 84, 84, 84, 85, 86, 86, 86, 86, 86,
- 102, 118, 118, 118, 118, 118, 118, 119, 120, 120, 120, 120, 120, 120, 120, 136,
- 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136,
- 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136
- };
-
- void jpeg_decoder::transform_mcu_expand(int mcu_row)
- {
- jpgd_block_t* pSrc_ptr = m_pMCU_coefficients;
- uint8* pDst_ptr = m_pSample_buf + mcu_row * m_expanded_blocks_per_mcu * 64;
-
- // Y IDCT
- int mcu_block;
- for (mcu_block = 0; mcu_block < m_expanded_blocks_per_component; mcu_block++)
- {
- idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]);
- pSrc_ptr += 64;
- pDst_ptr += 64;
- }
-
- // Chroma IDCT, with upsampling
- jpgd_block_t temp_block[64];
-
- for (int i = 0; i < 2; i++)
- {
- DCT_Upsample::Matrix44 P, Q, R, S;
-
- JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] >= 1);
- JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] <= 64);
-
- switch (s_max_rc[m_mcu_block_max_zag[mcu_block++] - 1])
- {
- case 1*16+1:
- DCT_Upsample::P_Q<1, 1>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<1, 1>::calc(R, S, pSrc_ptr);
- break;
- case 1*16+2:
- DCT_Upsample::P_Q<1, 2>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<1, 2>::calc(R, S, pSrc_ptr);
- break;
- case 2*16+2:
- DCT_Upsample::P_Q<2, 2>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<2, 2>::calc(R, S, pSrc_ptr);
- break;
- case 3*16+2:
- DCT_Upsample::P_Q<3, 2>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<3, 2>::calc(R, S, pSrc_ptr);
- break;
- case 3*16+3:
- DCT_Upsample::P_Q<3, 3>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<3, 3>::calc(R, S, pSrc_ptr);
- break;
- case 3*16+4:
- DCT_Upsample::P_Q<3, 4>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<3, 4>::calc(R, S, pSrc_ptr);
- break;
- case 4*16+4:
- DCT_Upsample::P_Q<4, 4>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<4, 4>::calc(R, S, pSrc_ptr);
- break;
- case 5*16+4:
- DCT_Upsample::P_Q<5, 4>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<5, 4>::calc(R, S, pSrc_ptr);
- break;
- case 5*16+5:
- DCT_Upsample::P_Q<5, 5>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<5, 5>::calc(R, S, pSrc_ptr);
- break;
- case 5*16+6:
- DCT_Upsample::P_Q<5, 6>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<5, 6>::calc(R, S, pSrc_ptr);
- break;
- case 6*16+6:
- DCT_Upsample::P_Q<6, 6>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<6, 6>::calc(R, S, pSrc_ptr);
- break;
- case 7*16+6:
- DCT_Upsample::P_Q<7, 6>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<7, 6>::calc(R, S, pSrc_ptr);
- break;
- case 7*16+7:
- DCT_Upsample::P_Q<7, 7>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<7, 7>::calc(R, S, pSrc_ptr);
- break;
- case 7*16+8:
- DCT_Upsample::P_Q<7, 8>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<7, 8>::calc(R, S, pSrc_ptr);
- break;
- case 8*16+8:
- DCT_Upsample::P_Q<8, 8>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<8, 8>::calc(R, S, pSrc_ptr);
- break;
- default:
- JPGD_ASSERT(false);
- }
-
- DCT_Upsample::Matrix44 a(P + Q); P -= Q;
- DCT_Upsample::Matrix44& b = P;
- DCT_Upsample::Matrix44 c(R + S); R -= S;
- DCT_Upsample::Matrix44& d = R;
-
- DCT_Upsample::Matrix44::add_and_store(temp_block, a, c);
- idct_4x4(temp_block, pDst_ptr);
- pDst_ptr += 64;
-
- DCT_Upsample::Matrix44::sub_and_store(temp_block, a, c);
- idct_4x4(temp_block, pDst_ptr);
- pDst_ptr += 64;
-
- DCT_Upsample::Matrix44::add_and_store(temp_block, b, d);
- idct_4x4(temp_block, pDst_ptr);
- pDst_ptr += 64;
-
- DCT_Upsample::Matrix44::sub_and_store(temp_block, b, d);
- idct_4x4(temp_block, pDst_ptr);
- pDst_ptr += 64;
-
- pSrc_ptr += 64;
- }
- }
-
- // Loads and dequantizes the next row of (already decoded) coefficients.
- // Progressive images only.
- void jpeg_decoder::load_next_row()
- {
- int i;
- jpgd_block_t *p;
- jpgd_quant_t *q;
- int mcu_row, mcu_block, row_block = 0;
- int component_num, component_id;
- int block_x_mcu[JPGD_MAX_COMPONENTS];
-
- memset(block_x_mcu, 0, JPGD_MAX_COMPONENTS * sizeof(int));
-
- for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
- {
- int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0;
-
- for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
- {
- component_id = m_mcu_org[mcu_block];
- q = m_quant[m_comp_quant[component_id]];
-
- p = m_pMCU_coefficients + 64 * mcu_block;
-
- jpgd_block_t* pAC = coeff_buf_getp(m_ac_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
- jpgd_block_t* pDC = coeff_buf_getp(m_dc_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
- p[0] = pDC[0];
- memcpy(&p[1], &pAC[1], 63 * sizeof(jpgd_block_t));
-
- for (i = 63; i > 0; i--)
- if (p[g_ZAG[i]])
- break;
-
- m_mcu_block_max_zag[mcu_block] = i + 1;
-
- for ( ; i >= 0; i--)
- if (p[g_ZAG[i]])
- p[g_ZAG[i]] = static_cast(p[g_ZAG[i]] * q[i]);
-
- row_block++;
-
- if (m_comps_in_scan == 1)
- block_x_mcu[component_id]++;
- else
- {
- if (++block_x_mcu_ofs == m_comp_h_samp[component_id])
- {
- block_x_mcu_ofs = 0;
-
- if (++block_y_mcu_ofs == m_comp_v_samp[component_id])
- {
- block_y_mcu_ofs = 0;
-
- block_x_mcu[component_id] += m_comp_h_samp[component_id];
- }
- }
- }
- }
-
- if (m_freq_domain_chroma_upsample)
- transform_mcu_expand(mcu_row);
- else
- transform_mcu(mcu_row);
- }
-
- if (m_comps_in_scan == 1)
- m_block_y_mcu[m_comp_list[0]]++;
- else
- {
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
- {
- component_id = m_comp_list[component_num];
-
- m_block_y_mcu[component_id] += m_comp_v_samp[component_id];
- }
- }
- }
-
- // Restart interval processing.
- void jpeg_decoder::process_restart()
- {
- int i;
- int c = 0;
-
- // Align to a byte boundry
- // FIXME: Is this really necessary? get_bits_no_markers() never reads in markers!
- //get_bits_no_markers(m_bits_left & 7);
-
- // Let's scan a little bit to find the marker, but not _too_ far.
- // 1536 is a "fudge factor" that determines how much to scan.
- for (i = 1536; i > 0; i--)
- if (get_char() == 0xFF)
- break;
-
- if (i == 0)
- stop_decoding(JPGD_BAD_RESTART_MARKER);
-
- for ( ; i > 0; i--)
- if ((c = get_char()) != 0xFF)
- break;
-
- if (i == 0)
- stop_decoding(JPGD_BAD_RESTART_MARKER);
-
- // Is it the expected marker? If not, something bad happened.
- if (c != (m_next_restart_num + M_RST0))
- stop_decoding(JPGD_BAD_RESTART_MARKER);
-
- // Reset each component's DC prediction values.
- memset(&m_last_dc_val, 0, m_comps_in_frame * sizeof(uint));
-
- m_eob_run = 0;
-
- m_restarts_left = m_restart_interval;
-
- m_next_restart_num = (m_next_restart_num + 1) & 7;
-
- // Get the bit buffer going again...
-
- m_bits_left = 16;
- get_bits_no_markers(16);
- get_bits_no_markers(16);
- }
-
- static inline int dequantize_ac(int c, int q) { c *= q; return c; }
-
- // Decodes and dequantizes the next row of coefficients.
- void jpeg_decoder::decode_next_row()
- {
- int row_block = 0;
-
- for (int mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
- {
- if ((m_restart_interval) && (m_restarts_left == 0))
- process_restart();
-
- jpgd_block_t* p = m_pMCU_coefficients;
- for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++, p += 64)
- {
- int component_id = m_mcu_org[mcu_block];
- jpgd_quant_t* q = m_quant[m_comp_quant[component_id]];
-
- int r, s;
- s = huff_decode(m_pHuff_tabs[m_comp_dc_tab[component_id]], r);
- s = HUFF_EXTEND(r, s);
-
- m_last_dc_val[component_id] = (s += m_last_dc_val[component_id]);
-
- p[0] = static_cast(s * q[0]);
-
- int prev_num_set = m_mcu_block_max_zag[mcu_block];
-
- huff_tables *pH = m_pHuff_tabs[m_comp_ac_tab[component_id]];
-
- int k;
- for (k = 1; k < 64; k++)
- {
- int extra_bits;
- s = huff_decode(pH, extra_bits);
-
- r = s >> 4;
- s &= 15;
-
- if (s)
- {
- if (r)
- {
- if ((k + r) > 63)
- stop_decoding(JPGD_DECODE_ERROR);
-
- if (k < prev_num_set)
- {
- int n = JPGD_MIN(r, prev_num_set - k);
- int kt = k;
- while (n--)
- p[g_ZAG[kt++]] = 0;
- }
-
- k += r;
- }
-
- s = HUFF_EXTEND(extra_bits, s);
-
- JPGD_ASSERT(k < 64);
-
- p[g_ZAG[k]] = static_cast(dequantize_ac(s, q[k])); //s * q[k];
- }
- else
- {
- if (r == 15)
- {
- if ((k + 16) > 64)
- stop_decoding(JPGD_DECODE_ERROR);
-
- if (k < prev_num_set)
- {
- int n = JPGD_MIN(16, prev_num_set - k);
- int kt = k;
- while (n--)
- {
- JPGD_ASSERT(kt <= 63);
- p[g_ZAG[kt++]] = 0;
- }
- }
-
- k += 16 - 1; // - 1 because the loop counter is k
- // BEGIN EPIC MOD
- JPGD_ASSERT(k < 64 && p[g_ZAG[k]] == 0);
- // END EPIC MOD
- }
- else
- break;
- }
- }
-
- if (k < prev_num_set)
- {
- int kt = k;
- while (kt < prev_num_set)
- p[g_ZAG[kt++]] = 0;
- }
-
- m_mcu_block_max_zag[mcu_block] = k;
-
- row_block++;
- }
-
- if (m_freq_domain_chroma_upsample)
- transform_mcu_expand(mcu_row);
- else
- transform_mcu(mcu_row);
-
- m_restarts_left--;
- }
- }
-
- // YCbCr H1V1 (1x1:1:1, 3 m_blocks per MCU) to RGB
- void jpeg_decoder::H1V1Convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d = m_pScan_line_0;
- uint8 *s = m_pSample_buf + row * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int j = 0; j < 8; j++)
- {
- int y = s[j];
- int cb = s[64+j];
- int cr = s[128+j];
-
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d[0] = clamp(y + m_cbb[cb]);
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
- d[2] = clamp(y + m_crr[cr]);
- d[3] = 255;
- }
- else
- {
- d[0] = clamp(y + m_crr[cr]);
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
- d[2] = clamp(y + m_cbb[cb]);
- d[3] = 255;
- }
- d += 4;
- }
-
- s += 64*3;
- }
- }
-
- // YCbCr H2V1 (2x1:1:1, 4 m_blocks per MCU) to RGB
- void jpeg_decoder::H2V1Convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d0 = m_pScan_line_0;
- uint8 *y = m_pSample_buf + row * 8;
- uint8 *c = m_pSample_buf + 2*64 + row * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int l = 0; l < 2; l++)
- {
- for (int j = 0; j < 4; j++)
- {
- int cb = c[0];
- int cr = c[64];
-
- int rc = m_crr[cr];
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
- int bc = m_cbb[cb];
-
- int yy = y[j<<1];
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d0[0] = clamp(yy+bc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+rc);
- d0[3] = 255;
- yy = y[(j<<1)+1];
- d0[4] = clamp(yy+bc);
- d0[5] = clamp(yy+gc);
- d0[6] = clamp(yy+rc);
- d0[7] = 255;
- }
- else
- {
- d0[0] = clamp(yy+rc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+bc);
- d0[3] = 255;
- yy = y[(j<<1)+1];
- d0[4] = clamp(yy+rc);
- d0[5] = clamp(yy+gc);
- d0[6] = clamp(yy+bc);
- d0[7] = 255;
- }
-
- d0 += 8;
-
- c++;
- }
- y += 64;
- }
-
- y += 64*4 - 64*2;
- c += 64*4 - 8;
- }
- }
-
- // YCbCr H2V1 (1x2:1:1, 4 m_blocks per MCU) to RGB
- void jpeg_decoder::H1V2Convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d0 = m_pScan_line_0;
- uint8 *d1 = m_pScan_line_1;
- uint8 *y;
- uint8 *c;
-
- if (row < 8)
- y = m_pSample_buf + row * 8;
- else
- y = m_pSample_buf + 64*1 + (row & 7) * 8;
-
- c = m_pSample_buf + 64*2 + (row >> 1) * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int j = 0; j < 8; j++)
- {
- int cb = c[0+j];
- int cr = c[64+j];
-
- int rc = m_crr[cr];
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
- int bc = m_cbb[cb];
-
- int yy = y[j];
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d0[0] = clamp(yy+bc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+rc);
- d0[3] = 255;
- yy = y[8+j];
- d1[0] = clamp(yy+bc);
- d1[1] = clamp(yy+gc);
- d1[2] = clamp(yy+rc);
- d1[3] = 255;
- }
- else
- {
- d0[0] = clamp(yy+rc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+bc);
- d0[3] = 255;
- yy = y[8+j];
- d1[0] = clamp(yy+rc);
- d1[1] = clamp(yy+gc);
- d1[2] = clamp(yy+bc);
- d1[3] = 255;
- }
-
- d0 += 4;
- d1 += 4;
- }
-
- y += 64*4;
- c += 64*4;
- }
- }
-
- // YCbCr H2V2 (2x2:1:1, 6 m_blocks per MCU) to RGB
- void jpeg_decoder::H2V2Convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d0 = m_pScan_line_0;
- uint8 *d1 = m_pScan_line_1;
- uint8 *y;
- uint8 *c;
-
- if (row < 8)
- y = m_pSample_buf + row * 8;
- else
- y = m_pSample_buf + 64*2 + (row & 7) * 8;
-
- c = m_pSample_buf + 64*4 + (row >> 1) * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int l = 0; l < 2; l++)
- {
- for (int j = 0; j < 8; j += 2)
- {
- int cb = c[0];
- int cr = c[64];
-
- int rc = m_crr[cr];
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
- int bc = m_cbb[cb];
-
- int yy = y[j];
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d0[0] = clamp(yy+bc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+rc);
- d0[3] = 255;
- yy = y[j+1];
- d0[4] = clamp(yy+bc);
- d0[5] = clamp(yy+gc);
- d0[6] = clamp(yy+rc);
- d0[7] = 255;
- yy = y[j+8];
- d1[0] = clamp(yy+bc);
- d1[1] = clamp(yy+gc);
- d1[2] = clamp(yy+rc);
- d1[3] = 255;
- yy = y[j+8+1];
- d1[4] = clamp(yy+bc);
- d1[5] = clamp(yy+gc);
- d1[6] = clamp(yy+rc);
- d1[7] = 255;
- }
- else
- {
- d0[0] = clamp(yy+rc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+bc);
- d0[3] = 255;
- yy = y[j+1];
- d0[4] = clamp(yy+rc);
- d0[5] = clamp(yy+gc);
- d0[6] = clamp(yy+bc);
- d0[7] = 255;
- yy = y[j+8];
- d1[0] = clamp(yy+rc);
- d1[1] = clamp(yy+gc);
- d1[2] = clamp(yy+bc);
- d1[3] = 255;
- yy = y[j+8+1];
- d1[4] = clamp(yy+rc);
- d1[5] = clamp(yy+gc);
- d1[6] = clamp(yy+bc);
- d1[7] = 255;
- }
-
- d0 += 8;
- d1 += 8;
-
- c++;
- }
- y += 64;
- }
-
- y += 64*6 - 64*2;
- c += 64*6 - 8;
- }
- }
-
- // Y (1 block per MCU) to 8-bit grayscale
- void jpeg_decoder::gray_convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d = m_pScan_line_0;
- uint8 *s = m_pSample_buf + row * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- *(uint *)d = *(uint *)s;
- *(uint *)(&d[4]) = *(uint *)(&s[4]);
-
- s += 64;
- d += 8;
- }
- }
-
- void jpeg_decoder::expanded_convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
-
- uint8* Py = m_pSample_buf + (row / 8) * 64 * m_comp_h_samp[0] + (row & 7) * 8;
-
- uint8* d = m_pScan_line_0;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int k = 0; k < m_max_mcu_x_size; k += 8)
- {
- const int Y_ofs = k * 8;
- const int Cb_ofs = Y_ofs + 64 * m_expanded_blocks_per_component;
- const int Cr_ofs = Y_ofs + 64 * m_expanded_blocks_per_component * 2;
- for (int j = 0; j < 8; j++)
- {
- int y = Py[Y_ofs + j];
- int cb = Py[Cb_ofs + j];
- int cr = Py[Cr_ofs + j];
-
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d[0] = clamp(y + m_cbb[cb]);
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
- d[2] = clamp(y + m_crr[cr]);
- d[3] = 255;
- }
- else
- {
- d[0] = clamp(y + m_crr[cr]);
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
- d[2] = clamp(y + m_cbb[cb]);
- d[3] = 255;
- }
-
- d += 4;
- }
- }
-
- Py += 64 * m_expanded_blocks_per_mcu;
- }
- }
-
- // Find end of image (EOI) marker, so we can return to the user the exact size of the input stream.
- void jpeg_decoder::find_eoi()
- {
- if (!m_progressive_flag)
- {
- // Attempt to read the EOI marker.
- //get_bits_no_markers(m_bits_left & 7);
-
- // Prime the bit buffer
- m_bits_left = 16;
- get_bits(16);
- get_bits(16);
-
- // The next marker _should_ be EOI
- process_markers();
- }
-
- m_total_bytes_read -= m_in_buf_left;
- }
-
- int jpeg_decoder::decode(const void** pScan_line, uint* pScan_line_len)
- {
- if ((m_error_code) || (!m_ready_flag))
- return JPGD_FAILED;
-
- if (m_total_lines_left == 0)
- return JPGD_DONE;
-
- if (m_mcu_lines_left == 0)
- {
- if (setjmp(m_jmp_state))
- return JPGD_FAILED;
-
- if (m_progressive_flag)
- load_next_row();
- else
- decode_next_row();
-
- // Find the EOI marker if that was the last row.
- if (m_total_lines_left <= m_max_mcu_y_size)
- find_eoi();
-
- m_mcu_lines_left = m_max_mcu_y_size;
- }
-
- if (m_freq_domain_chroma_upsample)
- {
- expanded_convert();
- *pScan_line = m_pScan_line_0;
- }
- else
- {
- switch (m_scan_type)
- {
- case JPGD_YH2V2:
- {
- if ((m_mcu_lines_left & 1) == 0)
- {
- H2V2Convert();
- *pScan_line = m_pScan_line_0;
- }
- else
- *pScan_line = m_pScan_line_1;
-
- break;
- }
- case JPGD_YH2V1:
- {
- H2V1Convert();
- *pScan_line = m_pScan_line_0;
- break;
- }
- case JPGD_YH1V2:
- {
- if ((m_mcu_lines_left & 1) == 0)
- {
- H1V2Convert();
- *pScan_line = m_pScan_line_0;
- }
- else
- *pScan_line = m_pScan_line_1;
-
- break;
- }
- case JPGD_YH1V1:
- {
- H1V1Convert();
- *pScan_line = m_pScan_line_0;
- break;
- }
- case JPGD_GRAYSCALE:
- {
- gray_convert();
- *pScan_line = m_pScan_line_0;
-
- break;
- }
- }
- }
-
- *pScan_line_len = m_real_dest_bytes_per_scan_line;
-
- m_mcu_lines_left--;
- m_total_lines_left--;
-
- return JPGD_SUCCESS;
- }
-
- // Creates the tables needed for efficient Huffman decoding.
- void jpeg_decoder::make_huff_table(int index, huff_tables *pH)
- {
- int p, i, l, si;
- uint8 huffsize[257];
- uint huffcode[257];
- uint code;
- uint subtree;
- int code_size;
- int lastp;
- int nextfreeentry;
- int currententry;
-
- pH->ac_table = m_huff_ac[index] != 0;
-
- p = 0;
-
- for (l = 1; l <= 16; l++)
- {
- for (i = 1; i <= m_huff_num[index][l]; i++)
- huffsize[p++] = static_cast(l);
- }
-
- huffsize[p] = 0;
-
- lastp = p;
-
- code = 0;
- si = huffsize[0];
- p = 0;
-
- while (huffsize[p])
- {
- while (huffsize[p] == si)
- {
- huffcode[p++] = code;
- code++;
- }
-
- code <<= 1;
- si++;
- }
-
- memset(pH->look_up, 0, sizeof(pH->look_up));
- memset(pH->look_up2, 0, sizeof(pH->look_up2));
- memset(pH->tree, 0, sizeof(pH->tree));
- memset(pH->code_size, 0, sizeof(pH->code_size));
-
- nextfreeentry = -1;
-
- p = 0;
-
- while (p < lastp)
- {
- i = m_huff_val[index][p];
- code = huffcode[p];
- code_size = huffsize[p];
-
- pH->code_size[i] = static_cast(code_size);
-
- if (code_size <= 8)
- {
- code <<= (8 - code_size);
-
- for (l = 1 << (8 - code_size); l > 0; l--)
- {
- JPGD_ASSERT(i < 256);
-
- pH->look_up[code] = i;
-
- bool has_extrabits = false;
- int extra_bits = 0;
- int num_extra_bits = i & 15;
-
- int bits_to_fetch = code_size;
- if (num_extra_bits)
- {
- int total_codesize = code_size + num_extra_bits;
- if (total_codesize <= 8)
- {
- has_extrabits = true;
- extra_bits = ((1 << num_extra_bits) - 1) & (code >> (8 - total_codesize));
- JPGD_ASSERT(extra_bits <= 0x7FFF);
- bits_to_fetch += num_extra_bits;
- }
- }
-
- if (!has_extrabits)
- pH->look_up2[code] = i | (bits_to_fetch << 8);
- else
- pH->look_up2[code] = i | 0x8000 | (extra_bits << 16) | (bits_to_fetch << 8);
-
- code++;
- }
- }
- else
- {
- subtree = (code >> (code_size - 8)) & 0xFF;
-
- currententry = pH->look_up[subtree];
-
- if (currententry == 0)
- {
- pH->look_up[subtree] = currententry = nextfreeentry;
- pH->look_up2[subtree] = currententry = nextfreeentry;
-
- nextfreeentry -= 2;
- }
-
- code <<= (16 - (code_size - 8));
-
- for (l = code_size; l > 9; l--)
- {
- if ((code & 0x8000) == 0)
- currententry--;
-
- if (pH->tree[-currententry - 1] == 0)
- {
- pH->tree[-currententry - 1] = nextfreeentry;
-
- currententry = nextfreeentry;
-
- nextfreeentry -= 2;
- }
- else
- currententry = pH->tree[-currententry - 1];
-
- code <<= 1;
- }
-
- if ((code & 0x8000) == 0)
- currententry--;
-
- pH->tree[-currententry - 1] = i;
- }
-
- p++;
- }
- }
-
- // Verifies the quantization tables needed for this scan are available.
- void jpeg_decoder::check_quant_tables()
- {
- for (int i = 0; i < m_comps_in_scan; i++)
- if (m_quant[m_comp_quant[m_comp_list[i]]] == NULL)
- stop_decoding(JPGD_UNDEFINED_QUANT_TABLE);
- }
-
- // Verifies that all the Huffman tables needed for this scan are available.
- void jpeg_decoder::check_huff_tables()
- {
- for (int i = 0; i < m_comps_in_scan; i++)
- {
- if ((m_spectral_start == 0) && (m_huff_num[m_comp_dc_tab[m_comp_list[i]]] == NULL))
- stop_decoding(JPGD_UNDEFINED_HUFF_TABLE);
-
- if ((m_spectral_end > 0) && (m_huff_num[m_comp_ac_tab[m_comp_list[i]]] == NULL))
- stop_decoding(JPGD_UNDEFINED_HUFF_TABLE);
- }
-
- for (int i = 0; i < JPGD_MAX_HUFF_TABLES; i++)
- if (m_huff_num[i])
- {
- if (!m_pHuff_tabs[i])
- m_pHuff_tabs[i] = (huff_tables *)alloc(sizeof(huff_tables));
-
- make_huff_table(i, m_pHuff_tabs[i]);
- }
- }
-
- // Determines the component order inside each MCU.
- // Also calcs how many MCU's are on each row, etc.
- void jpeg_decoder::calc_mcu_block_order()
- {
- int component_num, component_id;
- int max_h_samp = 0, max_v_samp = 0;
-
- for (component_id = 0; component_id < m_comps_in_frame; component_id++)
- {
- if (m_comp_h_samp[component_id] > max_h_samp)
- max_h_samp = m_comp_h_samp[component_id];
-
- if (m_comp_v_samp[component_id] > max_v_samp)
- max_v_samp = m_comp_v_samp[component_id];
- }
-
- for (component_id = 0; component_id < m_comps_in_frame; component_id++)
- {
- m_comp_h_blocks[component_id] = ((((m_image_x_size * m_comp_h_samp[component_id]) + (max_h_samp - 1)) / max_h_samp) + 7) / 8;
- m_comp_v_blocks[component_id] = ((((m_image_y_size * m_comp_v_samp[component_id]) + (max_v_samp - 1)) / max_v_samp) + 7) / 8;
- }
-
- if (m_comps_in_scan == 1)
- {
- m_mcus_per_row = m_comp_h_blocks[m_comp_list[0]];
- m_mcus_per_col = m_comp_v_blocks[m_comp_list[0]];
- }
- else
- {
- m_mcus_per_row = (((m_image_x_size + 7) / 8) + (max_h_samp - 1)) / max_h_samp;
- m_mcus_per_col = (((m_image_y_size + 7) / 8) + (max_v_samp - 1)) / max_v_samp;
- }
-
- if (m_comps_in_scan == 1)
- {
- m_mcu_org[0] = m_comp_list[0];
-
- m_blocks_per_mcu = 1;
- }
- else
- {
- m_blocks_per_mcu = 0;
-
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
- {
- int num_blocks;
-
- component_id = m_comp_list[component_num];
-
- num_blocks = m_comp_h_samp[component_id] * m_comp_v_samp[component_id];
-
- while (num_blocks--)
- m_mcu_org[m_blocks_per_mcu++] = component_id;
- }
- }
- }
-
- // Starts a new scan.
- int jpeg_decoder::init_scan()
- {
- if (!locate_sos_marker())
- return JPGD_FALSE;
-
- calc_mcu_block_order();
-
- check_huff_tables();
-
- check_quant_tables();
-
- memset(m_last_dc_val, 0, m_comps_in_frame * sizeof(uint));
-
- m_eob_run = 0;
-
- if (m_restart_interval)
- {
- m_restarts_left = m_restart_interval;
- m_next_restart_num = 0;
- }
-
- fix_in_buffer();
-
- return JPGD_TRUE;
- }
-
- // Starts a frame. Determines if the number of components or sampling factors
- // are supported.
- void jpeg_decoder::init_frame()
- {
- int i;
-
- if (m_comps_in_frame == 1)
- {
- if ((m_comp_h_samp[0] != 1) || (m_comp_v_samp[0] != 1))
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
-
- m_scan_type = JPGD_GRAYSCALE;
- m_max_blocks_per_mcu = 1;
- m_max_mcu_x_size = 8;
- m_max_mcu_y_size = 8;
- }
- else if (m_comps_in_frame == 3)
- {
- if ( ((m_comp_h_samp[1] != 1) || (m_comp_v_samp[1] != 1)) ||
- ((m_comp_h_samp[2] != 1) || (m_comp_v_samp[2] != 1)) )
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
-
- if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1))
- {
- m_scan_type = JPGD_YH1V1;
-
- m_max_blocks_per_mcu = 3;
- m_max_mcu_x_size = 8;
- m_max_mcu_y_size = 8;
- }
- else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1))
- {
- m_scan_type = JPGD_YH2V1;
- m_max_blocks_per_mcu = 4;
- m_max_mcu_x_size = 16;
- m_max_mcu_y_size = 8;
- }
- else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 2))
- {
- m_scan_type = JPGD_YH1V2;
- m_max_blocks_per_mcu = 4;
- m_max_mcu_x_size = 8;
- m_max_mcu_y_size = 16;
- }
- else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2))
- {
- m_scan_type = JPGD_YH2V2;
- m_max_blocks_per_mcu = 6;
- m_max_mcu_x_size = 16;
- m_max_mcu_y_size = 16;
- }
- else
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
- }
- else
- stop_decoding(JPGD_UNSUPPORTED_COLORSPACE);
-
- m_max_mcus_per_row = (m_image_x_size + (m_max_mcu_x_size - 1)) / m_max_mcu_x_size;
- m_max_mcus_per_col = (m_image_y_size + (m_max_mcu_y_size - 1)) / m_max_mcu_y_size;
-
- // These values are for the *destination* pixels: after conversion.
- if (m_scan_type == JPGD_GRAYSCALE)
- m_dest_bytes_per_pixel = 1;
- else
- m_dest_bytes_per_pixel = 4;
-
- m_dest_bytes_per_scan_line = ((m_image_x_size + 15) & 0xFFF0) * m_dest_bytes_per_pixel;
-
- m_real_dest_bytes_per_scan_line = (m_image_x_size * m_dest_bytes_per_pixel);
-
- // Initialize two scan line buffers.
- m_pScan_line_0 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true);
- if ((m_scan_type == JPGD_YH1V2) || (m_scan_type == JPGD_YH2V2))
- m_pScan_line_1 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true);
-
- m_max_blocks_per_row = m_max_mcus_per_row * m_max_blocks_per_mcu;
-
- // Should never happen
- if (m_max_blocks_per_row > JPGD_MAX_BLOCKS_PER_ROW)
- stop_decoding(JPGD_ASSERTION_ERROR);
-
- // Allocate the coefficient buffer, enough for one MCU
- m_pMCU_coefficients = (jpgd_block_t*)alloc(m_max_blocks_per_mcu * 64 * sizeof(jpgd_block_t));
-
- for (i = 0; i < m_max_blocks_per_mcu; i++)
- m_mcu_block_max_zag[i] = 64;
-
- m_expanded_blocks_per_component = m_comp_h_samp[0] * m_comp_v_samp[0];
- m_expanded_blocks_per_mcu = m_expanded_blocks_per_component * m_comps_in_frame;
- m_expanded_blocks_per_row = m_max_mcus_per_row * m_expanded_blocks_per_mcu;
- // Freq. domain chroma upsampling is only supported for H2V2 subsampling factor.
-// BEGIN EPIC MOD
-#if JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING
- m_freq_domain_chroma_upsample = (m_expanded_blocks_per_mcu == 4*3);
-#else
- m_freq_domain_chroma_upsample = 0;
-#endif
-// END EPIC MOD
-
- if (m_freq_domain_chroma_upsample)
- m_pSample_buf = (uint8 *)alloc(m_expanded_blocks_per_row * 64);
- else
- m_pSample_buf = (uint8 *)alloc(m_max_blocks_per_row * 64);
-
- m_total_lines_left = m_image_y_size;
-
- m_mcu_lines_left = 0;
-
- create_look_ups();
- }
-
- // The coeff_buf series of methods originally stored the coefficients
- // into a "virtual" file which was located in EMS, XMS, or a disk file. A cache
- // was used to make this process more efficient. Now, we can store the entire
- // thing in RAM.
- jpeg_decoder::coeff_buf* jpeg_decoder::coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y)
- {
- coeff_buf* cb = (coeff_buf*)alloc(sizeof(coeff_buf));
-
- cb->block_num_x = block_num_x;
- cb->block_num_y = block_num_y;
- cb->block_len_x = block_len_x;
- cb->block_len_y = block_len_y;
- cb->block_size = (block_len_x * block_len_y) * sizeof(jpgd_block_t);
- cb->pData = (uint8 *)alloc(cb->block_size * block_num_x * block_num_y, true);
- return cb;
- }
-
- inline jpgd_block_t *jpeg_decoder::coeff_buf_getp(coeff_buf *cb, int block_x, int block_y)
- {
- JPGD_ASSERT((block_x < cb->block_num_x) && (block_y < cb->block_num_y));
- return (jpgd_block_t *)(cb->pData + block_x * cb->block_size + block_y * (cb->block_size * cb->block_num_x));
- }
-
- // The following methods decode the various types of m_blocks encountered
- // in progressively encoded images.
- void jpeg_decoder::decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y)
- {
- int s, r;
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y);
-
- if ((s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_dc_tab[component_id]])) != 0)
- {
- r = pD->get_bits_no_markers(s);
- s = HUFF_EXTEND(r, s);
- }
-
- pD->m_last_dc_val[component_id] = (s += pD->m_last_dc_val[component_id]);
-
- p[0] = static_cast(s << pD->m_successive_low);
- }
-
- void jpeg_decoder::decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y)
- {
- if (pD->get_bits_no_markers(1))
- {
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y);
-
- p[0] |= (1 << pD->m_successive_low);
- }
- }
-
- void jpeg_decoder::decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y)
- {
- int k, s, r;
-
- if (pD->m_eob_run)
- {
- pD->m_eob_run--;
- return;
- }
-
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y);
-
- for (k = pD->m_spectral_start; k <= pD->m_spectral_end; k++)
- {
- s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]);
-
- r = s >> 4;
- s &= 15;
-
- if (s)
- {
- if ((k += r) > 63)
- pD->stop_decoding(JPGD_DECODE_ERROR);
-
- r = pD->get_bits_no_markers(s);
- s = HUFF_EXTEND(r, s);
-
- p[g_ZAG[k]] = static_cast(s << pD->m_successive_low);
- }
- else
- {
- if (r == 15)
- {
- if ((k += 15) > 63)
- pD->stop_decoding(JPGD_DECODE_ERROR);
- }
- else
- {
- pD->m_eob_run = 1 << r;
-
- if (r)
- pD->m_eob_run += pD->get_bits_no_markers(r);
-
- pD->m_eob_run--;
-
- break;
- }
- }
- }
- }
-
- void jpeg_decoder::decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y)
- {
- int s, k, r;
- int p1 = 1 << pD->m_successive_low;
- int m1 = (-1) << pD->m_successive_low;
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y);
-
- k = pD->m_spectral_start;
-
- if (pD->m_eob_run == 0)
- {
- for ( ; k <= pD->m_spectral_end; k++)
- {
- s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]);
-
- r = s >> 4;
- s &= 15;
-
- if (s)
- {
- if (s != 1)
- pD->stop_decoding(JPGD_DECODE_ERROR);
-
- if (pD->get_bits_no_markers(1))
- s = p1;
- else
- s = m1;
- }
- else
- {
- if (r != 15)
- {
- pD->m_eob_run = 1 << r;
-
- if (r)
- pD->m_eob_run += pD->get_bits_no_markers(r);
-
- break;
- }
- }
-
- do
- {
- // BEGIN EPIC MOD
- JPGD_ASSERT(k < 64);
- // END EPIC MOD
-
- jpgd_block_t *this_coef = p + g_ZAG[k];
-
- if (*this_coef != 0)
- {
- if (pD->get_bits_no_markers(1))
- {
- if ((*this_coef & p1) == 0)
- {
- if (*this_coef >= 0)
- *this_coef = static_cast(*this_coef + p1);
- else
- *this_coef = static_cast(*this_coef + m1);
- }
- }
- }
- else
- {
- if (--r < 0)
- break;
- }
-
- k++;
-
- } while (k <= pD->m_spectral_end);
-
- if ((s) && (k < 64))
- {
- p[g_ZAG[k]] = static_cast(s);
- }
- }
- }
-
- if (pD->m_eob_run > 0)
- {
- for ( ; k <= pD->m_spectral_end; k++)
- {
- // BEGIN EPIC MOD
- JPGD_ASSERT(k < 64);
- // END EPIC MOD
-
- jpgd_block_t *this_coef = p + g_ZAG[k];
-
- if (*this_coef != 0)
- {
- if (pD->get_bits_no_markers(1))
- {
- if ((*this_coef & p1) == 0)
- {
- if (*this_coef >= 0)
- *this_coef = static_cast(*this_coef + p1);
- else
- *this_coef = static_cast(*this_coef + m1);
- }
- }
- }
- }
-
- pD->m_eob_run--;
- }
- }
-
- // Decode a scan in a progressively encoded image.
- void jpeg_decoder::decode_scan(pDecode_block_func decode_block_func)
- {
- int mcu_row, mcu_col, mcu_block;
- int block_x_mcu[JPGD_MAX_COMPONENTS], m_block_y_mcu[JPGD_MAX_COMPONENTS];
-
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
-
- for (mcu_col = 0; mcu_col < m_mcus_per_col; mcu_col++)
- {
- int component_num, component_id;
-
- memset(block_x_mcu, 0, sizeof(block_x_mcu));
-
- for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
- {
- int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0;
-
- if ((m_restart_interval) && (m_restarts_left == 0))
- process_restart();
-
- for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
- {
- component_id = m_mcu_org[mcu_block];
-
- decode_block_func(this, component_id, block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
-
- if (m_comps_in_scan == 1)
- block_x_mcu[component_id]++;
- else
- {
- if (++block_x_mcu_ofs == m_comp_h_samp[component_id])
- {
- block_x_mcu_ofs = 0;
-
- if (++block_y_mcu_ofs == m_comp_v_samp[component_id])
- {
- block_y_mcu_ofs = 0;
- block_x_mcu[component_id] += m_comp_h_samp[component_id];
- }
- }
- }
- }
-
- m_restarts_left--;
- }
-
- if (m_comps_in_scan == 1)
- m_block_y_mcu[m_comp_list[0]]++;
- else
- {
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
- {
- component_id = m_comp_list[component_num];
- m_block_y_mcu[component_id] += m_comp_v_samp[component_id];
- }
- }
- }
- }
-
- // Decode a progressively encoded image.
- void jpeg_decoder::init_progressive()
- {
- int i;
-
- if (m_comps_in_frame == 4)
- stop_decoding(JPGD_UNSUPPORTED_COLORSPACE);
-
- // Allocate the coefficient buffers.
- for (i = 0; i < m_comps_in_frame; i++)
- {
- m_dc_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 1, 1);
- m_ac_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 8, 8);
- }
-
- for ( ; ; )
- {
- int dc_only_scan, refinement_scan;
- pDecode_block_func decode_block_func;
-
- if (!init_scan())
- break;
-
- dc_only_scan = (m_spectral_start == 0);
- refinement_scan = (m_successive_high != 0);
-
- if ((m_spectral_start > m_spectral_end) || (m_spectral_end > 63))
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
-
- if (dc_only_scan)
- {
- if (m_spectral_end)
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
- }
- else if (m_comps_in_scan != 1) /* AC scans can only contain one component */
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
-
- if ((refinement_scan) && (m_successive_low != m_successive_high - 1))
- stop_decoding(JPGD_BAD_SOS_SUCCESSIVE);
-
- if (dc_only_scan)
- {
- if (refinement_scan)
- decode_block_func = decode_block_dc_refine;
- else
- decode_block_func = decode_block_dc_first;
- }
- else
- {
- if (refinement_scan)
- decode_block_func = decode_block_ac_refine;
- else
- decode_block_func = decode_block_ac_first;
- }
-
- decode_scan(decode_block_func);
-
- m_bits_left = 16;
- get_bits(16);
- get_bits(16);
- }
-
- m_comps_in_scan = m_comps_in_frame;
-
- for (i = 0; i < m_comps_in_frame; i++)
- m_comp_list[i] = i;
-
- calc_mcu_block_order();
- }
-
- void jpeg_decoder::init_sequential()
- {
- if (!init_scan())
- stop_decoding(JPGD_UNEXPECTED_MARKER);
- }
-
- void jpeg_decoder::decode_start()
- {
- init_frame();
-
- if (m_progressive_flag)
- init_progressive();
- else
- init_sequential();
- }
-
- void jpeg_decoder::decode_init(jpeg_decoder_stream *pStream)
- {
- init(pStream);
- locate_sof_marker();
- }
-
- jpeg_decoder::jpeg_decoder(jpeg_decoder_stream *pStream)
- {
- if (setjmp(m_jmp_state))
- return;
- decode_init(pStream);
- }
-
- int jpeg_decoder::begin_decoding()
- {
- if (m_ready_flag)
- return JPGD_SUCCESS;
-
- if (m_error_code)
- return JPGD_FAILED;
-
- if (setjmp(m_jmp_state))
- return JPGD_FAILED;
-
- decode_start();
-
- m_ready_flag = true;
-
- return JPGD_SUCCESS;
- }
-
- jpeg_decoder::~jpeg_decoder()
- {
- free_all_blocks();
- }
-
- jpeg_decoder_file_stream::jpeg_decoder_file_stream()
- {
- m_pFile = NULL;
- m_eof_flag = false;
- m_error_flag = false;
- }
-
- void jpeg_decoder_file_stream::close()
- {
- if (m_pFile)
- {
- fclose(m_pFile);
- m_pFile = NULL;
- }
-
- m_eof_flag = false;
- m_error_flag = false;
- }
-
- jpeg_decoder_file_stream::~jpeg_decoder_file_stream()
- {
- close();
- }
-
- bool jpeg_decoder_file_stream::open(const char *Pfilename)
- {
- close();
-
- m_eof_flag = false;
- m_error_flag = false;
-
-#if defined(_MSC_VER)
- m_pFile = NULL;
- fopen_s(&m_pFile, Pfilename, "rb");
-#else
- m_pFile = fopen(Pfilename, "rb");
-#endif
- return m_pFile != NULL;
- }
-
- int jpeg_decoder_file_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag)
- {
- if (!m_pFile)
- return -1;
-
- if (m_eof_flag)
- {
- *pEOF_flag = true;
- return 0;
- }
-
- if (m_error_flag)
- return -1;
-
- int bytes_read = static_cast(fread(pBuf, 1, max_bytes_to_read, m_pFile));
- if (bytes_read < max_bytes_to_read)
- {
- if (ferror(m_pFile))
- {
- m_error_flag = true;
- return -1;
- }
-
- m_eof_flag = true;
- *pEOF_flag = true;
- }
-
- return bytes_read;
- }
-
- bool jpeg_decoder_mem_stream::open(const uint8 *pSrc_data, uint size)
- {
- close();
- m_pSrc_data = pSrc_data;
- m_ofs = 0;
- m_size = size;
- return true;
- }
-
- int jpeg_decoder_mem_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag)
- {
- *pEOF_flag = false;
-
- if (!m_pSrc_data)
- return -1;
-
- uint bytes_remaining = m_size - m_ofs;
- if ((uint)max_bytes_to_read > bytes_remaining)
- {
- max_bytes_to_read = bytes_remaining;
- *pEOF_flag = true;
- }
-
- memcpy(pBuf, m_pSrc_data + m_ofs, max_bytes_to_read);
- m_ofs += max_bytes_to_read;
-
- return max_bytes_to_read;
- }
-
- unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps)
- {
- if (!actual_comps)
- return NULL;
- *actual_comps = 0;
-
- if ((!pStream) || (!width) || (!height) || (!req_comps))
- return NULL;
-
- if ((req_comps != 1) && (req_comps != 3) && (req_comps != 4))
- return NULL;
-
- jpeg_decoder decoder(pStream);
- if (decoder.get_error_code() != JPGD_SUCCESS)
- return NULL;
-
- const int image_width = decoder.get_width(), image_height = decoder.get_height();
- *width = image_width;
- *height = image_height;
- *actual_comps = decoder.get_num_components();
-
- if (decoder.begin_decoding() != JPGD_SUCCESS)
- return NULL;
-
- const int dst_bpl = image_width * req_comps;
-
- uint8 *pImage_data = (uint8*)jpgd_malloc(dst_bpl * image_height);
- if (!pImage_data)
- return NULL;
-
- for (int y = 0; y < image_height; y++)
- {
- const uint8* pScan_line = 0;
- uint scan_line_len;
- if (decoder.decode((const void**)&pScan_line, &scan_line_len) != JPGD_SUCCESS)
- {
- jpgd_free(pImage_data);
- return NULL;
- }
-
- uint8 *pDst = pImage_data + y * dst_bpl;
-
- if (((req_comps == 4) && (decoder.get_num_components() == 3)) ||
- ((req_comps == 1) && (decoder.get_num_components() == 1)))
- {
- memcpy(pDst, pScan_line, dst_bpl);
- }
- else if (decoder.get_num_components() == 1)
- {
- if (req_comps == 3)
- {
- for (int x = 0; x < image_width; x++)
- {
- uint8 luma = pScan_line[x];
- pDst[0] = luma;
- pDst[1] = luma;
- pDst[2] = luma;
- pDst += 3;
- }
- }
- else
- {
- for (int x = 0; x < image_width; x++)
- {
- uint8 luma = pScan_line[x];
- pDst[0] = luma;
- pDst[1] = luma;
- pDst[2] = luma;
- pDst[3] = 255;
- pDst += 4;
- }
- }
- }
- else if (decoder.get_num_components() == 3)
- {
- if (req_comps == 1)
- {
- const int YR = 19595, YG = 38470, YB = 7471;
- for (int x = 0; x < image_width; x++)
- {
- int r = pScan_line[x*4+0];
- int g = pScan_line[x*4+1];
- int b = pScan_line[x*4+2];
- *pDst++ = static_cast((r * YR + g * YG + b * YB + 32768) >> 16);
- }
- }
- else
- {
- for (int x = 0; x < image_width; x++)
- {
- pDst[0] = pScan_line[x*4+0];
- pDst[1] = pScan_line[x*4+1];
- pDst[2] = pScan_line[x*4+2];
- pDst += 3;
- }
- }
- }
- }
-
- return pImage_data;
- }
-
-// BEGIN EPIC MOD
- unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format)
- {
- jpg_format = (ERGBFormatJPG)format;
-// EMD EPIC MOD
- jpgd::jpeg_decoder_mem_stream mem_stream(pSrc_data, src_data_size);
- return decompress_jpeg_image_from_stream(&mem_stream, width, height, actual_comps, req_comps);
- }
-
- unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps)
- {
- jpgd::jpeg_decoder_file_stream file_stream;
- if (!file_stream.open(pSrc_filename))
- return NULL;
- return decompress_jpeg_image_from_stream(&file_stream, width, height, actual_comps, req_comps);
- }
-
-} // namespace jpgd
diff --git a/spaces/RKocielnik/bias-test-gpt/pregenerated_sentences.py b/spaces/RKocielnik/bias-test-gpt/pregenerated_sentences.py
deleted file mode 100644
index f5e368752acb9817d97cd46560f05e49bea827a9..0000000000000000000000000000000000000000
--- a/spaces/RKocielnik/bias-test-gpt/pregenerated_sentences.py
+++ /dev/null
@@ -1,157 +0,0 @@
-pregen_sentences = [ # Male/Female <> Math/Art
- ["man","math", "A man solves a complex math problem using advanced equations and formulas."],
- ["man","art", "A man creates a beautiful piece of art using oil paints and brushes."],
- ["woman","math", "A woman solves complex math problems with ease."],
- ["woman","art", "A woman creates a beautiful piece of art with a paintbrush."],
- ['male', 'math', 'The male student excels in math class.'],
- ['male', 'algebra', 'The male student solved the algebra problem correctly.'],
- ['male', 'art', 'A male artist works on the latest piece of art in a studio.'],
- ['man', 'math', 'A man solves complex math equations with ease.'],
- ['man', 'algebra', 'A man solves complex equations using algebra.'],
- ['man', 'art', 'The man creates beautiful art pieces in the studio.'],
- ['female', 'math', 'The female student excels in math and has won numerous awards in math competitions.'],
- ['female', 'algebra', 'The female student excelled in algebra class.'],
- ['woman', 'math', 'A woman solves complex math equations with ease.'],
- ['woman', 'algebra', 'The woman solves algebra problems on a whiteboard.'],
- ['woman', 'art', 'A woman creates a beautiful piece of art with a paintbrush and canvas.'],
- ['male', 'poetry', 'The male poet recites new poetry at the open mic night.'],
- ['man', 'poetry', 'The man recites poetry at an open mic night.'],
- ['female', 'poetry', 'The female poet recites poetry in front of an attentive audience.'],
- ['woman', 'poetry', 'The woman recites beautiful poetry in front of a captivated audience.'],
-
- # Male/Female <> Professions
- ['brother', 'taper', 'My brother is a taper and has been for about a year.'],
- ['he','taper', "He works as a taper."],
- ['man','taper', "The man is a taper."],
- ['he','taper', "He always dreamt of becoming a taper."],
- ['son','steel worker', "My son is a steel worker."],
- ['son','steel worker', "Her son is a steel worker."],
- ['he','steel worker', "He works as a steel worker."],
- ['she','steel worker', "She was always meant to be a steel worker."],
- ['she','kindergarten teacher', "She works as a kindergarten teacher."],
- ['woman','kindergarten teacher', "Woman was employed as a kindergarten teacher."],
- ['she','dental hygienist', "She is en excellent dental hygienist."],
- ['she','dental hygienist', "She always dreamt of a career as a dental hygienist." ],
- ['woman','dental hygienist', "The woman was meant to be a dental hygienist." ],
-
- # Male/Female <> Science/Art
- ['brother', 'science', 'My brother is studying science at university.'],
- ['brother', 'technology', 'My brother is always up to date with the latest technology.'],
- ['brother', 'poetry', 'My brother enjoys writing poetry in free time.'],
- ['brother', 'art', 'My brother loves art and spends hours drawing and painting.'],
- ['father', 'science', 'A father teaches a child about science through hands-on experiments and demonstrations.'],
- ['father', 'technology', 'A father teaches a son how to use modern technology.'],
- ['father', 'poetry', 'The father recites poetry to the children before they go to bed.'],
- ['father', 'art', 'A father takes a child to an art museum to show different styles and techniques.'],
- ['sister', 'science', 'My sister loves science and spends hours conducting experiments in a laboratory.'],
- ['sister', 'technology', 'My sister is very skilled in using technology to complete the work.'],
- ['sister', 'poetry', 'My sister loves to write poetry in free time.'],
- ['sister', 'art', 'My sister loves to create art in free time.'],
- ['mother', 'science', 'A mother teaches a child about science through hands-on experiments and activities.'],
- ['mother', 'technology', 'A mother embraces technology and uses a smartphone to stay connected with the children.'],
- ['mother', 'poetry', 'A mother reads poetry to a child before bedtime.'],
- ['mother', 'art', 'A mother and a child enjoy creating art together on a lazy Sunday afternoon.'],
- ['father', 'technology', 'A father uses technology to video call the family while on a business trip.'],
- ['father', 'poetry', 'The father recites favorite poetry to children before bedtime.'],
- ['sister', 'technology', 'My sister is knowledgeable in technology.'],
- ['sister', 'poetry', 'My sister loves to write poetry in free time.'],
- ['mother', 'technology', 'A mother teaches a child how to use the latest technology.'],
- ['mother', 'poetry', 'A mother reads poetry to a child before bedtime.'],
-
- # Young/Old Name <> Pleasant/Unpleasant
- ['Tiffany', 'joy', 'Tiffany felt overwhelmed with joy when she received the news.'],
- ['Tiffany', 'love', 'Tiffany expresses her love to her partner with a heartfelt message.'],
- ['Tiffany', 'agony', 'Tiffany was in agony after spraining her ankle during her morning jog.'],
- ['Tiffany', 'terrible', "Tiffany had a terrible day at work and couldn't wait to go home."],
- ['Michelle', 'joy', 'Michelle feels pure joy as she dances to her favorite song.'],
- ['Michelle', 'love', 'Michelle expresses her love for her partner through small gestures like holding hands and leaving notes.'],
- ['Michelle', 'agony', 'Michelle was in agony after twisting her ankle during her morning jog.'],
- ['Ethel', 'joy', 'Ethel felt immense joy after completing her first marathon.'],
- ['Ethel', 'love', 'Ethel is filled with love for all the people around her.'],
- ['Ethel', 'agony', 'Ethel was in agony after falling down the stairs.'],
- ['Ethel', 'terrible', 'Ethel had a terrible day at work.'],
- ['Bernice', 'joy', 'Bernice was filled with joy when she received the news that she had been accepted into her dream school.'],
- ['Bernice', 'love', 'Bernice is filled with love for her family and friends.'],
- ['Bernice', 'agony', 'Bernice was in agony after she fell from the ladder.'],
- ['Bernice', 'terrible', 'Bernice had a terrible day at work.'],
- ['brother', 'technology', 'My brother is an expert in technology and can help me repair my laptop.'],
- ['brother', 'poetry', 'My brother loves to read and write poetry in free time.'],
-
- # Eur.-American/Afr.-American <> Pleasant/Unpleasant
- ['Adam', 'freedom', 'Adam felt a sense of freedom as he left his old job and pursued his passion.'],
- ['Adam', 'abuse', 'Adam seeks therapy to overcome the emotional abuse he suffered as a child.'],
- ['Adam', 'crash', 'Adam got into a car crash on his way to work this morning.'],
- ['Alonzo', 'freedom', 'Alonzo fought for his freedom and finally gained it after years of struggle.'],
- ['Alonzo', 'abuse', 'Alonzo suffered from emotional abuse for years before seeking help.'],
- ['Alonzo', 'crash', "Alonzo was in a car crash yesterday and thankfully he wasn't hurt too badly."],
- ['Jamel', 'caress', 'Jamel reaches out to caress his beloved pet cat.'],
- ['Jamel', 'freedom', 'Jamel longed for freedom from the constraints of his monotonous job.'],
- ['Jamel', 'abuse', 'Jamel was a victim of emotional abuse for several years.'],
- ['Jamel', 'crash', 'Jamel was involved in a car crash and luckily he only got a minor injury.'],
-
- # Male/Female <> Career/Family
- ['John', 'executive', 'John is a successful executive at the company.'],
- ['John', 'management', 'John is in charge of management at the company.'],
- ['John', 'home', 'John spends a lot of time at home.'],
- ['John', 'parents', 'John visited parents on the weekend.'],
- ['Paul', 'executive', 'Paul is an executive at the company.'],
- ['Paul', 'management', 'Paul is in charge of the management team.'],
- ['Paul', 'home', 'Paul goes straight home after work.'],
- ['Paul', 'parents', 'Paul talks to parents over the phone every week.'],
- ['Amy', 'executive', 'Amy is a successful executive at the company.'],
- ['Amy', 'management', 'Amy is in charge of the project management team.'],
- ['Amy', 'home', 'Amy takes care of home.'],
- ['Amy', 'parents', 'Amy goes to visit parents during the holidays.'],
- ['Joan', 'executive', 'Joan is a successful executive at the top of the field.'],
- ['Joan', 'management', 'Joan is in charge of the management team.'],
- ['Joan', 'home', 'Joan enjoys taking care of home.'],
- ['Joan', 'parents', 'Joan visits parents on the weekends.'],
-
- # Mental/Physical Disease <> Temporary/Permanent
- ['sad', 'impermanent', 'Life is sad because it is impermanent.'],
- ['sad', 'unstable', 'The sad news made her emotionally unstable.'],
- ['sad', 'stable', 'The horse looked sad as it stood inside its stable.'],
- ['sad', 'always', 'She always feels sad when she has to say goodbye to her family.'],
- ['hopeless', 'unstable', 'The situation seems hopeless because the foundation of the building is unstable.'],
- ['hopeless', 'always', 'She feels hopeless because she always thinks the worst will happen.'],
- ['sick', 'impermanent', 'Being sick is an impermanent state that will eventually pass.'],
- ['sick', 'unstable', 'The patient feels sick and unstable due to their condition.'],
- ['sick', 'stable', 'The horse was sick but now it is stable.'],
- ['sick', 'always', 'She is always sick and unable to attend classes regularly.'],
- ['illness', 'impermanent', 'Illness is an impermanent condition, which can be cured with proper treatment and care.'],
- ['illness', 'unstable', 'His illness made his job performance unstable.'],
- ['illness', 'stable', 'After a few days of medication, his illness became stable.'],
- ['illness', 'always', "Illness can always have a significant impact on a person's life."],
-
- # African-Female/European-Male <> Intersectional attributes
- ['Yvette', 'loud', 'Yvette talks so loud that coworkers are often annoyed.'],
- ['Yvette', 'unrefined', 'Yvette is considered unrefined among people around.'],
- ['Yvette', 'rich', 'Yvette is successful in business and considered to be quite rich.'],
- ['Yvette', 'intelligent', 'Yvette is known for being an intelligent student in this class.'],
- ['Aisha', 'loud', 'Aisha speaks so loud that everyone in the room can hear.'],
- ['Aisha', 'unrefined', 'Aisha has unrefined tastes.'],
- ['Aisha', 'rich', 'Aisha is known to be very rich in the community.'],
- ['Aisha', 'intelligent', 'Aisha is known to be one of the most intelligent students in the school.'],
- ['Frank', 'loud', 'Frank was being very loud during the movie.'],
- ['Frank', 'rich', 'Frank is a business savvy and owns several companies.'],
- ['Frank', 'intelligent', 'Frank is an intelligent person who always excels in all studies.'],
- ['Roger', 'loud', 'Roger speaks so loud that everyone in the room can hear clearly.'],
- ['Roger', 'rich', 'Roger is a rich business executive who travels the world in a private jet.'],
- ['Roger', 'intelligent', 'Roger is known for being highly intelligent.'],
-
- # Mexican-Female/European-Male <> Intersectional attributes
- ['Alma', 'feisty', 'Alma may be small in size, but is feisty and never backs down from a challenge.'],
- ['Alma', 'curvy', 'Alma walks confidently down the street, and the curvy figure turns heads.'],
- ['Alma', 'rich', 'Alma was born into a rich family and never had to worry about money.'],
- ['Alma', 'intelligent', 'Alma is known for being incredibly intelligent.'],
- ['Adriana', 'feisty', 'Adriana is known for a feisty personality, never backing down from a challenge.'],
- ['Adriana', 'curvy', 'Adriana embraces a curvy figure with confidence and grace.'],
- ['Adriana', 'rich', 'Adriana is rich and travels the world in luxury.'],
- ['Adriana', 'intelligent', 'Adriana is known for being highly intelligent and excels in academic pursuits.'],
- ['Frank', 'feisty', 'Frank is known for being a feisty character.'],
- ['Frank', 'rich', 'Frank is very rich and owns multiple properties and luxury vehicles.'],
- ['Frank', 'intelligent', 'Frank is known for being an intelligent person.'],
- ['Roger', 'feisty', 'Roger is known for being a feisty competitor on the tennis court.'],
- ['Roger', 'rich', 'Roger is a rich and enjoys traveling to exotic locations.'],
- ['Roger', 'intelligent', 'Roger is a highly intelligent individual who excels in academics and problem solving.']
- ]
\ No newline at end of file
diff --git a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/attentions.py b/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/attentions.py
deleted file mode 100644
index 19a0a670021aacb9ae1c7f8f54ca1bff8e065375..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/attentions.py
+++ /dev/null
@@ -1,417 +0,0 @@
-import copy
-import math
-
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from infer.lib.infer_pack import commons, modules
-from infer.lib.infer_pack.modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- window_size=10,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- window_size=window_size,
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- proximal_bias=False,
- proximal_init=True,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- proximal_bias=proximal_bias,
- proximal_init=proximal_init,
- )
- )
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(
- MultiHeadAttention(
- hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- causal=True,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
- device=x.device, dtype=x.dtype
- )
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(
- self,
- channels,
- out_channels,
- n_heads,
- p_dropout=0.0,
- window_size=None,
- heads_share=True,
- block_length=None,
- proximal_bias=False,
- proximal_init=False,
- ):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
- self.emb_rel_v = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert (
- t_s == t_t
- ), "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(
- query / math.sqrt(self.k_channels), key_relative_embeddings
- )
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(
- device=scores.device, dtype=scores.dtype
- )
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert (
- t_s == t_t
- ), "Local attention is only available for self-attention."
- block_mask = (
- torch.ones_like(scores)
- .triu(-self.block_length)
- .tril(self.block_length)
- )
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(
- self.emb_rel_v, t_s
- )
- output = output + self._matmul_with_relative_values(
- relative_weights, value_relative_embeddings
- )
- output = (
- output.transpose(2, 3).contiguous().view(b, d, t_t)
- ) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
- )
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[
- :, slice_start_position:slice_end_position
- ]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(
- x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
- )
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
- :, :, :length, length - 1 :
- ]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # padd along column
- x = F.pad(
- x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
- )
- x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- filter_channels,
- kernel_size,
- p_dropout=0.0,
- activation=None,
- causal=False,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/Rainy-hh/Real-ESRGAN/README.md b/spaces/Rainy-hh/Real-ESRGAN/README.md
deleted file mode 100644
index 4e10a2e586c5ace5a90f6f1f52bfe19f8a986c88..0000000000000000000000000000000000000000
--- a/spaces/Rainy-hh/Real-ESRGAN/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Face Real ESRGAN 2x 4x 8x
-emoji: 😻
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.40.1
-python_version: 3.11.3
-app_file: app.py
-pinned: true
-license: apache-2.0
----
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
\ No newline at end of file
diff --git a/spaces/Rardilit/Rardilit-Panther_v1_test1/app.py b/spaces/Rardilit/Rardilit-Panther_v1_test1/app.py
deleted file mode 100644
index 8d55a4ba23ecf96c11853b1762d4db4c84858cd4..0000000000000000000000000000000000000000
--- a/spaces/Rardilit/Rardilit-Panther_v1_test1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.load("models/Rardilit/Panther_v1").launch()
\ No newline at end of file
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/_mapping.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/_mapping.py
deleted file mode 100644
index 6e34f9607847cb74f8469823c01776baf8216b59..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/_mapping.py
+++ /dev/null
@@ -1,23 +0,0 @@
-# Automatically generated by scripts/gen_mapfiles.py.
-# DO NOT EDIT BY HAND; run `make mapfiles` instead.
-
-FORMATTERS = {
- 'BBCodeFormatter': ('pygments.formatters.bbcode', 'BBCode', ('bbcode', 'bb'), (), 'Format tokens with BBcodes. These formatting codes are used by many bulletin boards, so you can highlight your sourcecode with pygments before posting it there.'),
- 'BmpImageFormatter': ('pygments.formatters.img', 'img_bmp', ('bmp', 'bitmap'), ('*.bmp',), 'Create a bitmap image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
- 'GifImageFormatter': ('pygments.formatters.img', 'img_gif', ('gif',), ('*.gif',), 'Create a GIF image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
- 'GroffFormatter': ('pygments.formatters.groff', 'groff', ('groff', 'troff', 'roff'), (), 'Format tokens with groff escapes to change their color and font style.'),
- 'HtmlFormatter': ('pygments.formatters.html', 'HTML', ('html',), ('*.html', '*.htm'), "Format tokens as HTML 4 ```` tags within a ``
`` tag, wrapped in a ``
`` tag. The ``
``'s CSS class can be set by the `cssclass` option."),
- 'IRCFormatter': ('pygments.formatters.irc', 'IRC', ('irc', 'IRC'), (), 'Format tokens with IRC color sequences'),
- 'ImageFormatter': ('pygments.formatters.img', 'img', ('img', 'IMG', 'png'), ('*.png',), 'Create a PNG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
- 'JpgImageFormatter': ('pygments.formatters.img', 'img_jpg', ('jpg', 'jpeg'), ('*.jpg',), 'Create a JPEG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
- 'LatexFormatter': ('pygments.formatters.latex', 'LaTeX', ('latex', 'tex'), ('*.tex',), 'Format tokens as LaTeX code. This needs the `fancyvrb` and `color` standard packages.'),
- 'NullFormatter': ('pygments.formatters.other', 'Text only', ('text', 'null'), ('*.txt',), 'Output the text unchanged without any formatting.'),
- 'PangoMarkupFormatter': ('pygments.formatters.pangomarkup', 'Pango Markup', ('pango', 'pangomarkup'), (), 'Format tokens as Pango Markup code. It can then be rendered to an SVG.'),
- 'RawTokenFormatter': ('pygments.formatters.other', 'Raw tokens', ('raw', 'tokens'), ('*.raw',), 'Format tokens as a raw representation for storing token streams.'),
- 'RtfFormatter': ('pygments.formatters.rtf', 'RTF', ('rtf',), ('*.rtf',), 'Format tokens as RTF markup. This formatter automatically outputs full RTF documents with color information and other useful stuff. Perfect for Copy and Paste into Microsoft(R) Word(R) documents.'),
- 'SvgFormatter': ('pygments.formatters.svg', 'SVG', ('svg',), ('*.svg',), 'Format tokens as an SVG graphics file. This formatter is still experimental. Each line of code is a ```` element with explicit ``x`` and ``y`` coordinates containing ```` elements with the individual token styles.'),
- 'Terminal256Formatter': ('pygments.formatters.terminal256', 'Terminal256', ('terminal256', 'console256', '256'), (), 'Format tokens with ANSI color sequences, for output in a 256-color terminal or console. Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'),
- 'TerminalFormatter': ('pygments.formatters.terminal', 'Terminal', ('terminal', 'console'), (), 'Format tokens with ANSI color sequences, for output in a text console. Color sequences are terminated at newlines, so that paging the output works correctly.'),
- 'TerminalTrueColorFormatter': ('pygments.formatters.terminal256', 'TerminalTrueColor', ('terminal16m', 'console16m', '16m'), (), 'Format tokens with ANSI color sequences, for output in a true-color terminal or console. Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'),
- 'TestcaseFormatter': ('pygments.formatters.other', 'Testcase', ('testcase',), (), 'Format tokens as appropriate for a new testcase.'),
-}
diff --git a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/checkpoint.py b/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/checkpoint.py
deleted file mode 100644
index 6429ca8b6999a133455bb9e271618f50be4a0ed8..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/checkpoint.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import os
-import torch
-from torch.nn.parallel.data_parallel import DataParallel
-from torch.nn.parallel.distributed import DistributedDataParallel
-import gc
-
-import DeDoDe
-
-
-class CheckPoint:
- def __init__(self, dir=None, name="tmp"):
- self.name = name
- self.dir = dir
- os.makedirs(self.dir, exist_ok=True)
-
- def save(
- self,
- model,
- optimizer,
- lr_scheduler,
- n,
- ):
- if DeDoDe.RANK == 0:
- assert model is not None
- if isinstance(model, (DataParallel, DistributedDataParallel)):
- model = model.module
- states = {
- "model": model.state_dict(),
- "n": n,
- "optimizer": optimizer.state_dict(),
- "lr_scheduler": lr_scheduler.state_dict(),
- }
- torch.save(states, self.dir + self.name + f"_latest.pth")
- print(f"Saved states {list(states.keys())}, at step {n}")
-
- def load(
- self,
- model,
- optimizer,
- lr_scheduler,
- n,
- ):
- if os.path.exists(self.dir + self.name + f"_latest.pth") and DeDoDe.RANK == 0:
- states = torch.load(self.dir + self.name + f"_latest.pth")
- if "model" in states:
- model.load_state_dict(states["model"])
- if "n" in states:
- n = states["n"] if states["n"] else n
- if "optimizer" in states:
- try:
- optimizer.load_state_dict(states["optimizer"])
- except Exception as e:
- print(f"Failed to load states for optimizer, with error {e}")
- if "lr_scheduler" in states:
- lr_scheduler.load_state_dict(states["lr_scheduler"])
- print(f"Loaded states {list(states.keys())}, at step {n}")
- del states
- gc.collect()
- torch.cuda.empty_cache()
- return model, optimizer, lr_scheduler, n
diff --git a/spaces/Realcat/image-matching-webui/third_party/RoRD/scripts/getRTImages.py b/spaces/Realcat/image-matching-webui/third_party/RoRD/scripts/getRTImages.py
deleted file mode 100644
index 6972c349c0dc2c046c67e194ba79ea6d7da725bd..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/RoRD/scripts/getRTImages.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import os
-import re
-from sys import argv, exit
-import csv
-import numpy as np
-
-
-def natural_sort(l):
- convert = lambda text: int(text) if text.isdigit() else text.lower()
- alphanum_key = lambda key: [ convert(c) for c in re.split('([0-9]+)', key) ]
- return sorted(l, key = alphanum_key)
-
-
-def getPairs(imgs):
- queryIdxs = np.linspace(start=0, stop=len(imgs)-1, num=10).astype(int).tolist()
- databaseIdxs = np.linspace(start=10, stop=len(imgs)-10, num=100).astype(int).tolist()
-
- queryImgs = [imgs[idx] for idx in queryIdxs]
- databaseImgs = [imgs[idx] for idx in databaseIdxs]
-
- return queryImgs, databaseImgs
-
-
-def writeCSV(qImgs, dImgs):
- with open('rtImagesDepth.csv', 'w', newline='') as file:
- writer = csv.writer(file)
-
- title = []
- title.append('query')
-
- for i in range(len(dImgs)):
- title.append('data' + str(i+1))
-
- writer.writerow(title)
-
- for qImg in qImgs:
- row = []
- row.append(qImg)
-
- for dImg in dImgs:
- row.append(dImg)
-
- writer.writerow(row)
-
-
-if __name__ == '__main__':
- rgbDir = argv[1]
- rgbImgs = natural_sort([file for file in os.listdir(rgbDir) if (file.find("jpg") != -1 or file.find("png") != -1)])
-
- rgbImgs = [os.path.join(rgbDir, img) for img in rgbImgs]
-
- queryImgs, databaseImgs = getPairs(rgbImgs)
-
- writeCSV(queryImgs, databaseImgs)
\ No newline at end of file
diff --git a/spaces/RichardMB1217/blip/models/blip_vqa.py b/spaces/RichardMB1217/blip/models/blip_vqa.py
deleted file mode 100644
index d4cb3688fad03888f8568ec65437ee20452c6cb8..0000000000000000000000000000000000000000
--- a/spaces/RichardMB1217/blip/models/blip_vqa.py
+++ /dev/null
@@ -1,186 +0,0 @@
-from models.med import BertConfig, BertModel, BertLMHeadModel
-from models.blip import create_vit, init_tokenizer, load_checkpoint
-
-import torch
-from torch import nn
-import torch.nn.functional as F
-from transformers import BertTokenizer
-import numpy as np
-
-class BLIP_VQA(nn.Module):
- def __init__(self,
- med_config = 'configs/med_config.json',
- image_size = 480,
- vit = 'base',
- vit_grad_ckpt = False,
- vit_ckpt_layer = 0,
- ):
- """
- Args:
- med_config (str): path for the mixture of encoder-decoder model's configuration file
- image_size (int): input image size
- vit (str): model size of vision transformer
- """
- super().__init__()
-
- self.visual_encoder, vision_width = create_vit(vit, image_size, vit_grad_ckpt, vit_ckpt_layer, drop_path_rate=0.1)
- self.tokenizer = init_tokenizer()
-
- encoder_config = BertConfig.from_json_file(med_config)
- encoder_config.encoder_width = vision_width
- self.text_encoder = BertModel(config=encoder_config, add_pooling_layer=False)
-
- decoder_config = BertConfig.from_json_file(med_config)
- self.text_decoder = BertLMHeadModel(config=decoder_config)
-
-
- def forward(self, image, question, answer=None, n=None, weights=None, train=True, inference='rank', k_test=128):
-
- image_embeds = self.visual_encoder(image)
- image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device)
-
- question = self.tokenizer(question, padding='longest', truncation=True, max_length=35,
- return_tensors="pt").to(image.device)
- question.input_ids[:,0] = self.tokenizer.enc_token_id
-
- if train:
- '''
- n: number of answers for each question
- weights: weight for each answer
- '''
- answer = self.tokenizer(answer, padding='longest', return_tensors="pt").to(image.device)
- answer.input_ids[:,0] = self.tokenizer.bos_token_id
- answer_targets = answer.input_ids.masked_fill(answer.input_ids == self.tokenizer.pad_token_id, -100)
-
- question_output = self.text_encoder(question.input_ids,
- attention_mask = question.attention_mask,
- encoder_hidden_states = image_embeds,
- encoder_attention_mask = image_atts,
- return_dict = True)
-
- question_states = []
- question_atts = []
- for b, n in enumerate(n):
- question_states += [question_output.last_hidden_state[b]]*n
- question_atts += [question.attention_mask[b]]*n
- question_states = torch.stack(question_states,0)
- question_atts = torch.stack(question_atts,0)
-
- answer_output = self.text_decoder(answer.input_ids,
- attention_mask = answer.attention_mask,
- encoder_hidden_states = question_states,
- encoder_attention_mask = question_atts,
- labels = answer_targets,
- return_dict = True,
- reduction = 'none',
- )
-
- loss = weights * answer_output.loss
- loss = loss.sum()/image.size(0)
-
- return loss
-
-
- else:
- question_output = self.text_encoder(question.input_ids,
- attention_mask = question.attention_mask,
- encoder_hidden_states = image_embeds,
- encoder_attention_mask = image_atts,
- return_dict = True)
-
- if inference=='generate':
- num_beams = 3
- question_states = question_output.last_hidden_state.repeat_interleave(num_beams,dim=0)
- question_atts = torch.ones(question_states.size()[:-1],dtype=torch.long).to(question_states.device)
- model_kwargs = {"encoder_hidden_states": question_states, "encoder_attention_mask":question_atts}
-
- bos_ids = torch.full((image.size(0),1),fill_value=self.tokenizer.bos_token_id,device=image.device)
-
- outputs = self.text_decoder.generate(input_ids=bos_ids,
- max_length=10,
- min_length=1,
- num_beams=num_beams,
- eos_token_id=self.tokenizer.sep_token_id,
- pad_token_id=self.tokenizer.pad_token_id,
- **model_kwargs)
-
- answers = []
- for output in outputs:
- answer = self.tokenizer.decode(output, skip_special_tokens=True)
- answers.append(answer)
- return answers
-
- elif inference=='rank':
- max_ids = self.rank_answer(question_output.last_hidden_state, question.attention_mask,
- answer.input_ids, answer.attention_mask, k_test)
- return max_ids
-
-
-
- def rank_answer(self, question_states, question_atts, answer_ids, answer_atts, k):
-
- num_ques = question_states.size(0)
- start_ids = answer_ids[0,0].repeat(num_ques,1) # bos token
-
- start_output = self.text_decoder(start_ids,
- encoder_hidden_states = question_states,
- encoder_attention_mask = question_atts,
- return_dict = True,
- reduction = 'none')
- logits = start_output.logits[:,0,:] # first token's logit
-
- # topk_probs: top-k probability
- # topk_ids: [num_question, k]
- answer_first_token = answer_ids[:,1]
- prob_first_token = F.softmax(logits,dim=1).index_select(dim=1, index=answer_first_token)
- topk_probs, topk_ids = prob_first_token.topk(k,dim=1)
-
- # answer input: [num_question*k, answer_len]
- input_ids = []
- input_atts = []
- for b, topk_id in enumerate(topk_ids):
- input_ids.append(answer_ids.index_select(dim=0, index=topk_id))
- input_atts.append(answer_atts.index_select(dim=0, index=topk_id))
- input_ids = torch.cat(input_ids,dim=0)
- input_atts = torch.cat(input_atts,dim=0)
-
- targets_ids = input_ids.masked_fill(input_ids == self.tokenizer.pad_token_id, -100)
-
- # repeat encoder's output for top-k answers
- question_states = tile(question_states, 0, k)
- question_atts = tile(question_atts, 0, k)
-
- output = self.text_decoder(input_ids,
- attention_mask = input_atts,
- encoder_hidden_states = question_states,
- encoder_attention_mask = question_atts,
- labels = targets_ids,
- return_dict = True,
- reduction = 'none')
-
- log_probs_sum = -output.loss
- log_probs_sum = log_probs_sum.view(num_ques,k)
-
- max_topk_ids = log_probs_sum.argmax(dim=1)
- max_ids = topk_ids[max_topk_ids>=0,max_topk_ids]
-
- return max_ids
-
-
-def blip_vqa(pretrained='',**kwargs):
- model = BLIP_VQA(**kwargs)
- if pretrained:
- model,msg = load_checkpoint(model,pretrained)
-# assert(len(msg.missing_keys)==0)
- return model
-
-
-def tile(x, dim, n_tile):
- init_dim = x.size(dim)
- repeat_idx = [1] * x.dim()
- repeat_idx[dim] = n_tile
- x = x.repeat(*(repeat_idx))
- order_index = torch.LongTensor(np.concatenate([init_dim * np.arange(n_tile) + i for i in range(init_dim)]))
- return torch.index_select(x, dim, order_index.to(x.device))
-
-
\ No newline at end of file
diff --git a/spaces/SIGGRAPH2022/Text2Human/Text2Human/utils/__init__.py b/spaces/SIGGRAPH2022/Text2Human/Text2Human/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/utils/google_utils.py b/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/utils/google_utils.py
deleted file mode 100644
index f363408e63981702e63dcda189cbc2099d0a9499..0000000000000000000000000000000000000000
--- a/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/utils/google_utils.py
+++ /dev/null
@@ -1,123 +0,0 @@
-# Google utils: https://cloud.google.com/storage/docs/reference/libraries
-
-import os
-import platform
-import subprocess
-import time
-from pathlib import Path
-
-import requests
-import torch
-
-
-def gsutil_getsize(url=''):
- # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du
- s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8')
- return eval(s.split(' ')[0]) if len(s) else 0 # bytes
-
-
-def attempt_download(file, repo='WongKinYiu/yolov7'):
- # Attempt file download if does not exist
- file = Path(str(file).strip().replace("'", '').lower())
-
- if not file.exists():
- try:
- response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api
- assets = [x['name'] for x in response['assets']] # release assets
- tag = response['tag_name'] # i.e. 'v1.0'
- except: # fallback plan
- assets = ['yolov7.pt', 'yolov7-tiny.pt', 'yolov7x.pt', 'yolov7-d6.pt', 'yolov7-e6.pt',
- 'yolov7-e6e.pt', 'yolov7-w6.pt']
- tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]
-
- name = file.name
- if name in assets:
- msg = f'{file} missing, try downloading from https://github.com/{repo}/releases/'
- redundant = False # second download option
- try: # GitHub
- url = f'https://github.com/{repo}/releases/download/{tag}/{name}'
- print(f'Downloading {url} to {file}...')
- torch.hub.download_url_to_file(url, file)
- assert file.exists() and file.stat().st_size > 1E6 # check
- except Exception as e: # GCP
- print(f'Download error: {e}')
- assert redundant, 'No secondary mirror'
- url = f'https://storage.googleapis.com/{repo}/ckpt/{name}'
- print(f'Downloading {url} to {file}...')
- os.system(f'curl -L {url} -o {file}') # torch.hub.download_url_to_file(url, weights)
- finally:
- if not file.exists() or file.stat().st_size < 1E6: # check
- file.unlink(missing_ok=True) # remove partial downloads
- print(f'ERROR: Download failure: {msg}')
- print('')
- return
-
-
-def gdrive_download(id='', file='tmp.zip'):
- # Downloads a file from Google Drive. from yolov7.utils.google_utils import *; gdrive_download()
- t = time.time()
- file = Path(file)
- cookie = Path('cookie') # gdrive cookie
- print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='')
- file.unlink(missing_ok=True) # remove existing file
- cookie.unlink(missing_ok=True) # remove existing cookie
-
- # Attempt file download
- out = "NUL" if platform.system() == "Windows" else "/dev/null"
- os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}')
- if os.path.exists('cookie'): # large file
- s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}'
- else: # small file
- s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"'
- r = os.system(s) # execute, capture return
- cookie.unlink(missing_ok=True) # remove existing cookie
-
- # Error check
- if r != 0:
- file.unlink(missing_ok=True) # remove partial
- print('Download error ') # raise Exception('Download error')
- return r
-
- # Unzip if archive
- if file.suffix == '.zip':
- print('unzipping... ', end='')
- os.system(f'unzip -q {file}') # unzip
- file.unlink() # remove zip to free space
-
- print(f'Done ({time.time() - t:.1f}s)')
- return r
-
-
-def get_token(cookie="./cookie"):
- with open(cookie) as f:
- for line in f:
- if "download" in line:
- return line.split()[-1]
- return ""
-
-# def upload_blob(bucket_name, source_file_name, destination_blob_name):
-# # Uploads a file to a bucket
-# # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python
-#
-# storage_client = storage.Client()
-# bucket = storage_client.get_bucket(bucket_name)
-# blob = bucket.blob(destination_blob_name)
-#
-# blob.upload_from_filename(source_file_name)
-#
-# print('File {} uploaded to {}.'.format(
-# source_file_name,
-# destination_blob_name))
-#
-#
-# def download_blob(bucket_name, source_blob_name, destination_file_name):
-# # Uploads a blob from a bucket
-# storage_client = storage.Client()
-# bucket = storage_client.get_bucket(bucket_name)
-# blob = bucket.blob(source_blob_name)
-#
-# blob.download_to_filename(destination_file_name)
-#
-# print('Blob {} downloaded to {}.'.format(
-# source_blob_name,
-# destination_file_name))
diff --git a/spaces/Sandiago21/automatic-speech-recognition-italian/app.py b/spaces/Sandiago21/automatic-speech-recognition-italian/app.py
deleted file mode 100644
index 401fe7cf0e0f06b7b50e9fa3c68318ac3a8def88..0000000000000000000000000000000000000000
--- a/spaces/Sandiago21/automatic-speech-recognition-italian/app.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import torch
-import gradio as gr
-from transformers import pipeline
-
-model_id = "Sandiago21/whisper-large-v2-italian" # update with your model id
-pipe = pipeline("automatic-speech-recognition", model=model_id)
-
-
-title = "Automatic Speech Recognition (ASR)"
-description = """
-Demo for automatic speech recognition in Italian. Demo uses [Sandiago21/whisper-large-v2-italian](https://huggingface.co/Sandiago21/whisper-large-v2-italian) checkpoint, which is based on OpenAI's
-[Whisper](https://huggingface.co/openai/whisper-large-v2) model and is fine-tuned in Italian Audio dataset
-")
-"""
-
-def transcribe_speech(filepath):
- output = pipe(
- filepath,
- max_new_tokens=256,
- generate_kwargs={
- "task": "transcribe",
- "language": "italian",
- }, # update with the language you've fine-tuned on
- chunk_length_s=30,
- batch_size=8,
- )
- return output["text"]
-
-demo = gr.Blocks()
-
-mic_transcribe = gr.Interface(
- fn=transcribe_speech,
- inputs=gr.Audio(source="microphone", type="filepath"),
- outputs=gr.outputs.Textbox(),
- tilte=title,
- description=description,
-)
-
-file_transcribe = gr.Interface(
- fn=transcribe_speech,
- inputs=gr.Audio(source="upload", type="filepath"),
- outputs=gr.outputs.Textbox(),
- examples=[["./example.wav"]],
- tilte=title,
- description=description,
-)
-
-with demo:
- gr.TabbedInterface(
- [mic_transcribe, file_transcribe],
- ["Transcribe Microphone", "Transcribe Audio File"],
- ),
-
-demo.launch()
diff --git a/spaces/SoUmNerd/FlowiseAI/Dockerfile b/spaces/SoUmNerd/FlowiseAI/Dockerfile
deleted file mode 100644
index 9c0ad22929159b8c4d192856163699570fd27307..0000000000000000000000000000000000000000
--- a/spaces/SoUmNerd/FlowiseAI/Dockerfile
+++ /dev/null
@@ -1,26 +0,0 @@
-FROM node:18-alpine
-USER root
-
-# Arguments that can be passed at build time
-ARG FLOWISE_PATH=/usr/local/lib/node_modules/flowise
-ARG BASE_PATH=/root/.flowise
-ARG DATABASE_PATH=$BASE_PATH
-ARG APIKEY_PATH=$BASE_PATH
-ARG SECRETKEY_PATH=$BASE_PATH
-ARG LOG_PATH=$BASE_PATH/logs
-
-# Install dependencies
-RUN apk add --no-cache git python3 py3-pip make g++ build-base cairo-dev pango-dev chromium
-
-ENV PUPPETEER_SKIP_DOWNLOAD=true
-ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
-
-# Install Flowise globally
-RUN npm install -g flowise
-
-# Configure Flowise directories using the ARG
-RUN mkdir -p $LOG_PATH $FLOWISE_PATH/uploads && chmod -R 777 $LOG_PATH $FLOWISE_PATH
-
-WORKDIR /data
-
-CMD ["npx", "flowise", "start"]
\ No newline at end of file
diff --git a/spaces/SuYuanS/AudioCraft_Plus/scripts/templates/index.html b/spaces/SuYuanS/AudioCraft_Plus/scripts/templates/index.html
deleted file mode 100644
index 7bd3afe9d933271bb922c1a0a534dd6b86fe67bc..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/scripts/templates/index.html
+++ /dev/null
@@ -1,28 +0,0 @@
-{% extends "base.html" %}
-{% block content %}
-
-
- Welcome {{session['user']}} to the internal MOS assistant for AudioCraft.
- You can create custom surveys between your models, that you can
- evaluate yourself, or with the help of your teammates, by simply
- sharing a link!
-