If you are looking for a powerful, customizable, and scalable CAD platform that can help you create, edit, and share your designs, then you might be interested in AutoCAD OEM 2017. This is a software development toolkit that allows you to build your own branded applications based on the core functionality of AutoCAD, the world's leading CAD software. However, to use AutoCAD OEM 2017, you need a valid product key that can activate the software. And since this is not a cheap product, you might be tempted to look for a crack or a universal product key that can bypass the activation process. In this article, we will show you how to download, install, crack, and activate AutoCAD OEM 2017 for free using X-force 2017, a keygen tool that can generate product keys for all Autodesk products.
-
Features of AutoCAD OEM 2017
-
AutoCAD OEM 2017 is a software development toolkit that allows you to create your own branded applications based on the core functionality of AutoCAD. With AutoCAD OEM 2017, you can:
Customize and scale your CAD platform: You can tailor your application to your specific needs and preferences, such as adding or removing features, changing the user interface, modifying commands and menus, creating custom objects and entities, etc. You can also scale your application to support different platforms, devices, languages, and markets.
-
Use powerful drawing and editing tools: You can access the same drawing and editing tools that are available in AutoCAD, such as lines, arcs, circles, polylines, hatches, blocks, dimensions, text, etc. You can also use advanced tools such as parametric constraints, dynamic blocks, associative arrays, etc.
-
Support various file formats and standards: You can read and write DWG files, the native file format of AutoCAD, as well as other common file formats such as DXF, DWF, PDF, etc. You can also comply with industry standards such as ISO, ANSI, DIN, etc.
-
Integrate with other Autodesk products and cloud services: You can leverage the power of other Autodesk products and cloud services that are compatible with AutoCAD OEM 2017, such as Inventor, Revit, Fusion 360, BIM 360, etc. You can also use Autodesk APIs and SDKs to extend the functionality of your application.
-
-
How to download and install AutoCAD OEM 2017
-
To download and install AutoCAD OEM 2017 on your computer, you need to follow these steps:
-
-
Check the system requirements and compatibility: Before you download AutoCAD OEM 2017, make sure that your computer meets the minimum system requirements for running the software. You can find the system requirements here. You also need to check if your operating system is compatible with AutoCAD OEM 2017. The software supports Windows 10 (64-bit), Windows 8.1 (64-bit), Windows 8 (64-bit), Windows 7 SP1 (64-bit), Windows Server 2016 (64-bit), Windows Server R2 (64-bit), Windows Server R2 SP1 (64-bit), Windows Server R2 SP2 (64-bit), Windows Server R2 SP3 (64-bit), Windows Server R2 SP4 (64-bit), Windows Server R2 SP5 (64-bit), Windows Server R2 SP6 (64-bit), Windows Server R2 SP7 (64-bit), Windows Server R2 SP8 (64-bit), Windows Server R2 SP9 (64-bit), Windows Server R2 SP10 (64-bit).
-
How to crack and activate AutoCAD OEM 2017
-
To crack and activate AutoCAD OEM 2017, you need to use a tool called X-force 2017, which is a keygen that can generate product keys for all Autodesk products. Here is how to use X-force 2017 to crack and activate AutoCAD OEM 2017:
-
How to activate AutoCAD OEM 2017 with crack and keygen
-AutoCAD OEM 2017 license code generator download free
-Crack AutoCAD OEM 2017 for lifetime activation without product key
-AutoCAD OEM 2017 serial number and activation code free download
-AutoCAD OEM 2017 full version cracked software free download
-Download AutoCAD OEM 2017 crack patch keygen torrent
-AutoCAD OEM 2017 offline installer with crack and product key
-AutoCAD OEM 2017 registration code and license key free download
-AutoCAD OEM 2017 crack only download no survey
-AutoCAD OEM 2017 activation key and crack free download
-AutoCAD OEM 2017 crack file download for windows 10
-AutoCAD OEM 2017 product key generator online free
-AutoCAD OEM 2017 crack and keygen download for mac
-AutoCAD OEM 2017 license key and crack free download
-AutoCAD OEM 2017 crack software download for pc
-AutoCAD OEM 2017 keygen and crack free download
-AutoCAD OEM 2017 activation code and product key free download
-AutoCAD OEM 2017 crack download for windows 7
-AutoCAD OEM 2017 product key and crack free download
-AutoCAD OEM 2017 serial key and crack free download
-AutoCAD OEM 2017 crack tool download for windows 8
-AutoCAD OEM 2017 license code and product key free download
-AutoCAD OEM 2017 full crack download for windows xp
-AutoCAD OEM 2017 activation key generator online free
-AutoCAD OEM 2017 crack and serial number free download
-AutoCAD OEM 2017 product key finder online free
-AutoCAD OEM 2017 crack and patch download for linux
-AutoCAD OEM 2017 license key finder online free
-AutoCAD OEM 2017 full version with crack and product key
-AutoCAD OEM 2017 serial number generator online free
-AutoCAD OEM 2017 crack and license code free download
-AutoCAD OEM 2017 product key checker online free
-AutoCAD OEM 2017 crack and activation key free download
-AutoCAD OEM 2017 license code checker online free
-AutoCAD OEM 2017 full version with crack and serial number
-AutoCAD OEM 2017 serial number checker online free
-AutoCAD OEM 2017 crack and registration code free download
-AutoCAD OEM 2017 license code finder online free
-AutoCAD OEM 2017 full version with crack and license key
-AutoCAD OEM 2017 serial number finder online free
-AutoCAD OEM 2017 crack and license key free download
-AutoCAD OEM 2017 product key generator offline free
-AutoCAD OEM 2017 full version with crack and activation code
-AutoCAD OEM 2017 activation code generator offline free
-AutoCAD OEM 2017 crack and serial key free download
-AutoCAD OEM 2017 license code generator offline free
-AutoCAD OEM 2017 full version with crack and registration code
-AutoCAD OEM 2017 registration code generator offline free
-AutoCAD OEM 2017 crack and product key free download
-
-
What is X-force 2017 and how does it work?: X-force 2017 is a software that can generate product keys for all Autodesk products, including AutoCAD OEM 2017. It works by creating a code that matches the specific product and version that you want to activate. The code is then entered into the activation window of the software, and the software is activated.
-
How to use X-force 2017 to generate a product key: To use X-force 2017 to generate a product key for AutoCAD OEM 2017, you need to follow these steps:
-
Download X-force 2017 from one of these links . Make sure you download the correct version for your operating system (32-bit or 64-bit).
-
Extract the downloaded file and run X-force 2017 as administrator.
-
Select "AutoCAD OEM 2017" from the drop-down list and click on "Generate".
-
Copy the generated product key and paste it into the activation window of AutoCAD OEM 2017.
-
-
-
How to enter the product key and activate AutoCAD OEM 2017: To enter the product key and activate AutoCAD OEM 2017, you need to follow these steps:
-
Open AutoCAD OEM 2017 and click on "Enter a Serial Number".
-
Select "I have an activation code from Autodesk" and click on "Next".
-
Paste the product key that you generated with X-force 2017 into the "Product Key" field.
-
Click on "Next" and then on "Finish".
-
Enjoy your activated AutoCAD OEM 2017.
-
-
-
-
Benefits of using AutoCAD OEM 2017 crack and product key
-
By using AutoCAD OEM 2017 crack and product key, you can enjoy several benefits, such as:
-
-
Access to full features and updates: You can use all the features and functions of AutoCAD OEM 2017 without any limitations or restrictions. You can also get access to the latest updates and patches that can improve the performance and stability of the software.
-
Save money and time: You can save money by not having to buy a license or subscription for AutoCAD OEM 2017. You can also save time by not having to go through a complicated registration or activation process.
-
Avoid malware and viruses: You can avoid malware and viruses that might come with other cracks or hacks that claim to activate AutoCAD OEM 2017. X-force 2017 is a safe and reliable tool that has been tested and verified by many users.
-
-
Conclusion
-
In conclusion, AutoCAD OEM 2017 is a powerful, customizable, and scalable CAD platform that allows you to create your own branded applications based on the core functionality of AutoCAD. However, to use AutoCAD OEM 2017, you need a valid product key that can activate the software. And since this is not a cheap product, you might be tempted to look for a crack or a universal product key that can bypass the activation process. In this article, we showed you how to download, install, crack, and activate AutoCAD OEM 2017 for free using X-force 2017, a keygen tool that can generate product keys for all Autodesk products. By using this method, you can enjoy several benefits, such as access to full features and updates, saving money and time, and avoiding malware and viruses. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.
-
Frequently Asked Questions
-
-
What is AutoCAD OEM?: AutoCAD OEM is a software development toolkit that allows you to create your own branded applications based on the core functionality of AutoCAD.
-
What is X-force 2017?: X-force 2017 is a software that can generate product keys for all Autodesk products, including AutoCAD OEM 2017.
-
How do I download AutoCAD OEM 2017?: You can download AutoCAD OEM 2017 from the official Autodesk website or from one of these links . You will need to sign in with your Autodesk account or create one if you don't have one.
-
How do I install AutoCAD OEM 2017?: You can install AutoCAD OEM 2017 by following these steps:
-
Check the system requirements and compatibility.
-
Download AutoCAD OEM 2017 from one of these links .
-
Extract the downloaded file and run setup.exe as administrator.
-
Follow the installation wizard and accept the license agreement.
-
Choose your installation type and options.
-
Click on "Install" and wait for the installation to complete.
-
Click on "Finish" and restart your computer.
-
-
-
How do I crack and activate AutoCAD OEM 2017?: You can crack and activate AutoCAD OEM 2017 by using X-force 2017, a keygen tool that can generate product keys for all Autodesk products. You can follow these steps:
-
Download X-force 2017 from one of these links .
-
Extract the downloaded file and run X-force 2017 as administrator.
-
Select "AutoCAD OEM 2017" from the drop-down list and click on "Generate".
-
Copy the generated product key and paste it into the activation window of AutoCAD OEM 2017.
-
Click on "Next" and then on "Finish".
-
Enjoy your activated AutoCAD OEM 2017.
-
-
-
What are the benefits of using AutoCAD OEM 2017 crack and product key?: By using AutoCAD OEM 2017 crack and product key, you can enjoy several benefits, such as:
-
Access to full features and updates.
-
Save money and time.
-
Avoid malware and viruses.
-
-
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cimatron E11 Download Crack Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cimatron E11 Download Crack Software.md
deleted file mode 100644
index 5fd9dd43982e6482d49a390056c97674dd5817f6..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cimatron E11 Download Crack Software.md
+++ /dev/null
@@ -1,66 +0,0 @@
-
-
Cimatron E11 Download Crack Software
-
If you are looking for a powerful CAD/CAM software for tooling and manufacturing, you might have heard of Cimatron E11. This software is designed to help you create high-quality tools of any complexity or size, as well as optimize your CNC machining processes. However, you might also be wondering how to download Cimatron E11 crack software for free, without paying for a license or subscription. In this article, we will explain what Cimatron E11 is, why people want to download crack software, what are the risks of doing so, how to download it safely, and what are the alternatives to downloading crack software.
-
What is Cimatron E11?
-
Cimatron E11 is a CAD/CAM software that provides an end-to-end solution for designing and manufacturing tools, including molds, dies, electrodes, plates, and discrete parts. It also offers a full range of CNC technologies, from simple 2.5-axis milling and drilling to complex 5-axis machining.
Some of the features and benefits of Cimatron E11 are:
-
-
It has a single, integrated, dedicated solution for tooling that boosts productivity and quality.
-
It can handle any geometry, from solids and surfaces to meshes and STL files.
-
It can machine any part, from simple prismatic shapes to intricate freeform surfaces.
-
It has local training and support from tooling experts.
-
It can shorten tool delivery time by up to 70 percent.
-
It can easily handle engineering changes and updates.
-
-
Why do people want to download crack software?
-
Crack software is any software that has been modified or hacked to bypass its original security features, such as activation codes, license keys, or digital rights management. People who download crack software usually do so for one or more of the following reasons:
-
-
To save money and avoid paying for licensing fees or subscriptions.
-
To access premium features and updates that are otherwise restricted or unavailable.
-
To bypass geo-re - To bypass geo-restrictions and censorship that may limit their access to certain software or content.
-
-
However, downloading crack software is not only illegal, but also risky and unethical. Here are some of the dangers of downloading crack software.
-
What are the risks of downloading crack software?
-
Downloading crack software may seem like a good idea at first, but it can have serious consequences for you and your computer. Some of the risks of downloading crack software are:
-
-
Legal issues and penalties. Downloading crack software is a form of piracy, which is a violation of intellectual property rights. Depending on the laws of your country, you may face fines, lawsuits, or even jail time for using crack software.
-
Malware and viruses. Crack software often contains malicious code that can infect your computer with malware, viruses, spyware, ransomware, or other threats. These can damage your files, steal your data, compromise your security, or even take over your system.
-
Poor performance and compatibility. Crack software may not work properly or at all on your computer, as it may be outdated, corrupted, or incompatible with your hardware or software. It may also cause errors, crashes, freezes, or slowdowns on your system.
-
Lack of support and warranty. Crack software does not come with any technical support or warranty from the original developer or vendor. If you encounter any problems or issues with the software, you will not be able to get any help or assistance. You will also lose any rights or benefits that come with a legitimate license.
-
-
How to download Cimatron E11 crack software safely?
-
If you still want to download Cimatron E11 crack software despite the risks, you should at least take some precautions to protect yourself and your computer. Here are some tips on how to download Cimatron E11 crack software safely:
-
-
-
Use a reputable torrent site. Torrent sites are platforms where users can share and download files using peer-to-peer technology. However, not all torrent sites are trustworthy or reliable. Some may contain fake, infected, or low-quality files. To avoid these, you should use a reputable torrent site that has positive reviews, ratings, comments, and feedback from other users.
-
Use a VPN to protect your privacy and security. A VPN (virtual private network) is a service that encrypts your internet traffic and hides your IP address and location from prying eyes. This can help you avoid being tracked, monitored, or blocked by your ISP (internet service provider), government, or hackers when downloading crack software. A VPN can also help you access geo-restricted or censored content.
-
Scan the downloaded file with an antivirus program. Before you open or install the downloaded file, you should scan it with an antivirus program to check for any malware or viruses. You should also update your antivirus program regularly to keep it effective against new threats.
-
Backup your data and system before installing. Installing crack software can cause irreversible damage to your data and system. To prevent losing your important files or settings, you should backup your data and system before installing the crack software. You can use an external hard drive, a cloud service, or a recovery tool to backup your data and system.
-
-
What are the alternatives to downloading crack software?
-
Downloading crack software is not worth the risk and hassle. There are better and safer ways to get CAD/CAM software without breaking the law or compromising your computer. Some of the alternatives to downloading crack software are:
-
-
Use a free or open source CAD/CAM software. There are many free or open source CAD/CAM software that you can use for tooling and manufacturing without paying anything. Some examples are FreeCAD, LibreCAD, OpenSCAD, Blender, and G-Code. These software may not have all the features and functions of Cimatron E11, but they can still help you create and machine 2D and 3D models.
-
Use a trial or demo version of Cimatron E11. If you want to try Cimatron E11 before buying it, you can use a trial or demo version of the software that is available on the official website of Cimatron. The trial or demo version will let you use some of the features and functions of Cimatron E11 for a limited time or with some limitations.
-
Buy a legitimate license of Cimatron E11 from an authorized dealer. The best and safest way to get Cimatron E11 is - Buy a legitimate license of Cimatron E11 from an authorized dealer. The best and safest way to get Cimatron E11 is to buy a legitimate license of the software from an authorized dealer. This way, you will get the full version of the software with all the features and updates, as well as technical support and warranty. You will also avoid any legal issues or penalties for using crack software. The price of Cimatron E11 may vary depending on the type and number of licenses, as well as the region and currency. You can contact a local dealer for a quote.
-
-
Conclusion
-
Cimatron E11 is a CAD/CAM software that provides an end-to-end solution for designing and manufacturing tools, including molds, dies, electrodes, plates, and discrete parts. It also offers a full range of CNC technologies, from simple 2.5-axis milling and drilling to complex 5-axis machining. However, downloading Cimatron E11 crack software is not a good idea, as it can expose you to legal issues, malware, poor performance, and lack of support. Instead, you should consider using a free or open source CAD/CAM software, a trial or demo version of Cimatron E11, or buying a legitimate license of Cimatron E11 from an authorized dealer. By doing so, you will be able to enjoy the benefits of Cimatron E11 without risking your computer or breaking the law.
-
If you are interested in learning more about Cimatron E11 or finding a dealer near you, you can visit the official website of Cimatron at https://www.cimatron.com/. You can also access online tutorials, videos, manuals, and forums on the website to help you get started with Cimatron E11.
-
FAQs
-
Here are some frequently asked questions about Cimatron E11 and crack software:
-
-
What is the difference between CAD and CAM software?
-
CAD software is used to design 2D and 3D models, while CAM software is used to program CNC machines to make the models.
-
How much does Cimatron E11 cost?
-
The price of Cimatron E11 depends on the type and number of licenses, as well as the region and currency. You can contact a local dealer for a quote.
-
Is Cimatron E11 compatible with Windows 10?
-
Yes, Cimatron E11 is compatible with Windows 10, as well as Windows 7 and Windows 8.1.
-
How can I learn how to use Cimatron E11?
-
You can access online tutorials, videos, manuals, and forums on the official website of Cimatron. You can also enroll in training courses offered by Cimatron or its partners.
-
Where can I get support for Cimatron E11?
-
You can contact the technical support team of Cimatron by phone, email, or online chat. You can also visit the support portal for FAQs, downloads, updates, and tips.
- b2dd77e56b
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Ontrack Disk Manager 10.46 ISO and Overcome BIOS Limitations on Your Hard Drive.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Ontrack Disk Manager 10.46 ISO and Overcome BIOS Limitations on Your Hard Drive.md
deleted file mode 100644
index ac54ad476eb58d0a8cdc5edaf819d182b142514b..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Ontrack Disk Manager 10.46 ISO and Overcome BIOS Limitations on Your Hard Drive.md
+++ /dev/null
@@ -1,178 +0,0 @@
-
-
Download Ontrack Disk Manager 10.46 ISO
-
If you are looking for a reliable and easy-to-use tool to manage your hard drives, you might want to check out Ontrack Disk Manager 10.46 ISO. This is a powerful software that can help you partition, format, backup, restore, repair, and erase your hard drives in a few simple steps. In this article, we will show you what Ontrack Disk Manager is, how to download it, and how to use it effectively.
Ontrack Disk Manager is a software that was developed by Ontrack Data Recovery, a company that specializes in data recovery and disk management solutions. Ontrack Disk Manager is designed to help users create, delete, resize, and format partitions on their hard drives, as well as perform various maintenance tasks such as backup, restore, repair, and erase.
-
Ontrack Disk Manager can work with different types of hard drives, such as IDE, SATA, SCSI, USB, and FireWire. It can also support various file systems, such as FAT16, FAT32, NTFS, EXT2, EXT3, and EXT4. It can handle hard drives up to 4 TB in size.
-
Features of Ontrack Disk Manager
-
Some of the main features of Ontrack Disk Manager are:
-
-
It can create up to four primary partitions and unlimited logical partitions on a hard drive.
-
It can format partitions with different file systems and cluster sizes.
-
It can copy partitions from one hard drive to another.
-
It can backup and restore partitions or entire hard drives.
-
It can repair damaged or corrupted partitions or hard drives.
-
It can erase partitions or entire hard drives securely.
-
It can hide or unhide partitions.
-
It can change the drive letter or label of partitions.
-
It can check the disk surface for errors and bad sectors.
-
It can defragment partitions or entire hard drives.
-
-
Benefits of Ontrack Disk Manager
-
Some of the benefits of using Ontrack Disk Manager are:
-
How to download ontrack disk manager 10.46 iso for free
-Ontrack disk manager 10.46 iso download link
-Download ontrack disk manager 10.46 iso full version
-Ontrack disk manager 10.46 iso torrent download
-Download ontrack disk manager 10.46 iso with crack
-Ontrack disk manager 10.46 iso bootable usb download
-Download ontrack disk manager 10.46 iso for windows 10
-Ontrack disk manager 10.46 iso direct download
-Download ontrack disk manager 10.46 iso from official site
-Ontrack disk manager 10.46 iso online download
-Download ontrack disk manager 10.46 iso for mac
-Ontrack disk manager 10.46 iso cd download
-Download ontrack disk manager 10.46 iso for linux
-Ontrack disk manager 10.46 iso google drive download
-Download ontrack disk manager 10.46 iso without registration
-Ontrack disk manager 10.46 iso mega download
-Download ontrack disk manager 10.46 iso for android
-Ontrack disk manager 10.46 iso dvd download
-Download ontrack disk manager 10.46 iso for windows 7
-Ontrack disk manager 10.46 iso zip download
-Download ontrack disk manager 10.46 iso for windows xp
-Ontrack disk manager 10.46 iso rar download
-Download ontrack disk manager 10.46 iso for windows 8
-Ontrack disk manager 10.46 iso mediafire download
-Download ontrack disk manager 10.46 iso without survey
-Ontrack disk manager 10.46 iso dropbox download
-Download ontrack disk manager 10.46 iso for windows vista
-Ontrack disk manager 10.46 iso zippyshare download
-Download ontrack disk manager 10.46 iso with serial key
-Ontrack disk manager 10.46 iso filehippo download
-Download ontrack disk manager 10.46 iso with license key
-Ontrack disk manager 10.46 iso softpedia download
-Download ontrack disk manager 10.46 iso with activation code
-Ontrack disk manager 10.46 iso cnet download
-Download ontrack disk manager 10.46 iso with keygen
-Ontrack disk manager 10.46 iso filefactory download
-Download ontrack disk manager 10.46 iso with patch
-Ontrack disk manager 10.46 iso uptobox download
-Download ontrack disk manager 10.46 iso with registration code
-Ontrack disk manager 10.46 iso rapidshare download
-Download ontrack disk manager 10.46 iso with product key
-Ontrack disk manager 10.46 iso sendspace download
-Download ontrack disk manager 10.46 iso with crack file
-Ontrack disk manager 10.46 iso turbobit download
-Download ontrack disk manager 10.46 iso with serial number
-Ontrack disk manager 10.46 iso uploaded download
-Download ontrack disk manager 10.46 iso with activation key
-Ontrack disk manager 10.46 iso depositfiles download
-Download ontrack disk manager 10.46 iso with crack folder
-Ontrack disk manager 10.46 iso hotfile download
-
-
It can help you optimize the performance and storage space of your hard drives.
-
It can help you protect your data from loss or damage by creating backups and restoring them when needed.
-
It can help you recover your data from damaged or corrupted hard drives by repairing them.
-
It can help you securely erase your data from your hard drives when you want to dispose of them or sell them.
-
It can help you troubleshoot and fix various disk-related problems.
-
-
How to Download Ontrack Disk Manager 10.46 ISO?
-
If you want to use Ontrack Disk Manager, you need to download its ISO file first. An ISO file is a disk image file that contains all the files and folders of a CD or DVD. You can use an ISO file to create a bootable CD or USB drive that you can use to run Ontrack Disk Manager without installing it on your computer.
-
Requirements for Downloading Ontrack Disk Manager 10.46 ISO
-
To download Ontrack Disk Manager 10.46 ISO, you need the following:
-
-
A computer with an internet connection.
-
A web browser that supports downloading large files.
-
A blank CD or USB drive with at least 700 MB of free space.
-
A CD/DVD burner software or a USB creator software that can burn or write ISO files.
-
-
Steps for Downloading Ontrack Disk Manager 10.46 ISO
-
To download Ontrack Disk Manager 10.46 ISO, follow these steps:
Click on the "DOWNLOAD OPTIONS" section and select "ISO IMAGE".
-
Click on the "OnTrack_Disk_Manager_10.46.iso" file and save it to your computer.
-
The download may take some time depending on your internet speed and the size of the file (about 688 MB).
-
-
How to Use Ontrack Disk Manager 10.46 ISO?
-
After downloading Ontrack Disk Manager 10.46 ISO, you need to burn it to a CD or USB drive and boot from it to run the software. Here are the steps for doing that:
-
How to Burn Ontrack Disk Manager 10.46 ISO to a CD or USB Drive
-
To burn Ontrack Disk Manager 10.46 ISO to a CD or USB drive, you need a CD/DVD burner software or a USB creator software that can handle ISO files. There are many free and paid software available online that you can use for this purpose. Some examples are ImgBurn, CDBurnerXP, Rufus, UNetbootin, etc.
-
The exact steps for burning an ISO file may vary depending on the software you use, but generally they are similar to these:
-
-
Insert a blank CD or USB drive into your computer.
-
Open the CD/DVD burner software or the USB creator software and select the option to burn or write an ISO file.
-
Browse and select the OnTrack_Disk_Manager_10.46.iso file that you downloaded earlier.
-
Select the destination drive (the CD or USB drive) where you want to burn or write the ISO file.
-
Start the burning or writing process and wait until it is completed.
-
-
How to Boot from Ontrack Disk Manager 10.46 ISO
-
To boot from Ontrack Disk Manager 10.46 ISO, you need to change the boot order in your computer's BIOS settings so that it prioritizes the CD or USB drive over the hard drive. The exact steps for doing this may vary depending on your computer model and BIOS version, but generally they are similar to these:
-
-
Restart your computer and press the appropriate key (usually F2, F12, Del, Esc) to enter the BIOS setup menu.
-
Navigate to the boot options section and change the boot order so that the CD or USB drive is listed first before the hard drive.
-
Save the changes and exit the BIOS setup menu.
-
Your computer will restart again and boot from the CD or USB drive where you burned or wrote the OnTrack_Disk_Manager_10.46.iso file.
-
-
How to Partition and Format a Hard Drive with Ontrack Disk Manager 10.46 ISO
-
To partition and format a hard drive with OnTrack_Disk_Manager_10.46.iso , follow these steps:
-
-
After booting from the CD or USB drive where you burned or wrote the OnTrack_Disk_Manager_10.46.iso file , you will see a welcome screen with some options . Choose "Start Program" .
-
You will see a main menu with some options . Choose "Disk Utilities" .
-
You will see a list of all the hard drives detected by the software . Choose the one that you want to partition and format .
-format , hide , unhide , or change the drive letter or label of the partitions . You can also use the "Auto" option to let the software automatically partition and format the hard drive for you .
-
After making the changes that you want , click on "Apply" to confirm them . The software will ask you to reboot your computer to complete the process .
-
After rebooting your computer , you will see your new partitions and file systems on your hard drive .
-
-
Tips and Tricks for Using Ontrack Disk Manager 10.46 ISO
-
Here are some tips and tricks for using Ontrack Disk Manager 10.46 ISO effectively:
-
How to Backup and Restore a Hard Drive with Ontrack Disk Manager 10.46 ISO
-
To backup and restore a hard drive with Ontrack Disk Manager 10.46 ISO , follow these steps:
-
-
Boot from the CD or USB drive where you burned or wrote the OnTrack_Disk_Manager_10.46.iso file .
-
From the main menu , choose "Backup/Restore" .
-
You will see two options : "Backup" and "Restore" . Choose the one that you want to do .
-
If you choose "Backup" , you will see a list of all the hard drives detected by the software . Choose the one that you want to backup . You will also need to choose a destination drive where you want to save the backup file . The destination drive can be another hard drive , a CD/DVD , or a network location . You can also choose to compress or encrypt the backup file for security or space reasons .
-
If you choose "Restore" , you will need to locate and select the backup file that you want to restore . You will also need to choose a target drive where you want to restore the backup file . The target drive can be the same as the original drive or a different one . You can also choose to overwrite or append the existing data on the target drive .
-
After making your choices , click on "Start" to begin the backup or restore process . The software will show you a progress bar and some details about the process . Wait until it is completed .
-
-
How to Repair a Damaged or Corrupted Hard Drive with Ontrack Disk Manager 10.46 ISO
-
To repair a damaged or corrupted hard drive with Ontrack Disk Manager 10.46 ISO , follow these steps:
-
-
Boot from the CD or USB drive where you burned or wrote the OnTrack_Disk_Manager_10.46.iso file .
-
From the main menu , choose "Disk Utilities" .
-
You will see a list of all the hard drives detected by the software . Choose the one that you want to repair .
-
You will see a graphical representation of the hard drive with its current partitions . You can use the mouse or keyboard to select any partition that you want to repair . You can also use the "Select All" option to select all partitions on the hard drive .
-
Click on "Repair" to start the repair process . The software will scan and fix any errors or bad sectors on the selected partitions . It will also try to recover any lost or deleted data on them . The software will show you a progress bar and some details about the process . Wait until it is completed .
-
-
How to Erase a Hard Drive with Ontrack Disk Manager 10.46 ISO
-
To erase a hard drive with Ontrack Disk Manager 10.46 ISO , follow these steps:
-
-
Boot from the CD or USB drive where you burned or wrote the OnTrack_Disk_Manager_10.46.iso file .
-
From the main menu , choose "Disk Utilities" .
-
You will see a list of all the hard drives detected by the software . Choose the one that you want to erase .
-the "Select All" option to select all partitions on the hard drive .
-
Click on "Erase" to start the erase process . The software will ask you to confirm your choice and warn you that all data on the selected partitions will be permanently deleted . Click on "Yes" to proceed .
-
The software will overwrite all data on the selected partitions with zeros or random data , depending on the level of security that you choose . You can choose from three levels of security : "Quick" , "Normal" , or "Secure" . The higher the level of security , the longer the erase process will take . The software will show you a progress bar and some details about the process . Wait until it is completed .
-
-
Conclusion
-
Ontrack Disk Manager 10.46 ISO is a powerful and easy-to-use software that can help you manage your hard drives in various ways . You can use it to partition , format , backup , restore , repair , and erase your hard drives in a few simple steps . You can also use it to troubleshoot and fix various disk-related problems . You can download it for free from this link: https://archive.org/details/OnTrackDiskManager_201801 and burn it to a CD or USB drive to run it without installing it on your computer . We hope this article has helped you learn more about Ontrack Disk Manager 10.46 ISO and how to use it effectively . If you have any questions or feedback , please feel free to leave a comment below .
-
FAQs
-
Here are some frequently asked questions about Ontrack Disk Manager 10.46 ISO :
-
-
Q: Is Ontrack Disk Manager 10.46 ISO compatible with Windows 10?
-
A: Yes, Ontrack Disk Manager 10.46 ISO is compatible with Windows 10 and other versions of Windows, such as Windows 8, Windows 7, Windows Vista, and Windows XP.
-
Q: Can I use Ontrack Disk Manager 10.46 ISO to clone a hard drive?
-
A: Yes, you can use Ontrack Disk Manager 10.46 ISO to clone a hard drive by using the "Copy" option in the "Disk Utilities" section. You can copy a partition or an entire hard drive to another hard drive.
-
Q: Can I use Ontrack Disk Manager 10.46 ISO to recover deleted files?
-
A: Yes, you can use Ontrack Disk Manager 10.46 ISO to recover deleted files by using the "Repair" option in the "Disk Utilities" section. The software will try to recover any lost or deleted data on the selected partitions.
-
Q: Can I use Ontrack Disk Manager 10.46 ISO to create bootable disks?
-
A: Yes, you can use Ontrack Disk Manager 10.46 ISO to create bootable disks by using the "Create Bootable Disk" option in the main menu. You can create bootable disks for different operating systems, such as DOS, Windows, Linux, etc.
-
Q: Can I use Ontrack Disk Manager 10.46 ISO to resize partitions?
-
A: Yes, you can use Ontrack Disk Manager 10.46 ISO to resize partitions by using the "Resize" option in the "Disk Utilities" section. You can increase or decrease the size of any partition on your hard drive.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/DoneEx XCell Compiler 1.8.1 NEW.rar Utorrent.md b/spaces/1gistliPinn/ChatGPT4/Examples/DoneEx XCell Compiler 1.8.1 NEW.rar Utorrent.md
deleted file mode 100644
index 1e0968d164408989c00405857735db21ea01a391..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/DoneEx XCell Compiler 1.8.1 NEW.rar Utorrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-... ://cracknets.net/v/d/t/harry+potter+and+the+half+blood+prince+reloaded+rar+password/ ... /v/b/e/Adobe+Photoshop+Elements+15+Crack+With+Latest+Serial+Key+Free+Download/ ... monthly 0.5 https://cracknets.net/m/o/s/Puzzle+Hero+v+1.8.1/ ... monthly 0.5 https://cracknets.net/q/z/x/DoneEx+XCell+Compiler+2.4.1.5/Â ... 1fdad05405
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Cookie Run Kingdom Now and Meet the Cutest Cookies Ever.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Cookie Run Kingdom Now and Meet the Cutest Cookies Ever.md
deleted file mode 100644
index 5a50289d76f9ad1d334e92ed9e5802e4f11986b7..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Cookie Run Kingdom Now and Meet the Cutest Cookies Ever.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
Where to Download Cookie Run: Kingdom
-
If you are looking for a sweet and addictive mobile game that combines action, strategy, and city-building, you might want to check out Cookie Run: Kingdom. This game is the latest installment in the Cookie Run series by Devsisters, and it has become a huge hit since its global launch in January 2021. In this article, we will tell you everything you need to know about Cookie Run: Kingdom, including what it is, what are its features, where and how to download it, what are some tips and tricks for beginners, and what are some pros and cons of playing it.
-
What is Cookie Run: Kingdom and why is it popular?
-
Cookie Run: Kingdom is a game that mixes real-time battle strategy and city-building, with a wide cast of unique cookies and a customizable kingdom. It tells the story of cookies who create a kingdom of their own to call home. Throughout their adventures, they explore other ancient kingdoms, battle fierce adversaries of the darkness, and unravel the mysteries of the ancient heroes who disappeared from the world.
Cookie Run: Kingdom is popular because it offers a lot of fun and variety for players of all ages and preferences. It has an intriguing RPG story mode, a player-versus-player (PvP) battle mode, and many ways to export the goods that you're making in your kingdom. It also has adorable graphics, catchy music, and charming voice acting. The game is free to play, but it also offers optional in-app purchases for players who want to enhance their experience.
-
What are the main features of Cookie Run: Kingdom and how to play it?
-
Cookie Run: Kingdom has two main modes: adventure and kingdom. In adventure mode, you can select levels to play that consist of platforming and battling. You can create a team of up to five cookies with different roles, such as attackers, defenders, healers, and supporters. You can also equip them with toppings that boost their stats and abilities. You can control your cookies manually or let them fight automatically. As you beat levels, you earn rewards such as coins, crystals, star jellies, soulstones, treasures, toppings, materials, and new cookies.
-
In kingdom mode, you can design and build your own kingdom with various buildings and decorations. You can produce materials, craft items, arrange activities, and collect resources from your kingdom. You can also interact with your cookies and other characters in your kingdom. Your kingdom serves as the hub area for most of your activities and a way to help your cookies grow.
-
Where and how to download Cookie Run: Kingdom for different devices?
-
Cookie Run: Kingdom is available for both iOS and Android devices. You can download it from the App Store or Google Play Store depending on your device. The game requires iOS 13.0 or later or Android 4.4 or later to run. The game also supports iPadOS 13.0 or later and macOS 11.0 or later with Apple M1 chip or later.
-
To download Cookie Run: Kingdom from the App Store or Google Play Store, follow these steps:
-
-
Open the App Store or Google Play Store app on your device.
-
Search for "Cookie Run: Kingdom" in the search bar.
-
Tap on the game icon that appears in the results.
-
Tap on the "Get" or "Install" button to start downloading the game.
-
Wait for the download to finish and then tap on the "Open" button to launch the game.
-
-
You can also use these links to download Cookie Run: Kingdom directly from your device:
What are some tips and tricks for beginners in Cookie Run: Kingdom?
-
If you are new to Cookie Run: Kingdom, you might feel overwhelmed by the amount of things to do and learn in the game. Don't worry, we have some tips and tricks to help you get started and enjoy the game more. Here are some of them:
-
-
Follow the main story quests and side quests to progress in the game and unlock new features. You can also get rewards such as coins, crystals, star jellies, soulstones, treasures, toppings, materials, and new cookies by completing quests.
-
Upgrade your cookies and toppings regularly to increase their power and performance. You can use soulstones to level up your cookies and star jellies to enhance your toppings. You can also use treasures to give your cookies special effects.
-
Build and upgrade your kingdom buildings to produce more resources and items. You can use coins and materials to construct and improve your buildings. You can also use crystals to speed up the process or buy more slots.
-
Explore the world map and discover new areas and secrets. You can find hidden chests, events, bosses, and other surprises by exploring the map. You can also earn rewards such as coins, crystals, star jellies, soulstones, treasures, toppings, materials, and new cookies by clearing areas.
-
Join a guild and cooperate with other players. You can chat with your guild members, exchange gifts, request help, participate in guild battles, and access exclusive guild features. You can also earn rewards such as coins, crystals, star jellies, soulstones, treasures, toppings, materials, and new cookies by contributing to your guild.
-
-
What are some pros and cons of Cookie Run: Kingdom and how does it compare to other games in the genre?
-
Cookie Run: Kingdom is a game that has many pros and cons that might appeal or deter different players. Here are some of them:
-
How to download cookie run kingdom on android
-Cookie run kingdom download for ios devices
-Cookie run kingdom apk download latest version
-Cookie run kingdom pc download free
-Cookie run kingdom mac download with m1 chip
-Best site to download cookie run kingdom safely
-Cookie run kingdom download size and requirements
-Cookie run kingdom download error and how to fix it
-Cookie run kingdom download link for google play store
-Cookie run kingdom download link for app store
-Cookie run kingdom download guide for beginners
-Cookie run kingdom download tips and tricks
-Cookie run kingdom download rewards and benefits
-Cookie run kingdom download review and rating
-Cookie run kingdom download problems and solutions
-How to update cookie run kingdom after downloading
-How to uninstall cookie run kingdom from your device
-How to transfer cookie run kingdom data to another device
-How to play cookie run kingdom offline without downloading
-How to play cookie run kingdom online with friends
-How to install cookie run kingdom on windows 10
-How to install cookie run kingdom on macos 11.0 or later
-How to install cookie run kingdom on chromebook
-How to install cookie run kingdom on fire tablet
-How to install cookie run kingdom on smart tv
-How to get cookie run kingdom without downloading
-How to get cookie run kingdom for free without paying
-How to get cookie run kingdom on steam or epic games store
-How to get cookie run kingdom on nintendo switch or ps4
-How to get cookie run kingdom on xbox one or xbox series x/s
-Why you should download cookie run kingdom today
-Why you should not download cookie run kingdom now
-Why is cookie run kingdom not available for download in my country
-Why is cookie run kingdom taking so long to download or update
-Why is cookie run kingdom crashing or freezing after downloading
-What is cookie run kingdom and how to download it
-What is new in cookie run kingdom latest update and how to download it
-What is the best way to download cookie run kingdom fast and easy
-What is the best device to play cookie run kingdom after downloading it
-What is the best treasure and topping combination in cookie run kingdom after downloading it
-
-
-
Pros
-
Cons
-
-
-
- Cute and colorful graphics
-
- Requires internet connection
-
-
-
- Engaging and diverse gameplay
-
- Can be repetitive or grindy
-
-
-
- Lovable and diverse characters
-
- Can be pay-to-win or gacha-based
-
-
-
- Immersive and rich story
-
- Can be buggy or laggy
-
-
-
- Fun and social features
-
- Can be addictive or time-consuming
-
-
-
Cookie Run: Kingdom is a game that can be compared to other games in the action-strategy-city-building genre, such as Clash of Clans, Rise of Kingdoms, or Lords Mobile. However, Cookie Run: Kingdom has its own unique charm and style that sets it apart from other games. It has a more whimsical and lighthearted tone, a more casual and accessible gameplay, a more diverse and customizable content, and a more loyal and friendly community.
-
Conclusion
-
Cookie Run: Kingdom is a game that offers a lot of fun and variety for players who love action, strategy, city-building, and cookies. It has an intriguing RPG story mode, a player-versus-player (PvP) battle mode, and many ways to export the goods that you're making in your kingdom. It also has adorable graphics, catchy music, and charming voice acting. The game is free to play, but it also offers optional in-app purchases for players who want to enhance their experience.
-
If you are interested in playing Cookie Run: Kingdom, you can download it from the App Store or Google Play Store depending on your device. The game requires iOS 13.0 or later or Android 4.4 or later to run. The game also supports iPadOS 13.0 or later and macOS 11.0 or later with Apple M1 chip or later.
-
We hope this article has helped you learn more about Cookie Run: Kingdom and how to download it. If you have any questions or feedback about the game or the article, feel free to leave a comment below. Happy gaming!
-
FAQs
-
Here are some frequently asked questions about Cookie Run: Kingdom:
-
-
How do I get more cookies in Cookie Run: Kingdom?
-
You can get more cookies in Cookie Run: Kingdom by completing quests, clearing areas on the world map, opening chests or gacha boxes, participating in events or promotions, joining a guild or inviting friends, or buying them with crystals.
-
How do I upgrade my kingdom in Cookie Run: Kingdom?
-
You can upgrade your kingdom in Cookie Run: Kingdom by building and upgrading various buildings and decorations. You can use coins and materials to construct and improve your buildings. You can also use crystals to speed up the process or buy more slots. You can also unlock new areas and features by increasing your kingdom level and population.
-
How do I win battles in Cookie Run: Kingdom?
-
You can win battles in Cookie Run: Kingdom by creating a balanced and powerful team of cookies with different roles, such as attackers, defenders, healers, and supporters. You can also equip them with toppings that boost their stats and abilities. You can also use treasures to give your cookies special effects. You can control your cookies manually or let them fight automatically. You can also use skills and items to help your cookies during battles.
-
How do I play with other players in Cookie Run: Kingdom?
-
You can play with other players in Cookie Run: Kingdom by joining a guild and cooperating with your guild members. You can chat with your guild members, exchange gifts, request help, participate in guild battles, and access exclusive guild features. You can also earn rewards such as coins, crystals, star jellies, soulstones, treasures, toppings, materials, and new cookies by contributing to your guild.
-
How do I get more crystals in Cookie Run: Kingdom?
-
You can get more crystals in Cookie Run: Kingdom by completing quests, clearing areas on the world map, opening chests or gacha boxes, participating in events or promotions, joining a guild or inviting friends, or buying them with real money.
-
Is Cookie Run: Kingdom safe for kids?
-
Cookie Run: Kingdom is a game that is suitable for kids of all ages. It has a cute and colorful graphics, a engaging and diverse gameplay, a lovable and diverse characters, and a immersive and rich story. It also has a fun and social features that allow kids to interact with other players in a friendly and respectful manner. However, parents should be aware that the game also has some elements that might require parental guidance or supervision, such as violence, gambling, spending, or addiction. Parents should also monitor their kids' screen time and online activity to ensure their safety and well-being.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Getting Over It for Free and Experience the Ultimate Challenge.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Getting Over It for Free and Experience the Ultimate Challenge.md
deleted file mode 100644
index 36b2c74e220e8a72833b723ec169f92fe2259c88..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Getting Over It for Free and Experience the Ultimate Challenge.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
Getting Over It Free Download 2022 Latest Version
-
Have you ever heard of Getting Over It with Bennett Foddy? If not, you are missing out on one of the most unique, frustrating, hilarious, and philosophical games ever made. If yes, you probably know how hard it is to beat this game, let alone get it for free. But don't worry, in this article, I will tell you everything you need to know about this game, why you should play it, how to get it for free, and how to play it better. So sit back, relax, and get ready to climb some mountains with nothing but a hammer and a pot.
Getting Over It with Bennett Foddy is a game that was released in 2017 by Bennett Foddy, an Australian game designer who is also known for creating other games like QWOP, GIRP, CLOP, and Pole Riders. He describes his game as "a game I made for a certain kind of person. To hurt them." Sounds intriguing, right? Let's see what this game is all about.
-
A brief introduction to the game and its developer
-
The game is inspired by Sexy Hiking, a 2002 B-Game classic by Jazzuo, where you control a man who tries to climb a mountain using only a hammer. Foddy decided to make his own version of this game as a homage and as an experiment. He wanted to create a game that would challenge the players' patience, skill, perseverance, and sanity. He also wanted to explore the themes of frustration, failure, progress, reward, philosophy, humor, art, and culture in video games.
-
The gameplay and the controls
-
The gameplay is very simple: you play as Diogenes, a man who sits in a metal pot and holds a sledgehammer. Your goal is to climb up an enormous mountain that is filled with various obstacles like rocks, trees, pipes, furniture, buildings, etc. You move the hammer with your mouse or trackpad (or your finger if you play on mobile), and that's all
The game has no checkpoints, no saves, no levels, no tutorials, no hints, no maps, no menus, no options, no scores, no achievements, no rewards, no endings. It's just you and the mountain. And the hammer. And the pot. And the gravity. And the physics. And the bugs. And the glitches. And the lag. And the rage.
-
The difficulty and the frustration
-
The game is extremely hard. Not because of the complexity or the design, but because of the simplicity and the execution. The game relies on your mouse movement and your muscle memory to control the hammer. The slightest mistake or miscalculation can send you flying back to the bottom of the mountain, losing hours of progress in seconds. The game is unforgiving, unpredictable, and unfair. It will test your limits, your skills, your patience, your willpower, your emotions, and your sanity.
-
The narration and the philosophy
-
The game is not silent. As you play, you will hear the voice of Bennett Foddy himself, who narrates your journey with his calm and soothing voice. He will comment on your actions, your failures, your successes, your thoughts, and your feelings. He will also share with you some quotes, anecdotes, stories, jokes, facts, opinions, and insights about various topics related to the game and life in general. He will make you laugh, he will make you think, he will make you question, he will make you angry, he will make you sad, he will make you curious, he will make you inspired.
-
getting over it with bennett foddy free download 2022
-how to download getting over it for free on android 2022
-getting over it apk free download latest version 2022
-getting over it pc game free download full version 2022
-getting over it free download for windows 10 2022
-getting over it free download for mac 2022
-getting over it free download no virus 2022
-getting over it free download google drive 2022
-getting over it free download mega.nz 2022
-getting over it free download steamunlocked 2022
-getting over it free download ocean of games 2022
-getting over it free download igg games 2022
-getting over it free download skidrow reloaded 2022
-getting over it free download fitgirl repack 2022
-getting over it free download highly compressed 2022
-getting over it free download crack only 2022
-getting over it free download update patch 2022
-getting over it free download mod apk 2022
-getting over it free download unlimited hammer 2022
-getting over it free download cheat engine 2022
-getting over it free download speedrun mode 2022
-getting over it free download multiplayer mod 2022
-getting over it free download custom maps 2022
-getting over it free download new levels 2022
-getting over it free download new music 2022
-getting over it free download no commentary 2022
-getting over it free download walkthrough guide 2022
-getting over it free download tips and tricks 2022
-getting over it free download best settings 2022
-getting over it free download system requirements 2022
-getting over it free download review and rating 2022
-getting over it free download gameplay video 2022
-getting over it free download trailer and teaser 2022
-getting over it free download official website 2022
-getting over it free download developer blog 2022
-getting over it free download news and updates 2022
-getting over it free download release date and time 2022
-getting over it free download pre order and bonus 2022
-getting over it free download discount and coupon code 2022
-getting over it free download giveaway and contest 2022
-getting over it free download fan art and memes 2022
-getting over it free download merchandise and accessories 2022
-getting over it free download soundtrack and theme song 2022
-getting over it free download easter eggs and secrets 2022
-getting over it free download achievements and trophies 2022
-getting over it free download leaderboard and ranking 2022
-getting over it free download community and forum 2022
-getting over it free download feedback and support 2022
-
Why should you play Getting Over It with Bennett Foddy?
-
Now that you know what this game is and how it works, you might be wondering: why should I play this game? What's the point? What's the fun? What's the reward? Well, there are many reasons why you should play this game, depending on what kind of person you are and what kind of experience you are looking for. Here are some of them:
-
The challenge and the reward
-
If you are a person who loves a good challenge and a sense of accomplishment, this game is for you. This game is one of the hardest games ever made, and beating it is a feat that only a few people in the world have achieved. This game will push you to your limits and beyond, and it will make you feel every emotion possible along the way. This game will make you suffer, but it will also make you grow. This game will make you hate it, but it will also make you love it. This game will make you cry, but it will also make you smile. This game will make you quit, but it will also make you come back. This game will make you lose everything, but it will also make you gain something priceless: the satisfaction of overcoming yourself.
-
The humor and the references
-
If you are a person who loves a good laugh and a dose of culture, this game is for you. It is full of humor and references that will tickle your funny bone and stimulate your brain: jokes and puns tied to the gameplay and the theme of frustration, and homages to other games, movies, books, songs, art, history, philosophy, and more. It is full of surprises and secrets that reward your curiosity and exploration, irony and sarcasm that will make you laugh at yourself and at the world, and wisdom and insight that will make you think and learn. Above all, it is full of fun and entertainment that will make you enjoy and appreciate it.
-
The exploration and the secrets
-
If you are a person who loves to discover new things and uncover hidden mysteries, this game is for you. This game is not just a linear climb up a mountain. It is also a nonlinear journey through a rich and diverse world that is full of secrets and Easter eggs. The game has many paths, branches, shortcuts, detours, loops, dead ends, and hidden areas that you can explore and find. The game has many objects, items, characters, sounds, music, and dialogues that you can interact with and learn from. The game has many secrets, puzzles, codes, clues, hints, messages, and meanings that you can uncover and decipher. The game has many layers, dimensions, levels, modes, endings, and outcomes that you can experience and achieve.
-
The community and the speedruns
-
If you are a person who loves to share your experiences and compete with others, this game is for you. This game has a huge and active community of players who are passionate about this game and who support each other through their struggles and successes. You can join this community online through various platforms like YouTube, Twitch, Discord, Reddit, Steam, etc., where you can watch, chat, stream, comment, like, subscribe, follow, donate, etc., with other players who are playing this game or who have played this game before. You can also participate in this community offline through various events like conventions, meetups, workshops, etc., where you can meet, talk, play, learn, teach, etc., with other players who are interested in this game or who are experts in this game.
-
One of the most popular ways to enjoy this game with the community is to do speedruns, which are attempts to complete the game as fast as possible using various techniques and strategies. Speedruns are a form of art and sport that showcase the skill and creativity of the players who perform them. Speedruns are also a form of entertainment and education that inspire and teach the viewers who watch them. Speedruns are also a form of challenge and competition that motivate and reward the participants who achieve them.
-
There are many categories of speedruns for this game, such as Any%, which is the fastest way to complete the game by any means necessary, Glitchless, which is the fastest way to complete the game without using any glitches or exploits, Space%, which is the fastest way to launch yourself into space and escape the game world, and more. The current world record for Any% is 56.717 seconds by Lumord, for Glitchless is 1 minute 13.2 seconds by Blastbolt, and for Space% is 1 minute 8.7 seconds by Hitachihex. You can watch these amazing speedruns on YouTube and learn from their strategies and skills. You can also try to beat their times and submit your own speedruns to Speedrun.com, where you can compare your results with other players and see your ranking on the leaderboard.
-
How to get Getting Over It with Bennett Foddy for free?
-
Now that you know why you should play this game, you might be wondering: how can I get this game for free? Well, there are a few ways to do that, but they are not all legal, safe, or ethical. So before you decide to download this game for free, you should be aware of the risks and the precautions that you should take. Here are some of the options that you have:
-
The official platforms and prices
-
The game is officially available on several platforms, such as Steam, Humble Bundle, Epic Games Store, itch.io, Google Play Store, and Apple App Store. The game costs $7.99 on Steam, Humble Bundle, Epic Games Store, and itch.io, $4.99 on Google Play Store, and $3.99 on Apple App Store. However, sometimes the game goes on sale or is offered for free on some of these platforms. For example, in December 2020, the game was free on Epic Games Store for a limited time. So if you want to get this game for free legally and ethically, you should keep an eye on these platforms and wait for a good deal or a giveaway.
-
The alternative platforms and sources
-
The game is also available on some alternative platforms and sources that are not official or authorized by the developer. These include torrent sites, file-sharing sites, modded APK sites, emulator sites, etc. These platforms and sources allow you to download the game for free without paying anything or going through any verification process. However, these platforms and sources are also illegal, unsafe, and unethical. They violate the intellectual property rights of the developer and the publisher of the game. They expose you to the risk of malware, viruses, spyware, adware, etc., that can harm your device or your data. They also deprive the developer and the publisher of the revenue that they deserve for their hard work and creativity.
-
The risks and the precautions
-
If you decide to download this game for free from an alternative platform or source, you should be aware of the risks involved and the precautions you should take. Here are some of them:
The risks:
- You could get sued or fined for piracy or copyright infringement.
- You could get infected with malware or viruses that damage your device or steal your data.
- You could get banned or suspended from the official platforms or services you use to play the game.
- You could miss out on the updates, patches, bug fixes, features, and content that the developer provides for the game.
- You could have a poor or incomplete gaming experience due to glitches, errors, and crashes that are never fixed or optimized by the developer.
The precautions:
- Use a VPN or a proxy to hide your IP address and location.
- Use an antivirus or a firewall to protect your device and your data from malware.
- Back up your device and your data regularly in case of damage or loss.
- Check the reviews, ratings, and feedback for the platform or source you download from to make sure it is reliable and trustworthy.
- Scan the file or app you download for malware before installing or running it; a small checksum-verification sketch follows below.
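The sketch below is a minimal, illustrative example of one such check: it computes the SHA-256 checksum of a downloaded file so you can compare it against a hash published by a source you trust. The file name and expected hash shown are placeholders, and a matching checksum only confirms the file was not corrupted or swapped in transit; it is not a substitute for an actual antivirus scan.

```python
# Minimal sketch: compare a downloaded file's SHA-256 digest with a
# hash published by a source you trust. A match only shows the file
# was not corrupted or replaced in transit; it is NOT a malware scan.
import hashlib
import sys

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of the file at `path`."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read the file in chunks so large downloads don't fill memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python check_download.py <downloaded_file> <expected_sha256>
    file_path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(file_path)
    print("OK: checksums match" if actual == expected else f"MISMATCH: got {actual}")
```
-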
How to play Getting Over It with Bennett Foddy better?
-
Finally, if you have managed to get this game for free (or paid for it) and want to play it better, here are some tips and tricks that can help you improve your skills and performance. These tips and tricks are based on my own experience and research, as well as the advice and guidance of other players who have mastered this game. Here they are:
-
The tips and tricks for beginners
-
If you are new to this game or have not played it much, here are some tips and tricks that can help you get started and make some progress:
- Practice. This is the most important tip for this game. The only way to get better is to practice a lot: the more you play, the more you learn and the more you improve. Practice makes perfect, or at least better.
- Be patient. This game is not meant to be easy or fast; it is meant to be hard and slow. It will take you a long time to beat, if you ever do. So don't rush, don't panic, don't give up. Be patient with yourself and with the game.
- Be calm. This game is designed to frustrate you and make you angry, and it will test your nerves and your emotions. Don't let it get to you. Breathe deeply, relax your muscles, clear your mind. Don't let the game control you; control yourself.
- Be positive. This game is full of negativity and pessimism and can make you feel hopeless. Don't let it affect you. Focus on the good things, celebrate your achievements rather than your failures, and enjoy the journey, not the destination.
- Be creative. This game is full of possibilities and opportunities, and it lets you explore and experiment with different ways of playing and moving. Don't be afraid to try new things. Use your imagination and your intuition, and find your own style and your own solutions.
-
The advanced techniques for experts
-
If you are already familiar with this game or have played it a lot, here are some advanced techniques that can help you play faster and better:
- Pogoing. This technique uses the hammer as a spring to bounce yourself up into the air, which helps you gain height and speed quickly and easily. To do it, swing the hammer downwards behind you while lifting yourself up with the pot, then release the hammer when it hits the ground to launch yourself into the air.
- Flying. This technique uses the hammer as a propeller to carry you across long distances, letting you skip large sections of the mountain and save time and effort. To do it, swing the hammer in circles around you while moving forward with the pot, then adjust the angle and direction of the hammer to steer yourself in the air.
- Hooking. This technique uses the hammer as a hook to grab onto objects and pull yourself towards them, which helps you climb steep slopes and overcome tricky obstacles. To do it, swing the hammer towards the object you want to hook onto, hold the hammer when it touches the object to attach yourself, then pull the hammer towards you while pushing yourself forward with the pot.
- Flipping. This technique uses the hammer as a lever to flip yourself over objects or gaps, helping you avoid falls or getting stuck in certain situations. To do it, swing the hammer upwards in front of you while lowering yourself with the pot, release the hammer at the top of its arc to flip over in the air, then catch yourself with the hammer on the other side of the object or gap.
- Sliding. This technique uses the hammer as a brake to slide down slopes or surfaces, giving you control over your speed and direction when descending or moving horizontally. To do it, swing the hammer downwards in front of you while moving down or sideways with the pot, then hold the hammer against the ground or the surface to slow yourself down and steer.
-
The resources and guides for learning
-
If you want to learn more about these techniques and other aspects of this game, there are many resources and guides that you can use to improve your knowledge and skills. Here are some of them:
- The official website of the game, where you can find information about the game, the developer, the platforms, the updates, etc.
- The official wiki of the game, where you can find information about the gameplay, the controls, the narration, the references, etc.
- The official subreddit of the game, where you can find discussions, questions, answers, tips, tricks, videos, memes, fan art, etc., related to the game.
- The official YouTube channel of the developer, where you can find videos of him playing and talking about his games, including this one.
- The unofficial YouTube channels of other players, where you can find videos of them playing and speedrunning this game, as well as tutorials and guides on how to play better.
- The unofficial Discord server of the game, where you can chat and voice chat with other players who are playing or have played this game.
- The unofficial Steam community hub of the game, where you can find reviews, ratings, comments, feedback, screenshots, videos, etc., related to the game.
-
Conclusion
-
Getting Over It with Bennett Foddy is a game that is not for everyone. It is a game that will make you love it or hate it. It is a game that will make you happy or sad. It is a game that will make you laugh or cry. It is a game that will make you think or feel. It is a game that will make you succeed or fail. It is a game that will make you get over it or not.
-
But whatever your reaction or outcome is, this game is worth trying. It is a game that will challenge you and reward you. It is a game that will entertain you and educate you. It is a game that will surprise you and inspire you. It is a game that will change you and stay with you.
-
So if you are interested in this game, I hope this article has helped you learn more about it and how to get it for free and play it better. If not, I hope this article has at least made you curious and amused. Either way, I thank you for reading this article and I wish you all the best in your gaming adventures.
-
Now go ahead and try Getting Over It with Bennett Foddy for yourself. And remember: don't hate the player, don't hate the game, just hate yourself.
-
FAQs
-
Here are some frequently asked questions about Getting Over It with Bennett Foddy:
-
Q: Is there an end to this game?
-
A: Yes, there is an end to this game. There is a final obstacle at the top of the mountain that marks the end of the game. However, reaching the end of the game is not easy, and it requires a lot of skill and luck. Also, the end of the game is not the same for everyone, as it depends on your choices and actions. There are different endings that you can get, depending on what you do at the end of the game. Some endings are more satisfying than others, some endings are more secret than others, and some endings are more meta than others. I won't spoil them for you, but I will say that they are worth seeing for yourself.
-
Q: What happens if you fall down the mountain?
-
A: If you fall down the mountain, you will lose some or all of your progress, depending on how far you fall and where you land. You will also hear Bennett Foddy's voice commenting on your fall and giving you some words of encouragement or discouragement. Sometimes, he will also play some music or sound effects to accompany your fall. The music and sound effects are usually related to the theme or the mood of your fall, and they can be either soothing or annoying. You can mute the music and sound effects if you want, but you can't mute Bennett Foddy's voice.
-
Q: How long does it take to beat this game?
-
A: The time it takes to beat this game varies from person to person, depending on skill, experience, strategy, and luck. Some people finish in less than a minute, others take hours, days, weeks, months, or even years, and some never finish at all. The average completion time is around 5 hours, but that doesn't mean you will beat it in 5 hours; you might be much faster or slower, or you might never beat it at all.
-
Q: Is this game based on a true story?
-
A: No, this game is not based on a true story. This game is a fictional work of art that is inspired by other works of art and culture. However, some elements of this game are based on or related to real-life facts or events. For example, the main character of this game, Diogenes, is named after a famous Greek philosopher who lived in a barrel and rejected social norms. The hammer that he uses is a reference to Thor's hammer from Norse mythology and Marvel comics. The pot that he sits in is a reference to a Chinese legend about a man who was trapped in a bronze pot by his enemies. The mountain that he climbs is a collage of various objects and scenes from different games, movies, books, songs, art, history, philosophy, and more. The narration that he hears is a mix of original and quoted texts from various sources and authors. The game itself is a homage to Sexy Hiking, a 2002 B-Game classic by Jazzuo. So while this game is not based on a true story, it is based on a lot of true stories.
-
Q: Is this game a joke or a serious game?
-
A: This game is both a joke and a serious game. It is a joke because it is full of humor and absurdity that makes fun of itself and other games. It is also a serious game because it is full of meaning and depth that explores various themes and topics related to gaming and life. This game is a paradox and a contradiction that defies easy categorization and interpretation. It is a game that makes you laugh and cry, think and feel, love and hate, succeed and fail, get over it or not.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Explore Hunt and Collect Dinosaurs in Dinosaur Hunter 3D.md b/spaces/1phancelerku/anime-remove-background/Explore Hunt and Collect Dinosaurs in Dinosaur Hunter 3D.md
deleted file mode 100644
index 39059d5064034f1c8bafcb730ac5f66e50e35642..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Explore Hunt and Collect Dinosaurs in Dinosaur Hunter 3D.md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-
Dinosaur Hunter 3D Game Download: A Guide for Dino Lovers
-
Do you love dinosaurs? Do you want to experience the thrill of hunting them in a realistic 3D environment? If yes, then you should try Dinosaur Hunter 3D, one of the best dinosaur hunting games available on Android and iOS devices. In this article, we will tell you everything you need to know about this amazing game, including how to download and install it, how to play it, and some tips and tricks to help you become a master dinosaur hunter.
Dinosaurs are fascinating creatures that have captivated the imagination of many people for centuries. They were the dominant animals on Earth for millions of years, until they went extinct about 65 million years ago. However, thanks to modern technology, we can now bring them back to life in the form of video games. One of these games is Dinosaur Hunter 3D, a hunting simulation game that lets you explore different environments and hunt down various types of dinosaurs.
-
What is Dinosaur Hunter 3D?
-
Dinosaur Hunter 3D is a free-to-play game developed by ZG Games. It is a realistic and immersive hunting game that features stunning graphics, realistic sounds, and smooth controls. You can choose from different modes, such as survival, campaign, or free hunt, and hunt in different environments, such as jungle, desert, or snow. You can also select from a wide range of weapons and equipment, such as rifles, shotguns, bows, grenades, night vision goggles, camouflage, and more. You can hunt different kinds of dinosaurs, such as T-Rex, Velociraptor, Triceratops, Spinosaurus, and more. You can also collect coins and trophies for your achievements and use them to upgrade your weapons and equipment.
-
Why should you play Dinosaur Hunter 3D?
-
Dinosaur Hunter 3D is a game that will appeal to anyone who loves dinosaurs, hunting, or adventure. It is a game that will challenge your skills, test your reflexes, and stimulate your senses. It is a game that will make you feel like you are in a real dinosaur world, where you have to survive and hunt these majestic beasts. It is a game that will provide you with hours of fun and entertainment. Here are some of the reasons why you should play Dinosaur Hunter 3D:
-
-
It is free to download and play.
-
It has amazing graphics and sounds that create a realistic atmosphere.
-
It has different modes and environments that offer variety and replay value.
-
It has a wide range of weapons and equipment that suit different preferences and styles.
-
It has different types of dinosaurs that have different behaviors and characteristics.
-
It has coins and trophies that reward your performance and allow you to upgrade your gear.
-
-
How to download and install Dinosaur Hunter 3D?
-
Dinosaur Hunter 3D is available for both Android and iOS devices. You can download and install it easily by following these simple steps:
-
For Android devices
-
Step 1: Go to Google Play Store
-
Open the Google Play Store app on your Android device and make sure you are signed in with your Google account.
-
Step 2: Search for Dinosaur Hunter 3D
-
Type "Dinosaur Hunter 3D" in the search bar and tap on the game icon that appears in the results.
-
Step 3: Tap on Install and wait for the download to finish
-
Tap on the green Install button and accept the permissions required by the game. Wait for the download to finish, which may take a few minutes depending on your internet speed and device storage.
-
Step 4: Open the game and enjoy hunting dinosaurs
-
Once the installation is complete, you can open the game by tapping on the Open button or by finding it in your app drawer. You can now start playing Dinosaur Hunter 3D and have fun hunting dinosaurs.
-
For iOS devices
-
Step 1: Go to App Store
-
Open the App Store app on your iOS device and make sure you are signed in with your Apple ID.
-
Step 2: Search for Dinosaur Hunter 3D
-
Type "Dinosaur Hunter 3D" in the search bar and tap on the game icon that appears in the results.
-
Step 3: Tap on Get and wait for the download to finish
-
Tap on the blue Get button and enter your Apple ID password or use Touch ID or Face ID to confirm. Wait for the download to finish, which may take a few minutes depending on your internet speed and device storage.
-
Step 4: Open the game and enjoy hunting dinosaurs
-
Once the installation is complete, you can open the game by tapping on the Open button or by finding it in your home screen. You can now start playing Dinosaur Hunter 3D and have fun hunting dinosaurs.
-
How to play Dinosaur Hunter 3D?
-
Dinosaur Hunter 3D is a simple and intuitive game that anyone can play. Here are some of the basic steps to play the game:
-
Choose your mode and environment
-
When you launch the game, you will see three options: Survival, Campaign, and Free Hunt. You can choose any of them depending on your preference and mood. Survival mode is where you have to survive as long as possible against waves of dinosaurs. Campaign mode is where you have to complete different missions and objectives in various environments. Free Hunt mode is where you can hunt any dinosaur you want without any restrictions or goals. You can also choose from different environments, such as jungle, desert, or snow, each with its own challenges and scenery.
-
Select your weapon and equipment
-
Before you start hunting, you have to select your weapon and equipment from the inventory. You can choose from a variety of weapons, such as rifles, shotguns, bows, grenades, etc., each with its own advantages and disadvantages. You can also choose from different equipment, such as night vision goggles, camouflage, medkits, etc., each with its own uses and benefits. You can upgrade your weapons and equipment using coins that you earn from hunting dinosaurs.
-
Hunt down different types of dinosaurs
-
Once you are ready, you can start hunting dinosaurs in your chosen mode and environment. You will see a radar on the top left corner of your screen that shows you the location of nearby dinosaurs. You can also use binoculars to zoom in and spot them from a distance. You have to aim carefully and shoot them before they notice you or run away. You can also use grenades or other explosives to cause more damage or lure them into traps. You have to be careful not to get too close to them or they will attack you back. You can use medkits to heal yourself if you get injured.
-
Earn coins and trophies for your achievements
-
As you hunt dinosaurs, you will earn coins and trophies for your achievements. Coins are used to upgrade your weapons and equipment, while trophies are used to unlock new modes and environments. You can also compare your scores and achievements with other players on the leaderboard and challenge your friends to beat your records. You can also share your hunting screenshots and videos on social media and show off your skills.
-
Tips and tricks for playing Dinosaur Hunter 3D
-
Dinosaur Hunter 3D is a game that requires strategy, skill, and patience. Here are some tips and tricks that will help you improve your game and become a better dinosaur hunter:
-
Use the radar to locate your prey
-
The radar is a very useful tool that shows you the direction and distance of the nearest dinosaurs. You can use it to plan your approach and avoid wasting time and ammo. You can also use it to avoid dangerous dinosaurs that are too big or too fast for you to handle.
-
Aim for the head or the heart for a quick kill
-
The best way to kill a dinosaur is to aim for its vital organs, such as the head or the heart. This will cause more damage and make them die faster. You can also use the binoculars to zoom in and see where these organs are located on different dinosaurs. However, be careful not to miss or hit the wrong spot, as this will alert them and make them run away or attack you.
-
Avoid getting too close to the dinosaurs or they will attack you
-
Dinosaurs are not friendly creatures and they will not hesitate to attack you if they sense your presence. You have to keep a safe distance from them and use your weapons wisely. If you get too close, they will charge at you, bite you, or stomp on you, causing you to lose health or die. You can use grenades or other explosives to create some distance or distract them, but be careful not to hurt yourself in the process.
-
Upgrade your weapons and equipment to improve your performance
-
As you progress in the game, you will face more challenging dinosaurs that require more powerful weapons and equipment. You can use the coins that you earn from hunting dinosaurs to upgrade your weapons and equipment in the inventory. You can increase their damage, accuracy, range, capacity, reload speed, etc. You can also buy new weapons and equipment that suit your style and preference.
-
Conclusion
-
Dinosaur Hunter 3D is a game that will satisfy your curiosity and passion for dinosaurs. It is a game that will let you experience the thrill and excitement of hunting these ancient creatures in a realistic 3D environment. It is a game that will challenge your skills, test your reflexes, and stimulate your senses. It is a game that will provide you with hours of fun and entertainment. If you are looking for a game that combines adventure, action, and simulation, then Dinosaur Hunter 3D is the game for you. Download it now and start hunting dinosaurs!
-
FAQs
-
Here are some of the frequently asked questions about Dinosaur Hunter 3D:
-
-
Q: Is Dinosaur Hunter 3D free to play?
-
A: Yes, Dinosaur Hunter 3D is free to download and play on both Android and iOS devices. However, it contains ads and in-app purchases that can enhance your gaming experience.
-
Q: How many dinosaurs are there in Dinosaur Hunter 3D?
-
A: There are more than 20 different types of dinosaurs in Dinosaur Hunter 3D, each with its own appearance, behavior, and difficulty level. Some of them are T-Rex, Velociraptor, Triceratops, Spinosaurus, Brachiosaurus, etc.
-
Q: How many modes and environments are there in Dinosaur Hunter 3D?
-
A: There are three modes in Dinosaur Hunter 3D: Survival, Campaign, and Free Hunt. Survival mode is where you have to survive as long as possible against waves of dinosaurs. Campaign mode is where you have to complete different missions and objectives in various environments. Free Hunt mode is where you can hunt any dinosaur you want without any restrictions or goals. There are also three environments in Dinosaur Hunter 3D: Jungle, Desert, and Snow. Each environment has its own challenges and scenery.
-
Q: How do I upgrade my weapons and equipment in Dinosaur Hunter 3D?
-
A: You can upgrade your weapons and equipment in the inventory using coins that you earn from hunting dinosaurs. You can increase their damage, accuracy, range, capacity, reload speed, etc. You can also buy new weapons and equipment that suit your style and preference.
-
Q: How do I share my hunting screenshots and videos on social media?
-
A: You can share your hunting screenshots and videos on social media by tapping on the Share button on the top right corner of your screen. You can choose from different platforms, such as Facebook, Twitter, Instagram, etc. You can also add captions and hashtags to your posts and tag your friends. You can also invite your friends to play Dinosaur Hunter 3D and challenge them to beat your scores and achievements.
-
-
\ No newline at end of file
diff --git a/spaces/2ndelement/voicevox/test/test_word_types.py b/spaces/2ndelement/voicevox/test/test_word_types.py
deleted file mode 100644
index 1f2635b680e9b82d23ae3825f2a746b171d6ed3a..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/test/test_word_types.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from unittest import TestCase
-
-from voicevox_engine.model import WordTypes
-from voicevox_engine.part_of_speech_data import part_of_speech_data
-
-
-class TestWordTypes(TestCase):
- def test_word_types(self):
- self.assertCountEqual(list(WordTypes), list(part_of_speech_data.keys()))
diff --git a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/__init__.py b/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/__init__.py
deleted file mode 100644
index 617ba38c34b1801b2db2e0209b4e886c9d24c490..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .visualization_utils import show_bboxes
-from .detector import detect_faces
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb8-150e_deepfashion2_short_sleeved_outwear_256x192/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb8-150e_deepfashion2_short_sleeved_outwear_256x192/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AUBADA-ALARABI/poetry20233/app.py b/spaces/AUBADA-ALARABI/poetry20233/app.py
deleted file mode 100644
index 5b6654d5a405778ddbc9ca5fa5d041aff535f3b5..0000000000000000000000000000000000000000
--- a/spaces/AUBADA-ALARABI/poetry20233/app.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import gc
-import gradio as gr
-from transformers import pipeline, set_seed
-
-pipe = pipeline('text-generation', framework='pt', model='akhooli/ap2023', tokenizer='akhooli/ap2023')
-#gc.collect()
-samples = [['أنت'
- ,1.0, 50, 1.0, 1.0, 114],['هل غادر'
- ,1.0, 50, 1.0, 1.0, 114 ],['ألا ليت'
- ,1.0, 50, 1.0, 1.0, 114 ],['يا قدس'
- ,1.0, 50, 1.0, 1.0, 114],['عيد بأية حال'
- ,1.0, 50, 1.0, 1.0, 114],['لكل شيء إذا ما'
- ,1.0, 50, 1.0, 1.0, 114 ],['.'
- ,1.0, 50, 1.0, 1.0, 114]]
-
-notes = """
-- Enter a short prompt or select (click) one of the examples and click SEND
-- Adjust parameters (temperature, top k, top p and penalty) through the slider (keep close to default values).
-- For the same seed (randomness), the same output is regenerated if other parameters are fixed
-- Clear and enter new prompt or select another example and SEND to regenerate
-- The '.' means start a new line from no prompt (your prompt need not be long)
-- Be patient: this runs on CPU (free tier)
-- Feedback (Twitter): @akhooli (https://twitter.com/akhooli/status/1611025232201977859)
-- Note/Disclaimer: may generate unaccepted or inappropriate content. Use at your own risk.
-"""
-def sayPoetry(prompt, temp=1.0, topk = 50, topp = 1.0, penalty=1.0, seed=114):
- if not int(seed) >= 0: seed=114
- set_seed(seed)
- gen = pipe(prompt, max_length=96, do_sample=True, temperature=temp, top_k=topk, top_p=topp, repetition_penalty=penalty,
- min_length = 64, no_repeat_ngram_size = 3, return_full_text=True,
- num_beams=5, num_return_sequences=1)[0]["generated_text"]
- poetry =""
- for line in gen.split('.')[:-1]:
- poetry += line #+ "\n"
- return poetry
-poetry = gr.Interface(fn=sayPoetry,
- inputs=[
- gr.Textbox(label="Enter short prompt or select from examples:"),
- gr.Slider(0.70, 1.2, step=0.01,value=1.0, label='control temperature'),
- gr.Slider(25, 100, step=1,value=50, label='control top k'),
- gr.Slider(0.80, 1.0, step=0.01,value=1.0, label='control top p'),
- gr.Slider(0.90, 1.50, step=0.01,value=1.0, label='control penalty'),
- gr.Number(value=139750, precision=0, label='Seed'),
- ],
- outputs=[gr.Textbox(label="Generated Poetry:")],
-
- allow_flagging='never',
- title='Arabic Poetry Generation Demo (updated Jan. 2023)',
- description = "A simple demo of AI generated poetry based on 1M poems fine-tuned using AraGPT2 (be patient, runs on cpu)",
- examples=samples,
- cache_examples=False,
- article = notes)
-poetry.launch() # show_error = True, debug=True
\ No newline at end of file
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Ails.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Ails.py
deleted file mode 100644
index d533ae247cba63b236668375786124852f5bbad5..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Ails.py
+++ /dev/null
@@ -1,106 +0,0 @@
-from __future__ import annotations
-
-import hashlib
-import time
-import uuid
-import json
-from datetime import datetime
-from aiohttp import ClientSession
-
-from ..typing import SHA256, AsyncGenerator
-from .base_provider import AsyncGeneratorProvider
-
-
-class Ails(AsyncGeneratorProvider):
- url: str = "https://ai.ls"
- working = True
- supports_gpt_35_turbo = True
-
- @staticmethod
- async def create_async_generator(
- model: str,
- messages: list[dict[str, str]],
- stream: bool,
- proxy: str = None,
- **kwargs
- ) -> AsyncGenerator:
- headers = {
- "authority": "api.caipacity.com",
- "accept": "*/*",
- "accept-language": "en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3",
- "authorization": "Bearer free",
- "client-id": str(uuid.uuid4()),
- "client-v": "0.1.278",
- "content-type": "application/json",
- "origin": "https://ai.ls",
- "referer": "https://ai.ls/",
- "sec-ch-ua": '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": '"Windows"',
- "sec-fetch-dest": "empty",
- "sec-fetch-mode": "cors",
- "sec-fetch-site": "cross-site",
- "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
- "from-url": "https://ai.ls/?chat=1"
- }
- async with ClientSession(
- headers=headers
- ) as session:
- timestamp = _format_timestamp(int(time.time() * 1000))
- json_data = {
- "model": "gpt-3.5-turbo",
- "temperature": kwargs.get("temperature", 0.6),
- "stream": True,
- "messages": messages,
- "d": datetime.now().strftime("%Y-%m-%d"),
- "t": timestamp,
- "s": _hash({"t": timestamp, "m": messages[-1]["content"]}),
- }
- async with session.post(
- "https://api.caipacity.com/v1/chat/completions",
- proxy=proxy,
- json=json_data
- ) as response:
- response.raise_for_status()
- start = "data: "
- async for line in response.content:
- line = line.decode('utf-8')
- if line.startswith(start) and line != "data: [DONE]":
- line = line[len(start):-1]
- line = json.loads(line)
- token = line["choices"][0]["delta"].get("content")
- if token:
- if "ai.ls" in token or "ai.ci" in token:
- raise Exception("Response Error: " + token)
- yield token
-
-
- @classmethod
- @property
- def params(cls):
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("stream", "bool"),
- ("temperature", "float"),
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
-
-
-def _hash(json_data: dict[str, str]) -> SHA256:
- base_string: str = "%s:%s:%s:%s" % (
- json_data["t"],
- json_data["m"],
- "WI,2rU#_r:r~aF4aJ36[.Z(/8Rv93Rf",
- len(json_data["m"]),
- )
-
- return SHA256(hashlib.sha256(base_string.encode()).hexdigest())
-
-
-def _format_timestamp(timestamp: int) -> str:
- e = timestamp
- n = e % 10
- r = n + 1 if n % 2 == 0 else n
- return str(e - n + r)
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/press/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/press/Factory.d.ts
deleted file mode 100644
index 51c70e26a6108b89e19c477a67cc80f557fe57b4..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/press/Factory.d.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-// import * as Phaser from 'phaser';
-import Press from "./Press";
-
-export default function (
- gameObject: Phaser.GameObjects.GameObject | Phaser.Scene,
- config?: Press.IConfig
-): Press;
\ No newline at end of file
diff --git a/spaces/AlStable/AlPrompt/README.md b/spaces/AlStable/AlPrompt/README.md
deleted file mode 100644
index 78b65e6f9e556c75aad0dabba0ac85bcc8799e99..0000000000000000000000000000000000000000
--- a/spaces/AlStable/AlPrompt/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Al prompt
-emoji: 🤗
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/util/load_mats.py b/spaces/Alpaca233/SadTalker/src/face3d/util/load_mats.py
deleted file mode 100644
index f9a6fcc71de1d7dad8b0f81c67dc1c213764ff0b..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/face3d/util/load_mats.py
+++ /dev/null
@@ -1,120 +0,0 @@
-"""This script is to load 3D face model for Deep3DFaceRecon_pytorch
-"""
-
-import numpy as np
-from PIL import Image
-from scipy.io import loadmat, savemat
-from array import array
-import os.path as osp
-
-# load expression basis
-def LoadExpBasis(bfm_folder='BFM'):
- n_vertex = 53215
- Expbin = open(osp.join(bfm_folder, 'Exp_Pca.bin'), 'rb')
- exp_dim = array('i')
- exp_dim.fromfile(Expbin, 1)
- expMU = array('f')
- expPC = array('f')
- expMU.fromfile(Expbin, 3*n_vertex)
- expPC.fromfile(Expbin, 3*exp_dim[0]*n_vertex)
- Expbin.close()
-
- expPC = np.array(expPC)
- expPC = np.reshape(expPC, [exp_dim[0], -1])
- expPC = np.transpose(expPC)
-
- expEV = np.loadtxt(osp.join(bfm_folder, 'std_exp.txt'))
-
- return expPC, expEV
-
-
-# transfer original BFM09 to our face model
-def transferBFM09(bfm_folder='BFM'):
- print('Transfer BFM09 to BFM_model_front......')
- original_BFM = loadmat(osp.join(bfm_folder, '01_MorphableModel.mat'))
- shapePC = original_BFM['shapePC'] # shape basis
- shapeEV = original_BFM['shapeEV'] # corresponding eigen value
- shapeMU = original_BFM['shapeMU'] # mean face
- texPC = original_BFM['texPC'] # texture basis
- texEV = original_BFM['texEV'] # eigen value
- texMU = original_BFM['texMU'] # mean texture
-
- expPC, expEV = LoadExpBasis(bfm_folder)
-
- # transfer BFM09 to our face model
-
- idBase = shapePC*np.reshape(shapeEV, [-1, 199])
- idBase = idBase/1e5 # unify the scale to decimeter
- idBase = idBase[:, :80] # use only first 80 basis
-
- exBase = expPC*np.reshape(expEV, [-1, 79])
- exBase = exBase/1e5 # unify the scale to decimeter
- exBase = exBase[:, :64] # use only first 64 basis
-
- texBase = texPC*np.reshape(texEV, [-1, 199])
- texBase = texBase[:, :80] # use only first 80 basis
-
- # our face model is cropped along face landmarks and contains only 35709 vertex.
- # original BFM09 contains 53490 vertex, and expression basis provided by Guo et al. contains 53215 vertex.
- # thus we select corresponding vertex to get our face model.
-
- index_exp = loadmat(osp.join(bfm_folder, 'BFM_front_idx.mat'))
- index_exp = index_exp['idx'].astype(np.int32) - 1 # starts from 0 (to 53215)
-
- index_shape = loadmat(osp.join(bfm_folder, 'BFM_exp_idx.mat'))
- index_shape = index_shape['trimIndex'].astype(
- np.int32) - 1 # starts from 0 (to 53490)
- index_shape = index_shape[index_exp]
-
- idBase = np.reshape(idBase, [-1, 3, 80])
- idBase = idBase[index_shape, :, :]
- idBase = np.reshape(idBase, [-1, 80])
-
- texBase = np.reshape(texBase, [-1, 3, 80])
- texBase = texBase[index_shape, :, :]
- texBase = np.reshape(texBase, [-1, 80])
-
- exBase = np.reshape(exBase, [-1, 3, 64])
- exBase = exBase[index_exp, :, :]
- exBase = np.reshape(exBase, [-1, 64])
-
- meanshape = np.reshape(shapeMU, [-1, 3])/1e5
- meanshape = meanshape[index_shape, :]
- meanshape = np.reshape(meanshape, [1, -1])
-
- meantex = np.reshape(texMU, [-1, 3])
- meantex = meantex[index_shape, :]
- meantex = np.reshape(meantex, [1, -1])
-
- # other info contains triangles, region used for computing photometric loss,
- # region used for skin texture regularization, and 68 landmarks index etc.
- other_info = loadmat(osp.join(bfm_folder, 'facemodel_info.mat'))
- frontmask2_idx = other_info['frontmask2_idx']
- skinmask = other_info['skinmask']
- keypoints = other_info['keypoints']
- point_buf = other_info['point_buf']
- tri = other_info['tri']
- tri_mask2 = other_info['tri_mask2']
-
- # save our face model
- savemat(osp.join(bfm_folder, 'BFM_model_front.mat'), {'meanshape': meanshape, 'meantex': meantex, 'idBase': idBase, 'exBase': exBase, 'texBase': texBase,
- 'tri': tri, 'point_buf': point_buf, 'tri_mask2': tri_mask2, 'keypoints': keypoints, 'frontmask2_idx': frontmask2_idx, 'skinmask': skinmask})
-
-
-# load landmarks for standard face, which is used for image preprocessing
-def load_lm3d(bfm_folder):
-
- Lm3D = loadmat(osp.join(bfm_folder, 'similarity_Lm3D_all.mat'))
- Lm3D = Lm3D['lm']
-
- # calculate 5 facial landmarks using 68 landmarks
- lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1
- Lm3D = np.stack([Lm3D[lm_idx[0], :], np.mean(Lm3D[lm_idx[[1, 2]], :], 0), np.mean(
- Lm3D[lm_idx[[3, 4]], :], 0), Lm3D[lm_idx[5], :], Lm3D[lm_idx[6], :]], axis=0)
- Lm3D = Lm3D[[1, 2, 0, 3, 4], :]
-
- return Lm3D
-
-
-if __name__ == '__main__':
- transferBFM09()
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/test_examples.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/test_examples.py
deleted file mode 100644
index cc57024c350e3f7c1a7cf94e73e0f363e1680460..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/test_examples.py
+++ /dev/null
@@ -1,1422 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc..
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import logging
-import os
-import shutil
-import subprocess
-import sys
-import tempfile
-import unittest
-from typing import List
-
-import torch
-from accelerate.utils import write_basic_config
-
-from diffusers import DiffusionPipeline, UNet2DConditionModel
-
-
-logging.basicConfig(level=logging.DEBUG)
-
-logger = logging.getLogger()
-
-
-# These utils relate to ensuring the right error message is received when running scripts
-class SubprocessCallException(Exception):
- pass
-
-
-def run_command(command: List[str], return_stdout=False):
- """
- Runs `command` with `subprocess.check_output` and will potentially return the `stdout`. Will also properly capture
- if an error occurred while running `command`
- """
- try:
- output = subprocess.check_output(command, stderr=subprocess.STDOUT)
- if return_stdout:
- if hasattr(output, "decode"):
- output = output.decode("utf-8")
- return output
- except subprocess.CalledProcessError as e:
- raise SubprocessCallException(
- f"Command `{' '.join(command)}` failed with the following error:\n\n{e.output.decode()}"
- ) from e
-
-
-stream_handler = logging.StreamHandler(sys.stdout)
-logger.addHandler(stream_handler)
-
-
-class ExamplesTestsAccelerate(unittest.TestCase):
- @classmethod
- def setUpClass(cls):
- super().setUpClass()
- cls._tmpdir = tempfile.mkdtemp()
- cls.configPath = os.path.join(cls._tmpdir, "default_config.yml")
-
- write_basic_config(save_location=cls.configPath)
- cls._launch_args = ["accelerate", "launch", "--config_file", cls.configPath]
-
- @classmethod
- def tearDownClass(cls):
- super().tearDownClass()
- shutil.rmtree(cls._tmpdir)
-
- def test_train_unconditional(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/unconditional_image_generation/train_unconditional.py
- --dataset_name hf-internal-testing/dummy_image_class_data
- --model_config_name_or_path diffusers/ddpm_dummy
- --resolution 64
- --output_dir {tmpdir}
- --train_batch_size 2
- --num_epochs 1
- --gradient_accumulation_steps 1
- --ddpm_num_inference_steps 2
- --learning_rate 1e-3
- --lr_warmup_steps 5
- """.split()
-
- run_command(self._launch_args + test_args, return_stdout=True)
- # save_pretrained smoke test
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "unet", "diffusion_pytorch_model.bin")))
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "scheduler", "scheduler_config.json")))
-
- def test_textual_inversion(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/textual_inversion/textual_inversion.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-pipe
- --train_data_dir docs/source/en/imgs
- --learnable_property object
- --placeholder_token
- --initializer_token a
- --validation_prompt
- --validation_steps 1
- --save_steps 1
- --num_vectors 2
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 2
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- """.split()
-
- run_command(self._launch_args + test_args)
- # save_pretrained smoke test
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "learned_embeds.bin")))
-
- def test_dreambooth(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/dreambooth/train_dreambooth.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir docs/source/en/imgs
- --instance_prompt photo
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 2
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- """.split()
-
- run_command(self._launch_args + test_args)
- # save_pretrained smoke test
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "unet", "diffusion_pytorch_model.bin")))
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "scheduler", "scheduler_config.json")))
-
- def test_dreambooth_if(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/dreambooth/train_dreambooth.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-if-pipe
- --instance_data_dir docs/source/en/imgs
- --instance_prompt photo
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 2
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --pre_compute_text_embeddings
- --tokenizer_max_length=77
- --text_encoder_use_attention_mask
- """.split()
-
- run_command(self._launch_args + test_args)
- # save_pretrained smoke test
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "unet", "diffusion_pytorch_model.bin")))
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "scheduler", "scheduler_config.json")))
-
- def test_dreambooth_checkpointing(self):
- instance_prompt = "photo"
- pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
-
- with tempfile.TemporaryDirectory() as tmpdir:
- # Run training script with checkpointing
- # max_train_steps == 5, checkpointing_steps == 2
- # Should create checkpoints at steps 2, 4
-
- initial_run_args = f"""
- examples/dreambooth/train_dreambooth.py
- --pretrained_model_name_or_path {pretrained_model_name_or_path}
- --instance_data_dir docs/source/en/imgs
- --instance_prompt {instance_prompt}
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 5
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=2
- --seed=0
- """.split()
-
- run_command(self._launch_args + initial_run_args)
-
- # check can run the original fully trained output pipeline
- pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
- pipe(instance_prompt, num_inference_steps=2)
-
- # check checkpoint directories exist
- self.assertTrue(os.path.isdir(os.path.join(tmpdir, "checkpoint-2")))
- self.assertTrue(os.path.isdir(os.path.join(tmpdir, "checkpoint-4")))
-
- # check can run an intermediate checkpoint
- unet = UNet2DConditionModel.from_pretrained(tmpdir, subfolder="checkpoint-2/unet")
- pipe = DiffusionPipeline.from_pretrained(pretrained_model_name_or_path, unet=unet, safety_checker=None)
- pipe(instance_prompt, num_inference_steps=2)
-
- # Remove checkpoint 2 so that we can check only later checkpoints exist after resuming
- shutil.rmtree(os.path.join(tmpdir, "checkpoint-2"))
-
- # Run training script for 7 total steps resuming from checkpoint 4
-
- resume_run_args = f"""
- examples/dreambooth/train_dreambooth.py
- --pretrained_model_name_or_path {pretrained_model_name_or_path}
- --instance_data_dir docs/source/en/imgs
- --instance_prompt {instance_prompt}
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 7
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=2
- --resume_from_checkpoint=checkpoint-4
- --seed=0
- """.split()
-
- run_command(self._launch_args + resume_run_args)
-
- # check can run new fully trained pipeline
- pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
- pipe(instance_prompt, num_inference_steps=2)
-
- # check old checkpoints do not exist
- self.assertFalse(os.path.isdir(os.path.join(tmpdir, "checkpoint-2")))
-
- # check new checkpoints exist
- self.assertTrue(os.path.isdir(os.path.join(tmpdir, "checkpoint-4")))
- self.assertTrue(os.path.isdir(os.path.join(tmpdir, "checkpoint-6")))
-
- def test_dreambooth_lora(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/dreambooth/train_dreambooth_lora.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir docs/source/en/imgs
- --instance_prompt photo
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 2
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- """.split()
-
- run_command(self._launch_args + test_args)
- # save_pretrained smoke test
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.bin")))
-
- # make sure the state_dict has the correct naming in the parameters.
- lora_state_dict = torch.load(os.path.join(tmpdir, "pytorch_lora_weights.bin"))
- is_lora = all("lora" in k for k in lora_state_dict.keys())
- self.assertTrue(is_lora)
-
- # when not training the text encoder, all the parameters in the state dict should start
- # with `"unet"` in their names.
- starts_with_unet = all(key.startswith("unet") for key in lora_state_dict.keys())
- self.assertTrue(starts_with_unet)
-
- def test_dreambooth_lora_with_text_encoder(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/dreambooth/train_dreambooth_lora.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir docs/source/en/imgs
- --instance_prompt photo
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 2
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --train_text_encoder
- --output_dir {tmpdir}
- """.split()
-
- run_command(self._launch_args + test_args)
- # save_pretrained smoke test
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.bin")))
-
- # check `text_encoder` is present at all.
- lora_state_dict = torch.load(os.path.join(tmpdir, "pytorch_lora_weights.bin"))
- keys = lora_state_dict.keys()
- is_text_encoder_present = any(k.startswith("text_encoder") for k in keys)
- self.assertTrue(is_text_encoder_present)
-
- # the names of the keys of the state dict should either start with `unet`
- # or `text_encoder`.
- is_correct_naming = all(k.startswith("unet") or k.startswith("text_encoder") for k in keys)
- self.assertTrue(is_correct_naming)
-
- def test_dreambooth_lora_if_model(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/dreambooth/train_dreambooth_lora.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-if-pipe
- --instance_data_dir docs/source/en/imgs
- --instance_prompt photo
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 2
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --pre_compute_text_embeddings
- --tokenizer_max_length=77
- --text_encoder_use_attention_mask
- """.split()
-
- run_command(self._launch_args + test_args)
- # save_pretrained smoke test
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.bin")))
-
- # make sure the state_dict has the correct naming in the parameters.
- lora_state_dict = torch.load(os.path.join(tmpdir, "pytorch_lora_weights.bin"))
- is_lora = all("lora" in k for k in lora_state_dict.keys())
- self.assertTrue(is_lora)
-
- # when not training the text encoder, all the parameters in the state dict should start
- # with `"unet"` in their names.
- starts_with_unet = all(key.startswith("unet") for key in lora_state_dict.keys())
- self.assertTrue(starts_with_unet)
-
- def test_dreambooth_lora_sdxl(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/dreambooth/train_dreambooth_lora_sdxl.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-xl-pipe
- --instance_data_dir docs/source/en/imgs
- --instance_prompt photo
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 2
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- """.split()
-
- run_command(self._launch_args + test_args)
- # save_pretrained smoke test
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.bin")))
-
- # make sure the state_dict has the correct naming in the parameters.
- lora_state_dict = torch.load(os.path.join(tmpdir, "pytorch_lora_weights.bin"))
- is_lora = all("lora" in k for k in lora_state_dict.keys())
- self.assertTrue(is_lora)
-
- # when not training the text encoder, all the parameters in the state dict should start
- # with `"unet"` in their names.
- starts_with_unet = all(key.startswith("unet") for key in lora_state_dict.keys())
- self.assertTrue(starts_with_unet)
-
- def test_dreambooth_lora_sdxl_with_text_encoder(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/dreambooth/train_dreambooth_lora_sdxl.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-xl-pipe
- --instance_data_dir docs/source/en/imgs
- --instance_prompt photo
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 2
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --train_text_encoder
- """.split()
-
- run_command(self._launch_args + test_args)
- # save_pretrained smoke test
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.bin")))
-
- # make sure the state_dict has the correct naming in the parameters.
- lora_state_dict = torch.load(os.path.join(tmpdir, "pytorch_lora_weights.bin"))
- is_lora = all("lora" in k for k in lora_state_dict.keys())
- self.assertTrue(is_lora)
-
- # when not training the text encoder, all the parameters in the state dict should start
- # with `"unet"` or `"text_encoder"` or `"text_encoder_2"` in their names.
- keys = lora_state_dict.keys()
- starts_with_unet = all(
- k.startswith("unet") or k.startswith("text_encoder") or k.startswith("text_encoder_2") for k in keys
- )
- self.assertTrue(starts_with_unet)
-
- def test_custom_diffusion(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/custom_diffusion/train_custom_diffusion.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir docs/source/en/imgs
- --instance_prompt <new1>
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 2
- --learning_rate 1.0e-05
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --modifier_token <new1>
- --output_dir {tmpdir}
- """.split()
-
- run_command(self._launch_args + test_args)
- # save_pretrained smoke test
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_custom_diffusion_weights.bin")))
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, ".bin")))
-
- def test_text_to_image(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/text_to_image/train_text_to_image.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-pipe
- --dataset_name hf-internal-testing/dummy_image_text_data
- --resolution 64
- --center_crop
- --random_flip
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 2
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- """.split()
-
- run_command(self._launch_args + test_args)
- # save_pretrained smoke test
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "unet", "diffusion_pytorch_model.bin")))
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "scheduler", "scheduler_config.json")))
-
- def test_text_to_image_checkpointing(self):
- pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
- prompt = "a prompt"
-
- with tempfile.TemporaryDirectory() as tmpdir:
- # Run training script with checkpointing
- # max_train_steps == 5, checkpointing_steps == 2
- # Should create checkpoints at steps 2, 4
-
- initial_run_args = f"""
- examples/text_to_image/train_text_to_image.py
- --pretrained_model_name_or_path {pretrained_model_name_or_path}
- --dataset_name hf-internal-testing/dummy_image_text_data
- --resolution 64
- --center_crop
- --random_flip
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 5
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=2
- --seed=0
- """.split()
-
- run_command(self._launch_args + initial_run_args)
-
- pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
- pipe(prompt, num_inference_steps=2)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-2", "checkpoint-4"},
- )
-
- # check can run an intermediate checkpoint
- unet = UNet2DConditionModel.from_pretrained(tmpdir, subfolder="checkpoint-2/unet")
- pipe = DiffusionPipeline.from_pretrained(pretrained_model_name_or_path, unet=unet, safety_checker=None)
- pipe(prompt, num_inference_steps=2)
-
- # Remove checkpoint 2 so that we can check only later checkpoints exist after resuming
- shutil.rmtree(os.path.join(tmpdir, "checkpoint-2"))
-
- # Run training script for 7 total steps resuming from checkpoint 4
-
- resume_run_args = f"""
- examples/text_to_image/train_text_to_image.py
- --pretrained_model_name_or_path {pretrained_model_name_or_path}
- --dataset_name hf-internal-testing/dummy_image_text_data
- --resolution 64
- --center_crop
- --random_flip
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 7
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=2
- --resume_from_checkpoint=checkpoint-4
- --seed=0
- """.split()
-
- run_command(self._launch_args + resume_run_args)
-
- # check can run new fully trained pipeline
- pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
- pipe(prompt, num_inference_steps=2)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {
- # no checkpoint-2 -> check old checkpoints do not exist
- # check new checkpoints exist
- "checkpoint-4",
- "checkpoint-6",
- },
- )
-
- def test_text_to_image_checkpointing_use_ema(self):
- pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
- prompt = "a prompt"
-
- with tempfile.TemporaryDirectory() as tmpdir:
- # Run training script with checkpointing
- # max_train_steps == 5, checkpointing_steps == 2
- # Should create checkpoints at steps 2, 4
-
- initial_run_args = f"""
- examples/text_to_image/train_text_to_image.py
- --pretrained_model_name_or_path {pretrained_model_name_or_path}
- --dataset_name hf-internal-testing/dummy_image_text_data
- --resolution 64
- --center_crop
- --random_flip
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 5
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=2
- --use_ema
- --seed=0
- """.split()
-
- run_command(self._launch_args + initial_run_args)
-
- pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
- pipe(prompt, num_inference_steps=2)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-2", "checkpoint-4"},
- )
-
- # check can run an intermediate checkpoint
- unet = UNet2DConditionModel.from_pretrained(tmpdir, subfolder="checkpoint-2/unet")
- pipe = DiffusionPipeline.from_pretrained(pretrained_model_name_or_path, unet=unet, safety_checker=None)
- pipe(prompt, num_inference_steps=2)
-
- # Remove checkpoint 2 so that we can check only later checkpoints exist after resuming
- shutil.rmtree(os.path.join(tmpdir, "checkpoint-2"))
-
- # Run training script for 7 total steps resuming from checkpoint 4
-
- resume_run_args = f"""
- examples/text_to_image/train_text_to_image.py
- --pretrained_model_name_or_path {pretrained_model_name_or_path}
- --dataset_name hf-internal-testing/dummy_image_text_data
- --resolution 64
- --center_crop
- --random_flip
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 7
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=2
- --resume_from_checkpoint=checkpoint-4
- --use_ema
- --seed=0
- """.split()
-
- run_command(self._launch_args + resume_run_args)
-
- # check can run new fully trained pipeline
- pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
- pipe(prompt, num_inference_steps=2)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {
- # no checkpoint-2 -> check old checkpoints do not exist
- # check new checkpoints exist
- "checkpoint-4",
- "checkpoint-6",
- },
- )
-
- def test_text_to_image_checkpointing_checkpoints_total_limit(self):
- pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
- prompt = "a prompt"
-
- with tempfile.TemporaryDirectory() as tmpdir:
- # Run training script with checkpointing
- # max_train_steps == 7, checkpointing_steps == 2, checkpoints_total_limit == 2
- # Should create checkpoints at steps 2, 4, 6
- # with checkpoint at step 2 deleted
-
- initial_run_args = f"""
- examples/text_to_image/train_text_to_image.py
- --pretrained_model_name_or_path {pretrained_model_name_or_path}
- --dataset_name hf-internal-testing/dummy_image_text_data
- --resolution 64
- --center_crop
- --random_flip
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 7
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=2
- --checkpoints_total_limit=2
- --seed=0
- """.split()
-
- run_command(self._launch_args + initial_run_args)
-
- pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
- pipe(prompt, num_inference_steps=2)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- # checkpoint-2 should have been deleted
- {"checkpoint-4", "checkpoint-6"},
- )
-
- def test_text_to_image_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
- pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
- prompt = "a prompt"
-
- with tempfile.TemporaryDirectory() as tmpdir:
- # Run training script with checkpointing
- # max_train_steps == 9, checkpointing_steps == 2
- # Should create checkpoints at steps 2, 4, 6, 8
-
- initial_run_args = f"""
- examples/text_to_image/train_text_to_image.py
- --pretrained_model_name_or_path {pretrained_model_name_or_path}
- --dataset_name hf-internal-testing/dummy_image_text_data
- --resolution 64
- --center_crop
- --random_flip
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 9
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=2
- --seed=0
- """.split()
-
- run_command(self._launch_args + initial_run_args)
-
- pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
- pipe(prompt, num_inference_steps=2)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-2", "checkpoint-4", "checkpoint-6", "checkpoint-8"},
- )
-
- # resume and we should try to checkpoint at 10, where we'll have to remove
- # checkpoint-2 and checkpoint-4 instead of just a single previous checkpoint
-
- resume_run_args = f"""
- examples/text_to_image/train_text_to_image.py
- --pretrained_model_name_or_path {pretrained_model_name_or_path}
- --dataset_name hf-internal-testing/dummy_image_text_data
- --resolution 64
- --center_crop
- --random_flip
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 11
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=2
- --resume_from_checkpoint=checkpoint-8
- --checkpoints_total_limit=3
- --seed=0
- """.split()
-
- run_command(self._launch_args + resume_run_args)
-
- pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
- pipe(prompt, num_inference_steps=2)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-6", "checkpoint-8", "checkpoint-10"},
- )
-
- def test_text_to_image_lora_checkpointing_checkpoints_total_limit(self):
- pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
- prompt = "a prompt"
-
- with tempfile.TemporaryDirectory() as tmpdir:
- # Run training script with checkpointing
- # max_train_steps == 7, checkpointing_steps == 2, checkpoints_total_limit == 2
- # Should create checkpoints at steps 2, 4, 6
- # with checkpoint at step 2 deleted
-
- initial_run_args = f"""
- examples/text_to_image/train_text_to_image_lora.py
- --pretrained_model_name_or_path {pretrained_model_name_or_path}
- --dataset_name hf-internal-testing/dummy_image_text_data
- --resolution 64
- --center_crop
- --random_flip
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 7
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=2
- --checkpoints_total_limit=2
- --seed=0
- --num_validation_images=0
- """.split()
-
- run_command(self._launch_args + initial_run_args)
-
- pipe = DiffusionPipeline.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-pipe", safety_checker=None
- )
- pipe.load_lora_weights(tmpdir)
- pipe(prompt, num_inference_steps=2)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- # checkpoint-2 should have been deleted
- {"checkpoint-4", "checkpoint-6"},
- )
-
- def test_text_to_image_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
- pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
- prompt = "a prompt"
-
- with tempfile.TemporaryDirectory() as tmpdir:
- # Run training script with checkpointing
- # max_train_steps == 9, checkpointing_steps == 2
- # Should create checkpoints at steps 2, 4, 6, 8
-
- initial_run_args = f"""
- examples/text_to_image/train_text_to_image_lora.py
- --pretrained_model_name_or_path {pretrained_model_name_or_path}
- --dataset_name hf-internal-testing/dummy_image_text_data
- --resolution 64
- --center_crop
- --random_flip
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 9
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=2
- --seed=0
- --num_validation_images=0
- """.split()
-
- run_command(self._launch_args + initial_run_args)
-
- pipe = DiffusionPipeline.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-pipe", safety_checker=None
- )
- pipe.load_lora_weights(tmpdir)
- pipe(prompt, num_inference_steps=2)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-2", "checkpoint-4", "checkpoint-6", "checkpoint-8"},
- )
-
- # resume and we should try to checkpoint at 10, where we'll have to remove
- # checkpoint-2 and checkpoint-4 instead of just a single previous checkpoint
-
- resume_run_args = f"""
- examples/text_to_image/train_text_to_image_lora.py
- --pretrained_model_name_or_path {pretrained_model_name_or_path}
- --dataset_name hf-internal-testing/dummy_image_text_data
- --resolution 64
- --center_crop
- --random_flip
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 11
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=2
- --resume_from_checkpoint=checkpoint-8
- --checkpoints_total_limit=3
- --seed=0
- --num_validation_images=0
- """.split()
-
- run_command(self._launch_args + resume_run_args)
-
- pipe = DiffusionPipeline.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-pipe", safety_checker=None
- )
- pipe.load_lora_weights(tmpdir)
- pipe(prompt, num_inference_steps=2)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-6", "checkpoint-8", "checkpoint-10"},
- )
-
- def test_unconditional_checkpointing_checkpoints_total_limit(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- initial_run_args = f"""
- examples/unconditional_image_generation/train_unconditional.py
- --dataset_name hf-internal-testing/dummy_image_class_data
- --model_config_name_or_path diffusers/ddpm_dummy
- --resolution 64
- --output_dir {tmpdir}
- --train_batch_size 1
- --num_epochs 1
- --gradient_accumulation_steps 1
- --ddpm_num_inference_steps 2
- --learning_rate 1e-3
- --lr_warmup_steps 5
- --checkpointing_steps=2
- --checkpoints_total_limit=2
- """.split()
-
- run_command(self._launch_args + initial_run_args)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- # checkpoint-2 should have been deleted
- {"checkpoint-4", "checkpoint-6"},
- )
-
- def test_unconditional_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- initial_run_args = f"""
- examples/unconditional_image_generation/train_unconditional.py
- --dataset_name hf-internal-testing/dummy_image_class_data
- --model_config_name_or_path diffusers/ddpm_dummy
- --resolution 64
- --output_dir {tmpdir}
- --train_batch_size 1
- --num_epochs 1
- --gradient_accumulation_steps 1
- --ddpm_num_inference_steps 2
- --learning_rate 1e-3
- --lr_warmup_steps 5
- --checkpointing_steps=1
- """.split()
-
- run_command(self._launch_args + initial_run_args)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-1", "checkpoint-2", "checkpoint-3", "checkpoint-4", "checkpoint-5", "checkpoint-6"},
- )
-
- resume_run_args = f"""
- examples/unconditional_image_generation/train_unconditional.py
- --dataset_name hf-internal-testing/dummy_image_class_data
- --model_config_name_or_path diffusers/ddpm_dummy
- --resolution 64
- --output_dir {tmpdir}
- --train_batch_size 1
- --num_epochs 2
- --gradient_accumulation_steps 1
- --ddpm_num_inference_steps 2
- --learning_rate 1e-3
- --lr_warmup_steps 5
- --resume_from_checkpoint=checkpoint-6
- --checkpointing_steps=2
- --checkpoints_total_limit=3
- """.split()
-
- run_command(self._launch_args + resume_run_args)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-8", "checkpoint-10", "checkpoint-12"},
- )
-
- def test_textual_inversion_checkpointing(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/textual_inversion/textual_inversion.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-pipe
- --train_data_dir docs/source/en/imgs
- --learnable_property object
- --placeholder_token <cat-toy>
- --initializer_token a
- --validation_prompt <cat-toy>
- --validation_steps 1
- --save_steps 1
- --num_vectors 2
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 3
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=1
- --checkpoints_total_limit=2
- """.split()
-
- run_command(self._launch_args + test_args)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-2", "checkpoint-3"},
- )
-
- def test_textual_inversion_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/textual_inversion/textual_inversion.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-pipe
- --train_data_dir docs/source/en/imgs
- --learnable_property object
- --placeholder_token <cat-toy>
- --initializer_token a
- --validation_prompt <cat-toy>
- --validation_steps 1
- --save_steps 1
- --num_vectors 2
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 3
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=1
- """.split()
-
- run_command(self._launch_args + test_args)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-1", "checkpoint-2", "checkpoint-3"},
- )
-
- resume_run_args = f"""
- examples/textual_inversion/textual_inversion.py
- --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-pipe
- --train_data_dir docs/source/en/imgs
- --learnable_property object
- --placeholder_token <cat-toy>
- --initializer_token a
- --validation_prompt <cat-toy>
- --validation_steps 1
- --save_steps 1
- --num_vectors 2
- --resolution 64
- --train_batch_size 1
- --gradient_accumulation_steps 1
- --max_train_steps 4
- --learning_rate 5.0e-04
- --scale_lr
- --lr_scheduler constant
- --lr_warmup_steps 0
- --output_dir {tmpdir}
- --checkpointing_steps=1
- --resume_from_checkpoint=checkpoint-3
- --checkpoints_total_limit=2
- """.split()
-
- run_command(self._launch_args + resume_run_args)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-3", "checkpoint-4"},
- )
-
- def test_instruct_pix2pix_checkpointing_checkpoints_total_limit(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/instruct_pix2pix/train_instruct_pix2pix.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --dataset_name=hf-internal-testing/instructpix2pix-10-samples
- --resolution=64
- --random_flip
- --train_batch_size=1
- --max_train_steps=7
- --checkpointing_steps=2
- --checkpoints_total_limit=2
- --output_dir {tmpdir}
- --seed=0
- """.split()
-
- run_command(self._launch_args + test_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-4", "checkpoint-6"},
- )
-
- def test_instruct_pix2pix_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/instruct_pix2pix/train_instruct_pix2pix.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --dataset_name=hf-internal-testing/instructpix2pix-10-samples
- --resolution=64
- --random_flip
- --train_batch_size=1
- --max_train_steps=9
- --checkpointing_steps=2
- --output_dir {tmpdir}
- --seed=0
- """.split()
-
- run_command(self._launch_args + test_args)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-2", "checkpoint-4", "checkpoint-6", "checkpoint-8"},
- )
-
- resume_run_args = f"""
- examples/instruct_pix2pix/train_instruct_pix2pix.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --dataset_name=hf-internal-testing/instructpix2pix-10-samples
- --resolution=64
- --random_flip
- --train_batch_size=1
- --max_train_steps=11
- --checkpointing_steps=2
- --output_dir {tmpdir}
- --seed=0
- --resume_from_checkpoint=checkpoint-8
- --checkpoints_total_limit=3
- """.split()
-
- run_command(self._launch_args + resume_run_args)
-
- # check checkpoint directories exist
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-6", "checkpoint-8", "checkpoint-10"},
- )
-
- def test_dreambooth_checkpointing_checkpoints_total_limit(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/dreambooth/train_dreambooth.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir=docs/source/en/imgs
- --output_dir={tmpdir}
- --instance_prompt=prompt
- --resolution=64
- --train_batch_size=1
- --gradient_accumulation_steps=1
- --max_train_steps=6
- --checkpoints_total_limit=2
- --checkpointing_steps=2
- """.split()
-
- run_command(self._launch_args + test_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-4", "checkpoint-6"},
- )
-
- def test_dreambooth_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/dreambooth/train_dreambooth.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir=docs/source/en/imgs
- --output_dir={tmpdir}
- --instance_prompt=prompt
- --resolution=64
- --train_batch_size=1
- --gradient_accumulation_steps=1
- --max_train_steps=9
- --checkpointing_steps=2
- """.split()
-
- run_command(self._launch_args + test_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-2", "checkpoint-4", "checkpoint-6", "checkpoint-8"},
- )
-
- resume_run_args = f"""
- examples/dreambooth/train_dreambooth.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir=docs/source/en/imgs
- --output_dir={tmpdir}
- --instance_prompt=prompt
- --resolution=64
- --train_batch_size=1
- --gradient_accumulation_steps=1
- --max_train_steps=11
- --checkpointing_steps=2
- --resume_from_checkpoint=checkpoint-8
- --checkpoints_total_limit=3
- """.split()
-
- run_command(self._launch_args + resume_run_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-6", "checkpoint-8", "checkpoint-10"},
- )
-
- def test_dreambooth_lora_checkpointing_checkpoints_total_limit(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/dreambooth/train_dreambooth_lora.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir=docs/source/en/imgs
- --output_dir={tmpdir}
- --instance_prompt=prompt
- --resolution=64
- --train_batch_size=1
- --gradient_accumulation_steps=1
- --max_train_steps=6
- --checkpoints_total_limit=2
- --checkpointing_steps=2
- """.split()
-
- run_command(self._launch_args + test_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-4", "checkpoint-6"},
- )
-
- def test_dreambooth_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/dreambooth/train_dreambooth_lora.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir=docs/source/en/imgs
- --output_dir={tmpdir}
- --instance_prompt=prompt
- --resolution=64
- --train_batch_size=1
- --gradient_accumulation_steps=1
- --max_train_steps=9
- --checkpointing_steps=2
- """.split()
-
- run_command(self._launch_args + test_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-2", "checkpoint-4", "checkpoint-6", "checkpoint-8"},
- )
-
- resume_run_args = f"""
- examples/dreambooth/train_dreambooth_lora.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir=docs/source/en/imgs
- --output_dir={tmpdir}
- --instance_prompt=prompt
- --resolution=64
- --train_batch_size=1
- --gradient_accumulation_steps=1
- --max_train_steps=11
- --checkpointing_steps=2
- --resume_from_checkpoint=checkpoint-8
- --checkpoints_total_limit=3
- """.split()
-
- run_command(self._launch_args + resume_run_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-6", "checkpoint-8", "checkpoint-10"},
- )
-
- def test_controlnet_checkpointing_checkpoints_total_limit(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/controlnet/train_controlnet.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --dataset_name=hf-internal-testing/fill10
- --output_dir={tmpdir}
- --resolution=64
- --train_batch_size=1
- --gradient_accumulation_steps=1
- --max_train_steps=6
- --checkpoints_total_limit=2
- --checkpointing_steps=2
- --controlnet_model_name_or_path=hf-internal-testing/tiny-controlnet
- """.split()
-
- run_command(self._launch_args + test_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-4", "checkpoint-6"},
- )
-
- def test_controlnet_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/controlnet/train_controlnet.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --dataset_name=hf-internal-testing/fill10
- --output_dir={tmpdir}
- --resolution=64
- --train_batch_size=1
- --gradient_accumulation_steps=1
- --controlnet_model_name_or_path=hf-internal-testing/tiny-controlnet
- --max_train_steps=9
- --checkpointing_steps=2
- """.split()
-
- run_command(self._launch_args + test_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-2", "checkpoint-4", "checkpoint-6", "checkpoint-8"},
- )
-
- resume_run_args = f"""
- examples/controlnet/train_controlnet.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --dataset_name=hf-internal-testing/fill10
- --output_dir={tmpdir}
- --resolution=64
- --train_batch_size=1
- --gradient_accumulation_steps=1
- --controlnet_model_name_or_path=hf-internal-testing/tiny-controlnet
- --max_train_steps=11
- --checkpointing_steps=2
- --resume_from_checkpoint=checkpoint-8
- --checkpoints_total_limit=3
- """.split()
-
- run_command(self._launch_args + resume_run_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-8", "checkpoint-10", "checkpoint-12"},
- )
-
- def test_controlnet_sdxl(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/controlnet/train_controlnet_sdxl.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-xl-pipe
- --dataset_name=hf-internal-testing/fill10
- --output_dir={tmpdir}
- --resolution=64
- --train_batch_size=1
- --gradient_accumulation_steps=1
- --controlnet_model_name_or_path=hf-internal-testing/tiny-controlnet-sdxl
- --max_train_steps=9
- --checkpointing_steps=2
- """.split()
-
- run_command(self._launch_args + test_args)
-
- self.assertTrue(os.path.isfile(os.path.join(tmpdir, "diffusion_pytorch_model.bin")))
-
- def test_custom_diffusion_checkpointing_checkpoints_total_limit(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/custom_diffusion/train_custom_diffusion.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir=docs/source/en/imgs
- --output_dir={tmpdir}
- --instance_prompt=<new1>
- --resolution=64
- --train_batch_size=1
- --modifier_token=<new1>
- --dataloader_num_workers=0
- --max_train_steps=6
- --checkpoints_total_limit=2
- --checkpointing_steps=2
- """.split()
-
- run_command(self._launch_args + test_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-4", "checkpoint-6"},
- )
-
- def test_custom_diffusion_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
- with tempfile.TemporaryDirectory() as tmpdir:
- test_args = f"""
- examples/custom_diffusion/train_custom_diffusion.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir=docs/source/en/imgs
- --output_dir={tmpdir}
- --instance_prompt=<new1>
- --resolution=64
- --train_batch_size=1
- --modifier_token=<new1>
- --dataloader_num_workers=0
- --max_train_steps=9
- --checkpointing_steps=2
- """.split()
-
- run_command(self._launch_args + test_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-2", "checkpoint-4", "checkpoint-6", "checkpoint-8"},
- )
-
- resume_run_args = f"""
- examples/custom_diffusion/train_custom_diffusion.py
- --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe
- --instance_data_dir=docs/source/en/imgs
- --output_dir={tmpdir}
- --instance_prompt=<new1>
- --resolution=64
- --train_batch_size=1
- --modifier_token=<new1>
- --dataloader_num_workers=0
- --max_train_steps=11
- --checkpointing_steps=2
- --resume_from_checkpoint=checkpoint-8
- --checkpoints_total_limit=3
- """.split()
-
- run_command(self._launch_args + resume_run_args)
-
- self.assertEqual(
- {x for x in os.listdir(tmpdir) if "checkpoint" in x},
- {"checkpoint-6", "checkpoint-8", "checkpoint-10"},
- )
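Note on the `checkpoints_total_limit` tests above: they all exercise the same rotation rule, namely that before a new checkpoint directory is written, the oldest `checkpoint-N` directories are pruned so the total never exceeds the limit. The following is a minimal illustrative sketch of that pruning step; it is not taken from the training scripts themselves, and the directory layout is assumed from the assertions in the tests.

import os
import shutil


def prune_checkpoints(output_dir: str, checkpoints_total_limit: int) -> None:
    """Remove the oldest checkpoint-N directories so one more can be saved."""
    checkpoints = [d for d in os.listdir(output_dir) if d.startswith("checkpoint")]
    checkpoints = sorted(checkpoints, key=lambda d: int(d.split("-")[1]))

    # If saving one more checkpoint would exceed the limit, delete the oldest
    # directories first. With a limit of 2 and checkpoints at steps 2, 4, and 6,
    # this is why only checkpoint-4 and checkpoint-6 survive in the tests above.
    if len(checkpoints) >= checkpoints_total_limit:
        num_to_remove = len(checkpoints) - checkpoints_total_limit + 1
        for ckpt in checkpoints[:num_to_remove]:
            shutil.rmtree(os.path.join(output_dir, ckpt))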
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/README.md
deleted file mode 100644
index 31ad27793e34783faabc222adf98691fb396a0d8..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Schedulers
-
-For more information on the schedulers, please refer to the [docs](https://huggingface.co/docs/diffusers/api/schedulers/overview).
\ No newline at end of file
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/trident_faster_rcnn.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/trident_faster_rcnn.py
deleted file mode 100644
index f0fd80d41407162df71ba5349fc659d4713cdb6e..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/trident_faster_rcnn.py
+++ /dev/null
@@ -1,66 +0,0 @@
-from ..builder import DETECTORS
-from .faster_rcnn import FasterRCNN
-
-
-@DETECTORS.register_module()
-class TridentFasterRCNN(FasterRCNN):
- """Implementation of `TridentNet `_"""
-
- def __init__(self,
- backbone,
- rpn_head,
- roi_head,
- train_cfg,
- test_cfg,
- neck=None,
- pretrained=None):
-
- super(TridentFasterRCNN, self).__init__(
- backbone=backbone,
- neck=neck,
- rpn_head=rpn_head,
- roi_head=roi_head,
- train_cfg=train_cfg,
- test_cfg=test_cfg,
- pretrained=pretrained)
- assert self.backbone.num_branch == self.roi_head.num_branch
- assert self.backbone.test_branch_idx == self.roi_head.test_branch_idx
- self.num_branch = self.backbone.num_branch
- self.test_branch_idx = self.backbone.test_branch_idx
-
- def simple_test(self, img, img_metas, proposals=None, rescale=False):
- """Test without augmentation."""
- assert self.with_bbox, 'Bbox head must be implemented.'
- x = self.extract_feat(img)
- if proposals is None:
- num_branch = (self.num_branch if self.test_branch_idx == -1 else 1)
- trident_img_metas = img_metas * num_branch
- proposal_list = self.rpn_head.simple_test_rpn(x, trident_img_metas)
- else:
- proposal_list = proposals
-
- return self.roi_head.simple_test(
- x, proposal_list, trident_img_metas, rescale=rescale)
-
- def aug_test(self, imgs, img_metas, rescale=False):
- """Test with augmentations.
-
- If rescale is False, then returned bboxes and masks will fit the scale
- of imgs[0].
- """
- x = self.extract_feats(imgs)
- num_branch = (self.num_branch if self.test_branch_idx == -1 else 1)
- trident_img_metas = [img_metas * num_branch for img_metas in img_metas]
- proposal_list = self.rpn_head.aug_test_rpn(x, trident_img_metas)
- return self.roi_head.aug_test(
- x, proposal_list, img_metas, rescale=rescale)
-
- def forward_train(self, img, img_metas, gt_bboxes, gt_labels, **kwargs):
- """make copies of img and gts to fit multi-branch."""
- trident_gt_bboxes = tuple(gt_bboxes * self.num_branch)
- trident_gt_labels = tuple(gt_labels * self.num_branch)
- trident_img_metas = tuple(img_metas * self.num_branch)
-
- return super(TridentFasterRCNN,
- self).forward_train(img, trident_img_metas,
- trident_gt_bboxes, trident_gt_labels)
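Side note on the `forward_train` override above: multiplying a Python list by `num_branch` simply repeats it, which is how the per-image ground truths are duplicated so every trident branch sees the same targets. A tiny self-contained illustration with placeholder data (not taken from mmdet):

# Hypothetical per-image annotations for two images.
gt_bboxes = [[(0, 0, 10, 10)], [(5, 5, 20, 20)]]
num_branch = 3

# Same replication as `tuple(gt_bboxes * self.num_branch)` in forward_train above.
trident_gt_bboxes = tuple(gt_bboxes * num_branch)
assert len(trident_gt_bboxes) == len(gt_bboxes) * num_branch  # 6 entries in total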
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/chat_style-cai-chat.css b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/chat_style-cai-chat.css
deleted file mode 100644
index 47f39e0e870b4229ec8fd60a4a69657e36e48f66..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/chat_style-cai-chat.css
+++ /dev/null
@@ -1,59 +0,0 @@
-.message {
- display: grid;
- grid-template-columns: 60px minmax(0, 1fr);
- padding-bottom: 25px;
- font-size: 15px;
- font-family: 'Noto Sans', Helvetica, Arial, sans-serif;
- line-height: 23px !important;
-}
-
-.circle-you {
- width: 50px;
- height: 50px;
- background-color: rgb(238, 78, 59);
- border-radius: 50%;
-}
-
-.circle-bot {
- width: 50px;
- height: 50px;
- background-color: rgb(59, 78, 244);
- border-radius: 50%;
-}
-
-.circle-bot img,
-.circle-you img {
- border-radius: 50%;
- width: 100%;
- height: 100%;
- object-fit: cover;
-}
-
-.text p {
- margin-top: 5px;
-}
-
-.username {
- font-weight: bold;
-}
-
-.message-body img {
- max-width: 300px;
- max-height: 300px;
- border-radius: 20px;
-}
-
-.message-body p {
- margin-bottom: 0 !important;
- font-size: 15px !important;
- line-height: 23px !important;
-}
-
-.dark .message-body p em {
- color: rgb(138, 138, 138) !important;
-}
-
-.message-body p em {
- color: rgb(110, 110, 110) !important;
- font-weight: 500;
-}
\ No newline at end of file
diff --git a/spaces/Anustup/NS_AI_LABS/tests/vad_test.py b/spaces/Anustup/NS_AI_LABS/tests/vad_test.py
deleted file mode 100644
index a926605685ecbfcd5bed4ebe0da29e6d8bfbadbe..0000000000000000000000000000000000000000
--- a/spaces/Anustup/NS_AI_LABS/tests/vad_test.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import pprint
-import unittest
-import numpy as np
-import sys
-
-sys.path.append('../NS_AI_LABS')
-
-from src.vad import AbstractTranscription, VadSileroTranscription
-
-class TestVad(unittest.TestCase):
- def __init__(self, *args, **kwargs):
- super(TestVad, self).__init__(*args, **kwargs)
- self.transcribe_calls = []
-
- def test_transcript(self):
- mock = MockVadTranscription()
-
- self.transcribe_calls.clear()
- result = mock.transcribe("mock", lambda segment : self.transcribe_segments(segment))
-
- self.assertListEqual(self.transcribe_calls, [
- [30, 30],
- [100, 100]
- ])
-
- self.assertListEqual(result['segments'],
- [{'end': 50.0, 'start': 40.0, 'text': 'Hello world '},
- {'end': 120.0, 'start': 110.0, 'text': 'Hello world '}]
- )
-
- def transcribe_segments(self, segment):
- self.transcribe_calls.append(segment.tolist())
-
- # Dummy text
- return {
- 'text': "Hello world ",
- 'segments': [
- {
- "start": 10.0,
- "end": 20.0,
- "text": "Hello world "
- }
- ],
- 'language': ""
- }
-
-class MockVadTranscription(AbstractTranscription):
- def __init__(self):
- super().__init__()
-
- def get_audio_segment(self, audio: str, start_time: str = None, duration: str = None):
- start_time_seconds = float(start_time.removesuffix("s"))
- duration_seconds = float(duration.removesuffix("s"))
-
- # For mocking, this just returns a simple numpy array
- return np.array([start_time_seconds, duration_seconds], dtype=np.float64)
-
- def get_transcribe_timestamps(self, audio: str):
- result = []
-
- result.append( { 'start': 30, 'end': 60 } )
- result.append( { 'start': 100, 'end': 200 } )
- return result
-
-if __name__ == '__main__':
- unittest.main()
\ No newline at end of file
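For readers checking the expected values in the test above: the mock VAD yields the segments 30-60s and 100-200s, the mock `get_audio_segment` encodes each as `[start, duration]` (hence `[30, 30]` and `[100, 100]`), and the dummy transcription always covers 10-20s within a segment, so the merged result offsets those times by each segment's start. A small sketch of that offset, stated as an assumption about how the numbers relate rather than as the library's implementation:

def to_absolute(segment_start: float, rel_start: float, rel_end: float):
    # Relative timestamps inside a VAD segment are shifted by the segment start.
    return segment_start + rel_start, segment_start + rel_end

assert to_absolute(30, 10, 20) == (40, 50)     # first expected segment in the test
assert to_absolute(100, 10, 20) == (110, 120)  # second expected segment in the test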
diff --git a/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/image_generation.py b/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/image_generation.py
deleted file mode 100644
index f04b4bcc76ff3c8dc59d1c61004073a3a6815c01..0000000000000000000000000000000000000000
--- a/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/image_generation.py
+++ /dev/null
@@ -1,363 +0,0 @@
-import json
-import math
-import random
-import time
-from pathlib import Path
-from uuid import uuid4
-
-import torch
-from diffusers import __version__ as diffusers_version
-from huggingface_hub import CommitOperationAdd, create_commit, create_repo
-
-from .upsampling import RealESRGANModel
-from .utils import pad_along_axis
-
-
-def get_all_files(root: Path):
- dirs = [root]
- while len(dirs) > 0:
- dir = dirs.pop()
- for candidate in dir.iterdir():
- if candidate.is_file():
- yield candidate
- if candidate.is_dir():
- dirs.append(candidate)
-
-
-def get_groups_of_n(n: int, iterator):
- assert n > 1
- buffer = []
- for elt in iterator:
- if len(buffer) == n:
- yield buffer
- buffer = []
- buffer.append(elt)
- if len(buffer) != 0:
- yield buffer
-
-
-def upload_folder_chunked(
- repo_id: str,
- upload_dir: Path,
- n: int = 100,
- private: bool = False,
- create_pr: bool = False,
-):
- """Upload a folder to the Hugging Face Hub in chunks of n files at a time.
- Args:
- repo_id (str): The repo id to upload to.
- upload_dir (Path): The directory to upload.
- n (int, *optional*, defaults to 100): The number of files to upload at a time.
- private (bool, *optional*): Whether to upload the repo as private.
- create_pr (bool, *optional*): Whether to create a PR after uploading instead of committing directly.
- """
-
- url = create_repo(repo_id, exist_ok=True, private=private, repo_type="dataset")
- print(f"Uploading files to: {url}")
-
- root = Path(upload_dir)
- if not root.exists():
- raise ValueError(f"Upload directory {root} does not exist.")
-
- for i, file_paths in enumerate(get_groups_of_n(n, get_all_files(root))):
- print(f"Committing {file_paths}")
- operations = [
- CommitOperationAdd(
- path_in_repo=f"{file_path.parent.name}/{file_path.name}",
- path_or_fileobj=str(file_path),
- )
- for file_path in file_paths
- ]
- create_commit(
- repo_id=repo_id,
- operations=operations,
- commit_message=f"Upload part {i}",
- repo_type="dataset",
- create_pr=create_pr,
- )
-
-
-def generate_input_batches(pipeline, prompts, seeds, batch_size, height, width):
- if len(prompts) != len(seeds):
- raise ValueError("Number of prompts and seeds must be equal.")
-
- embeds_batch, noise_batch = None, None
- batch_idx = 0
- for i, (prompt, seed) in enumerate(zip(prompts, seeds)):
- embeds = pipeline.embed_text(prompt)
- noise = torch.randn(
- (1, pipeline.unet.in_channels, height // 8, width // 8),
- device=pipeline.device,
- generator=torch.Generator(device="cpu" if pipeline.device.type == "mps" else pipeline.device).manual_seed(
- seed
- ),
- )
- embeds_batch = embeds if embeds_batch is None else torch.cat([embeds_batch, embeds])
- noise_batch = noise if noise_batch is None else torch.cat([noise_batch, noise])
- batch_is_ready = embeds_batch.shape[0] == batch_size or i + 1 == len(prompts)
- if not batch_is_ready:
- continue
- yield batch_idx, embeds_batch.type(torch.cuda.HalfTensor), noise_batch.type(torch.cuda.HalfTensor)
- batch_idx += 1
- del embeds_batch, noise_batch
- torch.cuda.empty_cache()
- embeds_batch, noise_batch = None, None
-
-
-def generate_images(
- pipeline,
- prompt,
- batch_size=1,
- num_batches=1,
- seeds=None,
- num_inference_steps=50,
- guidance_scale=7.5,
- output_dir="./images",
- image_file_ext=".jpg",
- upsample=False,
- height=512,
- width=512,
- eta=0.0,
- push_to_hub=False,
- repo_id=None,
- private=False,
- create_pr=False,
- name=None,
-):
- """Generate images using the StableDiffusion pipeline.
- Args:
- pipeline (StableDiffusionWalkPipeline): The StableDiffusion pipeline instance.
- prompt (str): The prompt to use for the image generation.
- batch_size (int, *optional*, defaults to 1): The batch size to use for image generation.
- num_batches (int, *optional*, defaults to 1): The number of batches to generate.
- seeds (list[int], *optional*): The seeds to use for the image generation.
- num_inference_steps (int, *optional*, defaults to 50): The number of inference steps to take.
- guidance_scale (float, *optional*, defaults to 7.5): The guidance scale to use for image generation.
- output_dir (str, *optional*, defaults to "./images"): The output directory to save the images to.
- image_file_ext (str, *optional*, defaults to '.jpg'): The image file extension to use.
- upsample (bool, *optional*, defaults to False): Whether to upsample the images.
- height (int, *optional*, defaults to 512): The height of the images to generate.
- width (int, *optional*, defaults to 512): The width of the images to generate.
- eta (float, *optional*, defaults to 0.0): The eta parameter to use for image generation.
- push_to_hub (bool, *optional*, defaults to False): Whether to push the generated images to the Hugging Face Hub.
- repo_id (str, *optional*): The repo id to push the images to.
- private (bool, *optional*): Whether to push the repo as private.
- create_pr (bool, *optional*): Whether to create a PR after pushing instead of committing directly.
- name (str, *optional*, defaults to current timestamp str): The name of the sub-directory of
- output_dir to save the images to.
- """
- if push_to_hub:
- if repo_id is None:
- raise ValueError("Must provide repo_id if push_to_hub is True.")
-
- name = name or time.strftime("%Y%m%d-%H%M%S")
- save_path = Path(output_dir) / name
- save_path.mkdir(exist_ok=False, parents=True)
- prompt_config_path = save_path / "prompt_config.json"
-
- num_images = batch_size * num_batches
- seeds = seeds or [random.choice(list(range(0, 9999999))) for _ in range(num_images)]
- if len(seeds) != num_images:
- raise ValueError("Number of seeds must be equal to batch_size * num_batches.")
-
- if upsample:
- if getattr(pipeline, "upsampler", None) is None:
- pipeline.upsampler = RealESRGANModel.from_pretrained("nateraw/real-esrgan")
- pipeline.upsampler.to(pipeline.device)
-
- cfg = dict(
- prompt=prompt,
- guidance_scale=guidance_scale,
- eta=eta,
- num_inference_steps=num_inference_steps,
- upsample=upsample,
- height=height,
- width=width,
- scheduler=dict(pipeline.scheduler.config),
- tiled=pipeline.tiled,
- diffusers_version=diffusers_version,
- device_name=torch.cuda.get_device_name(0) if torch.cuda.is_available() else "unknown",
- )
- prompt_config_path.write_text(json.dumps(cfg, indent=2, sort_keys=False))
-
- frame_index = 0
- frame_filepaths = []
- for batch_idx, embeds, noise in generate_input_batches(
- pipeline, [prompt] * num_images, seeds, batch_size, height, width
- ):
- print(f"Generating batch {batch_idx}")
-
- outputs = pipeline(
- text_embeddings=embeds,
- latents=noise,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- eta=eta,
- height=height,
- width=width,
- output_type="pil" if not upsample else "numpy",
- )["images"]
- if upsample:
- images = []
- for output in outputs:
- images.append(pipeline.upsampler(output))
- else:
- images = outputs
-
- for image in images:
- frame_filepath = save_path / f"{seeds[frame_index]}{image_file_ext}"
- image.save(frame_filepath)
- frame_filepaths.append(str(frame_filepath))
- frame_index += 1
-
- if push_to_hub:
- upload_folder_chunked(repo_id, save_path, private=private, create_pr=create_pr)
-
- return frame_filepaths
-
-
-def generate_images_flax(
- pipeline,
- params,
- prompt,
- batch_size=1,
- num_batches=1,
- seeds=None,
- num_inference_steps=50,
- guidance_scale=7.5,
- output_dir="./images",
- image_file_ext=".jpg",
- upsample=False,
- height=512,
- width=512,
- push_to_hub=False,
- repo_id=None,
- private=False,
- create_pr=False,
- name=None,
-):
- import jax
- from flax.training.common_utils import shard
-
- """Generate images using the StableDiffusion pipeline.
- Args:
- pipeline (StableDiffusionWalkPipeline): The StableDiffusion pipeline instance.
- params (`Union[Dict, FrozenDict]`): The model parameters.
- prompt (str): The prompt to use for the image generation.
- batch_size (int, *optional*, defaults to 1): The batch size to use for image generation.
- num_batches (int, *optional*, defaults to 1): The number of batches to generate.
- seeds (int, *optional*): The seed to use for the image generation.
- num_inference_steps (int, *optional*, defaults to 50): The number of inference steps to take.
- guidance_scale (float, *optional*, defaults to 7.5): The guidance scale to use for image generation.
- output_dir (str, *optional*, defaults to "./images"): The output directory to save the images to.
- image_file_ext (str, *optional*, defaults to '.jpg'): The image file extension to use.
- upsample (bool, *optional*, defaults to False): Whether to upsample the images.
- height (int, *optional*, defaults to 512): The height of the images to generate.
- width (int, *optional*, defaults to 512): The width of the images to generate.
- push_to_hub (bool, *optional*, defaults to False): Whether to push the generated images to the Hugging Face Hub.
- repo_id (str, *optional*): The repo id to push the images to.
- private (bool, *optional*): Whether to push the repo as private.
- create_pr (bool, *optional*): Whether to create a PR after pushing instead of committing directly.
- name (str, *optional*, defaults to current timestamp str): The name of the sub-directory of
- output_dir to save the images to.
- """
- if push_to_hub:
- if repo_id is None:
- raise ValueError("Must provide repo_id if push_to_hub is True.")
-
- name = name or time.strftime("%Y%m%d-%H%M%S")
- save_path = Path(output_dir) / name
- save_path.mkdir(exist_ok=False, parents=True)
- prompt_config_path = save_path / "prompt_config.json"
-
- num_images = batch_size * num_batches
- seeds = seeds or random.choice(list(range(0, 9999999)))
- prng_seed = jax.random.PRNGKey(seeds)
-
- if upsample:
- if getattr(pipeline, "upsampler", None) is None:
- pipeline.upsampler = RealESRGANModel.from_pretrained("nateraw/real-esrgan")
- if not torch.cuda.is_available():
- print("Upsampling is recommended to be done on a GPU, as it is very slow on CPU")
- else:
- pipeline.upsampler = pipeline.upsampler.cuda()
-
- cfg = dict(
- prompt=prompt,
- guidance_scale=guidance_scale,
- num_inference_steps=num_inference_steps,
- upsample=upsample,
- height=height,
- width=width,
- scheduler=dict(pipeline.scheduler.config),
- # tiled=pipeline.tiled,
- diffusers_version=diffusers_version,
- device_name=torch.cuda.get_device_name(0) if torch.cuda.is_available() else "unknown",
- )
- prompt_config_path.write_text(json.dumps(cfg, indent=2, sort_keys=False))
-
- NUM_TPU_CORES = jax.device_count()
- jit = True # force jit, assume params are already sharded
- batch_size_total = NUM_TPU_CORES * batch_size if jit else batch_size
-
- def generate_input_batches(prompts, batch_size):
- prompt_batch = None
- for batch_idx in range(math.ceil(len(prompts) / batch_size)):
- prompt_batch = prompts[batch_idx * batch_size : (batch_idx + 1) * batch_size]
- yield batch_idx, prompt_batch
-
- frame_index = 0
- frame_filepaths = []
- for batch_idx, prompt_batch in generate_input_batches([prompt] * num_images, batch_size_total):
- # This batch size corresponds to each TPU core, so we are generating batch_size * NUM_TPU_CORES images
- print(f"Generating batches: {batch_idx*NUM_TPU_CORES} - {min((batch_idx+1)*NUM_TPU_CORES, num_batches)}")
- prompt_ids_batch = pipeline.prepare_inputs(prompt_batch)
- prng_seed_batch = prng_seed
-
- if jit:
- padded = False
- # Check if the length of prompt_batch is a multiple of NUM_TPU_CORES; if not, pad its ids
- if len(prompt_batch) % NUM_TPU_CORES != 0:
- padded = True
- pad_size = NUM_TPU_CORES - (len(prompt_batch) % NUM_TPU_CORES)
- # Pad prompt_ids_batch with zeros in the batch dimension
- prompt_ids_batch = pad_along_axis(prompt_ids_batch, pad_size, axis=0)
-
- prompt_ids_batch = shard(prompt_ids_batch)
- prng_seed_batch = jax.random.split(prng_seed, jax.device_count())
-
- outputs = pipeline(
- params,
- prng_seed=prng_seed_batch,
- prompt_ids=prompt_ids_batch,
- height=height,
- width=width,
- guidance_scale=guidance_scale,
- num_inference_steps=num_inference_steps,
- output_type="pil" if not upsample else "numpy",
- jit=jit,
- )["images"]
-
- if jit:
- # check if we padded and remove that padding from outputs
- if padded:
- outputs = outputs[:-pad_size]
-
- if upsample:
- images = []
- for output in outputs:
- images.append(pipeline.upsampler(output))
- else:
- images = outputs
-
- for image in images:
- uuid = str(uuid4())
- frame_filepath = save_path / f"{uuid}{image_file_ext}"
- image.save(frame_filepath)
- frame_filepaths.append(str(frame_filepath))
- frame_index += 1
-
- if push_to_hub:
- upload_folder_chunked(repo_id, save_path, private=private, create_pr=create_pr)
-
- return frame_filepaths
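For reference, a hedged usage sketch for the `upload_folder_chunked` helper defined in the removed module above. The repo id and local path are hypothetical, and this assumes the helper is importable and a Hugging Face token is already configured:

from pathlib import Path

# Hypothetical values for illustration only.
upload_folder_chunked(
    repo_id="username/generated-frames",          # hypothetical dataset repo
    upload_dir=Path("./images/20240101-120000"),  # hypothetical output folder
    n=50,             # commit 50 files at a time
    private=True,
    create_pr=False,
)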
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/tests/isatty_test.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/tests/isatty_test.py
deleted file mode 100644
index 0f84e4befe550d4386d24264648abf1323e682ff..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/tests/isatty_test.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
-import sys
-from unittest import TestCase, main
-
-from ..ansitowin32 import StreamWrapper, AnsiToWin32
-from .utils import pycharm, replace_by, replace_original_by, StreamTTY, StreamNonTTY
-
-
-def is_a_tty(stream):
- return StreamWrapper(stream, None).isatty()
-
-class IsattyTest(TestCase):
-
- def test_TTY(self):
- tty = StreamTTY()
- self.assertTrue(is_a_tty(tty))
- with pycharm():
- self.assertTrue(is_a_tty(tty))
-
- def test_nonTTY(self):
- non_tty = StreamNonTTY()
- self.assertFalse(is_a_tty(non_tty))
- with pycharm():
- self.assertFalse(is_a_tty(non_tty))
-
- def test_withPycharm(self):
- with pycharm():
- self.assertTrue(is_a_tty(sys.stderr))
- self.assertTrue(is_a_tty(sys.stdout))
-
- def test_withPycharmTTYOverride(self):
- tty = StreamTTY()
- with pycharm(), replace_by(tty):
- self.assertTrue(is_a_tty(tty))
-
- def test_withPycharmNonTTYOverride(self):
- non_tty = StreamNonTTY()
- with pycharm(), replace_by(non_tty):
- self.assertFalse(is_a_tty(non_tty))
-
- def test_withPycharmNoneOverride(self):
- with pycharm():
- with replace_by(None), replace_original_by(None):
- self.assertFalse(is_a_tty(None))
- self.assertFalse(is_a_tty(StreamNonTTY()))
- self.assertTrue(is_a_tty(StreamTTY()))
-
- def test_withPycharmStreamWrapped(self):
- with pycharm():
- self.assertTrue(AnsiToWin32(StreamTTY()).stream.isatty())
- self.assertFalse(AnsiToWin32(StreamNonTTY()).stream.isatty())
- self.assertTrue(AnsiToWin32(sys.stdout).stream.isatty())
- self.assertTrue(AnsiToWin32(sys.stderr).stream.isatty())
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/dist.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/dist.py
deleted file mode 100644
index 917cd94a0c29985085f9332c5a73549c51bb8fb1..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/dist.py
+++ /dev/null
@@ -1,1286 +0,0 @@
-"""distutils.dist
-
-Provides the Distribution class, which represents the module distribution
-being built/installed/distributed.
-"""
-
-import sys
-import os
-import re
-import pathlib
-import contextlib
-from email import message_from_file
-
-try:
- import warnings
-except ImportError:
- warnings = None
-
-from distutils.errors import (
- DistutilsOptionError,
- DistutilsModuleError,
- DistutilsArgError,
- DistutilsClassError,
-)
-from distutils.fancy_getopt import FancyGetopt, translate_longopt
-from distutils.util import check_environ, strtobool, rfc822_escape
-from distutils import log
-from distutils.debug import DEBUG
-
-# Regex to define acceptable Distutils command names. This is not *quite*
-# the same as a Python NAME -- I don't allow leading underscores. The fact
-# that they're very similar is no coincidence; the default naming scheme is
-# to look for a Python module named after the command.
-command_re = re.compile(r'^[a-zA-Z]([a-zA-Z0-9_]*)$')
-
-
-def _ensure_list(value, fieldname):
- if isinstance(value, str):
- # a string containing comma separated values is okay. It will
- # be converted to a list by Distribution.finalize_options().
- pass
- elif not isinstance(value, list):
- # passing a tuple or an iterator perhaps, warn and convert
- typename = type(value).__name__
- msg = "Warning: '{fieldname}' should be a list, got type '{typename}'"
- msg = msg.format(**locals())
- log.log(log.WARN, msg)
- value = list(value)
- return value
-
-
-class Distribution:
- """The core of the Distutils. Most of the work hiding behind 'setup'
- is really done within a Distribution instance, which farms the work out
- to the Distutils commands specified on the command line.
-
- Setup scripts will almost never instantiate Distribution directly,
- unless the 'setup()' function is totally inadequate to their needs.
- However, it is conceivable that a setup script might wish to subclass
- Distribution for some specialized purpose, and then pass the subclass
- to 'setup()' as the 'distclass' keyword argument. If so, it is
- necessary to respect the expectations that 'setup' has of Distribution.
- See the code for 'setup()', in core.py, for details.
- """
-
- # 'global_options' describes the command-line options that may be
- # supplied to the setup script prior to any actual commands.
- # Eg. "./setup.py -n" or "./setup.py --quiet" both take advantage of
- # these global options. This list should be kept to a bare minimum,
- # since every global option is also valid as a command option -- and we
- # don't want to pollute the commands with too many options that they
- # have minimal control over.
- # The fourth entry for verbose means that it can be repeated.
- global_options = [
- ('verbose', 'v', "run verbosely (default)", 1),
- ('quiet', 'q', "run quietly (turns verbosity off)"),
- ('dry-run', 'n', "don't actually do anything"),
- ('help', 'h', "show detailed help message"),
- ('no-user-cfg', None, 'ignore pydistutils.cfg in your home directory'),
- ]
-
- # 'common_usage' is a short (2-3 line) string describing the common
- # usage of the setup script.
- common_usage = """\
-Common commands: (see '--help-commands' for more)
-
- setup.py build will build the package underneath 'build/'
- setup.py install will install the package
-"""
-
- # options that are not propagated to the commands
- display_options = [
- ('help-commands', None, "list all available commands"),
- ('name', None, "print package name"),
- ('version', 'V', "print package version"),
- ('fullname', None, "print <package name>-<version>"),
- ('author', None, "print the author's name"),
- ('author-email', None, "print the author's email address"),
- ('maintainer', None, "print the maintainer's name"),
- ('maintainer-email', None, "print the maintainer's email address"),
- ('contact', None, "print the maintainer's name if known, else the author's"),
- (
- 'contact-email',
- None,
- "print the maintainer's email address if known, else the author's",
- ),
- ('url', None, "print the URL for this package"),
- ('license', None, "print the license of the package"),
- ('licence', None, "alias for --license"),
- ('description', None, "print the package description"),
- ('long-description', None, "print the long package description"),
- ('platforms', None, "print the list of platforms"),
- ('classifiers', None, "print the list of classifiers"),
- ('keywords', None, "print the list of keywords"),
- ('provides', None, "print the list of packages/modules provided"),
- ('requires', None, "print the list of packages/modules required"),
- ('obsoletes', None, "print the list of packages/modules made obsolete"),
- ]
- display_option_names = [translate_longopt(x[0]) for x in display_options]
-
- # negative options are options that exclude other options
- negative_opt = {'quiet': 'verbose'}
-
- # -- Creation/initialization methods -------------------------------
-
- def __init__(self, attrs=None): # noqa: C901
- """Construct a new Distribution instance: initialize all the
- attributes of a Distribution, and then use 'attrs' (a dictionary
- mapping attribute names to values) to assign some of those
- attributes their "real" values. (Any attributes not mentioned in
- 'attrs' will be assigned to some null value: 0, None, an empty list
- or dictionary, etc.) Most importantly, initialize the
- 'command_obj' attribute to the empty dictionary; this will be
- filled in with real command objects by 'parse_command_line()'.
- """
-
- # Default values for our command-line options
- self.verbose = 1
- self.dry_run = 0
- self.help = 0
- for attr in self.display_option_names:
- setattr(self, attr, 0)
-
- # Store the distribution meta-data (name, version, author, and so
- # forth) in a separate object -- we're getting to have enough
- # information here (and enough command-line options) that it's
- # worth it. Also delegate 'get_XXX()' methods to the 'metadata'
- # object in a sneaky and underhanded (but efficient!) way.
- self.metadata = DistributionMetadata()
- for basename in self.metadata._METHOD_BASENAMES:
- method_name = "get_" + basename
- setattr(self, method_name, getattr(self.metadata, method_name))
-
- # 'cmdclass' maps command names to class objects, so we
- # can 1) quickly figure out which class to instantiate when
- # we need to create a new command object, and 2) have a way
- # for the setup script to override command classes
- self.cmdclass = {}
-
- # 'command_packages' is a list of packages in which commands
- # are searched for. The factory for command 'foo' is expected
- # to be named 'foo' in the module 'foo' in one of the packages
- # named here. This list is searched from the left; an error
- # is raised if no named package provides the command being
- # searched for. (Always access using get_command_packages().)
- self.command_packages = None
-
- # 'script_name' and 'script_args' are usually set to sys.argv[0]
- # and sys.argv[1:], but they can be overridden when the caller is
- # not necessarily a setup script run from the command-line.
- self.script_name = None
- self.script_args = None
-
- # 'command_options' is where we store command options between
- # parsing them (from config files, the command-line, etc.) and when
- # they are actually needed -- ie. when the command in question is
- # instantiated. It is a dictionary of dictionaries of 2-tuples:
- # command_options = { command_name : { option : (source, value) } }
- self.command_options = {}
-
- # 'dist_files' is the list of (command, pyversion, file) that
- # have been created by any dist commands run so far. This is
- # filled regardless of whether the run is dry or not. pyversion
- # gives sysconfig.get_python_version() if the dist file is
- # specific to a Python version, 'any' if it is good for all
- # Python versions on the target platform, and '' for a source
- # file. pyversion should not be used to specify minimum or
- # maximum required Python versions; use the metainfo for that
- # instead.
- self.dist_files = []
-
- # These options are really the business of various commands, rather
- # than of the Distribution itself. We provide aliases for them in
- # Distribution as a convenience to the developer.
- self.packages = None
- self.package_data = {}
- self.package_dir = None
- self.py_modules = None
- self.libraries = None
- self.headers = None
- self.ext_modules = None
- self.ext_package = None
- self.include_dirs = None
- self.extra_path = None
- self.scripts = None
- self.data_files = None
- self.password = ''
-
- # And now initialize bookkeeping stuff that can't be supplied by
- # the caller at all. 'command_obj' maps command names to
- # Command instances -- that's how we enforce that every command
- # class is a singleton.
- self.command_obj = {}
-
- # 'have_run' maps command names to boolean values; it keeps track
- # of whether we have actually run a particular command, to make it
- # cheap to "run" a command whenever we think we might need to -- if
- # it's already been done, no need for expensive filesystem
- # operations, we just check the 'have_run' dictionary and carry on.
- # It's only safe to query 'have_run' for a command class that has
- # been instantiated -- a false value will be inserted when the
- # command object is created, and replaced with a true value when
- # the command is successfully run. Thus it's probably best to use
- # '.get()' rather than a straight lookup.
- self.have_run = {}
-
- # Now we'll use the attrs dictionary (ultimately, keyword args from
- # the setup script) to possibly override any or all of these
- # distribution options.
-
- if attrs:
- # Pull out the set of command options and work on them
- # specifically. Note that this order guarantees that aliased
- # command options will override any supplied redundantly
- # through the general options dictionary.
- options = attrs.get('options')
- if options is not None:
- del attrs['options']
- for (command, cmd_options) in options.items():
- opt_dict = self.get_option_dict(command)
- for (opt, val) in cmd_options.items():
- opt_dict[opt] = ("setup script", val)
-
- if 'licence' in attrs:
- attrs['license'] = attrs['licence']
- del attrs['licence']
- msg = "'licence' distribution option is deprecated; use 'license'"
- if warnings is not None:
- warnings.warn(msg)
- else:
- sys.stderr.write(msg + "\n")
-
- # Now work on the rest of the attributes. Any attribute that's
- # not already defined is invalid!
- for (key, val) in attrs.items():
- if hasattr(self.metadata, "set_" + key):
- getattr(self.metadata, "set_" + key)(val)
- elif hasattr(self.metadata, key):
- setattr(self.metadata, key, val)
- elif hasattr(self, key):
- setattr(self, key, val)
- else:
- msg = "Unknown distribution option: %s" % repr(key)
- warnings.warn(msg)
-
- # no-user-cfg is handled before other command line args
- # because other args override the config files, and this
- # one is needed before we can load the config files.
- # If attrs['script_args'] wasn't passed, assume false.
- #
- # This also make sure we just look at the global options
- self.want_user_cfg = True
-
- if self.script_args is not None:
- for arg in self.script_args:
- if not arg.startswith('-'):
- break
- if arg == '--no-user-cfg':
- self.want_user_cfg = False
- break
-
- self.finalize_options()
-
- def get_option_dict(self, command):
- """Get the option dictionary for a given command. If that
- command's option dictionary hasn't been created yet, then create it
- and return the new dictionary; otherwise, return the existing
- option dictionary.
- """
- dict = self.command_options.get(command)
- if dict is None:
- dict = self.command_options[command] = {}
- return dict
-
- def dump_option_dicts(self, header=None, commands=None, indent=""):
- from pprint import pformat
-
- if commands is None: # dump all command option dicts
- commands = sorted(self.command_options.keys())
-
- if header is not None:
- self.announce(indent + header)
- indent = indent + " "
-
- if not commands:
- self.announce(indent + "no commands known yet")
- return
-
- for cmd_name in commands:
- opt_dict = self.command_options.get(cmd_name)
- if opt_dict is None:
- self.announce(indent + "no option dict for '%s' command" % cmd_name)
- else:
- self.announce(indent + "option dict for '%s' command:" % cmd_name)
- out = pformat(opt_dict)
- for line in out.split('\n'):
- self.announce(indent + " " + line)
-
- # -- Config file finding/parsing methods ---------------------------
-
- def find_config_files(self):
- """Find as many configuration files as should be processed for this
- platform, and return a list of filenames in the order in which they
- should be parsed. The filenames returned are guaranteed to exist
- (modulo nasty race conditions).
-
- There are multiple possible config files:
- - distutils.cfg in the Distutils installation directory (i.e.
- where the top-level Distutils __inst__.py file lives)
- - a file in the user's home directory named .pydistutils.cfg
- on Unix and pydistutils.cfg on Windows/Mac; may be disabled
- with the ``--no-user-cfg`` option
- - setup.cfg in the current directory
- - a file named by an environment variable
- """
- check_environ()
- files = [str(path) for path in self._gen_paths() if os.path.isfile(path)]
-
- if DEBUG:
- self.announce("using config files: %s" % ', '.join(files))
-
- return files
-
- def _gen_paths(self):
- # The system-wide Distutils config file
- sys_dir = pathlib.Path(sys.modules['distutils'].__file__).parent
- yield sys_dir / "distutils.cfg"
-
- # The per-user config file
- prefix = '.' * (os.name == 'posix')
- filename = prefix + 'pydistutils.cfg'
- if self.want_user_cfg:
- yield pathlib.Path('~').expanduser() / filename
-
- # All platforms support local setup.cfg
- yield pathlib.Path('setup.cfg')
-
- # Additional config indicated in the environment
- with contextlib.suppress(TypeError):
- yield pathlib.Path(os.getenv("DIST_EXTRA_CONFIG"))
-
- def parse_config_files(self, filenames=None): # noqa: C901
- from configparser import ConfigParser
-
- # Ignore install directory options if we have a venv
- if sys.prefix != sys.base_prefix:
- ignore_options = [
- 'install-base',
- 'install-platbase',
- 'install-lib',
- 'install-platlib',
- 'install-purelib',
- 'install-headers',
- 'install-scripts',
- 'install-data',
- 'prefix',
- 'exec-prefix',
- 'home',
- 'user',
- 'root',
- ]
- else:
- ignore_options = []
-
- ignore_options = frozenset(ignore_options)
-
- if filenames is None:
- filenames = self.find_config_files()
-
- if DEBUG:
- self.announce("Distribution.parse_config_files():")
-
- parser = ConfigParser()
- for filename in filenames:
- if DEBUG:
- self.announce(" reading %s" % filename)
- parser.read(filename)
- for section in parser.sections():
- options = parser.options(section)
- opt_dict = self.get_option_dict(section)
-
- for opt in options:
- if opt != '__name__' and opt not in ignore_options:
- val = parser.get(section, opt)
- opt = opt.replace('-', '_')
- opt_dict[opt] = (filename, val)
-
- # Make the ConfigParser forget everything (so we retain
- # the original filenames that options come from)
- parser.__init__()
-
- # If there was a "global" section in the config file, use it
- # to set Distribution options.
-
- if 'global' in self.command_options:
- for (opt, (src, val)) in self.command_options['global'].items():
- alias = self.negative_opt.get(opt)
- try:
- if alias:
- setattr(self, alias, not strtobool(val))
- elif opt in ('verbose', 'dry_run'): # ugh!
- setattr(self, opt, strtobool(val))
- else:
- setattr(self, opt, val)
- except ValueError as msg:
- raise DistutilsOptionError(msg)
-
- # -- Command-line parsing methods ----------------------------------
-
- def parse_command_line(self):
- """Parse the setup script's command line, taken from the
- 'script_args' instance attribute (which defaults to 'sys.argv[1:]'
- -- see 'setup()' in core.py). This list is first processed for
- "global options" -- options that set attributes of the Distribution
- instance. Then, it is alternately scanned for Distutils commands
- and options for that command. Each new command terminates the
- options for the previous command. The allowed options for a
- command are determined by the 'user_options' attribute of the
- command class -- thus, we have to be able to load command classes
- in order to parse the command line. Any error in that 'options'
- attribute raises DistutilsGetoptError; any error on the
- command-line raises DistutilsArgError. If no Distutils commands
- were found on the command line, raises DistutilsArgError. Return
- true if command-line was successfully parsed and we should carry
- on with executing commands; false if no errors but we shouldn't
- execute commands (currently, this only happens if user asks for
- help).
- """
- #
- # We now have enough information to show the Macintosh dialog
- # that allows the user to interactively specify the "command line".
- #
- toplevel_options = self._get_toplevel_options()
-
- # We have to parse the command line a bit at a time -- global
- # options, then the first command, then its options, and so on --
- # because each command will be handled by a different class, and
- # the options that are valid for a particular class aren't known
- # until we have loaded the command class, which doesn't happen
- # until we know what the command is.
-
- self.commands = []
- parser = FancyGetopt(toplevel_options + self.display_options)
- parser.set_negative_aliases(self.negative_opt)
- parser.set_aliases({'licence': 'license'})
- args = parser.getopt(args=self.script_args, object=self)
- option_order = parser.get_option_order()
- log.set_verbosity(self.verbose)
-
- # for display options we return immediately
- if self.handle_display_options(option_order):
- return
- while args:
- args = self._parse_command_opts(parser, args)
- if args is None: # user asked for help (and got it)
- return
-
- # Handle the cases of --help as a "global" option, ie.
- # "setup.py --help" and "setup.py --help command ...". For the
- # former, we show global options (--verbose, --dry-run, etc.)
- # and display-only options (--name, --version, etc.); for the
- # latter, we omit the display-only options and show help for
- # each command listed on the command line.
- if self.help:
- self._show_help(
- parser, display_options=len(self.commands) == 0, commands=self.commands
- )
- return
-
- # Oops, no commands found -- an end-user error
- if not self.commands:
- raise DistutilsArgError("no commands supplied")
-
- # All is well: return true
- return True
-
- def _get_toplevel_options(self):
- """Return the non-display options recognized at the top level.
-
- This includes options that are recognized *only* at the top
- level as well as options recognized for commands.
- """
- return self.global_options + [
- (
- "command-packages=",
- None,
- "list of packages that provide distutils commands",
- ),
- ]
-
- def _parse_command_opts(self, parser, args): # noqa: C901
- """Parse the command-line options for a single command.
- 'parser' must be a FancyGetopt instance; 'args' must be the list
- of arguments, starting with the current command (whose options
- we are about to parse). Returns a new version of 'args' with
- the next command at the front of the list; will be the empty
- list if there are no more commands on the command line. Returns
- None if the user asked for help on this command.
- """
- # late import because of mutual dependence between these modules
- from distutils.cmd import Command
-
- # Pull the current command from the head of the command line
- command = args[0]
- if not command_re.match(command):
- raise SystemExit("invalid command name '%s'" % command)
- self.commands.append(command)
-
- # Dig up the command class that implements this command, so we
- # 1) know that it's a valid command, and 2) know which options
- # it takes.
- try:
- cmd_class = self.get_command_class(command)
- except DistutilsModuleError as msg:
- raise DistutilsArgError(msg)
-
- # Require that the command class be derived from Command -- want
- # to be sure that the basic "command" interface is implemented.
- if not issubclass(cmd_class, Command):
- raise DistutilsClassError(
- "command class %s must subclass Command" % cmd_class
- )
-
- # Also make sure that the command object provides a list of its
- # known options.
- if not (
- hasattr(cmd_class, 'user_options')
- and isinstance(cmd_class.user_options, list)
- ):
- msg = (
- "command class %s must provide "
- "'user_options' attribute (a list of tuples)"
- )
- raise DistutilsClassError(msg % cmd_class)
-
- # If the command class has a list of negative alias options,
- # merge it in with the global negative aliases.
- negative_opt = self.negative_opt
- if hasattr(cmd_class, 'negative_opt'):
- negative_opt = negative_opt.copy()
- negative_opt.update(cmd_class.negative_opt)
-
- # Check for help_options in command class. They have a different
- # format (tuple of four) so we need to preprocess them here.
- if hasattr(cmd_class, 'help_options') and isinstance(
- cmd_class.help_options, list
- ):
- help_options = fix_help_options(cmd_class.help_options)
- else:
- help_options = []
-
- # All commands support the global options too, just by adding
- # in 'global_options'.
- parser.set_option_table(
- self.global_options + cmd_class.user_options + help_options
- )
- parser.set_negative_aliases(negative_opt)
- (args, opts) = parser.getopt(args[1:])
- if hasattr(opts, 'help') and opts.help:
- self._show_help(parser, display_options=0, commands=[cmd_class])
- return
-
- if hasattr(cmd_class, 'help_options') and isinstance(
- cmd_class.help_options, list
- ):
- help_option_found = 0
- for (help_option, short, desc, func) in cmd_class.help_options:
- if hasattr(opts, parser.get_attr_name(help_option)):
- help_option_found = 1
- if callable(func):
- func()
- else:
- raise DistutilsClassError(
- "invalid help function %r for help option '%s': "
- "must be a callable object (function, etc.)"
- % (func, help_option)
- )
-
- if help_option_found:
- return
-
- # Put the options from the command-line into their official
- # holding pen, the 'command_options' dictionary.
- opt_dict = self.get_option_dict(command)
- for (name, value) in vars(opts).items():
- opt_dict[name] = ("command line", value)
-
- return args
-
- def finalize_options(self):
- """Set final values for all the options on the Distribution
- instance, analogous to the .finalize_options() method of Command
- objects.
- """
- for attr in ('keywords', 'platforms'):
- value = getattr(self.metadata, attr)
- if value is None:
- continue
- if isinstance(value, str):
- value = [elm.strip() for elm in value.split(',')]
- setattr(self.metadata, attr, value)
-
- def _show_help(self, parser, global_options=1, display_options=1, commands=[]):
- """Show help for the setup script command-line in the form of
- several lists of command-line options. 'parser' should be a
- FancyGetopt instance; do not expect it to be returned in the
- same state, as its option table will be reset to make it
- generate the correct help text.
-
- If 'global_options' is true, lists the global options:
- --verbose, --dry-run, etc. If 'display_options' is true, lists
- the "display-only" options: --name, --version, etc. Finally,
- lists per-command help for every command name or command class
- in 'commands'.
- """
- # late import because of mutual dependence between these modules
- from distutils.core import gen_usage
- from distutils.cmd import Command
-
- if global_options:
- if display_options:
- options = self._get_toplevel_options()
- else:
- options = self.global_options
- parser.set_option_table(options)
- parser.print_help(self.common_usage + "\nGlobal options:")
- print('')
-
- if display_options:
- parser.set_option_table(self.display_options)
- parser.print_help(
- "Information display options (just display "
- + "information, ignore any commands)"
- )
- print('')
-
- for command in self.commands:
- if isinstance(command, type) and issubclass(command, Command):
- klass = command
- else:
- klass = self.get_command_class(command)
- if hasattr(klass, 'help_options') and isinstance(klass.help_options, list):
- parser.set_option_table(
- klass.user_options + fix_help_options(klass.help_options)
- )
- else:
- parser.set_option_table(klass.user_options)
- parser.print_help("Options for '%s' command:" % klass.__name__)
- print('')
-
- print(gen_usage(self.script_name))
-
- def handle_display_options(self, option_order):
- """If there were any non-global "display-only" options
- (--help-commands or the metadata display options) on the command
- line, display the requested info and return true; else return
- false.
- """
- from distutils.core import gen_usage
-
- # User just wants a list of commands -- we'll print it out and stop
- # processing now (ie. if they ran "setup --help-commands foo bar",
- # we ignore "foo bar").
- if self.help_commands:
- self.print_commands()
- print('')
- print(gen_usage(self.script_name))
- return 1
-
- # If user supplied any of the "display metadata" options, then
- # display that metadata in the order in which the user supplied the
- # metadata options.
- any_display_options = 0
- is_display_option = {}
- for option in self.display_options:
- is_display_option[option[0]] = 1
-
- for (opt, val) in option_order:
- if val and is_display_option.get(opt):
- opt = translate_longopt(opt)
- value = getattr(self.metadata, "get_" + opt)()
- if opt in ['keywords', 'platforms']:
- print(','.join(value))
- elif opt in ('classifiers', 'provides', 'requires', 'obsoletes'):
- print('\n'.join(value))
- else:
- print(value)
- any_display_options = 1
-
- return any_display_options
-
- def print_command_list(self, commands, header, max_length):
- """Print a subset of the list of all commands -- used by
- 'print_commands()'.
- """
- print(header + ":")
-
- for cmd in commands:
- klass = self.cmdclass.get(cmd)
- if not klass:
- klass = self.get_command_class(cmd)
- try:
- description = klass.description
- except AttributeError:
- description = "(no description available)"
-
- print(" %-*s %s" % (max_length, cmd, description))
-
- def print_commands(self):
- """Print out a help message listing all available commands with a
- description of each. The list is divided into "standard commands"
- (listed in distutils.command.__all__) and "extra commands"
- (mentioned in self.cmdclass, but not a standard command). The
- descriptions come from the command class attribute
- 'description'.
- """
- import distutils.command
-
- std_commands = distutils.command.__all__
- is_std = {}
- for cmd in std_commands:
- is_std[cmd] = 1
-
- extra_commands = []
- for cmd in self.cmdclass.keys():
- if not is_std.get(cmd):
- extra_commands.append(cmd)
-
- max_length = 0
- for cmd in std_commands + extra_commands:
- if len(cmd) > max_length:
- max_length = len(cmd)
-
- self.print_command_list(std_commands, "Standard commands", max_length)
- if extra_commands:
- print()
- self.print_command_list(extra_commands, "Extra commands", max_length)
-
- def get_command_list(self):
- """Get a list of (command, description) tuples.
- The list is divided into "standard commands" (listed in
- distutils.command.__all__) and "extra commands" (mentioned in
- self.cmdclass, but not a standard command). The descriptions come
- from the command class attribute 'description'.
- """
- # Currently this is only used on Mac OS, for the Mac-only GUI
- # Distutils interface (by Jack Jansen)
- import distutils.command
-
- std_commands = distutils.command.__all__
- is_std = {}
- for cmd in std_commands:
- is_std[cmd] = 1
-
- extra_commands = []
- for cmd in self.cmdclass.keys():
- if not is_std.get(cmd):
- extra_commands.append(cmd)
-
- rv = []
- for cmd in std_commands + extra_commands:
- klass = self.cmdclass.get(cmd)
- if not klass:
- klass = self.get_command_class(cmd)
- try:
- description = klass.description
- except AttributeError:
- description = "(no description available)"
- rv.append((cmd, description))
- return rv
-
- # -- Command class/object methods ----------------------------------
-
- def get_command_packages(self):
- """Return a list of packages from which commands are loaded."""
- pkgs = self.command_packages
- if not isinstance(pkgs, list):
- if pkgs is None:
- pkgs = ''
- pkgs = [pkg.strip() for pkg in pkgs.split(',') if pkg != '']
- if "distutils.command" not in pkgs:
- pkgs.insert(0, "distutils.command")
- self.command_packages = pkgs
- return pkgs
-
- def get_command_class(self, command):
- """Return the class that implements the Distutils command named by
- 'command'. First we check the 'cmdclass' dictionary; if the
- command is mentioned there, we fetch the class object from the
- dictionary and return it. Otherwise we load the command module
- ("distutils.command." + command) and fetch the command class from
- the module. The loaded class is also stored in 'cmdclass'
- to speed future calls to 'get_command_class()'.
-
- Raises DistutilsModuleError if the expected module could not be
- found, or if that module does not define the expected class.
- """
- klass = self.cmdclass.get(command)
- if klass:
- return klass
-
- for pkgname in self.get_command_packages():
- module_name = "{}.{}".format(pkgname, command)
- klass_name = command
-
- try:
- __import__(module_name)
- module = sys.modules[module_name]
- except ImportError:
- continue
-
- try:
- klass = getattr(module, klass_name)
- except AttributeError:
- raise DistutilsModuleError(
- "invalid command '%s' (no class '%s' in module '%s')"
- % (command, klass_name, module_name)
- )
-
- self.cmdclass[command] = klass
- return klass
-
- raise DistutilsModuleError("invalid command '%s'" % command)
-
- def get_command_obj(self, command, create=1):
- """Return the command object for 'command'. Normally this object
- is cached on a previous call to 'get_command_obj()'; if no command
- object for 'command' is in the cache, then we either create and
- return it (if 'create' is true) or return None.
- """
- cmd_obj = self.command_obj.get(command)
- if not cmd_obj and create:
- if DEBUG:
- self.announce(
- "Distribution.get_command_obj(): "
- "creating '%s' command object" % command
- )
-
- klass = self.get_command_class(command)
- cmd_obj = self.command_obj[command] = klass(self)
- self.have_run[command] = 0
-
- # Set any options that were supplied in config files
- # or on the command line. (NB. support for error
- # reporting is lame here: any errors aren't reported
- # until 'finalize_options()' is called, which means
- # we won't report the source of the error.)
- options = self.command_options.get(command)
- if options:
- self._set_command_options(cmd_obj, options)
-
- return cmd_obj
-
- def _set_command_options(self, command_obj, option_dict=None): # noqa: C901
- """Set the options for 'command_obj' from 'option_dict'. Basically
- this means copying elements of a dictionary ('option_dict') to
- attributes of an instance ('command').
-
- 'command_obj' must be a Command instance. If 'option_dict' is not
- supplied, uses the standard option dictionary for this command
- (from 'self.command_options').
- """
- command_name = command_obj.get_command_name()
- if option_dict is None:
- option_dict = self.get_option_dict(command_name)
-
- if DEBUG:
- self.announce(" setting options for '%s' command:" % command_name)
- for (option, (source, value)) in option_dict.items():
- if DEBUG:
- self.announce(" {} = {} (from {})".format(option, value, source))
- try:
- bool_opts = [translate_longopt(o) for o in command_obj.boolean_options]
- except AttributeError:
- bool_opts = []
- try:
- neg_opt = command_obj.negative_opt
- except AttributeError:
- neg_opt = {}
-
- try:
- is_string = isinstance(value, str)
- if option in neg_opt and is_string:
- setattr(command_obj, neg_opt[option], not strtobool(value))
- elif option in bool_opts and is_string:
- setattr(command_obj, option, strtobool(value))
- elif hasattr(command_obj, option):
- setattr(command_obj, option, value)
- else:
- raise DistutilsOptionError(
- "error in %s: command '%s' has no such option '%s'"
- % (source, command_name, option)
- )
- except ValueError as msg:
- raise DistutilsOptionError(msg)
-
- def reinitialize_command(self, command, reinit_subcommands=0):
- """Reinitializes a command to the state it was in when first
- returned by 'get_command_obj()': ie., initialized but not yet
- finalized. This provides the opportunity to sneak option
- values in programmatically, overriding or supplementing
- user-supplied values from the config files and command line.
- You'll have to re-finalize the command object (by calling
- 'finalize_options()' or 'ensure_finalized()') before using it for
- real.
-
- 'command' should be a command name (string) or command object. If
- 'reinit_subcommands' is true, also reinitializes the command's
- sub-commands, as declared by the 'sub_commands' class attribute (if
- it has one). See the "install" command for an example. Only
- reinitializes the sub-commands that actually matter, ie. those
- whose test predicates return true.
-
- Returns the reinitialized command object.
- """
- from distutils.cmd import Command
-
- if not isinstance(command, Command):
- command_name = command
- command = self.get_command_obj(command_name)
- else:
- command_name = command.get_command_name()
-
- if not command.finalized:
- return command
- command.initialize_options()
- command.finalized = 0
- self.have_run[command_name] = 0
- self._set_command_options(command)
-
- if reinit_subcommands:
- for sub in command.get_sub_commands():
- self.reinitialize_command(sub, reinit_subcommands)
-
- return command
-
- # -- Methods that operate on the Distribution ----------------------
-
- def announce(self, msg, level=log.INFO):
- log.log(level, msg)
-
- def run_commands(self):
- """Run each command that was seen on the setup script command line.
- Uses the list of commands found and cache of command objects
- created by 'get_command_obj()'.
- """
- for cmd in self.commands:
- self.run_command(cmd)
-
- # -- Methods that operate on its Commands --------------------------
-
- def run_command(self, command):
- """Do whatever it takes to run a command (including nothing at all,
- if the command has already been run). Specifically: if we have
- already created and run the command named by 'command', return
- silently without doing anything. If the command named by 'command'
- doesn't even have a command object yet, create one. Then invoke
- 'run()' on that command object (or an existing one).
- """
- # Already been here, done that? then return silently.
- if self.have_run.get(command):
- return
-
- log.info("running %s", command)
- cmd_obj = self.get_command_obj(command)
- cmd_obj.ensure_finalized()
- cmd_obj.run()
- self.have_run[command] = 1
-
- # -- Distribution query methods ------------------------------------
-
- def has_pure_modules(self):
- return len(self.packages or self.py_modules or []) > 0
-
- def has_ext_modules(self):
- return self.ext_modules and len(self.ext_modules) > 0
-
- def has_c_libraries(self):
- return self.libraries and len(self.libraries) > 0
-
- def has_modules(self):
- return self.has_pure_modules() or self.has_ext_modules()
-
- def has_headers(self):
- return self.headers and len(self.headers) > 0
-
- def has_scripts(self):
- return self.scripts and len(self.scripts) > 0
-
- def has_data_files(self):
- return self.data_files and len(self.data_files) > 0
-
- def is_pure(self):
- return (
- self.has_pure_modules()
- and not self.has_ext_modules()
- and not self.has_c_libraries()
- )
-
- # -- Metadata query methods ----------------------------------------
-
- # If you're looking for 'get_name()', 'get_version()', and so forth,
- # they are defined in a sneaky way: the constructor binds self.get_XXX
- # to self.metadata.get_XXX. The actual code is in the
- # DistributionMetadata class, below.
-
-
-class DistributionMetadata:
- """Dummy class to hold the distribution meta-data: name, version,
- author, and so forth.
- """
-
- _METHOD_BASENAMES = (
- "name",
- "version",
- "author",
- "author_email",
- "maintainer",
- "maintainer_email",
- "url",
- "license",
- "description",
- "long_description",
- "keywords",
- "platforms",
- "fullname",
- "contact",
- "contact_email",
- "classifiers",
- "download_url",
- # PEP 314
- "provides",
- "requires",
- "obsoletes",
- )
-
- def __init__(self, path=None):
- if path is not None:
- self.read_pkg_file(open(path))
- else:
- self.name = None
- self.version = None
- self.author = None
- self.author_email = None
- self.maintainer = None
- self.maintainer_email = None
- self.url = None
- self.license = None
- self.description = None
- self.long_description = None
- self.keywords = None
- self.platforms = None
- self.classifiers = None
- self.download_url = None
- # PEP 314
- self.provides = None
- self.requires = None
- self.obsoletes = None
-
- def read_pkg_file(self, file):
- """Reads the metadata values from a file object."""
- msg = message_from_file(file)
-
- def _read_field(name):
- value = msg[name]
- if value and value != "UNKNOWN":
- return value
-
- def _read_list(name):
- values = msg.get_all(name, None)
- if values == []:
- return None
- return values
-
- metadata_version = msg['metadata-version']
- self.name = _read_field('name')
- self.version = _read_field('version')
- self.description = _read_field('summary')
- # we are filling author only.
- self.author = _read_field('author')
- self.maintainer = None
- self.author_email = _read_field('author-email')
- self.maintainer_email = None
- self.url = _read_field('home-page')
- self.license = _read_field('license')
-
- if 'download-url' in msg:
- self.download_url = _read_field('download-url')
- else:
- self.download_url = None
-
- self.long_description = _read_field('description')
- self.description = _read_field('summary')
-
- if 'keywords' in msg:
- self.keywords = _read_field('keywords').split(',')
-
- self.platforms = _read_list('platform')
- self.classifiers = _read_list('classifier')
-
- # PEP 314 - these fields only exist in 1.1
- if metadata_version == '1.1':
- self.requires = _read_list('requires')
- self.provides = _read_list('provides')
- self.obsoletes = _read_list('obsoletes')
- else:
- self.requires = None
- self.provides = None
- self.obsoletes = None
-
- def write_pkg_info(self, base_dir):
- """Write the PKG-INFO file into the release tree."""
- with open(
- os.path.join(base_dir, 'PKG-INFO'), 'w', encoding='UTF-8'
- ) as pkg_info:
- self.write_pkg_file(pkg_info)
-
- def write_pkg_file(self, file):
- """Write the PKG-INFO format data to a file object."""
- version = '1.0'
- if (
- self.provides
- or self.requires
- or self.obsoletes
- or self.classifiers
- or self.download_url
- ):
- version = '1.1'
-
- # required fields
- file.write('Metadata-Version: %s\n' % version)
- file.write('Name: %s\n' % self.get_name())
- file.write('Version: %s\n' % self.get_version())
-
- def maybe_write(header, val):
- if val:
- file.write(f"{header}: {val}\n")
-
- # optional fields
- maybe_write("Summary", self.get_description())
- maybe_write("Home-page", self.get_url())
- maybe_write("Author", self.get_contact())
- maybe_write("Author-email", self.get_contact_email())
- maybe_write("License", self.get_license())
- maybe_write("Download-URL", self.download_url)
- maybe_write("Description", rfc822_escape(self.get_long_description() or ""))
- maybe_write("Keywords", ",".join(self.get_keywords()))
-
- self._write_list(file, 'Platform', self.get_platforms())
- self._write_list(file, 'Classifier', self.get_classifiers())
-
- # PEP 314
- self._write_list(file, 'Requires', self.get_requires())
- self._write_list(file, 'Provides', self.get_provides())
- self._write_list(file, 'Obsoletes', self.get_obsoletes())
-
- def _write_list(self, file, name, values):
- values = values or []
- for value in values:
- file.write('{}: {}\n'.format(name, value))
-
- # -- Metadata query methods ----------------------------------------
-
- def get_name(self):
- return self.name or "UNKNOWN"
-
- def get_version(self):
- return self.version or "0.0.0"
-
- def get_fullname(self):
- return "{}-{}".format(self.get_name(), self.get_version())
-
- def get_author(self):
- return self.author
-
- def get_author_email(self):
- return self.author_email
-
- def get_maintainer(self):
- return self.maintainer
-
- def get_maintainer_email(self):
- return self.maintainer_email
-
- def get_contact(self):
- return self.maintainer or self.author
-
- def get_contact_email(self):
- return self.maintainer_email or self.author_email
-
- def get_url(self):
- return self.url
-
- def get_license(self):
- return self.license
-
- get_licence = get_license
-
- def get_description(self):
- return self.description
-
- def get_long_description(self):
- return self.long_description
-
- def get_keywords(self):
- return self.keywords or []
-
- def set_keywords(self, value):
- self.keywords = _ensure_list(value, 'keywords')
-
- def get_platforms(self):
- return self.platforms
-
- def set_platforms(self, value):
- self.platforms = _ensure_list(value, 'platforms')
-
- def get_classifiers(self):
- return self.classifiers or []
-
- def set_classifiers(self, value):
- self.classifiers = _ensure_list(value, 'classifiers')
-
- def get_download_url(self):
- return self.download_url
-
- # PEP 314
- def get_requires(self):
- return self.requires or []
-
- def set_requires(self, value):
- import distutils.versionpredicate
-
- for v in value:
- distutils.versionpredicate.VersionPredicate(v)
- self.requires = list(value)
-
- def get_provides(self):
- return self.provides or []
-
- def set_provides(self, value):
- value = [v.strip() for v in value]
- for v in value:
- import distutils.versionpredicate
-
- distutils.versionpredicate.split_provision(v)
- self.provides = value
-
- def get_obsoletes(self):
- return self.obsoletes or []
-
- def set_obsoletes(self, value):
- import distutils.versionpredicate
-
- for v in value:
- distutils.versionpredicate.VersionPredicate(v)
- self.obsoletes = list(value)
-
-
-def fix_help_options(options):
- """Convert a 4-tuple 'help_options' list as found in various command
- classes to the 3-tuple form required by FancyGetopt.
- """
- new_options = []
- for help_tuple in options:
- new_options.append(help_tuple[0:3])
- return new_options
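
The class docstring at the top of the deleted dist.py describes the one supported customization point: subclass Distribution and hand the subclass to setup() via the 'distclass' keyword. A minimal sketch of that pattern, with hypothetical package and option names:

from distutils.core import setup
from distutils.dist import Distribution


class MyDistribution(Distribution):
    """Distribution subclass that accepts one extra setup() keyword."""

    def __init__(self, attrs=None):
        # Pre-declare the custom attribute so Distribution.__init__ treats
        # 'changelog=' as a known option instead of warning about it.
        self.changelog = None
        super().__init__(attrs)


setup(
    name="example-pkg",  # hypothetical project name
    version="0.1",
    distclass=MyDistribution,  # tell setup() to instantiate this subclass
    changelog="see CHANGES.txt",
)
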
diff --git a/spaces/Awesimo/jojogan/op/fused_bias_act.cpp b/spaces/Awesimo/jojogan/op/fused_bias_act.cpp
deleted file mode 100644
index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000
--- a/spaces/Awesimo/jojogan/op/fused_bias_act.cpp
+++ /dev/null
@@ -1,21 +0,0 @@
-#include <torch/extension.h>
-
-
-torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer,
- int act, int grad, float alpha, float scale);
-
-#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
-
-torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer,
- int act, int grad, float alpha, float scale) {
- CHECK_CUDA(input);
- CHECK_CUDA(bias);
-
- return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)");
-}
\ No newline at end of file
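
The C++ file above only declares and binds the op; the CUDA kernel lives in a companion .cu file. A rough sketch of how such an extension is typically JIT-compiled and called from Python — the kernel file name, the act/grad codes, and the 2 ** 0.5 scale follow the common StyleGAN2 convention and are assumptions, not taken from this repo:

import torch
from torch.utils.cpp_extension import load

# JIT-compile the binding together with its (assumed) CUDA kernel source.
fused = load(name="fused", sources=["fused_bias_act.cpp", "fused_bias_act_kernel.cu"])

x = torch.randn(1, 8, 4, 4, device="cuda")
bias = torch.zeros(8, device="cuda")
empty = x.new_empty(0)  # the 'refer' tensor is unused in the forward pass

# act=3 selects leaky ReLU, grad=0 requests the forward pass, alpha is the slope.
out = fused.fused_bias_act(x, bias, empty, 3, 0, 0.2, 2 ** 0.5)
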
diff --git a/spaces/Banbri/zcvzcv/src/components/ui/alert.tsx b/spaces/Banbri/zcvzcv/src/components/ui/alert.tsx
deleted file mode 100644
index f589783193a6cfe14032a77b89055cb3e920fe8c..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/components/ui/alert.tsx
+++ /dev/null
@@ -1,59 +0,0 @@
-import * as React from "react"
-import { cva, type VariantProps } from "class-variance-authority"
-
-import { cn } from "@/lib/utils"
-
-const alertVariants = cva(
- "relative w-full rounded-lg border border-stone-200 p-4 [&:has(svg)]:pl-11 [&>svg+div]:translate-y-[-3px] [&>svg]:absolute [&>svg]:left-4 [&>svg]:top-4 [&>svg]:text-stone-950 dark:border-stone-800 dark:[&>svg]:text-stone-50",
- {
- variants: {
- variant: {
- default: "bg-white text-stone-950 dark:bg-stone-950 dark:text-stone-50",
- destructive:
- "border-red-500/50 text-red-500 dark:border-red-500 [&>svg]:text-red-500 dark:border-red-900/50 dark:text-red-900 dark:dark:border-red-900 dark:[&>svg]:text-red-900",
- },
- },
- defaultVariants: {
- variant: "default",
- },
- }
-)
-
-const Alert = React.forwardRef<
- HTMLDivElement,
- React.HTMLAttributes<HTMLDivElement> & VariantProps<typeof alertVariants>
->(({ className, variant, ...props }, ref) => (
- <div
- ref={ref}
- role="alert"
- className={cn(alertVariants({ variant }), className)}
- {...props}
- />
-))
-Alert.displayName = "Alert"
-
-const AlertTitle = React.forwardRef<
- HTMLParagraphElement,
- React.HTMLAttributes<HTMLHeadingElement>
->(({ className, ...props }, ref) => (
- <h5
- ref={ref}
- className={cn("mb-1 font-medium leading-none tracking-tight", className)}
- {...props}
- />
-))
-AlertTitle.displayName = "AlertTitle"
-
-const AlertDescription = React.forwardRef<
- HTMLParagraphElement,
- React.HTMLAttributes<HTMLParagraphElement>
->(({ className, ...props }, ref) => (
- <div
- ref={ref}
- className={cn("text-sm [&_p]:leading-relaxed", className)}
- {...props}
- />
-))
-AlertDescription.displayName = "AlertDescription"
-
-export { Alert, AlertTitle, AlertDescription }
diff --git a/spaces/Benson/text-generation/Examples/Android Mini Block Craft Apk.md b/spaces/Benson/text-generation/Examples/Android Mini Block Craft Apk.md
deleted file mode 100644
index aa377c55852e99fc2902e0defe311f38d3279c39..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Android Mini Block Craft Apk.md
+++ /dev/null
@@ -1,90 +0,0 @@
-
-
Mini bloque de arte APK: Un juego de caja de arena para Android
-
Si usted está buscando un juego de sandbox divertido y creativo para su dispositivo Android, es posible que desee echa un vistazo a Mini Block Craft APK. Este juego te permite crear, explorar y sobrevivir en un mundo abierto estilo pixel. Puedes construir tu propia casa, luchar contra monstruos o simplemente disfrutar del paisaje. En este artículo, le diremos todo lo que necesita saber sobre Mini Block Craft APK, incluyendo sus características, cómo descargarlo e instalarlo, sus pros y contras, y algunas alternativas.
-
¿Qué es Mini Block Craft APK?
-
Mini Block Craft APK es un juego para Android desarrollado por Build Block Studio. Está inspirado en el popular juego Minecraft, pero tiene sus propias características únicas y estilo. El juego es gratis para descargar y jugar, pero contiene anuncios y compras en la aplicación. Puedes jugar el juego sin conexión o en línea con otros jugadores.
Mini Block Craft APK tiene muchas características que lo hacen un juego divertido y adictivo. Aquí están algunos de ellos:
-
Gráficos de estilo de píxeles
-
El juego tiene unos gráficos estilo pixel que le dan una sensación retro y nostálgica. El juego también tiene un ciclo de día y noche, efectos meteorológicos y sombras realistas. Los gráficos son simples pero coloridos y encantadores.
-
Modos creativos y de supervivencia
-
El juego tiene dos modos: creativo y supervivencia. En el modo creativo, tienes recursos ilimitados y puedes construir lo que quieras sin restricciones. También puede volar alrededor del mapa y explorar diferentes biomas. En el modo de supervivencia, tienes que reunir recursos, crear herramientas y armas, y luchar contra los enemigos. También tienes que controlar tu hambre, salud y resistencia.
-
Construye la casa de tus sueños o explora el mapa
-
-
Lucha contra monstruos y zombies
-
El juego no se trata solo de construir y explorar. También tiene un sistema de combate que te permite luchar contra varios enemigos, como monstruos, zombis, arañas, esqueletos y más. Puedes usar diferentes armas, como espadas, arcos, hachas y armas. También puedes crear armaduras y pociones para protegerte.
-
¿Cómo descargar e instalar Mini Block Craft APK?
-
Si desea jugar Mini bloque arte APK en su dispositivo Android, es necesario descargar e instalarlo primero. Estos son los pasos para hacer eso:
-
Requisitos para Mini bloque de arte APK
-
Antes de descargar e instalar el juego, asegúrese de que su dispositivo cumple con los siguientes requisitos:
-
-
-
Versión de Android 4.1 o superior
-
Al menos 28 MB de espacio de almacenamiento libre
-
Una conexión a Internet estable (opcional)
-
-
Pasos para descargar e instalar Mini Block Craft APK
-
Siga estos pasos para descargar e instalar el juego:
-
-
Ir a [este enlace]( 1 ) para descargar la última versión de Mini Block Craft APK.
-
Una vez completada la descarga, abra la aplicación de administrador de archivos en su dispositivo y localice el archivo descargado.
-
Toque en el archivo y permita la instalación desde fuentes desconocidas si se le solicita.
-
Siga las instrucciones en pantalla para completar el proceso de instalación.
-
Una vez que se hace la instalación, puede iniciar el juego desde el cajón de la aplicación o la pantalla de inicio.
-
-
Pros y contras de Mini Block Craft APK
-
Mini Block Craft APK es un juego divertido y creativo, pero también tiene algunos inconvenientes. Estos son algunos de los pros y los contras del juego:
-
Pros de Mini bloque de arte APK
-
-
El juego es gratis para descargar y jugar.
-
El juego tiene unos gráficos estilo píxel que son atractivos y nostálgicos.
-
El juego tiene modos creativos y de supervivencia que ofrecen diferentes desafíos y experiencias.
-
-
El juego tiene un modo multijugador en línea que te permite jugar con otros jugadores de todo el mundo.
-
-
Contras de Mini Block Craft APK
-
-
El juego contiene anuncios y compras en la aplicación que pueden ser molestos y caros.
-
El juego puede ser lento y con errores en algunos dispositivos.
-
El juego puede ser repetitivo y aburrido después de un tiempo.
-
El juego tiene un sonido y música de baja calidad.
-
El juego tiene una interfaz de usuario y controles pobres.
-
-
Alternativas a Mini bloque de arte APK
-
Si usted está buscando algunas alternativas a Mini Block Craft APK, puede probar estos juegos:
-
Edición de bolsillo de Minecraft
-
Minecraft Pocket Edition es la versión móvil oficial del famoso juego de sandbox Minecraft. Tiene las mismas características y jugabilidad que el juego original, pero está optimizado para dispositivos móviles. Puede crear, explorar y sobrevivir en un mundo generado al azar, o unirse a servidores en línea y jugar con otros jugadores. El juego cuesta $6.99 para descargar, pero ofrece actualizaciones regulares y nuevo contenido.
-
Roblox
-
Roblox es una plataforma de juego online multijugador masivo que te permite crear y jugar varios juegos. Puedes elegir entre millones de juegos creados por otros usuarios, o hacer los tuyos usando Roblox Studio. También puedes personalizar tu avatar, chatear con otros jugadores y unirte a grupos. El juego es gratis para descargar y jugar, pero tiene una moneda virtual llamada Robux que se puede usar para comprar artículos y acceder a funciones premium.
-
Terraria
-
Terraria es un juego de sandbox en 2D que combina elementos de acción, aventura y juegos de rol. Puedes explorar, construir, crear, luchar y sobrevivir en un mundo generado por procedimientos. También puedes jugar con hasta 7 amigos online o localmente. El juego cuesta $4.99 para descargar, pero tiene mucho contenido y actualizaciones.
-
Conclusión
-
-
Esperamos que este artículo le ayudó a aprender más acerca de Mini Block Craft APK. Si usted tiene alguna pregunta o retroalimentación, no dude en dejar un comentario a continuación. ¡Gracias por leer!
-
Preguntas frecuentes
-
-
¿Es seguro descargar Mini Block Craft APK?
-
Sí, Mini Block Craft APK es seguro de descargar desde [este enlace]. Sin embargo, siempre debe tener cuidado al descargar archivos de fuentes desconocidas y escanearlos en busca de virus antes de instalarlos.
-
¿Puedo jugar Mini Block Craft APK en PC?
-
No, Mini Block Craft APK solo está disponible para dispositivos Android. Sin embargo, puede utilizar un emulador de Android en su PC para ejecutar el juego. Algunos emuladores de Android populares son BlueStacks, NoxPlayer y LDPlayer.
-
¿Cómo actualizo Mini Block Craft APK?
-
Para actualizar Mini Block Craft APK, es necesario descargar la última versión del juego desde [este enlace] e instalarlo sobre el existente. Alternativamente, puede habilitar actualizaciones automáticas en la configuración de su dispositivo o usar una herramienta de actualización de aplicaciones.
-
¿Cómo puedo desinstalar Mini Block Craft APK?
-
Para desinstalar Mini Block Craft APK, es necesario ir a la configuración de su dispositivo > aplicaciones > Mini Block Craft > desinstalar. Alternativamente, puedes arrastrar
el icono de la aplicación a la papelera en la pantalla de inicio o en el cajón de la aplicación. También es posible que tenga que eliminar el archivo APK de su aplicación de administrador de archivos.
-
¿Cuáles son algunos consejos y trucos para jugar Mini Block Craft APK?
-
Aquí hay algunos consejos y trucos para jugar Mini Block Craft APK:
-
-
Utilice el mapa y la brújula para navegar por el mundo y encontrar su hogar.
-
Recoge tantos recursos como puedas y guárdalos en cofres.
-
Crea una cama y duerme por la noche para evitar monstruos y zombis.
-
Utilice antorchas y linternas para iluminar su casa y sus alrededores.
-
Utiliza cercas y puertas para proteger tu casa de los enemigos.
-
Usa diferentes herramientas y armas para diferentes tareas y enemigos.
-
-
Explora el mapa y encuentra tesoros y secretos ocultos.
-
Juega en línea con otros jugadores y chatea con ellos usando la función de chat.
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/Eddycrack864/Applio-Inference/infer/modules/train/extract/extract_f0_rmvpe_dml.py b/spaces/Eddycrack864/Applio-Inference/infer/modules/train/extract/extract_f0_rmvpe_dml.py
deleted file mode 100644
index 6abb1898550664ca600cebbb6d37ba0de8a3d312..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/infer/modules/train/extract/extract_f0_rmvpe_dml.py
+++ /dev/null
@@ -1,139 +0,0 @@
-import os
-import sys
-import traceback
-
-import parselmouth
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-import logging
-
-import numpy as np
-import pyworld
-
-from infer.lib.audio import load_audio
-
-logging.getLogger("numba").setLevel(logging.WARNING)
-
-exp_dir = sys.argv[1]
-import torch_directml
-
-device = torch_directml.device(torch_directml.default_device())
-f = open("%s/extract_f0_feature.log" % exp_dir, "a+")
-
-
-def printt(strr):
- print(strr)
- f.write("%s\n" % strr)
- f.flush()
-
-
-class FeatureInput(object):
- def __init__(self, samplerate=16000, hop_size=160):
- self.fs = samplerate
- self.hop = hop_size
-
- self.f0_bin = 256
- self.f0_max = 1100.0
- self.f0_min = 50.0
- self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
- self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
-
- def compute_f0(self, path, f0_method):
- x = load_audio(path, self.fs)
- # p_len = x.shape[0] // self.hop
- if f0_method == "rmvpe":
- if hasattr(self, "model_rmvpe") == False:
- from infer.lib.rmvpe import RMVPE
-
- print("Loading rmvpe model")
- self.model_rmvpe = RMVPE(
- "assets/rmvpe/rmvpe.pt", is_half=False, device=device
- )
- f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
- return f0
-
- def coarse_f0(self, f0):
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * (
- self.f0_bin - 2
- ) / (self.f0_mel_max - self.f0_mel_min) + 1
-
- # use 0 or 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1
- f0_coarse = np.rint(f0_mel).astype(int)
- assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (
- f0_coarse.max(),
- f0_coarse.min(),
- )
- return f0_coarse
-
- def go(self, paths, f0_method):
- if len(paths) == 0:
- printt("no-f0-todo")
- else:
- printt("todo-f0-%s" % len(paths))
- n = max(len(paths) // 5, 1)  # each process prints at most 5 progress lines
- for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths):
- try:
- if idx % n == 0:
- printt("f0ing,now-%s,all-%s,-%s" % (idx, len(paths), inp_path))
- if (
- os.path.exists(opt_path1 + ".npy") == True
- and os.path.exists(opt_path2 + ".npy") == True
- ):
- continue
- featur_pit = self.compute_f0(inp_path, f0_method)
- np.save(
- opt_path2,
- featur_pit,
- allow_pickle=False,
- ) # nsf
- coarse_pit = self.coarse_f0(featur_pit)
- np.save(
- opt_path1,
- coarse_pit,
- allow_pickle=False,
- ) # ori
- except:
- printt("f0fail-%s-%s-%s" % (idx, inp_path, traceback.format_exc()))
-
-
-if __name__ == "__main__":
- # exp_dir=r"E:\codes\py39\dataset\mi-test"
- # n_p=16
- # f = open("%s/log_extract_f0.log"%exp_dir, "w")
- printt(sys.argv)
- featureInput = FeatureInput()
- paths = []
- inp_root = "%s/1_16k_wavs" % (exp_dir)
- opt_root1 = "%s/2a_f0" % (exp_dir)
- opt_root2 = "%s/2b-f0nsf" % (exp_dir)
-
- os.makedirs(opt_root1, exist_ok=True)
- os.makedirs(opt_root2, exist_ok=True)
- for name in sorted(list(os.listdir(inp_root))):
- inp_path = "%s/%s" % (inp_root, name)
- if "spec" in inp_path:
- continue
- opt_path1 = "%s/%s" % (opt_root1, name)
- opt_path2 = "%s/%s" % (opt_root2, name)
- paths.append([inp_path, opt_path1, opt_path2])
- try:
- featureInput.go(paths, "rmvpe")
- except:
- printt("f0_all_fail-%s" % (traceback.format_exc()))
- # ps = []
- # for i in range(n_p):
- # p = Process(
- # target=featureInput.go,
- # args=(
- # paths[i::n_p],
- # f0method,
- # ),
- # )
- # ps.append(p)
- # p.start()
- # for i in range(n_p):
- # ps[i].join()
diff --git a/spaces/Ella2323/Positive-Reframing/test.py b/spaces/Ella2323/Positive-Reframing/test.py
deleted file mode 100644
index 3bcb4fd7311df1710faaa86282fe0cb310b0e570..0000000000000000000000000000000000000000
--- a/spaces/Ella2323/Positive-Reframing/test.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from transformers import pipeline
-from transformers import AutoModelForSeq2SeqLM
-from transformers import AutoTokenizer
-import argparse
-
-# Load trained model
-model = AutoModelForSeq2SeqLM.from_pretrained("output/reframer")
-tokenizer = AutoTokenizer.from_pretrained("output/reframer")
-reframer = pipeline('summarization', model=model, tokenizer=tokenizer)
-
-def get_args():
- """ args from input
- """
- parser = argparse.ArgumentParser(description='HSIC-Bottleneck research')
-
- parser.add_argument('-ipt', '--input', required=True,
- type=str, help='input path')
-
- args = parser.parse_args()
-
- return args
-
-def main():
-
- args = get_args()
-
- input_file = args.input
-
- with open(input_file, 'r') as file:
- data = file.read().rstrip()
- print(reframer(data)[0]['summary_text'])
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/EsoCode/text-generation-webui/app.py b/spaces/EsoCode/text-generation-webui/app.py
deleted file mode 100644
index f32f7063ebde3c8a88868518eacfebd0c2f6883a..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/app.py
+++ /dev/null
@@ -1,4 +0,0 @@
-import os
-os.system('python download-model.py PygmalionAI/pygmalion-350m --branch main')
-# os.system('python download-model.py waifu-workshop/pygmalion-6b --branch original-sharded')
-os.system('python server.py --cpu --chat --model PygmalionAI_pygmalion-350m --no-stream --auto-devices --settings settings.template.yml')
\ No newline at end of file
diff --git a/spaces/Exalt-company/text-to-video/app.py b/spaces/Exalt-company/text-to-video/app.py
deleted file mode 100644
index a82df332731f067826d3e1ef79fabceffb74d07e..0000000000000000000000000000000000000000
--- a/spaces/Exalt-company/text-to-video/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch()
\ No newline at end of file
diff --git a/spaces/FFusion/FFusionXL-SDXL-DEMO/app.py b/spaces/FFusion/FFusionXL-SDXL-DEMO/app.py
deleted file mode 100644
index ac60c3b53c4f10623d2e1aaaba8c3b61936ff294..0000000000000000000000000000000000000000
--- a/spaces/FFusion/FFusionXL-SDXL-DEMO/app.py
+++ /dev/null
@@ -1,162 +0,0 @@
-import gradio as gr
-import gradio.components as gc
-import torch
-import numpy as np
-from diffusers import DiffusionPipeline
-from huggingface_hub import login, HfApi, HfFolder
-from PIL import Image
-import os
-from datetime import datetime
-import shutil
-from PIL import ImageDraw, ImageFont
-
-# Get your Hugging Face API token
-folder = HfFolder()
-token = folder.get_token()
-
-# Instantiate the Hugging Face API
-api = HfApi()
-
-login(token=os.environ.get('HF_KEY'))
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-
-torch.cuda.max_memory_allocated(device=device)
-
-pipe1 = DiffusionPipeline.from_pretrained("FFusion/FFusionXL-BASE", torch_dtype=torch.float16, use_safetensors=True)
-pipe2 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True)
-
-pipe1 = pipe1.to(device)
-pipe1.enable_xformers_memory_efficient_attention()
-
-pipe2 = pipe2.to(device)
-pipe2.enable_xformers_memory_efficient_attention()
-
-
-
-
-
-def add_watermark(image_np):
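-    # Draw a two-line usage warning near the bottom-left corner of the image and
-    # return it as a numpy array again.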
- img = Image.fromarray(image_np.astype('uint8'))
- draw = ImageDraw.Draw(img)
- watermark_text_line1 = "WARNING: This image is generated for Research & Demonstration Purposes Only."
-    watermark_text_line2 = "Any misuse or inappropriate use may be subject to legal action."
- font = ImageFont.truetype("arial.ttf", size=12)
- position_line1 = (10, img.height - 80)
- position_line2 = (10, img.height - 60) # Adjust this value based on the font size and desired spacing
- color_line1 = "white"
- color_line2 = "black"
- draw.text(position_line1, watermark_text_line1, font=font, fill=color_line1)
- draw.text(position_line2, watermark_text_line2, font=font, fill=color_line2)
- return np.array(img)
-
-
-
-def save_image_to_hf_space(image_np, image_name):
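-    # Save the image locally under a timestamped name, upload it to the FFusion/FF2 repo
-    # on Hugging Face, and copy it to the Space's persistent /data storage.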
- # Name of your Hugging Face repo
- repo_name = "FFusion/FF2"
-
- # Convert the numpy array to an image
- image = Image.fromarray(image_np.astype('uint8'))
-
- # Append a timestamp to the filename
- timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
- image_name_with_timestamp = f"{image_name}-{timestamp}.png"
-
- # Save the image locally
- local_image_path = f"./{image_name_with_timestamp}"
- image.save(local_image_path)
-
- # Upload the image to your Hugging Face repo
- api.upload_file(
- token=token,
- path_or_fileobj=local_image_path,
- repo_id=repo_name,
- path_in_repo=image_name_with_timestamp # The path where the image will be stored in the repository
- )
-
- # Save the image to the persistent storage
- persistent_storage_path = f"/data/{image_name_with_timestamp}"
- shutil.copy(local_image_path, persistent_storage_path)
-
-
-
-
-def genie(prompt, negative_prompt, prompt_2, negative_prompt_2, scale, guidance_scale, aesthetic_score, negative_aesthetic_score, steps, seed):
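-    # Two-stage generation: pipe1 (FFusionXL-BASE) produces the initial image and
-    # pipe2 (the SDXL refiner) polishes it; both results are watermarked and archived.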
- torch.cuda.empty_cache()
- generator = torch.Generator(device=device).manual_seed(seed)
- int_images = pipe1(prompt=prompt, prompt_2=prompt_2, negative_prompt=negative_prompt, negative_prompt_2=negative_prompt_2, num_inference_steps=steps, guidance_scale=scale, num_images_per_prompt=1, generator=generator).images
- torch.cuda.empty_cache()
- refined_images = pipe2(prompt=prompt, prompt_2=prompt_2, image=int_images, guidance_scale=guidance_scale, aesthetic_score=aesthetic_score, negative_aesthetic_score=negative_aesthetic_score).images
- int_image_np = np.array(int_images[0])
- refined_image_np = np.array(refined_images[0])
-
- # Add watermark to the images
- int_image_np_with_watermark = add_watermark(int_image_np)
- refined_image_np_with_watermark = add_watermark(refined_image_np)
-
- # Save the generated images to Hugging Face Spaces
- save_image_to_hf_space(int_image_np_with_watermark, "int_image")
- save_image_to_hf_space(refined_image_np_with_watermark, "refined_image")
-
- return int_image_np_with_watermark, refined_image_np_with_watermark
-
-article = f"""
-
-
-
-
-
-Citation
-
-Please note that the demo is intended solely for academic and research purposes. This demo features the FFusionXL-BASE model developed by FFusion.AI, a division of Source Code Bulgaria Ltd.
-
-Acknowledgement of Original Work and Modifications
-
-This Software is based on the Stable Diffusion XL Base 1.0 model developed by Stability AI Ltd. FFusion AI and Source Code Bulgaria Ltd. have made modifications and enhancements to the original Software to create the FFusionXL-BASE model. While FFusion AI and Source Code Bulgaria Ltd. retain the rights to their modifications and enhancements, all rights to the original Software and associated intellectual property remain with Stability AI Ltd. Use of the FFusionXL-BASE model is subject to the terms of this License, as well as any terms and conditions set forth by Stability AI Ltd. for the use of the original Software.
-
-Attribution: SDXL 0.9 is licensed under the SDXL Research License, Copyright (c) Stability AI Ltd. All Rights Reserved.
-
-Warning and Compliance
-
-Any use of the demo for generating inappropriate or unlawful content is strictly prohibited, and any misuse will not be tolerated. Individuals found to be generating content that violates these terms or any applicable laws will be dealt with in accordance with legal regulations. Responsibility for any misuse or inappropriate use of the demo lies solely with the users who generated such content; neither this demo nor its affiliates shall be held liable for any such use.
-
-All images and content generated through this demo are logged in a Hugging Face repository, and we actively monitor for violations of these terms.
-"""
-
-# Create the Gradio interface
-gr.Interface(fn=genie, inputs=[
- gr.Textbox(label='🎨 Main Prompt', placeholder='Describe the Desired Image (77 Token Limit)', lines=2),
- gr.Textbox(label='❌ Negative Prompt', placeholder='Specify Unwanted Elements', lines=2),
- gr.Textbox(label='📝 Secondary Prompt for Text Encoder 2', placeholder='The prompt for tokenizer_2 and text_encoder_2 (Optional)', lines=1),
- gr.Textbox(label='❌ Negative Prompt for Text Encoder 2', placeholder='Negative guidance for text_encoder_2 (Optional)', lines=1),
- gr.Slider(3, 25, 10, label='🌊 DiFFusion Scale: Influence of Main Features'),
- gr.Slider(1, 10, 5, label='🧭 Guidance Scale: Intensity of Guidance'),
- gr.Slider(1, 10, 6, label='🎨 Aesthetic Score: Preference for Visual Appeal'),
- gr.Slider(1, 10, 2.5, label='🚫 Negative Aesthetic Score: Avoidance of Unwanted Aesthetics'),
- gr.Slider(10, maximum=80, value=50, step=1, label='💎 Number of Diffusion Steps'),
- gr.Slider(minimum=1, step=1, maximum=999999999999999999, randomize=True, label='🎲 Seed')],
-
- outputs=[gc.Image(type='numpy', label="FFusionXL Base Image"), gc.Image(type='numpy', label="Refined Image")],
- title="FFusionXL Base - Generate and Refine",
-    description='',
- article=article,
- css="""
- .gr-textbox {
- width: 100%;
- }
- .gr-image {
- max-width: 50%;
- display: inline-block;
- }
- """,
- allow_flagging='never'
-).launch(debug=True, max_threads=10)
\ No newline at end of file
diff --git a/spaces/Fernando22/freegpt-webui/client/css/message.css b/spaces/Fernando22/freegpt-webui/client/css/message.css
deleted file mode 100644
index 64e04147ee4d1e76dda4f39c4f756c9da63e3874..0000000000000000000000000000000000000000
--- a/spaces/Fernando22/freegpt-webui/client/css/message.css
+++ /dev/null
@@ -1,65 +0,0 @@
-.message {
- width: 100%;
- overflow-wrap: break-word;
- display: flex;
- gap: var(--section-gap);
- padding: var(--section-gap);
- padding-bottom: 0;
-}
-
-.message:last-child {
- animation: 0.6s show_message;
-}
-
-@keyframes show_message {
- from {
- transform: translateY(10px);
- opacity: 0;
- }
-}
-
-.message .avatar-container img {
- max-width: 48px;
- max-height: 48px;
- box-shadow: 0.4px 0.5px 0.7px -2px rgba(0, 0, 0, 0.08), 1.1px 1.3px 2px -2px rgba(0, 0, 0, 0.041),
- 2.7px 3px 4.8px -2px rgba(0, 0, 0, 0.029), 9px 10px 16px -2px rgba(0, 0, 0, 0.022);
-}
-
-.message .content {
- display: flex;
- flex-direction: column;
- width: 90%;
- gap: 18px;
-}
-
-.message .content p,
-.message .content li,
-.message .content code {
- font-size: 1rem;
- line-height: 1.3;
-}
-
-@media screen and (max-height: 720px) {
- .message {
- padding: 12px;
- gap: 0;
- }
-
- .message .content {
- margin-left: 8px;
- width: 80%;
- }
-
- .message .avatar-container img {
- max-width: 32px;
- max-height: 32px;
- }
-
- .message .content,
- .message .content p,
- .message .content li,
- .message .content code {
- font-size: 0.875rem;
- line-height: 1.3;
- }
-}
diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/vdecoder/__init__.py b/spaces/FrankZxShen/so-vits-svc-models-ba/vdecoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/FrankZxShen/vits-fast-finetuning-pcr/text/english.py b/spaces/FrankZxShen/vits-fast-finetuning-pcr/text/english.py
deleted file mode 100644
index 6817392ba8a9eb830351de89fb7afc5ad72f5e42..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/vits-fast-finetuning-pcr/text/english.py
+++ /dev/null
@@ -1,188 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-'''
-Cleaners are transformations that run over the input text at both training and eval time.
-
-Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
-hyperparameter. Some cleaners are English-specific. You'll typically want to use:
- 1. "english_cleaners" for English text
- 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
- the Unidecode library (https://pypi.python.org/pypi/Unidecode)
- 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
- the symbols in symbols.py to match your data).
-'''
-
-
-# Regular expression matching whitespace:
-
-
-import re
-import inflect
-from unidecode import unidecode
-import eng_to_ipa as ipa
-_inflect = inflect.engine()
-_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])')
-_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)')
-_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)')
-_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)')
-_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)')
-_number_re = re.compile(r'[0-9]+')
-
-# List of (regular expression, replacement) pairs for abbreviations:
-_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
- ('mrs', 'misess'),
- ('mr', 'mister'),
- ('dr', 'doctor'),
- ('st', 'saint'),
- ('co', 'company'),
- ('jr', 'junior'),
- ('maj', 'major'),
- ('gen', 'general'),
- ('drs', 'doctors'),
- ('rev', 'reverend'),
- ('lt', 'lieutenant'),
- ('hon', 'honorable'),
- ('sgt', 'sergeant'),
- ('capt', 'captain'),
- ('esq', 'esquire'),
- ('ltd', 'limited'),
- ('col', 'colonel'),
- ('ft', 'fort'),
-]]
-
-
-# List of (ipa, lazy ipa) pairs:
-_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('æ', 'e'),
- ('ɑ', 'a'),
- ('ɔ', 'o'),
- ('ð', 'z'),
- ('θ', 's'),
- ('ɛ', 'e'),
- ('ɪ', 'i'),
- ('ʊ', 'u'),
- ('ʒ', 'ʥ'),
- ('ʤ', 'ʥ'),
- ('ˈ', '↓'),
-]]
-
-# List of (ipa, lazy ipa2) pairs:
-_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('ð', 'z'),
- ('θ', 's'),
- ('ʒ', 'ʑ'),
- ('ʤ', 'dʑ'),
- ('ˈ', '↓'),
-]]
-
-# List of (ipa, ipa2) pairs
-_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('ʤ', 'dʒ'),
- ('ʧ', 'tʃ')
-]]
-
-
-def expand_abbreviations(text):
- for regex, replacement in _abbreviations:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def collapse_whitespace(text):
- return re.sub(r'\s+', ' ', text)
-
-
-def _remove_commas(m):
- return m.group(1).replace(',', '')
-
-
-def _expand_decimal_point(m):
- return m.group(1).replace('.', ' point ')
-
-
-def _expand_dollars(m):
- match = m.group(1)
- parts = match.split('.')
- if len(parts) > 2:
- return match + ' dollars' # Unexpected format
- dollars = int(parts[0]) if parts[0] else 0
- cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
- if dollars and cents:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit)
- elif dollars:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- return '%s %s' % (dollars, dollar_unit)
- elif cents:
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s' % (cents, cent_unit)
- else:
- return 'zero dollars'
-
-
-def _expand_ordinal(m):
- return _inflect.number_to_words(m.group(0))
-
-
-def _expand_number(m):
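-    # Spell out integers, with special handling for 1000-2999 so they read as years.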
- num = int(m.group(0))
- if num > 1000 and num < 3000:
- if num == 2000:
- return 'two thousand'
- elif num > 2000 and num < 2010:
- return 'two thousand ' + _inflect.number_to_words(num % 100)
- elif num % 100 == 0:
- return _inflect.number_to_words(num // 100) + ' hundred'
- else:
- return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ')
- else:
- return _inflect.number_to_words(num, andword='')
-
-
-def normalize_numbers(text):
- text = re.sub(_comma_number_re, _remove_commas, text)
- text = re.sub(_pounds_re, r'\1 pounds', text)
- text = re.sub(_dollars_re, _expand_dollars, text)
- text = re.sub(_decimal_number_re, _expand_decimal_point, text)
- text = re.sub(_ordinal_re, _expand_ordinal, text)
- text = re.sub(_number_re, _expand_number, text)
- return text
-
-
-def mark_dark_l(text):
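-    # Replace "l" with the velarized (dark) "ɫ" when it is not followed by a vowel,
-    # i.e. in syllable-final position.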
- return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text)
-
-
-def english_to_ipa(text):
- text = unidecode(text).lower()
- text = expand_abbreviations(text)
- text = normalize_numbers(text)
- phonemes = ipa.convert(text)
- phonemes = collapse_whitespace(phonemes)
- return phonemes
-
-
-def english_to_lazy_ipa(text):
- text = english_to_ipa(text)
- for regex, replacement in _lazy_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def english_to_ipa2(text):
- text = english_to_ipa(text)
- text = mark_dark_l(text)
- for regex, replacement in _ipa_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text.replace('...', '…')
-
-
-def english_to_lazy_ipa2(text):
- text = english_to_ipa(text)
- for regex, replacement in _lazy_ipa2:
- text = re.sub(regex, replacement, text)
- return text
diff --git "a/spaces/Gmq-x/gpt-academic/crazy_functions/\347\220\206\350\247\243PDF\346\226\207\346\241\243\345\206\205\345\256\271.py" "b/spaces/Gmq-x/gpt-academic/crazy_functions/\347\220\206\350\247\243PDF\346\226\207\346\241\243\345\206\205\345\256\271.py"
deleted file mode 100644
index 505086455af8d2676055ab084cf97058b954c7d5..0000000000000000000000000000000000000000
--- "a/spaces/Gmq-x/gpt-academic/crazy_functions/\347\220\206\350\247\243PDF\346\226\207\346\241\243\345\206\205\345\256\271.py"
+++ /dev/null
@@ -1,112 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption
-from .crazy_utils import read_and_clean_pdf_text
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-fast_debug = False
-
-
-def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
- import tiktoken
- print('begin analysis on:', file_name)
-
-    ############################## <Step 0: split the PDF> ##################################
-    # Recursively split the PDF file; each chunk (ideally a complete section such as the
-    # introduction or experiments, split further only when necessary) must be under 2500 tokens.
-    file_content, page_one = read_and_clean_pdf_text(file_name) # (try to) split the PDF by section
-
- TOKEN_LIMIT_PER_FRAGMENT = 2500
-
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
- paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
- txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
- page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
- txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
-    # For better results, strip everything after the Introduction (if present)
- paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
-
-    ############################## <Step 1: extract high-value information from the abstract into history> ##################################
- final_results = []
- final_results.append(paper_meta)
-
-    ############################## <Step 2: iterate through the whole paper and distill key information> ##################################
-    i_say_show_user = f'首先你在英文语境下通读整篇论文。'; gpt_say = "[Local Message] 收到。" # prompt shown to the user
-    chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=[]) # refresh the UI
-
- iteration_results = []
-    last_iteration_result = paper_meta # the initial value is the abstract
- MAX_WORD_TOTAL = 4096
- n_fragment = len(paper_fragments)
- if n_fragment >= 20: print('文章极长,不能达到预期效果')
- for i in range(n_fragment):
- NUM_OF_WORD = MAX_WORD_TOTAL // n_fragment
- i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i]}"
- i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i][:200]}"
-        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say = actual prompt sent to the model, i_say_show_user = prompt shown to the user
-                                                                           llm_kwargs, chatbot,
-                                                                           history=["The main idea of the previous section is?", last_iteration_result], # feed in the previous iteration's result
-                                                                           sys_prompt="Extract the main idea of this section." # system prompt
- )
- iteration_results.append(gpt_say)
- last_iteration_result = gpt_say
-
-    ############################## <Step 3: organize the history> ##################################
- final_results.extend(iteration_results)
- final_results.append(f'接下来,你是一名专业的学术教授,利用以上信息,使用中文回答我的问题。')
-    # The next two messages are shown in the UI only and have no functional effect
- i_say_show_user = f'接下来,你是一名专业的学术教授,利用以上信息,使用中文回答我的问题。'; gpt_say = "[Local Message] 收到。"
- chatbot.append([i_say_show_user, gpt_say])
-
-    ############################## <Step 4: set a token limit to prevent overflow in the answer> ##################################
- from .crazy_utils import input_clipping
- _, final_results = input_clipping("", final_results, max_token_limit=3200)
-    yield from update_ui(chatbot=chatbot, history=final_results) # note: the history is replaced here
-
-
-@CatchException
-def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- import glob, os
-
-    # Basic info: features and contributors
- chatbot.append([
- "函数插件功能?",
- "理解PDF论文内容,并且将结合上下文内容,进行学术解答。函数插件贡献者: Hanzoe, binary-husky"])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # Try to import dependencies; if any are missing, suggest how to install them
- try:
- import fitz
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Clear the history to avoid overflowing the input
- history = []
-
-    # Validate the input argument; exit immediately if none was given
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "":
- txt = '空空如也的输入栏'
- report_execption(chatbot, history,
- a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Collect the list of files to process
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
-    # If no files were found
- if len(file_manifest) == 0:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}", b=f"找不到任何.tex或.pdf文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- txt = file_manifest[0]
-    # Start the actual task
- yield from 解析PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
diff --git a/spaces/Goutam982/RVC_V2_voice_clone/app.py b/spaces/Goutam982/RVC_V2_voice_clone/app.py
deleted file mode 100644
index 9ce7bc25915db5c6a62c57a3b9b8024a730a0595..0000000000000000000000000000000000000000
--- a/spaces/Goutam982/RVC_V2_voice_clone/app.py
+++ /dev/null
@@ -1,2088 +0,0 @@
-import subprocess, torch, os, traceback, sys, warnings, shutil, numpy as np
-from mega import Mega
-os.environ["no_proxy"] = "localhost, 127.0.0.1, ::1"
-import threading
-from time import sleep
-from subprocess import Popen
-import faiss
-from random import shuffle
-import json, datetime, requests
-from gtts import gTTS
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-tmp = os.path.join(now_dir, "TEMP")
-shutil.rmtree(tmp, ignore_errors=True)
-shutil.rmtree("%s/runtime/Lib/site-packages/infer_pack" % (now_dir), ignore_errors=True)
-os.makedirs(tmp, exist_ok=True)
-os.makedirs(os.path.join(now_dir, "logs"), exist_ok=True)
-os.makedirs(os.path.join(now_dir, "weights"), exist_ok=True)
-os.environ["TEMP"] = tmp
-warnings.filterwarnings("ignore")
-torch.manual_seed(114514)
-from i18n import I18nAuto
-
-import signal
-
-import math
-
-from utils import load_audio, CSVutil
-
-global DoFormant, Quefrency, Timbre
-
-if not os.path.isdir('csvdb/'):
- os.makedirs('csvdb')
- frmnt, stp = open("csvdb/formanting.csv", 'w'), open("csvdb/stop.csv", 'w')
- frmnt.close()
- stp.close()
-
-try:
- DoFormant, Quefrency, Timbre = CSVutil('csvdb/formanting.csv', 'r', 'formanting')
- DoFormant = (
- lambda DoFormant: True if DoFormant.lower() == 'true' else (False if DoFormant.lower() == 'false' else DoFormant)
- )(DoFormant)
-except (ValueError, TypeError, IndexError):
- DoFormant, Quefrency, Timbre = False, 1.0, 1.0
- CSVutil('csvdb/formanting.csv', 'w+', 'formanting', DoFormant, Quefrency, Timbre)
-
-def download_models():
- # Download hubert base model if not present
- if not os.path.isfile('./hubert_base.pt'):
- response = requests.get('https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt')
-
- if response.status_code == 200:
- with open('./hubert_base.pt', 'wb') as f:
- f.write(response.content)
- print("Downloaded hubert base model file successfully. File saved to ./hubert_base.pt.")
- else:
- raise Exception("Failed to download hubert base model file. Status code: " + str(response.status_code) + ".")
-
- # Download rmvpe model if not present
- if not os.path.isfile('./rmvpe.pt'):
- response = requests.get('https://drive.usercontent.google.com/download?id=1Hkn4kNuVFRCNQwyxQFRtmzmMBGpQxptI&export=download&authuser=0&confirm=t&uuid=0b3a40de-465b-4c65-8c41-135b0b45c3f7&at=APZUnTV3lA3LnyTbeuduura6Dmi2:1693724254058')
-
- if response.status_code == 200:
- with open('./rmvpe.pt', 'wb') as f:
- f.write(response.content)
- print("Downloaded rmvpe model file successfully. File saved to ./rmvpe.pt.")
- else:
- raise Exception("Failed to download rmvpe model file. Status code: " + str(response.status_code) + ".")
-
-download_models()
-
-print("\n-------------------------------\nRVC v2 Easy GUI (Local Edition)\n-------------------------------\n")
-
-def formant_apply(qfrency, tmbre):
- Quefrency = qfrency
- Timbre = tmbre
- DoFormant = True
- CSVutil('csvdb/formanting.csv', 'w+', 'formanting', DoFormant, qfrency, tmbre)
-
- return ({"value": Quefrency, "__type__": "update"}, {"value": Timbre, "__type__": "update"})
-
-def get_fshift_presets():
- fshift_presets_list = []
- for dirpath, _, filenames in os.walk("./formantshiftcfg/"):
- for filename in filenames:
- if filename.endswith(".txt"):
- fshift_presets_list.append(os.path.join(dirpath,filename).replace('\\','/'))
-
- if len(fshift_presets_list) > 0:
- return fshift_presets_list
- else:
- return ''
-
-
-
-def formant_enabled(cbox, qfrency, tmbre, frmntapply, formantpreset, formant_refresh_button):
-
- if (cbox):
-
- DoFormant = True
- CSVutil('csvdb/formanting.csv', 'w+', 'formanting', DoFormant, qfrency, tmbre)
- #print(f"is checked? - {cbox}\ngot {DoFormant}")
-
- return (
- {"value": True, "__type__": "update"},
- {"visible": True, "__type__": "update"},
- {"visible": True, "__type__": "update"},
- {"visible": True, "__type__": "update"},
- {"visible": True, "__type__": "update"},
- {"visible": True, "__type__": "update"},
- )
-
-
- else:
-
- DoFormant = False
- CSVutil('csvdb/formanting.csv', 'w+', 'formanting', DoFormant, qfrency, tmbre)
-
- #print(f"is checked? - {cbox}\ngot {DoFormant}")
- return (
- {"value": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- )
-
-
-
-def preset_apply(preset, qfer, tmbr):
- if str(preset) != '':
- with open(str(preset), 'r') as p:
- content = p.readlines()
- qfer, tmbr = content[0].split('\n')[0], content[1]
-
- formant_apply(qfer, tmbr)
- else:
- pass
- return ({"value": qfer, "__type__": "update"}, {"value": tmbr, "__type__": "update"})
-
-def update_fshift_presets(preset, qfrency, tmbre):
-
- qfrency, tmbre = preset_apply(preset, qfrency, tmbre)
-
- if (str(preset) != ''):
- with open(str(preset), 'r') as p:
- content = p.readlines()
- qfrency, tmbre = content[0].split('\n')[0], content[1]
-
- formant_apply(qfrency, tmbre)
- else:
- pass
- return (
- {"choices": get_fshift_presets(), "__type__": "update"},
- {"value": qfrency, "__type__": "update"},
- {"value": tmbre, "__type__": "update"},
- )
-
-i18n = I18nAuto()
-#i18n.print()
-# Check whether an NVIDIA GPU usable for training and accelerated inference is present
-ngpu = torch.cuda.device_count()
-gpu_infos = []
-mem = []
-if (not torch.cuda.is_available()) or ngpu == 0:
- if_gpu_ok = False
-else:
- if_gpu_ok = False
- for i in range(ngpu):
- gpu_name = torch.cuda.get_device_name(i)
- if (
- "10" in gpu_name
- or "16" in gpu_name
- or "20" in gpu_name
- or "30" in gpu_name
- or "40" in gpu_name
- or "A2" in gpu_name.upper()
- or "A3" in gpu_name.upper()
- or "A4" in gpu_name.upper()
- or "P4" in gpu_name.upper()
- or "A50" in gpu_name.upper()
- or "A60" in gpu_name.upper()
- or "70" in gpu_name
- or "80" in gpu_name
- or "90" in gpu_name
- or "M4" in gpu_name.upper()
- or "T4" in gpu_name.upper()
- or "TITAN" in gpu_name.upper()
- ): # A10#A100#V100#A40#P40#M40#K80#A4500
-            if_gpu_ok = True  # at least one usable NVIDIA GPU
- gpu_infos.append("%s\t%s" % (i, gpu_name))
- mem.append(
- int(
- torch.cuda.get_device_properties(i).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- )
-if if_gpu_ok == True and len(gpu_infos) > 0:
- gpu_info = "\n".join(gpu_infos)
- default_batch_size = min(mem) // 2
-else:
- gpu_info = i18n("很遗憾您这没有能用的显卡来支持您训练")
- default_batch_size = 1
-gpus = "-".join([i[0] for i in gpu_infos])
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-import soundfile as sf
-from fairseq import checkpoint_utils
-import gradio as gr
-import logging
-from vc_infer_pipeline import VC
-from config import Config
-
-config = Config()
-# from trainset_preprocess_pipeline import PreProcess
-logging.getLogger("numba").setLevel(logging.WARNING)
-
-hubert_model = None
-
-def load_hubert():
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(config.device)
- if config.is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-
-weight_root = "weights"
-index_root = "logs"
-names = []
-for name in os.listdir(weight_root):
- if name.endswith(".pth"):
- names.append(name)
-index_paths = []
-for root, dirs, files in os.walk(index_root, topdown=False):
- for name in files:
- if name.endswith(".index") and "trained" not in name:
- index_paths.append("%s/%s" % (root, name))
-
-
-
-def vc_single(
- sid,
- input_audio_path,
- f0_up_key,
- f0_file,
- f0_method,
- file_index,
- #file_index2,
- # file_big_npy,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- crepe_hop_length,
-): # spk_item, input_audio0, vc_transform0,f0_file,f0method0
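-    # Convert a single audio file: load and normalize the source, lazily load the HuBERT
-    # encoder, clean up the .index path, then run vc.pipeline() to produce the converted
-    # waveform at tgt_sr (or at resample_sr when resampling is requested).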
- global tgt_sr, net_g, vc, hubert_model, version
- if input_audio_path is None:
- return "You need to upload an audio", None
- f0_up_key = int(f0_up_key)
- try:
- audio = load_audio(input_audio_path, 16000, DoFormant, Quefrency, Timbre)
- audio_max = np.abs(audio).max() / 0.95
- if audio_max > 1:
- audio /= audio_max
- times = [0, 0, 0]
- if hubert_model == None:
- load_hubert()
- if_f0 = cpt.get("f0", 1)
- file_index = (
- (
- file_index.strip(" ")
- .strip('"')
- .strip("\n")
- .strip('"')
- .strip(" ")
- .replace("trained", "added")
- )
-        ) # guard against user mistakes: automatically replace "trained" with "added"
- # file_big_npy = (
- # file_big_npy.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- # )
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- sid,
- audio,
- input_audio_path,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- crepe_hop_length,
- f0_file=f0_file,
- )
- if resample_sr >= 16000 and tgt_sr != resample_sr:
- tgt_sr = resample_sr
- index_info = (
- "Using index:%s." % file_index
- if os.path.exists(file_index)
- else "Index not used."
- )
- return "Success.\n %s\nTime:\n npy:%ss, f0:%ss, infer:%ss" % (
- index_info,
- times[0],
- times[1],
- times[2],
- ), (tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- print(info)
- return info, (None, None)
-
-
-def vc_multi(
- sid,
- dir_path,
- opt_root,
- paths,
- f0_up_key,
- f0_method,
- file_index,
- file_index2,
- # file_big_npy,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- format1,
- crepe_hop_length,
-):
- try:
- dir_path = (
- dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-        ) # strip stray spaces, quotes and newlines that users may paste around the path
- opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- os.makedirs(opt_root, exist_ok=True)
- try:
- if dir_path != "":
- paths = [os.path.join(dir_path, name) for name in os.listdir(dir_path)]
- else:
- paths = [path.name for path in paths]
- except:
- traceback.print_exc()
- paths = [path.name for path in paths]
- infos = []
- for path in paths:
- info, opt = vc_single(
- sid,
- path,
- f0_up_key,
- None,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- crepe_hop_length
- )
- if "Success" in info:
- try:
- tgt_sr, audio_opt = opt
- if format1 in ["wav", "flac"]:
- sf.write(
- "%s/%s.%s" % (opt_root, os.path.basename(path), format1),
- audio_opt,
- tgt_sr,
- )
- else:
- path = "%s/%s.wav" % (opt_root, os.path.basename(path))
- sf.write(
- path,
- audio_opt,
- tgt_sr,
- )
- if os.path.exists(path):
- os.system(
- "ffmpeg -i %s -vn %s -q:a 2 -y"
- % (path, path[:-4] + ".%s" % format1)
- )
- except:
- info += traceback.format_exc()
- infos.append("%s->%s" % (os.path.basename(path), info))
- yield "\n".join(infos)
- yield "\n".join(infos)
- except:
- yield traceback.format_exc()
-
-# Only one voice model can be active per tab
-def get_vc(sid):
- global n_spk, tgt_sr, net_g, vc, cpt, version
- if sid == "" or sid == []:
- global hubert_model
-        if hubert_model != None: # because of polling, check whether sid switched from a loaded model to no model
- print("clean_empty_cache")
- del net_g, n_spk, vc, hubert_model, tgt_sr # ,cpt
- hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-            ### without the steps below, the GPU memory is not fully released
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g, cpt
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- cpt = None
- return {"visible": False, "__type__": "update"}
- person = "%s/%s" % (weight_root, sid)
- print("loading %s" % person)
- cpt = torch.load(person, map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(config.device)
- if config.is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- n_spk = cpt["config"][-3]
- return {"visible": False, "maximum": n_spk, "__type__": "update"}
-
-
-def change_choices():
- names = []
- for name in os.listdir(weight_root):
- if name.endswith(".pth"):
- names.append(name)
- index_paths = []
- for root, dirs, files in os.walk(index_root, topdown=False):
- for name in files:
- if name.endswith(".index") and "trained" not in name:
- index_paths.append("%s/%s" % (root, name))
- return {"choices": sorted(names), "__type__": "update"}, {
- "choices": sorted(index_paths),
- "__type__": "update",
- }
-
-
-def clean():
- return {"value": "", "__type__": "update"}
-
-
-sr_dict = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-def if_done(done, p):
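-    # Poll the child process every 0.5 s and set the shared flag once it exits, so the
-    # loop that streams the log file back to the UI knows when to stop.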
- while 1:
- if p.poll() == None:
- sleep(0.5)
- else:
- break
- done[0] = True
-
-
-def if_done_multi(done, ps):
- while 1:
-        # poll() == None means the process has not finished yet
-        # keep waiting while any process is still running
- flag = 1
- for p in ps:
- if p.poll() == None:
- flag = 0
- sleep(0.5)
- break
- if flag == 1:
- break
- done[0] = True
-
-
-def preprocess_dataset(trainset_dir, exp_dir, sr, n_p):
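-    # Launch trainset_preprocess_pipeline_print.py as a subprocess and stream its log
-    # file back to the UI once per second until the process exits.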
- sr = sr_dict[sr]
- os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True)
- f = open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "w")
- f.close()
- cmd = (
- config.python_cmd
- + " trainset_preprocess_pipeline_print.py %s %s %s %s/logs/%s "
- % (trainset_dir, sr, n_p, now_dir, exp_dir)
- + str(config.noparallel)
- )
- print(cmd)
- p = Popen(cmd, shell=True) # , stdin=PIPE, stdout=PIPE,stderr=PIPE,cwd=now_dir
-    ### gradio only returns Popen output after the process finishes, so poll the log file periodically instead
- done = [False]
- threading.Thread(
- target=if_done,
- args=(
- done,
- p,
- ),
- ).start()
- while 1:
- with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f:
- yield (f.read())
- sleep(1)
- if done[0] == True:
- break
- with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f:
- log = f.read()
- print(log)
- yield log
-
-# but2.click(extract_f0,[gpus6,np7,f0method8,if_f0_3,trainset_dir4],[info2])
-def extract_f0_feature(gpus, n_p, f0method, if_f0, exp_dir, version19, echl):
- gpus = gpus.split("-")
- os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True)
- f = open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "w")
- f.close()
- if if_f0:
- cmd = config.python_cmd + " extract_f0_print.py %s/logs/%s %s %s %s" % (
- now_dir,
- exp_dir,
- n_p,
- f0method,
- echl,
- )
- print(cmd)
- p = Popen(cmd, shell=True, cwd=now_dir) # , stdin=PIPE, stdout=PIPE,stderr=PIPE
-        ### gradio only returns Popen output after the process finishes, so poll the log file periodically instead
- done = [False]
- threading.Thread(
- target=if_done,
- args=(
- done,
- p,
- ),
- ).start()
- while 1:
- with open(
- "%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r"
- ) as f:
- yield (f.read())
- sleep(1)
- if done[0] == True:
- break
- with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f:
- log = f.read()
- print(log)
- yield log
-    #### spawn a separate process for each part
- """
- n_part=int(sys.argv[1])
- i_part=int(sys.argv[2])
- i_gpu=sys.argv[3]
- exp_dir=sys.argv[4]
- os.environ["CUDA_VISIBLE_DEVICES"]=str(i_gpu)
- """
- leng = len(gpus)
- ps = []
- for idx, n_g in enumerate(gpus):
- cmd = (
- config.python_cmd
- + " extract_feature_print.py %s %s %s %s %s/logs/%s %s"
- % (
- config.device,
- leng,
- idx,
- n_g,
- now_dir,
- exp_dir,
- version19,
- )
- )
- print(cmd)
- p = Popen(
- cmd, shell=True, cwd=now_dir
- ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir
- ps.append(p)
-    ### gradio only returns Popen output after the process finishes, so poll the log file periodically instead
- done = [False]
- threading.Thread(
- target=if_done_multi,
- args=(
- done,
- ps,
- ),
- ).start()
- while 1:
- with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f:
- yield (f.read())
- sleep(1)
- if done[0] == True:
- break
- with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f:
- log = f.read()
- print(log)
- yield log
-
-
-def change_sr2(sr2, if_f0_3, version19):
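-    # Resolve the pretrained generator/discriminator checkpoint paths for the selected
-    # sample rate, f0 setting and model version, warning when a file is missing.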
- path_str = "" if version19 == "v1" else "_v2"
- f0_str = "f0" if if_f0_3 else ""
- if_pretrained_generator_exist = os.access("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK)
- if_pretrained_discriminator_exist = os.access("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK)
- if (if_pretrained_generator_exist == False):
- print("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), "not exist, will not use pretrained model")
- if (if_pretrained_discriminator_exist == False):
- print("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), "not exist, will not use pretrained model")
- return (
- ("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2)) if if_pretrained_generator_exist else "",
- ("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2)) if if_pretrained_discriminator_exist else "",
- {"visible": True, "__type__": "update"}
- )
-
-def change_version19(sr2, if_f0_3, version19):
- path_str = "" if version19 == "v1" else "_v2"
- f0_str = "f0" if if_f0_3 else ""
- if_pretrained_generator_exist = os.access("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK)
- if_pretrained_discriminator_exist = os.access("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK)
- if (if_pretrained_generator_exist == False):
- print("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), "not exist, will not use pretrained model")
- if (if_pretrained_discriminator_exist == False):
- print("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), "not exist, will not use pretrained model")
- return (
- ("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2)) if if_pretrained_generator_exist else "",
- ("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2)) if if_pretrained_discriminator_exist else "",
- )
-
-
-def change_f0(if_f0_3, sr2, version19): # f0method8,pretrained_G14,pretrained_D15
- path_str = "" if version19 == "v1" else "_v2"
- if_pretrained_generator_exist = os.access("pretrained%s/f0G%s.pth" % (path_str, sr2), os.F_OK)
- if_pretrained_discriminator_exist = os.access("pretrained%s/f0D%s.pth" % (path_str, sr2), os.F_OK)
- if (if_pretrained_generator_exist == False):
- print("pretrained%s/f0G%s.pth" % (path_str, sr2), "not exist, will not use pretrained model")
- if (if_pretrained_discriminator_exist == False):
- print("pretrained%s/f0D%s.pth" % (path_str, sr2), "not exist, will not use pretrained model")
- if if_f0_3:
- return (
- {"visible": True, "__type__": "update"},
- "pretrained%s/f0G%s.pth" % (path_str, sr2) if if_pretrained_generator_exist else "",
- "pretrained%s/f0D%s.pth" % (path_str, sr2) if if_pretrained_discriminator_exist else "",
- )
- return (
- {"visible": False, "__type__": "update"},
- ("pretrained%s/G%s.pth" % (path_str, sr2)) if if_pretrained_generator_exist else "",
- ("pretrained%s/D%s.pth" % (path_str, sr2)) if if_pretrained_discriminator_exist else "",
- )
-
-
-global log_interval
-
-
-def set_log_interval(exp_dir, batch_size12):
- log_interval = 1
-
- folder_path = os.path.join(exp_dir, "1_16k_wavs")
-
- if os.path.exists(folder_path) and os.path.isdir(folder_path):
- wav_files = [f for f in os.listdir(folder_path) if f.endswith(".wav")]
- if wav_files:
- sample_size = len(wav_files)
- log_interval = math.ceil(sample_size / batch_size12)
- if log_interval > 1:
- log_interval += 1
- return log_interval
-
-# but3.click(click_train,[exp_dir1,sr2,if_f0_3,save_epoch10,total_epoch11,batch_size12,if_save_latest13,pretrained_G14,pretrained_D15,gpus16])
-def click_train(
- exp_dir1,
- sr2,
- if_f0_3,
- spk_id5,
- save_epoch10,
- total_epoch11,
- batch_size12,
- if_save_latest13,
- pretrained_G14,
- pretrained_D15,
- gpus16,
- if_cache_gpu17,
- if_save_every_weights18,
- version19,
-):
- CSVutil('csvdb/stop.csv', 'w+', 'formanting', False)
-    # generate the filelist
- exp_dir = "%s/logs/%s" % (now_dir, exp_dir1)
- os.makedirs(exp_dir, exist_ok=True)
- gt_wavs_dir = "%s/0_gt_wavs" % (exp_dir)
- feature_dir = (
- "%s/3_feature256" % (exp_dir)
- if version19 == "v1"
- else "%s/3_feature768" % (exp_dir)
- )
-
- log_interval = set_log_interval(exp_dir, batch_size12)
-
- if if_f0_3:
- f0_dir = "%s/2a_f0" % (exp_dir)
- f0nsf_dir = "%s/2b-f0nsf" % (exp_dir)
- names = (
- set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)])
- & set([name.split(".")[0] for name in os.listdir(feature_dir)])
- & set([name.split(".")[0] for name in os.listdir(f0_dir)])
- & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)])
- )
- else:
- names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set(
- [name.split(".")[0] for name in os.listdir(feature_dir)]
- )
- opt = []
- for name in names:
- if if_f0_3:
- opt.append(
- "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s"
- % (
- gt_wavs_dir.replace("\\", "\\\\"),
- name,
- feature_dir.replace("\\", "\\\\"),
- name,
- f0_dir.replace("\\", "\\\\"),
- name,
- f0nsf_dir.replace("\\", "\\\\"),
- name,
- spk_id5,
- )
- )
- else:
- opt.append(
- "%s/%s.wav|%s/%s.npy|%s"
- % (
- gt_wavs_dir.replace("\\", "\\\\"),
- name,
- feature_dir.replace("\\", "\\\\"),
- name,
- spk_id5,
- )
- )
- fea_dim = 256 if version19 == "v1" else 768
- if if_f0_3:
- for _ in range(2):
- opt.append(
- "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s"
- % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5)
- )
- else:
- for _ in range(2):
- opt.append(
- "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s"
- % (now_dir, sr2, now_dir, fea_dim, spk_id5)
- )
- shuffle(opt)
- with open("%s/filelist.txt" % exp_dir, "w") as f:
- f.write("\n".join(opt))
- print("write filelist done")
-    # generate config (no config file generation is needed here)
- # cmd = python_cmd + " train_nsf_sim_cache_sid_load_pretrain.py -e mi-test -sr 40k -f0 1 -bs 4 -g 0 -te 10 -se 5 -pg pretrained/f0G40k.pth -pd pretrained/f0D40k.pth -l 1 -c 0"
- print("use gpus:", gpus16)
- if pretrained_G14 == "":
- print("no pretrained Generator")
- if pretrained_D15 == "":
- print("no pretrained Discriminator")
- if gpus16:
- cmd = (
- config.python_cmd
- + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s -li %s"
- % (
- exp_dir1,
- sr2,
- 1 if if_f0_3 else 0,
- batch_size12,
- gpus16,
- total_epoch11,
- save_epoch10,
- ("-pg %s" % pretrained_G14) if pretrained_G14 != "" else "",
- ("-pd %s" % pretrained_D15) if pretrained_D15 != "" else "",
- 1 if if_save_latest13 == True else 0,
- 1 if if_cache_gpu17 == True else 0,
- 1 if if_save_every_weights18 == True else 0,
- version19,
- log_interval,
- )
- )
- else:
- cmd = (
- config.python_cmd
- + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s -li %s"
- % (
- exp_dir1,
- sr2,
- 1 if if_f0_3 else 0,
- batch_size12,
- total_epoch11,
- save_epoch10,
- ("-pg %s" % pretrained_G14) if pretrained_G14 != "" else "\b",
- ("-pd %s" % pretrained_D15) if pretrained_D15 != "" else "\b",
- 1 if if_save_latest13 == True else 0,
- 1 if if_cache_gpu17 == True else 0,
- 1 if if_save_every_weights18 == True else 0,
- version19,
- log_interval,
- )
- )
- print(cmd)
- p = Popen(cmd, shell=True, cwd=now_dir)
- global PID
- PID = p.pid
- p.wait()
- return ("训练结束, 您可查看控制台训练日志或实验文件夹下的train.log", {"visible": False, "__type__": "update"}, {"visible": True, "__type__": "update"})
-
-
-# but4.click(train_index, [exp_dir1], info3)
-def train_index(exp_dir1, version19):
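-    # Build a FAISS IVF retrieval index over the extracted feature vectors: load every .npy
-    # in the feature dir, shuffle, pick n_ivf ~ min(16*sqrt(N), N//39) centroids, train the
-    # index, then add vectors in batches of 8192 and write both stages to disk.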
- exp_dir = "%s/logs/%s" % (now_dir, exp_dir1)
- os.makedirs(exp_dir, exist_ok=True)
- feature_dir = (
- "%s/3_feature256" % (exp_dir)
- if version19 == "v1"
- else "%s/3_feature768" % (exp_dir)
- )
- if os.path.exists(feature_dir) == False:
- return "请先进行特征提取!"
- listdir_res = list(os.listdir(feature_dir))
- if len(listdir_res) == 0:
- return "请先进行特征提取!"
- npys = []
- for name in sorted(listdir_res):
- phone = np.load("%s/%s" % (feature_dir, name))
- npys.append(phone)
- big_npy = np.concatenate(npys, 0)
- big_npy_idx = np.arange(big_npy.shape[0])
- np.random.shuffle(big_npy_idx)
- big_npy = big_npy[big_npy_idx]
- np.save("%s/total_fea.npy" % exp_dir, big_npy)
- # n_ivf = big_npy.shape[0] // 39
- n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39)
- infos = []
- infos.append("%s,%s" % (big_npy.shape, n_ivf))
- yield "\n".join(infos)
- index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf)
- # index = faiss.index_factory(256if version19=="v1"else 768, "IVF%s,PQ128x4fs,RFlat"%n_ivf)
- infos.append("training")
- yield "\n".join(infos)
- index_ivf = faiss.extract_index_ivf(index) #
- index_ivf.nprobe = 1
- index.train(big_npy)
- faiss.write_index(
- index,
- "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
- )
- # faiss.write_index(index, '%s/trained_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19))
- infos.append("adding")
- yield "\n".join(infos)
- batch_size_add = 8192
- for i in range(0, big_npy.shape[0], batch_size_add):
- index.add(big_npy[i : i + batch_size_add])
- faiss.write_index(
- index,
- "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
- )
- infos.append(
- "成功构建索引,added_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (n_ivf, index_ivf.nprobe, exp_dir1, version19)
- )
- # faiss.write_index(index, '%s/added_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19))
- # infos.append("成功构建索引,added_IVF%s_Flat_FastScan_%s.index"%(n_ivf,version19))
- yield "\n".join(infos)
-
-
-# but5.click(train1key, [exp_dir1, sr2, if_f0_3, trainset_dir4, spk_id5, gpus6, np7, f0method8, save_epoch10, total_epoch11, batch_size12, if_save_latest13, pretrained_G14, pretrained_D15, gpus16, if_cache_gpu17], info3)
-def train1key(
- exp_dir1,
- sr2,
- if_f0_3,
- trainset_dir4,
- spk_id5,
- np7,
- f0method8,
- save_epoch10,
- total_epoch11,
- batch_size12,
- if_save_latest13,
- pretrained_G14,
- pretrained_D15,
- gpus16,
- if_cache_gpu17,
- if_save_every_weights18,
- version19,
- echl
-):
- infos = []
-
- def get_info_str(strr):
- infos.append(strr)
- return "\n".join(infos)
-
- model_log_dir = "%s/logs/%s" % (now_dir, exp_dir1)
- preprocess_log_path = "%s/preprocess.log" % model_log_dir
- extract_f0_feature_log_path = "%s/extract_f0_feature.log" % model_log_dir
- gt_wavs_dir = "%s/0_gt_wavs" % model_log_dir
- feature_dir = (
- "%s/3_feature256" % model_log_dir
- if version19 == "v1"
- else "%s/3_feature768" % model_log_dir
- )
-
- os.makedirs(model_log_dir, exist_ok=True)
-    ######### step 1: preprocess the data
- open(preprocess_log_path, "w").close()
- cmd = (
- config.python_cmd
- + " trainset_preprocess_pipeline_print.py %s %s %s %s "
- % (trainset_dir4, sr_dict[sr2], np7, model_log_dir)
- + str(config.noparallel)
- )
- yield get_info_str(i18n("step1:正在处理数据"))
- yield get_info_str(cmd)
- p = Popen(cmd, shell=True)
- p.wait()
- with open(preprocess_log_path, "r") as f:
- print(f.read())
-    ######### step 2a: extract pitch (f0)
- open(extract_f0_feature_log_path, "w")
- if if_f0_3:
- yield get_info_str("step2a:正在提取音高")
- cmd = config.python_cmd + " extract_f0_print.py %s %s %s %s" % (
- model_log_dir,
- np7,
- f0method8,
- echl
- )
- yield get_info_str(cmd)
- p = Popen(cmd, shell=True, cwd=now_dir)
- p.wait()
- with open(extract_f0_feature_log_path, "r") as f:
- print(f.read())
- else:
- yield get_info_str(i18n("step2a:无需提取音高"))
-    ####### step 2b: extract features
- yield get_info_str(i18n("step2b:正在提取特征"))
- gpus = gpus16.split("-")
- leng = len(gpus)
- ps = []
- for idx, n_g in enumerate(gpus):
- cmd = config.python_cmd + " extract_feature_print.py %s %s %s %s %s %s" % (
- config.device,
- leng,
- idx,
- n_g,
- model_log_dir,
- version19,
- )
- yield get_info_str(cmd)
- p = Popen(
- cmd, shell=True, cwd=now_dir
- ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir
- ps.append(p)
- for p in ps:
- p.wait()
- with open(extract_f0_feature_log_path, "r") as f:
- print(f.read())
-    ####### step 3a: train the model
- yield get_info_str(i18n("step3a:正在训练模型"))
-    # generate the filelist
- if if_f0_3:
- f0_dir = "%s/2a_f0" % model_log_dir
- f0nsf_dir = "%s/2b-f0nsf" % model_log_dir
- names = (
- set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)])
- & set([name.split(".")[0] for name in os.listdir(feature_dir)])
- & set([name.split(".")[0] for name in os.listdir(f0_dir)])
- & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)])
- )
- else:
- names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set(
- [name.split(".")[0] for name in os.listdir(feature_dir)]
- )
- opt = []
- for name in names:
- if if_f0_3:
- opt.append(
- "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s"
- % (
- gt_wavs_dir.replace("\\", "\\\\"),
- name,
- feature_dir.replace("\\", "\\\\"),
- name,
- f0_dir.replace("\\", "\\\\"),
- name,
- f0nsf_dir.replace("\\", "\\\\"),
- name,
- spk_id5,
- )
- )
- else:
- opt.append(
- "%s/%s.wav|%s/%s.npy|%s"
- % (
- gt_wavs_dir.replace("\\", "\\\\"),
- name,
- feature_dir.replace("\\", "\\\\"),
- name,
- spk_id5,
- )
- )
- fea_dim = 256 if version19 == "v1" else 768
- if if_f0_3:
- for _ in range(2):
- opt.append(
- "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s"
- % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5)
- )
- else:
- for _ in range(2):
- opt.append(
- "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s"
- % (now_dir, sr2, now_dir, fea_dim, spk_id5)
- )
- shuffle(opt)
- with open("%s/filelist.txt" % model_log_dir, "w") as f:
- f.write("\n".join(opt))
- yield get_info_str("write filelist done")
- if gpus16:
- cmd = (
- config.python_cmd
- +" train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s"
- % (
- exp_dir1,
- sr2,
- 1 if if_f0_3 else 0,
- batch_size12,
- gpus16,
- total_epoch11,
- save_epoch10,
- ("-pg %s" % pretrained_G14) if pretrained_G14 != "" else "",
- ("-pd %s" % pretrained_D15) if pretrained_D15 != "" else "",
- 1 if if_save_latest13 == True else 0,
- 1 if if_cache_gpu17 == True else 0,
- 1 if if_save_every_weights18 == True else 0,
- version19,
- )
- )
- else:
- cmd = (
- config.python_cmd
- + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s"
- % (
- exp_dir1,
- sr2,
- 1 if if_f0_3 else 0,
- batch_size12,
- total_epoch11,
- save_epoch10,
- ("-pg %s" % pretrained_G14) if pretrained_G14 != "" else "",
- ("-pd %s" % pretrained_D15) if pretrained_D15 != "" else "",
- 1 if if_save_latest13 == True else 0,
- 1 if if_cache_gpu17 == True else 0,
- 1 if if_save_every_weights18 == True else 0,
- version19,
- )
- )
- yield get_info_str(cmd)
- p = Popen(cmd, shell=True, cwd=now_dir)
- p.wait()
- yield get_info_str(i18n("训练结束, 您可查看控制台训练日志或实验文件夹下的train.log"))
-    ####### step 3b: train the index
- npys = []
- listdir_res = list(os.listdir(feature_dir))
- for name in sorted(listdir_res):
- phone = np.load("%s/%s" % (feature_dir, name))
- npys.append(phone)
- big_npy = np.concatenate(npys, 0)
-
- big_npy_idx = np.arange(big_npy.shape[0])
- np.random.shuffle(big_npy_idx)
- big_npy = big_npy[big_npy_idx]
- np.save("%s/total_fea.npy" % model_log_dir, big_npy)
-
- # n_ivf = big_npy.shape[0] // 39
- n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39)
- yield get_info_str("%s,%s" % (big_npy.shape, n_ivf))
- index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf)
- yield get_info_str("training index")
- index_ivf = faiss.extract_index_ivf(index) #
- index_ivf.nprobe = 1
- index.train(big_npy)
- faiss.write_index(
- index,
- "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
- )
- yield get_info_str("adding index")
- batch_size_add = 8192
- for i in range(0, big_npy.shape[0], batch_size_add):
- index.add(big_npy[i : i + batch_size_add])
- faiss.write_index(
- index,
- "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
- )
- yield get_info_str(
- "成功构建索引, added_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (n_ivf, index_ivf.nprobe, exp_dir1, version19)
- )
- yield get_info_str(i18n("全流程结束!"))
-
-
-def whethercrepeornah(radio):
- mango = True if radio == 'mangio-crepe' or radio == 'mangio-crepe-tiny' else False
- return ({"visible": mango, "__type__": "update"})
-
-# ckpt_path2.change(change_info_,[ckpt_path2],[sr__,if_f0__])
-def change_info_(ckpt_path):
- if (
- os.path.exists(ckpt_path.replace(os.path.basename(ckpt_path), "train.log"))
- == False
- ):
- return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"}
- try:
- with open(
- ckpt_path.replace(os.path.basename(ckpt_path), "train.log"), "r"
- ) as f:
- info = eval(f.read().strip("\n").split("\n")[0].split("\t")[-1])
- sr, f0 = info["sample_rate"], info["if_f0"]
- version = "v2" if ("version" in info and info["version"] == "v2") else "v1"
- return sr, str(f0), version
- except:
- traceback.print_exc()
- return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"}
-
-
-from lib.infer_pack.models_onnx import SynthesizerTrnMsNSFsidM
-
-
-def export_onnx(ModelPath, ExportedPath, MoeVS=True):
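-    # Export a trained RVC checkpoint to ONNX by tracing the synthesizer with dummy inputs
-    # (200 frames of hidden units, pitch, speaker id and noise); the sequence axes are
-    # declared dynamic so other frame counts work at inference time.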
- cpt = torch.load(ModelPath, map_location="cpu")
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
-    hidden_channels = 256 if cpt.get("version", "v1") == "v1" else 768  # cpt["config"][-2]; hidden_channels, sized for the 768-dim Vec features
-
-    test_phone = torch.rand(1, 200, hidden_channels) # hidden units
-    test_phone_lengths = torch.tensor([200]).long() # hidden unit length (apparently unused)
-    test_pitch = torch.randint(size=(1, 200), low=5, high=255) # f0 (in Hz)
-    test_pitchf = torch.rand(1, 200) # NSF f0
-    test_ds = torch.LongTensor([0]) # speaker ID
-    test_rnd = torch.rand(1, 192, 200) # noise (adds a random factor)
-
-    device = "cpu" # device used during export (does not affect how the model is used)
-
-
- net_g = SynthesizerTrnMsNSFsidM(
- *cpt["config"], is_half=False,version=cpt.get("version","v1")
-    ) # export in fp32 (fp16 would require manual memory re-layout for C++ support, so it is not used for now)
- net_g.load_state_dict(cpt["weight"], strict=False)
- input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"]
- output_names = [
- "audio",
- ]
-    # net_g.construct_spkmixmap(n_speaker)  # multi-speaker mix-track export
- torch.onnx.export(
- net_g,
- (
- test_phone.to(device),
- test_phone_lengths.to(device),
- test_pitch.to(device),
- test_pitchf.to(device),
- test_ds.to(device),
- test_rnd.to(device),
- ),
- ExportedPath,
- dynamic_axes={
- "phone": [1],
- "pitch": [1],
- "pitchf": [1],
- "rnd": [2],
- },
- do_constant_folding=False,
- opset_version=16,
- verbose=False,
- input_names=input_names,
- output_names=output_names,
- )
- return "Finished"
-
-#region RVC WebUI App
-
-def get_presets():
- data = None
- with open('../inference-presets.json', 'r') as file:
- data = json.load(file)
- preset_names = []
- for preset in data['presets']:
- preset_names.append(preset['name'])
-
- return preset_names
-
-def change_choices2():
- audio_files=[]
- for filename in os.listdir("./audios"):
- if filename.endswith(('.wav','.mp3','.ogg','.flac','.m4a','.aac','.mp4')):
- audio_files.append(os.path.join('./audios',filename).replace('\\', '/'))
- return {"choices": sorted(audio_files), "__type__": "update"}, {"__type__": "update"}
-
-audio_files=[]
-for filename in os.listdir("./audios"):
- if filename.endswith(('.wav','.mp3','.ogg','.flac','.m4a','.aac','.mp4')):
- audio_files.append(os.path.join('./audios',filename).replace('\\', '/'))
-
-def get_index():
- if check_for_name() != '':
- chosen_model=sorted(names)[0].split(".")[0]
- logs_path="./logs/"+chosen_model
- if os.path.exists(logs_path):
- for file in os.listdir(logs_path):
- if file.endswith(".index"):
- return os.path.join(logs_path, file)
- return ''
- else:
- return ''
-
-def get_indexes():
- indexes_list=[]
- for dirpath, dirnames, filenames in os.walk("./logs/"):
- for filename in filenames:
- if filename.endswith(".index"):
- indexes_list.append(os.path.join(dirpath,filename))
- if len(indexes_list) > 0:
- return indexes_list
- else:
- return ''
-
-def get_name():
- if len(audio_files) > 0:
- return sorted(audio_files)[0]
- else:
- return ''
-
-def save_to_wav(record_button):
- if record_button is None:
- pass
- else:
- path_to_file=record_button
- new_name = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")+'.wav'
- new_path='./audios/'+new_name
- shutil.move(path_to_file,new_path)
- return new_path
-
-def save_to_wav2(dropbox):
- file_path=dropbox.name
- shutil.move(file_path,'./audios')
- return os.path.join('./audios',os.path.basename(file_path))
-
-def match_index(sid0):
- folder=sid0.split(".")[0]
- parent_dir="./logs/"+folder
- if os.path.exists(parent_dir):
- for filename in os.listdir(parent_dir):
- if filename.endswith(".index"):
- index_path=os.path.join(parent_dir,filename)
- return index_path
- else:
- return ''
-
-def check_for_name():
- if len(names) > 0:
- return sorted(names)[0]
- else:
- return ''
-
-def download_from_url(url, model):
- if url == '':
- return "URL cannot be left empty."
- if model =='':
- return "You need to name your model. For example: My-Model"
- url = url.strip()
- zip_dirs = ["zips", "unzips"]
- for directory in zip_dirs:
- if os.path.exists(directory):
- shutil.rmtree(directory)
- os.makedirs("zips", exist_ok=True)
- os.makedirs("unzips", exist_ok=True)
- zipfile = model + '.zip'
- zipfile_path = './zips/' + zipfile
- try:
- if "drive.google.com" in url:
- subprocess.run(["gdown", url, "--fuzzy", "-O", zipfile_path])
- elif "mega.nz" in url:
- m = Mega()
- m.download_url(url, './zips')
- else:
- subprocess.run(["wget", url, "-O", zipfile_path])
- for filename in os.listdir("./zips"):
- if filename.endswith(".zip"):
- zipfile_path = os.path.join("./zips/",filename)
- shutil.unpack_archive(zipfile_path, "./unzips", 'zip')
- else:
- return "No zipfile found."
- for root, dirs, files in os.walk('./unzips'):
- for file in files:
- file_path = os.path.join(root, file)
- if file.endswith(".index"):
- os.makedirs(f'./logs/{model}', exist_ok=True)
- shutil.copy2(file_path,f'./logs/{model}')
- elif "G_" not in file and "D_" not in file and file.endswith(".pth"):
- shutil.copy(file_path,f'./weights/{model}.pth')
- shutil.rmtree("zips")
- shutil.rmtree("unzips")
- return "Success."
- except:
- return "There's been an error."
-def success_message(face):
- return f'{face.name} has been uploaded.', 'None'
-def mouth(size, face, voice, faces):
- if size == 'Half':
- size = 2
- else:
- size = 1
- if faces == 'None':
- character = face.name
- else:
- if faces == 'Ben Shapiro':
- character = '/content/wav2lip-HD/inputs/ben-shapiro-10.mp4'
- elif faces == 'Andrew Tate':
- character = '/content/wav2lip-HD/inputs/tate-7.mp4'
- command = "python inference.py " \
- "--checkpoint_path checkpoints/wav2lip.pth " \
- f"--face {character} " \
- f"--audio {voice} " \
- "--pads 0 20 0 0 " \
- "--outfile /content/wav2lip-HD/outputs/result.mp4 " \
- "--fps 24 " \
- f"--resize_factor {size}"
- process = subprocess.Popen(command, shell=True, cwd='/content/wav2lip-HD/Wav2Lip-master')
- stdout, stderr = process.communicate()
- return '/content/wav2lip-HD/outputs/result.mp4', 'Animation completed.'
-eleven_voices = ['Adam','Antoni','Josh','Arnold','Sam','Bella','Rachel','Domi','Elli']
-eleven_voices_ids=['pNInz6obpgDQGcFmaJgB','ErXwobaYiN019PkySvjV','TxGEqnHWrfWFTfGW9XjX','VR6AewLTigWG4xSOukaG','yoZ06aMxZJJ28mfd3POQ','EXAVITQu4vr4xnSDxMaL','21m00Tcm4TlvDq8ikWAM','AZnzlk1XvdvUeBnXmlld','MF3mGyEYCl7XYWbV9V6O']
-chosen_voice = dict(zip(eleven_voices, eleven_voices_ids))
-
-def stoptraining(mim):
- if int(mim) == 1:
- try:
- CSVutil('csvdb/stop.csv', 'w+', 'stop', 'True')
- os.kill(PID, signal.SIGTERM)
- except Exception as e:
- print(f"Couldn't click due to {e}")
- return (
- {"visible": False, "__type__": "update"},
- {"visible": True, "__type__": "update"},
- )
-
-
-def elevenTTS(xiapi, text, id, lang):
- if xiapi!= '' and id !='':
- choice = chosen_voice[id]
- CHUNK_SIZE = 1024
- url = f"https://api.elevenlabs.io/v1/text-to-speech/{choice}"
- headers = {
- "Accept": "audio/mpeg",
- "Content-Type": "application/json",
- "xi-api-key": xiapi
- }
- if lang == 'en':
- data = {
- "text": text,
- "model_id": "eleven_monolingual_v1",
- "voice_settings": {
- "stability": 0.5,
- "similarity_boost": 0.5
- }
- }
- else:
- data = {
- "text": text,
- "model_id": "eleven_multilingual_v1",
- "voice_settings": {
- "stability": 0.5,
- "similarity_boost": 0.5
- }
- }
-
- response = requests.post(url, json=data, headers=headers)
- with open('./temp_eleven.mp3', 'wb') as f:
- for chunk in response.iter_content(chunk_size=CHUNK_SIZE):
- if chunk:
- f.write(chunk)
- aud_path = save_to_wav('./temp_eleven.mp3')
- return aud_path, aud_path
- else:
- tts = gTTS(text, lang=lang)
- tts.save('./temp_gTTS.mp3')
- aud_path = save_to_wav('./temp_gTTS.mp3')
- return aud_path, aud_path
-
-def upload_to_dataset(files, dir):
- if dir == '':
- dir = './dataset'
- if not os.path.exists(dir):
- os.makedirs(dir)
- count = 0
- for file in files:
- path=file.name
- shutil.copy2(path,dir)
- count += 1
- return f' {count} files uploaded to {dir}.'
-
-def zip_downloader(model):
- if not os.path.exists(f'./weights/{model}.pth'):
- return {"__type__": "update"}, f'Make sure the Voice Name is correct. I could not find {model}.pth'
- index_found = False
- for file in os.listdir(f'./logs/{model}'):
- if file.endswith('.index') and 'added' in file:
- log_file = file
- index_found = True
- if index_found:
- return [f'./weights/{model}.pth', f'./logs/{model}/{log_file}'], "Done"
- else:
- return f'./weights/{model}.pth', "Could not find Index file."
-
-with gr.Blocks(theme=gr.themes.Base(), title='Mangio-RVC-Web 💻') as app:
- with gr.Tabs():
- with gr.TabItem("Inference"):
- gr.HTML("
RVC V2 Huggingface Version
")
- gr.HTML(" Huggingface version made by Clebersla ")
- gr.HTML("
If you want to use this space privately, I recommend you duplicate the space.
")
-
- # Inference Preset Row
- # with gr.Row():
- # mangio_preset = gr.Dropdown(label="Inference Preset", choices=sorted(get_presets()))
- # mangio_preset_name_save = gr.Textbox(
- # label="Your preset name"
- # )
- # mangio_preset_save_btn = gr.Button('Save Preset', variant="primary")
-
- # Other RVC stuff
- with gr.Row():
- sid0 = gr.Dropdown(label="1.Choose your Model.", choices=sorted(names), value=check_for_name())
- refresh_button = gr.Button("Refresh", variant="primary")
- if check_for_name() != '':
- get_vc(sorted(names)[0])
- vc_transform0 = gr.Number(label="Optional: You can change the pitch here or leave it at 0.", value=0)
- #clean_button = gr.Button(i18n("卸载音色省显存"), variant="primary")
- spk_item = gr.Slider(
- minimum=0,
- maximum=2333,
- step=1,
- label=i18n("请选择说话人id"),
- value=0,
- visible=False,
- interactive=True,
- )
- #clean_button.click(fn=clean, inputs=[], outputs=[sid0])
- sid0.change(
- fn=get_vc,
- inputs=[sid0],
- outputs=[spk_item],
- )
- but0 = gr.Button("Convert", variant="primary")
- with gr.Row():
- with gr.Column():
- with gr.Row():
- dropbox = gr.File(label="Drop your audio here & hit the Reload button.")
- with gr.Row():
- record_button=gr.Audio(source="microphone", label="OR Record audio.", type="filepath")
- with gr.Row():
- input_audio0 = gr.Dropdown(
- label="2.Choose your audio.",
- value="./audios/someguy.mp3",
- choices=audio_files
- )
- dropbox.upload(fn=save_to_wav2, inputs=[dropbox], outputs=[input_audio0])
- dropbox.upload(fn=change_choices2, inputs=[], outputs=[input_audio0])
- refresh_button2 = gr.Button("Refresh", variant="primary", size='sm')
- record_button.change(fn=save_to_wav, inputs=[record_button], outputs=[input_audio0])
- record_button.change(fn=change_choices2, inputs=[], outputs=[input_audio0])
- with gr.Row():
- with gr.Accordion('Text To Speech', open=False):
- with gr.Column():
- lang = gr.Radio(label='Chinese & Japanese do not work with ElevenLabs currently.',choices=['en','es','fr','pt','zh-CN','de','hi','ja'], value='en')
- api_box = gr.Textbox(label="Enter your API Key for ElevenLabs, or leave empty to use GoogleTTS", value='')
- elevenid=gr.Dropdown(label="Voice:", choices=eleven_voices)
- with gr.Column():
- tfs = gr.Textbox(label="Input your Text", interactive=True, value="This is a test.")
- tts_button = gr.Button(value="Speak")
- tts_button.click(fn=elevenTTS, inputs=[api_box,tfs, elevenid, lang], outputs=[record_button, input_audio0])
- with gr.Row():
- with gr.Accordion('Wav2Lip', open=False):
- with gr.Row():
- size = gr.Radio(label='Resolution:',choices=['Half','Full'])
- face = gr.UploadButton("Upload A Character",type='file')
- faces = gr.Dropdown(label="OR Choose one:", choices=['None','Ben Shapiro','Andrew Tate'])
- with gr.Row():
- preview = gr.Textbox(label="Status:",interactive=False)
- face.upload(fn=success_message,inputs=[face], outputs=[preview, faces])
- with gr.Row():
- animation = gr.Video(type='filepath')
- refresh_button2.click(fn=change_choices2, inputs=[], outputs=[input_audio0, animation])
- with gr.Row():
- animate_button = gr.Button('Animate')
-
- with gr.Column():
- with gr.Accordion("Index Settings", open=False):
- file_index1 = gr.Dropdown(
- label="3. Path to your added.index file (if it didn't automatically find it.)",
- choices=get_indexes(),
- value=get_index(),
- interactive=True,
- )
- sid0.change(fn=match_index, inputs=[sid0],outputs=[file_index1])
- refresh_button.click(
- fn=change_choices, inputs=[], outputs=[sid0, file_index1]
- )
- # file_big_npy1 = gr.Textbox(
- # label=i18n("特征文件路径"),
- # value="E:\\codes\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy",
- # interactive=True,
- # )
- index_rate1 = gr.Slider(
- minimum=0,
- maximum=1,
- label=i18n("检索特征占比"),
- value=0.66,
- interactive=True,
- )
- vc_output2 = gr.Audio(
- label="Output Audio (Click on the Three Dots in the Right Corner to Download)",
- type='filepath',
- interactive=False,
- )
- animate_button.click(fn=mouth, inputs=[size, face, vc_output2, faces], outputs=[animation, preview])
- with gr.Accordion("Advanced Settings", open=False):
- f0method0 = gr.Radio(
- label="Optional: Change the Pitch Extraction Algorithm.\nExtraction methods are sorted from 'worst quality' to 'best quality'.\nmangio-crepe may or may not be better than rmvpe in cases where 'smoothness' is more important, but rmvpe is the best overall.",
- choices=["pm", "dio", "crepe-tiny", "mangio-crepe-tiny", "crepe", "harvest", "mangio-crepe", "rmvpe"], # Fork Feature. Add Crepe-Tiny
- value="rmvpe",
- interactive=True,
- )
-
- crepe_hop_length = gr.Slider(
- minimum=1,
- maximum=512,
- step=1,
- label="Mangio-Crepe Hop Length. Higher numbers will reduce the chance of extreme pitch changes but lower numbers will increase accuracy. 64-192 is a good range to experiment with.",
- value=120,
- interactive=True,
- visible=False,
- )
- f0method0.change(fn=whethercrepeornah, inputs=[f0method0], outputs=[crepe_hop_length])
- filter_radius0 = gr.Slider(
- minimum=0,
- maximum=7,
- label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"),
- value=3,
- step=1,
- interactive=True,
- )
- resample_sr0 = gr.Slider(
- minimum=0,
- maximum=48000,
- label=i18n("后处理重采样至最终采样率,0为不进行重采样"),
- value=0,
- step=1,
- interactive=True,
- visible=False
- )
- rms_mix_rate0 = gr.Slider(
- minimum=0,
- maximum=1,
- label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"),
- value=0.21,
- interactive=True,
- )
- protect0 = gr.Slider(
- minimum=0,
- maximum=0.5,
- label=i18n("保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果"),
- value=0.33,
- step=0.01,
- interactive=True,
- )
- formanting = gr.Checkbox(
- value=bool(DoFormant),
- label="[EXPERIMENTAL] Formant shift inference audio",
- info="Used for male to female and vice-versa conversions",
- interactive=True,
- visible=True,
- )
-
- formant_preset = gr.Dropdown(
- value='',
- choices=get_fshift_presets(),
- label="browse presets for formanting",
- visible=bool(DoFormant),
- )
- formant_refresh_button = gr.Button(
- value='\U0001f504',
- visible=bool(DoFormant),
- variant='primary',
- )
- #formant_refresh_button = ToolButton( elem_id='1')
- #create_refresh_button(formant_preset, lambda: {"choices": formant_preset}, "refresh_list_shiftpresets")
-
- qfrency = gr.Slider(
- value=Quefrency,
- info="Default value is 1.0",
- label="Quefrency for formant shifting",
- minimum=0.0,
- maximum=16.0,
- step=0.1,
- visible=bool(DoFormant),
- interactive=True,
- )
- tmbre = gr.Slider(
- value=Timbre,
- info="Default value is 1.0",
- label="Timbre for formant shifting",
- minimum=0.0,
- maximum=16.0,
- step=0.1,
- visible=bool(DoFormant),
- interactive=True,
- )
-
- formant_preset.change(fn=preset_apply, inputs=[formant_preset, qfrency, tmbre], outputs=[qfrency, tmbre])
- frmntbut = gr.Button("Apply", variant="primary", visible=bool(DoFormant))
- formanting.change(fn=formant_enabled,inputs=[formanting,qfrency,tmbre,frmntbut,formant_preset,formant_refresh_button],outputs=[formanting,qfrency,tmbre,frmntbut,formant_preset,formant_refresh_button])
- frmntbut.click(fn=formant_apply,inputs=[qfrency, tmbre], outputs=[qfrency, tmbre])
- formant_refresh_button.click(fn=update_fshift_presets,inputs=[formant_preset, qfrency, tmbre],outputs=[formant_preset, qfrency, tmbre])
- with gr.Row():
- vc_output1 = gr.Textbox("")
- f0_file = gr.File(label=i18n("F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调"), visible=False)
-
- but0.click(
- vc_single,
- [
- spk_item,
- input_audio0,
- vc_transform0,
- f0_file,
- f0method0,
- file_index1,
- # file_index2,
- # file_big_npy1,
- index_rate1,
- filter_radius0,
- resample_sr0,
- rms_mix_rate0,
- protect0,
- crepe_hop_length
- ],
- [vc_output1, vc_output2],
- )
-
- with gr.Accordion("Batch Conversion",open=False):
- with gr.Row():
- with gr.Column():
- vc_transform1 = gr.Number(
- label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0
- )
- opt_input = gr.Textbox(label=i18n("指定输出文件夹"), value="opt")
- f0method1 = gr.Radio(
- label=i18n(
- "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU"
- ),
- choices=["pm", "harvest", "crepe", "rmvpe"],
- value="rmvpe",
- interactive=True,
- )
- filter_radius1 = gr.Slider(
- minimum=0,
- maximum=7,
- label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"),
- value=3,
- step=1,
- interactive=True,
- )
- with gr.Column():
- file_index3 = gr.Textbox(
- label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"),
- value="",
- interactive=True,
- )
- file_index4 = gr.Dropdown(
- label=i18n("自动检测index路径,下拉式选择(dropdown)"),
- choices=sorted(index_paths),
- interactive=True,
- )
- refresh_button.click(
- fn=lambda: change_choices()[1],
- inputs=[],
- outputs=file_index4,
- )
- # file_big_npy2 = gr.Textbox(
- # label=i18n("特征文件路径"),
- # value="E:\\codes\\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy",
- # interactive=True,
- # )
- index_rate2 = gr.Slider(
- minimum=0,
- maximum=1,
- label=i18n("检索特征占比"),
- value=1,
- interactive=True,
- )
- with gr.Column():
- resample_sr1 = gr.Slider(
- minimum=0,
- maximum=48000,
- label=i18n("后处理重采样至最终采样率,0为不进行重采样"),
- value=0,
- step=1,
- interactive=True,
- )
- rms_mix_rate1 = gr.Slider(
- minimum=0,
- maximum=1,
- label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"),
- value=1,
- interactive=True,
- )
- protect1 = gr.Slider(
- minimum=0,
- maximum=0.5,
- label=i18n(
- "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果"
- ),
- value=0.33,
- step=0.01,
- interactive=True,
- )
- with gr.Column():
- dir_input = gr.Textbox(
- label=i18n("输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)"),
- value="E:\codes\py39\\test-20230416b\\todo-songs",
- )
- inputs = gr.File(
- file_count="multiple", label=i18n("也可批量输入音频文件, 二选一, 优先读文件夹")
- )
- with gr.Row():
- format1 = gr.Radio(
- label=i18n("导出文件格式"),
- choices=["wav", "flac", "mp3", "m4a"],
- value="flac",
- interactive=True,
- )
- but1 = gr.Button(i18n("转换"), variant="primary")
- vc_output3 = gr.Textbox(label=i18n("输出信息"))
- but1.click(
- vc_multi,
- [
- spk_item,
- dir_input,
- opt_input,
- inputs,
- vc_transform1,
- f0method1,
- file_index3,
- file_index4,
- # file_big_npy2,
- index_rate2,
- filter_radius1,
- resample_sr1,
- rms_mix_rate1,
- protect1,
- format1,
- crepe_hop_length,
- ],
- [vc_output3],
- )
- but1.click(fn=lambda: easy_uploader.clear())
- with gr.TabItem("Download Model"):
- with gr.Row():
- url=gr.Textbox(label="Enter the URL to the Model:")
- with gr.Row():
- model = gr.Textbox(label="Name your model:")
- download_button=gr.Button("Download")
- with gr.Row():
- status_bar=gr.Textbox(label="")
- download_button.click(fn=download_from_url, inputs=[url, model], outputs=[status_bar])
- with gr.Row():
- gr.Markdown(
- """
- Made with ❤️ by [Alice Oliveira](https://github.com/aliceoq) | Hosted with ❤️ by [Mateus Elias](https://github.com/mateuseap)
- """
- )
-
- def has_two_files_in_pretrained_folder():
- pretrained_folder = "./pretrained/"
- if not os.path.exists(pretrained_folder):
- return False
-
- files_in_folder = os.listdir(pretrained_folder)
- num_files = len(files_in_folder)
- return num_files >= 2
-
- if has_two_files_in_pretrained_folder():
- print("Pretrained weights are downloaded. Training tab enabled!\n-------------------------------")
- with gr.TabItem("Train", visible=False):
- with gr.Row():
- with gr.Column():
- exp_dir1 = gr.Textbox(label="Voice Name:", value="My-Voice")
- sr2 = gr.Radio(
- label=i18n("目标采样率"),
- choices=["40k", "48k"],
- value="40k",
- interactive=True,
- visible=False
- )
- if_f0_3 = gr.Radio(
- label=i18n("模型是否带音高指导(唱歌一定要, 语音可以不要)"),
- choices=[True, False],
- value=True,
- interactive=True,
- visible=False
- )
- version19 = gr.Radio(
- label="RVC version",
- choices=["v1", "v2"],
- value="v2",
- interactive=True,
- visible=False,
- )
- np7 = gr.Slider(
- minimum=0,
- maximum=config.n_cpu,
- step=1,
- label="# of CPUs for data processing (Leave as it is)",
- value=config.n_cpu,
- interactive=True,
- visible=True
- )
- trainset_dir4 = gr.Textbox(label="Path to your dataset (audios, not zip):", value="./dataset")
- easy_uploader = gr.Files(label='OR Drop your audios here. They will be uploaded in your dataset path above.',file_types=['audio'])
- but1 = gr.Button("1. Process The Dataset", variant="primary")
- info1 = gr.Textbox(label="Status (wait until it says 'end preprocess'):", value="")
- easy_uploader.upload(fn=upload_to_dataset, inputs=[easy_uploader, trainset_dir4], outputs=[info1])
- but1.click(
- preprocess_dataset, [trainset_dir4, exp_dir1, sr2, np7], [info1]
- )
- with gr.Column():
- spk_id5 = gr.Slider(
- minimum=0,
- maximum=4,
- step=1,
- label=i18n("请指定说话人id"),
- value=0,
- interactive=True,
- visible=False
- )
- with gr.Accordion('GPU Settings', open=False, visible=False):
- gpus6 = gr.Textbox(
- label=i18n("以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2"),
- value=gpus,
- interactive=True,
- visible=False
- )
- gpu_info9 = gr.Textbox(label=i18n("显卡信息"), value=gpu_info)
- f0method8 = gr.Radio(
- label=i18n(
- "选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢"
- ),
- choices=["harvest","crepe", "mangio-crepe", "rmvpe"], # Fork feature: Crepe on f0 extraction for training.
- value="rmvpe",
- interactive=True,
- )
-
- extraction_crepe_hop_length = gr.Slider(
- minimum=1,
- maximum=512,
- step=1,
- label=i18n("crepe_hop_length"),
- value=128,
- interactive=True,
- visible=False,
- )
- f0method8.change(fn=whethercrepeornah, inputs=[f0method8], outputs=[extraction_crepe_hop_length])
- but2 = gr.Button("2. Pitch Extraction", variant="primary")
- info2 = gr.Textbox(label="Status(Check the Colab Notebook's cell output):", value="", max_lines=8)
- but2.click(
- extract_f0_feature,
- [gpus6, np7, f0method8, if_f0_3, exp_dir1, version19, extraction_crepe_hop_length],
- [info2],
- )
- with gr.Row():
- with gr.Column():
- total_epoch11 = gr.Slider(
- minimum=1,
- maximum=5000,
- step=10,
- label="Total # of training epochs (IF you choose a value too high, your model will sound horribly overtrained.):",
- value=250,
- interactive=True,
- )
- butstop = gr.Button(
- "Stop Training",
- variant='primary',
- visible=False,
- )
- but3 = gr.Button("3. Train Model", variant="primary", visible=True)
-
- but3.click(fn=stoptraining, inputs=[gr.Number(value=0, visible=False)], outputs=[but3, butstop])
- butstop.click(fn=stoptraining, inputs=[gr.Number(value=1, visible=False)], outputs=[butstop, but3])
-
-
- but4 = gr.Button("4.Train Index", variant="primary")
- info3 = gr.Textbox(label="Status(Check the Colab Notebook's cell output):", value="", max_lines=10)
- with gr.Accordion("Training Preferences (You can leave these as they are)", open=False):
- #gr.Markdown(value=i18n("step3: 填写训练设置, 开始训练模型和索引"))
- with gr.Column():
- save_epoch10 = gr.Slider(
- minimum=1,
- maximum=200,
- step=1,
- label="Backup every X amount of epochs:",
- value=10,
- interactive=True,
- )
- batch_size12 = gr.Slider(
- minimum=1,
- maximum=40,
- step=1,
- label="Batch Size (LEAVE IT unless you know what you're doing!):",
- value=default_batch_size,
- interactive=True,
- )
- if_save_latest13 = gr.Checkbox(
- label="Save only the latest '.ckpt' file to save disk space.",
- value=True,
- interactive=True,
- )
- if_cache_gpu17 = gr.Checkbox(
- label="Cache all training sets to GPU memory. Caching small datasets (less than 10 minutes) can speed up training, but caching large datasets will consume a lot of GPU memory and may not provide much speed improvement.",
- value=False,
- interactive=True,
- )
- if_save_every_weights18 = gr.Checkbox(
- label="Save a small final model to the 'weights' folder at each save point.",
- value=True,
- interactive=True,
- )
- zip_model = gr.Button('5. Download Model')
- zipped_model = gr.Files(label='Your Model and Index file can be downloaded here:')
- zip_model.click(fn=zip_downloader, inputs=[exp_dir1], outputs=[zipped_model, info3])
- with gr.Group():
- with gr.Accordion("Base Model Locations:", open=False, visible=False):
- pretrained_G14 = gr.Textbox(
- label=i18n("加载预训练底模G路径"),
- value="pretrained_v2/f0G40k.pth",
- interactive=True,
- )
- pretrained_D15 = gr.Textbox(
- label=i18n("加载预训练底模D路径"),
- value="pretrained_v2/f0D40k.pth",
- interactive=True,
- )
- gpus16 = gr.Textbox(
- label=i18n("以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2"),
- value=gpus,
- interactive=True,
- )
- sr2.change(
- change_sr2,
- [sr2, if_f0_3, version19],
- [pretrained_G14, pretrained_D15, version19],
- )
- version19.change(
- change_version19,
- [sr2, if_f0_3, version19],
- [pretrained_G14, pretrained_D15],
- )
- if_f0_3.change(
- change_f0,
- [if_f0_3, sr2, version19],
- [f0method8, pretrained_G14, pretrained_D15],
- )
- but5 = gr.Button(i18n("一键训练"), variant="primary", visible=False)
- but3.click(
- click_train,
- [
- exp_dir1,
- sr2,
- if_f0_3,
- spk_id5,
- save_epoch10,
- total_epoch11,
- batch_size12,
- if_save_latest13,
- pretrained_G14,
- pretrained_D15,
- gpus16,
- if_cache_gpu17,
- if_save_every_weights18,
- version19,
- ],
- [
- info3,
- butstop,
- but3,
- ],
- )
- but4.click(train_index, [exp_dir1, version19], info3)
- but5.click(
- train1key,
- [
- exp_dir1,
- sr2,
- if_f0_3,
- trainset_dir4,
- spk_id5,
- np7,
- f0method8,
- save_epoch10,
- total_epoch11,
- batch_size12,
- if_save_latest13,
- pretrained_G14,
- pretrained_D15,
- gpus16,
- if_cache_gpu17,
- if_save_every_weights18,
- version19,
- extraction_crepe_hop_length
- ],
- info3,
- )
-
- else:
- print(
- "Pretrained weights not downloaded. Disabling training tab.\n"
- "Wondering how to train a voice? Visit here for the RVC model training guide: https://t.ly/RVC_Training_Guide\n"
- "-------------------------------\n"
- )
-
- app.queue(concurrency_count=511, max_size=1022).launch(share=False, quiet=True)
-#endregion
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/mask_scoring_rcnn.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/mask_scoring_rcnn.py
deleted file mode 100644
index b6252b6e1d234a201725342a5780fade7e21957c..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/mask_scoring_rcnn.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from ..builder import DETECTORS
-from .two_stage import TwoStageDetector
-
-
-@DETECTORS.register_module()
-class MaskScoringRCNN(TwoStageDetector):
- """Mask Scoring RCNN.
-
- https://arxiv.org/abs/1903.00241
- """
-
- def __init__(self,
- backbone,
- rpn_head,
- roi_head,
- train_cfg,
- test_cfg,
- neck=None,
- pretrained=None):
- super(MaskScoringRCNN, self).__init__(
- backbone=backbone,
- neck=neck,
- rpn_head=rpn_head,
- roi_head=roi_head,
- train_cfg=train_cfg,
- test_cfg=test_cfg,
- pretrained=pretrained)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r101-d8_769x769_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r101-d8_769x769_40k_cityscapes.py
deleted file mode 100644
index 5c44ebcaf36075e67208c5f033d1e5f9a78dda4e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r101-d8_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './apcnet_r50-d8_769x769_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/GuXiaoBei/wechat-chatbot/bot/baidu/baidu_unit_bot.py b/spaces/GuXiaoBei/wechat-chatbot/bot/baidu/baidu_unit_bot.py
deleted file mode 100644
index a84ac57c9b7843a00e689b662807c9ec4710d6af..0000000000000000000000000000000000000000
--- a/spaces/GuXiaoBei/wechat-chatbot/bot/baidu/baidu_unit_bot.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# encoding:utf-8
-
-import requests
-from bot.bot import Bot
-
-
-# Baidu UNIT dialogue API (usable, but relatively limited)
-class BaiduUnitBot(Bot):
- def reply(self, query, context=None):
- token = self.get_token()
- url = 'https://aip.baidubce.com/rpc/2.0/unit/service/v3/chat?access_token=' + token
- post_data = "{\"version\":\"3.0\",\"service_id\":\"S73177\",\"session_id\":\"\",\"log_id\":\"7758521\",\"skill_ids\":[\"1221886\"],\"request\":{\"terminal_id\":\"88888\",\"query\":\"" + query + "\", \"hyper_params\": {\"chat_custom_bot_profile\": 1}}}"
- print(post_data)
- headers = {'content-type': 'application/x-www-form-urlencoded'}
- response = requests.post(url, data=post_data.encode(), headers=headers)
- if response:
- return response.json()['result']['context']['SYS_PRESUMED_HIST'][1]
-
- def get_token(self):
- access_key = 'YOUR_ACCESS_KEY'
- secret_key = 'YOUR_SECRET_KEY'
- host = 'https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=' + access_key + '&client_secret=' + secret_key
- response = requests.get(host)
- if response:
- print(response.json())
- return response.json()['access_token']
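The request body in reply() above is assembled by string concatenation, which breaks as soon as the query contains a quote or backslash. A small sketch of building the same payload with json.dumps (field values copied from the code above; this is an alternative sketch, not the original author's implementation):

```python
import json

def build_unit_payload(query: str) -> str:
    """Build the Baidu UNIT chat payload with proper JSON escaping."""
    payload = {
        "version": "3.0",
        "service_id": "S73177",
        "session_id": "",
        "log_id": "7758521",
        "skill_ids": ["1221886"],
        "request": {
            "terminal_id": "88888",
            "query": query,
            "hyper_params": {"chat_custom_bot_profile": 1},
        },
    }
    return json.dumps(payload, ensure_ascii=False)
```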
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/estimators.py b/spaces/HaHaBill/LandShapes-Antarctica/estimators.py
deleted file mode 100644
index 470858c8edc85a64f035fe12ceaf37182ecd497f..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/estimators.py
+++ /dev/null
@@ -1,218 +0,0 @@
-# Copyright 2020 Erik Härkönen. All rights reserved.
-# This file is licensed to you under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License. You may obtain a copy
-# of the License at http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software distributed under
-# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS
-# OF ANY KIND, either express or implied. See the License for the specific language
-# governing permissions and limitations under the License.
-
-from sklearn.decomposition import FastICA, PCA, IncrementalPCA, MiniBatchSparsePCA, SparsePCA, KernelPCA
-import fbpca
-import numpy as np
-import itertools
-from types import SimpleNamespace
-
-# ICA
-class ICAEstimator():
- def __init__(self, n_components):
- self.n_components = n_components
- self.maxiter = 10000
- self.whiten = True # ICA: whitening is essential, should not be skipped
- self.transformer = FastICA(n_components, random_state=0, whiten=self.whiten, max_iter=self.maxiter)
- self.batch_support = False
- self.stdev = np.zeros((n_components,))
- self.total_var = 0.0
-
- def get_param_str(self):
- return "ica_c{}{}".format(self.n_components, '_w' if self.whiten else '')
-
- def fit(self, X):
- self.transformer.fit(X)
- if self.transformer.n_iter_ >= self.maxiter:
- raise RuntimeError(f'FastICA did not converge (N={X.shape[0]}, it={self.maxiter})')
-
- # Normalize components
- self.transformer.components_ /= np.sqrt(np.sum(self.transformer.components_**2, axis=-1, keepdims=True))
-
- # Save variance for later
- self.total_var = X.var(axis=0).sum()
-
- # Compute projected standard deviations
- self.stdev = np.dot(self.transformer.components_, X.T).std(axis=1)
-
- # Sort components based on explained variance
- idx = np.argsort(self.stdev)[::-1]
- self.stdev = self.stdev[idx]
- self.transformer.components_[:] = self.transformer.components_[idx]
-
- def get_components(self):
- var_ratio = self.stdev**2 / self.total_var
- return self.transformer.components_, self.stdev, var_ratio # ICA outputs are not normalized
-
-# Incremental PCA
-class IPCAEstimator():
- def __init__(self, n_components):
- self.n_components = n_components
- self.whiten = False
- self.transformer = IncrementalPCA(n_components, whiten=self.whiten, batch_size=max(100, 2*n_components))
- self.batch_support = True
-
- def get_param_str(self):
- return "ipca_c{}{}".format(self.n_components, '_w' if self.whiten else '')
-
- def fit(self, X):
- self.transformer.fit(X)
-
- def fit_partial(self, X):
- try:
- self.transformer.partial_fit(X)
- self.transformer.n_samples_seen_ = \
- self.transformer.n_samples_seen_.astype(np.int64) # avoid overflow
- return True
- except ValueError as e:
- print(f'\nIPCA error:', e)
- return False
-
- def get_components(self):
- stdev = np.sqrt(self.transformer.explained_variance_) # already sorted
- var_ratio = self.transformer.explained_variance_ratio_
- return self.transformer.components_, stdev, var_ratio # PCA outputs are normalized
-
-# Standard PCA
-class PCAEstimator():
- def __init__(self, n_components):
- self.n_components = n_components
- self.solver = 'full'
- self.transformer = PCA(n_components, svd_solver=self.solver)
- self.batch_support = False
-
- def get_param_str(self):
- return f"pca-{self.solver}_c{self.n_components}"
-
- def fit(self, X):
- self.transformer.fit(X)
-
- # Save variance for later
- self.total_var = X.var(axis=0).sum()
-
- # Compute projected standard deviations
- self.stdev = np.dot(self.transformer.components_, X.T).std(axis=1)
-
- # Sort components based on explained variance
- idx = np.argsort(self.stdev)[::-1]
- self.stdev = self.stdev[idx]
- self.transformer.components_[:] = self.transformer.components_[idx]
-
- # Check orthogonality
- dotps = [np.dot(*self.transformer.components_[[i, j]])
- for (i, j) in itertools.combinations(range(self.n_components), 2)]
- if not np.allclose(dotps, 0, atol=1e-4):
- print('PCA components not orthogonal, max dot', np.abs(dotps).max())
-
- self.transformer.mean_ = X.mean(axis=0, keepdims=True)
-
- def get_components(self):
- var_ratio = self.stdev**2 / self.total_var
- return self.transformer.components_, self.stdev, var_ratio
-
-# Facebook's PCA
-# Good default choice: very fast and accurate.
-# Very high sample counts won't fit into RAM,
-# in which case IncrementalPCA must be used.
-class FacebookPCAEstimator():
- def __init__(self, n_components):
- self.n_components = n_components
- self.transformer = SimpleNamespace()
- self.batch_support = False
- self.n_iter = 2
- self.l = 2*self.n_components
-
- def get_param_str(self):
- return "fbpca_c{}_it{}_l{}".format(self.n_components, self.n_iter, self.l)
-
- def fit(self, X):
- U, s, Va = fbpca.pca(X, k=self.n_components, n_iter=self.n_iter, raw=True, l=self.l)
- self.transformer.components_ = Va
-
- # Save variance for later
- self.total_var = X.var(axis=0).sum()
-
- # Compute projected standard deviations
- self.stdev = np.dot(self.transformer.components_, X.T).std(axis=1)
-
- # Sort components based on explained variance
- idx = np.argsort(self.stdev)[::-1]
- self.stdev = self.stdev[idx]
- self.transformer.components_[:] = self.transformer.components_[idx]
-
- # Check orthogonality
- dotps = [np.dot(*self.transformer.components_[[i, j]])
- for (i, j) in itertools.combinations(range(self.n_components), 2)]
- if not np.allclose(dotps, 0, atol=1e-4):
- print('FBPCA components not orthogonal, max dot', np.abs(dotps).max())
-
- self.transformer.mean_ = X.mean(axis=0, keepdims=True)
-
- def get_components(self):
- var_ratio = self.stdev**2 / self.total_var
- return self.transformer.components_, self.stdev, var_ratio
-
-# Sparse PCA
-# The algorithm is online along the features direction, not the samples direction
-# => no partial_fit
-class SPCAEstimator():
- def __init__(self, n_components, alpha=10.0):
- self.n_components = n_components
- self.whiten = False
- self.alpha = alpha # higher alpha => sparser components
- #self.transformer = MiniBatchSparsePCA(n_components, alpha=alpha, n_iter=100,
- # batch_size=max(20, n_components//5), random_state=0, normalize_components=True)
- self.transformer = SparsePCA(n_components, alpha=alpha, ridge_alpha=0.01,
- max_iter=100, random_state=0, n_jobs=-1, normalize_components=True) # TODO: warm start using PCA result?
- self.batch_support = False # maybe through memmap and HDD-stored tensor
- self.stdev = np.zeros((n_components,))
- self.total_var = 0.0
-
- def get_param_str(self):
- return "spca_c{}_a{}{}".format(self.n_components, self.alpha, '_w' if self.whiten else '')
-
- def fit(self, X):
- self.transformer.fit(X)
-
- # Save variance for later
- self.total_var = X.var(axis=0).sum()
-
- # Compute projected standard deviations
- # NB: cannot simply project with dot product!
- self.stdev = self.transformer.transform(X).std(axis=0) # X = (n_samples, n_features)
-
- # Sort components based on explained variance
- idx = np.argsort(self.stdev)[::-1]
- self.stdev = self.stdev[idx]
- self.transformer.components_[:] = self.transformer.components_[idx]
-
- # Check orthogonality
- dotps = [np.dot(*self.transformer.components_[[i, j]])
- for (i, j) in itertools.combinations(range(self.n_components), 2)]
- if not np.allclose(dotps, 0, atol=1e-4):
- print('SPCA components not orthogonal, max dot', np.abs(dotps).max())
-
- def get_components(self):
- var_ratio = self.stdev**2 / self.total_var
- return self.transformer.components_, self.stdev, var_ratio # SPCA outputs are normalized
-
-def get_estimator(name, n_components, alpha):
- if name == 'pca':
- return PCAEstimator(n_components)
- if name == 'ipca':
- return IPCAEstimator(n_components)
- elif name == 'fbpca':
- return FacebookPCAEstimator(n_components)
- elif name == 'ica':
- return ICAEstimator(n_components)
- elif name == 'spca':
- return SPCAEstimator(n_components, alpha)
- else:
- raise RuntimeError('Unknown estimator')
\ No newline at end of file
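All estimators above expose the same fit / get_components contract and are selected through get_estimator. A minimal usage sketch on random data (the shapes and the 'ipca' choice are illustrative assumptions):

```python
import numpy as np

X = np.random.randn(10_000, 512).astype(np.float32)  # (n_samples, n_features), placeholder data

estimator = get_estimator('ipca', n_components=64, alpha=10.0)
if estimator.batch_support:
    for start in range(0, X.shape[0], 2_000):
        estimator.fit_partial(X[start:start + 2_000])
else:
    estimator.fit(X)

components, stdev, var_ratio = estimator.get_components()
print(components.shape, var_ratio[:5])  # (64, 512) and the top explained-variance ratios
```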
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/data/bert_dataloader/auto_split.sh b/spaces/HaloMaster/chinesesummary/fengshen/data/bert_dataloader/auto_split.sh
deleted file mode 100644
index 0a0f66d01df8f1728e44d9deb1d37e0396c5143a..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/data/bert_dataloader/auto_split.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-files=`find $1 -type f -size +1024M`
-
-for p in $files
-do
-echo "processing $p"
-name=`basename $p .json`
-file=`dirname $p`
-split -a 2 -C 300M $p $file/$name- && ls|grep -E "(-[a-zA-Z]{2})" |xargs -n1 -i{} mv {} {}.json
-rm -f $p
-done
\ No newline at end of file
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/utils.py
deleted file mode 100644
index 5aaddf6421ab7fa417af508005671a0ed821c701..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/utils.py
+++ /dev/null
@@ -1,126 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import gc
-import os
-import random
-import shutil
-import numpy as np
-
-import torch
-import tqdm
-from examples.textless_nlp.gslm.speech2unit.pretrained.cpc_feature_reader import (
- CpcFeatureReader,
-)
-from examples.textless_nlp.gslm.speech2unit.pretrained.hubert_feature_reader import (
- HubertFeatureReader,
-)
-from examples.textless_nlp.gslm.speech2unit.pretrained.logmel_feature_reader import (
- LogMelFeatureReader,
-)
-from examples.textless_nlp.gslm.speech2unit.pretrained.w2v2_feature_reader import (
- Wav2VecFeatureReader,
-)
-
-
-def get_feature_reader(feature_type):
- if feature_type == "logmel":
- return LogMelFeatureReader
- elif feature_type == "hubert":
- return HubertFeatureReader
- elif feature_type == "w2v2":
- return Wav2VecFeatureReader
- elif feature_type == "cpc":
- return CpcFeatureReader
- else:
- raise NotImplementedError(f"{feature_type} is not supported.")
-
-
-def get_feature_iterator(
- feature_type, checkpoint_path, layer, manifest_path, sample_pct
-):
- feature_reader_cls = get_feature_reader(feature_type)
- with open(manifest_path, "r") as fp:
- lines = fp.read().split("\n")
- root = lines.pop(0).strip()
- file_path_list = [
- os.path.join(root, line.split("\t")[0])
- for line in lines
- if len(line) > 0
- ]
- if sample_pct < 1.0:
- file_path_list = random.sample(
- file_path_list, int(sample_pct * len(file_path_list))
- )
- num_files = len(file_path_list)
- reader = feature_reader_cls(
- checkpoint_path=checkpoint_path, layer=layer
- )
-
- def iterate():
- for file_path in file_path_list:
- feats = reader.get_feats(file_path)
- yield feats.cpu().numpy()
-
- return iterate, num_files
-
-
-def get_features(
- feature_type, checkpoint_path, layer, manifest_path, sample_pct, flatten
-):
- generator, num_files = get_feature_iterator(
- feature_type=feature_type,
- checkpoint_path=checkpoint_path,
- layer=layer,
- manifest_path=manifest_path,
- sample_pct=sample_pct,
- )
- iterator = generator()
-
- features_list = []
- for features in tqdm.tqdm(iterator, total=num_files):
- features_list.append(features)
-
- # Explicit clean up
- del iterator
- del generator
- gc.collect()
- torch.cuda.empty_cache()
-
- if flatten:
- return np.concatenate(features_list)
-
- return features_list
-
-
-def get_and_dump_features(
- feature_type,
- checkpoint_path,
- layer,
- manifest_path,
- sample_pct,
- flatten,
- out_features_path,
-):
- # Feature extraction
- features_batch = get_features(
- feature_type=feature_type,
- checkpoint_path=checkpoint_path,
- layer=layer,
- manifest_path=manifest_path,
- sample_pct=sample_pct,
- flatten=flatten,
- )
-
- # Save features
- out_dir_path = os.path.dirname(out_features_path)
- os.makedirs(out_dir_path, exist_ok=True)
- shutil.copyfile(
- manifest_path,
- os.path.join(out_dir_path, os.path.basename(manifest_path)),
- )
- np.save(out_features_path, features_batch)
-
- return features_batch
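get_and_dump_features above ties together reader selection, manifest sampling, and saving. A hedged usage sketch follows; every path below is a placeholder, not a file shipped with this repository:

```python
features = get_and_dump_features(
    feature_type="hubert",
    checkpoint_path="checkpoints/hubert_base_ls960.pt",   # assumed checkpoint location
    layer=6,
    manifest_path="manifests/train.tsv",                  # assumed manifest location
    sample_pct=0.1,
    flatten=True,
    out_features_path="features/train_hubert_l6.npy",
)
print(features.shape)
```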
diff --git a/spaces/HiepPhuocSS/TimeSFormer/README.md b/spaces/HiepPhuocSS/TimeSFormer/README.md
deleted file mode 100644
index 260f89463202c48a8f1d58c10bc609c8ffb463d0..0000000000000000000000000000000000000000
--- a/spaces/HiepPhuocSS/TimeSFormer/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: TimeSFormer
-emoji: 🐢
-colorFrom: red
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/HuggingFaceM4/OBELICS-Interactive-Map/index.html b/spaces/HuggingFaceM4/OBELICS-Interactive-Map/index.html
deleted file mode 100644
index 6714b7dafa99b902f881d0358ba032d192bc9566..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceM4/OBELICS-Interactive-Map/index.html
+++ /dev/null
@@ -1,46 +0,0 @@
-[index.html: page markup lost during extraction; recoverable content follows]
-Title: OBELICS Interactive Map
-Body: This is an embedded interactive visualization of (a subset of) OBELICS powered by Nomic AI. You will find the original map here.
\ No newline at end of file
diff --git a/spaces/Illumotion/Koboldcpp/class.py b/spaces/Illumotion/Koboldcpp/class.py
deleted file mode 100644
index 76ad123d7dc7567dd41b4b5f385cfa4b87ff2c83..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/class.py
+++ /dev/null
@@ -1,314 +0,0 @@
-## KoboldCpp based GGML Backend by Concedo
-## For use as a custom backend in KoboldAI United
-## Not intended for general use.
-
-from __future__ import annotations
-
-import time, json
-import torch
-import requests
-import numpy as np
-from typing import List, Optional, Union
-import os
-from . import koboldcpp
-
-import utils
-from logger import logger
-from modeling.inference_model import (
- GenerationResult,
- GenerationSettings,
- InferenceModel,
-)
-
-model_backend_name = "koboldcpp" #specific instead of ggml
-model_backend_type = "ggml" #This should be a generic name in case multiple model backends are compatible (think Hugging Face Custom and Basic Hugging Face)
-
-kcpp_backend_loaded = False
-
-class KoboldCppException(Exception):
- """To be used for errors on cpp side of KoboldCpp."""
-
-class KcppArgsObject:
- def __init__(self, **kwargs):
- self.__dict__.update(kwargs)
-
-class model_backend(InferenceModel):
- def __init__(self) -> None:
- super().__init__()
-
- def is_valid(self, model_name, model_path, menu_path):
-
- foundfile = False
- try:
- files = os.listdir(model_path)
- foundfile = len([filename for filename in files if (("ggml" in filename.lower() and ".bin" in filename.lower()) or ".gguf" in filename.lower())])>0
- except:
- pass
- return foundfile
-
- def get_requested_parameters(self, model_name, model_path, menu_path, parameters = {}):
-
- self.kcpp_threads = 5
- self.model_name = "GGML_Model"
- self.kcpp_ctxsize = 2048
- self.kcpp_blasbatchsize = 512
- self.kcpp_gpulayers = 0
- self.kcpp_smartcontext = False
- self.kcpp_ropescale = 0.0
- self.kcpp_ropebase = 10000.0
- self.kcpp_useclblast = None
- self.kcpp_usecublas = None
- self.kcpp_noblas = False
- self.kcpp_noavx2 = False
- self.kcpp_nommap = False
- self.kcpp_debugmode = 0
- self.kcpp_tensor_split_str = ""
- self.kcpp_tensor_split = None
-
- files = os.listdir(model_path)
- foundfiles = [filename for filename in files if (("ggml" in filename.lower() and ".bin" in filename.lower()) or ".gguf" in filename.lower())]
-
- requested_parameters = []
- foldermdls = []
- for ff in foundfiles:
- foldermdls.append({'text': ff, 'value': os.path.join(model_path, ff)})
- requested_parameters.append({
- "uitype": "dropdown",
- "unit": "string",
- "label": "GGML DataFile Name",
- "id": "kcpp_filename",
- "default": os.path.join(model_path, foundfiles[0]) if len(foundfiles)>0 else model_name,
- "check": {"value": "", 'check': "!="},
- "tooltip": "Actual GGML DataFile Name",
- "menu_path": "",
- "refresh_model_inputs": False,
- "extra_classes": "",
- 'children': foldermdls
- })
- requested_parameters.append({
- "uitype": "dropdown",
- "unit": "int",
- "label": "KoboldCpp Accelerator",
- "id": "kcpp_accelerator",
- "default": 0,
- "check": {"value": "", 'check': "!="},
- 'multiple': False,
- "tooltip": "KoboldCpp Accelerator",
- "menu_path": "",
- "refresh_model_inputs": False,
- "extra_classes": "",
- 'children': [{'text': 'Use No BLAS', 'value': 0}, {'text': 'Use OpenBLAS', 'value': 1}, {'text': 'Use CuBLAS', 'value': 2},
- {'text': 'Use CLBLast GPU #1', 'value': 3},{'text': 'Use CLBLast GPU #2', 'value': 4},{'text': 'Use CLBLast GPU #3', 'value': 5}
- ,{'text': 'NoAVX2 Mode (Old CPU)', 'value': 6},{'text': 'Failsafe Mode (Old CPU)', 'value': 7}],
- })
- requested_parameters.append({
- "uitype": "text",
- "unit": "int",
- "label": "Threads",
- "id": "kcpp_threads",
- "default": self.kcpp_threads,
- "check": {"value": "", 'check': "!="},
- "tooltip": "Thread Count",
- "menu_path": "",
- "refresh_model_inputs": False,
- "extra_classes": ""
- })
-
- requested_parameters.append({
- "uitype": "text",
- "unit": "int",
- "label": "Max Context Size",
- "id": "kcpp_ctxsize",
- "default": self.kcpp_ctxsize,
- "check": {"value": "", 'check': "!="},
- "tooltip": "Max Context Size",
- "menu_path": "",
- "refresh_model_inputs": False,
- "extra_classes": ""
- })
- requested_parameters.append({
- "uitype": "text",
- "unit": "int",
- "label": "BLAS Batch Size",
- "id": "kcpp_blasbatchsize",
- "default": self.kcpp_blasbatchsize,
- "check": {"value": "", 'check': "!="},
- "tooltip": "BLAS Batch Size",
- "menu_path": "",
- "refresh_model_inputs": False,
- "extra_classes": ""
- })
- requested_parameters.append({
- "uitype": "text",
- "unit": "int",
- "label": "GPU Layers",
- "id": "kcpp_gpulayers",
- "default": self.kcpp_gpulayers,
- "check": {"value": "", 'check': "!="},
- "tooltip": "GPU Layers",
- "menu_path": "",
- "refresh_model_inputs": False,
- "extra_classes": ""
- })
- requested_parameters.append({
- "uitype": "text",
- "unit": "int",
- "label": "Rope Scale",
- "id": "kcpp_ropescale",
- "default": self.kcpp_ropescale,
- "check": {"value": "", 'check': "!="},
- "tooltip": "Rope Scale",
- "menu_path": "",
- "refresh_model_inputs": False,
- "extra_classes": ""
- })
- requested_parameters.append({
- "uitype": "text",
- "unit": "int",
- "label": "Rope Base",
- "id": "kcpp_ropebase",
- "default": self.kcpp_ropebase,
- "check": {"value": "", 'check': "!="},
- "tooltip": "Rope Base",
- "menu_path": "",
- "refresh_model_inputs": False,
- "extra_classes": ""
- })
- requested_parameters.append({
- "uitype": "dropdown",
- "unit": "int",
- "label": "Smart Context",
- "id": "kcpp_smartcontext",
- "default": self.kcpp_smartcontext,
- "check": {"value": "", 'check': "!="},
- 'multiple': False,
- "tooltip": "Smart Context",
- "menu_path": "",
- "refresh_model_inputs": False,
- "extra_classes": "",
- 'children': [{'text': 'False', 'value': False}, {'text': 'True', 'value': True}],
- })
- requested_parameters.append({
- "uitype": "text",
- "unit": "text",
- "label": "GPU ID",
- "id": "kcpp_tensor_split_str",
- "default": "1",
- "check": {"value": "", 'check': "!="},
- "tooltip": "Which GPU's do we use? For example:1 2",
- "menu_path": "",
- "refresh_model_inputs": False,
- "extra_classes": ""
- })
- requested_parameters.append({
- "uitype": "dropdown",
- "unit": "int",
- "label": "Debug Mode",
- "id": "kcpp_debugmode",
- "default": self.kcpp_debugmode,
- "check": {"value": "", 'check': "!="},
- 'multiple': False,
- "tooltip": "Debug Mode",
- "menu_path": "",
- "refresh_model_inputs": False,
- "extra_classes": "",
- 'children': [{'text': 'False', 'value': 0}, {'text': 'True', 'value': 1}],
- })
- return requested_parameters
-
- def set_input_parameters(self, parameters):
- self.kcpp_threads = parameters["kcpp_threads"]
- self.kcpp_filename = parameters["kcpp_filename"]
- self.kcpp_ctxsize = parameters["kcpp_ctxsize"]
- self.kcpp_blasbatchsize = parameters["kcpp_blasbatchsize"]
- self.kcpp_gpulayers = parameters["kcpp_gpulayers"]
- self.kcpp_smartcontext = parameters["kcpp_smartcontext"]
- self.kcpp_ropescale = parameters["kcpp_ropescale"]
- self.kcpp_ropebase = parameters["kcpp_ropebase"]
- self.kcpp_debugmode = parameters["kcpp_debugmode"]
- self.kcpp_tensor_split_str = parameters["kcpp_tensor_split_str"]
- if self.kcpp_tensor_split_str and self.kcpp_tensor_split_str!="":
- splits = self.kcpp_tensor_split_str.split()
- self.kcpp_tensor_split = []
- for s in splits:
- self.kcpp_tensor_split.append(int(s))
- print(self.kcpp_tensor_split)
-
- accel = parameters["kcpp_accelerator"]
- if accel==0:
- self.kcpp_noblas = True
- elif accel==1:
- pass
- elif accel==2:
- self.kcpp_usecublas = ["normal"]
- elif accel==3:
- self.kcpp_useclblast = [0,0]
- elif accel==4:
- self.kcpp_useclblast = [1,0]
- elif accel==5:
- self.kcpp_useclblast = [0,1]
- elif accel==6:
- self.kcpp_noavx2 = True
- elif accel==7:
- self.kcpp_noavx2 = True
- self.kcpp_noblas = True
- self.kcpp_nommap = True
- pass
-
- def unload(self):
- print("Attemping to unload library")
- koboldcpp.unload_libs()
- global kcpp_backend_loaded
- kcpp_backend_loaded = False
- pass
-
- def _load(self, save_model: bool, initial_load: bool) -> None:
- global kcpp_backend_loaded
- self.tokenizer = self._get_tokenizer("gpt2")
- if not kcpp_backend_loaded:
- kcppargs = KcppArgsObject(model=self.kcpp_filename, model_param=self.kcpp_filename,
- port=5001, port_param=5001, host='', launch=False, lora=None, threads=self.kcpp_threads, blasthreads=self.kcpp_threads,
- highpriority=False, contextsize=self.kcpp_ctxsize, blasbatchsize=self.kcpp_blasbatchsize, ropeconfig=[self.kcpp_ropescale, self.kcpp_ropebase],
- smartcontext=self.kcpp_smartcontext, bantokens=None, forceversion=0, nommap=self.kcpp_nommap,
- usemlock=False, noavx2=self.kcpp_noavx2, debugmode=self.kcpp_debugmode, skiplauncher=True, hordeconfig=None, noblas=self.kcpp_noblas,
- useclblast=self.kcpp_useclblast, usecublas=self.kcpp_usecublas, gpulayers=self.kcpp_gpulayers, tensor_split=self.kcpp_tensor_split, config=None,
- onready='', multiuser=False, foreground=False)
-
- koboldcpp.main(kcppargs,False) #initialize library without enabling Lite http server
- kcpp_backend_loaded = True
- pass
-
- def _save_settings(self):
- pass
-
- def _raw_generate(
- self,
- prompt_tokens: Union[List[int], torch.Tensor],
- max_new: int,
- gen_settings: GenerationSettings,
- single_line: bool = False,
- batch_count: int = 1,
- seed: Optional[int] = None,
- **kwargs,
- ) -> GenerationResult:
-
- decoded_prompt = utils.decodenewlines(self.tokenizer.decode(prompt_tokens))
-
- # Store context in memory to use it for comparison with generated content
- utils.koboldai_vars.lastctx = decoded_prompt
-
- genresult = koboldcpp.generate(decoded_prompt,max_new,utils.koboldai_vars.max_length,
- gen_settings.temp,int(gen_settings.top_k),gen_settings.top_a,gen_settings.top_p,
- gen_settings.typical,gen_settings.tfs,gen_settings.rep_pen,gen_settings.rep_pen_range,
- sampler_order=gen_settings.sampler_order,use_default_badwordsids=utils.koboldai_vars.use_default_badwordsids)
-
- outputs = [genresult]
- return GenerationResult(
- model=self,
- out_batches=np.array(
- [self.tokenizer.encode(x) for x in outputs]
- ),
- prompt=prompt_tokens,
- is_whole_generation=True,
- single_line=single_line,
- )
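The descriptors returned by get_requested_parameters map one-to-one onto the keys read back in set_input_parameters. A rough sketch of driving the backend programmatically, assuming the surrounding KoboldAI framework is importable (the model path and values are placeholders; kcpp_accelerator=1 selects OpenBLAS per the dropdown defined above):

```python
backend = model_backend()
backend.set_input_parameters({
    "kcpp_filename": "/models/my-model.gguf",  # hypothetical GGUF file
    "kcpp_threads": 5,
    "kcpp_ctxsize": 2048,
    "kcpp_blasbatchsize": 512,
    "kcpp_gpulayers": 0,
    "kcpp_smartcontext": False,
    "kcpp_ropescale": 0.0,
    "kcpp_ropebase": 10000.0,
    "kcpp_debugmode": 0,
    "kcpp_tensor_split_str": "",
    "kcpp_accelerator": 1,
})
```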
diff --git a/spaces/Illumotion/Koboldcpp/export_state_dict_checkpoint.py b/spaces/Illumotion/Koboldcpp/export_state_dict_checkpoint.py
deleted file mode 100644
index 59bb487905c0ebb77225bc4ed243d13f1a5740b8..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/export_state_dict_checkpoint.py
+++ /dev/null
@@ -1,129 +0,0 @@
-# this specific file adapted from https://github.com/tloen/alpaca-lora/blob/main/export_state_dict_checkpoint.py
-# under Apache 2.0 license https://raw.githubusercontent.com/tloen/alpaca-lora/main/LICENSE
-# todo: adapt to revert HF formats back to original PTH formats so ggml can convert them.
-
-import json
-import os
-
-import torch
-import transformers
-from peft import PeftModel
-from transformers import LlamaForCausalLM, LlamaTokenizer # noqa: E402
-
-BASE_MODEL = os.environ.get("BASE_MODEL", None)
-assert (
- BASE_MODEL
-), "Please specify a value for BASE_MODEL environment variable, e.g. `export BASE_MODEL=decapoda-research/llama-7b-hf`" # noqa: E501
-
-tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
-
-base_model = LlamaForCausalLM.from_pretrained(
- BASE_MODEL,
- load_in_8bit=False,
- torch_dtype=torch.float16,
- device_map={"": "cpu"},
-)
-
-lora_model = PeftModel.from_pretrained(
- base_model,
- "tloen/alpaca-lora-7b",
- device_map={"": "cpu"},
- torch_dtype=torch.float16,
-)
-
-# merge weights
-for layer in lora_model.base_model.model.model.layers:
- layer.self_attn.q_proj.merge_weights = True
- layer.self_attn.v_proj.merge_weights = True
-
-lora_model.train(False)
-
-lora_model_sd = lora_model.state_dict()
-
-params = {
- "dim": 4096,
- "multiple_of": 256,
- "n_heads": 32,
- "n_layers": 32,
- "norm_eps": 1e-06,
- "vocab_size": -1,
-}
-n_layers = params["n_layers"]
-n_heads = params["n_heads"]
-dim = params["dim"]
-dims_per_head = dim // n_heads
-base = 10000.0
-inv_freq = 1.0 / (
- base ** (torch.arange(0, dims_per_head, 2).float() / dims_per_head)
-)
-
-
-def permute(w):
- return (
- w.view(n_heads, dim // n_heads // 2, 2, dim)
- .transpose(1, 2)
- .reshape(dim, dim)
- )
-
-
-def unpermute(w):
- return (
- w.view(n_heads, 2, dim // n_heads // 2, dim)
- .transpose(1, 2)
- .reshape(dim, dim)
- )
-
-
-def translate_state_dict_key(k): # noqa: C901
- k = k.replace("base_model.model.", "")
- if k == "model.embed_tokens.weight":
- return "tok_embeddings.weight"
- elif k == "model.norm.weight":
- return "norm.weight"
- elif k == "lm_head.weight":
- return "output.weight"
- elif k.startswith("model.layers."):
- layer = k.split(".")[2]
- if k.endswith(".self_attn.q_proj.weight"):
- return f"layers.{layer}.attention.wq.weight"
- elif k.endswith(".self_attn.k_proj.weight"):
- return f"layers.{layer}.attention.wk.weight"
- elif k.endswith(".self_attn.v_proj.weight"):
- return f"layers.{layer}.attention.wv.weight"
- elif k.endswith(".self_attn.o_proj.weight"):
- return f"layers.{layer}.attention.wo.weight"
- elif k.endswith(".mlp.gate_proj.weight"):
- return f"layers.{layer}.feed_forward.w1.weight"
- elif k.endswith(".mlp.down_proj.weight"):
- return f"layers.{layer}.feed_forward.w2.weight"
- elif k.endswith(".mlp.up_proj.weight"):
- return f"layers.{layer}.feed_forward.w3.weight"
- elif k.endswith(".input_layernorm.weight"):
- return f"layers.{layer}.attention_norm.weight"
- elif k.endswith(".post_attention_layernorm.weight"):
- return f"layers.{layer}.ffn_norm.weight"
- elif k.endswith("rotary_emb.inv_freq") or "lora" in k:
- return None
- else:
- print(layer, k)
- raise NotImplementedError
- else:
- print(k)
- raise NotImplementedError
-
-
-new_state_dict = {}
-for k, v in lora_model_sd.items():
- new_k = translate_state_dict_key(k)
- if new_k is not None:
- if "wq" in new_k or "wk" in new_k:
- new_state_dict[new_k] = unpermute(v)
- else:
- new_state_dict[new_k] = v
-
-os.makedirs("./ckpt", exist_ok=True)
-
-torch.save(new_state_dict, "./ckpt/consolidated.00.pth")
-
-with open("./ckpt/params.json", "w") as f:
- json.dump(params, f)
\ No newline at end of file
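Once the script has written ./ckpt/consolidated.00.pth and ./ckpt/params.json, a quick sanity check of the exported checkpoint could look like the following sketch (key names taken from translate_state_dict_key above):

```python
import json
import torch

state_dict = torch.load("./ckpt/consolidated.00.pth", map_location="cpu")
print(len(state_dict), "tensors exported")
print(state_dict["tok_embeddings.weight"].shape)          # (vocab_size, dim)
print(state_dict["layers.0.attention.wq.weight"].shape)   # (dim, dim)

with open("./ckpt/params.json") as f:
    print(json.load(f))
```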
diff --git a/spaces/Intoval/privateChatGPT/modules/base_model.py b/spaces/Intoval/privateChatGPT/modules/base_model.py
deleted file mode 100644
index 9a164ea1fc32523bb2ca59caf95d361c26e88afb..0000000000000000000000000000000000000000
--- a/spaces/Intoval/privateChatGPT/modules/base_model.py
+++ /dev/null
@@ -1,550 +0,0 @@
-from __future__ import annotations
-from typing import TYPE_CHECKING, List
-
-import logging
-import json
-import commentjson as cjson
-import os
-import sys
-import requests
-import urllib3
-import traceback
-
-from tqdm import tqdm
-import colorama
-from duckduckgo_search import ddg
-import asyncio
-import aiohttp
-from enum import Enum
-
-from .presets import *
-from .llama_func import *
-from .utils import *
-from . import shared
-from .config import retrieve_proxy
-
-
-class ModelType(Enum):
- Unknown = -1
- OpenAI = 0
- ChatGLM = 1
- LLaMA = 2
- XMBot = 3
-
- @classmethod
- def get_type(cls, model_name: str):
- model_type = None
- model_name_lower = model_name.lower()
- if "gpt" in model_name_lower:
- model_type = ModelType.OpenAI
- elif "chatglm" in model_name_lower:
- model_type = ModelType.ChatGLM
- elif "llama" in model_name_lower or "alpaca" in model_name_lower:
- model_type = ModelType.LLaMA
- elif "xmchat" in model_name_lower:
- model_type = ModelType.XMBot
- else:
- model_type = ModelType.Unknown
- return model_type
-
-
-class BaseLLMModel:
- def __init__(
- self,
- model_name,
- system_prompt="",
- temperature=1.0,
- top_p=1.0,
- n_choices=1,
- stop=None,
- max_generation_token=None,
- presence_penalty=0,
- frequency_penalty=0,
- logit_bias=None,
- user="",
- ) -> None:
- self.history = []
- self.all_token_counts = []
- self.model_name = model_name
- self.model_type = ModelType.get_type(model_name)
- try:
- self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name]
- except KeyError:
- self.token_upper_limit = DEFAULT_TOKEN_LIMIT
- self.interrupted = False
- self.system_prompt = system_prompt
- self.api_key = None
- self.need_api_key = False
- self.single_turn = False
-
- self.temperature = temperature
- self.top_p = top_p
- self.n_choices = n_choices
- self.stop_sequence = stop
- self.max_generation_token = None
- self.presence_penalty = presence_penalty
- self.frequency_penalty = frequency_penalty
- self.logit_bias = logit_bias
- self.user_identifier = user
-
- def get_answer_stream_iter(self):
- """stream predict, need to be implemented
- conversations are stored in self.history, with the most recent question, in OpenAI format
- should return a generator, each time give the next word (str) in the answer
- """
- logging.warning("stream predict not implemented, using at once predict instead")
- response, _ = self.get_answer_at_once()
- yield response
-
- def get_answer_at_once(self):
- """predict at once, need to be implemented
- conversations are stored in self.history, with the most recent question, in OpenAI format
- Should return:
- the answer (str)
- total token count (int)
- """
- logging.warning("at once predict not implemented, using stream predict instead")
- response_iter = self.get_answer_stream_iter()
- count = 0
- for response in response_iter:
- count += 1
- return response, sum(self.all_token_counts) + count
-
- def billing_info(self):
- """get billing infomation, inplement if needed"""
- logging.warning("billing info not implemented, using default")
- return BILLING_NOT_APPLICABLE_MSG
-
- def count_token(self, user_input):
- """get token count from input, implement if needed"""
- logging.warning("token count not implemented, using default")
- return len(user_input)
-
- def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""):
- def get_return_value():
- return chatbot, status_text
-
- status_text = i18n("开始实时传输回答……")
- if fake_input:
- chatbot.append((fake_input, ""))
- else:
- chatbot.append((inputs, ""))
-
- user_token_count = self.count_token(inputs)
- self.all_token_counts.append(user_token_count)
- logging.debug(f"输入token计数: {user_token_count}")
-
- stream_iter = self.get_answer_stream_iter()
-
- for partial_text in stream_iter:
- chatbot[-1] = (chatbot[-1][0], partial_text + display_append)
- self.all_token_counts[-1] += 1
- status_text = self.token_message()
- yield get_return_value()
- if self.interrupted:
- self.recover()
- break
- self.history.append(construct_assistant(partial_text))
-
- def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""):
- if fake_input:
- chatbot.append((fake_input, ""))
- else:
- chatbot.append((inputs, ""))
- if fake_input is not None:
- user_token_count = self.count_token(fake_input)
- else:
- user_token_count = self.count_token(inputs)
- self.all_token_counts.append(user_token_count)
- ai_reply, total_token_count = self.get_answer_at_once()
- self.history.append(construct_assistant(ai_reply))
- if fake_input is not None:
- self.history[-2] = construct_user(fake_input)
- chatbot[-1] = (chatbot[-1][0], ai_reply + display_append)
- if fake_input is not None:
- self.all_token_counts[-1] += count_token(construct_assistant(ai_reply))
- else:
- self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts)
- status_text = self.token_message()
- return chatbot, status_text
-
- def handle_file_upload(self, files, chatbot):
- """if the model accepts multi modal input, implement this function"""
- status = gr.Markdown.update()
- if files:
- construct_index(self.api_key, file_src=files)
- status = "索引构建完成"
- return gr.Files.update(), chatbot, status
-
- def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot):
- fake_inputs = None
- display_append = []
- limited_context = False
- fake_inputs = real_inputs
- if files:
- from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery
- from llama_index.indices.query.schema import QueryBundle
- from langchain.embeddings.huggingface import HuggingFaceEmbeddings
- from langchain.chat_models import ChatOpenAI
- from llama_index import (
- GPTSimpleVectorIndex,
- ServiceContext,
- LangchainEmbedding,
- OpenAIEmbedding,
- )
- limited_context = True
- msg = "加载索引中……"
- logging.info(msg)
- # yield chatbot + [(inputs, "")], msg
- index = construct_index(self.api_key, file_src=files)
- assert index is not None, "获取索引失败"
- msg = "索引获取成功,生成回答中……"
- logging.info(msg)
- if local_embedding or self.model_type != ModelType.OpenAI:
- embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
- else:
- embed_model = OpenAIEmbedding()
- # yield chatbot + [(inputs, "")], msg
- with retrieve_proxy():
- prompt_helper = PromptHelper(
- max_input_size=4096,
- num_output=5,
- max_chunk_overlap=20,
- chunk_size_limit=600,
- )
- from llama_index import ServiceContext
-
- service_context = ServiceContext.from_defaults(
- prompt_helper=prompt_helper, embed_model=embed_model
- )
- query_object = GPTVectorStoreIndexQuery(
- index.index_struct,
- service_context=service_context,
- similarity_top_k=5,
- vector_store=index._vector_store,
- docstore=index._docstore,
- )
- query_bundle = QueryBundle(real_inputs)
- nodes = query_object.retrieve(query_bundle)
- reference_results = [n.node.text for n in nodes]
- reference_results = add_source_numbers(reference_results, use_source=False)
- display_append = add_details(reference_results)
- display_append = "\n\n" + "".join(display_append)
- real_inputs = (
- replace_today(PROMPT_TEMPLATE)
- .replace("{query_str}", real_inputs)
- .replace("{context_str}", "\n\n".join(reference_results))
- .replace("{reply_language}", reply_language)
- )
- elif use_websearch:
- limited_context = True
- search_results = ddg(real_inputs, max_results=5)
- reference_results = []
- for idx, result in enumerate(search_results):
- logging.debug(f"搜索结果{idx + 1}:{result}")
- domain_name = urllib3.util.parse_url(result["href"]).host
- reference_results.append([result["body"], result["href"]])
- display_append.append(
- f"{idx+1}. [{domain_name}]({result['href']})\n"
- )
- reference_results = add_source_numbers(reference_results)
- display_append = "\n\n" + "".join(display_append)
- real_inputs = (
- replace_today(WEBSEARCH_PTOMPT_TEMPLATE)
- .replace("{query}", real_inputs)
- .replace("{web_results}", "\n\n".join(reference_results))
- .replace("{reply_language}", reply_language)
- )
- else:
- display_append = ""
- return limited_context, fake_inputs, display_append, real_inputs, chatbot
-
- def predict(
- self,
- inputs,
- chatbot,
- stream=False,
- use_websearch=False,
- files=None,
- reply_language="中文",
- should_check_token_count=True,
- ): # repetition_penalty, top_k
-
- status_text = "开始生成回答……"
- logging.info(
- "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL
- )
- if should_check_token_count:
- yield chatbot + [(inputs, "")], status_text
- if reply_language == "跟随问题语言(不稳定)":
- reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch."
-
- limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot)
- yield chatbot + [(fake_inputs, "")], status_text
-
- if (
- self.need_api_key and
- self.api_key is None
- and not shared.state.multi_api_key
- ):
- status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG
- logging.info(status_text)
- chatbot.append((inputs, ""))
- if len(self.history) == 0:
- self.history.append(construct_user(inputs))
- self.history.append("")
- self.all_token_counts.append(0)
- else:
- self.history[-2] = construct_user(inputs)
- yield chatbot + [(inputs, "")], status_text
- return
- elif len(inputs.strip()) == 0:
- status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG
- logging.info(status_text)
- yield chatbot + [(inputs, "")], status_text
- return
-
- if self.single_turn:
- self.history = []
- self.all_token_counts = []
- self.history.append(construct_user(inputs))
-
- try:
- if stream:
- logging.debug("使用流式传输")
- iter = self.stream_next_chatbot(
- inputs,
- chatbot,
- fake_input=fake_inputs,
- display_append=display_append,
- )
- for chatbot, status_text in iter:
- yield chatbot, status_text
- else:
- logging.debug("不使用流式传输")
- chatbot, status_text = self.next_chatbot_at_once(
- inputs,
- chatbot,
- fake_input=fake_inputs,
- display_append=display_append,
- )
- yield chatbot, status_text
- except Exception as e:
- traceback.print_exc()
- status_text = STANDARD_ERROR_MSG + str(e)
- yield chatbot, status_text
-
- if len(self.history) > 1 and self.history[-1]["content"] != inputs:
- logging.info(
- "回答为:"
- + colorama.Fore.BLUE
- + f"{self.history[-1]['content']}"
- + colorama.Style.RESET_ALL
- )
-
- if limited_context:
- # self.history = self.history[-4:]
- # self.all_token_counts = self.all_token_counts[-2:]
- self.history = []
- self.all_token_counts = []
-
- max_token = self.token_upper_limit - TOKEN_OFFSET
-
- if sum(self.all_token_counts) > max_token and should_check_token_count:
- count = 0
- while (
- sum(self.all_token_counts)
- > self.token_upper_limit * REDUCE_TOKEN_FACTOR
- and sum(self.all_token_counts) > 0
- ):
- count += 1
- del self.all_token_counts[0]
- del self.history[:2]
- logging.info(status_text)
- status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话"
- yield chatbot, status_text
-
- def retry(
- self,
- chatbot,
- stream=False,
- use_websearch=False,
- files=None,
- reply_language="中文",
- ):
- logging.debug("重试中……")
- if len(self.history) > 0:
- inputs = self.history[-2]["content"]
- del self.history[-2:]
- self.all_token_counts.pop()
- elif len(chatbot) > 0:
- inputs = chatbot[-1][0]
- else:
- yield chatbot, f"{STANDARD_ERROR_MSG}上下文是空的"
- return
-
- iter = self.predict(
- inputs,
- chatbot,
- stream=stream,
- use_websearch=use_websearch,
- files=files,
- reply_language=reply_language,
- )
- for x in iter:
- yield x
- logging.debug("重试完毕")
-
- # def reduce_token_size(self, chatbot):
- # logging.info("开始减少token数量……")
- # chatbot, status_text = self.next_chatbot_at_once(
- # summarize_prompt,
- # chatbot
- # )
- # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR
- # num_chat = find_n(self.all_token_counts, max_token_count)
- # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats")
- # chatbot = chatbot[:-1]
- # self.history = self.history[-2*num_chat:] if num_chat > 0 else []
- # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else []
- # msg = f"保留了最近{num_chat}轮对话"
- # logging.info(msg)
- # logging.info("减少token数量完毕")
- # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0])
-
- def interrupt(self):
- self.interrupted = True
-
- def recover(self):
- self.interrupted = False
-
- def set_token_upper_limit(self, new_upper_limit):
- self.token_upper_limit = new_upper_limit
- print(f"token上限设置为{new_upper_limit}")
-
- def set_temperature(self, new_temperature):
- self.temperature = new_temperature
-
- def set_top_p(self, new_top_p):
- self.top_p = new_top_p
-
- def set_n_choices(self, new_n_choices):
- self.n_choices = new_n_choices
-
- def set_stop_sequence(self, new_stop_sequence: str):
- new_stop_sequence = new_stop_sequence.split(",")
- self.stop_sequence = new_stop_sequence
-
- def set_max_tokens(self, new_max_tokens):
- self.max_generation_token = new_max_tokens
-
- def set_presence_penalty(self, new_presence_penalty):
- self.presence_penalty = new_presence_penalty
-
- def set_frequency_penalty(self, new_frequency_penalty):
- self.frequency_penalty = new_frequency_penalty
-
- def set_logit_bias(self, logit_bias):
- logit_bias = logit_bias.split()
- bias_map = {}
- encoding = tiktoken.get_encoding("cl100k_base")
- for line in logit_bias:
- word, bias_amount = line.split(":")
- if word:
- for token in encoding.encode(word):
- bias_map[token] = float(bias_amount)
- self.logit_bias = bias_map
-
- def set_user_identifier(self, new_user_identifier):
- self.user_identifier = new_user_identifier
-
- def set_system_prompt(self, new_system_prompt):
- self.system_prompt = new_system_prompt
-
- def set_key(self, new_access_key):
- self.api_key = new_access_key.strip()
- msg = f"API密钥更改为了{hide_middle_chars(self.api_key)}"
- logging.info(msg)
- return new_access_key, msg
-
- def set_single_turn(self, new_single_turn):
- self.single_turn = new_single_turn
-
- def reset(self):
- self.history = []
- self.all_token_counts = []
- self.interrupted = False
- return [], self.token_message([0])
-
- def delete_first_conversation(self):
- if self.history:
- del self.history[:2]
- del self.all_token_counts[0]
- return self.token_message()
-
- def delete_last_conversation(self, chatbot):
- if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]:
- msg = "由于包含报错信息,只删除chatbot记录"
- chatbot.pop()
- return chatbot, self.history
- if len(self.history) > 0:
- self.history.pop()
- self.history.pop()
- if len(chatbot) > 0:
- msg = "删除了一组chatbot对话"
- chatbot.pop()
- if len(self.all_token_counts) > 0:
- msg = "删除了一组对话的token计数记录"
- self.all_token_counts.pop()
- msg = "删除了一组对话"
- return chatbot, msg
-
- def token_message(self, token_lst=None):
- if token_lst is None:
- token_lst = self.all_token_counts
- token_sum = 0
- for i in range(len(token_lst)):
- token_sum += sum(token_lst[: i + 1])
- return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens"
-
- def save_chat_history(self, filename, chatbot, user_name):
- if filename == "":
- return
- if not filename.endswith(".json"):
- filename += ".json"
- return save_file(filename, self.system_prompt, self.history, chatbot, user_name)
-
- def export_markdown(self, filename, chatbot, user_name):
- if filename == "":
- return
- if not filename.endswith(".md"):
- filename += ".md"
- return save_file(filename, self.system_prompt, self.history, chatbot, user_name)
-
- def load_chat_history(self, filename, chatbot, user_name):
- logging.debug(f"{user_name} 加载对话历史中……")
- if type(filename) != str:
- filename = filename.name
- try:
- with open(os.path.join(HISTORY_DIR, user_name, filename), "r") as f:
- json_s = json.load(f)
- try:
- if type(json_s["history"][0]) == str:
- logging.info("历史记录格式为旧版,正在转换……")
- new_history = []
- for index, item in enumerate(json_s["history"]):
- if index % 2 == 0:
- new_history.append(construct_user(item))
- else:
- new_history.append(construct_assistant(item))
- json_s["history"] = new_history
- logging.info(new_history)
- except:
- # 没有对话历史
- pass
- logging.debug(f"{user_name} 加载对话历史完毕")
- self.history = json_s["history"]
- return filename, json_s["system"], json_s["chatbot"]
- except FileNotFoundError:
- logging.warning(f"{user_name} 没有找到对话历史文件,不执行任何操作")
- return filename, self.system_prompt, chatbot
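The docstrings in BaseLLMModel above note that get_answer_stream_iter and get_answer_at_once are the two hooks a concrete model is expected to implement. As a rough illustration only (the EchoModel name and behavior are invented, not part of the original module), a minimal subclass might look like:

```python
class EchoModel(BaseLLMModel):
    """Toy model: echoes the most recent user message instead of calling a real LLM."""

    def get_answer_at_once(self):
        # self.history is kept in OpenAI message format; the last entry is the newest user turn.
        question = self.history[-1]["content"]
        answer = f"You said: {question}"
        return answer, self.count_token(question) + self.count_token(answer)
```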
diff --git a/spaces/JSP/test4k/Dockerfile b/spaces/JSP/test4k/Dockerfile
deleted file mode 100644
index d04098fffee9697d27d99c9572b34b4c7b488ced..0000000000000000000000000000000000000000
--- a/spaces/JSP/test4k/Dockerfile
+++ /dev/null
@@ -1,35 +0,0 @@
-# Grab a fresh copy of the Python image
-FROM python:3.11-slim
-
-# Install build and runtime dependencies
-RUN apt-get update && \
- apt-get install -y \
- libopenblas-dev \
- ninja-build \
- build-essential \
- pkg-config \
- curl
-
-RUN pip install -U pip setuptools wheel && \
- CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" FORCE_CMAKE=1 pip install --verbose llama-cpp-python[server]
-
-# Download model
-RUN mkdir model && \
- curl -L https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GGUF/resolve/main/mistral-7b-openorca.Q4_K_M.gguf -o model/gguf-model.bin
-
-COPY ./start_server.sh ./
-COPY ./main.py ./
-COPY ./index.html ./
-
-# Make the server start script executable
-RUN chmod +x ./start_server.sh
-
-# Set environment variable for the host
-ENV HOST=0.0.0.0
-ENV PORT=7860
-
-# Expose a port for the server
-EXPOSE ${PORT}
-
-# Run the server start script
-CMD ["/bin/sh", "./start_server.sh"]
\ No newline at end of file
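This Dockerfile installs llama-cpp-python[server] and downloads a GGUF model, but the start_server.sh it copies is not shown in the diff. Assuming that script launches the package's bundled OpenAI-compatible server on the exposed port 7860, a hedged client-side sketch for exercising the container could be:

```python
import requests

# Assumes start_server.sh (not shown above) runs llama_cpp.server on port 7860,
# which exposes OpenAI-compatible endpoints such as /v1/completions.
resp = requests.post(
    "http://localhost:7860/v1/completions",
    json={"prompt": "Hello, world", "max_tokens": 32},
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```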
diff --git a/spaces/Jashvinu/NousResearch-Redmond-Hermes-Coder/README.md b/spaces/Jashvinu/NousResearch-Redmond-Hermes-Coder/README.md
deleted file mode 100644
index a5e666d3d0aa6f15da5bb18c39d510403d3b5ca8..0000000000000000000000000000000000000000
--- a/spaces/Jashvinu/NousResearch-Redmond-Hermes-Coder/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: NousResearch Redmond Hermes Coder
-emoji: 🔥
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jeff2323/ai-comic-factory/Dockerfile b/spaces/Jeff2323/ai-comic-factory/Dockerfile
deleted file mode 100644
index 91319be9b3dd35d916d18fba5260f51125c46b50..0000000000000000000000000000000000000000
--- a/spaces/Jeff2323/ai-comic-factory/Dockerfile
+++ /dev/null
@@ -1,65 +0,0 @@
-FROM node:18-alpine AS base
-
-# Install dependencies only when needed
-FROM base AS deps
-# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
-RUN apk add --no-cache libc6-compat
-WORKDIR /app
-
-# Install dependencies based on the preferred package manager
-COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
-RUN \
- if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
- elif [ -f package-lock.json ]; then npm ci; \
- elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
- else echo "Lockfile not found." && exit 1; \
- fi
-
-# Uncomment the following lines if you want to use a secret at buildtime,
-# for example to access your private npm packages
-# RUN --mount=type=secret,id=HF_EXAMPLE_SECRET,mode=0444,required=true \
-# $(cat /run/secrets/HF_EXAMPLE_SECRET)
-
-# Rebuild the source code only when needed
-FROM base AS builder
-WORKDIR /app
-COPY --from=deps /app/node_modules ./node_modules
-COPY . .
-
-# Next.js collects completely anonymous telemetry data about general usage.
-# Learn more here: https://nextjs.org/telemetry
-# Uncomment the following line in case you want to disable telemetry during the build.
-# ENV NEXT_TELEMETRY_DISABLED 1
-
-# RUN yarn build
-
-# If you use yarn, comment out this line and use the line above
-RUN npm run build
-
-# Production image, copy all the files and run next
-FROM base AS runner
-WORKDIR /app
-
-ENV NODE_ENV production
-# Uncomment the following line in case you want to disable telemetry during runtime.
-# ENV NEXT_TELEMETRY_DISABLED 1
-
-RUN addgroup --system --gid 1001 nodejs
-RUN adduser --system --uid 1001 nextjs
-
-COPY --from=builder /app/public ./public
-
-# Automatically leverage output traces to reduce image size
-# https://nextjs.org/docs/advanced-features/output-file-tracing
-COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
-COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
-COPY --from=builder --chown=nextjs:nodejs /app/.next/cache ./.next/cache
-# COPY --from=builder --chown=nextjs:nodejs /app/.next/cache/fetch-cache ./.next/cache/fetch-cache
-
-USER nextjs
-
-EXPOSE 3000
-
-ENV PORT 3000
-
-CMD ["node", "server.js"]
\ No newline at end of file
diff --git a/spaces/JohnC26/AI.Dashboard.Gradio.Streamlit.HTML5/index.html b/spaces/JohnC26/AI.Dashboard.Gradio.Streamlit.HTML5/index.html
deleted file mode 100644
index 66c7ac0516cb47848e339006985c57cfc0c153c4..0000000000000000000000000000000000000000
--- a/spaces/JohnC26/AI.Dashboard.Gradio.Streamlit.HTML5/index.html
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
-
-
-
- My static Space
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-journey
- title Create AI
- section Training
- Format DataSet Inputs Files, Data Splits: 5: Teacher
- Model Build w/ SKLearn, TF, Pytorch: 3: Student
- Determine Model Performance: 1: Teacher, Student
- section Deploy
- Web Deploy Local and Cloud: 5: Teacher
- Architecture Spaces Gradio Streamlit Heroku AWS Azure and GCCP: 5: Teacher
- section Testing
- Test Model with Input Datasets: 5: Teacher
- Examples. Inputs that Work, Inputs That Break Model: 5: Teacher
- Governance - Analyze, Publish Fairness, Equity, Bias for Datasets and Outputs: 5: Teacher
-
-
-
-sequenceDiagram
- participant Alice
- participant Bob
- Alice->>John: Hello John, how are you?
- loop Healthcheck
- John->>John: Fight against hypochondria
- end
- Note right of John: Rational thoughts prevail...
- John-->>Alice: Great!
- John->>Bob: How about you?
- Bob-->>John: Jolly good!
-
-
-
-
Welcome to the Mermaid Modeler Tip Sheet
-
- You can use Mermaid inside HTML5 by including the script and a div with the class mermaid.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Brutal Doom Player Skins.md b/spaces/bioriAsaeru/text-to-voice/Brutal Doom Player Skins.md
deleted file mode 100644
index 3011df7313224b7f8764f901511287eee0d33de9..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Brutal Doom Player Skins.md
+++ /dev/null
@@ -1,81 +0,0 @@
-## Brutal Doom Player Skins
-
-
-
-
-
-
-
-**Brutal Doom Player Skins ✏ ✏ ✏ [https://kneedacexbrew.blogspot.com/?d=2twuno](https://kneedacexbrew.blogspot.com/?d=2twuno)**
-
-
-
-# How to Customize Your Brutal Doom Player Skins
-
-
-
-Brutal Doom is a mod for the classic first-person shooter Doom that enhances the game's graphics, gore, sound effects, and gameplay. It also allows you to customize your player skins with different colors, textures, and accessories. In this article, we will show you how to install and use Brutal Doom player skins, as well as some of the best ones available online.
-
-
-
-## How to Install Brutal Doom Player Skins
-
-
-
-To install Brutal Doom player skins, you will need the following:
-
-
-
-- A copy of Doom or Doom II (you can buy them from Steam or GOG)
-
-- A source port that supports Brutal Doom (such as GZDoom or Zandronum)
-
-- The latest version of Brutal Doom (you can download it from Mod DB)
-
-- The player skins you want to use (you can find them on Mod DB or other websites)
-
-
-
-Once you have everything ready, follow these steps:
-
-
-
-1. Extract the Brutal Doom mod files into your source port folder.
-
-2. Extract the player skin files into a subfolder called "skins" inside your source port folder.
-
-3. Launch your source port and select Brutal Doom as your active mod.
-
-4. Go to Options > Player Setup and choose your player skin from the list.
-
-5. Enjoy your new look!
-
-
-
-## Some of the Best Brutal Doom Player Skins
-
-
-
-There are many player skins available for Brutal Doom, but here are some of our favorites:
-
-
-
-- **Doomguy 2016**: This skin recreates the appearance of the Doom Slayer from the 2016 reboot of Doom. It features a green armor suit with metal plates, a helmet with a visor, and a shoulder-mounted flashlight. You can also choose between different colors and accessories for your armor.
-
-- **Doomguy Classic**: This skin brings back the original look of the Doomguy from the classic games. It features a simple green armor suit with exposed arms and legs, a helmet with goggles, and a backpack. You can also choose between different colors and faces for your helmet.
-
-- **Doomgirl**: This skin adds a female option for your player character. It features a green armor suit with metal plates, a helmet with a visor, and long hair. You can also choose between different colors and accessories for your armor.
-
-- **Doom Marine**: This skin is based on the appearance of the Doom Marine from the comic book adaptation of Doom. It features a blue armor suit with metal plates, a helmet with a visor, and a bandana. You can also choose between different colors and faces for your helmet.
-
-- **Doom Slayer Eternal**: This skin updates the appearance of the Doom Slayer from the 2020 sequel of Doom. It features a green armor suit with metal plates, spikes, and runes, a helmet with a visor and horns, and a shoulder-mounted flamethrower. You can also choose between different colors and accessories for your armor.
-
-
-
-## Conclusion
-
-
-
-Brutal Doom is a great way to experience the classic Doom games with enhanced graphics and gameplay. It also lets you customize your player skins with different colors, textures, and accessories. We hope this article helped you learn how to install and use Brutal Doom player skins, as well as some of the best ones available online. Have fun!
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Command and Conquer Zero Hour No CD Crack Run the Game on Any PC.md b/spaces/bioriAsaeru/text-to-voice/Command and Conquer Zero Hour No CD Crack Run the Game on Any PC.md
deleted file mode 100644
index 8f82663dacb1f9f1dd8ef5396ec9fec320909cf6..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Command and Conquer Zero Hour No CD Crack Run the Game on Any PC.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Focus Movie Mp3 Song Free Download LINK.md b/spaces/bioriAsaeru/text-to-voice/Focus Movie Mp3 Song Free Download LINK.md
deleted file mode 100644
index 5dba079f6921aa2e12e921f7266cf7a5a93823e8..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Focus Movie Mp3 Song Free Download LINK.md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
Further, many free music download apps access technical loopholes to allow users to download music illegally. Some of these apps are legal in certain countries and not in others, so we recommend checking for yourself before downloading.
-
Audiomack features songs from almost every genre, though it predominantly offers hip-hop, rap, R&B and EDM. You can listen to these songs in-app, or download them to your device in all popular formats. Audiomack is also available on iOS.
Audials Play is a little different from the other apps in this list. A throw-back to a previous era of recording tracks off the radio using cassette tapes, this app allows you to record songs from various radio stations. You can then download the song as an MP3 file directly to your device.
-
SONGily comes as both a free app from the Play Store and a Pro app available as an APK from the SONGily site. It has a large database of music, but most of the free songs are covers, remixes, and live versions as these are legal to distribute and download. There are some originals, but it may be harder to find these on the app.
-
Amazon gives Prime subscribers and Amazon Music subscribers access to millions of songs ad-free. But, it also allows you to listen to much of their library for free with ads in between songs. Even better, the app allows you to download music to your device so you can listen to your favorite songs offline.
-
Hungama Music is one of the best Android music download apps, especially for fans of Indian music. Hungama Music has a massive collection of songs (15 million and more) in over 15 languages, so if you like Bollywood tunes, this is the app for you.
-
Hungama Music also has high-quality covers of many famous international songs. It offers in-app playback as well as easy downloads so that you can store the songs on your Android phone. You can also change the quality of your downloads and set the app to only download on Wi-Fi so that you can preserve your data.
-
Musopen provides sheet music, recordings, and educational materials for free to the public. They have a focus on classical music, and have recorded and released collections by composers like Beethoven and Chopin.
-
Free Episodes Download Adventures in Odyssey episodes from our showcase of free episodes. Below is a complete listing of episodes which can be downloaded free from Focus on Family. To download an episode, right-click on each download link.
-
Personally, I like listening to orchestral performances of James Bond movie themes. There are plenty of songs from the 24 movies and lots of variations from different recorded performances around the world.
-
-
Binaural, meanwhile, is a sound app that won't be for everyone. It's tailored to hardcore noise enthusiasts, and allows you to select different sound frequencies, such as 9.5Hz, which the app says is good for relaxation and dreams. For problem solving, it recommends a higher pitch of 39-50Hz. The interface is minimal, but you can mix in rain sounds on top of the frequencies. It's free to download on iOS and Mac; pay a few bucks to unlock more features.
-
Reading is one of the fastest ways to level up your life. But sometimes it is hard to get into the zone. You can finish a whole page and then think to yourself "wait, what the hell did I just read?" That happens when your brain is in an active beta state brainwave, which makes it really difficult to focus. Audio wizard Cory Allen helped me put together some binaural beats which help to entrain the brain to more optimal brainwave patterns for reading, writing, and retaining knowledge.
Enter your email below to get one of my favorite tracks for free. Download and get your learn on! If you dig, check out the rest of the beats we have up for sale on the site.
-
Did you know that in addition to uploading music, the Music Manager app can also download your music from the service? Best of all, there are no weird restrictions; it will download regular MP3 files that you're free to move and listen to elsewhere.
-
You can see which songs are currently downloading by returning to the app's home screen, tapping on the vertical horizontal lines in the upper left-hand corner, then going to Settings > Manage downloads.
-
Download a free two week trial of Mixcraft 9 to test drive your own personal music recording studio. Mixcraft 9 is the perfect blend of ease-of-use and pro features. With great workflow and less clutter, you'll be able to focus on making music and not technical details.
-
The song received mixed reviews from music critics who praised Grande's vocals and the song's brassy production but criticized its similarity to her 2014 hit "Problem". "Focus" debuted at number seven on the Billboard Hot 100 chart with 113,000 downloads in its first week. It was Grande's sixth top-ten single, and her first unaccompanied by another artist. The song also reached the top ten in Canada, Australia, Greece, Italy, Spain and the United Kingdom. By January 2016, "Focus" sold 544,000 copies in the United States and was certified double Platinum by the RIAA. It is also certified Diamond in Brazil.
-
"Focus" was written by Grande, Savan Kotecha, Peter Svensson and its producers, Ilya Salmanzadeh and Max Martin.[1] Serban Ghenea mixed the song, aided by John Hands.[2] Grande began working on a new material for her third studio album in May 2015, posting details about the project on social media and in conversations with the public. The album was originally entitled Moonlight.[3] The singer posted an unfocused picture of herself on her Instagram page in July 2015 with the caption "Focus", calling it a "hint". A video released in September 2015 by iHeartMedia Music Summit demonstrated that Grande, Republic Records CEO Monte Lipman and her manager, Charlie Walk, played the song to the approval of the Republic Records staff.[4]
-
Lyrically, Grande asks for attention and focus. According to Idolator's Robbie Daw, the pre-chorus resembles the intro of KC and the Sunshine Band's 1975 disco song, "That's the Way (I Like It)".[21][27] Marcus Floyd of Renowned for Sound wrote that in the second verse, Grande "addresses her haters and insists that they address her realness".[28] Its "funky" chorus consists of the repeated phrase, "Focus on me", sung by American actor Jamie Foxx in a bass voice.[29] Grande explained the hook:
-
"Focus" debuted at number seven on the Billboard Hot 100 in the United States, selling 113,000 digital downloads in its first full week, becoming Grande's sixth top-ten single and fourth top-ten debut on the chart.[37] It was her first solo top ten in the country, unaccompanied by another artist.[37] The song entered the Digital Songs chart at number five and was number 13 on the Streaming Songs chart, with 13.3 million domestic streams.[37] It fell to number 13 in its second week on the Hot 100, rebounding to number 12 in its third week following Grande's performance at the 2015 American Music Awards.[38] By April 2018, "Focus" sold over 544,000 digital downloads in the US and was certified double Platinum by the RIAA.[39] It debuted and peaked at number eight on the Canadian Hot 100, Grande's fifth Canadian top-ten single.
-
The video begins with Grande texting the phrase "focus on me" on a cell phone. During the first verse, scenes of Grande lip-syncing the song are intercut with silhouettes of the singer dancing in a purple circle. She then performs a choreographed routine with six backing dancers on a purple set. During the second verse, she smiles and poses in a square wearing a black mini dress and high Toni Basil-style boots. For the song's bridge, the singer (in a star-print leotard, photographed in black-and-white) lies on a neon-lit set, plays a trumpet and performs with her dancers.[48] The video ends with her right eye illuminated. Teen Vogue's Ella Cerón noted its "slightly throwback" visuals, which match the song.[49]
-
-
While they do enjoy movies together, a more common family scene is for the four of them to cuddle while listening to an audio book that Bob has downloaded from the National Library Service for the Blind and Physically Handicapped BARD site.
-
When Disk Cleaner Pro is launched, focus is on the "Continue" button. To the left of the "Continue" button is information regarding how much free space on your hard drive is available. To get started, activate this "Continue" button.
-
When Quick Cleaner opens, focus is on the "Start Scan" button. At the top of the window is a toolbar followed by the words "Used," Free," and "Total." After the words are three numbers which correspond to the three words. For example, the first number is how much free space is available on your hard drive.
-
CleanMyMac 2 works well with VoiceOver: although, it is expensive. It gives the user a lot of control when choosing files to delete. If you have a lot of files and want to free up space on your hard drive, download the free trial and decide whether the app is worth $39.95.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git "a/spaces/bioriAsaeru/text-to-voice/How To Upload Shell\302\240.md" "b/spaces/bioriAsaeru/text-to-voice/How To Upload Shell\302\240.md"
deleted file mode 100644
index 6f9140f6a4a64f79fb9373dd2ee069857f6fd7db..0000000000000000000000000000000000000000
--- "a/spaces/bioriAsaeru/text-to-voice/How To Upload Shell\302\240.md"
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
Before using public-key authentication, the public/private key pair files must be created, with a copy of the public-key file being uploaded to a specific location on the server. The public and private keys are generated with a key generation utility. While the private and public keys within a key pair are related, a private key cannot be derived by someone who only possesses the corresponding public key.
Successful public-key authentication requires: (1) generating a key pair, (2) uploading the public key to the Secure Shell server, and (3) configuring the client to use the public-key authentication method. SecureCRT and SecureFX provide utilities to generate keys and automatically place a copy of the public key on a VShell® server. Public-key authentication between a VanDyke Software client application and a non-VShell server such as OpenSSH requires generation of a public/private key pair and placing the public-key file on the server in the right location and in a format supported by the Secure Shell server.
-
The public key can be uploaded to a VShell server at the end of the Key Generation wizard process, or at any time later through the Session Options dialog. Use the following steps to upload an existing public-key file:
-
*Note that the upload instructions apply only to servers like VanDyke Software's VShell that implement the Secure Shell Public Key Subsystem (RFC 4819). Although there may be server implementations that support the public-key subsystem, those connecting to servers that aren't VShell will typically need to use manual methods to place their public-key files on the server to meet the server's requirements.
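Since servers without a public-key subsystem typically require placing the public key manually, a rough sketch of that manual route using the paramiko library (host, user, and file names below are placeholders, not taken from the text above) might look like this:

```python
import paramiko

# Generate an RSA key pair locally and keep the private key file safe.
key = paramiko.RSAKey.generate(bits=3072)
key.write_private_key_file("my_key")
public_line = f"{key.get_name()} {key.get_base64()}"

# Append the public key to authorized_keys on the server (placeholder credentials).
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("server.example.com", username="user", password="secret")
client.exec_command(f'mkdir -p ~/.ssh && echo "{public_line}" >> ~/.ssh/authorized_keys')
client.close()
```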
-
Uploaded files represent a significant risk to applications. The first step in many attacks is to get some code to the system to be attacked. Then the attack only needs to find a way to get the code executed. Using a file upload helps the attacker accomplish the first step.
-
-
The consequences of unrestricted file upload can vary, including complete system takeover, an overloaded file system or database, forwarding attacks to back-end systems, client-side attacks, or simple defacement. It depends on what the application does with the uploaded file and especially where it is stored.
-
Sometimes web applications intentionally or unintentionally use some functions (or APIs) to check the file types in order to process them further. For instance, when an application resizes an image file, it may just show an error message when non-image files are uploaded, without saving them on the server.
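As a defensive illustration of that check, a minimal sketch of validating an uploaded image with Pillow (the function and constant names are invented for illustration, not taken from any specific framework) could be:

```python
from io import BytesIO
from PIL import Image

ALLOWED_FORMATS = {"JPEG", "PNG", "GIF"}

def is_valid_image(upload_bytes: bytes) -> bool:
    """Reject uploads that are not decodable images of an allowed format."""
    try:
        with Image.open(BytesIO(upload_bytes)) as img:
            img.verify()  # raises an exception if the data is not a valid image
            return img.format in ALLOWED_FORMATS
    except Exception:
        return False
```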
-
Apart from installing applications like emacs on my guest machine, I would also like to upload some configuration files (e.g. to configure emacs for Clojure development). Sadly, Vagrant's documentation gives no clue about how to do this. I guess I'd have to put the configuration files into a shared folder and then copy them from the shared folder on the guest machine to the desired locations?
-
The first catch is that it is run as the ssh user ("vagrant" by default) without sudo, so you need to have write access to the directory on the VM. A workaround is to copy to a temporary location and then use a normal shell provisioner to copy/move it to the right place.
-
Realistically there are several ways we could achieve this, for example if we were able to install additional tools we could leverage azcopy. In my scenario I only have the following available to me and I'm limited to leveraging bash/shell scripting:
-
The --os-shell option works for MySQL by attempting to use an INTO OUTFILE statement to write a file to the web root. This can fail for any number of reasons. The most common is that the database and web server are on different machines. Ubuntu's default AppArmor rule sets forbid MySQL from writing to /var/www/. Also, INTO OUTFILE requires file privileges that should never be granted (but often are). You could try using sqlmap's file-io functionality to read and write to the remote file system.
-
in the context of this application, dumping the contents of the Wordpress MySQL database will yield the administrator's password hash. Cracking this hash will yield a Wordpress admin account which almost always has the ability to upload and install Wordpress extensions.... or PHP shells.
-
The easiest way to install shell integration is to select the iTerm2>Install Shell Integration menu item. It will download and run a shell script as described below. You should do this on every host you ssh to as well as your local machine. The following shells are supported: tcsh, zsh, bash, and fish 2.3 or later. Contributions for other shells are most welcome.
-
For zsh and bash users: if you are unable to modify PS1 directly (for example, if you use a zsh theme that wants to control PS1), you must take an extra step. Add export ITERM2_SQUELCH_MARK=1 before the shell integration script is sourced. Add the iterm2_prompt_mark as directed above to your prompt through those means available to you.
-
If you drop a file (e.g., from Finder) into iTerm2 while holding the option key, iTerm2 will offer to upload the file via scp to the remote host into the directory you were in on the line you dropped the file on. A new menu bar item will be added called Uploads that lets you view uploaded files and track their progress.
-
With shell integration, iTerm2 will remember which directories you have used recently. The list of preferred directories is stored separately for each username+hostname combination. It is sorted by "frecency" (frequency and recency of use). There are two places it is exposed in the UI:
-
If you'd like to be able to use shell integration as root, you have two options. The first option, presuming you use bash, is to become root with sudo -s (which loads your .bashrc as root) and add this to your .bashrc:
-
For some users, installing a login script on every host they connect to is not an option. To be sure, modifying root's login script is usually a bad idea. In these cases you can get the benefits of shell integration by defining triggers. The following triggers are of interest:
-
iTerm2 links in libssh2, and does not shell out to scp. It respects /etc/known_hosts and ~/.ssh/known_hosts, and will update the latter file appropriately. Host fingerprints are verified. Password, keyboard-interactive, and public-key authentication are supported. Private keys by default come from ~/.ssh/id_rsa, id_dsa, or id_ecdsa, and may be encrypted with an optional passphrase.
-
Settings pulled from ssh_config override the hostname and user name provided by shell integration. The shell integration-provided host name is used as the text against which Host patterns are matched.
-
After exploit a remote command execution vulnerability then we can use a reverse shell to obtain an interactive shell session on the target machine. Throughout our article we are going to use this web shell to achieve the reverse shell of the target machine. Ready ? !! We execute the given command to edit the localhost address from the malicious shell.
-
Sometimes plugins installed in WordPress CMS are vulnerable, by taking advantage of which we can upload our malicious PHP shells to the target server and get reverse shells. In our case, as you can see a vulnerable plugin called Reflex is located on the WordPress CMS, so now we will try to exploit target mahcine by uploading shell through this plugin.
-
Weevely is a command line web shell dynamically extended over the network at runtime, designed for remote administration and penetration testing or bad things. It provides a ssh-like terminal just dropping a PHP script on the target server, even in restricted environments. The best thing about Weevely is its stealth functionality. So today we will see how Weevely functions.
-
When working with Azure Cloud Shell, you sometimes need the ability to upload files to work with later. I'm going to call out the two methods that I use to accomplish this task all the time.
-
In method one, we'll update the file share that's associated with Cloud Shell by using the clouddrive mount command. Note that you may already have a cloud drive that was created upon the initial start of Cloud Shell. Go ahead and spin up Azure Cloud Shell and type clouddrive -h to see the commands to mount and unmount a drive.
-
We'll now simply call clouddrive mount -s subscription-id -g your-resource-group-name -n storage-account -f storage-file-name to create our drive. Once it has completed, we'll navigate to the resource and hit the Upload button and upload a file. Again, you could have navigated to your existing resource group instead of creating a new one - but I want you to learn how to do this manually.
-
In our blog post on ASP.NET resource files and deserialization issues [1], we showed how to run code by abusing deserialization features when uploading a RESX or RESOURCES file. In this blog post, similarly we show abuse of XAMLX file capabilities to run commands on a server when such files can be uploaded within an IIS application.
-
The second method is by a XAMLX file feature that can run code on the server-side when browsing the uploaded file. It is possible to simply use Visual Studio to develop a basic payload for this case. Examples provided here have been modified to be shorter and perhaps more effective.
-
It is possible to solve this issue when a web.config can be uploaded. However, in that case other techniques can be used to run code on the server as well (see [5] for more details). The following web.config file can be used to enable the .XAMLX file extension:
-
Often times on an engagement I find myself needing to copy a tool or a payload from my Kali linux attack box to a compromised Windows machine. As a perfect example, on a recent pentest, I found a vulnerable ColdFusion server and was able to upload a CFM webshell. It was a very limited, non-interactive shell and I wanted to download and execute a reverse Meterpreter binary from my attack machine. I generated the payload with Veil but needed a way to transfer the file to the Windows server running ColdFusion through simple commands.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Italian movie download Puppies and Kittens A screensaver that will melt your heart.md b/spaces/bioriAsaeru/text-to-voice/Italian movie download Puppies and Kittens A screensaver that will melt your heart.md
deleted file mode 100644
index 34b7404ec6fe18e281a4a0517336a4961819c6aa..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Italian movie download Puppies and Kittens A screensaver that will melt your heart.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Some of our TV shows and movies are produced in partnership with a studio that owns the franchise or intellectual property associated with the content. While we may have the rights to offer them for streaming, we may not be able to offer them for download.
-
You can find more information at the website of the Danish Veterinary and Food Administration, including the rules for the commercial transport of animals, and the rules for the import of unvaccinated puppies and kittens.
Radiographs of puppies had normal VHS and no significant change from 3 months to 3 years of age (Figures 15 and 16) (4). However, radiographs in 3 month-old kittens had VHS values mildly above the normal feline range at 3 months and 6 months but decreased to the normal range by 9 months of age (22).
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Kate Elliott Jaran PDF Download Discover the Secrets of the Chapalii Empire and the Jaran People.md b/spaces/bioriAsaeru/text-to-voice/Kate Elliott Jaran PDF Download Discover the Secrets of the Chapalii Empire and the Jaran People.md
deleted file mode 100644
index 4a3f79119956d39f6f301310ede9272fbb3a627a..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Kate Elliott Jaran PDF Download Discover the Secrets of the Chapalii Empire and the Jaran People.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Learn Materials Science and Metallurgy with Material Science By Op Khanna Pdf Free Download.md b/spaces/bioriAsaeru/text-to-voice/Learn Materials Science and Metallurgy with Material Science By Op Khanna Pdf Free Download.md
deleted file mode 100644
index 9ae5550b71561d15cc12b329d260f559af2d4561..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Learn Materials Science and Metallurgy with Material Science By Op Khanna Pdf Free Download.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
-
-
\ No newline at end of file
diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/models/builders.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/models/builders.py
deleted file mode 100644
index 038bf99c3d0fbbb86005683d5a2a1b4edcac4298..0000000000000000000000000000000000000000
--- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/models/builders.py
+++ /dev/null
@@ -1,252 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-All the functions to build the relevant models and modules
-from the Hydra config.
-"""
-
-import typing as tp
-
-import audiocraft
-import omegaconf
-import torch
-
-from .encodec import CompressionModel, EncodecModel
-from .lm import LMModel
-from ..modules.codebooks_patterns import (
- CodebooksPatternProvider,
- DelayedPatternProvider,
- MusicLMPattern,
- ParallelPatternProvider,
- UnrolledPatternProvider,
- VALLEPattern,
-)
-from ..modules.conditioners import (
- BaseConditioner,
- ChromaStemConditioner,
- CLAPEmbeddingConditioner,
- ConditionFuser,
- ConditioningProvider,
- LUTConditioner,
- T5Conditioner,
-)
-from .unet import DiffusionUnet
-from .. import quantization as qt
-from ..utils.utils import dict_from_config
-from ..modules.diffusion_schedule import MultiBandProcessor, SampleProcessor
-
-
-def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer:
- klass = {
- 'no_quant': qt.DummyQuantizer,
- 'rvq': qt.ResidualVectorQuantizer
- }[quantizer]
- kwargs = dict_from_config(getattr(cfg, quantizer))
- if quantizer != 'no_quant':
- kwargs['dimension'] = dimension
- return klass(**kwargs)
-
-
-def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig):
- if encoder_name == 'seanet':
- kwargs = dict_from_config(getattr(cfg, 'seanet'))
- encoder_override_kwargs = kwargs.pop('encoder')
- decoder_override_kwargs = kwargs.pop('decoder')
- encoder_kwargs = {**kwargs, **encoder_override_kwargs}
- decoder_kwargs = {**kwargs, **decoder_override_kwargs}
- encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs)
- decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs)
- return encoder, decoder
- else:
- raise KeyError(f"Unexpected compression model {cfg.compression_model}")
-
-
-def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel:
- """Instantiate a compression model."""
- if cfg.compression_model == 'encodec':
- kwargs = dict_from_config(getattr(cfg, 'encodec'))
- encoder_name = kwargs.pop('autoencoder')
- quantizer_name = kwargs.pop('quantizer')
- encoder, decoder = get_encodec_autoencoder(encoder_name, cfg)
- quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension)
- frame_rate = kwargs['sample_rate'] // encoder.hop_length
- renormalize = kwargs.pop('renormalize', False)
- # deprecated params
- kwargs.pop('renorm', None)
- return EncodecModel(encoder, decoder, quantizer,
- frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device)
- else:
- raise KeyError(f"Unexpected compression model {cfg.compression_model}")
-
-
-def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel:
- """Instantiate a transformer LM."""
- if cfg.lm_model == 'transformer_lm':
- kwargs = dict_from_config(getattr(cfg, 'transformer_lm'))
- n_q = kwargs['n_q']
- q_modeling = kwargs.pop('q_modeling', None)
- codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern')
- attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout'))
- cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance'))
- cfg_prob, cfg_coef = cls_free_guidance['training_dropout'], cls_free_guidance['inference_coef']
- fuser = get_condition_fuser(cfg)
- condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device)
- if len(fuser.fuse2cond['cross']) > 0: # enforce cross-att programmatically
- kwargs['cross_attention'] = True
- if codebooks_pattern_cfg.modeling is None:
- assert q_modeling is not None, \
- "LM model should either have a codebook pattern defined or transformer_lm.q_modeling"
- codebooks_pattern_cfg = omegaconf.OmegaConf.create(
- {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}}
- )
- pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg)
- return LMModel(
- pattern_provider=pattern_provider,
- condition_provider=condition_provider,
- fuser=fuser,
- cfg_dropout=cfg_prob,
- cfg_coef=cfg_coef,
- attribute_dropout=attribute_dropout,
- dtype=getattr(torch, cfg.dtype),
- device=cfg.device,
- **kwargs
- ).to(cfg.device)
- else:
- raise KeyError(f"Unexpected LM model {cfg.lm_model}")
-
-
-def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider:
- """Instantiate a conditioning model."""
- device = cfg.device
- duration = cfg.dataset.segment_duration
- cfg = getattr(cfg, 'conditioners')
- dict_cfg = {} if cfg is None else dict_from_config(cfg)
- conditioners: tp.Dict[str, BaseConditioner] = {}
- condition_provider_args = dict_cfg.pop('args', {})
- condition_provider_args.pop('merge_text_conditions_p', None)
- condition_provider_args.pop('drop_desc_p', None)
-
- for cond, cond_cfg in dict_cfg.items():
- model_type = cond_cfg['model']
- model_args = cond_cfg[model_type]
- if model_type == 't5':
- conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args)
- elif model_type == 'lut':
- conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args)
- elif model_type == 'chroma_stem':
- conditioners[str(cond)] = ChromaStemConditioner(
- output_dim=output_dim,
- duration=duration,
- device=device,
- **model_args
- )
- elif model_type == 'clap':
- conditioners[str(cond)] = CLAPEmbeddingConditioner(
- output_dim=output_dim,
- device=device,
- **model_args
- )
- else:
- raise ValueError(f"Unrecognized conditioning model: {model_type}")
- conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args)
- return conditioner
-
-
-def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser:
- """Instantiate a condition fuser object."""
- fuser_cfg = getattr(cfg, 'fuser')
- fuser_methods = ['sum', 'cross', 'prepend', 'input_interpolate']
- fuse2cond = {k: fuser_cfg[k] for k in fuser_methods}
- kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods}
- fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs)
- return fuser
-
-
-def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider:
- """Instantiate a codebooks pattern provider object."""
- pattern_providers = {
- 'parallel': ParallelPatternProvider,
- 'delay': DelayedPatternProvider,
- 'unroll': UnrolledPatternProvider,
- 'valle': VALLEPattern,
- 'musiclm': MusicLMPattern,
- }
- name = cfg.modeling
- kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {}
- klass = pattern_providers[name]
- return klass(n_q, **kwargs)
-
-
-def get_debug_compression_model(device='cpu', sample_rate: int = 32000):
- """Instantiate a debug compression model to be used for unit tests."""
- assert sample_rate in [16000, 32000], "unsupported sample rate for debug compression model"
- model_ratios = {
- 16000: [10, 8, 8], # 25 Hz at 16kHz
- 32000: [10, 8, 16] # 25 Hz at 32kHz
- }
- ratios: tp.List[int] = model_ratios[sample_rate]
- frame_rate = 25
- seanet_kwargs: dict = {
- 'n_filters': 4,
- 'n_residual_layers': 1,
- 'dimension': 32,
- 'ratios': ratios,
- }
- print(seanet_kwargs)
- encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs)
- decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs)
- quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4)
- init_x = torch.randn(8, 32, 128)
- quantizer(init_x, 1) # initialize kmeans etc.
- compression_model = EncodecModel(
- encoder, decoder, quantizer,
- frame_rate=frame_rate, sample_rate=sample_rate, channels=1).to(device)
- return compression_model.eval()
-
-
-def get_diffusion_model(cfg: omegaconf.DictConfig):
- # TODO Find a way to infer the channels from dset
- channels = cfg.channels
- num_steps = cfg.schedule.num_steps
- return DiffusionUnet(
- chin=channels, num_steps=num_steps, **cfg.diffusion_unet)
-
-
-def get_processor(cfg, sample_rate: int = 24000):
- sample_processor = SampleProcessor()
- if cfg.use:
- kw = dict(cfg)
- kw.pop('use')
- kw.pop('name')
- if cfg.name == "multi_band_processor":
- sample_processor = MultiBandProcessor(sample_rate=sample_rate, **kw)
- return sample_processor
-
-
-def get_debug_lm_model(device='cpu'):
- """Instantiate a debug LM to be used for unit tests."""
- pattern = DelayedPatternProvider(n_q=4)
- dim = 16
- providers = {
- 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"),
- }
- condition_provider = ConditioningProvider(providers)
- fuser = ConditionFuser(
- {'cross': ['description'], 'prepend': [],
- 'sum': [], 'input_interpolate': []})
- lm = LMModel(
- pattern, condition_provider, fuser,
- n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2,
- cross_attention=True, causal=True)
- return lm.to(device).eval()
-
-
-def get_wrapped_compression_model(
- compression_model: CompressionModel,
- cfg: omegaconf.DictConfig) -> CompressionModel:
- # more to come.
- return compression_model
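
A minimal sketch of how the two debug factories above could be exercised in a quick smoke test; the import path is assumed from the package layout, and only the factory signatures themselves come from this file:

    # Sketch only: the module path audiocraft.models.builders is an assumption.
    from audiocraft.models.builders import (
        get_debug_compression_model,
        get_debug_lm_model,
    )

    compression_model = get_debug_compression_model(device='cpu', sample_rate=32000)  # ~25 Hz frame rate
    lm = get_debug_lm_model(device='cpu')
    # Both are returned in eval() mode, ready for lightweight forward-pass checks.
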
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/layers/test_nms_rotated.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/layers/test_nms_rotated.py
deleted file mode 100644
index 4b45384892ab2a7cb20871cf19374f1bd08907ce..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/layers/test_nms_rotated.py
+++ /dev/null
@@ -1,172 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from __future__ import absolute_import, division, print_function, unicode_literals
-import numpy as np
-import unittest
-from copy import deepcopy
-import torch
-from torchvision import ops
-
-from detectron2.layers import batched_nms, batched_nms_rotated, nms_rotated
-from detectron2.utils.testing import random_boxes
-
-
-def nms_edit_distance(keep1, keep2):
- """
-    Compare the "keep" result of two nms calls.
-    They are allowed to be different in terms of edit distance
-    due to floating point precision issues, e.g.,
-    if a box happens to have an IoU of 0.5 with another box,
-    one implementation may choose to keep it while another may discard it.
- """
- keep1, keep2 = keep1.cpu(), keep2.cpu()
- if torch.equal(keep1, keep2):
- # they should be equal most of the time
- return 0
- keep1, keep2 = tuple(keep1), tuple(keep2)
- m, n = len(keep1), len(keep2)
-
- # edit distance with DP
- f = [np.arange(n + 1), np.arange(n + 1)]
- for i in range(m):
- cur_row = i % 2
- other_row = (i + 1) % 2
- f[other_row][0] = i + 1
- for j in range(n):
- f[other_row][j + 1] = (
- f[cur_row][j]
- if keep1[i] == keep2[j]
- else min(min(f[cur_row][j], f[cur_row][j + 1]), f[other_row][j]) + 1
- )
- return f[m % 2][n]
-
-
-class TestNMSRotated(unittest.TestCase):
- def reference_horizontal_nms(self, boxes, scores, iou_threshold):
- """
- Args:
-            boxes (N, 4): boxes in corner form (x1, y1, x2, y2).
-            scores (N,): per-box probabilities.
- iou_threshold: intersection over union threshold.
- Returns:
- picked: a list of indexes of the kept boxes
- """
- picked = []
- _, indexes = scores.sort(descending=True)
- while len(indexes) > 0:
- current = indexes[0]
- picked.append(current.item())
- if len(indexes) == 1:
- break
- current_box = boxes[current, :]
- indexes = indexes[1:]
- rest_boxes = boxes[indexes, :]
- iou = ops.box_iou(rest_boxes, current_box.unsqueeze(0)).squeeze(1)
- indexes = indexes[iou <= iou_threshold]
-
- return torch.as_tensor(picked)
-
- def _create_tensors(self, N, device="cpu"):
- boxes = random_boxes(N, 200, device=device)
- scores = torch.rand(N, device=device)
- return boxes, scores
-
- def test_batched_nms_rotated_0_degree_cpu(self, device="cpu"):
- N = 2000
- num_classes = 50
- boxes, scores = self._create_tensors(N, device=device)
- idxs = torch.randint(0, num_classes, (N,))
- rotated_boxes = torch.zeros(N, 5, device=device)
- rotated_boxes[:, 0] = (boxes[:, 0] + boxes[:, 2]) / 2.0
- rotated_boxes[:, 1] = (boxes[:, 1] + boxes[:, 3]) / 2.0
- rotated_boxes[:, 2] = boxes[:, 2] - boxes[:, 0]
- rotated_boxes[:, 3] = boxes[:, 3] - boxes[:, 1]
- err_msg = "Rotated NMS with 0 degree is incompatible with horizontal NMS for IoU={}"
- for iou in [0.2, 0.5, 0.8]:
- backup = boxes.clone()
- keep_ref = batched_nms(boxes, scores, idxs, iou)
- assert torch.allclose(boxes, backup), "boxes modified by batched_nms"
- backup = rotated_boxes.clone()
- keep = batched_nms_rotated(rotated_boxes, scores, idxs, iou)
- assert torch.allclose(
- rotated_boxes, backup
- ), "rotated_boxes modified by batched_nms_rotated"
-            # Occasionally the gap can be large if there are many IoU values right on the threshold boundary
- self.assertLessEqual(nms_edit_distance(keep, keep_ref), 5, err_msg.format(iou))
-
- @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
- def test_batched_nms_rotated_0_degree_cuda(self):
- self.test_batched_nms_rotated_0_degree_cpu(device="cuda")
-
- def test_nms_rotated_0_degree_cpu(self, device="cpu"):
- N = 1000
- boxes, scores = self._create_tensors(N, device=device)
- rotated_boxes = torch.zeros(N, 5, device=device)
- rotated_boxes[:, 0] = (boxes[:, 0] + boxes[:, 2]) / 2.0
- rotated_boxes[:, 1] = (boxes[:, 1] + boxes[:, 3]) / 2.0
- rotated_boxes[:, 2] = boxes[:, 2] - boxes[:, 0]
- rotated_boxes[:, 3] = boxes[:, 3] - boxes[:, 1]
- err_msg = "Rotated NMS incompatible between CPU and reference implementation for IoU={}"
- for iou in [0.2, 0.5, 0.8]:
- keep_ref = self.reference_horizontal_nms(boxes, scores, iou)
- keep = nms_rotated(rotated_boxes, scores, iou)
- self.assertLessEqual(nms_edit_distance(keep, keep_ref), 1, err_msg.format(iou))
-
- @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
- def test_nms_rotated_0_degree_cuda(self):
- self.test_nms_rotated_0_degree_cpu(device="cuda")
-
- def test_nms_rotated_90_degrees_cpu(self):
- N = 1000
- boxes, scores = self._create_tensors(N)
- rotated_boxes = torch.zeros(N, 5)
- rotated_boxes[:, 0] = (boxes[:, 0] + boxes[:, 2]) / 2.0
- rotated_boxes[:, 1] = (boxes[:, 1] + boxes[:, 3]) / 2.0
- # Note for rotated_boxes[:, 2] and rotated_boxes[:, 3]:
- # widths and heights are intentionally swapped here for 90 degrees case
- # so that the reference horizontal nms could be used
- rotated_boxes[:, 2] = boxes[:, 3] - boxes[:, 1]
- rotated_boxes[:, 3] = boxes[:, 2] - boxes[:, 0]
-
- rotated_boxes[:, 4] = torch.ones(N) * 90
- err_msg = "Rotated NMS incompatible between CPU and reference implementation for IoU={}"
- for iou in [0.2, 0.5, 0.8]:
- keep_ref = self.reference_horizontal_nms(boxes, scores, iou)
- keep = nms_rotated(rotated_boxes, scores, iou)
- self.assertLessEqual(nms_edit_distance(keep, keep_ref), 1, err_msg.format(iou))
-
- def test_nms_rotated_180_degrees_cpu(self):
- N = 1000
- boxes, scores = self._create_tensors(N)
- rotated_boxes = torch.zeros(N, 5)
- rotated_boxes[:, 0] = (boxes[:, 0] + boxes[:, 2]) / 2.0
- rotated_boxes[:, 1] = (boxes[:, 1] + boxes[:, 3]) / 2.0
- rotated_boxes[:, 2] = boxes[:, 2] - boxes[:, 0]
- rotated_boxes[:, 3] = boxes[:, 3] - boxes[:, 1]
- rotated_boxes[:, 4] = torch.ones(N) * 180
- err_msg = "Rotated NMS incompatible between CPU and reference implementation for IoU={}"
- for iou in [0.2, 0.5, 0.8]:
- keep_ref = self.reference_horizontal_nms(boxes, scores, iou)
- keep = nms_rotated(rotated_boxes, scores, iou)
- self.assertLessEqual(nms_edit_distance(keep, keep_ref), 1, err_msg.format(iou))
-
-
-class TestScriptable(unittest.TestCase):
- def setUp(self):
- class TestingModule(torch.nn.Module):
- def forward(self, boxes, scores, threshold):
- return nms_rotated(boxes, scores, threshold)
-
- self.module = TestingModule()
-
- def test_scriptable_cpu(self):
- m = deepcopy(self.module).cpu()
- _ = torch.jit.script(m)
-
- @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
- def test_scriptable_cuda(self):
- m = deepcopy(self.module).cuda()
- _ = torch.jit.script(m)
-
-
-if __name__ == "__main__":
- unittest.main()
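
As a quick illustration of the nms_edit_distance helper defined at the top of this test file, a minimal sketch; the expected value of 1 follows from the DP above (one deletion turns the first keep list into the second):

    # Hypothetical usage; the keep lists are index tensors as returned by an NMS call.
    import torch
    keep_a = torch.tensor([0, 2, 3])
    keep_b = torch.tensor([0, 3])
    assert nms_edit_distance(keep_a, keep_b) == 1
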
diff --git a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/platforms/egl.py b/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/platforms/egl.py
deleted file mode 100644
index ae2478d29c9a538c53ad83fa31f8e2277cd897c8..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/platforms/egl.py
+++ /dev/null
@@ -1,219 +0,0 @@
-import ctypes
-import os
-
-import OpenGL.platform
-
-from .base import Platform
-
-EGL_PLATFORM_DEVICE_EXT = 0x313F
-EGL_DRM_DEVICE_FILE_EXT = 0x3233
-
-
-def _ensure_egl_loaded():
- plugin = OpenGL.platform.PlatformPlugin.by_name('egl')
- if plugin is None:
- raise RuntimeError("EGL platform plugin is not available.")
-
- plugin_class = plugin.load()
- plugin.loaded = True
- # create instance of this platform implementation
- plugin = plugin_class()
-
- plugin.install(vars(OpenGL.platform))
-
-
-_ensure_egl_loaded()
-from OpenGL import EGL as egl
-
-
-def _get_egl_func(func_name, res_type, *arg_types):
- address = egl.eglGetProcAddress(func_name)
- if address is None:
- return None
-
- proto = ctypes.CFUNCTYPE(res_type)
- proto.argtypes = arg_types
- func = proto(address)
- return func
-
-
-def _get_egl_struct(struct_name):
- from OpenGL._opaque import opaque_pointer_cls
- return opaque_pointer_cls(struct_name)
-
-
-# These are not defined in PyOpenGL by default.
-_EGLDeviceEXT = _get_egl_struct('EGLDeviceEXT')
-_eglGetPlatformDisplayEXT = _get_egl_func('eglGetPlatformDisplayEXT', egl.EGLDisplay)
-_eglQueryDevicesEXT = _get_egl_func('eglQueryDevicesEXT', egl.EGLBoolean)
-_eglQueryDeviceStringEXT = _get_egl_func('eglQueryDeviceStringEXT', ctypes.c_char_p)
-
-
-def query_devices():
- if _eglQueryDevicesEXT is None:
- raise RuntimeError("EGL query extension is not loaded or is not supported.")
-
- num_devices = egl.EGLint()
- success = _eglQueryDevicesEXT(0, None, ctypes.pointer(num_devices))
- if not success or num_devices.value < 1:
- return []
-
- devices = (_EGLDeviceEXT * num_devices.value)() # array of size num_devices
- success = _eglQueryDevicesEXT(num_devices.value, devices, ctypes.pointer(num_devices))
- if not success or num_devices.value < 1:
- return []
-
- return [EGLDevice(devices[i]) for i in range(num_devices.value)]
-
-
-def get_default_device():
- # Fall back to not using query extension.
- if _eglQueryDevicesEXT is None:
- return EGLDevice(None)
-
- return query_devices()[0]
-
-
-def get_device_by_index(device_id):
- if _eglQueryDevicesEXT is None and device_id == 0:
- return get_default_device()
-
- devices = query_devices()
- if device_id >= len(devices):
-        raise ValueError('Invalid device ID ({}); only {} devices available'.format(device_id, len(devices)))
- return devices[device_id]
-
-
-class EGLDevice:
-
- def __init__(self, display=None):
- self._display = display
-
- def get_display(self):
- if self._display is None:
- return egl.eglGetDisplay(egl.EGL_DEFAULT_DISPLAY)
-
- return _eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, self._display, None)
-
- @property
- def name(self):
- if self._display is None:
- return 'default'
-
- name = _eglQueryDeviceStringEXT(self._display, EGL_DRM_DEVICE_FILE_EXT)
- if name is None:
- return None
-
- return name.decode('ascii')
-
- def __repr__(self):
- return "".format(self.name)
-
-
-class EGLPlatform(Platform):
- """Renders using EGL.
- """
-
- def __init__(self, viewport_width, viewport_height, device: EGLDevice = None):
- super(EGLPlatform, self).__init__(viewport_width, viewport_height)
- if device is None:
- device = get_default_device()
-
- self._egl_device = device
- self._egl_display = None
- self._egl_context = None
-
- def init_context(self):
- _ensure_egl_loaded()
-
- from OpenGL.EGL import (
- EGL_SURFACE_TYPE, EGL_PBUFFER_BIT, EGL_BLUE_SIZE,
- EGL_RED_SIZE, EGL_GREEN_SIZE, EGL_DEPTH_SIZE,
- EGL_COLOR_BUFFER_TYPE, EGL_RGB_BUFFER,
- EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_CONFORMANT,
- EGL_NONE, EGL_DEFAULT_DISPLAY, EGL_NO_CONTEXT,
- EGL_OPENGL_API, EGL_CONTEXT_MAJOR_VERSION,
- EGL_CONTEXT_MINOR_VERSION,
- EGL_CONTEXT_OPENGL_PROFILE_MASK,
- EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT,
- eglGetDisplay, eglInitialize, eglChooseConfig,
- eglBindAPI, eglCreateContext, EGLConfig
- )
- from OpenGL import arrays
-
- config_attributes = arrays.GLintArray.asArray([
- EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
- EGL_BLUE_SIZE, 8,
- EGL_RED_SIZE, 8,
- EGL_GREEN_SIZE, 8,
- EGL_DEPTH_SIZE, 24,
- EGL_COLOR_BUFFER_TYPE, EGL_RGB_BUFFER,
- EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
- EGL_CONFORMANT, EGL_OPENGL_BIT,
- EGL_NONE
- ])
- context_attributes = arrays.GLintArray.asArray([
- EGL_CONTEXT_MAJOR_VERSION, 4,
- EGL_CONTEXT_MINOR_VERSION, 1,
- EGL_CONTEXT_OPENGL_PROFILE_MASK,
- EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT,
- EGL_NONE
- ])
- major, minor = ctypes.c_long(), ctypes.c_long()
- num_configs = ctypes.c_long()
- configs = (EGLConfig * 1)()
-
- # Cache DISPLAY if necessary and get an off-screen EGL display
- orig_dpy = None
- if 'DISPLAY' in os.environ:
- orig_dpy = os.environ['DISPLAY']
- del os.environ['DISPLAY']
-
- self._egl_display = self._egl_device.get_display()
- if orig_dpy is not None:
- os.environ['DISPLAY'] = orig_dpy
-
- # Initialize EGL
- assert eglInitialize(self._egl_display, major, minor)
- assert eglChooseConfig(
- self._egl_display, config_attributes, configs, 1, num_configs
- )
-
- # Bind EGL to the OpenGL API
- assert eglBindAPI(EGL_OPENGL_API)
-
- # Create an EGL context
- self._egl_context = eglCreateContext(
- self._egl_display, configs[0],
- EGL_NO_CONTEXT, context_attributes
- )
-
- # Make it current
- self.make_current()
-
- def make_current(self):
- from OpenGL.EGL import eglMakeCurrent, EGL_NO_SURFACE
- assert eglMakeCurrent(
- self._egl_display, EGL_NO_SURFACE, EGL_NO_SURFACE,
- self._egl_context
- )
-
- def make_uncurrent(self):
- """Make the OpenGL context uncurrent.
- """
- pass
-
- def delete_context(self):
- from OpenGL.EGL import eglDestroyContext, eglTerminate
- if self._egl_display is not None:
- if self._egl_context is not None:
- eglDestroyContext(self._egl_display, self._egl_context)
- self._egl_context = None
- eglTerminate(self._egl_display)
- self._egl_display = None
-
- def supports_framebuffers(self):
- return True
-
-
-__all__ = ['EGLPlatform']
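
A minimal sketch of driving EGLPlatform directly for headless rendering; in normal use pyrender selects this platform through its off-screen renderer, so this is for illustration only:

    # Sketch only: create an off-screen EGL context on the default device.
    platform = EGLPlatform(viewport_width=640, viewport_height=480)
    platform.init_context()     # chooses a config, binds the OpenGL API, creates the context
    platform.make_current()
    # ... issue OpenGL calls / render into an FBO here ...
    platform.delete_context()   # destroys the context and terminates the display
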
diff --git a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/sampler.py b/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/sampler.py
deleted file mode 100644
index e4784d068f808a40a56c8e748d83175f7f4e6233..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/sampler.py
+++ /dev/null
@@ -1,102 +0,0 @@
-"""Samplers, conforming to the glTF 2.0 standards as specified in
-https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#reference-sampler
-
-Author: Matthew Matl
-"""
-from .constants import GLTF
-
-
-class Sampler(object):
- """Texture sampler properties for filtering and wrapping modes.
-
- Parameters
- ----------
- name : str, optional
- The user-defined name of this object.
- magFilter : int, optional
- Magnification filter. Valid values:
- - :attr:`.GLTF.NEAREST`
- - :attr:`.GLTF.LINEAR`
- minFilter : int, optional
- Minification filter. Valid values:
- - :attr:`.GLTF.NEAREST`
- - :attr:`.GLTF.LINEAR`
- - :attr:`.GLTF.NEAREST_MIPMAP_NEAREST`
- - :attr:`.GLTF.LINEAR_MIPMAP_NEAREST`
- - :attr:`.GLTF.NEAREST_MIPMAP_LINEAR`
- - :attr:`.GLTF.LINEAR_MIPMAP_LINEAR`
- wrapS : int, optional
- S (U) wrapping mode. Valid values:
- - :attr:`.GLTF.CLAMP_TO_EDGE`
- - :attr:`.GLTF.MIRRORED_REPEAT`
- - :attr:`.GLTF.REPEAT`
- wrapT : int, optional
- T (V) wrapping mode. Valid values:
- - :attr:`.GLTF.CLAMP_TO_EDGE`
- - :attr:`.GLTF.MIRRORED_REPEAT`
- - :attr:`.GLTF.REPEAT`
- """
-
- def __init__(self,
- name=None,
- magFilter=None,
- minFilter=None,
- wrapS=GLTF.REPEAT,
- wrapT=GLTF.REPEAT):
- self.name = name
- self.magFilter = magFilter
- self.minFilter = minFilter
- self.wrapS = wrapS
- self.wrapT = wrapT
-
- @property
- def name(self):
- """str : The user-defined name of this object.
- """
- return self._name
-
- @name.setter
- def name(self, value):
- if value is not None:
- value = str(value)
- self._name = value
-
- @property
- def magFilter(self):
- """int : Magnification filter type.
- """
- return self._magFilter
-
- @magFilter.setter
- def magFilter(self, value):
- self._magFilter = value
-
- @property
- def minFilter(self):
- """int : Minification filter type.
- """
- return self._minFilter
-
- @minFilter.setter
- def minFilter(self, value):
- self._minFilter = value
-
- @property
- def wrapS(self):
- """int : S (U) wrapping mode.
- """
- return self._wrapS
-
- @wrapS.setter
- def wrapS(self, value):
- self._wrapS = value
-
- @property
- def wrapT(self):
- """int : T (V) wrapping mode.
- """
- return self._wrapT
-
- @wrapT.setter
- def wrapT(self, value):
- self._wrapT = value
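
A short usage sketch for the class above; GLTF is the constants enum imported at the top of this module, and the filter/wrap values are among those listed as valid in the docstring:

    # Sketch: a trilinear sampler with repeat wrapping, per the glTF 2.0 enums.
    sampler = Sampler(
        name="trilinear_repeat",
        magFilter=GLTF.LINEAR,
        minFilter=GLTF.LINEAR_MIPMAP_LINEAR,
        wrapS=GLTF.REPEAT,
        wrapT=GLTF.REPEAT,
    )
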
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py
deleted file mode 100644
index 7b86ea8c6c5c48f5d26c9e0df7cf96e745b17b34..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from .mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ import (
- dataloader,
- lr_multiplier,
- model,
- optimizer,
- train,
-)
-
-train.max_iter *= 4 # 100ep -> 400ep
-
-lr_multiplier.scheduler.milestones = [
- milestone * 4 for milestone in lr_multiplier.scheduler.milestones
-]
-lr_multiplier.scheduler.num_updates = train.max_iter
diff --git a/spaces/ccolas/TastyPiano/src/music/utilities/processing_models/__init__.py b/spaces/ccolas/TastyPiano/src/music/utilities/processing_models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/chasemcdo/hf_localai/pkg/gallery/models.go b/spaces/chasemcdo/hf_localai/pkg/gallery/models.go
deleted file mode 100644
index 4295a99511a64ae35e515d07444271e6f880d6fa..0000000000000000000000000000000000000000
--- a/spaces/chasemcdo/hf_localai/pkg/gallery/models.go
+++ /dev/null
@@ -1,317 +0,0 @@
-package gallery
-
-import (
- "crypto/sha256"
- "fmt"
- "hash"
- "io"
- "net/http"
- "os"
- "path/filepath"
- "strconv"
-
- "github.com/go-skynet/LocalAI/pkg/utils"
- "github.com/imdario/mergo"
- "github.com/rs/zerolog/log"
- "gopkg.in/yaml.v2"
-)
-
-/*
-
-description: |
- foo
-license: ""
-
-urls:
--
--
-
-name: "bar"
-
-config_file: |
- # Note: the name will be injected, or generated from the alias chosen by the user
- threads: 14
-
-files:
- - filename: ""
- sha: ""
- uri: ""
-
-prompt_templates:
- - name: ""
- content: ""
-
-*/
-// Config is the model configuration which contains all the model details
-// This configuration is read from the gallery endpoint and is used to download and install the model
-type Config struct {
- Description string `yaml:"description"`
- License string `yaml:"license"`
- URLs []string `yaml:"urls"`
- Name string `yaml:"name"`
- ConfigFile string `yaml:"config_file"`
- Files []File `yaml:"files"`
- PromptTemplates []PromptTemplate `yaml:"prompt_templates"`
-}
-
-type File struct {
- Filename string `yaml:"filename" json:"filename"`
- SHA256 string `yaml:"sha256" json:"sha256"`
- URI string `yaml:"uri" json:"uri"`
-}
-
-type PromptTemplate struct {
- Name string `yaml:"name"`
- Content string `yaml:"content"`
-}
-
-func GetGalleryConfigFromURL(url string) (Config, error) {
- var config Config
- err := utils.GetURI(url, func(url string, d []byte) error {
- return yaml.Unmarshal(d, &config)
- })
- if err != nil {
- return config, err
- }
- return config, nil
-}
-
-func ReadConfigFile(filePath string) (*Config, error) {
- // Read the YAML file
- yamlFile, err := os.ReadFile(filePath)
- if err != nil {
- return nil, fmt.Errorf("failed to read YAML file: %v", err)
- }
-
- // Unmarshal YAML data into a Config struct
- var config Config
- err = yaml.Unmarshal(yamlFile, &config)
- if err != nil {
- return nil, fmt.Errorf("failed to unmarshal YAML: %v", err)
- }
-
- return &config, nil
-}
-
-func InstallModel(basePath, nameOverride string, config *Config, configOverrides map[string]interface{}, downloadStatus func(string, string, string, float64)) error {
- // Create base path if it doesn't exist
- err := os.MkdirAll(basePath, 0755)
- if err != nil {
- return fmt.Errorf("failed to create base path: %v", err)
- }
-
- if len(configOverrides) > 0 {
- log.Debug().Msgf("Config overrides %+v", configOverrides)
- }
-
- // Download files and verify their SHA
- for _, file := range config.Files {
- log.Debug().Msgf("Checking %q exists and matches SHA", file.Filename)
-
- if err := utils.VerifyPath(file.Filename, basePath); err != nil {
- return err
- }
- // Create file path
- filePath := filepath.Join(basePath, file.Filename)
-
- // Check if the file already exists
- _, err := os.Stat(filePath)
- if err == nil {
- // File exists, check SHA
- if file.SHA256 != "" {
- // Verify SHA
- calculatedSHA, err := calculateSHA(filePath)
- if err != nil {
- return fmt.Errorf("failed to calculate SHA for file %q: %v", file.Filename, err)
- }
- if calculatedSHA == file.SHA256 {
- // SHA matches, skip downloading
- log.Debug().Msgf("File %q already exists and matches the SHA. Skipping download", file.Filename)
- continue
- }
- // SHA doesn't match, delete the file and download again
- err = os.Remove(filePath)
- if err != nil {
- return fmt.Errorf("failed to remove existing file %q: %v", file.Filename, err)
- }
- log.Debug().Msgf("Removed %q (SHA doesn't match)", filePath)
-
- } else {
- // SHA is missing, skip downloading
- log.Debug().Msgf("File %q already exists. Skipping download", file.Filename)
- continue
- }
- } else if !os.IsNotExist(err) {
- // Error occurred while checking file existence
- return fmt.Errorf("failed to check file %q existence: %v", file.Filename, err)
- }
-
- log.Debug().Msgf("Downloading %q", file.URI)
-
- // Download file
- resp, err := http.Get(file.URI)
- if err != nil {
- return fmt.Errorf("failed to download file %q: %v", file.Filename, err)
- }
- defer resp.Body.Close()
-
- // Create parent directory
- err = os.MkdirAll(filepath.Dir(filePath), 0755)
- if err != nil {
- return fmt.Errorf("failed to create parent directory for file %q: %v", file.Filename, err)
- }
-
- // Create and write file content
- outFile, err := os.Create(filePath)
- if err != nil {
- return fmt.Errorf("failed to create file %q: %v", file.Filename, err)
- }
- defer outFile.Close()
-
- progress := &progressWriter{
- fileName: file.Filename,
- total: resp.ContentLength,
- hash: sha256.New(),
- downloadStatus: downloadStatus,
- }
- _, err = io.Copy(io.MultiWriter(outFile, progress), resp.Body)
- if err != nil {
- return fmt.Errorf("failed to write file %q: %v", file.Filename, err)
- }
-
- if file.SHA256 != "" {
- // Verify SHA
- calculatedSHA := fmt.Sprintf("%x", progress.hash.Sum(nil))
- if calculatedSHA != file.SHA256 {
- log.Debug().Msgf("SHA mismatch for file %q ( calculated: %s != metadata: %s )", file.Filename, calculatedSHA, file.SHA256)
- return fmt.Errorf("SHA mismatch for file %q ( calculated: %s != metadata: %s )", file.Filename, calculatedSHA, file.SHA256)
- }
- } else {
- log.Debug().Msgf("SHA missing for %q. Skipping validation", file.Filename)
- }
-
- log.Debug().Msgf("File %q downloaded and verified", file.Filename)
- if utils.IsArchive(filePath) {
- log.Debug().Msgf("File %q is an archive, uncompressing to %s", file.Filename, basePath)
- if err := utils.ExtractArchive(filePath, basePath); err != nil {
- log.Debug().Msgf("Failed decompressing %q: %s", file.Filename, err.Error())
- return err
- }
- }
- }
-
- // Write prompt template contents to separate files
- for _, template := range config.PromptTemplates {
- if err := utils.VerifyPath(template.Name+".tmpl", basePath); err != nil {
- return err
- }
- // Create file path
- filePath := filepath.Join(basePath, template.Name+".tmpl")
-
- // Create parent directory
- err := os.MkdirAll(filepath.Dir(filePath), 0755)
- if err != nil {
- return fmt.Errorf("failed to create parent directory for prompt template %q: %v", template.Name, err)
- }
- // Create and write file content
- err = os.WriteFile(filePath, []byte(template.Content), 0644)
- if err != nil {
- return fmt.Errorf("failed to write prompt template %q: %v", template.Name, err)
- }
-
- log.Debug().Msgf("Prompt template %q written", template.Name)
- }
-
- name := config.Name
- if nameOverride != "" {
- name = nameOverride
- }
-
- if err := utils.VerifyPath(name+".yaml", basePath); err != nil {
- return err
- }
-
- // write config file
- if len(configOverrides) != 0 || len(config.ConfigFile) != 0 {
- configFilePath := filepath.Join(basePath, name+".yaml")
-
- // Read and update config file as map[string]interface{}
- configMap := make(map[string]interface{})
- err = yaml.Unmarshal([]byte(config.ConfigFile), &configMap)
- if err != nil {
- return fmt.Errorf("failed to unmarshal config YAML: %v", err)
- }
-
- configMap["name"] = name
-
- if err := mergo.Merge(&configMap, configOverrides, mergo.WithOverride); err != nil {
- return err
- }
-
- // Write updated config file
- updatedConfigYAML, err := yaml.Marshal(configMap)
- if err != nil {
- return fmt.Errorf("failed to marshal updated config YAML: %v", err)
- }
-
- err = os.WriteFile(configFilePath, updatedConfigYAML, 0644)
- if err != nil {
- return fmt.Errorf("failed to write updated config file: %v", err)
- }
-
- log.Debug().Msgf("Written config file %s", configFilePath)
- }
-
- return nil
-}
-
-type progressWriter struct {
- fileName string
- total int64
- written int64
- downloadStatus func(string, string, string, float64)
- hash hash.Hash
-}
-
-func (pw *progressWriter) Write(p []byte) (n int, err error) {
- n, err = pw.hash.Write(p)
- pw.written += int64(n)
-
- if pw.total > 0 {
- percentage := float64(pw.written) / float64(pw.total) * 100
- //log.Debug().Msgf("Downloading %s: %s/%s (%.2f%%)", pw.fileName, formatBytes(pw.written), formatBytes(pw.total), percentage)
- pw.downloadStatus(pw.fileName, formatBytes(pw.written), formatBytes(pw.total), percentage)
- } else {
- pw.downloadStatus(pw.fileName, formatBytes(pw.written), "", 0)
- }
-
- return
-}
-
-func formatBytes(bytes int64) string {
- const unit = 1024
- if bytes < unit {
- return strconv.FormatInt(bytes, 10) + " B"
- }
- div, exp := int64(unit), 0
- for n := bytes / unit; n >= unit; n /= unit {
- div *= unit
- exp++
- }
- return fmt.Sprintf("%.1f %ciB", float64(bytes)/float64(div), "KMGTPE"[exp])
-}
-
-func calculateSHA(filePath string) (string, error) {
- file, err := os.Open(filePath)
- if err != nil {
- return "", err
- }
- defer file.Close()
-
- hash := sha256.New()
- if _, err := io.Copy(hash, file); err != nil {
- return "", err
- }
-
- return fmt.Sprintf("%x", hash.Sum(nil)), nil
-}
diff --git a/spaces/chilleverydaychill/roop/roop/utilities.py b/spaces/chilleverydaychill/roop/roop/utilities.py
deleted file mode 100644
index 90c8d981f5f159a459ca0c08cc23dfac8d04c068..0000000000000000000000000000000000000000
--- a/spaces/chilleverydaychill/roop/roop/utilities.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import glob
-import mimetypes
-import os
-import platform
-import shutil
-import ssl
-import subprocess
-import urllib
-from pathlib import Path
-from typing import List, Any
-from tqdm import tqdm
-
-import roop.globals
-
-TEMP_FILE = 'temp.mp4'
-TEMP_DIRECTORY = 'temp'
-
-# monkey patch ssl for mac
-if platform.system().lower() == 'darwin':
- ssl._create_default_https_context = ssl._create_unverified_context
-
-
-def run_ffmpeg(args: List[str]) -> bool:
- commands = ['ffmpeg', '-hide_banner', '-hwaccel', 'auto', '-loglevel', roop.globals.log_level]
- commands.extend(args)
- try:
- subprocess.check_output(commands, stderr=subprocess.STDOUT)
- return True
- except Exception:
- pass
- return False
-
-
-def detect_fps(target_path: str) -> float:
- command = ['ffprobe', '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=r_frame_rate', '-of', 'default=noprint_wrappers=1:nokey=1', target_path]
- output = subprocess.check_output(command).decode().strip().split('/')
- try:
- numerator, denominator = map(int, output)
- return numerator / denominator
- except Exception:
- pass
- return 30.0
-
-
-def extract_frames(target_path: str) -> None:
- temp_directory_path = get_temp_directory_path(target_path)
- run_ffmpeg(['-i', target_path, '-pix_fmt', 'rgb24', os.path.join(temp_directory_path, '%04d.png')])
-
-
-def create_video(target_path: str, fps: float = 30.0) -> None:
- temp_output_path = get_temp_output_path(target_path)
- temp_directory_path = get_temp_directory_path(target_path)
- run_ffmpeg(['-r', str(fps), '-i', os.path.join(temp_directory_path, '%04d.png'), '-c:v', roop.globals.video_encoder, '-crf', str(roop.globals.video_quality), '-pix_fmt', 'yuv420p', '-vf', 'colorspace=bt709:iall=bt601-6-625:fast=1', '-y', temp_output_path])
-
-
-def restore_audio(target_path: str, output_path: str) -> None:
- temp_output_path = get_temp_output_path(target_path)
- done = run_ffmpeg(['-i', temp_output_path, '-i', target_path, '-c:v', 'copy', '-map', '0:v:0', '-map', '1:a:0', '-y', output_path])
- if not done:
- move_temp(target_path, output_path)
-
-
-def get_temp_frame_paths(target_path: str) -> List[str]:
- temp_directory_path = get_temp_directory_path(target_path)
- return glob.glob((os.path.join(glob.escape(temp_directory_path), '*.png')))
-
-
-def get_temp_directory_path(target_path: str) -> str:
- target_name, _ = os.path.splitext(os.path.basename(target_path))
- target_directory_path = os.path.dirname(target_path)
- return os.path.join(target_directory_path, TEMP_DIRECTORY, target_name)
-
-
-def get_temp_output_path(target_path: str) -> str:
- temp_directory_path = get_temp_directory_path(target_path)
- return os.path.join(temp_directory_path, TEMP_FILE)
-
-
-def normalize_output_path(source_path: str, target_path: str, output_path: str) -> Any:
- if source_path and target_path:
- source_name, _ = os.path.splitext(os.path.basename(source_path))
- target_name, target_extension = os.path.splitext(os.path.basename(target_path))
- if os.path.isdir(output_path):
- return os.path.join(output_path, source_name + '-' + target_name + target_extension)
- return output_path
-
-
-def create_temp(target_path: str) -> None:
- temp_directory_path = get_temp_directory_path(target_path)
- Path(temp_directory_path).mkdir(parents=True, exist_ok=True)
-
-
-def move_temp(target_path: str, output_path: str) -> None:
- temp_output_path = get_temp_output_path(target_path)
- if os.path.isfile(temp_output_path):
- if os.path.isfile(output_path):
- os.remove(output_path)
- shutil.move(temp_output_path, output_path)
-
-
-def clean_temp(target_path: str) -> None:
- temp_directory_path = get_temp_directory_path(target_path)
- parent_directory_path = os.path.dirname(temp_directory_path)
- if not roop.globals.keep_frames and os.path.isdir(temp_directory_path):
- shutil.rmtree(temp_directory_path)
- if os.path.exists(parent_directory_path) and not os.listdir(parent_directory_path):
- os.rmdir(parent_directory_path)
-
-
-def has_image_extension(image_path: str) -> bool:
- return image_path.lower().endswith(('png', 'jpg', 'jpeg', 'webp'))
-
-
-def is_image(image_path: str) -> bool:
- if image_path and os.path.isfile(image_path):
- mimetype, _ = mimetypes.guess_type(image_path)
- return bool(mimetype and mimetype.startswith('image/'))
- return False
-
-
-def is_video(video_path: str) -> bool:
- if video_path and os.path.isfile(video_path):
- mimetype, _ = mimetypes.guess_type(video_path)
- return bool(mimetype and mimetype.startswith('video/'))
- return False
-
-
-def conditional_download(download_directory_path: str, urls: List[str]) -> None:
- if not os.path.exists(download_directory_path):
- os.makedirs(download_directory_path)
- for url in urls:
- download_file_path = os.path.join(download_directory_path, os.path.basename(url))
- if not os.path.exists(download_file_path):
- request = urllib.request.urlopen(url) # type: ignore[attr-defined]
- total = int(request.headers.get('Content-Length', 0))
- with tqdm(total=total, desc='Downloading', unit='B', unit_scale=True, unit_divisor=1024) as progress:
- urllib.request.urlretrieve(url, download_file_path, reporthook=lambda count, block_size, total_size: progress.update(block_size)) # type: ignore[attr-defined]
-
-
-def resolve_relative_path(path: str) -> str:
- return os.path.abspath(os.path.join(os.path.dirname(__file__), path))
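
A hedged sketch of the frame-processing round trip these helpers are built for; the paths are placeholders and roop.globals (video_encoder, video_quality, keep_frames, log_level) is assumed to be configured elsewhere:

    # Hypothetical pipeline using only the helpers defined above.
    target, output = '/tmp/target.mp4', '/tmp/output.mp4'   # placeholder paths
    if is_video(target):
        create_temp(target)            # make the temp frame directory
        extract_frames(target)         # dump frames as %04d.png via ffmpeg
        fps = detect_fps(target)
        # ... process get_temp_frame_paths(target) here ...
        create_video(target, fps)      # re-encode the processed frames
        restore_audio(target, output)  # mux the original audio back in
        clean_temp(target)
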
diff --git a/spaces/chongjie/ZoeDepth_slim/README.md b/spaces/chongjie/ZoeDepth_slim/README.md
deleted file mode 100644
index b964f6ef8e9f5197600b340537722f8e9ecfd6ff..0000000000000000000000000000000000000000
--- a/spaces/chongjie/ZoeDepth_slim/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Zoe Depth
-emoji: 🌍
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-duplicated_from: ameerazam08/zoe-depth
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/Hdf5StubImagePlugin.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/Hdf5StubImagePlugin.py
deleted file mode 100644
index bba05ed65a72c6b859f1722cefd0c75a59c43a37..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/Hdf5StubImagePlugin.py
+++ /dev/null
@@ -1,73 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# HDF5 stub adapter
-#
-# Copyright (c) 2000-2003 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-from . import Image, ImageFile
-
-_handler = None
-
-
-def register_handler(handler):
- """
- Install application-specific HDF5 image handler.
-
- :param handler: Handler object.
- """
- global _handler
- _handler = handler
-
-
-# --------------------------------------------------------------------
-# Image adapter
-
-
-def _accept(prefix):
- return prefix[:8] == b"\x89HDF\r\n\x1a\n"
-
-
-class HDF5StubImageFile(ImageFile.StubImageFile):
- format = "HDF5"
- format_description = "HDF5"
-
- def _open(self):
- offset = self.fp.tell()
-
- if not _accept(self.fp.read(8)):
- msg = "Not an HDF file"
- raise SyntaxError(msg)
-
- self.fp.seek(offset)
-
- # make something up
- self.mode = "F"
- self._size = 1, 1
-
- loader = self._load()
- if loader:
- loader.open(self)
-
- def _load(self):
- return _handler
-
-
-def _save(im, fp, filename):
- if _handler is None or not hasattr(_handler, "save"):
- msg = "HDF5 save handler not installed"
- raise OSError(msg)
- _handler.save(im, fp, filename)
-
-
-# --------------------------------------------------------------------
-# Registry
-
-Image.register_open(HDF5StubImageFile.format, HDF5StubImageFile, _accept)
-Image.register_save(HDF5StubImageFile.format, _save)
-
-Image.register_extensions(HDF5StubImageFile.format, [".h5", ".hdf"])
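
Because this is only a stub plugin, a hedged sketch of what an application-specific handler might look like; open() and save() are the hooks visible above, load() is assumed from PIL's StubImageFile contract, and MyHDF5Handler is purely hypothetical:

    # Hypothetical handler; none of these names are real PIL or h5py APIs.
    class MyHDF5Handler:
        def open(self, im):
            # inspect im.fp and set im.mode / im._size for the real dataset
            pass
        def load(self, im):
            # return a fully loaded PIL.Image built from the HDF5 payload (assumed hook)
            raise NotImplementedError
        def save(self, im, fp, filename):
            # write `im` into an HDF5 container
            raise NotImplementedError

    register_handler(MyHDF5Handler())
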
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/vegalite/v5/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/vegalite/v5/__init__.py
deleted file mode 100644
index 38202d89c0d86a9be7a39d4b189781c43427983e..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/vegalite/v5/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# ruff: noqa
-from .schema import *
-from .api import *
-
-from ...expr import datum, expr # type: ignore[no-redef]
-
-from .display import VegaLite, renderers
-
-from .data import (
- MaxRowsError,
- pipe,
- curry,
- limit_rows,
- sample,
- to_json,
- to_csv,
- to_values,
- default_data_transformer,
- data_transformers,
-)
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/utils/embedding_functions.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/utils/embedding_functions.py
deleted file mode 100644
index 4a449b4edfc5a68f5c42583b558500bc7db43ca3..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/utils/embedding_functions.py
+++ /dev/null
@@ -1,387 +0,0 @@
-from chromadb.api.types import Documents, EmbeddingFunction, Embeddings
-from pathlib import Path
-import os
-import tarfile
-import requests
-from typing import Any, Dict, List, cast
-import numpy as np
-import numpy.typing as npt
-import importlib
-from typing import Optional
-
-try:
- from chromadb.is_thin_client import is_thin_client
-except ImportError:
- is_thin_client = False
-
-
-class SentenceTransformerEmbeddingFunction(EmbeddingFunction):
- # Since we do dynamic imports we have to type this as Any
- models: Dict[str, Any] = {}
-
- # If you have a beefier machine, try "gtr-t5-large".
- # for a full list of options: https://huggingface.co/sentence-transformers, https://www.sbert.net/docs/pretrained_models.html
- def __init__(
- self,
- model_name: str = "all-MiniLM-L6-v2",
- device: str = "cpu",
- normalize_embeddings: bool = False,
- ):
- if model_name not in self.models:
- try:
- from sentence_transformers import SentenceTransformer
- except ImportError:
- raise ValueError(
- "The sentence_transformers python package is not installed. Please install it with `pip install sentence_transformers`"
- )
- self.models[model_name] = SentenceTransformer(model_name, device=device)
- self._model = self.models[model_name]
- self._normalize_embeddings = normalize_embeddings
-
- def __call__(self, texts: Documents) -> Embeddings:
- return self._model.encode(
- list(texts),
- convert_to_numpy=True,
- normalize_embeddings=self._normalize_embeddings,
- ).tolist()
-
-
-class Text2VecEmbeddingFunction(EmbeddingFunction):
- def __init__(self, model_name: str = "shibing624/text2vec-base-chinese"):
- try:
- from text2vec import SentenceModel
- except ImportError:
- raise ValueError(
- "The text2vec python package is not installed. Please install it with `pip install text2vec`"
- )
- self._model = SentenceModel(model_name_or_path=model_name)
-
- def __call__(self, texts: Documents) -> Embeddings:
- return self._model.encode(list(texts), convert_to_numpy=True).tolist() # type: ignore # noqa E501
-
-
-class OpenAIEmbeddingFunction(EmbeddingFunction):
- def __init__(
- self,
- api_key: Optional[str] = None,
- model_name: str = "text-embedding-ada-002",
- organization_id: Optional[str] = None,
- api_base: Optional[str] = None,
- api_type: Optional[str] = None,
- ):
- """
- Initialize the OpenAIEmbeddingFunction.
-
- Args:
- api_key (str, optional): Your API key for the OpenAI API. If not
- provided, it will raise an error to provide an OpenAI API key.
- organization_id(str, optional): The OpenAI organization ID if applicable
- model_name (str, optional): The name of the model to use for text
- embeddings. Defaults to "text-embedding-ada-002".
- api_base (str, optional): The base path for the API. If not provided,
- it will use the base path for the OpenAI API. This can be used to
- point to a different deployment, such as an Azure deployment.
- api_type (str, optional): The type of the API deployment. This can be
- used to specify a different deployment, such as 'azure'. If not
- provided, it will use the default OpenAI deployment.
-
- """
- try:
- import openai
- except ImportError:
- raise ValueError(
- "The openai python package is not installed. Please install it with `pip install openai`"
- )
-
- if api_key is not None:
- openai.api_key = api_key
- # If the api key is still not set, raise an error
- elif openai.api_key is None:
- raise ValueError(
- "Please provide an OpenAI API key. You can get one at https://platform.openai.com/account/api-keys"
- )
-
- if api_base is not None:
- openai.api_base = api_base
-
- if api_type is not None:
- openai.api_type = api_type
-
- if organization_id is not None:
- openai.organization = organization_id
-
- self._client = openai.Embedding
- self._model_name = model_name
-
- def __call__(self, texts: Documents) -> Embeddings:
- # replace newlines, which can negatively affect performance.
- texts = [t.replace("\n", " ") for t in texts]
-
- # Call the OpenAI Embedding API
- embeddings = self._client.create(input=texts, engine=self._model_name)["data"]
-
- # Sort resulting embeddings by index
- sorted_embeddings = sorted(embeddings, key=lambda e: e["index"]) # type: ignore
-
- # Return just the embeddings
- return [result["embedding"] for result in sorted_embeddings]
-
-
-class CohereEmbeddingFunction(EmbeddingFunction):
- def __init__(self, api_key: str, model_name: str = "large"):
- try:
- import cohere
- except ImportError:
- raise ValueError(
- "The cohere python package is not installed. Please install it with `pip install cohere`"
- )
-
- self._client = cohere.Client(api_key)
- self._model_name = model_name
-
- def __call__(self, texts: Documents) -> Embeddings:
- # Call Cohere Embedding API for each document.
- return [
- embeddings
- for embeddings in self._client.embed(texts=texts, model=self._model_name)
- ]
-
-
-class HuggingFaceEmbeddingFunction(EmbeddingFunction):
- def __init__(
- self, api_key: str, model_name: str = "sentence-transformers/all-MiniLM-L6-v2"
- ):
- try:
- import requests
- except ImportError:
- raise ValueError(
- "The requests python package is not installed. Please install it with `pip install requests`"
- )
- self._api_url = f"https://api-inference.huggingface.co/pipeline/feature-extraction/{model_name}"
- self._session = requests.Session()
- self._session.headers.update({"Authorization": f"Bearer {api_key}"})
-
- def __call__(self, texts: Documents) -> Embeddings:
- # Call HuggingFace Embedding API for each document
- return self._session.post( # type: ignore
- self._api_url, json={"inputs": texts, "options": {"wait_for_model": True}}
- ).json()
-
-
-class InstructorEmbeddingFunction(EmbeddingFunction):
- # If you have a GPU with at least 6GB try model_name = "hkunlp/instructor-xl" and device = "cuda"
- # for a full list of options: https://github.com/HKUNLP/instructor-embedding#model-list
- def __init__(
- self,
- model_name: str = "hkunlp/instructor-base",
- device: str = "cpu",
- instruction: Optional[str] = None,
- ):
- try:
- from InstructorEmbedding import INSTRUCTOR
- except ImportError:
- raise ValueError(
- "The InstructorEmbedding python package is not installed. Please install it with `pip install InstructorEmbedding`"
- )
- self._model = INSTRUCTOR(model_name, device=device)
- self._instruction = instruction
-
- def __call__(self, texts: Documents) -> Embeddings:
- if self._instruction is None:
- return self._model.encode(texts).tolist()
-
- texts_with_instructions = [[self._instruction, text] for text in texts]
- return self._model.encode(texts_with_instructions).tolist()
-
-
-# In order to remove dependencies on sentence-transformers, which in turn depends on
-# pytorch and sentence-piece we have created a default ONNX embedding function that
-# implements the same functionality as "all-MiniLM-L6-v2" from sentence-transformers.
-# visit https://github.com/chroma-core/onnx-embedding for the source code to generate
-# and verify the ONNX model.
-class ONNXMiniLM_L6_V2(EmbeddingFunction):
- MODEL_NAME = "all-MiniLM-L6-v2"
- DOWNLOAD_PATH = Path.home() / ".cache" / "chroma" / "onnx_models" / MODEL_NAME
- EXTRACTED_FOLDER_NAME = "onnx"
- ARCHIVE_FILENAME = "onnx.tar.gz"
- MODEL_DOWNLOAD_URL = (
- "https://chroma-onnx-models.s3.amazonaws.com/all-MiniLM-L6-v2/onnx.tar.gz"
- )
- tokenizer = None
- model = None
-
- # https://github.com/python/mypy/issues/7291 mypy makes you type the constructor if
- # no args
- def __init__(self) -> None:
- # Import dependencies on demand to mirror other embedding functions. This
- # breaks typechecking, thus the ignores.
- try:
- # Equivalent to import onnxruntime
- self.ort = importlib.import_module("onnxruntime")
- except ImportError:
- raise ValueError(
- "The onnxruntime python package is not installed. Please install it with `pip install onnxruntime`"
- )
- try:
- # Equivalent to from tokenizers import Tokenizer
- self.Tokenizer = importlib.import_module("tokenizers").Tokenizer
- except ImportError:
- raise ValueError(
- "The tokenizers python package is not installed. Please install it with `pip install tokenizers`"
- )
- try:
- # Equivalent to from tqdm import tqdm
- self.tqdm = importlib.import_module("tqdm").tqdm
- except ImportError:
- raise ValueError(
- "The tqdm python package is not installed. Please install it with `pip install tqdm`"
- )
-
- # Borrowed from https://gist.github.com/yanqd0/c13ed29e29432e3cf3e7c38467f42f51
- # Download with tqdm to preserve the sentence-transformers experience
- def _download(self, url: str, fname: Path, chunk_size: int = 1024) -> None:
- resp = requests.get(url, stream=True)
- total = int(resp.headers.get("content-length", 0))
- with open(fname, "wb") as file, self.tqdm(
- desc=str(fname),
- total=total,
- unit="iB",
- unit_scale=True,
- unit_divisor=1024,
- ) as bar:
- for data in resp.iter_content(chunk_size=chunk_size):
- size = file.write(data)
- bar.update(size)
-
-    # Use PyTorch's default epsilon for division by zero
- # https://pytorch.org/docs/stable/generated/torch.nn.functional.normalize.html
- def _normalize(self, v: npt.NDArray) -> npt.NDArray:
- norm = np.linalg.norm(v, axis=1)
- norm[norm == 0] = 1e-12
- return v / norm[:, np.newaxis]
-
- def _forward(self, documents: List[str], batch_size: int = 32) -> npt.NDArray:
- # We need to cast to the correct type because the type checker doesn't know that init_model_and_tokenizer will set the values
- self.tokenizer = cast(self.Tokenizer, self.tokenizer) # type: ignore
- self.model = cast(self.ort.InferenceSession, self.model) # type: ignore
- all_embeddings = []
- for i in range(0, len(documents), batch_size):
- batch = documents[i : i + batch_size]
- encoded = [self.tokenizer.encode(d) for d in batch]
- input_ids = np.array([e.ids for e in encoded])
- attention_mask = np.array([e.attention_mask for e in encoded])
- onnx_input = {
- "input_ids": np.array(input_ids, dtype=np.int64),
- "attention_mask": np.array(attention_mask, dtype=np.int64),
- "token_type_ids": np.array(
- [np.zeros(len(e), dtype=np.int64) for e in input_ids],
- dtype=np.int64,
- ),
- }
- model_output = self.model.run(None, onnx_input)
- last_hidden_state = model_output[0]
- # Perform mean pooling with attention weighting
- input_mask_expanded = np.broadcast_to(
- np.expand_dims(attention_mask, -1), last_hidden_state.shape
- )
- embeddings = np.sum(last_hidden_state * input_mask_expanded, 1) / np.clip(
- input_mask_expanded.sum(1), a_min=1e-9, a_max=None
- )
- embeddings = self._normalize(embeddings).astype(np.float32)
- all_embeddings.append(embeddings)
- return np.concatenate(all_embeddings)
-
- def _init_model_and_tokenizer(self) -> None:
- if self.model is None and self.tokenizer is None:
- self.tokenizer = self.Tokenizer.from_file(
- str(self.DOWNLOAD_PATH / self.EXTRACTED_FOLDER_NAME / "tokenizer.json")
- )
- # max_seq_length = 256, for some reason sentence-transformers uses 256 even though the HF config has a max length of 128
- # https://github.com/UKPLab/sentence-transformers/blob/3e1929fddef16df94f8bc6e3b10598a98f46e62d/docs/_static/html/models_en_sentence_embeddings.html#LL480
- self.tokenizer.enable_truncation(max_length=256)
- self.tokenizer.enable_padding(pad_id=0, pad_token="[PAD]", length=256)
- self.model = self.ort.InferenceSession(
- str(self.DOWNLOAD_PATH / self.EXTRACTED_FOLDER_NAME / "model.onnx")
- )
-
- def __call__(self, texts: Documents) -> Embeddings:
- # Only download the model when it is actually used
- self._download_model_if_not_exists()
- self._init_model_and_tokenizer()
- res = cast(Embeddings, self._forward(texts).tolist())
- return res
-
- def _download_model_if_not_exists(self) -> None:
- # Model is not downloaded yet
- if not os.path.exists(self.DOWNLOAD_PATH / self.ARCHIVE_FILENAME):
- os.makedirs(self.DOWNLOAD_PATH, exist_ok=True)
- self._download(
- self.MODEL_DOWNLOAD_URL, self.DOWNLOAD_PATH / self.ARCHIVE_FILENAME
- )
- with tarfile.open(
- self.DOWNLOAD_PATH / self.ARCHIVE_FILENAME, "r:gz"
- ) as tar:
- tar.extractall(self.DOWNLOAD_PATH)
-
-
-def DefaultEmbeddingFunction() -> Optional[EmbeddingFunction]:
- if is_thin_client:
- return None
- else:
- return ONNXMiniLM_L6_V2()
-
-
-class GooglePalmEmbeddingFunction(EmbeddingFunction):
- """To use this EmbeddingFunction, you must have the google.generativeai Python package installed and have a PaLM API key."""
-
- def __init__(self, api_key: str, model_name: str = "models/embedding-gecko-001"):
- if not api_key:
- raise ValueError("Please provide a PaLM API key.")
-
- if not model_name:
- raise ValueError("Please provide the model name.")
-
- try:
- import google.generativeai as palm
- except ImportError:
- raise ValueError(
- "The Google Generative AI python package is not installed. Please install it with `pip install google-generativeai`"
- )
-
- palm.configure(api_key=api_key)
- self._palm = palm
- self._model_name = model_name
-
- def __call__(self, texts: Documents) -> Embeddings:
- return [
- self._palm.generate_embeddings(model=self._model_name, text=text)[
- "embedding"
- ]
- for text in texts
- ]
-
-
-class GoogleVertexEmbeddingFunction(EmbeddingFunction):
- # Follow API Quickstart for Google Vertex AI
- # https://cloud.google.com/vertex-ai/docs/generative-ai/start/quickstarts/api-quickstart
- # Information about the text embedding modules in Google Vertex AI
- # https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings
- def __init__(
- self,
- api_key: str,
- model_name: str = "textembedding-gecko-001",
- project_id: str = "cloud-large-language-models",
- region: str = "us-central1",
- ):
- self._api_url = f"https://{region}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{region}/endpoints/{model_name}:predict"
- self._session = requests.Session()
- self._session.headers.update({"Authorization": f"Bearer {api_key}"})
-
- def __call__(self, texts: Documents) -> Embeddings:
- response = self._session.post(
- self._api_url, json={"instances": [{"content": texts}]}
- ).json()
-
- if "predictions" in response:
- return response["predictions"]
- return []
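
A small usage sketch for the default local function above; the only contract relied on is the __call__(texts) -> embeddings signature defined in this file:

    # Sketch: embed two documents with the bundled ONNX MiniLM function.
    ef = DefaultEmbeddingFunction()   # ONNXMiniLM_L6_V2 unless running the thin client
    if ef is not None:
        vectors = ef(["chroma stores embeddings", "and queries them by similarity"])
        print(len(vectors), len(vectors[0]))  # 2 documents, 384-dim all-MiniLM-L6-v2 vectors
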
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/subset/cff.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/subset/cff.py
deleted file mode 100644
index dd79f6db37a482891b6f151159ef4c9b89475b8e..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/subset/cff.py
+++ /dev/null
@@ -1,536 +0,0 @@
-from fontTools.misc import psCharStrings
-from fontTools import ttLib
-from fontTools.pens.basePen import NullPen
-from fontTools.misc.roundTools import otRound
-from fontTools.misc.loggingTools import deprecateFunction
-from fontTools.subset.util import _add_method, _uniq_sort
-
-
-class _ClosureGlyphsT2Decompiler(psCharStrings.SimpleT2Decompiler):
- def __init__(self, components, localSubrs, globalSubrs):
- psCharStrings.SimpleT2Decompiler.__init__(self, localSubrs, globalSubrs)
- self.components = components
-
- def op_endchar(self, index):
- args = self.popall()
- if len(args) >= 4:
- from fontTools.encodings.StandardEncoding import StandardEncoding
-
-            # endchar can do seac accent building; the T2 spec says it's deprecated,
- # but recent software that shall remain nameless does output it.
- adx, ady, bchar, achar = args[-4:]
- baseGlyph = StandardEncoding[bchar]
- accentGlyph = StandardEncoding[achar]
- self.components.add(baseGlyph)
- self.components.add(accentGlyph)
-
-
-@_add_method(ttLib.getTableClass("CFF "))
-def closure_glyphs(self, s):
- cff = self.cff
- assert len(cff) == 1
- font = cff[cff.keys()[0]]
- glyphSet = font.CharStrings
-
- decompose = s.glyphs
- while decompose:
- components = set()
- for g in decompose:
- if g not in glyphSet:
- continue
- gl = glyphSet[g]
-
- subrs = getattr(gl.private, "Subrs", [])
- decompiler = _ClosureGlyphsT2Decompiler(components, subrs, gl.globalSubrs)
- decompiler.execute(gl)
- components -= s.glyphs
- s.glyphs.update(components)
- decompose = components
-
-
-def _empty_charstring(font, glyphName, isCFF2, ignoreWidth=False):
- c, fdSelectIndex = font.CharStrings.getItemAndSelector(glyphName)
- if isCFF2 or ignoreWidth:
- # CFF2 charstrings have no widths nor 'endchar' operators
- c.setProgram([] if isCFF2 else ["endchar"])
- else:
- if hasattr(font, "FDArray") and font.FDArray is not None:
- private = font.FDArray[fdSelectIndex].Private
- else:
- private = font.Private
- dfltWdX = private.defaultWidthX
- nmnlWdX = private.nominalWidthX
- pen = NullPen()
- c.draw(pen) # this will set the charstring's width
- if c.width != dfltWdX:
- c.program = [c.width - nmnlWdX, "endchar"]
- else:
- c.program = ["endchar"]
-
-
-@_add_method(ttLib.getTableClass("CFF "))
-def prune_pre_subset(self, font, options):
- cff = self.cff
- # CFF table must have one font only
- cff.fontNames = cff.fontNames[:1]
-
- if options.notdef_glyph and not options.notdef_outline:
- isCFF2 = cff.major > 1
- for fontname in cff.keys():
- font = cff[fontname]
- _empty_charstring(font, ".notdef", isCFF2=isCFF2)
-
- # Clear useless Encoding
- for fontname in cff.keys():
- font = cff[fontname]
- # https://github.com/fonttools/fonttools/issues/620
- font.Encoding = "StandardEncoding"
-
- return True # bool(cff.fontNames)
-
-
-@_add_method(ttLib.getTableClass("CFF "))
-def subset_glyphs(self, s):
- cff = self.cff
- for fontname in cff.keys():
- font = cff[fontname]
- cs = font.CharStrings
-
- glyphs = s.glyphs.union(s.glyphs_emptied)
-
- # Load all glyphs
- for g in font.charset:
- if g not in glyphs:
- continue
- c, _ = cs.getItemAndSelector(g)
-
- if cs.charStringsAreIndexed:
- indices = [i for i, g in enumerate(font.charset) if g in glyphs]
- csi = cs.charStringsIndex
- csi.items = [csi.items[i] for i in indices]
- del csi.file, csi.offsets
- if hasattr(font, "FDSelect"):
- sel = font.FDSelect
- sel.format = None
- sel.gidArray = [sel.gidArray[i] for i in indices]
- newCharStrings = {}
- for indicesIdx, charsetIdx in enumerate(indices):
- g = font.charset[charsetIdx]
- if g in cs.charStrings:
- newCharStrings[g] = indicesIdx
- cs.charStrings = newCharStrings
- else:
- cs.charStrings = {g: v for g, v in cs.charStrings.items() if g in glyphs}
- font.charset = [g for g in font.charset if g in glyphs]
- font.numGlyphs = len(font.charset)
-
- if s.options.retain_gids:
- isCFF2 = cff.major > 1
- for g in s.glyphs_emptied:
- _empty_charstring(font, g, isCFF2=isCFF2, ignoreWidth=True)
-
- return True # any(cff[fontname].numGlyphs for fontname in cff.keys())
-
-
-@_add_method(psCharStrings.T2CharString)
-def subset_subroutines(self, subrs, gsubrs):
- p = self.program
- for i in range(1, len(p)):
- if p[i] == "callsubr":
- assert isinstance(p[i - 1], int)
- p[i - 1] = subrs._used.index(p[i - 1] + subrs._old_bias) - subrs._new_bias
- elif p[i] == "callgsubr":
- assert isinstance(p[i - 1], int)
- p[i - 1] = (
- gsubrs._used.index(p[i - 1] + gsubrs._old_bias) - gsubrs._new_bias
- )
-
-
-@_add_method(psCharStrings.T2CharString)
-def drop_hints(self):
- hints = self._hints
-
- if hints.deletions:
- p = self.program
- for idx in reversed(hints.deletions):
- del p[idx - 2 : idx]
-
- if hints.has_hint:
- assert not hints.deletions or hints.last_hint <= hints.deletions[0]
- self.program = self.program[hints.last_hint :]
- if not self.program:
- # TODO CFF2 no need for endchar.
- self.program.append("endchar")
- if hasattr(self, "width"):
- # Insert width back if needed
- if self.width != self.private.defaultWidthX:
- # For CFF2 charstrings, this should never happen
- assert (
- self.private.defaultWidthX is not None
- ), "CFF2 CharStrings must not have an initial width value"
- self.program.insert(0, self.width - self.private.nominalWidthX)
-
- if hints.has_hintmask:
- i = 0
- p = self.program
- while i < len(p):
- if p[i] in ["hintmask", "cntrmask"]:
- assert i + 1 <= len(p)
- del p[i : i + 2]
- continue
- i += 1
-
- assert len(self.program)
-
- del self._hints
-
-
-class _MarkingT2Decompiler(psCharStrings.SimpleT2Decompiler):
- def __init__(self, localSubrs, globalSubrs, private):
- psCharStrings.SimpleT2Decompiler.__init__(
- self, localSubrs, globalSubrs, private
- )
- for subrs in [localSubrs, globalSubrs]:
- if subrs and not hasattr(subrs, "_used"):
- subrs._used = set()
-
- def op_callsubr(self, index):
- self.localSubrs._used.add(self.operandStack[-1] + self.localBias)
- psCharStrings.SimpleT2Decompiler.op_callsubr(self, index)
-
- def op_callgsubr(self, index):
- self.globalSubrs._used.add(self.operandStack[-1] + self.globalBias)
- psCharStrings.SimpleT2Decompiler.op_callgsubr(self, index)
-
-
-class _DehintingT2Decompiler(psCharStrings.T2WidthExtractor):
- class Hints(object):
- def __init__(self):
- # Whether calling this charstring produces any hint stems
- # Note that if a charstring starts with hintmask, it will
- # have has_hint set to True, because it *might* produce an
- # implicit vstem if called under certain conditions.
- self.has_hint = False
- # Index to start at to drop all hints
- self.last_hint = 0
- # Index up to which we know more hints are possible.
- # Only relevant if status is 0 or 1.
- self.last_checked = 0
- # The status means:
- # 0: after dropping hints, this charstring is empty
- # 1: after dropping hints, there may be more hints
- # continuing after this, or there might be
- # other things. Not clear yet.
- # 2: no more hints possible after this charstring
- self.status = 0
- # Has hintmask instructions; not recursive
- self.has_hintmask = False
- # List of indices of calls to empty subroutines to remove.
- self.deletions = []
-
- pass
-
- def __init__(
- self, css, localSubrs, globalSubrs, nominalWidthX, defaultWidthX, private=None
- ):
- self._css = css
- psCharStrings.T2WidthExtractor.__init__(
- self, localSubrs, globalSubrs, nominalWidthX, defaultWidthX
- )
- self.private = private
-
- def execute(self, charString):
- old_hints = charString._hints if hasattr(charString, "_hints") else None
- charString._hints = self.Hints()
-
- psCharStrings.T2WidthExtractor.execute(self, charString)
-
- hints = charString._hints
-
- if hints.has_hint or hints.has_hintmask:
- self._css.add(charString)
-
- if hints.status != 2:
- # Check from last_check, make sure we didn't have any operators.
- for i in range(hints.last_checked, len(charString.program) - 1):
- if isinstance(charString.program[i], str):
- hints.status = 2
- break
- else:
- hints.status = 1 # There's *something* here
- hints.last_checked = len(charString.program)
-
- if old_hints:
- assert hints.__dict__ == old_hints.__dict__
-
- def op_callsubr(self, index):
- subr = self.localSubrs[self.operandStack[-1] + self.localBias]
- psCharStrings.T2WidthExtractor.op_callsubr(self, index)
- self.processSubr(index, subr)
-
- def op_callgsubr(self, index):
- subr = self.globalSubrs[self.operandStack[-1] + self.globalBias]
- psCharStrings.T2WidthExtractor.op_callgsubr(self, index)
- self.processSubr(index, subr)
-
- def op_hstem(self, index):
- psCharStrings.T2WidthExtractor.op_hstem(self, index)
- self.processHint(index)
-
- def op_vstem(self, index):
- psCharStrings.T2WidthExtractor.op_vstem(self, index)
- self.processHint(index)
-
- def op_hstemhm(self, index):
- psCharStrings.T2WidthExtractor.op_hstemhm(self, index)
- self.processHint(index)
-
- def op_vstemhm(self, index):
- psCharStrings.T2WidthExtractor.op_vstemhm(self, index)
- self.processHint(index)
-
- def op_hintmask(self, index):
- rv = psCharStrings.T2WidthExtractor.op_hintmask(self, index)
- self.processHintmask(index)
- return rv
-
- def op_cntrmask(self, index):
- rv = psCharStrings.T2WidthExtractor.op_cntrmask(self, index)
- self.processHintmask(index)
- return rv
-
- def processHintmask(self, index):
- cs = self.callingStack[-1]
- hints = cs._hints
- hints.has_hintmask = True
- if hints.status != 2:
- # Check from last_check, see if we may be an implicit vstem
- for i in range(hints.last_checked, index - 1):
- if isinstance(cs.program[i], str):
- hints.status = 2
- break
- else:
- # We are an implicit vstem
- hints.has_hint = True
- hints.last_hint = index + 1
- hints.status = 0
- hints.last_checked = index + 1
-
- def processHint(self, index):
- cs = self.callingStack[-1]
- hints = cs._hints
- hints.has_hint = True
- hints.last_hint = index
- hints.last_checked = index
-
- def processSubr(self, index, subr):
- cs = self.callingStack[-1]
- hints = cs._hints
- subr_hints = subr._hints
-
- # Check from last_check, make sure we didn't have
- # any operators.
- if hints.status != 2:
- for i in range(hints.last_checked, index - 1):
- if isinstance(cs.program[i], str):
- hints.status = 2
- break
- hints.last_checked = index
-
- if hints.status != 2:
- if subr_hints.has_hint:
- hints.has_hint = True
-
- # Decide where to chop off from
- if subr_hints.status == 0:
- hints.last_hint = index
- else:
- hints.last_hint = index - 2 # Leave the subr call in
-
- elif subr_hints.status == 0:
- hints.deletions.append(index)
-
- hints.status = max(hints.status, subr_hints.status)
-
-
-@_add_method(ttLib.getTableClass("CFF "))
-def prune_post_subset(self, ttfFont, options):
- cff = self.cff
- for fontname in cff.keys():
- font = cff[fontname]
- cs = font.CharStrings
-
- # Drop unused FontDictionaries
- if hasattr(font, "FDSelect"):
- sel = font.FDSelect
- indices = _uniq_sort(sel.gidArray)
- sel.gidArray = [indices.index(ss) for ss in sel.gidArray]
- arr = font.FDArray
- arr.items = [arr[i] for i in indices]
- del arr.file, arr.offsets
-
- # Desubroutinize if asked for
- if options.desubroutinize:
- cff.desubroutinize()
-
- # Drop hints if not needed
- if not options.hinting:
- self.remove_hints()
- elif not options.desubroutinize:
- self.remove_unused_subroutines()
- return True
-
-
-def _delete_empty_subrs(private_dict):
- if hasattr(private_dict, "Subrs") and not private_dict.Subrs:
- if "Subrs" in private_dict.rawDict:
- del private_dict.rawDict["Subrs"]
- del private_dict.Subrs
-
-
-@deprecateFunction(
- "use 'CFFFontSet.desubroutinize()' instead", category=DeprecationWarning
-)
-@_add_method(ttLib.getTableClass("CFF "))
-def desubroutinize(self):
- self.cff.desubroutinize()
-
-
-@_add_method(ttLib.getTableClass("CFF "))
-def remove_hints(self):
- cff = self.cff
- for fontname in cff.keys():
- font = cff[fontname]
- cs = font.CharStrings
-        # This can be tricky, but doesn't have to be. What we do is:
- #
- # - Run all used glyph charstrings and recurse into subroutines,
- # - For each charstring (including subroutines), if it has any
- # of the hint stem operators, we mark it as such.
- # Upon returning, for each charstring we note all the
- # subroutine calls it makes that (recursively) contain a stem,
- # - Dropping hinting then consists of the following two ops:
- # * Drop the piece of the program in each charstring before the
- # last call to a stem op or a stem-calling subroutine,
- # * Drop all hintmask operations.
- # - It's trickier... A hintmask right after hints and a few numbers
- # will act as an implicit vstemhm. As such, we track whether
- # we have seen any non-hint operators so far and do the right
- # thing, recursively... Good luck understanding that :(
- css = set()
- for g in font.charset:
- c, _ = cs.getItemAndSelector(g)
- c.decompile()
- subrs = getattr(c.private, "Subrs", [])
- decompiler = _DehintingT2Decompiler(
- css,
- subrs,
- c.globalSubrs,
- c.private.nominalWidthX,
- c.private.defaultWidthX,
- c.private,
- )
- decompiler.execute(c)
- c.width = decompiler.width
- for charstring in css:
- charstring.drop_hints()
- del css
-
- # Drop font-wide hinting values
- all_privs = []
- if hasattr(font, "FDArray"):
- all_privs.extend(fd.Private for fd in font.FDArray)
- else:
- all_privs.append(font.Private)
- for priv in all_privs:
- for k in [
- "BlueValues",
- "OtherBlues",
- "FamilyBlues",
- "FamilyOtherBlues",
- "BlueScale",
- "BlueShift",
- "BlueFuzz",
- "StemSnapH",
- "StemSnapV",
- "StdHW",
- "StdVW",
- "ForceBold",
- "LanguageGroup",
- "ExpansionFactor",
- ]:
- if hasattr(priv, k):
- setattr(priv, k, None)
- self.remove_unused_subroutines()
-
-
-@_add_method(ttLib.getTableClass("CFF "))
-def remove_unused_subroutines(self):
- cff = self.cff
- for fontname in cff.keys():
- font = cff[fontname]
- cs = font.CharStrings
- # Renumber subroutines to remove unused ones
-
- # Mark all used subroutines
- for g in font.charset:
- c, _ = cs.getItemAndSelector(g)
- subrs = getattr(c.private, "Subrs", [])
- decompiler = _MarkingT2Decompiler(subrs, c.globalSubrs, c.private)
- decompiler.execute(c)
-
- all_subrs = [font.GlobalSubrs]
- if hasattr(font, "FDArray"):
- all_subrs.extend(
- fd.Private.Subrs
- for fd in font.FDArray
- if hasattr(fd.Private, "Subrs") and fd.Private.Subrs
- )
- elif hasattr(font.Private, "Subrs") and font.Private.Subrs:
- all_subrs.append(font.Private.Subrs)
-
- subrs = set(subrs) # Remove duplicates
-
- # Prepare
- for subrs in all_subrs:
- if not hasattr(subrs, "_used"):
- subrs._used = set()
- subrs._used = _uniq_sort(subrs._used)
- subrs._old_bias = psCharStrings.calcSubrBias(subrs)
- subrs._new_bias = psCharStrings.calcSubrBias(subrs._used)
-
- # Renumber glyph charstrings
- for g in font.charset:
- c, _ = cs.getItemAndSelector(g)
- subrs = getattr(c.private, "Subrs", [])
- c.subset_subroutines(subrs, font.GlobalSubrs)
-
- # Renumber subroutines themselves
- for subrs in all_subrs:
- if subrs == font.GlobalSubrs:
- if not hasattr(font, "FDArray") and hasattr(font.Private, "Subrs"):
- local_subrs = font.Private.Subrs
- else:
- local_subrs = []
- else:
- local_subrs = subrs
-
- subrs.items = [subrs.items[i] for i in subrs._used]
- if hasattr(subrs, "file"):
- del subrs.file
- if hasattr(subrs, "offsets"):
- del subrs.offsets
-
- for subr in subrs.items:
- subr.subset_subroutines(local_subrs, font.GlobalSubrs)
-
- # Delete local SubrsIndex if empty
- if hasattr(font, "FDArray"):
- for fd in font.FDArray:
- _delete_empty_subrs(fd.Private)
- else:
- _delete_empty_subrs(font.Private)
-
- # Cleanup
- for subrs in all_subrs:
- del subrs._used, subrs._old_bias, subrs._new_bias
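For readers unfamiliar with the bias arithmetic used by subset_subroutines and remove_unused_subroutines above: Type 2 charstrings store callsubr/callgsubr operands with a count-dependent bias, which psCharStrings.calcSubrBias computes for the old and new subroutine indexes. A minimal sketch of that rule as given in the Type 2 charstring spec (not the fontTools implementation itself):

def calc_subr_bias(subrs) -> int:
    # Type 2 subroutine bias: the operand stored in the charstring is the
    # subroutine index minus this bias, so short one-byte operands cover
    # the most frequently used subroutines.
    count = len(subrs)
    if count < 1240:
        return 107
    elif count < 33900:
        return 1131
    return 32768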
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttx.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttx.py
deleted file mode 100644
index 65a3c7a808b41fc571d59bac80f7b1085abc6b9b..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttx.py
+++ /dev/null
@@ -1,469 +0,0 @@
-"""\
-usage: ttx [options] inputfile1 [... inputfileN]
-
-TTX -- From OpenType To XML And Back
-
-If an input file is a TrueType or OpenType font file, it will be
-decompiled to a TTX file (an XML-based text format).
-If an input file is a TTX file, it will be compiled to whatever
-format the data is in, a TrueType or OpenType/CFF font file.
-A special input value of - means read from the standard input.
-
-Output files are created so they are unique: an existing file is
-never overwritten.
-
-General options
-===============
-
--h Help print this message.
---version show version and exit.
--d Specify a directory where the output files are
- to be created.
--o Specify a file to write the output to. A special
- value of - would use the standard output.
--f Overwrite existing output file(s), ie. don't append
- numbers.
--v Verbose: more messages will be written to stdout
- about what is being done.
--q Quiet: No messages will be written to stdout about
- what is being done.
--a allow virtual glyphs ID's on compile or decompile.
-
-Dump options
-============
-
--l List table info: instead of dumping to a TTX file, list
- some minimal info about each table.
--t <table> Specify a table to dump. Multiple -t options
- are allowed. When no -t option is specified, all tables
- will be dumped.
--x <table> Specify a table to exclude from the dump. Multiple
- -x options are allowed. -t and -x are mutually exclusive.
--s Split tables: save the TTX data into separate TTX files per
- table and write one small TTX file that contains references
- to the individual table dumps. This file can be used as
- input to ttx, as long as the table files are in the
- same directory.
--g Split glyf table: Save the glyf data into separate TTX files
- per glyph and write a small TTX for the glyf table which
- contains references to the individual TTGlyph elements.
- NOTE: specifying -g implies -s (no need for -s together
- with -g)
--i Do NOT disassemble TT instructions: when this option is
- given, all TrueType programs (glyph programs, the font
- program and the pre-program) will be written to the TTX
- file as hex data instead of assembly. This saves some time
- and makes the TTX file smaller.
--z Specify a bitmap data export option for EBDT:
- {'raw', 'row', 'bitwise', 'extfile'} or for the CBDT:
- {'raw', 'extfile'} Each option does one of the following:
-
- -z raw
- export the bitmap data as a hex dump
- -z row
- export each row as hex data
- -z bitwise
- export each row as binary in an ASCII art style
- -z extfile
- export the data as external files with XML references
-
- If no export format is specified 'raw' format is used.
--e Don't ignore decompilation errors, but show a full traceback
- and abort.
--y Select font number for TrueType Collection (.ttc/.otc),
- starting from 0.
---unicodedata
- Use custom database file to write character names in the
- comments of the cmap TTX output.
---newline
- Control how line endings are written in the XML file. It
- can be 'LF', 'CR', or 'CRLF'. If not specified, the
- default platform-specific line endings are used.
-
-Compile options
-===============
-
--m Merge with TrueType-input-file: specify a TrueType or
- OpenType font file to be merged with the TTX file. This
- option is only valid when at most one TTX file is specified.
--b Don't recalc glyph bounding boxes: use the values in the
- TTX file as-is.
---recalc-timestamp
- Set font 'modified' timestamp to current time.
- By default, the modification time of the TTX file will be
- used.
---no-recalc-timestamp
- Keep the original font 'modified' timestamp.
---flavor
- Specify flavor of output font file. May be 'woff' or 'woff2'.
- Note that WOFF2 requires the Brotli Python extension,
- available at https://github.com/google/brotli
---with-zopfli
- Use Zopfli instead of Zlib to compress WOFF. The Python
- extension is available at https://pypi.python.org/pypi/zopfli
-"""
-
-
-from fontTools.ttLib import TTFont, TTLibError
-from fontTools.misc.macCreatorType import getMacCreatorAndType
-from fontTools.unicode import setUnicodeData
-from fontTools.misc.textTools import Tag, tostr
-from fontTools.misc.timeTools import timestampSinceEpoch
-from fontTools.misc.loggingTools import Timer
-from fontTools.misc.cliTools import makeOutputFileName
-import os
-import sys
-import getopt
-import re
-import logging
-
-
-log = logging.getLogger("fontTools.ttx")
-
-opentypeheaderRE = re.compile("""sfntVersion=['"]OTTO["']""")
-
-
-class Options(object):
-
- listTables = False
- outputDir = None
- outputFile = None
- overWrite = False
- verbose = False
- quiet = False
- splitTables = False
- splitGlyphs = False
- disassembleInstructions = True
- mergeFile = None
- recalcBBoxes = True
- ignoreDecompileErrors = True
- bitmapGlyphDataFormat = "raw"
- unicodedata = None
- newlinestr = "\n"
- recalcTimestamp = None
- flavor = None
- useZopfli = False
-
- def __init__(self, rawOptions, numFiles):
- self.onlyTables = []
- self.skipTables = []
- self.fontNumber = -1
- for option, value in rawOptions:
- # general options
- if option == "-h":
- print(__doc__)
- sys.exit(0)
- elif option == "--version":
- from fontTools import version
-
- print(version)
- sys.exit(0)
- elif option == "-d":
- if not os.path.isdir(value):
- raise getopt.GetoptError(
- "The -d option value must be an existing directory"
- )
- self.outputDir = value
- elif option == "-o":
- self.outputFile = value
- elif option == "-f":
- self.overWrite = True
- elif option == "-v":
- self.verbose = True
- elif option == "-q":
- self.quiet = True
- # dump options
- elif option == "-l":
- self.listTables = True
- elif option == "-t":
- # pad with space if table tag length is less than 4
- value = value.ljust(4)
- self.onlyTables.append(value)
- elif option == "-x":
- # pad with space if table tag length is less than 4
- value = value.ljust(4)
- self.skipTables.append(value)
- elif option == "-s":
- self.splitTables = True
- elif option == "-g":
- # -g implies (and forces) splitTables
- self.splitGlyphs = True
- self.splitTables = True
- elif option == "-i":
- self.disassembleInstructions = False
- elif option == "-z":
- validOptions = ("raw", "row", "bitwise", "extfile")
- if value not in validOptions:
- raise getopt.GetoptError(
- "-z does not allow %s as a format. Use %s"
- % (option, validOptions)
- )
- self.bitmapGlyphDataFormat = value
- elif option == "-y":
- self.fontNumber = int(value)
- # compile options
- elif option == "-m":
- self.mergeFile = value
- elif option == "-b":
- self.recalcBBoxes = False
- elif option == "-e":
- self.ignoreDecompileErrors = False
- elif option == "--unicodedata":
- self.unicodedata = value
- elif option == "--newline":
- validOptions = ("LF", "CR", "CRLF")
- if value == "LF":
- self.newlinestr = "\n"
- elif value == "CR":
- self.newlinestr = "\r"
- elif value == "CRLF":
- self.newlinestr = "\r\n"
- else:
- raise getopt.GetoptError(
- "Invalid choice for --newline: %r (choose from %s)"
- % (value, ", ".join(map(repr, validOptions)))
- )
- elif option == "--recalc-timestamp":
- self.recalcTimestamp = True
- elif option == "--no-recalc-timestamp":
- self.recalcTimestamp = False
- elif option == "--flavor":
- self.flavor = value
- elif option == "--with-zopfli":
- self.useZopfli = True
- if self.verbose and self.quiet:
- raise getopt.GetoptError("-q and -v options are mutually exclusive")
- if self.verbose:
- self.logLevel = logging.DEBUG
- elif self.quiet:
- self.logLevel = logging.WARNING
- else:
- self.logLevel = logging.INFO
- if self.mergeFile and self.flavor:
- raise getopt.GetoptError("-m and --flavor options are mutually exclusive")
- if self.onlyTables and self.skipTables:
- raise getopt.GetoptError("-t and -x options are mutually exclusive")
- if self.mergeFile and numFiles > 1:
- raise getopt.GetoptError(
- "Must specify exactly one TTX source file when using -m"
- )
- if self.flavor != "woff" and self.useZopfli:
- raise getopt.GetoptError("--with-zopfli option requires --flavor 'woff'")
-
-
-def ttList(input, output, options):
- ttf = TTFont(input, fontNumber=options.fontNumber, lazy=True)
- reader = ttf.reader
- tags = sorted(reader.keys())
- print('Listing table info for "%s":' % input)
- format = " %4s %10s %8s %8s"
- print(format % ("tag ", " checksum", " length", " offset"))
- print(format % ("----", "----------", "--------", "--------"))
- for tag in tags:
- entry = reader.tables[tag]
- if ttf.flavor == "woff2":
- # WOFF2 doesn't store table checksums, so they must be calculated
- from fontTools.ttLib.sfnt import calcChecksum
-
- data = entry.loadData(reader.transformBuffer)
- checkSum = calcChecksum(data)
- else:
- checkSum = int(entry.checkSum)
- if checkSum < 0:
- checkSum = checkSum + 0x100000000
- checksum = "0x%08X" % checkSum
- print(format % (tag, checksum, entry.length, entry.offset))
- print()
- ttf.close()
-
-
-@Timer(log, "Done dumping TTX in %(time).3f seconds")
-def ttDump(input, output, options):
- input_name = input
- if input == "-":
- input, input_name = sys.stdin.buffer, sys.stdin.name
- output_name = output
- if output == "-":
- output, output_name = sys.stdout, sys.stdout.name
- log.info('Dumping "%s" to "%s"...', input_name, output_name)
- if options.unicodedata:
- setUnicodeData(options.unicodedata)
- ttf = TTFont(
- input,
- 0,
- ignoreDecompileErrors=options.ignoreDecompileErrors,
- fontNumber=options.fontNumber,
- )
- ttf.saveXML(
- output,
- tables=options.onlyTables,
- skipTables=options.skipTables,
- splitTables=options.splitTables,
- splitGlyphs=options.splitGlyphs,
- disassembleInstructions=options.disassembleInstructions,
- bitmapGlyphDataFormat=options.bitmapGlyphDataFormat,
- newlinestr=options.newlinestr,
- )
- ttf.close()
-
-
-@Timer(log, "Done compiling TTX in %(time).3f seconds")
-def ttCompile(input, output, options):
- input_name = input
- if input == "-":
- input, input_name = sys.stdin, sys.stdin.name
- output_name = output
- if output == "-":
- output, output_name = sys.stdout.buffer, sys.stdout.name
- log.info('Compiling "%s" to "%s"...' % (input_name, output))
- if options.useZopfli:
- from fontTools.ttLib import sfnt
-
- sfnt.USE_ZOPFLI = True
- ttf = TTFont(
- options.mergeFile,
- flavor=options.flavor,
- recalcBBoxes=options.recalcBBoxes,
- recalcTimestamp=options.recalcTimestamp,
- )
- ttf.importXML(input)
-
- if options.recalcTimestamp is None and "head" in ttf and input is not sys.stdin:
- # use TTX file modification time for head "modified" timestamp
- mtime = os.path.getmtime(input)
- ttf["head"].modified = timestampSinceEpoch(mtime)
-
- ttf.save(output)
-
-
-def guessFileType(fileName):
- if fileName == "-":
- header = sys.stdin.buffer.peek(256)
- ext = ""
- else:
- base, ext = os.path.splitext(fileName)
- try:
- with open(fileName, "rb") as f:
- header = f.read(256)
- except IOError:
- return None
-
-    if header.startswith(b"\xef\xbb\xbf<?xml"):
diff --git a/spaces/cihyFjudo/fairness-paper-search/Morgan Stanley Cmbs Primer Pdf 48 Les diffrentes options dinvestissement dans les CMBS.md b/spaces/cihyFjudo/fairness-paper-search/Morgan Stanley Cmbs Primer Pdf 48 Les diffrentes options dinvestissement dans les CMBS.md
deleted file mode 100644
index 7b130df6c330099597c06d4cd44d030434da10cd..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Morgan Stanley Cmbs Primer Pdf 48 Les diffrentes options dinvestissement dans les CMBS.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Teamviewer Trial Reset How to Fix the Error Your trial period has expired.md b/spaces/cihyFjudo/fairness-paper-search/Teamviewer Trial Reset How to Fix the Error Your trial period has expired.md
deleted file mode 100644
index f905196eac2accd55427590b5b4dcdb86524012b..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Teamviewer Trial Reset How to Fix the Error Your trial period has expired.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-
Once downloaded, run the TeamViewer setup and follow the on-screen instructions to install it on your PC. When asked whether you will run it for personal or commercial use, be sure to select personal use only. This should fix the TeamViewer trial version expired error on Windows 10.
Please log in with your TeamViewer credentials to start the reactivation process.
-
I am a long time TeamViewer user. I use it personally (for free) and have used the commercial version at each client.
All of the sudden, one of my personal machines could not be connected to and said "your trial expired" or similar. I thought I had to reload, but then a second personal machine did it. No help and I think they are changing their policy.
Anyway, I am testing AnyDesk and it is working great so far. AnyDesk is being developed by some ex-TeamViewer people. It says it works on all platforms, and I have been trying on Ubuntu, W10, and W7. I will test on Mac, Ios and Android tonight. Enjoy.
-
-
-
You might see this annoying warning when you try to connect with your friends through TeamViewer. The instructions below are intended for advanced users only; we are not responsible for any data loss that occurs when you follow these steps, and we always recommend taking a full registry backup before proceeding.
-
Step 1: Close and exit your TeamViewer application if it is running. Step 2: Click Windows Start > Run, search for the %appdata% variable, find the TeamViewer folder, and delete it.
-
The TeamViewer trial-expired error occurs because, when installing TeamViewer, you selected the Company / Commercial use option. Note that you should only select Personal / Non-commercial use.
-
After the trial period ends, the error remains even if you uninstall and reinstall TeamViewer, because your computer's MAC address has already been recorded on TeamViewer's servers. Your TeamViewer ID is generated from this MAC address, so to fix the trial-expired error you need to change the MAC address or reset the TeamViewer ID.
-
TVTools AlterID is a lightweight Windows program that allows you to reset the ID of the TeamViewer software. Since there is no installation involved, you can drop the executable file anywhere on the hard drive and click it to run. You can also save the utility to a USB stick or similar storage device and run it on any computer with minimal effort.
-
The utility enables you to remove various restrictions, and the overall procedure is very straightforward. All you need to do is install the app on your computer, specify the path to the TeamViewer directory, and select the desired settings. You can pick from three available options: a 7-day trial with full features, a limited mode with advertisements enabled, or a return to the original ID you received when you started the remote control software for the first time.
-
Can't store a directory of computers that I connect to. Cumbersome connection requirements for unattended access. Why not simply do a computer ID and password with high strength? No option to start up at computer start so that, if there is a reset on the computer (power outage), I don't lose access to the remote PC. Run it as a service.
-
You have to actually install the product; it is not just click-to-install, you have a full installation just to remote into a PC. I'm using TeamViewer and ScreenConnect, and I far prefer those products as they are easier for my customers to run. ScreenConnect doesn't need the user to read out 9 digits and a code. As an idea, you could adopt an administrator or team feature, so that there is a special client for the end user: they just click and run (NO INSTALL), it then pops up on the supporter's desktop that a person needs help, and with one click you are connected.
-
I've been looking for a free piece of software to offer support to my customers, and this fits the bill. It's as good as TeamViewer, LogMeIn, etc., and it is freeware; the others are overpriced for small businesses. It does everything I want it to do, and the security features give the supported customer confidence.
-
Comments: amazing product, I was looking for something like this... I mean, when you compare this to something like TeamViewer, which is so astronomically overpriced, this is simply a breath of fresh air.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/The Children Of War Hindi Movie Download The True Story Behind the Controversial Film.md b/spaces/cihyFjudo/fairness-paper-search/The Children Of War Hindi Movie Download The True Story Behind the Controversial Film.md
deleted file mode 100644
index d14b38a0e07607bf3f04ad0004a743cf8a30277d..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/The Children Of War Hindi Movie Download The True Story Behind the Controversial Film.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
Watch the movie Children of War on the free film streaming website www.onlinemovieshindi.com (new web URL: ). You can stream it online or download the video file easily. Watch or download the Hindi-dubbed Children of War movie online here.
-
Dear visitor, you can download the movie Children of War on this onlinemovieshindi website. It will download the HD video file when you just click on the button below. The video file is the same file as the online stream above when you click play directly. The decision to download is entirely your choice, and the legality of file ownership is your personal responsibility.
Go Into the Story is the official blog for The Blacklist, the screenwriting community famous for its annual top ten list of unproduced scripts. One useful feature of Go Into the Story is its bank of downloadable movie scripts.
-
BONUS SCREENPLAYS TO READ: You can download five more of the best screenplays to read in each genre in this post. Read as many movie scripts as you can and watch your screenwriting ability soar.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/mpl_util.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/mpl_util.py
deleted file mode 100644
index 0c970886faeac57427db27ca4510934de223ac8c..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/mpl_util.py
+++ /dev/null
@@ -1,79 +0,0 @@
-from __future__ import annotations
-
-from typing import TYPE_CHECKING, cast
-
-import matplotlib.path as mpath
-import numpy as np
-
-from contourpy import FillType, LineType
-
-if TYPE_CHECKING:
- from contourpy._contourpy import (
- CodeArray, FillReturn, LineReturn, LineReturn_Separate, OffsetArray,
- )
-
-
-def filled_to_mpl_paths(filled: FillReturn, fill_type: FillType) -> list[mpath.Path]:
- if fill_type in (FillType.OuterCode, FillType.ChunkCombinedCode):
- paths = [mpath.Path(points, codes) for points, codes in zip(*filled) if points is not None]
- elif fill_type in (FillType.OuterOffset, FillType.ChunkCombinedOffset):
- paths = [mpath.Path(points, offsets_to_mpl_codes(offsets))
- for points, offsets in zip(*filled) if points is not None]
- elif fill_type == FillType.ChunkCombinedCodeOffset:
- paths = []
- for points, codes, outer_offsets in zip(*filled):
- if points is None:
- continue
- points = np.split(points, outer_offsets[1:-1])
- codes = np.split(codes, outer_offsets[1:-1])
- paths += [mpath.Path(p, c) for p, c in zip(points, codes)]
- elif fill_type == FillType.ChunkCombinedOffsetOffset:
- paths = []
- for points, offsets, outer_offsets in zip(*filled):
- if points is None:
- continue
- for i in range(len(outer_offsets)-1):
- offs = offsets[outer_offsets[i]:outer_offsets[i+1]+1]
- pts = points[offs[0]:offs[-1]]
- paths += [mpath.Path(pts, offsets_to_mpl_codes(offs - offs[0]))]
- else:
- raise RuntimeError(f"Conversion of FillType {fill_type} to MPL Paths is not implemented")
- return paths
-
-
-def lines_to_mpl_paths(lines: LineReturn, line_type: LineType) -> list[mpath.Path]:
- if line_type == LineType.Separate:
- if TYPE_CHECKING:
- lines = cast(LineReturn_Separate, lines)
- paths = []
- for line in lines:
- # Drawing as Paths so that they can be closed correctly.
- closed = line[0, 0] == line[-1, 0] and line[0, 1] == line[-1, 1]
- paths.append(mpath.Path(line, closed=closed))
- elif line_type in (LineType.SeparateCode, LineType.ChunkCombinedCode):
- paths = [mpath.Path(points, codes) for points, codes in zip(*lines) if points is not None]
- elif line_type == LineType.ChunkCombinedOffset:
- paths = []
- for points, offsets in zip(*lines):
- if points is None:
- continue
- for i in range(len(offsets)-1):
- line = points[offsets[i]:offsets[i+1]]
- closed = line[0, 0] == line[-1, 0] and line[0, 1] == line[-1, 1]
- paths.append(mpath.Path(line, closed=closed))
- else:
- raise RuntimeError(f"Conversion of LineType {line_type} to MPL Paths is not implemented")
- return paths
-
-
-def mpl_codes_to_offsets(codes: CodeArray) -> OffsetArray:
- offsets = np.nonzero(codes == 1)[0].astype(np.uint32)
- offsets = np.append(offsets, len(codes))
- return offsets
-
-
-def offsets_to_mpl_codes(offsets: OffsetArray) -> CodeArray:
- codes = np.full(offsets[-1]-offsets[0], 2, dtype=np.uint8) # LINETO = 2
- codes[offsets[:-1]] = 1 # MOVETO = 1
- codes[offsets[1:]-1] = 79 # CLOSEPOLY 79
- return codes
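The two helpers at the end of mpl_util.py convert between contourpy offset arrays and Matplotlib Path codes. A minimal worked example of the round trip, with illustrative values (assumes only numpy is installed):

import numpy as np

# One outer boundary of 5 points followed by a 4-point hole:
offsets = np.array([0, 5, 9], dtype=np.uint32)

# offsets_to_mpl_codes: everything starts as LINETO (2), each boundary
# start becomes MOVETO (1) and each boundary end becomes CLOSEPOLY (79).
codes = np.full(offsets[-1] - offsets[0], 2, dtype=np.uint8)
codes[offsets[:-1]] = 1
codes[offsets[1:] - 1] = 79
# codes == [1, 2, 2, 2, 79, 1, 2, 2, 79]

# mpl_codes_to_offsets recovers the boundaries from the MOVETO markers:
recovered = np.append(np.nonzero(codes == 1)[0].astype(np.uint32), len(codes))
# recovered == [0, 5, 9]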
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/treeTools.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/treeTools.py
deleted file mode 100644
index 24e10ba5b19ef41d56a552527680a4c73503cc3c..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/treeTools.py
+++ /dev/null
@@ -1,45 +0,0 @@
-"""Generic tools for working with trees."""
-
-from math import ceil, log
-
-
-def build_n_ary_tree(leaves, n):
- """Build N-ary tree from sequence of leaf nodes.
-
- Return a list of lists where each non-leaf node is a list containing
- max n nodes.
- """
- if not leaves:
- return []
-
- assert n > 1
-
- depth = ceil(log(len(leaves), n))
-
- if depth <= 1:
- return list(leaves)
-
- # Fully populate complete subtrees of root until we have enough leaves left
- root = []
- unassigned = None
- full_step = n ** (depth - 1)
- for i in range(0, len(leaves), full_step):
- subtree = leaves[i : i + full_step]
- if len(subtree) < full_step:
- unassigned = subtree
- break
- while len(subtree) > n:
- subtree = [subtree[k : k + n] for k in range(0, len(subtree), n)]
- root.append(subtree)
-
- if unassigned:
- # Recurse to fill the last subtree, which is the only partially populated one
- subtree = build_n_ary_tree(unassigned, n)
- if len(subtree) <= n - len(root):
- # replace last subtree with its children if they can still fit
- root.extend(subtree)
- else:
- root.append(subtree)
- assert len(root) <= n
-
- return root
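A short usage sketch for build_n_ary_tree above, packing seven leaves into a binary (n=2) tree; the expected output follows directly from the splitting logic in the function:

from fontTools.misc.treeTools import build_n_ary_tree

print(build_n_ary_tree(list(range(7)), n=2))
# [[[0, 1], [2, 3]], [[4, 5], 6]]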
diff --git a/spaces/cncanon/locusts/dummy_server.py b/spaces/cncanon/locusts/dummy_server.py
deleted file mode 100644
index 181b6f94b055f0fb2da0234d845a2c480f6f244b..0000000000000000000000000000000000000000
--- a/spaces/cncanon/locusts/dummy_server.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from http.server import HTTPServer, BaseHTTPRequestHandler
-
-class DummyHandler(BaseHTTPRequestHandler):
- def do_GET(self):
- self.send_response(200)
- self.end_headers()
- self.wfile.write(b"Proxy is only open during 20:00 - 22:30 and 12:00 - 14:30, UTC+0.")
-
-def run(server_class=HTTPServer, handler_class=DummyHandler):
- server_address = ('', 7860)
- httpd = server_class(server_address, handler_class)
- print("Starting dummy server...")
- httpd.serve_forever()
-
-if __name__ == "__main__":
- run()
\ No newline at end of file
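Assuming the dummy server above is running locally on port 7860 (as hard-coded in the script), a quick standard-library check of its response might look like this:

import urllib.request

with urllib.request.urlopen("http://localhost:7860/") as resp:
    # Expected: 200 and the fixed "Proxy is only open during ..." message.
    print(resp.status, resp.read().decode())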
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacpsdata.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacpsdata.c
deleted file mode 100644
index 7a1f490060bd635af685c20adc84c161e6163eba..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacpsdata.c
+++ /dev/null
@@ -1,158 +0,0 @@
-/*
- * MPEG-4 Parametric Stereo data tables
- * Copyright (c) 2010 Alex Converse
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-static const uint8_t huff_iid_df1_bits[] = {
- 18, 18, 18, 18, 18, 18, 18, 18, 18, 17, 18, 17, 17, 16, 16, 15, 14, 14,
- 13, 12, 12, 11, 10, 10, 8, 7, 6, 5, 4, 3, 1, 3, 4, 5, 6, 7,
- 8, 9, 10, 11, 11, 12, 13, 14, 14, 15, 16, 16, 17, 17, 18, 17, 18, 18,
- 18, 18, 18, 18, 18, 18, 18,
-};
-
-static const uint32_t huff_iid_df1_codes[] = {
- 0x01FEB4, 0x01FEB5, 0x01FD76, 0x01FD77, 0x01FD74, 0x01FD75, 0x01FE8A,
- 0x01FE8B, 0x01FE88, 0x00FE80, 0x01FEB6, 0x00FE82, 0x00FEB8, 0x007F42,
- 0x007FAE, 0x003FAF, 0x001FD1, 0x001FE9, 0x000FE9, 0x0007EA, 0x0007FB,
- 0x0003FB, 0x0001FB, 0x0001FF, 0x00007C, 0x00003C, 0x00001C, 0x00000C,
- 0x000000, 0x000001, 0x000001, 0x000002, 0x000001, 0x00000D, 0x00001D,
- 0x00003D, 0x00007D, 0x0000FC, 0x0001FC, 0x0003FC, 0x0003F4, 0x0007EB,
- 0x000FEA, 0x001FEA, 0x001FD6, 0x003FD0, 0x007FAF, 0x007F43, 0x00FEB9,
- 0x00FE83, 0x01FEB7, 0x00FE81, 0x01FE89, 0x01FE8E, 0x01FE8F, 0x01FE8C,
- 0x01FE8D, 0x01FEB2, 0x01FEB3, 0x01FEB0, 0x01FEB1,
-};
-
-static const uint8_t huff_iid_dt1_bits[] = {
- 16, 16, 16, 16, 16, 16, 16, 16, 16, 15, 15, 15, 15, 15, 15, 14, 14, 13,
- 13, 13, 12, 12, 11, 10, 9, 9, 7, 6, 5, 3, 1, 2, 5, 6, 7, 8,
- 9, 10, 11, 11, 12, 12, 13, 13, 14, 14, 15, 15, 15, 15, 16, 16, 16, 16,
- 16, 16, 16, 16, 16, 16, 16,
-};
-
-static const uint16_t huff_iid_dt1_codes[] = {
- 0x004ED4, 0x004ED5, 0x004ECE, 0x004ECF, 0x004ECC, 0x004ED6, 0x004ED8,
- 0x004F46, 0x004F60, 0x002718, 0x002719, 0x002764, 0x002765, 0x00276D,
- 0x0027B1, 0x0013B7, 0x0013D6, 0x0009C7, 0x0009E9, 0x0009ED, 0x0004EE,
- 0x0004F7, 0x000278, 0x000139, 0x00009A, 0x00009F, 0x000020, 0x000011,
- 0x00000A, 0x000003, 0x000001, 0x000000, 0x00000B, 0x000012, 0x000021,
- 0x00004C, 0x00009B, 0x00013A, 0x000279, 0x000270, 0x0004EF, 0x0004E2,
- 0x0009EA, 0x0009D8, 0x0013D7, 0x0013D0, 0x0027B2, 0x0027A2, 0x00271A,
- 0x00271B, 0x004F66, 0x004F67, 0x004F61, 0x004F47, 0x004ED9, 0x004ED7,
- 0x004ECD, 0x004ED2, 0x004ED3, 0x004ED0, 0x004ED1,
-};
-
-static const uint8_t huff_iid_df0_bits[] = {
- 17, 17, 17, 17, 16, 15, 13, 10, 9, 7, 6, 5, 4, 3, 1, 3, 4, 5,
- 6, 6, 8, 11, 13, 14, 14, 15, 17, 18, 18,
-};
-
-static const uint32_t huff_iid_df0_codes[] = {
- 0x01FFFB, 0x01FFFC, 0x01FFFD, 0x01FFFA, 0x00FFFC, 0x007FFC, 0x001FFD,
- 0x0003FE, 0x0001FE, 0x00007E, 0x00003C, 0x00001D, 0x00000D, 0x000005,
- 0x000000, 0x000004, 0x00000C, 0x00001C, 0x00003D, 0x00003E, 0x0000FE,
- 0x0007FE, 0x001FFC, 0x003FFC, 0x003FFD, 0x007FFD, 0x01FFFE, 0x03FFFE,
- 0x03FFFF,
-};
-
-static const uint8_t huff_iid_dt0_bits[] = {
- 19, 19, 19, 20, 20, 20, 17, 15, 12, 10, 8, 6, 4, 2, 1, 3, 5, 7,
- 9, 11, 13, 14, 17, 19, 20, 20, 20, 20, 20,
-};
-
-static const uint32_t huff_iid_dt0_codes[] = {
- 0x07FFF9, 0x07FFFA, 0x07FFFB, 0x0FFFF8, 0x0FFFF9, 0x0FFFFA, 0x01FFFD,
- 0x007FFE, 0x000FFE, 0x0003FE, 0x0000FE, 0x00003E, 0x00000E, 0x000002,
- 0x000000, 0x000006, 0x00001E, 0x00007E, 0x0001FE, 0x0007FE, 0x001FFE,
- 0x003FFE, 0x01FFFC, 0x07FFF8, 0x0FFFFB, 0x0FFFFC, 0x0FFFFD, 0x0FFFFE,
- 0x0FFFFF,
-};
-
-static const uint8_t huff_icc_df_bits[] = {
- 14, 14, 12, 10, 7, 5, 3, 1, 2, 4, 6, 8, 9, 11, 13,
-};
-
-static const uint16_t huff_icc_df_codes[] = {
- 0x3FFF, 0x3FFE, 0x0FFE, 0x03FE, 0x007E, 0x001E, 0x0006, 0x0000,
- 0x0002, 0x000E, 0x003E, 0x00FE, 0x01FE, 0x07FE, 0x1FFE,
-};
-
-static const uint8_t huff_icc_dt_bits[] = {
- 14, 13, 11, 9, 7, 5, 3, 1, 2, 4, 6, 8, 10, 12, 14,
-};
-
-static const uint16_t huff_icc_dt_codes[] = {
- 0x3FFE, 0x1FFE, 0x07FE, 0x01FE, 0x007E, 0x001E, 0x0006, 0x0000,
- 0x0002, 0x000E, 0x003E, 0x00FE, 0x03FE, 0x0FFE, 0x3FFF,
-};
-
-static const uint8_t huff_ipd_df_bits[] = {
- 1, 3, 4, 4, 4, 4, 4, 4,
-};
-
-static const uint8_t huff_ipd_df_codes[] = {
- 0x01, 0x00, 0x06, 0x04, 0x02, 0x03, 0x05, 0x07,
-};
-
-static const uint8_t huff_ipd_dt_bits[] = {
- 1, 3, 4, 5, 5, 4, 4, 3,
-};
-
-static const uint8_t huff_ipd_dt_codes[] = {
- 0x01, 0x02, 0x02, 0x03, 0x02, 0x00, 0x03, 0x03,
-};
-
-static const uint8_t huff_opd_df_bits[] = {
- 1, 3, 4, 4, 5, 5, 4, 3,
-};
-
-static const uint8_t huff_opd_df_codes[] = {
- 0x01, 0x01, 0x06, 0x04, 0x0F, 0x0E, 0x05, 0x00,
-};
-
-static const uint8_t huff_opd_dt_bits[] = {
- 1, 3, 4, 5, 5, 4, 4, 3,
-};
-
-static const uint8_t huff_opd_dt_codes[] = {
- 0x01, 0x02, 0x01, 0x07, 0x06, 0x00, 0x02, 0x03,
-};
-
-static const int8_t huff_offset[] = {
- 30, 30,
- 14, 14,
- 7, 7,
- 0, 0,
- 0, 0,
-};
-
-///Table 8.48
-const int8_t ff_k_to_i_20[] = {
- 1, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 14, 15,
- 15, 15, 16, 16, 16, 16, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 18, 18,
- 18, 18, 18, 18, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
- 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19
-};
-///Table 8.49
-const int8_t ff_k_to_i_34[] = {
- 0, 1, 2, 3, 4, 5, 6, 6, 7, 2, 1, 0, 10, 10, 4, 5, 6, 7, 8,
- 9, 10, 11, 12, 9, 14, 11, 12, 13, 14, 15, 16, 13, 16, 17, 18, 19, 20, 21,
- 22, 22, 23, 23, 24, 24, 25, 25, 26, 26, 27, 27, 27, 28, 28, 28, 29, 29, 29,
- 30, 30, 30, 31, 31, 31, 31, 32, 32, 32, 32, 33, 33, 33, 33, 33, 33, 33, 33,
- 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33
-};
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Bet on Football with M-Bet Plus App - Download the Latest Version for Free.md b/spaces/congsaPfin/Manga-OCR/logs/Bet on Football with M-Bet Plus App - Download the Latest Version for Free.md
deleted file mode 100644
index 43f6b83843bb5efd17696806e51c2e736cbac4da..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Bet on Football with M-Bet Plus App - Download the Latest Version for Free.md
+++ /dev/null
@@ -1,110 +0,0 @@
-
-
M-Bet Plus App: How to Download and Install the Best Betting App in Tanzania
-
If you are looking for a reliable, convenient, and rewarding way to bet on your favorite sports in Tanzania, you should consider downloading the M-Bet Plus App. This is a mobile application that allows you to access all the features and services of M-Bet, one of the leading online sports betting platforms in Tanzania. With M-Bet Plus App, you can enjoy betting on various sports, such as football, basketball, tennis, rugby, cricket, and more. You can also take advantage of the generous bonuses and promotions that M-Bet offers to its customers. Whether you have an Android or iOS device, you can easily download and install the M-Bet Plus App on your smartphone or tablet. In this article, we will show you how to do that, as well as explain the features and benefits of this amazing app.
-
Features and Benefits of M-Bet Plus App
-
M-Bet Plus App is designed to provide you with a seamless and enjoyable betting experience on your mobile device. Here are some of the features and benefits that you can expect from this app:
User-friendly interface and design: The app has a simple and elegant design that makes it easy to navigate and use. You can find all the options and functions that you need with just a few taps. The app also has a dark mode that reduces eye strain and saves battery life.
-
Wide range of sports and markets: The app covers a variety of sports and markets that cater to different preferences and tastes. You can bet on popular sports like football, basketball, tennis, rugby, cricket, etc., as well as niche sports like darts, snooker, volleyball, etc. You can also bet on different types of markets, such as match result, over/under, handicap, correct score, etc.
-
Fast and secure payments: The app supports multiple payment methods that are fast and secure. You can deposit and withdraw money using mobile money services like Tigo Pesa, Airtel Money, Vodacom M-Pesa, Halo Pesa, etc. You can also use bank cards like Visa or Mastercard. The app uses SSL encryption technology to protect your personal and financial information.
-
Generous bonuses and promotions: The app offers various bonuses and promotions that can boost your winnings and enhance your betting experience. For example, you can get a 100% welcome bonus up to 10,000 TZS when you make your first deposit. You can also get a 10% cashback bonus every week if you lose more than 10 bets. Moreover, you can participate in the M-Bet Perfect 12 jackpot, where you can win up to 200 million TZS by predicting the outcome of 12 matches.
-
Live betting and virtual sports: The app allows you to bet on live events that are happening in real time. You can follow the action and place your bets as the game unfolds. You can also bet on virtual sports, which are simulated games that are based on random outcomes. You can bet on virtual football, horse racing, dog racing, etc.
-
Customer support and responsible gambling: The app has a dedicated customer support team that is available 24/7 to assist you with any issues or queries that you may have. You can contact them via phone, email, or live chat. The app also promotes responsible gambling and provides tools and resources to help you gamble safely and responsibly. You can set limits on your deposits, bets, and losses, as well as self-exclude yourself from the app if you feel that you have a gambling problem.
-
-
How to Download M-Bet Plus App for Android Devices
-
If you have an Android device, you can download and install the M-Bet Plus App by following these simple steps:
-
-
Visit the official M-Bet website and click on "Download Android App" at the top of the homepage.
-
Tap on "Download Our App Free" and then "Download" to start the download process. You may see a warning message that says "This type of file can harm your device". Ignore it and tap on "OK".
-
Once the apk file is downloaded, open it and allow installation from unknown sources. You may need to go to your device settings and enable this option.
-
Follow the instructions to install the app on your device. It may take a few minutes to complete.
-
-
How to Download M-Bet Plus App for iOS Devices
-
If you have an iOS device, you can download and install the M-Bet Plus App by following these simple steps:
-
-
Visit the official M-Bet website and click on "Download iOS App" at the top of the homepage.
-
Tap on "Download Our App Free" and then "Download" to start the download process. You may see a pop-up message that says "M-Bet would like to install 'M-Bet Plus'". Tap on "Install".
-
Once the app is downloaded, open it and trust the developer in your device settings. You may need to go to Settings > General > Device Management > Trust 'M-Bet'.
-
Follow the instructions to install the app on your device. It may take a few minutes to complete.
-
-
How to Use M-Bet Plus App
-
Once you have downloaded and installed the M-Bet Plus App on your device, you can start using it by following these simple steps:
-
-
Launch the app and log in with your existing account or register a new one. You will need to provide some basic information, such as your name, phone number, email address, etc.
-
Choose your preferred sport and market from the menu or search bar. You can browse through different categories, such as popular, today, tomorrow, etc. You can also filter by country, league, or team.
-
Place your bets by selecting the odds and entering the stake amount. You can place single or multiple bets, as well as pre-match or live bets. You can also use the quick bet feature to place your bets faster.
-
Confirm your bets and wait for the results. You can check your bet history and status in the app. You can also cash out your bets before the event ends if you want to secure a profit or minimize a loss.
-
-
Comparison Between M-Bet Plus App and M-Bet Classic App
-
M-Bet Plus App is not the only mobile application that M-Bet offers to its customers. There is also another app called M-Bet Classic App, which is an older version of the app. Here is a table showing the differences and similarities between the two apps in terms of features, design, functionality, etc.
-
-
M-Bet Plus App
M-Bet Classic App
- Newer and improved version of the app - More features and options - Better design and interface - Faster and smoother performance - Dark mode available - Compatible with Android and iOS devices
- Older and outdated version of the app - Fewer features and options - Basic design and interface - Slower and less stable performance - No dark mode available - Compatible only with Android devices
-
-
Conclusion
-
M-Bet Plus App is a great choice for anyone who wants to bet on sports in Tanzania. It has many features and benefits that make it stand out from other betting apps. It is easy to download and install, user-friendly, secure, and rewarding. It also offers a wide range of sports and markets, live betting and virtual sports, bonuses and promotions, customer support and responsible gambling. Whether you are a beginner or an expert, you will find something that suits your needs and preferences. So, what are you waiting for? Download the M-Bet Plus App today and start betting on your favorite sports!
-
FAQs
-
Here are some of the frequently asked questions about M-Bet Plus App:
-
-
Q: How can I contact M-Bet customer support?
-A: You can contact M-Bet customer support via phone, email, or live chat. The phone number is +255 677 044 444, the email address is info@m-bet.co.tz, and the live chat option is available on the app or website.
-
Q: How can I claim my welcome bonus?
-A: You can claim your welcome bonus by making your first deposit of at least 1,000 TZS. You will receive a 100% bonus up to 10,000 TZS in your account. You will need to wager the bonus amount 4 times on odds of 2.0 or higher before you can withdraw it.
-
Q: How can I participate in the M-Bet Perfect 12 jackpot?
-A: You can participate in the M-Bet Perfect 12 jackpot by predicting the outcome of 12 matches that are selected by M-Bet. You can choose between home win, draw, or away win for each match. The entry fee is 1,000 TZS and the jackpot prize is 200 million TZS. You can also win consolation prizes if you get 11, 10, or 9 correct predictions.
-
Q: How can I cash out my bets?
-A: You can cash out your bets before the event ends if you want to secure a profit or minimize a loss. You can do this by going to your bet history and selecting the cash out option. The amount you will receive depends on the current odds and the status of your bet.
-
Q: How can I gamble responsibly?
-A: You can gamble responsibly by setting limits on your deposits, bets, and losses, as well as self-excluding yourself from the app if you feel that you have a gambling problem. You can also seek help from professional organizations like GamCare or Gamblers Anonymous if you need support or advice.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy the Best of PES 2018 with Konami APK and OBB Data Files.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy the Best of PES 2018 with Konami APK and OBB Data Files.md
deleted file mode 100644
index 797b61a16c9d68ec94d562cd366244e5829b8417..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy the Best of PES 2018 with Konami APK and OBB Data Files.md
+++ /dev/null
@@ -1,173 +0,0 @@
-
-
How to Download and Play PES 2018 Pro Evolution Soccer on Your Android Device
-
If you are a fan of soccer games, you might have heard of PES 2018 Pro Evolution Soccer, one of the most popular and realistic soccer games on mobile devices. In this article, we will show you how to download and play PES 2018 Pro Evolution Soccer on your Android device, as well as some tips and tricks to help you enjoy the game more.
PES 2018 Pro Evolution Soccer is a soccer game developed by Konami Digital Entertainment, the same company behind the famous Metal Gear Solid series. It is the latest entry in the PRO EVOLUTION SOCCER series, which has been running since 2001.
-
PES 2018 Pro Evolution Soccer features world famous national and club teams, such as Brazil, France, Japan, FC Barcelona, and Liverpool FC. You can build your own squad from over 10,000 actual players, including legends like Beckham, Maradona, and Zico. You can also live out your childhood soccer fantasy by taking control of these legendary players and creating your dream team.
-
PES 2018 Pro Evolution Soccer also boasts of stunning graphics, realistic animations, and smooth gameplay. You can experience authentic soccer on the go by playing as teams from all over the world, with natural player movements, precision passing, and in-depth tactics. You can also face off against your friends anytime, anywhere, by using the online or offline modes.
-
Whether you are a casual or hardcore soccer fan, PES 2018 Pro Evolution Soccer will surely satisfy your soccer cravings. It is a game that you can play for hours without getting bored.
-
How to Download PES 2018 Pro Evolution Soccer on Your Android Device
-
Before you can play PES 2018 Pro Evolution Soccer on your Android device, you need to make sure that your device meets the system requirements and compatibility of the game. According to the official website, you need to have an Android device with at least:
-
-
Android version 5.0 or higher
-
1.5 GB of RAM or more
-
1.4 GB of free storage space or more
-
A stable internet connection (Wi-Fi is recommended)
-
-
If your device meets these requirements, you can proceed to download PES 2018 Pro Evolution Soccer apk file from a trusted source. One of the sources that we recommend is CNET Download, which is a reputable website that offers safe and secure downloads of various software and apps.
-
-
To download PES 2018 Pro Evolution Soccer apk file from CNET Download, follow these steps:
-
-
Go to the CNET Download website using your browser.
-
Type "PES 2018 Pro Evolution Soccer" in the search box and hit enter.
-
Click on the "PES 2018 Pro Evolution Soccer" result from the list.
-
Click on the "Download Now" button and wait for the apk file to be downloaded to your device.
-
-
Alternatively, you can also download PES 2018 Pro Evolution Soccer apk file from other sources, such as APKPure or APKMirror. However, you need to be careful and make sure that the apk file is free from viruses and malware. You can use an antivirus app to scan the apk file before installing it.
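One extra precaution, not mentioned in the original article, is to compare the downloaded file's SHA-256 hash with the checksum published by the site you got it from (if one is provided). The file name and expected value below are placeholders:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

apk_path = "pes2018.apk"                                  # placeholder path
expected = "<checksum published by the download site>"    # placeholder value

actual = sha256_of(apk_path)
print("SHA-256:", actual)
print("Matches published checksum:", actual == expected)
```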
-
Once you have downloaded PES 2018 Pro Evolution Soccer apk file, you need to install it on your Android device. To do this, follow these steps:
-
-
Go to your device's settings and enable the option to install apps from unknown sources. This will allow you to install apps that are not from the Google Play Store.
-
Locate the PES 2018 Pro Evolution Soccer apk file on your device using a file manager app.
-
Tap on the apk file and follow the instructions to install it on your device.
-
Wait for the installation process to finish and launch PES 2018 Pro Evolution Soccer from your app drawer or home screen.
-
-
Congratulations! You have successfully downloaded and installed PES 2018 Pro Evolution Soccer on your Android device. You are now ready to play the game and enjoy its amazing features.
-
How to Play PES 2018 Pro Evolution Soccer on Your Android Device
-
PES 2018 Pro Evolution Soccer is a game that is easy to learn but hard to master. You need to have some skills and strategies to win matches and tournaments. Here are some basic steps on how to play PES 2018 Pro Evolution Soccer on your Android device:
-
How to create your own squad and customize your players
-
The first thing you need to do is to create your own squad and customize your players. You can choose from over 10,000 actual players, including legends like Beckham, Maradona, and Zico. You can also edit their appearance, skills, attributes, and positions.
-
To create your own squad and customize your players, follow these steps:
-
-
From the main menu, tap on "My Team".
-
Tap on "Squad Management".
-
Tap on "Player List".
-
Select the player you want to edit or replace.
-
Tap on "Edit Player" or "Transfer".
-
Make the changes you want and save them.
-
-
You can also create your own original players by tapping on "Create Player" from the player list. You can customize their name, nationality, age, height, weight, face, hair, kit number, position, playing style, skills, and abilities.
-
How to control your players and perform actions on the field
-
The next thing you need to do is to control your players and perform actions on the field. You can choose between two types of controls: advanced or classic. The advanced controls allow you to use gestures and swipes to perform actions, while the classic controls use virtual buttons and joysticks.
-
To control your players and perform actions on the field, follow these steps:
-
-
To move your player, use the left joystick or swipe on the left side of the screen.
-
To pass the ball, tap on a teammate or swipe in their direction.
-
To shoot the ball, tap on the goal or swipe in its direction.
-
To dribble the ball, use the right joystick or swipe on the right side of the screen.
-
To tackle an opponent, tap on them or swipe in their direction.
-
To switch players, tap on the player icon or swipe up or down on the right side of the screen.
-
-
How to compete with other players online or offline
-
To compete with other players online or offline, follow these steps:
-
-
From the main menu, tap on "Match".
-
Select the mode you want to play, such as Matchday, Online Divisions, Local Match, Friendly Match Lobby, Campaign Mode, Events Mode, Training Mode, and more.
-
Choose your team and your opponent's team.
-
Adjust the match settings, such as difficulty, time limit, stadium, weather, and more.
-
Tap on "Start Match" and enjoy the game.
-
-
You can also view your match records, rankings, rewards, and achievements by tapping on "My Profile" from the main menu.
-
Tips and Tricks for Playing PES 2018 Pro Evolution Soccer on Your Android Device
-
PES 2018 Pro Evolution Soccer is a game that requires some skills and strategies to win matches and tournaments. Here are some tips and tricks that can help you improve your game and have more fun:
-
How to master the advanced and classic controls
-
One of the most important aspects of playing PES 2018 Pro Evolution Soccer is to master the advanced and classic controls. The advanced controls allow you to use gestures and swipes to perform actions, while the classic controls use virtual buttons and joysticks. You can choose the control type that suits your preference and style.
-
To master the advanced and classic controls, follow these tips:
-
-
Practice using the controls in the Training Mode. You can learn how to perform various actions, such as passing, shooting, dribbling, tackling, switching players, and more.
-
Adjust the sensitivity and size of the controls in the Settings. You can make the controls more responsive or comfortable for your fingers.
-
Use the auto-feint option in the Settings. This option will enable your players to perform feints and tricks automatically when you swipe on the screen.
-
Use the one-two pass option in the Settings. This option will enable your players to pass the ball back and forth quickly when you tap on the screen twice.
-
Use the through pass option in the Settings. This option will enable your players to pass the ball ahead of a teammate who is running into space when you swipe on the screen.
-
-
How to use scouts, agents, and auctions to acquire the best players
-
Another important aspect of playing PES 2018 Pro Evolution Soccer is to use scouts, agents, and auctions to acquire the best players. Scouts, agents, and auctions are ways to obtain new players for your squad. You can use coins or GP (game points) to use these methods.
-
To use scouts, agents, and auctions to acquire the best players, follow these tips:
-
-
Use scouts to obtain specific players based on their attributes, such as position, nationality, league, club, skill, etc. You can obtain scouts by playing matches or events. You can also combine scouts to increase your chances of getting better players.
-
Use agents to obtain random players based on their rarity, such as bronze, silver, gold, or black ball. You can obtain agents by playing matches or events. You can also use special agents that offer higher chances of getting rare players.
-
Use auctions to bid for specific players that other users have put up for sale. You can use GP to bid for players. You can also sell your own players in auctions to earn GP.
-
-
How to earn coins and GP and use them wisely
-
The last important aspect of playing PES 2018 Pro Evolution Soccer is to earn coins and GP and use them wisely. Coins and GP are currencies that you can use to buy scouts, agents, auctions, energy recovery items, contract renewals, etc. You can earn coins and GP by playing matches or events.
-
To earn coins and GP and use them wisely, follow these tips:
-
-
Complete daily missions and achievements to earn coins and GP. You can view your missions and achievements by tapping on "My Profile" from the main menu.
-
Participate in events and tournaments to earn coins and GP. You can view the current events and tournaments by tapping on "Match" from the main menu.
-
Play online matches against other users to earn coins and GP. You can play online matches by tapping on "Online Divisions" or "Friendly Match Lobby" from the match menu.
-
Spend your coins and GP on scouts, agents, auctions, energy recovery items, contract renewals, etc. You can buy these items by tapping on "Shop" from the main menu.
-
Save your coins and GP for special occasions, such as when there are special agents or auctions that offer high-quality players.
-
-
Conclusion
-
PES 2018 Pro Evolution Soccer is a game that you can download and play on your Android device. It is a game that features world famous national and club teams, stunning graphics, realistic animations, and smooth gameplay. You can create your own squad, control your players, and compete with other players online or offline. You can also use scouts, agents, and auctions to acquire the best players, and earn coins and GP to buy various items.
-
If you are a fan of soccer games, you should not miss PES 2018 Pro Evolution Soccer. It is a game that will give you hours of fun and excitement. It is a game that will make you feel like a real soccer star.
-
So what are you waiting for? Download PES 2018 Pro Evolution Soccer apk file from CNET Download or other sources, install it on your Android device, and start playing the game. You will not regret it.
-
FAQs
-
Here are some frequently asked questions and answers about PES 2018 Pro Evolution Soccer:
-
Q: How can I update PES 2018 Pro Evolution Soccer to the latest version?
-
A: You can update PES 2018 Pro Evolution Soccer to the latest version by downloading and installing the latest apk file from CNET Download or other sources. You can also check for updates by tapping on "Extras" from the main menu and then tapping on "Update".
-
Q: How can I transfer my data from one device to another?
-
A: You can transfer your data from one device to another by using the data transfer feature. To do this, follow these steps:
-
-
On your old device, tap on "Extras" from the main menu and then tap on "Data Transfer".
-
Tap on "Link Data" and choose a method to link your data, such as Google Play Games or KONAMI ID.
-
On your new device, tap on "Extras" from the main menu and then tap on "Data Transfer".
-
Tap on "Transfer Data" and choose the same method that you used to link your data on your old device.
-
Follow the instructions to complete the data transfer.
-
-
Q: How can I contact the customer support of PES 2018 Pro Evolution Soccer?
-
A: You can contact the customer support of PES 2018 Pro Evolution Soccer by tapping on "Extras" from the main menu and then tapping on "Support". You can also visit the official website or the official Facebook page of PES 2018 Pro Evolution Soccer for more information and assistance.
-
Q: How can I change the language of PES 2018 Pro Evolution Soccer?
-
A: You can change the language of PES 2018 Pro Evolution Soccer by tapping on "Extras" from the main menu and then tapping on "Settings". You can then choose the language that you prefer from the list of available languages.
-
Q: How can I get more coins and GP for free?
-
A: You can get more coins and GP for free by completing daily missions and achievements, participating in events and tournaments, playing online matches against other users, selling your players in auctions, watching ads, and inviting your friends to play PES 2018 Pro Evolution Soccer.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Experience Sonic Mania Plus on Android with Exagear and Gamepad - The Ultimate Way to Play the Game.md b/spaces/congsaPfin/Manga-OCR/logs/Experience Sonic Mania Plus on Android with Exagear and Gamepad - The Ultimate Way to Play the Game.md
deleted file mode 100644
index 3278fdf91f70218ebf37e66c6b5fdcda85458915..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Experience Sonic Mania Plus on Android with Exagear and Gamepad - The Ultimate Way to Play the Game.md
+++ /dev/null
@@ -1,222 +0,0 @@
-
-
-
-
Sonic Mania Plus: A Review of the Hedgehog's Greatest Adventure
-
-
-
Introduction
-
-
-
Sonic Mania Plus is an enhanced version of Sonic Mania, a game that was released in 2017 as a tribute to the classic Sonic games from the Sega Genesis era. It was developed by a team of talented fans who were hired by Sega to create a game that would appeal to both old-school and new-school Sonic fans.
-
-
-
Sonic Mania Plus is widely regarded as one of the best Sonic games ever made, as it
delivers a perfect balance of nostalgia and innovation, with stunning graphics, catchy music, and smooth gameplay. It also adds new features and content that make it even more enjoyable and replayable than the original version.
In this article, we will review Sonic Mania Plus and see why it is a must-play game for any Sonic fan or platformer lover. We will cover its gameplay, graphics, sound, and conclusion, as well as answer some frequently asked questions about the game.
-
-
-
Gameplay
-
-
-
Sonic Mania Plus is a 2D side-scrolling platformer that follows the adventures of Sonic the Hedgehog and his friends as they try to stop the evil Dr. Eggman and his robot army from taking over the world. The game consists of 12 zones, each with two acts and a boss fight. The zones are a mix of remixed stages from previous Sonic games and new stages that are inspired by them.
-
-
-
The gameplay is fast-paced and fun, as you can run, jump, spin, dash, and fly through the levels, collecting rings, power-ups, and secrets along the way. You can also use various gimmicks and obstacles, such as springs, loops, spikes, switches, and more, to spice up the action. The game also features multiple paths and routes that you can take to explore the levels and find different outcomes.
-
-
-
One of the main differences between Sonic Mania Plus and the original Sonic Mania is that it adds two new playable characters: Mighty the Armadillo and Ray the Flying Squirrel. These characters have their own unique abilities and playstyles that change the way you approach the game. Mighty can slam the ground to destroy enemies and obstacles, as well as bounce off spikes without taking damage. Ray can glide in the air by tilting the controller left or right, allowing him to reach high places and avoid hazards.
-
-
-
Another difference is that it adds two new modes: Encore Mode and Competition Mode. Encore Mode is a remixed version of the main game, with different level layouts, color palettes, music tracks, and enemy placements. It also introduces a new feature called Character Switching, which allows you to switch between two characters at any time by hitting a special monitor. You can also find more characters in the levels and swap them with your current ones. However, if you lose all your lives with one character, you lose them for good.
-
-
-
Competition Mode is a multiplayer mode that lets you race against up to three other players on split-screen. You can choose from four zones and customize the rules and settings of the match. You can also play online with other players via GameJolt, a platform that hosts indie games and fan games. To play Sonic Mania Plus online, you need to download the Sonic Mania Plus APK GameJolt file from their website and install it on your device.
-
-
-
Sonic Mania Plus is a game that captures the classic Sonic feel, as it pays homage to the original games and their mechanics, while also adding new twists and surprises. The game is challenging but fair, rewarding skill and exploration. It also has a lot of replay value, as you can try different characters, modes, and paths. The game is a blast to play solo or with friends, as you can share the excitement and fun of the Sonic experience.
-
-
-
Graphics and Sound
-
-
-
Sonic Mania Plus is a game that looks and sounds amazing, as it uses retro-style pixel art and music to create a nostalgic and vibrant atmosphere. The game is colorful and detailed, with smooth animations and dynamic backgrounds. The game also runs at 60 frames per second on all platforms, ensuring a smooth and responsive gameplay.
-
-
-
The game is available on various platforms, such as PC, PlayStation 4, Xbox One, Nintendo Switch, and Android devices. The game looks and sounds great on all of them, with minor differences in resolution and performance. The game also supports cross-play between PC and Android devices, allowing you to play online with other players regardless of the platform.
-
-
-
The game is full of references and homages to previous Sonic games and other Sega titles, such as Streets of Rage, OutRun, and Jet Set Radio. The game features many Easter eggs and secrets that will delight any fan of the Sonic franchise. The game also has a lot of humor and personality, with funny dialogue and expressions from the characters.
-
-
-
The game's soundtrack is composed by Tee Lopes, a fan-turned-professional musician who created remixes and original tracks for the game. The soundtrack is catchy and diverse, featuring various genres and styles that match the mood and theme of each zone. The soundtrack also includes contributions from other Sonic composers, such as Jun Senoue, Masato Nakamura, and Hyper Potions.
-
-
Conclusion
-
-
-
Sonic Mania Plus is a game that deserves all the praise and acclaim it has received, as it is a masterpiece of platforming and fan service. It is a game that celebrates the legacy and history of Sonic the Hedgehog, while also bringing new and fresh ideas to the table. It is a game that appeals to both old and new fans of the blue blur, as well as anyone who enjoys a good platformer.
-
-
-
The game has many pros and cons, depending on your preferences and expectations. Here are some of them:
-
-
-
-
-
-
| Pros | Cons |
| --- | --- |
| Amazing graphics and sound | Some levels can be frustrating or confusing |
| Smooth and fun gameplay | Some bosses can be too easy or too hard |
| New characters and modes | Some features can be locked behind DLC or online access |
| High replay value and content | Some glitches and bugs can occur |
| Nostalgic and innovative | Some references and homages can be obscure or missed |
-
-
-
-
-
So, who should play Sonic Mania Plus and who should avoid it? Well, if you are a fan of Sonic the Hedgehog, or platformers in general, you should definitely give it a try. It is a game that will make you smile and have fun, as well as challenge and impress you. It is a game that will remind you why you love Sonic and why he is still relevant and popular today.
-
-
-
However, if you are not a fan of Sonic the Hedgehog, or platformers in general, you might want to skip it. It is a game that might not appeal to you or suit your tastes, as it is very faithful and loyal to the original games and their mechanics. It is a game that might frustrate or bore you, as it can be very fast and chaotic, or very slow and tedious.
-
-
-
Ultimately, Sonic Mania Plus is a game that deserves your attention and appreciation, as it is one of the best Sonic games ever made. It is a game that ranks among the top platformers of all time, as it is a masterpiece of design and creativity. It is a game that you should play at least once in your life, as it is a game that will make you feel like a kid again.
-
-
-
Frequently Asked Questions
-
-
-
Q: How long is Sonic Mania Plus?
-
-
-
A: Sonic Mania Plus is about 4 to 6 hours long, depending on your skill level and how much you explore the levels. However, the game has a lot of replay value, as you can play with different characters, modes, and paths.
-
-
-
Q: How much does Sonic Mania Plus cost?
-
-
-
A: Sonic Mania Plus costs $29.99 for the physical version, which includes an art book and a reversible cover. The digital version costs $19.99 for the base game, and $4.99 for the Plus DLC, which adds the new features and content.
-
-
-
Q: How do I unlock Super Sonic in Sonic Mania Plus?
-
-
-
A: To unlock Super Sonic in Sonic Mania Plus, you need to collect all seven Chaos Emeralds in the game. You can find them in special stages, which are accessed by finding giant rings hidden in the levels. Once you have all seven Chaos Emeralds, you can transform into Super Sonic by collecting 50 rings and pressing the jump button twice.
-
-
Q: How do I play Sonic Mania Plus online?
-
-
-
A: To play Sonic Mania Plus online, you need to download the Sonic Mania Plus APK GameJolt file from their website and install it on your Android device. You also need to create a GameJolt account and log in to the game. Then, you can join or host online matches with other players using the Competition Mode.
-
-
-
Q: What are the differences between Sonic Mania and Sonic Mania Plus?
-
-
-
A: Sonic Mania Plus is an enhanced version of Sonic Mania, which adds the following features and content:
-
-
-
-
Two new playable characters: Mighty the Armadillo and Ray the Flying Squirrel
-
Two new modes: Encore Mode and Competition Mode
-
New level layouts, color palettes, music tracks, and enemy placements
-
New feature: Character Switching
-
New option: Angel Island Zone
-
New cutscenes and endings
-
New achievements and trophies
-
Various bug fixes and improvements
-
-
-
-
I hope this article has helped you learn more about Sonic Mania Plus and why it is a game worth playing. If you have any questions or comments, feel free to leave them below. Thank you for reading and have a great day!
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Get Among Us 4.20 for Free on PC Android and iOS.md b/spaces/congsaPfin/Manga-OCR/logs/How to Get Among Us 4.20 for Free on PC Android and iOS.md
deleted file mode 100644
index 867ee10fc75b272bfae8c1936f336f6fa40bf70a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Get Among Us 4.20 for Free on PC Android and iOS.md
+++ /dev/null
@@ -1,156 +0,0 @@
-
Among Us 4.20 Download: How to Play the Latest Version of the Popular Social Deduction Game
-
If you are looking for a fun and exciting game to play with your friends or strangers online, you might want to check out Among Us, one of the most popular social deduction games in recent years. In this article, we will tell you everything you need to know about Among Us 4.20, the latest version of the game that has been released in June 2023. We will also show you how to download and install it on your device, whether it is an Android, iOS, Windows, or Mac device.
Among Us is a multiplayer game that can be played online or over local WiFi with 4-15 players. The game is set in a spaceship, where each player has a role: either a Crewmate or an Impostor.
-
The Crewmates have to work together to complete tasks around the ship and find out who the Impostor is. The Impostor has to sabotage, kill, and deceive the Crewmates without getting caught.
-
The game has several features and modes that make it fun and engaging, such as:
-
-
-
Different maps to play in: The Skeld, MIRA HQ, Polus, and the Airship.
-
Lots of game options: Add more Impostors, more tasks, different roles, and so much more.
-
Different modes to choose from: Classic or Hide n Seek.
-
A chat system that allows players to communicate with each other during meetings or emergencies.
-
A friend system that lets players add and invite their friends to play together.
-
-
The game has become a viral sensation among gamers and streamers for several reasons, such as:
-
-
Its simple yet addictive gameplay that can be enjoyed by anyone.
-
Its social aspect that encourages interaction, cooperation, deception, and betrayal among players.
-
Its hilarious and unpredictable moments that can create memorable experiences.
-
Its low system requirements that make it accessible to a wide range of devices.
-
-
What's new in Among Us 4.20?
-
The latest update of Among Us, version 4.20, has been released in June 2023 with some new features and improvements that make the game even better than before. Some of these are:
-
Four new roles: Scientist, Engineer, Guardian Angel, and Shapeshifter.
-
An XP system that rewards players for playing the game and completing tasks.
-
Multiple currencies: Stars, Beans, and Pods that can be used to buy cosmetics and other items.
-
A new store that offers a variety of customization options, including visor cosmetics, name plates, and more.
-
A single Among Us account that lets players save their progress and use their cosmetics across different platforms.
-
-
To give you a better idea of what's new in Among Us 4.20, here is a table that compares it with the previous version of the game:
-
-
-
| Feature | Among Us 4.19 | Among Us 4.20 |
| --- | --- | --- |
| Roles | Crewmate or Impostor | Crewmate (Scientist or Engineer), Impostor (Shapeshifter), or Guardian Angel |
| XP system | No XP system | XP system that tracks players' level, playtime, tasks completed, and more |
| Currencies | No currencies | Stars (premium currency), Beans (earned by playing), and Pods (earned by leveling up) |
| Store | Limited store with only hats, skins, and pets | Expanded store with visor cosmetics, name plates, cosmicubes, and more |
| Account | No account required | Single account required to save progress and use cosmetics across platforms |
-
-
-
If you want to see the new features in action, you can look up gameplay videos and screenshots of the 4.20 update online.
-
-
-
-
How to download and install Among Us 4.20 on different platforms?
-
If you are interested in playing Among Us 4.20, you might be wondering how to download and install it on your device. Depending on what platform you are using, the process may vary slightly. Here are the steps for each platform:
-
Android
-
-
Go to the Google Play Store and search for Among Us, or open the Among Us page on Google Play directly.
-
Tap on the Install button and wait for the download to finish.
-
Open the game and enjoy!
-
-
iOS
-
-
Go to the App Store and search for Among Us or click on this link: [Among Us! on the App Store].
-
Tap on the Get button and wait for the download to finish.
-
Open the game and enjoy!
-
-
Windows
-
-
Go to Steam and search for Among Us, or open the Among Us page on the Steam store directly.
-
Add the game to your cart and purchase it for $3.74 (25% off until June 27).
-
Download and install the game through Steam.
-
Open the game and enjoy!
-
-
Mac
-
Go to the Epic Games Store and search for Among Us or click on this link: [Among Us - Epic Games Store].
-
Add the game to your cart and purchase it for $4.99.
-
Download and install the game through the Epic Games Launcher.
-
Open the game and enjoy!
-
-
Before you download and install Among Us 4.20, here are some tips and warnings that you should keep in mind:
-
-
Make sure you have enough storage space on your device to download and install the game.
-
Make sure you have a stable internet connection to download and play the game online.
-
Make sure you have a compatible device that meets the minimum system requirements of the game.
-
Do not download or install the game from unofficial or untrusted sources, as they may contain viruses, malware, or other harmful content.
-
Do not use any cheats, hacks, or mods that may alter the game or give you an unfair advantage, as they may ruin the game experience for yourself and others, or get you banned from the game.
-
-
Conclusion
-
In conclusion, Among Us 4.20 is the latest version of the popular social deduction game that has been released in June 2023 with some new features and improvements that make the game even better than before. You can play it online or over local WiFi with 4-15 players, and choose from different roles, maps, modes, and options. You can also customize your character with various cosmetics and items that you can buy with different currencies. You can download and install it on your Android, iOS, Windows, or Mac device by following the steps we have provided above. We hope you enjoy playing Among Us 4.20 and have a blast with your friends or strangers online!
-
If you want to learn more about Among Us or join its community, you can visit these resources or links:
-
-
The official website of Among Us: [Among Us | InnerSloth].
-
The official Twitter account of Among Us: [@AmongUsGame].
-
The official Discord server of Among Us: [Among Us].
-
The official subreddit of Among Us: [r/AmongUs].
-
The official wiki of Among Us: [Among Us Wiki | Fandom].
-
-
Frequently Asked Questions
-
Here are some of the frequently asked questions about Among Us 4.20:
-
Q: Is Among Us 4.20 free to play?
-
A: Among Us 4.20 is free to play on Android and iOS devices, but it costs $3.74 on Steam (until June 27) and $4.99 on Epic Games Store for Windows and Mac devices. However, there are some in-game purchases that require real money, such as Stars (the premium currency) and some cosmetics and items.
-
Q: How do I update my Among Us to version 4.20?
-
A: If you already have Among Us installed on your device, you can update it to version 4.20 by going to the app store or launcher where you got it from and checking for updates. Alternatively, you can uninstall the previous version of the game and download and install the latest version from the official sources or trusted third-party websites that we have provided above.
-
Q: How do I play with my friends in Among Us 4.20?
-
A: You can play with your friends in Among Us 4.20 by either creating a private room or joining a public room. To create a private room, you need to select a map, mode, and options, then tap on Host. You will get a code that you can share with your friends so they can join your room. To join a public room, you need to select a map, mode, and options, then tap on Find Game. You will see a list of available rooms that you can join by tapping on them.
-
Q: How do I change my role in Among Us 4.20?
-
A: You can change your role in Among Us 4.20 by going to the game options before starting a game and selecting one of the four roles: Scientist, Engineer, Guardian Angel, or Shapeshifter. However, note that not all roles are available for all modes or maps, and some roles may require certain conditions to be met before they can be activated.
-
Q: How do I report a bug or issue in Among Us 4.20?
-
A: You can report a bug or issue in Among Us 4.20 by going to the settings menu in the game and tapping on the Report Bug button. You will be redirected to a form where you can fill in the details of the bug or issue, such as the platform, the version, the map, the mode, the role, and the description. You can also attach a screenshot or a video to illustrate the problem. After submitting the form, you will receive a confirmation email and a ticket number that you can use to track the status of your report.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Tune Your Car and Win Races in Drag Racing Classic.md b/spaces/congsaPfin/Manga-OCR/logs/How to Tune Your Car and Win Races in Drag Racing Classic.md
deleted file mode 100644
index 844832100de8fcbe4a7bf773ef1507a727fb982c..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Tune Your Car and Win Races in Drag Racing Classic.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
Drag Racing Classic: A Guide for Beginners
-
If you are looking for a racing game that is fun, addictive, and challenging, you might want to check out Drag Racing Classic. This game is one of the most popular racing apps on the App Store, with over 100 million players worldwide. In this game, you can drive 50+ real licensed cars from the world’s hottest car manufacturers, race against other players online, and customize and tune your cars for 1/4 or 1/2 mile drag races. In this article, we will give you a guide on how to play, master, and enjoy Drag Racing Classic.
-
What is Drag Racing Classic?
-
Drag Racing Classic is a racing game developed by Creative Mobile, a leading mobile game developer based in Estonia. The game was released in 2011 and has since become one of the most downloaded racing apps on the App Store. The game is available for both iPhone and iPad devices.
Drag Racing Classic is a game that simulates drag racing, which is a type of motor racing that involves two vehicles competing in a straight line over a fixed distance. The objective of the game is to accelerate faster than your opponent and reach the finish line first. The game features realistic physics, graphics, and sound effects that make you feel like you are on a real drag strip.
-
The game also offers a variety of features that make it more engaging and enjoyable. Some of these features are:
-
-
50+ real licensed cars from the world’s hottest car manufacturers including BMW, Dodge, Honda, Nissan, McLaren, Pagani and the officially licensed 1200 bhp Hennessey Venom GT™
-
Performance upgrades, customization options, and tuning tools that allow you to modify your cars according to your preferences and needs
-
Different modes of play that cater to different levels of skill and interest: career mode, online mode, and pro league mode
-
Competitive multiplayer that lets you race against your friends or random racers online, drive your opponent’s car, or participate in real-time 10-player races in pro league mode
-
A team feature that lets you join a team or create your own team to exchange tunes, discuss strategy, and share your achievements with other players
-
An awesome community that connects you with other car game fanatics who enjoy Drag Racing Classic as much as you do
-
-
How to play Drag Racing Classic?
-
Playing Drag Racing Classic is easy and fun. All you need is your device and an internet connection. Here are the basic steps on how to play the game:
-
-
Download the game from the App Store for free. You can also purchase in-app items to enhance your gaming experience.
-
Launch the game and choose your mode of play: career mode, online mode, or pro league mode.
-
Select your car from the garage. You can buy new cars or upgrade your existing cars using cash or RP (respect points) earned from winning races.
-
Select your race type: 1/4 mile or 1/2 mile. You can also choose to race against an AI opponent or a real player online.
-
Start the race by pressing the gas pedal at the right time. You can also use nitrous oxide for a speed boost by tapping the N2O button on the screen.
-
Shift gears at the right time by tapping the up and down arrows on the screen. You can also adjust your gear ratios in the garage to optimize your acceleration and speed.
-
Cross the finish line before your opponent and win the race. You can also watch a replay of your race or share it with your friends.
-
-
That’s it! You have just completed a drag race. You can repeat these steps as many times as you want and enjoy the thrill of drag racing.
-
How to master Drag Racing Classic?
-
While playing Drag Racing Classic is easy, mastering it is not. It takes practice, skill, and strategy to become a drag racing champion. Here are some tips and tricks that can help you improve your performance and win more races:
-
-
Know your car. Different cars have different strengths and weaknesses, such as power, grip, weight, and nitrous capacity. You should choose a car that suits your style and preference, and learn how to drive it well.
-
Upgrade your car. Upgrading your car can make a big difference in your performance. You can upgrade your engine, turbo, intake, nitrous, weight, tires, and transmission using cash or RP. You can also customize your car’s appearance by changing its color, rims, decals, and license plate.
-
Tune your car. Tuning your car can give you an edge over your opponents. You can tune your car's gear ratios, final drive, tire pressure, nitrous timing, and launch control using the tuning tools in the garage. You can also test your tune on the dyno or the test track to see how it affects your speed and acceleration (see the gearing sketch after this list for a sense of how gear ratios trade acceleration against top speed).
-
Join a team. Joining a team can help you learn from other players, share tunes and strategies, and compete in team events. You can join an existing team or create your own team using the team feature in the game. You can also chat with your teammates and send them gifts using the team chat function.
-
-
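The gearing arithmetic below is a generic sketch of why gear ratios matter, not Drag Racing Classic's internal model; every number in it is an assumption chosen for illustration:

```python
# Road speed at a given engine RPM depends on gear ratio, final drive,
# and tire circumference: shorter (higher-numbered) ratios accelerate
# harder, taller ratios allow a higher top speed before the rev limiter.
rpm = 8000                    # assumed shift point
final_drive = 3.7             # assumed final-drive ratio
tire_circumference_m = 1.9    # assumed rolling circumference in meters
gear_ratios = [3.2, 2.1, 1.6, 1.3, 1.0, 0.85]   # hypothetical 6-speed gearbox

for gear, ratio in enumerate(gear_ratios, start=1):
    wheel_rpm = rpm / (ratio * final_drive)
    speed_kmh = wheel_rpm * tire_circumference_m * 60 / 1000
    print(f"Gear {gear}: about {speed_kmh:.0f} km/h at {rpm} rpm")
```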
Why play Drag Racing Classic?
-
Drag Racing Classic is more than just a game. It is a hobby, a passion, and a lifestyle for many people who love cars and racing. Here are some of the reasons why you should play Drag Racing Classic:
-
-
-
It is fun. Drag Racing Classic is a game that is easy to play but hard to master. It offers a lot of variety and challenge that keep you entertained and engaged. It is also a game that lets you express yourself and unleash your creativity through your car collection.
-
It is addictive. Drag Racing Classic is a game that keeps you coming back for more. It has a lot of features and modes that keep you hooked and motivated. It also has a competitive aspect that makes you want to improve your skills and rank higher on the leaderboards.
-
It is social. Drag Racing Classic is a game that connects you with other people who share your interest and passion for cars and racing. It has a friendly and supportive community that welcomes you and helps you grow as a player. It also has a team feature that lets you collaborate and compete with other players from around the world.
-
-
Conclusion
-
Drag Racing Classic is a game that offers you an exciting and realistic drag racing experience on your device. It lets you drive 50+ real licensed cars from the world’s hottest car manufacturers, race against other players online, and customize and tune your cars for 1/4 or 1/2 mile drag races. It also gives you tips and tricks on how to play, master, and enjoy the game. Whether you are a casual gamer or a hardcore racer, Drag Racing Classic is a game that you will love.
-
So what are you waiting for? Download Drag Racing Classic today and join the millions of players who are already having fun with this game. You will not regret it!
-
Frequently Asked Questions
-
Here are some of the most common questions that people ask about Drag Racing Classic:
-
-
How do I get more cash or RP in the game?
-
You can get more cash or RP by winning races, completing achievements, watching ads, or buying them with real money.
-
How do I unlock new cars or levels in the game?
-
You can unlock new cars or levels by earning enough RP or cash to buy them or by reaching certain milestones in the career mode.
-
How do I find other players to race with online?
-
You can find other players to race with online by using the online mode or the pro league mode in the game. You can also use the team feature to join or create a team of racers.
How do I use nitrous oxide in the game?
-
You can use nitrous oxide by tapping the N2O button on the screen. You can also adjust the nitrous timing in the tuning tools to activate it automatically at a certain RPM.
-
How do I share my races or tunes with other players?
-
You can share your races or tunes with other players by using the share button on the screen. You can also use the team chat function to send your races or tunes to your teammates.
-
-
I hope this article has helped you learn more about Drag Racing Classic and how to play, master, and enjoy it. If you have any questions or feedback, please feel free to leave a comment below. Happy racing!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Download MP3 Green Day Full Album for Free.md b/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Download MP3 Green Day Full Album for Free.md
deleted file mode 100644
index 5c497ee3b6803060aa91c3a0c65a20766df596b2..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Download MP3 Green Day Full Album for Free.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-
How to Download MP3 Green Day Full Album for Free
-
Green Day is one of the most influential and successful rock bands of all time. With over 75 million records sold worldwide, they have won multiple Grammy Awards, been inducted into the Rock and Roll Hall of Fame, and gained countless fans across generations.
If you are a fan of Green Day, you might want to download their music to your computer or mobile device, so you can enjoy it anytime, anywhere. But how can you do that without spending a fortune or breaking the law?
-
In this article, we will show you how to download MP3 Green Day full album for free from various sources. We will also explain the benefits of downloading MP3 files, as well as the legal and ethical issues of downloading music for free.
-
So, if you are ready to rock out with Green Day, read on!
-
The Best Free Music Download Sites for Green Day Albums
-
There are many websites that offer free music downloads, but not all of them are legal or safe. Some may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some may also violate the copyright laws and infringe on the rights of the artists and record labels.
-
To avoid these risks, you should only use reputable and reliable free music download sites that respect the artists and their work. Here are some of the best ones that we recommend for downloading MP3 Green Day full album for free.
-
SoundCloud
-
SoundCloud is a popular online platform that allows artists to upload their music and share it with the world. You can find millions of songs from various genres, including rock, pop, hip-hop, electronic, and more. You can also discover new music by browsing through curated playlists, charts, and recommendations.
-
-
To find and download Green Day songs on SoundCloud, you can use the search bar or the tags to look for their name or album title. You can also follow their official profile or fan pages to stay updated on their latest releases. Not all tracks are available for download, but some artists may allow you to download their songs for free or in exchange for your email address or social media follow.
-
To download a song from SoundCloud, look for a link marked either 'Buy' or 'Download' below the track. If there is no such link, it means that the song is not available for download. You can only download tracks individually, not whole albums.
-
Some of the pros of using SoundCloud are:

- You can access a large and diverse collection of music from various artists and genres
- You can support the artists directly by buying their music or following their social media accounts
- You can enjoy high-quality audio streaming and downloading

Some of the cons of using SoundCloud are:

- Not all songs are available for download, and some may require payment or registration
- You cannot download whole albums at once, only individual tracks
- You may encounter some ads or pop-ups while using the site
Last.fm
-
Last.fm is a music discovery and recommendation service that tracks what you listen to and suggests new music based on your preferences. You can also connect with other music fans and join groups, forums, and events related to your favorite artists and genres.
-
To find and download Green Day albums on Last.fm, you can use the search bar or the tags to look for their name or album title. You can also visit their official page or fan pages to see their biography, discography, photos, videos, and more. Some albums may have a link marked 'Free MP3' below the tracklist, which means that you can download them for free. You can also find free downloads from other artists that are similar to Green Day.
-
To download an album from Last.fm, click on the 'Free MP3' link and you will be redirected to a third-party site that hosts the download. You may need to create an account or enter your email address to access the download. You can download whole albums or individual tracks.
-
Some of the pros of using Last.fm are:

- You can discover new music that matches your taste and mood
- You can interact with other music lovers and join communities related to your favorite artists and genres
- You can get personalized recommendations and statistics based on your listening history

Some of the cons of using Last.fm are:

- Not all albums are available for download, and some may require payment or registration
- You may need to visit different sites to download different albums, which can be inconvenient or risky
- You may encounter some ads or pop-ups while using the site
NoiseTrade
-
NoiseTrade is a platform that connects artists and fans through free music downloads and tips. You can find thousands of songs and albums from various genres, including rock, pop, folk, country, and more. You can also browse through featured, new, and popular music, or search by artist name or genre.
-
To find and download Green Day albums on NoiseTrade, you can use the search bar or the tags to look for their name or album title. You can also visit their official page or fan pages to see their biography, discography, photos, videos, and more. Some albums may have a link marked 'Download Music' below the cover art, which means that you can download them for free. You can also find free downloads from other artists that are similar to Green Day.
-
To download an album from NoiseTrade, click on the 'Download Music' link and you will be asked to enter your email address and zip code. You will then receive a link to download the album in your inbox. You can download whole albums in MP3 format. You can also leave a tip for the artist if you like their music.
-
Some of the pros of using NoiseTrade are:

- You can access a wide range of music from independent and emerging artists
- You can support the artists directly by leaving tips or sharing their music with others
- You can enjoy high-quality audio downloads without any ads or pop-ups

Some of the cons of using NoiseTrade are:

- Not all albums are available for download, and some may be out of stock or unavailable in your region
- You need to provide your email address and zip code to access the downloads, which may compromise your privacy
- You may receive promotional emails from NoiseTrade or the artists after downloading their music
Jamendo Music
-
Jamendo Music is a platform that offers free music downloads from independent artists who want to share their music with the world. You can find over 600,000 tracks from various genres, including rock, pop, jazz, classical, and more. You can also explore curated playlists, charts, radios, and podcasts.
-
To find and download Green Day albums on Jamendo Music, you can use the search bar or the tags to look for their name or album title. You can also visit their official page or fan pages to see their biography, discography, photos, videos, and more. Some albums may have a link marked 'Download' below the cover art, which means that you can download them for free. You can also find free downloads from other artists that are similar to Green Day.
To download an album from Jamendo Music, click on the 'Download' link and you will be asked to choose a license type. You can either download the album for personal use only, or for commercial use with attribution. You can download whole albums or individual tracks in MP3 or OGG format.
-
Some of the pros of using Jamendo Music are:

- You can access a huge library of music from independent and talented artists
- You can use the music for personal or commercial purposes, as long as you respect the license terms
- You can enjoy high-quality audio downloads without any ads or pop-ups

Some of the cons of using Jamendo Music are:

- Not all albums are available for download, and some may require payment or registration
- You need to choose a license type and follow the rules for each download, which can be confusing or restrictive
- You may not find the latest or most popular Green Day albums on the site
Bandcamp
-
Bandcamp is a platform that allows artists to sell their music directly to their fans. You can find millions of songs and albums from various genres, including rock, pop, metal, punk, and more. You can also browse through featured, new, and best-selling music, or search by artist name or genre.
-
To find and download Green Day albums on Bandcamp, you can use the search bar or the tags to look for their name or album title. You can also visit their official page or fan pages to see their biography, discography, photos, videos, and more. Some albums may have a link marked 'Free Download' below the cover art, which means that you can download them for free. You can also find free downloads from other artists that are similar to Green Day.
-
To download an album from Bandcamp, click on the 'Free Download' link and you will be asked to enter your email address. You will then receive a link to download the album in your inbox. You can download whole albums or individual tracks in MP3, FLAC, ALAC, AAC, OGG, WAV, or AIFF format.
-
Some of the pros of using Bandcamp are:

- You can access a diverse and high-quality selection of music from independent and established artists
- You can support the artists directly by buying their music or merchandise
- You can enjoy flexible and lossless audio downloads without any ads or pop-ups

Some of the cons of using Bandcamp are:

- Not all albums are available for download, and some may require payment or registration
- You need to provide your email address to access the downloads, which may compromise your privacy
- You may not find the latest or most popular Green Day albums on the site
The Best MP3 Download Software for Green Day Albums
-
If you prefer to use a software program to download MP3 Green Day full album for free, you should be careful about which one you choose. Some software may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some may also violate the copyright laws and infringe on the rights of the artists and record labels.
-
To avoid these risks, you should only use reputable and reliable MP3 download software that respect the artists and their work. Here is one of the best ones that we recommend for downloading MP3 Green Day full album for free.
-
OKmusi MP3 Downloader
-
OKmusi MP3 Downloader is a free and easy-to-use software that allows you to download MP3 files from various sources, such as YouTube, SoundCloud, Spotify, etc. You can also convert video files to audio files with high quality.
-
To use OKmusi MP3 Downloader to download Green Day albums, you just need to follow these simple steps:

1. Download and install OKmusi MP3 Downloader on your computer.
2. Launch the software and enter the name of the Green Day album or song that you want to download in the search bar.
3. Choose the source that you want to download from (e.g., YouTube) and click on the 'Download' button.
4. Wait for the download to finish and enjoy your MP3 file.
Some of the pros of using OKmusi MP3 Downloader are:

- You can download MP3 files from various sources with one click
- You can convert video files to audio files with high quality
- You can enjoy fast and safe downloads without any ads or pop-ups

Some of the cons of using OKmusi MP3 Downloader are:

- You may not be able to download some songs or albums due to copyright restrictions
- You may need to update the software regularly to keep up with the changes in the sources
- You may not be able to download whole albums at once, only individual tracks
Comparison Table of MP3 Download Software Features
-
To help you choose the best MP3 download software for Green Day albums, we have created a comparison table that shows the features of OKmusi MP3 Downloader and other popular MP3 download software, such as MP3Juices, Free Music Archive, etc. You can see the table below:

| Software | Sources | Formats | Speed | Safety | Ads |
| --- | --- | --- | --- | --- | --- |
| OKmusi MP3 Downloader | YouTube, SoundCloud, Spotify, etc. | MP3, FLAC, ALAC, AAC, OGG, WAV, AIFF | Fast | Safe | No |
| MP3Juices | YouTube, SoundCloud, etc. | MP3 | Fast | Risky | Yes |
| Free Music Archive | Various artists and genres | MP3, OGG, FLAC | Medium | Safe | No |
| 4K YouTube to MP3 | YouTube, Vimeo, SoundCloud, etc. | MP3, M4A, OGG | Fast | Safe | No |
| Audacity | Any audio source on your computer | MP3, WAV, AIFF, FLAC, OGG, etc. | Medium | Safe | No |
Conclusion
-
In this article, we have shown you how to download MP3 Green Day full album for free from various sources. We have also explained the benefits of downloading MP3 files, as well as the legal and ethical issues of downloading music for free.
-
We hope that you have found this article helpful and informative. If you are a fan of Green Day, you should definitely try out the free music download sites and software that we have recommended. You will be able to enjoy their music anytime, anywhere, without spending a dime or breaking the law.
-
So, what are you waiting for? Go ahead and download your favorite Green Day albums for free and rock on!
-
FAQs
-
Here are some of the frequently asked questions about downloading MP3 Green Day full album for free:

- Q: Is it legal to download MP3 Green Day full album for free?
  A: It depends on the source and the license of the music. Some sources may offer free downloads legally and ethically, while others may not. You should always check the terms and conditions of each source before downloading any music. You should also respect the rights of the artists and record labels and avoid pirating or distributing their music without permission.
- Q: What are the advantages of downloading MP3 files over other formats?
  A: MP3 files are compressed audio files that can reduce the size of the original audio without losing much quality. This means that you can download more music in less time and space. MP3 files are also compatible with most devices and players, so you can play them easily and conveniently.
- Q: What are the disadvantages of downloading MP3 files over other formats?
  A: MP3 files are not lossless audio files, which means that they may lose some quality or fidelity compared to the original audio. This may affect the sound quality or clarity of the music. MP3 files are also not suitable for editing or mixing purposes, as they may introduce artifacts or noise.
- Q: How can I download MP3 Green Day full album for free from YouTube?
  A: You can use OKmusi MP3 Downloader or 4K YouTube to MP3 to download MP3 Green Day full album for free from YouTube. You just need to enter the name of the album or song that you want to download in the search bar and choose YouTube as the source. Then you can click on the 'Download' button and wait for the download to finish.
- Q: How can I download MP3 Green Day full album for free from Spotify?
  A: You can use OKmusi MP3 Downloader to download MP3 Green Day full album for free from Spotify. You just need to enter the name of the album or song that you want to download in the search bar and choose Spotify as the source. Then you can click on the 'Download' button and wait for the download to finish.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/Automata-And-Computability-Kozen-Homework-Solutions.md b/spaces/contluForse/HuggingGPT/Automata-And-Computability-Kozen-Homework-Solutions.md
deleted file mode 100644
index 295c187b751de53ad623a23da7758b0e6ce5c5cf..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/Automata-And-Computability-Kozen-Homework-Solutions.md
+++ /dev/null
@@ -1,78 +0,0 @@
-## Automata And Computability Kozen Homework Solutions
-
-
-
-
-
- 
-
-
-
-
-
-**CLICK HERE - [https://urluso.com/2txV3t](https://urluso.com/2txV3t)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# How to Find Solutions for Automata and Computability by Dexter C. Kozen
-
-
-
-Automata and Computability is a textbook by Dexter C. Kozen that covers the theory of computation, including finite automata, regular expressions, context-free grammars, Turing machines, computability, decidability, and complexity. It is a popular choice for undergraduate courses in computer science and related fields.
-
-
-
-However, finding solutions for the homework exercises in this book can be challenging, especially if you are stuck on a difficult problem or want to check your answers. Fortunately, there are some online resources that can help you with this task.
-
-
-
-One of them is Numerade, a website that provides video solutions for thousands of textbooks, including Automata and Computability. Numerade has a team of expert educators who explain the concepts and methods behind each problem in a clear and engaging way. You can watch the videos at your own pace, pause, rewind, or skip as needed. You can also ask questions and get feedback from other students and educators on the platform.
-
-
-
-To access the solutions for Automata and Computability by Numerade, you can visit this link: [https://www.numerade.com/books/automata-and-computability-undergraduate-texts-in-computer-science/](https://www.numerade.com/books/automata-and-computability-undergraduate-texts-in-computer-science/). You will see a list of chapters and homework sections from the book. You can click on any section to see the video solutions for each problem. You can also search for a specific problem by its number or keywords.
-
-
-
-Numerade offers a free trial for new users, so you can try it out without any risk. You can also upgrade to a premium subscription to get access to more features and benefits, such as unlimited video views, personalized study plans, live Q&A sessions, and more.
-
-
-
-If you are looking for a reliable and convenient way to find solutions for Automata and Computability by Dexter C. Kozen, Numerade is a great option to consider. It can help you learn the material better, improve your grades, and save time and effort.
-
-
-
-How do you write a good article quickly? There is no one-size-fits-all answer, but there are some general tips and tricks that can help you speed up your writing process and improve your quality. Here are some of them:
-
-
-
-- Know your audience. Before you start writing, think about who you are writing for and what they want to learn from your article. This will help you tailor your tone, style, language, and content to suit their needs and expectations.
-
-- Do your research. You don't have to spend hours digging through sources, but you should have a clear idea of what you are going to write about and what information you need to support your points. Use reliable and credible sources, such as books, journals, websites, or experts. Take notes and organize them in a logical way.
-
-- Create an outline. An outline is a roadmap that guides you through your article. It helps you structure your ideas, organize your information, and avoid going off-topic. An outline can be as simple or as detailed as you want, but it should include the main sections of your article: introduction, body, and conclusion.
-
-- Write fast. Once you have your outline, start writing without worrying too much about grammar, spelling, or style. Just focus on getting your thoughts on paper (or screen) as quickly as possible. Don't stop to edit or revise until you finish your first draft.
-
-- Edit and proofread. After you finish your first draft, take a break and then come back to it with fresh eyes. Read it aloud and check for errors, inconsistencies, gaps, or redundancies. Make sure your article flows well, has a clear purpose, and delivers value to your readers.
-
-
-
-By following these tips, you can write a good article quickly and efficiently. Remember that practice makes perfect, so keep writing and improving your skills every day.
-
- dfd1c89656
-
-
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/A Horse Walks into a Bar Download the Award-Winning Novel by David Grossman in PDF Format.md b/spaces/contluForse/HuggingGPT/assets/A Horse Walks into a Bar Download the Award-Winning Novel by David Grossman in PDF Format.md
deleted file mode 100644
index fa98aedf85756e7abdc0d60132bf4b00a995820c..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/A Horse Walks into a Bar Download the Award-Winning Novel by David Grossman in PDF Format.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
Cohen has translated a number of Hebrew language books into English, including those by Nir Baram, David Grossman, Amir Gutfreund, Yael Hedaya [he], Ronit Matalon, Rutu Modan, Dorit Rabinyan, Tom Segev and Nava Semel. She currently resides in Denver, Colorado.[5][6]
-
Pari-mutuel betting: A strangely worded statute prohibits minors from "knowingly making or attempting to make any wager on any horse race." I do not know how a minor could accidentally make a bet. Racetrack licensees may not knowingly permit anyone under 18, unless accompanied by a parent or guardian, into any pari-mutuel wagering area. Licensees are also prohibited from knowingly permitting any individual under 18 to place a wager. Missouri Revised Statutes §313.670.
The guy slams the phone down and storms upstairs into the bedroom, walks past his screaming wife, and rips open the wardrobe door. Sure enough, there is his brother, totally naked, cowering on the closet floor.
-
Hike, cycle or drive to the summit of Mount Constitution for expansive views of the San Juan archipelago. Climb the historic stone tower on the mountaintop for an even grander view. Enjoy the park's five lakes, where you can swim, kayak, stand up paddle or fish for rainbow trout. Explore Moran's 38 miles of hiking trails, or take a trail ride on your favorite bike or with your trusted horse. Stroll through a natural preserve to spot birds and wildlife.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Devil May Cry 5 Msvcp100 Dll Missing Error [REPACK].md b/spaces/contluForse/HuggingGPT/assets/Devil May Cry 5 Msvcp100 Dll Missing Error [REPACK].md
deleted file mode 100644
index a0699c1b08c28b7f2bf297c45fda35be13013090..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Devil May Cry 5 Msvcp100 Dll Missing Error [REPACK].md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
-7 Jul 2021 - ... bugs: the gamepad not working, crashes to desktop, black screen problems, and the game failing to start because the TOM dll file is missing. How to fix the TOM dll error: download and install the file for Windows.
-What to do if the game won't run with a TOM dll error?
-Go to the root folder of the game ...
-TOM dll - download TOM dll for free.
-TOM dll - download free from the official site.
-Download TOM dll - where to download TOM dll.
-Download TOM dll.
-TOM dll error.
-What do I do if the game does not start with an error TOM dll?
-Go to the root folder of the game.
-Then open the Bin folder and open the folder with the installed game. 8a78ff9644
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Driverprintercanonf158200zip.md b/spaces/contluForse/HuggingGPT/assets/Driverprintercanonf158200zip.md
deleted file mode 100644
index e23433de5f2b9990cc0f8f376a65761ab36b9315..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Driverprintercanonf158200zip.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/crashedice/signify/signify/gan/models/base_model.py b/spaces/crashedice/signify/signify/gan/models/base_model.py
deleted file mode 100644
index af4bdd2bdb11eeb861dfc85fd89f54efcdf6d738..0000000000000000000000000000000000000000
--- a/spaces/crashedice/signify/signify/gan/models/base_model.py
+++ /dev/null
@@ -1,231 +0,0 @@
-import os
-import torch
-from collections import OrderedDict
-from abc import ABC, abstractmethod
-from signify.gan.models import networks
-
-
-class BaseModel(ABC):
- """This class is an abstract base class (ABC) for models.
- To create a subclass, you need to implement the following five functions:
- -- <__init__>: initialize the class; first call BaseModel.__init__(self, opt).
-    -- <set_input>: unpack data from dataset and apply preprocessing.
-    -- <forward>: produce intermediate results.
-    -- <optimize_parameters>: calculate losses, gradients, and update network weights.
-    -- <modify_commandline_options>: (optionally) add model-specific options and set default options.
- """
-
- def __init__(self, opt):
- """Initialize the BaseModel class.
-
- Parameters:
- opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
-
- When creating your custom class, you need to implement your own initialization.
-        In this function, you should first call <BaseModel.__init__(self, opt)>
- Then, you need to define four lists:
- -- self.loss_names (str list): specify the training losses that you want to plot and save.
- -- self.model_names (str list): define networks used in our training.
- -- self.visual_names (str list): specify the images that you want to display and save.
- -- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example.
- """
- self.opt = opt
- self.gpu_ids = opt.gpu_ids
- self.isTrain = opt.isTrain
- self.device = torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') # get device name: CPU or GPU
- self.save_dir = os.path.join(opt.checkpoints_dir, opt.name) # save all the checkpoints to save_dir
- if opt.preprocess != 'scale_width': # with [scale_width], input images might have different sizes, which hurts the performance of cudnn.benchmark.
- torch.backends.cudnn.benchmark = True
- self.loss_names = []
- self.model_names = []
- self.visual_names = []
- self.optimizers = []
- self.image_paths = []
- self.metric = 0 # used for learning rate policy 'plateau'
-
- @staticmethod
- def modify_commandline_options(parser, is_train):
- """Add new model-specific options, and rewrite default values for existing options.
-
- Parameters:
- parser -- original option parser
- is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
-
- Returns:
- the modified parser.
- """
- return parser
-
- @abstractmethod
- def set_input(self, input):
- """Unpack input data from the dataloader and perform necessary pre-processing steps.
-
- Parameters:
- input (dict): includes the data itself and its metadata information.
- """
- pass
-
- @abstractmethod
- def forward(self):
- """Run forward pass; called by both functions and ."""
- pass
-
- @abstractmethod
- def optimize_parameters(self):
- """Calculate losses, gradients, and update network weights; called in every training iteration"""
- pass
-
- def setup(self, opt):
- """Load and print networks; create schedulers
-
- Parameters:
- opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
- """
- if self.isTrain:
- self.schedulers = [networks.get_scheduler(optimizer, opt) for optimizer in self.optimizers]
- if not self.isTrain or opt.continue_train:
- load_suffix = 'iter_%d' % opt.load_iter if opt.load_iter > 0 else opt.epoch
- self.load_networks(load_suffix)
- self.print_networks(opt.verbose)
-
- def eval(self):
- """Make models eval mode during test time"""
- for name in self.model_names:
- if isinstance(name, str):
- net = getattr(self, 'net' + name)
- net.eval()
-
- def test(self):
- """Forward function used in test time.
-
-        This function wraps <forward> function in no_grad() so we don't save intermediate steps for backprop
-        It also calls <compute_visuals> to produce additional visualization results
- """
- with torch.no_grad():
- self.forward()
- self.compute_visuals()
-
- def compute_visuals(self):
- """Calculate additional output images for visdom and HTML visualization"""
- pass
-
- def get_image_paths(self):
- """ Return image paths that are used to load current data"""
- return self.image_paths
-
- def update_learning_rate(self):
- """Update learning rates for all the networks; called at the end of every epoch"""
- old_lr = self.optimizers[0].param_groups[0]['lr']
- for scheduler in self.schedulers:
- if self.opt.lr_policy == 'plateau':
- scheduler.step(self.metric)
- else:
- scheduler.step()
-
- lr = self.optimizers[0].param_groups[0]['lr']
- print('learning rate %.7f -> %.7f' % (old_lr, lr))
-
- def get_current_visuals(self):
- """Return visualization images. train.py will display these images with visdom, and save the images to a HTML"""
- visual_ret = OrderedDict()
- for name in self.visual_names:
- if isinstance(name, str):
- visual_ret[name] = getattr(self, name)
- return visual_ret
-
- def get_current_losses(self):
- """Return traning losses / errors. train.py will print out these errors on console, and save them to a file"""
- errors_ret = OrderedDict()
- for name in self.loss_names:
- if isinstance(name, str):
- errors_ret[name] = float(getattr(self, 'loss_' + name)) # float(...) works for both scalar tensor and float number
- return errors_ret
-
- def save_networks(self, epoch):
- """Save all the networks to the disk.
-
- Parameters:
- epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
- """
- for name in self.model_names:
- if isinstance(name, str):
- save_filename = '%s_net_%s.pth' % (epoch, name)
- save_path = os.path.join(self.save_dir, save_filename)
- net = getattr(self, 'net' + name)
-
- if len(self.gpu_ids) > 0 and torch.cuda.is_available():
- torch.save(net.module.cpu().state_dict(), save_path)
- net.cuda(self.gpu_ids[0])
- else:
- torch.save(net.cpu().state_dict(), save_path)
-
- def __patch_instance_norm_state_dict(self, state_dict, module, keys, i=0):
- """Fix InstanceNorm checkpoints incompatibility (prior to 0.4)"""
- key = keys[i]
- if i + 1 == len(keys): # at the end, pointing to a parameter/buffer
- if module.__class__.__name__.startswith('InstanceNorm') and \
- (key == 'running_mean' or key == 'running_var'):
- if getattr(module, key) is None:
- state_dict.pop('.'.join(keys))
- if module.__class__.__name__.startswith('InstanceNorm') and \
- (key == 'num_batches_tracked'):
- state_dict.pop('.'.join(keys))
- else:
- self.__patch_instance_norm_state_dict(state_dict, getattr(module, key), keys, i + 1)
-
- def load_networks(self, epoch):
- """Load all the networks from the disk.
-
- Parameters:
- epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
- """
- for name in self.model_names:
- if isinstance(name, str):
- load_filename = '%s_net_%s.pth' % (epoch, name)
- load_path = os.path.join(self.save_dir, load_filename)
- net = getattr(self, 'net' + name)
- if isinstance(net, torch.nn.DataParallel):
- net = net.module
- print('loading the model from %s' % load_path)
- # if you are using PyTorch newer than 0.4 (e.g., built from
- # GitHub source), you can remove str() on self.device
- state_dict = torch.load(load_path, map_location=str(self.device))
- if hasattr(state_dict, '_metadata'):
- del state_dict._metadata
-
- # patch InstanceNorm checkpoints prior to 0.4
- for key in list(state_dict.keys()): # need to copy keys here because we mutate in loop
- self.__patch_instance_norm_state_dict(state_dict, net, key.split('.'))
- # net.load_state_dict(state_dict, strict=False)
- net.load_state_dict(state_dict)
-
- def print_networks(self, verbose):
- """Print the total number of parameters in the network and (if verbose) network architecture
-
- Parameters:
- verbose (bool) -- if verbose: print the network architecture
- """
- print('---------- Networks initialized -------------')
- for name in self.model_names:
- if isinstance(name, str):
- net = getattr(self, 'net' + name)
- num_params = 0
- for param in net.parameters():
- num_params += param.numel()
- if verbose:
- print(net)
- print('[Network %s] Total number of parameters : %.3f M' % (name, num_params / 1e6))
- print('-----------------------------------------------')
-
- def set_requires_grad(self, nets, requires_grad=False):
- """Set requies_grad=Fasle for all the networks to avoid unnecessary computations
- Parameters:
- nets (network list) -- a list of networks
- requires_grad (bool) -- whether the networks require gradients or not
- """
- if not isinstance(nets, list):
- nets = [nets]
- for net in nets:
- if net is not None:
- for param in net.parameters():
- param.requires_grad = requires_grad
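The `BaseModel` class above is abstract: a concrete model must implement `set_input`, `forward`, and `optimize_parameters`, and register its losses, networks, and optimizers in the lists that `BaseModel.__init__` creates. Below is a minimal, hypothetical sketch of such a subclass; the stand-in generator, the `G_L1` loss, and the `opt.lr` option are illustrative assumptions rather than part of this repository, which builds its real networks through the `networks` module.

```python
import torch

from signify.gan.models.base_model import BaseModel  # assumed import path, mirroring this file's package


class PlainGeneratorModel(BaseModel):
    """Illustrative subclass: a single generator trained with an L1 loss."""

    def __init__(self, opt):
        BaseModel.__init__(self, opt)        # sets self.device, self.save_dir and the empty bookkeeping lists
        self.loss_names = ['G_L1']           # reported via get_current_losses() as self.loss_G_L1
        self.model_names = ['G']             # saved/loaded as '<epoch>_net_G.pth' via self.netG
        self.visual_names = ['real_A', 'fake_B', 'real_B']
        self.netG = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1).to(self.device)  # stand-in generator
        if self.isTrain:
            self.criterionL1 = torch.nn.L1Loss()
            self.optimizer_G = torch.optim.Adam(self.netG.parameters(), lr=opt.lr)
            self.optimizers.append(self.optimizer_G)  # setup() builds schedulers from this list

    def set_input(self, input):
        self.real_A = input['A'].to(self.device)
        self.real_B = input['B'].to(self.device)
        self.image_paths = input.get('A_paths', [])

    def forward(self):
        self.fake_B = self.netG(self.real_A)

    def optimize_parameters(self):
        self.forward()
        self.loss_G_L1 = self.criterionL1(self.fake_B, self.real_B)
        self.optimizer_G.zero_grad()
        self.loss_G_L1.backward()
        self.optimizer_G.step()
```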
diff --git a/spaces/dachenchen/HiWantJoin/locale/extract_locale.py b/spaces/dachenchen/HiWantJoin/locale/extract_locale.py
deleted file mode 100644
index 32b0924bd6dffe150cb3e481ddadef836b91b83c..0000000000000000000000000000000000000000
--- a/spaces/dachenchen/HiWantJoin/locale/extract_locale.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import os
-import json
-import re
-
-# Define regular expression patterns
-pattern = r'i18n\((\"{3}.*?\"{3}|\".*?\")\)'
-
-# Load the .py file
-with open('ChuanhuChatbot.py', 'r', encoding='utf-8') as f:
- contents = f.read()
-
-# Load the .py files in the modules folder
-for filename in os.listdir("modules"):
- if filename.endswith(".py"):
- with open(os.path.join("modules", filename), "r", encoding="utf-8") as f:
- contents += f.read()
-
-# Matching with regular expressions
-matches = re.findall(pattern, contents, re.DOTALL)
-
-# Convert to key/value pairs
-data = {match.strip('()"'): '' for match in matches}
-
-# Save as a JSON file
-with open('labels.json', 'w', encoding='utf-8') as f:
- json.dump(data, f, ensure_ascii=False, indent=4)
\ No newline at end of file
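The script above scans the repository's Python sources for `i18n(...)` calls and dumps the string literals into `labels.json`. A self-contained sketch of what that extraction does, run against a made-up source snippet rather than the real files:

```python
import json
import re

pattern = r'i18n\((\"{3}.*?\"{3}|\".*?\")\)'

# Hypothetical source text containing the kind of i18n() calls the script looks for.
sample = 'gr.Button(i18n("Submit"))\nstatus = i18n("""Loading model...""")'

matches = re.findall(pattern, sample, re.DOTALL)
labels = {match.strip('()"'): '' for match in matches}

# Yields the keys "Submit" and "Loading model..." with empty translation values.
print(json.dumps(labels, ensure_ascii=False, indent=4))
```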
diff --git a/spaces/daveckw/custom-chatgpt/my_functions/move_file.py b/spaces/daveckw/custom-chatgpt/my_functions/move_file.py
deleted file mode 100644
index 50b6caeb14c7e108a632e6aaae16caec2532030b..0000000000000000000000000000000000000000
--- a/spaces/daveckw/custom-chatgpt/my_functions/move_file.py
+++ /dev/null
@@ -1,5 +0,0 @@
-import shutil
-
-def move_file(src_path, dest_folder):
- # Move the file from the source path to the destination folder
- shutil.move(src_path, dest_folder)
diff --git a/spaces/dawood/Kanye-AI/modules/losses.py b/spaces/dawood/Kanye-AI/modules/losses.py
deleted file mode 100644
index cd21799eccde350c3aac0bdd661baf96ed220147..0000000000000000000000000000000000000000
--- a/spaces/dawood/Kanye-AI/modules/losses.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import modules.commons as commons
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
- #print(logs_p)
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
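The losses above follow the least-squares GAN convention: the discriminator is pushed toward 1 on real outputs and 0 on generated ones, while `feature_loss` matches intermediate discriminator feature maps. A quick sketch with dummy tensors; the shapes are arbitrary, and the `modules.losses` import assumes the repository layout of this file:

```python
import torch

from modules.losses import discriminator_loss, feature_loss, generator_loss

# One discriminator scale, batch of 4: "real" scores near 1, "fake" scores near 0.
disc_real = [0.8 + 0.2 * torch.rand(4, 1)]
disc_fake = [0.2 * torch.rand(4, 1)]

d_loss, real_losses, fake_losses = discriminator_loss(disc_real, disc_fake)
g_loss, gen_losses = generator_loss(disc_fake)

# Feature matching: a list of per-discriminator lists of layer feature maps.
fmap_real = [[torch.randn(4, 8, 16), torch.randn(4, 8, 8)]]
fmap_fake = [[torch.randn(4, 8, 16), torch.randn(4, 8, 8)]]
fm_loss = feature_loss(fmap_real, fmap_fake)

print(d_loss.item(), g_loss.item(), fm_loss.item())
```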
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/FitsImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/FitsImagePlugin.py
deleted file mode 100644
index 1359aeb1282ee78e38f40fc25b4a50b621db4043..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/FitsImagePlugin.py
+++ /dev/null
@@ -1,73 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# FITS file handling
-#
-# Copyright (c) 1998-2003 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import math
-
-from . import Image, ImageFile
-
-
-def _accept(prefix):
- return prefix[:6] == b"SIMPLE"
-
-
-class FitsImageFile(ImageFile.ImageFile):
- format = "FITS"
- format_description = "FITS"
-
- def _open(self):
- headers = {}
- while True:
- header = self.fp.read(80)
- if not header:
- msg = "Truncated FITS file"
- raise OSError(msg)
- keyword = header[:8].strip()
- if keyword == b"END":
- break
- value = header[8:].split(b"/")[0].strip()
- if value.startswith(b"="):
- value = value[1:].strip()
- if not headers and (not _accept(keyword) or value != b"T"):
- msg = "Not a FITS file"
- raise SyntaxError(msg)
- headers[keyword] = value
-
- naxis = int(headers[b"NAXIS"])
- if naxis == 0:
- msg = "No image data"
- raise ValueError(msg)
- elif naxis == 1:
- self._size = 1, int(headers[b"NAXIS1"])
- else:
- self._size = int(headers[b"NAXIS1"]), int(headers[b"NAXIS2"])
-
- number_of_bits = int(headers[b"BITPIX"])
- if number_of_bits == 8:
- self.mode = "L"
- elif number_of_bits == 16:
- self.mode = "I"
- # rawmode = "I;16S"
- elif number_of_bits == 32:
- self.mode = "I"
- elif number_of_bits in (-32, -64):
- self.mode = "F"
- # rawmode = "F" if number_of_bits == -32 else "F;64F"
-
- offset = math.ceil(self.fp.tell() / 2880) * 2880
- self.tile = [("raw", (0, 0) + self.size, offset, (self.mode, 0, -1))]
-
-
-# --------------------------------------------------------------------
-# Registry
-
-Image.register_open(FitsImageFile.format, FitsImageFile, _accept)
-
-Image.register_extensions(FitsImageFile.format, [".fit", ".fits"])
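For orientation, `_open` above reads the primary header as a sequence of fixed 80-byte cards, stops at the `END` card, and then expects pixel data to start on the next 2880-byte boundary. A small sketch of the card format it parses; the `card` helper and the header values are made up for illustration:

```python
def card(keyword, value=""):
    """Build one 80-byte FITS-style header card: 8-char keyword, then '= value'."""
    text = f"{keyword:<8}" + (f"= {value}" if value else "")
    return text.ljust(80).encode("ascii")

header = (
    card("SIMPLE", "T")    # _accept() only requires the file to start with b"SIMPLE"
    + card("BITPIX", "8")  # 8 bits per pixel maps to mode "L" in the plugin
    + card("NAXIS", "2")   # two axes follow
    + card("NAXIS1", "4")  # width
    + card("NAXIS2", "3")  # height
    + card("END")          # terminates the header loop in _open()
)

# The plugin splits each card into keyword = header[:8] and the value before any
# '/' comment, then rounds the data offset up to a multiple of 2880 bytes.
print(len(header), header[:20])
```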
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-f0e43e7d.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-f0e43e7d.css
deleted file mode 100644
index fb320f5e9afc1570c36e34f44865052ff83acf86..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-f0e43e7d.css
+++ /dev/null
@@ -1 +0,0 @@
-.base-image.svelte-m3v3vb.svelte-m3v3vb{display:block;width:100%;height:auto}.container.svelte-m3v3vb.svelte-m3v3vb{display:flex;position:relative;flex-direction:column;justify-content:center;align-items:center;width:var(--size-full);height:var(--size-full)}.image-container.svelte-m3v3vb.svelte-m3v3vb{position:relative;top:0;left:0;flex-grow:1;width:100%;overflow:hidden}.fit-height.svelte-m3v3vb.svelte-m3v3vb{position:absolute;top:0;left:0;width:100%;height:100%;object-fit:contain}.mask.svelte-m3v3vb.svelte-m3v3vb{opacity:.85;transition:all .2s ease-in-out}.image-container.svelte-m3v3vb:hover .mask.svelte-m3v3vb{opacity:.3}.mask.active.svelte-m3v3vb.svelte-m3v3vb{opacity:1}.mask.inactive.svelte-m3v3vb.svelte-m3v3vb{opacity:0}.legend.svelte-m3v3vb.svelte-m3v3vb{display:flex;flex-direction:row;flex-wrap:wrap;align-content:center;justify-content:center;align-items:center;gap:var(--spacing-sm);padding:var(--spacing-sm)}.legend-item.svelte-m3v3vb.svelte-m3v3vb{display:flex;flex-direction:row;align-items:center;cursor:pointer;border-radius:var(--radius-sm);padding:var(--spacing-sm)}
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_async/http11.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_async/http11.py
deleted file mode 100644
index 7ad3664205b7371ec63013953e8ec9fafad0cf60..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_async/http11.py
+++ /dev/null
@@ -1,331 +0,0 @@
-import enum
-import logging
-import time
-from types import TracebackType
-from typing import (
- AsyncIterable,
- AsyncIterator,
- List,
- Optional,
- Tuple,
- Type,
- Union,
- cast,
-)
-
-import h11
-
-from .._backends.base import AsyncNetworkStream
-from .._exceptions import (
- ConnectionNotAvailable,
- LocalProtocolError,
- RemoteProtocolError,
- map_exceptions,
-)
-from .._models import Origin, Request, Response
-from .._synchronization import AsyncLock, AsyncShieldCancellation
-from .._trace import Trace
-from .interfaces import AsyncConnectionInterface
-
-logger = logging.getLogger("httpcore.http11")
-
-
-# A subset of `h11.Event` types supported by `_send_event`
-H11SendEvent = Union[
- h11.Request,
- h11.Data,
- h11.EndOfMessage,
-]
-
-
-class HTTPConnectionState(enum.IntEnum):
- NEW = 0
- ACTIVE = 1
- IDLE = 2
- CLOSED = 3
-
-
-class AsyncHTTP11Connection(AsyncConnectionInterface):
- READ_NUM_BYTES = 64 * 1024
- MAX_INCOMPLETE_EVENT_SIZE = 100 * 1024
-
- def __init__(
- self,
- origin: Origin,
- stream: AsyncNetworkStream,
- keepalive_expiry: Optional[float] = None,
- ) -> None:
- self._origin = origin
- self._network_stream = stream
- self._keepalive_expiry: Optional[float] = keepalive_expiry
- self._expire_at: Optional[float] = None
- self._state = HTTPConnectionState.NEW
- self._state_lock = AsyncLock()
- self._request_count = 0
- self._h11_state = h11.Connection(
- our_role=h11.CLIENT,
- max_incomplete_event_size=self.MAX_INCOMPLETE_EVENT_SIZE,
- )
-
- async def handle_async_request(self, request: Request) -> Response:
- if not self.can_handle_request(request.url.origin):
- raise RuntimeError(
- f"Attempted to send request to {request.url.origin} on connection "
- f"to {self._origin}"
- )
-
- async with self._state_lock:
- if self._state in (HTTPConnectionState.NEW, HTTPConnectionState.IDLE):
- self._request_count += 1
- self._state = HTTPConnectionState.ACTIVE
- self._expire_at = None
- else:
- raise ConnectionNotAvailable()
-
- try:
- kwargs = {"request": request}
- async with Trace("send_request_headers", logger, request, kwargs) as trace:
- await self._send_request_headers(**kwargs)
- async with Trace("send_request_body", logger, request, kwargs) as trace:
- await self._send_request_body(**kwargs)
- async with Trace(
- "receive_response_headers", logger, request, kwargs
- ) as trace:
- (
- http_version,
- status,
- reason_phrase,
- headers,
- ) = await self._receive_response_headers(**kwargs)
- trace.return_value = (
- http_version,
- status,
- reason_phrase,
- headers,
- )
-
- return Response(
- status=status,
- headers=headers,
- content=HTTP11ConnectionByteStream(self, request),
- extensions={
- "http_version": http_version,
- "reason_phrase": reason_phrase,
- "network_stream": self._network_stream,
- },
- )
- except BaseException as exc:
- with AsyncShieldCancellation():
- async with Trace("response_closed", logger, request) as trace:
- await self._response_closed()
- raise exc
-
- # Sending the request...
-
- async def _send_request_headers(self, request: Request) -> None:
- timeouts = request.extensions.get("timeout", {})
- timeout = timeouts.get("write", None)
-
- with map_exceptions({h11.LocalProtocolError: LocalProtocolError}):
- event = h11.Request(
- method=request.method,
- target=request.url.target,
- headers=request.headers,
- )
- await self._send_event(event, timeout=timeout)
-
- async def _send_request_body(self, request: Request) -> None:
- timeouts = request.extensions.get("timeout", {})
- timeout = timeouts.get("write", None)
-
- assert isinstance(request.stream, AsyncIterable)
- async for chunk in request.stream:
- event = h11.Data(data=chunk)
- await self._send_event(event, timeout=timeout)
-
- await self._send_event(h11.EndOfMessage(), timeout=timeout)
-
- async def _send_event(
- self, event: h11.Event, timeout: Optional[float] = None
- ) -> None:
- bytes_to_send = self._h11_state.send(event)
- if bytes_to_send is not None:
- await self._network_stream.write(bytes_to_send, timeout=timeout)
-
- # Receiving the response...
-
- async def _receive_response_headers(
- self, request: Request
- ) -> Tuple[bytes, int, bytes, List[Tuple[bytes, bytes]]]:
- timeouts = request.extensions.get("timeout", {})
- timeout = timeouts.get("read", None)
-
- while True:
- event = await self._receive_event(timeout=timeout)
- if isinstance(event, h11.Response):
- break
- if (
- isinstance(event, h11.InformationalResponse)
- and event.status_code == 101
- ):
- break
-
- http_version = b"HTTP/" + event.http_version
-
- # h11 version 0.11+ supports a `raw_items` interface to get the
- # raw header casing, rather than the enforced lowercase headers.
- headers = event.headers.raw_items()
-
- return http_version, event.status_code, event.reason, headers
-
- async def _receive_response_body(self, request: Request) -> AsyncIterator[bytes]:
- timeouts = request.extensions.get("timeout", {})
- timeout = timeouts.get("read", None)
-
- while True:
- event = await self._receive_event(timeout=timeout)
- if isinstance(event, h11.Data):
- yield bytes(event.data)
- elif isinstance(event, (h11.EndOfMessage, h11.PAUSED)):
- break
-
- async def _receive_event(
- self, timeout: Optional[float] = None
- ) -> Union[h11.Event, Type[h11.PAUSED]]:
- while True:
- with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):
- event = self._h11_state.next_event()
-
- if event is h11.NEED_DATA:
- data = await self._network_stream.read(
- self.READ_NUM_BYTES, timeout=timeout
- )
-
- # If we feed this case through h11 we'll raise an exception like:
- #
- # httpcore.RemoteProtocolError: can't handle event type
- # ConnectionClosed when role=SERVER and state=SEND_RESPONSE
- #
- # Which is accurate, but not very informative from an end-user
- # perspective. Instead we handle this case distinctly and treat
- # it as a ConnectError.
- if data == b"" and self._h11_state.their_state == h11.SEND_RESPONSE:
- msg = "Server disconnected without sending a response."
- raise RemoteProtocolError(msg)
-
- self._h11_state.receive_data(data)
- else:
-                # mypy fails to narrow the type in the if statement above
- return cast(Union[h11.Event, Type[h11.PAUSED]], event)
-
- async def _response_closed(self) -> None:
- async with self._state_lock:
- if (
- self._h11_state.our_state is h11.DONE
- and self._h11_state.their_state is h11.DONE
- ):
- self._state = HTTPConnectionState.IDLE
- self._h11_state.start_next_cycle()
- if self._keepalive_expiry is not None:
- now = time.monotonic()
- self._expire_at = now + self._keepalive_expiry
- else:
- await self.aclose()
-
- # Once the connection is no longer required...
-
- async def aclose(self) -> None:
- # Note that this method unilaterally closes the connection, and does
- # not have any kind of locking in place around it.
- self._state = HTTPConnectionState.CLOSED
- await self._network_stream.aclose()
-
- # The AsyncConnectionInterface methods provide information about the state of
- # the connection, allowing for a connection pooling implementation to
- # determine when to reuse and when to close the connection...
-
- def can_handle_request(self, origin: Origin) -> bool:
- return origin == self._origin
-
- def is_available(self) -> bool:
- # Note that HTTP/1.1 connections in the "NEW" state are not treated as
- # being "available". The control flow which created the connection will
- # be able to send an outgoing request, but the connection will not be
- # acquired from the connection pool for any other request.
- return self._state == HTTPConnectionState.IDLE
-
- def has_expired(self) -> bool:
- now = time.monotonic()
- keepalive_expired = self._expire_at is not None and now > self._expire_at
-
- # If the HTTP connection is idle but the socket is readable, then the
- # only valid state is that the socket is about to return b"", indicating
- # a server-initiated disconnect.
- server_disconnected = (
- self._state == HTTPConnectionState.IDLE
- and self._network_stream.get_extra_info("is_readable")
- )
-
- return keepalive_expired or server_disconnected
-
- def is_idle(self) -> bool:
- return self._state == HTTPConnectionState.IDLE
-
- def is_closed(self) -> bool:
- return self._state == HTTPConnectionState.CLOSED
-
- def info(self) -> str:
- origin = str(self._origin)
- return (
- f"{origin!r}, HTTP/1.1, {self._state.name}, "
- f"Request Count: {self._request_count}"
- )
-
- def __repr__(self) -> str:
- class_name = self.__class__.__name__
- origin = str(self._origin)
- return (
- f"<{class_name} [{origin!r}, {self._state.name}, "
- f"Request Count: {self._request_count}]>"
- )
-
- # These context managers are not used in the standard flow, but are
- # useful for testing or working with connection instances directly.
-
- async def __aenter__(self) -> "AsyncHTTP11Connection":
- return self
-
- async def __aexit__(
- self,
- exc_type: Optional[Type[BaseException]] = None,
- exc_value: Optional[BaseException] = None,
- traceback: Optional[TracebackType] = None,
- ) -> None:
- await self.aclose()
-
-
-class HTTP11ConnectionByteStream:
- def __init__(self, connection: AsyncHTTP11Connection, request: Request) -> None:
- self._connection = connection
- self._request = request
- self._closed = False
-
- async def __aiter__(self) -> AsyncIterator[bytes]:
- kwargs = {"request": self._request}
- try:
- async with Trace("receive_response_body", logger, self._request, kwargs):
- async for chunk in self._connection._receive_response_body(**kwargs):
- yield chunk
- except BaseException as exc:
- # If we get an exception while streaming the response,
- # we want to close the response (and possibly the connection)
- # before raising that exception.
- with AsyncShieldCancellation():
- await self.aclose()
- raise exc
-
- async def aclose(self) -> None:
- if not self._closed:
- self._closed = True
- async with Trace("response_closed", logger, self._request):
- await self._connection._response_closed()
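`AsyncHTTP11Connection` above is essentially a driver around the `h11` state machine: it sends `h11.Request`, `h11.Data`, and `h11.EndOfMessage` events, then pulls events back out until the response body is complete. A minimal synchronous sketch of that same event flow, using `h11` directly over a plain socket against example.org (not part of httpcore):

```python
import socket

import h11

conn = h11.Connection(our_role=h11.CLIENT)
sock = socket.create_connection(("example.org", 80))

# Send the request head and an empty body, the same event sequence used above.
sock.sendall(conn.send(h11.Request(method="GET", target="/",
                                   headers=[("Host", "example.org"),
                                            ("Connection", "close")])))
sock.sendall(conn.send(h11.EndOfMessage()))

body = b""
while True:
    event = conn.next_event()
    if event is h11.NEED_DATA:                      # feed more bytes from the network
        conn.receive_data(sock.recv(64 * 1024))
    elif isinstance(event, h11.Response):
        print("status:", event.status_code)
    elif isinstance(event, h11.Data):
        body += bytes(event.data)
    elif isinstance(event, (h11.EndOfMessage, h11.ConnectionClosed)):
        break

sock.close()
print(len(body), "body bytes")
```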
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/test_open.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/test_open.py
deleted file mode 100644
index 83b737dc85cb674d3c76f4f2676856d98d65d264..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/test_open.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import unittest
-
-import importlib_resources as resources
-from . import data01
-from . import util
-
-
-class CommonBinaryTests(util.CommonTests, unittest.TestCase):
- def execute(self, package, path):
- target = resources.files(package).joinpath(path)
- with target.open('rb'):
- pass
-
-
-class CommonTextTests(util.CommonTests, unittest.TestCase):
- def execute(self, package, path):
- target = resources.files(package).joinpath(path)
- with target.open(encoding='utf-8'):
- pass
-
-
-class OpenTests:
- def test_open_binary(self):
- target = resources.files(self.data) / 'binary.file'
- with target.open('rb') as fp:
- result = fp.read()
- self.assertEqual(result, b'\x00\x01\x02\x03')
-
- def test_open_text_default_encoding(self):
- target = resources.files(self.data) / 'utf-8.file'
- with target.open(encoding='utf-8') as fp:
- result = fp.read()
- self.assertEqual(result, 'Hello, UTF-8 world!\n')
-
- def test_open_text_given_encoding(self):
- target = resources.files(self.data) / 'utf-16.file'
- with target.open(encoding='utf-16', errors='strict') as fp:
- result = fp.read()
- self.assertEqual(result, 'Hello, UTF-16 world!\n')
-
- def test_open_text_with_errors(self):
- """
- Raises UnicodeError without the 'errors' argument.
- """
- target = resources.files(self.data) / 'utf-16.file'
- with target.open(encoding='utf-8', errors='strict') as fp:
- self.assertRaises(UnicodeError, fp.read)
- with target.open(encoding='utf-8', errors='ignore') as fp:
- result = fp.read()
- self.assertEqual(
- result,
- 'H\x00e\x00l\x00l\x00o\x00,\x00 '
- '\x00U\x00T\x00F\x00-\x001\x006\x00 '
- '\x00w\x00o\x00r\x00l\x00d\x00!\x00\n\x00',
- )
-
- def test_open_binary_FileNotFoundError(self):
- target = resources.files(self.data) / 'does-not-exist'
- with self.assertRaises(FileNotFoundError):
- target.open('rb')
-
- def test_open_text_FileNotFoundError(self):
- target = resources.files(self.data) / 'does-not-exist'
- with self.assertRaises(FileNotFoundError):
- target.open(encoding='utf-8')
-
-
-class OpenDiskTests(OpenTests, unittest.TestCase):
- def setUp(self):
- self.data = data01
-
-
-class OpenDiskNamespaceTests(OpenTests, unittest.TestCase):
- def setUp(self):
- from . import namespacedata01
-
- self.data = namespacedata01
-
-
-class OpenZipTests(OpenTests, util.ZipSetup, unittest.TestCase):
- pass
-
-
-if __name__ == '__main__':
- unittest.main()
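The tests above exercise the `files()` / `Traversable` API; in application code the same pattern reads a resource bundled inside a package. A short usage sketch, where the package and file names are hypothetical:

```python
import importlib_resources as resources

# Locate a data file shipped inside an importable package.
resource = resources.files("mypackage").joinpath("config.json")

with resource.open(encoding="utf-8") as fp:   # text access, as in the tests above
    text = fp.read()

raw = resource.read_bytes()                   # binary access without an explicit open()
print(len(text), len(raw))
```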
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/update-zips.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/update-zips.py
deleted file mode 100644
index 231334aa7e38b46157234314129262657edeadee..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/update-zips.py
+++ /dev/null
@@ -1,53 +0,0 @@
-"""
-Generate the zip test data files.
-
-Run to build the tests/zipdataNN/ziptestdata.zip files from
-files in tests/dataNN.
-
-Replaces the file with the working copy, but does not commit anything
-to the source repo.
-"""
-
-import contextlib
-import os
-import pathlib
-import zipfile
-
-
-def main():
- """
- >>> from unittest import mock
- >>> monkeypatch = getfixture('monkeypatch')
- >>> monkeypatch.setattr(zipfile, 'ZipFile', mock.MagicMock())
- >>> print(); main() # print workaround for bpo-32509
-
- ...data01... -> ziptestdata/...
- ...
- ...data02... -> ziptestdata/...
- ...
- """
- suffixes = '01', '02'
- tuple(map(generate, suffixes))
-
-
-def generate(suffix):
- root = pathlib.Path(__file__).parent.relative_to(os.getcwd())
- zfpath = root / f'zipdata{suffix}/ziptestdata.zip'
- with zipfile.ZipFile(zfpath, 'w') as zf:
- for src, rel in walk(root / f'data{suffix}'):
- dst = 'ziptestdata' / pathlib.PurePosixPath(rel.as_posix())
- print(src, '->', dst)
- zf.write(src, dst)
-
-
-def walk(datapath):
- for dirpath, dirnames, filenames in os.walk(datapath):
- with contextlib.suppress(ValueError):
- dirnames.remove('__pycache__')
- for filename in filenames:
- res = pathlib.Path(dirpath) / filename
- rel = res.relative_to(datapath)
- yield res, rel
-
-
-__name__ == '__main__' and main()
diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/facerender/modules/util.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/facerender/modules/util.py
deleted file mode 100644
index 95e1a18cfbf7e849679429324dd6806400d4659f..0000000000000000000000000000000000000000
--- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/facerender/modules/util.py
+++ /dev/null
@@ -1,564 +0,0 @@
-from torch import nn
-
-import torch.nn.functional as F
-import torch
-
-from sad_talker.src.facerender.sync_batchnorm import SynchronizedBatchNorm2d as BatchNorm2d
-from sad_talker.src.facerender.sync_batchnorm import SynchronizedBatchNorm3d as BatchNorm3d
-
-import torch.nn.utils.spectral_norm as spectral_norm
-
-
-def kp2gaussian(kp, spatial_size, kp_variance):
- """
- Transform a keypoint into gaussian like representation
- """
- mean = kp['value']
-
- coordinate_grid = make_coordinate_grid(spatial_size, mean.type())
- number_of_leading_dimensions = len(mean.shape) - 1
- shape = (1,) * number_of_leading_dimensions + coordinate_grid.shape
- coordinate_grid = coordinate_grid.view(*shape)
- repeats = mean.shape[:number_of_leading_dimensions] + (1, 1, 1, 1)
- coordinate_grid = coordinate_grid.repeat(*repeats)
-
- # Preprocess kp shape
- shape = mean.shape[:number_of_leading_dimensions] + (1, 1, 1, 3)
- mean = mean.view(*shape)
-
- mean_sub = (coordinate_grid - mean)
-
- out = torch.exp(-0.5 * (mean_sub ** 2).sum(-1) / kp_variance)
-
- return out
-
-def make_coordinate_grid_2d(spatial_size, type):
- """
- Create a meshgrid [-1,1] x [-1,1] of given spatial_size.
- """
- h, w = spatial_size
- x = torch.arange(w).type(type)
- y = torch.arange(h).type(type)
-
- x = (2 * (x / (w - 1)) - 1)
- y = (2 * (y / (h - 1)) - 1)
-
- yy = y.view(-1, 1).repeat(1, w)
- xx = x.view(1, -1).repeat(h, 1)
-
- meshed = torch.cat([xx.unsqueeze_(2), yy.unsqueeze_(2)], 2)
-
- return meshed
-
-
-def make_coordinate_grid(spatial_size, type):
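-    """
-    Create a meshgrid [-1,1] x [-1,1] x [-1,1] of given spatial_size (d, h, w).
-    """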
- d, h, w = spatial_size
- x = torch.arange(w).type(type)
- y = torch.arange(h).type(type)
- z = torch.arange(d).type(type)
-
- x = (2 * (x / (w - 1)) - 1)
- y = (2 * (y / (h - 1)) - 1)
- z = (2 * (z / (d - 1)) - 1)
-
- yy = y.view(1, -1, 1).repeat(d, 1, w)
- xx = x.view(1, 1, -1).repeat(d, h, 1)
- zz = z.view(-1, 1, 1).repeat(1, h, w)
-
- meshed = torch.cat([xx.unsqueeze_(3), yy.unsqueeze_(3), zz.unsqueeze_(3)], 3)
-
- return meshed
-
-
-class ResBottleneck(nn.Module):
- def __init__(self, in_features, stride):
- super(ResBottleneck, self).__init__()
- self.conv1 = nn.Conv2d(in_channels=in_features, out_channels=in_features//4, kernel_size=1)
- self.conv2 = nn.Conv2d(in_channels=in_features//4, out_channels=in_features//4, kernel_size=3, padding=1, stride=stride)
- self.conv3 = nn.Conv2d(in_channels=in_features//4, out_channels=in_features, kernel_size=1)
- self.norm1 = BatchNorm2d(in_features//4, affine=True)
- self.norm2 = BatchNorm2d(in_features//4, affine=True)
- self.norm3 = BatchNorm2d(in_features, affine=True)
-
- self.stride = stride
- if self.stride != 1:
- self.skip = nn.Conv2d(in_channels=in_features, out_channels=in_features, kernel_size=1, stride=stride)
- self.norm4 = BatchNorm2d(in_features, affine=True)
-
- def forward(self, x):
- out = self.conv1(x)
- out = self.norm1(out)
- out = F.relu(out)
- out = self.conv2(out)
- out = self.norm2(out)
- out = F.relu(out)
- out = self.conv3(out)
- out = self.norm3(out)
- if self.stride != 1:
- x = self.skip(x)
- x = self.norm4(x)
- out += x
- out = F.relu(out)
- return out
-
-
-class ResBlock2d(nn.Module):
- """
- Res block, preserve spatial resolution.
- """
-
- def __init__(self, in_features, kernel_size, padding):
- super(ResBlock2d, self).__init__()
- self.conv1 = nn.Conv2d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size,
- padding=padding)
- self.conv2 = nn.Conv2d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size,
- padding=padding)
- self.norm1 = BatchNorm2d(in_features, affine=True)
- self.norm2 = BatchNorm2d(in_features, affine=True)
-
- def forward(self, x):
- out = self.norm1(x)
- out = F.relu(out)
- out = self.conv1(out)
- out = self.norm2(out)
- out = F.relu(out)
- out = self.conv2(out)
- out += x
- return out
-
-
-class ResBlock3d(nn.Module):
- """
- Res block, preserve spatial resolution.
- """
-
- def __init__(self, in_features, kernel_size, padding):
- super(ResBlock3d, self).__init__()
- self.conv1 = nn.Conv3d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size,
- padding=padding)
- self.conv2 = nn.Conv3d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size,
- padding=padding)
- self.norm1 = BatchNorm3d(in_features, affine=True)
- self.norm2 = BatchNorm3d(in_features, affine=True)
-
- def forward(self, x):
- out = self.norm1(x)
- out = F.relu(out)
- out = self.conv1(out)
- out = self.norm2(out)
- out = F.relu(out)
- out = self.conv2(out)
- out += x
- return out
-
-
-class UpBlock2d(nn.Module):
- """
- Upsampling block for use in decoder.
- """
-
- def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1):
- super(UpBlock2d, self).__init__()
-
- self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size,
- padding=padding, groups=groups)
- self.norm = BatchNorm2d(out_features, affine=True)
-
- def forward(self, x):
- out = F.interpolate(x, scale_factor=2)
- out = self.conv(out)
- out = self.norm(out)
- out = F.relu(out)
- return out
-
-class UpBlock3d(nn.Module):
- """
- Upsampling block for use in decoder.
- """
-
- def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1):
- super(UpBlock3d, self).__init__()
-
- self.conv = nn.Conv3d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size,
- padding=padding, groups=groups)
- self.norm = BatchNorm3d(out_features, affine=True)
-
- def forward(self, x):
- # out = F.interpolate(x, scale_factor=(1, 2, 2), mode='trilinear')
- out = F.interpolate(x, scale_factor=(1, 2, 2))
- out = self.conv(out)
- out = self.norm(out)
- out = F.relu(out)
- return out
-
-
-class DownBlock2d(nn.Module):
- """
- Downsampling block for use in encoder.
- """
-
- def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1):
- super(DownBlock2d, self).__init__()
- self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size,
- padding=padding, groups=groups)
- self.norm = BatchNorm2d(out_features, affine=True)
- self.pool = nn.AvgPool2d(kernel_size=(2, 2))
-
- def forward(self, x):
- out = self.conv(x)
- out = self.norm(out)
- out = F.relu(out)
- out = self.pool(out)
- return out
-
-
-class DownBlock3d(nn.Module):
- """
- Downsampling block for use in encoder.
- """
-
- def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1):
- super(DownBlock3d, self).__init__()
- '''
- self.conv = nn.Conv3d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size,
- padding=padding, groups=groups, stride=(1, 2, 2))
- '''
- self.conv = nn.Conv3d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size,
- padding=padding, groups=groups)
- self.norm = BatchNorm3d(out_features, affine=True)
- self.pool = nn.AvgPool3d(kernel_size=(1, 2, 2))
-
- def forward(self, x):
- out = self.conv(x)
- out = self.norm(out)
- out = F.relu(out)
- out = self.pool(out)
- return out
-
-
-class SameBlock2d(nn.Module):
- """
- Simple block, preserve spatial resolution.
- """
-
- def __init__(self, in_features, out_features, groups=1, kernel_size=3, padding=1, lrelu=False):
- super(SameBlock2d, self).__init__()
- self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features,
- kernel_size=kernel_size, padding=padding, groups=groups)
- self.norm = BatchNorm2d(out_features, affine=True)
- if lrelu:
- self.ac = nn.LeakyReLU()
- else:
- self.ac = nn.ReLU()
-
- def forward(self, x):
- out = self.conv(x)
- out = self.norm(out)
- out = self.ac(out)
- return out
-
-
-class Encoder(nn.Module):
- """
- Hourglass Encoder
- """
-
- def __init__(self, block_expansion, in_features, num_blocks=3, max_features=256):
- super(Encoder, self).__init__()
-
- down_blocks = []
- for i in range(num_blocks):
- down_blocks.append(DownBlock3d(in_features if i == 0 else min(max_features, block_expansion * (2 ** i)),
- min(max_features, block_expansion * (2 ** (i + 1))),
- kernel_size=3, padding=1))
- self.down_blocks = nn.ModuleList(down_blocks)
-
- def forward(self, x):
- outs = [x]
- for down_block in self.down_blocks:
- outs.append(down_block(outs[-1]))
- return outs
-
-
-class Decoder(nn.Module):
- """
- Hourglass Decoder
- """
-
- def __init__(self, block_expansion, in_features, num_blocks=3, max_features=256):
- super(Decoder, self).__init__()
-
- up_blocks = []
-
- for i in range(num_blocks)[::-1]:
- in_filters = (1 if i == num_blocks - 1 else 2) * min(max_features, block_expansion * (2 ** (i + 1)))
- out_filters = min(max_features, block_expansion * (2 ** i))
- up_blocks.append(UpBlock3d(in_filters, out_filters, kernel_size=3, padding=1))
-
- self.up_blocks = nn.ModuleList(up_blocks)
- # self.out_filters = block_expansion
- self.out_filters = block_expansion + in_features
-
- self.conv = nn.Conv3d(in_channels=self.out_filters, out_channels=self.out_filters, kernel_size=3, padding=1)
- self.norm = BatchNorm3d(self.out_filters, affine=True)
-
- def forward(self, x):
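- # x is the list of encoder feature maps; upsample, concatenate the matching skip connection at each stage, then finish with conv + norm + ReLU.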
- out = x.pop()
- # for up_block in self.up_blocks[:-1]:
- for up_block in self.up_blocks:
- out = up_block(out)
- skip = x.pop()
- out = torch.cat([out, skip], dim=1)
- # out = self.up_blocks[-1](out)
- out = self.conv(out)
- out = self.norm(out)
- out = F.relu(out)
- return out
-
-
-class Hourglass(nn.Module):
- """
- Hourglass architecture.
- """
-
- def __init__(self, block_expansion, in_features, num_blocks=3, max_features=256):
- super(Hourglass, self).__init__()
- self.encoder = Encoder(block_expansion, in_features, num_blocks, max_features)
- self.decoder = Decoder(block_expansion, in_features, num_blocks, max_features)
- self.out_filters = self.decoder.out_filters
-
- def forward(self, x):
- return self.decoder(self.encoder(x))
-
-
-class KPHourglass(nn.Module):
- """
- Hourglass architecture.
- """
-
- def __init__(self, block_expansion, in_features, reshape_features, reshape_depth, num_blocks=3, max_features=256):
- super(KPHourglass, self).__init__()
-
- self.down_blocks = nn.Sequential()
- for i in range(num_blocks):
- self.down_blocks.add_module('down'+ str(i), DownBlock2d(in_features if i == 0 else min(max_features, block_expansion * (2 ** i)),
- min(max_features, block_expansion * (2 ** (i + 1))),
- kernel_size=3, padding=1))
-
- in_filters = min(max_features, block_expansion * (2 ** num_blocks))
- self.conv = nn.Conv2d(in_channels=in_filters, out_channels=reshape_features, kernel_size=1)
-
- self.up_blocks = nn.Sequential()
- for i in range(num_blocks):
- in_filters = min(max_features, block_expansion * (2 ** (num_blocks - i)))
- out_filters = min(max_features, block_expansion * (2 ** (num_blocks - i - 1)))
- self.up_blocks.add_module('up'+ str(i), UpBlock3d(in_filters, out_filters, kernel_size=3, padding=1))
-
- self.reshape_depth = reshape_depth
- self.out_filters = out_filters
-
- def forward(self, x):
- out = self.down_blocks(x)
- out = self.conv(out)
- bs, c, h, w = out.shape
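- # Split the channel dimension into (c // reshape_depth, reshape_depth) to lift the 2D feature map into a 3D volume for the 3D up-blocks.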
- out = out.view(bs, c//self.reshape_depth, self.reshape_depth, h, w)
- out = self.up_blocks(out)
-
- return out
-
-
-
-class AntiAliasInterpolation2d(nn.Module):
- """
- Band-limited downsampling, for better preservation of the input signal.
- """
- def __init__(self, channels, scale):
- super(AntiAliasInterpolation2d, self).__init__()
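- # Derive the Gaussian sigma and kernel size from the downscale factor; forward() blurs with this kernel before subsampling.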
- sigma = (1 / scale - 1) / 2
- kernel_size = 2 * round(sigma * 4) + 1
- self.ka = kernel_size // 2
- self.kb = self.ka - 1 if kernel_size % 2 == 0 else self.ka
-
- kernel_size = [kernel_size, kernel_size]
- sigma = [sigma, sigma]
- # The gaussian kernel is the product of the
- # gaussian function of each dimension.
- kernel = 1
- meshgrids = torch.meshgrid(
- [
- torch.arange(size, dtype=torch.float32)
- for size in kernel_size
- ]
- )
- for size, std, mgrid in zip(kernel_size, sigma, meshgrids):
- mean = (size - 1) / 2
- kernel *= torch.exp(-(mgrid - mean) ** 2 / (2 * std ** 2))
-
- # Make sure sum of values in gaussian kernel equals 1.
- kernel = kernel / torch.sum(kernel)
- # Reshape to depthwise convolutional weight
- kernel = kernel.view(1, 1, *kernel.size())
- kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1))
-
- self.register_buffer('weight', kernel)
- self.groups = channels
- self.scale = scale
- inv_scale = 1 / scale
- self.int_inv_scale = int(inv_scale)
-
- def forward(self, input):
- if self.scale == 1.0:
- return input
-
- out = F.pad(input, (self.ka, self.kb, self.ka, self.kb))
- out = F.conv2d(out, weight=self.weight, groups=self.groups)
- out = out[:, :, ::self.int_inv_scale, ::self.int_inv_scale]
-
- return out
-
-
-class SPADE(nn.Module):
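- """
- Spatially-adaptive normalization: parameter-free instance norm, then a per-pixel scale (gamma) and shift (beta) predicted from the resized conditioning map.
- """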
- def __init__(self, norm_nc, label_nc):
- super().__init__()
-
- self.param_free_norm = nn.InstanceNorm2d(norm_nc, affine=False)
- nhidden = 128
-
- self.mlp_shared = nn.Sequential(
- nn.Conv2d(label_nc, nhidden, kernel_size=3, padding=1),
- nn.ReLU())
- self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1)
- self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1)
-
- def forward(self, x, segmap):
- normalized = self.param_free_norm(x)
- segmap = F.interpolate(segmap, size=x.size()[2:], mode='nearest')
- actv = self.mlp_shared(segmap)
- gamma = self.mlp_gamma(actv)
- beta = self.mlp_beta(actv)
- out = normalized * (1 + gamma) + beta
- return out
-
-
-class SPADEResnetBlock(nn.Module):
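- """
- Residual block with SPADE normalization before each conv; a learned 1x1 shortcut handles channel changes, and spectral norm is applied when requested via norm_G.
- """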
- def __init__(self, fin, fout, norm_G, label_nc, use_se=False, dilation=1):
- super().__init__()
- # Attributes
- self.learned_shortcut = (fin != fout)
- fmiddle = min(fin, fout)
- self.use_se = use_se
- # create conv layers
- self.conv_0 = nn.Conv2d(fin, fmiddle, kernel_size=3, padding=dilation, dilation=dilation)
- self.conv_1 = nn.Conv2d(fmiddle, fout, kernel_size=3, padding=dilation, dilation=dilation)
- if self.learned_shortcut:
- self.conv_s = nn.Conv2d(fin, fout, kernel_size=1, bias=False)
- # apply spectral norm if specified
- if 'spectral' in norm_G:
- self.conv_0 = spectral_norm(self.conv_0)
- self.conv_1 = spectral_norm(self.conv_1)
- if self.learned_shortcut:
- self.conv_s = spectral_norm(self.conv_s)
- # define normalization layers
- self.norm_0 = SPADE(fin, label_nc)
- self.norm_1 = SPADE(fmiddle, label_nc)
- if self.learned_shortcut:
- self.norm_s = SPADE(fin, label_nc)
-
- def forward(self, x, seg1):
- x_s = self.shortcut(x, seg1)
- dx = self.conv_0(self.actvn(self.norm_0(x, seg1)))
- dx = self.conv_1(self.actvn(self.norm_1(dx, seg1)))
- out = x_s + dx
- return out
-
- def shortcut(self, x, seg1):
- if self.learned_shortcut:
- x_s = self.conv_s(self.norm_s(x, seg1))
- else:
- x_s = x
- return x_s
-
- def actvn(self, x):
- return F.leaky_relu(x, 2e-1)
-
-class audio2image(nn.Module):
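- """
- Audio-driven generation wrapper: canonical keypoints are transformed by the video-estimated pose (source) and the audio-estimated pose (driving) before being passed to the generator.
- """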
- def __init__(self, generator, kp_extractor, he_estimator_video, he_estimator_audio, train_params):
- super().__init__()
- # Attributes
- self.generator = generator
- self.kp_extractor = kp_extractor
- self.he_estimator_video = he_estimator_video
- self.he_estimator_audio = he_estimator_audio
- self.train_params = train_params
-
- def headpose_pred_to_degree(self, pred):
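- # Soft-argmax over 66 pose bins: the expected bin index is mapped to degrees via idx * 3 - 99.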
- device = pred.device
- idx_tensor = [idx for idx in range(66)]
- idx_tensor = torch.FloatTensor(idx_tensor).to(device)
- pred = F.softmax(pred, dim=1)
- degree = torch.sum(pred*idx_tensor, 1) * 3 - 99
-
- return degree
-
- def get_rotation_matrix(self, yaw, pitch, roll):
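- # Convert degrees to radians (pi approximated as 3.14), build per-axis rotation matrices, and compose them as roll @ pitch @ yaw via einsum.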
- yaw = yaw / 180 * 3.14
- pitch = pitch / 180 * 3.14
- roll = roll / 180 * 3.14
-
- roll = roll.unsqueeze(1)
- pitch = pitch.unsqueeze(1)
- yaw = yaw.unsqueeze(1)
-
- roll_mat = torch.cat([torch.ones_like(roll), torch.zeros_like(roll), torch.zeros_like(roll),
- torch.zeros_like(roll), torch.cos(roll), -torch.sin(roll),
- torch.zeros_like(roll), torch.sin(roll), torch.cos(roll)], dim=1)
- roll_mat = roll_mat.view(roll_mat.shape[0], 3, 3)
-
- pitch_mat = torch.cat([torch.cos(pitch), torch.zeros_like(pitch), torch.sin(pitch),
- torch.zeros_like(pitch), torch.ones_like(pitch), torch.zeros_like(pitch),
- -torch.sin(pitch), torch.zeros_like(pitch), torch.cos(pitch)], dim=1)
- pitch_mat = pitch_mat.view(pitch_mat.shape[0], 3, 3)
-
- yaw_mat = torch.cat([torch.cos(yaw), -torch.sin(yaw), torch.zeros_like(yaw),
- torch.sin(yaw), torch.cos(yaw), torch.zeros_like(yaw),
- torch.zeros_like(yaw), torch.zeros_like(yaw), torch.ones_like(yaw)], dim=1)
- yaw_mat = yaw_mat.view(yaw_mat.shape[0], 3, 3)
-
- rot_mat = torch.einsum('bij,bjk,bkm->bim', roll_mat, pitch_mat, yaw_mat)
-
- return rot_mat
-
- def keypoint_transformation(self, kp_canonical, he):
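- # Apply the predicted head pose to the canonical keypoints: rotate, translate, then add the expression deviation.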
- kp = kp_canonical['value'] # (bs, k, 3)
- yaw, pitch, roll = he['yaw'], he['pitch'], he['roll']
- t, exp = he['t'], he['exp']
-
- yaw = self.headpose_pred_to_degree(yaw)
- pitch = self.headpose_pred_to_degree(pitch)
- roll = self.headpose_pred_to_degree(roll)
-
- rot_mat = self.get_rotation_matrix(yaw, pitch, roll) # (bs, 3, 3)
-
- # keypoint rotation
- kp_rotated = torch.einsum('bmp,bkp->bkm', rot_mat, kp)
-
- # keypoint translation
- t = t.unsqueeze_(1).repeat(1, kp.shape[1], 1)
- kp_t = kp_rotated + t
-
- # add expression deviation
- exp = exp.view(exp.shape[0], -1, 3)
- kp_transformed = kp_t + exp
-
- return {'value': kp_transformed}
-
- def forward(self, source_image, target_audio):
- pose_source = self.he_estimator_video(source_image)
- pose_generated = self.he_estimator_audio(target_audio)
- kp_canonical = self.kp_extractor(source_image)
- kp_source = self.keypoint_transformation(kp_canonical, pose_source)
- kp_transformed_generated = self.keypoint_transformation(kp_canonical, pose_generated)
- generated = self.generator(source_image, kp_source=kp_source, kp_driving=kp_transformed_generated)
- return generated
\ No newline at end of file
diff --git a/spaces/deepthiaj/Electro_oneAPI/app_f1.md b/spaces/deepthiaj/Electro_oneAPI/app_f1.md
deleted file mode 100644
index 30a331cf8f8979b0d70248b304ee2be38a9a6f95..0000000000000000000000000000000000000000
--- a/spaces/deepthiaj/Electro_oneAPI/app_f1.md
+++ /dev/null
@@ -1,266 +0,0 @@
-import streamlit as st
-import pandas as pd
-import pickle
-import xgboost as xgb
-import numpy as np
-import sklearn
-from sklearn.metrics import confusion_matrix, classification_report
-import seaborn as sns
-import matplotlib.pyplot as plt
-from io import StringIO
-from scipy import signal
-import daal4py as d4p
-import time
-from sklearn.model_selection import train_test_split
-
-st.title("Automated Diagnosis of Heart Disease from Electro-Cardiogram")
-st.write('This is a prototype for checking heart health condition. The performance of the model has been achieved using XGboost ML algorithm.')
-st.write('Please select the data and the model from the dropdown menu on the left panel to see the working of this prototype.')
-
-st.divider()
-
-enc_dat = pd.read_csv("PTB_ECGencoded_dat.csv")
-
-# Split the dataset into features (X) and target (y)
-X = enc_dat.iloc[:, :-1].values # Features (all columns except the last one)
-y = enc_dat.iloc[:, -1].values # Target (last column "diagnosis")
-# Map the existing class labels to the expected class values
-class_mapping = {0: 0, 1: 1, 3: 2, 4: 3, 6: 4, 7: 5}
-mapped_labels = np.array([class_mapping[label] for label in y])
-
-# split data into train and test sets
-seed = 7
-test_size = 0.33
-X_train, X_test, y_train, y_test = train_test_split(X, mapped_labels, test_size=test_size, random_state=seed)
-
-# Define the model parameters
-model_params = {
- 'objective': 'multi:softmax',
- 'num_class': 6,
- 'random_state': 42
-}
-
-# Create and train the XGBoost model
-xgb_model = xgb.XGBClassifier(**model_params)
-eval_set = [(X_test, y_test)]
-xgb_model.fit(X_train, y_train, early_stopping_rounds=10, eval_set=eval_set, verbose=True)
-# DAAL model
-daal_model = d4p.get_gbt_model_from_xgboost(xgb_model.get_booster())
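-# The trained XGBoost booster is converted into a daal4py (oneDAL) gradient-boosted-tree model, which is timed against XGBoost below.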
-
-
-st.subheader("Performance evaluation of the Automated Diagnosis Model")
-
-
-if st.button('ECG analysis of Patient001'):
- # patient001_signal_analysis() to visualize data analysis of single patient upon a button click
- st.write('Plots and heart rate analysis will appear here. Please upload ECG signal data in the specified format below for analysis.')
- # refer PTB website for format
- # call preprocessing module
- # call ecg_analysis()
-
-st.divider()
- # # Evaluate the model on the entire dataset
-
-# XGBoost prediction (for accuracy comparison)
-t0 = time.time()
-y_pred = xgb_model.predict(X_test)
-t1 = time.time()
-xgb_errors_count = np.count_nonzero(y_pred - np.ravel(y_test))
-
-xgb_total = t1-t0
-st.write("Prediction time using XGBoost model is ", xgb_total)
-accuracy = np.sum(y_pred == y_test) / len(y_test) # Calculate accuracy
- # print(f"Accuracy: {accuracy}")
-acc = accuracy * 100
-st.write("The accuracy of the diagnosis report is: ", acc, "%")
-
-
-st.divider()
-
- # # Evaluate the model on the entire dataset
- # y_pred = loaded_model.predict(X)
-
- # # Calculate evaluation metrics
-classification_metrics = classification_report(y_test, y_pred, output_dict=True)
-st.caption(":blue[Classification Metrics]")
-# classification_metrics = [classification_metrics]
-# cm = classification_metrics.insert(0,'metrics')
-st.table(classification_metrics)
-# st.json(classification_metrics)
-st.write("1: Myocardial infarction, 2: Bundle branch block, 3: Dysrhythmia , 4: Valvular heart disease, 5: Myocarditis")
-
-st.divider()
- # # Calculate confusion matrix
-confusion_mat = confusion_matrix(y_test, y_pred)
-# st.write("Confusion matrix:")
-
- # # Plot confusion matrix
-plt.figure(figsize=(10, 8))
-htmap = sns.heatmap(confusion_mat, annot=True, fmt="d", cmap="Blues")
-plt.title("Confusion Matrix")
-plt.xlabel("Predicted Class")
-plt.ylabel("True Class")
-plt.show()
-htmap = htmap.figure
-st.pyplot(htmap)
-
-
-st.divider()
- # Format signal info & preprocessing module for generating X[0] to diagnose from an external input data & give a dropbox to enter a single patient ecg data in .dat and .hea format
-
-# Make a faster prediction with oneDAL
-n_classes = 6
-# daal_prediction = d4p.gbt_classification_prediction(nClasses = n_classes).compute(X, daal_model).prediction
-# daal4py prediction for increased performance
-daal_predict_algo = d4p.gbt_classification_prediction(
- nClasses=n_classes,
- resultsToEvaluate="computeClassLabels",
- fptype='float'
-)
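-# Time oneDAL inference on the same test split so it can be compared with the XGBoost prediction time above.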
-t0 = time.time()
-daal_prediction = daal_predict_algo.compute(X_test, daal_model)
-t1 = time.time()
-daal_errors_count = np.count_nonzero(np.ravel(daal_prediction.prediction) - np.ravel(y_test))
-
-d4p_total = t1-t0
-st.write("Prediction time using DAAL model is ", xgb_total)
-
-
-# # List all results that you need by placing '|' between them
-# predict_algo = d4p.gbt_classification_prediction(nClasses = n_classes, resultsToEvaluate = "computeClassLabels|computeClassProbabilities")
-# daal_prediction = predict_algo.compute(X, daal_model)
-# # Get probabilities:
-# probabilities = daal_prediction.probabilities
-# st.write(probabilities)
-# # Get labels:
-# labels = daal_prediction.prediction
-# st.write(labels)
-
-# assert np.absolute(xgb_errors_count - daal_errors_count) == 0
-y_test = np.ravel(y_test)
-daal_prediction = np.ravel(daal_prediction.prediction)
-xgb_prediction = y_pred
-
-st.subheader("Accuracy & Performance Comparison: XGBoots Prediction vs. Daal4py Prediction")
-st.write("No accuracy loss!")
-st.write("\nXGBoost prediction results (first 10 rows):\n", xgb_prediction[0:10])
-st.write("\ndaal4py prediction results (first 10 rows):\n", daal_prediction[0:10])
-st.write("\nGround truth (first 10 rows):\n", y_test[0:10])
-
-st.write("XGBoost errors count:", xgb_errors_count)
-st.write("XGBoost accuracy score:", 1 - xgb_errors_count / xgb_prediction.shape[0])
-
-st.write("\ndaal4py errors count:", daal_errors_count)
-st.write("daal4py accuracy score:", 1 - daal_errors_count / daal_prediction.shape[0])
-
-st.write("\n XGBoost Prediction Time:", xgb_total)
-st.write("\n daal4py Prediction Time:", d4p_total)
-# st.write("\nAll looks good!")
-
-
-st.subheader("Visualizations")
-st.write("Performance")
-left = [1,2]
-pred_times = [xgb_total, d4p_total]
-tick_label = ['XGBoost Prediction', 'daal4py Prediction']
-# plt.bar(left, pred_times, tick_label = tick_label, width = 0.5, color = ['red', 'blue'])
-plt.xlabel('Prediction Method')
-plt.ylabel('time, s')
-plt.title('Prediction time, s')
-plt.show()
-# plt0 = plt0.figure
-# st.pyplot(plt0)
-st.bar_chart(pred_times)
-st.write("speedup:",xgb_total/d4p_total)
-st.write("Accuracy")
-left = [1,2]
-
-
-xgb_acc = 1 - xgb_errors_count / xgb_prediction.shape[0]
-d4p_acc = 1 - daal_errors_count / daal_prediction.shape[0]
-pred_acc = [xgb_acc, d4p_acc]
-tick_label = ['XGBoost Prediction', 'daal4py Prediction']
-# plt.bar(left, pred_acc, tick_label = tick_label, width = 0.5, color = ['red', 'blue'])
-plt.xlabel('Prediction Method')
-plt.ylabel('accuracy, %')
-plt.title('Prediction Accuracy, %')
-plt.show()
-# plt1 = plt1.figure
-# st.pyplot(plt1)
-st.bar_chart(pred_acc)
-st.write("Accuracy Difference",xgb_acc-d4p_acc)
-
-st.divider()
-
-
-
-
-patient_enc_data = {"Patient001":X[0],"Patient002":X[100],"Patient003":X[200],"Patient004":X[50],"Patient005":X[40],"Patient006":X[30],"Patient007":X[20],"Patient008":X[10],"Patient009":X[60],"Patient010":X[110],"Patient011":X[120],"Patient012":X[130],"Patient013":X[140],"Patient014":X[150],"Patient015":X[160],"Patient016":X[170],"Patient017":X[180],"Patient018":X[190],"Patient019":X[210],"Patient020":X[220],"Patient021":X[21],"Patient022":X[22],"Patient023":X[23],"Patient024":X[24],"Patient025":X[25],"Patient026":X[26],"Patient027":X[27],"Patient028":X[28],"Patient029":X[29],"Patient030":X[31],"Patient031":X[41],"Patient032":X[42],"Patient033":X[43],"Patient034":X[44],"Patient035":X[45],"Patient036":X[46],"Patient037":X[47],"Patient038":X[48],"Patient039":X[49],"Patient040":X[51],"Patient41":X[61],"Patient042":X[62],"Patient043":X[63],"Patient044":X[64],"Patient045":X[65],"Patient046":X[66],"Patient047":X[67],"Patient048":X[68],"Patient049":X[69],"Patient050":X[71], }
-patient_ecg_sel = st.selectbox( "Select a ECG of a patient from the list", list(patient_enc_data.keys()))
-
-
-
-
-def ecg_analysis(ecg_test_data):
-
- # Classify the test data point
- predicted_class = xgb_model.predict(np.array([ecg_test_data]))
-
-
- st.subheader("Diagnosis Report")
-
-
- if predicted_class[0] == 0:
- st.write("Sorry, We cannot give your diagnosis report at the moment. Kindly consult a doctor in person.")
- elif predicted_class[0] == 1:
- st.write("You are diagnosed with Myocardial infarction.")
- st.write("Kindly consult a doctor to take the necessary treatment.")
- elif predicted_class[0] == 2:
- st.write("You are diagnosed with Bundle branch block.")
- st.write("Kindly consult a doctor to take the necessary treatment.")
- elif predicted_class[0] == 3:
- st.write("You are diagnosed with Dysrhythmia.")
- st.write("Kindly take consult a doctor to the necessary treatment.")
- elif predicted_class[0] == 4:
- st.write("You are diagnosed with Valvular heart disease.")
- st.write("Kindly consult a doctor to take the necessary treatment.")
- elif predicted_class[0] == 5:
- st.write("You are diagnosed with Myocarditis.")
- st.write("Kindly consult a doctor to take the necessary treatment.")
- else:
- st.write("Sorry, We cannot give your diagnosis report at the moment. Kindly consult a doctor in person.")
-
-
-
-if st.button("Analyze Raw ECG"):
-# # if new_data:
-# # new_patient_data_preprocessing()
-# # else:
- ecg_train_dat = pd.read_csv("PTB_ECGdata.csv")
- diagnosis_counts = ecg_train_dat["diagnosis"].value_counts()
- st.bar_chart(diagnosis_counts)
-
-def new_patient_data_preprocessing(new_data):
-
- # code to preprocess .dat and .hea files from PTB ecg database, check one from ptb xl as external new data & convert it into .csv & encode to pass it as an argument to call ecg_analysis function
- st.write('')
-
-# st.write("")
-uploaded_file = st.file_uploader("Upload ECG file")
-if uploaded_file is not None:
-
- # Can be used wherever a "file-like" object is accepted:
- dataframe = pd.read_csv(uploaded_file)
- st.write(dataframe[:1])
- new_patient_data_preprocessing(dataframe)
-
-if st.button("Check Heart health"):
- ecg_test_data = patient_enc_data[patient_ecg_sel]
- st.write("Diagnosis report of", patient_ecg_sel)
- # st_profile_report(ecg_test_data)
- ecg_analysis(ecg_test_data)
-else:
- st.write("Diagnosis report of Patient001")
- ecg_test_data = X[0]
- ecg_analysis(ecg_test_data)
-
-
-
diff --git a/spaces/deepwisdom/MetaGPT/metagpt/utils/cost_manager.py b/spaces/deepwisdom/MetaGPT/metagpt/utils/cost_manager.py
deleted file mode 100644
index 21b37d5523412705f96cea322e9d4b26459727a8..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/metagpt/utils/cost_manager.py
+++ /dev/null
@@ -1,79 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/8/28
-@Author : mashenquan
-@File : cost_manager.py
-@Desc : mashenquan, 2023/8/28. Separate the `CostManager` class to support user-level cost accounting.
-"""
-
-from pydantic import BaseModel
-from metagpt.logs import logger
-from metagpt.utils.token_counter import TOKEN_COSTS
-from typing import NamedTuple
-
-
-class Costs(NamedTuple):
- total_prompt_tokens: int
- total_completion_tokens: int
- total_cost: float
- total_budget: float
-
-
-class CostManager(BaseModel):
- """Calculate the overhead of using the interface."""
-
- total_prompt_tokens: int = 0
- total_completion_tokens: int = 0
- total_budget: float = 0
- max_budget: float = 10.0
- total_cost: float = 0
-
- def update_cost(self, prompt_tokens, completion_tokens, model):
- """
- Update the total cost, prompt tokens, and completion tokens.
-
- Args:
- prompt_tokens (int): The number of tokens used in the prompt.
- completion_tokens (int): The number of tokens used in the completion.
- model (str): The model used for the API call.
- """
- self.total_prompt_tokens += prompt_tokens
- self.total_completion_tokens += completion_tokens
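- # TOKEN_COSTS holds per-1K-token prices, so the weighted token sum is divided by 1000.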
- cost = (prompt_tokens * TOKEN_COSTS[model]["prompt"] + completion_tokens * TOKEN_COSTS[model][
- "completion"]) / 1000
- self.total_cost += cost
- logger.info(
- f"Total running cost: ${self.total_cost:.3f} | Max budget: ${self.max_budget:.3f} | "
- f"Current cost: ${cost:.3f}, prompt_tokens: {prompt_tokens}, completion_tokens: {completion_tokens}"
- )
-
- def get_total_prompt_tokens(self):
- """
- Get the total number of prompt tokens.
-
- Returns:
- int: The total number of prompt tokens.
- """
- return self.total_prompt_tokens
-
- def get_total_completion_tokens(self):
- """
- Get the total number of completion tokens.
-
- Returns:
- int: The total number of completion tokens.
- """
- return self.total_completion_tokens
-
- def get_total_cost(self):
- """
- Get the total cost of API calls.
-
- Returns:
- float: The total cost of API calls.
- """
- return self.total_cost
-
- def get_costs(self) -> Costs:
- """获得所有开销"""
- return Costs(self.total_prompt_tokens, self.total_completion_tokens, self.total_cost, self.total_budget)
diff --git a/spaces/diacanFperku/AutoGPT/Adobe Acrobat X Pro 10.1.3 (English French German) (EXCLUSIVE Keygen-CORE) Serial Key EXCLUSIVE Keygen.md b/spaces/diacanFperku/AutoGPT/Adobe Acrobat X Pro 10.1.3 (English French German) (EXCLUSIVE Keygen-CORE) Serial Key EXCLUSIVE Keygen.md
deleted file mode 100644
index 42d249a3b7d390993d0efe491f0fd4aa60eaa9b5..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Adobe Acrobat X Pro 10.1.3 (English French German) (EXCLUSIVE Keygen-CORE) Serial Key EXCLUSIVE Keygen.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
Adobe Acrobat X Pro 10.1.3 (English French German) (keygen-CORE) Serial Key keygen
-
-
If you are looking for a powerful and versatile PDF solution that supports multiple languages, you might want to check out Adobe Acrobat X Pro 10.1.3. This software allows you to create, edit, convert, and secure PDF files with ease. You can also collaborate with others, share your documents online, and access them from anywhere.
-
-
But how can you get Adobe Acrobat X Pro 10.1.3 for free? Well, you can use a keygen-CORE serial key to activate the software and enjoy its full features. A keygen-CORE serial key is a code that is generated by a program called keygen-CORE, which can crack the activation process of Adobe Acrobat X Pro 10.1.3.
-
Adobe Acrobat X Pro 10.1.3 (English French German) (keygen-CORE) Serial Key keygen
How to use keygen-CORE serial key for Adobe Acrobat X Pro 10.1.3
-
-
To use a keygen-CORE serial key for Adobe Acrobat X Pro 10.1.3, you need to follow these steps:
-
-
-
Download Adobe Acrobat X Pro 10.1.3 from the official website or from a direct download link. You can choose the language you prefer: English, French, or German.
-
Install Adobe Acrobat X Pro 10.1.3 on your computer. Do not launch the software yet.
-
Download keygen-CORE from a reliable source. Make sure it is compatible with Adobe Acrobat X Pro 10.1.3.
-
Run keygen-CORE as administrator and select Adobe Acrobat X Pro 10 from the drop-down menu.
-
Click on Generate button and copy the serial key that appears.
-
Launch Adobe Acrobat X Pro 10.1.3 and enter the serial key when prompted.
-
Enjoy your activated Adobe Acrobat X Pro 10.1.3!
-
-
-
Benefits of Adobe Acrobat X Pro 10.1.3
-
-
Adobe Acrobat X Pro 10.1.3 is a great PDF solution that offers many benefits, such as:
-
-
-
It supports multiple languages, including English, French, and German. You can easily switch between languages and work with documents in different languages.
-
It has a user-friendly interface that makes it easy to create, edit, convert, and secure PDF files.
-
It has advanced features that allow you to add multimedia elements, interactive forms, digital signatures, and comments to your PDF files.
-
It has a powerful OCR (optical character recognition) feature that can recognize text in scanned documents and images.
-
It has a built-in converter that can convert PDF files to other formats, such as Word, Excel, PowerPoint, HTML, and more.
-
It has a cloud service that lets you store, access, and share your PDF files online.
-
It has a collaboration feature that lets you work with others on the same PDF file in real time.
-
-
-
Conclusion
-
-
Adobe Acrobat X Pro 10.1.3 is a comprehensive PDF solution that can help you create, edit, convert, and secure PDF files in multiple languages. You can get it for free by using a keygen-CORE serial key that can bypass the activation process of the software. However, you should be careful when downloading and using keygen-CORE serial keys, as they may contain viruses or malware that can harm your computer or compromise your privacy.
-
-
If you want to use Adobe Acrobat X Pro 10.1.3 legally and safely, you should buy a license from the official website or from an authorized reseller.
-
Features of Adobe Acrobat X Pro 10.1.3
-
-
Adobe Acrobat X Pro 10.1.3 is packed with features that make it a powerful and versatile PDF solution. Some of the features are:
-
-
-
-
Action Wizard: This feature allows you to automate repetitive tasks and save time. You can create, manage, and share custom actions that can perform multiple operations on PDF files.
-
Portfolio: This feature allows you to combine multiple files of different formats into a single PDF portfolio. You can customize the look and feel of your portfolio with themes, layouts, and colors.
-
Compare Documents: This feature allows you to compare two versions of a PDF file and highlight the differences. You can also filter the changes by type, such as text, images, annotations, etc.
-
Optimize Scanned Documents: This feature allows you to enhance the quality of scanned documents and images. You can adjust the contrast, brightness, resolution, and color of your scans.
-
Export Comments: This feature allows you to export all the comments in a PDF file to a Word document or an Excel spreadsheet. You can also import comments from other sources into your PDF file.
-
-
-
How to download Adobe Acrobat X Pro 10.1.3
-
-
If you want to download Adobe Acrobat X Pro 10.1.3, you have two options:
-
-
-
Official website: You can visit the official website of Adobe and download the software from there. You can choose the language you prefer: English, French, or German. However, you will need to sign in with your Adobe ID or create one for free.
-
Direct download link: You can use a direct download link that will take you to the authentic and secure files residing on Adobe's servers. You do not need to sign in with your Adobe ID or create one for this option. However, you will need to follow some important instructions before clicking on the link.
-
-
-
Either way, you will get a trial version of Adobe Acrobat X Pro 10.1.3 that will last for 30 days. After that, you will need to activate the software with a keygen-CORE serial key or buy a license from Adobe.
-
How to scan documents and convert them to PDF with Adobe Acrobat X Pro 10.1.3
-
-
One of the features of Adobe Acrobat X Pro 10.1.3 is that it can scan documents and convert them to PDF files. This is useful if you want to digitize your paper documents and make them searchable, editable, and shareable. To scan documents and convert them to PDF with Adobe Acrobat X Pro 10.1.3, you need to follow these steps:
-
-
-
Connect your scanner to your computer and turn it on.
-
Launch Adobe Acrobat X Pro 10.1.3 and click on Create > PDF from Scanner.
-
Select the scanner you want to use and the preset you want to apply. You can choose from different presets, such as Black & White Document, Color Document, Grayscale Document, etc.
-
Click on Scan and wait for the scanning process to finish.
-
If you want to scan more pages, click on Scan More Pages. If you are done, click on Scan Is Complete.
-
Adobe Acrobat X Pro 10.1.3 will create a PDF file from your scanned document and open it in the main window.
-
You can now edit, save, or share your PDF file as you wish.
-
-
-
How to secure your PDF files with Adobe Acrobat X Pro 10.1.3
-
-
Another feature of Adobe Acrobat X Pro 10.1.3 is that it can secure your PDF files with passwords, encryption, and digital signatures. This is useful if you want to protect your PDF files from unauthorized access, copying, printing, or modification. To secure your PDF files with Adobe Acrobat X Pro 10.1.3, you need to follow these steps:
-
-
-
Open the PDF file you want to secure in Adobe Acrobat X Pro 10.1.3.
-
Click on Tools > Protection > Encrypt > Encrypt with Password.
-
Check the boxes for Require a password to open the document and/or Require a password to change security settings and access specific functions.
-
Enter the passwords you want to use and click on OK.
-
If you want to add a digital signature to your PDF file, click on Tools > Sign & Certify > Sign Document.
-
Select the signature you want to use or create a new one.
-
Drag a rectangle where you want to place your signature and click on Sign.
-
Save your PDF file with the security settings applied.
-
-
-
Conclusion
-
-
Adobe Acrobat X Pro 10.1.3 is a comprehensive PDF solution that can help you create, edit, convert, and secure PDF files in multiple languages. You can get it for free by using a keygen-CORE serial key that can bypass the activation process of the software. However, you should be careful when downloading and using keygen-CORE serial keys, as they may contain viruses or malware that can harm your computer or compromise your privacy.
-
-
If you want to use Adobe Acrobat X Pro 10.1.3 legally and safely, you should buy a license from the official website or from an authorized reseller.
-
How to edit PDF files with Adobe Acrobat X Pro 10.1.3
-
-
One of the features of Adobe Acrobat X Pro 10.1.3 is that it can edit PDF files with ease. You can modify the text, images, layout, and formatting of your PDF files. You can also add annotations, comments, stamps, and watermarks to your PDF files. To edit PDF files with Adobe Acrobat X Pro 10.1.3, you need to follow these steps:
-
-
-
Open the PDF file you want to edit in Adobe Acrobat X Pro 10.1.3.
-
Click on Tools > Content > Edit Document Text or Edit Object.
-
Select the text or object you want to edit and make the changes you want.
-
If you want to add annotations, comments, stamps, or watermarks to your PDF file, click on Tools > Comment & Markup or Watermark.
-
Select the tool you want to use and apply it to your PDF file.
-
Save your PDF file with the changes applied.
-
-
-
How to convert PDF files to other formats with Adobe Acrobat X Pro 10.1.3
-
-
Another feature of Adobe Acrobat X Pro 10.1.3 is that it can convert PDF files to other formats, such as Word, Excel, PowerPoint, HTML, and more. This is useful if you want to reuse or edit the content of your PDF files in other applications. To convert PDF files to other formats with Adobe Acrobat X Pro 10.1.3, you need to follow these steps:
-
-
-
Open the PDF file you want to convert in Adobe Acrobat X Pro 10.1.3.
-
Click on File > Save As > Microsoft Word or Microsoft Excel or Microsoft PowerPoint or HTML Web Page.
-
Select the format you want to convert your PDF file to and click on Save.
-
Adobe Acrobat X Pro 10.1.3 will create a new file in the selected format and open it in the corresponding application.
-
You can now edit or use your converted file as you wish.
-
-
-
Conclusion
-
-
Adobe Acrobat X Pro 10.1.3 is a comprehensive PDF solution that can help you create, edit, convert, and secure PDF files in multiple languages. You can get it for free by using a keygen-CORE serial key that can bypass the activation process of the software. However, you should be careful when downloading and using keygen-CORE serial keys, as they may contain viruses or malware that can harm your computer or compromise your privacy.
-
-
If you want to use Adobe Acrobat X Pro 10.1.3 legally and safely, you should buy a license from the official website or from an authorized reseller.
-
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Download Utorrent Plus Full Version Crack 2013.md b/spaces/diacanFperku/AutoGPT/Download Utorrent Plus Full Version Crack 2013.md
deleted file mode 100644
index 86ce438f47db85eb473d94dc5c565d3a8fe4aad3..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Download Utorrent Plus Full Version Crack 2013.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
Download uTorrent Plus Full Version with Crack 2013
-
If you are looking for a fast, easy and reliable way to download torrents, you might want to try uTorrent Plus. uTorrent Plus is a premium version of the popular BitTorrent client uTorrent, which offers some extra features and benefits that can enhance your torrenting experience.
In this article, we will show you how to download uTorrent Plus full version with crack 2013, which will allow you to enjoy all the advantages of uTorrent Plus without paying anything. You will also learn about the features and benefits of uTorrent Plus, and how to use it safely and efficiently.
-
What is uTorrent Plus?
-
uTorrent Plus is a paid upgrade of uTorrent, which is one of the most widely used BitTorrent clients in the world. uTorrent Plus has all the features of uTorrent, such as:
-
-
Fast and efficient downloads
-
Bandwidth prioritization and scheduling
-
RSS auto-downloading and magnet links support
-
Protocol encryption and peer exchange
-
Customizable interface and skins
-
Portable mode and remote access
-
-
But uTorrent Plus also adds some exclusive features that make it more powerful and convenient, such as:
-
-
Integrated antivirus protection that scans your downloads for malware and viruses
-
Built-in HD media player that lets you watch videos and listen to music within seconds of starting a download
-
Media converter that allows you to convert your downloaded files to various formats and devices
-
No ads or pop-ups that can interrupt your downloads or browsing
-
Premium customer support that can help you with any issues or questions
-
-
With uTorrent Plus, you can download torrents faster, safer and easier than ever before.
-
-
How to Download uTorrent Plus Full Version with Crack 2013?
-
If you want to download uTorrent Plus full version with crack 2013, you will need to follow these steps:
-
-
Download the uTorrent Plus setup file from a reliable source. You can use the link below to download it directly:
Copy all the files from the crack folder and paste them into the uTorrent Plus installation directory. You can find the installation directory by typing %appdata%\uTorrent in the Run dialog box (Windows logo key + R).
-
Open uTorrent Plus and disable the automatic updates option. You can do this by going to Preferences > General > Privacy & Updates > Turn off automatic updates.
-
Enjoy uTorrent Plus full version with crack 2013 for free!
-
-
Tips and Tricks for Using uTorrent Plus
-
To make the most out of uTorrent Plus, here are some tips and tricks that you can use:
-
-
To stream videos or music within seconds of starting a download, click on the play icon next to the torrent name in the main window.
-
To convert your downloaded files to different formats or devices, right-click on the torrent name and select Convert Files.
-
To scan your downloads for viruses or malware, right-click on the torrent name and select Scan Files.
-
To access your uTorrent client from anywhere, use uTorrent Remote. You can create an account and log in from any web browser or Android device.
-
To optimize your download speed and performance, adjust your bandwidth settings according to your network conditions. You can also use the built-in speed guide to find the best settings for your connection.
-
To find more content to download, use the App Studio feature. You can access various apps that offer games, music, videos, news and more.
-
-
-
Conclusion
-
-
uTorrent Plus is a great way to download torrents faster, safer and easier than ever before. With its advanced features and benefits, you can enjoy a smooth and satisfying torrenting experience. However, if you don't want to pay for it, you can download uTorrent Plus full version with crack 2013 for free by following our guide above. Just make sure you download the files from reliable sources and scan them for viruses before installing them. Also, remember to disable the automatic updates option to avoid any problems or errors.
-
-
We hope this article was helpful for you. If you have any questions or comments, feel free to leave them below. Happy torrenting!
-
Is uTorrent Plus Safe and Legal?
-
One of the common questions that people have about uTorrent Plus is whether it is safe and legal to use. The answer is not so simple, as it depends on several factors.
-
First of all, uTorrent Plus itself is a safe and legal software that does not contain any viruses, malware or spyware. It also has an integrated antivirus feature that can scan your downloads for any potential threats. However, this does not mean that all the files that you download with uTorrent Plus are safe and legal. Some of the torrents that you find on the internet may contain harmful or illegal content, such as pirated movies, music, games or software. Downloading such content can expose you to legal risks or damage your computer.
-
Therefore, it is important to be careful and responsible when using uTorrent Plus. You should always check the source and reputation of the torrents that you download, and avoid any suspicious or illegal files. You should also respect the intellectual property rights of the creators and owners of the content that you download, and only use it for personal and non-commercial purposes. Moreover, you should be aware of the laws and regulations of your country regarding torrenting, as they may vary from place to place.
-
What are the Alternatives to uTorrent Plus?
-
If you are not satisfied with uTorrent Plus or you want to try something different, there are some alternatives that you can consider. Here are some of the most popular ones:
-
-
BitTorrent Pro: BitTorrent Pro is another premium version of BitTorrent, which is the original client that started the torrenting revolution. BitTorrent Pro has similar features and benefits as uTorrent Plus, such as antivirus protection, HD media player, media converter and remote access. However, BitTorrent Pro is more expensive than uTorrent Plus, and it may not be compatible with some torrent sites or trackers.
-
qBittorrent: qBittorrent is a free and open-source torrent client that aims to provide a simple and lightweight alternative to uTorrent. qBittorrent has a clean and user-friendly interface, and it supports all the essential features of torrenting, such as magnet links, DHT, PEX, encryption and RSS feeds. qBittorrent also has some advanced features, such as torrent creation, IP filtering, bandwidth scheduling and search engine.
-
Vuze: Vuze is a powerful and feature-rich torrent client that offers more than just downloading torrents. Vuze has a built-in browser that lets you access various content channels, such as games, music, videos and podcasts. Vuze also has a media player that can play HD videos and stream them to your devices. Vuze also has a premium version called Vuze Plus, which adds antivirus protection, DVD burning and ad removal.
-
-
-
Conclusion
-
-
uTorrent Plus is a great way to download torrents faster, safer and easier than ever before. With its advanced features and benefits, you can enjoy a smooth and satisfying torrenting experience. However, if you don't want to pay for it, you can download uTorrent Plus full version with crack 2013 for free by following our guide above. Just make sure you download the files from reliable sources and scan them for viruses before installing them. Also, remember to disable the automatic updates option to avoid any problems or errors.
-
-
We hope this article was helpful for you. If you have any questions or comments, feel free to leave them below. Happy torrenting!
-
What are the Pros and Cons of uTorrent Plus?
-
uTorrent Plus has many advantages and disadvantages that you should consider before using it. Here are some of the main ones:
-
Pros
-
-
uTorrent Plus is fast and efficient, as it uses minimal system resources and offers high download speeds.
-
uTorrent Plus is easy and intuitive, as it has a simple and customizable interface and offers many options and preferences.
-
uTorrent Plus is safe and secure, as it has an integrated antivirus feature and supports protocol encryption and peer exchange.
-
uTorrent Plus is convenient and versatile, as it has a built-in HD media player, media converter and remote access feature.
-
uTorrent Plus is ad-free and premium, as it does not show any ads or pop-ups and offers premium customer support.
-
-
Cons
-
-
uTorrent Plus is expensive and illegal, as it costs $19.95 per year and requires a crack to use it for free.
-
uTorrent Plus is risky and unreliable, as it may contain viruses or malware in the crack file or the downloaded torrents.
-
uTorrent Plus is controversial and unethical, as it may violate the intellectual property rights of the content creators and owners.
-
uTorrent Plus is outdated and unsupported, as it was last updated in 2018 and may not work with some torrent sites or trackers.
-
uTorrent Plus is unnecessary and redundant, as there are many free and open-source alternatives that offer similar or better features and benefits.
-
-
-
Conclusion
-
-
uTorrent Plus is a great way to download torrents faster, safer and easier than ever before. With its advanced features and benefits, you can enjoy a smooth and satisfying torrenting experience. However, if you don't want to pay for it, you can download uTorrent Plus full version with crack 2013 for free by following our guide above. Just make sure you download the files from reliable sources and scan them for viruses before installing them. Also, remember to disable the automatic updates option to avoid any problems or errors.
-
-
We hope this article was helpful for you. If you have any questions or comments, feel free to leave them below. Happy torrenting!
-
In conclusion, uTorrent Plus is a premium version of uTorrent that offers some extra features and benefits that can enhance your torrenting experience. However, it also has some drawbacks and risks that you should be aware of. If you want to use uTorrent Plus for free, you can download uTorrent Plus full version with crack 2013 by following our guide above. However, you should also be careful and responsible when using it, and respect the laws and regulations of your country and the rights of the content creators and owners. Alternatively, you can try some of the free and open-source alternatives that we have mentioned in this article. We hope this article was helpful for you. If you have any questions or comments, feel free to leave them below. Happy torrenting!
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Grepolis Hack V4.2l [UPDATED].md b/spaces/diacanFperku/AutoGPT/Grepolis Hack V4.2l [UPDATED].md
deleted file mode 100644
index 131db2f82eead744b2478903ea6c32c28b35544a..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Grepolis Hack V4.2l [UPDATED].md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
the two games mentioned above are good starting points, but it is not always possible to purchase the coins you need to be more competitive and build your way to the top. the grepolis hack is probably the best way to get coins and start out on the right foot. it is certainly worth a try, but as always, keep in mind that the hack is designed to bypass the game and may cause your system to slow down. the longer you wait, the more youll need to spend on the hack.
there are a few different ways to get coins, but the hack will be the easiest. you will be able to use the hack to get coins without the need to wait for a chance to upgrade. instead, you will need to use the hack, and when youre ready, youll receive the amount of coins that youve bought in a matter of seconds. the hack is very simple to use, and in about a minute, you will be able to get the coins you need to compete.
-
one of the worst problems in game development is the lack of user friendly game play. i myself have tried countless games which make it extremely frustrating to play, or in some cases, completely impossible. grepolis however, brings us a gaming experience that is not only fun, but also very user friendly. in grepolis, it is easy to jump right into the game and start playing. there is no need for learning any advanced game play, so beginners can jump right in and start conquering the world. there are so many upgrades available, and each one is very useful and fun to use. you dont have to worry about becoming bored by game play, as each upgrade gives you something to do.
-
the best way to describe grepolis is that it is an ancient war game that has all the gameplay elements of an rpg, but in the setting of ancient greece. the game is developed by a small team of only 6 people, and the game itself has a very polished look and feel. another unique aspect of grepolis is that you can play this game on both the app store and google play, which gives you the opportunity to play at any time and place. if youre constantly in the mood for a unique and challenging game, youll be glad to know that grepolis has an extensive online community that is very active. you can also join a guild that you can build or a raid, if you like a different way of playing the game. for the most part, grepolis does not have a very high price, so you will be able to find a package for your budget.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Refx Nexus 2.2.1 Update Crack Hack Usb Air Elicenser Emulator !!EXCLUSIVE!!.md b/spaces/diacanFperku/AutoGPT/Refx Nexus 2.2.1 Update Crack Hack Usb Air Elicenser Emulator !!EXCLUSIVE!!.md
deleted file mode 100644
index b4874fb37a263655a4a7fc94ef16ab3745ca3223..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Refx Nexus 2.2.1 Update Crack Hack Usb Air Elicenser Emulator !!EXCLUSIVE!!.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
Refx Nexus 2.2.1 Update Crack, Hack Usb Air Elicenser Emulator
-
-May 11, 2561 BC. - installation of the air elicenser nexus 2 emulator All objects belonging to ReFX Nexus 2.4.1 The USB eLicenser emulator is recognized and asks if .. . Installing Nexus emulator on Windows 10 How to install Nexus on Nexus S/T/X/S/T Pro on Windows 8 How to download and install Nexus from Nexus S/T/X/S/T Pro on Windows 8....
-25 Apr 2016 ...
-Emulating Air-Elichender-Nexus in Nullsoft Scripted Real Player....
-Installing the Nexus emulator on Windows 10 How to install Nexus on Nexus S/T/X/S/T Pro on Windows 8 How to download and install Nexus from Nexus S/T/X/S/T Pro on Windows 8 ... 25 Apr 2016 ... 8a78ff9644
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/SILK LABO After Summer Days.md b/spaces/diacanFperku/AutoGPT/SILK LABO After Summer Days.md
deleted file mode 100644
index 5df54f6217d6cb6924175636b20c5ba4945b5ca4..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/SILK LABO After Summer Days.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
i was so excited to try the p50 as a night time product, as i am a bit over-sensitive and prone to redness. i would love your insights on the p50 as my nightly pm moisturizer and am spot treatment (since i have sensitive skin). ive been using it on my hands, elbows and knees at night, but i need help with the forehead and eye area. im using the adapted japanese facecloth for my eye area and face and the it works! eye cream for my face. again, any and all tips and reviews are greatly appreciated. i know my skin is extra sensitive, but the p50 is absorbing quickly and has really helped to prevent my redness from happening.
-
hi, i wanted to drop by and leave a comment. i love the fact that you can buy your br opial products in europe. i travel a lot and this makes it so much easier to not have to find a br reseller when i am traveling abroad. do you know of any of the opial shops in london? i was actually just there about 3 weeks ago and was really disappointed that i couldnt find any decent opial retailers there. i love your blog!
so to me, the wait time is just part of the routine. of course, there is no rule or any inherent reason why the wait time cannot be used to moisturise or refresh. the obvious benefit of having more time between step is that we can use this time to catch a zzz.. or rather, catch up on our reading or finish that school assignment. when it comes to lotions, i find that a little bit of moisturising goes a long way. i find that if i apply lotion after my shower, i end up feeling greasy afterwards. however, if i apply lotion before my shower, the sensation disappears. this method might sound silly, but i really do think it works. i almost always finish my shower before i start moisturising. that way, i always have at least a few minutes before the shower water hits my skin.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/data/examples.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/data/examples.py
deleted file mode 100644
index 543f6f8647ed9b8ffc5bbc755e2bf400b2f0af53..0000000000000000000000000000000000000000
--- a/spaces/diagaiwei/ir_chinese_medqa/colbert/data/examples.py
+++ /dev/null
@@ -1,82 +0,0 @@
-from colbert.infra.run import Run
-import os
-import ujson
-
-from colbert.utils.utils import print_message
-from colbert.infra.provenance import Provenance
-from utility.utils.save_metadata import get_metadata_only
-
-
-class Examples:
- def __init__(self, path=None, data=None, nway=None, provenance=None):
- self.__provenance = provenance or path or Provenance()
- self.nway = nway
- self.path = path
- self.data = data or self._load_file(path)
-
- def provenance(self):
- return self.__provenance
-
- def toDict(self):
- return self.provenance()
-
- def _load_file(self, path):
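- # Each line of the file is a JSON list; when nway is set, keep only the first nway + 1 entries per example.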
- nway = self.nway + 1 if self.nway else self.nway
- examples = []
-
- with open(path) as f:
- for line in f:
- example = ujson.loads(line)[:nway]
- examples.append(example)
-
- return examples
-
- def tolist(self, rank=None, nranks=None):
- """
- NOTE: For distributed sampling, this isn't equivalent to perfectly uniform sampling.
- In particular, each subset is perfectly represented in every batch! However, since we never
- repeat passes over the data, we never repeat any particular triple, and the split across
- nodes is random (since the underlying file is pre-shuffled), there's no concern here.
- """
-
- if rank or nranks:
- assert rank in range(nranks), (rank, nranks)
- return [self.data[idx] for idx in range(rank, len(self.data), nranks)] # keep lines where line_idx % nranks == rank
-
- return list(self.data)
-
- def save(self, new_path):
- assert 'json' in new_path.strip('/').split('/')[-1].split('.'), "TODO: Support .json[l] too."
-
- print_message(f"#> Writing {len(self.data) / 1000_000.0}M examples to {new_path}")
-
- with Run().open(new_path, 'w') as f:
- for example in self.data:
- ujson.dump(example, f)
- f.write('\n')
-
- output_path = f.name
- print_message(f"#> Saved examples with {len(self.data)} lines to {f.name}")
-
- with Run().open(f'{new_path}.meta', 'w') as f:
- d = {}
- d['metadata'] = get_metadata_only()
- d['provenance'] = self.provenance()
- line = ujson.dumps(d, indent=4)
- f.write(line)
-
- return output_path
-
- @classmethod
- def cast(cls, obj, nway=None):
- if type(obj) is str:
- return cls(path=obj, nway=nway)
-
- if isinstance(obj, list):
- return cls(data=obj, nway=nway)
-
- if type(obj) is cls:
- assert nway is None, nway
- return obj
-
- assert False, f"obj has type {type(obj)} which is not compatible with cast()"
diff --git a/spaces/digitalxingtong/Eileen-Bert-Vits2/monotonic_align/core.py b/spaces/digitalxingtong/Eileen-Bert-Vits2/monotonic_align/core.py
deleted file mode 100644
index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Eileen-Bert-Vits2/monotonic_align/core.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]),
- nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
- b = paths.shape[0]
- max_neg_val = -1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y - 1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y - 1, x - 1]
- value[y, x] += max(v_prev, v_cur)
-
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
- index = index - 1
\ No newline at end of file
diff --git a/spaces/dineshreddy/WALT/mmdet/models/detectors/faster_rcnn.py b/spaces/dineshreddy/WALT/mmdet/models/detectors/faster_rcnn.py
deleted file mode 100644
index 81bad0f43a48b1022c4cd996e26d6c90be93d4d0..0000000000000000000000000000000000000000
--- a/spaces/dineshreddy/WALT/mmdet/models/detectors/faster_rcnn.py
+++ /dev/null
@@ -1,24 +0,0 @@
-from ..builder import DETECTORS
-from .two_stage import TwoStageDetector
-
-
-@DETECTORS.register_module()
-class FasterRCNN(TwoStageDetector):
- """Implementation of `Faster R-CNN `_"""
-
- def __init__(self,
- backbone,
- rpn_head,
- roi_head,
- train_cfg,
- test_cfg,
- neck=None,
- pretrained=None):
- super(FasterRCNN, self).__init__(
- backbone=backbone,
- neck=neck,
- rpn_head=rpn_head,
- roi_head=roi_head,
- train_cfg=train_cfg,
- test_cfg=test_cfg,
- pretrained=pretrained)
diff --git a/spaces/dingjian/luckpainting/README.md b/spaces/dingjian/luckpainting/README.md
deleted file mode 100644
index 6bc527e508e5f18ed659e7502cd9eea153e5f8fb..0000000000000000000000000000000000000000
--- a/spaces/dingjian/luckpainting/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Luckpainting
-emoji: 🌖
-colorFrom: purple
-colorTo: blue
-sdk: gradio
-sdk_version: 3.6
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_models/satrn.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_models/satrn.py
deleted file mode 100644
index f7a6de8637c77a18a930e032bfb752434b173ba4..0000000000000000000000000000000000000000
--- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_models/satrn.py
+++ /dev/null
@@ -1,11 +0,0 @@
-label_convertor = dict(
- type='AttnConvertor', dict_type='DICT36', with_unknown=True, lower=True)
-
-model = dict(
- type='SATRN',
- backbone=dict(type='ShallowCNN'),
- encoder=dict(type='SatrnEncoder'),
- decoder=dict(type='TFDecoder'),
- loss=dict(type='TFLoss'),
- label_convertor=label_convertor,
- max_seq_len=40)
diff --git a/spaces/divyahansg/text-generation-webui-space/convert-to-flexgen.py b/spaces/divyahansg/text-generation-webui-space/convert-to-flexgen.py
deleted file mode 100644
index 917f023c3fe395c2e3cbcad11c9cdc6b85ef1e7e..0000000000000000000000000000000000000000
--- a/spaces/divyahansg/text-generation-webui-space/convert-to-flexgen.py
+++ /dev/null
@@ -1,60 +0,0 @@
-'''
-
-Converts a transformers model to a format compatible with flexgen.
-
-'''
-
-import argparse
-import os
-from pathlib import Path
-
-import numpy as np
-import torch
-from tqdm import tqdm
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-parser = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog,max_help_position=54))
-parser.add_argument('MODEL', type=str, default=None, nargs='?', help="Path to the input model.")
-args = parser.parse_args()
-
-def disable_torch_init():
- """
- Disable the redundant torch default initialization to accelerate model creation.
- """
- import torch
- global torch_linear_init_backup
- global torch_layer_norm_init_backup
-
- torch_linear_init_backup = torch.nn.Linear.reset_parameters
- setattr(torch.nn.Linear, "reset_parameters", lambda self: None)
-
- torch_layer_norm_init_backup = torch.nn.LayerNorm.reset_parameters
- setattr(torch.nn.LayerNorm, "reset_parameters", lambda self: None)
-
-def restore_torch_init():
- """Rollback the change made by disable_torch_init."""
- import torch
- setattr(torch.nn.Linear, "reset_parameters", torch_linear_init_backup)
- setattr(torch.nn.LayerNorm, "reset_parameters", torch_layer_norm_init_backup)
-
-if __name__ == '__main__':
- path = Path(args.MODEL)
- model_name = path.name
-
- print(f"Loading {model_name}...")
- #disable_torch_init()
- model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float16, low_cpu_mem_usage=True)
- #restore_torch_init()
-
- tokenizer = AutoTokenizer.from_pretrained(path)
-
- out_folder = Path(f"models/{model_name}-np")
- if not Path(out_folder).exists():
- os.mkdir(out_folder)
-
- print(f"Saving the converted model to {out_folder}...")
- for name, param in tqdm(list(model.model.named_parameters())):
- name = name.replace("decoder.final_layer_norm", "decoder.layer_norm")
- param_path = os.path.join(out_folder, name)
- with open(param_path, "wb") as f:
- np.save(f, param.cpu().detach().numpy())
diff --git a/spaces/dongyi/MMFS/app.py b/spaces/dongyi/MMFS/app.py
deleted file mode 100644
index 877c34bda106f91897d8a0e0866085200dd858e8..0000000000000000000000000000000000000000
--- a/spaces/dongyi/MMFS/app.py
+++ /dev/null
@@ -1,314 +0,0 @@
-import gradio as gr
-from PIL import Image
-
-import os
-import cv2
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from models.modules.stylegan2.model import StyledConv, ToRGB, EqualLinear, ResBlock, ConvLayer, PixelNorm
-
-from utils.util import *
-from utils.data_utils import Transforms
-from data import CustomDataLoader
-from data.super_dataset import SuperDataset
-from configs import parse_config
-from utils.augmentation import ImagePathToImage
-import clip
-from torchvision.transforms import Compose, Resize, ToTensor, Normalize, InterpolationMode
-from models.style_based_pix2pixII_model import CLIPFeats2Wplus
-
-
-class Stylizer(nn.Module):
-
- def __init__(self, ngf=64, phase=2, model_weights=None):
- super(Stylizer, self).__init__()
-
- # encoder
- self.encoder = nn.Sequential(
- ConvLayer(3, ngf, 3), # 512
- ResBlock(ngf * 1, ngf * 1), # 256
- ResBlock(ngf * 1, ngf * 2), # 128
- ResBlock(ngf * 2, ngf * 4), # 64
- ResBlock(ngf * 4, ngf * 8), # 32
- ConvLayer(ngf * 8, ngf * 8, 3) # 32
- )
-
- # mapping network
- self.mapping_z = nn.Sequential(*([ PixelNorm() ] + [ EqualLinear(512, 512, activation='fused_lrelu', lr_mul=0.01) for _ in range(8) ]))
-
- # style-based decoder
- channels = {
- 32 : ngf * 8,
- 64 : ngf * 8,
- 128: ngf * 4,
- 256: ngf * 2,
- 512: ngf * 1
- }
- self.decoder0 = StyledConv(channels[32], channels[32], 3, 512)
- self.to_rgb0 = ToRGB(channels[32], 512, upsample=False)
- for i in range(4):
- ichan = channels[2 ** (i + 5)]
- ochan = channels[2 ** (i + 6)]
- setattr(self, f'decoder{i + 1}a', StyledConv(ichan, ochan, 3, 512, upsample=True))
- setattr(self, f'decoder{i + 1}b', StyledConv(ochan, ochan, 3, 512))
- setattr(self, f'to_rgb{i + 1}', ToRGB(ochan, 512))
- self.n_latent = 10
-
- # random style for testing
- self.test_z = torch.randn(1, 512)
-
- # load pretrained model weights
-
- if phase == 2:
- # load pretrained encoder and stylegan2 decoder
- self.load_state_dict(model_weights)
- if phase == 3:
- self.clip_mapper = CLIPFeats2Wplus(n_tokens=self.n_latent)
- # load pretrained base model and freeze all params except clip mapper
- self.load_state_dict(model_weights, strict=False)
- params = dict(self.named_parameters())
- for k in params.keys():
- if 'clip_mapper' in k:
- print(f'{k} not freezed !')
- continue
- params[k].requires_grad = False
-
- def get_styles(self, x, **kwargs):
- if len(kwargs) == 0:
- return self.mapping_z(self.test_z.to(x.device).repeat(x.shape[0], 1)).repeat(self.n_latent, 1, 1)
- elif 'mixing' in kwargs and kwargs['mixing']:
- w0 = self.mapping_z(torch.randn(x.shape[0], 512, device=x.device))
- w1 = self.mapping_z(torch.randn(x.shape[0], 512, device=x.device))
- inject_index = random.randint(1, self.n_latent - 1)
- return torch.cat([
- w0.repeat(inject_index, 1, 1),
- w1.repeat(self.n_latent - inject_index, 1, 1)
- ])
- elif 'z' in kwargs:
- return self.mapping_z(kwargs['z']).repeat(self.n_latent, 1, 1)
- elif 'clip_feats' in kwargs:
- return self.clip_mapper(kwargs['clip_feats'])
- else:
- z = torch.randn(x.shape[0], 512, device=x.device)
- return self.mapping_z(z).repeat(self.n_latent, 1, 1)
-
- def forward(self, x, **kwargs):
- # encode
- feat = self.encoder(x)
-
- # get style code
- styles = self.get_styles(x, **kwargs)
-
- # style-based generate
- feat = self.decoder0(feat, styles[0])
- out = self.to_rgb0(feat, styles[1])
- for i in range(4):
- feat = getattr(self, f'decoder{i + 1}a')(feat, styles[i * 2 + 1])
- feat = getattr(self, f'decoder{i + 1}b')(feat, styles[i * 2 + 2])
- out = getattr(self, f'to_rgb{i + 1}')(feat, styles[i * 2 + 3], out)
-
- return F.hardtanh(out)
-
-def tensor2file(input_image):
-
- if not isinstance(input_image, np.ndarray):
- if isinstance(input_image, torch.Tensor): # get the data from a variable
- image_tensor = input_image.data
- else:
- return input_image
- image_numpy = image_tensor[0].cpu().float().numpy() # convert it into a numpy array
- if image_numpy.shape[0] == 1: # grayscale to RGB
- image_numpy = np.tile(image_numpy, (3, 1, 1))
- image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0 # post-processing: transpose and scaling
- else: # if it is a numpy array, do nothing
- image_numpy = input_image
-
- if image_numpy.shape[2] <= 3:
- image_numpy = image_numpy.astype(np.uint8)
- image_pil = Image.fromarray(image_numpy)
- return image_pil
- else:
- return Image.fromarray(image_numpy.astype(np.uint8)) # more than 3 channels (e.g. RGBA): convert directly
-
-
-device = "cuda"
-def generate_multi_model(input_img):
-
- # parse config
- config = parse_config("./exp/sp2pII-phase2.yaml")
-
-
- # hard-code some parameters for test
- config['common']['phase'] = "test"
- config['dataset']['n_threads'] = 0 # test code only supports num_threads = 0
- config['dataset']['batch_size'] = 1 # test code only supports batch_size = 1
- config['dataset']['serial_batches'] = True # disable data shuffling; comment this line if results on randomly chosen images are needed.
- config['dataset']['no_flip'] = True # no flip; comment this line if results on flipped images are needed.
-
- # override data augmentation
- config['dataset']['load_size'] = config['testing']['load_size']
- config['dataset']['crop_size'] = config['testing']['crop_size']
- config['dataset']['preprocess'] = config['testing']['preprocess']
-
- config['training']['pretrained_model'] = "./pretrained_models/phase2_pretrain_90000.pth"
-
- # add testing path
- config['testing']['test_img'] = input_img
- config['testing']['test_video'] = None
- config['testing']['test_folder'] = None
-
- dataset = SuperDataset(config)
- dataloader = CustomDataLoader(config, dataset)
-
- model_dict = torch.load("./pretrained_models/phase2_pretrain_90000.pth", map_location='cpu')
-
- # init netG
- model = Stylizer(ngf=config['model']['ngf'], phase=2, model_weights=model_dict['G_ema_model']).to(device)
-
- for data in dataloader:
-
- real_A = data['test_A'].to(device)
- fake_B = model(real_A, mixing=False)
- output_img = tensor2file(fake_B) # get image results
-
- return output_img
-
-
-def generate_one_shot(src_img, img_prompt):
-
- # init model
- state_dict = torch.load(f"./checkpoints/{img_prompt[-2:]}/epoch_latest.pth", map_location='cpu')
- model = Stylizer(ngf=64, phase=3, model_weights=state_dict['G_ema_model'])
- model.to(device)
- model.eval()
- model.requires_grad_(False)
-
- clip_model, img_preprocess = clip.load('ViT-B/32', device=device)
- clip_model.eval()
- clip_model.requires_grad_(False)
-
- # image transform for stylizer
- img_transform = Compose([
- Resize((512, 512), interpolation=InterpolationMode.LANCZOS),
- ToTensor(),
- Normalize([0.5], [0.5])
- ])
-
- # get clip features
- with torch.no_grad():
- img = img_preprocess(Image.open(f"./example/reference/{img_prompt[-2:]}.png")).unsqueeze(0).to(device)
- clip_feats = clip_model.encode_image(img)
- clip_feats /= clip_feats.norm(dim=1, keepdim=True)
-
-
- # load image & to tensor
- img = Image.open(src_img)
- if not img.mode == 'RGB':
- img = img.convert('RGB')
- img = img_transform(img).unsqueeze(0).to(device)
-
- # stylize it !
- with torch.no_grad():
- res = model(img, clip_feats=clip_feats)
-
- output_img = tensor2file(res) # get image results
- return output_img
-
-
-def generate_zero_shot(src_img, txt_prompt):
- # init model
- state_dict = torch.load(f"./checkpoints/{txt_prompt.replace(' ', '_')}/epoch_latest.pth", map_location='cpu')
- model = Stylizer(ngf=64, phase=3, model_weights=state_dict['G_ema_model'])
- model.to(device)
- model.eval()
- model.requires_grad_(False)
-
- clip_model, img_preprocess = clip.load('ViT-B/32', device=device)
- clip_model.eval()
- clip_model.requires_grad_(False)
-
- # image transform for stylizer
- img_transform = Compose([
- Resize((512, 512), interpolation=InterpolationMode.LANCZOS),
- ToTensor(),
- Normalize([0.5], [0.5])
- ])
-
- # get clip features
- with torch.no_grad():
- text = clip.tokenize(txt_prompt).to(device)
- clip_feats = clip_model.encode_text(text)
- clip_feats /= clip_feats.norm(dim=1, keepdim=True)
-
-
- # load image & to tensor
- img = Image.open(src_img)
- if not img.mode == 'RGB':
- img = img.convert('RGB')
- img = img_transform(img).unsqueeze(0).to(device)
-
- # stylize it !
- with torch.no_grad():
- res = model(img, clip_feats=clip_feats)
-
- output_img = tensor2file(res) # get image results
- return output_img
-
-
-with gr.Blocks() as demo:
- # 顶部文字
- gr.Markdown("# MMFS")
-
- # 多个tab
- with gr.Tabs():
-
- with gr.TabItem("Multi-Model"):
- multi_input_img = gr.Image(label="Upload Input Face Image", type='filepath', height=400)
- gr.Examples(examples=["./example/source/01.png", "./example/source/02.png", "./example/source/03.png", "./example/source/04.png"], inputs=multi_input_img)
- multi_model_button = gr.Button("Random Stylize")
-
- multi_output_img = gr.Image(label="Output Image", height=400)
-
-
- with gr.TabItem("One-Shot"):
- one_shot_src_img = gr.Image(label="Upload Input Face Image", type='filepath', height=400)
- gr.Examples(examples=["./example/source/01.png", "./example/source/02.png", "./example/source/03.png", "./example/source/04.png"], inputs=one_shot_src_img)
- with gr.Row():
- gr.Image(shape=(100, 100), value = Image.open("example/reference/01.png"), type='pil', label="ref01")
- gr.Image(shape=(100, 100), value = Image.open("example/reference/02.png"), type='pil', label="ref02")
- gr.Image(shape=(100, 100), value = Image.open("example/reference/03.png"), type='pil', label="ref03")
- gr.Image(shape=(100, 100), value = Image.open("example/reference/04.png"), type='pil', label="ref04")
-
- one_shot_ref_img = gr.Radio(['ref01','ref02','ref03','ref04'],value="ref01", label="Select a reference style image")
-
- one_shot_test_button = gr.Button("Stylize Image")
-
- one_shot_output_img = gr.Image(label="Output Image", height=400)
-
-
- with gr.TabItem("Zero-Shot"):
- zero_shot_src_img = gr.Image(label="Upload Input Face Image", type='filepath', height=400)
- gr.Examples(examples=["./example/source/01.png", "./example/source/02.png", "./example/source/03.png", "./example/source/04.png"], inputs=zero_shot_src_img)
- zero_shot_ref_prompt = gr.Dropdown(
- label="Txt Prompt",
- info="Select a reference style prompt",
- choices=[
- "pop art",
- "watercolor painting",
- ],
- max_choices=1,
- value="pop art",
- )
-
- zero_shot_test_button = gr.Button("Stylize Image")
-
- zero_shot_output_img = gr.Image(label="Output Image", height=400)
-
-
- multi_model_button.click(fn=generate_multi_model, inputs=multi_input_img, outputs=multi_output_img)
- one_shot_test_button.click(fn=generate_one_shot, inputs=[one_shot_src_img, one_shot_ref_img], outputs=one_shot_output_img)
- zero_shot_test_button.click(fn=generate_zero_shot, inputs=[zero_shot_src_img, zero_shot_ref_prompt], outputs=zero_shot_output_img)
-
-demo.queue(max_size=20)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docker/Dockerfile b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docker/Dockerfile
deleted file mode 100644
index b4fc91216606d74fc4505c7d85330b557341a4f1..0000000000000000000000000000000000000000
--- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docker/Dockerfile
+++ /dev/null
@@ -1,68 +0,0 @@
-FROM nvidia/cuda:11.8.0-devel-ubuntu22.04 as builder
-
-RUN apt-get update && \
- apt-get install --no-install-recommends -y git vim build-essential python3-dev python3-venv && \
- rm -rf /var/lib/apt/lists/*
-
-RUN git clone https://github.com/oobabooga/GPTQ-for-LLaMa /build
-
-WORKDIR /build
-
-RUN python3 -m venv /build/venv
-RUN . /build/venv/bin/activate && \
- pip3 install --upgrade pip setuptools && \
- pip3 install torch torchvision torchaudio && \
- pip3 install -r requirements.txt
-
-# https://developer.nvidia.com/cuda-gpus
-# for a rtx 2060: ARG TORCH_CUDA_ARCH_LIST="7.5"
-ARG TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5;8.0;8.6+PTX"
-RUN . /build/venv/bin/activate && \
- python3 setup_cuda.py bdist_wheel -d .
-
-FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04
-
-LABEL maintainer="Your Name "
-LABEL description="Docker image for GPTQ-for-LLaMa and Text Generation WebUI"
-
-RUN apt-get update && \
- apt-get install --no-install-recommends -y libportaudio2 libasound-dev git python3 python3-pip make g++ && \
- rm -rf /var/lib/apt/lists/*
-
-RUN --mount=type=cache,target=/root/.cache/pip pip3 install virtualenv
-RUN mkdir /app
-
-WORKDIR /app
-
-ARG WEBUI_VERSION
-RUN test -n "${WEBUI_VERSION}" && git reset --hard ${WEBUI_VERSION} || echo "Using provided webui source"
-
-RUN virtualenv /app/venv
-RUN . /app/venv/bin/activate && \
- pip3 install --upgrade pip setuptools && \
- pip3 install torch torchvision torchaudio
-
-COPY --from=builder /build /app/repositories/GPTQ-for-LLaMa
-RUN . /app/venv/bin/activate && \
- pip3 install /app/repositories/GPTQ-for-LLaMa/*.whl
-
-COPY extensions/api/requirements.txt /app/extensions/api/requirements.txt
-COPY extensions/elevenlabs_tts/requirements.txt /app/extensions/elevenlabs_tts/requirements.txt
-COPY extensions/google_translate/requirements.txt /app/extensions/google_translate/requirements.txt
-COPY extensions/silero_tts/requirements.txt /app/extensions/silero_tts/requirements.txt
-COPY extensions/whisper_stt/requirements.txt /app/extensions/whisper_stt/requirements.txt
-RUN --mount=type=cache,target=/root/.cache/pip . /app/venv/bin/activate && cd extensions/api && pip3 install -r requirements.txt
-RUN --mount=type=cache,target=/root/.cache/pip . /app/venv/bin/activate && cd extensions/elevenlabs_tts && pip3 install -r requirements.txt
-RUN --mount=type=cache,target=/root/.cache/pip . /app/venv/bin/activate && cd extensions/google_translate && pip3 install -r requirements.txt
-RUN --mount=type=cache,target=/root/.cache/pip . /app/venv/bin/activate && cd extensions/silero_tts && pip3 install -r requirements.txt
-RUN --mount=type=cache,target=/root/.cache/pip . /app/venv/bin/activate && cd extensions/whisper_stt && pip3 install -r requirements.txt
-
-COPY requirements.txt /app/requirements.txt
-RUN . /app/venv/bin/activate && \
- pip3 install -r requirements.txt
-
-RUN cp /app/venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda118.so /app/venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
-
-COPY . /app/
-ENV CLI_ARGS=""
-CMD . /app/venv/bin/activate && python3 server.py ${CLI_ARGS}
diff --git a/spaces/duycse1603/math2tex/ScanSSD/gtdb/create_dataset.py b/spaces/duycse1603/math2tex/ScanSSD/gtdb/create_dataset.py
deleted file mode 100644
index caeebeead9b59dc25d63fb1decc9839a627fef9c..0000000000000000000000000000000000000000
--- a/spaces/duycse1603/math2tex/ScanSSD/gtdb/create_dataset.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# Author: Parag Mali
-# This script reads ground truths and normalizes them using image size
-
-# read the image
-import sys
-sys.path.extend(['/home/psm2208/code', '/home/psm2208/code'])
-import cv2
-import os
-import numpy as np
-from multiprocessing import Pool
-from gtdb import fit_box
-from gtdb import feature_extractor
-import argparse
-
-# Default parameters for the GTDB dataset
-def parse_args():
-
- parser = argparse.ArgumentParser(
- description='Stitching method')
-
- parser.add_argument('--data_file', default='test',
- type=str, help='choose one')
- parser.add_argument('--output_dir', default='.',
- help='Output directory path')
- parser.add_argument('--math_dir', required=True,
- type=str, help='detections dir')
- parser.add_argument('--math_ext', default='.csv',
- help='Extension of detection files')
- parser.add_argument('--home_data', default='/home/psm2208/data/GTDB/', type = str,
- help='database dir')
- parser.add_argument('--home_eval', default='/home/psm2208/code/eval/', type = str,
- help='Eval dir')
- parser.add_argument('--home_images', default='/home/psm2208/data/GTDB/images/', type = str,
- help='Images dir')
- parser.add_argument('--home_anno', default='/home/psm2208/data/GTDB/annotations/', type = str,
- help='Annotations dir')
- parser.add_argument('--home_char', default='/home/psm2208/data/GTDB/char_annotations/', type = str,
- help='Char anno dir')
- parser.add_argument('--num_workers', default=4, type=int, help='Number of workers')
-
- return parser.parse_args()
-
-def read_math(args, pdf_name):
-
- math_file = os.path.join(args.math_dir, pdf_name + args.math_ext)
- data = np.array([])
-
- if os.path.exists(math_file):
- data = np.genfromtxt(math_file, delimiter=',')
-
- # if there is only one entry convert it to correct form required
- if len(data.shape) == 1:
- data = data.reshape(1, -1)
-
- if args.math_ext == '.char':
- data = np.delete(data,1,1)
- data = data[:,:5]
-
- return data.astype(int)
-
-def normalize(params):
-
- args, math_regions, pdf_name, page_num = params
- print('Processing ', pdf_name, ' > ', page_num)
-
- image = cv2.imread(os.path.join(args.home_images, pdf_name, str(int(page_num + 1)) + ".png"))
- im_bw = fit_box.convert_to_binary(image)
-
- new_math = []
- for math in math_regions:
- box = [math[0]/im_bw.shape[1], math[1]/im_bw.shape[0],
- math[2]/im_bw.shape[1], math[3]/im_bw.shape[0]]#fit_box.adjust_box(im_bw, math)
-
- if feature_extractor.width(box) > 0 and feature_extractor.height(box) > 0:
- new_math.append(box)
-
- return new_math
-
-def normalize_boxes(args):
- pdf_list = []
- pdf_names_file = open(args.data_file, 'r')
-
- for pdf_name in pdf_names_file:
- pdf_name = pdf_name.strip()
-
- if pdf_name != '':
- pdf_list.append(pdf_name)
-
- math_regions = {}
-
- for pdf_name in pdf_list:
- math_regions[pdf_name] = read_math(args, pdf_name)
-
- voting_ip_list = []
- for pdf_name in pdf_list:
-
- pages = np.unique(math_regions[pdf_name][:, 0])
-
- #args, math_regions, pdf_name, page_num
- for page_num in pages:
- current_math = math_regions[pdf_name][math_regions[pdf_name][:,0] == page_num]
- voting_ip_list.append([args, np.delete(current_math, 0, 1), pdf_name, page_num])
-
- pool = Pool(processes=args.num_workers)
- out = pool.map(normalize, voting_ip_list)
-
- for ip, final_math in zip(voting_ip_list, out):
- pdf_name = ip[2]
- page_num = ip[3]
-
- col = np.array([int(page_num)] * len(final_math))
- final_math = np.concatenate((col[:, np.newaxis], final_math), axis=1)
-
- math_file_path = os.path.join(args.output_dir, pdf_name + '.csv')
-
- if not os.path.exists(os.path.dirname(math_file_path)):
- os.makedirs(os.path.dirname(math_file_path))
-
- math_file = open(math_file_path, 'a')
-
- np.savetxt(math_file, final_math, fmt='%.2f', delimiter=',')
- math_file.close()
-
-if __name__ == '__main__':
-
- args = parse_args()
- normalize_boxes(args)
diff --git a/spaces/duycse1603/math2tex/ScanSSD/gtdb/create_segmentation_gt.py b/spaces/duycse1603/math2tex/ScanSSD/gtdb/create_segmentation_gt.py
deleted file mode 100644
index 8c209dbe98b318a192115b0f4f07e99788fcb196..0000000000000000000000000000000000000000
--- a/spaces/duycse1603/math2tex/ScanSSD/gtdb/create_segmentation_gt.py
+++ /dev/null
@@ -1,171 +0,0 @@
-# Rectangles after projection
-
-import sys
-sys.path.extend(['/home/psm2208/code', '/home/psm2208/code'])
-import os
-import csv
-import numpy as np
-from multiprocessing import Pool
-import shutil
-from gtdb import feature_extractor
-
-def intersects(first, other):
- return not (first[2] < other[0] or
- first[0] > other[2] or
- first[1] > other[3] or
- first[3] < other[1])
-
-
-def create_gt(args):
- count = 0
-
- try:
- output_dir, pdf_name, page_num, gt_page_math, det_page_math = args
-
-
- inside_gt_dict = {}
-
- # for each det i, find the gt with which it intersects
- for i, det in enumerate(det_page_math):
-
- inside_gt_dict[i] = set()
-
- for j, gt in enumerate(gt_page_math):
- if intersects(det, gt):
- inside_gt_dict[i].add(j)
-
- check_dict = {}
- for i, gt in enumerate(gt_page_math):
-
- check_dict[i] = set()
-
- for j, det in enumerate(det_page_math):
- if check_inside(det, gt):
- check_dict[i].add(j)
-
- for key in check_dict:
- if len(check_dict[key]) > 1:
- count = count + 1
-
- segmentation_gt = []
-
- for i, det_math1 in enumerate(det_page_math):
-
- min = float('inf')
- min_idx = -1
-
- x1 = det_math1[0] + ((det_math1[2] - det_math1[0]) / 2)
- y1 = det_math1[1] + ((det_math1[3] - det_math1[1]) / 2)
-
- for j, det_math in enumerate(det_page_math):
- if i != j:
-
- x2 = det_math[0] + ((det_math[2] - det_math[0]) / 2)
- y2 = det_math[1] + ((det_math[3] - det_math[1]) / 2)
-
- c_dist = np.sqrt((y2 - y1) * (y2 - y1) + (x2 - x1) * (x2 - x1))#feature_extractor.vertical_dist_bb(det_page_math[i], det_page_math[j])
-
- if c_dist < min:
- min = c_dist
- min_idx = j
-
- if len(inside_gt_dict[i].intersection(inside_gt_dict[min_idx])) > 0:
- # positive example
- segmentation_gt.append(
- feature_extractor.extract_features(det_page_math[i], det_page_math[min_idx], 1))
- else:
- #negative example
- segmentation_gt.append(
- feature_extractor.extract_features(det_page_math[i], det_page_math[min_idx], 0))
-
- output_file = os.path.join(output_dir, "gt.csv")
- writer = csv.writer(open(output_file,"a"), delimiter=",")
-
- for gt_row in segmentation_gt:
- writer.writerow(gt_row)
-
- print('Processed ', pdf_name, ' ', page_num)
-
- except:
- print("Exception while processing ", pdf_name, " ", page_num, " ", sys.exc_info())
-
- return count
-
-def create_gt_segmentation(filename, gt_math_dir, det_math_dir, output_dir):
-
- if os.path.exists(output_dir):
- shutil.rmtree(output_dir)
-
- if not os.path.exists(output_dir):
- os.mkdir(output_dir)
-
- pages_list = []
- pdf_names = open(filename, 'r')
-
- for pdf_name in pdf_names:
- print('Processing-1', pdf_name)
- pdf_name = pdf_name.strip()
-
- if pdf_name != '':
- gt_math_file = os.path.join(gt_math_dir, pdf_name + ".csv")
- gt_math_regions = np.genfromtxt(gt_math_file, delimiter=',', dtype=int)
-
- det_math_file = os.path.join(det_math_dir, pdf_name + ".csv")
- det_math_regions = np.genfromtxt(det_math_file, delimiter=',', dtype=int)
-
- pages = np.unique(gt_math_regions[:, 0])
-
- for page_num in pages:
-
- gt_page_math = gt_math_regions[np.where(gt_math_regions[:,0]==page_num)]
- gt_page_math = gt_page_math[:,1:]
-
- det_page_math = det_math_regions[np.where(det_math_regions[:, 0] == page_num)]
- det_page_math = det_page_math[:, 1:]
-
- pages_list.append([output_dir, pdf_name, page_num, gt_page_math, det_page_math])
-
- pdf_names.close()
-
- pool = Pool(processes=1)
- result = pool.map(create_gt, pages_list)
- pool.close()
- pool.join()
- print('Merged regions', np.sum(result))
-
-
-def check_inside(rectA, rectB):
-
- # returns True if A is inside B
- #left, top, right, bottom
- #If any of the sides from A are outside of B
- if rectA[3] < rectB[1]: # if bottom of rectA is less than top of rectB
- return False
- if rectA[1] > rectB[3]: # if top of rectA is greater than bottom of rectB
- return False
- if rectA[2] < rectB[0]: # if right of rectA is less than left of rectB
- return False
- if rectA[0] > rectB[2]: # if left of rectangleA is greater than right of rectB
- return False
-
- #If none of the sides from A are outside B
- return True
-
-if __name__ == "__main__":
- home_data = "/home/psm2208/data/GTDB/"
- home_eval = "/home/psm2208/code/eval/"
- home_images = "/home/psm2208/data/GTDB/images/"
- home_anno = "/home/psm2208/data/GTDB/annotations/"
- home_char = "/home/psm2208/data/GTDB/char_annotations/"
-
- output_dir = "/home/psm2208/code/eval/segmentation_gt/"
- gt_math = "/home/psm2208/Workspace/Task3_Detection/Train/GT_math_csv/"
-
- det_math = "/home/psm2208/code/eval/Train3_Focal_10_25/equal_30.0"
-
- type = sys.argv[1]
-
-
- #filename, gt_math_dir, det_math_dir, output_dir
- create_gt_segmentation(home_data + type, gt_math, det_math, output_dir)
-
diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/models/__init__.py b/spaces/emc348/faces-through-time/models/StyleCLIP/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/enzostvs/hair-colour/Dockerfile b/spaces/enzostvs/hair-colour/Dockerfile
deleted file mode 100644
index fcc1e0fa5b760803490b0294e65afe3349b6a53d..0000000000000000000000000000000000000000
--- a/spaces/enzostvs/hair-colour/Dockerfile
+++ /dev/null
@@ -1,28 +0,0 @@
-# Dockerfile
-
-# Use an official Node.js runtime as the base image
-FROM node:18
-
-# Set the working directory in the container
-WORKDIR /usr/src/app
-
-# Copy package.json and package-lock.json to the container
-COPY package.json package-lock.json ./
-
-# Install dependencies
-RUN npm install
-
-# Copy the rest of the application files to the container
-COPY . .
-
-# Build the Next.js application for production
-RUN npm run build
-
-# Expose the application port (assuming your app runs on port 3000)
-EXPOSE 3000
-
-RUN mkdir -p /app/node_modules/@xenova/.cache/
-RUN chmod 777 -R /app/node_modules/@xenova/
-
-# Start the application
-CMD ["npm", "start"]
\ No newline at end of file
diff --git a/spaces/esraa-abdelmaksoud/Dominant-Ad-Colors-Detection/README.md b/spaces/esraa-abdelmaksoud/Dominant-Ad-Colors-Detection/README.md
deleted file mode 100644
index aa767ffced478f1d8345bd4ea04e6c33ea337f49..0000000000000000000000000000000000000000
--- a/spaces/esraa-abdelmaksoud/Dominant-Ad-Colors-Detection/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Dominant Ad Colors Detection
-emoji: 🎨
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
-license: cc-by-4.0
----
-
-Copyright: The designs used as examples were designed by Esraa Abdelmaksoud for SimpleSite web solutions company in Denmark.
-
-Git Repository: https://github.com/esraa-abdelmaksoud/Dominant-Ad-Colors
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/estusgroup/ai-qr-code-generator-beta-v2/README.md b/spaces/estusgroup/ai-qr-code-generator-beta-v2/README.md
deleted file mode 100644
index 0252a89bfc4d9fc55d3d73761fe1e967dd7ce21b..0000000000000000000000000000000000000000
--- a/spaces/estusgroup/ai-qr-code-generator-beta-v2/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: AI QR Code Art Generator BETA V2
-emoji: 🚀
-colorFrom: pink
-colorTo: pink
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-suggested_hardware: t4-medium
-startup_duration_timeout: 1h
-duplicated_from: estusgroup/ai-qr-generator-earlybeta
-license: cc-by-nc-nd-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/eunjae/LoRA-DreamBooth-Training-UI/app.py b/spaces/eunjae/LoRA-DreamBooth-Training-UI/app.py
deleted file mode 100644
index 1b47590d28504c5832a3fbb2fcd4f5ef121cf7d8..0000000000000000000000000000000000000000
--- a/spaces/eunjae/LoRA-DreamBooth-Training-UI/app.py
+++ /dev/null
@@ -1,76 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-
-import gradio as gr
-import torch
-
-from app_inference import create_inference_demo
-from app_training import create_training_demo
-from app_upload import create_upload_demo
-from inference import InferencePipeline
-from trainer import Trainer
-
-TITLE = '# LoRA DreamBooth Training UI'
-
-ORIGINAL_SPACE_ID = 'lora-library/LoRA-DreamBooth-Training-UI'
-SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID)
-SHARED_UI_WARNING = f'''# Attention - This Space doesn't work in this shared UI. You can duplicate and use it with a paid private T4 GPU.
-
-
-You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces.
-"T4 small" is sufficient to run this demo.
-
-'''
-
-HF_TOKEN_NOT_SPECIFIED_WARNING = f'''# Attention - The environment variable `HF_TOKEN` is not specified. Please specify your Hugging Face token with write permission as the value of it.
-
-You can check and create your Hugging Face tokens here.
-You can specify environment variables in the "Repository secrets" section of the {SETTINGS} tab.
-
-'''
-
-HF_TOKEN = os.getenv('HF_TOKEN')
-
-
-def show_warning(warning_text: str) -> gr.Blocks:
- with gr.Blocks() as demo:
- with gr.Box():
- gr.Markdown(warning_text)
- return demo
-
-
-pipe = InferencePipeline(HF_TOKEN)
-trainer = Trainer(HF_TOKEN)
-
-with gr.Blocks(css='style.css') as demo:
- if os.getenv('IS_SHARED_UI'):
- show_warning(SHARED_UI_WARNING)
- if not torch.cuda.is_available():
- show_warning(CUDA_NOT_AVAILABLE_WARNING)
- if not HF_TOKEN:
- show_warning(HF_TOKEN_NOT_SPECIFIED_WARNING)
-
- gr.Markdown(TITLE)
- with gr.Tabs():
- with gr.TabItem('Train'):
- create_training_demo(trainer, pipe)
- with gr.TabItem('Test'):
- create_inference_demo(pipe, HF_TOKEN)
- with gr.TabItem('Upload'):
- gr.Markdown('''
- - You can use this tab to upload models later if you choose not to upload models in training time or if upload in training time failed.
- ''')
- create_upload_demo(HF_TOKEN)
-
-demo.queue(max_size=1).launch(share=False)
diff --git a/spaces/facebook/MusicGen/model_cards/MUSICGEN_MODEL_CARD.md b/spaces/facebook/MusicGen/model_cards/MUSICGEN_MODEL_CARD.md
deleted file mode 100644
index 68e81d4467008d597f1e17105b37adff78c8218c..0000000000000000000000000000000000000000
--- a/spaces/facebook/MusicGen/model_cards/MUSICGEN_MODEL_CARD.md
+++ /dev/null
@@ -1,105 +0,0 @@
-# MusicGen Model Card
-
-## Model details
-
-**Organization developing the model:** The FAIR team of Meta AI.
-
-**Model date:** MusicGen was trained between April 2023 and May 2023.
-
-**Model version:** This is the version 1 of the model.
-
-**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
-
-**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv].
-
-**Citation details:** See [our paper][arxiv]
-
-**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
-
-**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [GitHub repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
-
-## Intended use
-**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
-
-- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
-- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
-
-**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
-
-**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
-
-## Metrics
-
-**Models performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark:
-
-- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
-- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
-- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
-
-Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
-
-- Overall quality of the music samples;
-- Text relevance to the provided text input;
-- Adherence to the melody for melody-guided music generation.
-
-More details on performance measures and human studies can be found in the paper.
-
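-As a rough, generic sketch only (not the exact evaluation pipeline behind the numbers reported here), the Frechet distance underlying FAD compares Gaussians fitted to embeddings of reference and generated audio. Assuming the embeddings have already been extracted with a VGGish-style classifier, the distance itself can be computed as:
-
-```python
-import numpy as np
-from scipy import linalg
-
-def frechet_distance(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
-    """Frechet distance between Gaussians fitted to two (N x D) embedding sets."""
-    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
-    cov_a = np.cov(emb_a, rowvar=False)
-    cov_b = np.cov(emb_b, rowvar=False)
-    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
-    if np.iscomplexobj(covmean):
-        covmean = covmean.real  # drop tiny imaginary parts from numerical error
-    diff = mu_a - mu_b
-    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
-```
-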
-**Decision thresholds:** Not applicable.
-
-## Evaluation datasets
-
-The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
-
-## Training datasets
-
-The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
-
-## Evaluation results
-
-Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
-
-| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
-|---|---|---|---|---|
-| facebook/musicgen-small | 4.88 | 1.42 | 0.27 | - |
-| facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - |
-| facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - |
-| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
-
-More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Results section.
-
-## Limitations and biases
-
-**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance.
-
-**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
-
-**Limitations:**
-
-- The model is not able to generate realistic vocals.
-- The model has been trained with English descriptions and will not perform as well in other languages.
-- The model does not perform equally well for all music styles and cultures.
-- The model sometimes generates end of songs, collapsing to silence.
-- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
-
-**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
-
-**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data.
-
-**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
-
-## Update: stereo models and large melody.
-
-We further release a set of stereophonic capable models. Those were fine-tuned for 200k updates starting
-from the mono models. The training data is otherwise identical and capabilities and limitations are shared with the base models. The stereo models work by getting 2 streams of tokens from the EnCodec model, and interleaving those using
-the delay pattern (a rough sketch of this interleaving follows the model list below). We also release a mono large model with melody conditioning capabilities. The list of new models
-is as follows:
-
-- facebook/musicgen-stereo-small
-- facebook/musicgen-stereo-medium
-- facebook/musicgen-stereo-large
-- facebook/musicgen-stereo-melody
-- facebook/musicgen-melody-large
-- facebook/musicgen-stereo-melody-large
-
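-A rough, illustrative sketch of a "delay" codebook pattern (not the audiocraft implementation; the exact stereo interleaving used by the released checkpoints may differ): codebook k is shifted k steps to the right, so at decoding step t the model predicts codebook k's token for frame t - k. Two channels can simply be stacked along the codebook axis before applying the pattern:
-
-```python
-import torch
-
-def apply_delay_pattern(codes: torch.Tensor, pad_token: int) -> torch.Tensor:
-    """codes: (num_codebooks, T) -> (num_codebooks, T + num_codebooks - 1)."""
-    k, t = codes.shape
-    out = torch.full((k, t + k - 1), pad_token, dtype=codes.dtype)
-    for i in range(k):
-        out[i, i:i + t] = codes[i]  # shift codebook i right by i steps
-    return out
-
-left = torch.randint(0, 1024, (4, 8))     # 4 codebooks, 8 frames (toy sizes)
-right = torch.randint(0, 1024, (4, 8))
-stacked = torch.cat([left, right], dim=0)               # stack the two channels along the codebook axis
-delayed = apply_delay_pattern(stacked, pad_token=1024)  # shape (8, 15)
-```
-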
-
-[arxiv]: https://arxiv.org/abs/2306.05284
diff --git a/spaces/facebook/StyleNeRF/launcher.py b/spaces/facebook/StyleNeRF/launcher.py
deleted file mode 100644
index 5296a93c256b48ae43cf57eb7876e3fb3b3dfe68..0000000000000000000000000000000000000000
--- a/spaces/facebook/StyleNeRF/launcher.py
+++ /dev/null
@@ -1,189 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-import random, shlex, datetime
-import os, sys, subprocess, shutil
-from glob import iglob
-
-
-def copy_all_python_files(
- source, snapshot_main_dir, code_snapshot_hash, recurse_dirs="fairseq"
-):
- """
- Copies following files from source to destination:
- a) all *.py files at direct source location.
- b) all fairseq/*.py recursively (default); recurse through comma-separated recurse_dirs
- """
- os.makedirs(snapshot_main_dir, exist_ok=True)
- destination = os.path.join(snapshot_main_dir, code_snapshot_hash)
- assert not os.path.exists(destination), "Code snapshot: {0} already exists".format(
- code_snapshot_hash
- )
- os.makedirs(destination)
-
- def all_pys(recurse_dirs):
- yield from iglob(os.path.join(source, "*.py"))
- for d in recurse_dirs.split(","):
- yield from iglob(os.path.join(source, d, "**/*.py"), recursive=True)
- yield from iglob(os.path.join(source, d, "**/*.so"), recursive=True)
- yield from iglob(os.path.join(source, d, "**/*.yaml"), recursive=True)
-
- for filepath in all_pys(recurse_dirs):
- directory, filename = os.path.split(filepath)
- if directory:
- os.makedirs(os.path.join(destination, directory), exist_ok=True)
- shutil.copy2(
- os.path.join(source, filepath), os.path.join(destination, filepath)
- )
- return destination
-
-def launch_cluster(slurm_args, model_args):
- # prepare
- jobname = slurm_args.get('job-name', 'test')
- if slurm_args.get('workplace') is not None:
- os.makedirs(slurm_args.get('workplace'), exist_ok=True)
- if slurm_args.get('workplace') is not None:
- train_log = os.path.join(slurm_args['workplace'], 'train.%A.out')
- train_stderr = os.path.join(slurm_args['workplace'], 'train.%A.stderr.%j')
- else:
- train_log = train_stderr = None
- nodes, gpus = slurm_args.get('nodes', 1), slurm_args.get('gpus', 8)
- if not slurm_args.get('local', False):
- assert (train_log is not None) and (train_stderr is not None)
- # parse slurm
-
- destination = ""
- # if slurm_args.get('workplace', None) is not None:
- # # Currently hash is just the current time in ISO format.
- # # Remove colons since they cannot be escaped in POSIX PATH env vars.
- # code_snapshot_hash = datetime.datetime.now().isoformat().replace(":", "_")
- # destination = copy_all_python_files(
- # ".",
- # os.path.join(slurm_args['workplace'], "slurm_snapshot_code"),
- # code_snapshot_hash,
- # 'fairseq',
- # )
- # os.environ["PYTHONPATH"] = destination + ":" + os.environ.get("PYTHONPATH", "")
- # print('creat snapshot at {}'.format(destination))
-
- train_cmd = ['python', os.path.join(destination, 'run_train.py'), ]
- train_cmd.extend([f'gpus={nodes * gpus}'])
- train_cmd.extend([f'port={get_random_port()}'])
- train_cmd += model_args
-
- base_srun_cmd = [
- 'srun',
- '--job-name', jobname,
- '--output', train_log,
- '--error', train_stderr,
- '--open-mode', 'append',
- '--unbuffered',
- ]
- srun_cmd = base_srun_cmd + train_cmd
- srun_cmd_str = ' '.join(map(shlex.quote, srun_cmd))
- srun_cmd_str = srun_cmd_str + ' &'
-
- sbatch_cmd = [
- 'sbatch',
- '--job-name', jobname,
- '--partition', slurm_args.get('partition', 'learnfair'),
- '--gres', 'gpu:volta:{}'.format(gpus),
- '--nodes', str(nodes),
- '--ntasks-per-node', '1',
- '--cpus-per-task', '20',
- '--output', train_log,
- '--error', train_stderr,
- '--open-mode', 'append',
- '--signal', 'B:USR1@180',
- '--time', slurm_args.get('time', '4320'),
- '--mem', slurm_args.get('mem', '500gb'),
- '--exclusive',
- '--exclude', 'learnfair5035,learnfair5289,learnfair5088,learnfair5028,learnfair5032,learnfair5033,learnfair5056,learnfair5098,learnfair5122,learnfair5124,learnfair5156,learnfair5036,learnfair5258,learnfair5205,learnfair5201,learnfair5240,learnfair5087,learnfair5119,learnfair5246,learnfair7474,learnfair7585,learnfair5150,learnfair5166,learnfair5215,learnfair5142,learnfair5070,learnfair5236,learnfair7523'
- ]
- if 'constraint' in slurm_args:
- sbatch_cmd += ['-C', slurm_args.get('constraint')]
- if 'comment' in slurm_args:
- sbatch_cmd += ['--comment', slurm_args.get('comment')]
-
- wrapped_cmd = requeue_support() + '\n' + srun_cmd_str + ' \n wait $! \n sleep 610 & \n wait $!'
- sbatch_cmd += ['--wrap', wrapped_cmd]
- sbatch_cmd_str = ' '.join(map(shlex.quote, sbatch_cmd))
-
- # start training
- env = os.environ.copy()
- env['OMP_NUM_THREADS'] = '2'
- env['NCCL_SOCKET_IFNAME'] = ''
-
- if env.get('SLURM_ARGS', None) is not None:
- del env['SLURM_ARGS']
-
- if nodes > 1:
- env['NCCL_SOCKET_IFNAME'] = '^docker0,lo'
- env['NCCL_DEBUG'] = 'INFO'
-
- if slurm_args.get('dry-run', False):
- print(sbatch_cmd_str)
-
- elif slurm_args.get('local', False):
- assert nodes == 1, 'distributed training cannot be combined with local'
- if 'CUDA_VISIBLE_DEVICES' not in env:
- env['CUDA_VISIBLE_DEVICES'] = ','.join(map(str, range(gpus)))
- env['NCCL_DEBUG'] = 'INFO'
-
- if train_log is not None:
- train_proc = subprocess.Popen(train_cmd, env=env, stdout=subprocess.PIPE)
- tee_proc = subprocess.Popen(['tee', '-a', train_log], stdin=train_proc.stdout)
- train_proc.stdout.close()
- train_proc.wait()
- tee_proc.wait()
- else:
- train_proc = subprocess.Popen(train_cmd, env=env)
- train_proc.wait()
- else:
- with open(train_log, 'a') as train_log_h:
- print(f'running command: {sbatch_cmd_str}\n')
- with subprocess.Popen(sbatch_cmd, stdout=subprocess.PIPE, env=env) as train_proc:
- stdout = train_proc.stdout.read().decode('utf-8')
- print(stdout, file=train_log_h)
- try:
- job_id = int(stdout.rstrip().split()[-1])
- return job_id
- except IndexError:
- return None
-
-
-def launch(slurm_args, model_args):
- job_id = launch_cluster(slurm_args, model_args)
- if job_id is not None:
- print('Launched {}'.format(job_id))
- else:
- print('Failed.')
-
-
-def requeue_support():
- return """
- trap_handler () {
- echo "Caught signal: " $1
- # SIGTERM must be bypassed
- if [ "$1" = "TERM" ]; then
- echo "bypass sigterm"
- else
- # Submit a new job to the queue
- echo "Requeuing " $SLURM_JOB_ID
- scontrol requeue $SLURM_JOB_ID
- fi
- }
-
-
- # Install signal handler
- trap 'trap_handler USR1' USR1
- trap 'trap_handler TERM' TERM
- """
-
-
-def get_random_port():
- old_state = random.getstate()
- random.seed()
- port = random.randint(10000, 20000)
- random.setstate(old_state)
- return port
diff --git a/spaces/falterWliame/Face_Mask_Detection/FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang _BEST_.md b/spaces/falterWliame/Face_Mask_Detection/FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang _BEST_.md
deleted file mode 100644
index 3fc3685fb172ba6da9aa5b96bce340988dc1a455..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang _BEST_.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang
-
-Ixxat automation gmbh vci3 usb to can compact driver download 528769. ... Here is the default screen of the Sevcon DVT software. To acquire the software, I called ... 4d29de3e1b
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Lumion332bittorrent.md b/spaces/falterWliame/Face_Mask_Detection/Lumion332bittorrent.md
deleted file mode 100644
index 3674cd87ee812b3b15818a9ef35838ca8c7ecfeb..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Lumion332bittorrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-lumion332bittorrent · Queen Tamil Dubbed Movie · windows 8 pro with media center build 9200 genuine activation key · CorelDRAW Graphics Suite 2020 Crack ... 1fdad05405
-
-
-
diff --git a/spaces/fatiXbelha/sd/F1 2016 for Android The Ultimate Formula One Game with APK and OBB Data.md b/spaces/fatiXbelha/sd/F1 2016 for Android The Ultimate Formula One Game with APK and OBB Data.md
deleted file mode 100644
index ace36d26025e09bd27c7b24cfd316eddfa6728c8..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/F1 2016 for Android The Ultimate Formula One Game with APK and OBB Data.md
+++ /dev/null
@@ -1,67 +0,0 @@
-
-
F1 2016 Apk Obb: A Thrilling Racing Game for Mobile Devices
Write a catchy introduction that hooks the readers and gives them an overview of what the article is about.
What is F1 2016 Apk Obb?
Write a brief introduction to the game and its features. Explain what apk and obb files are and why they are needed to play the game.
F1 2016: The Official Videogame of the 2016 FIA Formula One World Championship
Write about the game's authenticity, realism, and immersion. Mention the official teams, drivers, circuits, and rules from the 2016 season. Cite some facts from or .
Apk and Obb Files: How They Work
Write about what apk and obb files are, how they differ, and how they work together to run the game. Mention that apk is the application package file that contains the game's code, while obb is the opaque binary blob file that contains the game's data, such as graphics, sounds, and videos. Cite some information from or .
How to Download and Install F1 2016 Apk Obb?
Write a step-by-step guide to download the apk and obb files from reliable sources. Provide links to the files and warn about potential risks of malware or viruses. Mention that the game is not available on Google Play Store or App Store.
-
Download the apk file from , , or . Choose the version that suits your device and preference.
-
Download the obb file from . Make sure it matches the apk version you downloaded.
-
Install a file explorer app like Zarchiver or RAR if you don't have one.
-
Extract the files from archives using your file explorer app.
-
Install the apk file by tapping on it. You may need to enable unknown sources in your device settings.
-
Copy or move the obb folder (not just the files inside it) to Android/obb using your file explorer app (or over USB, as in the sketch after this list).
-
Launch the game and enjoy :)
-
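If you prefer doing the apk install and obb copy from a computer over USB, here is a minimal sketch (the file and folder names below are placeholders for whatever your download actually uses; it assumes adb is installed and USB debugging is enabled on the device):

```python
import subprocess

APK = "f1-2016.apk"                   # placeholder name of the downloaded apk
OBB_DIR = "com.codemasters.F12016"    # placeholder name of the extracted obb folder

# Install the apk, then push the whole obb folder to Android/obb on the device.
subprocess.run(["adb", "install", APK], check=True)
subprocess.run(["adb", "push", OBB_DIR, f"/sdcard/Android/obb/{OBB_DIR}"], check=True)
```
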
What are the System Requirements for F1 2016 Apk Obb?
Write a list of the minimum and recommended specifications for running the game on different devices. Mention that some new devices may have compatibility issues with older games. Provide some examples of devices that can run the game smoothly or not.
What are the Game Features of F1 2016 Apk Obb?
Write a detailed overview of the game modes, options, and mechanics. Highlight the game's strengths and uniqueness. Mention some of the features that make the game fun and challenging, such as career mode, time trial, custom season, quick race, live events, weather effects, car damage, pit stops, etc. Cite some details from or .
Career Mode: Create Your Own Legend
Write about the career mode, which is the main attraction of the game. Explain how you can create your own driver, choose your team, and compete in a full season of 21 races. Mention how you can interact with your agent, engineer, and team boss, and make decisions that affect your performance and reputation. Mention how you can upgrade your car and customize your helmet.
Time Trial: Test Your Skills Against The Clock
Write about the time trial mode, which is a great way to practice and improve your skills. Explain how you can choose any track, car, and weather condition, and try to set the fastest lap time possible. Mention how you can compare your results with other players on the global leaderboards.
-
f1 2016 mobile apk and data updated links
-f1 2016 android game free download
-f1 2016 for android apk + obb download offline
-f1 2016 mod season 2022 apk
-f1 2016 android bug fixed and new camera apk
-f1 2016 apk + obb highly compressed
-f1 2016 apk vision download
-f1 2016 apk mediafire link
-f1 2016 apk reddit
-f1 2016 apk youtube
-f1 2016 apk combo
-f1 2016 apk codemasters
-f1 2016 apk full version
-f1 2016 apk latest version
-f1 2016 apk no license verification
-f1 2016 apk obb file size
-f1 2016 apk obb installation instructions
-f1 2016 apk obb zarchiver
-f1 2016 apk obb rar app
-f1 2016 apk obb compatibility issues
-f1 2016 apk obb working on new devices
-f1 2016 apk obb game crashes fix
-f1 2016 apk obb parsing the file error
-f1 2016 apk obb data folder location
-f1 2016 apk obb launch the game and enjoy
-f1 2016 official game of the formula one world championship
-f1 2016 full season single race and time trial mode
-f1 2016 all official circuits from the season
-f1 2016 stunning mobile racing game graphics
-f1 2016 realistic physics and car handling
-f1 2016 customisable controls and camera angles
-f1 2016 live leaderboard and race results
-f1 2016 career mode and championship mode
-f1 2016 create your own legend and challenge the best drivers
-f1 2016 unlock new helmets and liveries for your car
-f1 2016 compete with friends and other players online
-f1 2016 support for bluetooth controllers and gamepads
-f1 2016 immersive sound effects and commentary
-f1 2016 requires android version 5.0 or higher
-f1 2016 requires internet connection for some features
Custom Season: Create Your Own Championship
Write about the custom season mode, which is a flexible way to enjoy the game. Explain how you can customize your own championship by selecting the number of races, tracks, teams, drivers, and difficulty level. Mention how you can change the rules and settings to suit your preference.
Quick Race: Jump Into The Action
Write about the quick race mode, which is a simple way to have some fun. Explain how you can choose any track, car, and weather condition, and race against AI opponents or real players online. Mention how you can adjust the race length and difficulty level.
Live Events: Challenge Yourself With Real-World Scenarios
What are the Tips and Tricks for F1 2016 Apk Obb?
Write a collection of useful advice and strategies for improving your performance and enjoyment. Provide some tips on how to master the controls, optimize the settings, use the assists, manage the tyres, fuel, and brakes, overtake the rivals, avoid penalties, etc. Cite some suggestions from or .
Master the Controls: Find Your Comfort Zone
Write about the different control options available in the game, such as tilt, touch, or gamepad. Explain how you can adjust the sensitivity, calibration, and feedback of each option. Mention how you can choose between manual or automatic transmission, and how to use the clutch and gear buttons.
Optimize the Settings: Fine-Tune Your Experience
Write about the different settings options available in the game, such as graphics, sound, camera, and HUD. Explain how you can change the quality, resolution, and frame rate of the graphics to suit your device's performance. Mention how you can adjust the volume, language, and subtitles of the sound. Mention how you can switch between different camera angles, such as cockpit, chase, or TV pod. Mention how you can customize the HUD elements, such as speedometer, lap timer, mini-map, etc.
Use the Assists: Get Some Help When You Need It
Write about the different assists options available in the game, such as braking assist, traction control, anti-lock brakes, racing line, etc. Explain how you can turn them on or off depending on your skill level and preference. Mention how they can help you avoid mistakes and accidents, but also reduce your speed and challenge.
Manage the Tyres, Fuel, and Brakes: Plan Your Strategy
Overtake the Rivals: Find the Best Opportunities
Write about the art of overtaking in the game. Explain how you can use your skills, tactics, and tools to gain an advantage over your opponents. Mention how you can use the DRS (drag reduction system) and ERS (energy recovery system) to boost your speed and power. Mention how you can use the slipstream and braking zones to get closer and pass your rivals. Mention how you can use the radio commands to communicate with your team and request information or assistance.
Avoid Penalties: Follow the Rules
Write about the importance of avoiding penalties in the game. Explain how they can ruin your race and reputation. Mention some of the common infractions that can result in penalties, such as speeding in the pit lane, cutting corners, causing collisions, blocking, etc. Mention how you can appeal or serve your penalties, or avoid them altogether by driving cleanly and fairly.
Conclusion
Write a summary of the main points and a call to action for the readers. Remind them of what they learned from the article and why they should try the game. Invite them to share their feedback, questions, or suggestions in the comments section or on social media.
FAQs
Write a list of five frequently asked questions and answers about the game. Use clear and concise language and provide relevant information. Use bullet points for each question and answer pair.
- Q: How much space does F1 2016 Apk Obb take on my device?
  A: The apk file is about 80 MB, while the obb file is about 1 GB. You will need at least 1.5 GB of free space on your device to install and run the game.
- Q: Can I play F1 2016 Apk Obb offline?
  A: Yes, you can play most of the game modes offline, except for live events and online multiplayer. You will need an internet connection to download and update the game files, as well as to access some of the features and services.
- Q: Can I play F1 2016 Apk Obb with my friends?
  A: Yes, you can play with your friends online or locally. You can join or create online lobbies and race against up to 21 other players from around the world. You can also use Wi-Fi or Bluetooth to connect with nearby devices and race against up to 3 other players in split-screen mode.
- Q: How can I save my progress in F1 2016 Apk Obb?
  A: The game automatically saves your progress after each race or event. You can also save manually by tapping the save icon on the main menu, and load a saved game by tapping the load icon on the main menu.
- Q: How can I contact the developers of F1 2016 Apk Obb?
  A: You can contact the developers by visiting their official website, following them on social media, or sending them an email at support@codemasters.com.
-
-
\ No newline at end of file
diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/utils/common.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/utils/common.py
deleted file mode 100644
index b19e18ddcb78b06678fa18e4a76da44fc511b789..0000000000000000000000000000000000000000
--- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/utils/common.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from PIL import Image
-import matplotlib.pyplot as plt
-
-
-# Log images
-def log_input_image(x, opts):
- return tensor2im(x)
-
-
-def tensor2im(var):
- # var shape: (3, H, W)
- var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy()
- var = ((var + 1) / 2)
- var[var < 0] = 0
- var[var > 1] = 1
- var = var * 255
- return Image.fromarray(var.astype('uint8'))
-
-
-def vis_faces(log_hooks):
- display_count = len(log_hooks)
- fig = plt.figure(figsize=(8, 4 * display_count))
- gs = fig.add_gridspec(display_count, 3)
- for i in range(display_count):
- hooks_dict = log_hooks[i]
- fig.add_subplot(gs[i, 0])
- if 'diff_input' in hooks_dict:
- vis_faces_with_id(hooks_dict, fig, gs, i)
- else:
- vis_faces_no_id(hooks_dict, fig, gs, i)
- plt.tight_layout()
- return fig
-
-
-def vis_faces_with_id(hooks_dict, fig, gs, i):
- plt.imshow(hooks_dict['input_face'])
- plt.title('Input\nOut Sim={:.2f}'.format(float(hooks_dict['diff_input'])))
- fig.add_subplot(gs[i, 1])
- plt.imshow(hooks_dict['target_face'])
- plt.title('Target\nIn={:.2f}, Out={:.2f}'.format(float(hooks_dict['diff_views']),
- float(hooks_dict['diff_target'])))
- fig.add_subplot(gs[i, 2])
- plt.imshow(hooks_dict['output_face'])
- plt.title('Output\n Target Sim={:.2f}'.format(float(hooks_dict['diff_target'])))
-
-
-def vis_faces_no_id(hooks_dict, fig, gs, i):
- plt.imshow(hooks_dict['input_face'], cmap="gray")
- plt.title('Input')
- fig.add_subplot(gs[i, 1])
- plt.imshow(hooks_dict['target_face'])
- plt.title('Target')
- fig.add_subplot(gs[i, 2])
- plt.imshow(hooks_dict['output_face'])
- plt.title('Output')
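
For reference, here is a minimal usage sketch for the `tensor2im` helper in the file removed above. It is illustrative only: it assumes torch and Pillow are installed, that the module is importable as `utils.common` (the import path is a guess based on the file location), and it fabricates a random tensor in place of a real network output.

```python
# Illustrative sketch, not part of the original repository.
import torch

from utils.common import tensor2im  # assumed import path

# Stand-in for a generator output: a (3, H, W) tensor with values in [-1, 1],
# which is the range tensor2im expects before it rescales to [0, 255].
fake_output = torch.rand(3, 256, 256) * 2 - 1

image = tensor2im(fake_output)  # returns a PIL.Image
image.save("preview.png")
```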
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bad 2 Bad Apocalypse Mod Apk 1.2.5 - Free Shopping and No Ads.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bad 2 Bad Apocalypse Mod Apk 1.2.5 - Free Shopping and No Ads.md
deleted file mode 100644
index cec7c5ad94ac609aafe6be5bf91105975b0df0b4..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bad 2 Bad Apocalypse Mod Apk 1.2.5 - Free Shopping and No Ads.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
Bad 2 Bad: Apocalypse Mod APK 1.2.5 - A Fun and Action-Packed Role-Playing Game
- If you are looking for a fun and action-packed role-playing game that will keep you entertained for hours, then you should try Bad 2 Bad: Apocalypse. This game is a sequel to the popular Bad 2 Bad: Delta game, which was a hit among mobile gamers. In this game, you will join a team of elite soldiers who are fighting against the evil forces of Al-Qatala, a terrorist organization that wants to destroy the world.
What is Bad 2 Bad: Apocalypse?
- Bad 2 Bad: Apocalypse is a role-playing game that combines elements of shooting, strategy, and adventure. You will control a squad of animal soldiers who have different skills and abilities, such as sniping, hacking, healing, and more. You will also be able to customize your squad with various weapons, armor, and accessories.
The story and the gameplay of Bad 2 Bad: Apocalypse
- The story of Bad 2 Bad: Apocalypse follows the events of Bad 2 Bad: Delta, where you defeated the leader of Al-Qatala, Gorat al-Llama. However, his death triggered a nuclear explosion that caused a zombie apocalypse in the world. Now, you have to survive the hordes of zombies and mutants that are roaming the streets, while also fighting against the remnants of Al-Qatala. The gameplay of Bad 2 Bad: Apocalypse is similar to that of Bad 2 Bad: Delta, but with more features and challenges. You will have to complete various missions and quests, such as rescuing hostages, destroying enemy bases, collecting resources, and more. You will also have to manage your base, where you can upgrade your facilities, recruit new soldiers, and research new technologies.
The features and the graphics of Bad 2 Bad: Apocalypse
- Bad 2 Bad: Apocalypse has many features that make it an enjoyable and immersive game. Some of these features are:
  - A large open world map with different regions and environments
  - A variety of enemies and bosses with different behaviors and abilities
  - A dynamic weather system that affects the gameplay
  - A realistic physics engine that allows for destructible environments and ragdoll effects
  - A rich story mode with voice acting and cutscenes
  - A multiplayer mode where you can cooperate or compete with other players online

  The graphics of Bad 2 Bad: Apocalypse are also impressive, as they are colorful and detailed. The game has a cartoonish style that suits its humorous tone, but also realistic shadows and lighting effects that create a contrast between the cute characters and the dark atmosphere.
What is Bad 2 Bad: Apocalypse Mod APK 1.2.5?
- Bad 2 Bad: Apocalypse Mod APK 1.2.5 is a modified version of the original game that gives you some advantages and extra features. Some of these advantages and features are:
  - Unlimited bullets
  - No skill cooldown
  - Unlimited money
  - Unlimited diamonds
  - All characters unlocked
  - All weapons unlocked
The benefits of using Bad 2 Bad: Apocalypse Mod APK 1.2.5
- Using Bad 2 Bad: Apocalypse Mod APK 1.2.5 can make your gaming experience more fun and easy, as you will be able to:
  - Save time and resources by not having to worry about running out of bullets or waiting for your skills to recharge
  - Enjoy the game without limitations or restrictions by having access to all the characters and weapons
  - Experiment with different combinations and strategies by using different skills and weapons
  - Enhance your gaming performance and satisfaction by having more money and diamonds to spend on upgrades and items
The drawbacks of using Bad 2 Bad: Apocalypse Mod APK 1.2.5
- However, using Bad 2 Bad: Apocalypse Mod APK 1.2.5 also has some drawbacks and risks, such as:
  - Losing the challenge and the thrill of the game by making it too easy and boring
  - Missing out on the original features and updates of the game by using an outdated or incompatible version
  - Exposing your device and data to malware or viruses by downloading from untrusted sources
  - Violating the terms and conditions of the game by using an unauthorized modification
  - Getting banned or suspended from the game by the developers or the anti-cheat system
How to download and install Bad 2 Bad: Apocalypse Mod APK 1.2.5?
- If you still want to try Bad 2 Bad: Apocalypse Mod APK 1.2.5, you will need to follow some steps to download and install it on your device.
The steps to download and install Bad 2 Bad: Apocalypse Mod APK 1.2.5
- The steps are:
  - Find a reliable and safe website that offers Bad 2 Bad: Apocalypse Mod APK 1.2.5 for download
  - Click on the download button and wait for the file to be downloaded
  - Go to your device settings and enable the installation of apps from unknown sources
  - Locate the downloaded file in your file manager and tap on it to start the installation
  - Follow the instructions on the screen and wait for the installation to finish
  - Launch the game and enjoy
The precautions to take before downloading and installing Bad 2 Bad: Apocalypse Mod APK 1.2.5
- Before you download and install Bad 2 Bad: Apocalypse Mod APK 1.2.5, you should take some precautions to avoid any problems or issues, such as:
  - Make sure that your device has enough storage space and battery life
  - Make sure that your device is compatible with the game and the mod version
  - Make sure that you have a stable internet connection
  - Make a backup of your original game data in case something goes wrong
  - Scan the downloaded file with an antivirus or a malware detector before installing it
Conclusion
- Bad 2 Bad: Apocalypse is a fun and action-packed role-playing game that will keep you entertained for hours. You can join a team of elite soldiers who are fighting against zombies, mutants, and terrorists in a post-apocalyptic world. You can also customize your squad with various weapons, armor, and accessories. Bad 2 Bad: Apocalypse Mod APK 1.2.5 is a modified version of the game that gives you some advantages and extra features, such as unlimited bullets, money, diamonds, characters, and weapons. However, it also has some drawbacks and risks, such as losing the challenge, missing out on updates, exposing your device to malware, violating the terms, and getting banned. If you want to try Bad 2 Bad: Apocalypse Mod APK 1.2.5, you will need to follow some steps to download and install it on your device. You will also need to take some precautions before downloading and installing it.
FAQs
- Here are some frequently asked questions about Bad 2 Bad: Apocalypse Mod APK 1.2.5:
  - Q: Is Bad 2 Bad: Apocalypse Mod APK 1.2.5 free?
    A: Yes, it is free to download and use.
  - Q: Is Bad 2 Bad: Apocalypse Mod APK 1.2.5 safe?
    A: It depends on where you download it from. Some websites may offer fake or infected files that can harm your device or data. You should always download from trusted sources.
  - Q: How can I update Bad 2 Bad: Apocalypse Mod APK 1.2.5?
    A: You can update it by downloading the latest version of the mod from the same website where you downloaded it. You may also need to uninstall the previous version before installing the new one.
  - Q: Can I play Bad 2 Bad: Apocalypse Mod APK 1.2.5 offline?
    A: Yes, you can play it offline, as it does not require an internet connection to run. However, you may not be able to access some features or modes that require online connectivity, such as multiplayer or updates.
  - Q: Can I play Bad 2 Bad: Apocalypse Mod APK 1.2.5 with my friends?
    A: Yes, it has a multiplayer mode where you can cooperate or compete with other players online. You will need an internet connection and a valid account to join or create a room.
-
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download GTA 5 3D Models for Free - Characters Vehicles Weapons and More.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download GTA 5 3D Models for Free - Characters Vehicles Weapons and More.md
deleted file mode 100644
index 103a6f4d3b0e70d4fe433e6c6671d9447eec05b3..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download GTA 5 3D Models for Free - Characters Vehicles Weapons and More.md
+++ /dev/null
@@ -1,55 +0,0 @@
-
-
GTA 5 3D Download: How to Enjoy the Best Graphics and Gameplay
-
Grand Theft Auto V, or GTA 5 for short, is one of the most successful video games of all time. It has sold over 150 million copies worldwide and has won numerous awards and accolades. But what makes this game so special and how can you enjoy it in a whole new dimension? In this article, we will tell you everything you need to know about GTA 5 3D download, including what it is, how to get it, and how to optimize it for the best experience possible.
GTA 5 is an action-adventure game developed by Rockstar Games and released in 2013 for PlayStation 3 and Xbox 360, and later for PlayStation 4, Xbox One, and PC. It is the fifth main installment in the Grand Theft Auto series, which is known for its open-world, sandbox, and crime-themed gameplay.
-
The story and the characters of GTA 5
-
GTA 5 follows the lives of three protagonists: Michael, a retired bank robber who lives a luxurious but unhappy life in Los Santos; Franklin, a young street hustler who dreams of a better future; and Trevor, a psychotic drug dealer who lives in the desert. Their paths cross when they team up for a series of heists that put them in conflict with various criminal factions, corrupt officials, and federal agents.
-
The open-world and the activities of GTA 5
-
GTA 5 features a vast and detailed open-world that covers the fictional state of San Andreas, which is based on Southern California. The game allows you to explore various locations, such as urban areas, rural towns, mountains, deserts, forests, beaches, and oceans. You can also engage in various activities, such as driving, shooting, fighting, stealth, racing, flying, parachuting, swimming, diving, hunting, golfing, tennis, yoga, darts, bowling, arm wrestling, strip clubs, cinemas, amusement parks, casinos, nightclubs, arcades, and more.
-
gta 5 3d models free download
-gta 5 3d map download
-gta 5 3d game download for android
-gta 5 3d mod download
-gta 5 3d wallpaper download
-gta 5 3d animation download
-gta 5 3d graphics download
-gta 5 3d sound download
-gta 5 3d car models download
-gta 5 3d character models download
-gta 5 3d city download
-gta 5 3d logo download
-gta 5 3d font download
-gta 5 3d weapons download
-gta 5 3d skins download
-gta 5 3d icons download
-gta 5 3d art download
-gta 5 3d video download
-gta 5 3d apk download
-gta 5 3d pc game download
-gta 5 online 3d download
-gta san andreas to gta v (gta sa in hd and in real life) - full conversion mod
-
The online mode and the community of GTA 5
-
GTA 5 also features an online mode called GTA Online, which lets you create your own character and play with other players online. You can join or create crews, participate in missions, races, deathmatches, heists, businesses, events, challenges, and more. You can also customize your character's appearance, clothing, vehicles, weapons, properties, businesses, clubs, yachts, bunkers, hangars, and more. GTA Online has a huge and active community of players who create and share content, such as videos, screenshots, mods, maps, and more.
-
What is GTA 5 3D and how to download it?
-
GTA 5 3D is a way of playing GTA 5 in three-dimensional graphics that create a sense of depth and immersion. Playing GTA 5 in 3D can enhance your enjoyment and appreciation of the game's stunning visuals and realistic physics.
-
The benefits of playing GTA 5 in 3D
-
Playing GTA 5 in 3D can offer some of the following benefits:
- You can experience the game's world and action in a more realistic and immersive way, as if you were there.
- You can appreciate the game's graphics and details more, such as the lighting, shadows, textures, reflections, particles, and animations.
- You can have a better sense of distance, scale, and perspective, which can help you navigate, aim, and drive more accurately and smoothly.
- You can enjoy the game's cinematic moments and cutscenes more, as they feel more like watching a movie in 3D.
The requirements and the steps to download GTA 5 in 3D
-
To play GTA 5 in 3D, you will need the following:
- A PC version of GTA 5
- A 3D monitor or TV that supports either NVIDIA 3D Vision or AMD HD3D
- A pair of 3D glasses that are compatible with your monitor or TV
- A graphics card that supports either NVIDIA 3D Vision or AMD HD3D
- A high-end CPU and enough RAM to handle the increased graphics demand

To enable 3D in GTA 5, follow these steps:
- Install the latest drivers for your graphics card from either NVIDIA or AMD
- Launch GTA 5 and go to the Settings menu
- Go to the Graphics tab and enable Stereoscopic 3D
- Adjust the 3D settings according to your preference and system capabilities
- Save the settings and exit the menu
- Put on your 3D glasses and enjoy GTA 5 in 3D
The best websites and platforms to download GTA 5 in 3D
-
If you don't have a PC version of GTA 5 yet, you can buy and download it from one of these platforms:
- Steam: a popular digital distribution platform that offers GTA 5 for $29.99. You can also access Steam Workshop, which allows you to download and install mods for GTA 5 easily.
- Epic Games Store: another digital distribution platform that offers GTA 5 for $29.99. The store also gives away free games every week, and GTA 5 has been included occasionally.
- Rockstar Games Launcher: the official launcher for GTA 5 and other Rockstar games. You can buy GTA 5 for $29.99, or add it for free if you already own it on another platform. You can also access Rockstar Social Club, which lets you connect with other players online.
How to optimize your GTA 5 3D experience?
-
Playing GTA 5 in 3D can be a lot of fun, but it can also be challenging for your system and your eyes. Here are some tips on how to optimize your GTA 5 3D experience:
-
The settings and the tips to improve your graphics and performance
-
To improve your graphics and performance while playing GTA 5 in 3D, you should:
- Lower some of the graphics settings, such as resolution, anti-aliasing, texture quality, shadow quality, reflection quality, and grass quality. This will reduce the load on your system and improve your frame rate.
- Turn off some of the graphics features, such as motion blur, depth of field, lens flare, chromatic aberration, and ambient occlusion. This will make the image clearer and less distorted in 3D.
- Use a higher refresh rate for your monitor or TV, such as 120Hz or 144Hz. This will make the motion smoother and reduce flickering in 3D.
- Use a lower convergence setting for your stereoscopic 3D. This will reduce the eye strain and headache caused by viewing objects that are too close or too far in 3D.
- Take breaks every hour or so to rest your eyes and avoid fatigue.
-
The mods and the enhancements to customize your gameplay and visuals
-
To customize your gameplay and visuals while playing GTA 5 in 3D, you can use some of the mods and enhancements that are available online. Mods are modifications that add new features, content, or changes to the game. Enhancements are improvements to the game's existing features, content, or graphics. Here are some examples of mods and enhancements that you can use for GTA 5 in 3D:
- NaturalVision Evolved: a graphical enhancement that aims to make GTA 5 look as realistic and beautiful as possible. It features new weather effects, lighting effects, textures, models, shaders, and more. It also supports ray tracing, a technique that simulates realistic reflections, shadows, and lighting.
- GTA 5 Redux: another graphical enhancement that also improves the gameplay and the physics of GTA 5. It features new weather effects, lighting effects, textures, models, shaders, and more, and adds new traffic patterns, pedestrian behaviors, police behaviors, weapon effects, and sound effects.
- Realistic Driving V: a gameplay mod that makes driving in GTA 5 more realistic and challenging. It features new handling, suspension, tire, and brake models, and adds features such as ABS, ESP, traction control, engine damage, tire wear, and fuel consumption.
- LSPDFR: a gameplay mod that lets you play as a police officer in GTA 5. You can patrol the streets, respond to calls, chase suspects, arrest criminals, use police equipment, and more. You can also customize your character's appearance, clothing, badge, and rank.
- Script Hook V: a tool that lets you run scripts in GTA 5. Scripts are small programs that add new functions or features to the game, such as teleporting, spawning vehicles or weapons, or changing the time or weather.
Conclusion
-
GTA 5 is an amazing game that offers a lot of fun and entertainment for anyone who loves action-adventure games. But if you want to take your gaming experience to the next level, you should try playing GTA 5 in 3D. GTA 5 in 3D can make you feel like you are part of the game's world and action, and give you a new perspective on the game's graphics and details. To play GTA 5 in 3D, you will need a PC version of the game, a 3D monitor or TV, a pair of 3D glasses, and a graphics card that supports stereoscopic 3D. You will also need to download GTA 5 in 3D from one of the websites or platforms that offer it, and optimize your settings and mods for the best performance and visuals.
-
Summary of the main points
-
In this article, we have covered the following points:
- What is GTA 5 and why is it so popular?
- What is GTA 5 3D and how to download it?
- How to optimize your GTA 5 3D experience?
-
Call to action and final remarks
-
If you are ready to play GTA 5 in 3D, don't wait any longer and get your copy today. You will not regret it, as you will have hours of fun and excitement in a whole new dimension. And if you need any help or guidance on how to play GTA 5 in 3D, you can check out some of the resources and guides that we have listed in this article. They will help you learn more about GTA 5 in 3D and how to make the most of it. We hope you have enjoyed this article and found it useful. If you have any questions, comments, or feedback, please feel free to leave them below. We would love to hear from you and help you out. Thank you for reading and happy gaming!
-
FAQs
-
Here are some of the frequently asked questions about GTA 5 3D download:
- Q: Can I play GTA 5 in 3D on PlayStation or Xbox?
  A: No, GTA 5 in 3D is only available for PC. You will need a PC version of the game and a 3D monitor or TV to play GTA 5 in 3D.
- Q: How much space does GTA 5 in 3D take on my PC?
  A: GTA 5 in 3D takes about the same amount of space as GTA 5 in 2D, which is around 65 GB. However, if you install any mods or enhancements, the space requirement may increase.
- Q: Is GTA 5 in 3D safe to download and play?
  A: Yes, GTA 5 in 3D is safe to download and play, as long as you download it from a reputable website or platform, such as Steam, Epic Games Store, or Rockstar Games Launcher. You should also scan your PC for any viruses or malware before and after downloading GTA 5 in 3D.
- Q: Can I play GTA Online in 3D?
  A: Yes, you can play GTA Online in 3D, as long as you have a stable internet connection and a compatible graphics card. However, some of the mods or enhancements that work for GTA 5 in 3D may not work for GTA Online in 3D, so you should be careful about what you install and use.
- Q: What are some of the best mods or enhancements for GTA 5 in 3D?
  A: Some of the best mods or enhancements for GTA 5 in 3D are NaturalVision Evolved, GTA 5 Redux, Realistic Driving V, LSPDFR, and Script Hook V. You can find more mods or enhancements on websites such as GTA5-Mods.com, Nexus Mods, or Steam Workshop.
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/body-parser/lib/types/urlencoded.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/body-parser/lib/types/urlencoded.js
deleted file mode 100644
index b2ca8f16d0c105424acd16282e629346698e140b..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/body-parser/lib/types/urlencoded.js
+++ /dev/null
@@ -1,284 +0,0 @@
-/*!
- * body-parser
- * Copyright(c) 2014 Jonathan Ong
- * Copyright(c) 2014-2015 Douglas Christopher Wilson
- * MIT Licensed
- */
-
-'use strict'
-
-/**
- * Module dependencies.
- * @private
- */
-
-var bytes = require('bytes')
-var contentType = require('content-type')
-var createError = require('http-errors')
-var debug = require('debug')('body-parser:urlencoded')
-var deprecate = require('depd')('body-parser')
-var read = require('../read')
-var typeis = require('type-is')
-
-/**
- * Module exports.
- */
-
-module.exports = urlencoded
-
-/**
- * Cache of parser modules.
- */
-
-var parsers = Object.create(null)
-
-/**
- * Create a middleware to parse urlencoded bodies.
- *
- * @param {object} [options]
- * @return {function}
- * @public
- */
-
-function urlencoded (options) {
- var opts = options || {}
-
- // notice because option default will flip in next major
- if (opts.extended === undefined) {
- deprecate('undefined extended: provide extended option')
- }
-
- var extended = opts.extended !== false
- var inflate = opts.inflate !== false
- var limit = typeof opts.limit !== 'number'
- ? bytes.parse(opts.limit || '100kb')
- : opts.limit
- var type = opts.type || 'application/x-www-form-urlencoded'
- var verify = opts.verify || false
-
- if (verify !== false && typeof verify !== 'function') {
- throw new TypeError('option verify must be function')
- }
-
- // create the appropriate query parser
- var queryparse = extended
- ? extendedparser(opts)
- : simpleparser(opts)
-
- // create the appropriate type checking function
- var shouldParse = typeof type !== 'function'
- ? typeChecker(type)
- : type
-
- function parse (body) {
- return body.length
- ? queryparse(body)
- : {}
- }
-
- return function urlencodedParser (req, res, next) {
- if (req._body) {
- debug('body already parsed')
- next()
- return
- }
-
- req.body = req.body || {}
-
- // skip requests without bodies
- if (!typeis.hasBody(req)) {
- debug('skip empty body')
- next()
- return
- }
-
- debug('content-type %j', req.headers['content-type'])
-
- // determine if request should be parsed
- if (!shouldParse(req)) {
- debug('skip parsing')
- next()
- return
- }
-
- // assert charset
- var charset = getCharset(req) || 'utf-8'
- if (charset !== 'utf-8') {
- debug('invalid charset')
- next(createError(415, 'unsupported charset "' + charset.toUpperCase() + '"', {
- charset: charset,
- type: 'charset.unsupported'
- }))
- return
- }
-
- // read
- read(req, res, next, parse, debug, {
- debug: debug,
- encoding: charset,
- inflate: inflate,
- limit: limit,
- verify: verify
- })
- }
-}
-
-/**
- * Get the extended query parser.
- *
- * @param {object} options
- */
-
-function extendedparser (options) {
- var parameterLimit = options.parameterLimit !== undefined
- ? options.parameterLimit
- : 1000
- var parse = parser('qs')
-
- if (isNaN(parameterLimit) || parameterLimit < 1) {
- throw new TypeError('option parameterLimit must be a positive number')
- }
-
- if (isFinite(parameterLimit)) {
- parameterLimit = parameterLimit | 0
- }
-
- return function queryparse (body) {
- var paramCount = parameterCount(body, parameterLimit)
-
- if (paramCount === undefined) {
- debug('too many parameters')
- throw createError(413, 'too many parameters', {
- type: 'parameters.too.many'
- })
- }
-
- var arrayLimit = Math.max(100, paramCount)
-
- debug('parse extended urlencoding')
- return parse(body, {
- allowPrototypes: true,
- arrayLimit: arrayLimit,
- depth: Infinity,
- parameterLimit: parameterLimit
- })
- }
-}
-
-/**
- * Get the charset of a request.
- *
- * @param {object} req
- * @api private
- */
-
-function getCharset (req) {
- try {
- return (contentType.parse(req).parameters.charset || '').toLowerCase()
- } catch (e) {
- return undefined
- }
-}
-
-/**
- * Count the number of parameters, stopping once limit reached
- *
- * @param {string} body
- * @param {number} limit
- * @api private
- */
-
-function parameterCount (body, limit) {
- var count = 0
- var index = 0
-
- while ((index = body.indexOf('&', index)) !== -1) {
- count++
- index++
-
- if (count === limit) {
- return undefined
- }
- }
-
- return count
-}
-
-/**
- * Get parser for module name dynamically.
- *
- * @param {string} name
- * @return {function}
- * @api private
- */
-
-function parser (name) {
- var mod = parsers[name]
-
- if (mod !== undefined) {
- return mod.parse
- }
-
- // this uses a switch for static require analysis
- switch (name) {
- case 'qs':
- mod = require('qs')
- break
- case 'querystring':
- mod = require('querystring')
- break
- }
-
- // store to prevent invoking require()
- parsers[name] = mod
-
- return mod.parse
-}
-
-/**
- * Get the simple query parser.
- *
- * @param {object} options
- */
-
-function simpleparser (options) {
- var parameterLimit = options.parameterLimit !== undefined
- ? options.parameterLimit
- : 1000
- var parse = parser('querystring')
-
- if (isNaN(parameterLimit) || parameterLimit < 1) {
- throw new TypeError('option parameterLimit must be a positive number')
- }
-
- if (isFinite(parameterLimit)) {
- parameterLimit = parameterLimit | 0
- }
-
- return function queryparse (body) {
- var paramCount = parameterCount(body, parameterLimit)
-
- if (paramCount === undefined) {
- debug('too many parameters')
- throw createError(413, 'too many parameters', {
- type: 'parameters.too.many'
- })
- }
-
- debug('parse urlencoding')
- return parse(body, undefined, undefined, { maxKeys: parameterLimit })
- }
-}
-
-/**
- * Get the simple type checker.
- *
- * @param {string} type
- * @return {function}
- */
-
-function typeChecker (type) {
- return function checkType (req) {
- return Boolean(typeis(req, type))
- }
-}
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/iconv-lite/encodings/utf7.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/iconv-lite/encodings/utf7.js
deleted file mode 100644
index b7631c23a801b0275c1f12a9d1a8fe00d8f51f0c..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/iconv-lite/encodings/utf7.js
+++ /dev/null
@@ -1,290 +0,0 @@
-"use strict";
-var Buffer = require("safer-buffer").Buffer;
-
-// UTF-7 codec, according to https://tools.ietf.org/html/rfc2152
-// See also below a UTF-7-IMAP codec, according to http://tools.ietf.org/html/rfc3501#section-5.1.3
-
-exports.utf7 = Utf7Codec;
-exports.unicode11utf7 = 'utf7'; // Alias UNICODE-1-1-UTF-7
-function Utf7Codec(codecOptions, iconv) {
- this.iconv = iconv;
-};
-
-Utf7Codec.prototype.encoder = Utf7Encoder;
-Utf7Codec.prototype.decoder = Utf7Decoder;
-Utf7Codec.prototype.bomAware = true;
-
-
-// -- Encoding
-
-var nonDirectChars = /[^A-Za-z0-9'\(\),-\.\/:\? \n\r\t]+/g;
-
-function Utf7Encoder(options, codec) {
- this.iconv = codec.iconv;
-}
-
-Utf7Encoder.prototype.write = function(str) {
- // Naive implementation.
- // Non-direct chars are encoded as "+-"; single "+" char is encoded as "+-".
- return Buffer.from(str.replace(nonDirectChars, function(chunk) {
- return "+" + (chunk === '+' ? '' :
- this.iconv.encode(chunk, 'utf16-be').toString('base64').replace(/=+$/, ''))
- + "-";
- }.bind(this)));
-}
-
-Utf7Encoder.prototype.end = function() {
-}
-
-
-// -- Decoding
-
-function Utf7Decoder(options, codec) {
- this.iconv = codec.iconv;
- this.inBase64 = false;
- this.base64Accum = '';
-}
-
-var base64Regex = /[A-Za-z0-9\/+]/;
-var base64Chars = [];
-for (var i = 0; i < 256; i++)
- base64Chars[i] = base64Regex.test(String.fromCharCode(i));
-
-var plusChar = '+'.charCodeAt(0),
- minusChar = '-'.charCodeAt(0),
- andChar = '&'.charCodeAt(0);
-
-Utf7Decoder.prototype.write = function(buf) {
- var res = "", lastI = 0,
- inBase64 = this.inBase64,
- base64Accum = this.base64Accum;
-
- // The decoder is more involved as we must handle chunks in stream.
-
- for (var i = 0; i < buf.length; i++) {
- if (!inBase64) { // We're in direct mode.
- // Write direct chars until '+'
- if (buf[i] == plusChar) {
- res += this.iconv.decode(buf.slice(lastI, i), "ascii"); // Write direct chars.
- lastI = i+1;
- inBase64 = true;
- }
- } else { // We decode base64.
- if (!base64Chars[buf[i]]) { // Base64 ended.
- if (i == lastI && buf[i] == minusChar) {// "+-" -> "+"
- res += "+";
- } else {
- var b64str = base64Accum + buf.slice(lastI, i).toString();
- res += this.iconv.decode(Buffer.from(b64str, 'base64'), "utf16-be");
- }
-
- if (buf[i] != minusChar) // Minus is absorbed after base64.
- i--;
-
- lastI = i+1;
- inBase64 = false;
- base64Accum = '';
- }
- }
- }
-
- if (!inBase64) {
- res += this.iconv.decode(buf.slice(lastI), "ascii"); // Write direct chars.
- } else {
- var b64str = base64Accum + buf.slice(lastI).toString();
-
- var canBeDecoded = b64str.length - (b64str.length % 8); // Minimal chunk: 2 quads -> 2x3 bytes -> 3 chars.
- base64Accum = b64str.slice(canBeDecoded); // The rest will be decoded in future.
- b64str = b64str.slice(0, canBeDecoded);
-
- res += this.iconv.decode(Buffer.from(b64str, 'base64'), "utf16-be");
- }
-
- this.inBase64 = inBase64;
- this.base64Accum = base64Accum;
-
- return res;
-}
-
-Utf7Decoder.prototype.end = function() {
- var res = "";
- if (this.inBase64 && this.base64Accum.length > 0)
- res = this.iconv.decode(Buffer.from(this.base64Accum, 'base64'), "utf16-be");
-
- this.inBase64 = false;
- this.base64Accum = '';
- return res;
-}
-
-
-// UTF-7-IMAP codec.
-// RFC3501 Sec. 5.1.3 Modified UTF-7 (http://tools.ietf.org/html/rfc3501#section-5.1.3)
-// Differences:
-// * Base64 part is started by "&" instead of "+"
-// * Direct characters are 0x20-0x7E, except "&" (0x26)
-// * In Base64, "," is used instead of "/"
-// * Base64 must not be used to represent direct characters.
-// * No implicit shift back from Base64 (should always end with '-')
-// * String must end in non-shifted position.
-// * "-&" while in base64 is not allowed.
-
-
-exports.utf7imap = Utf7IMAPCodec;
-function Utf7IMAPCodec(codecOptions, iconv) {
- this.iconv = iconv;
-};
-
-Utf7IMAPCodec.prototype.encoder = Utf7IMAPEncoder;
-Utf7IMAPCodec.prototype.decoder = Utf7IMAPDecoder;
-Utf7IMAPCodec.prototype.bomAware = true;
-
-
-// -- Encoding
-
-function Utf7IMAPEncoder(options, codec) {
- this.iconv = codec.iconv;
- this.inBase64 = false;
- this.base64Accum = Buffer.alloc(6);
- this.base64AccumIdx = 0;
-}
-
-Utf7IMAPEncoder.prototype.write = function(str) {
- var inBase64 = this.inBase64,
- base64Accum = this.base64Accum,
- base64AccumIdx = this.base64AccumIdx,
- buf = Buffer.alloc(str.length*5 + 10), bufIdx = 0;
-
- for (var i = 0; i < str.length; i++) {
- var uChar = str.charCodeAt(i);
- if (0x20 <= uChar && uChar <= 0x7E) { // Direct character or '&'.
- if (inBase64) {
- if (base64AccumIdx > 0) {
- bufIdx += buf.write(base64Accum.slice(0, base64AccumIdx).toString('base64').replace(/\//g, ',').replace(/=+$/, ''), bufIdx);
- base64AccumIdx = 0;
- }
-
- buf[bufIdx++] = minusChar; // Write '-', then go to direct mode.
- inBase64 = false;
- }
-
- if (!inBase64) {
- buf[bufIdx++] = uChar; // Write direct character
-
- if (uChar === andChar) // Ampersand -> '&-'
- buf[bufIdx++] = minusChar;
- }
-
- } else { // Non-direct character
- if (!inBase64) {
- buf[bufIdx++] = andChar; // Write '&', then go to base64 mode.
- inBase64 = true;
- }
- if (inBase64) {
- base64Accum[base64AccumIdx++] = uChar >> 8;
- base64Accum[base64AccumIdx++] = uChar & 0xFF;
-
- if (base64AccumIdx == base64Accum.length) {
- bufIdx += buf.write(base64Accum.toString('base64').replace(/\//g, ','), bufIdx);
- base64AccumIdx = 0;
- }
- }
- }
- }
-
- this.inBase64 = inBase64;
- this.base64AccumIdx = base64AccumIdx;
-
- return buf.slice(0, bufIdx);
-}
-
-Utf7IMAPEncoder.prototype.end = function() {
- var buf = Buffer.alloc(10), bufIdx = 0;
- if (this.inBase64) {
- if (this.base64AccumIdx > 0) {
- bufIdx += buf.write(this.base64Accum.slice(0, this.base64AccumIdx).toString('base64').replace(/\//g, ',').replace(/=+$/, ''), bufIdx);
- this.base64AccumIdx = 0;
- }
-
- buf[bufIdx++] = minusChar; // Write '-', then go to direct mode.
- this.inBase64 = false;
- }
-
- return buf.slice(0, bufIdx);
-}
-
-
-// -- Decoding
-
-function Utf7IMAPDecoder(options, codec) {
- this.iconv = codec.iconv;
- this.inBase64 = false;
- this.base64Accum = '';
-}
-
-var base64IMAPChars = base64Chars.slice();
-base64IMAPChars[','.charCodeAt(0)] = true;
-
-Utf7IMAPDecoder.prototype.write = function(buf) {
- var res = "", lastI = 0,
- inBase64 = this.inBase64,
- base64Accum = this.base64Accum;
-
- // The decoder is more involved as we must handle chunks in stream.
- // It is forgiving, closer to standard UTF-7 (for example, '-' is optional at the end).
-
- for (var i = 0; i < buf.length; i++) {
- if (!inBase64) { // We're in direct mode.
- // Write direct chars until '&'
- if (buf[i] == andChar) {
- res += this.iconv.decode(buf.slice(lastI, i), "ascii"); // Write direct chars.
- lastI = i+1;
- inBase64 = true;
- }
- } else { // We decode base64.
- if (!base64IMAPChars[buf[i]]) { // Base64 ended.
- if (i == lastI && buf[i] == minusChar) { // "&-" -> "&"
- res += "&";
- } else {
- var b64str = base64Accum + buf.slice(lastI, i).toString().replace(/,/g, '/');
- res += this.iconv.decode(Buffer.from(b64str, 'base64'), "utf16-be");
- }
-
- if (buf[i] != minusChar) // Minus may be absorbed after base64.
- i--;
-
- lastI = i+1;
- inBase64 = false;
- base64Accum = '';
- }
- }
- }
-
- if (!inBase64) {
- res += this.iconv.decode(buf.slice(lastI), "ascii"); // Write direct chars.
- } else {
- var b64str = base64Accum + buf.slice(lastI).toString().replace(/,/g, '/');
-
- var canBeDecoded = b64str.length - (b64str.length % 8); // Minimal chunk: 2 quads -> 2x3 bytes -> 3 chars.
- base64Accum = b64str.slice(canBeDecoded); // The rest will be decoded in future.
- b64str = b64str.slice(0, canBeDecoded);
-
- res += this.iconv.decode(Buffer.from(b64str, 'base64'), "utf16-be");
- }
-
- this.inBase64 = inBase64;
- this.base64Accum = base64Accum;
-
- return res;
-}
-
-Utf7IMAPDecoder.prototype.end = function() {
- var res = "";
- if (this.inBase64 && this.base64Accum.length > 0)
- res = this.iconv.decode(Buffer.from(this.base64Accum, 'base64'), "utf16-be");
-
- this.inBase64 = false;
- this.base64Accum = '';
- return res;
-}
-
-
diff --git a/spaces/fffiloni/image-to-sound-fx-debug/style.css b/spaces/fffiloni/image-to-sound-fx-debug/style.css
deleted file mode 100644
index 7b61a2babdce8c49e9586954c1b37eb36babac4d..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/image-to-sound-fx-debug/style.css
+++ /dev/null
@@ -1,80 +0,0 @@
-#col-container {max-width: 440px; margin-left: auto; margin-right: auto;}
-a {text-decoration-line: underline; font-weight: 600;}
-
-.footer {
- margin-bottom: 45px;
- margin-top: 10px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
-}
-
-.footer>p {
- font-size: .8rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(28px);
- background: white;
-}
-.dark .footer {
- border-color: #303030;
-}
-.dark .footer>p {
- background: #0b0f19;
-}
-
-.animate-spin {
- animation: spin 1s linear infinite;
-}
-
-@keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
-}
-
-#share-btn-container {
- display: flex;
- padding-left: 0.5rem !important;
- padding-right: 0.5rem !important;
- background-color: #000000;
- justify-content: center;
- align-items: center;
- border-radius: 9999px !important;
- max-width: 13rem;
-}
-
-#share-btn-container:hover {
- background-color: #060606;
-}
-
-#share-btn {
- all: initial;
- color: #ffffff;
- font-weight: 600;
- cursor:pointer;
- font-family: 'IBM Plex Sans', sans-serif;
- margin-left: 0.5rem !important;
- padding-top: 0.5rem !important;
- padding-bottom: 0.5rem !important;
- right:0;
-}
-
-#share-btn * {
- all: unset;
-}
-
-#share-btn-container div:nth-child(-n+2){
- width: auto !important;
- min-height: 0px !important;
-}
-
-#share-btn-container .wrap {
- display: none !important;
-}
-
-#share-btn-container.hidden {
- display: none!important;
-}
\ No newline at end of file
diff --git a/spaces/flowers-team/Interactive_DeepRL_Demo/README.md b/spaces/flowers-team/Interactive_DeepRL_Demo/README.md
deleted file mode 100644
index f11ea0380b1dffb600fbdb5201d6ee794d45200f..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/Interactive_DeepRL_Demo/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Interactive DeepRL Demo
-emoji: 🐨
-colorFrom: red
-colorTo: gray
-sdk: static
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_neural_instrument_coding/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_neural_instrument_coding/run.py
deleted file mode 100644
index 3d387de781ee72df120c52a8a2bdbf2dd218ee4b..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_neural_instrument_coding/run.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# A Blocks implementation of https://erlj.notion.site/Neural-Instrument-Cloning-from-very-few-samples-2cf41d8b630842ee8c7eb55036a1bfd6
-
-import datetime
-import os
-import random
-
-import gradio as gr
-from gradio.components import Markdown as m
-
-
-def get_time():
- now = datetime.datetime.now()
- return now.strftime("%m/%d/%Y, %H:%M:%S")
-
-
-def generate_recording():
- return random.choice(["new-sax-1.mp3", "new-sax-1.wav"])
-
-
-def reconstruct(audio):
- return random.choice(["new-sax-1.mp3", "new-sax-1.wav"])
-
-
-io1 = gr.Interface(
- lambda x, y, z: os.path.join(os.path.dirname(__file__),"sax.wav"),
- [
- gr.Slider(label="pitch"),
- gr.Slider(label="loudness"),
- gr.Audio(label="base audio file (optional)"),
- ],
- gr.Audio(),
-)
-
-io2 = gr.Interface(
- lambda x, y, z: os.path.join(os.path.dirname(__file__),"flute.wav"),
- [
- gr.Slider(label="pitch"),
- gr.Slider(label="loudness"),
- gr.Audio(label="base audio file (optional)"),
- ],
- gr.Audio(),
-)
-
-io3 = gr.Interface(
- lambda x, y, z: os.path.join(os.path.dirname(__file__),"trombone.wav"),
- [
- gr.Slider(label="pitch"),
- gr.Slider(label="loudness"),
- gr.Audio(label="base audio file (optional)"),
- ],
- gr.Audio(),
-)
-
-io4 = gr.Interface(
- lambda x, y, z: os.path.join(os.path.dirname(__file__),"sax2.wav"),
- [
- gr.Slider(label="pitch"),
- gr.Slider(label="loudness"),
- gr.Audio(label="base audio file (optional)"),
- ],
- gr.Audio(),
-)
-
-demo = gr.Blocks(title="Neural Instrument Cloning")
-
-with demo.clear():
- m(
- """
- ## Neural Instrument Cloning from Very Few Samples
-
"""
- )
- m(
- """
- This Blocks implementation is an adaptation [a report written](https://erlj.notion.site/Neural-Instrument-Cloning-from-very-few-samples-2cf41d8b630842ee8c7eb55036a1bfd6) by Nicolas Jonason and Bob L.T. Sturm.
-
- I've implemented it in Blocks to show off some cool features, such as embedding live ML demos. More on that ahead...
-
- ### What does this machine learning model do?
- It combines techniques from neural voice cloning with musical instrument synthesis. This makes it possible to produce neural instrument synthesisers from just seconds of target instrument audio.
-
- ### Audio Examples
- Here are some **real** 16 second saxophone recordings:
- """
- )
- gr.Audio(os.path.join(os.path.dirname(__file__),"sax.wav"), label="Here is a real 16 second saxophone recording:")
- gr.Audio(os.path.join(os.path.dirname(__file__),"sax.wav"))
-
- m(
- """\n
- Here is a **generated** saxophone recordings:"""
- )
- a = gr.Audio(os.path.join(os.path.dirname(__file__),"new-sax.wav"))
-
- gr.Button("Generate a new saxophone recording")
-
- m(
- """
- ### Inputs to the model
- The inputs to the model are:
- * pitch
- * loudness
- * base audio file
- """
- )
-
- m(
- """
- Try the model live!
- """
- )
-
- gr.TabbedInterface(
- [io1, io2, io3, io4], ["Saxophone", "Flute", "Trombone", "Another Saxophone"]
- )
-
- m(
- """
- ### Using the model for cloning
- You can also use this model a different way, to simply clone the audio file and reconstruct it
- using machine learning. Here, we'll show a demo of that below:
- """
- )
-
- a2 = gr.Audio()
- a2.change(reconstruct, a2, a2)
-
- m(
- """
- Thanks for reading this! As you may have realized, all of the "models" in this demo are fake. They are just designed to show you what is possible using Blocks 🤗.
-
- For details of the model, read the [original report here](https://erlj.notion.site/Neural-Instrument-Cloning-from-very-few-samples-2cf41d8b630842ee8c7eb55036a1bfd6).
-
- *Details for nerds*: this report was "launched" on:
- """
- )
-
- t = gr.Textbox(label="timestamp")
-
- demo.load(get_time, [], t)
-
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/datasets/cityscapes.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/datasets/cityscapes.py
deleted file mode 100644
index f21867c63e1835f6fceb61f066e802fd8fd2a735..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/datasets/cityscapes.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# dataset settings
-dataset_type = 'CityscapesDataset'
-data_root = 'data/cityscapes/'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-crop_size = (512, 1024)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations'),
- dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(2048, 1024),
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- train=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='leftImg8bit/train',
- ann_dir='gtFine/train',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='leftImg8bit/val',
- ann_dir='gtFine/val',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='leftImg8bit/val',
- ann_dir='gtFine/val',
- pipeline=test_pipeline))
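
As a quick illustration of how a dataset config like the one above is typically consumed, here is a minimal sketch that loads it with mmcv's `Config.fromfile` and inspects a few fields. This assumes mmcv is installed and that the file is saved under the hypothetical path used below; it is not code from the deleted repository.

```python
# Illustrative sketch: load and inspect the cityscapes dataset config with mmcv.
from mmcv import Config

cfg = Config.fromfile("configs/_base_/datasets/cityscapes.py")  # hypothetical path

print(cfg.data.train.type)      # 'CityscapesDataset'
print(cfg.crop_size)            # (512, 1024)
print(len(cfg.train_pipeline))  # number of transforms in the training pipeline
```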
diff --git a/spaces/giswqs/solara-demo/README.md b/spaces/giswqs/solara-demo/README.md
deleted file mode 100644
index 5e06fb34802e3589015f91535e2fc93e7bf7e03d..0000000000000000000000000000000000000000
--- a/spaces/giswqs/solara-demo/README.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: Solara Demo
-emoji: 🌍
-colorFrom: green
-colorTo: red
-sdk: docker
-pinned: false
-license: mit
-app_port: 8765
-duplicated_from: giswqs/solara-geospatial
----
-
-## Introduction
-
-A collection of [Solara](https://github.com/widgetti/solara) web apps for geospatial applications
-
-Just a proof-of-concept for now. Not all features are working yet. More features will be added in the future.
-
-- Web App:
-- GitHub:
-- Hugging Face:
-
-## Demos
-
-
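
For readers unfamiliar with Solara, here is a minimal, self-contained page sketch showing the general shape of such an app. It is only an illustration of the framework, not the deleted demo itself; it assumes `solara` is installed and that the file is served with a command like `solara run app.py`.

```python
# Minimal Solara page, for illustration only.
import solara


@solara.component
def Page():
    # Render a heading and a simple button on the page.
    solara.Markdown("## Hello from Solara")
    solara.Button("Click me")
```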
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Download Quadro 4000 Drivers Troubleshooting Common Issues and Errors.md b/spaces/gotiQspiryo/whisper-ui/examples/Download Quadro 4000 Drivers Troubleshooting Common Issues and Errors.md
deleted file mode 100644
index d08d056cadfd5c31cda67aeaa9e171257d17f4d2..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Download Quadro 4000 Drivers Troubleshooting Common Issues and Errors.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
If you have a Quadro 4000 video card and want to maintain stable, high performance in games and graphics applications, you need to download the latest graphics driver. This page contains current and official drivers for the Quadro 4000 under the Windows operating system. Download and install the Quadro 4000 driver.
-
In this short article, we will briefly explain how to download and install the Quadro 4000 driver. Let's start! On this page, in the table with the list of operating systems, select the operating system installed on your computer.
It is recommended that you use the Windows operating system to download or update video card drivers. This is the safest and most reliable way to download or update drivers. The Windows operating system itself downloads and installs new verified drivers.
-
This metapackage depends on the NVIDIA binary driver and librariesthat provide optimized hardware acceleration ofOpenGL/GLX/EGL/GLES/Vulkan applications via a direct-rendering X Server.Please see the nvidia-legacy-390xx-kernel-dkms ornvidia-legacy-390xx-kernel-source packagesfor building the kernel module required by this package.This will provide nvidia-legacy-390xx-kernel-390.154.This legacy version is the last release that supports the following GPUs:GeForce 410M [GF119M], GeForce 510 [GF119], GeForce 605 [GF119],GeForce 610M [GF108M], GeForce 610M [GF119M], GeForce 610M [GF117M],GeForce 705M [GF119M], GeForce 710A [GK107M], GeForce 710M [GF117M],GeForce 810M [GF117M], GeForce 810M [GK107M], GeForce 820M [GF117M],GeForce 820M [GK107M], GeForce 825M [GK208M], GeForce 910M [GK208BM],GeForce GT 415M [GF108M], GeForce GT 420 [GF108], GeForce GT 420M [GF108M],GeForce GT 425M [GF108M], GeForce GT 430 [GF108], GeForce GT 435M [GF106M],GeForce GT 435M [GF108M], GeForce GT 440 [GF106], GeForce GT 440 [GF108],GeForce GT 445M [GF106M], GeForce GT 520 [GF108], GeForce GT 520 [GF119],GeForce GT 520M [GF108M], GeForce GT 520M [GF119M],GeForce GT 520MX [GF119M], GeForce GT 525M [GF108M], GeForce GT 530 [GF108],GeForce GT 540M [GF108M], GeForce GT 545 OEM [GF116], GeForce GT 545 [GF116],GeForce GT 550M [GF106M], GeForce GT 550M [GF108M], GeForce GT 550M [GF116M],GeForce GT 555M [GF106M], GeForce GT 555M [GF108M], GeForce GT 555M [GF116M],GeForce GT 560M [GF116M], GeForce GT 610 [GF108], GeForce GT 610 [GF119],GeForce GT 620 OEM [GF119], GeForce GT 620 [GF108], GeForce GT 620M [GF108M],GeForce GT 620M [GF117M], GeForce GT 620M LE [GF108M],GeForce GT 625 OEM [GF119], GeForce GT 625M [GF117M], GeForce GT 630 [GF108],GeForce GT 630M [GF117M], GeForce GT 630M LE [GF108M],GeForce GT 635M [GF108M], GeForce GT 635M [GF116M],GeForce GT 635M LE [GF108M], GeForce GT 640 OEM [GF116],GeForce GT 640M LE [GF108M], GeForce GT 640M LE [GK107M],GeForce GT 640M Mac Edition [GK107M], GeForce GT 645 OEM [GF114],GeForce GT 645M [GK107M], GeForce GT 650M [GK107M],GeForce GT 650M Mac Edition [GK107M], GeForce GT 705 [GF119],GeForce GT 720M [GF117M], GeForce GT 720M [GK208M], GeForce GT 730 [GF108],GeForce GT 730M [GK107M], GeForce GT 730M [GK208M], GeForce GT 735M [GK208M],GeForce GT 740M [GK107M], GeForce GT 745M [GK107M], GeForce GT 750M [GK107M],GeForce GT 750M Mac Edition [GK107M], GeForce GT 755M [GK107M],GeForce GT 755M Mac Edition [GK107M], GeForce GTS 450 OEM [GF106],GeForce GTS 450 [GF106], GeForce GTS 450 Rev. 2 [GF116],GeForce GTS 450 Rev. 3 [GF116], GeForce GTX 460 OEM [GF104],GeForce GTX 460 [GF104], GeForce GTX 460 v2 [GF114],GeForce GTX 460 SE [GF104], GeForce GTX 460 SE v2 [GF114],GeForce GTX 460M [GF106M], GeForce GTX 465 [GF100], GeForce GTX 470 [GF100],GeForce GTX 470M [GF104M], GeForce GTX 480 [GF100],GeForce GTX 480M [GF100M], GeForce GTX 485M [GF104M],GeForce GTX 550 Ti [GF116], GeForce GTX 555 [GF114],GeForce GTX 560 OEM [GF110], GeForce GTX 560 [GF114],GeForce GTX 560 SE [GF114], GeForce GTX 560 Ti [GF114],GeForce GTX 560 Ti OEM [GF110], GeForce GTX 560 Ti 448 Cores [GF110],GeForce GTX 570 [GF110], GeForce GTX 570 Rev. 2 [GF110],GeForce GTX 570M [GF114M], GeForce GTX 580 [GF110],GeForce GTX 580 Rev. 
2 [GF110], GeForce GTX 580M [GF114M],GeForce GTX 590 [GF110], GeForce GTX 660M [GK107M],GeForce GTX 660M Mac Edition [GK107M], GeForce GTX 670M [GF114M],GeForce GTX 670MX [GK104M], GeForce GTX 675M [GF114M],GeForce GTX 675MX [GK104M], GeForce GTX 675MX Mac Edition [GK104M],GeForce GTX 680M [GK104M], GeForce GTX 680MX [GK104M],GeForce GTX 765M [GK106M], GeForce GTX 770M [GK106M],GeForce GTX 775M Mac Edition [GK104M], GeForce GTX 780M [GK104M],GeForce GTX 780M Mac Edition [GK104M], GeForce GTX 860M [GK104M],GeForce GTX 880M [GK104M], NVS 310 [GF119], NVS 315 [GF119],NVS 4200M [GF119M], NVS 5200M [GF108GLM], NVS 5400M [GF108M],Quadro 500M [GF108GLM], Quadro 600 [GF108GL], Quadro 1000M [GF108GLM],Quadro 2000 [GF106GL], Quadro 2000M [GF106GLM], Quadro 3000M [GF104GLM],Quadro 4000 [GF100GL], Quadro 4000M [GF104GLM], Quadro 5000 [GF100GL],Quadro 5000M [GF100GLM], Quadro 5010M [GF100GLM], Quadro 6000 [GF100GL],Quadro 7000 [GF100GL], Quadro K500M [GK107GLM], Quadro K510M [GK208GLM],Quadro K610M [GK208GLM], Quadro K1000M [GK107GLM], Quadro K1100M [GK107GLM],Quadro K2000M [GK107GLM], Quadro K2100M [GK106GLM], Quadro K3000M [GK104GLM],Quadro K3100M [GK104GLM], Quadro K4000M [GK104GLM], Quadro K4100M [GK104GLM],Quadro K5000M [GK104GLM], Quadro K5100M [GK104GLM],Quadro NVS 4200M [GF119M], Tesla C2050 [GF100GL], Tesla C2050 [GF110GL],Tesla C2070 [GF100GL], Tesla C2075 [GF110GL], Tesla M2070 [GF100GL],Tesla M2070-Q [GF100GL], Tesla M2075 [GF110GL], Tesla M2090 [GF110GL],Tesla T20 Processor [GF100GL].There are several "more modern" GPUs supported by this package, too, but theupdated drivers in the newer legacy packages or the current nvidia-driverpackage usually provide more features and better support.Look at the other legacy packages for older cards.See /usr/share/doc/nvidia-legacy-390xx-driver/README.txt.gzfor a complete list of supported GPUs and PCI IDs.Building the kernel module has been tested up to Linux 5.19. Other Packages Related to nvidia-legacy-390xx-driver
dep:nvidia-installer-cleanup cleanup after driver installation with the nvidia-installer
dep:nvidia-legacy-390xx-alternative (= 390.154-1~deb10u1) allows the selection of NVIDIA as GLX provider (390xx legacy version)
dep:nvidia-legacy-390xx-driver-bin (= 390.154-1~deb10u1) NVIDIA driver support binaries (390xx legacy version)
rec:nvidia-persistenced daemon to maintain persistent software state in the NVIDIA driver
rec:nvidia-settings-legacy-390xx tool for configuring the NVIDIA graphics driver (390xx legacy version)
sug:nvidia-legacy-390xx-kernel-dkms (>= 390.154) NVIDIA binary kernel module DKMS source (390xx legacy version) or nvidia-legacy-390xx-kernel-source (>= 390.154) NVIDIA binary kernel module source (390xx legacy version)

Download nvidia-legacy-390xx-driver for all available architectures:

Architecture   Package Size   Installed Size
amd64          488.3 kB       1,190.0 kB
armhf          487.5 kB       1,188.0 kB
i386           488.3 kB       1,189.0 kB
-
I would like to buy a Quadro K2000 or Quadro 4000. I know that the "K" series is the newer generation of the 2000, 4000, etc. cards; my only concern is graphics card support. I saw on the Autodesk website that the Quadro K series either hasn't been tested on Maya 2012 and 2013, or has been tested but is not supported. I don't know, and this is my problem. So my question is: is the Quadro K series approved for Maya 2012 or 2013? Is it OK if I still buy the K2000 and use it with Maya 2012? I don't want any support problems whatsoever. Does someone know? How could I find out?
-
2) When I opened the NVIDIA website I checked the Quadro series - K2000 module. I then encountered another box titled "Download type", which gives me several options to check. What is the difference between all of the options, and which one do I need to check and download?
-
We all know how critical the graphics drivers are, and the significance of keeping them up-to-date. However, a lot of users have reported that the Nvidia drivers are not installing in Windows 11.
-
-
\ No newline at end of file
diff --git a/spaces/gradio/HuBERT/fairseq/data/concat_sentences_dataset.py b/spaces/gradio/HuBERT/fairseq/data/concat_sentences_dataset.py
deleted file mode 100644
index 625a29370e90f9d1d7274024afb902ed83a22325..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/data/concat_sentences_dataset.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from . import FairseqDataset
-
-
-class ConcatSentencesDataset(FairseqDataset):
- def __init__(self, *datasets):
- super().__init__()
- self.datasets = datasets
- assert all(
- len(ds) == len(datasets[0]) for ds in datasets
- ), "datasets must have the same length"
-
- def __getitem__(self, index):
- return torch.cat([ds[index] for ds in self.datasets])
-
- def __len__(self):
- return len(self.datasets[0])
-
- def collater(self, samples):
- return self.datasets[0].collater(samples)
-
- @property
- def sizes(self):
- return sum(ds.sizes for ds in self.datasets)
-
- def num_tokens(self, index):
- return sum(ds.num_tokens(index) for ds in self.datasets)
-
- def size(self, index):
- return sum(ds.size(index) for ds in self.datasets)
-
- def ordered_indices(self):
- return self.datasets[0].ordered_indices()
-
- @property
- def supports_prefetch(self):
- return any(getattr(ds, "supports_prefetch", False) for ds in self.datasets)
-
- def prefetch(self, indices):
- for ds in self.datasets:
- if getattr(ds, "supports_prefetch", False):
- ds.prefetch(indices)
-
- def set_epoch(self, epoch):
- super().set_epoch(epoch)
- for ds in self.datasets:
- if hasattr(ds, "set_epoch"):
- ds.set_epoch(epoch)
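
As an aside (not part of the diff above), here is a minimal usage sketch of the dataset being removed. It assumes a standard fairseq installation where `fairseq.data` exposes `ConcatSentencesDataset` and `FairseqDataset`; `TensorListDataset` is a made-up helper used only for illustration.

```python
import torch
from fairseq.data import ConcatSentencesDataset, FairseqDataset


class TensorListDataset(FairseqDataset):
    """Tiny illustrative dataset wrapping a list of 1-D token tensors."""

    def __init__(self, tensors):
        self.tensors = tensors

    def __getitem__(self, index):
        return self.tensors[index]

    def __len__(self):
        return len(self.tensors)

    def num_tokens(self, index):
        return self.tensors[index].numel()

    def size(self, index):
        return self.tensors[index].numel()


first = TensorListDataset([torch.tensor([1, 2]), torch.tensor([3])])
second = TensorListDataset([torch.tensor([4]), torch.tensor([5, 6])])
pairs = ConcatSentencesDataset(first, second)

print(pairs[0])             # tensor([1, 2, 4]) -- items are concatenated per index
print(len(pairs))           # 2
print(pairs.num_tokens(1))  # 3 -- token counts are summed across the wrapped datasets
```

The wrapped datasets must have equal length; the class simply concatenates the item at the same index from each of them, which is why the constructor asserts matching lengths.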
diff --git a/spaces/gradio/HuBERT/fairseq/data/encoders/hf_bert_bpe.py b/spaces/gradio/HuBERT/fairseq/data/encoders/hf_bert_bpe.py
deleted file mode 100644
index a41c059343ec7e2914b2c9d2f53f526c33f9659d..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/data/encoders/hf_bert_bpe.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-from typing import Optional
-
-from fairseq.data.encoders import register_bpe
-from fairseq.dataclass import FairseqDataclass
-
-
-@dataclass
-class BertBPEConfig(FairseqDataclass):
- bpe_cased: bool = field(default=False, metadata={"help": "set for cased BPE"})
- bpe_vocab_file: Optional[str] = field(
- default=None, metadata={"help": "bpe vocab file"}
- )
-
-
-@register_bpe("bert", dataclass=BertBPEConfig)
-class BertBPE(object):
- def __init__(self, cfg):
- try:
- from transformers import BertTokenizer
- except ImportError:
- raise ImportError(
- "Please install transformers with: pip install transformers"
- )
-
- if cfg.bpe_vocab_file:
- self.bert_tokenizer = BertTokenizer(
- cfg.bpe_vocab_file, do_lower_case=not cfg.bpe_cased
- )
- else:
- vocab_file_name = (
- "bert-base-cased" if cfg.bpe_cased else "bert-base-uncased"
- )
- self.bert_tokenizer = BertTokenizer.from_pretrained(vocab_file_name)
-
- def encode(self, x: str) -> str:
- return " ".join(self.bert_tokenizer.tokenize(x))
-
- def decode(self, x: str) -> str:
- return self.bert_tokenizer.clean_up_tokenization(
- self.bert_tokenizer.convert_tokens_to_string(x.split(" "))
- )
-
- def is_beginning_of_word(self, x: str) -> bool:
- return not x.startswith("##")
diff --git a/spaces/gradio/HuBERT/fairseq_cli/eval_lm.py b/spaces/gradio/HuBERT/fairseq_cli/eval_lm.py
deleted file mode 100644
index ab6e77029ef738291efd190b1cfe2435dd403dea..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq_cli/eval_lm.py
+++ /dev/null
@@ -1,347 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Evaluate the perplexity of a trained language model.
-"""
-
-import logging
-import math
-import os
-import sys
-from argparse import Namespace
-from typing import Iterable, List, Optional
-
-import torch
-import fairseq
-from fairseq import checkpoint_utils, distributed_utils, options, tasks, utils
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from fairseq.logging import progress_bar
-from fairseq.logging.meters import StopwatchMeter
-from fairseq.sequence_scorer import SequenceScorer
-from omegaconf import DictConfig
-
-
-logging.basicConfig(
- format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S",
- level=os.environ.get("LOGLEVEL", "INFO").upper(),
- stream=sys.stdout,
-)
-logger = logging.getLogger("fairseq_cli.eval_lm")
-
-
-def eval_lm(
- models: List[fairseq.models.FairseqModel],
- source_dictionary: fairseq.data.Dictionary,
- batch_iterator: Iterable,
- post_process: Optional[str] = None,
- output_word_probs: bool = False,
- output_word_stats: bool = False,
- target_dictionary: Optional[fairseq.data.Dictionary] = None,
- softmax_batch: int = 0,
- remove_bos_token: bool = False,
- device: Optional[torch.device] = None,
-):
- """
- Args:
- models (List[~fairseq.models.FairseqModel]): list of models to
- evaluate. Models are essentially `nn.Module` instances, but
- must be compatible with fairseq's `SequenceScorer`.
- source_dictionary (~fairseq.data.Dictionary): dictionary for
- applying any relevant post processing or outputing word
- probs/stats.
- batch_iterator (Iterable): yield batches of data
- post_process (Optional[str]): post-process text by removing BPE,
- letter segmentation, etc. Valid options can be found in
- fairseq.data.utils.post_process, although not all options
- are implemented here.
- output_word_probs (Optional[bool]): output words and their
- predicted log probabilities
- output_word_stats (Optional[bool]): output word statistics such
- as word count and average probability
- target_dictionary (Optional[~fairseq.data.Dictionary]): output
- dictionary (defaults to *source_dictionary*)
- softmax_batch (Optional[bool]): if BxT is more than this, will
- batch the softmax over vocab to this amount of tokens, in
- order to fit into GPU memory
- remove_bos_token (Optional[bool]): if True, confirm that the
- first token is the beginning-of-sentence symbol (according
- to the relevant dictionary) and remove it from the output
- device (Optional[torch.device]): device to use for evaluation
- (defaults to device of first model parameter)
- """
- if target_dictionary is None:
- target_dictionary = source_dictionary
- if device is None:
- device = next(models[0].parameters()).device
-
- gen_timer = StopwatchMeter()
- scorer = SequenceScorer(target_dictionary, softmax_batch)
-
- score_sum = 0.0
- count = 0
-
- if post_process is not None:
- if post_process in {"subword_nmt", "@@ "}:
- bpe_cont = post_process.rstrip()
- bpe_toks = {
- i
- for i in range(len(source_dictionary))
- if source_dictionary[i].endswith(bpe_cont)
- }
- else:
- raise NotImplementedError(
- "--post-process={post_process} is not implemented"
- )
- bpe_len = len(bpe_cont)
- else:
- bpe_toks = None
- bpe_len = 0
-
- word_stats = dict()
-
- for sample in batch_iterator:
- if "net_input" not in sample:
- continue
-
- sample = utils.move_to_cuda(sample, device=device)
-
- gen_timer.start()
- hypos = scorer.generate(models, sample)
- gen_timer.stop(sample["ntokens"])
-
- for i, hypos_i in enumerate(hypos):
- hypo = hypos_i[0]
- sample_id = sample["id"][i]
-
- tokens = hypo["tokens"]
- tgt_len = tokens.numel()
- pos_scores = hypo["positional_scores"].float()
-
- if remove_bos_token:
- assert hypo["tokens"][0].item() == target_dictionary.bos()
- tokens = tokens[1:]
- pos_scores = pos_scores[1:]
-
- skipped_toks = 0
- if bpe_toks is not None:
- for i in range(tgt_len - 1):
- if tokens[i].item() in bpe_toks:
- skipped_toks += 1
- pos_scores[i + 1] += pos_scores[i]
- pos_scores[i] = 0
-
- inf_scores = pos_scores.eq(float("inf")) | pos_scores.eq(float("-inf"))
- if inf_scores.any():
- logger.info(
- "skipping tokens with inf scores:",
- target_dictionary.string(tokens[inf_scores.nonzero()]),
- )
- pos_scores = pos_scores[(~inf_scores).nonzero()]
- score_sum += pos_scores.sum().cpu()
- count += pos_scores.numel() - skipped_toks
-
- if output_word_probs or output_word_stats:
- w = ""
- word_prob = []
- is_bpe = False
- for i in range(len(tokens)):
- w_ind = tokens[i].item()
- w += source_dictionary[w_ind]
- if bpe_toks is not None and w_ind in bpe_toks:
- w = w[:-bpe_len]
- is_bpe = True
- else:
- word_prob.append((w, pos_scores[i].item()))
-
- next_prob = None
- ind = i + 1
- while ind < len(tokens):
- if pos_scores[ind].item() != 0:
- next_prob = pos_scores[ind]
- break
- ind += 1
-
- word_stats.setdefault(w, WordStat(w, is_bpe)).add(
- pos_scores[i].item(), next_prob
- )
- is_bpe = False
- w = ""
- if output_word_probs:
- logger.info(
- str(int(sample_id))
- + " "
- + (
- "\t".join(
- "{} [{:2f}]".format(x[0], x[1]) for x in word_prob
- )
- )
- )
-
- avg_nll_loss = (
- -score_sum / count / math.log(2) if count > 0 else 0
- ) # convert to base 2
- logger.info(
- "Evaluated {:,} tokens in {:.1f}s ({:.2f} tokens/s)".format(
- gen_timer.n, gen_timer.sum, 1.0 / gen_timer.avg if gen_timer.avg > 0 else 0
- )
- )
-
- if output_word_stats:
- for ws in sorted(word_stats.values(), key=lambda x: x.count, reverse=True):
- logger.info(ws)
-
- return {
- "loss": avg_nll_loss,
- "perplexity": 2 ** avg_nll_loss,
- }
-
-
-class WordStat(object):
- def __init__(self, word, is_bpe):
- self.word = word
- self.is_bpe = is_bpe
- self.log_prob = 0
- self.next_word_prob = 0
- self.count = 0
- self.missing_next_words = 0
-
- def add(self, log_prob, next_word_prob):
- """increments counters for the sum of log probs of current word and next
- word (given context ending at current word). Since the next word might be at the end of the example,
- or it might be not counted because it is not an ending subword unit,
- also keeps track of how many of those we have seen"""
- if next_word_prob is not None:
- self.next_word_prob += next_word_prob
- else:
- self.missing_next_words += 1
- self.log_prob += log_prob
- self.count += 1
-
- def __str__(self):
- return "{}\t{}\t{}\t{}\t{}\t{}".format(
- self.word,
- self.count,
- self.log_prob,
- self.is_bpe,
- self.next_word_prob,
- self.count - self.missing_next_words,
- )
-
-
-def main(cfg: DictConfig, **unused_kwargs):
- if isinstance(cfg, Namespace):
- cfg = convert_namespace_to_omegaconf(cfg)
-
- utils.import_user_module(cfg.common)
-
- logger.info(cfg)
-
- if cfg.eval_lm.context_window > 0:
- # reduce tokens per sample by the required context window size
- cfg.task.tokens_per_sample -= cfg.eval_lm.context_window
-
- # Initialize the task using the current *cfg*
- task = tasks.setup_task(cfg.task)
-
- # Load ensemble
- logger.info("loading model(s) from {}".format(cfg.common_eval.path))
- models, model_args, task = checkpoint_utils.load_model_ensemble_and_task(
- [cfg.common_eval.path],
- arg_overrides=eval(cfg.common_eval.model_overrides),
- suffix=cfg.checkpoint.checkpoint_suffix,
- strict=(cfg.checkpoint.checkpoint_shard_count == 1),
- num_shards=cfg.checkpoint.checkpoint_shard_count,
- task=task,
- )
-
- use_fp16 = cfg.common.fp16
- use_cuda = torch.cuda.is_available() and not cfg.common.cpu
- if use_cuda:
- torch.cuda.set_device(cfg.distributed_training.device_id)
-
- # Optimize ensemble for generation and set the source and dest dicts on the model
- # (required by scorer)
- for model in models:
- if use_fp16:
- model.half()
- if use_cuda and not cfg.distributed_training.pipeline_model_parallel:
- model.cuda()
- model.prepare_for_inference_(cfg)
-
- assert len(models) > 0
-
- logger.info(
- "num. model params: {:,}".format(sum(p.numel() for p in models[0].parameters()))
- )
-
- # Load dataset splits
- task.load_dataset(cfg.dataset.gen_subset)
- dataset = task.dataset(cfg.dataset.gen_subset)
- logger.info(
- "{} {} {:,} examples".format(
- cfg.task.data, cfg.dataset.gen_subset, len(dataset)
- )
- )
-
- itr = task.eval_lm_dataloader(
- dataset=dataset,
- max_tokens=cfg.dataset.max_tokens or 36000,
- batch_size=cfg.dataset.batch_size,
- max_positions=utils.resolve_max_positions(
- *[model.max_positions() for model in models]
- ),
- num_shards=max(
- cfg.dataset.num_shards,
- cfg.distributed_training.distributed_world_size,
- ),
- shard_id=max(
- cfg.dataset.shard_id,
- cfg.distributed_training.distributed_rank,
- ),
- num_workers=cfg.dataset.num_workers,
- data_buffer_size=cfg.dataset.data_buffer_size,
- context_window=cfg.eval_lm.context_window,
- )
-
- itr = progress_bar.progress_bar(
- itr,
- log_format=cfg.common.log_format,
- log_interval=cfg.common.log_interval,
- default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"),
- )
-
- results = eval_lm(
- models=models,
- source_dictionary=task.source_dictionary,
- batch_iterator=itr,
- post_process=cfg.common_eval.post_process,
- output_word_probs=cfg.eval_lm.output_word_probs,
- output_word_stats=cfg.eval_lm.output_word_stats,
- target_dictionary=task.target_dictionary,
- softmax_batch=cfg.eval_lm.softmax_batch,
- remove_bos_token=getattr(cfg.task, "add_bos_token", False),
- )
-
- logger.info(
- "Loss (base 2): {:.4f}, Perplexity: {:.2f}".format(
- results["loss"], results["perplexity"]
- )
- )
-
- return results
-
-
-def cli_main():
- parser = options.get_eval_lm_parser()
- args = options.parse_args_and_arch(parser)
-
- distributed_utils.call_main(convert_namespace_to_omegaconf(args), main)
-
-
-if __name__ == "__main__":
- cli_main()
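
As a side note (illustrative numbers, not taken from the original script), this is the arithmetic eval_lm above performs when it turns the summed natural-log token scores into a base-2 loss and a perplexity:

```python
import math

score_sum = -3465.7   # sum of per-token log-probabilities (natural log), assumed value
count = 1000          # number of scored tokens, assumed value

avg_nll_loss = -score_sum / count / math.log(2)  # average loss in bits per token
perplexity = 2 ** avg_nll_loss

print(round(avg_nll_loss, 2), round(perplexity, 2))  # ~5.0 bits per token, perplexity ~32
```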
diff --git a/spaces/gradio/gpt-neo/README.md b/spaces/gradio/gpt-neo/README.md
deleted file mode 100644
index a9a930aaa7e72a28ad3bac5883e26abc56858b7d..0000000000000000000000000000000000000000
--- a/spaces/gradio/gpt-neo/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: GPT-Neo
-emoji: 📉
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
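
The README above only documents the Space configuration fields; the Space's actual app.py is not included in this diff. As a rough, assumed sketch, an `app_file` for a Space with `sdk: gradio` could look like the following (the `echo` function is a placeholder, not the real GPT-Neo inference code):

```python
import gradio as gr


def echo(prompt: str) -> str:
    # Placeholder for the real model call (e.g. a text-generation pipeline).
    return f"You asked: {prompt}"


# Interface wires the function to a simple text-in / text-out UI.
demo = gr.Interface(fn=echo, inputs="text", outputs="text", title="GPT-Neo demo")

if __name__ == "__main__":
    demo.launch()
```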
diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/utils/app/conversation.ts b/spaces/gsaivinay/Llama-2-13B-GGML-UI/utils/app/conversation.ts
deleted file mode 100644
index 3fdfbcf2368802a8d867a99cf40ade44d11efba7..0000000000000000000000000000000000000000
--- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/utils/app/conversation.ts
+++ /dev/null
@@ -1,30 +0,0 @@
-import { Conversation } from '@/types/chat';
-
-export const updateConversation = (
- updatedConversation: Conversation,
- allConversations: Conversation[],
-) => {
- const updatedConversations = allConversations.map((c) => {
- if (c.id === updatedConversation.id) {
- return updatedConversation;
- }
-
- return c;
- });
-
- saveConversation(updatedConversation);
- saveConversations(updatedConversations);
-
- return {
- single: updatedConversation,
- all: updatedConversations,
- };
-};
-
-export const saveConversation = (conversation: Conversation) => {
- localStorage.setItem('selectedConversation', JSON.stringify(conversation));
-};
-
-export const saveConversations = (conversations: Conversation[]) => {
- localStorage.setItem('conversationHistory', JSON.stringify(conversations));
-};
diff --git a/spaces/gyrojeff/YuzuMarker.FontDetection/detector/model.py b/spaces/gyrojeff/YuzuMarker.FontDetection/detector/model.py
deleted file mode 100644
index 50bfea2f001ab208c07f4ec6e150b45cba722542..0000000000000000000000000000000000000000
--- a/spaces/gyrojeff/YuzuMarker.FontDetection/detector/model.py
+++ /dev/null
@@ -1,322 +0,0 @@
-import torchmetrics
-from . import config
-
-from typing import Tuple, Dict, List, Any
-
-import numpy as np
-import torch
-import torchvision
-import torch.nn as nn
-import pytorch_lightning as ptl
-
-
-class DeepFontBaseline(nn.Module):
- def __init__(self) -> None:
- super().__init__()
- self.model = nn.Sequential(
- nn.Conv2d(3, 64, 11, 2),
- nn.BatchNorm2d(64),
- nn.ReLU(),
- nn.MaxPool2d(2, 2),
- nn.Conv2d(64, 128, 3, 1, 1),
- nn.BatchNorm2d(128),
- nn.ReLU(),
- nn.MaxPool2d(2, 2),
- nn.Conv2d(128, 256, 3, 1, 1),
- nn.ReLU(),
- nn.Conv2d(256, 256, 3, 1, 1),
- nn.ReLU(),
- nn.Conv2d(256, 256, 3, 1, 1),
- nn.ReLU(),
- # fc
- nn.Flatten(),
- nn.Linear(256 * 12 * 12, 4096),
- nn.ReLU(),
- nn.Linear(4096, 4096),
- nn.ReLU(),
- nn.Linear(4096, config.FONT_COUNT),
- )
-
- def forward(self, X):
- return self.model(X)
-
-
-class ResNet18Regressor(nn.Module):
- def __init__(self, pretrained: bool = False, regression_use_tanh: bool = False):
- super().__init__()
- weights = torchvision.models.ResNet18_Weights.DEFAULT if pretrained else None
- self.model = torchvision.models.resnet18(weights=weights)
- self.model.fc = nn.Linear(512, config.FONT_COUNT + 12)
- self.regression_use_tanh = regression_use_tanh
-
- def forward(self, X):
- X = self.model(X)
- # [0, 1]
- if not self.regression_use_tanh:
- X[..., config.FONT_COUNT + 2 :] = X[..., config.FONT_COUNT + 2 :].sigmoid()
- else:
- X[..., config.FONT_COUNT + 2 :] = X[..., config.FONT_COUNT + 2 :].tanh()
- return X
-
-
-class ResNet34Regressor(nn.Module):
- def __init__(self, pretrained: bool = False, regression_use_tanh: bool = False):
- super().__init__()
- weights = torchvision.models.ResNet34_Weights.DEFAULT if pretrained else None
- self.model = torchvision.models.resnet34(weights=weights)
- self.model.fc = nn.Linear(512, config.FONT_COUNT + 12)
- self.regression_use_tanh = regression_use_tanh
-
- def forward(self, X):
- X = self.model(X)
- # [0, 1]
- if not self.regression_use_tanh:
- X[..., config.FONT_COUNT + 2 :] = X[..., config.FONT_COUNT + 2 :].sigmoid()
- else:
- X[..., config.FONT_COUNT + 2 :] = X[..., config.FONT_COUNT + 2 :].tanh()
- return X
-
-
-class ResNet50Regressor(nn.Module):
- def __init__(self, pretrained: bool = False, regression_use_tanh: bool = False):
- super().__init__()
- weights = torchvision.models.ResNet50_Weights.DEFAULT if pretrained else None
- self.model = torchvision.models.resnet50(weights=weights)
- self.model.fc = nn.Linear(2048, config.FONT_COUNT + 12)
- self.regression_use_tanh = regression_use_tanh
-
- def forward(self, X):
- X = self.model(X)
- # [0, 1]
- if not self.regression_use_tanh:
- X[..., config.FONT_COUNT + 2 :] = X[..., config.FONT_COUNT + 2 :].sigmoid()
- else:
- X[..., config.FONT_COUNT + 2 :] = X[..., config.FONT_COUNT + 2 :].tanh()
- return X
-
-
-class ResNet101Regressor(nn.Module):
- def __init__(self, pretrained: bool = False, regression_use_tanh: bool = False):
- super().__init__()
- weights = torchvision.models.ResNet101_Weights.DEFAULT if pretrained else None
- self.model = torchvision.models.resnet101(weights=weights)
- self.model.fc = nn.Linear(2048, config.FONT_COUNT + 12)
- self.regression_use_tanh = regression_use_tanh
-
- def forward(self, X):
- X = self.model(X)
- # [0, 1]
- if not self.regression_use_tanh:
- X[..., config.FONT_COUNT + 2 :] = X[..., config.FONT_COUNT + 2 :].sigmoid()
- else:
- X[..., config.FONT_COUNT + 2 :] = X[..., config.FONT_COUNT + 2 :].tanh()
- return X
-
-
-class FontDetectorLoss(nn.Module):
- def __init__(
- self, lambda_font, lambda_direction, lambda_regression, font_classification_only
- ):
- super().__init__()
- self.category_loss = nn.CrossEntropyLoss()
- self.regression_loss = nn.MSELoss()
- self.lambda_font = lambda_font
- self.lambda_direction = lambda_direction
- self.lambda_regression = lambda_regression
- self.font_classfiication_only = font_classification_only
-
- def forward(self, y_hat, y):
- font_cat = self.category_loss(y_hat[..., : config.FONT_COUNT], y[..., 0].long())
- if self.font_classfiication_only:
- return self.lambda_font * font_cat
- direction_cat = self.category_loss(
- y_hat[..., config.FONT_COUNT : config.FONT_COUNT + 2], y[..., 1].long()
- )
- regression = self.regression_loss(
- y_hat[..., config.FONT_COUNT + 2 :], y[..., 2:]
- )
- return (
- self.lambda_font * font_cat
- + self.lambda_direction * direction_cat
- + self.lambda_regression * regression
- )
-
-
-class CosineWarmupScheduler(torch.optim.lr_scheduler._LRScheduler):
- def __init__(self, optimizer, warmup, max_iters):
- self.warmup = warmup
- self.max_num_iters = max_iters
- super().__init__(optimizer)
-
- def get_lr(self):
- lr_factor = self.get_lr_factor(epoch=self.last_epoch)
- return [base_lr * lr_factor for base_lr in self.base_lrs]
-
- def get_lr_factor(self, epoch):
- lr_factor = 0.5 * (1 + np.cos(np.pi * epoch / self.max_num_iters))
- if epoch <= self.warmup:
- lr_factor *= epoch * 1.0 / self.warmup
- return lr_factor
-
-
-class FontDetector(ptl.LightningModule):
- def __init__(
- self,
- model: nn.Module,
- lambda_font: float,
- lambda_direction: float,
- lambda_regression: float,
- font_classification_only: bool,
- lr: float,
- betas: Tuple[float, float],
- num_warmup_iters: int,
- num_iters: int,
- num_epochs: int,
- ):
- super().__init__()
- self.model = model
- self.loss = FontDetectorLoss(
- lambda_font, lambda_direction, lambda_regression, font_classification_only
- )
- self.font_accur_train = torchmetrics.Accuracy(
- task="multiclass", num_classes=config.FONT_COUNT
- )
- self.font_accur_val = torchmetrics.Accuracy(
- task="multiclass", num_classes=config.FONT_COUNT
- )
- self.font_accur_test = torchmetrics.Accuracy(
- task="multiclass", num_classes=config.FONT_COUNT
- )
- if not font_classification_only:
- self.direction_accur_train = torchmetrics.Accuracy(
- task="multiclass", num_classes=2
- )
- self.direction_accur_val = torchmetrics.Accuracy(
- task="multiclass", num_classes=2
- )
- self.direction_accur_test = torchmetrics.Accuracy(
- task="multiclass", num_classes=2
- )
- self.lr = lr
- self.betas = betas
- self.num_warmup_iters = num_warmup_iters
- self.num_iters = num_iters
- self.num_epochs = num_epochs
- self.load_epoch = -1
- self.font_classification_only = font_classification_only
-
- def forward(self, x):
- return self.model(x)
-
- def training_step(
- self, batch: Tuple[torch.Tensor, torch.Tensor], batch_idx: int
- ) -> Dict[str, Any]:
- X, y = batch
- y_hat = self.forward(X)
- loss = self.loss(y_hat, y)
- self.log("train_loss", loss, prog_bar=True, sync_dist=True)
- # accur
- self.log(
- "train_font_accur",
- self.font_accur_train(y_hat[..., : config.FONT_COUNT], y[..., 0]),
- sync_dist=True,
- )
- if not self.font_classification_only:
- self.log(
- "train_direction_accur",
- self.direction_accur_train(
- y_hat[..., config.FONT_COUNT : config.FONT_COUNT + 2], y[..., 1]
- ),
- sync_dist=True,
- )
- return {"loss": loss}
-
- def on_train_epoch_end(self) -> None:
- self.log("train_font_accur", self.font_accur_train.compute(), sync_dist=True)
- self.font_accur_train.reset()
- if not self.font_classification_only:
- self.log(
- "train_direction_accur",
- self.direction_accur_train.compute(),
- sync_dist=True,
- )
- self.direction_accur_train.reset()
-
- def validation_step(
- self, batch: Tuple[torch.Tensor, torch.Tensor], batch_idx: int
- ) -> Dict[str, Any]:
- X, y = batch
- y_hat = self.forward(X)
- loss = self.loss(y_hat, y)
- self.log("val_loss", loss, prog_bar=True, sync_dist=True)
- self.font_accur_val.update(y_hat[..., : config.FONT_COUNT], y[..., 0])
- if not self.font_classification_only:
- self.direction_accur_val.update(
- y_hat[..., config.FONT_COUNT : config.FONT_COUNT + 2], y[..., 1]
- )
- return {"loss": loss}
-
- def on_validation_epoch_end(self):
- self.log("val_font_accur", self.font_accur_val.compute(), sync_dist=True)
- self.font_accur_val.reset()
- if not self.font_classification_only:
- self.log(
- "val_direction_accur",
- self.direction_accur_val.compute(),
- sync_dist=True,
- )
- self.direction_accur_val.reset()
-
- def test_step(self, batch: Tuple[torch.Tensor, torch.Tensor], batch_idx: int):
- X, y = batch
- y_hat = self.forward(X)
- loss = self.loss(y_hat, y)
- self.log("test_loss", loss, prog_bar=True, sync_dist=True)
- self.font_accur_test.update(y_hat[..., : config.FONT_COUNT], y[..., 0])
- if not self.font_classification_only:
- self.direction_accur_test.update(
- y_hat[..., config.FONT_COUNT : config.FONT_COUNT + 2], y[..., 1]
- )
- return {"loss": loss}
-
- def on_test_epoch_end(self) -> None:
- self.log("test_font_accur", self.font_accur_test.compute(), sync_dist=True)
- self.font_accur_test.reset()
- if not self.font_classification_only:
- self.log(
- "test_direction_accur",
- self.direction_accur_test.compute(),
- sync_dist=True,
- )
- self.direction_accur_test.reset()
-
- def configure_optimizers(self):
- optimizer = torch.optim.Adam(
- self.model.parameters(), lr=self.lr, betas=self.betas
- )
- self.scheduler = CosineWarmupScheduler(
- optimizer, self.num_warmup_iters, self.num_iters
- )
- print("Load epoch:", self.load_epoch)
- for _ in range(self.num_iters * (self.load_epoch + 1) // self.num_epochs):
- self.scheduler.step()
- print("Current learning rate set to:", self.scheduler.get_last_lr())
- return optimizer
-
- def optimizer_step(
- self,
- epoch: int,
- batch_idx: int,
- optimizer,
- optimizer_idx: int = 0,
- *args,
- **kwargs
- ):
- super().optimizer_step(
- epoch, batch_idx, optimizer, optimizer_idx, *args, **kwargs
- )
- self.log("lr", self.scheduler.get_last_lr()[0])
- self.scheduler.step()
-
- def on_load_checkpoint(self, checkpoint: Dict[str, Any]) -> None:
- self.load_epoch = checkpoint["epoch"]
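
For a quick sanity check of the CosineWarmupScheduler defined above, its learning-rate factor (linear warmup for `warmup` steps, then cosine decay over `max_iters` steps) can be reproduced in a few lines; the numbers below are illustrative only.

```python
import numpy as np


def lr_factor(epoch, warmup=100, max_iters=1000):
    # Same formula as CosineWarmupScheduler.get_lr_factor above.
    factor = 0.5 * (1 + np.cos(np.pi * epoch / max_iters))
    if epoch <= warmup:
        factor *= epoch / warmup
    return factor


print(round(lr_factor(0), 3))     # 0.0   (start of warmup)
print(round(lr_factor(100), 3))   # 0.976 (end of warmup, cosine already slightly below 1)
print(round(lr_factor(500), 3))   # 0.5   (halfway through the cosine decay)
print(round(lr_factor(1000), 3))  # 0.0   (fully decayed)
```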
diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/__init__.py b/spaces/gyugnsu/DragGan-Inversion/PTI/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/gyugnsu/DragGan-Inversion/viz/renderer.py b/spaces/gyugnsu/DragGan-Inversion/viz/renderer.py
deleted file mode 100644
index 4c6fc27c50046e56af68166e017276d97a173580..0000000000000000000000000000000000000000
--- a/spaces/gyugnsu/DragGan-Inversion/viz/renderer.py
+++ /dev/null
@@ -1,442 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-from socket import has_dualstack_ipv6
-import sys
-import copy
-import traceback
-import math
-import numpy as np
-from PIL import Image, ImageDraw, ImageFont
-import torch
-import torch.fft
-import torch.nn as nn
-import torch.nn.functional as F
-import matplotlib.cm
-import dnnlib
-from torch_utils.ops import upfirdn2d
-import legacy # pylint: disable=import-error
-
-# ----------------------------------------------------------------------------
-
-
-class CapturedException(Exception):
- def __init__(self, msg=None):
- if msg is None:
- _type, value, _traceback = sys.exc_info()
- assert value is not None
- if isinstance(value, CapturedException):
- msg = str(value)
- else:
- msg = traceback.format_exc()
- assert isinstance(msg, str)
- super().__init__(msg)
-
-# ----------------------------------------------------------------------------
-
-
-class CaptureSuccess(Exception):
- def __init__(self, out):
- super().__init__()
- self.out = out
-
-# ----------------------------------------------------------------------------
-
-
-def add_watermark_np(input_image_array, watermark_text="AI Generated"):
- image = Image.fromarray(np.uint8(input_image_array)).convert("RGBA")
-
- # Initialize text image
- txt = Image.new('RGBA', image.size, (255, 255, 255, 0))
- font = ImageFont.truetype('arial.ttf', round(25/512*image.size[0]))
- d = ImageDraw.Draw(txt)
-
- text_width, text_height = font.getsize(watermark_text)
- text_position = (image.size[0] - text_width -
- 10, image.size[1] - text_height - 10)
- # white color with the alpha channel set to semi-transparent
- text_color = (255, 255, 255, 128)
-
- # Draw the text onto the text canvas
- d.text(text_position, watermark_text, font=font, fill=text_color)
-
- # Combine the image with the watermark
- watermarked = Image.alpha_composite(image, txt)
- watermarked_array = np.array(watermarked)
- return watermarked_array
-
-# ----------------------------------------------------------------------------
-
-
-class Renderer:
- def __init__(self, disable_timing=False):
- self._device = torch.device('cuda' if torch.cuda.is_available(
- ) else 'mps' if torch.backends.mps.is_available() else 'cpu')
- self._dtype = torch.float32 if self._device.type == 'mps' else torch.float64
- self._pkl_data = dict() # {pkl: dict | CapturedException, ...}
- self._networks = dict() # {cache_key: torch.nn.Module, ...}
- self._pinned_bufs = dict() # {(shape, dtype): torch.Tensor, ...}
- self._cmaps = dict() # {name: torch.Tensor, ...}
- self._is_timing = False
- if not disable_timing:
- self._start_event = torch.cuda.Event(enable_timing=True)
- self._end_event = torch.cuda.Event(enable_timing=True)
- self._disable_timing = disable_timing
- self._net_layers = dict() # {cache_key: [dnnlib.EasyDict, ...], ...}
-
- def render(self, **args):
- if self._disable_timing:
- self._is_timing = False
- else:
- self._start_event.record(torch.cuda.current_stream(self._device))
- self._is_timing = True
- res = dnnlib.EasyDict()
- try:
- init_net = False
- if not hasattr(self, 'G'):
- init_net = True
- if hasattr(self, 'pkl'):
- if self.pkl != args['pkl']:
- init_net = True
- if hasattr(self, 'w_load'):
- if self.w_load is not args['w_load']:
- init_net = True
- if hasattr(self, 'w0_seed'):
- if self.w0_seed != args['w0_seed']:
- init_net = True
- if hasattr(self, 'w_plus'):
- if self.w_plus != args['w_plus']:
- init_net = True
- if args['reset_w']:
- init_net = True
- res.init_net = init_net
- if init_net:
- self.init_network(res, **args)
- self._render_drag_impl(res, **args)
- except:
- res.error = CapturedException()
- if not self._disable_timing:
- self._end_event.record(torch.cuda.current_stream(self._device))
- if 'image' in res:
- res.image = self.to_cpu(res.image).detach().numpy()
- res.image = add_watermark_np(res.image, 'AI Generated')
- if 'stats' in res:
- res.stats = self.to_cpu(res.stats).detach().numpy()
- if 'error' in res:
- res.error = str(res.error)
- # if 'stop' in res and res.stop:
-
- if self._is_timing and not self._disable_timing:
- self._end_event.synchronize()
- res.render_time = self._start_event.elapsed_time(
- self._end_event) * 1e-3
- self._is_timing = False
- return res
-
- def get_network(self, pkl, key, **tweak_kwargs):
- data = self._pkl_data.get(pkl, None)
- if data is None:
- print(f'Loading "{pkl}"... ', end='', flush=True)
- try:
- with dnnlib.util.open_url(pkl, verbose=False) as f:
- data = legacy.load_network_pkl(f)
- print('Done.')
- except:
- data = CapturedException()
- print('Failed!')
- self._pkl_data[pkl] = data
- self._ignore_timing()
- if isinstance(data, CapturedException):
- raise data
-
- orig_net = data[key]
- cache_key = (orig_net, self._device, tuple(
- sorted(tweak_kwargs.items())))
- net = self._networks.get(cache_key, None)
- if net is None:
- try:
- if 'stylegan2' in pkl:
- from training.networks_stylegan2 import Generator
- elif 'stylegan3' in pkl:
- from training.networks_stylegan3 import Generator
- elif 'stylegan_human' in pkl:
- from stylegan_human.training_scripts.sg2.training.networks import Generator
- else:
- raise NameError('Cannot infer model type from pkl name!')
-
- print(data[key].init_args)
- print(data[key].init_kwargs)
- if 'stylegan_human' in pkl:
- net = Generator(
- *data[key].init_args, **data[key].init_kwargs, square=False, padding=True)
- else:
- net = Generator(*data[key].init_args,
- **data[key].init_kwargs)
- net.load_state_dict(data[key].state_dict())
- net.to(self._device)
- except:
- net = CapturedException()
- self._networks[cache_key] = net
- self._ignore_timing()
- if isinstance(net, CapturedException):
- raise net
- return net
-
- def _get_pinned_buf(self, ref):
- key = (tuple(ref.shape), ref.dtype)
- buf = self._pinned_bufs.get(key, None)
- if buf is None:
- buf = torch.empty(ref.shape, dtype=ref.dtype).pin_memory()
- self._pinned_bufs[key] = buf
- return buf
-
- def to_device(self, buf):
- return self._get_pinned_buf(buf).copy_(buf).to(self._device)
-
- def to_cpu(self, buf):
- return self._get_pinned_buf(buf).copy_(buf).clone()
-
- def _ignore_timing(self):
- self._is_timing = False
-
- def _apply_cmap(self, x, name='viridis'):
- cmap = self._cmaps.get(name, None)
- if cmap is None:
- cmap = matplotlib.cm.get_cmap(name)
- cmap = cmap(np.linspace(0, 1, num=1024), bytes=True)[:, :3]
- cmap = self.to_device(torch.from_numpy(cmap))
- self._cmaps[name] = cmap
- hi = cmap.shape[0] - 1
- x = (x * hi + 0.5).clamp(0, hi).to(torch.int64)
- x = torch.nn.functional.embedding(x, cmap)
- return x
-
- def init_network(self, res,
- pkl=None,
- w0_seed=0,
- w_load=None,
- w_plus=True,
- noise_mode='const',
- trunc_psi=0.7,
- trunc_cutoff=None,
- input_transform=None,
- lr=0.001,
- **kwargs
- ):
- # Dig up network details.
- self.pkl = pkl
- G = self.get_network(pkl, 'G_ema')
- self.G = G
- res.img_resolution = G.img_resolution
- res.num_ws = G.num_ws
- res.has_noise = any('noise_const' in name for name,
- _buf in G.synthesis.named_buffers())
- res.has_input_transform = (
- hasattr(G.synthesis, 'input') and hasattr(G.synthesis.input, 'transform'))
- res.stop = False
- self.lr = lr
- # Set input transform.
- if res.has_input_transform:
- m = np.eye(3)
- try:
- if input_transform is not None:
- m = np.linalg.inv(np.asarray(input_transform))
- except np.linalg.LinAlgError:
- res.error = CapturedException()
- G.synthesis.input.transform.copy_(torch.from_numpy(m))
-
- # Generate random latents.
- self.w0_seed = w0_seed
- self.w_load = w_load
-
- if self.w_load is None:
- # Generate random latents.
- z = torch.from_numpy(np.random.RandomState(w0_seed).randn(
- 1, 512)).to(self._device, dtype=self._dtype)
-
- # Run mapping network.
- label = torch.zeros([1, G.c_dim], device=self._device)
- w = G.mapping(z, label, truncation_psi=trunc_psi,
- truncation_cutoff=trunc_cutoff)
- else:
- w = self.w_load.clone().to(self._device)
-
- self.w0 = w.detach().clone()
- self.w_plus = w_plus
- if w_plus:
- self.w = w.detach()
- else:
- self.w = w[:, 0, :].detach()
- self.w.requires_grad = True
- self.w_optim = torch.optim.Adam([self.w], lr=lr)
-
- self.feat_refs = None
- self.points0_pt = None
-
- def set_latent(self, w, trunc_psi, trunc_cutoff):
- # label = torch.zeros([1, self.G.c_dim], device=self._device)
- # w = self.G.mapping(z, label, truncation_psi=trunc_psi, truncation_cutoff=trunc_cutoff)
- self.w0 = w.detach().clone()
- if self.w_plus:
- self.w = w.detach()
- else:
- self.w = w[:, 0, :].detach()
- self.w.requires_grad = True
- self.w_optim = torch.optim.Adam([self.w], lr=self.lr)
-
- self.feat_refs = None
- self.points0_pt = None
-
- def update_lr(self, lr):
-
- del self.w_optim
- self.w_optim = torch.optim.Adam([self.w], lr=lr)
- print(f'Rebuild optimizer with lr: {lr}')
- print(' Remain feat_refs and points0_pt')
-
- def _render_drag_impl(self, res,
- points=[],
- targets=[],
- mask=None,
- lambda_mask=10,
- reg=0,
- feature_idx=5,
- r1=3,
- r2=12,
- random_seed=0,
- noise_mode='const',
- trunc_psi=0.7,
- force_fp32=False,
- layer_name=None,
- sel_channels=3,
- base_channel=0,
- img_scale_db=0,
- img_normalize=False,
- untransform=False,
- is_drag=False,
- reset=False,
- to_pil=False,
- **kwargs
- ):
- try:
- G = self.G
- ws = self.w
- if ws.dim() == 2:
- ws = ws.unsqueeze(1).repeat(1, 6, 1)
- ws = torch.cat([ws[:, :6, :], self.w0[:, 6:, :]], dim=1)
- if hasattr(self, 'points'):
- if len(points) != len(self.points):
- reset = True
- if reset:
- self.feat_refs = None
- self.points0_pt = None
- self.points = points
-
- # Run synthesis network.
- label = torch.zeros([1, G.c_dim], device=self._device)
- img, feat = G(ws, label, truncation_psi=trunc_psi,
- noise_mode=noise_mode, input_is_w=True, return_feature=True)
-
- h, w = G.img_resolution, G.img_resolution
-
- if is_drag:
- X = torch.linspace(0, h, h)
- Y = torch.linspace(0, w, w)
- xx, yy = torch.meshgrid(X, Y)
- feat_resize = F.interpolate(
- feat[feature_idx], [h, w], mode='bilinear')
- if self.feat_refs is None:
- self.feat0_resize = F.interpolate(
- feat[feature_idx].detach(), [h, w], mode='bilinear')
- self.feat_refs = []
- for point in points:
- py, px = round(point[0]), round(point[1])
- self.feat_refs.append(self.feat0_resize[:, :, py, px])
- self.points0_pt = torch.Tensor(points).unsqueeze(
- 0).to(self._device) # 1, N, 2
-
- # Point tracking with feature matching
- with torch.no_grad():
- for j, point in enumerate(points):
- r = round(r2 / 512 * h)
- up = max(point[0] - r, 0)
- down = min(point[0] + r + 1, h)
- left = max(point[1] - r, 0)
- right = min(point[1] + r + 1, w)
- feat_patch = feat_resize[:, :, up:down, left:right]
- L2 = torch.linalg.norm(
- feat_patch - self.feat_refs[j].reshape(1, -1, 1, 1), dim=1)
- _, idx = torch.min(L2.view(1, -1), -1)
- width = right - left
- point = [idx.item() // width + up, idx.item() %
- width + left]
- points[j] = point
-
- res.points = [[point[0], point[1]] for point in points]
-
- # Motion supervision
- loss_motion = 0
- res.stop = True
- for j, point in enumerate(points):
- direction = torch.Tensor(
- [targets[j][1] - point[1], targets[j][0] - point[0]])
- if torch.linalg.norm(direction) > max(2 / 512 * h, 2):
- res.stop = False
- if torch.linalg.norm(direction) > 1:
- distance = (
- (xx.to(self._device) - point[0])**2 + (yy.to(self._device) - point[1])**2)**0.5
- relis, reljs = torch.where(
- distance < round(r1 / 512 * h))
- direction = direction / \
- (torch.linalg.norm(direction) + 1e-7)
- gridh = (relis-direction[1]) / (h-1) * 2 - 1
- gridw = (reljs-direction[0]) / (w-1) * 2 - 1
- grid = torch.stack(
- [gridw, gridh], dim=-1).unsqueeze(0).unsqueeze(0)
- target = F.grid_sample(
- feat_resize.float(), grid, align_corners=True).squeeze(2)
- loss_motion += F.l1_loss(
- feat_resize[:, :, relis, reljs], target.detach())
-
- loss = loss_motion
- if mask is not None:
- if mask.min() == 0 and mask.max() == 1:
- mask_usq = mask.to(
- self._device).unsqueeze(0).unsqueeze(0)
- loss_fix = F.l1_loss(
- feat_resize * mask_usq, self.feat0_resize * mask_usq)
- loss += lambda_mask * loss_fix
-
- # latent code regularization
- loss += reg * F.l1_loss(ws, self.w0)
- if not res.stop:
- self.w_optim.zero_grad()
- loss.backward()
- self.w_optim.step()
-
- # Scale and convert to uint8.
- img = img[0]
- if img_normalize:
- img = img / img.norm(float('inf'),
- dim=[1, 2], keepdim=True).clip(1e-8, 1e8)
- img = img * (10 ** (img_scale_db / 20))
- img = (img * 127.5 + 128).clamp(0,
- 255).to(torch.uint8).permute(1, 2, 0)
- if to_pil:
- from PIL import Image
- img = img.cpu().numpy()
- img = Image.fromarray(img)
- res.image = img
- res.w = ws.detach().cpu().numpy()
- except Exception as e:
- import os
- print(f'Renderer error: {e}')
- print("Out of memory error occurred. Restarting the app...")
- os.execv(sys.executable, ['python'] + sys.argv)
-
-# ----------------------------------------------------------------------------
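
A small numeric illustration (assumed values, not taken from the file above) of how the point-tracking step in `_render_drag_impl` maps the flattened argmin of the L2 distance patch back to 2-D pixel coordinates:

```python
import torch

up, left = 10, 20                        # top-left corner of the search patch
patch = torch.tensor([[3.0, 2.0, 5.0],
                      [1.0, 0.5, 4.0]])  # pretend L2 distances, shape (2, 3)
width = patch.shape[1]

_, idx = torch.min(patch.view(1, -1), -1)  # flattened argmin, here index 4 (value 0.5)
row = idx.item() // width + up             # 1 + 10 = 11
col = idx.item() % width + left            # 1 + 20 = 21
print(row, col)                            # 11 21
```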
diff --git a/spaces/hallochen/firstspace/index.html b/spaces/hallochen/firstspace/index.html
deleted file mode 100644
index 918e851d9dd1baf9e4fb4f067fd979d432472161..0000000000000000000000000000000000000000
--- a/spaces/hallochen/firstspace/index.html
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
-
-
-
- My static Space
-
-
-
-
-
Welcome to your static Space!
-
- You can modify this app directly by editing index.html in the
- Files and versions tab.
-
What is Ergosoft Texprint 14 Crack Junki and Why You Need It
-
If you are a professional textile designer or producer, you probably know how important it is to have reliable and powerful RIP software that can handle your digital printing needs. RIP stands for raster image processor; it is software that converts your digital designs into printable formats that can be printed on various textile products, such as fabrics, garments, banners, flags, and more.
-
One of the most popular and trusted RIP software packages for digital textile printing is Ergosoft Texprint 14. This software is designed to meet the challenges and requirements of today's and tomorrow's textile production. It integrates the leading image processing technologies and incorporates unique textile-specific features that can help you create stunning and high-quality prints.
However, Ergosoft Texprint 14 is not cheap software. It is protected with a WIBU CodeMeter hardware key and a custom license that can cost thousands of dollars. If you want to use this software without paying for it, you need Ergosoft Texprint 14 Crack Junki.
-
Ergosoft Texprint 14 Crack Junki is a solution that allows you to crack and use Ergosoft Texprint 14 for free. It bypasses the hardware key and the license verification and unlocks all the features of the software. With Ergosoft Texprint 14 Crack Junki, you can enjoy the benefits of Ergosoft Texprint 14 without spending a dime.
-
The Benefits of Ergosoft Texprint 14
-
Ergosoft Texprint 14 has many benefits that make it a great choice for digital textile printing. Here are some of them:
-
-
It has a user-friendly interface that lets you create your designs in a few easy steps.
-
It has over 3,000 pre-designed templates that you can use or modify for your projects.
-
It has a large library of images, clipart, shapes, symbols, and backgrounds that you can add to your designs.
-
It has a text editor that lets you customize your fonts, sizes, colors, alignments, and effects.
-
It has a barcode generator that lets you create and print barcodes for your products.
-
It has a mail merge feature that lets you import data from Excel or other sources and print personalized labels or cards.
-
It has a print preview feature that lets you see how your designs will look on the actual product before printing.
-
It supports a variety of textile products, such as fabrics, garments, banners, flags, vinyls, stickers, magnets, iron-on transfers, and more.
-
It allows you to save your designs as PDF files or export them to other formats.
-
-
How to Download and Install Ergosoft Texprint 14 Crack Junki
-
If you want to download and install Ergosoft Texprint 14 Crack Junki on your Windows 10 computer, you can follow these steps:
Extract the file to your computer and copy the DesignPro.exe file.
-
Paste the file into the installation folder of Ergosoft Texprint 14 (usually C:\Program Files (x86)\Ergosoft\TexPrint).
-
Replace the original file with the cracked one.
-
Launch the software and enjoy all its features for free.
-
-
Conclusion
-
Ergosoft Texprint 14 is a powerful software that can help you create and print stunning and high-quality textile products. It integrates the leading image processing technologies and incorporates unique textile-specific features. However, it is also an expensive software that requires a hardware key and a license to use.
-
If you want to use Ergosoft Texprint 14 for free, you need Ergosoft Texprint 14 Crack Junki. This solution allows you to crack and use Ergosoft Texprint 14 without paying for it. It bypasses the hardware key and the license verification and unlocks all the features of the software.
-
In this article, we showed you what Ergosoft Texprint 14 Crack Junki is and why you need it. We also showed you how to download and install Ergosoft Texprint 14 Crack Junki on your Windows 10 computer. We hope you found this article helpful and informative. If you have any questions or comments, please feel free to leave them below.
-
How to Use Ergosoft Texprint 14 Crack Junki for Different Textile Products
-
One of the advantages of Ergosoft Texprint 14 Crack Junki is that it allows you to use Ergosoft Texprint 14 for different textile products. You can use it to design and print on various materials, such as fabrics, garments, banners, flags, vinyls, stickers, magnets, iron-on transfers, and more. You can also use it to create different types of products, such as labels, cards, business cards, dividers, CD/DVD labels, name badges, and more.
-
-
To use Ergosoft Texprint 14 Crack Junki for different textile products, you need to follow these steps:
-
-
Launch the software and click on the New Project button.
-
Select the type of product you want to create (for example, labels) and choose the Avery product number that matches your product (for example, 5160).
-
Choose a template from the available options or click on the Blank Project button to start from scratch.
-
Edit your design by adding text, images, shapes, symbols, backgrounds, and effects. You can use the tools on the left side of the screen to customize your design.
-
Click on the Print button when you are done with your design. You can preview your design before printing and adjust the settings as needed.
-
Load your textile product into your printer and print your design.
-
-
You can repeat these steps for any textile product you want to create and print with Ergosoft Texprint 14 Crack Junki.
-
How to Optimize Your Designs with Ergosoft Texprint 14 Crack Junki
-
Another benefit of Ergosoft Texprint 14 Crack Junki is that it allows you to optimize your designs with Ergosoft Texprint 14. This software has many features and tools that can help you improve the quality and efficiency of your designs. Here are some of them:
-
-
It has a color management system that lets you control and adjust the colors of your designs according to your printer and textile product.
-
It has a step and repeat feature that lets you duplicate and arrange your designs in a grid pattern for efficient printing.
-
It has a nesting feature that lets you fit multiple designs on a single sheet of material for optimal use of space and material.
-
It has a cut contour feature that lets you create and print cut lines around your designs for easy cutting and weeding.
-
It has a color replacement feature that lets you change the colors of your designs without affecting the original file.
-
-
To optimize your designs with Ergosoft Texprint 14 Crack Junki, you need to use these features and tools according to your needs and preferences. You can access them from the menus and toolbars on the top and right side of the screen.
-
Conclusion
-
Ergosoft Texprint 14 is a powerful software that can help you create and print stunning and high-quality textile products. It integrates the leading image processing technologies and incorporates unique textile-specific features. However, it is also an expensive software that requires a hardware key and a license to use.
-
If you want to use Ergosoft Texprint 14 for free, you need Ergosoft Texprint 14 Crack Junki. This solution allows you to crack and use Ergosoft Texprint 14 without paying for it. It bypasses the hardware key and the license verification and unlocks all the features of the software.
-
In this article, we showed you what Ergosoft Texprint 14 Crack Junki is and why you need it. We also showed you how to download and install Ergosoft Texprint 14 Crack Junki on your Windows 10 computer. We also showed you how to use Ergosoft Texprint 14 Crack Junki for different textile products and how to optimize your designs with Ergosoft Texprint 14 Crack Junki.
-
We hope you found this article helpful and informative. If you have any questions or comments, please feel free to leave them below.
-
How to Design Textile Products that Stand Out with Ergosoft Texprint 14 Crack Junki
-
If you want to design textile products that stand out with Ergosoft Texprint 14 Crack Junki, you need to use your creativity and skills to create unique and attractive designs. Here are some tips that can help you design textile products that stand out with Ergosoft Texprint 14 Crack Junki:
-
-
Choose the right textile product for your design. Consider the size, shape, material, and purpose of your product and how it will fit your design.
-
Choose the right template for your design. Use the pre-designed templates from Ergosoft Texprint 14 or create your own template from scratch.
-
Choose the right colors for your design. Use the color management system from Ergosoft Texprint 14 to control and adjust the colors of your design according to your printer and textile product.
-
Choose the right images for your design. Use the large library of images, clipart, shapes, symbols, and backgrounds from Ergosoft Texprint 14 or import your own images from your computer or online sources.
-
Choose the right fonts for your design. Use the text editor from Ergosoft Texprint 14 to customize your fonts, sizes, colors, alignments, and effects.
-
Choose the right effects for your design. Use the tools and features from Ergosoft Texprint 14 to add effects to your design, such as shadows, gradients, transparency, distortion, and more.
-
Preview and print your design. Use the print preview feature from Ergosoft Texprint 14 to see how your design will look on the actual product before printing. Adjust the settings as needed and print your design on your textile product.
-
-
By following these tips, you can design textile products that stand out with Ergosoft Texprint 14 Crack Junki.
-
How to Share Your Designs with Others with Ergosoft Texprint 14 Crack Junki
-
If you want to share your designs with others with Ergosoft Texprint 14 Crack Junki, you need to save or export your designs in a format that others can view or use. Here are some ways to share your designs with others with Ergosoft Texprint 14 Crack Junki:
-
-
Save your designs as PDF files. You can use the save as PDF feature from Ergosoft Texprint 14 to save your designs as PDF files that others can view or print on any device.
-
Export your designs to other formats. You can use the export feature from Ergosoft Texprint 14 to export your designs to other formats that others can use or edit on other software, such as JPG, PNG, TIFF, BMP, EPS, PSD, and more.
-
Upload your designs online. You can use online platforms or services to upload your designs online and share them with others via email, social media, or other channels.
-
-
By using these methods, you can share your designs with others with Ergosoft Texprint 14 Crack Junki.
-
Conclusion
-
Ergosoft Texprint 14 is a powerful software that can help you create and print stunning and high-quality textile products. It integrates the leading image processing technologies and incorporates unique textile-specific features. However, it is also an expensive software that requires a hardware key and a license to use.
-
If you want to use Ergosoft Texprint 14 for free, you need Ergosoft Texprint 14 Crack Junki. This solution allows you to crack and use Ergosoft Texprint 14 without paying for it. It bypasses the hardware key and the license verification and unlocks all the features of the software.
-
In this article, we showed you what Ergosoft Texprint 14 Crack Junki is and why you need it. We also showed you how to download and install Ergosoft Texprint 14 Crack Junki on your Windows 10 computer. We also showed you how to use Ergosoft Texprint 14 Crack Junki for different textile products and how to optimize your designs with Ergosoft Texprint 14 Crack Junki. We also showed you how to design textile products that stand out with Ergosoft Texprint 14 Crack Junki and how to share your designs with others with Ergosoft Texprint 14 Crack Junki.
-
We hope you found this article helpful and informative. If you have any questions or comments, please feel free to leave them below.
-
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Gta 4 Psp Download Gratis ((TOP)).md b/spaces/inplisQlawa/anything-midjourney-v4-1/Gta 4 Psp Download Gratis ((TOP)).md
deleted file mode 100644
index 2eed8c95fbdde4e672aa9f36873cfdcbd7ade6f1..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Gta 4 Psp Download Gratis ((TOP)).md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Adobe Photoshop CS7: What to Expect from the Next Version of the Popular Photo Editing Software
-
Adobe Photoshop is one of the most widely used and powerful photo editing software in the world. It has been around for over 30 years, and has evolved with new features and improvements over time. The latest version of Photoshop is CS6, which was released in 2012. However, many Photoshop users are eagerly waiting for the next version, which is rumored to be CS7.
So what can we expect from Adobe Photoshop CS7? Here are some of the possible features and enhancements that might be included in the next update:
-
-
A redesigned user interface that is more intuitive and customizable. According to Creative Bloq[^2^], Adobe might adopt a dark UI similar to Lightroom and Premiere Pro, as well as a more streamlined workflow and better organization of tools and panels.
-
New and improved filters and effects, such as a Blur gallery that allows you to edit blurs directly on screen[^2^], a Content-Aware Move tool that lets you move objects within an image and automatically fill in the gaps[^2^], and a Camera Raw filter that applies non-destructive adjustments to any layer[^2^].
-
Better performance and stability, especially for large and complex files. Adobe might also optimize Photoshop for touch devices, such as tablets and laptops with touchscreens, and support high-resolution displays, such as Retina screens on Macs.
-
More integration with other Adobe products and services, such as Creative Cloud, which offers cloud storage, online collaboration, and access to other apps and resources. Adobe might also introduce new features that leverage artificial intelligence and machine learning, such as Sensei, which can help with tasks like selecting subjects, enhancing images, and creating realistic effects.
-
-
Of course, these are just speculations based on rumors and wish lists from Photoshop users. There is no official confirmation or announcement from Adobe about when Photoshop CS7 will be released or what it will include. The last update from Adobe was in 2019, when they celebrated Photoshop's 25th anniversary and released an infographic that showed the history and evolution of Photoshop[^3^]. According to that infographic, Photoshop CS7 would be the 15th major version of Photoshop.
-
However, some sources suggest that Adobe might skip CS7 altogether and move to a new naming scheme based on the year of release, such as Photoshop 2020 or Photoshop 2021. This would align with other Adobe products, such as Illustrator and InDesign, which have adopted this format since 2015. This would also make it easier for users to identify the latest version of Photoshop and avoid confusion with older versions.
-
Whatever the name or the features of the next version of Photoshop, one thing is certain: it will be eagerly anticipated by millions of photographers, designers, artists, and hobbyists who rely on Photoshop for their creative work. Photoshop is not just a piece of software; it is a culture, a phenomenon, and a legacy that has shaped the digital imaging industry for decades.
- d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/jackli888/stable-diffusion-webui/javascript/localization.js b/spaces/jackli888/stable-diffusion-webui/javascript/localization.js
deleted file mode 100644
index 7b4affabd7bd9e5d9e8193e51e1e40267f0d4cde..0000000000000000000000000000000000000000
--- a/spaces/jackli888/stable-diffusion-webui/javascript/localization.js
+++ /dev/null
@@ -1,165 +0,0 @@
-
-// localization = {} -- the dict with translations is created by the backend
-
-ignore_ids_for_localization={
- setting_sd_hypernetwork: 'OPTION',
- setting_sd_model_checkpoint: 'OPTION',
- setting_realesrgan_enabled_models: 'OPTION',
- modelmerger_primary_model_name: 'OPTION',
- modelmerger_secondary_model_name: 'OPTION',
- modelmerger_tertiary_model_name: 'OPTION',
- train_embedding: 'OPTION',
- train_hypernetwork: 'OPTION',
- txt2img_styles: 'OPTION',
- img2img_styles: 'OPTION',
- setting_random_artist_categories: 'SPAN',
- setting_face_restoration_model: 'SPAN',
- setting_realesrgan_enabled_models: 'SPAN',
- extras_upscaler_1: 'SPAN',
- extras_upscaler_2: 'SPAN',
-}
-
-re_num = /^[\.\d]+$/
-re_emoji = /[\p{Extended_Pictographic}\u{1F3FB}-\u{1F3FF}\u{1F9B0}-\u{1F9B3}]/u
-
-original_lines = {}
-translated_lines = {}
-
-function textNodesUnder(el){
- var n, a=[], walk=document.createTreeWalker(el,NodeFilter.SHOW_TEXT,null,false);
- while(n=walk.nextNode()) a.push(n);
- return a;
-}
-
-function canBeTranslated(node, text){
- if(! text) return false;
- if(! node.parentElement) return false;
-
- parentType = node.parentElement.nodeName
- if(parentType=='SCRIPT' || parentType=='STYLE' || parentType=='TEXTAREA') return false;
-
- if (parentType=='OPTION' || parentType=='SPAN'){
- pnode = node
- for(var level=0; level<4; level++){
- pnode = pnode.parentElement
- if(! pnode) break;
-
- if(ignore_ids_for_localization[pnode.id] == parentType) return false;
- }
- }
-
- if(re_num.test(text)) return false;
- if(re_emoji.test(text)) return false;
- return true
-}
-
-function getTranslation(text){
- if(! text) return undefined
-
- if(translated_lines[text] === undefined){
- original_lines[text] = 1
- }
-
- tl = localization[text]
- if(tl !== undefined){
- translated_lines[tl] = 1
- }
-
- return tl
-}
-
-function processTextNode(node){
- text = node.textContent.trim()
-
- if(! canBeTranslated(node, text)) return
-
- tl = getTranslation(text)
- if(tl !== undefined){
- node.textContent = tl
- }
-}
-
-function processNode(node){
- if(node.nodeType == 3){
- processTextNode(node)
- return
- }
-
- if(node.title){
- tl = getTranslation(node.title)
- if(tl !== undefined){
- node.title = tl
- }
- }
-
- if(node.placeholder){
- tl = getTranslation(node.placeholder)
- if(tl !== undefined){
- node.placeholder = tl
- }
- }
-
- textNodesUnder(node).forEach(function(node){
- processTextNode(node)
- })
-}
-
-function dumpTranslations(){
- dumped = {}
- if (localization.rtl) {
- dumped.rtl = true
- }
-
- Object.keys(original_lines).forEach(function(text){
- if(dumped[text] !== undefined) return
-
- dumped[text] = localization[text] || text
- })
-
- return dumped
-}
-
-onUiUpdate(function(m){
- m.forEach(function(mutation){
- mutation.addedNodes.forEach(function(node){
- processNode(node)
- })
- });
-})
-
-
-document.addEventListener("DOMContentLoaded", function() {
- processNode(gradioApp())
-
- if (localization.rtl) { // if the language is from right to left,
- (new MutationObserver((mutations, observer) => { // wait for the style to load
- mutations.forEach(mutation => {
- mutation.addedNodes.forEach(node => {
- if (node.tagName === 'STYLE') {
- observer.disconnect();
-
- for (const x of node.sheet.rules) { // find all rtl media rules
- if (Array.from(x.media || []).includes('rtl')) {
- x.media.appendMedium('all'); // enable them
- }
- }
- }
- })
- });
- })).observe(gradioApp(), { childList: true });
- }
-})
-
-function download_localization() {
- text = JSON.stringify(dumpTranslations(), null, 4)
-
- var element = document.createElement('a');
- element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
- element.setAttribute('download', "localization.json");
- element.style.display = 'none';
- document.body.appendChild(element);
-
- element.click();
-
- document.body.removeChild(element);
-}
diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/torch_utils/ops/bias_act.h b/spaces/james-oldfield/PandA/networks/stylegan3/torch_utils/ops/bias_act.h
deleted file mode 100644
index 60b81c6058d54638a6d74a13046fa388442d767d..0000000000000000000000000000000000000000
--- a/spaces/james-oldfield/PandA/networks/stylegan3/torch_utils/ops/bias_act.h
+++ /dev/null
@@ -1,38 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-//------------------------------------------------------------------------
-// CUDA kernel parameters.
-
-struct bias_act_kernel_params
-{
- const void* x; // [sizeX]
- const void* b; // [sizeB] or NULL
- const void* xref; // [sizeX] or NULL
- const void* yref; // [sizeX] or NULL
- const void* dy; // [sizeX] or NULL
- void* y; // [sizeX]
-
- int grad;
- int act;
- float alpha;
- float gain;
- float clamp;
-
- int sizeX;
- int sizeB;
- int stepB;
- int loopX;
-};
-
-//------------------------------------------------------------------------
-// CUDA kernel selection.
-
-template <class T> void* choose_bias_act_kernel(const bias_act_kernel_params& p);
-
-//------------------------------------------------------------------------
diff --git a/spaces/jb30k/LegalWW/README.md b/spaces/jb30k/LegalWW/README.md
deleted file mode 100644
index 6278cd6dc0aa0ca2abe03df384b8214e856a4284..0000000000000000000000000000000000000000
--- a/spaces/jb30k/LegalWW/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: LegalWW
-emoji: 👀
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.28.3
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/jbilcke-hf/Panoremix/Dockerfile b/spaces/jbilcke-hf/Panoremix/Dockerfile
deleted file mode 100644
index 91319be9b3dd35d916d18fba5260f51125c46b50..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/Panoremix/Dockerfile
+++ /dev/null
@@ -1,65 +0,0 @@
-FROM node:18-alpine AS base
-
-# Install dependencies only when needed
-FROM base AS deps
-# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
-RUN apk add --no-cache libc6-compat
-WORKDIR /app
-
-# Install dependencies based on the preferred package manager
-COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
-RUN \
- if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
- elif [ -f package-lock.json ]; then npm ci; \
- elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
- else echo "Lockfile not found." && exit 1; \
- fi
-
-# Uncomment the following lines if you want to use a secret at buildtime,
-# for example to access your private npm packages
-# RUN --mount=type=secret,id=HF_EXAMPLE_SECRET,mode=0444,required=true \
-# $(cat /run/secrets/HF_EXAMPLE_SECRET)
-
-# Rebuild the source code only when needed
-FROM base AS builder
-WORKDIR /app
-COPY --from=deps /app/node_modules ./node_modules
-COPY . .
-
-# Next.js collects completely anonymous telemetry data about general usage.
-# Learn more here: https://nextjs.org/telemetry
-# Uncomment the following line in case you want to disable telemetry during the build.
-# ENV NEXT_TELEMETRY_DISABLED 1
-
-# RUN yarn build
-
-# If you use yarn, comment out this line and use the line above
-RUN npm run build
-
-# Production image, copy all the files and run next
-FROM base AS runner
-WORKDIR /app
-
-ENV NODE_ENV production
-# Uncomment the following line in case you want to disable telemetry during runtime.
-# ENV NEXT_TELEMETRY_DISABLED 1
-
-RUN addgroup --system --gid 1001 nodejs
-RUN adduser --system --uid 1001 nextjs
-
-COPY --from=builder /app/public ./public
-
-# Automatically leverage output traces to reduce image size
-# https://nextjs.org/docs/advanced-features/output-file-tracing
-COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
-COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
-COPY --from=builder --chown=nextjs:nodejs /app/.next/cache ./.next/cache
-# COPY --from=builder --chown=nextjs:nodejs /app/.next/cache/fetch-cache ./.next/cache/fetch-cache
-
-USER nextjs
-
-EXPOSE 3000
-
-ENV PORT 3000
-
-CMD ["node", "server.js"]
\ No newline at end of file
diff --git a/spaces/jbilcke-hf/media-server/scripts/channel_comedy.sh b/spaces/jbilcke-hf/media-server/scripts/channel_comedy.sh
deleted file mode 100644
index 5979d5cc8fd200af0880325bda853b337013265b..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/media-server/scripts/channel_comedy.sh
+++ /dev/null
@@ -1,13 +0,0 @@
-#!/bin/bash
-
-echo "Starting FFMPEG live stream for channel comedy"
-while true; do
- if [ -f channel_comedy.txt ] && [ -f channel_1_audio.txt ]; then
- echo "Files exist, starting stream"
- # Note: for now we also use channel 1 for audio!
- ffmpeg -y -nostdin -re -f concat -safe 0 -i channel_comedy.txt -stream_loop -1 -safe 0 -i channel_1_audio.txt -loglevel error -c:v libx264 -preset veryfast -tune zerolatency -c:a aac -ar 44100 -shortest -f flv rtmp://localhost/live/comedy
- else
- echo "Files do not exist, waiting for files"
- sleep 1 # check every second
- fi
-done
\ No newline at end of file
diff --git a/spaces/jiejiejie0420/bingo/src/app/layout.tsx b/spaces/jiejiejie0420/bingo/src/app/layout.tsx
deleted file mode 100644
index 8b5122759987177b8dc4e4356d1d06cea25c15ea..0000000000000000000000000000000000000000
--- a/spaces/jiejiejie0420/bingo/src/app/layout.tsx
+++ /dev/null
@@ -1,47 +0,0 @@
-import { Metadata } from 'next'
-import { Toaster } from 'react-hot-toast'
-import { TailwindIndicator } from '@/components/tailwind-indicator'
-import { Providers } from '@/components/providers'
-import { Header } from '@/components/header'
-
-import '@/app/globals.scss'
-
-
-export const metadata: Metadata = {
- title: {
- default: 'Bing AI Chatbot',
- template: `%s - Bing AI Chatbot`
- },
- description: 'Bing AI Chatbot Web App.',
- themeColor: [
- { media: '(prefers-color-scheme: light)', color: 'white' },
- { media: '(prefers-color-scheme: dark)', color: 'dark' }
- ],
- icons: {
- icon: '/favicon.ico',
- shortcut: '../assets/images/logo.svg',
- apple: '../assets/images/logo.svg'
- }
-}
-
-interface RootLayoutProps {
- children: React.ReactNode
-}
-
-export default function RootLayout({ children }: RootLayoutProps) {
- return (
- // markup below reconstructed from the imports above; the original element attributes were not preserved
- <html>
- <head />
- <body>
- <Providers>
- {/* @ts-ignore */}
- <Header />
- <main>{children}</main>
- <Toaster />
- <TailwindIndicator />
- </Providers>
- </body>
- </html>
- )
-}
diff --git a/spaces/jmourad/TXT2IMG-MJ-Desc/README.md b/spaces/jmourad/TXT2IMG-MJ-Desc/README.md
deleted file mode 100644
index 9ee198f4c55775e86330e8c548f510e59ad77688..0000000000000000000000000000000000000000
--- a/spaces/jmourad/TXT2IMG-MJ-Desc/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: TXT2IMG MJ Desc
-emoji: 🔥
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: artistic-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/_codecs/symbol.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/_codecs/symbol.py
deleted file mode 100644
index 4c0d680ff242a475d3f48916ad6b7998b30ff76a..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/_codecs/symbol.py
+++ /dev/null
@@ -1,260 +0,0 @@
-# manually generated from https://www.unicode.org/Public/MAPPINGS/VENDORS/ADOBE/symbol.txt
-_symbol_encoding = [
- "\u0000",
- "\u0001",
- "\u0002",
- "\u0003",
- "\u0004",
- "\u0005",
- "\u0006",
- "\u0007",
- "\u0008",
- "\u0009",
- "\u000A",
- "\u000B",
- "\u000C",
- "\u000D",
- "\u000E",
- "\u000F",
- "\u0010",
- "\u0011",
- "\u0012",
- "\u0013",
- "\u0014",
- "\u0015",
- "\u0016",
- "\u0017",
- "\u0018",
- "\u0019",
- "\u001A",
- "\u001B",
- "\u001C",
- "\u001D",
- "\u001E",
- "\u001F",
- "\u0020",
- "\u0021",
- "\u2200",
- "\u0023",
- "\u2203",
- "\u0025",
- "\u0026",
- "\u220B",
- "\u0028",
- "\u0029",
- "\u2217",
- "\u002B",
- "\u002C",
- "\u2212",
- "\u002E",
- "\u002F",
- "\u0030",
- "\u0031",
- "\u0032",
- "\u0033",
- "\u0034",
- "\u0035",
- "\u0036",
- "\u0037",
- "\u0038",
- "\u0039",
- "\u003A",
- "\u003B",
- "\u003C",
- "\u003D",
- "\u003E",
- "\u003F",
- "\u2245",
- "\u0391",
- "\u0392",
- "\u03A7",
- "\u0394",
- "\u0395",
- "\u03A6",
- "\u0393",
- "\u0397",
- "\u0399",
- "\u03D1",
- "\u039A",
- "\u039B",
- "\u039C",
- "\u039D",
- "\u039F",
- "\u03A0",
- "\u0398",
- "\u03A1",
- "\u03A3",
- "\u03A4",
- "\u03A5",
- "\u03C2",
- "\u03A9",
- "\u039E",
- "\u03A8",
- "\u0396",
- "\u005B",
- "\u2234",
- "\u005D",
- "\u22A5",
- "\u005F",
- "\uF8E5",
- "\u03B1",
- "\u03B2",
- "\u03C7",
- "\u03B4",
- "\u03B5",
- "\u03C6",
- "\u03B3",
- "\u03B7",
- "\u03B9",
- "\u03D5",
- "\u03BA",
- "\u03BB",
- "\u00B5",
- "\u03BD",
- "\u03BF",
- "\u03C0",
- "\u03B8",
- "\u03C1",
- "\u03C3",
- "\u03C4",
- "\u03C5",
- "\u03D6",
- "\u03C9",
- "\u03BE",
- "\u03C8",
- "\u03B6",
- "\u007B",
- "\u007C",
- "\u007D",
- "\u223C",
- "\u007F",
- "\u0080",
- "\u0081",
- "\u0082",
- "\u0083",
- "\u0084",
- "\u0085",
- "\u0086",
- "\u0087",
- "\u0088",
- "\u0089",
- "\u008A",
- "\u008B",
- "\u008C",
- "\u008D",
- "\u008E",
- "\u008F",
- "\u0090",
- "\u0091",
- "\u0092",
- "\u0093",
- "\u0094",
- "\u0095",
- "\u0096",
- "\u0097",
- "\u0098",
- "\u0099",
- "\u009A",
- "\u009B",
- "\u009C",
- "\u009D",
- "\u009E",
- "\u009F",
- "\u20AC",
- "\u03D2",
- "\u2032",
- "\u2264",
- "\u2044",
- "\u221E",
- "\u0192",
- "\u2663",
- "\u2666",
- "\u2665",
- "\u2660",
- "\u2194",
- "\u2190",
- "\u2191",
- "\u2192",
- "\u2193",
- "\u00B0",
- "\u00B1",
- "\u2033",
- "\u2265",
- "\u00D7",
- "\u221D",
- "\u2202",
- "\u2022",
- "\u00F7",
- "\u2260",
- "\u2261",
- "\u2248",
- "\u2026",
- "\uF8E6",
- "\uF8E7",
- "\u21B5",
- "\u2135",
- "\u2111",
- "\u211C",
- "\u2118",
- "\u2297",
- "\u2295",
- "\u2205",
- "\u2229",
- "\u222A",
- "\u2283",
- "\u2287",
- "\u2284",
- "\u2282",
- "\u2286",
- "\u2208",
- "\u2209",
- "\u2220",
- "\u2207",
- "\uF6DA",
- "\uF6D9",
- "\uF6DB",
- "\u220F",
- "\u221A",
- "\u22C5",
- "\u00AC",
- "\u2227",
- "\u2228",
- "\u21D4",
- "\u21D0",
- "\u21D1",
- "\u21D2",
- "\u21D3",
- "\u25CA",
- "\u2329",
- "\uF8E8",
- "\uF8E9",
- "\uF8EA",
- "\u2211",
- "\uF8EB",
- "\uF8EC",
- "\uF8ED",
- "\uF8EE",
- "\uF8EF",
- "\uF8F0",
- "\uF8F1",
- "\uF8F2",
- "\uF8F3",
- "\uF8F4",
- "\u00F0",
- "\u232A",
- "\u222B",
- "\u2320",
- "\uF8F5",
- "\u2321",
- "\uF8F6",
- "\uF8F7",
- "\uF8F8",
- "\uF8F9",
- "\uF8FA",
- "\uF8FB",
- "\uF8FC",
- "\uF8FD",
- "\uF8FE",
- "\u00FF",
-]
-assert len(_symbol_encoding) == 256
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/sbixGlyph.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/sbixGlyph.py
deleted file mode 100644
index fd687a18808b6b2655951f9a6934916d7bafbc71..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/sbixGlyph.py
+++ /dev/null
@@ -1,145 +0,0 @@
-from fontTools.misc import sstruct
-from fontTools.misc.textTools import readHex, safeEval
-import struct
-
-
-sbixGlyphHeaderFormat = """
- >
- originOffsetX: h # The x-value of the point in the glyph relative to its
- # lower-left corner which corresponds to the origin of
- # the glyph on the screen, that is the point on the
- # baseline at the left edge of the glyph.
- originOffsetY: h # The y-value of the point in the glyph relative to its
- # lower-left corner which corresponds to the origin of
- # the glyph on the screen, that is the point on the
- # baseline at the left edge of the glyph.
- graphicType: 4s # e.g. "png "
-"""
-
-sbixGlyphHeaderFormatSize = sstruct.calcsize(sbixGlyphHeaderFormat)
-
-
-class Glyph(object):
- def __init__(
- self,
- glyphName=None,
- referenceGlyphName=None,
- originOffsetX=0,
- originOffsetY=0,
- graphicType=None,
- imageData=None,
- rawdata=None,
- gid=0,
- ):
- self.gid = gid
- self.glyphName = glyphName
- self.referenceGlyphName = referenceGlyphName
- self.originOffsetX = originOffsetX
- self.originOffsetY = originOffsetY
- self.rawdata = rawdata
- self.graphicType = graphicType
- self.imageData = imageData
-
- # fix self.graphicType if it is null terminated or too short
- if self.graphicType is not None:
- if self.graphicType[-1] == "\0":
- self.graphicType = self.graphicType[:-1]
- if len(self.graphicType) > 4:
- from fontTools import ttLib
-
- raise ttLib.TTLibError(
- "Glyph.graphicType must not be longer than 4 characters."
- )
- elif len(self.graphicType) < 4:
- # pad with spaces
- self.graphicType += " "[: (4 - len(self.graphicType))]
-
- def decompile(self, ttFont):
- self.glyphName = ttFont.getGlyphName(self.gid)
- if self.rawdata is None:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("No table data to decompile")
- if len(self.rawdata) > 0:
- if len(self.rawdata) < sbixGlyphHeaderFormatSize:
- from fontTools import ttLib
-
- # print "Glyph %i header too short: Expected %x, got %x." % (self.gid, sbixGlyphHeaderFormatSize, len(self.rawdata))
- raise ttLib.TTLibError("Glyph header too short.")
-
- sstruct.unpack(
- sbixGlyphHeaderFormat, self.rawdata[:sbixGlyphHeaderFormatSize], self
- )
-
- if self.graphicType == "dupe":
- # this glyph is a reference to another glyph's image data
- (gid,) = struct.unpack(">H", self.rawdata[sbixGlyphHeaderFormatSize:])
- self.referenceGlyphName = ttFont.getGlyphName(gid)
- else:
- self.imageData = self.rawdata[sbixGlyphHeaderFormatSize:]
- self.referenceGlyphName = None
- # clean up
- del self.rawdata
- del self.gid
-
- def compile(self, ttFont):
- if self.glyphName is None:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("Can't compile Glyph without glyph name")
- # TODO: if ttFont has no maxp, cmap etc., ignore glyph names and compile by index?
- # (needed if you just want to compile the sbix table on its own)
- self.gid = struct.pack(">H", ttFont.getGlyphID(self.glyphName))
- if self.graphicType is None:
- rawdata = b""
- else:
- rawdata = sstruct.pack(sbixGlyphHeaderFormat, self)
- if self.graphicType == "dupe":
- rawdata += struct.pack(">H", ttFont.getGlyphID(self.referenceGlyphName))
- else:
- assert self.imageData is not None
- rawdata += self.imageData
- self.rawdata = rawdata
-
- def toXML(self, xmlWriter, ttFont):
- if self.graphicType is None:
- # TODO: ignore empty glyphs?
- # a glyph data entry is required for each glyph,
- # but empty ones can be calculated at compile time
- xmlWriter.simpletag("glyph", name=self.glyphName)
- xmlWriter.newline()
- return
- xmlWriter.begintag(
- "glyph",
- graphicType=self.graphicType,
- name=self.glyphName,
- originOffsetX=self.originOffsetX,
- originOffsetY=self.originOffsetY,
- )
- xmlWriter.newline()
- if self.graphicType == "dupe":
- # graphicType == "dupe" is a reference to another glyph id.
- xmlWriter.simpletag("ref", glyphname=self.referenceGlyphName)
- else:
- xmlWriter.begintag("hexdata")
- xmlWriter.newline()
- xmlWriter.dumphex(self.imageData)
- xmlWriter.endtag("hexdata")
- xmlWriter.newline()
- xmlWriter.endtag("glyph")
- xmlWriter.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "ref":
- # glyph is a "dupe", i.e. a reference to another glyph's image data.
- # in this case imageData contains the glyph id of the reference glyph
- # get glyph id from glyphname
- glyphname = safeEval("'''" + attrs["glyphname"] + "'''")
- self.imageData = struct.pack(">H", ttFont.getGlyphID(glyphname))
- self.referenceGlyphName = glyphname
- elif name == "hexdata":
- self.imageData = readHex(content)
- else:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("can't handle '%s' element" % name)
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/exceptions.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/exceptions.py
deleted file mode 100644
index 2d6e1a44b6a1667d1c302869ff2a332634fda47e..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/exceptions.py
+++ /dev/null
@@ -1,21 +0,0 @@
-"""
-fsspec user-defined exception classes
-"""
-import asyncio
-
-
-class BlocksizeMismatchError(ValueError):
- """
- Raised when a cached file is opened with a different blocksize than it was
- written with
- """
-
- ...
-
-
-class FSTimeoutError(asyncio.TimeoutError):
- """
- Raised when an fsspec function call times out
- """
-
- ...
diff --git a/spaces/jonathang/dog_breed_v2/app.py b/spaces/jonathang/dog_breed_v2/app.py
deleted file mode 100644
index 1abc909f5e1d5034630137b163c550721fea9ba4..0000000000000000000000000000000000000000
--- a/spaces/jonathang/dog_breed_v2/app.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import gradio as gr
-import timm
-from fastai.vision.all import *
-import skimage
-from huggingface_hub import from_pretrained_fastai
-
-
-learn = from_pretrained_fastai('jonathang/dog_breed')
-
-labels = learn.dls.vocab
-def predict(img):
- img = PILImage.create(img)
- pred,pred_idx,probs = learn.predict(img)
- return {labels[i]: float(probs[i]) for i in range(len(labels))}
-
-title = "Doge Breed Classifier"
-description = "A dog breed classifier trained on duckduckgo images with fastai."
-enable_queue=True
-
-gr.Interface(
- fn=predict,
- inputs=gr.inputs.Image(shape=(512, 512)),
- outputs=gr.outputs.Label(num_top_classes=5),
- title=title,
- description=description,
- enable_queue=enable_queue,
- live=True,
-).launch()
\ No newline at end of file
diff --git a/spaces/jpfearnworks/ai_agents/modules/reasoning/chain_of_thought.py b/spaces/jpfearnworks/ai_agents/modules/reasoning/chain_of_thought.py
deleted file mode 100644
index 81972fcf77ea90128b727ce0e2a6e698202fea94..0000000000000000000000000000000000000000
--- a/spaces/jpfearnworks/ai_agents/modules/reasoning/chain_of_thought.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from langchain import PromptTemplate, LLMChain
-from .reasoning_strategy import ReasoningStrategy, ReasoningConfig
-from typing import Callable
-import pprint
-
-class ChainOfThoughtStrategy(ReasoningStrategy):
- def __init__(self, config: ReasoningConfig, display: Callable):
- super().__init__(config=config, display=display)
- print("Creating Reasoning Router with config: ")
- pprint.pprint(vars(config))
-
- def run(self, question):
- print('Using Chain of Thought')
- self.display("Using 'Chain of Thought'")
-
- template_cot = """You are asked a question and rather than simply guessing the right answer, break down the solution into a series of steps
- The question is {question}
-
- Write out your step by step reasoning and after considering all of the facts and applying this reasoning write out your final answer
- """
- prompt = PromptTemplate(template=template_cot, input_variables=["question"])
- llm_chain = LLMChain(prompt=prompt, llm=self.llm)
- response_cot = llm_chain.run(question)
- print(response_cot)
- self.display(response_cot)
- return response_cot
-
-def get_cot_confg(temperature: float = 0.7) -> ReasoningConfig:
- usage = """
- This problem is simple and the solution may be obtained by focusing on generating a coherent series
- of reasoning steps that lead to the final answer. The approach provides interpretability, decomposes
- multi-step problems into intermediate steps, and allows for additional computation allocation
- """
- return ReasoningConfig(usage=usage, temperature=temperature)
\ No newline at end of file
diff --git a/spaces/juancopi81/whisper-demo-es-medium/youtubeaudioextractor.py b/spaces/juancopi81/whisper-demo-es-medium/youtubeaudioextractor.py
deleted file mode 100644
index 5fcb4cbe2671d8c4ed8bf4fd41cf04d3cf60d360..0000000000000000000000000000000000000000
--- a/spaces/juancopi81/whisper-demo-es-medium/youtubeaudioextractor.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from abc import ABC, abstractmethod
-
-import pytube as pt
-
-class YouTubeAudioExtractor(ABC):
-
- @abstractmethod
- def extract(self, url: str, save_path: str) -> str:
- pass
-
-class PytubeAudioExtractor(YouTubeAudioExtractor):
-
- def __init__(self,
- only_audio: bool = True) -> None:
- self.only_audio = only_audio
-
- def extract(self,
- url: str,
- save_path: str = "yt_audio.mp3") -> str:
- yt = pt.YouTube(url)
- stream = yt.streams.filter(only_audio=self.only_audio)[0]
- stream.download(filename=save_path)
- return "yt_audio.mp3"
\ No newline at end of file
diff --git a/spaces/kevinwang676/Bark-with-Voice-Cloning/cloning/clonevoice.py b/spaces/kevinwang676/Bark-with-Voice-Cloning/cloning/clonevoice.py
deleted file mode 100644
index a59b0fc561040572400af2771cac8dac75e8d13f..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/Bark-with-Voice-Cloning/cloning/clonevoice.py
+++ /dev/null
@@ -1,68 +0,0 @@
-from bark.generation import load_codec_model, generate_text_semantic, grab_best_device
-from encodec.utils import convert_audio
-from bark.hubert.hubert_manager import HuBERTManager
-from bark.hubert.pre_kmeans_hubert import CustomHubert
-from bark.hubert.customtokenizer import CustomTokenizer
-
-import torchaudio
-import torch
-import os
-import gradio
-
-
-def clone_voice(audio_filepath, dest_filename, progress=gradio.Progress(track_tqdm=True)):
- # if len(text) < 1:
- # raise gradio.Error('No transcription text entered!')
-
- use_gpu = False # not os.environ.get("BARK_FORCE_CPU", False)
- progress(0, desc="Loading Codec")
- model = load_codec_model(use_gpu=use_gpu)
-
- # From https://github.com/gitmylo/bark-voice-cloning-HuBERT-quantizer
- hubert_manager = HuBERTManager()
- hubert_manager.make_sure_hubert_installed()
- hubert_manager.make_sure_tokenizer_installed()
-
- # From https://github.com/gitmylo/bark-voice-cloning-HuBERT-quantizer
- # Load HuBERT for semantic tokens
-
- # Load the HuBERT model
- device = grab_best_device(use_gpu)
- hubert_model = CustomHubert(checkpoint_path='./models/hubert/hubert.pt').to(device)
-
- # Load the CustomTokenizer model
- tokenizer = CustomTokenizer.load_from_checkpoint('./models/hubert/en_tokenizer.pth').to(device) # change to the correct path
-
- progress(0.25, desc="Converting WAV")
-
- # Load and pre-process the audio waveform
- wav, sr = torchaudio.load(audio_filepath)
- if wav.shape[0] == 2: # Stereo to mono if needed
- wav = wav.mean(0, keepdim=True)
-
- wav = convert_audio(wav, sr, model.sample_rate, model.channels)
- wav = wav.to(device)
- progress(0.5, desc="Extracting codes")
-
- semantic_vectors = hubert_model.forward(wav, input_sample_hz=model.sample_rate)
- semantic_tokens = tokenizer.get_token(semantic_vectors)
-
- # Extract discrete codes from EnCodec
- with torch.no_grad():
- encoded_frames = model.encode(wav.unsqueeze(0))
- codes = torch.cat([encoded[0] for encoded in encoded_frames], dim=-1).squeeze() # [n_q, T]
-
- # get seconds of audio
- # seconds = wav.shape[-1] / model.sample_rate
- # generate semantic tokens
- # semantic_tokens = generate_text_semantic(text, max_gen_duration_s=seconds, top_k=50, top_p=.95, temp=0.7)
-
- # move codes to cpu
- codes = codes.cpu().numpy()
- # move semantic tokens to cpu
- semantic_tokens = semantic_tokens.cpu().numpy()
-
- import numpy as np
- output_path = dest_filename + '.npz'
- np.savez(output_path, fine_prompt=codes, coarse_prompt=codes[:2, :], semantic_prompt=semantic_tokens)
- return ["Finished", output_path]
\ No newline at end of file
diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/README.md b/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/README.md
deleted file mode 100644
index 2ee63a861229b68873561fa39bfa7c9a8b53b947..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/README.md
+++ /dev/null
@@ -1,164 +0,0 @@
-# Distributed Arcface Training in Pytorch
-
-This is a deep learning library that makes face recognition efficient and effective, and that can train tens of millions of
-identities on a single server.
-
-## Requirements
-
-- Install [pytorch](http://pytorch.org) (torch>=1.6.0); see our doc [install.md](docs/install.md).
-- `pip install -r requirements.txt`.
-- Download the dataset
- from [https://github.com/deepinsight/insightface/tree/master/recognition/_datasets_](https://github.com/deepinsight/insightface/tree/master/recognition/_datasets_)
- .
-
-## How to Training
-
-To train a model, run `train.py` with the path to the configs:
-
-### 1. Single node, 8 GPUs:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r50
-```
-
-### 2. Multiple nodes, each node 8 GPUs:
-
-Node 0:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr="ip1" --master_port=1234 train.py train.py configs/ms1mv3_r50
-```
-
-Node 1:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr="ip1" --master_port=1234 train.py train.py configs/ms1mv3_r50
-```
-
-### 3.Training resnet2060 with 8 GPUs:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r2060.py
-```
-
-## Model Zoo
-
-- The models are available for non-commercial research purposes only.
-- All models can be found here.
-- [Baidu Yun Pan](https://pan.baidu.com/s/1CL-l4zWqsI1oDuEEYVhj-g): e8pw
-- [onedrive](https://1drv.ms/u/s!AswpsDO2toNKq0lWY69vN58GR6mw?e=p9Ov5d)
-
-### Performance on [**ICCV2021-MFR**](http://iccv21-mfr.com/)
-
-The ICCV2021-MFR test set consists of non-celebrities, so we can ensure that it has very little overlap with publicly available face
-recognition training sets such as MS1M and CASIA, which are mostly collected from online celebrities.
-As a result, we can evaluate the performance of different algorithms fairly.
-
-For the **ICCV2021-MFR-ALL** set, TAR is measured on an all-to-all 1:1 protocol, with FAR less than 0.000001 (1e-6). The
-globalised multi-racial test set contains 242,143 identities and 1,624,305 images.
-
-For the **ICCV2021-MFR-MASK** set, TAR is measured on a mask-to-nonmask 1:1 protocol, with FAR less than 0.0001 (1e-4).
-The mask test set contains 6,964 identities, 6,964 masked images and 13,928 non-masked images.
-There are 13,928 positive pairs and 96,983,824 negative pairs in total.
-
-| Datasets | backbone | Training throughput | Size / MB | **ICCV2021-MFR-MASK** | **ICCV2021-MFR-ALL** |
-| :---: | :--- | :--- | :--- |:--- |:--- |
-| MS1MV3 | r18 | - | 91 | **47.85** | **68.33** |
-| Glint360k | r18 | 8536 | 91 | **53.32** | **72.07** |
-| MS1MV3 | r34 | - | 130 | **58.72** | **77.36** |
-| Glint360k | r34 | 6344 | 130 | **65.10** | **83.02** |
-| MS1MV3 | r50 | 5500 | 166 | **63.85** | **80.53** |
-| Glint360k | r50 | 5136 | 166 | **70.23** | **87.08** |
-| MS1MV3 | r100 | - | 248 | **69.09** | **84.31** |
-| Glint360k | r100 | 3332 | 248 | **75.57** | **90.66** |
-| MS1MV3 | mobilefacenet | 12185 | 7.8 | **41.52** | **65.26** |
-| Glint360k | mobilefacenet | 11197 | 7.8 | **44.52** | **66.48** |
-
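-The TAR@FAR numbers in the table above can be estimated from raw verification scores with a simple thresholding sketch.
-This is only an illustrative sketch, not the official MFR evaluation code; `genuine_scores` and `impostor_scores` are assumed
-to be arrays of cosine similarities for matching and non-matching pairs.
-
-```python
-import numpy as np
-
-def tar_at_far(genuine_scores, impostor_scores, far_target=1e-6):
-    """Estimate TAR at a fixed FAR by thresholding the impostor score distribution."""
-    impostors = np.sort(np.asarray(impostor_scores))[::-1]   # impostor scores, descending
-    n_allowed = int(far_target * len(impostors))             # impostor pairs allowed above the threshold
-    if n_allowed == 0:
-        threshold = impostors[0] + 1e-12                     # stricter than the best impostor score
-    else:
-        threshold = impostors[n_allowed - 1]
-    tar = float(np.mean(np.asarray(genuine_scores) >= threshold))
-    return tar, threshold
-
-# Example: tar, thr = tar_at_far(genuine_scores, impostor_scores, far_target=1e-6)
-```
-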
-### Performance on IJB-C and Verification Datasets
-
-| Datasets | backbone | IJBC(1e-05) | IJBC(1e-04) | agedb30 | cfp_fp | lfw | log |
-| :---: | :--- | :--- | :--- | :--- |:--- |:--- |:--- |
-| MS1MV3 | r18 | 92.07 | 94.66 | 97.77 | 97.73 | 99.77 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r18_fp16/training.log)|
-| MS1MV3 | r34 | 94.10 | 95.90 | 98.10 | 98.67 | 99.80 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r34_fp16/training.log)|
-| MS1MV3 | r50 | 94.79 | 96.46 | 98.35 | 98.96 | 99.83 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r50_fp16/training.log)|
-| MS1MV3 | r100 | 95.31 | 96.81 | 98.48 | 99.06 | 99.85 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r100_fp16/training.log)|
-| MS1MV3 | **r2060**| 95.34 | 97.11 | 98.67 | 99.24 | 99.87 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r2060_fp16/training.log)|
-| Glint360k |r18-0.1 | 93.16 | 95.33 | 97.72 | 97.73 | 99.77 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r18_fp16_0.1/training.log)|
-| Glint360k |r34-0.1 | 95.16 | 96.56 | 98.33 | 98.78 | 99.82 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r34_fp16_0.1/training.log)|
-| Glint360k |r50-0.1 | 95.61 | 96.97 | 98.38 | 99.20 | 99.83 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r50_fp16_0.1/training.log)|
-| Glint360k |r100-0.1 | 95.88 | 97.32 | 98.48 | 99.29 | 99.82 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r100_fp16_0.1/training.log)|
-
-[comment]: <> (More details see [model.md](docs/modelzoo.md) in docs.)
-
-
-## [Speed Benchmark](docs/speed_benchmark.md)
-
-**Arcface Torch** can train large-scale face recognition training sets efficiently and quickly. When the number of
-classes in the training set is greater than 300K and the training is sufficient, the partial FC sampling strategy reaches the
-same accuracy with several times faster training and a smaller GPU memory footprint.
-Partial FC is a sparse variant of the model-parallel architecture for large-scale face recognition. Partial FC uses a
-sparse softmax, where each batch dynamically samples a subset of class centers for training. In each iteration, only a
-sparse part of the parameters is updated, which saves a large amount of GPU memory and computation. With Partial FC,
-we can scale the training set to 29 million identities, the largest to date. Partial FC also supports multi-machine distributed
-training and mixed-precision training.
-
-
-
-For more details, see
-[speed_benchmark.md](docs/speed_benchmark.md) in docs.
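-
-The snippet below is a minimal, single-GPU sketch of that sampling idea, not the repository's `partial_fc.py`: for each batch it
-keeps the class centers that appear in the batch plus a random subset of negative centers, and computes logits only against that
-subset. The function and variable names are illustrative.
-
-```python
-import torch
-import torch.nn.functional as F
-
-def sample_partial_fc_logits(features, labels, weight, sample_rate=0.1):
-    """features: (B, D) normalised embeddings, labels: (B,) class ids, weight: (C, D) class centers."""
-    num_classes = weight.shape[0]
-    num_sample = max(int(num_classes * sample_rate), labels.numel())
-
-    positive = torch.unique(labels)                                   # centers that must be kept
-    mask = torch.ones(num_classes, dtype=torch.bool, device=weight.device)
-    mask[positive] = False                                            # exclude positives from the negative pool
-    perm = torch.randperm(num_classes, device=weight.device)
-    negatives = perm[mask[perm]][: num_sample - positive.numel()]     # random negative centers
-    index = torch.cat([positive, negatives])                          # sampled class ids
-
-    logits = features @ F.normalize(weight[index], dim=1).t()         # (B, len(index)) instead of (B, C)
-
-    # remap global labels to their positions inside the sampled subset
-    remap = torch.full((num_classes,), -1, dtype=torch.long, device=weight.device)
-    remap[index] = torch.arange(index.numel(), device=weight.device)
-    return logits, remap[labels]
-```
-
-The margin-based loss (ArcFace / CosFace) and the cross-entropy are then applied to these reduced logits, so the memory and
-compute of the classification layer scale with the sampled subset rather than with the full number of identities.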
-
-### 1. Training speed of different parallel methods (samples / second), Tesla V100 32GB * 8. (Larger is better)
-
-`-` means training failed because of gpu memory limitations.
-
-| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 |
-| :--- | :--- | :--- | :--- |
-|125000 | 4681 | 4824 | 5004 |
-|1400000 | **1672** | 3043 | 4738 |
-|5500000 | **-** | **1389** | 3975 |
-|8000000 | **-** | **-** | 3565 |
-|16000000 | **-** | **-** | 2679 |
-|29000000 | **-** | **-** | **1855** |
-
-### 2. GPU memory cost of different parallel methods (MB per GPU), Tesla V100 32GB * 8. (Smaller is better)
-
-| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 |
-| :--- | :--- | :--- | :--- |
-|125000 | 7358 | 5306 | 4868 |
-|1400000 | 32252 | 11178 | 6056 |
-|5500000 | **-** | 32188 | 9854 |
-|8000000 | **-** | **-** | 12310 |
-|16000000 | **-** | **-** | 19950 |
-|29000000 | **-** | **-** | 32324 |
-
-## Evaluation ICCV2021-MFR and IJB-C
-
-For more details, see [eval.md](docs/eval.md) in docs.
-
-## Test
-
-We tested many versions of PyTorch. Please create an issue if you are having trouble.
-
-- [x] torch 1.6.0
-- [x] torch 1.7.1
-- [x] torch 1.8.0
-- [x] torch 1.9.0
-
-## Citation
-
-```
-@inproceedings{deng2019arcface,
- title={Arcface: Additive angular margin loss for deep face recognition},
- author={Deng, Jiankang and Guo, Jia and Xue, Niannan and Zafeiriou, Stefanos},
- booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
- pages={4690--4699},
- year={2019}
-}
-@inproceedings{an2020partical_fc,
- title={Partial FC: Training 10 Million Identities on a Single Machine},
- author={An, Xiang and Zhu, Xuhan and Xiao, Yang and Wu, Lan and Zhang, Ming and Gao, Yuan and Qin, Bin and
- Zhang, Debing and Fu Ying},
- booktitle={Arxiv 2010.05222},
- year={2020}
-}
-```
diff --git a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/uix/__init__.py b/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/uix/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/king007/parrot-t5-test/README.md b/spaces/king007/parrot-t5-test/README.md
deleted file mode 100644
index 507b7963f5a1d4d35c6beaa4d5430220ba9776ea..0000000000000000000000000000000000000000
--- a/spaces/king007/parrot-t5-test/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Parrot Paraphraser
-emoji: 🌖
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.0.14
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: trysem/parrot-paraphraser
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/ppg_extractor/encoder/swish.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/ppg_extractor/encoder/swish.py
deleted file mode 100644
index c53a7a98bfc6d983c3a308c4b40f81e315aa7875..0000000000000000000000000000000000000000
--- a/spaces/kira4424/Tacotron-zero-short-voice-clone/ppg_extractor/encoder/swish.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-# Copyright 2020 Johns Hopkins University (Shinji Watanabe)
-# Northwestern Polytechnical University (Pengcheng Guo)
-# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-
-"""Swish() activation function for Conformer."""
-
-import torch
-
-
-class Swish(torch.nn.Module):
- """Construct an Swish object."""
-
- def forward(self, x):
- """Return Swich activation function."""
- return x * torch.sigmoid(x)
diff --git a/spaces/kokofixcomputers/chat-ui/src/lib/utils/concatUint8Arrays.ts b/spaces/kokofixcomputers/chat-ui/src/lib/utils/concatUint8Arrays.ts
deleted file mode 100644
index e53396eca7e3dee20a543fb6ac28ecf48c7e3965..0000000000000000000000000000000000000000
--- a/spaces/kokofixcomputers/chat-ui/src/lib/utils/concatUint8Arrays.ts
+++ /dev/null
@@ -1,12 +0,0 @@
-import { sum } from "./sum";
-
-export function concatUint8Arrays(arrays: Uint8Array[]): Uint8Array {
- const totalLength = sum(arrays.map((a) => a.length));
- const result = new Uint8Array(totalLength);
- let offset = 0;
- for (const array of arrays) {
- result.set(array, offset);
- offset += array.length;
- }
- return result;
-}
diff --git a/spaces/kornia/kornia-resize-antialias/README.md b/spaces/kornia/kornia-resize-antialias/README.md
deleted file mode 100644
index bbdf96959f3f4429d30c136f8f6c2a6eed4df015..0000000000000000000000000000000000000000
--- a/spaces/kornia/kornia-resize-antialias/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Kornia Reshaping Antialias
-emoji: 🌍
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/kunwarsaaim/Self-Debiasing/generation.py b/spaces/kunwarsaaim/Self-Debiasing/generation.py
deleted file mode 100644
index cd3584aab1ff34b1b3f811ed19f08db66f94ee9b..0000000000000000000000000000000000000000
--- a/spaces/kunwarsaaim/Self-Debiasing/generation.py
+++ /dev/null
@@ -1,252 +0,0 @@
-from typing import List, Optional, Union, Tuple
-
-import torch
-import torch.nn.functional as F
-from transformers import GPT2LMHeadModel, LogitsProcessorList, LogitsProcessor, PreTrainedTokenizer
-from transformers.generation_utils import GenerationMixin, SampleOutput, SampleEncoderDecoderOutput, SampleDecoderOnlyOutput
-
-
-class SelfDebiasingLogitsProcessor(LogitsProcessor):
- """This class represents a logits processor that applies self-debiasing."""
-
- def __init__(self, num_debiasing_prefixes: int, decay_constant: float = 50, epsilon: float = 0.01, debug: bool = False,
- tokenizer: Optional[PreTrainedTokenizer] = None):
- """
- :param num_debiasing_prefixes: the number of debiasing prefixes used
- :param decay_constant: the decay constant (lambda in the paper)
- :param epsilon: the minimum factor by which each probability is multiplied
- :param debug: whether to print additional debugging output
- :param tokenizer: a tokenizer used to print debugging output
- """
- assert not debug or tokenizer, "If debug=True, a tokenizer must be passed to SelfDebiasingLogitsProcessor()"
- self.num_debiasing_prefixes = num_debiasing_prefixes
- self.decay_constant = decay_constant
- self.epsilon = epsilon
- self.debug = debug
- self.tokenizer = tokenizer
-
- def __call__(self, input_ids: torch.LongTensor,scores: torch.FloatTensor) -> torch.FloatTensor:
- batch_size = scores.shape[0] // (1 + self.num_debiasing_prefixes)
- regular_sentence_indices = range(batch_size)
- for regular_sentence_idx in regular_sentence_indices:
- bias_indices = self._get_bias_indices(regular_sentence_idx, batch_size)
- if bias_indices:
- self._debias_scores(scores, regular_sentence_idx, bias_indices)
- return scores
-
- def _get_bias_indices(self, regular_sentence_idx: int, batch_size: int) -> List[int]:
- """Returns the indices of all self-debiasing inputs for a regular input"""
- return [regular_sentence_idx + (prefix_idx + 1) * batch_size for prefix_idx in range(self.num_debiasing_prefixes)]
-
- def _debias_scores(self, scores: torch.FloatTensor, regular_sent_idx: int, bias_indices: List[int]) -> None:
- """Partially debiases the given scores considering a single sentence and the corresponding self-debiasing inputs"""
- logits_biased = [scores[bias_idx] for bias_idx in bias_indices]
-
- mask = self._generate_decay_mask(scores[regular_sent_idx], logits_biased)
- scores[regular_sent_idx] = torch.log(self._apply_decay_mask(scores[regular_sent_idx], mask))
-
- for debiasing_sent_idx in bias_indices:
- scores[debiasing_sent_idx] = scores[regular_sent_idx]
-
- def _apply_decay_mask(self, logits: torch.Tensor, decay_mask: torch.Tensor) -> torch.Tensor:
- """Applies exponential decay to a tensor of logits"""
- probabilities = logits.softmax(dim=-1)
- decay_mask = torch.exp(- decay_mask * self.decay_constant)
- decay_mask = torch.max(decay_mask, torch.tensor([self.epsilon], device=decay_mask.device))
- probabilities = probabilities * decay_mask
- probabilities = probabilities / probabilities.sum(dim=-1)
- return probabilities
-
- def _generate_decay_mask(self, logits_regular: torch.FloatTensor, logits_biased_list: List[torch.FloatTensor]) -> torch.Tensor:
- """Computes the alpha values (see paper) for each token and stores them in a mask tensor"""
- p_regular = logits_regular.softmax(dim=-1)
- p_biased = None
-
- for logits_biased in logits_biased_list:
- if p_biased is None:
- p_biased = logits_biased.softmax(dim=-1)
- else:
- p_biased = torch.max(p_biased, logits_biased.softmax(dim=-1))
-
- if self.debug:
- print(f'== Before Debiasing ==\n'
- f'Top 5 predictions (regular): {self._get_most_likely_tokens(p_regular, k=5)}\n'
- f'Top 5 predictions (biased): {self._get_most_likely_tokens(p_biased, k=5)}')
-
- mask = torch.max(p_biased - p_regular, torch.tensor([0.], device=p_regular.device))
-
- if self.debug:
- p_regular = self._apply_decay_mask(logits_regular, mask)
- print(f'== After Debiasing ==\n'
- f'Top 5 predictions (regular): {self._get_most_likely_tokens(p_regular, k=5)}')
-
- return mask
-
- def _get_most_likely_tokens(self, probabilities_tensor: torch.Tensor, k: int) -> List[Tuple[str, float]]:
- """Returns the most likely tokens according to a tensor of probabilities"""
- assert len(probabilities_tensor.shape) == 1
- values, indices = torch.topk(probabilities_tensor, k=k, dim=-1)
- tokens = self.tokenizer.convert_ids_to_tokens(indices)
- return list(zip(tokens, [pv.item() for pv in values]))
-
-
-class SelfDebiasingGPT2LMHeadModel(GPT2LMHeadModel, GenerationMixin):
- """
- This class represents a regular GPT2LMHeadModel that additionally has the capacity to perform self-debiasing. For self-debiasing, the
- init_logits_processor function must be called. Otherwise, this model just performs regular language modeling.
- """
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.logits_processor = None # type: Optional[SelfDebiasingLogitsProcessor]
-
- def init_logits_processor(self, *args, **kwargs):
- """Initialize the logits processor. For a list of arguments, see the self-debiasing logit processor's init function."""
- self.logits_processor = SelfDebiasingLogitsProcessor(*args, **kwargs)
-
- def _get_logits_processor(self, *args, **kwargs) -> LogitsProcessorList:
- logits_processor = super()._get_logits_processor(*args, **kwargs)
- if self.logits_processor is not None:
- logits_processor.append(self.logits_processor)
- return logits_processor
-
- def beam_sample(self, *args, **kwargs):
- raise NotImplementedError("Beam sampling is not implemented for self-debiasing models")
-
- def sample(self, input_ids: torch.LongTensor, logits_processor: Optional[LogitsProcessorList] = None,
- logits_warper: Optional[LogitsProcessorList] = None, max_length: Optional[int] = None, pad_token_id: Optional[int] = None,
- eos_token_id: Optional[int] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None,
- output_scores: Optional[bool] = None, return_dict_in_generate: Optional[bool] = None, **model_kwargs) -> Union[
- SampleOutput, torch.LongTensor]:
- """
- This is a verbatim copy of the original implementation by huggingface, with a single modification to ensure that a text and all
- corresponding self-debiasing inputs always choose the same token to generate next. This modification is enclosed by the texts
- "BEGIN MODIFICATIONS" and "END MODIFICATIONS", respectively.
- """
- # init values
- logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
- logits_warper = logits_warper if logits_warper is not None else LogitsProcessorList()
- max_length = max_length if max_length is not None else self.config.max_length
- pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id
- eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
- output_scores = output_scores if output_scores is not None else self.config.output_scores
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict_in_generate = (
- return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate
- )
-
- # init attention / hidden states / scores tuples
- scores = () if (return_dict_in_generate and output_scores) else None
- decoder_attentions = () if (return_dict_in_generate and output_attentions) else None
- decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None
-
- # if model is an encoder-decoder, retrieve encoder attention weights and hidden states
- if return_dict_in_generate and self.config.is_encoder_decoder:
- encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None
- encoder_hidden_states = (
- model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None
- )
-
- # init sequence length tensors
- sequence_lengths, unfinished_sequences, cur_len = self._init_sequence_length_for_generation(
- input_ids, max_length
- )
-
- # auto-regressive generation
- while cur_len < max_length:
- # prepare model inputs
- model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
-
- # forward pass to get next token
- outputs = self(
- **model_inputs,
- return_dict=True,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- )
- next_token_logits = outputs.logits[:, -1, :]
-
- # pre-process distribution
- next_token_scores = logits_processor(input_ids, next_token_logits)
- next_token_scores = logits_warper(input_ids, next_token_scores)
-
- # Store scores, attentions and hidden_states when required
- if return_dict_in_generate:
- if output_scores:
- scores += (next_token_scores,)
- if output_attentions:
- decoder_attentions += (
- (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,)
- )
-
- if output_hidden_states:
- decoder_hidden_states += (
- (outputs.decoder_hidden_states,)
- if self.config.is_encoder_decoder
- else (outputs.hidden_states,)
- )
-
- # sample
- probs = F.softmax(next_token_scores, dim=-1)
- next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
-
- # =========================
- # BEGIN MODIFICATIONS
- # the following modification to the sample method is necessary to ensure that each debiasing sentence is continued in the same
- # way as the original sentence
- if self.logits_processor is not None:
- batch_size = next_tokens.shape[0] // (1 + self.logits_processor.num_debiasing_prefixes)
- regular_sentence_indices = range(batch_size)
- for regular_sentence_idx in regular_sentence_indices:
- debiasing_sentence_indices = self.logits_processor._get_bias_indices(regular_sentence_idx, batch_size)
- for debiasing_sentence_idx in debiasing_sentence_indices:
- next_tokens[debiasing_sentence_idx] = next_tokens[regular_sentence_idx]
- # END MODIFICATIONS
- # =========================
-
- # add code that transforms next_tokens to tokens_to_add
- if eos_token_id is not None:
- assert pad_token_id is not None, "If eos_token_id is defined, make sure that pad_token_id is defined."
- next_tokens = next_tokens * unfinished_sequences + (pad_token_id) * (1 - unfinished_sequences)
-
- # add token and increase length by one
- input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
- cur_len = cur_len + 1
-
- # update sequence length
- if eos_token_id is not None:
- sequence_lengths, unfinished_sequences = self._update_seq_length_for_generation(
- sequence_lengths, unfinished_sequences, cur_len, next_tokens == eos_token_id
- )
-
- # stop when there is an eos token in each sentence, or if we exceed the maximum length
- if unfinished_sequences.max() == 0:
- break
-
- # update model kwargs
- model_kwargs = self._update_model_kwargs_for_generation(
- outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
- )
-
- if return_dict_in_generate:
- if self.config.is_encoder_decoder:
- return SampleEncoderDecoderOutput(
- sequences=input_ids,
- scores=scores,
- encoder_attentions=encoder_attentions,
- encoder_hidden_states=encoder_hidden_states,
- decoder_attentions=decoder_attentions,
- decoder_hidden_states=decoder_hidden_states,
- )
- else:
- return SampleDecoderOnlyOutput(
- sequences=input_ids,
- scores=scores,
- attentions=decoder_attentions,
- hidden_states=decoder_hidden_states,
- )
- else:
- return input_ids
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/web_server.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/web_server.py
deleted file mode 100644
index fa46e905caa307f30a242951610193ee2a98692e..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/web_server.py
+++ /dev/null
@@ -1,62 +0,0 @@
-"""Low level HTTP server."""
-import asyncio
-from typing import Any, Awaitable, Callable, Dict, List, Optional # noqa
-
-from .abc import AbstractStreamWriter
-from .helpers import get_running_loop
-from .http_parser import RawRequestMessage
-from .streams import StreamReader
-from .web_protocol import RequestHandler, _RequestFactory, _RequestHandler
-from .web_request import BaseRequest
-
-__all__ = ("Server",)
-
-
-class Server:
- def __init__(
- self,
- handler: _RequestHandler,
- *,
- request_factory: Optional[_RequestFactory] = None,
- loop: Optional[asyncio.AbstractEventLoop] = None,
- **kwargs: Any
- ) -> None:
- self._loop = get_running_loop(loop)
- self._connections: Dict[RequestHandler, asyncio.Transport] = {}
- self._kwargs = kwargs
- self.requests_count = 0
- self.request_handler = handler
- self.request_factory = request_factory or self._make_request
-
- @property
- def connections(self) -> List[RequestHandler]:
- return list(self._connections.keys())
-
- def connection_made(
- self, handler: RequestHandler, transport: asyncio.Transport
- ) -> None:
- self._connections[handler] = transport
-
- def connection_lost(
- self, handler: RequestHandler, exc: Optional[BaseException] = None
- ) -> None:
- if handler in self._connections:
- del self._connections[handler]
-
- def _make_request(
- self,
- message: RawRequestMessage,
- payload: StreamReader,
- protocol: RequestHandler,
- writer: AbstractStreamWriter,
- task: "asyncio.Task[None]",
- ) -> BaseRequest:
- return BaseRequest(message, payload, protocol, writer, task, self._loop)
-
- async def shutdown(self, timeout: Optional[float] = None) -> None:
- coros = [conn.shutdown(timeout) for conn in self._connections]
- await asyncio.gather(*coros)
- self._connections.clear()
-
- def __call__(self) -> RequestHandler:
- return RequestHandler(self, loop=self._loop, **self._kwargs)
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/tests/abstract/put.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/tests/abstract/put.py
deleted file mode 100644
index cdb76a8b733e789d70ada3cfc22efec372f44e80..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/tests/abstract/put.py
+++ /dev/null
@@ -1,36 +0,0 @@
-class AbstractPutTests:
- def test_put_directory_recursive(
- self, fs, fs_join, fs_path, local_fs, local_join, local_path
- ):
- # https://github.com/fsspec/filesystem_spec/issues/1062
- # Recursive cp/get/put of source directory into non-existent target directory.
- src = local_join(local_path, "src")
- src_file = local_join(src, "file")
- local_fs.mkdir(src)
- local_fs.touch(src_file)
-
- target = fs_join(fs_path, "target")
-
- # put without slash
- assert not fs.exists(target)
- for loop in range(2):
- fs.put(src, target, recursive=True)
- assert fs.isdir(target)
-
- if loop == 0:
- assert fs.isfile(fs_join(target, "file"))
- assert not fs.exists(fs_join(target, "src"))
- else:
- assert fs.isfile(fs_join(target, "file"))
- assert fs.isdir(fs_join(target, "src"))
- assert fs.isfile(fs_join(target, "src", "file"))
-
- fs.rm(target, recursive=True)
-
- # put with slash
- assert not fs.exists(target)
- for loop in range(2):
- fs.put(src + "/", target, recursive=True)
- assert fs.isdir(target)
- assert fs.isfile(fs_join(target, "file"))
- assert not fs.exists(fs_join(target, "src"))
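The deleted test above pins down rsync-like semantics for recursive put: without a trailing slash the source directory itself gets nested under the target on a second call, while with a trailing slash only its contents are copied. A small self-contained sketch of the same behaviour, assuming fsspec's local and in-memory filesystems (paths are illustrative only):

```python
# Hedged sketch of the trailing-slash behaviour exercised by the test above.
import os
import tempfile
import fsspec

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "src")
os.makedirs(src)
open(os.path.join(src, "file"), "w").close()

mem = fsspec.filesystem("memory")

# Without a trailing slash: the first call copies the contents; a second call
# would nest "src" inside the now-existing target.
mem.put(src, "/target", recursive=True)
print(mem.find("/target"))

# With a trailing slash: only the contents are copied, so repeated calls
# leave the layout flat.
mem.rm("/target", recursive=True)
mem.put(src + "/", "/target", recursive=True)
print(mem.find("/target"))
```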
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/sha.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/sha.py
deleted file mode 100644
index 157ccb0379eb1c80389d8e06135f305d11889caf..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/sha.py
+++ /dev/null
@@ -1,27 +0,0 @@
-"""Utilities to efficiently compute the SHA 256 hash of a bunch of bytes."""
-from hashlib import sha256
-from typing import BinaryIO, Optional
-
-
-def sha_fileobj(fileobj: BinaryIO, chunk_size: Optional[int] = None) -> bytes:
- """
- Computes the sha256 hash of the given file object, by chunks of size `chunk_size`.
-
- Args:
- fileobj (file-like object):
- The File object to compute sha256 for, typically obtained with `open(path, "rb")`
- chunk_size (`int`, *optional*):
- The number of bytes to read from `fileobj` at once, defaults to 1MB.
-
- Returns:
- `bytes`: `fileobj`'s sha256 hash as bytes
- """
- chunk_size = chunk_size if chunk_size is not None else 1024 * 1024
-
- sha = sha256()
- while True:
- chunk = fileobj.read(chunk_size)
- sha.update(chunk)
- if not chunk:
- break
- return sha.digest()
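A short usage sketch for the helper above; the file path is a placeholder and the import path simply mirrors the deleted module's location:

```python
# Hedged sketch: hash a local file in 1 MB chunks with sha_fileobj.
from huggingface_hub.utils.sha import sha_fileobj

with open("model.bin", "rb") as f:     # "model.bin" is a placeholder path
    digest = sha_fileobj(f)            # defaults to 1 MB chunks
print(digest.hex())                    # 64 hex characters for SHA-256
```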
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Black Magic Design Davinci Resolve 11 LINK Crack.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Black Magic Design Davinci Resolve 11 LINK Crack.md
deleted file mode 100644
index 56570f30b276b8358776b3154b6006a1db83a60d..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Black Magic Design Davinci Resolve 11 LINK Crack.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-DaVinci Resolve Studio is also the only solution designed for multi ... a revolutionary intelligent design, Blackmagic RAW gives you both the ... 4d29de3e1b
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Elcomsoft Forensic Disk Decryptor V1.0.110 With Key [TorDigger] Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Elcomsoft Forensic Disk Decryptor V1.0.110 With Key [TorDigger] Download.md
deleted file mode 100644
index 3c37de8a800354ba3c4a5fd00c0e3515fc12978a..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Elcomsoft Forensic Disk Decryptor V1.0.110 With Key [TorDigger] Download.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
Elcomsoft Forensic Disk Decryptor v1.0.110 with Key [TorDigger] download
My Music Taste jaise item phir mandel ke aa jayenge. you are spoiled my life and for me my life is spoiled. Si Pehla Khoob Baap Jaaye Bahaar Ke woh meri dad hi hi main raat vako vachan ke le itane ka kamwana patthar (Ostkhane Do Bande): Sonu Nigam hoke Bande Raat ise Kaam hai liye. Apna Taher walla Main Nathi Ki Beti. Bahaar Hi Meri Baap. Apne toove jee safaane kamne ko dekh le banaa hai. maanda aanaa the, ko tu main bahut karaongi aur zindagani main (Kuchh Nahi Na Man): Sonu Nigam hoke Amma Mujhe Kuchh To Jab Chahiye Kahaan Sake, Uddho hko bhi itna. Aap Main Hoon (Tu Mere Dil Me): Talat Aziz - Sonu Nigam. Sonu Nigam Hoke Mitha Koi, Sonu Nigam Hoke The. Apne ke saamne aa jayenge, ye kisi ke aa jayenge. Dil ke jaane ki saat ke aa jaye, ye kisi ki saat ke aa jaye. Anuradha Paudwal, Pankaj Udhas. Koi Aane Ko Hai, Meherbaun Ki Rani. Pyaar Kar Raat Ke Hai, Meherbaun Ki Rani (Bekhudi Ji). Meherbaun Ki Rani Mere Dil Me: Sonu Nigam hoke Amar Kahani. Akhbar Ki Hazaar Bandi Main Hai Teri Amma. Akbhar Kurbani Se Zara, Amar Kahani Se Zara. Aaj Ki Raat Koi Aane Ko Hai, -.
-
koi pattah se na maare 520964 Shreya Ghoshal and Sonu Nigam. Kholo Kholo 540355 Noyon. Baharon Mein Hawaein Kyoon 5545626 Pankaj Udhas https://coub.com/stories/2172433-meherbaan-1993-mp3-vbr-320kbps-patched. com/stories/2324589-koi-aane-ko-hai-jaam-kholo-zara-by-pankaj-udhas-mp3-25-_hot_
-
koi aane ko hai jaam kholo zara by pankaj udhas.mp3
https://www.webmd.com/skin-problems-and-treatments/autoimmune-thyroiditis-autoimmune-thyroiditis-hypothyroidism-symptoms https://www.kotobuki.com/speaker/mp3/koi-aane-ko-hai-jaam-kholo-zara-by- pankaj-udhas-mp3/.koi aane ko hai jaam kholo zara by pankaj udhas 3asuvbbb.mp3Main Tere Saath Hoon (Pankaj Udhas). mp3. pak-A000004034.
How to Download Malice by John Gwynne in EPUB Format for Free
-
Malice is the first book in the epic fantasy series The Faithful and the Fallen by John Gwynne. It is a story of good and evil, betrayal and loyalty, war and peace, set in a world where ancient prophecies are coming true and a dark power is rising. If you are a fan of high fantasy with complex characters, rich world-building, and thrilling action, you will love Malice by John Gwynne.
-
But how can you get your hands on this amazing book without spending a dime? Well, there are some websites that offer free EPUB downloads of Malice by John Gwynne. EPUB is a popular e-book format that can be read on most devices, such as smartphones, tablets, e-readers, and computers. However, not all websites are safe and legal, so you need to be careful when choosing where to download Malice by John Gwynne in EPUB format.
In this article, we will show you some of the best websites that offer free EPUB downloads of Malice by John Gwynne. We will also give you some tips on how to avoid malware, viruses, and copyright infringement when downloading e-books online.
-
The Best Websites to Download Malice by John Gwynne in EPUB Format for Free
-
Here are some of the best websites that offer free EPUB downloads of Malice by John Gwynne. We have checked them for safety and legality, but we still recommend that you use a VPN and an antivirus software when accessing them.
-
-
Archive.org: Archive.org is a non-profit digital library that hosts millions of free books, movies, music, and more. You can find both the first book Malice[^2^] and the entire series The Faithful and the Fallen[^1^] by John Gwynne on Archive.org. You can download them in EPUB format or read them online.
-
Flowcode.com: Flowcode.com is a website that allows users to create and share QR codes that link to various content. One of the users has created a QR code that links to a free EPUB download of Malice by John Gwynne[^3^]. You can scan the QR code with your phone or click on the link to access the download page.
-
-
How to Avoid Malware, Viruses, and Copyright Infringement When Downloading E-books Online
-
Downloading e-books online can be risky if you don't take some precautions. Here are some tips on how to avoid malware, viruses, and copyright infringement when downloading e-books online:
-
-
Use a VPN: A VPN (virtual private network) is a service that encrypts your internet traffic and hides your IP address. This way, you can protect your privacy and security online. A VPN can also help you bypass geo-restrictions and access content that may be blocked in your country.
-
Use an antivirus software: An antivirus software is a program that detects and removes malware, viruses, and other threats from your device. It can also warn you of suspicious websites and files before you open them. You should always keep your antivirus software updated and scan your device regularly.
-
Check the source: Before downloading an e-book online, you should check the source of the file. Look for reviews, ratings, comments, and feedback from other users. Avoid websites that have pop-ups, ads, or requests for personal information. Also, avoid files that have strange extensions or sizes.
-
Respect the author's rights: Downloading e-books online for free may be tempting, but it may also be illegal or unethical. You should respect the author's rights and support their work by buying their books or borrowing them from libraries. If you download an e-book online for free, you should delete it after reading it or buy a copy if you liked it.
-
-
Conclusion
-
Malice by John Gwynne is a great book for fantasy
81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack.md
deleted file mode 100644
index de624e2e6df31b8e45522b2137f94f404e21c297..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack.md
+++ /dev/null
@@ -1,36 +0,0 @@
-
-```html
-
TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack: A Complete Professional Solution for Screen Recording and Video Editing
-
If you are looking for a powerful and easy-to-use software to create high-quality screen videos, presentations, tutorials, and more, you should consider TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack. This is the latest version of the popular Camtasia Studio software, which offers a comprehensive set of features and tools to help you capture, edit, and share your screen recordings.
-
TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack
With TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack, you can:
-
-
Record anything on your screen, including PowerPoint slides, webcam video, audio, and voice narration.
-
Edit and enhance your videos with callouts, titles, credits, zooming, panning, quizzes, and additional audio tracks.
-
Share your videos in various formats, such as Flash, QuickTime, MP4, AVI, and more.
-
Use the intelligent capture controls that adapt to your needs and preferences.
-
Enjoy the crystal-clear playback at any size with Camtasia SmartFocus™ technology.
-
Create interactive videos with TechSmith ExpressShow™ feature.
-
-
TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack is compatible with Windows 7 SP1, Windows 8, and Windows 10 (64 Bit versions only). It requires a minimum of 2.0 GHz CPU with dual-core processor (quad-core i5 processor or better recommended), 4 GB RAM (8 GB or more recommended), 2 GB of hard-disk space for program installation, and a display resolution of 1024x768 or higher.
-
To download TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack for free, you can use the link below[^1^]. This is a Google Drive link that contains a zip file with the setup and crack files. You will need to extract the zip file and follow the instructions in the readme file to install and activate the software.
-
TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack is a great software for anyone who wants to create professional-looking screen videos with ease and efficiency. Whether you want to make training videos, screencasts, presentations, or online courses, TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack can help you achieve your goals.
-```
-
-```html
-
How to use TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack
-
Once you have installed and activated TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack, you can start using it to create your screen videos. Here are some basic steps to follow:
-
-
Launch the software and choose whether you want to record your screen, import media, or open a project.
-
If you choose to record your screen, you can select the area of the screen you want to capture, adjust the audio and video settings, and start recording. You can also use the Camtasia Recorder toolbar to pause, resume, or stop the recording.
-
After you finish recording, you can save your recording as a project file or send it directly to the Camtasia Editor.
-
In the Camtasia Editor, you can edit your video by adding transitions, annotations, behaviors, animations, effects, and more. You can also use the timeline to trim, split, cut, copy, paste, and arrange your clips.
-
When you are happy with your video, you can export it in various formats and quality levels. You can also share it directly to YouTube, Vimeo, Google Drive, or Screencast.com.
-
-
TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack is a user-friendly and versatile software that allows you to create stunning screen videos with minimal effort. You can use it for various purposes, such as education, marketing, training, entertainment, and more.
-
However, please note that this software is not free and requires a valid license key to use it without any limitations. If you want to support the developers and enjoy the full features of TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack, you should purchase it from the official website. Alternatively, you can also try the free trial version for 30 days before buying it.
-
If you encounter any problems or bugs while using TechSmith Camtasia Studio 2019.0.7 Build 5034 With Crack, you can contact the TechSmith Support team for assistance[^1^]. You can also visit the TechSmith Community site to ask questions, give feedback, and search answers from other users[^2^].
-
-``` cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/nigeljw/ViewDiffusion/app.py b/spaces/nigeljw/ViewDiffusion/app.py
deleted file mode 100644
index 99534fa8ad88a5c33c9dbb29b2d1b24429800559..0000000000000000000000000000000000000000
--- a/spaces/nigeljw/ViewDiffusion/app.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import gradio
-import torch
-import numpy
-from PIL import Image
-from torchvision import transforms
-from diffusers import StableDiffusionInpaintPipeline
-from diffusers import DPMSolverMultistepScheduler
-
-print("Initializing View Diffusion")
-
-deviceStr = "cuda" if torch.cuda.is_available() else "cpu"
-device = torch.device(deviceStr)
-latents = None
-latentsOld = None
-latentsSize = (1, 4, 64, 64)
-imageSize = (512, 512)
-lastImage = Image.new(mode="RGB", size=imageSize)
-lastSeed = 4096
-generator = torch.Generator(device).manual_seed(lastSeed)
-modelNames = ["stabilityai/stable-diffusion-2-inpainting",
- "runwayml/stable-diffusion-inpainting"]
-modelIndex = 0
-outpaintPipeline = None
-oldLatentWalk = None
-activeLatents = None
-oldLatents = None
-
-def GenerateNewLatentsForInference():
- global latents, oldLatents
- if activeLatents is not None:
- oldLatents = activeLatents
- else:
- oldLatents = latents
-
- if deviceStr == "cuda":
- latents = torch.randn(latentsSize, device=device, dtype=torch.float16)
- else:
- latents = torch.randn(latentsSize, device=device)
- return 0
-
-def InitializeOutpainting():
- print("Initializing Outpainting")
- global outpaintPipeline
- if deviceStr == "cuda":
- outpaintPipeline = StableDiffusionInpaintPipeline.from_pretrained(modelNames[modelIndex],
- torch_dtype=torch.float16)
- #safety_checker=lambda images, **kwargs: (images, False))
- outpaintPipeline.to(device)
- outpaintPipeline.enable_xformers_memory_efficient_attention()
- else:
- outpaintPipeline = StableDiffusionInpaintPipeline.from_pretrained(modelNames[modelIndex])
- #safety_checker=lambda images, **kwargs: (images, False))
-
- outpaintPipeline.scheduler = DPMSolverMultistepScheduler.from_config(outpaintPipeline.scheduler.config)
- outpaintPipeline.set_progress_bar_config(disable=True)
-
-# Based on: https://discuss.pytorch.org/t/help-regarding-slerp-function-for-generative-model-sampling/32475/4
-# Further optimized to trade a divide operation for a multiply
-def Slerp(start, end, alpha):
- start_norm = torch.norm(start, dim=1, keepdim=True)
- end_norm = torch.norm(end, dim=1, keepdim=True)
- omega = torch.acos((start*end/(start_norm*end_norm)).sum(1))
- sinOmega = torch.sin(omega)
- first = torch.sin((1.0-alpha)*omega)/sinOmega
- second = torch.sin(alpha*omega)/sinOmega
- return first.unsqueeze(1)*start + second.unsqueeze(1)*end
-
-def Diffuse(latentWalk, generatorSeed, inputImage, mask, prompt, negativePrompt, guidanceScale, numInferenceSteps, pauseInference):
- global lastImage, lastSeed, generator, oldLatentWalk, activeLatents
-
- if mask is None or pauseInference is True:
- return lastImage
-
- #if staticLatents is False:
- # GenerateNewLatentsForInference()
-
- if oldLatentWalk != latentWalk:
- activeLatents = Slerp(oldLatents, latents, latentWalk)
- oldLatentWalk = latentWalk
-
- if lastSeed != generatorSeed:
- generator = torch.Generator(device).manual_seed(generatorSeed)
- lastSeed = generatorSeed
-
- newImage = outpaintPipeline(prompt=prompt,
- negative_prompt=negativePrompt,
- image=inputImage,
- mask_image=mask,
- guidance_scale=guidanceScale,
- num_inference_steps=numInferenceSteps,
- latents=activeLatents,
- generator=generator).images[0]
-
- if not pauseInference:
- lastImage = newImage
-
- return newImage
-
-InitializeOutpainting()
-
-print("Generating Latents")
-
-GenerateNewLatentsForInference()
-GenerateNewLatentsForInference()
-activeLatents = oldLatents
-
-print("Initializing Gradio Interface")
-
-defaultMask = Image.open("assets/masks/diamond.png")
-numInfStepsDesc = "A higher value generally increases quality, but reduces the frames per second of the output stream."
-#staticLatentsDesc = "This setting increases the frame to frame determinism of the generation. If this is disabled, then the inference will take continuous large walks across the latent space between frames."
-generatorSeedDesc = "Identical seeds allow for persistent scene generation between runs, and changing the seed will take a static large walk across the latent space to better control and alter the generation of scene content, especially when large aberrations exist in the reconstruction."
-promptDesc = "This text will condition the generation of the scene to help guide the content creation."
-negPromptDesc = "This text will help deter the generation from converging towards reconstructing the elements described in the text."
-outputText = "This inferred imagery expands the field of view from the masked area of the input camera feed."
-latentWalkDesc = "This allows you to walk short spans across the latent space with relatively continuous gradients."
-
-examplePrompt1 = "A person in a room" #A person in a room with colored hair"
-examplePrompt2 = "A person with colored hair" #"People in a room with colored hair"
-examplePrompt3 = "A person on a beach with long hair" #"A man on a beach with long hair"
-examplePrompt4 = "A person outside in a field under a starry night sky" #"A woman on a beach with long hair"
-examplePrompt5 = "A person in a forest" #"A panda eating bamboo" #"A panda eating bamboo"
-examplePrompt6 = "A bird flying in the sky" #"A family together in a room"
-examplePrompt7 = "A person in a room" #"A family together outside with colored hair"
-
-with gradio.Blocks(live=True) as ux:
- gradio.Markdown("This generative machine learning demonstration streams stable diffusion outpainting inference live from your camera on your computer or phone to expand your local reality and create an alternate world. High quality frame to frame determinism is a hard problem to solve for latent diffusion models as the generation is inherently relative to input noise distributions for the latents, and many factors such as the inherent Bayer noise from the camera images as well as anything that is altered between camera images (such as focus, white balance, etc) causes non-determinism between frames. Some methods apply spationtemporal attention, but this demonstration focuses on the control over the input latents to navigate the latent space. **Increase the lighting of your physical scene from your camera's perspective, and avoid self shadows of scene content, to improve the quality and consistency of the scene generation.**")
- with gradio.Row():
- with gradio.Column():
- #staticLatents = gradio.Checkbox(label="Static Latents", info=staticLatentsDesc, value=True, interactive=True)
- inputImage = gradio.Image(label="Input Feed", source="webcam", shape=[512,512], streaming=True)
- #inputImage2 = gradio.Image(label="Input Feed 2", source="webcam", shape=[512,512], streaming=True)
- mask = gradio.Image(label="Mask", type="pil", value=defaultMask)
- prompt = gradio.Textbox(label="Prompt", info=promptDesc, placeholder=examplePrompt1, value=examplePrompt1, lines=3)
- negativePrompt = gradio.Textbox(label="Negative Prompt", info=negPromptDesc, placeholder="Facial hair", value="Text, words", lines=3)
- guidanceScale = gradio.Slider(label="Guidance Scale", info="A higher value causes the generation to be more relative to the text prompt conditioning.", maximum=100, minimum=1, value=7.5, step= 0.1)
- numInferenceSteps = gradio.Slider(label="Number of Inference Steps", info=numInfStepsDesc, maximum=100, minimum=1, value=20, step=1)
- generatorSeed = gradio.Slider(label="Generator Seed", info=generatorSeedDesc, maximum=10000, minimum=1, value=lastSeed, step=1)
- #numViews = gradio.Slider(label="Number of Views", info="The number of discrete view perspectives to merge together in the view expansion.", maximum=100, minimum=1, value=1, step=1)
- #modelIndex = gradio.Dropdown(modelNames, label="Model", value="runwayml/stable-diffusion-inpainting")
- #inputImage.style(full_width=True)
-
- with gradio.Column():
- gradio.Markdown("The navigation will attempt to continously loiter in its current location in the embedded space if no input variables change. If you click **Generate New Latents**, then it will preserve the current active latents in the walk,create a new set of random latents, and reset the **Latent Walk** value so that you can walk to a new location.")
- generateLatents = gradio.Button(value="Generate New Latents")
- latentWalk = gradio.Slider(label="Latent Walk", info=latentWalkDesc, maximum=1.0, minimum=0.0, value=0.0, interactive=True)
- outputImage = gradio.Image(label="Extrapolated Field of View")
- pauseInference = gradio.Checkbox(label="Pause Inference", value=False)
-
- inferenceInputs = [latentWalk, generatorSeed, inputImage, mask, prompt, negativePrompt, guidanceScale, numInferenceSteps, pauseInference]
- generateLatents.click(GenerateNewLatentsForInference, outputs=latentWalk)
- inputImage.change(fn=Diffuse, inputs=inferenceInputs, outputs=outputImage, show_progress=False)
-
- examples = [[1.0, 1234, "assets/input/man.png","assets/masks/diamond.png", examplePrompt1, "", 7.5, 20, 1],
- [0.8, 2048, "assets/input/woman.jpg", "assets/masks/star.png", examplePrompt2, "", 7.5, 15, 1],
- [0.3, 8192, "assets/input/man.png", "assets/masks/sphere.png", examplePrompt3, "", 7.5, 25, 1],
- [0.7, 1024, "assets/input/woman.jpg", "assets/masks/spiral.png", examplePrompt4, "", 7.5, 15, 1],
- [1.0, 512, "assets/input/man.png", "assets/masks/square.png", examplePrompt5, "", 7.5, 10, 1],
- [0.1, 256, "assets/input/woman.jpg", "assets/masks/wave.png", examplePrompt6, "", 11.5, 30, 1],
- [0.9, 9999, "assets/input/man.png", "assets/masks/maze.png", examplePrompt7, "", 17.5, 35, 1],]
-
- inputExamples = gradio.Examples(examples, inputs=inferenceInputs, outputs=outputImage, fn=Diffuse)
-
- gradio.Markdown("This demonstration should initialize automatically from the default values, and run relatively well, but if the output is not an ideal reconstruction of your physical local space from your camera's perspective, then you should adjust the generator seed to take large walks across the latent space. In addition, the static latents can be disable to continously walk the latent space, and then it can be set to static again when a better region of the embedded space is found, but this will increase frame to fram non-determinism. You can also condition the generation using prompts to re-enforce or change aspects of the scene. **If you see a black image instead of a generated output image, then you are running into the safety checker.** This can trigger inconsistently even when the generated content is purely PG. If this happens, then increase the lighting of the scene and also increase the number of inference steps to improve the generated predicition to reduce the likelihood of the saftey checker triggering a false positive.")
-
-#inputs=[latentWalk, staticLatents, generatorSeed, inputImage, mask, pauseInference, prompt, negativePrompt, guidanceScale, numInferenceSteps]
-#ux = gradio.Interface(fn=diffuse, title="View Diffusion", article=article, description=description, inputs=inputs, outputs=outputImage, examples=inputExamples, live=True)
-
-print("Launching Demo")
-ux.launch() #debug=True)
diff --git a/spaces/nota-ai/compressed-wav2lip/README.md b/spaces/nota-ai/compressed-wav2lip/README.md
deleted file mode 100644
index 907cb7495de52ee8d93c79c3c235ea624e5cd8bb..0000000000000000000000000000000000000000
--- a/spaces/nota-ai/compressed-wav2lip/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Compressed Wav2Lip
-emoji: 🌟
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.28.0
-app_file: app.py
-pinned: true
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/numerics/fast_transcendentals.h b/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/numerics/fast_transcendentals.h
deleted file mode 100644
index 2c73eeec3ddfbb214da3a47281f832abdde64929..0000000000000000000000000000000000000000
--- a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/numerics/fast_transcendentals.h
+++ /dev/null
@@ -1,1177 +0,0 @@
-/*
- * Copyright 2021 Google LLC
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef LYRA_CODEC_SPARSE_MATMUL_NUMERICS_FAST_TRANSCENDENTALS_H_
-#define LYRA_CODEC_SPARSE_MATMUL_NUMERICS_FAST_TRANSCENDENTALS_H_
-
-#include <math.h>
-#if defined __ARM_NEON || defined __aarch64__
-#include <arm_neon.h>
-#else
-#include <algorithm>
-#endif
-#if defined __AVX__ || defined __AVX2__
-#include <immintrin.h>
-#endif
-#include <cstdint>
-
-#include "sparse_matmul/numerics/fixed_types.h"
-#include "sparse_matmul/numerics/type_utils.h"
-
-namespace csrblocksparse {
-
-// The input to exp is clipped to bounds that prevent overflow/underflow in a
-// 32 bit float representation. e^80 ~ 6e34, which is close to maxfloat.
-constexpr float kMaxExpInput = 80.f;
-constexpr int kMaxExpInputInt = static_cast<int>(kMaxExpInput);
-constexpr float kMinExpInput = -80.f;
-// tanh(9) ~ 0.99999997, which cannot be resolved from 1 in a float32.
-constexpr float kMaxTanhInput = 9.f;
-constexpr float kMinTanhInput = -9.f;
-// sigmoid(18) ~ 0.999999985, which cannot be resolved from 1 in a float32.
-constexpr float kMaxSigmoidInput = 18.f;
-constexpr float kMinSigmoidInput = -18.f;
-// kAConstant ~= 2^23 / ln 2
-constexpr uint32_t kAConstant = 0x4b38aa3b;
-// kBConstant ~= (127 << 23) - 366000
-constexpr uint32_t kBConstant = 0x4e7de9a9;
-// Coefficients of the rational approximation to tanh.
-// Coefficients of the numerator polynomial (odd).
-constexpr float kTanhAlpha1 = 4.89352455891786e-03;
-constexpr float kTanhAlpha3 = 6.37261928875436e-04;
-constexpr float kTanhAlpha5 = 1.48572235717979e-05;
-constexpr float kTanhAlpha7 = 5.12229709037114e-08;
-constexpr float kTanhAlpha9 = -8.60467152213735e-11;
-constexpr float kTanhAlpha11 = 2.00018790482477e-13;
-constexpr float kTanhAlpha13 = -2.76076847742355e-16;
-// The monomial coefficients of the denominator polynomial (even).
-constexpr float kTanhBeta0 = 4.89352518554385e-03;
-constexpr float kTanhBeta2 = 2.26843463243900e-03;
-constexpr float kTanhBeta4 = 1.18534705686654e-04;
-constexpr float kTanhBeta6 = 1.19825839466702e-06;
-
-// Coefficients of the rational approximation to sigmoid.
-// Coefficients of the numerator polynomial (odd).
-constexpr float kSigmoidAlpha1 = 2.48287947061529e-01;
-constexpr float kSigmoidAlpha3 = 8.51377133304701e-03;
-constexpr float kSigmoidAlpha5 = 6.08574864600143e-05;
-constexpr float kSigmoidAlpha7 = 1.15627324459942e-07;
-constexpr float kSigmoidAlpha9 = 4.37031012579801e-11;
-
-// The monomial coefficients of the denominator polynomial (even).
-constexpr float kSigmoidBeta0 = 9.93151921023180e-01;
-constexpr float kSigmoidBeta2 = 1.16817656904453e-01;
-constexpr float kSigmoidBeta4 = 1.70198817374094e-03;
-constexpr float kSigmoidBeta6 = 6.29106785017040e-06;
-constexpr float kSigmoidBeta8 = 5.76102136993427e-09;
-constexpr float kSigmoidBeta10 = 6.10247389755681e-13;
-
-// x is the first term of the Taylor series approximation of tanh near 0 and
-// because the leading error term of tanh(x) - x is O(x^3), it is good for a
-// wide interval, use it in this region where the other approximation is
-// inaccurate. tanh(x) = x - x^3 / 3 + 2x^5 / 15 - 17x^7 / 315 + ...
-// Similarly for sigmoid where the first term is .25x
-constexpr float kTanhLinearRegion = .15f;
-constexpr float kSigmoidLinearRegion = .75f;
-
-// Maximum shift factor for 1/log 2 to keep it inside int32.
-constexpr int kMaxLog2Shift = 30;
-static const int kLogFactor = static_cast<int>((1 << kMaxLog2Shift) / log(2.f));
-static const float kOneOverLog2 = 1.0f / log(2.f);
-// Number of real mantissa bits in IEEE float32.
-constexpr int kFloatMantissaBits = 23;
-// Offset to correct the exponent value in the resulting float.
-constexpr int kFloatExponentOffset = 127 << kFloatMantissaBits;
-// Mask for mantissa.
-constexpr int kFloatMantissaMask = (1 << kFloatMantissaBits) - 1;
-// Mask for exponent;
-constexpr int kFloatExponentMask = (-1) ^ kFloatMantissaMask;
-
-// ========== COMMON DOCUMENTATION FOR THE FLOATING EXPONENT TRICK ============
-// Summary: Use the exponent-mantissa representation of a floating point number
-// to give exponentiation of 2 for free. If we desire f(z) = e^z = 2^(x+n), (for
-// some fixed-point z expressed as an integer with imaginary binary point within
-// it) then we have to compute x+n = z / ln 2 and then splitting x+n into
-// n = int(x+n) and x = fract(x+n) in [0, 1), we can use n and 2^x as the
-// exponent and mantissa of a floating point number, and that float is equal to
-// e^z. For original reference see:
-// http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.9.4508&rep=rep1&type=pdf
-// Important detail:
-// IEEE floats are stored normalized, ie 1.bbbbbbb... x 2^exponent. The leading
-// 1 bit is not actually stored, (as it is always 1), providing an extra bit of
-// precision.
-// Since 2^0=1 and 2^1=2, we can treat the problem as 2^x = 1 + u and we thus
-// need a mapping x in [0, 1) -> u in [0, 1) and the 1 + is provided by the
-// representation.
-// In the original paper cited above, the mapping is u = x - c, where c is set
-// to minimize the average error. The function to compute exp(x) this way is
-// incredibly simple and computationally cheap, but not very accurate.
-// Fortunately, the problem has been reduced to u = 2^x - 1 over [0, 1) for
-// which it is far easier to construct accurate approximations with small
-// polynomials than a full range exp(x), and this is what the cubic and quartic
-// versions below do. An important feature of these functions is that they
-// constrain the solution to be exact at 0 and 1 so there is continuity at each
-// integer boundary where we wrap from 1 to 0 and increment the power of 2.
-
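As a compact illustration of the exponent trick described in the comment above, here is a scalar sketch in Python (an editorial aside, not part of the deleted header; it reuses the quartic coefficients defined just below):

```python
# Hedged scalar sketch of the floating-point exponent trick: build an IEEE-754
# float whose exponent is floor(x) and whose mantissa approximates 2**frac(x).
import math
import struct

def fast_exp2(x: float) -> float:
    n = math.floor(x)
    frac = x - n                                   # in [0, 1)
    # Quartic refinement of 2**frac - 1, exact at 0 and 1 (same form and
    # coefficients as kExpQuarticFactor* below).
    poly = (0.0135302434 * frac + 0.0656107542) * frac + 0.306963906
    mantissa = frac - frac * (1.0 - frac) * poly   # approximates 2**frac - 1
    bits = ((n + 127) << 23) | int(mantissa * (1 << 23))
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def fast_exp(z: float) -> float:
    return fast_exp2(z / math.log(2.0))            # e**z == 2**(z / ln 2)

print(fast_exp(1.0), math.e)                       # agree to roughly 1e-5
```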
-// Coefficients for quartic representation of 2^x - 1 for x on [0,1).
-// The quartic representation is 2^x - 1 ~ x - x(1-x)(ax^2 + bx + c), hence the
-// coefficients of a quadratic are all that is required.
-// Coefficients came from numerical experiments.
-constexpr float kExpQuarticFactor2 = 0.0135302434f;
-constexpr float kExpQuarticFactor1 = 0.0656107542f;
-constexpr float kExpQuarticFactor0 = 0.306963906f;
-// Coefficients for cubic representation of 2^x - 1 for x on [0,1]
-// The cubic representation is 2^x - 1 ~ x - x(1-x)(mx + c), hence the
-// coefficients of a linear function are all that is required.
-// Coefficients came from numerical experiments.
-constexpr float kExpCubicFactor1 = 0.0780252018f;
-constexpr float kExpCubicFactor0 = 0.304684167f;
-// Coefficients are optimized to minimize the absolute error on
-// tanh = (e^2x - 1) / (e^2x + 1) instead of on pure e^x.
-
-// Enum that determines how a transcendental is computed.
-enum TranscendentalMode {
- // Cubic using 16 bit integer arithmetic.
- TM_ORDER3_16BIT,
- // Quartic using 16 bit integer arithmetic.
- TM_ORDER4_16BIT,
- // Quartic using 32 bit float arithmetic.
- TM_ORDER4_FLOAT,
-};
-
-inline int FloatAsInt16(float x) {
- return static_cast(x * (1 << 15) + 0.5f);
-}
-
-inline int FloatAsInt32(float x) {
- return static_cast(x * (1 << 30) + 0.5f);
-}
-
-#if defined __ARM_NEON || defined __aarch64__
-
-constexpr int kMaxSigmoidInputInt = static_cast<int>(kMaxSigmoidInput);
-
-// Computes and returns 2^(x>>23) ie 2^u where x = u << 23 bits.
-// Uses the quartic floating point exponent trick, see COMMON DOCUMENTATION FOR
-// THE FLOATING EXPONENT TRICK above for details.
-// Returns the true value, ie not scaled.
-inline float32x4_t float32_pow2(float32x4_t x) {
- // The input is already shifted left by 23 bits, so when we convert to int,
- // the bottom 23 bits are the fractional part, and the top bits are the
- // integer part. We want to compute a function of the fractional part, so
- // we will mask it off and manipulate it.
- int32x4_t exp_int_x = vcvtq_s32_f32(x);
- // Mask to allow conversion of just the fractional part of x to fixed16<0>.
- int32x4_t mantissa_mask16 = vdupq_n_s32(0x7fff00);
- // Mask to allow conversion of just the fractional part of x to fixed32<1>.
- int32x4_t mantissa_mask32 = vdupq_n_s32(0x7fffff);
- // Narrowing shift to convert to fixed16<0>.
- int16x4_t x_16 = vshrn_n_s32(vandq_s32(mantissa_mask16, exp_int_x), 8);
- // Shift to convert to fixed32<1>.
- int32x4_t x_32 = vshlq_n_s32(vandq_s32(mantissa_mask32, exp_int_x), 7);
- // Compute the polynomial x(x - 1)(ax^2 + bx + c) of the fractional part.
- // Ordering these lines carefully makes it faster, as some of the multiply
- // operations can pipeline instead of waiting for the previous result.
- int32x4_t x_squared = vmull_s16(x_16, x_16);
- int16x4_t b = vdup_n_s16(FloatAsInt16(kExpQuarticFactor1));
- int32x4_t c = vdupq_n_s32(FloatAsInt32(kExpQuarticFactor0));
- int32x4_t bx_plus_c = vmlal_s16(c, b, x_16);
- int16x4_t a = vdup_n_s16(FloatAsInt16(kExpQuarticFactor2));
- // Finish the quadratic: result = ax^2 + bx + c.
- int32x4_t result = vmlal_s16(bx_plus_c, a, vshrn_n_s32(x_squared, 15));
- int32x4_t x_squared_minus_x = vsubq_s32(x_squared, x_32);
-
- // Multiply by x^2 - x.
- result = vqrdmulhq_s32(result, x_squared_minus_x);
- // Shift back to mantissa position. vqrdmulhq_s32 took 2x 30-mantissa bit
- // inputs, made 60-mantissa bit result, doubled it to 61 bits, then discarded
- // the bottom 32 making 29, so shift right 6 to get 23.
- result = vshrq_n_s32(result, 6);
- // Add the constant to normalize the exponent for IEEE format.
- int32x4_t exp_offset = vdupq_n_s32(kFloatExponentOffset);
- exp_int_x = vaddq_s32(exp_int_x, exp_offset);
- exp_int_x = vaddq_s32(exp_int_x, result);
- // Cast back to float, as we just computed the exponent and mantissa and
- // assembled them in IEEE format.
- return vreinterpretq_f32_s32(exp_int_x);
-}
-
-// Scaled float to float exp approximation, using a quartic refinement of
-// the exponent trick. See COMMON DOCUMENTATION FOR THE FLOATING EXPONENT TRICK
-// above for details. Input is a fixed32<31 - mantissa_bits> that has been
-// converted to a float without any further shifting. MUST HAVE ALREADY BEEN
-// CLIPPED to a suitable range for exp!
-// Returns a vector of standard unscaled floats.
-inline float32x4_t fixed32_exp_float_preclipped(const int mantissa_bits,
- float32x4_t x) {
- // Divide by log 2 to convert problem to 2^x, and scale to match the
- // mantissa bits required by IEEE floats.
- // This is the shift of the FP mantissa relative to the input mantissa.
- const int kXShift = kFloatMantissaBits - mantissa_bits;
- const float kLogFactor = static_cast<float>(1 << kXShift);
- float32x4_t factor = vdupq_n_f32(kLogFactor * kOneOverLog2);
- float32x4_t y = vmulq_f32(x, factor);
- // Now compute 2^x.
- return float32_pow2(y);
-}
-
-// uses trick that 2^x can be computed by shifting integer into the
-// exponent, see the following reference for a derivation using double:
-// goo.gl/aUVTK3
-// Input x is clamped to [-64, 64], even infinity and NaN.
-// Accurate to within 3% relative across the entire range.
-// Fully pipelined throughput is about 10 cycles per fast_exp call.
-inline float32x4_t fast_exp(float32x4_t x) {
-#if defined FAST_TRANSCENDENTALS && __ARM_ARCH >= 800
- // Uses vcvtnq_s32_f32, not available on ARM v7 NEON.
-
- // Load A and B, which are defined as integers into float registers.
- float32x4_t A = vreinterpretq_f32_u32(vdupq_n_u32(kAConstant));
- float32x4_t res = vreinterpretq_f32_u32(vdupq_n_u32(kBConstant));
-
- // Make sure x within the allowed range.
- x = vminq_f32(x, vdupq_n_f32(kMaxExpInput));
- x = vmaxq_f32(x, vdupq_n_f32(kMinExpInput));
-
- // res = A * x + B.
- // This shifts x into the exponent field and adds the bias.
- res = vmlaq_f32(res, A, x);
-
- // Convert back to an integer, this is what uses the floating point
- // unit to compute 2^x.
- int32x4_t x_int = vcvtnq_s32_f32(res);
-
- return vreinterpretq_f32_s32(x_int);
-#else
- float32x4_t return_val = vdupq_n_f32(0.f);
-
- float exponent = expf(vgetq_lane_f32(x, 0));
- return_val = vld1q_lane_f32(&exponent, return_val, 0);
-
- exponent = expf(vgetq_lane_f32(x, 1));
- return_val = vld1q_lane_f32(&exponent, return_val, 1);
- exponent = expf(vgetq_lane_f32(x, 2));
- return_val = vld1q_lane_f32(&exponent, return_val, 2);
- exponent = expf(vgetq_lane_f32(x, 3));
- return_val = vld1q_lane_f32(&exponent, return_val, 3);
-
- return return_val;
-#endif // FAST_TRANSCENDENTALS
-}
-
-// This version does a conversion of the input to floating point, then calls
-// the floating point fast_exp function. There is another version
-// fast_exp_fixed, that never does a conversion and is less accurate, but much
-// faster.
-template <int ExponentBits>
-inline float32x4_t fast_exp(int32x4_t x) {
- return fast_exp(vcvtq_n_f32_s32(x, 31 - ExponentBits));
-}
-
-// Performs an exp estimate without doing any floating point operations. The
-// result is a floating point number. See scalar version for an explanation.
-template <int ExponentBits>
-inline float32x4_t fast_exp_fixed(int32x4_t x) {
- static_assert(ExponentBits > 8, "Must have more than 8 ExponentBits");
- constexpr int kA = 1.4426950408889634 * (1 << (ExponentBits - 8));
- constexpr int kB = (127 << 23) - 366000;
-
- constexpr int maxInput = 80 << (31 - ExponentBits);
- constexpr int minInput = -maxInput;
-
- int32x4_t A = vdupq_n_s32(kA);
- int32x4_t res = vdupq_n_s32(kB);
-
- // Make sure x within the allowed range.
- x = vminq_s32(x, vdupq_n_s32(maxInput));
- x = vmaxq_s32(x, vdupq_n_s32(minInput));
-
- // res = A * x + B.
- // This shifts x into the exponent field and adds the bias.
- res = vmlaq_s32(res, A, x);
-
- return vreinterpretq_f32_s32(res);
-}
-
-// fast_exp_norange_check uses vcvtnq_s32_f32, not available on ARM v7 NEON.
-#if __ARM_ARCH >= 800
-namespace detail {
-// tanh can do range check once.
-// Input x is clamped to [-64, 64], even infinity and NaN.
-inline float32x4_t fast_exp_norange_check(float32x4_t x) {
- float32x4_t A = vreinterpretq_f32_u32(vdupq_n_u32(kAConstant));
- float32x4_t res = vreinterpretq_f32_u32(vdupq_n_u32(kBConstant));
-
- res = vmlaq_f32(res, A, x);
-
- int32x4_t x_int = vcvtnq_s32_f32(res);
-
- return vreinterpretq_f32_s32(x_int);
-}
-
-} // namespace detail
-#endif // __ARM_ARCH >= 800
-
-// Clips float input to [-kLimit,kLimit].
-inline float32x4_t ClipToFloatBounds(const float kLimit, const float32x4_t x) {
- // Clip to the input bounds for this approximation.
- float32x4_t clip_limit = vdupq_n_f32(kLimit);
- float32x4_t clipped_x = vminq_f32(x, clip_limit);
- clip_limit = vnegq_f32(clip_limit);
- return vmaxq_f32(clipped_x, clip_limit);
-}
-
-inline float32x4_t float_tanh_float(const float32x4_t& x) {
- float32x4_t clipped_x = ClipToFloatBounds(kMaxTanhInput, x);
- // Divide by log 2 to convert problem to 2^x, double (as we need exp(2x)) and
- // scale to the mantissa bits required by float32_pow2 all in one multiply.
- // Add one to double the input.
- const float kLogFactor = static_cast<float>(1 << (kFloatMantissaBits + 1));
- float32x4_t factor = vdupq_n_f32(kLogFactor * kOneOverLog2);
- clipped_x = vmulq_f32(clipped_x, factor);
- // Now compute 2^x.
- float32x4_t exp_result = float32_pow2(clipped_x);
- // Now compute tanh using (e^2x - 1) / (e^2x + 1).
- float32x4_t one = vdupq_n_f32(1.0f);
- float32x4_t numerator = vsubq_f32(exp_result, one);
- float32x4_t denominator = vaddq_f32(exp_result, one);
- float32x4_t recp = vrecpeq_f32(denominator);
- // Newton-Raphson iteration, accuracy is important for audio quality
- recp = vmulq_f32(recp, vrecpsq_f32(recp, denominator));
- recp = vmulq_f32(recp, numerator);
- // Compute 3rd-order Taylor tanh ~ x - x^3/3 for high accuracy and thus low
- // relative error close to 0.
- float32x4_t third = vdupq_n_f32(1.0f / 3.0f);
- float32x4_t taylor = vmulq_f32(x, x);
- taylor = vmulq_f32(taylor, x);
- taylor = vmulq_f32(taylor, third);
- taylor = vsubq_f32(x, taylor);
- // Test |x| <= 1/9, roughly where the errors cross over, without needing yet
- // another constant.
- float32x4_t ninth = vmulq_f32(third, third);
- uint32x4_t cmp_results = vcaleq_f32(x, ninth);
- return vbslq_f32(cmp_results, taylor, recp);
-}
-
-// Calculates (exp(x) - exp(-x)) / (exp(x) + exp(-x)).
-// Input x is clamped to [-9, 9], even infinity and NaN.
-// See test program for bounds. Throughput of FAST is 334 Mega/sec,
-// throughput of accurate is 232 Mega/sec.
-inline float32x4_t fast_tanh(float32x4_t x) {
-#if defined FASTER_TRANSCENDENTALS
- return float_tanh_float(x);
-#elif defined ACCURATE_TRANSCENDENTAL_APPROX && defined FAST_TRANSCENDENTALS
- x = vminq_f32(x, vdupq_n_f32(kMaxTanhInput));
- x = vmaxq_f32(x, vdupq_n_f32(kMinTanhInput));
-
- // The monomial coefficients of the numerator polynomial (odd).
- const float32x4_t alpha_1 = vdupq_n_f32(kTanhAlpha1);
- const float32x4_t alpha_3 = vdupq_n_f32(kTanhAlpha3);
- const float32x4_t alpha_5 = vdupq_n_f32(kTanhAlpha5);
- const float32x4_t alpha_7 = vdupq_n_f32(kTanhAlpha7);
- const float32x4_t alpha_9 = vdupq_n_f32(kTanhAlpha9);
- const float32x4_t alpha_11 = vdupq_n_f32(kTanhAlpha11);
- const float32x4_t alpha_13 = vdupq_n_f32(kTanhAlpha13);
-
- // The monomial coefficients of the denominator polynomial (even).
- const float32x4_t beta_0 = vdupq_n_f32(kTanhBeta0);
- const float32x4_t beta_2 = vdupq_n_f32(kTanhBeta2);
- const float32x4_t beta_4 = vdupq_n_f32(kTanhBeta4);
- const float32x4_t beta_6 = vdupq_n_f32(kTanhBeta6);
-
- // Since the polynomials are odd/even, we need x^2.
- const float32x4_t x2 = vmulq_f32(x, x);
-
- // Evaluate the numerator polynomial |p|.
- float32x4_t p = vmlaq_f32(alpha_11, x2, alpha_13);
- p = vmlaq_f32(alpha_9, x2, p);
- p = vmlaq_f32(alpha_7, x2, p);
- p = vmlaq_f32(alpha_5, x2, p);
- p = vmlaq_f32(alpha_3, x2, p);
- p = vmlaq_f32(alpha_1, x2, p);
- p = vmulq_f32(x, p);
-
- // Evaluate the denominator polynomial p.
- float32x4_t q = vmlaq_f32(beta_4, x2, beta_6);
- q = vmlaq_f32(beta_2, x2, q);
- q = vmlaq_f32(beta_0, x2, q);
-
- // Divide the numerator by the denominator.
- float32x4_t recp = vrecpeq_f32(q);
- recp = vmulq_f32(recp, vrecpsq_f32(recp, q));
- return vmulq_f32(p, recp);
-#elif defined FAST_TRANSCENDENTALS && __ARM_ARCH >= 800
- // Uses vcvtnq_s32_f32, not available on ARM v7 NEON.
-
- x = vminq_f32(x, vdupq_n_f32(kMaxTanhInput));
- x = vmaxq_f32(x, vdupq_n_f32(kMinTanhInput));
- float32x4_t exp_est = detail::fast_exp_norange_check(x);
- float32x4_t neg_exp_est = detail::fast_exp_norange_check(-x);
-
- // If we're in the linear region.
- // caleq = compare absolute <=
- uint32x4_t cmp_results = vcaleq_f32(x, vdupq_n_f32(kTanhLinearRegion));
-
- float32x4_t diff = vsubq_f32(exp_est, neg_exp_est);
- float32x4_t sum = vaddq_f32(exp_est, neg_exp_est);
- float32x4_t recp = vrecpeq_f32(sum);
- recp = vmulq_f32(recp, vrecpsq_f32(recp, sum));
- float32x4_t tanh_estimate = vmulq_f32(diff, recp);
-
- // Based on comparison, possibly copy x through instead of calculated value.
- // TODO(b/191497441): Is the compiler generating VBIT or VBSL ? VBIT is one
- // cycle and VBSL is two... documentation suggests it can do either.
- return vbslq_f32(cmp_results, x, tanh_estimate);
-#else
- float32x4_t return_val = vdupq_n_f32(0.f);
-
- float tanh_value = tanhf(vgetq_lane_f32(x, 0));
- return_val = vld1q_lane_f32(&tanh_value, return_val, 0);
- tanh_value = tanhf(vgetq_lane_f32(x, 1));
- return_val = vld1q_lane_f32(&tanh_value, return_val, 1);
- tanh_value = tanhf(vgetq_lane_f32(x, 2));
- return_val = vld1q_lane_f32(&tanh_value, return_val, 2);
- tanh_value = tanhf(vgetq_lane_f32(x, 3));
- return_val = vld1q_lane_f32(&tanh_value, return_val, 3);
-
- return return_val;
-#endif // FAST_TRANSCENDENTALS
-}
-
-// Input x is clamped to [-18, 18], even infinity and NaN.
-// See tests for error bounds. Using SIGMOID_AS_TANH with
-// ACCURATE_TRANSCENDENTAL_APPROX is both faster and more accurate. Using
-// SIGMOID_AS_TANH with just FAST is slower, but more accurate.
-// SIGMOID_AS_TANH, ACCURATE is 205 Mega/sec
-// SIGMOID_AS_TANH, FAST is 290 Mega/sec
-// FAST is 340 Mega/sec
-inline float32x4_t fast_sigmoid(float32x4_t x) {
-#ifdef SIGMOID_AS_TANH
- float32x4_t half = vdupq_n_f32(0.5f);
- return vmlaq_f32(half, half, fast_tanh(vmulq_f32(half, x)));
-#else // SIGMOID_AS_TANH
-#if defined FAST_TRANSCENDENTALS && defined ACCURATE_TRANSCENDENTAL_APPROX
- x = vminq_f32(x, vdupq_n_f32(kMaxSigmoidInput));
- x = vmaxq_f32(x, vdupq_n_f32(kMinSigmoidInput));
-
- // The monomial coefficients of the numerator polynomial (odd).
- const float32x4_t alpha_1 = vdupq_n_f32(kSigmoidAlpha1);
- const float32x4_t alpha_3 = vdupq_n_f32(kSigmoidAlpha3);
- const float32x4_t alpha_5 = vdupq_n_f32(kSigmoidAlpha5);
- const float32x4_t alpha_7 = vdupq_n_f32(kSigmoidAlpha7);
- const float32x4_t alpha_9 = vdupq_n_f32(kSigmoidAlpha9);
-
- // The monomial coefficients of the denominator polynomial (even).
- const float32x4_t beta_0 = vdupq_n_f32(kSigmoidBeta0);
- const float32x4_t beta_2 = vdupq_n_f32(kSigmoidBeta2);
- const float32x4_t beta_4 = vdupq_n_f32(kSigmoidBeta4);
- const float32x4_t beta_6 = vdupq_n_f32(kSigmoidBeta6);
- const float32x4_t beta_8 = vdupq_n_f32(kSigmoidBeta8);
- const float32x4_t beta_10 = vdupq_n_f32(kSigmoidBeta10);
-
- // Since the polynomials are odd/even, we need x^2.
- const float32x4_t x2 = vmulq_f32(x, x);
-
- // Evaluate the numerator polynomial p.
- float32x4_t p = vmlaq_f32(alpha_7, x2, alpha_9);
- p = vmlaq_f32(alpha_5, x2, p);
- p = vmlaq_f32(alpha_3, x2, p);
- p = vmlaq_f32(alpha_1, x2, p);
- p = vmulq_f32(x, p);
-
- // Evaluate the denominator polynomial p.
- float32x4_t q = vmlaq_f32(beta_8, x2, beta_10);
- q = vmlaq_f32(beta_6, x2, q);
- q = vmlaq_f32(beta_4, x2, q);
- q = vmlaq_f32(beta_2, x2, q);
- q = vmlaq_f32(beta_0, x2, q);
-
- // Divide the numerator by the denominator.
- float32x4_t recp = vrecpeq_f32(q);
- recp = vmulq_f32(recp, vrecpsq_f32(recp, q));
- return vmlaq_f32(vdupq_n_f32(0.5f), p, recp);
-#elif defined FAST_TRANSCENDENTALS
- float32x4_t denom = vaddq_f32(fast_exp(vnegq_f32(x)), vdupq_n_f32(1.f));
-
- float32x4_t recp = vrecpeq_f32(denom);
- // Newton-Raphson iteration, accuracy is important for audio quality.
- recp = vmulq_f32(recp, vrecpsq_f32(recp, denom));
- float32x4_t half = vdupq_n_f32(0.5f);
- float32x4_t quarter = vdupq_n_f32(0.245f);
- float32x4_t linear_approx = vmlaq_f32(half, quarter, x);
- uint32x4_t cmp_results = vcaleq_f32(x, vdupq_n_f32(kSigmoidLinearRegion));
-
- return vbslq_f32(cmp_results, linear_approx, recp);
-#else
- float32x4_t return_val = vdupq_n_f32(0.f);
-
- float result = 1.f / (1.f + expf(-vgetq_lane_f32(x, 0)));
- return_val = vld1q_lane_f32(&result, return_val, 0);
- result = 1.f / (1.f + expf(-vgetq_lane_f32(x, 1)));
- return_val = vld1q_lane_f32(&result, return_val, 1);
- result = 1.f / (1.f + expf(-vgetq_lane_f32(x, 2)));
- return_val = vld1q_lane_f32(&result, return_val, 2);
- result = 1.f / (1.f + expf(-vgetq_lane_f32(x, 3)));
- return_val = vld1q_lane_f32(&result, return_val, 3);
-
- return return_val;
-#endif // FAST_TRANSCENDENTALS
-#endif // SIGMOID_AS_TANH
-}
-
-// Scalar implementations, mainly useful for testing.
-inline float fast_exp(float x) {
- return vgetq_lane_f32(fast_exp(vdupq_n_f32(x)), 0);
-}
-
-template <int ExponentBits>
-inline float fast_exp(fixed32<ExponentBits> x) {
- return vgetq_lane_f32(fast_exp<ExponentBits>(vdupq_n_s32(x.raw_val())), 0);
-}
-
-// Returns the exponent of a fixed point number in floating point without ever
-// doing any conversions. Less accurate than the version that does conversions,
-// but still accurate to within 4% relative for x < 16.
-template <int ExponentBits>
-inline float fast_exp_fixed(fixed32<ExponentBits> x) {
- return vgetq_lane_f32(fast_exp_fixed<ExponentBits>(vdupq_n_s32(x.raw_val())),
- 0);
-}
-
-inline float fast_sigmoid(float x) {
- return vgetq_lane_f32(fast_sigmoid(vdupq_n_f32(x)), 0);
-}
-
-inline float fast_tanh(float x) {
- return vgetq_lane_f32(fast_tanh(vdupq_n_f32(x)), 0);
-}
-
-// Clips integer input to [-|kLimit|, |kLimit|].
-// Input: register containing 4x fixed32 with mantissa_bits.
-// Output: register containing 4x fixed32 limited to
-// [-|kLimit| << |mantissa_bits|, |kLimit| << |mantissa_bits|].
-template <int kLimit>
-inline int32x4_t ClipToBounds(const int mantissa_bits, const int32x4_t x) {
- // Clip to the input bounds for this approximation.
- int32x4_t clip_limit = vdupq_n_s32(-(kLimit << mantissa_bits));
- int32x4_t clipped_x = vmaxq_s32(x, clip_limit);
- clip_limit = vnegq_s32(clip_limit);
- return vminq_s32(clipped_x, clip_limit);
-}
-
-// Fixed32 sigmoid approximation via a quadratic refinement of the exponent
-// trick.
-// Input: Register containing 4x fixed32 with |mantissa_bits|.
-// Output: Register containing 4x float results.
-inline float32x4_t fixed32_sigmoid_float(const int mantissa_bits,
- const int32x4_t x) {
- int32x4_t input = vnegq_s32(x);
- float32x4_t y =
- vcvtq_f32_s32(ClipToBounds<kMaxSigmoidInputInt>(mantissa_bits, input));
- y = fixed32_exp_float_preclipped(mantissa_bits, y);
- float32x4_t one = vdupq_n_f32(1.0f);
- // Approximate reciprocal is not accurate enough - use full division.
- float32x4_t denom = vaddq_f32(y, one);
- float32x4_t recp = vrecpeq_f32(denom);
- // Newton-Raphson iteration, accuracy is important for audio quality
- recp = vmulq_f32(recp, vrecpsq_f32(recp, denom));
- return recp;
-}
-
-template <int ExponentBits>
-inline float32x4_t fast_sigmoid(int32x4_t x) {
-#if defined FASTER_TRANSCENDENTALS
- // Computation will fail to produce the right result if the input mantissa
- // bits exceeds the number in a float.
- static_assert(kFloatMantissaBits >= fixed32<ExponentBits>::kMantissaBits,
- "Mantissa bits must be at most 23!");
- return fixed32_sigmoid_float(fixed32<ExponentBits>::kMantissaBits, x);
-#else
- return fast_sigmoid(vcvtq_n_f32_s32(x, fixed32<ExponentBits>::kMantissaBits));
-#endif // FASTER_TRANSCENDENTALS
-}
-
-template <int ExponentBits>
-inline float fast_sigmoid(fixed32<ExponentBits> x) {
- return vgetq_lane_f32(fast_sigmoid<ExponentBits>(vdupq_n_s32(x.raw_val())),
- 0);
-}
-
-#else // defined __ARM_NEON || defined __aarch64__
-
-inline float fast_exp(float x) {
-#ifdef FAST_TRANSCENDENTALS
- if (isnan(x)) return 0.0f;
- x = std::max(std::min(x, kMaxExpInput), kMinExpInput);
- float AConstant, BConstant;
- memcpy(&AConstant, &kAConstant, sizeof(int));
- memcpy(&BConstant, &kBConstant, sizeof(int));
- float y = x * AConstant + BConstant;
- int x_int = static_cast<int>(y);
- float ret;
- memcpy(&ret, &x_int, sizeof(float));
- return ret;
-#else
- return expf(x);
-#endif // FAST_TRANSCENDENTALS
-}
-
-template <int ExponentBits>
-inline float fast_exp(fixed32<ExponentBits> x) {
- return fast_exp(static_cast<float>(x));
-}
-
-template <int ExponentBits>
-inline float fast_exp_fixed(fixed32<ExponentBits> x) {
- static_assert(ExponentBits > 8, "Must have more than 8 ExponentBits");
- int matched_decimal =
- std::max(std::min(x.raw_val(), (80 << (31 - ExponentBits))),
- -(80 << (31 - ExponentBits)));
- // Convert 1 / log(2) to 16-bit fixed point with 1 exponent bit
- // (1 / log(2)) * (1 << 14), but then right shift by the appropriate amount to
- // line the decimal point up with the 32-bit float representation.
- // (MantissaBits of x) + (MantissaBits of constant) = 23
- // 23 - (MantissaBits of x) = MantissaBits of constant
- // 23 - (31 - ExponentBits of x) = ...
- // (ExponentBits of x - 8) = MantissaBits of constant
- const int16_t A = (1.f / logf(2.f)) * (1 << (ExponentBits - 8));
- // Same rationale as for floating point versions, bias exponent, subtract
- // 366000 to reduce error by centering approximation, instead of being
- // one-sided.
- const int B = (127 << 23) - 366000;
- matched_decimal = A * matched_decimal + B;
- float ret_val;
- memcpy(&ret_val, &matched_decimal, sizeof(float));
- return ret_val;
-}
-
-inline float fast_tanh(float x) {
-#if defined FAST_TRANSCENDENTALS && defined ACCURATE_TRANSCENDENTAL_APPROX
- // Doesn't do anything fancy, just a 13/6-degree rational interpolant which
- // is accurate up to a couple of ulp in the range [-9, 9], outside of which
- // fl(tanh(x)) = +/-1.
- x = std::max(std::min(x, kMaxTanhInput), kMinTanhInput);
-
- // Since the polynomials are odd/even, we need x^2.
- float x2 = x * x;
-
- // Evaluate numerator.
- float p = kTanhAlpha11 + x2 * kTanhAlpha13;
- p = kTanhAlpha9 + x2 * p;
- p = kTanhAlpha7 + x2 * p;
- p = kTanhAlpha5 + x2 * p;
- p = kTanhAlpha3 + x2 * p;
- p = kTanhAlpha1 + x2 * p;
- p = x * p;
-
- // Evaluate denominator.
- float q = kTanhBeta4 + x2 * kTanhBeta6;
- q = kTanhBeta2 + x2 * q;
- q = kTanhBeta0 + x2 * q;
-
- return p / q;
-#elif defined FAST_TRANSCENDENTALS
- if (std::abs(x) < kTanhLinearRegion) {
- return x;
- } else {
- x = std::max(std::min(x, kMaxTanhInput), kMinTanhInput);
- float positive = fast_exp(x);
- float negative = fast_exp(-x);
- return (positive - negative) / (positive + negative);
- }
-#else
- return tanhf(x);
-#endif // FAST_TRANSCENDENTALS
-}
-
-inline float fast_sigmoid(float x) {
-#ifdef SIGMOID_AS_TANH
- return .5f * fast_tanh(.5f * x) + .5f;
-#else
-#if defined FAST_TRANSCENDENTALS && defined ACCURATE_TRANSCENDENTAL_APPROX
- // Doesn't do anything fancy, just a 9/10-degree rational interpolant which
- // interpolates 1/(1+exp(-x)) - 0.5 up to a couple of ulp in the range
- // [-18, 18], outside of which the fl(sigmoid(x)) = {0|1}. The shifted
- // sigmoid is interpolated because it was easier to make the fit converge.
- // See GenericPacketMath.h* in the open source Eigen library.
- x = std::max(std::min(x, kMaxSigmoidInput), kMinSigmoidInput);
-
- // Since the polynomials are odd/even, we need x^2.
- float x2 = x * x;
-
- // Evaluate numerator.
- float p = kSigmoidAlpha7 + x2 * kSigmoidAlpha9;
- p = kSigmoidAlpha5 + x2 * p;
- p = kSigmoidAlpha3 + x2 * p;
- p = kSigmoidAlpha1 + x2 * p;
- p = x * p;
-
- // Evaluate denominator.
- float q = kSigmoidBeta8 + x2 * kSigmoidBeta10;
- q = kSigmoidBeta6 + x2 * q;
- q = kSigmoidBeta4 + x2 * q;
- q = kSigmoidBeta2 + x2 * q;
- q = kSigmoidBeta0 + x2 * q;
-
- return p / q + 0.5f;
-#elif defined FAST_TRANSCENDENTALS
- if (std::abs(x) < kSigmoidLinearRegion) {
- return .245 * x + .5;
- } else {
- return 1.f / (1.f + fast_exp(-x));
- }
-#else
- return 1.f / (1.f + expf(-x));
-#endif // FAST_TRANSCENDENTALS
-#endif // SIGMOID_AS_TANH
-}
-
-template <int ExponentBits>
-inline float fast_sigmoid(fixed32<ExponentBits> x) {
- return fast_sigmoid(static_cast<float>(x));
-}
-
-#endif // defined __aarch64__
-
-// Number of exponent bits to use for tanh.
-static constexpr int kNumTanhExpBits = 3;
-// Number of exponent bits to use for sigmoid.
-static constexpr int kNumSigmoidExpBits = 4;
-// Number of extra bits to shift sigmoid, due to its low gradient.
-static constexpr int kNumExtraSigmoidShiftBits = 1;
-
-// Returns (and builds if not done yet) a static data table (that is never
-// deleted, as per the style guide) that implements tanh on fixed32 input,
-// returning another fixed32 with the given number of mantissa bits (which is
-// assumed to be less than the input mantissa bits).
-// NOTE that this function is intended to be used only with fixed16 outputs that
-// are sign-extended to 32 bits for convenience, and will return a nullptr
-// if asked for more than |kMaxMantissaBits| of precision in the output table.
-const int* TanhTable(int num_mantissa_bits_out);
-// As TanhTable, but for Sigmoid.
-const int* SigmoidTable(int num_mantissa_bits_out);
-
-// Scalar/generic function to compute and return the fast approximation to exp
-// via a polynomial refinement of the floating point exponent trick.
-// TM_ORDER4_16BIT: Max relative error < 5e-6, absolute error < 1e-5 for x < 1.
-// TM_ORDER3_16BIT: Max relative error < 1.1e-4, absolute error < 3e-4 for
-// x < 1.
-template <TranscendentalMode kOrder, int ExponentBits>
-float fixed32_exp(fixed32<ExponentBits> x) {
- constexpr int kMantissaBits = MantissaBitsOf<fixed32<ExponentBits>>::value;
- // Clip x to min/max exp input to avoid infinities.
- int64_t clipped_x =
- std::max(std::min(x.raw_val(), kMaxExpInputInt << kMantissaBits),
- -(kMaxExpInputInt << kMantissaBits));
- // First convert problem from e^x to 2^x by multiplying by 1/log(2).
- // To maximize precision, log_factor is shifted left the maximum amount to
- // keep within int32, and we shift x left a further amount such that the
- // binary point of the product sits in the correct place in the top 32 bits of
- // the result to be used directly as a float. We can't do that directly, as x
- // would overflow, so we have to shift by 1 bit less and shift the result by
- // 1 bit less to match.
- constexpr int kXShift =
- kFloatMantissaBits + 31 - kMaxLog2Shift - kMantissaBits;
- static_assert(kXShift >= 0,
- "Mantissa bits > kFloatMantissaBits + 31 - kMaxLog2Shift");
- clipped_x <<= kXShift;
- int float_as_int = (kLogFactor * clipped_x >> 31) + kFloatExponentOffset;
- // Separate the resulting fixed-point into integer and fractional parts.
- int int_part = float_as_int & kFloatExponentMask;
- int float_part = float_as_int & kFloatMantissaMask;
- float fraction = static_cast<float>(float_part) / (1 << kFloatMantissaBits);
- // Compute the mantissa = 2^fraction using:
- // fraction - fraction*(1-fraction)*(polynomial of fraction)
- // This guarantees exactness at 0 and 1, providing continuity of the error at
- // integer boundaries.
- float mantissa;
- if (kOrder == TM_ORDER4_16BIT || kOrder == TM_ORDER4_FLOAT) {
- mantissa = (kExpQuarticFactor2 * fraction + kExpQuarticFactor1) * fraction +
- kExpQuarticFactor0;
- } else if (kOrder == TM_ORDER3_16BIT) {
- mantissa = kExpCubicFactor1 * fraction + kExpCubicFactor0;
- }
- mantissa = fraction - fraction * (1.0f - fraction) * mantissa;
- // Since the function above guarantees to stay within [0, 1), we could do all
- // the above in fixed point if necessary, in which case, we can just stuff
- // the bottom kFloatMantissaBits in with the exponent and we are done.
- // In the floating point world, it is simpler to just multiply them together.
- float result;
- memcpy(&result, &int_part, sizeof(float));
- return result * (1.0f + mantissa);
-}
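-// Usage sketch (values illustrative; assumes the fixed32 type is
-// constructible from float): ExponentBits is deduced from the argument, so a
-// caller only spells out the order, e.g.
-//   float y = fixed32_exp<TM_ORDER4_16BIT>(fixed32<5>(0.25f));  // ~1.2840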
-
-// Computes and returns tanh(x) fixed32->float using a polynomial refinement of
-// the floating point exponent trick.
-// kOrder=4: Absolute error < 1.8e-6. Relative error < 1.2e-4 for |x| > 0.01.
-// kOrder=3: Absolute error < 6e-5. Relative error < 3e-3 for |x| > 0.01
-template <TranscendentalMode kOrder, int ExponentBits>
-float fixed32_tanh(fixed32<ExponentBits> x) {
- float float_x = static_cast<float>(x);
- if (std::abs(float_x) < 1.0f / 9.0f) {
- return float_x * (1 - float_x * float_x / 3.0f);
- }
- x = static_cast<fixed32<ExponentBits>>(x.raw_val() * 2);
- float exp_2x = fixed32_exp<kOrder>(x);
- return (exp_2x - 1.0f) / (exp_2x + 1.0f);
-}
-
-// Computes and returns sigmoid(x) fixed32->float using a polynomial refinement
-// of the floating point exponent trick.
-// TM_ORDER4_16BIT: Absolute error < 9e-7, relative < 4e-6.
-// TM_ORDER3_16BIT: Absolute error < 3e-5, relative < 1.1e-4.
-template <TranscendentalMode kOrder, int ExponentBits>
-float fixed32_sigmoid(fixed32<ExponentBits> x) {
- x = static_cast<fixed32<ExponentBits>>(-x.raw_val());
- float exp_x = fixed32_exp<kOrder>(x);
- return 1.0f / (exp_x + 1.0f);
-}
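-// Note: the raw_val() negation above relies on the identity
-// sigmoid(x) = 1 / (1 + e^(-x)), so a single fixed32_exp() evaluation at -x
-// suffices.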
-
-#if defined __AVX2__
-
-// Inline function to access an int32 data table by shifting |x| right by
-// |kNumShiftBits|, and adding |kTableOffset| to the result. |x| contains 8
-// indices and 8 results are returned. The data table is of size
-// |kTableOffset| * 2 + 1.
-template <int kNumShiftBits, int kTableOffset>
-inline __m256i index_data_table(const int32_t* data_table, const __m256i& x) {
- // Shift right with rounding to match input and output precision.
- __m256i shifted = _mm256_set1_epi32(1 << (kNumShiftBits - 1));
- shifted = _mm256_add_epi32(x, shifted);
- shifted = _mm256_srai_epi32(shifted, kNumShiftBits);
- // Add the offset.
- __m256i addend = _mm256_set1_epi32(kTableOffset);
- shifted = _mm256_add_epi32(shifted, addend);
- // And clamp to the indices of the LUT.
- addend = _mm256_add_epi32(addend, addend);
- shifted = _mm256_min_epi32(shifted, addend);
- shifted = _mm256_max_epi32(shifted, _mm256_setzero_si256());
- // Lookup the results in the table.
- return _mm256_i32gather_epi32(data_table, shifted, 4);
-}
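-
-// Scalar equivalent of the lookup above, added for reference only (the helper
-// name is hypothetical and the SIMD path does not use it): round-shift down
-// to the table's precision, re-center by kTableOffset, and clamp to
-// [0, 2 * kTableOffset] before indexing.
-template <int kNumShiftBits, int kTableOffset>
-inline int32_t index_data_table_scalar(const int32_t* data_table, int32_t x) {
- int32_t index =
- ((x + (1 << (kNumShiftBits - 1))) >> kNumShiftBits) + kTableOffset;
- index = std::max(std::min(index, 2 * kTableOffset), 0);
- return data_table[index];
-}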
-
-// Fixed32 to fixed16-in-an-int32 tanh LUT function.
-// Input: register containing 8x fixed32 with |NumInputMantissaBits|.
-// Output: a register containing 8x fixed16 with |NumOutputMantissaBits|, but
-// note that they are sign-extended to 32 bits and are therefore basically the
-// same as fixed32 with |NumOutputMantissaBits|.
-template <int NumInputMantissaBits, int NumOutputMantissaBits>
-inline __m256i fixed32_tanh_fixed16(const int* tanh_table, const __m256i& x) {
- // Lose the unnecessary input precision.
- constexpr int kNumShiftBits = NumInputMantissaBits - NumOutputMantissaBits;
- constexpr int kTableOffset = 1 << (NumOutputMantissaBits + kNumTanhExpBits);
- return index_data_table<kNumShiftBits, kTableOffset>(tanh_table, x);
-}
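-// Usage sketch (the mantissa-bit counts are illustrative): fetch the LUT once
-// and reuse it; TanhTable() returns nullptr when the requested output
-// precision exceeds kMaxMantissaBits, so real callers should check it.
-//   const int* table = TanhTable(/*num_mantissa_bits_out=*/11);
-//   __m256i y16 = fixed32_tanh_fixed16<26, 11>(table, x);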
-
-// Fixed32 to fixed16-in-an-int32 sigmoid LUT function.
-// Input: register containing 8x fixed32 with |NumInputMantissaBits|.
-// Output: a register containing 8x fixed16 with |NumOutputMantissaBits|, but
-// note that they are sign-extended to 32 bits and are therefore basically the
-// same as fixed32 with |NumOutputMantissaBits|.
-template <int NumInputMantissaBits, int NumOutputMantissaBits>
-inline __m256i fixed32_sigmoid_fixed16(const int* sigmoid_table,
- const __m256i& x) {
- // Lose the unnecessary input precision.
- constexpr int kNumShiftBits =
- kNumExtraSigmoidShiftBits + NumInputMantissaBits - NumOutputMantissaBits;
- constexpr int kTableOffset = 1
- << (NumOutputMantissaBits + kNumSigmoidExpBits -
- kNumExtraSigmoidShiftBits);
- return index_data_table<kNumShiftBits, kTableOffset>(sigmoid_table, x);
-}
-
-// Convert 2x registers of 8x float32 into 1 register of 16x16 bit fixed int,
-// assuming that the floats are already scaled up.
-inline __m256i PackFloatsToFixed16(const __m256& x0, const __m256& x1) {
- __m256i int0 = _mm256_cvtps_epi32(x0);
- __m256i int1 = _mm256_cvtps_epi32(x1);
- int0 = _mm256_packs_epi32(int0, int1);
- // Swap the middle 64 bit elements so the results are in the right order.
- return _mm256_permute4x64_epi64(int0, 0xd8);
-}
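-// Why the permute above: _mm256_packs_epi32 packs within each 128-bit lane,
-// so the 16 results come out ordered [x0[0..3], x1[0..3], x0[4..7], x1[4..7]];
-// permuting the 64-bit blocks with 0xd8 (block order 0, 2, 1, 3) restores
-// [x0[0..7], x1[0..7]].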
-
-// Clips integer input to [-|kLimit|, |kLimit|].
-// Input: register containing 8x fixed32 with |mantissa_bits|.
-// Output: register containing 8x fixed32 limited to
-// [-|kLimit| << |mantissa_bits|, |kLimit| << |mantissa_bits|].
-template <int kLimit>
-inline __m256i ClipToBounds(const int mantissa_bits, const __m256i& x) {
- // Clip to the input bounds for this approximation.
- __m256i clip_limit = _mm256_set1_epi32(-(kLimit << mantissa_bits));
- __m256i clipped_x = _mm256_max_epi32(x, clip_limit);
- // This quickly negates the limit without having to load another constant.
- clip_limit = _mm256_sign_epi32(clip_limit, clip_limit);
- return _mm256_min_epi32(clipped_x, clip_limit);
-}
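-// Note: _mm256_sign_epi32(a, b) negates the lanes of |a| wherever |b| is
-// negative, so applying it to the (negative) lower limit yields the positive
-// upper limit without loading a second constant.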
-
-// Clips float input to [-|kLimit|, |kLimit|].
-// Input: register containing 8x float.
-// Output: register containing 8x float limited to [-|kLimit|, |kLimit|].
-inline __m256 ClipToFloatBounds(const float kLimit, const __m256& x) {
- __m256 clip_limit = _mm256_set1_ps(kLimit);
- __m256 clipped_x = _mm256_min_ps(x, clip_limit);
- clip_limit = _mm256_set1_ps(-kLimit);
- return _mm256_max_ps(clipped_x, clip_limit);
-}
-
-// Float-to-float power-of-2 approximation, using a quartic/cubic refinement
-// of the exponent trick. For TM_ORDER4_16BIT and TM_ORDER3_16BIT the
-// implementation is entirely in integers, using AVX2 16x16->16-bit
-// multiplies, which enables 16 elements to be computed in parallel, hence the
-// double register input/output args.
-// The price paid for this speed is an increase in error over the (scalar) int32
-// example implementations above by a variable factor of 4-10.
-// For the TM_ORDER4_FLOAT case, the computation is all done in float, solving
-// this lower precision problem.
-// NOTE: The input must have already been clipped to prevent overflow, which
-// sets the practical limit to +/-126 << kFloatMantissaBits.
-// NOTE: The input is a scaled float, as if converted raw from int, and the
-// scale factor is fixed at kFloatMantissaBits!
-// Input: 2x registers containing 8x float * 1 << kFloatMantissaBits.
-// Output: 2x register containing 8x float.
-// TM_ORDER4_FLOAT: Max relative error < 8e-6, absolute error < 9e-6 for x < 1.
-// TM_ORDER4_16BIT: Max relative error < 3e-5, absolute error < 6e-5 for x < 1.
-// TM_ORDER3_16BIT: Max relative error < 6e-4, absolute error < 2e-3 for x < 1.
-template <TranscendentalMode kOrder>
-inline void float32_pow2(__m256& x0, __m256& x1) {
- // Convert straight to int.
- __m256i exp_int_x0 = _mm256_cvtps_epi32(x0);
- __m256i exp_int_x1 = _mm256_cvtps_epi32(x1);
- __m256i result_x0, result_x1;
-
- static_assert(kOrder == TM_ORDER4_FLOAT || kOrder == TM_ORDER4_16BIT ||
- kOrder == TM_ORDER3_16BIT,
- "Invalid order.");
-
- if (kOrder == TM_ORDER4_FLOAT) {
- __m256i mantissa_mask = _mm256_set1_epi32(0x7fffff);
- __m256 float_factor =
- _mm256_set1_ps(1.0f / static_cast<float>(1 << kFloatMantissaBits));
- __m256i fract0 = _mm256_and_si256(mantissa_mask, exp_int_x0);
- __m256i fract1 = _mm256_and_si256(mantissa_mask, exp_int_x1);
- __m256 float0 = _mm256_mul_ps(_mm256_cvtepi32_ps(fract0), float_factor);
- __m256 float1 = _mm256_mul_ps(_mm256_cvtepi32_ps(fract1), float_factor);
- // Compute the polynomial of the fractional part.
- // Ordering these lines carefully makes it faster, as some of the multiply
- // operations can pipeline instead of waiting for the previous result.
- __m256 x_squared0 = _mm256_mul_ps(float0, float0);
- __m256 x_squared1 = _mm256_mul_ps(float1, float1);
- __m256 b = _mm256_set1_ps(kExpQuarticFactor1);
- __m256 b_x0 = _mm256_mul_ps(b, float0);
- __m256 b_x1 = _mm256_mul_ps(b, float1);
- __m256 a = _mm256_set1_ps(kExpQuarticFactor2);
- __m256 a_x_squared0 = _mm256_mul_ps(a, x_squared0);
- __m256 a_x_squared1 = _mm256_mul_ps(a, x_squared1);
- __m256 x_squared_minus_x0 = _mm256_sub_ps(x_squared0, float0);
- __m256 x_squared_minus_x1 = _mm256_sub_ps(x_squared1, float1);
- __m256 c = _mm256_set1_ps(kExpQuarticFactor0);
- b_x0 = _mm256_add_ps(b_x0, c);
- b_x1 = _mm256_add_ps(b_x1, c);
- float_factor = _mm256_set1_ps(static_cast<float>(1 << kFloatMantissaBits));
- a_x_squared0 = _mm256_add_ps(a_x_squared0, b_x0);
- a_x_squared1 = _mm256_add_ps(a_x_squared1, b_x1);
- a_x_squared0 = _mm256_mul_ps(a_x_squared0, x_squared_minus_x0);
- a_x_squared1 = _mm256_mul_ps(a_x_squared1, x_squared_minus_x1);
- result_x0 = _mm256_cvtps_epi32(_mm256_mul_ps(a_x_squared0, float_factor));
- result_x1 = _mm256_cvtps_epi32(_mm256_mul_ps(a_x_squared1, float_factor));
- } else {
- // Combine the fractional part of both inputs into a single register.
- // The representation is fixed16<0>, i.e. 15 mantissa bits.
- __m256i mantissa_mask = _mm256_set1_epi32(0x7fff00);
- __m256i x_01 =
- _mm256_srli_epi32(_mm256_and_si256(mantissa_mask, exp_int_x0), 8);
- x_01 = _mm256_or_si256(
- x_01,
- _mm256_slli_epi32(_mm256_and_si256(mantissa_mask, exp_int_x1), 8));
- // Compute the polynomial of the fractional part.
- // Ordering these lines carefully makes it faster, as some of the multiply
- // operations can pipeline instead of waiting for the previous result.
- __m256i x_squared = _mm256_mulhrs_epi16(x_01, x_01);
- __m256i result, x_squared_minus_x;
- if (kOrder == TM_ORDER4_16BIT) {
- __m256i b = _mm256_set1_epi16(FloatAsInt16(kExpQuarticFactor1));
- __m256i b_x = _mm256_mulhrs_epi16(b, x_01);
- __m256i a = _mm256_set1_epi16(FloatAsInt16(kExpQuarticFactor2));
- __m256i a_x_squared = _mm256_mulhrs_epi16(a, x_squared);
- x_squared_minus_x = _mm256_sub_epi16(x_squared, x_01);
- __m256i c = _mm256_set1_epi16(FloatAsInt16(kExpQuarticFactor0));
- b_x = _mm256_add_epi16(b_x, c);
- result = _mm256_add_epi16(a_x_squared, b_x);
- } else { // kOrder = TM_ORDER3_16BIT
- __m256i a = _mm256_set1_epi16(FloatAsInt16(kExpCubicFactor1));
- __m256i b = _mm256_set1_epi16(FloatAsInt16(kExpQuarticFactor0));
- __m256i a_x = _mm256_mulhrs_epi16(a, x_01);
- x_squared_minus_x = _mm256_sub_epi16(x_squared, x_01);
- result = _mm256_add_epi16(a_x, b);
- }
- result = _mm256_mulhrs_epi16(result, x_squared_minus_x);
- // Extract 16x16-bit results back to the separate sets of 8x32.
- result_x0 = _mm256_slli_epi32(result, 16);
- result_x0 = _mm256_srai_epi32(result_x0, 8);
- result_x1 = _mm256_srai_epi32(result, 16);
- result_x1 = _mm256_slli_epi32(result_x1, 8);
- }
- // Add the constant to normalize the exponent.
- __m256i exp_offset = _mm256_set1_epi32(kFloatExponentOffset);
- exp_int_x0 = _mm256_add_epi32(exp_int_x0, exp_offset);
- exp_int_x0 = _mm256_add_epi32(exp_int_x0, result_x0);
- exp_int_x1 = _mm256_add_epi32(exp_int_x1, exp_offset);
- exp_int_x1 = _mm256_add_epi32(exp_int_x1, result_x1);
- // Cast back to float, as we just computed the exponent and mantissa and
- // assembled them in IEEE format.
- x0 = _mm256_castsi256_ps(exp_int_x0);
- x1 = _mm256_castsi256_ps(exp_int_x1);
-}
-
-// Fixed32-to-float exp approximation, using a quartic/cubic refinement of
-// the exponent trick. The 16-bit variants work entirely in integers, using
-// AVX2 16x16->16-bit multiplies, which enables 16 elements to be computed in
-// parallel, hence the double register input/output args.
-// The price paid for this speed is an increase in error over the (scalar) int32
-// example implementations above by a variable factor of 4-10.
-// The TM_ORDER4_FLOAT version uses floats and improves the precision.
-// Input: 2x registers containing 8x fixed32 with kInputMantissaBits.
-// Output: 2x registers containing 8x float32.
-// TM_ORDER4_FLOAT: Max relative error < 8e-6, absolute error < 9e-6 for x < 1.
-// TM_ORDER4_16BIT: Max relative error < 3e-5, absolute error < 6e-5 for x < 1.
-// TM_ORDER3_16BIT: Max relative error < 6e-4, absolute error < 2e-3 for x < 1.
-template <int kInputMantissaBits, TranscendentalMode kOrder>
-inline void float_exp_float_preclipped(__m256& y0, __m256& y1) {
- // Divide by log 2 to convert problem to 2^x, and scale to match the
- // mantissa bits required by IEEE floats. Without a _mm256_mulhrs_epi32, it is
- // much easier to do this in float, even with the double conversion, as 16 bit
- // is not precise enough here.
- // This is the shift of the FP mantissa relative to the input mantissa.
- constexpr int kXShift = kFloatMantissaBits - kInputMantissaBits;
- constexpr float kLogFactor = static_cast(1 << kXShift);
- __m256 factor = _mm256_set1_ps(kLogFactor * kOneOverLog2);
- y0 = _mm256_mul_ps(y0, factor);
- y1 = _mm256_mul_ps(y1, factor);
- // Now compute 2^x.
- float32_pow2<kOrder>(y0, y1);
-}
-
-template <int kInputMantissaBits, TranscendentalMode kOrder>
-inline void fixed32_exp_float(const __m256i& x0, const __m256i& x1, __m256& y0,
- __m256& y1) {
- // Clip to acceptable bounds to prevent overflow, and convert to float.
- y0 =
- _mm256_cvtepi32_ps(ClipToBounds