diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Artsoft Mach 4 Crack 536 [PATCHED] A Complete Guide for CNC Enthusiasts.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Artsoft Mach 4 Crack 536 [PATCHED] A Complete Guide for CNC Enthusiasts.md
deleted file mode 100644
index e3db7b1d7804c89b7856a0ac503c399d28b10181..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Artsoft Mach 4 Crack 536 [PATCHED] A Complete Guide for CNC Enthusiasts.md
+++ /dev/null
@@ -1,181 +0,0 @@
-
-
Artsoft Mach 4 Crack 536: What You Need to Know
-
If you are looking for a way to control your CNC machinery, PLC equipment, or robotics, you might have heard of Artsoft Mach 4, a powerful and flexible software that can handle very large files and complex motions. But what if you don't want to pay for the license fee or deal with the activation process? You might be tempted to use a crack instead. But is it worth it? In this article, we will explain what Artsoft Mach 4 and a crack are, how to download and install Artsoft Mach 4 Crack 536, the pros and cons of using it, and some alternatives to consider.
-
What is Artsoft Mach 4?
-
Artsoft Mach 4 is a program that controls CNC machinery, PLC equipment, and robotics. It is the newest CNC motion control software from Artsoft USA, which has been developing software for CNC machines since 2001. Mach 4 is designed to be expandable, flexible, and extremely responsive, even with very large files. It works with different types of motion controllers, such as parallel port, Galil, Vital Systems, PMDX, PoLabs, and CNC4PC, and it supports many types of machines, including mills, drills, lathes, routers, plasma cutters, and lasers.
-
What is a crack?
-
A crack is a modified version of a software that bypasses its security features or license verification. It can be a file that replaces the original executable file of the software, or a patch that modifies the code of the software. A crack can allow users to access all the features of the software without paying for it or activating it.
-
Why do people use cracks?
-
People use cracks for various reasons. Some common ones are:
-
-
To save money. Some software can be very expensive, especially for hobbyists or students who have limited budgets.
-
To access all features. Some software may have limited functionality or features in their trial or demo versions.
-
To avoid license restrictions. Some software may have strict license terms that limit the number of installations or devices that can use it.
-
-
How to download and install Artsoft Mach 4 Crack 536
-
Step 1: Find a reliable source
-
The first step to download and install Artsoft Mach 4 Crack 536 is to find a reliable source that offers the file. There are many websites that claim to provide cracks for various software, but not all of them are trustworthy. Some may contain malware, viruses, spyware, or adware that can harm your computer or steal your personal information. Some may also provide fake or outdated files that do not work properly.
-
To avoid these risks, you should look for sources that have positive reviews, feedback, ratings, or comments from other users. You should also scan the file with an antivirus program before opening it.
-
Step 2: Download the file
-
The next step is to download the file from the source you have chosen. The file size may vary depending on the source, but it should be around 300 MB. The file name may also vary depending on the source, but it should contain "Artsoft", "Mach4", and "crack" in some form.
-
To download the file, you may need to click on a link or button that says "Download", "Download Now", "Free Download", or something similar. You may also need to complete some surveys, offers, captcha tests, or other tasks to unlock the download link. Be careful not to click on any ads or pop-ups that may appear on the website.
-
Step 3: Extract the file
-
The file you have downloaded should be in a compressed format, such as ZIP or RAR. To extract it, you will need a program that can handle these formats, such as WinRAR or 7-Zip. You can download these programs for free from their official websites.
-
To extract the file, you will need to right-click on it and choose "Extract Here" or "Extract to" from the menu. You will then see a folder with the same name as the file appear in the same location as the file.
-
Step 4: Run the installer
-
The folder you have extracted should contain an installer file that has an icon of a blue gear and says "Mach4Installer". To run it, you will need to double-click on it and follow the instructions on the screen.
-
The installer will ask you to choose a language and accept the terms and conditions. It will then ask you to choose a destination folder where you want to install Artsoft Mach 4. The default folder is C:\Mach4Hobby\, but you can change it if you want.
-
The installer will then copy some files and create some shortcuts on your desktop and start menu. It will also ask you if you want to launch Artsoft Mach 4 after installation.
-
Step 5: Copy and paste the crack file
-
The final step is to copy and paste the crack file into the installation folder of Artsoft Mach 4. The crack file should be in the same folder as the installer file and have an icon of a red gear and say "Mach4". To copy it, you will need to right-click on it and choose "Copy" from the menu.
-
To paste it into the installation folder of Artsoft Mach 4, you will need to open it by clicking on its shortcut on your desktop or start menu. You will then see a window with some tabs and buttons at the top. You will need to click on "Help" and then "About". You will then see another window with some information about Artsoft Mach 4.
-
You will need to close this window by clicking on "OK". You will then see another window with some folders and files in it. This is where you need to paste the crack file by right-clicking on an empty space and choosing "Paste" from the menu.
-
You will then see a message asking you if you want to replace an existing file with the same name. You will need to click on "Yes" or "Replace". This will complete the installation process of Artsoft Mach 4 Crack 536.
-
Pros and cons of using Artsoft Mach 4 Crack 536
-
Pros
-
Using Artsoft Mach 4 Crack 536 can have some advantages over using the official version of Artsoft Mach 4. Some of them are:
-
Save money
-
The official version of Artsoft Mach 4 costs $200 for the hobby version and $1400 for the industrial version (as of May 2021). Using a crack can save you this amount of money if you don't want to pay for the license fee.
-
Access all features
-
The official version of Artsoft Mach 4 has different versions with different levels of functionality and features. The hobby version has fewer features than the industrial version, and both versions require additional plugins or licenses for certain motion controllers or devices. Using a crack can allow you to access all the features of both the hobby and the industrial versions of Artsoft Mach 4 without any limitations.
-
No license required
-
The official version of Artsoft Mach 4 requires a license to activate and use the software. The license is tied to a specific computer and cannot be transferred to another one. If you change your computer or hardware, you may need to contact Artsoft to get a new license. Using a crack can avoid this hassle and let you use the software on any computer you want.
-
Cons
-
Using Artsoft Mach 4 Crack 536 can also have some disadvantages over using the official version of Artsoft Mach 4. Some of them are:
-
Risk of malware infection
-
As mentioned earlier, not all sources that provide cracks are reliable or safe. Some may contain malicious programs that can infect your computer or steal your personal information. These programs can damage your files, slow down your system, spy on your activities, or even take control of your machine. You may not even notice that you have been infected until it is too late.
-
Legal issues
-
Using a crack is also illegal and unethical. It violates the terms and conditions of Artsoft Mach 4 and infringes on the intellectual property rights of Artsoft USA. You may face legal consequences if you are caught using a crack, such as fines, lawsuits, or even criminal charges. You may also lose your warranty or support from Artsoft or your machine manufacturer if you use a crack.
-
No updates or support
-
Using a crack also means that you will not receive any updates or support from Artsoft or your machine manufacturer. Updates are important to fix bugs, improve performance, add new features, or enhance compatibility with new hardware or software. Without updates, you may encounter errors, crashes, or compatibility issues with your machine or other devices. You may also miss out on new features that could improve your productivity or creativity.
-
Support is also important to help you troubleshoot any problems or issues that you may face with the software or the machine. Without support, you may have to rely on online forums, blogs, or videos for help, which may not be accurate, reliable, or up-to-date. You may also have to spend more time and money to fix the problems yourself.
-
Alternatives to using Artsoft Mach 4 Crack 536
-
If you are looking for a way to control your CNC machinery, PLC equipment, or robotics without using a crack, there are some alternatives that you can consider. Some of them are:
-
Buy the official version
-
The best and most legal way to use Artsoft Mach 4 is to buy the official version from Artsoft USA or an authorized reseller. You can choose between the hobby version and the industrial version depending on your needs and budget. You can also buy additional plugins or licenses for specific motion controllers or devices that you want to use.
-
By buying the official version, you will get access to all the features and functionality of Artsoft Mach 4 without any limitations. You will also get regular updates and support from Artsoft and your machine manufacturer. You will also avoid any legal issues or malware risks that come with using a crack.
-
Use a free or open source software
-
If you don't want to pay for Artsoft Mach 4 but still want to use a software that can control your CNC machinery, PLC equipment, or robotics, you can look for a free or open source software that can do the same job. There are many free or open source software that can control CNC machines, such as LinuxCNC, GRBL, G-Code Sender, Universal G-Code Sender, CNCjs, bCNC, and more.
-
These software are usually developed by enthusiasts or communities who share their code and knowledge with others. They may not have all the features or functionality of Artsoft Mach 4, but they may have enough for your needs. They may also have more compatibility with different types of hardware or devices than Artsoft Mach 4.
-
However, these software may also have some drawbacks compared to Artsoft Mach 4. They may not be as user-friendly, stable, or reliable as Artsoft Mach 4. They may also have less support or documentation than Artsoft Mach 4. You may also need to learn how to install, configure, and use them properly.
-
Use a trial or demo version
-
If you want to try Artsoft Mach 4 before buying it, you can use a trial or demo version that Artsoft USA offers on its website. The trial or demo version allows you to use Artsoft Mach 4 for a limited time or with limited features. You can use it to test the software and see if it meets your expectations and requirements.
-
The trial or demo version is a good way to get familiar with Artsoft Mach 4 and its features and functionality. You can also use it to compare it with other software that you may be interested in. However, the trial or demo version is not meant to be used for production or commercial purposes. You will still need to buy the official version if you want to use Artsoft Mach 4 for your projects.
-
Conclusion
-
Artsoft Mach 4 is a powerful and flexible software that can control CNC machinery, PLC equipment, and robotics. It is the newest version of CNC motion control software from Artsoft USA, which has been developing software for CNC machines since 2001. It can work with different types of motion controllers and machines, and it can handle very large files and complex motions.
-
However, Artsoft Mach 4 is not free or cheap. It costs $200 for the hobby version and $1400 for the industrial version (as of May 2021). It also requires a license to activate and use the software. Some people may want to use a crack instead of buying the official version. A crack is a modified version of a software that bypasses its security features or license verification. It can allow users to access all the features of Artsoft Mach 4 without paying for it or activating it.
-
But using a crack is not a good idea. It has many disadvantages over using the official version of Artsoft Mach 4. Some of them are:
-
-
Risk of malware infection. Some cracks may contain malicious programs that can harm your computer or steal your personal information.
-
Legal issues. Using a crack is illegal and unethical. It violates the terms and conditions of Artsoft Mach 4 and infringes on the intellectual property rights of Artsoft USA. You may face legal consequences if you are caught using a crack.
-
No updates or support. Using a crack means that you will not receive any updates or support from Artsoft or your machine manufacturer. Updates are important to fix bugs, improve performance, add new features, or enhance compatibility with new hardware or software. Support is important to help you troubleshoot any problems or issues that you may face with the software or the machine.
-
-
Therefore, we recommend that you do not use a crack for Artsoft Mach 4. Instead, you should consider some alternatives that are legal and safe. Some of them are:
-
-
Buy the official version. The best and most legal way to use Artsoft Mach 4 is to buy the official version from Artsoft USA or an authorized reseller. You can choose between the hobby version and the industrial version depending on your needs and budget. You can also buy additional plugins or licenses for specific motion controllers or devices that you want to use.
-
Use a free or open source software. If you don't want to pay for Artsoft Mach 4 but still want to use a software that can control your CNC machinery, PLC equipment, or robotics, you can look for a free or open source software that can do the same job. There are many free or open source software that can control CNC machines, such as LinuxCNC, GRBL, G-Code Sender, Universal G-Code Sender, CNCjs, bCNC, and more.
-
Use a trial or demo version. If you want to try Artsoft Mach 4 before buying it, you can use a trial or demo version that Artsoft USA offers on its website. The trial or demo version allows you to use Artsoft Mach 4 for a limited time or with limited features. You can use it to test the software and see if it meets your expectations and requirements.
-
-
We hope that this article has helped you understand what Artsoft Mach 4 Crack 536 is, how to download and install it, the pros and cons of using it, and some alternatives to consider. We hope that you will make an informed decision and choose the best option for your needs.
-
FAQs
-
Here are some frequently asked questions about Artsoft Mach 4 Crack 536:
-
Q: Is Artsoft Mach 4 Crack 536 safe?
-
A: No, it is not safe. It may contain malware, viruses, spyware, or adware that can harm your computer or steal your personal information. It may also damage your files, slow down your system, spy on your activities, or even take control of your machine. You may not even notice that you have been infected until it is too late.
-
Q: Is Artsoft Mach 4 Crack 536 legal?
-
A: No, it is not legal. It violates the terms and conditions of Artsoft Mach 4 and infringes on the intellectual property rights of Artsoft USA. You may face legal consequences if you are caught using a crack, such as fines, lawsuits, or even criminal charges. You may also lose your warranty or support from Artsoft or your machine manufacturer if you use a crack.
-
Q: Is Artsoft Mach 4 Crack 536 worth it?
-
A: No, it is not worth it. It has many disadvantages over using the official version of Artsoft Mach 4. Some of them are:
-
-
Risk of malware infection. Some cracks may contain malicious programs that can harm your computer or steal your personal information.
-
Legal issues. Using a crack is illegal and unethical. It violates the terms and conditions of Artsoft Mach 4 and infringes on the intellectual property rights of Artsoft USA. You may face legal consequences if you are caught using a crack.
-
No updates or support. Using a crack means that you will not receive any updates or support from Artsoft or your machine manufacturer. Updates are important to fix bugs, improve performance, add new features, or enhance compatibility with new hardware or software. Support is important to help you troubleshoot any problems or issues that you may face with the software or the machine.
-
-
Therefore, we recommend that you do not use a crack for Artsoft Mach 4. Instead, you should consider some alternatives that are legal and safe.
-
Q: What are some alternatives to using Artsoft Mach 4 Crack 536?
-
A: Some alternatives to using a crack for Artsoft Mach 4 are:
-
-
Buy the official version. The best and most legal way to use Artsoft Mach 4 is to buy the official version from Artsoft USA or an authorized reseller. You can choose between the hobby version and the industrial version depending on your needs and budget. You can also buy additional plugins or licenses for specific motion controllers or devices that you want to use.
-
Use a free or open source software. If you don't want to pay for Artsoft Mach 4 but still want to use a software that can control your CNC machinery, PLC equipment, or robotics, you can look for a free or open source software that can do the same job. There are many free or open source software that can control CNC machines, such as LinuxCNC, GRBL, G-Code Sender, Universal G-Code Sender, CNCjs, bCNC, and more.
-
Use a trial or demo version. If you want to try Artsoft Mach 4 before buying it, you can use a trial or demo version that Artsoft USA offers on its website. The trial or demo version allows you to use Artsoft Mach 4 for a limited time or with limited features. You can use it to test the software and see if it meets your expectations and requirements.
-
-
Q: How to download and install Artsoft Mach 4 Crack 536?
-
A: To download and install Artsoft Mach 4 Crack 536, you will need to follow these steps:
-
-
Find a reliable source that offers the file. There are many websites that claim to provide cracks for various software, but not all of them are trustworthy. Some may contain malware, viruses, spyware, or adware that can harm your computer or steal your personal information. Some may also provide fake or outdated files that do not work properly.
-
Download the file from the source you have chosen. The file size may vary depending on the source, but it should be around 300 MB. The file name may also vary depending on the source, but it should contain "Artsoft", "Mach4", and "crack" in some form.
-
Extract the file from the compressed format, such as ZIP or RAR. To extract it, you will need a program that can handle these formats, such as WinRAR or 7-Zip. You can download these programs for free from their official websites.
-
Run the installer file that has an icon of a blue gear and says "Mach4Installer". To run it, you will need to double-click on it and follow the instructions on the screen.
-
Copy and paste the crack file that has an icon of a red gear and says "Mach4" into the installation folder of Artsoft Mach 4. To copy it, you will need to right-click on it and choose "Copy" from the menu. To paste it into the installation folder of Artsoft Mach 4, you will need to open it by clicking on its shortcut on your desktop or start menu. You will then need to click on "Help" and then "About". You will then need to close this window by clicking on "OK". You will then see another window with some folders and files in it. This is where you need to paste the crack file by right-clicking on an empty space and choosing "Paste" from the menu. You will then need to click on "Yes" or "Replace" when asked if you want to replace an existing file with the same name.
-
-
Q: What are the differences between Artsoft Mach 4 Hobby and Industrial?
-
A: Artsoft Mach 4 Hobby and Industrial are two different versions of Artsoft Mach 4 that have different features and functionality. The hobby version is designed for simple hobby machines and costs $200 (as of May 2021). The industrial version is designed for complex industrial machines and costs $1400 (as of May 2021). Some of the differences between them are:
-
-
The industrial version has more features than the hobby version, such as Macro B gcode programming, tool life management, screw mapping, and an advanced GUI editing tool.
-
The industrial version has more compatibility than the hobby version, such as out of band axis (OBA), slave motors, and multiple gcode interpreters.
-
The industrial version has more support than the hobby version, such as simulated 3D machining (with additional plugin license), professional screen designer, scripted M code, and PMC (Ladder Logic addressing for cnc/plc).
-
-
You can find more details about the differences between Artsoft Mach 4 Hobby and Industrial on the official website of Artsoft USA or in this document.
-
Q: How to update Artsoft Mach 4?
-
A: To update Artsoft Mach 4, you will need to follow these steps:
-
-
Go to the official website of Artsoft USA and download the latest version of Artsoft Mach 4 from the downloads page.
-
Run the installer file that has an icon of a blue gear and says "Mach4Installer". To run it, you will need to double-click on it and follow the instructions on the screen.
-
Choose the same destination folder where you have installed Artsoft Mach 4 before. The installer will overwrite the old files with the new ones.
-
Restart your computer and launch Artsoft Mach 4. You should see the new version number in the title bar or in the help menu.
-
-
Note: If you are using a crack for Artsoft Mach 4, you may not be able to update it or use the new features. You may also lose your crack file or get infected by malware during the update process. We recommend that you do not use a crack for Artsoft Mach 4 and buy the official version instead.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bms.exe.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bms.exe.md
deleted file mode 100644
index 830c1e532c143c6ad52e693ef010cb306fe2df5f..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bms.exe.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
How to Fix BMS.EXE Errors on Your PC
-
BMS.EXE is an executable file that belongs to various software programs, such as BusinessPhone Management Suite, Black Mesa, or 1,000 Solitaire Games. It is usually located in the C:\Windows\System32 folder and has a file size of about 5.31 MB. However, sometimes BMS.EXE can cause problems on your PC, such as crashing, freezing, or displaying error messages. In this article, we will show you how to fix BMS.EXE errors on your PC and prevent them from happening again.
BMS.EXE errors can be caused by various factors, such as:
-
-
Missing, corrupted, or moved BMS.EXE file
-
Invalid or damaged registry entries related to BMS.EXE
-
Malware infection that affects BMS.EXE or its associated software
-
Conflicts with other programs or drivers that use BMS.EXE
-
Incompatible or outdated versions of BMS.EXE or its associated software
-
-
To fix BMS.EXE errors, you need to identify the root cause of the problem and apply the appropriate solution.
-
How to Fix BMS.EXE Errors?
-
Depending on the cause of the BMS.EXE error, you can try one or more of the following methods to fix it:
-
-
-
Replace the missing or corrupted BMS.EXE file. If you have accidentally deleted, moved, or overwritten the BMS.EXE file, you can try to restore it from the Recycle Bin, a backup source, or a reliable website. Alternatively, you can reinstall the software that uses BMS.EXE, such as BusinessPhone Management Suite, Black Mesa, or 1,000 Solitaire Games. Make sure to download the latest version of the software from the official website and follow the installation instructions carefully.
-
Clean the registry entries related to BMS.EXE. If you have invalid or damaged registry entries that refer to BMS.EXE, you can use a registry cleaner tool to scan and fix them. A registry cleaner tool is a software that can automatically detect and repair registry errors on your PC. However, be careful when using a registry cleaner tool, as it can also delete some important registry entries that are needed for your system. Make sure to backup your registry before using a registry cleaner tool and only use a reputable one.
-
Scan your PC for malware infection. If you have malware infection that affects BMS.EXE or its associated software, you can use an antivirus or anti-malware program to scan and remove it. A malware infection can corrupt, modify, or delete BMS.EXE files and cause various problems on your PC. Make sure to update your antivirus or anti-malware program regularly and perform a full system scan periodically.
-
Update or uninstall conflicting programs or drivers. If you have conflicts with other programs or drivers that use BMS.EXE, you can try to update or uninstall them. Sometimes, different versions of BMS.EXE or its associated software can cause compatibility issues and lead to errors. To update your programs or drivers, you can use a software updater tool that can automatically check and install the latest updates for your PC. To uninstall your programs or drivers, you can use a software uninstaller tool that can completely remove them from your PC.
-
Repair or reinstall your Windows system. If none of the above methods work, you may have a corrupted or outdated Windows system that causes BMS.EXE errors. To repair your Windows system, you can use a system repair tool that can scan and fix various system issues on your PC. To reinstall your Windows system, you can use a system installer tool that can create a bootable USB drive or DVD and install a fresh copy of Windows on your PC. However, before repairing or reinstalling your Windows system, make sure to backup your important data and files.
-
-
Conclusion
-
BMS.EXE is an executable file that is used by various software programs on your PC. However, sometimes it can cause errors that affect your PC's performance and stability. To fix them, identify the root cause of the problem and apply one of the methods described above, such as replacing the file, cleaning the registry, scanning for malware, updating conflicting programs, or repairing your Windows system.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/AutoCADMechanical2019x64ISOKeygenSadeemPCdownloadpc TOP.md b/spaces/1gistliPinn/ChatGPT4/Examples/AutoCADMechanical2019x64ISOKeygenSadeemPCdownloadpc TOP.md
deleted file mode 100644
index fd90c7098ee757fff3d7e7f09a5c5dd41029fa1c..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/AutoCADMechanical2019x64ISOKeygenSadeemPCdownloadpc TOP.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Page 2. Textual History
-
-Page 3. A Bibliography of the Rahbani, with an Appendix of Palimpsests
-
-Page 4. Recent Publications on the Qur’ān and the Qur’ānic Sciences
-
-Page 5. A Note on the Recently Published “Introduction to the History of the Qur’ān and the Qur’ānic Sciences”
-
-Page 6. The Legitimacy of the Qur’ānic Material in the Introduction
-
-Page 7. A Note on the Other References in the Text
-
-Page 8. I. The Qur’ān and the Holy Books
-
-Page 9. II. The Qur’ān and the Qur’ānic Sciences
-
-Page 10. III. A Word about Dictionaries
-
-Page 11. IV. The Qur’ān and the Summaries of the Qur’ān
-
-Page 12. V. The Qur’ān and its Language
-
-Page 13. VI. Introduction to the Qur’ānic Text
-
-Page 14. VII. The Qur’ānic Text
-
-Page 15. Conclusion
-
-Page 16. Bibliography of the Qur’ān and the Qur’ānic Sciences
-
-Page 17. Glossary of Arabic and Qur’ānic Terminology
-
-Page 18. List of Recent Publications on the Qur’ān and the Qur’ānic Sciences
-
-Page 19. Appendix A: A Bibliography of the Rahbani, with an Appendix of Palimpsests
-
-Page 20. Appendix B: A Note on the Recently Published “Introduction to the History of the Qur’ān and the Qur’ānic Sciences”
-
-Page 21. Appendix C: A Note on the Other References in the Text
-
-Page 22. Appendix D: A Note on the Legitimacy of the Qur’ānic Material
-
-Page 23. Appendix E: A Word about Dictionaries
-
-Page 24. Appendix F: A Note on the Qur’ān and the Summaries of the Qur’ān
-
-Page 25. Appendix G: A Note on the Qur’ān and its Language
-
-Page 26. Appendix H: Introduction to the Qur’ānic Text
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Free Action With Serial Key 2016 Fixed.md b/spaces/1gistliPinn/ChatGPT4/Examples/Free Action With Serial Key 2016 Fixed.md
deleted file mode 100644
index 882de167b93861226afbdd00d59ce519ca6a672d..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Free Action With Serial Key 2016 Fixed.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Latest: Download Free Desktop Wallpapers of Chef Loony! | Series: ... Registration code: 10403029CF3644154841651AF141E800 Licensed e-mail: c2941690@drdrb.com. Registration code: 510B3C20A9E54E0FF1D2FC28BAD1220E 1fdad05405
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bingo Blitz Hack 2023 Unlimited Credits and Coins with Mod APK.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bingo Blitz Hack 2023 Unlimited Credits and Coins with Mod APK.md
deleted file mode 100644
index af7e9abedd714b38ebb5ed02488344d1d107c402..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bingo Blitz Hack 2023 Unlimited Credits and Coins with Mod APK.md
+++ /dev/null
@@ -1,112 +0,0 @@
-
-
Bingo Blitz Unlimited Credits APK: How to Get Free Credits for Your Favorite Bingo Game
-
If you love playing bingo online, you probably know how addictive and fun Bingo Blitz is. It's one of the most popular bingo games on Facebook, with millions of players from all over the world. But there's one problem: you need credits to play.
Credits are the currency of Bingo Blitz, and they allow you to buy bingo cards, power-ups, and other goodies. You can earn credits by playing bingo games, completing quests, or spinning the wheel. But sometimes, you may run out of credits or want more than you can get for free.
-
That's where Bingo Blitz Unlimited Credits APK comes in handy. This is a special version of the game that gives you unlimited credits for free. Yes, you heard that right: free credits every day, without spending a dime.
-
But how do you get this amazing app? And how do you use it to get free credits for your favorite bingo game? In this article, we'll answer all these questions and more. We'll show you how to download and install Bingo Blitz Unlimited Credits APK on your Android device, what are its features and benefits, how to use it to get free credits every day, and some tips and tricks to make the most of it.
-
So, if you're ready to enjoy unlimited bingo fun with Bingo Blitz Unlimited Credits APK, read on!
-
How to download and install Bingo Blitz Unlimited Credits APK on your Android device?
-
The first step to getting free credits for Bingo Blitz is to download and install Bingo Blitz Unlimited Credits APK on your Android device. This is a modified version of the original game that bypasses the credit limit and gives you unlimited access to all the features of the game.
-
But where can you find this app? And how can you install it safely on your device? Here are the steps:
-
-
Go to Credits For Bingo Blitz APK or Bingo Blitz Apk Mod New Version and download the latest version of Bingo Blitz Unlimited Credits APK.
-
Before installing the app, make sure you enable "Unknown sources" in your device settings. This will allow you to install apps from sources other than Google Play Store.
-
Locate the downloaded file in your device storage and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to finish.
Once the app is installed, you can launch it from your app drawer or home screen.
-
-
Congratulations! You have successfully downloaded and installed Bingo Blitz Unlimited Credits APK on your Android device. Now you can enjoy unlimited credits for your favorite bingo game.
-
-
What are the features and benefits of Bingo Blitz Unlimited Credits APK?
-
Bingo Blitz Unlimited Credits APK is not just a regular bingo game. It's a bingo game with unlimited credits and unlimited fun. Here are some of the features and benefits of this app:
-
-
You get unlimited credits for free every day. You don't have to worry about running out of credits or spending money to buy more. You can play as many bingo games as you want, whenever you want.
-
You get access to all the rooms, cards, power-ups, and collections in the game. You can explore different themes and locations, collect rare items, and boost your bingo experience with power-ups.
-
You get to play with millions of other bingo lovers from all over the world. You can chat with them, make new friends, and join clubs and tournaments.
-
You get to enjoy high-quality graphics, sound effects, and animations. The game is designed to give you a realistic and immersive bingo experience.
-
You get to enjoy regular updates and new features. The game is constantly updated with new rooms, cards, collections, events, and more.
-
-
As you can see, Bingo Blitz Unlimited Credits APK is a bingo game like no other. It's a game that gives you unlimited credits and unlimited fun.
How to use Bingo Blitz Unlimited Credits APK to get free credits every day?
-
Now that you have Bingo Blitz Unlimited Credits APK on your device, you may be wondering how to use it to get free credits every day. It's very simple and easy. Here are the steps:
-
-
Launch the app and log in with your Facebook account or create a new one.
-
Once you're in the game, you'll see a pop-up window that says "Congratulations! You have received 1000 free credits!" Tap on "Claim" to get your free credits.
-
You can also get more free credits by tapping on the "Free Credits" button at the top of the screen. This will take you to a page where you can watch videos, complete surveys, or download other apps to earn more credits.
-
You can also get free credits by playing bingo games, completing quests, spinning the wheel, or opening chests.
-
You can check your credit balance at any time by tapping on the "Credits" icon at the bottom of the screen.
-
-
That's it! You can use Bingo Blitz Unlimited Credits APK to get free credits every day and enjoy unlimited bingo fun.
-
Tips and tricks to make the most of Bingo Blitz Unlimited Credits APK
-
Bingo Blitz Unlimited Credits APK is a great app that gives you unlimited credits and unlimited fun. But how can you make the most of it? Here are some tips and tricks to help you out:
-
-
Use power-ups wisely. Power-ups are special items that can help you win bingo games faster and easier. They can do things like mark extra numbers, double your score, or give you extra time. You can buy power-ups with credits or get them for free by playing games or opening chests. But don't use them all at once. Save them for when you really need them, like when you're close to a bingo or when you're playing in a tough room.
-
Collect items and complete collections. Items are rare and valuable objects that you can find in different bingo rooms. They are part of collections that have different themes and rewards. You can see your items and collections by tapping on the "Items" icon at the bottom of the screen. Try to collect as many items as you can and complete collections to get extra credits, power-ups, keys, and other prizes.
-
Join clubs and tournaments. Clubs are groups of players who work together to achieve common goals and earn rewards. You can join a club or create your own by tapping on the "Clubs" icon at the bottom of the screen. You can chat with your club members, send and receive gifts, and participate in club challenges and events. Tournaments are competitions where you can play against other players and win big prizes. You can join a tournament by tapping on the "Tournaments" icon at the bottom of the screen. You can choose from different types of tournaments, like solo, team, or special ones.
-
Play smart and have fun. Bingo is a game of chance, but it also requires some skill and strategy. You can improve your chances of winning by choosing the right number of cards, daubing quickly and accurately, using power-ups wisely, and playing in different rooms. But don't forget to have fun. Bingo is a social game, so chat with other players, make new friends, and enjoy the thrill of bingo.
-
-
Conclusion: Why you should try Bingo Blitz Unlimited Credits APK today
-
Bingo Blitz Unlimited Credits APK is a bingo game that gives you unlimited credits and unlimited fun. It's a game that lets you play bingo online with millions of other players from all over the world. It's a game that lets you explore different themes and locations, collect rare items, and boost your bingo experience with power-ups. It's a game that lets you join clubs and tournaments, chat with friends, and win big prizes.
-
Bingo Blitz Unlimited Credits APK is a game that gives you everything you need to enjoy bingo online. It's a game that gives you free credits every day, without spending a dime. It's a game that gives you access to all the features and benefits of the original game, without any limitations.
-
Bingo Blitz Unlimited Credits APK is a game that you should try today. It's a game that will make you fall in love with bingo all over again.
-
FAQs: Frequently asked questions about Bingo Blitz Unlimited Credits APK
-
Here are some of the most common questions that people ask about Bingo Blitz Unlimited Credits APK:
-
Q: Is Bingo Blitz Unlimited Credits APK safe to use?
-
A: Yes, Bingo Blitz Unlimited Credits APK is safe to use. It's a modified version of the original game that does not harm your device or your account. It's tested and verified by many users and experts. However, you should always download it from a trusted source and scan it with an antivirus before installing it.
-
Q: How can I update Bingo Blitz Unlimited Credits APK?
-
A: Bingo Blitz Unlimited Credits APK is updated regularly with new features and improvements. You can check for updates by launching the app and tapping on the "Settings" icon at the top of the screen. Then, tap on "Check for updates" and follow the instructions. You can also visit [Credits For Bingo Blitz APK] or [Bingo Blitz Apk Mod New Version] to download the latest version of the app.
-
Q: Can I play Bingo Blitz Unlimited Credits APK on other devices?
-
A: Bingo Blitz Unlimited Credits APK is designed for Android devices only. You cannot play it on iOS, Windows, or Mac devices. However, you can use an Android emulator to run it on your PC or laptop. An Android emulator is software that simulates an Android device on your computer. You can download and install an Android emulator like BlueStacks or NoxPlayer and then install Bingo Blitz Unlimited Credits APK on it.
-
Q: Can I play Bingo Blitz Unlimited Credits APK offline?
-
A: No, you cannot play Bingo Blitz Unlimited Credits APK offline. You need an internet connection to play the game, as it connects you with other players and servers. You also need an internet connection to get free credits and updates. If you don't have an internet connection, you won't be able to launch the app or access its features.
-
Q: Can I sync my progress and data between Bingo Blitz Unlimited Credits APK and the original game?
-
A: Yes, you can sync your progress and data between Bingo Blitz Unlimited Credits APK and the original game. You just need to log in with the same Facebook account on both apps. This will allow you to transfer your credits, items, collections, clubs, tournaments, and other data between the apps. However, you should not run both apps at the same time, as this may cause conflicts or errors.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Call of Duty Warzone Mobile APK - The Most Epic Mobile FPS Game Ever.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Call of Duty Warzone Mobile APK - The Most Epic Mobile FPS Game Ever.md
deleted file mode 100644
index 4753964856f1c9b25ab7b9d48a4426508fa9ad25..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Call of Duty Warzone Mobile APK - The Most Epic Mobile FPS Game Ever.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
Call of Duty®: Warzone™ Mobile - The Next Era of Mobile Battle Royale
-
If you are a fan of Call of Duty® franchise and love playing battle royale games on your mobile device, you are in for a treat. Call of Duty®: Warzone™ Mobile is the latest and greatest mobile game from Activision, featuring authentic COD gameplay, shared progression, and up to 120 player count matches on mobile. In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, and some tips and tricks for playing it.
-
What is Call of Duty®: Warzone™ Mobile?
-
Call of Duty®: Warzone™ Mobile is a free-to-play mobile game that brings the epic battle royale experience of Call of Duty®: Warzone™ to your phone. You can squad up with your friends or play solo, and fight to survive in a massive map called Verdansk, where you will encounter enemies, vehicles, weapons, contracts, killstreaks, and more. You can also customize your loadout, earn rewards, and rank up your Battle Pass across platforms.
Call of Duty®: Warzone™ Mobile is not just another mobile battle royale game. It has some unique and exciting features that make it stand out from the crowd. Here are some of them:
-
Authentic COD gameplay
-
This game delivers authentic Call of Duty® gameplay on mobile with first-class graphics and intuitive controls. Everything from movement, aiming and weapon handling to physics, animations and sound have been optimized, delivering the ultimate accuracy, authenticity and performance.
-
-
Shared progression
-
This game is powered by unified Call of Duty® technology, which means you can use social features like friends, chat channels and Battle Pass across platforms for a truly connected multiplayer FPS game experience. You can also access your loadout from other COD titles (sold separately) and use them in this game.
-
Up to 120 player count matches
-
This game features some of the highest real player-counts for mobile battle royale. You can skip the bots and put your skills to the test where they count. Experience the new battle royale in this thrilling survival game. Show off your combat skills and defeat your enemies!
-
How to download and install Call of Duty®: Warzone™ Mobile?
-
If you are eager to play this game on your mobile device, here are the steps you need to follow:
-
Pre-register on Google Play or official website
-
The first step is to pre-register for this game on Google Play or on the [official website](https://www.callofduty.com/warzonemobile). By doing so, you will get a chance to unlock rewards at launch and get notified when the game is available for download.
-
Download the APK file from a trusted source
-
The next step is to download the APK file of this game from a trusted source. You can use the link provided by the official website [3](https://www.callofduty.com/warzonemobile) or search for a reliable APK downloader online. Make sure you have enough storage space on your device before downloading the file.
-
Install the APK file on your device
-
The final step is to install the APK file on your device. To do this, you need to enable the installation of apps from unknown sources in your device settings. Then, locate the downloaded APK file and tap on it to start the installation process. Follow the on-screen instructions and wait for the installation to complete. You may also need to download additional data files for the game to run properly.
-
Tips and tricks for playing Call of Duty®: Warzone™ Mobile
-
Now that you have installed the game on your device, you are ready to jump into the action. But before you do, here are some tips and tricks that will help you improve your gameplay and increase your chances of winning:
-
Choose your loadout wisely
-
Your loadout is your set of weapons, perks, equipment and killstreaks that you can use in the game. You can customize your loadout in the main menu or in-game by accessing a loadout drop. You can also use loadouts from other COD titles (sold separately) if you have them. Choose your loadout based on your playstyle, map, mode and situation. Experiment with different combinations and find what works best for you.
-
Communicate with your squad
-
If you are playing with your friends or other players, communication is key. You can use voice chat or text chat to coordinate your moves, share information, call out enemies, request help and more. You can also use ping system to mark locations, enemies, items and other points of interest. Communication can make a big difference between victory and defeat.
-
Use contracts and killstreaks strategically
-
Contracts are optional missions that you can find and activate in Verdansk. They offer various rewards such as cash, loot, intel and more. There are different types of contracts such as bounty, scavenger, recon and most wanted. Choose contracts that suit your objectives and complete them as fast as possible. Killstreaks are powerful abilities that you can use once you have enough cash or kill credits. They include UAV, airstrike, cluster strike and more. Use them wisely to gain an edge over your enemies or turn the tide of the battle.
-
Explore Verdansk and find loot
-
Verdansk is a huge map with diverse locations such as downtown, airport, stadium, prison and more. Each location has its own characteristics, advantages and disadvantages. Explore Verdansk and find loot such as weapons, armor plates, ammo, cash and more. Loot can be found in buildings, crates, supply boxes and other places. Be careful though, as some areas may be more dangerous than others.
-
Survive the Gulag and redeploy
-
If you get killed in the game, you are not out yet. You will be sent to the Gulag, a prison where you will face another fallen player in a 1v1 fight for a chance to redeploy back to Verdansk. You can also be revived by your teammates or buy a self-revive kit if you have enough cash. If you win the Gulag fight or get revived, you will parachute back to Verdansk with a pistol and some ammo. Try to land safely and rejoin your squad as soon as possible.
-
Conclusion
-
Call of Duty®: Warzone™ Mobile is an amazing mobile game that offers a thrilling battle royale experience with authentic COD gameplay, shared progression and up to 120 player count matches on mobile. If you want to play this game on your device, you need to pre-register on Google Play or the [official website](https://www.callofduty.com/warzonemobile), download the APK file from a trusted source and install it on your device. You also need to follow some tips and tricks such as choosing your loadout wisely, communicating with your squad, using contracts and killstreaks strategically, exploring Verdansk and finding loot, and surviving the Gulag and redeploying.
-
FAQs
-
Here are some frequently asked questions about Call of Duty®: Warzone™ Mobile:
-
-
Is Call of Duty®: Warzone™ Mobile free-to-play?
-
Yes, Call of Duty®: Warzone™ Mobile is free-to -play and does not require any subscription or purchase to play. However, you may need to buy additional data files for the game to run properly. You can also buy in-game currency and items with real money if you want to.
-
What are the minimum requirements for Call of Duty®: Warzone™ Mobile?
-
The minimum requirements for Call of Duty®: Warzone™ Mobile are as follows:
-
-
Android 5.0 or higher
-
At least 3 GB of RAM
-
At least 4 GB of free storage space
-
A stable internet connection
-
-
Can I play Call of Duty®: Warzone™ Mobile with other players on different platforms?
-
Yes, Call of Duty®: Warzone™ Mobile supports cross-play and cross-progression with other platforms such as PC, PlayStation and Xbox. You can play with your friends or other players on different devices and platforms using the same Activision account. You can also access your loadout, rewards and Battle Pass across platforms.
-
How can I report a bug or a cheater in Call of Duty®: Warzone™ Mobile?
-
If you encounter a bug or a cheater in Call of Duty®: Warzone™ Mobile, you can report it using the in-game feedback system. To do this, go to the main menu and tap on the settings icon. Then, tap on the feedback button and choose the type of issue you want to report. You can also attach a screenshot or a video to provide more details. Alternatively, you can contact the customer support team via email or social media.
-
Where can I find more information and updates about Call of Duty®: Warzone™ Mobile?
-
If you want to find more information and updates about Call of Duty®: Warzone™ Mobile, you can visit the [official website](https://www.callofduty.com/warzonemobile) or follow the official social media accounts on Facebook, Twitter, Instagram and YouTube. You can also join the official Discord server or Reddit community to chat with other players and developers.
-
-
I hope you enjoyed reading this article and learned something new about Call of Duty®: Warzone™ Mobile. If you have any questions or feedback, feel free to leave a comment below. Thanks for reading and happy gaming!
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/DRAGON BALL LEGENDS APK - Summon and Fight with Your Favorite DB Characters in 3D.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/DRAGON BALL LEGENDS APK - Summon and Fight with Your Favorite DB Characters in 3D.md
deleted file mode 100644
index 81c101498d36c22dca3ce8b4225180fa45ef6add..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/DRAGON BALL LEGENDS APK - Summon and Fight with Your Favorite DB Characters in 3D.md
+++ /dev/null
@@ -1,159 +0,0 @@
-
-
How to Download and Play Dragon Ball Legends on Android
-
If you are a fan of the Dragon Ball anime series, you might want to try out the latest game based on it: Dragon Ball Legends. This is an action-packed RPG game that lets you summon and fight with your favorite characters from the show. You can also enjoy an original story with a new character designed by Akira Toriyama, the creator of Dragon Ball.
-
In this article, we will show you how to download and play Dragon Ball Legends on your Android device. We will also give you some features, requirements, and tips for the game. Let's get started!
-
What is Dragon Ball Legends?
-
Dragon Ball Legends is a 3D anime action RPG game developed by Bandai Namco Entertainment. It was released in May 2018 for Android and iOS devices. The game features a card-based combat system that is easy to learn but hard to master. You can use various skills, abilities, and combos to defeat your opponents in real-time battles.
-
The game also has a story mode that follows the adventures of Shallot, a mysterious Saiyan who wakes up in a world where past and present Dragon Ball characters are fighting each other. You can join Shallot and other heroes to uncover the truth behind this chaos. You can also play online against other players from around the world in PvP matches.
-
dragon ball legends apk download latest version
-dragon ball legends mod apk unlimited crystals
-dragon ball legends android game free install
-how to download dragon ball legends on pc
-dragon ball legends hack apk no verification
-dragon ball legends 3d action rpg game
-dragon ball legends apk offline mode
-dragon ball legends update download new characters
-dragon ball legends cheats apk free resources
-dragon ball legends gameplay tips and tricks
-dragon ball legends best team setup guide
-dragon ball legends online multiplayer battles
-dragon ball legends story mode walkthrough
-dragon ball legends tier list 2023 ranking
-dragon ball legends codes apk redeem rewards
-dragon ball legends events apk calendar
-dragon ball legends summon simulator apk
-dragon ball legends wallpaper apk hd
-dragon ball legends voice actors apk cast
-dragon ball legends original character shallot
-dragon ball legends super saiyan forms apk
-dragon ball legends frieza saga apk download
-dragon ball legends broly movie apk update
-dragon ball legends ultra instinct goku apk
-dragon ball legends fusion warriors apk team
-dragon ball legends god ki apk characters
-dragon ball legends future trunks apk saga
-dragon ball legends android 21 apk event
-dragon ball legends beerus and whis apk banner
-dragon ball legends majin buu apk arc
-dragon ball legends cell games apk challenge
-dragon ball legends zenkai awakening apk boost
-dragon ball legends rising rush apk combo
-dragon ball legends equipment upgrade apk guide
-dragon ball legends pvp mode apk ranking
-dragon ball legends co-op mode apk missions
-dragon ball legends guild system apk features
-dragon ball legends adventure mode apk rewards
-dragon ball legends training mode apk tips
-dragon ball legends exchange shop apk items
-dragon ball legends z power list apk stats
-dragon ball legends arts cards apk types
-dragon ball legends special moves apk skills
-dragon ball legends ultimate moves apk finishers
-dragon ball legends transformations apk effects
-dragon ball legends tags and categories apk filter
-dragon ball legends battle gauge and ki apk meter
-dragon ball legends vanishing step and cover change apk mechanics
-
Features of Dragon Ball Legends
-
Here are some of the features that make Dragon Ball Legends a fun and exciting game:
-
-
More than 400 characters to collect and train, including Goku, Vegeta, Frieza, Broly, Majin Buu, and many more.
-
High-quality 3D graphics and animations that bring the anime to life on your mobile device.
-
Voice acting from the original anime cast, including Masako Nozawa as Goku, Ryō Horikawa as Vegeta, and Norio Wakamoto as Cell.
-
An original story based on a new character designed by Akira Toriyama, the legendary manga artist behind Dragon Ball.
-
A simple and intuitive combat system that uses cards to activate skills, abilities, and combos.
-
A team-based Rising Rush attack that unleashes a powerful blow on your enemy.
-
A variety of game modes, such as story mode, PvP mode, events mode, co-op mode, and more.
-
-
Requirements for Dragon Ball Legends
-
To play Dragon Ball Legends on your Android device, you need to meet the following requirements:
-
-
Your device must have Android 6.0 or higher installed.
-
Your device must have at least 2 GB of RAM available.
-
Your device must have at least 195 MB of free storage space available.
-
Your device must have a stable internet connection to play online.
-
-
How to Download Dragon Ball Legends APK
-
If you want to download and play Dragon Ball Legends on your Android device, you need to follow these steps:
-
Step 1: Enable Unknown Sources
-
Before you can install any APK file on your device, you need to enable unknown sources. This will allow you to install apps from sources other than the Google Play Store. To do this:
-
-
Go to your device's Settings app.
-
Tap on Security or Privacy (depending on your device).
-
Find and toggle on Unknown Sources or Install Unknown Apps (depending on your device).
-
Confirm your choice by tapping OK or Allow (depending on your device).
-
-
Step 2: Download the APK File
-
Next, you need to download the APK file of Dragon Ball Legends from a reliable source. You can use the link below to get the latest version of the game from APKCombo, a trusted website that offers free and safe APK downloads for Android games and apps.
-
To download the APK file of Dragon Ball Legends, follow these steps:
-
-
Tap on the link below to go to the APKCombo website.
-
Tap on the green Download APK button to start the download.
-
Wait for the download to finish. You can check the progress in your notification bar or your browser's download manager.
-
Once the download is complete, you will see a notification that says "Download complete".
-
Step 3: Install the APK File
-
After you have downloaded the APK file of Dragon Ball Legends, you need to install it on your device. To do this:
-
-
Tap on the notification that says "Download complete" or go to your device's file manager and find the downloaded file.
-
Tap on the file to open it. You may see a warning that says "This type of file can harm your device". Don't worry, this is just a precautionary message. Tap on OK or Install Anyway (depending on your device) to proceed.
-
You may also see a prompt that asks you to allow the app to access your device's resources. Tap on Install or Next (depending on your device) to grant the permissions.
-
Wait for the installation to finish. You will see a message that says "App installed" or "Dragon Ball Legends installed" (depending on your device).
-
Tap on Open or Done (depending on your device) to launch the game or exit the installer.
-
-
Step 4: Launch the Game and Enjoy
-
Congratulations, you have successfully installed Dragon Ball Legends on your Android device. Now you can launch the game and enjoy the action-packed RPG adventure. To launch the game, follow these steps:
-
-
Go to your device's app drawer and find the Dragon Ball Legends icon. It should look like a yellow star with a dragon ball in the center.
-
Tap on the icon to open the game. You may see a loading screen with some tips and information about the game.
-
You may also see a pop-up that asks you to agree to the terms of service and privacy policy of the game. Tap on Agree or Accept (depending on your device) to continue.
-
You may also see a pop-up that asks you to choose your preferred language for the game. Tap on English or any other language you want to use.
-
You may also see a pop-up that asks you to download some additional data for the game. Tap on Download or OK (depending on your device) to start the download. You can also choose to skip this step and download later, but it is recommended to download now for a better gaming experience.
-
Wait for the download to finish. You can check the progress in the bottom right corner of the screen.
-
Once the download is complete, you will see a message that says "Download complete". Tap on OK or Continue (depending on your device) to proceed.
-
You will then see an introduction video that shows some scenes from the game and its story. You can watch it or skip it by tapping on Skip in the top right corner of the screen.
-
You will then see a tutorial that explains how to play the game and its basic controls. You can follow it or skip it by tapping on Skip in the top right corner of the screen.
-
You will then see a screen that asks you to choose your name and appearance for your character. You can use the default name and appearance or customize them by tapping on Edit in the bottom right corner of the screen.
-
Once you are done, tap on Confirm in the bottom right corner of the screen.
-
You will then see a screen that shows your character and some information about him/her. Tap on Start in the bottom right corner of the screen to begin your adventure.
-
-
How to Play Dragon Ball Legends
-
Now that you have downloaded and installed Dragon Ball Legends, you might be wondering how to play it and what are some tips and tricks for beginners. Here are some basic controls and gameplay elements that you should know:
-
Basic Controls and Gameplay
-
The game uses a card-based combat system that is easy to learn but hard to master. You can use various skills, abilities, and combos to defeat your opponents in real-time battles. Here are some basic controls and gameplay elements:
-
-
To move your character, swipe left or right on the screen. To dash towards or away from your enemy, swipe up or down on the screen.
-
To attack your enemy, tap on one of the cards at the bottom of the screen. Each card has a different color and effect: red cards are melee attacks, yellow cards are ranged attacks, green cards are special abilities, and blue cards are ultimate attacks.
-
To use a combo, tap on multiple cards in succession. The more cards you use, the more damage you deal. However, each card also consumes some energy from your ki gauge, which is shown at the top of the screen. You need to manage your ki wisely and avoid running out of it.
-
To dodge an enemy attack, swipe left or right on the screen. You can also use a vanish gauge, which is shown at the bottom of the screen, to perform a vanish step and teleport behind your enemy. However, each vanish step also consumes some energy from your vanish gauge, which recharges over time.
-
To use a Rising Rush attack, tap on the dragon ball icon at the top of the screen when you have collected all seven dragon balls. You can collect dragon balls by using certain cards in battle. A Rising Rush attack is a team-based attack that unleashes a powerful blow on your enemy. However, you can only use it once per battle.
-
To switch your character, tap on one of the portraits at the top left corner of the screen. You can have up to three characters in your team, and each character has a different element, type, and role. You need to choose your team wisely and switch your character strategically to gain an advantage in battle.
-
To use a main ability, tap on the portrait of your active character when it glows. Each character has a unique main ability that can provide various benefits, such as healing, boosting, or debuffing. However, you can only use it once per battle.
-
To use a Z ability, tap on the Z icon at the top right corner of the screen when it glows. Each character has a unique Z ability that can enhance your team's performance, such as increasing damage, defense, or ki recovery. However, you can only use it once per battle.
-
-
Tips and Tricks for Beginners
-
Here are some tips and tricks that can help you improve your skills and enjoy the game more:
-
-
Complete the story mode to unlock new characters, items, and rewards. You can also replay the story mode chapters to earn more stars and crystals.
-
Complete the missions and events to get more rewards and challenges. You can also join co-op missions and events with other players to get more fun and rewards.
-
Upgrade your characters by using souls, zeni, and training items. You can also limit break your characters by using z power to increase their stats and stars.
-
Equip your characters with equipment items that match their element, type, and role. You can also upgrade your equipment items by using zeni and erasers.
-
Customize your team by choosing the best combination of characters, equipment, and abilities. You can also create different teams for different game modes and situations.
-
Learn the strengths and weaknesses of each element, type, and role. You can also check the details of each character by tapping on them in the character list or in battle.
-
Practice your combat skills by playing against the AI or other players in PvP mode. You can also watch replays of your battles or other players' battles to learn from them.
-
Join a guild and chat with other players who share your passion for Dragon Ball. You can also participate in guild activities and events to get more rewards and fun.
-
-
Conclusion
-
Dragon Ball Legends is an amazing game that lets you experience the thrill of Dragon Ball on your mobile device. You can download and play it for free by following the steps above. You can also enjoy an original story with a new character designed by Akira Toriyama, the creator of Dragon Ball. You can also collect and train hundreds of characters from the anime series and fight with them in real-time battles against other players from around the world.
-
If you are looking for a fun and exciting RPG game that is based on one of the most popular anime series of all time, you should definitely try out Dragon Ball Legends. It is a game that will keep you entertained for hours with its stunning graphics, voice acting, story, gameplay, and features. Download it now and join the adventure!
-
FAQs
-
Here are some frequently asked questions about Dragon Ball Legends:
-
-
How do I get more crystals? Crystals are the premium currency of the game that you can use to summon new characters or buy items. You can get more crystals by completing missions, events, achievements, story mode chapters, PvP matches, login bonuses, or buying them with real money.
-
How do I get more z power? Z power is the item that you need to limit break your characters and increase their stats and stars. You can get more z power by summoning new characters or duplicates of existing characters, completing missions, events, co-op mode matches, exchange shops, or buying them with real money.
-
How do I get more souls? Souls are the items that you need to upgrade your characters' panels and boost their stats. You can get more souls by completing missions, events, soul boost stages, exchange shops, or buying them with real money.
-
How do I get more equipment? Equipment are the items that you can equip to your characters to enhance their performance. You can get more equipment by completing missions, events, equipment stages, PvP mode matches, co-op mode matches, exchange shops, or buying them with real money.
-
How do I get more characters? Characters are the main attraction of the game that you can summon and fight with. You can get more characters by using crystals to summon them from various banners, using tickets to summon them from special banners, completing missions, events, story mode chapters, co-op mode matches, exchange shops, or buying them with real money.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Bloody Vampire Season 2 A Novel by Zanoor Writes - PDF Download or Online Reading.md b/spaces/1phancelerku/anime-remove-background/Bloody Vampire Season 2 A Novel by Zanoor Writes - PDF Download or Online Reading.md
deleted file mode 100644
index 91cc883184ff7113c6409d5f2ea83cb4dda9c91a..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Bloody Vampire Season 2 A Novel by Zanoor Writes - PDF Download or Online Reading.md
+++ /dev/null
@@ -1,139 +0,0 @@
-
-
Bloody Vampire Novel Season 2: A Review of the Horror Romance Series by Zanoor Writes
-
If you're looking for a thrilling and passionate read that will keep you on the edge of your seat, you might want to check out Bloody Vampire Novel Season 2 by Zanoor Writes. This novel series is a sequel to the popular Bloody Vampire Novel Season 1, which introduced us to a world where vampires and humans coexist in a fragile balance.
-
In this article, we'll give you a brief overview of what Bloody Vampire Novel Season 2 is about, who is Zanoor Writes, and why you should read it. We'll also show you how to download Bloody Vampire Novel Season 2 PDF for free from various sources, as well as some tips and tricks for reading PDF files on different devices. Finally, we'll give you a sneak peek of what to expect from Bloody Vampire Novel Season 3, which is currently in progress.
-
What is Bloody Vampire Novel Season 2 About?
-
Bloody Vampire Novel Season 2 is a horror romance novel series written by Zanoor Writes, a Pakistani writer who specializes in vampire-based books. The novel series consists of 144 pages and was published online on Urdu Novels Hub in July 2021.
-
The novel series follows the story of Zara Khan, a young woman who falls in love with a mysterious vampire named Zain Ali. Their relationship is complicated by their different backgrounds, beliefs, and enemies. Zara has to deal with her family's disapproval, her ex-boyfriend's jealousy, and her own fears and doubts. Zain has to protect Zara from his rival vampires, his dark past, and his inner demons.
-
bloody vampire season 2 zanoor writes pdf
-download bloody vampire season 2 urdu novel
-bloody vampire novel season 2 free pdf online
-bloody vampire season 2 by zanoor writes download
-urdu novels hub bloody vampire season 2 pdf
-bloody vampire novel season 2 complete pdf
-bloody vampire season 2 romantic novel pdf
-bloody vampire novel season 2 drive link download
-bloody vampire season 2 urdu novalists pdf
-bloody vampire novel season 2 category most romantic
-zanoor writes bloody vampire season 2 pdf file
-bloody vampire novel season 2 status complete
-bloody vampire season 2 source drive free link
-read online bloody vampire novel season 2 pdf
-bloody vampire season 2 urdu pdf novel by zanoor writes
-zanoor writes novels bloody vampire season 2 pdf
-bloody vampire novel season 2 social issues based
-download in pdf file bloody vampire season 2 novel
-bloody vampire season 2 latest urdu novel pdf
-urdu novels hub zanoor writes bloody vampire season 2
-urdu novalists zanoor writes bloody vampire season 2 pdf
-download for free bloody vampire novel season 2 pdf
-save it for later bloody vampire season 2 pdf download
-share with friends bloody vampire novel season 2 pdf
-comment your views on bloody vampire season 2 pdf novel
-subscribe to our newsletter for more bloody vampire season 2 pdf updates
-like our facebook page for more bloody vampire novel season 2 pdf news
-follow us on twitter for more bloody vampire season 2 pdf alerts
-join our telegram channel for more bloody vampire novel season 2 pdf links
-watch our youtube video for more bloody vampire season 2 pdf reviews
-
The novel series combines elements of horror, romance, drama, and mystery. It explores themes such as love, trust, loyalty, betrayal, sacrifice, revenge, and redemption. It also features a unique vampire lore that blends Eastern and Western mythology.
-
Who is Zanoor Writes?
-
Zanoor Writes is the pen name of Zainab Noor, a 25-year-old writer from Lahore, Pakistan. She started writing at the age of 15 and has published several novels and short stories online. She is best known for her vampire-based books, such as Bloody Vampire Novel Season 1 and 2, The Vampire King, and The Vampire's Bride. She is also a fan of Twilight, The Vampire Diaries, and Dracula.
-
Zanoor Writes has a loyal fan base who appreciate her creative and captivating stories. She interacts with her readers through her social media accounts, such as Facebook, Instagram, and Twitter. She also has a website where she posts updates, news, and previews of her upcoming works.
-
Why You Should Read Bloody Vampire Novel Season 2?
-
Bloody Vampire Novel Season 2 is a novel series that will appeal to anyone who loves horror and romance. Here are some reasons why you should read it:
-
-
It has a suspenseful and intriguing storyline that will keep you hooked from the first page to the last. You'll never know what will happen next as Zara and Zain face various challenges and dangers in their relationship.
-
It has a captivating and passionate romance that will make you swoon and root for the main couple. You'll feel their emotions and chemistry as they overcome their differences and grow closer together.
-
It has a unique and original vampire lore that will fascinate you with its details and diversity. You'll learn about the history, culture, and rules of the vampire world, as well as the different types and powers of vampires.
-
It has a talented and engaging author who knows how to write well and entertain her readers. You'll enjoy her style, language, and humor as she tells the story in a captivating way.
-
-
How to Download Bloody Vampire Novel Season 2 PDF for Free?
-
If you want to read Bloody Vampire Novel Season 2 in PDF format, you have several options to download it for free. Here are some steps you can follow:
-
-
Go to Urdu Novels Hub, the official website where Zanoor Writes publishes her novel series online. You can find the link to the website here: .
-
On the website, look for the category "Zanoor Writes" on the menu bar. Click on it and you'll see a list of her novel series.
-
Find Bloody Vampire Novel Season 2 on the list and click on it. You'll be directed to a page where you can read the novel series online or download it in PDF format.
-
To download it in PDF format, scroll down to the bottom of the page where you'll see a button that says "Download PDF". Click on it and you'll be asked to enter your email address.
-
Enter your email address and click on "Submit". You'll receive an email with a link to download the PDF file of Bloody Vampire Novel Season 2.
-
Click on the link in the email and you'll be able to download the PDF file to your device.
-
Pros and Cons of Downloading PDF Files
-
Downloading PDF files of Bloody Vampire Novel Season 2 has its pros and cons. Here are some of them:
-
-
-
Pros
-
Cons
-
-
-
You can read the novel series offline without an internet connection.
-
You need to have a PDF reader software or app installed on your device.
-
-
-
You can save the novel series on your device or cloud storage for future reference.
-
You might encounter some formatting or compatibility issues depending on your device or PDF reader.
-
-
-
You can print the novel series if you prefer reading on paper.
-
You might use up a lot of ink and paper, which can be costly and wasteful.
-
-
-
You can share the novel series with your friends or family who are also interested in reading it.
-
You might violate the author's rights or terms of service if you distribute the novel series without permission.
-
-
-
Tips and Tricks for Reading PDF Files on Different Devices
-
If you want to read PDF files of Bloody Vampire Novel Season 2 on different devices, such as laptops, tablets, or smartphones, here are some tips and tricks you can use to optimize your reading experience:
-
-
Choose a PDF reader software or app that suits your preferences and needs. There are many options available, such as Adobe Acrobat Reader, Foxit Reader, Google PDF Viewer, etc. Some of them are free, while others require a subscription or a one-time payment. Compare their features, functions, and reviews before downloading them.
-
Adjust the settings of your PDF reader to enhance the readability and accessibility of the novel series. For example, you can change the font size, color, brightness, contrast, orientation, zoom level, etc. You can also enable features such as text-to-speech, night mode, bookmarks, annotations, etc.
-
Use keyboard shortcuts or gestures to navigate and control the PDF reader. For example, you can use the arrow keys or swipe left and right to move between pages. You can also use the spacebar or tap to scroll up and down. You can also use the Ctrl+F or search icon to find specific words or phrases in the novel series.
-
Be mindful of your battery life and data usage when reading PDF files. If you're reading online, make sure you have a stable and secure internet connection. If you're reading offline, make sure you have enough battery power or plug in your charger. You can also turn on airplane mode or disable notifications to avoid interruptions.
-
Take breaks and rest your eyes when reading PDF files for a long time. Reading on a screen can cause eye strain, fatigue, headache, or blurred vision. To prevent this, you should follow the 20-20-20 rule: every 20 minutes, look at something 20 feet away for 20 seconds. You should also blink often and drink water to stay hydrated.
-
-
Alternative Ways to Read Bloody Vampire Novel Season 2 Online or Offline
-
If you don't want to read PDF files of Bloody Vampire Novel Season 2, you have other options to read the novel series online or offline. Here are some of them:
-
-
You can read the novel series online on Urdu Novels Hub's website using your browser. You don't need to download anything or enter your email address. You just need to have an internet connection and a compatible browser.
-
You can read the novel series online on other platforms that host Urdu novels, such as Urdu Novels Online, Urdu Novels Collection, Urdu Novels Library, etc. However, you should be careful about the quality and authenticity of these platforms. Some of them might have incomplete, inaccurate, or pirated versions of the novel series.
-
You can read the novel series offline on an e-reader device, such as Kindle, Nook, Kobo, etc. You just need to download the novel series in a compatible format (such as EPUB or MOBI) from Urdu Novels Hub's website or other sources. Then, you need to transfer the file to your e-reader device using a USB cable or Wi-Fi.
-
You can read the novel series offline on a printed book. You can either buy the book from a bookstore or online retailer that sells Urdu novels (such as Amazon), or print it yourself using a printer and paper. However, this option might be more expensive and less convenient than reading online or on an e-reader device.
-
What to Expect from Bloody Vampire Novel Season 3?
-
If you've finished reading Bloody Vampire Novel Season 2, you might be wondering what will happen next in the novel series. Well, you're not alone. Many fans are eagerly waiting for Bloody Vampire Novel Season 3, which is expected to be the final installment of the trilogy.
-
While the author has not revealed much about the plot of Bloody Vampire Novel Season 3, she has dropped some hints and teasers on her social media accounts and website. Based on these clues and fan theories, here are some things you can expect from Bloody Vampire Novel Season 3:
-
-
A shocking revelation about Zain's past that will change everything.
-
A new enemy that will threaten Zara and Zain's relationship and lives.
-
A major twist that will test Zara and Zain's love and loyalty.
-
A heartbreaking sacrifice that will determine the fate of the vampire world.
-
A satisfying and surprising ending that will wrap up the story and answer all the questions.
-
-
When Will Bloody Vampire Novel Season 3 Be Released?
-
The release date of Bloody Vampire Novel Season 3 is not yet confirmed by the author. However, based on her previous schedule and updates, we can estimate that it will be released sometime in late 2023 or early 2024.
-
The author usually publishes one chapter per week on Urdu Novels Hub's website, which means that it takes about three to four months to complete one season. Since Bloody Vampire Novel Season 2 was completed in July 2021, we can assume that the author is currently working on Bloody Vampire Novel Season 3 and will publish it soon.
-
Of course, this is just a rough estimation and the actual release date might vary depending on the author's availability, progress, and other factors. The best way to know the exact release date is to follow the author's updates and announcements.
-
How to Stay Updated on Bloody Vampire Novel Season 3 News and Updates?
-
If you want to stay updated on Bloody Vampire Novel Season 3 news and updates, you have several ways to follow the author and her novel series. Here are some of them:
-
-
Follow Zanoor Writes on her social media accounts, such as Facebook, Instagram, and Twitter. She often posts previews, snippets, covers, and other information about her novel series on these platforms. You can also interact with her and other fans through comments, messages, and likes.
-
Visit Zanoor Writes' website, where she posts news, updates, blogs, and other content related to her novel series. You can also subscribe to her newsletter to receive email notifications whenever she publishes a new chapter or post.
-
Bookmark Urdu Novels Hub's website, where Zanoor Writes publishes her novel series online. You can also turn on notifications to get alerts whenever a new chapter is available.
-
Join online communities and forums dedicated to Zanoor Writes and her novel series, such as Reddit, Quora, Goodreads, etc. You can discuss your opinions, theories, questions, and feedback with other fans and readers.
-
Conclusion
-
Bloody Vampire Novel Season 2 is a novel series that you don't want to miss if you're a fan of horror and romance. It has a captivating plot, a passionate romance, a unique vampire lore, and a talented author. You can download it in PDF format for free from various sources, or read it online or offline on different platforms. You can also look forward to Bloody Vampire Novel Season 3, which is coming soon.
-
So, what are you waiting for? Download Bloody Vampire Novel Season 2 PDF now and enjoy the thrilling and passionate story of Zara and Zain. You won't regret it!
-
FAQs About Bloody Vampire Novel Season 2 PDF Download
-
Here are some frequently asked questions that readers might have about Bloody Vampire Novel Season 2 PDF download:
-
-
Is Bloody Vampire Novel Season 2 PDF download safe and legal?
-
Yes, downloading Bloody Vampire Novel Season 2 PDF from Urdu Novels Hub's website is safe and legal. The website is the official and authorized source of Zanoor Writes' novel series. The website uses encryption and security measures to protect your data and privacy. The website also respects the author's rights and terms of service, and does not distribute the novel series without permission.
-
How long does it take to download Bloody Vampire Novel Season 2 PDF?
-
The time it takes to download Bloody Vampire Novel Season 2 PDF depends on your internet speed and the size of the file. The file size of Bloody Vampire Novel Season 2 PDF is about 1.5 MB, which means that it should take less than a minute to download on a fast internet connection. However, if your internet connection is slow or unstable, it might take longer to download the file.
-
Can I read Bloody Vampire Novel Season 2 PDF on my phone?
-
Yes, you can read Bloody Vampire Novel Season 2 PDF on your phone, as long as you have a PDF reader app installed on your phone. There are many PDF reader apps available for both Android and iOS devices, such as Adobe Acrobat Reader, Foxit Reader, Google PDF Viewer, etc. You can download them from the Google Play Store or the App Store for free or for a fee.
-
Can I convert Bloody Vampire Novel Season 2 PDF to other formats?
-
Yes, you can convert Bloody Vampire Novel Season 2 PDF to other formats, such as EPUB or MOBI, if you prefer reading on an e-reader device or app. There are many online tools and software that can help you convert PDF files to other formats, such as Zamzar, Online-Convert, Calibre, etc. However, you should be careful about the quality and accuracy of the conversion process. Some tools or software might not preserve the original formatting or layout of the novel series.
-
Can I request a hard copy of Bloody Vampire Novel Season 2?
-
Yes, you can request a hard copy of Bloody Vampire Novel Season 2 from Zanoor Writes directly. You can contact her through her social media accounts or email address and ask her if she can provide you with a printed version of the novel series. However, you might have to pay for the printing and shipping costs, which can vary depending on your location and preferences.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/FIFA 19 APK Download Latest Version 2023 for Android.md b/spaces/1phancelerku/anime-remove-background/FIFA 19 APK Download Latest Version 2023 for Android.md
deleted file mode 100644
index d6bb65ffb1e943fb165b70a6176c326f6697773a..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/FIFA 19 APK Download Latest Version 2023 for Android.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
FIFA APK 19: How to Download and Install the Best Soccer Game on Android
-
Introduction
-
If you are a fan of soccer, you probably have heard of FIFA, the most popular and realistic soccer game series in the world. FIFA is developed by EA Sports, a leading company in the gaming industry. FIFA has been releasing new versions of its game every year, with improved graphics, gameplay, and features.
One of the latest versions of FIFA is FIFA 19, which was released in 2018 for various platforms, including PC, PlayStation, Xbox, Nintendo Switch, and mobile devices. However, if you want to play FIFA 19 on your Android phone or tablet, you might encounter some difficulties. That's because the official version of FIFA 19 for Android is not available on the Google Play Store. Instead, you have to download and install an unofficial version of FIFA 19, which is called FIFA APK 19.
-
In this article, we will show you how to download and install FIFA APK 19 on your Android device, and how to enjoy the best soccer game on your mobile screen. We will also tell you why you should play FIFA APK 19, and what features and tips you can expect from this game.
-
How to download FIFA APK 19
-
Requirements for FIFA APK 19
-
Before you download FIFA APK 19, you need to make sure that your Android device meets the minimum requirements for this game. Here are the requirements:
-
-
Android version: 4.4 or higher
-
RAM: 2 GB or more
-
Storage space: At least 4 GB of free space
-
Internet connection: Required for online features
-
-
If your device meets these requirements, you can proceed to download FIFA APK 19. However, if your device does not meet these requirements, you might experience some issues with the game, such as lagging, crashing, or errors.
-
fifa 19 apk download android
-fifa 19 apk mod offline
-fifa 19 apk data obb
-fifa 19 apk obb download
-fifa 19 apk offline mode
-fifa 19 apk latest version
-fifa 19 apk and obb file
-fifa 19 apk free download full version
-fifa 19 apk unlimited money
-fifa 19 apk highly compressed
-fifa 19 apk mobile game
-fifa 19 apk android game
-fifa 19 apk full unlocked
-fifa 19 apk no verification
-fifa 19 apk update patch
-fifa 19 apk real faces
-fifa 19 apk best graphics
-fifa 19 apk original game
-fifa 19 apk online multiplayer
-fifa 19 apk ultimate team
-fifa 19 apk career mode
-fifa 19 apk champions league
-fifa 19 apk world cup mode
-fifa 19 apk commentary download
-fifa 19 apk english language
-fifa 19 apk new kits and transfers
-fifa 19 apk new features and gameplay
-fifa 19 apk new stadiums and teams
-fifa 19 apk new skills and tricks
-fifa 19 apk new celebrations and animations
-fifa 19 apk requirements and compatibility
-fifa 19 apk size and installation guide
-fifa 19 apk review and rating
-fifa 19 apk download link and password
-fifa 19 apk how to play and tips
-fifa 19 apk cheats and hacks
-fifa 19 apk mod menu and coins generator
-fifa 19 apk comparison and difference with other versions
-fifa 19 apk problems and solutions
-fifa 19 apk questions and answers
-
Steps to download FIFA APK 19
-
To download FIFA APK 19, you need to follow these steps:
-
-
Go to a trusted website that provides the download link for FIFA APK 19. For example, you can use [this website](^1^), [this website](^2^), or [this website](^3^).
-
On the website, look for the download button or link for FIFA APK 19. You might have to scroll down or click on some tabs to find it.
-
Click on the download button or link. You will be redirected to another page where you have to wait for a few seconds before the download starts.
-
Once the download starts, you will see a pop-up window asking you to confirm the download. Tap on OK or Download.
-
Wait for the download to finish. You will need to download two files: an APK file and an OBB file. The APK file is about 30 MB in size, while the OBB file is about 1 GB in size.
-
After downloading both files, locate them in your device's file manager. They are usually stored in the Downloads folder.
-
-
Congratulations! You have successfully downloaded FIFA APK 19 on your Android device. Now, you need to install it.
-
How to install FIFA APK 19
-
How to install the APK file
-
To install the APK file of FIFA APK 19, you need to follow these steps:
- Before you install the APK file, you need to enable the installation of unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
- Next, go to your file manager and tap on the APK file of FIFA APK 19. You will see a pop-up window asking you to install the app. Tap on Install.
-
- Wait for the installation to finish. You will see a message saying that the app has been installed. Tap on Open or Done.
-
- You have successfully installed the APK file of FIFA APK 19. However, you are not done yet. You still need to install the OBB file.
-
How to install the OBB file
-
To install the OBB file of FIFA APK 19, you need to follow these steps:
-
-
Go to your file manager and locate the OBB file of FIFA APK 19. It is usually named as com.ea.game.fifa14_row.obb.
-
Long press on the OBB file and select Copy or Move.
-
Navigate to the folder Android > obb > com.ea.game.fifa14_row and paste the OBB file there. If you don't see this folder, you can create it manually.
-
Wait for the copying or moving process to finish. You have successfully installed the OBB file of FIFA APK 19.
-
-
Now, you are ready to play FIFA APK 19 on your Android device.
-
How to play FIFA APK 19
-
Features of FIFA APK 19
-
FIFA APK 19 is an amazing soccer game that offers you many features and modes to enjoy. Here are some of the features of FIFA APK 19:
-
-
Realistic graphics and animations that make you feel like you are watching a real soccer match.
-
Smooth and responsive controls that let you perform various actions, such as passing, shooting, dribbling, tackling, and more.
-
A variety of game modes, such as Career Mode, Tournament Mode, Manager Mode, Online Mode, and more.
-
A huge database of players, teams, leagues, and stadiums from around the world. You can choose your favorite team and players, or create your own custom team and players.
-
A dynamic commentary system that provides you with insightful and exciting commentary during the game.
-
An online feature that lets you play with or against other players from around the world. You can also join leagues and tournaments and compete for glory and rewards.
-
-
FIFA APK 19 is a game that will keep you entertained for hours with its amazing gameplay and features.
-
Tips and tricks for FIFA APK 19
-
If you want to improve your skills and performance in FIFA APK 19, here are some tips and tricks that you can use:
-
-
Practice your skills in the Training Mode. You can learn how to perform various actions, such as passing, shooting, dribbling, tackling, and more.
-
Adjust your settings according to your preference and device. You can change the difficulty level, camera angle, control scheme, sound effects, and more.
-
Use the right players for the right positions. Each player has different attributes, such as speed, strength, stamina, shooting, passing, dribbling, defending, and more. You should use the players that suit your playing style and strategy.
-
Use different tactics and formations depending on your opponent and situation. You can change your tactics and formations during the game by tapping on the menu button on the top right corner of the screen.
-
Use your coins wisely. You can earn coins by playing games, completing achievements, or watching ads. You can use coins to buy new players, upgrade your existing players, or unlock new items and features.
-
-
With these tips and tricks, you can become a master of FIFA APK 19 in no time.
-
Conclusion
-
Summary of the article
-
In this article, we have shown you how to download and install FIFA APK 19 on your Android device. We have also told you why you should play FIFA APK 19, and what features and tips you can expect from this game. FIFA APK 19 is an amazing soccer game that will give you hours of fun and excitement. If you love soccer, you should definitely try FIFA APK 19 on your Android device.
-
FAQs
-
Here are some frequently asked questions about FIFA APK 19:
-
-
Is FIFA APK 19 safe to download and install?
-
- Yes, FIFA APK 19 is safe to download and install, as long as you use a trusted website that provides the download link. However, you should always be careful when downloading and installing any app from unknown sources, as they might contain malware or viruses. You should also scan your device with an antivirus app after installing FIFA APK 19.
-
Is FIFA APK 19 legal to play?
-
- FIFA APK 19 is not an official version of FIFA 19, and it is not authorized by EA Sports or any other entity. Therefore, playing FIFA APK 19 might be considered illegal in some countries or regions. You should check your local laws and regulations before playing FIFA APK 19. You should also be aware that playing FIFA APK 19 might violate the terms and conditions of EA Sports or Google Play Store, and you might face some consequences or penalties.
-
Is FIFA APK 19 compatible with all Android devices?
-
- FIFA APK 19 is compatible with most Android devices that meet the minimum requirements for this game. However, some devices might not be able to run FIFA APK 19 smoothly or properly, due to different hardware specifications or software versions. You should try FIFA APK 19 on your device and see if it works well for you.
-
How can I update FIFA APK 19?
-
- FIFA APK 19 does not have an automatic update feature, unlike the official version of FIFA 19. Therefore, if you want to update FIFA APK 19, you have to download and install the latest version of FIFA APK 19 from a trusted website. You might also have to delete the previous version of FIFA APK 19 from your device before installing the new one.
-
How can I contact the developer of FIFA APK 19?
-
- FIFA APK 19 is developed by an unknown developer or group of developers, who are not affiliated with EA Sports or any other entity. Therefore, there is no official way to contact the developer of FIFA APK 19. However, you might be able to find some information or feedback from other users of FIFA APK 19 on the website where you downloaded the game, or on some online forums or social media platforms.
-
-
I hope this article has helped you learn more about FIFA APK 19 and how to download and install it on your Android device. If you have any questions or comments, please feel free to leave them below. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stochastic_karras_ve/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/stochastic_karras_ve/__init__.py
deleted file mode 100644
index 38056beba33440ad094ed2819f14615d6e62d694..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stochastic_karras_ve/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# flake8: noqa
-from .pipeline_stochastic_karras_ve import KarrasVePipeline
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/docs/training_tips_ja.md b/spaces/AI-Hobbyist/Hoyo-RVC/docs/training_tips_ja.md
deleted file mode 100644
index c5b06f2fdaa603a690c51ee2b79daecc4305fbd5..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/docs/training_tips_ja.md
+++ /dev/null
@@ -1,64 +0,0 @@
-Explanation of and tips for RVC training
-===============================
-These tips explain how the training of your data is carried out.
-
-# Training flow
-The explanation follows the steps on the GUI's training tab.
-
-## step1
-Set the experiment name here.
-
-You can also set here whether the model should take the pitch guide (pitch) into account. If it does not, the model is lighter, but it is no longer well suited to singing.
-
-The data for each experiment is placed under `/logs/experiment-name/`.
-
-## step2a
-Audio is loaded and preprocessed.
-
-### load audio
-When you specify a folder containing audio, the audio files inside that folder are read automatically.
-For example, if you specify `C:Users\hoge\voices`, then `C:Users\hoge\voices\voice.mp3` is loaded, but `C:Users\hoge\voices\dir\voice.mp3` is not.
-
-Audio loading uses ffmpeg internally, so any extension supported by ffmpeg is read automatically.
-After ffmpeg converts the audio to int16, it is converted to float32 and normalized to between -1 and 1.
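-
-As a rough sketch of this step (illustrative only, not the actual RVC preprocessing code; the file name and the `soundfile`/`numpy` packages are assumptions), loading a mono file and normalizing it into that range might look like:
-
-```python
-import numpy as np
-import soundfile as sf
-
-# Read the audio; soundfile returns floating-point samples plus the sample rate.
-wav, sr = sf.read("voice.wav")  # assumes a mono file
-wav = wav.astype(np.float32)
-
-# Scale the peak amplitude into the -1 ~ 1 range, as described above.
-peak = np.abs(wav).max()
-if peak > 0:
-    wav = wav / peak
-```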
-
-### denoising
-The audio is smoothed using scipy's filtfilt.
-
-### Splitting the audio
-The input audio is first split at points where silence lasts longer than a certain length (max_sil_kept = 5 seconds?). After splitting on silence, the audio is cut every 4 seconds with a 0.3-second overlap. Each segment of at most 4 seconds has its volume normalized and is saved as a wav file to `/logs/experiment-name/0_gt_wavs`, then resampled to a 16k sampling rate and saved as a wav file to `/logs/experiment-name/1_16k_wavs`.
-
-## step2b
-### Extracting the pitch
-Pitch information (how high or low the sound is) is extracted from the wav files. The pitch information (= f0) is extracted with the methods built into parselmouth and pyworld and saved to `/logs/experiment-name/2a_f0`. The pitch values are then log-transformed, converted to integers between 1 and 255, and saved to `/logs/experiment-name/2b-f0nsf`.
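-
-As a sketch of the idea (illustrative only; the f0 range and the 1-255 mapping below are assumptions, not the exact formula RVC uses), extracting f0 with pyworld could look like:
-
-```python
-import numpy as np
-import pyworld as pw
-import soundfile as sf
-
-x, sr = sf.read("voice.wav")   # assumes a mono file
-x = x.astype(np.float64)       # pyworld expects float64
-
-# Coarse f0 track, then refinement
-f0, t = pw.dio(x, sr, f0_floor=50.0, f0_ceil=1100.0, frame_period=10.0)
-f0 = pw.stonemask(x, f0, t, sr)
-
-# Illustrative log-scale quantization of voiced frames into integers 1-255
-coarse = np.zeros_like(f0, dtype=np.int64)
-voiced = f0 > 0
-coarse[voiced] = np.clip(
-    np.rint(254 * np.log(f0[voiced] / 50.0) / np.log(1100.0 / 50.0)) + 1, 1, 255
-).astype(np.int64)
-```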
-
-### Extracting feature_print
-The wav files are converted into embeddings ahead of time using HuBERT. The wav files saved in `/logs/experiment-name/1_16k_wavs` are read, converted by HuBERT into 256-dimensional features, and saved in npy format to `/logs/experiment-name/3_feature256`.
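-
-RVC loads its own HuBERT checkpoint for this step, but as a rough sketch of the idea using the Hugging Face `transformers` HuBERT base model (an assumption for illustration; that model outputs 768-dimensional features rather than the 256-dimensional ones RVC stores):
-
-```python
-import numpy as np
-import soundfile as sf
-import torch
-from transformers import HubertModel, Wav2Vec2FeatureExtractor
-
-wav, sr = sf.read("1_16k_wavs/example.wav")  # expected to already be 16 kHz, mono
-
-extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/hubert-base-ls960")
-model = HubertModel.from_pretrained("facebook/hubert-base-ls960").eval()
-
-inputs = extractor(wav, sampling_rate=sr, return_tensors="pt")
-with torch.no_grad():
-    feats = model(inputs.input_values).last_hidden_state  # (1, frames, 768)
-
-np.save("3_feature256/example.npy", feats.squeeze(0).numpy())
-```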
-
-## step3
-The model is trained.
-### Glossary for beginners
-In deep learning, the dataset is split and learning proceeds a little at a time. In one model update (step), batch_size items of data are taken out and the model makes predictions and corrects its errors. Doing this once over the whole dataset counts as one epoch.
-
-Therefore, the training time is (training time per step) x (number of items in the dataset ÷ batch size) x (number of epochs). In general, a larger batch size makes training more stable, and (training time per step ÷ batch size) becomes smaller, but more GPU memory is used. GPU RAM can be checked with the nvidia-smi command and the like. Training finishes in less time if you make the batch size as large as your machine allows.
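-
-For example (numbers purely illustrative): with 2,000 clips, a batch size of 8, about 0.5 seconds per step, and 200 epochs, training takes roughly 0.5 x (2,000 ÷ 8) x 200 = 25,000 seconds, or about 7 hours.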
-
-### Specifying a pretrained model
-RVC starts training not from scratch but from pretrained weights, so it can be trained on a small dataset.
-
-By default
-
-- if the pitch guide is taken into account, `RVC-location/pretrained/f0G40k.pth` and `RVC-location/pretrained/f0D40k.pth` are loaded.
-- if the pitch guide is not taken into account, `RVC-location/pretrained/G40k.pth` and `RVC-location/pretrained/D40k.pth` are loaded.
-
-During training, the model parameters are saved every save_every_epoch to `logs/experiment-name/G_{}.pth` and `logs/experiment-name/D_{}.pth`; by specifying these paths you can resume training, or start training from the weights of a model trained in a different experiment.
-
-### Training the index
-RVC saves the HuBERT features used during training, and at inference time it searches for features close to those training features and uses them for inference. To make this search fast, the index is trained in advance.
-The index is trained with faiss, an approximate nearest-neighbor search library. The features in `/logs/experiment-name/3_feature256` are read, and the index trained from them is saved as `/logs/experiment-name/add_XXX.index`.
-(Since the 20230428 update, total_fea.npy is read from the index and is no longer needed.)
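-
-A minimal sketch of building such an index with faiss (illustrative only; the file names and the IVF parameter below are made-up assumptions, and RVC's own scripts choose them based on the data size):
-
-```python
-import faiss
-import numpy as np
-
-# All HuBERT features from 3_feature256, stacked into an (N, 256) float32 matrix
-big_npy = np.load("features_all.npy").astype(np.float32)
-
-index = faiss.index_factory(big_npy.shape[1], "IVF256,Flat")
-index.train(big_npy)   # learn the coarse quantizer from the training features
-index.add(big_npy)     # store the features themselves
-faiss.write_index(index, "added.index")
-
-# At inference time, the k nearest stored features are retrieved for each query vector
-distances, ids = index.search(big_npy[:1], 8)
-```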
-
-### Button descriptions
-- Train model: after running through step2b, press this button to train the model.
-- Train feature index: after training the model, train the index.
-- One-click training: runs everything up to step2b, model training, and feature-index training in one go.
-
diff --git a/spaces/AIFILMS/StyleGANEX/models/stylegan2/op/conv2d_gradfix.py b/spaces/AIFILMS/StyleGANEX/models/stylegan2/op/conv2d_gradfix.py
deleted file mode 100644
index c4485b11991c5426939e87e6c363307eb9017438..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/models/stylegan2/op/conv2d_gradfix.py
+++ /dev/null
@@ -1,227 +0,0 @@
-import contextlib
-import warnings
-
-import torch
-from torch import autograd
-from torch.nn import functional as F
-
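-# The conv2d / conv_transpose2d wrappers below mirror torch.nn.functional, but on
-# CUDA with cuDNN under PyTorch 1.7/1.8 they route through a custom autograd.Function
-# built (and cached) by conv2d_gradfix. Its hand-written backward lets the
-# no_weight_gradients() context manager skip the weight-gradient computation entirely
-# while still returning input gradients; in all other cases the standard ops are used.
-# The module-level `enabled` flag below turns the custom path on or off.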
-enabled = True
-weight_gradients_disabled = False
-
-
-@contextlib.contextmanager
-def no_weight_gradients():
- global weight_gradients_disabled
-
- old = weight_gradients_disabled
- weight_gradients_disabled = True
- yield
- weight_gradients_disabled = old
-
-
-def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):
- if could_use_op(input):
- return conv2d_gradfix(
- transpose=False,
- weight_shape=weight.shape,
- stride=stride,
- padding=padding,
- output_padding=0,
- dilation=dilation,
- groups=groups,
- ).apply(input, weight, bias)
-
- return F.conv2d(
- input=input,
- weight=weight,
- bias=bias,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=groups,
- )
-
-
-def conv_transpose2d(
- input,
- weight,
- bias=None,
- stride=1,
- padding=0,
- output_padding=0,
- groups=1,
- dilation=1,
-):
- if could_use_op(input):
- return conv2d_gradfix(
- transpose=True,
- weight_shape=weight.shape,
- stride=stride,
- padding=padding,
- output_padding=output_padding,
- groups=groups,
- dilation=dilation,
- ).apply(input, weight, bias)
-
- return F.conv_transpose2d(
- input=input,
- weight=weight,
- bias=bias,
- stride=stride,
- padding=padding,
- output_padding=output_padding,
- dilation=dilation,
- groups=groups,
- )
-
-
-def could_use_op(input):
- if (not enabled) or (not torch.backends.cudnn.enabled):
- return False
-
- if input.device.type != "cuda":
- return False
-
- if any(torch.__version__.startswith(x) for x in ["1.7.", "1.8."]):
- return True
-
- warnings.warn(
- f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()."
- )
-
- return False
-
-
-def ensure_tuple(xs, ndim):
- xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim
-
- return xs
-
-
-conv2d_gradfix_cache = dict()
-
-
-def conv2d_gradfix(
- transpose, weight_shape, stride, padding, output_padding, dilation, groups
-):
- ndim = 2
- weight_shape = tuple(weight_shape)
- stride = ensure_tuple(stride, ndim)
- padding = ensure_tuple(padding, ndim)
- output_padding = ensure_tuple(output_padding, ndim)
- dilation = ensure_tuple(dilation, ndim)
-
- key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups)
- if key in conv2d_gradfix_cache:
- return conv2d_gradfix_cache[key]
-
- common_kwargs = dict(
- stride=stride, padding=padding, dilation=dilation, groups=groups
- )
-
- def calc_output_padding(input_shape, output_shape):
- if transpose:
- return [0, 0]
-
- return [
- input_shape[i + 2]
- - (output_shape[i + 2] - 1) * stride[i]
- - (1 - 2 * padding[i])
- - dilation[i] * (weight_shape[i + 2] - 1)
- for i in range(ndim)
- ]
-
- class Conv2d(autograd.Function):
- @staticmethod
- def forward(ctx, input, weight, bias):
- if not transpose:
- out = F.conv2d(input=input, weight=weight, bias=bias, **common_kwargs)
-
- else:
- out = F.conv_transpose2d(
- input=input,
- weight=weight,
- bias=bias,
- output_padding=output_padding,
- **common_kwargs,
- )
-
- ctx.save_for_backward(input, weight)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- input, weight = ctx.saved_tensors
- grad_input, grad_weight, grad_bias = None, None, None
-
- if ctx.needs_input_grad[0]:
- p = calc_output_padding(
- input_shape=input.shape, output_shape=grad_output.shape
- )
- grad_input = conv2d_gradfix(
- transpose=(not transpose),
- weight_shape=weight_shape,
- output_padding=p,
- **common_kwargs,
- ).apply(grad_output, weight, None)
-
- if ctx.needs_input_grad[1] and not weight_gradients_disabled:
- grad_weight = Conv2dGradWeight.apply(grad_output, input)
-
- if ctx.needs_input_grad[2]:
- grad_bias = grad_output.sum((0, 2, 3))
-
- return grad_input, grad_weight, grad_bias
-
- class Conv2dGradWeight(autograd.Function):
- @staticmethod
- def forward(ctx, grad_output, input):
- op = torch._C._jit_get_operation(
- "aten::cudnn_convolution_backward_weight"
- if not transpose
- else "aten::cudnn_convolution_transpose_backward_weight"
- )
- flags = [
- torch.backends.cudnn.benchmark,
- torch.backends.cudnn.deterministic,
- torch.backends.cudnn.allow_tf32,
- ]
- grad_weight = op(
- weight_shape,
- grad_output,
- input,
- padding,
- stride,
- dilation,
- groups,
- *flags,
- )
- ctx.save_for_backward(grad_output, input)
-
- return grad_weight
-
- @staticmethod
- def backward(ctx, grad_grad_weight):
- grad_output, input = ctx.saved_tensors
- grad_grad_output, grad_grad_input = None, None
-
- if ctx.needs_input_grad[0]:
- grad_grad_output = Conv2d.apply(input, grad_grad_weight, None)
-
- if ctx.needs_input_grad[1]:
- p = calc_output_padding(
- input_shape=input.shape, output_shape=grad_output.shape
- )
- grad_grad_input = conv2d_gradfix(
- transpose=(not transpose),
- weight_shape=weight_shape,
- output_padding=p,
- **common_kwargs,
- ).apply(grad_output, grad_grad_weight, None)
-
- return grad_grad_output, grad_grad_input
-
- conv2d_gradfix_cache[key] = Conv2d
-
- return Conv2d
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/pe.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/pe.py
deleted file mode 100644
index 3880c80d0820c36e044c00bd38a07fd3cce73323..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/pe.py
+++ /dev/null
@@ -1,155 +0,0 @@
-import matplotlib
-matplotlib.use('Agg')
-
-import torch
-import numpy as np
-import os
-
-from tasks.base_task import BaseDataset
-from tasks.tts.fs2 import FastSpeech2Task
-from modules.fastspeech.pe import PitchExtractor
-import utils
-from utils.indexed_datasets import IndexedDataset
-from utils.hparams import hparams
-from utils.plot import f0_to_figure
-from utils.pitch_utils import norm_interp_f0, denorm_f0
-
-
-class PeDataset(BaseDataset):
- def __init__(self, prefix, shuffle=False):
- super().__init__(shuffle)
- self.data_dir = hparams['binary_data_dir']
- self.prefix = prefix
- self.hparams = hparams
- self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy')
- self.indexed_ds = None
-
- # pitch stats
- f0_stats_fn = f'{self.data_dir}/train_f0s_mean_std.npy'
- if os.path.exists(f0_stats_fn):
- hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = np.load(f0_stats_fn)
- hparams['f0_mean'] = float(hparams['f0_mean'])
- hparams['f0_std'] = float(hparams['f0_std'])
- else:
- hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = None, None
-
- if prefix == 'test':
- if hparams['num_test_samples'] > 0:
- self.avail_idxs = list(range(hparams['num_test_samples'])) + hparams['test_ids']
- self.sizes = [self.sizes[i] for i in self.avail_idxs]
-
- def _get_item(self, index):
- if hasattr(self, 'avail_idxs') and self.avail_idxs is not None:
- index = self.avail_idxs[index]
- if self.indexed_ds is None:
- self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
- return self.indexed_ds[index]
-
- def __getitem__(self, index):
- hparams = self.hparams
- item = self._get_item(index)
- max_frames = hparams['max_frames']
- spec = torch.Tensor(item['mel'])[:max_frames]
- # mel2ph = torch.LongTensor(item['mel2ph'])[:max_frames] if 'mel2ph' in item else None
- f0, uv = norm_interp_f0(item["f0"][:max_frames], hparams)
- pitch = torch.LongTensor(item.get("pitch"))[:max_frames]
- # print(item.keys(), item['mel'].shape, spec.shape)
- sample = {
- "id": index,
- "item_name": item['item_name'],
- "text": item['txt'],
- "mel": spec,
- "pitch": pitch,
- "f0": f0,
- "uv": uv,
- # "mel2ph": mel2ph,
- # "mel_nonpadding": spec.abs().sum(-1) > 0,
- }
- return sample
-
- def collater(self, samples):
- if len(samples) == 0:
- return {}
- id = torch.LongTensor([s['id'] for s in samples])
- item_names = [s['item_name'] for s in samples]
- text = [s['text'] for s in samples]
- f0 = utils.collate_1d([s['f0'] for s in samples], 0.0)
- pitch = utils.collate_1d([s['pitch'] for s in samples])
- uv = utils.collate_1d([s['uv'] for s in samples])
- mels = utils.collate_2d([s['mel'] for s in samples], 0.0)
- mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples])
- # mel2ph = utils.collate_1d([s['mel2ph'] for s in samples], 0.0) \
- # if samples[0]['mel2ph'] is not None else None
- # mel_nonpaddings = utils.collate_1d([s['mel_nonpadding'].float() for s in samples], 0.0)
-
- batch = {
- 'id': id,
- 'item_name': item_names,
- 'nsamples': len(samples),
- 'text': text,
- 'mels': mels,
- 'mel_lengths': mel_lengths,
- 'pitch': pitch,
- # 'mel2ph': mel2ph,
- # 'mel_nonpaddings': mel_nonpaddings,
- 'f0': f0,
- 'uv': uv,
- }
- return batch
-
-
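-# PitchExtractionTask trains a standalone PitchExtractor: the model takes mel
-# spectrograms ("mels") from PeDataset and predicts frame-level f0, supervised by
-# the ground-truth f0/uv targets with the loss masked to non-padded frames.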
-class PitchExtractionTask(FastSpeech2Task):
- def __init__(self):
- super().__init__()
- self.dataset_cls = PeDataset
-
- def build_tts_model(self):
- self.model = PitchExtractor(conv_layers=hparams['pitch_extractor_conv_layers'])
-
- # def build_scheduler(self, optimizer):
- # return torch.optim.lr_scheduler.StepLR(optimizer, hparams['decay_steps'], gamma=0.5)
- def _training_step(self, sample, batch_idx, _):
- loss_output = self.run_model(self.model, sample)
- total_loss = sum([v for v in loss_output.values() if isinstance(v, torch.Tensor) and v.requires_grad])
- loss_output['batch_size'] = sample['mels'].size()[0]
- return total_loss, loss_output
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- outputs['losses'] = {}
- outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=True)
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- outputs = utils.tensors_to_scalars(outputs)
- if batch_idx < hparams['num_valid_plots']:
- self.plot_pitch(batch_idx, model_out, sample)
- return outputs
-
- def run_model(self, model, sample, return_output=False, infer=False):
- f0 = sample['f0']
- uv = sample['uv']
- output = model(sample['mels'])
- losses = {}
- self.add_pitch_loss(output, sample, losses)
- if not return_output:
- return losses
- else:
- return losses, output
-
- def plot_pitch(self, batch_idx, model_out, sample):
- gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams)
- self.logger.experiment.add_figure(
- f'f0_{batch_idx}',
- f0_to_figure(gt_f0[0], None, model_out['f0_denorm_pred'][0]),
- self.global_step)
-
- def add_pitch_loss(self, output, sample, losses):
- # mel2ph = sample['mel2ph'] # [B, T_s]
- mel = sample['mels']
- f0 = sample['f0']
- uv = sample['uv']
- # nonpadding = (mel2ph != 0).float() if hparams['pitch_type'] == 'frame' \
- # else (sample['txt_tokens'] != 0).float()
- nonpadding = (mel.abs().sum(-1) > 0).float() # sample['mel_nonpaddings']
- # print(nonpadding[0][-8:], nonpadding.shape)
- self.add_f0_loss(output['pitch_pred'], f0, uv, losses, nonpadding=nonpadding)
\ No newline at end of file
diff --git a/spaces/AIWaves/Debate/app.py b/spaces/AIWaves/Debate/app.py
deleted file mode 100644
index dbf7748b8b9670c265624675a25803252aa92e4a..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/Debate/app.py
+++ /dev/null
@@ -1,365 +0,0 @@
-import sys
-sys.path.append("../../Gradio_Config")
-
-from gradio_base import UIHelper, WebUI
-import os
-from gradio_base import WebUI, UIHelper, PORT, HOST, Client
-from gradio_config import GradioConfig as gc
-from typing import List, Tuple, Any
-import gradio as gr
-import time
-
-
-class DebateUI(WebUI):
- FORMAT = "{}\n\n{}\nAffirmative viewpoint:{}\nNegative viewpoint:{}\n{}"
- AUDIENCE = "Audience"
- cache = {}
- all_agents_name = []
- receive_server = None
-
- @classmethod
- def extract(cls, content):
- topic = content.split("")[1].split("Affirmative viewpoint:")[0]
- positive = content.split("")[1].split("Affirmative viewpoint:")[1].split("negative viewpoint:")[0]
- negative = content.split("")[1].split("Affirmative viewpoint:")[1].split("negative viewpoint:")[1]
- return topic.strip(), positive.strip(), negative.strip()
-
- @classmethod
- def merge(cls, theme, positive, negative, origin_content) -> str:
- return cls.FORMAT.format(
- origin_content.split("")[0],
- theme, positive, negative,
- origin_content.split("")[-1]
- )
-
- @classmethod
- def convert2list4agentname(cls, sop):
- only_name = []
- agent_name = []
- roles_to_names = sop.roles_to_names
- for state_name,roles_names in roles_to_names.items():
- for role,name in roles_names.items():
- agent_name.append(f"{name}({role})")
- only_name.append(name)
- agent_name.append(cls.AUDIENCE)
- agent_name = list(set(agent_name))
- agent_name.sort()
- return agent_name, only_name
-
- def render_and_register_ui(self):
- gc.add_agent(self.cache["only_name"])
-
- def __init__(
- self,
- client_cmd: list,
- socket_host: str = HOST,
- socket_port: int = PORT,
- bufsize: int = 1024,
- ui_name: str = "DebateUI"
- ):
- super(DebateUI, self).__init__(client_cmd, socket_host, socket_port, bufsize, ui_name)
- self.first_recieve_from_client()
- self.data_history = list()
- self.caller = 0
-
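- # Streaming protocol (as handled in handle_message / start_button_after_click below):
- # state % 10 == 0 starts a new utterance, state % 10 == 1 appends the token to the
- # current bubble, and state % 10 == 2 opens a new bubble for a new node. In addition,
- # state 30 asks for user input, 98 waits for the next agent, and 99 ends the debate.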
- def handle_message(self, history:list,
- state, agent_name, token, node_name):
- if state % 10 == 0:
- self.data_history.append({agent_name: token})
- elif state % 10 == 1:
- # Same state: append the token to the current bubble.
- if len(self.data_history) == 0:
- self.data_history.append({agent_name:""})
- self.data_history[-1][agent_name] += token
- elif state % 10 == 2:
- # New state. Need to add new bubble.
- history.append([None, ""])
- self.data_history.clear()
- self.data_history.append({agent_name: token})
- else:
- assert False, "Invalid state."
- render_data = self.render_bubble(history, self.data_history, node_name, render_node_name= True or state % 10 == 2)
- return render_data
-
- def start_button_when_click(self, theme, positive, negative, choose, mode, api_key):
- """
- inputs=[self.text_theme, self.text_positive, self.text_negative, self.radio_choose],
- outputs=[self.chatbot, self.btn_send]
- """
- cosplay = None if choose == self.AUDIENCE else choose.split("(")[0]
- message = dict(theme=theme, positive=positive, negative=negative, cosplay=cosplay, mode=mode, api_key=api_key)
- self.send_start_cmd(message=message)
- return gr.Chatbot.update(
- visible=True
- ), gr.Button.update(visible=False)
-
- def start_button_after_click(self, history):
- """
- inputs=[self.chatbot],
- outputs=[self.chatbot, self.text_user, self.btn_send, self.btn_reset, self.btn_next]
- """
- if self.caller == 0:
- # not single mode
- self.data_history = list()
- self.caller = 0
- receive_server = self.receive_server
- while True:
- data_list: List = receive_server.send(None)
- for item in data_list:
- data = eval(item)
- assert isinstance(data, list)
- state, agent_name, token, node_name = data
- assert isinstance(state, int)
- if state == 30:
- # user input
- yield history,\
- gr.Textbox.update(visible=True, interactive=True), \
- gr.Button.update(visible=True, interactive=True),\
- gr.Button.update(visible=True, interactive=True),\
- gr.Button.update(visible=False)
- return
- elif state == 99:
- # finish
- yield history, gr.Textbox.update(visible=True, interactive=False, value="finish!"), \
- gr.Button.update(visible=True, interactive=False, value="finish!"), gr.Button.update(visible=True, interactive=True),\
- gr.Button.update(visible=False)
- elif state == 98:
- yield history, \
- gr.Textbox.update(visible=False, interactive=False), \
- gr.Button.update(visible=False, interactive=False),\
- gr.Button.update(visible=False, interactive=False),\
- gr.Button.update(visible=True, value=f"Next Agent: 🤖{agent_name} | Next Node: ⭕{node_name}")
- return
- else:
- history = self.handle_message(history, state, agent_name, token, node_name)
- yield history, \
- gr.Textbox.update(visible=False, interactive=False), \
- gr.Button.update(visible=False, interactive=False),\
- gr.Button.update(visible=False, interactive=False),\
- gr.Button.update(visible=False)
-
- def send_button_when_click(self, text_user, history:list):
- """
- inputs=[self.text_user, self.chatbot],
- outputs=[self.text_user, self.btn_send, self.chatbot]
- """
- history.append(
- [UIHelper.wrap_css(text_user, "User"), None]
- )
- # print(f"server: send {text_user} to client")
- self.send_message(""+text_user+self.SIGN["SPLIT"])
- return gr.Textbox.update(value="", visible=False),\
- gr.Button.update(visible=False), \
- history,\
- gr.Button.update(visible=False)
-
- def reset_button_when_click(self, history, text_positive, text_negative, text_theme, text_user, btn_send, btn_start, btn_reset):
- """
- self.chatbot,
- self.text_positive,
- self.text_negative,
- self.text_theme,
- self.text_user,
- self.btn_send,
- self.btn_start,
- self.btn_reset
- self.btn_next
- """
- self.caller = 0
- return None, \
- "", \
- "", \
- "", \
- "", \
- gr.Button.update(value="Restarting...", interactive=False, visible=True),\
- gr.Button.update(value="Restarting...", interactive=False, visible=True),\
- gr.Button.update(value="Restarting...", interactive=False, visible=True),\
- gr.Button.update(value="Restarting...", interactive=False, visible=False)
-
- def reset_button_after_click(self, history, text_positive, text_negative, text_theme, text_user, btn_send, btn_start, btn_reset):
- self.reset()
- self.first_recieve_from_client(reset_mode=True)
- return gr.Chatbot.update(value=None, visible=False),\
- gr.Textbox.update(value=f"{self.cache['positive']}", interactive=True, visible=True),\
- gr.Textbox.update(value=f"{self.cache['negative']}", interactive=True, visible=True),\
- gr.Textbox.update(value=f"{self.cache['theme']}", interactive=True, visible=True),\
- gr.Textbox.update(value=f"", interactive=True, visible=False),\
- gr.Button.update(interactive=True, visible=False, value="Send"),\
- gr.Button.update(interactive=True, visible=True, value="Start"),\
- gr.Button.update(interactive=False, visible=False, value="Restart"),\
- gr.Button.update(interactive=True, visible=False, value="Next Agent")
-
- def btn_next_when_click(self):
- yield gr.Button.update(visible=False)
- self.send_message("nothing")
- self.caller = 1 # will not clear self.data_history
- time.sleep(0.5)
- return
-
- def construct_ui(
- self,
- theme:str=None,
- positive:str=None,
- negative:str=None,
- agents_name:List=None,
- default_cos_play_id:int=None
- ):
- theme = self.cache["theme"] if theme is None else theme
- positive = self.cache["positive"] if positive is None else positive
- negative = self.cache["negative"] if negative is None else negative
- agents_name = self.cache["agents_name"] if agents_name is None else agents_name
- default_cos_play_id = self.cache["default_cos_play_id"] if default_cos_play_id is None else default_cos_play_id
-
- with gr.Blocks(css=gc.CSS) as demo:
- gr.Markdown("""# Agents""")
- gr.Markdown("""**Agents** is an open-source library/framework for building autonomous language agents.if you want to know more about **Agents**, please check our📄 Paper and📦 Github. Here is a demo of **Agents**.""")
- gr.Markdown("""If an error occurs or the queue is too long, please create your own demo by clicking Duplicate This Space in the upper right corner.""")
- with gr.Row():
- with gr.Column():
- self.text_api = gr.Textbox(
- value = self.cache["api_key"],
- placeholder="openai key",
- label="Please input valid openai key for gpt-3.5-turbo-16k."
- )
- self.radio_mode = gr.Radio(
- [Client.SINGLE_MODE],
- value=Client.SINGLE_MODE,
- interactive=True,
- label = Client.MODE_LABEL,
- info = Client.MODE_INFO
- )
- self.text_theme = gr.Textbox(
- label="Debate Topic:",
- value=theme,
- placeholder="Please input the Debate Topic"
- )
- self.text_positive = gr.Textbox(
- label="Affirmative viewpoint:",
- value=positive,
- placeholder="Please input the Affirmative viewpoint"
- )
- self.text_negative = gr.Textbox(
- label="Negative viewpoint:",
- value=negative,
- placeholder="Please input the Negative viewpoint"
- )
- self.radio_choose = gr.Radio(
- agents_name,
- value=agents_name[default_cos_play_id],
- label="User'agent",
- interactive=True
- )
- self.btn_start = gr.Button(
- value="run"
- )
- VISIBLE = False
- with gr.Column():
- self.chatbot = gr.Chatbot(
- height= 650,
- elem_id="chatbot1",
- label="Dialog",
- visible=VISIBLE
- )
- self.btn_next = gr.Button(
- value="Next Agent Start",
- visible=False
- )
- self.text_user = gr.Textbox(
- label="Input",
- placeholder="Input here",
- visible=VISIBLE
- )
- self.btn_send = gr.Button(
- value="Send",
- visible=VISIBLE
- )
- self.btn_reset = gr.Button(
- value="Restart",
- visible=VISIBLE
- )
-
- self.btn_start.click(
- fn=self.start_button_when_click,
- inputs=[self.text_theme, self.text_positive, self.text_negative, self.radio_choose, self.radio_mode, self.text_api],
- outputs=[self.chatbot, self.btn_start]
- ).then(
- fn=self.start_button_after_click,
- inputs=[self.chatbot],
- outputs=[self.chatbot, self.text_user, self.btn_send, self.btn_reset, self.btn_next]
- )
-
- self.btn_send.click(
- fn=self.send_button_when_click,
- inputs=[self.text_user, self.chatbot],
- outputs=[self.text_user, self.btn_send, self.chatbot, self.btn_reset]
- ).then(
- fn=self.start_button_after_click,
- inputs=[self.chatbot],
- outputs=[self.chatbot, self.text_user, self.btn_send, self.btn_reset, self.btn_next]
- )
-
- self.btn_reset.click(
- fn=self.reset_button_when_click,
- inputs=[
- self.chatbot,
- self.text_positive,
- self.text_negative,
- self.text_theme,
- self.text_user,
- self.btn_send,
- self.btn_start,
- self.btn_reset
- ],
- outputs=[
- self.chatbot,
- self.text_positive,
- self.text_negative,
- self.text_theme,
- self.text_user,
- self.btn_send,
- self.btn_start,
- self.btn_reset,
- self.btn_next
- ]
- ).then(
- fn=self.reset_button_after_click,
- inputs=[
- self.chatbot,
- self.text_positive,
- self.text_negative,
- self.text_theme,
- self.text_user,
- self.btn_send,
- self.btn_start,
- self.btn_reset
- ],
- outputs=[
- self.chatbot,
- self.text_positive,
- self.text_negative,
- self.text_theme,
- self.text_user,
- self.btn_send,
- self.btn_start,
- self.btn_reset,
- self.btn_next
- ]
- )
-
- self.btn_next.click(
- fn=self.btn_next_when_click,
- inputs=[],
- outputs=[self.btn_next]
- ).then(
- fn=self.start_button_after_click,
- inputs=[self.chatbot],
- outputs=[self.chatbot, self.text_user, self.btn_send, self.btn_reset, self.btn_next]
- )
-
- self.demo = demo
-
-
-if __name__ == '__main__':
- ui = DebateUI(client_cmd=["python","gradio_backend.py"])
- ui.construct_ui()
- ui.run()
diff --git a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/vit.py b/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/vit.py
deleted file mode 100644
index ea46b1be88b261b0dec04f3da0256f5f66f88a74..0000000000000000000000000000000000000000
--- a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/vit.py
+++ /dev/null
@@ -1,491 +0,0 @@
-import torch
-import torch.nn as nn
-import timm
-import types
-import math
-import torch.nn.functional as F
-
-
-class Slice(nn.Module):
- def __init__(self, start_index=1):
- super(Slice, self).__init__()
- self.start_index = start_index
-
- def forward(self, x):
- return x[:, self.start_index :]
-
-
-class AddReadout(nn.Module):
- def __init__(self, start_index=1):
- super(AddReadout, self).__init__()
- self.start_index = start_index
-
- def forward(self, x):
- if self.start_index == 2:
- readout = (x[:, 0] + x[:, 1]) / 2
- else:
- readout = x[:, 0]
- return x[:, self.start_index :] + readout.unsqueeze(1)
-
-
-class ProjectReadout(nn.Module):
- def __init__(self, in_features, start_index=1):
- super(ProjectReadout, self).__init__()
- self.start_index = start_index
-
- self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU())
-
- def forward(self, x):
- readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :])
- features = torch.cat((x[:, self.start_index :], readout), -1)
-
- return self.project(features)
-
-
-class Transpose(nn.Module):
- def __init__(self, dim0, dim1):
- super(Transpose, self).__init__()
- self.dim0 = dim0
- self.dim1 = dim1
-
- def forward(self, x):
- x = x.transpose(self.dim0, self.dim1)
- return x
-
-
-def forward_vit(pretrained, x):
- b, c, h, w = x.shape
-
- glob = pretrained.model.forward_flex(x)
-
- layer_1 = pretrained.activations["1"]
- layer_2 = pretrained.activations["2"]
- layer_3 = pretrained.activations["3"]
- layer_4 = pretrained.activations["4"]
-
- layer_1 = pretrained.act_postprocess1[0:2](layer_1)
- layer_2 = pretrained.act_postprocess2[0:2](layer_2)
- layer_3 = pretrained.act_postprocess3[0:2](layer_3)
- layer_4 = pretrained.act_postprocess4[0:2](layer_4)
-
- unflatten = nn.Sequential(
- nn.Unflatten(
- 2,
- torch.Size(
- [
- h // pretrained.model.patch_size[1],
- w // pretrained.model.patch_size[0],
- ]
- ),
- )
- )
-
- if layer_1.ndim == 3:
- layer_1 = unflatten(layer_1)
- if layer_2.ndim == 3:
- layer_2 = unflatten(layer_2)
- if layer_3.ndim == 3:
- layer_3 = unflatten(layer_3)
- if layer_4.ndim == 3:
- layer_4 = unflatten(layer_4)
-
- layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1)
- layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2)
- layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3)
- layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4)
-
- return layer_1, layer_2, layer_3, layer_4
-
-
-def _resize_pos_embed(self, posemb, gs_h, gs_w):
- posemb_tok, posemb_grid = (
- posemb[:, : self.start_index],
- posemb[0, self.start_index :],
- )
-
- gs_old = int(math.sqrt(len(posemb_grid)))
-
- posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2)
- posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear")
- posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1)
-
- posemb = torch.cat([posemb_tok, posemb_grid], dim=1)
-
- return posemb
-
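-# Illustrative example (hypothetical sizes): for a ViT pretrained at 384x384 with 16x16
-# patches, the positional grid is 24x24. A 512x384 input gives
-# gs_h, gs_w = 512 // 16, 384 // 16 = (32, 24), so _resize_pos_embed bilinearly
-# interpolates the 24x24 grid to 32x24 and re-attaches the class-token embedding(s) in front.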
-
-def forward_flex(self, x):
- b, c, h, w = x.shape
-
- pos_embed = self._resize_pos_embed(
- self.pos_embed, h // self.patch_size[1], w // self.patch_size[0]
- )
-
- B = x.shape[0]
-
- if hasattr(self.patch_embed, "backbone"):
- x = self.patch_embed.backbone(x)
- if isinstance(x, (list, tuple)):
- x = x[-1] # last feature if backbone outputs list/tuple of features
-
- x = self.patch_embed.proj(x).flatten(2).transpose(1, 2)
-
- if getattr(self, "dist_token", None) is not None:
- cls_tokens = self.cls_token.expand(
- B, -1, -1
- ) # stole cls_tokens impl from Phil Wang, thanks
- dist_token = self.dist_token.expand(B, -1, -1)
- x = torch.cat((cls_tokens, dist_token, x), dim=1)
- else:
- cls_tokens = self.cls_token.expand(
- B, -1, -1
- ) # stole cls_tokens impl from Phil Wang, thanks
- x = torch.cat((cls_tokens, x), dim=1)
-
- x = x + pos_embed
- x = self.pos_drop(x)
-
- for blk in self.blocks:
- x = blk(x)
-
- x = self.norm(x)
-
- return x
-
-
-activations = {}
-
-
-def get_activation(name):
- def hook(model, input, output):
- activations[name] = output
-
- return hook
-
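-# The forward hooks registered in the _make_*_backbone helpers below store the chosen
-# transformer blocks' outputs in the shared `activations` dict under the keys "1"-"4";
-# forward_vit() above then reads them back as layer_1..layer_4.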
-
-def get_readout_oper(vit_features, features, use_readout, start_index=1):
- if use_readout == "ignore":
- readout_oper = [Slice(start_index)] * len(features)
- elif use_readout == "add":
- readout_oper = [AddReadout(start_index)] * len(features)
- elif use_readout == "project":
- readout_oper = [
- ProjectReadout(vit_features, start_index) for out_feat in features
- ]
- else:
- assert (
- False
- ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'"
-
- return readout_oper
-
-
-def _make_vit_b16_backbone(
- model,
- features=[96, 192, 384, 768],
- size=[384, 384],
- hooks=[2, 5, 8, 11],
- vit_features=768,
- use_readout="ignore",
- start_index=1,
-):
- pretrained = nn.Module()
-
- pretrained.model = model
- pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1"))
- pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2"))
- pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3"))
- pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4"))
-
- pretrained.activations = activations
-
- readout_oper = get_readout_oper(vit_features, features, use_readout, start_index)
-
- # 32, 48, 136, 384
- pretrained.act_postprocess1 = nn.Sequential(
- readout_oper[0],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[0],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[0],
- out_channels=features[0],
- kernel_size=4,
- stride=4,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess2 = nn.Sequential(
- readout_oper[1],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[1],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[1],
- out_channels=features[1],
- kernel_size=2,
- stride=2,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess3 = nn.Sequential(
- readout_oper[2],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[2],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- )
-
- pretrained.act_postprocess4 = nn.Sequential(
- readout_oper[3],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[3],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.Conv2d(
- in_channels=features[3],
- out_channels=features[3],
- kernel_size=3,
- stride=2,
- padding=1,
- ),
- )
-
- pretrained.model.start_index = start_index
- pretrained.model.patch_size = [16, 16]
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model)
- pretrained.model._resize_pos_embed = types.MethodType(
- _resize_pos_embed, pretrained.model
- )
-
- return pretrained
-
-
-def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_large_patch16_384", pretrained=pretrained)
-
- hooks = [5, 11, 17, 23] if hooks is None else hooks
- return _make_vit_b16_backbone(
- model,
- features=[256, 512, 1024, 1024],
- hooks=hooks,
- vit_features=1024,
- use_readout=use_readout,
- )
-
-
-def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_base_patch16_384", pretrained=pretrained)
-
- hooks = [2, 5, 8, 11] if hooks is None else hooks
- return _make_vit_b16_backbone(
- model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout
- )
-
-
-def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained)
-
- hooks = [2, 5, 8, 11] if hooks is None else hooks
- return _make_vit_b16_backbone(
- model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout
- )
-
-
-def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model(
- "vit_deit_base_distilled_patch16_384", pretrained=pretrained
- )
-
- hooks = [2, 5, 8, 11] if hooks is None else hooks
- return _make_vit_b16_backbone(
- model,
- features=[96, 192, 384, 768],
- hooks=hooks,
- use_readout=use_readout,
- start_index=2,
- )
-
-
-def _make_vit_b_rn50_backbone(
- model,
- features=[256, 512, 768, 768],
- size=[384, 384],
- hooks=[0, 1, 8, 11],
- vit_features=768,
- use_vit_only=False,
- use_readout="ignore",
- start_index=1,
-):
- pretrained = nn.Module()
-
- pretrained.model = model
-
- if use_vit_only:
- pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1"))
- pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2"))
- else:
- pretrained.model.patch_embed.backbone.stages[0].register_forward_hook(
- get_activation("1")
- )
- pretrained.model.patch_embed.backbone.stages[1].register_forward_hook(
- get_activation("2")
- )
-
- pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3"))
- pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4"))
-
- pretrained.activations = activations
-
- readout_oper = get_readout_oper(vit_features, features, use_readout, start_index)
-
- if use_vit_only:
- pretrained.act_postprocess1 = nn.Sequential(
- readout_oper[0],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[0],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[0],
- out_channels=features[0],
- kernel_size=4,
- stride=4,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess2 = nn.Sequential(
- readout_oper[1],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[1],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[1],
- out_channels=features[1],
- kernel_size=2,
- stride=2,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
- else:
- pretrained.act_postprocess1 = nn.Sequential(
- nn.Identity(), nn.Identity(), nn.Identity()
- )
- pretrained.act_postprocess2 = nn.Sequential(
- nn.Identity(), nn.Identity(), nn.Identity()
- )
-
- pretrained.act_postprocess3 = nn.Sequential(
- readout_oper[2],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[2],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- )
-
- pretrained.act_postprocess4 = nn.Sequential(
- readout_oper[3],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[3],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.Conv2d(
- in_channels=features[3],
- out_channels=features[3],
- kernel_size=3,
- stride=2,
- padding=1,
- ),
- )
-
- pretrained.model.start_index = start_index
- pretrained.model.patch_size = [16, 16]
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model)
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model._resize_pos_embed = types.MethodType(
- _resize_pos_embed, pretrained.model
- )
-
- return pretrained
-
-
-def _make_pretrained_vitb_rn50_384(
- pretrained, use_readout="ignore", hooks=None, use_vit_only=False
-):
- model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained)
-
- hooks = [0, 1, 8, 11] if hooks is None else hooks
- return _make_vit_b_rn50_backbone(
- model,
- features=[256, 512, 768, 768],
- size=[384, 384],
- hooks=hooks,
- use_vit_only=use_vit_only,
- use_readout=use_readout,
- )
diff --git a/spaces/Aditya9790/yolo7-object-tracking/utils/metrics.py b/spaces/Aditya9790/yolo7-object-tracking/utils/metrics.py
deleted file mode 100644
index 6d2f53647529ab0fc52f2e69fe2571794b024c94..0000000000000000000000000000000000000000
--- a/spaces/Aditya9790/yolo7-object-tracking/utils/metrics.py
+++ /dev/null
@@ -1,227 +0,0 @@
-# Model validation metrics
-
-from pathlib import Path
-
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-
-from . import general
-
-
-def fitness(x):
- # Model fitness as a weighted combination of metrics
- w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
- return (x[:, :4] * w).sum(1)
-
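-# Worked example (hypothetical values): a results row [P, R, mAP@0.5, mAP@0.5:0.95] of
-# [0.7, 0.6, 0.5, 0.4] scores 0.0*0.7 + 0.0*0.6 + 0.1*0.5 + 0.9*0.4 = 0.41.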
-
-def ap_per_class(tp, conf, pred_cls, target_cls, v5_metric=False, plot=False, save_dir='.', names=()):
- """ Compute the average precision, given the recall and precision curves.
- Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
- # Arguments
- tp: True positives (nparray, nx1 or nx10).
- conf: Objectness value from 0-1 (nparray).
- pred_cls: Predicted object classes (nparray).
- target_cls: True object classes (nparray).
- v5_metric: Assume maximum recall to be 1.0, as in YOLOv5/MMDetection (see compute_ap)
- plot: Plot precision-recall curve at mAP@0.5
- save_dir: Plot save directory
- names: Class names used for the plot legends
- # Returns
- The average precision as computed in py-faster-rcnn.
- """
-
- # Sort by objectness
- i = np.argsort(-conf)
- tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
-
- # Find unique classes
- unique_classes = np.unique(target_cls)
- nc = unique_classes.shape[0] # number of classes
-
- # Create Precision-Recall curve and compute AP for each class
- px, py = np.linspace(0, 1, 1000), [] # for plotting
- ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))
- for ci, c in enumerate(unique_classes):
- i = pred_cls == c
- n_l = (target_cls == c).sum() # number of labels
- n_p = i.sum() # number of predictions
-
- if n_p == 0 or n_l == 0:
- continue
- else:
- # Accumulate FPs and TPs
- fpc = (1 - tp[i]).cumsum(0)
- tpc = tp[i].cumsum(0)
-
- # Recall
- recall = tpc / (n_l + 1e-16) # recall curve
- r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases
-
- # Precision
- precision = tpc / (tpc + fpc) # precision curve
- p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score
-
- # AP from recall-precision curve
- for j in range(tp.shape[1]):
- ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j], v5_metric=v5_metric)
- if plot and j == 0:
- py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5
-
- # Compute F1 (harmonic mean of precision and recall)
- f1 = 2 * p * r / (p + r + 1e-16)
- if plot:
- plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names)
- plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1')
- plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision')
- plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall')
-
- i = f1.mean(0).argmax() # max F1 index
- return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32')
-
-
-def compute_ap(recall, precision, v5_metric=False):
- """ Compute the average precision, given the recall and precision curves
- # Arguments
- recall: The recall curve (list)
- precision: The precision curve (list)
- v5_metric: Assume maximum recall to be 1.0, as in YOLOv5, MMDetection etc.
- # Returns
- Average precision, precision curve, recall curve
- """
-
- # Append sentinel values to beginning and end
- if v5_metric: # New YOLOv5 metric, same as MMDetection and Detectron2 repositories
- mrec = np.concatenate(([0.], recall, [1.0]))
- else: # Old YOLOv5 metric, i.e. default YOLOv7 metric
- mrec = np.concatenate(([0.], recall, [recall[-1] + 0.01]))
- mpre = np.concatenate(([1.], precision, [0.]))
-
- # Compute the precision envelope
- mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
-
- # Integrate area under curve
- method = 'interp' # methods: 'continuous', 'interp'
- if method == 'interp':
- x = np.linspace(0, 1, 101) # 101-point interp (COCO)
- ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate
- else: # 'continuous'
- i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes
- ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve
-
- return ap, mpre, mrec
-
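-# Minimal usage sketch (hypothetical toy curves):
-#   recall = np.array([0.0, 0.5, 1.0])
-#   precision = np.array([1.0, 0.8, 0.6])
-#   ap, mpre, mrec = compute_ap(recall, precision, v5_metric=True)
-# With the default 'interp' method the precision envelope is integrated over 101 evenly
-# spaced recall points (COCO style) and ap is a scalar.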
-
-class ConfusionMatrix:
- # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix
- def __init__(self, nc, conf=0.25, iou_thres=0.45):
- self.matrix = np.zeros((nc + 1, nc + 1))
- self.nc = nc # number of classes
- self.conf = conf
- self.iou_thres = iou_thres
-
- def process_batch(self, detections, labels):
- """
- Update the confusion matrix with one batch of detections and ground-truth labels.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- detections (Array[N, 6]), x1, y1, x2, y2, conf, class
- labels (Array[M, 5]), class, x1, y1, x2, y2
- Returns:
- None, updates confusion matrix accordingly
- """
- detections = detections[detections[:, 4] > self.conf]
- gt_classes = labels[:, 0].int()
- detection_classes = detections[:, 5].int()
- iou = general.box_iou(labels[:, 1:], detections[:, :4])
-
- x = torch.where(iou > self.iou_thres)
- if x[0].shape[0]:
- matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
- if x[0].shape[0] > 1:
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
- else:
- matches = np.zeros((0, 3))
-
- n = matches.shape[0] > 0
- m0, m1, _ = matches.transpose().astype(np.int16)
- for i, gc in enumerate(gt_classes):
- j = m0 == i
- if n and sum(j) == 1:
- self.matrix[gc, detection_classes[m1[j]]] += 1 # correct
- else:
- self.matrix[self.nc, gc] += 1 # background FP
-
- if n:
- for i, dc in enumerate(detection_classes):
- if not any(m1 == i):
- self.matrix[dc, self.nc] += 1 # background FN
-
- def matrix(self):
- return self.matrix
-
- def plot(self, save_dir='', names=()):
- try:
- import seaborn as sn
-
- array = self.matrix / (self.matrix.sum(0).reshape(1, self.nc + 1) + 1E-6) # normalize
- array[array < 0.005] = np.nan # don't annotate (would appear as 0.00)
-
- fig = plt.figure(figsize=(12, 9), tight_layout=True)
- sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size
- labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels
- sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True,
- xticklabels=names + ['background FP'] if labels else "auto",
- yticklabels=names + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1))
- fig.axes[0].set_xlabel('True')
- fig.axes[0].set_ylabel('Predicted')
- fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)
- except Exception as e:
- pass
-
- def print(self):
- for i in range(self.nc + 1):
- print(' '.join(map(str, self.matrix[i])))
-
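-# Minimal usage sketch (variable names are hypothetical):
-#   cm = ConfusionMatrix(nc=80)
-#   for detections, labels in batches:        # detections: [N, 6] (x1, y1, x2, y2, conf, cls)
-#       cm.process_batch(detections, labels)  # labels:     [M, 5] (cls, x1, y1, x2, y2)
-#   cm.plot(save_dir='runs/val', names=class_names)
-#   cm.print()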
-
-# Plots ----------------------------------------------------------------------------------------------------------------
-
-def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()):
- # Precision-recall curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
- py = np.stack(py, axis=1)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py.T):
- ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision)
- else:
- ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision)
-
- ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean())
- ax.set_xlabel('Recall')
- ax.set_ylabel('Precision')
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- fig.savefig(Path(save_dir), dpi=250)
-
-
-def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'):
- # Metric-confidence curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py):
- ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric)
- else:
- ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric)
-
- y = py.mean(0)
- ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}')
- ax.set_xlabel(xlabel)
- ax.set_ylabel(ylabel)
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- fig.savefig(Path(save_dir), dpi=250)
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/sde_team_given_tests.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/sde_team_given_tests.py
deleted file mode 100644
index fdef4a86338f9baa806988c4575215fcd6f9d24b..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/sde_team_given_tests.py
+++ /dev/null
@@ -1,128 +0,0 @@
-import asyncio
-import logging
-from typing import Any, Dict, List
-import json
-
-from agentverse.agents.simulation_agent.conversation import BaseAgent
-
-# from agentverse.environments.simulation_env.rules.base import Rule
-from agentverse.environments.simulation_env.rules.base import SimulationRule as Rule
-from agentverse.message import Message
-
-from .. import env_registry as EnvironmentRegistry
-from ..base import BaseEnvironment
-
-from agentverse.initialization import load_tools
-
-
-@EnvironmentRegistry.register("sde_team_given_tests")
-class SdeTeamGivenTestsEnvironment(BaseEnvironment):
- """
- A basic environment implementing the conversation logic used to craft code.
-
- Args:
- agents: List of agents
- rule: Rule for the environment
- max_turns: Maximum number of turns
- cnt_turn: Current turn number
- last_messages: Messages from last turn
- rule_params: Variables set by the rule
- """
-
- agents: List[BaseAgent]
- rule: Rule
- max_turns: int = 10
- cnt_turn: int = 0
- last_messages: List[Message] = []
- rule_params: Dict = {}
- unit_tests: str = ""
- # # variables for experiment
- # task_name: str = "test"
- # experiment_name: str = ""
-
- def __init__(self, rule, **kwargs):
- rule_config = rule
- order_config = rule_config.get("order", {"type": "sde_team_given_tests"})
- visibility_config = rule_config.get("visibility", {"type": "base"})
- selector_config = rule_config.get("selector", {"type": "sde_team_given_tests"})
- updater_config = rule_config.get("updater", {"type": "sde_team"})
- describer_config = rule_config.get("describer", {"type": "base"})
- rule = Rule(
- order_config,
- visibility_config,
- selector_config,
- updater_config,
- describer_config,
- )
- super().__init__(rule=rule, **kwargs)
- self.rule_params["first_round"] = True
- self.rule_params["end_flag"] = False
-
- # # Set up logging for experiment
- # filename = self.task_name.replace("/", "_")
- # import os
- # import os.path
- # if not os.path.exists(f"human_eval_experiments/{self.experiment_name}/log"):
- # os.makedirs(f"human_eval_experiments/{self.experiment_name}/log")
- # file_handler = logging.FileHandler(f"human_eval_experiments/{self.experiment_name}/log/{filename}.txt")
- # logging.getLogger().addHandler(file_handler)
-
- async def step(self) -> List[Message]:
- """Run one step of the environment"""
-
- # Get the next agent index
- agent_ids = self.rule.get_next_agent_idx(self) # order
-
- # Generate current environment description
- # env_descriptions = self.rule.get_env_description(self) # describer
-
- # # Generate the next message
- # messages = await asyncio.gather(
- # *[self.agents[i].astep(env_descriptions[i]) for i in agent_ids]
- # ) # call chatgpt api
-
- messages = await asyncio.gather(*[self.agents[i].astep("") for i in agent_ids])
-
- # Track the messages to get the role of the sender
- self.last_messages = messages
-
- # Some rules will select certain messages from all the messages
- selected_messages = self.rule.select_message(self, messages) # selector
- self.last_messages = selected_messages
- self.print_messages(selected_messages)
-
- # Update the memory of the agents
- self.rule.update_memory(self) # updater: update memory
-
- # Update the set of visible agents for each agent
- self.rule.update_visible_agents(self) # change receiver
-
- self.cnt_turn += 1
-
- return selected_messages
-
- def print_messages(self, messages: List[Message]) -> None:
- for message in messages:
- if message is not None:
- logging.info(f"{message.sender}: {message.content}")
-
- def reset(self) -> None:
- """Reset the environment"""
- self.cnt_turn = 0
- self.rule.reset()
- for agent in self.agents:
- agent.reset()
-
- def is_done(self) -> bool:
- """Check if the environment is done"""
- if self.cnt_turn >= self.max_turns or self.rule_params["end_flag"]:
- # # Write to file for experiment
- # with open(f"human_eval_experiments/{self.experiment_name}/record_human_eval_prediction.jsonl", "a") as f:
- # wd = dict()
- # wd['task_id'] = self.task_name
- # wd['code'] = self.rule_params['code']
- # # print(wd)
- # f.write(json.dumps(wd) + "\n")
- # logging.getLogger().handlers.pop()
- return True
- return False
diff --git a/spaces/AgentVerse/agentVerse/agentverse/output_parser/output_parser.py b/spaces/AgentVerse/agentVerse/agentverse/output_parser/output_parser.py
deleted file mode 100644
index 556d9ff6e6b8addc1b45aff0dd7e8e8be51e24a4..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/output_parser/output_parser.py
+++ /dev/null
@@ -1,621 +0,0 @@
-from __future__ import annotations
-
-import re
-from abc import abstractmethod
-import json
-from typing import Union, List, Tuple, NamedTuple, TYPE_CHECKING
-
-from . import output_parser_registry
-
-from agentverse.utils import AgentAction, AgentFinish, AgentCriticism
-
-from agentverse.llms import LLMResult
-from agentverse.logging import logger
-
-from pydantic import BaseModel
-
-if TYPE_CHECKING:
- from agentverse.agents.base import BaseAgent
- from agentverse.environments.base import BaseEnvironment
-
-class OutputParserError(Exception):
- """Exception raised when parsing output from a command fails."""
-
- def __init__(self, message):
- self.message = message
-
- def __str__(self):
- return "Failed to parse output of the model:%s\n " % self.message
-
-
-class OutputParser(BaseModel):
- """Base class for output parsers."""
-
- @abstractmethod
- def parse(self, output: LLMResult) -> NamedTuple:
- pass
-
-
-@output_parser_registry.register("alice_home")
-class AliceHomeParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 2
- and cleaned_output[0].startswith("Thought:")
- and cleaned_output[1].startswith("Action:")
- ):
- raise OutputParserError(text)
-
- action = cleaned_output[1][len("Action:") :].strip()
-
- return AgentFinish({"output": action}, text)
-
-
-@output_parser_registry.register("db_diag")
-@output_parser_registry.register("nlp_classroom_3players_withtool")
-class CommonParser1(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 3
- and cleaned_output[0].startswith("Thought:")
- and cleaned_output[1].startswith("Action:")
- and cleaned_output[2].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[1][len("Action:") :].strip()
- action_input = cleaned_output[2][len("Action Input:") :].strip()
- if action in ["Speak"]:
- return AgentFinish({"output": action_input}, text)
- elif action == "CallOn":
- return AgentFinish({"output": "[CallOn] " + action_input}, text)
- elif action == "RaiseHand":
- return AgentFinish({"output": "[RaiseHand] " + action_input}, text)
- elif action == "Listen":
- return AgentFinish({"output": ""}, text)
- else:
- return AgentAction(action.lower(), action_input, text)
-
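-# Example of a completion this parser accepts (illustrative text only):
-#
-#   Thought: I should answer the question myself.
-#   Action: Speak
-#   Action Input: The attention mechanism weighs each token by its relevance.
-#
-# Any output that does not reduce to exactly these three Thought/Action/Action Input
-# lines raises OutputParserError.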
-
-@output_parser_registry.register("math_problem_2players_tools")
-class MathProblem2PlayersToolsParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 2
- and cleaned_output[0].startswith("Action:")
- and cleaned_output[1].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[0][len("Action:") :].strip()
- action_input = cleaned_output[1][len("Action Input:") :].strip()
- if action == "Speak":
- return AgentFinish({"output": action_input}, text)
- else:
- return AgentAction(action, action_input, text)
-
-
-@output_parser_registry.register("nlp_classroom_3players")
-class NlpClassroom3PlayersParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 2
- and cleaned_output[0].startswith("Action:")
- and cleaned_output[1].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[0][len("Action:") :].strip()
- action_input = cleaned_output[1][len("Action Input:") :].strip()
- if action == "Speak":
- return AgentFinish({"output": action_input}, text)
- else:
- raise OutputParserError(text)
-
-
-@output_parser_registry.register("nlp_classroom_9players")
-class NlpClassroom9PlayersParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 2
- and cleaned_output[0].startswith("Action:")
- and cleaned_output[1].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[0][len("Action:") :].strip()
- action_input = cleaned_output[1][len("Action Input:") :].strip()
- if action == "Speak":
- return AgentFinish({"output": action_input}, text)
- elif action == "CallOn":
- return AgentFinish({"output": "[CallOn] " + action_input}, text)
- elif action == "RaiseHand":
- return AgentFinish({"output": "[RaiseHand] " + action_input}, text)
- elif action == "Listen":
- return AgentFinish({"output": ""}, text)
- else:
- return AgentAction(action, action_input, text)
-
-
-@output_parser_registry.register("nlp_classroom_9players_group")
-class NlpClassroom9PlayersGroupParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 2
- and cleaned_output[0].startswith("Action:")
- and cleaned_output[1].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[0][len("Action:") :].strip()
- action_input = cleaned_output[1][len("Action Input:") :].strip()
- if action == "Speak":
- return AgentFinish({"output": action_input}, text)
- elif action in ["CallOn", "RaiseHand", "GroupDiscuss"]:
- return AgentFinish({"output": f"[{action}] {action_input}"}, text)
- elif action == "Listen":
- return AgentFinish({"output": ""}, text)
- else:
- return AgentAction(action, action_input, text)
-
-
-@output_parser_registry.register("pokemon")
-class PokemonParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 3
- and cleaned_output[0].startswith("Thought:")
- and cleaned_output[1].startswith("Action:")
- and cleaned_output[2].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[1][len("Action:") :].strip()
- action_input = cleaned_output[2][len("Action Input:") :].strip()
- try:
- action_input = json.loads(action_input)
- except json.JSONDecodeError:
- raise OutputParserError(text)
- action_input["action"] = action
- return AgentFinish({"output": json.dumps(action_input)}, text)
-
-
-@output_parser_registry.register("prisoner_dilemma")
-class PrisonerDilemmaParser(OutputParser):
- # round bookkeeping: each round spans two turns, so the counter should advance 1 1 2 2 3 3
- cur_round: int = 1
- encounter_cur_round: bool = False
-
- def parse(
- self, agent: "BaseAgent", environment: "BaseEnvironment", output: LLMResult
- ) -> Union[AgentAction, AgentFinish]:
- text = output.content
- cleaned_output = text.strip()
- cleaned_output = re.sub(r"\n+", "\n", cleaned_output)
- cleaned_output = cleaned_output.split("\n")
- if not (
- len(cleaned_output) == 2
- and cleaned_output[0].startswith("Action:")
- and cleaned_output[1].startswith("Action Input:")
- ):
- raise OutputParserError(text)
- action = cleaned_output[0][len("Action:") :].strip()
- action_input = cleaned_output[1][len("Action Input:") :].strip()
-
- if action == "Speak":
- # make sure the police count the round right
- # if agent.name == "Police":
- # action_input = re.sub(r'Round (\d+)', f'Round {self.cur_round}', action_input)
- # self.cur_round += 1
- # if self.encounter_cur_round:
- # self.encounter_cur_round = False
- # self.cur_round += 1
- # else:
- # self.encounter_cur_round = True
-
- # each time police speak is a new round
- if agent.name == "Police":
- if environment.cnt_turn == (environment.max_turns - 4):
- action_input = (
- "Attention! You are now required to made your final decision and I will made the "
- "final judgement to both of you based on this time, Please Answer now !"
- )
-
- elif environment.cnt_turn == (environment.max_turns - 2):
- action_input = "Attention! Suspect2, it's now your time to make your final decision, Please Answer now !"
-
- # elif self.cur_round == 1:
- # action_input = "Hey Listen! You are both arrested, and I am going to give you both a chance to walk out of here," \
- # "But you should comply with the following rules:" \
- # "- If one of you are willing to testifies against the other and the other one remains silent, then the one who testifies will be released IMMEDIATELY, while the silent one will be sentenced to TEN years in prison." \
- # "- If both of you remain silent, you will each receive a sentence of ONE year in prison." \
- # "- It seems that always testifying is a goog strategy, So! if you both choose to testify against each other, you will each receive a sentence of FIVE years in prison." \
- # "Now, it's your time to consider testifying or remaining silent. Remember this is a best chance you might ever have to walk out of here without guilty." \
- # "I will noticed both of you WHEN you have to make your final decision! Before that, try to make your best!" \
-
- self.cur_round += 1
-
- return AgentFinish({"output": action_input}, text)
- else:
- raise OutputParserError(text)
-
-
-@output_parser_registry.register("sde_team/sde_team_2players")
-@output_parser_registry.register("sde_team/sde_team_3players")
-@output_parser_registry.register("commongen")
-@output_parser_registry.register("humaneval-manager")
-@output_parser_registry.register("mgsm")
-@output_parser_registry.register("dummy")
-@output_parser_registry.register("responsegen")
-class CommonParser2(OutputParser):
- # def parse(self, agent, env, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- return AgentFinish({"output": output.content}, output.content)
-
-
-@output_parser_registry.register("role_assigner")
-class RoleAssignerParser(OutputParser):
- cnt_critic_agents: int = 0
-
- def parse(self, output: LLMResult) -> List[str]:
- text = output.content
- pattern = re.compile(r"\d\.\s*(.+)")
- roles = pattern.findall(text)
- if len(roles) < self.cnt_critic_agents:
- logger.error(
- f"Role assigner failed to assign roles to {self.cnt_critic_agents} critics!"
- )
- raise OutputParserError(text)
- return roles
-
-
-@output_parser_registry.register("evaluator")
-class EvaluatorParser(OutputParser):
- dimensions: List[str] = None
-
- def parse(self, output: LLMResult) -> Tuple[List[int], str]:
- text = output.content
- cleaned_output = re.sub(r"\n+", "\n", text.strip())
- checks = cleaned_output.split("\n")
- patterns = [
- re.compile(r"(?:\d\.\s*)?" + dimension + r":\s*(\d)")
- for dimension in self.dimensions
- ]
- try:
- # find score and advice
- score = [
- int(pattern.findall(checks[i])[0]) for i, pattern in enumerate(patterns)
- ]
- advice_text = "".join(checks[len(self.dimensions) :])
- advice = re.findall(r"(?:\d\.\s*)?Advice:\s*(.+)", advice_text)[0]
- # logger.info("Evaluator give the following advice:\n" + advice)
- except (IndexError, ValueError):
- # logger.error("Bad response from evaluator!")
- raise OutputParserError(text)
- return score, advice
-
-
-@output_parser_registry.register("humaneval-solver")
-class HumanevalSolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- # start_pos = text.find("```")
- # end_pos = text.rfind("```")
- # if end_pos == -1:
- # raise OutputParserError(text)
- # text = text[start_pos:end_pos]
- # cleaned_output = text.strip().strip("```").strip()
- # if cleaned_output.startswith("python"):
- # cleaned_output = cleaned_output[6:].strip()
- # elif cleaned_output.startswith("python3"):
- # cleaned_output = cleaned_output[7:].strip()
- code = re.findall(r"```.*?\n(.+?)```", text, re.DOTALL)[-1]
-
- return AgentFinish({"output": code}, text)
-
-
-@output_parser_registry.register("humaneval-executor")
-class HumanevalExecutorParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- try:
- parsed_result = re.findall(
- r"Thought:(.+?)Reasoning:(.+?)Criticism:(.+?)File Path:(.+?)Code:(.+?)Command:(.+)",
- text,
- re.DOTALL,
- )[0]
- cleaned_output = {
- "thought": parsed_result[0].strip(),
- "reasoning": parsed_result[1].strip(),
- "criticism": parsed_result[2].strip(),
- "file_path": parsed_result[3].strip().strip("`"),
- "code": parsed_result[4]
- .strip()
- .strip("```")
- .strip("python")
- .strip("python3"),
- "command": parsed_result[5].strip().strip("`"),
- }
- except Exception:
- raise OutputParserError(text)
-
- return AgentFinish({"output": cleaned_output}, text)
-
-
-@output_parser_registry.register("humaneval-evaluator")
-class HumanevalEvaluatorParser(OutputParser):
- dimensions: List[str] = None
-
- def parse(self, output: LLMResult) -> Tuple[List[int], str]:
- text = output.content
- cleaned_output = re.sub(r"\n+", "\n", text.strip())
- checks = cleaned_output.split("\n")
-
- patterns = [
- re.compile(r"(?:\d.\s*)?" + dimension + r":\s*(\d)")
- for dimension in self.dimensions
- ]
-
- advice = ""
- for check in reversed(checks):
- advice = check + advice
- if check.startswith("Advice:"):
- break
- checks[-1] = advice
- try:
- # find score and advice
- score = []
- for pattern in patterns:
- for check in checks[:-1]:
- if pattern.findall(check):
- score.append(bool(int(pattern.findall(check)[0])))
- break
- advice = re.findall(r"(?:\d.\s*)?Advice:\s*(.+)", checks[-1])[0]
- # logger.info("Evaluator give the following advice:\n" + advice)
- except (IndexError, ValueError):
- # logger.error("Bad response from evaluator!")
- raise OutputParserError(text)
- return score[0], advice
-
-
-@output_parser_registry.register("humaneval-critic-agree")
-class HumanevalCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- if "[Agree]" in text:
- return AgentCriticism(True, "")
- else:
- return AgentCriticism(False, text)
-
-
-@output_parser_registry.register("mgsm-evaluator")
-class MGSMEvaluatorParser(OutputParser):
- dimensions: List[str] = None
-
- def parse(self, output: LLMResult) -> Tuple[List[int], str]:
- text = output.content
- cleaned_output = re.sub(r"\n+", "\n", text.strip())
- # checks = cleaned_output.split("\n")
-
- patterns = [
- re.compile(r"(?:\d.\s*)?" + dimension + r":\s*(\d)")
- for dimension in self.dimensions
- ]
- try:
- # find score and advice
- score_num = [
- int(pattern.findall(cleaned_output)[0])
- for i, pattern in enumerate(patterns)
- ][0]
- if score_num == 0:
- score = False
- elif score_num == 1:
- score = True
- else:
- raise ValueError("Bad score!")
- pat = re.compile(r"(?:\d.\s*)?Response:\s*(.+)", re.DOTALL)
- advice = pat.findall(cleaned_output)[0]
- # logger.info("Evaluator give the following advice:\n" + advice)
- except (IndexError, ValueError):
- # logger.error("Bad response from evaluator!")
- raise OutputParserError(text)
- return score, advice
-
-
-@output_parser_registry.register("mgsm-critic-agree")
-class MGSMCriticAgreeParser(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- text = re.sub(r"\n+", "\n", text.strip())
- # checks = text.split("\n")
- # if not text.startswith("Thought:"):
- # raise OutputParserError(text)
- # if not (checks[0].startswith("Action:")):
- # raise OutputParserError(text)
- # if checks[0].strip(". ") == "Action: Agree":
- # return AgentCriticism(True, "")
- if "[Agree]" in text:
- return AgentCriticism(True, "")
- else:
- # pattern = re.compile(r"Action Input: ([\S\n ]+)")
- # try:
- # criticism = pattern.findall(text)[0].strip()
- # criticism = (
- # re.findall(r"Output:\S?(.+)", text)[0].replace("[Wrong]", "")
- # ).strip()
- criticism = text.replace("[Disagree]", "").strip()
- # except IndexError:
- # logger.error("Bad response from critic!")
- # raise OutputParserError(text)
- # criticism = "I think the solution is not correct. Please think carefully and correct it."
- return AgentCriticism(False, criticism)
- # else:
- # raise OutputParserError(text)
-
-
-@output_parser_registry.register("responsegen-evaluator")
-class ResponseGenEvaluatorParser(OutputParser):
- dimensions: List[str] = None
-
- def parse(self, output: LLMResult) -> Tuple[List[int], str]:
- text = output.content
- cleaned_output = re.sub(r"\n+", "\n", text.strip())
- checks = cleaned_output.split("\n")
-
- patterns = [
- re.compile(r"(?:\d.\s*)?" + dimension + r":\s*(\d+)")
- for dimension in self.dimensions
- ]
-
- advice = ""
- for check in reversed(checks):
- advice = check + advice
- if check.startswith("Advice:"):
- break
- checks[-1] = advice
- try:
- # find score and advice
- score = [
- int(pattern.findall(checks[i])[0]) for i, pattern in enumerate(patterns)
- ]
- advice = re.findall(r"(?:\d.\s*)?Advice:\s*(.+)", checks[-1])[0]
- # logger.info("Evaluator give the following advice:\n" + advice)
- except (IndexError, ValueError):
- # logger.error("Bad response from evaluator!")
- raise OutputParserError(text)
- return score, advice
-
-
-@output_parser_registry.register("responsegen-critic")
-@output_parser_registry.register("critic")
-class CommonParser3(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- text = re.sub(r"\n+", "\n", text.strip())
- checks = text.split("\n")
- if not (checks[0].startswith("Action:")):
- raise OutputParserError(text)
- if checks[0].strip(". ") == "Action: Agree":
- return AgentCriticism(True, "")
- elif checks[0].strip(". ") == "Action: Disagree":
- pattern = re.compile(r"Action Input: ([\S\n ]+)")
- try:
- criticism = pattern.findall(text)[0].strip()
- except IndexError:
- criticism = (
- "I think it is not correct. Please think carefully and improve it."
- )
- # raise OutputParserError(text)
- return AgentCriticism(False, criticism)
- else:
- raise OutputParserError(text)
-
-
-@output_parser_registry.register("responsegen-critic-2")
-class ResponseGenCriticParser(OutputParser):
- def parse(self, output: LLMResult) -> AgentCriticism:
- text = output.content
- # text = re.sub(r"\n+", "\n", text.strip())
- # checks = text.split("\n")
- # if not (checks[0].startswith("Action:")):
- # raise OutputParserError(text)
- # if checks[0].strip(". ") == "Action: Agree":
- # return AgentCriticism(True, "")
- # elif checks[0].strip(". ") == "Action: Disagree":
- # pattern = re.compile(r"Action Input: ([\S\n ]+)")
- # try:
- # criticism = pattern.findall(text)[0].strip()
- # except IndexError:
- # # criticism = "I think the solution is not correct. Please think carefully and correct it."
- # raise OutputParserError(text)
- # return AgentCriticism(False, criticism)
- # else:
- # raise OutputParserError(text)
- result = re.findall(r"Decision:(.+?)Response:(.+)", text, re.DOTALL)
- if len(result) == 0:
- result = ["Disagree", "I think the response can be further improved."]
- else:
- result = result[0]
- if "Agree" in result[0]:
- return AgentCriticism(True, "")
- else:
- return AgentCriticism(False, result[1].strip())
-
-
-@output_parser_registry.register("role-description-name-assigner")
-class RoleAssignerParser(OutputParser):
- cnt_critic_agents: int = 0
-
- def parse(self, output: LLMResult) -> List[str]:
- text = output.content
- pattern = re.compile(r"\d+?\.\s*(.+?) - (.+)")
- roles = pattern.findall(text)
- if len(roles) < self.cnt_critic_agents:
- logger.error(
- f"Role assigner failed to assign roles to {self.cnt_critic_agents} critics!"
- )
- raise OutputParserError(text)
- res = []
- for role in roles:
- res.append({"name": role[0], "description": role[1]})
- return res
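-
- # Illustrative sketch (hypothetical example): the role assigner is expected to reply with
- # numbered "Name - Description" lines, e.g.
- #   1. Critic-Alice - Focuses on factual accuracy.
- #   2. Critic-Bob - Focuses on clarity and style.
- # which parse() turns into [{"name": "Critic-Alice", ...}, {"name": "Critic-Bob", ...}].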
-
-
-@output_parser_registry.register("tool-using-solver")
-class SolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- text = output.content
- pattern = re.compile(r"\d+?\.\s*(.+?) - (.+)")
- tasks = pattern.findall(text)
- if len(tasks) == 0:
- raise OutputParserError(text)
- return AgentFinish({"output": tasks}, text)
-
-
-@output_parser_registry.register("tool-using-executor")
-class ToolUsingSolverParser(OutputParser):
- def parse(self, output: LLMResult) -> Union[AgentAction, AgentFinish]:
- if output.function_name != "":
- return AgentAction(
- tool=output.function_name,
- tool_input=output.function_arguments,
- log=output.content,
- )
- else:
- return AgentFinish({"output": output.content}, output.content)
-
-
-@output_parser_registry.register("tool-using-evaluator")
-class HumanevalEvaluatorParser(OutputParser):
- def parse(self, output: LLMResult) -> Tuple[List[int], str]:
- text = output.content
- try:
- result = re.findall(r"Status:(.+?)Speak:(.+)", text, re.DOTALL)[0]
- score = bool(int(result[0]))
- words = result[1].strip()
- except (IndexError, ValueError):
- # logger.error("Bad response from evaluator!")
- raise OutputParserError(text)
- return score, words
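-
-# Illustrative sketch (hypothetical example): the evaluator reply is assumed to look like
-#   Status: 1
-#   Speak: All test cases pass.
-# which parse() turns into (True, "All test cases pass.").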
diff --git a/spaces/AlanMars/QYL-AI-Space/modules/llama_func.py b/spaces/AlanMars/QYL-AI-Space/modules/llama_func.py
deleted file mode 100644
index e1c513af1bf6d1569b071eb5fc0ce441d0692f83..0000000000000000000000000000000000000000
--- a/spaces/AlanMars/QYL-AI-Space/modules/llama_func.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import os
-import logging
-
-from llama_index import download_loader
-from llama_index import (
- Document,
- LLMPredictor,
- PromptHelper,
- QuestionAnswerPrompt,
- RefinePrompt,
-)
-import colorama
-import PyPDF2
-from tqdm import tqdm
-
-from modules.presets import *
-from modules.utils import *
-from modules.config import local_embedding
-
-
-def get_index_name(file_src):
- file_paths = [x.name for x in file_src]
- file_paths.sort(key=lambda x: os.path.basename(x))
-
- md5_hash = hashlib.md5()
- for file_path in file_paths:
- with open(file_path, "rb") as f:
- while chunk := f.read(8192):
- md5_hash.update(chunk)
-
- return md5_hash.hexdigest()
-
-
-def block_split(text):
- blocks = []
- while len(text) > 0:
- blocks.append(Document(text[:1000]))
- text = text[1000:]
- return blocks
-
-
-def get_documents(file_src):
- documents = []
- logging.debug("Loading documents...")
- logging.debug(f"file_src: {file_src}")
- for file in file_src:
- filepath = file.name
- filename = os.path.basename(filepath)
- file_type = os.path.splitext(filepath)[1]
- logging.info(f"loading file: {filename}")
- try:
- if file_type == ".pdf":
- logging.debug("Loading PDF...")
- try:
- from modules.pdf_func import parse_pdf
- from modules.config import advance_docs
-
- two_column = advance_docs["pdf"].get("two_column", False)
- pdftext = parse_pdf(filepath, two_column).text
- except Exception:  # fall back to PyPDF2 below if the custom parser fails
- pdftext = ""
- with open(filepath, "rb") as pdfFileObj:
- pdfReader = PyPDF2.PdfReader(pdfFileObj)
- for page in tqdm(pdfReader.pages):
- pdftext += page.extract_text()
- text_raw = pdftext
- elif file_type == ".docx":
- logging.debug("Loading Word...")
- DocxReader = download_loader("DocxReader")
- loader = DocxReader()
- text_raw = loader.load_data(file=filepath)[0].text
- elif file_type == ".epub":
- logging.debug("Loading EPUB...")
- EpubReader = download_loader("EpubReader")
- loader = EpubReader()
- text_raw = loader.load_data(file=filepath)[0].text
- elif file_type == ".xlsx":
- logging.debug("Loading Excel...")
- text_list = excel_to_string(filepath)
- for elem in text_list:
- documents.append(Document(elem))
- continue
- else:
- logging.debug("Loading text file...")
- with open(filepath, "r", encoding="utf-8") as f:
- text_raw = f.read()
- except Exception as e:
- logging.error(f"Error loading file {filename}: {e}")
- continue  # skip this file; otherwise `text_raw` would be undefined below
- text = add_space(text_raw)
- # text = block_split(text)
- # documents += text
- documents += [Document(text)]
- logging.debug("Documents loaded.")
- return documents
-
-
-def construct_index(
- api_key,
- file_src,
- max_input_size=4096,
- num_outputs=5,
- max_chunk_overlap=20,
- chunk_size_limit=600,
- embedding_limit=None,
- separator=" ",
-):
- from langchain.chat_models import ChatOpenAI
- from langchain.embeddings.huggingface import HuggingFaceEmbeddings
- from llama_index import GPTSimpleVectorIndex, ServiceContext, LangchainEmbedding, OpenAIEmbedding
-
- if api_key:
- os.environ["OPENAI_API_KEY"] = api_key
- else:
- # A dependency insists on an API key being present, so set a placeholder
- os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx"
- chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
- embedding_limit = None if embedding_limit == 0 else embedding_limit
- separator = " " if separator == "" else separator
-
- prompt_helper = PromptHelper(
- max_input_size=max_input_size,
- num_output=num_outputs,
- max_chunk_overlap=max_chunk_overlap,
- embedding_limit=embedding_limit,
- chunk_size_limit=600,
- separator=separator,
- )
- index_name = get_index_name(file_src)
- if os.path.exists(f"./index/{index_name}.json"):
- logging.info("找到了缓存的索引文件,加载中……")
- return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json")
- else:
- try:
- documents = get_documents(file_src)
- if local_embedding:
- embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2"))
- else:
- embed_model = OpenAIEmbedding()
- logging.info("构建索引中……")
- with retrieve_proxy():
- service_context = ServiceContext.from_defaults(
- prompt_helper=prompt_helper,
- chunk_size_limit=chunk_size_limit,
- embed_model=embed_model,
- )
- index = GPTSimpleVectorIndex.from_documents(
- documents, service_context=service_context
- )
- logging.debug("索引构建完成!")
- os.makedirs("./index", exist_ok=True)
- index.save_to_disk(f"./index/{index_name}.json")
- logging.debug("索引已保存至本地!")
- return index
-
- except Exception as e:
- logging.error("索引构建失败!", e)
- print(e)
- return None
-
-
-def add_space(text):
- punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "}
- for cn_punc, en_punc in punctuations.items():
- text = text.replace(cn_punc, en_punc)
- return text
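-
-# Quick illustration (hypothetical example): add_space pads full-width CJK punctuation with a
-# trailing ASCII space, presumably so downstream chunking has natural break points, e.g.
-#   add_space("你好,世界。") == "你好, 世界。 "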
diff --git a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/text/cantonese.py b/spaces/Alycer/VITS-Umamusume-voice-synthesizer/text/cantonese.py
deleted file mode 100644
index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000
--- a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/text/cantonese.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import re
-import cn2an
-import opencc
-
-
-converter = opencc.OpenCC('jyutjyu')
-
-# List of (Latin alphabet, ipa) pairs:
-_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('A', 'ei˥'),
- ('B', 'biː˥'),
- ('C', 'siː˥'),
- ('D', 'tiː˥'),
- ('E', 'iː˥'),
- ('F', 'e˥fuː˨˩'),
- ('G', 'tsiː˥'),
- ('H', 'ɪk̚˥tsʰyː˨˩'),
- ('I', 'ɐi˥'),
- ('J', 'tsei˥'),
- ('K', 'kʰei˥'),
- ('L', 'e˥llou˨˩'),
- ('M', 'ɛːm˥'),
- ('N', 'ɛːn˥'),
- ('O', 'ou˥'),
- ('P', 'pʰiː˥'),
- ('Q', 'kʰiːu˥'),
- ('R', 'aː˥lou˨˩'),
- ('S', 'ɛː˥siː˨˩'),
- ('T', 'tʰiː˥'),
- ('U', 'juː˥'),
- ('V', 'wiː˥'),
- ('W', 'tʊk̚˥piː˥juː˥'),
- ('X', 'ɪk̚˥siː˨˩'),
- ('Y', 'waːi˥'),
- ('Z', 'iː˨sɛːt̚˥')
-]]
-
-
-def number_to_cantonese(text):
- return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text)
-
-
-def latin_to_ipa(text):
- for regex, replacement in _latin_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def cantonese_to_ipa(text):
- text = number_to_cantonese(text.upper())
- text = converter.convert(text).replace('-','').replace('$',' ')
- text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text)
- text = re.sub(r'[、;:]', ',', text)
- text = re.sub(r'\s*,\s*', ', ', text)
- text = re.sub(r'\s*。\s*', '. ', text)
- text = re.sub(r'\s*?\s*', '? ', text)  # full-width '?' -> ASCII '? '
- text = re.sub(r'\s*!\s*', '! ', text)
- text = re.sub(r'\s*$', '', text)
- return text
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_r101_fpn_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_r101_fpn_20e_coco.py
deleted file mode 100644
index 9cb3581910f74063eb1c62b9345a6493098d4a4a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_r101_fpn_20e_coco.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './cascade_rcnn_r50_fpn_20e_coco.py'
-model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/nasfcos_fpn.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/nasfcos_fpn.py
deleted file mode 100644
index 2daf79ef591373499184c624ccd27fb7456dec06..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/nasfcos_fpn.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule, caffe2_xavier_init
-from mmcv.ops.merge_cells import ConcatCell
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class NASFCOS_FPN(nn.Module):
- """FPN structure in NASFPN.
-
- Implementation of paper `NAS-FCOS: Fast Neural Architecture Search for
- Object Detection `_
-
- Args:
- in_channels (List[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale)
- num_outs (int): Number of output scales.
- start_level (int): Index of the start input backbone level used to
- build the feature pyramid. Default: 0.
- end_level (int): Index of the end input backbone level (exclusive) to
- build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool): It decides whether to add conv
- layers on top of the original feature maps. Default to False.
- If True, its actual mode is specified by `extra_convs_on_inputs`.
- conv_cfg (dict): dictionary to construct and config conv layer.
- norm_cfg (dict): dictionary to construct and config norm layer.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_outs,
- start_level=1,
- end_level=-1,
- add_extra_convs=False,
- conv_cfg=None,
- norm_cfg=None):
- super(NASFCOS_FPN, self).__init__()
- assert isinstance(in_channels, list)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.num_ins = len(in_channels)
- self.num_outs = num_outs
- self.norm_cfg = norm_cfg
- self.conv_cfg = conv_cfg
-
- if end_level == -1:
- self.backbone_end_level = self.num_ins
- assert num_outs >= self.num_ins - start_level
- else:
- self.backbone_end_level = end_level
- assert end_level <= len(in_channels)
- assert num_outs == end_level - start_level
- self.start_level = start_level
- self.end_level = end_level
- self.add_extra_convs = add_extra_convs
-
- self.adapt_convs = nn.ModuleList()
- for i in range(self.start_level, self.backbone_end_level):
- adapt_conv = ConvModule(
- in_channels[i],
- out_channels,
- 1,
- stride=1,
- padding=0,
- bias=False,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU', inplace=False))
- self.adapt_convs.append(adapt_conv)
-
- # C2 is omitted according to the paper
- extra_levels = num_outs - self.backbone_end_level + self.start_level
-
- def build_concat_cell(with_input1_conv, with_input2_conv):
- cell_conv_cfg = dict(
- kernel_size=1, padding=0, bias=False, groups=out_channels)
- return ConcatCell(
- in_channels=out_channels,
- out_channels=out_channels,
- with_out_conv=True,
- out_conv_cfg=cell_conv_cfg,
- out_norm_cfg=dict(type='BN'),
- out_conv_order=('norm', 'act', 'conv'),
- with_input1_conv=with_input1_conv,
- with_input2_conv=with_input2_conv,
- input_conv_cfg=conv_cfg,
- input_norm_cfg=norm_cfg,
- upsample_mode='nearest')
-
- # Denote c3=f0, c4=f1, c5=f2 for convenience
- self.fpn = nn.ModuleDict()
- self.fpn['c22_1'] = build_concat_cell(True, True)
- self.fpn['c22_2'] = build_concat_cell(True, True)
- self.fpn['c32'] = build_concat_cell(True, False)
- self.fpn['c02'] = build_concat_cell(True, False)
- self.fpn['c42'] = build_concat_cell(True, True)
- self.fpn['c36'] = build_concat_cell(True, True)
- self.fpn['c61'] = build_concat_cell(True, True) # f9
- self.extra_downsamples = nn.ModuleList()
- for i in range(extra_levels):
- extra_act_cfg = None if i == 0 \
- else dict(type='ReLU', inplace=False)
- self.extra_downsamples.append(
- ConvModule(
- out_channels,
- out_channels,
- 3,
- stride=2,
- padding=1,
- act_cfg=extra_act_cfg,
- order=('act', 'norm', 'conv')))
-
- def forward(self, inputs):
- """Forward function."""
- feats = [
- adapt_conv(inputs[i + self.start_level])
- for i, adapt_conv in enumerate(self.adapt_convs)
- ]
-
- for (i, module_name) in enumerate(self.fpn):
- idx_1, idx_2 = int(module_name[1]), int(module_name[2])
- res = self.fpn[module_name](feats[idx_1], feats[idx_2])
- feats.append(res)
-
- ret = []
- for (idx, input_idx) in zip([9, 8, 7], [1, 2, 3]): # add P3, P4, P5
- feats1, feats2 = feats[idx], feats[5]
- feats2_resize = F.interpolate(
- feats2,
- size=feats1.size()[2:],
- mode='bilinear',
- align_corners=False)
-
- feats_sum = feats1 + feats2_resize
- ret.append(
- F.interpolate(
- feats_sum,
- size=inputs[input_idx].size()[2:],
- mode='bilinear',
- align_corners=False))
-
- for submodule in self.extra_downsamples:
- ret.append(submodule(ret[-1]))
-
- return tuple(ret)
-
- def init_weights(self):
- """Initialize the weights of module."""
- for module in self.fpn.values():
- if hasattr(module, 'out_conv'):  # attribute name must match the access below
- caffe2_xavier_init(module.out_conv.conv)
-
- for modules in [
- self.adapt_convs.modules(),
- self.extra_downsamples.modules()
- ]:
- for module in modules:
- if isinstance(module, nn.Conv2d):
- caffe2_xavier_init(module)
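-
-# Shape-check sketch (hypothetical example, assuming ResNet-style backbone features):
-#   neck = NASFCOS_FPN(in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5)
-#   feats = [torch.rand(1, c, s, s) for c, s in zip([256, 512, 1024, 2048], [64, 32, 16, 8])]
-#   outs = neck(feats)   # tuple of 5 maps (P3-P7), the last two from the extra downsamples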
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x512_40k_voc12aug.py
deleted file mode 100644
index 89e6309f55f6b939f7d79271513da4934bbacbb6..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x512_40k_voc12aug.py
+++ /dev/null
@@ -1,39 +0,0 @@
-_base_ = './ocrnet_hr18_512x512_40k_voc12aug.py'
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- pretrained='open-mmlab://msra/hrnetv2_w48',
- backbone=dict(
- extra=dict(
- stage2=dict(num_channels=(48, 96)),
- stage3=dict(num_channels=(48, 96, 192)),
- stage4=dict(num_channels=(48, 96, 192, 384)))),
- decode_head=[
- dict(
- type='FCNHead',
- in_channels=[48, 96, 192, 384],
- channels=sum([48, 96, 192, 384]),
- input_transform='resize_concat',
- in_index=(0, 1, 2, 3),
- kernel_size=1,
- num_convs=1,
- norm_cfg=norm_cfg,
- concat_input=False,
- dropout_ratio=-1,
- num_classes=21,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- dict(
- type='OCRHead',
- in_channels=[48, 96, 192, 384],
- channels=512,
- ocr_channels=256,
- input_transform='resize_concat',
- in_index=(0, 1, 2, 3),
- norm_cfg=norm_cfg,
- dropout_ratio=-1,
- num_classes=21,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))
- ])
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/conv.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/conv.py
deleted file mode 100644
index cf54491997a48ac3e7fadc4183ab7bf3e831024c..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/conv.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from torch import nn
-
-from .registry import CONV_LAYERS
-
-CONV_LAYERS.register_module('Conv1d', module=nn.Conv1d)
-CONV_LAYERS.register_module('Conv2d', module=nn.Conv2d)
-CONV_LAYERS.register_module('Conv3d', module=nn.Conv3d)
-CONV_LAYERS.register_module('Conv', module=nn.Conv2d)
-
-
-def build_conv_layer(cfg, *args, **kwargs):
- """Build convolution layer.
-
- Args:
- cfg (None or dict): The conv layer config, which should contain:
- - type (str): Layer type.
- - layer args: Args needed to instantiate an conv layer.
- args (argument list): Arguments passed to the `__init__`
- method of the corresponding conv layer.
- kwargs (keyword arguments): Keyword arguments passed to the `__init__`
- method of the corresponding conv layer.
-
- Returns:
- nn.Module: Created conv layer.
- """
- if cfg is None:
- cfg_ = dict(type='Conv2d')
- else:
- if not isinstance(cfg, dict):
- raise TypeError('cfg must be a dict')
- if 'type' not in cfg:
- raise KeyError('the cfg dict must contain the key "type"')
- cfg_ = cfg.copy()
-
- layer_type = cfg_.pop('type')
- if layer_type not in CONV_LAYERS:
- raise KeyError(f'Unrecognized conv type {layer_type}')
- else:
- conv_layer = CONV_LAYERS.get(layer_type)
-
- layer = conv_layer(*args, **kwargs, **cfg_)
-
- return layer
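-
-# Usage sketch (mirrors the docstring above): the cfg dict only selects the registered layer
-# type; all remaining args/kwargs are forwarded to that layer's constructor.
-#   conv = build_conv_layer(dict(type='Conv2d'), 3, 16, kernel_size=3, padding=1)
-#   conv = build_conv_layer(None, 3, 16, kernel_size=1)   # cfg=None falls back to Conv2d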
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/nms.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/nms.py
deleted file mode 100644
index 6d9634281f486ab284091786886854c451368052..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/nms.py
+++ /dev/null
@@ -1,417 +0,0 @@
-import os
-
-import numpy as np
-import torch
-
-from annotator.uniformer.mmcv.utils import deprecated_api_warning
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['nms', 'softnms', 'nms_match', 'nms_rotated'])
-
-
-# This function is modified from: https://github.com/pytorch/vision/
-class NMSop(torch.autograd.Function):
-
- @staticmethod
- def forward(ctx, bboxes, scores, iou_threshold, offset, score_threshold,
- max_num):
- is_filtering_by_score = score_threshold > 0
- if is_filtering_by_score:
- valid_mask = scores > score_threshold
- bboxes, scores = bboxes[valid_mask], scores[valid_mask]
- valid_inds = torch.nonzero(
- valid_mask, as_tuple=False).squeeze(dim=1)
-
- inds = ext_module.nms(
- bboxes, scores, iou_threshold=float(iou_threshold), offset=offset)
-
- if max_num > 0:
- inds = inds[:max_num]
- if is_filtering_by_score:
- inds = valid_inds[inds]
- return inds
-
- @staticmethod
- def symbolic(g, bboxes, scores, iou_threshold, offset, score_threshold,
- max_num):
- from ..onnx import is_custom_op_loaded
- has_custom_op = is_custom_op_loaded()
- # TensorRT nms plugin is aligned with original nms in ONNXRuntime
- is_trt_backend = os.environ.get('ONNX_BACKEND') == 'MMCVTensorRT'
- if has_custom_op and (not is_trt_backend):
- return g.op(
- 'mmcv::NonMaxSuppression',
- bboxes,
- scores,
- iou_threshold_f=float(iou_threshold),
- offset_i=int(offset))
- else:
- from torch.onnx.symbolic_opset9 import select, squeeze, unsqueeze
- from ..onnx.onnx_utils.symbolic_helper import _size_helper
-
- boxes = unsqueeze(g, bboxes, 0)
- scores = unsqueeze(g, unsqueeze(g, scores, 0), 0)
-
- if max_num > 0:
- max_num = g.op(
- 'Constant',
- value_t=torch.tensor(max_num, dtype=torch.long))
- else:
- dim = g.op('Constant', value_t=torch.tensor(0))
- max_num = _size_helper(g, bboxes, dim)
- max_output_per_class = max_num
- iou_threshold = g.op(
- 'Constant',
- value_t=torch.tensor([iou_threshold], dtype=torch.float))
- score_threshold = g.op(
- 'Constant',
- value_t=torch.tensor([score_threshold], dtype=torch.float))
- nms_out = g.op('NonMaxSuppression', boxes, scores,
- max_output_per_class, iou_threshold,
- score_threshold)
- return squeeze(
- g,
- select(
- g, nms_out, 1,
- g.op(
- 'Constant',
- value_t=torch.tensor([2], dtype=torch.long))), 1)
-
-
-class SoftNMSop(torch.autograd.Function):
-
- @staticmethod
- def forward(ctx, boxes, scores, iou_threshold, sigma, min_score, method,
- offset):
- dets = boxes.new_empty((boxes.size(0), 5), device='cpu')
- inds = ext_module.softnms(
- boxes.cpu(),
- scores.cpu(),
- dets.cpu(),
- iou_threshold=float(iou_threshold),
- sigma=float(sigma),
- min_score=float(min_score),
- method=int(method),
- offset=int(offset))
- return dets, inds
-
- @staticmethod
- def symbolic(g, boxes, scores, iou_threshold, sigma, min_score, method,
- offset):
- from packaging import version
- assert version.parse(torch.__version__) >= version.parse('1.7.0')
- nms_out = g.op(
- 'mmcv::SoftNonMaxSuppression',
- boxes,
- scores,
- iou_threshold_f=float(iou_threshold),
- sigma_f=float(sigma),
- min_score_f=float(min_score),
- method_i=int(method),
- offset_i=int(offset),
- outputs=2)
- return nms_out
-
-
-@deprecated_api_warning({'iou_thr': 'iou_threshold'})
-def nms(boxes, scores, iou_threshold, offset=0, score_threshold=0, max_num=-1):
- """Dispatch to either CPU or GPU NMS implementations.
-
- The input can be either torch tensor or numpy array. GPU NMS will be used
- if the input is gpu tensor, otherwise CPU NMS
- will be used. The returned type will always be the same as inputs.
-
- Arguments:
- boxes (torch.Tensor or np.ndarray): boxes in shape (N, 4).
- scores (torch.Tensor or np.ndarray): scores in shape (N, ).
- iou_threshold (float): IoU threshold for NMS.
- offset (int, 0 or 1): boxes' width or height is (x2 - x1 + offset).
- score_threshold (float): score threshold for NMS.
- max_num (int): maximum number of boxes after NMS.
-
- Returns:
- tuple: kept dets(boxes and scores) and indice, which is always the \
- same data type as the input.
-
- Example:
- >>> boxes = np.array([[49.1, 32.4, 51.0, 35.9],
- >>> [49.3, 32.9, 51.0, 35.3],
- >>> [49.2, 31.8, 51.0, 35.4],
- >>> [35.1, 11.5, 39.1, 15.7],
- >>> [35.6, 11.8, 39.3, 14.2],
- >>> [35.3, 11.5, 39.9, 14.5],
- >>> [35.2, 11.7, 39.7, 15.7]], dtype=np.float32)
- >>> scores = np.array([0.9, 0.9, 0.5, 0.5, 0.5, 0.4, 0.3],\
- dtype=np.float32)
- >>> iou_threshold = 0.6
- >>> dets, inds = nms(boxes, scores, iou_threshold)
- >>> assert len(inds) == len(dets) == 3
- """
- assert isinstance(boxes, (torch.Tensor, np.ndarray))
- assert isinstance(scores, (torch.Tensor, np.ndarray))
- is_numpy = False
- if isinstance(boxes, np.ndarray):
- is_numpy = True
- boxes = torch.from_numpy(boxes)
- if isinstance(scores, np.ndarray):
- scores = torch.from_numpy(scores)
- assert boxes.size(1) == 4
- assert boxes.size(0) == scores.size(0)
- assert offset in (0, 1)
-
- if torch.__version__ == 'parrots':
- indata_list = [boxes, scores]
- indata_dict = {
- 'iou_threshold': float(iou_threshold),
- 'offset': int(offset)
- }
- inds = ext_module.nms(*indata_list, **indata_dict)
- else:
- inds = NMSop.apply(boxes, scores, iou_threshold, offset,
- score_threshold, max_num)
- dets = torch.cat((boxes[inds], scores[inds].reshape(-1, 1)), dim=1)
- if is_numpy:
- dets = dets.cpu().numpy()
- inds = inds.cpu().numpy()
- return dets, inds
-
-
-@deprecated_api_warning({'iou_thr': 'iou_threshold'})
-def soft_nms(boxes,
- scores,
- iou_threshold=0.3,
- sigma=0.5,
- min_score=1e-3,
- method='linear',
- offset=0):
- """Dispatch to only CPU Soft NMS implementations.
-
- The input can be either a torch tensor or numpy array.
- The returned type will always be the same as inputs.
-
- Arguments:
- boxes (torch.Tensor or np.ndarray): boxes in shape (N, 4).
- scores (torch.Tensor or np.ndarray): scores in shape (N, ).
- iou_threshold (float): IoU threshold for NMS.
- sigma (float): hyperparameter for gaussian method
- min_score (float): score filter threshold
- method (str): either 'linear' or 'gaussian'
- offset (int, 0 or 1): boxes' width or height is (x2 - x1 + offset).
-
- Returns:
- tuple: kept dets(boxes and scores) and indice, which is always the \
- same data type as the input.
-
- Example:
- >>> boxes = np.array([[4., 3., 5., 3.],
- >>> [4., 3., 5., 4.],
- >>> [3., 1., 3., 1.],
- >>> [3., 1., 3., 1.],
- >>> [3., 1., 3., 1.],
- >>> [3., 1., 3., 1.]], dtype=np.float32)
- >>> scores = np.array([0.9, 0.9, 0.5, 0.5, 0.4, 0.0], dtype=np.float32)
- >>> iou_threshold = 0.6
- >>> dets, inds = soft_nms(boxes, scores, iou_threshold, sigma=0.5)
- >>> assert len(inds) == len(dets) == 5
- """
-
- assert isinstance(boxes, (torch.Tensor, np.ndarray))
- assert isinstance(scores, (torch.Tensor, np.ndarray))
- is_numpy = False
- if isinstance(boxes, np.ndarray):
- is_numpy = True
- boxes = torch.from_numpy(boxes)
- if isinstance(scores, np.ndarray):
- scores = torch.from_numpy(scores)
- assert boxes.size(1) == 4
- assert boxes.size(0) == scores.size(0)
- assert offset in (0, 1)
- method_dict = {'naive': 0, 'linear': 1, 'gaussian': 2}
- assert method in method_dict.keys()
-
- if torch.__version__ == 'parrots':
- dets = boxes.new_empty((boxes.size(0), 5), device='cpu')
- indata_list = [boxes.cpu(), scores.cpu(), dets.cpu()]
- indata_dict = {
- 'iou_threshold': float(iou_threshold),
- 'sigma': float(sigma),
- 'min_score': min_score,
- 'method': method_dict[method],
- 'offset': int(offset)
- }
- inds = ext_module.softnms(*indata_list, **indata_dict)
- else:
- dets, inds = SoftNMSop.apply(boxes.cpu(), scores.cpu(),
- float(iou_threshold), float(sigma),
- float(min_score), method_dict[method],
- int(offset))
-
- dets = dets[:inds.size(0)]
-
- if is_numpy:
- dets = dets.cpu().numpy()
- inds = inds.cpu().numpy()
- return dets, inds
- else:
- return dets.to(device=boxes.device), inds.to(device=boxes.device)
-
-
-def batched_nms(boxes, scores, idxs, nms_cfg, class_agnostic=False):
- """Performs non-maximum suppression in a batched fashion.
-
- Modified from https://github.com/pytorch/vision/blob
- /505cd6957711af790211896d32b40291bea1bc21/torchvision/ops/boxes.py#L39.
- In order to perform NMS independently per class, we add an offset to all
- the boxes. The offset is dependent only on the class idx, and is large
- enough so that boxes from different classes do not overlap.
-
- Arguments:
- boxes (torch.Tensor): boxes in shape (N, 4).
- scores (torch.Tensor): scores in shape (N, ).
- idxs (torch.Tensor): each index value correspond to a bbox cluster,
- and NMS will not be applied between elements of different idxs,
- shape (N, ).
- nms_cfg (dict): specify nms type and other parameters like iou_thr.
- Possible keys includes the following.
-
- - iou_thr (float): IoU threshold used for NMS.
- - split_thr (float): threshold number of boxes. In some cases the
- number of boxes is large (e.g., 200k). To avoid OOM during
- training, the users could set `split_thr` to a small value.
- If the number of boxes is greater than the threshold, it will
- perform NMS on each group of boxes separately and sequentially.
- Defaults to 10000.
- class_agnostic (bool): if true, nms is class agnostic,
- i.e. IoU thresholding happens over all boxes,
- regardless of the predicted class.
-
- Returns:
- tuple: kept dets and indice.
- """
- nms_cfg_ = nms_cfg.copy()
- class_agnostic = nms_cfg_.pop('class_agnostic', class_agnostic)
- if class_agnostic:
- boxes_for_nms = boxes
- else:
- max_coordinate = boxes.max()
- offsets = idxs.to(boxes) * (max_coordinate + torch.tensor(1).to(boxes))
- boxes_for_nms = boxes + offsets[:, None]
-
- nms_type = nms_cfg_.pop('type', 'nms')
- nms_op = eval(nms_type)
-
- split_thr = nms_cfg_.pop('split_thr', 10000)
- # Won't split to multiple nms nodes when exporting to onnx
- if boxes_for_nms.shape[0] < split_thr or torch.onnx.is_in_onnx_export():
- dets, keep = nms_op(boxes_for_nms, scores, **nms_cfg_)
- boxes = boxes[keep]
- # -1 indexing works abnormal in TensorRT
- # This assumes `dets` has 5 dimensions where
- # the last dimension is score.
- # TODO: more elegant way to handle the dimension issue.
- # Some type of nms would reweight the score, such as SoftNMS
- scores = dets[:, 4]
- else:
- max_num = nms_cfg_.pop('max_num', -1)
- total_mask = scores.new_zeros(scores.size(), dtype=torch.bool)
- # Some type of nms would reweight the score, such as SoftNMS
- scores_after_nms = scores.new_zeros(scores.size())
- for id in torch.unique(idxs):
- mask = (idxs == id).nonzero(as_tuple=False).view(-1)
- dets, keep = nms_op(boxes_for_nms[mask], scores[mask], **nms_cfg_)
- total_mask[mask[keep]] = True
- scores_after_nms[mask[keep]] = dets[:, -1]
- keep = total_mask.nonzero(as_tuple=False).view(-1)
-
- scores, inds = scores_after_nms[keep].sort(descending=True)
- keep = keep[inds]
- boxes = boxes[keep]
-
- if max_num > 0:
- keep = keep[:max_num]
- boxes = boxes[:max_num]
- scores = scores[:max_num]
-
- return torch.cat([boxes, scores[:, None]], -1), keep
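-
-# Minimal usage sketch (hypothetical example): identical boxes with different `idxs` are never
-# suppressed against each other, because the class-dependent offset described above separates them.
-#   boxes = torch.tensor([[0., 0., 10., 10.], [0., 0., 10., 10.]])
-#   scores = torch.tensor([0.9, 0.8])
-#   idxs = torch.tensor([0, 1])   # two different classes
-#   dets, keep = batched_nms(boxes, scores, idxs, dict(type='nms', iou_threshold=0.5))
-#   # both boxes survive; with idxs = torch.tensor([0, 0]) only the higher-scoring one would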
-
-
-def nms_match(dets, iou_threshold):
- """Matched dets into different groups by NMS.
-
- NMS match is similar to NMS, but when a bbox is suppressed, nms match will
- record the indices of the suppressed bboxes and group them with the index of
- the kept bbox. Within each group, indices are sorted in score order.
-
- Arguments:
- dets (torch.Tensor | np.ndarray): Det boxes with scores, shape (N, 5).
- iou_threshold (float): IoU threshold for NMS.
-
- Returns:
- List[torch.Tensor | np.ndarray]: The outer list corresponds different
- matched group, the inner Tensor corresponds the indices for a group
- in score order.
- """
- if dets.shape[0] == 0:
- matched = []
- else:
- assert dets.shape[-1] == 5, 'inputs dets.shape should be (N, 5), ' \
- f'but get {dets.shape}'
- if isinstance(dets, torch.Tensor):
- dets_t = dets.detach().cpu()
- else:
- dets_t = torch.from_numpy(dets)
- indata_list = [dets_t]
- indata_dict = {'iou_threshold': float(iou_threshold)}
- matched = ext_module.nms_match(*indata_list, **indata_dict)
- if torch.__version__ == 'parrots':
- matched = matched.tolist()
-
- if isinstance(dets, torch.Tensor):
- return [dets.new_tensor(m, dtype=torch.long) for m in matched]
- else:
- return [np.array(m, dtype=np.int64) for m in matched]  # np.int was removed in NumPy 1.24
-
-
-def nms_rotated(dets, scores, iou_threshold, labels=None):
- """Performs non-maximum suppression (NMS) on the rotated boxes according to
- their intersection-over-union (IoU).
-
- Rotated NMS iteratively removes lower scoring rotated boxes which have an
- IoU greater than iou_threshold with another (higher scoring) rotated box.
-
- Args:
- dets (Tensor): Rotated boxes in shape (N, 5). They are expected to \
- be in (x_ctr, y_ctr, width, height, angle_radian) format.
- scores (Tensor): scores in shape (N, ).
- iou_threshold (float): IoU thresh for NMS.
- labels (Tensor): boxes' label in shape (N,).
-
- Returns:
- tuple: kept dets(boxes and scores) and indice, which is always the \
- same data type as the input.
- """
- if dets.shape[0] == 0:
- return dets, None
- multi_label = labels is not None
- if multi_label:
- dets_wl = torch.cat((dets, labels.unsqueeze(1)), 1)
- else:
- dets_wl = dets
- _, order = scores.sort(0, descending=True)
- dets_sorted = dets_wl.index_select(0, order)
-
- if torch.__version__ == 'parrots':
- keep_inds = ext_module.nms_rotated(
- dets_wl,
- scores,
- order,
- dets_sorted,
- iou_threshold=iou_threshold,
- multi_label=multi_label)
- else:
- keep_inds = ext_module.nms_rotated(dets_wl, scores, order, dets_sorted,
- iou_threshold, multi_label)
- dets = torch.cat((dets[keep_inds], scores[keep_inds].reshape(-1, 1)),
- dim=1)
- return dets, keep_inds
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/distributed_deprecated.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/distributed_deprecated.py
deleted file mode 100644
index 676937a2085d4da20fa87923041a200fca6214eb..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/distributed_deprecated.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.distributed as dist
-import torch.nn as nn
-from torch._utils import (_flatten_dense_tensors, _take_tensors,
- _unflatten_dense_tensors)
-
-from annotator.uniformer.mmcv.utils import TORCH_VERSION, digit_version
-from .registry import MODULE_WRAPPERS
-from .scatter_gather import scatter_kwargs
-
-
-@MODULE_WRAPPERS.register_module()
-class MMDistributedDataParallel(nn.Module):
-
- def __init__(self,
- module,
- dim=0,
- broadcast_buffers=True,
- bucket_cap_mb=25):
- super(MMDistributedDataParallel, self).__init__()
- self.module = module
- self.dim = dim
- self.broadcast_buffers = broadcast_buffers
-
- self.broadcast_bucket_size = bucket_cap_mb * 1024 * 1024
- self._sync_params()
-
- def _dist_broadcast_coalesced(self, tensors, buffer_size):
- for tensors in _take_tensors(tensors, buffer_size):
- flat_tensors = _flatten_dense_tensors(tensors)
- dist.broadcast(flat_tensors, 0)
- for tensor, synced in zip(
- tensors, _unflatten_dense_tensors(flat_tensors, tensors)):
- tensor.copy_(synced)
-
- def _sync_params(self):
- module_states = list(self.module.state_dict().values())
- if len(module_states) > 0:
- self._dist_broadcast_coalesced(module_states,
- self.broadcast_bucket_size)
- if self.broadcast_buffers:
- if (TORCH_VERSION != 'parrots'
- and digit_version(TORCH_VERSION) < digit_version('1.0')):
- buffers = [b.data for b in self.module._all_buffers()]
- else:
- buffers = [b.data for b in self.module.buffers()]
- if len(buffers) > 0:
- self._dist_broadcast_coalesced(buffers,
- self.broadcast_bucket_size)
-
- def scatter(self, inputs, kwargs, device_ids):
- return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
-
- def forward(self, *inputs, **kwargs):
- inputs, kwargs = self.scatter(inputs, kwargs,
- [torch.cuda.current_device()])
- return self.module(*inputs[0], **kwargs[0])
-
- def train_step(self, *inputs, **kwargs):
- inputs, kwargs = self.scatter(inputs, kwargs,
- [torch.cuda.current_device()])
- output = self.module.train_step(*inputs[0], **kwargs[0])
- return output
-
- def val_step(self, *inputs, **kwargs):
- inputs, kwargs = self.scatter(inputs, kwargs,
- [torch.cuda.current_device()])
- output = self.module.val_step(*inputs[0], **kwargs[0])
- return output
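-
-# Usage sketch (hypothetical example; assumes the default process group has already been
-# initialised elsewhere, e.g. with dist.init_process_group, and that `MyModel` is a placeholder
-# for an actual module). The wrapper broadcasts parameters/buffers from rank 0 at construction
-# time and scatters inputs to the current CUDA device:
-#   model = MMDistributedDataParallel(MyModel().cuda())
-#   output = model(batch)   # forward() runs on torch.cuda.current_device()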
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/packages/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/packages/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/rotate.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/rotate.py
deleted file mode 100644
index 74795ba922bb376e24858760e63dc9124ef22b9f..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/rotate.py
+++ /dev/null
@@ -1,64 +0,0 @@
-from distutils.util import convert_path
-from distutils import log
-from distutils.errors import DistutilsOptionError
-import os
-import shutil
-
-from setuptools import Command
-
-
-class rotate(Command):
- """Delete older distributions"""
-
- description = "delete older distributions, keeping N newest files"
- user_options = [
- ('match=', 'm', "patterns to match (required)"),
- ('dist-dir=', 'd', "directory where the distributions are"),
- ('keep=', 'k', "number of matching distributions to keep"),
- ]
-
- boolean_options = []
-
- def initialize_options(self):
- self.match = None
- self.dist_dir = None
- self.keep = None
-
- def finalize_options(self):
- if self.match is None:
- raise DistutilsOptionError(
- "Must specify one or more (comma-separated) match patterns "
- "(e.g. '.zip' or '.egg')"
- )
- if self.keep is None:
- raise DistutilsOptionError("Must specify number of files to keep")
- try:
- self.keep = int(self.keep)
- except ValueError as e:
- raise DistutilsOptionError("--keep must be an integer") from e
- if isinstance(self.match, str):
- self.match = [
- convert_path(p.strip()) for p in self.match.split(',')
- ]
- self.set_undefined_options('bdist', ('dist_dir', 'dist_dir'))
-
- def run(self):
- self.run_command("egg_info")
- from glob import glob
-
- for pattern in self.match:
- pattern = self.distribution.get_name() + '*' + pattern
- files = glob(os.path.join(self.dist_dir, pattern))
- files = [(os.path.getmtime(f), f) for f in files]
- files.sort()
- files.reverse()
-
- log.info("%d file(s) matching %s", len(files), pattern)
- files = files[self.keep:]
- for (t, f) in files:
- log.info("Deleting %s", f)
- if not self.dry_run:
- if os.path.isdir(f):
- shutil.rmtree(f)
- else:
- os.unlink(f)
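-
-# Usage sketch: as a setuptools command this is normally invoked from the command line, e.g.
-#   python setup.py rotate --match=.egg,.zip --keep=2
-# which keeps the two newest matching files in the dist directory and deletes the rest.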
diff --git a/spaces/Baptlem/UCDR-Net/app.py b/spaces/Baptlem/UCDR-Net/app.py
deleted file mode 100644
index b7b4161618fbfa129372075c1fa5bb8637d46774..0000000000000000000000000000000000000000
--- a/spaces/Baptlem/UCDR-Net/app.py
+++ /dev/null
@@ -1,360 +0,0 @@
-# This file is adapted from https://huggingface.co/spaces/diffusers/controlnet-canny/blob/main/app.py
-# The original license file is LICENSE.ControlNet in this repo.
-from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel, FlaxDPMSolverMultistepScheduler
-from transformers import CLIPTokenizer, FlaxCLIPTextModel, set_seed
-from flax.training.common_utils import shard
-from flax.jax_utils import replicate
-from diffusers.utils import load_image
-import jax.numpy as jnp
-import jax
-import cv2
-from PIL import Image
-import numpy as np
-import gradio as gr
-import os
-
-
-if gr.__version__ != "3.28.3": #doesn't work...
- os.system("pip uninstall -y gradio")
- os.system("pip install gradio==3.28.3")
-
-title_description = """
-# Unlimited Controlled Domain Randomization Network for Bridging the Sim2Real Gap in Robotics
-
-"""
-
-description = """
-While existing ControlNet and public diffusion models are predominantly geared towards high-resolution images (512x512 or above) and intricate artistic detail generation, there's an untapped potential of these models in Automatic Data Augmentation (ADA).
-By harnessing the inherent variance in prompt-conditioned generated images, we can significantly boost the visual diversity of training samples for computer vision pipelines.
-This is particularly relevant in the field of robotics, where deep learning is increasingly playing a pivotal role in training policies for robotic manipulation from images.
-
-In this HuggingFace sprint, we present UCDR-Net (Unlimited Controlled Domain Randomization Network), a novel CannyEdge mini-ControlNet trained on Stable Diffusion 1.5 with mixed datasets.
-Our model generates photorealistic and varied renderings from simplistic robotic simulation images, enabling real-time data augmentation for robotic vision training.
-
-We specifically designed UCDR-Net to be fast and composition preserving, with an emphasis on lower resolution images (128x128) for online data augmentation in typical preprocessing pipelines.
-Our choice of Canny Edge version of ControlNet ensures shape and structure preservation in the image, which is crucial for visuomotor policy learning.
-
-We trained ControlNet from scratch using only 128x128 images, preprocessing the training datasets and extracting Canny Edge maps.
-We then trained four Control-Nets with different mixtures of 2 datasets (Coyo-700M and Bridge Data) and showcased the results.
-* [Coyo-700M](https://github.com/kakaobrain/coyo-dataset)
-* [Bridge](https://sites.google.com/view/bridgedata)
-
-Model Description and Training Process: Please refer to the readme file attached to the model repository.
-
-Model Repository: [ControlNet repo](https://huggingface.co/Baptlem/UCDR-Net_models)
-
-"""
-
-traj_description = """
-To demonstrate UCDR-Net's capabilities, we generated a trajectory of our simulated robotic environment and presented the resulting videos for each model.
-We batched the frames for each video and performed independent inference for each frame, which explains the "wobbling" effect.
-Prompt used for every video: "A robotic arm with a gripper and a small cube on a table, super realistic, industrial background"
-
-"""
-
-perfo_description = """
-Our model has been benchmarked on a node of 8 A100 80Go GPUs, achieving an impressive 170 FPS image generation rate!
-
-To run the benchmark, we loaded one of our models on every GPU of the node and retrieved an episode from our simulation.
-For every frame of the episode, we preprocess the image (resize, Canny, ...) and run the resulting Canny map through the GPUs.
-We repeated this procedure for different batch sizes (BS).
-
-The larger the BS, the higher the FPS: increasing the BS takes better advantage of the parallelism across the GPUs.
-"""
-
-conclusion_description = """
-UCDR-Net stands as a natural development in bridging the Sim2Real gap in robotics by providing real-time data augmentation for training visual policies.
-We are excited to share our work with the HuggingFace community and contribute to the advancement of robotic vision training techniques.
-
-"""
-
-def create_key(seed=0):
- return jax.random.PRNGKey(seed)
-
-def load_controlnet(controlnet_version):
- controlnet, controlnet_params = FlaxControlNetModel.from_pretrained(
- "Baptlem/UCDR-Net_models",
- subfolder=controlnet_version,
- from_flax=True,
- dtype=jnp.float32,
- )
- return controlnet, controlnet_params
-
-
-def load_sb_pipe(controlnet_version, sb_path="runwayml/stable-diffusion-v1-5"):
- controlnet, controlnet_params = load_controlnet(controlnet_version)
-
- scheduler, scheduler_params = FlaxDPMSolverMultistepScheduler.from_pretrained(
- sb_path,
- subfolder="scheduler"
- )
-
- pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained(
- sb_path,
- controlnet=controlnet,
- revision="flax",
- dtype=jnp.bfloat16
- )
-
- pipe.scheduler = scheduler
- params["controlnet"] = controlnet_params
- params["scheduler"] = scheduler_params
- return pipe, params
-
-
-
-controlnet_path = "Baptlem/UCDR-Net_models"
-controlnet_version = "coyo-500k"
-
-# Constants
-low_threshold = 100
-high_threshold = 200
-
-print(os.path.abspath('.'))
-print(os.listdir("."))
-print("Gradio version:", gr.__version__)
-# pipe.enable_xformers_memory_efficient_attention()
-# pipe.enable_model_cpu_offload()
-# pipe.enable_attention_slicing()
-print("Loaded models...")
-def pipe_inference(
- image,
- prompt,
- is_canny=False,
- num_samples=4,
- resolution=128,
- num_inference_steps=50,
- guidance_scale=7.5,
- model="coyo-500k",
- seed=0,
- negative_prompt="",
- ):
- print("Loading pipe")
- pipe, params = load_sb_pipe(model)
-
- if not isinstance(image, np.ndarray):
- image = np.array(image)
-
- processed_image = resize_image(image, resolution) #-> PIL
-
- if not is_canny:
- resized_image, processed_image = preprocess_canny(processed_image, resolution)
-
- rng = create_key(seed)
- rng = jax.random.split(rng, jax.device_count())
-
- prompt_ids = pipe.prepare_text_inputs([prompt] * num_samples)
- negative_prompt_ids = pipe.prepare_text_inputs([negative_prompt] * num_samples)
- processed_image = pipe.prepare_image_inputs([processed_image] * num_samples)
-
- p_params = replicate(params)
- prompt_ids = shard(prompt_ids)
- negative_prompt_ids = shard(negative_prompt_ids)
- processed_image = shard(processed_image)
- print("Inference...")
- output = pipe(
- prompt_ids=prompt_ids,
- image=processed_image,
- params=p_params,
- prng_seed=rng,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- neg_prompt_ids=negative_prompt_ids,
- jit=True,
- ).images
- print("Finished inference...")
- # all_outputs = []
- # all_outputs.append(image)
- # if not is_canny:
- # all_outputs.append(resized_image)
-
- # for image in output.images:
- # all_outputs.append(image)
-
- all_outputs = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:])))
- return all_outputs
-
-def resize_image(image, resolution):
- if not isinstance(image, np.ndarray):
- image = np.array(image)
- h, w = image.shape[:2]
- ratio = w/h
- if ratio > 1 :
- resized_image = cv2.resize(image, (int(resolution*ratio), resolution), interpolation=cv2.INTER_NEAREST)
- elif ratio < 1 :
- resized_image = cv2.resize(image, (resolution, int(resolution/ratio)), interpolation=cv2.INTER_NEAREST)
- else:
- resized_image = cv2.resize(image, (resolution, resolution), interpolation=cv2.INTER_NEAREST)
-
- return Image.fromarray(resized_image)
-
-
-def preprocess_canny(image, resolution=128):
- if not isinstance(image, np.ndarray):
- image = np.array(image)
-
- processed_image = cv2.Canny(image, low_threshold, high_threshold)
- processed_image = processed_image[:, :, None]
- processed_image = np.concatenate([processed_image, processed_image, processed_image], axis=2)
-
- resized_image = Image.fromarray(image)
- processed_image = Image.fromarray(processed_image)
- return resized_image, processed_image
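-
-# Illustrative sketch (hypothetical example): the preprocessing used before inference is
-# resize -> Canny -> 3-channel stack.
-#   img = np.zeros((256, 256, 3), dtype=np.uint8)
-#   small = resize_image(img, 128)              # PIL.Image with short side 128
-#   rgb, canny = preprocess_canny(small, 128)   # both PIL.Image, canny is the control image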
-
-
-def create_demo(process, max_images=12, default_num_images=4):
- with gr.Blocks() as demo:
- with gr.Row():
- gr.Markdown(title_description)
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type='numpy')
- prompt = gr.Textbox(label='Prompt')
- run_button = gr.Button(label='Run')
- with gr.Accordion('Advanced options', open=False):
- is_canny = gr.Checkbox(
- label='Is canny', value=False)
- num_samples = gr.Slider(label='Images',
- minimum=1,
- maximum=max_images,
- value=default_num_images,
- step=1)
- """
- canny_low_threshold = gr.Slider(
- label='Canny low threshold',
- minimum=1,
- maximum=255,
- value=100,
- step=1)
- canny_high_threshold = gr.Slider(
- label='Canny high threshold',
- minimum=1,
- maximum=255,
- value=200,
- step=1)
- """
- resolution = gr.Slider(label='Resolution',
- minimum=128,
- maximum=128,
- value=128,
- step=1)
- num_steps = gr.Slider(label='Steps',
- minimum=1,
- maximum=100,
- value=20,
- step=1)
- guidance_scale = gr.Slider(label='Guidance Scale',
- minimum=0.1,
- maximum=30.0,
- value=7.5,
- step=0.1)
- model = gr.Dropdown(choices=["coyo-500k", "bridge-2M", "coyo1M-bridge2M", "coyo2M-bridge325k"],
- value="coyo-500k",
- label="Model used for inference",
- info="Find every models at https://huggingface.co/Baptlem/UCDR-Net_models")
- seed = gr.Slider(label='Seed',
- minimum=-1,
- maximum=2147483647,
- step=1,
- randomize=True)
- n_prompt = gr.Textbox(
- label='Negative Prompt',
- value=
- 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
- )
- with gr.Column():
- result = gr.Gallery(label='Output',
- show_label=False,
- elem_id='gallery').style(grid=2,
- height='auto')
-
- with gr.Row():
- gr.Video("./trajectory_hf/trajectory_coyo2M-bridge325k_64.avi",
- format="avi",
- interactive=False).style(height=512,
- width=512)
-
- with gr.Row():
- gr.Markdown(description)
-
- with gr.Row():
- with gr.Column():
- gr.Markdown(traj_description)
- with gr.Column():
- gr.Video("./trajectory_hf/trajectory.avi",
- format="avi",
- interactive=False)
-
- with gr.Row():
- with gr.Column():
- gr.Markdown("Trajectory processed with coyo-500k model :")
- with gr.Column():
- gr.Video("./trajectory_hf/trajectory_coyo-500k.avi",
- format="avi",
- interactive=False)
-
- with gr.Row():
- with gr.Column():
- gr.Markdown("Trajectory processed with bridge-2M model :")
- with gr.Column():
- gr.Video("./trajectory_hf/trajectory_bridge-2M.avi",
- format="avi",
- interactive=False)
-
- with gr.Row():
- with gr.Column():
- gr.Markdown("Trajectory processed with coyo1M-bridge2M model :")
- with gr.Column():
- gr.Video("./trajectory_hf/trajectory_coyo1M-bridge2M.avi",
- format="avi",
- interactive=False)
-
- with gr.Row():
- with gr.Column():
- gr.Markdown("Trajectory processed with coyo2M-bridge325k model :")
- with gr.Column():
- gr.Video("./trajectory_hf/trajectory_coyo2M-bridge325k.avi",
- format="avi",
- interactive=False)
-
- with gr.Row():
- with gr.Column():
- gr.Markdown(perfo_description)
- with gr.Column():
- gr.Image("./perfo_rtx.png",
- interactive=False)
-
- with gr.Row():
- gr.Markdown(conclusion_description)
-
-
-
- inputs = [
- input_image,
- prompt,
- is_canny,
- num_samples,
- resolution,
- #canny_low_threshold,
- #canny_high_threshold,
- num_steps,
- guidance_scale,
- model,
- seed,
- n_prompt,
- ]
- prompt.submit(fn=process, inputs=inputs, outputs=result)
- run_button.click(fn=process,
- inputs=inputs,
- outputs=result,
- api_name='canny')
-
- return demo
-
-if __name__ == '__main__':
-
- demo = create_demo(pipe_inference)
- demo.queue().launch()
- # gr.Interface(create_demo).launch()
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Archivo Zip Brawlhalla.md b/spaces/Benson/text-generation/Examples/Descargar Archivo Zip Brawlhalla.md
deleted file mode 100644
index b61b47274a55375751815e8d053d0509735e01a0..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Archivo Zip Brawlhalla.md
+++ /dev/null
@@ -1,88 +0,0 @@
-
-
How to download Naruto x Boruto Ninja Voltage from the Play Store
-
If you are a fan of the Naruto and Boruto anime series, you may want to try Naruto x Boruto Ninja Voltage, a popular mobile game that combines action, strategy, and role-playing elements. In this game, you can collect your favorite shinobi characters, build your own ninja fortress, and battle other players or giant bosses. In this article, we will show you how to download and install Naruto x Boruto Ninja Voltage from the Play Store, as well as how to play and enjoy the game.
-
What is Naruto x Boruto Ninja Voltage?
-
A brief introduction to the game and its features
-
Naruto x Boruto Ninja Voltage is a free-to-play game developed by Bandai Namco Entertainment Inc. It is based on the popular manga and anime series Naruto and its sequel Boruto. The game features characters from both series, such as Naruto Uzumaki, Sasuke Uchiha, Boruto Uzumaki, Sarada Uchiha, and many more. You can upgrade and evolve your ninjas to become the strongest clan.
The game has two main modes: fortress mode and mission mode. In fortress mode, you can design your own ninja fortress with traps, shinobi, and defense systems. You can also attack other players' fortresses and compete for battle rankings. In mission mode, you can join a shinobi guild and go on missions with up to four players. You can also fight giant unsealed bosses in surprise-attack missions.
-
The game also offers fast-paced shinobi action with simple controls and beautiful 3D anime graphics. You can perform ninja combos and finish off your enemies with powerful ninjutsu attacks, such as Naruto's Rasengan or Sasuke's Chidori. You can also earn rewards by completing various ninja missions.
-
How to download and install the game from the Play Store
-
Step-by-step instructions with screenshots
-
Open the Play Store app on your Android device.
-
Search for "Naruto x Boruto Ninja Voltage" in the search bar.
-
Tap the game icon that appears in the results.
-
Tap the "Install" button to start downloading the game.
-
Wait for the download to finish, then tap "Open" to launch the game.
-
Accept the game's terms of service and privacy policy.
-
Choose your preferred language and server.
-
Enjoy the game!
-
How to play and enjoy the game
-
Some tips and tricks for beginners
-
If you are new to Naruto x Boruto Ninja Voltage, here are some tips and tricks that can help you get started:
-
Complete the tutorial missions to learn the basics of the game.
-
Collect hero fragments from missions to unlock more characters.
-
Summon ninja cards from banners to equip your characters with jutsu and stat boosts.
-
Limit-break your cards with frogs or duplicates to raise their level and power.
-
Upgrade your fortress facilities with ryo and chakra to improve your defense and offense.
-
Join a guild and cooperate with other players to earn medals and rewards.
-
Take part in events and special missions to obtain exclusive items and characters.
-
Have fun and experiment with different team combinations and strategies.
-
Some sources for more information and reviews
-
[Official website]: The game's official website, where you can find the latest news, updates, and announcements.
-
[Official Facebook page]: The game's official Facebook page, where you can interact with other fans, get tips, and join events.
-
[Reddit community]: A subreddit dedicated to the game, where you can discuss, share, and ask questions about the game.
-
[YouTube channel]: A YouTube channel featuring videos, guides, reviews, and more about the game.
-
[Google Play Store]: The game's Google Play Store page, where you can download the game, read user reviews, and rate the game.
-
Conclusion
-
A summary of the main points and a call to action
-
Naruto x Boruto Ninja Voltage is a fun and exciting game that lets you experience the world of Naruto and Boruto on your mobile device. You can collect and customize your favorite shinobi characters, build and defend your ninja fortress, and team up with other players to complete missions and fight bosses. The game is easy to download and install from the Play Store, and you can follow our tips and tricks to get started. If you are a fan of the Naruto and Boruto anime series, you should definitely give this game a try. Download Naruto x Boruto Ninja Voltage from the Play Store today and unleash your ninja potential!
-
Frequently asked questions
-
Here are some common questions people have about Naruto x Boruto Ninja Voltage:
-
Is Naruto x Boruto Ninja Voltage free to play?
-
Yes, Naruto x Boruto Ninja Voltage is free to play. However, there are some optional in-app purchases that can enhance your gaming experience.
-
What are the system requirements for Naruto x Boruto Ninja Voltage?
-
How can I get more shinobi characters in Naruto x Boruto Ninja Voltage?
-
You can get more shinobi characters by collecting hero fragments from missions or by summoning ninja cards from banners. You can also obtain some characters as rewards from events or special missions.
-
How can I strengthen my shinobi characters in Naruto x Boruto Ninja Voltage?
-
You can strengthen your shinobi characters by upgrading and evolving their ninja cards, limit-breaking their cards with frogs or duplicates, awakening their abilities with materials, and raising their rank with scrolls.
-
How can I contact the Naruto x Boruto Ninja Voltage support team?
-
You can contact the Naruto x Boruto Ninja Voltage support team by tapping the "Support" button on the title screen or the "Contact Us" button in the settings menu. You can also send an email to bnea_support@bandainamcoent.com.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Genshin Impacto En El Ordenador Porttil.md b/spaces/Benson/text-generation/Examples/Descargar Genshin Impacto En El Ordenador Porttil.md
deleted file mode 100644
index a2fcd52df37d93c47947552de34081fd4da1b8de..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Genshin Impacto En El Ordenador Porttil.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
How to Download Genshin Impact on a Laptop
-
Genshin Impact is an open-world action RPG that has taken the gaming world by storm. In this game, you can explore a vast and beautiful world called Teyvat, where you can meet a diverse cast of characters, fight powerful enemies, and uncover the secrets of the seven elements. You can also team up with your friends across platforms, since Genshin Impact supports cross-play between PC, PS4, iOS, and Android devices.
-
If you are looking for a way to play this amazing game on your laptop, you have come to the right place. In this article, we will show you how to download Genshin Impact on a laptop from different sources, how to install and launch it, how to optimize it for better performance, and how to enjoy its gameplay features. Let's get started!
-
download genshin impact on laptop
What You Need to Play Genshin Impact on a Laptop
-
Before downloading Genshin Impact on your laptop, you need to make sure your device meets the game's minimum system requirements. According to the official website, these are:
-
-
Operating system: Windows 7 SP1 64-bit, Windows 8.1 64-bit, or Windows 10 64-bit
-
Processor: Intel Core i5 or equivalent
-
RAM: 8 GB
-
Graphics card: NVIDIA GeForce GT 1030 or better
-
DirectX version: 11
-
Storage space: 30 GB or more
-
-
If your laptop meets these requirements, you can play Genshin Impact without any major problems; the short script after this paragraph shows one way to check the RAM and storage figures. However, if you want to enjoy the game at higher graphics settings and smoother frame rates, you may want to upgrade your laptop or use an external GPU.
-
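Not sure whether your machine clears the RAM and storage lines above? Here is a small, hypothetical Python sketch (it assumes the third-party psutil package is installed); the CPU, GPU, and DirectX entries still have to be checked by hand in Device Manager or dxdiag.

# Rough hardware check against the published minimums (assumption: psutil is
# installed; CPU, GPU, and DirectX must still be verified manually).
import shutil
import psutil

MIN_RAM_GB = 8
MIN_FREE_DISK_GB = 30

ram_gb = psutil.virtual_memory().total / 1024**3
free_gb = shutil.disk_usage("C:\\").free / 1024**3  # adjust the drive if you install elsewhere

print(f"RAM: {ram_gb:.1f} GB ({'OK' if ram_gb >= MIN_RAM_GB else 'below minimum'})")
print(f"Free disk: {free_gb:.1f} GB ({'OK' if free_gb >= MIN_FREE_DISK_GB else 'below minimum'})")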
Another thing you need in order to play Genshin Impact on a laptop is a platform from which to download the game. There are two main options: the official Genshin Impact website or the Epic Games Store. We will explain how to download Genshin Impact from both sources in the next sections.
-
-
The official Genshin Impact website is one of the easiest ways to download the game onto your laptop. Here are the steps to follow:
-
-
Go to [the official Genshin Impact website]( 5 ) and click "Download Now".
-
Select "Windows" from the list of available platforms and click "Download".
-
Wait for the file named "GenshinImpact_install_" to finish downloading.
-
Double-click the file and follow the instructions to install the game launcher.
-
Start the game launcher and sign in with your miHoYo account, or create one if you don't have one.
-
Click "Get Game" and wait for the game files to download.
-
Click "Launch" and enjoy playing Genshin Impact on your laptop!
-
-
Here are some screenshots of the process:
-
-
-
-
Click "Get" to add the game to your library for free.
-
Click "Install" to start downloading the game files.
-
Wait for the download to finish and launch the game from the Epic Games Launcher.
-
Sign in with your miHoYo account, or create one if you don't have one.
-
Enjoy playing Genshin Impact on your laptop!
-
-
Here are some screenshots of the process:
-
-
-
-
If you don't want to use the official website or the Epic Games Store, you may be wondering whether there are other sources from which you can download Genshin Impact on a laptop. The answer is yes, but you need to be careful. Some websites offer unofficial mirrors or edited files that could contain malware or viruses. For that reason, we do not recommend downloading Genshin Impact from any source other than the official website or the Epic Games Store.
-
However, if you are curious, here are some examples of other places where you can find Genshin Impact:
-
-
[Reddit]( 5 ): Some Reddit users have shared direct download links for Genshin Impact from the official Hoyoverse server. These are the same files the launcher uses to download and install the game or its updates. However, these links may not be updated regularly, or they may expire after a while. You also need to extract and update the files manually, which can cause problems or errors.
-
[YouTube]( 12 ): Some YouTube videos provide tutorials on how to boost FPS and improve performance in Genshin Impact on a laptop. These videos may also include links to download the game or some optimization tools. However, those links may not be reliable or safe, and some of the optimization tips may not work for everyone.
-
[Other websites]( 9 ): Some other websites provide guides on how to download, install, launch, or optimize Genshin Impact on a laptop. These websites may also include links to download the game or some software. However, those links may not be verified or safe, and some of the programs may not be compatible or effective.
-
-
-
Once you have downloaded Genshin Impact onto your laptop from the official website or the Epic Games Store, you need to install and launch the game. It is a simple and straightforward process, but we will walk you through it anyway. Here are the steps to follow:
-
-
Locate the folder where you downloaded the game files. If you used the official website, it should be in your Downloads folder. If you used the Epic Games Store, it should be in your Epic Games folder.
-
Double-click the file named "GenshinImpact.exe" to start the installation process.
-
Select the game's language and destination folder. You can also create a desktop shortcut if you wish.
-
Click "Install" and wait for the installation to finish.
-
Click "Finish" and launch the game from the desktop shortcut or the Start menu.
-
Sign in with your miHoYo account, or create one if you don't have one.
-
Select a server region and accept the terms of service.
-
Create your character and start playing Genshin Impact on your laptop!
-
-
Here are some screenshots of the process:
-
-
-
How to Optimize Genshin Impact for Laptop Performance
-
Genshin Impact is a visually stunning game that needs a lot of resources to run smoothly. If you have a powerful laptop, you may have no trouble playing at high graphics settings and resolution. However, if you have a low-end or mid-range laptop, you may run into lag, stuttering, or overheating. Fortunately, there are some ways to optimize Genshin Impact for laptop performance and make it run better. Here are a few tips and tricks you can try:
-
-
-
Optimize your laptop's settings: You can also adjust a few settings on the laptop itself to improve performance. For example, you can switch to the high-performance mode in your power options, update your drivers, close any background programs or apps you don't need (see the short sketch after this list for one way to spot them), disable unnecessary startup programs, and free up disk space.
-
Use an external cooling pad or fan: One of the main causes of poor performance on laptops is overheating. If your laptop gets too hot, it may throttle its speed or shut down entirely. To avoid this, you can use an external cooling pad or fan to keep the laptop cool and ventilated. You should also clean your laptop's fans and vents regularly to remove any dust or debris that could block the airflow.
-
Use an external keyboard and mouse: Another issue that can affect your gaming experience is the comfort and precision of your input devices. If you are using the laptop's built-in keyboard and touchpad, they may feel awkward or unresponsive while playing Genshin Impact. To solve this, you can use an external keyboard and mouse that are more ergonomic and accurate. You can also adjust the sensitivity and key bindings in the game settings to match your preferences.
-
-
By following these tips and tricks, you can optimize Genshin Impact for laptop performance and enjoy a smoother, more immersive experience.
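To make the "close background programs" tip concrete, here is a hedged Python sketch (again assuming the psutil package is installed) that lists the five processes using the most memory, so you can decide what to close before launching the game.

# List the five most memory-hungry processes (assumption: psutil is installed).
import psutil

procs = []
for p in psutil.process_iter(["name", "memory_info"]):
    mem = p.info.get("memory_info")
    name = p.info.get("name") or "?"
    if mem is None:
        continue  # process vanished or access was denied
    procs.append((mem.rss, name))

for rss, name in sorted(procs, key=lambda t: t[0], reverse=True)[:5]:
    print(f"{name:30s} {rss / 1024**2:8.1f} MB")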
How to Enjoy Genshin Impact on a Laptop
-
Now that you have downloaded, installed, and optimized Genshin Impact on your laptop, you are ready to enjoy the game and its features. Genshin Impact offers plenty of content and variety for players of different tastes and preferences. Here are some of the things you can do in Genshin Impact:
-
-
-
Collect and upgrade characters: Genshin Impact has a roster of more than 40 characters you can collect and use in your party. Each character has a unique personality, backstory, element, weapon type, and set of abilities. You can level up, ascend, and equip your characters with different artifacts and weapons to improve their stats and skills. You can also unlock their constellations and talents for further bonuses and effects.
-
Build your team and combat strategy: Genshin Impact has a dynamic combat system that lets you switch between the four characters in your party at any time. You can also combine different elements to trigger powerful reactions that deal extra damage, inflict status effects, or provide buffs. You can tailor your team composition and combat strategy to the enemies you face and the challenges you encounter.
-
Complete quests and events: Genshin Impact has a rich and engaging story that unfolds through various quests and cutscenes. You can follow the main story of your character's journey through Teyvat, or branch out into side quests and world quests involving other characters and factions. You can also take part in various events that offer special rewards and activities.
-
Play with your friends: Genshin Impact supports cross-play between PC, PS4, iOS, and Android devices. You can invite your friends to join your world or join theirs, regardless of platform. You can co-op with up to three other players to explore the world, clear domains and bosses, or take on the Spiral Abyss. You can also chat with your friends using text or voice messages.
-
-
Genshin Impact is a game that will keep you entertained for hours with its stunning graphics, immersive soundtrack, captivating story, varied gameplay, and constant updates. Whether you play solo or with friends, you are sure to have a blast playing Genshin Impact on your laptop.
-
Conclusion
-
-
If you want to play this amazing game on your laptop, you need to download it from the official website or the Epic Games Store. You also need to install and launch it correctly, and optimize it for better performance. By following the steps and tips we have provided in this article, you can easily download Genshin Impact on your laptop and enjoy its features.
-
What are you waiting for? Download Genshin Impact on your laptop today and embark on an epic adventure across Teyvat!
-
Frequently Asked Questions
-
Here are some of the most common questions people ask about downloading Genshin Impact on a laptop:
-
-
Is Genshin Impact free to play?
-
Yes, Genshin Impact is free to play. You can download it from the official website or the Epic Games Store without paying anything. However, the game has some optional microtransactions that let you buy in-game currency or items.
-
Can I play Genshin Impact offline?
-
No, Genshin Impact requires an internet connection to play. You must sign in with your miHoYo account every time you launch the game. You also need to download updates or patches periodically to keep the game running smoothly.
-
Can I transfer my progress from one platform to another?
-
Yes, Genshin Impact supports cross-save between PC, iOS, and Android devices. You can sign in with the same miHoYo account on any of these platforms and access your progress and data. However, PS4 does not support cross-save at this time.
-
Can I play Genshin Impact with a controller?
-
Yes, Genshin Impact supports controller input on PC and PS4. You can connect a compatible controller to your laptop or PS4 and play the game with it. You can also adjust the controller settings in the game options to customize your buttons and sensitivity.
-
-"""
-CONCURRENT_COUNT = 100
-
-
-ALREADY_CONVERTED_MARK = ""
-
-small_and_beautiful_theme = gr.themes.Soft(
- primary_hue=gr.themes.Color(
- c50="#02C160",
- c100="rgba(2, 193, 96, 0.2)",
- c200="#02C160",
- c300="rgba(2, 193, 96, 0.32)",
- c400="rgba(2, 193, 96, 0.32)",
- c500="rgba(2, 193, 96, 1.0)",
- c600="rgba(2, 193, 96, 1.0)",
- c700="rgba(2, 193, 96, 0.32)",
- c800="rgba(2, 193, 96, 0.32)",
- c900="#02C160",
- c950="#02C160",
- ),
- secondary_hue=gr.themes.Color(
- c50="#576b95",
- c100="#576b95",
- c200="#576b95",
- c300="#576b95",
- c400="#576b95",
- c500="#576b95",
- c600="#576b95",
- c700="#576b95",
- c800="#576b95",
- c900="#576b95",
- c950="#576b95",
- ),
- neutral_hue=gr.themes.Color(
- name="gray",
- c50="#f9fafb",
- c100="#f3f4f6",
- c200="#e5e7eb",
- c300="#d1d5db",
- c400="#B2B2B2",
- c500="#808080",
- c600="#636363",
- c700="#515151",
- c800="#393939",
- c900="#272727",
- c950="#171717",
- ),
- radius_size=gr.themes.sizes.radius_sm,
- ).set(
- button_primary_background_fill="#06AE56",
- button_primary_background_fill_dark="#06AE56",
- button_primary_background_fill_hover="#07C863",
- button_primary_border_color="#06AE56",
- button_primary_border_color_dark="#06AE56",
- button_primary_text_color="#FFFFFF",
- button_primary_text_color_dark="#FFFFFF",
- button_secondary_background_fill="#F2F2F2",
- button_secondary_background_fill_dark="#2B2B2B",
- button_secondary_text_color="#393939",
- button_secondary_text_color_dark="#FFFFFF",
- # background_fill_primary="#F7F7F7",
- # background_fill_primary_dark="#1F1F1F",
- block_title_text_color="*primary_500",
- block_title_background_fill="*primary_100",
- input_background_fill="#F6F6F6",
- )
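For context, a theme object like the one defined above is normally passed to a Gradio app when the interface is built; a minimal usage sketch (assuming `import gradio as gr` and a Gradio version that supports `gr.themes`) might look like this:

# Minimal usage sketch for a custom Gradio theme (assumes `small_and_beautiful_theme`
# was built as above and that a themes-capable gradio release is installed).
import gradio as gr

with gr.Blocks(theme=small_and_beautiful_theme) as demo:
    gr.Markdown("Theme preview")
    gr.Button("Primary action", variant="primary")

demo.launch()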
diff --git a/spaces/ChandraMohanNayal/AutoGPT/tests/milvus_memory_test.py b/spaces/ChandraMohanNayal/AutoGPT/tests/milvus_memory_test.py
deleted file mode 100644
index 84fd6e6d5006e781fa5e1065f949b2160537d913..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/tests/milvus_memory_test.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# sourcery skip: snake-case-functions
-"""Tests for the MilvusMemory class."""
-import os
-import sys
-import unittest
-
-try:
- from autogpt.memory.milvus import MilvusMemory
-
- def mock_config() -> dict:
- """Mock the Config class"""
- return type(
- "MockConfig",
- (object,),
- {
- "debug_mode": False,
- "continuous_mode": False,
- "speak_mode": False,
- "milvus_collection": "autogpt",
- "milvus_addr": "localhost:19530",
- },
- )
-
- class TestMilvusMemory(unittest.TestCase):
- """Tests for the MilvusMemory class."""
-
- def setUp(self) -> None:
- """Set up the test environment"""
- self.cfg = mock_config()
- self.memory = MilvusMemory(self.cfg)
-
- def test_add(self) -> None:
- """Test adding a text to the cache"""
- text = "Sample text"
- self.memory.clear()
- self.memory.add(text)
- result = self.memory.get(text)
- self.assertEqual([text], result)
-
- def test_clear(self) -> None:
- """Test clearing the cache"""
- self.memory.clear()
- self.assertEqual(self.memory.collection.num_entities, 0)
-
- def test_get(self) -> None:
- """Test getting a text from the cache"""
- text = "Sample text"
- self.memory.clear()
- self.memory.add(text)
- result = self.memory.get(text)
- self.assertEqual(result, [text])
-
- def test_get_relevant(self) -> None:
- """Test getting relevant texts from the cache"""
- text1 = "Sample text 1"
- text2 = "Sample text 2"
- self.memory.clear()
- self.memory.add(text1)
- self.memory.add(text2)
- result = self.memory.get_relevant(text1, 1)
- self.assertEqual(result, [text1])
-
- def test_get_stats(self) -> None:
- """Test getting the cache stats"""
- text = "Sample text"
- self.memory.clear()
- self.memory.add(text)
- stats = self.memory.get_stats()
- self.assertEqual(15, len(stats))
-
-except ImportError:
-    # Only swallow the missing-dependency case; other errors should surface normally.
-    print("Milvus not installed, skipping tests")
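A note on running these tests: they only execute when the Milvus client library is importable and a server is reachable at the configured milvus_addr; a minimal sketch for invoking them (assuming the file is saved and importable as milvus_memory_test) is:

# Run the suite above (assumptions: the module is importable as milvus_memory_test
# and a Milvus instance is listening on localhost:19530).
import unittest

if __name__ == "__main__":
    unittest.main(module="milvus_memory_test", verbosity=2)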
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/Version.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/Version.js
deleted file mode 100644
index 9ccb7243d9052421a37943dbefa02f17fd430844..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/Version.js
+++ /dev/null
@@ -1,96 +0,0 @@
-import fs from 'fs'
-import lodash from 'lodash'
-
-let packageJson = JSON.parse(fs.readFileSync('package.json', 'utf8'))
-
-const getLine = function (line) {
- line = line.replace(/(^\s*\*|\r)/g, '')
- line = line.replace(/\s*`([^`]+`)/g, '$1')
- line = line.replace(/`\s*/g, '')
- line = line.replace(/\s*\*\*([^\*]+\*\*)/g, '$1')
- line = line.replace(/\*\*\s*/g, '')
- line = line.replace(/ⁿᵉʷ/g, '')
- return line
-}
-
-const readLogFile = function (root, versionCount = 4) {
- let logPath = `${root}/CHANGELOG.md`
- let logs = {}
- let changelogs = []
- let currentVersion
-
- try {
- if (fs.existsSync(logPath)) {
- logs = fs.readFileSync(logPath, 'utf8') || ''
- logs = logs.split('\n')
-
- let temp = {}
- let lastLine = {}
- lodash.forEach(logs, (line) => {
- if (versionCount <= -1) {
- return false
- }
- let versionRet = /^#\s*([0-9a-zA-Z\\.~\s]+?)\s*$/.exec(line)
- if (versionRet && versionRet[1]) {
- let v = versionRet[1].trim()
- if (!currentVersion) {
- currentVersion = v
- } else {
- changelogs.push(temp)
- if (/0\s*$/.test(v) && versionCount > 0) {
- versionCount = 0
- } else {
- versionCount--
- }
- }
-
- temp = {
- version: v,
- logs: []
- }
- } else {
- if (!line.trim()) {
- return
- }
- if (/^\*/.test(line)) {
- lastLine = {
- title: getLine(line),
- logs: []
- }
- temp.logs.push(lastLine)
- } else if (/^\s{2,}\*/.test(line)) {
- lastLine.logs.push(getLine(line))
- }
- }
- })
- }
- } catch (e) {
- // do nth
- }
- return { changelogs, currentVersion }
-}
-
-const { changelogs, currentVersion } = readLogFile(`${process.cwd()}/plugins/ws-plugin/`)
-
-const yunzaiVersion = packageJson.version
-const isMiao = packageJson.dependencies.sequelize ? true : false
-const isTrss = Array.isArray(Bot.uin) ? true : false
-const protocol = ['chronocat', 'ICQQ']
-
-let Version = {
- isMiao,
- isTrss,
- protocol,
- get version() {
- return currentVersion
- },
- get yunzai() {
- return yunzaiVersion
- },
- get changelogs() {
- return changelogs
- },
- readLogFile
-}
-
-export default Version
diff --git a/spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/ip_detection.py b/spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/ip_detection.py
deleted file mode 100644
index 7034dcad07755a00d54435c1f86f91a7c7ee84c3..0000000000000000000000000000000000000000
--- a/spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/ip_detection.py
+++ /dev/null
@@ -1,574 +0,0 @@
-import cv2
-import numpy as np
-
-import CDM.detect_compo.lib_ip.ip_draw as draw
-import CDM.detect_compo.lib_ip.ip_preprocessing as pre
-from CDM.detect_compo.lib_ip.Component import Component
-import CDM.detect_compo.lib_ip.Component as Compo
-from CDM.config.CONFIG_UIED import Config
-C = Config()
-
-
-def merge_intersected_corner(compos, org, is_merge_contained_ele, max_gap=(0, 0), max_ele_height=25):
- '''
- :param is_merge_contained_ele: if true, merge compos nested in others
- :param max_gap: (horizontal_distance, vertical_distance) to be merge into one line/column
- :param max_ele_height: if higher than it, recognize the compo as text
- :return:
- '''
- changed = False
- new_compos = []
- Compo.compos_update(compos, org.shape)
- for i in range(len(compos)):
- merged = False
- cur_compo = compos[i]
- for j in range(len(new_compos)):
- relation = cur_compo.compo_relation(new_compos[j], max_gap)
- # print(relation)
- # draw.draw_bounding_box(org, [cur_compo, new_compos[j]], name='b-merge', show=True)
- # merge compo[i] to compo[j] if
- # 1. compo[j] contains compo[i]
- # 2. compo[j] intersects with compo[i] with certain iou
- # 3. is_merge_contained_ele and compo[j] is contained in compo[i]
- if relation == 1 or \
- relation == 2 or \
- (is_merge_contained_ele and relation == -1):
- # (relation == 2 and new_compos[j].height < max_ele_height and cur_compo.height < max_ele_height) or\
-
- new_compos[j].compo_merge(cur_compo)
- cur_compo = new_compos[j]
- # draw.draw_bounding_box(org, [new_compos[j]], name='a-merge', show=True)
- merged = True
- changed = True
- # break
- if not merged:
- new_compos.append(compos[i])
-
- if not changed:
- return compos
- else:
- return merge_intersected_corner(new_compos, org, is_merge_contained_ele, max_gap, max_ele_height)
-
-
-def merge_intersected_compos(compos):
- changed = True
- while changed:
- changed = False
- temp_set = []
- for compo_a in compos:
- merged = False
- for compo_b in temp_set:
- if compo_a.compo_relation(compo_b) == 2:
- compo_b.compo_merge(compo_a)
- merged = True
- changed = True
- break
- if not merged:
- temp_set.append(compo_a)
- compos = temp_set.copy()
- return compos
-
-
-def rm_contained_compos_not_in_block(compos):
- '''
- remove all components contained by others that are not Block
- '''
- marked = np.full(len(compos), False)
- for i in range(len(compos) - 1):
- for j in range(i + 1, len(compos)):
- relation = compos[i].compo_relation(compos[j])
- if relation == -1 and compos[j].category != 'Block':
- marked[i] = True
- if relation == 1 and compos[i].category != 'Block':
- marked[j] = True
- new_compos = []
- for i in range(len(marked)):
- if not marked[i]:
- new_compos.append(compos[i])
- return new_compos
-
-
-def merge_text(compos, org_shape, max_word_gad=4, max_word_height=20):
- def is_text_line(compo_a, compo_b):
- (col_min_a, row_min_a, col_max_a, row_max_a) = compo_a.put_bbox()
- (col_min_b, row_min_b, col_max_b, row_max_b) = compo_b.put_bbox()
-
- col_min_s = max(col_min_a, col_min_b)
- col_max_s = min(col_max_a, col_max_b)
- row_min_s = max(row_min_a, row_min_b)
- row_max_s = min(row_max_a, row_max_b)
-
- # on the same line
- # if abs(row_min_a - row_min_b) < max_word_gad and abs(row_max_a - row_max_b) < max_word_gad:
- if row_min_s < row_max_s:
- # close distance
- if col_min_s < col_max_s or \
- (0 < col_min_b - col_max_a < max_word_gad) or (0 < col_min_a - col_max_b < max_word_gad):
- return True
- return False
-
- changed = False
- new_compos = []
- row, col = org_shape[:2]
- for i in range(len(compos)):
- merged = False
- height = compos[i].height
- # ignore non-text
- # if height / row > max_word_height_ratio\
- # or compos[i].category != 'Text':
- if height > max_word_height:
- new_compos.append(compos[i])
- continue
- for j in range(len(new_compos)):
- # if compos[j].category != 'Text':
- # continue
- if is_text_line(compos[i], new_compos[j]):
- new_compos[j].compo_merge(compos[i])
- merged = True
- changed = True
- break
- if not merged:
- new_compos.append(compos[i])
-
- if not changed:
- return compos
- else:
- return merge_text(new_compos, org_shape)
-
-
-def rm_top_or_bottom_corners(components, org_shape, top_bottom_height=C.THRESHOLD_TOP_BOTTOM_BAR):
- new_compos = []
- height, width = org_shape[:2]
- for compo in components:
- (column_min, row_min, column_max, row_max) = compo.put_bbox()
- # remove big ones
- # if (row_max - row_min) / height > 0.65 and (column_max - column_min) / width > 0.8:
- # continue
- if not (row_max < height * top_bottom_height[0] or row_min > height * top_bottom_height[1]):
- new_compos.append(compo)
- return new_compos
-
-
-def rm_line_v_h(binary, show=False, max_line_thickness=C.THRESHOLD_LINE_THICKNESS):
- def check_continuous_line(line, edge):
- continuous_length = 0
- line_start = -1
- for j, p in enumerate(line):
- if p > 0:
- if line_start == -1:
- line_start = j
- continuous_length += 1
- elif continuous_length > 0:
- if continuous_length / edge > 0.6:
- return [line_start, j]
- continuous_length = 0
- line_start = -1
-
- if continuous_length / edge > 0.6:
- return [line_start, len(line)]
- else:
- return None
-
- def extract_line_area(line, start_idx, flag='v'):
- for e, l in enumerate(line):
- if flag == 'v':
- map_line[start_idx + e, l[0]:l[1]] = binary[start_idx + e, l[0]:l[1]]
-
- map_line = np.zeros(binary.shape[:2], dtype=np.uint8)
-    if show:
-        cv2.imshow('binary', binary)
-
- width = binary.shape[1]
- start_row = -1
- line_area = []
- for i, row in enumerate(binary):
- line_v = check_continuous_line(row, width)
- if line_v is not None:
- # new line
- if start_row == -1:
- start_row = i
- line_area = []
- line_area.append(line_v)
- else:
- # checking line
- if start_row != -1:
- if i - start_row < max_line_thickness:
- # binary[start_row: i] = 0
- # map_line[start_row: i] = binary[start_row: i]
- print(line_area, start_row, i)
- extract_line_area(line_area, start_row)
- start_row = -1
-
- height = binary.shape[0]
- start_col = -1
- for i in range(width):
- col = binary[:, i]
- line_h = check_continuous_line(col, height)
- if line_h is not None:
- # new line
- if start_col == -1:
- start_col = i
- else:
- # checking line
- if start_col != -1:
- if i - start_col < max_line_thickness:
- # binary[:, start_col: i] = 0
- map_line[:, start_col: i] = binary[:, start_col: i]
- start_col = -1
-
- binary -= map_line
-
- if show:
- cv2.imshow('no-line', binary)
- cv2.imshow('lines', map_line)
- cv2.waitKey()
-
-
-def rm_line(binary,
- max_line_thickness=C.THRESHOLD_LINE_THICKNESS,
- min_line_length_ratio=C.THRESHOLD_LINE_MIN_LENGTH,
- show=False, wait_key=0):
- def is_valid_line(line):
- line_length = 0
- line_gap = 0
- for j in line:
- if j > 0:
- if line_gap > 5:
- return False
- line_length += 1
- line_gap = 0
- elif line_length > 0:
- line_gap += 1
- if line_length / width > 0.95:
- return True
- return False
-
- height, width = binary.shape[:2]
- board = np.zeros(binary.shape[:2], dtype=np.uint8)
-
- start_row, end_row = -1, -1
- check_line = False
- check_gap = False
- for i, row in enumerate(binary):
- # line_ratio = (sum(row) / 255) / width
- # if line_ratio > 0.9:
- if is_valid_line(row):
- # new start: if it is checking a new line, mark this row as start
- if not check_line:
- start_row = i
- check_line = True
- else:
- # end the line
- if check_line:
- # thin enough to be a line, then start checking gap
- if i - start_row < max_line_thickness:
- end_row = i
- check_gap = True
- else:
- start_row, end_row = -1, -1
- check_line = False
- # check gap
- if check_gap and i - end_row > max_line_thickness:
- binary[start_row: end_row] = 0
- start_row, end_row = -1, -1
- check_line = False
- check_gap = False
-
- if (check_line and (height - start_row) < max_line_thickness) or check_gap:
- binary[start_row: end_row] = 0
-
- if show:
- cv2.imshow('no-line binary', binary)
- if wait_key is not None:
- cv2.waitKey(wait_key)
- if wait_key == 0:
- cv2.destroyWindow('no-line binary')
-
-
-def rm_noise_compos(compos):
- compos_new = []
- for compo in compos:
- if compo.category == 'Noise':
- continue
- compos_new.append(compo)
- return compos_new
-
-
-def rm_noise_in_large_img(compos, org,
- max_compo_scale=C.THRESHOLD_COMPO_MAX_SCALE):
- row, column = org.shape[:2]
- remain = np.full(len(compos), True)
- new_compos = []
- for compo in compos:
- if compo.category == 'Image':
- for i in compo.contain:
- remain[i] = False
- for i in range(len(remain)):
- if remain[i]:
- new_compos.append(compos[i])
- return new_compos
-
-
-def detect_compos_in_img(compos, binary, org, max_compo_scale=C.THRESHOLD_COMPO_MAX_SCALE, show=False):
- compos_new = []
- row, column = binary.shape[:2]
- for compo in compos:
- if compo.category == 'Image':
- compo.compo_update_bbox_area()
- # org_clip = compo.compo_clipping(org)
- # bin_clip = pre.binarization(org_clip, show=show)
- bin_clip = compo.compo_clipping(binary)
- bin_clip = pre.reverse_binary(bin_clip, show=show)
-
- compos_rec, compos_nonrec = component_detection(bin_clip, test=False, step_h=10, step_v=10, rec_detect=True)
- for compo_rec in compos_rec:
- compo_rec.compo_relative_position(compo.bbox.col_min, compo.bbox.row_min)
- if compo_rec.bbox_area / compo.bbox_area < 0.8 and compo_rec.bbox.height > 20 and compo_rec.bbox.width > 20:
- compos_new.append(compo_rec)
- # draw.draw_bounding_box(org, [compo_rec], show=True)
-
- # compos_inner = component_detection(bin_clip, rec_detect=False)
- # for compo_inner in compos_inner:
- # compo_inner.compo_relative_position(compo.bbox.col_min, compo.bbox.row_min)
- # draw.draw_bounding_box(org, [compo_inner], show=True)
- # if compo_inner.bbox_area / compo.bbox_area < 0.8:
- # compos_new.append(compo_inner)
- compos += compos_new
-
-
-def compo_filter(compos, min_area, img_shape):
- # max_height = img_shape[0] * 0.8
- # compos_new = []
- # for compo in compos:
- # if compo.area < min_area:
- # continue
- # if compo.height > max_height:
- # continue
- # ratio_h = compo.width / compo.height
- # ratio_w = compo.height / compo.width
- # if ratio_h > 50 or ratio_w > 40 or \
- # (min(compo.height, compo.width) < 8 and max(ratio_h, ratio_w) > 10):
- # continue
- # compos_new.append(compo)
- # return compos_new
-
- # mobile semantics filter
- # compos_new = []
- #
- # for compo in compos:
- #
- # if compo.area >= 0.05 * (img_shape[0] * img_shape[1]):
- # continue
- #
- # smaller_dimension = min(compo.width, compo.height)
- # larger_dimension = max(compo.width, compo.height)
- #
- # if smaller_dimension/larger_dimension <= 0.75:
- # continue
- #
- # compos_new.append(compo)
- #
- # return compos_new
-
- # my own filter
- compos_new = []
-
- for compo in compos:
-
- if compo.area >= 0.1 * (img_shape[0] * img_shape[1]):
- continue
-
- if compo.area <= 0.0005 * (img_shape[0] * img_shape[1]):
- continue
-
- smaller_dimension = min(compo.width, compo.height)
- larger_dimension = max(compo.width, compo.height)
-
- if smaller_dimension / larger_dimension <= 0.6:
- continue
-
- compos_new.append(compo)
-
- return compos_new
-
-
-def is_block(clip, thread=0.15):
- '''
- Block is a rectangle border enclosing a group of compos (consider it as a wireframe)
- Check if a compo is block by checking if the inner side of its border is blank
- '''
- side = 4 # scan 4 lines inner forward each border
- # top border - scan top down
- blank_count = 0
- for i in range(1, 5):
- if sum(clip[side + i]) / 255 > thread * clip.shape[1]:
- blank_count += 1
- if blank_count > 2: return False
- # left border - scan left to right
- blank_count = 0
- for i in range(1, 5):
- if sum(clip[:, side + i]) / 255 > thread * clip.shape[0]:
- blank_count += 1
- if blank_count > 2: return False
-
- side = -4
- # bottom border - scan bottom up
- blank_count = 0
- for i in range(-1, -5, -1):
- if sum(clip[side + i]) / 255 > thread * clip.shape[1]:
- blank_count += 1
- if blank_count > 2: return False
- # right border - scan right to left
- blank_count = 0
- for i in range(-1, -5, -1):
- if sum(clip[:, side + i]) / 255 > thread * clip.shape[0]:
- blank_count += 1
- if blank_count > 2: return False
- return True
-
-
-def compo_block_recognition(binary, compos, block_side_length=0.15):
- height, width = binary.shape
- for compo in compos:
- if compo.height / height > block_side_length and compo.width / width > block_side_length:
- clip = compo.compo_clipping(binary)
- if is_block(clip):
- compo.category = 'Block'
-
-
-# take the binary image as input
-# calculate the connected regions -> get the bounding boundaries of them -> check if those regions are rectangles
-# return all boundaries and boundaries of rectangles
-def component_detection(binary, min_obj_area,
- line_thickness=C.THRESHOLD_LINE_THICKNESS,
- min_rec_evenness=C.THRESHOLD_REC_MIN_EVENNESS,
- max_dent_ratio=C.THRESHOLD_REC_MAX_DENT_RATIO,
- step_h = 5, step_v = 2,
- rec_detect=False, show=False, test=False):
- """
- :param binary: Binary image from pre-processing
-    :param min_obj_area: connected regions smaller than this area are ignored
-    :param line_thickness: regions thinner than this are treated as lines and ignored
-    :param min_rec_evenness: regions failing this evenness check are not considered rectangular
-    :param max_dent_ratio: regions exceeding this dent ratio are not considered rectangular
- :return: boundary: [top, bottom, left, right]
- -> up, bottom: list of (column_index, min/max row border)
- -> left, right: list of (row_index, min/max column border) detect range of each row
- """
- mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), dtype=np.uint8)
- compos_all = []
- compos_rec = []
- compos_nonrec = []
- row, column = binary.shape[0], binary.shape[1]
- for i in range(0, row, step_h):
- for j in range(i % 2, column, step_v):
- if binary[i, j] == 255 and mask[i, j] == 0:
- # get connected area
- # region = util.boundary_bfs_connected_area(binary, i, j, mask)
-
- mask_copy = mask.copy()
- ff = cv2.floodFill(binary, mask, (j, i), None, 0, 0, cv2.FLOODFILL_MASK_ONLY)
- if ff[0] < min_obj_area: continue
- mask_copy = mask - mask_copy
- region = np.reshape(cv2.findNonZero(mask_copy[1:-1, 1:-1]), (-1, 2))
- region = [(p[1], p[0]) for p in region]
-
- # filter out some compos
- component = Component(region, binary.shape)
- # calculate the boundary of the connected area
- # ignore small area
- if component.width <= 3 or component.height <= 3:
- continue
- # check if it is line by checking the length of edges
- # if component.compo_is_line(line_thickness):
- # continue
-
- if test:
- print('Area:%d' % (len(region)))
- draw.draw_boundary([component], binary.shape, show=True)
-
- compos_all.append(component)
-
- if rec_detect:
- # rectangle check
- if component.compo_is_rectangle(min_rec_evenness, max_dent_ratio):
- component.rect_ = True
- compos_rec.append(component)
- else:
- component.rect_ = False
- compos_nonrec.append(component)
-
- if show:
- print('Area:%d' % (len(region)))
- draw.draw_boundary(compos_all, binary.shape, show=True)
-
- # draw.draw_boundary(compos_all, binary.shape, show=True)
- if rec_detect:
- return compos_rec, compos_nonrec
- else:
- return compos_all
-
-
-def nested_components_detection(grey, org, grad_thresh,
- show=False, write_path=None,
- step_h=10, step_v=10,
- line_thickness=C.THRESHOLD_LINE_THICKNESS,
- min_rec_evenness=C.THRESHOLD_REC_MIN_EVENNESS,
- max_dent_ratio=C.THRESHOLD_REC_MAX_DENT_RATIO):
- '''
- :param grey: grey-scale of original image
- :return: corners: list of [(top_left, bottom_right)]
- -> top_left: (column_min, row_min)
- -> bottom_right: (column_max, row_max)
- '''
- compos = []
- mask = np.zeros((grey.shape[0]+2, grey.shape[1]+2), dtype=np.uint8)
- broad = np.zeros((grey.shape[0], grey.shape[1], 3), dtype=np.uint8)
- broad_all = broad.copy()
-
- row, column = grey.shape[0], grey.shape[1]
- for x in range(0, row, step_h):
- for y in range(0, column, step_v):
- if mask[x, y] == 0:
- # region = flood_fill_bfs(grey, x, y, mask)
-
- # flood fill algorithm to get background (layout block)
- mask_copy = mask.copy()
- ff = cv2.floodFill(grey, mask, (y, x), None, grad_thresh, grad_thresh, cv2.FLOODFILL_MASK_ONLY)
- # ignore small regions
- if ff[0] < 500: continue
- mask_copy = mask - mask_copy
- region = np.reshape(cv2.findNonZero(mask_copy[1:-1, 1:-1]), (-1, 2))
- region = [(p[1], p[0]) for p in region]
-
- compo = Component(region, grey.shape)
- # draw.draw_region(region, broad_all)
- # if block.height < 40 and block.width < 40:
- # continue
- if compo.height < 30:
- continue
-
- # print(block.area / (row * column))
- if compo.area / (row * column) > 0.9:
- continue
- elif compo.area / (row * column) > 0.7:
- compo.redundant = True
-
- # get the boundary of this region
- # ignore lines
- if compo.compo_is_line(line_thickness):
- continue
- # ignore non-rectangle as blocks must be rectangular
- if not compo.compo_is_rectangle(min_rec_evenness, max_dent_ratio):
- continue
- # if block.height/row < min_block_height_ratio:
- # continue
- compos.append(compo)
- # draw.draw_region(region, broad)
- if show:
- cv2.imshow('flood-fill all', broad_all)
- cv2.imshow('block', broad)
- cv2.waitKey()
- if write_path is not None:
- cv2.imwrite(write_path, broad)
- return compos
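For readers following the detector above, the core trick is seeding cv2.floodFill with FLOODFILL_MASK_ONLY to carve out one connected region at a time; a stripped-down sketch of that idea on a synthetic binary image (my own illustration, not the project's actual pipeline) is:

# Minimal flood-fill based connected-region extraction, mirroring the idea used in
# component_detection above (synthetic input; thresholds chosen arbitrarily).
import cv2
import numpy as np

binary = np.zeros((60, 80), dtype=np.uint8)
binary[10:30, 10:40] = 255          # one white rectangular blob
binary[40:55, 50:70] = 255          # a second blob

mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), dtype=np.uint8)
regions = []
for y in range(binary.shape[0]):
    for x in range(binary.shape[1]):
        if binary[y, x] == 255 and mask[y + 1, x + 1] == 0:
            before = mask.copy()
            area, _, _, rect = cv2.floodFill(binary, mask, (x, y), 255,
                                             0, 0, cv2.FLOODFILL_MASK_ONLY)
            if area > 20:                       # ignore tiny specks
                filled = mask - before          # pixels added by this fill only
                pts = cv2.findNonZero(filled[1:-1, 1:-1]).reshape(-1, 2)
                regions.append((area, rect, pts))

for area, rect, _ in regions:
    print(f"region area={area}, bounding rect={rect}")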
diff --git a/spaces/Cropinky/hana_hanak_houses/realesrgan/models/realesrnet_model.py b/spaces/Cropinky/hana_hanak_houses/realesrgan/models/realesrnet_model.py
deleted file mode 100644
index d11668f3712bffcd062c57db14d22ca3a0e1e59d..0000000000000000000000000000000000000000
--- a/spaces/Cropinky/hana_hanak_houses/realesrgan/models/realesrnet_model.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import numpy as np
-import random
-import torch
-from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt
-from basicsr.data.transforms import paired_random_crop
-from basicsr.models.sr_model import SRModel
-from basicsr.utils import DiffJPEG, USMSharp
-from basicsr.utils.img_process_util import filter2D
-from basicsr.utils.registry import MODEL_REGISTRY
-from torch.nn import functional as F
-
-
-@MODEL_REGISTRY.register()
-class RealESRNetModel(SRModel):
- """RealESRNet Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It is trained without GAN losses.
- It mainly performs:
- 1. randomly synthesize LQ images in GPU tensors
- 2. optimize the networks with GAN training.
-    2. optimize the network with pixel-wise losses only (no GAN losses are used).
-
- def __init__(self, opt):
- super(RealESRNetModel, self).__init__(opt)
- self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts
- self.usm_sharpener = USMSharp().cuda() # do usm sharpening
- self.queue_size = opt.get('queue_size', 180)
-
- @torch.no_grad()
- def _dequeue_and_enqueue(self):
- """It is the training pair pool for increasing the diversity in a batch.
-
- Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a
-        batch cannot have different resize scaling factors. Therefore, we employ this training pair pool
- to increase the degradation diversity in a batch.
- """
- # initialize
- b, c, h, w = self.lq.size()
- if not hasattr(self, 'queue_lr'):
- assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}'
- self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
- _, c, h, w = self.gt.size()
- self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
- self.queue_ptr = 0
- if self.queue_ptr == self.queue_size: # the pool is full
- # do dequeue and enqueue
- # shuffle
- idx = torch.randperm(self.queue_size)
- self.queue_lr = self.queue_lr[idx]
- self.queue_gt = self.queue_gt[idx]
- # get first b samples
- lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
- gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
- # update the queue
- self.queue_lr[0:b, :, :, :] = self.lq.clone()
- self.queue_gt[0:b, :, :, :] = self.gt.clone()
-
- self.lq = lq_dequeue
- self.gt = gt_dequeue
- else:
- # only do enqueue
- self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone()
- self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone()
- self.queue_ptr = self.queue_ptr + b
-
- @torch.no_grad()
- def feed_data(self, data):
- """Accept data from dataloader, and then add two-order degradations to obtain LQ images.
- """
- if self.is_train and self.opt.get('high_order_degradation', True):
- # training data synthesis
- self.gt = data['gt'].to(self.device)
- # USM sharpen the GT images
- if self.opt['gt_usm'] is True:
- self.gt = self.usm_sharpener(self.gt)
-
- self.kernel1 = data['kernel1'].to(self.device)
- self.kernel2 = data['kernel2'].to(self.device)
- self.sinc_kernel = data['sinc_kernel'].to(self.device)
-
- ori_h, ori_w = self.gt.size()[2:4]
-
- # ----------------------- The first degradation process ----------------------- #
- # blur
- out = filter2D(self.gt, self.kernel1)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, scale_factor=scale, mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob']
- if np.random.uniform() < self.opt['gaussian_noise_prob']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
- out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts
- out = self.jpeger(out, quality=jpeg_p)
-
- # ----------------------- The second degradation process ----------------------- #
- # blur
- if np.random.uniform() < self.opt['second_blur_prob']:
- out = filter2D(out, self.kernel2)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range2'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range2'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(
- out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob2']
- if np.random.uniform() < self.opt['gaussian_noise_prob2']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range2'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
-
- # JPEG compression + the final sinc filter
- # We also need to resize images to desired sizes. We group [resize back + sinc filter] together
- # as one operation.
- # We consider two orders:
- # 1. [resize back + sinc filter] + JPEG compression
- # 2. JPEG compression + [resize back + sinc filter]
- # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines.
- if np.random.uniform() < 0.5:
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- else:
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
-
- # clamp and round
- self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255.
-
- # random crop
- gt_size = self.opt['gt_size']
- self.gt, self.lq = paired_random_crop(self.gt, self.lq, gt_size, self.opt['scale'])
-
- # training pair pool
- self._dequeue_and_enqueue()
- self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract
- else:
- # for paired training or validation
- self.lq = data['lq'].to(self.device)
- if 'gt' in data:
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
- # do not use the synthetic process during validation
- self.is_train = False
- super(RealESRNetModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img)
- self.is_train = True
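The _dequeue_and_enqueue pool above exists to decorrelate degradation settings within a batch: once the queue is full, every incoming batch is swapped against a shuffled slice of older samples. A toy, self-contained sketch of that mechanism (plain torch tensors, no basicsr dependencies, names of my own choosing):

# Toy version of the training pair pool: keep a fixed-size queue of past (lq, gt)
# batches and swap each incoming batch against a shuffled slice of it.
import torch

class PairPool:
    def __init__(self, queue_size: int, shape=(3, 32, 32)):
        self.queue_size = queue_size
        self.lq = torch.zeros(queue_size, *shape)
        self.gt = torch.zeros(queue_size, *shape)
        self.ptr = 0

    def exchange(self, lq: torch.Tensor, gt: torch.Tensor):
        b = lq.size(0)
        assert self.queue_size % b == 0, "queue size must be divisible by batch size"
        if self.ptr == self.queue_size:            # pool full: shuffle, then swap
            idx = torch.randperm(self.queue_size)
            self.lq, self.gt = self.lq[idx], self.gt[idx]
            out_lq, out_gt = self.lq[:b].clone(), self.gt[:b].clone()
            self.lq[:b], self.gt[:b] = lq.clone(), gt.clone()
            return out_lq, out_gt
        # pool not full yet: enqueue and pass the batch through unchanged
        self.lq[self.ptr:self.ptr + b] = lq.clone()
        self.gt[self.ptr:self.ptr + b] = gt.clone()
        self.ptr += b
        return lq, gt

pool = PairPool(queue_size=8)
for step in range(6):
    lq, gt = pool.exchange(torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32))
    print(step, lq.shape, gt.shape)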
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/cmd.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/cmd.py
deleted file mode 100644
index 0003c2805772bd9f68c705c8f759e4a76e5b2ca8..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/cmd.py
+++ /dev/null
@@ -1,6 +0,0 @@
-#encoding = utf-8
-
-def cmd(cmd):
-    # The Python 2 `commands` module no longer exists on Python 3;
-    # subprocess.getoutput provides the same shell-capture behaviour.
-    import subprocess
-    return subprocess.getoutput(cmd)
-
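For reference, subprocess.getoutput mirrors the behaviour of the removed commands.getoutput: it runs the command through the shell and returns its combined output as a stripped string.

# Quick check of the subprocess-based replacement for commands.getoutput.
import subprocess

print(subprocess.getoutput("echo hello"))   # -> "hello"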
diff --git a/spaces/DHEIVER/Pedrita/app.py b/spaces/DHEIVER/Pedrita/app.py
deleted file mode 100644
index 1e2d7a376436ca989de5f72097732c907aa9c550..0000000000000000000000000000000000000000
--- a/spaces/DHEIVER/Pedrita/app.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import gradio as gr
-from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed, pipeline
-
-
-title = "Python Code Generator"
-description = "This is a space to convert English text to Python code using the [codeparrot-small-text-to-code](https://huggingface.co/codeparrot/codeparrot-small-text-to-code) model, a pre-trained Python code generation model trained on a dataset of docstrings and Python code extracted from Jupyter notebooks available at [github-jupyter-text](https://huggingface.co/datasets/codeparrot/github-jupyter-text)."
-example = [
- ["Utility function to calculate the precision of predictions using sklearn metrics", 65, 0.6, 42],
- ["Let's implement a function that calculates the size of a file called filepath", 60, 0.6, 42],
- ["Let's implement the Bubble Sort sorting algorithm in an auxiliary function:", 87, 0.6, 42],
- ["Function to calculate the nth Fibonacci number.", 65, 0.6, 42],
- ["Function to calculate the factorial of a number.", 65, 0.6, 42],
- ["Function to reverse a string.", 65, 0.6, 42],
- ["Function to check if a number is prime.", 65, 0.6, 42],
- ["Function to generate the Fibonacci sequence up to the nth term.", 65, 0.6, 42],
- ["Function to generate the factorial sequence up to the nth term.", 65, 0.6, 42],
-]
-
-
-# Change the model to the pre-trained model
-tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small-text-to-code")
-model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot-small-text-to-code")
-
-def create_docstring(gen_prompt):
- return "\"\"\"\n" + gen_prompt + "\n\"\"\"\n\n"
-
-def validate_inputs(gen_prompt, max_tokens, temperature, seed):
- # Add validation logic here
- if not gen_prompt:
- raise ValueError("English instructions cannot be empty.")
- if max_tokens <= 0 or max_tokens > 256:
- raise ValueError("Number of tokens to generate must be between 1 and 256.")
- if temperature < 0 or temperature > 2.5:
- raise ValueError("Temperature must be between 0 and 2.5.")
- if seed < 0 or seed > 1000:
- raise ValueError("Random seed must be between 0 and 1000.")
-
-def generate_code(gen_prompt, max_tokens, temperature=0.6, seed=42):
- validate_inputs(gen_prompt, max_tokens, temperature, seed)
-
- # Encode the input prompt
- input_ids = tokenizer.encode(gen_prompt, return_tensors="pt")
-
- # Set seed for reproducibility
- set_seed(seed)
-
- # Generate code tokens
- output = model.generate(
- input_ids,
- max_length=max_tokens + input_ids.shape[-1],
- temperature=temperature,
- pad_token_id=tokenizer.eos_token_id,
- num_return_sequences=1
- )
-
- # Decode the generated tokens into Python code
- generated_code = tokenizer.decode(output[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
-
- return generated_code
-
-
-
-def save_to_text_file(output_text):
- with open("generated_code.txt", "w") as file:
- file.write(output_text)
-
-iface = gr.Interface(
- fn=generate_code,
- inputs=[
- gr.Textbox(label="English instructions", placeholder="Enter English instructions..."),
- gr.inputs.Slider(
- minimum=8,
- maximum=256,
- step=1,
- default=8,
- label="Number of tokens to generate",
- ),
- gr.inputs.Slider(
- minimum=0,
- maximum=2.5,
- step=0.1,
- default=0.6,
- label="Temperature",
- ),
- gr.inputs.Slider(
- minimum=0,
- maximum=1000,
- step=1,
- default=42,
- label="Random seed for generation"
- )
- ],
- outputs=gr.Code(label="Generated Python code", language="python", lines=10),
- examples=example,
- layout="horizontal",
- theme="peach",
- description=description,
- title=title
-)
-iface.launch()
-
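Outside the Gradio UI, the same checkpoint can be driven directly; a minimal sketch (assuming the transformers package is installed and the codeparrot/codeparrot-small-text-to-code checkpoint can be downloaded) that mirrors the docstring-style prompt this space builds with create_docstring:

# Programmatic use of the same text-to-code model as the app above
# (assumption: transformers is installed and the checkpoint is reachable).
from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small-text-to-code")
model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot-small-text-to-code")

prompt = '"""\nFunction to reverse a string.\n"""\n\n'
input_ids = tokenizer.encode(prompt, return_tensors="pt")

set_seed(42)
output = model.generate(
    input_ids,
    max_length=input_ids.shape[-1] + 64,
    do_sample=True,
    temperature=0.6,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True))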
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/http_websocket.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/http_websocket.py
deleted file mode 100644
index 2cfc51930902e76c87f075f2cc445e878e737fd5..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/http_websocket.py
+++ /dev/null
@@ -1,701 +0,0 @@
-"""WebSocket protocol versions 13 and 8."""
-
-import asyncio
-import collections
-import json
-import random
-import re
-import sys
-import zlib
-from enum import IntEnum
-from struct import Struct
-from typing import Any, Callable, List, Optional, Pattern, Set, Tuple, Union, cast
-
-from .base_protocol import BaseProtocol
-from .helpers import NO_EXTENSIONS
-from .streams import DataQueue
-from .typedefs import Final
-
-__all__ = (
- "WS_CLOSED_MESSAGE",
- "WS_CLOSING_MESSAGE",
- "WS_KEY",
- "WebSocketReader",
- "WebSocketWriter",
- "WSMessage",
- "WebSocketError",
- "WSMsgType",
- "WSCloseCode",
-)
-
-
-class WSCloseCode(IntEnum):
- OK = 1000
- GOING_AWAY = 1001
- PROTOCOL_ERROR = 1002
- UNSUPPORTED_DATA = 1003
- ABNORMAL_CLOSURE = 1006
- INVALID_TEXT = 1007
- POLICY_VIOLATION = 1008
- MESSAGE_TOO_BIG = 1009
- MANDATORY_EXTENSION = 1010
- INTERNAL_ERROR = 1011
- SERVICE_RESTART = 1012
- TRY_AGAIN_LATER = 1013
- BAD_GATEWAY = 1014
-
-
-ALLOWED_CLOSE_CODES: Final[Set[int]] = {int(i) for i in WSCloseCode}
-
-
-class WSMsgType(IntEnum):
- # websocket spec types
- CONTINUATION = 0x0
- TEXT = 0x1
- BINARY = 0x2
- PING = 0x9
- PONG = 0xA
- CLOSE = 0x8
-
- # aiohttp specific types
- CLOSING = 0x100
- CLOSED = 0x101
- ERROR = 0x102
-
- text = TEXT
- binary = BINARY
- ping = PING
- pong = PONG
- close = CLOSE
- closing = CLOSING
- closed = CLOSED
- error = ERROR
-
-
-WS_KEY: Final[bytes] = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
-
-
-UNPACK_LEN2 = Struct("!H").unpack_from
-UNPACK_LEN3 = Struct("!Q").unpack_from
-UNPACK_CLOSE_CODE = Struct("!H").unpack
-PACK_LEN1 = Struct("!BB").pack
-PACK_LEN2 = Struct("!BBH").pack
-PACK_LEN3 = Struct("!BBQ").pack
-PACK_CLOSE_CODE = Struct("!H").pack
-MSG_SIZE: Final[int] = 2**14
-DEFAULT_LIMIT: Final[int] = 2**16
-
-
-_WSMessageBase = collections.namedtuple("_WSMessageBase", ["type", "data", "extra"])
-
-
-class WSMessage(_WSMessageBase):
- def json(self, *, loads: Callable[[Any], Any] = json.loads) -> Any:
- """Return parsed JSON data.
-
- .. versionadded:: 0.22
- """
- return loads(self.data)
-
-
-WS_CLOSED_MESSAGE = WSMessage(WSMsgType.CLOSED, None, None)
-WS_CLOSING_MESSAGE = WSMessage(WSMsgType.CLOSING, None, None)
-
-
-class WebSocketError(Exception):
- """WebSocket protocol parser error."""
-
- def __init__(self, code: int, message: str) -> None:
- self.code = code
- super().__init__(code, message)
-
- def __str__(self) -> str:
- return cast(str, self.args[1])
-
-
-class WSHandshakeError(Exception):
- """WebSocket protocol handshake error."""
-
-
-native_byteorder: Final[str] = sys.byteorder
-
-
-# Used by _websocket_mask_python
-_XOR_TABLE: Final[List[bytes]] = [bytes(a ^ b for a in range(256)) for b in range(256)]
-
-
-def _websocket_mask_python(mask: bytes, data: bytearray) -> None:
- """Websocket masking function.
-
- `mask` is a `bytes` object of length 4; `data` is a `bytearray`
- object of any length. The contents of `data` are masked with `mask`,
- as specified in section 5.3 of RFC 6455.
-
- Note that this function mutates the `data` argument.
-
- This pure-python implementation may be replaced by an optimized
- version when available.
-
- """
- assert isinstance(data, bytearray), data
- assert len(mask) == 4, mask
-
- if data:
- a, b, c, d = (_XOR_TABLE[n] for n in mask)
- data[::4] = data[::4].translate(a)
- data[1::4] = data[1::4].translate(b)
- data[2::4] = data[2::4].translate(c)
- data[3::4] = data[3::4].translate(d)
-
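# Editor's note (not part of the original module): the masking above is a byte-wise
# XOR against a repeating 4-byte key, applied through 256-entry translation tables.
# Applying the same mask twice restores the original payload, e.g.:
#
#     mask = b"\x01\x02\x03\x04"
#     data = bytearray(b"hello websocket")
#     _websocket_mask_python(mask, data)   # masks in place
#     _websocket_mask_python(mask, data)   # unmasks: data == b"hello websocket" again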
-
-if NO_EXTENSIONS: # pragma: no cover
- _websocket_mask = _websocket_mask_python
-else:
- try:
- from ._websocket import _websocket_mask_cython # type: ignore[import]
-
- _websocket_mask = _websocket_mask_cython
- except ImportError: # pragma: no cover
- _websocket_mask = _websocket_mask_python
-
-_WS_DEFLATE_TRAILING: Final[bytes] = bytes([0x00, 0x00, 0xFF, 0xFF])
-
-
-_WS_EXT_RE: Final[Pattern[str]] = re.compile(
- r"^(?:;\s*(?:"
- r"(server_no_context_takeover)|"
- r"(client_no_context_takeover)|"
- r"(server_max_window_bits(?:=(\d+))?)|"
- r"(client_max_window_bits(?:=(\d+))?)))*$"
-)
-
-_WS_EXT_RE_SPLIT: Final[Pattern[str]] = re.compile(r"permessage-deflate([^,]+)?")
-
-
-def ws_ext_parse(extstr: Optional[str], isserver: bool = False) -> Tuple[int, bool]:
- if not extstr:
- return 0, False
-
- compress = 0
- notakeover = False
- for ext in _WS_EXT_RE_SPLIT.finditer(extstr):
- defext = ext.group(1)
- # Return compress = 15 when get `permessage-deflate`
- if not defext:
- compress = 15
- break
- match = _WS_EXT_RE.match(defext)
- if match:
- compress = 15
- if isserver:
-                # The server never fails to detect the compress handshake.
-                # The server does not need to send max window bits to the client.
- if match.group(4):
- compress = int(match.group(4))
- # Group3 must match if group4 matches
- # Compress wbit 8 does not support in zlib
- # If compress level not support,
- # CONTINUE to next extension
- if compress > 15 or compress < 9:
- compress = 0
- continue
- if match.group(1):
- notakeover = True
- # Ignore regex group 5 & 6 for client_max_window_bits
- break
- else:
- if match.group(6):
- compress = int(match.group(6))
- # Group5 must match if group6 matches
- # Compress wbit 8 does not support in zlib
- # If compress level not support,
- # FAIL the parse progress
- if compress > 15 or compress < 9:
- raise WSHandshakeError("Invalid window size")
- if match.group(2):
- notakeover = True
- # Ignore regex group 5 & 6 for client_max_window_bits
- break
- # Return Fail if client side and not match
- elif not isserver:
- raise WSHandshakeError("Extension for deflate not supported" + ext.group(1))
-
- return compress, notakeover
-
-
-def ws_ext_gen(
- compress: int = 15, isserver: bool = False, server_notakeover: bool = False
-) -> str:
- # client_notakeover=False not used for server
- # compress wbit 8 does not support in zlib
- if compress < 9 or compress > 15:
- raise ValueError(
-            "Compress wbits must be between 9 and 15; zlib does not support wbits=8"
- )
- enabledext = ["permessage-deflate"]
- if not isserver:
- enabledext.append("client_max_window_bits")
-
- if compress < 15:
- enabledext.append("server_max_window_bits=" + str(compress))
- if server_notakeover:
- enabledext.append("server_no_context_takeover")
- # if client_notakeover:
- # enabledext.append('client_no_context_takeover')
- return "; ".join(enabledext)
-
-
-class WSParserState(IntEnum):
- READ_HEADER = 1
- READ_PAYLOAD_LENGTH = 2
- READ_PAYLOAD_MASK = 3
- READ_PAYLOAD = 4
-
-
-class WebSocketReader:
- def __init__(
- self, queue: DataQueue[WSMessage], max_msg_size: int, compress: bool = True
- ) -> None:
- self.queue = queue
- self._max_msg_size = max_msg_size
-
- self._exc: Optional[BaseException] = None
- self._partial = bytearray()
- self._state = WSParserState.READ_HEADER
-
- self._opcode: Optional[int] = None
- self._frame_fin = False
- self._frame_opcode: Optional[int] = None
- self._frame_payload = bytearray()
-
- self._tail = b""
- self._has_mask = False
- self._frame_mask: Optional[bytes] = None
- self._payload_length = 0
- self._payload_length_flag = 0
- self._compressed: Optional[bool] = None
- self._decompressobj: Any = None # zlib.decompressobj actually
- self._compress = compress
-
- def feed_eof(self) -> None:
- self.queue.feed_eof()
-
- def feed_data(self, data: bytes) -> Tuple[bool, bytes]:
- if self._exc:
- return True, data
-
- try:
- return self._feed_data(data)
- except Exception as exc:
- self._exc = exc
- self.queue.set_exception(exc)
- return True, b""
-
- def _feed_data(self, data: bytes) -> Tuple[bool, bytes]:
- for fin, opcode, payload, compressed in self.parse_frame(data):
- if compressed and not self._decompressobj:
- self._decompressobj = zlib.decompressobj(wbits=-zlib.MAX_WBITS)
- if opcode == WSMsgType.CLOSE:
- if len(payload) >= 2:
- close_code = UNPACK_CLOSE_CODE(payload[:2])[0]
- if close_code < 3000 and close_code not in ALLOWED_CLOSE_CODES:
- raise WebSocketError(
- WSCloseCode.PROTOCOL_ERROR,
- f"Invalid close code: {close_code}",
- )
- try:
- close_message = payload[2:].decode("utf-8")
- except UnicodeDecodeError as exc:
- raise WebSocketError(
- WSCloseCode.INVALID_TEXT, "Invalid UTF-8 text message"
- ) from exc
- msg = WSMessage(WSMsgType.CLOSE, close_code, close_message)
- elif payload:
- raise WebSocketError(
- WSCloseCode.PROTOCOL_ERROR,
- f"Invalid close frame: {fin} {opcode} {payload!r}",
- )
- else:
- msg = WSMessage(WSMsgType.CLOSE, 0, "")
-
- self.queue.feed_data(msg, 0)
-
- elif opcode == WSMsgType.PING:
- self.queue.feed_data(
- WSMessage(WSMsgType.PING, payload, ""), len(payload)
- )
-
- elif opcode == WSMsgType.PONG:
- self.queue.feed_data(
- WSMessage(WSMsgType.PONG, payload, ""), len(payload)
- )
-
- elif (
- opcode not in (WSMsgType.TEXT, WSMsgType.BINARY)
- and self._opcode is None
- ):
- raise WebSocketError(
- WSCloseCode.PROTOCOL_ERROR, f"Unexpected opcode={opcode!r}"
- )
- else:
- # load text/binary
- if not fin:
- # got partial frame payload
- if opcode != WSMsgType.CONTINUATION:
- self._opcode = opcode
- self._partial.extend(payload)
- if self._max_msg_size and len(self._partial) >= self._max_msg_size:
- raise WebSocketError(
- WSCloseCode.MESSAGE_TOO_BIG,
- "Message size {} exceeds limit {}".format(
- len(self._partial), self._max_msg_size
- ),
- )
- else:
- # the previous frame was not finished,
- # so we expect a continuation opcode
- if self._partial:
- if opcode != WSMsgType.CONTINUATION:
- raise WebSocketError(
- WSCloseCode.PROTOCOL_ERROR,
- "The opcode in non-fin frame is expected "
- "to be zero, got {!r}".format(opcode),
- )
-
- if opcode == WSMsgType.CONTINUATION:
- assert self._opcode is not None
- opcode = self._opcode
- self._opcode = None
-
- self._partial.extend(payload)
- if self._max_msg_size and len(self._partial) >= self._max_msg_size:
- raise WebSocketError(
- WSCloseCode.MESSAGE_TOO_BIG,
- "Message size {} exceeds limit {}".format(
- len(self._partial), self._max_msg_size
- ),
- )
-
- # Decompression must be done after all fragments
- # have been received.
- if compressed:
- self._partial.extend(_WS_DEFLATE_TRAILING)
- payload_merged = self._decompressobj.decompress(
- self._partial, self._max_msg_size
- )
- if self._decompressobj.unconsumed_tail:
- left = len(self._decompressobj.unconsumed_tail)
- raise WebSocketError(
- WSCloseCode.MESSAGE_TOO_BIG,
- "Decompressed message size {} exceeds limit {}".format(
- self._max_msg_size + left, self._max_msg_size
- ),
- )
- else:
- payload_merged = bytes(self._partial)
-
- self._partial.clear()
-
- if opcode == WSMsgType.TEXT:
- try:
- text = payload_merged.decode("utf-8")
- self.queue.feed_data(
- WSMessage(WSMsgType.TEXT, text, ""), len(text)
- )
- except UnicodeDecodeError as exc:
- raise WebSocketError(
- WSCloseCode.INVALID_TEXT, "Invalid UTF-8 text message"
- ) from exc
- else:
- self.queue.feed_data(
- WSMessage(WSMsgType.BINARY, payload_merged, ""),
- len(payload_merged),
- )
-
- return False, b""
-
- def parse_frame(
- self, buf: bytes
- ) -> List[Tuple[bool, Optional[int], bytearray, Optional[bool]]]:
- """Return the next frame from the socket."""
- frames = []
- if self._tail:
- buf, self._tail = self._tail + buf, b""
-
- start_pos = 0
- buf_length = len(buf)
-
- while True:
- # read header
- if self._state == WSParserState.READ_HEADER:
- if buf_length - start_pos >= 2:
- data = buf[start_pos : start_pos + 2]
- start_pos += 2
- first_byte, second_byte = data
-
- fin = (first_byte >> 7) & 1
- rsv1 = (first_byte >> 6) & 1
- rsv2 = (first_byte >> 5) & 1
- rsv3 = (first_byte >> 4) & 1
- opcode = first_byte & 0xF
-
- # frame-fin = %x0 ; more frames of this message follow
- # / %x1 ; final frame of this message
- # frame-rsv1 = %x0 ;
- # 1 bit, MUST be 0 unless negotiated otherwise
- # frame-rsv2 = %x0 ;
- # 1 bit, MUST be 0 unless negotiated otherwise
- # frame-rsv3 = %x0 ;
- # 1 bit, MUST be 0 unless negotiated otherwise
- #
- # Remove rsv1 from this test for deflate development
- if rsv2 or rsv3 or (rsv1 and not self._compress):
- raise WebSocketError(
- WSCloseCode.PROTOCOL_ERROR,
- "Received frame with non-zero reserved bits",
- )
-
- if opcode > 0x7 and fin == 0:
- raise WebSocketError(
- WSCloseCode.PROTOCOL_ERROR,
- "Received fragmented control frame",
- )
-
- has_mask = (second_byte >> 7) & 1
- length = second_byte & 0x7F
-
- # Control frames MUST have a payload
- # length of 125 bytes or less
- if opcode > 0x7 and length > 125:
- raise WebSocketError(
- WSCloseCode.PROTOCOL_ERROR,
- "Control frame payload cannot be " "larger than 125 bytes",
- )
-
- # Set the compress status if the previous frame ended with FIN
- # OR if this is the first fragment
- # Raise an error if a non-first fragment has rsv1 = 0x1
- if self._frame_fin or self._compressed is None:
- self._compressed = True if rsv1 else False
- elif rsv1:
- raise WebSocketError(
- WSCloseCode.PROTOCOL_ERROR,
- "Received frame with non-zero reserved bits",
- )
-
- self._frame_fin = bool(fin)
- self._frame_opcode = opcode
- self._has_mask = bool(has_mask)
- self._payload_length_flag = length
- self._state = WSParserState.READ_PAYLOAD_LENGTH
- else:
- break
-
- # read payload length
- if self._state == WSParserState.READ_PAYLOAD_LENGTH:
- length = self._payload_length_flag
- if length == 126:
- if buf_length - start_pos >= 2:
- data = buf[start_pos : start_pos + 2]
- start_pos += 2
- length = UNPACK_LEN2(data)[0]
- self._payload_length = length
- self._state = (
- WSParserState.READ_PAYLOAD_MASK
- if self._has_mask
- else WSParserState.READ_PAYLOAD
- )
- else:
- break
- elif length > 126:
- if buf_length - start_pos >= 8:
- data = buf[start_pos : start_pos + 8]
- start_pos += 8
- length = UNPACK_LEN3(data)[0]
- self._payload_length = length
- self._state = (
- WSParserState.READ_PAYLOAD_MASK
- if self._has_mask
- else WSParserState.READ_PAYLOAD
- )
- else:
- break
- else:
- self._payload_length = length
- self._state = (
- WSParserState.READ_PAYLOAD_MASK
- if self._has_mask
- else WSParserState.READ_PAYLOAD
- )
-
- # read payload mask
- if self._state == WSParserState.READ_PAYLOAD_MASK:
- if buf_length - start_pos >= 4:
- self._frame_mask = buf[start_pos : start_pos + 4]
- start_pos += 4
- self._state = WSParserState.READ_PAYLOAD
- else:
- break
-
- if self._state == WSParserState.READ_PAYLOAD:
- length = self._payload_length
- payload = self._frame_payload
-
- chunk_len = buf_length - start_pos
- if length >= chunk_len:
- self._payload_length = length - chunk_len
- payload.extend(buf[start_pos:])
- start_pos = buf_length
- else:
- self._payload_length = 0
- payload.extend(buf[start_pos : start_pos + length])
- start_pos = start_pos + length
-
- if self._payload_length == 0:
- if self._has_mask:
- assert self._frame_mask is not None
- _websocket_mask(self._frame_mask, payload)
-
- frames.append(
- (self._frame_fin, self._frame_opcode, payload, self._compressed)
- )
-
- self._frame_payload = bytearray()
- self._state = WSParserState.READ_HEADER
- else:
- break
-
- self._tail = buf[start_pos:]
-
- return frames
-
-
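- # A small, self-contained sketch of the header bit layout that
- # WebSocketReader.parse_frame unpacks above (the example bytes and helper
- # name are made up for illustration): 0x81 0x85 is a final, masked TEXT
- # frame announcing a 5-byte payload.
- def _example_decode_header(first_byte: int = 0x81, second_byte: int = 0x85):
-     fin = (first_byte >> 7) & 1  # 1 -> final frame of the message
-     opcode = first_byte & 0xF  # 0x1 -> TEXT
-     has_mask = (second_byte >> 7) & 1  # 1 -> payload is masked (client frames)
-     length = second_byte & 0x7F  # 5 -> payload length fits in the 7-bit field
-     return fin, opcode, has_mask, length  # (1, 1, 1, 5)
-
-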
-class WebSocketWriter:
- def __init__(
- self,
- protocol: BaseProtocol,
- transport: asyncio.Transport,
- *,
- use_mask: bool = False,
- limit: int = DEFAULT_LIMIT,
- random: Any = random.Random(),
- compress: int = 0,
- notakeover: bool = False,
- ) -> None:
- self.protocol = protocol
- self.transport = transport
- self.use_mask = use_mask
- self.randrange = random.randrange
- self.compress = compress
- self.notakeover = notakeover
- self._closing = False
- self._limit = limit
- self._output_size = 0
- self._compressobj: Any = None # actually compressobj
-
- async def _send_frame(
- self, message: bytes, opcode: int, compress: Optional[int] = None
- ) -> None:
- """Send a frame over the websocket with message as its payload."""
- if self._closing and not (opcode & WSMsgType.CLOSE):
- raise ConnectionResetError("Cannot write to closing transport")
-
- rsv = 0
-
- # Only compress larger packets (disabled)
- # Do small packets need to be compressed?
- # if self.compress and opcode < 8 and len(message) > 124:
- if (compress or self.compress) and opcode < 8:
- if compress:
- # Use a one-off compressobj when a per-frame compress level is given
- compressobj = zlib.compressobj(level=zlib.Z_BEST_SPEED, wbits=-compress)
- else: # self.compress
- if not self._compressobj:
- self._compressobj = zlib.compressobj(
- level=zlib.Z_BEST_SPEED, wbits=-self.compress
- )
- compressobj = self._compressobj
-
- message = compressobj.compress(message)
- message = message + compressobj.flush(
- zlib.Z_FULL_FLUSH if self.notakeover else zlib.Z_SYNC_FLUSH
- )
- if message.endswith(_WS_DEFLATE_TRAILING):
- message = message[:-4]
- rsv = rsv | 0x40
-
- msg_length = len(message)
-
- use_mask = self.use_mask
- if use_mask:
- mask_bit = 0x80
- else:
- mask_bit = 0
-
- if msg_length < 126:
- header = PACK_LEN1(0x80 | rsv | opcode, msg_length | mask_bit)
- elif msg_length < (1 << 16):
- header = PACK_LEN2(0x80 | rsv | opcode, 126 | mask_bit, msg_length)
- else:
- header = PACK_LEN3(0x80 | rsv | opcode, 127 | mask_bit, msg_length)
- if use_mask:
- mask = self.randrange(0, 0xFFFFFFFF)
- mask = mask.to_bytes(4, "big")
- message = bytearray(message)
- _websocket_mask(mask, message)
- self._write(header + mask + message)
- self._output_size += len(header) + len(mask) + len(message)
- else:
- if len(message) > MSG_SIZE:
- self._write(header)
- self._write(message)
- else:
- self._write(header + message)
-
- self._output_size += len(header) + len(message)
-
- if self._output_size > self._limit:
- self._output_size = 0
- await self.protocol._drain_helper()
-
- def _write(self, data: bytes) -> None:
- if self.transport is None or self.transport.is_closing():
- raise ConnectionResetError("Cannot write to closing transport")
- self.transport.write(data)
-
- async def pong(self, message: bytes = b"") -> None:
- """Send pong message."""
- if isinstance(message, str):
- message = message.encode("utf-8")
- await self._send_frame(message, WSMsgType.PONG)
-
- async def ping(self, message: bytes = b"") -> None:
- """Send ping message."""
- if isinstance(message, str):
- message = message.encode("utf-8")
- await self._send_frame(message, WSMsgType.PING)
-
- async def send(
- self,
- message: Union[str, bytes],
- binary: bool = False,
- compress: Optional[int] = None,
- ) -> None:
- """Send a frame over the websocket with message as its payload."""
- if isinstance(message, str):
- message = message.encode("utf-8")
- if binary:
- await self._send_frame(message, WSMsgType.BINARY, compress)
- else:
- await self._send_frame(message, WSMsgType.TEXT, compress)
-
- async def close(self, code: int = 1000, message: bytes = b"") -> None:
- """Close the websocket, sending the specified code and message."""
- if isinstance(message, str):
- message = message.encode("utf-8")
- try:
- await self._send_frame(
- PACK_CLOSE_CODE(code) + message, opcode=WSMsgType.CLOSE
- )
- finally:
- self._closing = True
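-
-
- # A short sketch of the payload-length encoding used in _send_frame above,
- # reusing the module's PACK_LEN1/2/3 helpers (the function name and sample
- # values are illustrative; the mask bit is omitted here, and the byte values
- # assume the usual !BB / !BBH / !BBQ packing behind those helpers):
- def _example_frame_header(payload_len: int, opcode: int = 0x1, rsv: int = 0) -> bytes:
-     first = 0x80 | rsv | opcode  # FIN bit set
-     if payload_len < 126:
-         return PACK_LEN1(first, payload_len)
-     if payload_len < (1 << 16):
-         return PACK_LEN2(first, 126, payload_len)
-     return PACK_LEN3(first, 127, payload_len)
-
-
- # e.g. _example_frame_header(5) == b"\x81\x05"
- #      _example_frame_header(300) == b"\x81\x7e\x01\x2c"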
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_T_F_A_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_T_F_A_.py
deleted file mode 100644
index e3cf2db2d744cdda880ec1255808f60bc3795c61..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_T_F_A_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from . import asciiTable
-
-
-class table_T_T_F_A_(asciiTable.asciiTable):
- pass
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-e2533c7c.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-e2533c7c.js
deleted file mode 100644
index 1d13643f0d08b9ec611e14c2c34b7fbfc208740e..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-e2533c7c.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as j,e as B,s as H,N as L,K as o,U as d,p as g,n as T,A as v,B as S,h as q,k as h,o as b,z as k,v as w,x as M,E as z,ae as C,O as E,q as A,r as D,F}from"./index-3370be2a.js";import{B as K}from"./Button-89624748.js";function N(s){let e,a,i;return{c(){e=L("div"),o(e,"id",s[0]),o(e,"class",a="prose "+s[1].join(" ")+" svelte-1yrv54"),o(e,"data-testid","markdown"),o(e,"dir",i=s[5]?"rtl":"ltr"),d(e,"min",s[4]),d(e,"hide",!s[2])},m(t,_){g(t,e,_),e.innerHTML=s[3],s[7](e)},p(t,[_]){_&8&&(e.innerHTML=t[3]),_&1&&o(e,"id",t[0]),_&2&&a!==(a="prose "+t[1].join(" ")+" svelte-1yrv54")&&o(e,"class",a),_&32&&i!==(i=t[5]?"rtl":"ltr")&&o(e,"dir",i),_&18&&d(e,"min",t[4]),_&6&&d(e,"hide",!t[2])},i:T,o:T,d(t){t&&v(e),s[7](null)}}}function O(s,e,a){let{elem_id:i=""}=e,{elem_classes:t=[]}=e,{visible:_=!0}=e,{value:r}=e,{min_height:u=!1}=e,{rtl:l=!1}=e;const m=S();let c;function f(n){q[n?"unshift":"push"](()=>{c=n,a(6,c)})}return s.$$set=n=>{"elem_id"in n&&a(0,i=n.elem_id),"elem_classes"in n&&a(1,t=n.elem_classes),"visible"in n&&a(2,_=n.visible),"value"in n&&a(3,r=n.value),"min_height"in n&&a(4,u=n.min_height),"rtl"in n&&a(5,l=n.rtl)},s.$$.update=()=>{s.$$.dirty&8&&m("change")},[i,t,_,r,u,l,c,f]}class U extends j{constructor(e){super(),B(this,e,O,N,H,{elem_id:0,elem_classes:1,visible:2,value:3,min_height:4,rtl:5})}}function G(s){let e,a,i,t,_;const r=[s[4],{variant:"center"}];let u={};for(let l=0;l{"label"in n&&a(6,i=n.label),"elem_id"in n&&a(0,t=n.elem_id),"elem_classes"in n&&a(1,_=n.elem_classes),"visible"in n&&a(2,r=n.visible),"value"in n&&a(3,u=n.value),"loading_status"in n&&a(4,l=n.loading_status),"rtl"in n&&a(5,m=n.rtl)},s.$$.update=()=>{s.$$.dirty&64&&c("change")},[t,_,r,u,l,m,i,f]}class P extends j{constructor(e){super(),B(this,e,J,I,H,{label:6,elem_id:0,elem_classes:1,visible:2,value:3,loading_status:4,rtl:5})}}const V=P,W=["static"],X=s=>({type:{payload:"string"},description:{payload:"HTML rendering of markdown"}});export{V as Component,X as document,W as modes};
-//# sourceMappingURL=index-e2533c7c.js.map
diff --git a/spaces/DaleChen/AutoGPT/autogpt/memory/weaviate.py b/spaces/DaleChen/AutoGPT/autogpt/memory/weaviate.py
deleted file mode 100644
index 5408e9a97aa3594ad443448cfc31f2546a01eb09..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/memory/weaviate.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import uuid
-
-import weaviate
-from weaviate import Client
-from weaviate.embedded import EmbeddedOptions
-from weaviate.util import generate_uuid5
-
-from autogpt.config import Config
-from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding
-
-
-def default_schema(weaviate_index):
- return {
- "class": weaviate_index,
- "properties": [
- {
- "name": "raw_text",
- "dataType": ["text"],
- "description": "original text for the embedding",
- }
- ],
- }
-
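- # Illustrative sketch (hypothetical index name "Autogpt"): the helper above
- # produces the single-property class definition that gets registered in
- # Weaviate, i.e. default_schema("Autogpt") ==
- #
- #     {
- #         "class": "Autogpt",
- #         "properties": [
- #             {
- #                 "name": "raw_text",
- #                 "dataType": ["text"],
- #                 "description": "original text for the embedding",
- #             }
- #         ],
- #     }
-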
-
-class WeaviateMemory(MemoryProviderSingleton):
- def __init__(self, cfg):
- auth_credentials = self._build_auth_credentials(cfg)
-
- url = f"{cfg.weaviate_protocol}://{cfg.weaviate_host}:{cfg.weaviate_port}"
-
- if cfg.use_weaviate_embedded:
- self.client = Client(
- embedded_options=EmbeddedOptions(
- hostname=cfg.weaviate_host,
- port=int(cfg.weaviate_port),
- persistence_data_path=cfg.weaviate_embedded_path,
- )
- )
-
- print(
- f"Weaviate Embedded running on: {url} with persistence path: {cfg.weaviate_embedded_path}"
- )
- else:
- self.client = Client(url, auth_client_secret=auth_credentials)
-
- self.index = WeaviateMemory.format_classname(cfg.memory_index)
- self._create_schema()
-
- @staticmethod
- def format_classname(index):
- # weaviate uses capitalised index names
- # The python client uses the following code to format
- # index names before the corresponding class is created
- if len(index) == 1:
- return index.capitalize()
- return index[0].capitalize() + index[1:]
-
- def _create_schema(self):
- schema = default_schema(self.index)
- if not self.client.schema.contains(schema):
- self.client.schema.create_class(schema)
-
- def _build_auth_credentials(self, cfg):
- if cfg.weaviate_username and cfg.weaviate_password:
- return weaviate.AuthClientPassword(
- cfg.weaviate_username, cfg.weaviate_password
- )
- if cfg.weaviate_api_key:
- return weaviate.AuthApiKey(api_key=cfg.weaviate_api_key)
- else:
- return None
-
- def add(self, data):
- vector = get_ada_embedding(data)
-
- doc_uuid = generate_uuid5(data, self.index)
- data_object = {"raw_text": data}
-
- with self.client.batch as batch:
- batch.add_data_object(
- uuid=doc_uuid,
- data_object=data_object,
- class_name=self.index,
- vector=vector,
- )
-
- return f"Inserting data into memory at uuid: {doc_uuid}:\n data: {data}"
-
- def get(self, data):
- return self.get_relevant(data, 1)
-
- def clear(self):
- self.client.schema.delete_all()
-
- # weaviate does not yet have a neat way to just remove the items in an index
- # without removing the entire schema, therefore we need to re-create it
- # after a call to delete_all
- self._create_schema()
-
- return "Obliterated"
-
- def get_relevant(self, data, num_relevant=5):
- query_embedding = get_ada_embedding(data)
- try:
- results = (
- self.client.query.get(self.index, ["raw_text"])
- .with_near_vector({"vector": query_embedding, "certainty": 0.7})
- .with_limit(num_relevant)
- .do()
- )
-
- if len(results["data"]["Get"][self.index]) > 0:
- return [
- str(item["raw_text"]) for item in results["data"]["Get"][self.index]
- ]
- else:
- return []
-
- except Exception as err:
- print(f"Unexpected error {err=}, {type(err)=}")
- return []
-
- def get_stats(self):
- result = self.client.query.aggregate(self.index).with_meta_count().do()
- class_data = result["data"]["Aggregate"][self.index]
-
- return class_data[0]["meta"] if class_data else {}
diff --git a/spaces/DataNerd2021/song_recommendation_app/app.py b/spaces/DataNerd2021/song_recommendation_app/app.py
deleted file mode 100644
index 20e2beafb72236331fdbc59353814f21cf9d0cbd..0000000000000000000000000000000000000000
--- a/spaces/DataNerd2021/song_recommendation_app/app.py
+++ /dev/null
@@ -1,219 +0,0 @@
-'''
-Python libraries allow users to extend the capabilities of the language. For this project, I will be using the following libraries:
-- pandas and numpy (for data analysis and manipulation)
-- streamlit and plotly (for UI design and data visualization)
-- pyodbc and spotipy (for SQL Server and Spotify API connections)
-'''
-
-# import libraries
-
-
-
-import pandas as pd
-import numpy as np
-import streamlit as st
-import plotly.express as px
-from random import seed
-import spotipy
-from spotipy.oauth2 import SpotifyClientCredentials
-
-# define function to highlight output dataframe cells based on value
-
-
-def highlight_colors(val, color_if_true, color_if_false):
- color = color_if_true if 0.75 <= val <= 1.0 else color_if_false
- return 'background-color: {}'.format(color)
-
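- # A quick sketch of what the styler callback above returns (the threshold
- # comes from the function; the colour strings are illustrative):
- #
- #     highlight_colors(0.82, '#5EFF33', 'white')  # -> 'background-color: #5EFF33'
- #     highlight_colors(0.40, '#5EFF33', 'white')  # -> 'background-color: white'
-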
-# establish API connection
-
-cid = '3fda75b7146a4769b207ee44017b3abe'
-secret = '2a755cb04a18406b9394dbef2f8069dd'
-
-client_credentials_manager = SpotifyClientCredentials(client_id=cid, client_secret=secret)
-sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager)
-
-# establish SQL Server connection
-
-
-# read data from parquet file
-
-query = pd.read_parquet("tracks.parquet.gzip")
-
-
-# create metrics for analysis
-
-query2 = pd.melt(query, id_vars=['uri'], var_name='metrics', value_name='score', value_vars=['instrumentalness', 'danceability', 'energy', 'acousticness', 'valence', 'liveness'])
-
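- # Sketch of the long ("tidy") frame produced by pd.melt above, with
- # illustrative values (one row per track/metric pair, which is what the
- # polar chart further below expects):
- #
- #     uri                 metrics           score
- #     spotify:track:...   instrumentalness  0.02
- #     spotify:track:...   danceability      0.71
- #     spotify:track:...   energy            0.65
-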
-
-
-# name the app
-
-st.set_page_config(page_title='Song Recommendation App', layout='centered')
-
-# create internal CSS
-
-st.markdown(""" """, unsafe_allow_html=True)
-
-# create sidebar menu
-
-sidebar_title = st.sidebar.header('Pick Your Favorite Song')
-artists = query['artist_name'].drop_duplicates()
-artists = artists.sort_values()
-artist_choice = st.sidebar.selectbox('Choose an Artist:', artists)
-tracks = query['track_name'].loc[query['artist_name'] == artist_choice].drop_duplicates()
-tracks = tracks.sort_values()
-track_choice = st.sidebar.selectbox('Choose a Song', tracks)
-empty = st.sidebar.text('')
-output = query['uri'].loc[(query['track_name'] == track_choice) & (query['artist_name'] == artist_choice)].values
-output_bpm = query['tempo'].loc[(query['track_name'] == track_choice) & (query['artist_name'] == artist_choice)].drop_duplicates().values
-output_bpm = output_bpm.astype(float)
-output_bpm = np.round(output_bpm, decimals=0)
-output_bpm = output_bpm.astype(int)
-uri_output = st.sidebar.selectbox('Select the URI:', options=(output))
-
-
-viz_query = query2.loc[query2['uri'] == uri_output]
-
-# create title for main interface
-
-page_title = st.markdown(f'''
-Song Recommendation Engine 2.0
-''', unsafe_allow_html=True)
-
-# create dropdown menu for app description
-
-st.markdown(' ', unsafe_allow_html=True)
-with st.expander('Description'):
- st.markdown('''Have you ever wondered how Spotify's Song Recommendation Algorithm works? This app allows you to take a behind-the-scenes look at how Spotify uses your data to recommend songs based on various metrics.''', unsafe_allow_html=True)
-
-# allow user to preview song and view album cover
-
-st.markdown('''
-''', unsafe_allow_html=True)
-
-
-# create data visualization using new query from uri output
-
-
-
-fig = px.bar_polar(viz_query, theta='metrics', r='score', range_r=[0.0,1.0], hover_name='metrics', hover_data={'score':True, 'metrics':False}, width=750, height=600, color_continuous_scale='Sunset', color='score', range_color=[0.0,1.0], template='plotly', title='Song Metrics')
-fig = fig.update_layout(polar_radialaxis_gridcolor="#e3ecf6", polar_angularaxis_gridcolor="#e3ecf6", polar= dict(radialaxis= dict(showticklabels= False)), hovermode="x")
-fig = fig.update_traces(hovertemplate="Metric: %{theta} Score: %{r}", hoverlabel= dict(bgcolor="#ffffff"))
-st.plotly_chart(fig)
-
-# create drop-down menu to display definitions for each metric
-
-with st.expander('Metric Definitions'):
- st.markdown(f'''
-Acousticness
-\nA confidence measure from 0.00 to 1.00 of whether a track is acoustic. 1.00 represents high confidence that the track is acoustic.\n\n
-Danceability
-\nThis describes how suitable a track is for dancing based on a combination of musical elements including tempo (BPM), rhythm stability, beat strength, and overall regularity. A value of 0.00 is least danceable and 1.00 is most danceable.\n\n
-Energy
-\nA measure from 0.00 to 1.00 that represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.\n\n
-Instrumentalness
-\nPredicts whether a track contains no vocals. "Ooh" and "Aah" sounds are treated as instrumental in this context. The closer the value is to 1.00, the greater the likelihood that the track contains no vocal content.\n\n
-Liveness
-\nDetects the presence of an audience in the recording. The closer the value is to 1.00, the greater the likelihood that the track was performed live.\n\n
-Valence
-\nA measure from 0.00 to 1.00 describing the musical positiveness conveyed by a track. Tracks with high valence (> 0.50) sound more positive, whereas tracks with low valence (< 0.50) sound more negative.\n\n * Web API Reference: Get Track Audio Features, Spotify, developer.spotify.com/documentation/web-api/reference/#/operations/get-audio-features.''', unsafe_allow_html=True)
-
-# create drop-down menu to display song recommendations based on user input
-
-with st.expander('Song Recommendations'):
- st.subheader('Your Song')
- result_query = query.loc[query['track_uri'] == uri_output]
- result_query = result_query.drop_duplicates()
- result_query = result_query.reset_index()
- result_df = pd.DataFrame(result_query)
- result_df = result_df[['track_name', 'artist_name', 'album_name', 'acousticness', 'danceability', 'energy', 'instrumentalness', 'liveness', 'valence', 'artist_uri', 'uri']]
- st.dataframe(result_df)
-
-
- # get all artist data
-
- result_list2 = pd.json_normalize(sp.recommendations(seed_tracks=[result_df['uri'][0]], seed_artists=[result_df['artist_uri'][0]], limit=25), record_path=['tracks', ['artists']])
-
- result_list2 = result_list2.merge(query, left_on='uri', right_on='artist_uri')
- result_list2 = result_list2.rename(columns={'name': 'Artist Name', 'uri_x': 'Artist URI'})
- result_list2 = result_list2.rename(columns={'track_name': 'Track Name'})
- result_list2 = result_list2[['Track Name', 'Artist Name', 'album_name', 'acousticness', 'danceability', 'energy', 'instrumentalness', 'liveness', 'valence']]
- final_df = result_list2.head(25)
-
- result_df = result_df.reset_index()
- final_df = final_df.reset_index()
-
-
- # create new field to calculate likeness for song metrics
-
- final_df['acousticness'] = round(final_df['acousticness'].astype(float), 3)
- final_df['danceability'] = round(final_df['danceability'].astype(float), 3)
- final_df['energy'] = round(final_df['energy'].astype(float), 3)
- final_df['instrumentalness'] = round(final_df['instrumentalness'].astype(float), 3)
- final_df['liveness'] = round(final_df['liveness'].astype(float), 3)
- final_df['valence'] = round(final_df['valence'].astype(float), 3)
- final_df = final_df[['Track Name', 'Artist Name', 'acousticness', 'danceability', 'energy', 'instrumentalness', 'liveness', 'valence']]
- final_df = final_df.drop_duplicates()
- final_df = final_df.style.applymap(highlight_colors, color_if_true='#5EFF33', color_if_false='white', subset=['acousticness', 'danceability', 'energy', 'instrumentalness', 'liveness', 'valence'])
- st.subheader('Recommendations (by likeness)')
- st.dataframe(final_df)
-
-
-
-
-
diff --git a/spaces/Datasculptor/MusicGen/audiocraft/modules/seanet.py b/spaces/Datasculptor/MusicGen/audiocraft/modules/seanet.py
deleted file mode 100644
index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/audiocraft/modules/seanet.py
+++ /dev/null
@@ -1,258 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-import numpy as np
-import torch.nn as nn
-
-from .conv import StreamableConv1d, StreamableConvTranspose1d
-from .lstm import StreamableLSTM
-
-
-class SEANetResnetBlock(nn.Module):
- """Residual block from SEANet model.
-
- Args:
- dim (int): Dimension of the input/output.
- kernel_sizes (list): List of kernel sizes for the convolutions.
- dilations (list): List of dilations for the convolutions.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection.
- """
- def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1],
- activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False,
- pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True):
- super().__init__()
- assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations'
- act = getattr(nn, activation)
- hidden = dim // compress
- block = []
- for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)):
- in_chs = dim if i == 0 else hidden
- out_chs = dim if i == len(kernel_sizes) - 1 else hidden
- block += [
- act(**activation_params),
- StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation,
- norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode),
- ]
- self.block = nn.Sequential(*block)
- self.shortcut: nn.Module
- if true_skip:
- self.shortcut = nn.Identity()
- else:
- self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode)
-
- def forward(self, x):
- return self.shortcut(x) + self.block(x)
-
-
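- # A quick shape sanity check (illustrative sizes; shown as a comment so the
- # module does not run it at import time): the residual block preserves the
- # (batch, channels, time) shape of its input.
- #
- #     import torch
- #     block = SEANetResnetBlock(dim=32)
- #     x = torch.randn(2, 32, 100)
- #     assert block(x).shape == x.shape
-
-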
-class SEANetEncoder(nn.Module):
- """SEANet encoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
- n_residual_layers (int): Number of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of
- upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here
- that must match the decoder order. We use the decoder order as some models may only employ the decoder.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
- last_kernel_size (int): Kernel size for the last convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers at the end of the encoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the encoder, it corresponds to the N first blocks.
- """
- def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
- ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
- last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
- pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
- disable_norm_outer_blocks: int = 0):
- super().__init__()
- self.channels = channels
- self.dimension = dimension
- self.n_filters = n_filters
- self.ratios = list(reversed(ratios))
- del ratios
- self.n_residual_layers = n_residual_layers
- self.hop_length = np.prod(self.ratios)
- self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks
- self.disable_norm_outer_blocks = disable_norm_outer_blocks
- assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
- "Number of blocks for which to disable norm is invalid. " \
- "It should be less than or equal to the actual number of blocks in the network and greater than or equal to 0."
-
- act = getattr(nn, activation)
- mult = 1
- model: tp.List[nn.Module] = [
- StreamableConv1d(channels, mult * n_filters, kernel_size,
- norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
- # Downsample from raw audio scale
- for i, ratio in enumerate(self.ratios):
- block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm
- # Add residual layers
- for j in range(n_residual_layers):
- model += [
- SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1],
- dilations=[dilation_base ** j, 1],
- norm=block_norm, norm_params=norm_params,
- activation=activation, activation_params=activation_params,
- causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
-
- # Add downsampling layers
- model += [
- act(**activation_params),
- StreamableConv1d(mult * n_filters, mult * n_filters * 2,
- kernel_size=ratio * 2, stride=ratio,
- norm=block_norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode),
- ]
- mult *= 2
-
- if lstm:
- model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
-
- model += [
- act(**activation_params),
- StreamableConv1d(mult * n_filters, dimension, last_kernel_size,
- norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-
- self.model = nn.Sequential(*model)
-
- def forward(self, x):
- return self.model(x)
-
-
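- # With the default ratios [8, 5, 4, 2] the encoder hop length is
- # 8 * 5 * 4 * 2 = 320 samples, so (as a rough sketch that assumes the input
- # length is a multiple of the hop) a mono waveform of shape (1, 1, 32000)
- # encodes to a latent of shape (1, 128, 100):
- #
- #     import torch
- #     encoder = SEANetEncoder()
- #     z = encoder(torch.randn(1, 1, 32000))
- #     assert z.shape == (1, 128, 100)
-
-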
-class SEANetDecoder(nn.Module):
- """SEANet decoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
- n_residual_layers (int): Number of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- final_activation (str): Final activation function after all convolutions.
- final_activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
- last_kernel_size (int): Kernel size for the last convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers at the beginning of the decoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the decoder, it corresponds to the N last blocks.
- trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup.
- If equal to 1.0, it means that all the trimming is done at the right.
- """
- def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
- ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None,
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
- last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
- pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
- disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0):
- super().__init__()
- self.dimension = dimension
- self.channels = channels
- self.n_filters = n_filters
- self.ratios = ratios
- del ratios
- self.n_residual_layers = n_residual_layers
- self.hop_length = np.prod(self.ratios)
- self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks
- self.disable_norm_outer_blocks = disable_norm_outer_blocks
- assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
- "Number of blocks for which to disable norm is invalid. " \
- "It should be less than or equal to the actual number of blocks in the network and greater than or equal to 0."
-
- act = getattr(nn, activation)
- mult = int(2 ** len(self.ratios))
- model: tp.List[nn.Module] = [
- StreamableConv1d(dimension, mult * n_filters, kernel_size,
- norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-
- if lstm:
- model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
-
- # Upsample to raw audio scale
- for i, ratio in enumerate(self.ratios):
- block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm
- # Add upsampling layers
- model += [
- act(**activation_params),
- StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2,
- kernel_size=ratio * 2, stride=ratio,
- norm=block_norm, norm_kwargs=norm_params,
- causal=causal, trim_right_ratio=trim_right_ratio),
- ]
- # Add residual layers
- for j in range(n_residual_layers):
- model += [
- SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1],
- dilations=[dilation_base ** j, 1],
- activation=activation, activation_params=activation_params,
- norm=block_norm, norm_params=norm_params, causal=causal,
- pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
-
- mult //= 2
-
- # Add final layers
- model += [
- act(**activation_params),
- StreamableConv1d(n_filters, channels, last_kernel_size,
- norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
- # Add optional final activation to decoder (eg. tanh)
- if final_activation is not None:
- final_act = getattr(nn, final_activation)
- final_activation_params = final_activation_params or {}
- model += [
- final_act(**final_activation_params)
- ]
- self.model = nn.Sequential(*model)
-
- def forward(self, z):
- y = self.model(z)
- return y
diff --git a/spaces/Demi2809/rvc-models/infer_pack/transforms.py b/spaces/Demi2809/rvc-models/infer_pack/transforms.py
deleted file mode 100644
index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000
--- a/spaces/Demi2809/rvc-models/infer_pack/transforms.py
+++ /dev/null
@@ -1,209 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
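- # A small usage sketch (shapes are illustrative): with "linear" tails the
- # derivative tensor needs num_bins - 1 entries (it is padded internally),
- # inputs outside [-tail_bound, tail_bound] pass through unchanged, and the
- # outputs keep the input shape.
- #
- #     x = torch.rand(4, 10) * 2 - 1        # values in (-1, 1)
- #     num_bins = 8
- #     w = torch.randn(4, 10, num_bins)     # unnormalized widths
- #     h = torch.randn(4, 10, num_bins)     # unnormalized heights
- #     d = torch.randn(4, 10, num_bins - 1) # unnormalized derivatives
- #     y, logabsdet = piecewise_rational_quadratic_transform(
- #         x, w, h, d, tails="linear", tail_bound=1.0)
- #     assert y.shape == x.shape == logabsdet.shape
-
-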
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
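-
-
- # Round-trip sketch (parameters and shapes are illustrative): running the
- # spline forward and then again with inverse=True and the same parameters
- # should recover the inputs up to numerical precision, and the two logabsdet
- # values should cancel (the inverse branch already returns -logabsdet).
- #
- #     x = torch.rand(3, 5)           # inside the default [0, 1] domain
- #     w = torch.randn(3, 5, 10)
- #     h = torch.randn(3, 5, 10)
- #     d = torch.randn(3, 5, 11)      # num_bins + 1 derivatives for this path
- #     y, ld_fwd = rational_quadratic_spline(x, w, h, d)
- #     x2, ld_inv = rational_quadratic_spline(y, w, h, d, inverse=True)
- #     # torch.allclose(x2, x, atol=1e-5); torch.allclose(ld_fwd, -ld_inv, atol=1e-5)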
diff --git a/spaces/Dineshkumars/Text-Summarization/README.md b/spaces/Dineshkumars/Text-Summarization/README.md
deleted file mode 100644
index 846d1caf7028f42bf4de60b85669df2f571fb4f2..0000000000000000000000000000000000000000
--- a/spaces/Dineshkumars/Text-Summarization/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text Summarization
-emoji: 💩
-colorFrom: green
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/server.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/server.py
deleted file mode 100644
index d8422a2bad5ac2a09d4582a98da4f962dac1a911..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/server.py
+++ /dev/null
@@ -1,185 +0,0 @@
-#!/usr/bin/env python
-
-import argparse, connexion, os, sys, yaml, json, socket
-from netdissect.easydict import EasyDict
-from flask import send_from_directory, redirect
-from flask_cors import CORS
-
-
-from netdissect.serverstate import DissectionProject
-
-__author__ = 'Hendrik Strobelt, David Bau'
-
-CONFIG_FILE_NAME = 'dissect.json'
-projects = {}
-
-app = connexion.App(__name__, debug=False)
-
-
-def get_all_projects():
- res = []
- for key, project in projects.items():
- # print key
- res.append({
- 'project': key,
- 'info': {
- 'layers': [layer['layer'] for layer in project.get_layers()]
- }
- })
- return sorted(res, key=lambda x: x['project'])
-
-def get_layers(project):
- return {
- 'request': {'project': project},
- 'res': projects[project].get_layers()
- }
-
-def get_units(project, layer):
- return {
- 'request': {'project': project, 'layer': layer},
- 'res': projects[project].get_units(layer)
- }
-
-def get_rankings(project, layer):
- return {
- 'request': {'project': project, 'layer': layer},
- 'res': projects[project].get_rankings(layer)
- }
-
-def get_levels(project, layer, quantiles):
- return {
- 'request': {'project': project, 'layer': layer, 'quantiles': quantiles},
- 'res': projects[project].get_levels(layer, quantiles)
- }
-
-def get_channels(project, layer):
- answer = dict(channels=projects[project].get_channels(layer))
- return {
- 'request': {'project': project, 'layer': layer},
- 'res': answer
- }
-
-def post_generate(gen_req):
- project = gen_req['project']
- zs = gen_req.get('zs', None)
- ids = gen_req.get('ids', None)
- return_urls = gen_req.get('return_urls', False)
- assert (zs is None) != (ids is None) # one or the other, not both
- ablations = gen_req.get('ablations', [])
- interventions = gen_req.get('interventions', None)
- # no z available if ablations are requested
- generated = projects[project].generate_images(zs, ids, interventions,
- return_urls=return_urls)
- return {
- 'request': gen_req,
- 'res': generated
- }
-
-def post_features(feat_req):
- project = feat_req['project']
- ids = feat_req['ids']
- masks = feat_req.get('masks', None)
- layers = feat_req.get('layers', None)
- interventions = feat_req.get('interventions', None)
- features = projects[project].get_features(
- ids, masks, layers, interventions)
- return {
- 'request': feat_req,
- 'res': features
- }
-
-def post_featuremaps(feat_req):
- project = feat_req['project']
- ids = feat_req['ids']
- layers = feat_req.get('layers', None)
- interventions = feat_req.get('interventions', None)
- featuremaps = projects[project].get_featuremaps(
- ids, layers, interventions)
- return {
- 'request': feat_req,
- 'res': featuremaps
- }
-
-@app.route('/client/<path:path>')
-def send_static(path):
- """ serves all files from ./client/ to ``/client/``
-
- :param path: path from api call
- """
- return send_from_directory(args.client, path)
-
-@app.route('/data/<path:path>')
-def send_data(path):
- """ serves all files from the data dir to ``/data/``
-
- :param path: path from api call
- """
- print('Got the data route for', path)
- return send_from_directory(args.data, path)
-
-
-@app.route('/')
-def redirect_home():
- return redirect('/client/index.html', code=302)
-
-
-def load_projects(directory):
- """
- searches for CONFIG_FILE_NAME in all subdirectories of directory
- and creates data handlers for all of them
-
- :param directory: scan directory
- :return: null
- """
- project_dirs = []
- # Don't search more than 2 dirs deep.
- search_depth = 2 + directory.count(os.path.sep)
- for root, dirs, files in os.walk(directory):
- if CONFIG_FILE_NAME in files:
- project_dirs.append(root)
- # Don't get subprojects under a project dir.
- del dirs[:]
- elif root.count(os.path.sep) >= search_depth:
- del dirs[:]
- for p_dir in project_dirs:
- print('Loading %s' % os.path.join(p_dir, CONFIG_FILE_NAME))
- with open(os.path.join(p_dir, CONFIG_FILE_NAME), 'r') as jf:
- config = EasyDict(json.load(jf))
- dh_id = os.path.split(p_dir)[1]
- projects[dh_id] = DissectionProject(
- config=config,
- project_dir=p_dir,
- path_url='data/' + os.path.relpath(p_dir, directory),
- public_host=args.public_host)
-
-app.add_api('server.yaml')
-
-# add CORS support
-CORS(app.app, headers='Content-Type')
-
-parser = argparse.ArgumentParser()
-parser.add_argument("--nodebug", default=False)
-parser.add_argument("--address", default="127.0.0.1") # 0.0.0.0 for nonlocal use
-parser.add_argument("--port", default="5001")
-parser.add_argument("--public_host", default=None)
-parser.add_argument("--nocache", default=False)
-parser.add_argument("--data", type=str, default='dissect')
-parser.add_argument("--client", type=str, default='client_dist')
-
-if __name__ == '__main__':
- args = parser.parse_args()
- for d in [args.data, args.client]:
- if not os.path.isdir(d):
- print('No directory %s' % d)
- sys.exit(1)
- args.data = os.path.abspath(args.data)
- args.client = os.path.abspath(args.client)
- if args.public_host is None:
- args.public_host = '%s:%d' % (socket.getfqdn(), int(args.port))
- app.run(port=int(args.port), debug=not args.nodebug, host=args.address,
- use_reloader=False)
-else:
- args, _ = parser.parse_known_args()
- if args.public_host is None:
- args.public_host = '%s:%d' % (socket.getfqdn(), int(args.port))
- load_projects(args.data)
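-
- # A typical invocation (a sketch based on the argparse defaults above; adjust
- # the paths to your checkout):
- #
- #     python -m netdissect.server --data dissect --client client_dist --port 5001
- #
- # after which the UI is reachable at http://127.0.0.1:5001/client/index.html
- # (the root path redirects there).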
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/training/projectors/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/training/projectors/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/EsoCode/text-generation-webui/extensions/api/util.py b/spaces/EsoCode/text-generation-webui/extensions/api/util.py
deleted file mode 100644
index d575c603f39f3c823931db2aeb1b6f25d3ed3063..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/extensions/api/util.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import time
-import traceback
-from threading import Thread
-from typing import Callable, Optional
-
-from modules import shared
-from modules.chat import load_character_memoized
-from modules.presets import load_preset_memoized
-
-
-def build_parameters(body, chat=False):
-
- generate_params = {
- 'max_new_tokens': int(body.get('max_new_tokens', body.get('max_length', 200))),
- 'do_sample': bool(body.get('do_sample', True)),
- 'temperature': float(body.get('temperature', 0.5)),
- 'top_p': float(body.get('top_p', 1)),
- 'typical_p': float(body.get('typical_p', body.get('typical', 1))),
- 'epsilon_cutoff': float(body.get('epsilon_cutoff', 0)),
- 'eta_cutoff': float(body.get('eta_cutoff', 0)),
- 'tfs': float(body.get('tfs', 1)),
- 'top_a': float(body.get('top_a', 0)),
- 'repetition_penalty': float(body.get('repetition_penalty', body.get('rep_pen', 1.1))),
- 'repetition_penalty_range': int(body.get('repetition_penalty_range', 0)),
- 'encoder_repetition_penalty': float(body.get('encoder_repetition_penalty', 1.0)),
- 'top_k': int(body.get('top_k', 0)),
- 'min_length': int(body.get('min_length', 0)),
- 'no_repeat_ngram_size': int(body.get('no_repeat_ngram_size', 0)),
- 'num_beams': int(body.get('num_beams', 1)),
- 'penalty_alpha': float(body.get('penalty_alpha', 0)),
- 'length_penalty': float(body.get('length_penalty', 1)),
- 'early_stopping': bool(body.get('early_stopping', False)),
- 'mirostat_mode': int(body.get('mirostat_mode', 0)),
- 'mirostat_tau': float(body.get('mirostat_tau', 5)),
- 'mirostat_eta': float(body.get('mirostat_eta', 0.1)),
- 'seed': int(body.get('seed', -1)),
- 'add_bos_token': bool(body.get('add_bos_token', True)),
- 'truncation_length': int(body.get('truncation_length', body.get('max_context_length', 2048))),
- 'ban_eos_token': bool(body.get('ban_eos_token', False)),
- 'skip_special_tokens': bool(body.get('skip_special_tokens', True)),
- 'custom_stopping_strings': '', # leave this blank
- 'stopping_strings': body.get('stopping_strings', []),
- }
-
- preset_name = body.get('preset', 'None')
- if preset_name not in ['None', None, '']:
- preset = load_preset_memoized(preset_name)
- generate_params.update(preset)
-
- if chat:
- character = body.get('character')
- instruction_template = body.get('instruction_template')
- name1, name2, _, greeting, context, _ = load_character_memoized(character, str(body.get('your_name', shared.settings['name1'])), shared.settings['name2'], instruct=False)
- name1_instruct, name2_instruct, _, _, context_instruct, turn_template = load_character_memoized(instruction_template, '', '', instruct=True)
- generate_params.update({
- 'stop_at_newline': bool(body.get('stop_at_newline', shared.settings['stop_at_newline'])),
- 'chat_generation_attempts': int(body.get('chat_generation_attempts', shared.settings['chat_generation_attempts'])),
- 'mode': str(body.get('mode', 'chat')),
- 'name1': name1,
- 'name2': name2,
- 'context': context,
- 'greeting': greeting,
- 'name1_instruct': name1_instruct,
- 'name2_instruct': name2_instruct,
- 'context_instruct': context_instruct,
- 'turn_template': turn_template,
- 'chat-instruct_command': str(body.get('chat-instruct_command', shared.settings['chat-instruct_command'])),
- })
-
- return generate_params
-
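- # A minimal sketch of how an API endpoint might call this (the body dict is
- # made up; anything not supplied falls back to the defaults above):
- #
- #     body = {'max_new_tokens': 120, 'temperature': 0.7, 'top_p': 0.9}
- #     params = build_parameters(body)   # chat=False, so no character lookup
- #     assert params['max_new_tokens'] == 120 and params['seed'] == -1
-
-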
-
-def try_start_cloudflared(port: int, max_attempts: int = 3, on_start: Optional[Callable[[str], None]] = None):
- Thread(target=_start_cloudflared, args=[
- port, max_attempts, on_start], daemon=True).start()
-
-
-def _start_cloudflared(port: int, max_attempts: int = 3, on_start: Optional[Callable[[str], None]] = None):
- try:
- from flask_cloudflared import _run_cloudflared
- except ImportError:
- print('You should install flask_cloudflared manually')
- raise Exception(
- 'flask_cloudflared not installed. Make sure you installed the requirements.txt for this extension.')
-
- for _ in range(max_attempts):
- try:
- public_url = _run_cloudflared(port, port + 1)
-
- if on_start:
- on_start(public_url)
-
- return
- except Exception:
- traceback.print_exc()
- time.sleep(3)
-
- raise Exception('Could not start cloudflared.')
diff --git a/spaces/EyanAn/vits-uma-genshin-honkai/transforms.py b/spaces/EyanAn/vits-uma-genshin-honkai/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/EyanAn/vits-uma-genshin-honkai/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
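-
-
-# Minimal usage sketch (the shapes below are assumptions chosen purely for illustration):
-# with tails='linear' the transform expects num_bins - 1 unnormalized derivatives per element
-# and returns the mapped values together with the log-absolute-determinant of the Jacobian.
-if __name__ == '__main__':
-    num_bins = 10
-    x = torch.rand(4, 100) * 2 - 1          # values inside the tail bound [-1, 1]
-    w = torch.randn(4, 100, num_bins)       # unnormalized bin widths
-    h = torch.randn(4, 100, num_bins)       # unnormalized bin heights
-    d = torch.randn(4, 100, num_bins - 1)   # unnormalized knot derivatives
-    y, logabsdet = piecewise_rational_quadratic_transform(x, w, h, d, tails='linear')
-    x_rec, _ = piecewise_rational_quadratic_transform(y, w, h, d, inverse=True, tails='linear')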
diff --git a/spaces/EyeSeeThru/openjourney/README.md b/spaces/EyeSeeThru/openjourney/README.md
deleted file mode 100644
index 30e23103c107bb5a2e58ac464aa0c42c59793e2a..0000000000000000000000000000000000000000
--- a/spaces/EyeSeeThru/openjourney/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: openjourney
-emoji: 👀
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.10.1
-app_file: app.py
-pinned: false
-duplicated_from: gabortoth74/openjourney
----
-
-Make sure to use the keyword "mdjrny-v4" in your prompt. Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/FlippFuzz/whisper-webui/src/whisper/whisperContainer.py b/spaces/FlippFuzz/whisper-webui/src/whisper/whisperContainer.py
deleted file mode 100644
index 183e86b8f71024aaa36fe9a6f7371f11713ab951..0000000000000000000000000000000000000000
--- a/spaces/FlippFuzz/whisper-webui/src/whisper/whisperContainer.py
+++ /dev/null
@@ -1,201 +0,0 @@
-# External programs
-import abc
-import os
-import sys
-from typing import List
-from urllib.parse import urlparse
-import torch
-import urllib3
-from src.hooks.progressListener import ProgressListener
-
-import whisper
-from whisper import Whisper
-
-from src.config import ModelConfig
-from src.hooks.whisperProgressHook import create_progress_listener_handle
-
-from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache
-from src.utils import download_file
-from src.whisper.abstractWhisperContainer import AbstractWhisperCallback, AbstractWhisperContainer
-
-class WhisperContainer(AbstractWhisperContainer):
- def __init__(self, model_name: str, device: str = None, compute_type: str = "float16",
- download_root: str = None,
- cache: ModelCache = None, models: List[ModelConfig] = []):
- if device is None:
- device = "cuda" if torch.cuda.is_available() else "cpu"
- super().__init__(model_name, device, compute_type, download_root, cache, models)
-
- def ensure_downloaded(self):
- """
- Ensure that the model is downloaded. This is useful if you want to ensure that the model is downloaded before
- passing the container to a subprocess.
- """
- # Warning: Using private API here
- try:
- root_dir = self.download_root
- model_config = self._get_model_config()
-
- if root_dir is None:
- root_dir = os.path.join(os.path.expanduser("~"), ".cache", "whisper")
-
- if self.model_name in whisper._MODELS:
- whisper._download(whisper._MODELS[self.model_name], root_dir, False)
- else:
- # If the model is not in the official list, see if it needs to be downloaded
- model_config.download_url(root_dir)
- return True
-
- except Exception as e:
- # Given that the API is private, it could change at any time. We don't want to crash the program
- print("Error pre-downloading model: " + str(e))
- return False
-
- def _get_model_config(self) -> ModelConfig:
- """
- Get the model configuration for the model.
- """
- for model in self.models:
- if model.name == self.model_name:
- return model
- return None
-
- def _create_model(self):
- print("Loading whisper model " + self.model_name)
- model_config = self._get_model_config()
-
- # Note that the model will not be downloaded in the case of an official Whisper model
- model_path = self._get_model_path(model_config, self.download_root)
-
- return whisper.load_model(model_path, device=self.device, download_root=self.download_root)
-
- def create_callback(self, language: str = None, task: str = None, initial_prompt: str = None, **decodeOptions: dict):
- """
-        Create a WhisperCallback object that can be used to transcribe audio files.
-
- Parameters
- ----------
- language: str
- The target language of the transcription. If not specified, the language will be inferred from the audio content.
- task: str
- The task - either translate or transcribe.
- initial_prompt: str
- The initial prompt to use for the transcription.
- decodeOptions: dict
- Additional options to pass to the decoder. Must be pickleable.
-
- Returns
- -------
- A WhisperCallback object.
- """
- return WhisperCallback(self, language=language, task=task, initial_prompt=initial_prompt, **decodeOptions)
-
-    def _get_model_path(self, model_config: ModelConfig, root_dir: str = None):
-        """
-        Resolve the model, downloading or converting it if necessary, and return its local path.
-
-        Parameters
-        ----------
-        model_config: ModelConfig
-            The model configuration.
-        root_dir: str
-            The directory in which downloaded or converted models are stored.
-        """
-        from src.conversion.hf_converter import convert_hf_whisper
- # See if path is already set
- if model_config.path is not None:
- return model_config.path
-
- if root_dir is None:
- root_dir = os.path.join(os.path.expanduser("~"), ".cache", "whisper")
-
- model_type = model_config.type.lower() if model_config.type is not None else "whisper"
-
- if model_type in ["huggingface", "hf"]:
- model_config.path = model_config.url
- destination_target = os.path.join(root_dir, model_config.name + ".pt")
-
- # Convert from HuggingFace format to Whisper format
- if os.path.exists(destination_target):
- print(f"File {destination_target} already exists, skipping conversion")
- else:
- print("Saving HuggingFace model in Whisper format to " + destination_target)
- convert_hf_whisper(model_config.url, destination_target)
-
- model_config.path = destination_target
-
- elif model_type in ["whisper", "w"]:
- model_config.path = model_config.url
-
- # See if URL is just a file
- if model_config.url in whisper._MODELS:
- # No need to download anything - Whisper will handle it
- model_config.path = model_config.url
- elif model_config.url.startswith("file://"):
- # Get file path
- model_config.path = urlparse(model_config.url).path
- # See if it is an URL
- elif model_config.url.startswith("http://") or model_config.url.startswith("https://"):
- # Extension (or file name)
- extension = os.path.splitext(model_config.url)[-1]
- download_target = os.path.join(root_dir, model_config.name + extension)
-
- if os.path.exists(download_target) and not os.path.isfile(download_target):
- raise RuntimeError(f"{download_target} exists and is not a regular file")
-
- if not os.path.isfile(download_target):
- download_file(model_config.url, download_target)
- else:
- print(f"File {download_target} already exists, skipping download")
-
- model_config.path = download_target
- # Must be a local file
- else:
- model_config.path = model_config.url
-
- else:
- raise ValueError(f"Unknown model type {model_type}")
-
- return model_config.path
-
-class WhisperCallback(AbstractWhisperCallback):
- def __init__(self, model_container: WhisperContainer, language: str = None, task: str = None, initial_prompt: str = None, **decodeOptions: dict):
- self.model_container = model_container
- self.language = language
- self.task = task
- self.initial_prompt = initial_prompt
- self.decodeOptions = decodeOptions
-
- def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None):
- """
-        Perform the transcription of the given audio file or data.
-
- Parameters
- ----------
- audio: Union[str, np.ndarray, torch.Tensor]
- The audio file to transcribe, or the audio data as a numpy array or torch tensor.
-        segment_index: int
-            The index of the current segment within the audio.
-        prompt: str
-            The prompt to condition this segment on (typically text from preceding segments).
-        detected_language: str
-            The language detected for the audio, used when no language was explicitly specified.
- progress_listener: ProgressListener
- A callback to receive progress updates.
- """
- model = self.model_container.get_model()
-
- if progress_listener is not None:
- with create_progress_listener_handle(progress_listener):
- return self._transcribe(model, audio, segment_index, prompt, detected_language)
- else:
- return self._transcribe(model, audio, segment_index, prompt, detected_language)
-
- def _transcribe(self, model: Whisper, audio, segment_index: int, prompt: str, detected_language: str):
- decodeOptions = self.decodeOptions.copy()
-
- # Add fp16
- if self.model_container.compute_type in ["fp16", "float16"]:
- decodeOptions["fp16"] = True
-
-        return model.transcribe(
-            audio,
-            language=self.language if self.language else detected_language,
-            task=self.task,
-            initial_prompt=self._concat_prompt(self.initial_prompt, prompt) if segment_index == 0 else prompt,
-            **decodeOptions
-        )
\ No newline at end of file
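-
-
-# Minimal usage sketch (model name, device and decode options are assumptions for illustration):
-# the container loads the Whisper model lazily, and the callback bundles the decode options so
-# the whole unit can be handed to a worker process.
-if __name__ == '__main__':
-    container = WhisperContainer(model_name='base', device='cpu', models=[])
-    callback = container.create_callback(language='en', task='transcribe', beam_size=5)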
diff --git a/spaces/Fox1997/vits-uma-genshin-honkai/transforms.py b/spaces/Fox1997/vits-uma-genshin-honkai/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/Fox1997/vits-uma-genshin-honkai/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
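-
-
-# Small illustration of the searchsorted() helper above (values are arbitrary): for every input
-# it returns the index of the bin whose [left, right) interval contains that value.
-if __name__ == '__main__':
-    bins = torch.tensor([0.0, 0.25, 0.5, 1.0])
-    xs = torch.tensor([0.1, 0.3, 0.9])
-    print(searchsorted(bins, xs))           # tensor([0, 1, 2])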
diff --git a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/tokenizer/simple_tokenizer.py b/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/tokenizer/simple_tokenizer.py
deleted file mode 100644
index c84cc8fb3adff99225d3e3a75b2a3d81564adcef..0000000000000000000000000000000000000000
--- a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/tokenizer/simple_tokenizer.py
+++ /dev/null
@@ -1,163 +0,0 @@
-"""
-Copied from: https://github.com/openai/CLIP/blob/573315e83f07b53a61ff5098757e8fc885f1703e/clip/simple_tokenizer.py
-"""
-
-import gzip
-import html
-import os
-from functools import lru_cache
-from typing import List, Tuple
-
-import ftfy
-import regex as re
-
-
-@lru_cache()
-def default_bpe():
- return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
- Returns list of utf-8 byte and a corresponding list of unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
-    This is a significant percentage of your normal, say, 32K bpe vocab.
-    To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
-    This also avoids mapping to whitespace/control characters that the bpe code barfs on.
- """
- bs = (
- list(range(ord("!"), ord("~") + 1))
- + list(range(ord("¡"), ord("¬") + 1))
- + list(range(ord("®"), ord("ÿ") + 1))
- )
- cs = bs[:]
- n = 0
- for b in range(2 ** 8):
- if b not in bs:
- bs.append(b)
- cs.append(2 ** 8 + n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
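-# For example, bytes_to_unicode()[ord(' ')] == 'Ġ': control and whitespace bytes are shifted
-# into unused printable code points, while printable ASCII maps to itself.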
-
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-def basic_clean(text):
- text = ftfy.fix_text(text)
- text = html.unescape(html.unescape(text))
- return text.strip()
-
-
-def whitespace_clean(text):
- text = re.sub(r"\s+", " ", text)
- text = text.strip()
- return text
-
-
-class SimpleTokenizer(object):
- def __init__(self, bpe_path: str = default_bpe()):
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- merges = gzip.open(bpe_path).read().decode("utf-8").split("\n")
- merges = merges[1 : 49152 - 256 - 2 + 1]
- merges = [tuple(merge.split()) for merge in merges]
- vocab = list(bytes_to_unicode().values())
- vocab = vocab + [v + "" for v in vocab]
- for merge in merges:
- vocab.append("".join(merge))
- vocab.extend(["<|startoftext|>", "<|endoftext|>"])
- self.encoder = dict(zip(vocab, range(len(vocab))))
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.bpe_ranks = dict(zip(merges, range(len(merges))))
- self.cache = {"<|startoftext|>": "<|startoftext|>", "<|endoftext|>": "<|endoftext|>"}
- self.pat = re.compile(
- r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
- re.IGNORECASE,
- )
-
- @property
- def start_token(self):
- return self.encoder["<|startoftext|>"]
-
- @property
- def end_token(self):
- return self.encoder["<|endoftext|>"]
-
- def padded_tokens_and_len(self, tokens: List[int], text_ctx: int) -> Tuple[List[int], int]:
- tokens = [self.start_token] + tokens[: text_ctx - 2] + [self.end_token]
- text_len = len(tokens)
- padding = text_ctx - len(tokens)
- padded_tokens = tokens + [0] * padding
- return padded_tokens, text_len
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
-        word = tuple(token[:-1]) + (token[-1] + "</w>",)
- pairs = get_pairs(word)
-
- if not pairs:
- return token + ""
-
- while True:
- bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except: # pylint: disable=bare-except
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
- new_word.append(first + second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = " ".join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text):
- bpe_tokens = []
- text = whitespace_clean(basic_clean(text)).lower()
- for token in re.findall(self.pat, text):
- token = "".join(self.byte_encoder[b] for b in token.encode("utf-8"))
- bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" "))
- return bpe_tokens
-
- def decode(self, tokens):
- text = "".join([self.decoder[token] for token in tokens])
- text = (
- bytearray([self.byte_decoder[c] for c in text])
- .decode("utf-8", errors="replace")
- .replace("", " ")
- )
- return text
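-
-
-# Minimal usage sketch (the sample sentence is arbitrary; the default BPE vocabulary file must
-# sit next to this module, as default_bpe() assumes): encode() cleans, lowercases and BPE-encodes
-# the text, and decode() maps the ids back, turning end-of-word markers into spaces.
-if __name__ == "__main__":
-    tokenizer = SimpleTokenizer()
-    ids = tokenizer.encode("a photo of a cat")
-    print(ids)
-    print(tokenizer.decode(ids))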
diff --git a/spaces/GXSA/bingo/postcss.config.js b/spaces/GXSA/bingo/postcss.config.js
deleted file mode 100644
index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000
--- a/spaces/GXSA/bingo/postcss.config.js
+++ /dev/null
@@ -1,6 +0,0 @@
-module.exports = {
- plugins: {
- tailwindcss: {},
- autoprefixer: {},
- },
-}
diff --git a/spaces/GeorgeOrville/bingo/src/components/header.tsx b/spaces/GeorgeOrville/bingo/src/components/header.tsx
deleted file mode 100644
index dc298b722154d1ac6d7a7e148204605562d6cc58..0000000000000000000000000000000000000000
--- a/spaces/GeorgeOrville/bingo/src/components/header.tsx
+++ /dev/null
@@ -1,12 +0,0 @@
-import * as React from 'react'
-import { UserMenu } from './user-menu'
-
-export async function Header() {
- return (
-
-
-
-
-
- )
-}
diff --git a/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.h b/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.h
deleted file mode 100644
index a46c805ab80aab491f7f9508b3a008b149866bee..0000000000000000000000000000000000000000
--- a/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.h
+++ /dev/null
@@ -1,172 +0,0 @@
-
-// jpge.h - C++ class for JPEG compression.
-// Public domain, Rich Geldreich
-// Alex Evans: Added RGBA support, linear memory allocator.
-#ifndef JPEG_ENCODER_H
-#define JPEG_ENCODER_H
-
-#include <stdint.h>   // fixed-width integer types (int64_t) used throughout the API below
-
-namespace jpge
-{
- typedef unsigned char uint8;
- typedef signed short int16;
- typedef signed int int32;
- typedef unsigned short uint16;
- typedef unsigned int uint32;
- typedef unsigned int uint;
-
- // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common.
- enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 };
-
- // JPEG compression parameters structure.
- struct params
- {
- inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { }
-
- inline bool check_valid() const
- {
- if ((m_quality < 1) || (m_quality > 100)) return false;
- if ((uint)m_subsampling > (uint)H2V2) return false;
- return true;
- }
-
- // Quality: 1-100, higher is better. Typical values are around 50-95.
- int m_quality;
-
- // m_subsampling:
- // 0 = Y (grayscale) only
- // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU)
- // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU)
- // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common)
- subsampling_t m_subsampling;
-
- // Disables CbCr discrimination - only intended for testing.
- // If true, the Y quantization table is also used for the CbCr channels.
- bool m_no_chroma_discrim_flag;
-
- bool m_two_pass_flag;
- };
-
- // Writes JPEG image to a file.
- // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels.
- bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
-
- // Writes JPEG image to memory buffer.
- // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes.
- // If return value is true, buf_size will be set to the size of the compressed data.
- bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
-
- // Output stream abstract class - used by the jpeg_encoder class to write to the output stream.
- // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts.
- class output_stream
- {
- public:
- virtual ~output_stream() { };
- virtual bool put_buf(const void* Pbuf, int64_t len) = 0;
-    template <class T> inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); }
- };
-
- // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions.
- class jpeg_encoder
- {
- public:
- jpeg_encoder();
- ~jpeg_encoder();
-
- // Initializes the compressor.
- // pStream: The stream object to use for writing compressed data.
- // params - Compression parameters structure, defined above.
- // width, height - Image dimensions.
- // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data.
- // Returns false on out of memory or if a stream write fails.
- bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params());
-
- const params &get_params() const { return m_params; }
-
- // Deinitializes the compressor, freeing any allocated memory. May be called at any time.
- void deinit();
-
- uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; }
- inline uint get_cur_pass() { return m_pass_num; }
-
- // Call this method with each source scanline.
- // width * src_channels bytes per scanline is expected (RGB or Y format).
- // You must call with NULL after all scanlines are processed to finish compression.
- // Returns false on out of memory or if a stream write fails.
- bool process_scanline(const void* pScanline);
-
- private:
- jpeg_encoder(const jpeg_encoder &);
- jpeg_encoder &operator =(const jpeg_encoder &);
-
- typedef int32 sample_array_t;
-
- output_stream *m_pStream;
- params m_params;
- uint8 m_num_components;
- uint8 m_comp_h_samp[3], m_comp_v_samp[3];
- int m_image_x, m_image_y, m_image_bpp, m_image_bpl;
- int m_image_x_mcu, m_image_y_mcu;
- int m_image_bpl_xlt, m_image_bpl_mcu;
- int m_mcus_per_row;
- int m_mcu_x, m_mcu_y;
- uint8 *m_mcu_lines[16];
- uint8 m_mcu_y_ofs;
- sample_array_t m_sample_array[64];
- int16 m_coefficient_array[64];
- int32 m_quantization_tables[2][64];
- uint m_huff_codes[4][256];
- uint8 m_huff_code_sizes[4][256];
- uint8 m_huff_bits[4][17];
- uint8 m_huff_val[4][256];
- uint32 m_huff_count[4][256];
- int m_last_dc_val[3];
- enum { JPGE_OUT_BUF_SIZE = 2048 };
- uint8 m_out_buf[JPGE_OUT_BUF_SIZE];
- uint8 *m_pOut_buf;
- uint m_out_buf_left;
- uint32 m_bit_buffer;
- uint m_bits_in;
- uint8 m_pass_num;
- bool m_all_stream_writes_succeeded;
-
- void optimize_huffman_table(int table_num, int table_len);
- void emit_byte(uint8 i);
- void emit_word(uint i);
- void emit_marker(int marker);
- void emit_jfif_app0();
- void emit_dqt();
- void emit_sof();
- void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag);
- void emit_dhts();
- void emit_sos();
- void emit_markers();
- void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val);
- void compute_quant_table(int32 *dst, int16 *src);
- void adjust_quant_table(int32 *dst, int32 *src);
- void first_pass_init();
- bool second_pass_init();
- bool jpg_open(int p_x_res, int p_y_res, int src_channels);
- void load_block_8_8_grey(int x);
- void load_block_8_8(int x, int y, int c);
- void load_block_16_8(int x, int c);
- void load_block_16_8_8(int x, int c);
- void load_quantized_coefficients(int component_num);
- void flush_output_buffer();
- void put_bits(uint bits, uint len);
- void code_coefficients_pass_one(int component_num);
- void code_coefficients_pass_two(int component_num);
- void code_block(int component_num);
- void process_mcu_row();
- bool terminate_pass_one();
- bool terminate_pass_two();
- bool process_end_of_image();
- void load_mcu(const void* src);
- void clear();
- void init();
- };
-
-} // namespace jpge
-
-#endif // JPEG_ENCODER
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/emanet_r50-d8.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/emanet_r50-d8.py
deleted file mode 100644
index 26adcd430926de0862204a71d345f2543167f27b..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/emanet_r50-d8.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='EMAHead',
- in_channels=2048,
- in_index=3,
- channels=256,
- ema_channels=512,
- num_bases=64,
- num_stages=3,
- momentum=0.1,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/slurm_test.sh b/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/slurm_test.sh
deleted file mode 100644
index 4e6f7bf4e33267f269cf0f455924cb70166ccd4b..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/slurm_test.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/usr/bin/env bash
-
-set -x
-
-PARTITION=$1
-JOB_NAME=$2
-CONFIG=$3
-CHECKPOINT=$4
-GPUS=${GPUS:-4}
-GPUS_PER_NODE=${GPUS_PER_NODE:-4}
-CPUS_PER_TASK=${CPUS_PER_TASK:-5}
-PY_ARGS=${@:5}
-SRUN_ARGS=${SRUN_ARGS:-""}
-
-PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
-srun -p ${PARTITION} \
- --job-name=${JOB_NAME} \
- --gres=gpu:${GPUS_PER_NODE} \
- --ntasks=${GPUS} \
- --ntasks-per-node=${GPUS_PER_NODE} \
- --cpus-per-task=${CPUS_PER_TASK} \
- --kill-on-bad-exit=1 \
- ${SRUN_ARGS} \
- python -u tools/test.py ${CONFIG} ${CHECKPOINT} --launcher="slurm" ${PY_ARGS}
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/data/audio_utils.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/data/audio_utils.py
deleted file mode 100644
index 565b63a4ef78dcd802dda932b42ebe518ffe7397..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/data/audio_utils.py
+++ /dev/null
@@ -1,177 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""Various utilities for audio convertion (pcm format, sample rate and channels),
-and volume normalization."""
-import sys
-import typing as tp
-
-import julius
-import torch
-import torchaudio
-
-
-def convert_audio_channels(wav: torch.Tensor, channels: int = 2) -> torch.Tensor:
- """Convert audio to the given number of channels.
-
- Args:
- wav (torch.Tensor): Audio wave of shape [B, C, T].
- channels (int): Expected number of channels as output.
- Returns:
- torch.Tensor: Downmixed or unchanged audio wave [B, C, T].
- """
- *shape, src_channels, length = wav.shape
- if src_channels == channels:
- pass
- elif channels == 1:
- # Case 1:
- # The caller asked 1-channel audio, and the stream has multiple
- # channels, downmix all channels.
- wav = wav.mean(dim=-2, keepdim=True)
- elif src_channels == 1:
- # Case 2:
- # The caller asked for multiple channels, but the input file has
- # a single channel, replicate the audio over all channels.
- wav = wav.expand(*shape, channels, length)
- elif src_channels >= channels:
- # Case 3:
- # The caller asked for multiple channels, and the input file has
- # more channels than requested. In that case return the first channels.
- wav = wav[..., :channels, :]
- else:
- # Case 4: What is a reasonable choice here?
-        raise ValueError('The audio file has fewer channels than requested but is not mono.')
- return wav
-
-
-def convert_audio(wav: torch.Tensor, from_rate: float,
- to_rate: float, to_channels: int) -> torch.Tensor:
- """Convert audio to new sample rate and number of audio channels."""
- wav = julius.resample_frac(wav, int(from_rate), int(to_rate))
- wav = convert_audio_channels(wav, to_channels)
- return wav
-
-
-def normalize_loudness(wav: torch.Tensor, sample_rate: int, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False, energy_floor: float = 2e-3):
- """Normalize an input signal to a user loudness in dB LKFS.
- Audio loudness is defined according to the ITU-R BS.1770-4 recommendation.
-
- Args:
- wav (torch.Tensor): Input multichannel audio data.
- sample_rate (int): Sample rate.
- loudness_headroom_db (float): Target loudness of the output in dB LUFS.
- loudness_compressor (bool): Uses tanh for soft clipping.
- energy_floor (float): anything below that RMS level will not be rescaled.
- Returns:
- torch.Tensor: Loudness normalized output data.
- """
- energy = wav.pow(2).mean().sqrt().item()
- if energy < energy_floor:
- return wav
- transform = torchaudio.transforms.Loudness(sample_rate)
- input_loudness_db = transform(wav).item()
- # calculate the gain needed to scale to the desired loudness level
- delta_loudness = -loudness_headroom_db - input_loudness_db
- gain = 10.0 ** (delta_loudness / 20.0)
- output = gain * wav
- if loudness_compressor:
- output = torch.tanh(output)
- assert output.isfinite().all(), (input_loudness_db, wav.pow(2).mean().sqrt())
- return output
-
-
-def _clip_wav(wav: torch.Tensor, log_clipping: bool = False, stem_name: tp.Optional[str] = None) -> None:
- """Utility function to clip the audio with logging if specified."""
- max_scale = wav.abs().max()
- if log_clipping and max_scale > 1:
- clamp_prob = (wav.abs() > 1).float().mean().item()
- print(f"CLIPPING {stem_name or ''} happening with proba (a bit of clipping is okay):",
- clamp_prob, "maximum scale: ", max_scale.item(), file=sys.stderr)
- #wav.clamp_(-1, 1)
- wav = wav.clone().clamp_(-1, 1)
-
-
-def normalize_audio(wav: torch.Tensor, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False, log_clipping: bool = False,
- sample_rate: tp.Optional[int] = None,
- stem_name: tp.Optional[str] = None) -> torch.Tensor:
- """Normalize the audio according to the prescribed strategy (see after).
-
- Args:
- wav (torch.Tensor): Audio data.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
- loudness_compressor (bool): If True, uses tanh based soft clipping.
- log_clipping (bool): If True, basic logging on stderr when clipping still
- occurs despite strategy (only for 'rms').
- sample_rate (int): Sample rate for the audio data (required for loudness).
- stem_name (str, optional): Stem name for clipping logging.
- Returns:
- torch.Tensor: Normalized audio.
- """
- scale_peak = 10 ** (-peak_clip_headroom_db / 20)
- scale_rms = 10 ** (-rms_headroom_db / 20)
- if strategy == 'peak':
- rescaling = (scale_peak / wav.abs().max())
- if normalize or rescaling < 1:
- wav = wav * rescaling
- elif strategy == 'clip':
- wav = wav.clamp(-scale_peak, scale_peak)
- elif strategy == 'rms':
- mono = wav.mean(dim=0)
- rescaling = scale_rms / mono.pow(2).mean().sqrt()
- if normalize or rescaling < 1:
- wav = wav * rescaling
- _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
- elif strategy == 'loudness':
- assert sample_rate is not None, "Loudness normalization requires sample rate."
- wav = normalize_loudness(wav, sample_rate, loudness_headroom_db, loudness_compressor)
- _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
- else:
- assert wav.abs().max() < 1
- assert strategy == '' or strategy == 'none', f"Unexpected strategy: '{strategy}'"
- return wav
-
-
-def f32_pcm(wav: torch.Tensor) -> torch.Tensor:
- """Convert audio to float 32 bits PCM format.
- """
- if wav.dtype.is_floating_point:
- return wav
- elif wav.dtype == torch.int16:
- return wav.float() / 2**15
- elif wav.dtype == torch.int32:
- return wav.float() / 2**31
- raise ValueError(f"Unsupported wav dtype: {wav.dtype}")
-
-
-def i16_pcm(wav: torch.Tensor) -> torch.Tensor:
- """Convert audio to int 16 bits PCM format.
-
-    .. warning:: There exist many formulas for doing this conversion. None is perfect
-    due to the asymmetry of the int16 range. One either gets possible clipping, a DC offset,
-    or inconsistencies with f32_pcm. If the given wav doesn't have enough headroom,
-    it is possible that `i16_pcm(f32_pcm(wav)) != wav`.
- """
- if wav.dtype.is_floating_point:
- assert wav.abs().max() <= 1
- candidate = (wav * 2 ** 15).round()
- if candidate.max() >= 2 ** 15: # clipping would occur
- candidate = (wav * (2 ** 15 - 1)).round()
- return candidate.short()
- else:
- assert wav.dtype == torch.int16
- return wav
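-
-
-# Minimal usage sketch (sample rate, channel count and scale are assumptions for illustration):
-# 'peak' rescales so the maximum sits peak_clip_headroom_db below full scale, while 'loudness'
-# targets a level in LUFS and therefore needs the sample rate.
-if __name__ == '__main__':
-    wav = torch.randn(2, 16000) * 0.1       # one second of stereo noise at 16 kHz
-    peak = normalize_audio(wav, strategy='peak')
-    loud = normalize_audio(wav, strategy='loudness', sample_rate=16000)
-    pcm16 = i16_pcm(peak)                   # back to int16 PCM, e.g. before writing to disk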
diff --git a/spaces/GroNLP/divemt_explorer/README.md b/spaces/GroNLP/divemt_explorer/README.md
deleted file mode 100644
index b52f217ceae66e1cfd6829cbe0c7de0552c8eb36..0000000000000000000000000000000000000000
--- a/spaces/GroNLP/divemt_explorer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: DivEMT Explorer
-emoji: 🔍
-colorFrom: gray
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: app.py
-pinned: false
-license: gpl-3.0
----
-
-DivEMT dataset explorer using the [DivEMT dataset](https://arxiv.org/abs/2205.12215) and attributions produced with the [Inseq library](https://arxiv.org/abs/2302.13942)
diff --git a/spaces/GuyYariv/AudioToken/modules/beats/Tokenizers.py b/spaces/GuyYariv/AudioToken/modules/beats/Tokenizers.py
deleted file mode 100644
index fcb7316b5200d2222952327e1815e26822eafca8..0000000000000000000000000000000000000000
--- a/spaces/GuyYariv/AudioToken/modules/beats/Tokenizers.py
+++ /dev/null
@@ -1,172 +0,0 @@
-# --------------------------------------------------------
-# beats: Audio Pre-Training with Acoustic Tokenizers (https://arxiv.org/abs/2212.09058)
-# Github source: https://github.com/microsoft/unilm/tree/master/beats
-# Copyright (c) 2022 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-# Based on fairseq code bases
-# https://github.com/pytorch/fairseq
-# --------------------------------------------------------
-
-
-import torch
-import torch.nn as nn
-from torch.nn import LayerNorm
-import torchaudio.compliance.kaldi as ta_kaldi
-
-from modules.beats.backbone import (
- TransformerEncoder,
-)
-from modules.beats.quantizer import (
- NormEMAVectorQuantizer,
-)
-
-import logging
-from typing import Optional
-
-logger = logging.getLogger(__name__)
-
-
-class TokenizersConfig:
- def __init__(self, cfg=None):
-        self.input_patch_size: int = -1  # patch size of patch embedding
- self.embed_dim: int = 512 # patch embedding dimension
- self.conv_bias: bool = False # include bias in conv encoder
-
- self.encoder_layers: int = 12 # num encoder layers in the transformer
- self.encoder_embed_dim: int = 768 # encoder embedding dimension
- self.encoder_ffn_embed_dim: int = 3072 # encoder embedding dimension for FFN
- self.encoder_attention_heads: int = 12 # num encoder attention heads
- self.activation_fn: str = "gelu" # activation function to use
-
- self.layer_norm_first: bool = False # apply layernorm first in the transformer
- self.deep_norm: bool = False # apply deep_norm first in the transformer
-
- # dropouts
- self.dropout: float = 0.1 # dropout probability for the transformer
- self.attention_dropout: float = 0.1 # dropout probability for attention weights
- self.activation_dropout: float = 0.0 # dropout probability after activation in FFN
-        self.encoder_layerdrop: float = 0.0  # probability of dropping a transformer layer
- self.dropout_input: float = 0.0 # dropout to apply to the input (after feat extr)
-
- # positional embeddings
- self.conv_pos: int = 128 # number of filters for convolutional positional embeddings
- self.conv_pos_groups: int = 16 # number of groups for convolutional positional embedding
-
- # relative position embedding
- self.relative_position_embedding: bool = False # apply relative position embedding
- self.num_buckets: int = 320 # number of buckets for relative position embedding
- self.max_distance: int = 1280 # maximum distance for relative position embedding
- self.gru_rel_pos: bool = False # apply gated relative position embedding
-
- # quantizer
- self.quant_n: int = 1024 # codebook number in quantizer
- self.quant_dim: int = 256 # codebook dimension in quantizer
-
- if cfg is not None:
- self.update(cfg)
-
- def update(self, cfg: dict):
- self.__dict__.update(cfg)
-
-
-class Tokenizers(nn.Module):
- def __init__(
- self,
- cfg: TokenizersConfig,
- ) -> None:
- super().__init__()
- logger.info(f"Tokenizers Config: {cfg.__dict__}")
-
- self.cfg = cfg
-
- self.embed = cfg.embed_dim
- self.post_extract_proj = (
- nn.Linear(self.embed, cfg.encoder_embed_dim)
- if self.embed != cfg.encoder_embed_dim
- else None
- )
-
- self.input_patch_size = cfg.input_patch_size
- self.patch_embedding = nn.Conv2d(1, self.embed, kernel_size=self.input_patch_size, stride=self.input_patch_size,
- bias=cfg.conv_bias)
-
- self.dropout_input = nn.Dropout(cfg.dropout_input)
-
- assert not cfg.deep_norm or not cfg.layer_norm_first
- self.encoder = TransformerEncoder(cfg)
- self.layer_norm = LayerNorm(self.embed)
-
- self.quantize = NormEMAVectorQuantizer(
- n_embed=cfg.quant_n, embedding_dim=cfg.quant_dim, beta=1.0, kmeans_init=True, decay=0.99,
- )
- self.quant_n = cfg.quant_n
- self.quantize_layer = nn.Sequential(
- nn.Linear(cfg.encoder_embed_dim, cfg.encoder_embed_dim),
- nn.Tanh(),
- nn.Linear(cfg.encoder_embed_dim, cfg.quant_dim) # for quantize
- )
-
- def forward_padding_mask(
- self,
- features: torch.Tensor,
- padding_mask: torch.Tensor,
- ) -> torch.Tensor:
- extra = padding_mask.size(1) % features.size(1)
- if extra > 0:
- padding_mask = padding_mask[:, :-extra]
- padding_mask = padding_mask.view(
- padding_mask.size(0), features.size(1), -1
- )
- padding_mask = padding_mask.all(-1)
- return padding_mask
-
- def preprocess(
- self,
- source: torch.Tensor,
- fbank_mean: float = 15.41663,
- fbank_std: float = 6.55582,
- ) -> torch.Tensor:
- fbanks = []
- for waveform in source:
- waveform = waveform.unsqueeze(0) * 2 ** 15
- fbank = ta_kaldi.fbank(waveform, num_mel_bins=128, sample_frequency=16000, frame_length=25, frame_shift=10)
- fbanks.append(fbank)
- fbank = torch.stack(fbanks, dim=0)
- fbank = (fbank - fbank_mean) / (2 * fbank_std)
- return fbank
-
- def extract_labels(
- self,
- source: torch.Tensor,
- padding_mask: Optional[torch.Tensor] = None,
- fbank_mean: float = 15.41663,
- fbank_std: float = 6.55582,
- ):
- fbank = self.preprocess(source, fbank_mean=fbank_mean, fbank_std=fbank_std)
-
- if padding_mask is not None:
- padding_mask = self.forward_padding_mask(fbank, padding_mask)
-
- fbank = fbank.unsqueeze(1)
- features = self.patch_embedding(fbank)
- features = features.reshape(features.shape[0], features.shape[1], -1)
- features = features.transpose(1, 2)
- features = self.layer_norm(features)
-
- if padding_mask is not None:
- padding_mask = self.forward_padding_mask(features, padding_mask)
-
- if self.post_extract_proj is not None:
- features = self.post_extract_proj(features)
-
- x = self.dropout_input(features)
-
- x, layer_results = self.encoder(
- x,
- padding_mask=padding_mask,
- )
-
- quantize_input = self.quantize_layer(x)
- quantize_feature, embed_loss, embed_ind = self.quantize(quantize_input)
-
- return embed_ind
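-
-
-# Minimal usage sketch (the patch size and the dummy 2-second waveform are assumptions for
-# illustration; in practice the config and weights come from a pretrained BEATs tokenizer
-# checkpoint): extract_labels() turns raw audio into discrete acoustic token ids.
-if __name__ == '__main__':
-    cfg = TokenizersConfig({'input_patch_size': 16})
-    tokenizer = Tokenizers(cfg)
-    waveform = torch.randn(1, 32000)        # one 2-second clip at 16 kHz
-    labels = tokenizer.extract_labels(waveform)
-    print(labels.shape)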
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/clue_sim/loss.py b/spaces/HaloMaster/chinesesummary/fengshen/examples/clue_sim/loss.py
deleted file mode 100644
index 537e2347f65aa952b0eb852c23a39901b0fef52e..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/clue_sim/loss.py
+++ /dev/null
@@ -1,77 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The IDEA Authors. All rights reserved.
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import torch
-from torch.nn import functional as F
-
-
-class FocalLoss(torch.nn.Module):
- """Multi-class Focal loss implementation"""
-
- def __init__(self, gamma=2, weight=None, ignore_index=-100):
- super(FocalLoss, self).__init__()
- self.gamma = gamma
- self.weight = weight
- self.ignore_index = ignore_index
-
- def forward(self, input, target):
- """
- input: [N, C]
- target: [N, ]
- """
- logpt = F.log_softmax(input, dim=1)
- pt = torch.exp(logpt)
- logpt = (1-pt)**self.gamma * logpt
- loss = F.nll_loss(logpt, target, self.weight, ignore_index=self.ignore_index)
- return loss
-
-# Cross entropy with label smoothing, to prevent overfitting
-
-
-class LabelSmoothingCorrectionCrossEntropy(torch.nn.Module):
- def __init__(self, eps=0.1, reduction='mean', ignore_index=-100):
- super(LabelSmoothingCorrectionCrossEntropy, self).__init__()
- self.eps = eps
- self.reduction = reduction
- self.ignore_index = ignore_index
-
- def forward(self, output, target):
- c = output.size()[-1]
- log_preds = F.log_softmax(output, dim=-1)
- if self.reduction == 'sum':
- loss = -log_preds.sum()
- else:
- loss = -log_preds.sum(dim=-1)
- if self.reduction == 'mean':
- loss = loss.mean()
-
- # task specific
- labels_hat = torch.argmax(output, dim=1)
- lt_sum = labels_hat + target
- abs_lt_sub = abs(labels_hat - target)
- correction_loss = 0
- for i in range(c):
- if lt_sum[i] == 0:
- pass
- elif lt_sum[i] == 1:
- if abs_lt_sub[i] == 1:
- pass
- else:
- correction_loss -= self.eps*(0.5945275813408382)
- else:
- correction_loss += self.eps*(1/0.32447699714575207)
- correction_loss /= c
- # print(correction_loss)
- return loss*self.eps/c + (1-self.eps) * \
- F.nll_loss(log_preds, target, reduction=self.reduction, ignore_index=self.ignore_index) + correction_loss
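-
-
-# Minimal usage sketch (batch size and class count are arbitrary): the focal loss down-weights
-# well-classified examples through the (1 - p_t) ** gamma factor before the NLL term.
-if __name__ == '__main__':
-    logits = torch.randn(8, 3)              # [N, C] raw scores
-    target = torch.randint(0, 3, (8,))      # [N] class indices
-    loss = FocalLoss(gamma=2)(logits, target)
-    print(loss.item())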
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/zen1/ngram_utils.py b/spaces/HaloMaster/chinesesummary/fengshen/models/zen1/ngram_utils.py
deleted file mode 100644
index 917f770fab84db4c8a55b11a296afdb61f8283c9..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/models/zen1/ngram_utils.py
+++ /dev/null
@@ -1,106 +0,0 @@
-# coding: utf-8
-# Copyright 2019 Sinovation Ventures AI Institute
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""utils for ngram for ZEN model."""
-
-import os
-import logging
-
-from transformers import cached_path
-
-NGRAM_DICT_NAME = 'ngram.txt'
-
-logger = logging.getLogger(__name__)
-PRETRAINED_VOCAB_ARCHIVE_MAP = {'IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese': 'https://huggingface.co/IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese/resolve/main/ngram.txt'}
-
-
-class ZenNgramDict(object):
- """
- Dict class to store the ngram
- """
-
- def __init__(self, ngram_freq_path, tokenizer, max_ngram_in_seq=128):
- """Constructs ZenNgramDict
-
- :param ngram_freq_path: ngrams with frequency
- """
- if os.path.isdir(ngram_freq_path):
- ngram_freq_path = os.path.join(ngram_freq_path, NGRAM_DICT_NAME)
- self.ngram_freq_path = ngram_freq_path
- self.max_ngram_in_seq = max_ngram_in_seq
- self.id_to_ngram_list = ["[pad]"]
- self.ngram_to_id_dict = {"[pad]": 0}
- self.ngram_to_freq_dict = {}
-
- logger.info("loading ngram frequency file {}".format(ngram_freq_path))
- with open(ngram_freq_path, "r", encoding="utf-8") as fin:
- for i, line in enumerate(fin):
- ngram, freq = line.split(",")
- tokens = tuple(tokenizer.tokenize(ngram))
- self.ngram_to_freq_dict[ngram] = freq
- self.id_to_ngram_list.append(tokens)
- self.ngram_to_id_dict[tokens] = i + 1
-
- @classmethod
- def from_pretrained(cls, pretrained_model_name_or_path, cache_dir=None, **kwargs):
- """
- Instantiate a PreTrainedBertModel from a pre-trained model file.
- Download and cache the pre-trained model file if needed.
- """
- if pretrained_model_name_or_path in PRETRAINED_VOCAB_ARCHIVE_MAP:
- ngram_file = PRETRAINED_VOCAB_ARCHIVE_MAP[pretrained_model_name_or_path]
- if '-cased' in pretrained_model_name_or_path and kwargs.get('do_lower_case', True):
- logger.warning("The pre-trained model you are loading is a cased model but you have not set "
- "`do_lower_case` to False. We are setting `do_lower_case=False` for you but "
- "you may want to check this behavior.")
- kwargs['do_lower_case'] = False
- elif '-cased' not in pretrained_model_name_or_path and not kwargs.get('do_lower_case', True):
- logger.warning("The pre-trained model you are loading is an uncased model but you have set "
- "`do_lower_case` to False. We are setting `do_lower_case=True` for you "
- "but you may want to check this behavior.")
- kwargs['do_lower_case'] = True
- else:
- ngram_file = pretrained_model_name_or_path
- if os.path.isdir(ngram_file):
- ngram_file = os.path.join(ngram_file, NGRAM_DICT_NAME)
- # redirect to the cache, if necessary
- try:
- resolved_ngram_file = cached_path(ngram_file, cache_dir=cache_dir)
- except EnvironmentError:
- if pretrained_model_name_or_path in PRETRAINED_VOCAB_ARCHIVE_MAP:
- logger.error(
- "Couldn't reach server at '{}' to download vocabulary.".format(
- ngram_file))
- else:
- logger.error(
- "Model name '{}' was not found in model name list ({}). "
- "We assumed '{}' was a path or url but couldn't find any file "
- "associated to this path or url.".format(
- pretrained_model_name_or_path,
- ', '.join(PRETRAINED_VOCAB_ARCHIVE_MAP.keys()),
- ngram_file))
- return None
- if resolved_ngram_file == ngram_file:
- logger.info("loading vocabulary file {}".format(ngram_file))
- else:
- logger.info("loading vocabulary file {} from cache at {}".format(
- ngram_file, resolved_ngram_file))
- # Instantiate ngram.
- ngram_dict = cls(resolved_ngram_file, **kwargs)
- return ngram_dict
-
- def save(self, ngram_freq_path):
- with open(ngram_freq_path, "w", encoding="utf-8") as fout:
- for ngram, freq in self.ngram_to_freq_dict.items():
- fout.write("{},{}\n".format(ngram, freq))
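-
-
-# Minimal usage sketch (the BERT tokenizer is an assumption; any tokenizer exposing tokenize()
-# works, since ngrams are keyed by their token tuples):
-if __name__ == '__main__':
-    from transformers import BertTokenizer
-    tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
-    ngram_dict = ZenNgramDict.from_pretrained('IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese', tokenizer=tokenizer)
-    print(len(ngram_dict.id_to_ngram_list), 'ngrams loaded')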
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/hubert/simple_kmeans/learn_kmeans.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/hubert/simple_kmeans/learn_kmeans.py
deleted file mode 100644
index 113ac655b8c0a585fe43797e99674e445098edd0..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/hubert/simple_kmeans/learn_kmeans.py
+++ /dev/null
@@ -1,146 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import sys
-
-import numpy as np
-from sklearn.cluster import MiniBatchKMeans
-
-import joblib
-
-logging.basicConfig(
- format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S",
- level=os.environ.get("LOGLEVEL", "INFO").upper(),
- stream=sys.stdout,
-)
-logger = logging.getLogger("learn_kmeans")
-
-
-def get_km_model(
- n_clusters,
- init,
- max_iter,
- batch_size,
- tol,
- max_no_improvement,
- n_init,
- reassignment_ratio,
-):
- return MiniBatchKMeans(
- n_clusters=n_clusters,
- init=init,
- max_iter=max_iter,
- batch_size=batch_size,
- verbose=1,
- compute_labels=False,
- tol=tol,
- max_no_improvement=max_no_improvement,
- init_size=None,
- n_init=n_init,
- reassignment_ratio=reassignment_ratio,
- )
-
-
-def load_feature_shard(feat_dir, split, nshard, rank, percent):
- feat_path = f"{feat_dir}/{split}_{rank}_{nshard}.npy"
- leng_path = f"{feat_dir}/{split}_{rank}_{nshard}.len"
- with open(leng_path, "r") as f:
- lengs = [int(line.rstrip()) for line in f]
- offsets = [0] + np.cumsum(lengs[:-1]).tolist()
-
- if percent < 0:
- return np.load(feat_path, mmap_mode="r")
- else:
- nsample = int(np.ceil(len(lengs) * percent))
- indices = np.random.choice(len(lengs), nsample, replace=False)
- feat = np.load(feat_path, mmap_mode="r")
- sampled_feat = np.concatenate(
- [feat[offsets[i]: offsets[i] + lengs[i]] for i in indices], axis=0
- )
- logger.info(
- (
- f"sampled {nsample} utterances, {len(sampled_feat)} frames "
- f"from shard {rank}/{nshard}"
- )
- )
- return sampled_feat
-
-
-def load_feature(feat_dir, split, nshard, seed, percent):
- assert percent <= 1.0
- feat = np.concatenate(
- [
- load_feature_shard(feat_dir, split, nshard, r, percent)
- for r in range(nshard)
- ],
- axis=0,
- )
- logging.info(f"loaded feature with dimension {feat.shape}")
- return feat
-
-
-def learn_kmeans(
- feat_dir,
- split,
- nshard,
- km_path,
- n_clusters,
- seed,
- percent,
- init,
- max_iter,
- batch_size,
- tol,
- n_init,
- reassignment_ratio,
- max_no_improvement,
-):
- np.random.seed(seed)
- feat = load_feature(feat_dir, split, nshard, seed, percent)
- km_model = get_km_model(
- n_clusters,
- init,
- max_iter,
- batch_size,
- tol,
- max_no_improvement,
- n_init,
- reassignment_ratio,
- )
- km_model.fit(feat)
- joblib.dump(km_model, km_path)
-
- inertia = -km_model.score(feat) / len(feat)
-    logger.info("total inertia: %.5f", inertia)
- logger.info("finished successfully")
-
-
-if __name__ == "__main__":
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("feat_dir", type=str)
- parser.add_argument("split", type=str)
- parser.add_argument("nshard", type=int)
- parser.add_argument("km_path", type=str)
- parser.add_argument("n_clusters", type=int)
- parser.add_argument("--seed", default=0, type=int)
- parser.add_argument(
- "--percent", default=-1, type=float, help="sample a subset; -1 for all"
- )
- parser.add_argument("--init", default="k-means++")
- parser.add_argument("--max_iter", default=100, type=int)
- parser.add_argument("--batch_size", default=10000, type=int)
- parser.add_argument("--tol", default=0.0, type=float)
- parser.add_argument("--max_no_improvement", default=100, type=int)
- parser.add_argument("--n_init", default=20, type=int)
- parser.add_argument("--reassignment_ratio", default=0.0, type=float)
- args = parser.parse_args()
- logging.info(str(args))
-
- learn_kmeans(**vars(args))
diff --git a/spaces/HarshulNanda/HARM_ML_web_app/README.md b/spaces/HarshulNanda/HARM_ML_web_app/README.md
deleted file mode 100644
index 57f8490848ec49b41461c786274753b107767429..0000000000000000000000000000000000000000
--- a/spaces/HarshulNanda/HARM_ML_web_app/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: HARM ML Web App
-emoji: 🐠
-colorFrom: blue
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/models.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/models.py
deleted file mode 100644
index a77596153fa2e7e6fdd52ee0028a0c8ce02050b4..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/models.py
+++ /dev/null
@@ -1,403 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import modules
-import commons
-import attentions
-import monotonic_align
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(
- in_channels, filter_channels, kernel_size, padding=kernel_size // 2
- )
- self.norm_1 = attentions.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(
- filter_channels, filter_channels, kernel_size, padding=kernel_size // 2
- )
- self.norm_2 = attentions.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- def forward(self, x, x_mask):
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(
- self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- filter_channels_dp,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- window_size=None,
- block_length=None,
- mean_only=False,
- prenet=False,
- gin_channels=0,
- ):
-
- super().__init__()
-
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.filter_channels_dp = filter_channels_dp
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.block_length = block_length
- self.mean_only = mean_only
- self.prenet = prenet
- self.gin_channels = gin_channels
-
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
-
- if prenet:
- self.pre = modules.ConvReluNorm(
- hidden_channels,
- hidden_channels,
- hidden_channels,
- kernel_size=5,
- n_layers=3,
- p_dropout=0.5,
- )
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- window_size=window_size,
- block_length=block_length,
- )
-
- self.proj_m = nn.Conv1d(hidden_channels, out_channels, 1)
- if not mean_only:
- self.proj_s = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj_w = DurationPredictor(
- hidden_channels + gin_channels, filter_channels_dp, kernel_size, p_dropout
- )
-
- def forward(self, x, x_lengths, g=None):
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
-
- if self.prenet:
- x = self.pre(x, x_mask)
- x = self.encoder(x, x_mask)
-
- if g is not None:
- g_exp = g.expand(-1, -1, x.size(-1))
- x_dp = torch.cat([torch.detach(x), g_exp], 1)
- else:
- x_dp = torch.detach(x)
-
- x_m = self.proj_m(x) * x_mask
- if not self.mean_only:
- x_logs = self.proj_s(x) * x_mask
- else:
- x_logs = torch.zeros_like(x_m)
-
- logw = self.proj_w(x_dp, x_mask)
- return x_m, x_logs, logw, x_mask
-
-
-class FlowSpecDecoder(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_blocks,
- n_layers,
- p_dropout=0.0,
- n_split=4,
- n_sqz=2,
- sigmoid_scale=False,
- gin_channels=0,
- ):
- super().__init__()
-
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_blocks = n_blocks
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- self.n_split = n_split
- self.n_sqz = n_sqz
- self.sigmoid_scale = sigmoid_scale
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for b in range(n_blocks):
- self.flows.append(modules.ActNorm(channels=in_channels * n_sqz))
- self.flows.append(
- modules.InvConvNear(channels=in_channels * n_sqz, n_split=n_split)
- )
- self.flows.append(
- attentions.CouplingBlock(
- in_channels * n_sqz,
- hidden_channels,
- kernel_size=kernel_size,
- dilation_rate=dilation_rate,
- n_layers=n_layers,
- gin_channels=gin_channels,
- p_dropout=p_dropout,
- sigmoid_scale=sigmoid_scale,
- )
- )
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- flows = self.flows
- logdet_tot = 0
- else:
- flows = reversed(self.flows)
- logdet_tot = None
-
- if self.n_sqz > 1:
- x, x_mask = commons.squeeze(x, x_mask, self.n_sqz)
- for f in flows:
- if not reverse:
- x, logdet = f(x, x_mask, g=g, reverse=reverse)
- logdet_tot += logdet
- else:
- x, logdet = f(x, x_mask, g=g, reverse=reverse)
- if self.n_sqz > 1:
- x, x_mask = commons.unsqueeze(x, x_mask, self.n_sqz)
- return x, logdet_tot
-
- def store_inverse(self):
- for f in self.flows:
- f.store_inverse()
-
-
-class FlowGenerator(nn.Module):
- def __init__(
- self,
- n_vocab,
- hidden_channels,
- filter_channels,
- filter_channels_dp,
- out_channels,
- kernel_size=3,
- n_heads=2,
- n_layers_enc=6,
- p_dropout=0.0,
- n_blocks_dec=12,
- kernel_size_dec=5,
- dilation_rate=5,
- n_block_layers=4,
- p_dropout_dec=0.0,
- n_speakers=0,
- gin_channels=0,
- n_split=4,
- n_sqz=1,
- sigmoid_scale=False,
- window_size=None,
- block_length=None,
- mean_only=False,
- hidden_channels_enc=None,
- hidden_channels_dec=None,
- prenet=False,
- **kwargs
- ):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.filter_channels_dp = filter_channels_dp
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_heads = n_heads
- self.n_layers_enc = n_layers_enc
- self.p_dropout = p_dropout
- self.n_blocks_dec = n_blocks_dec
- self.kernel_size_dec = kernel_size_dec
- self.dilation_rate = dilation_rate
- self.n_block_layers = n_block_layers
- self.p_dropout_dec = p_dropout_dec
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
- self.n_split = n_split
- self.n_sqz = n_sqz
- self.sigmoid_scale = sigmoid_scale
- self.window_size = window_size
- self.block_length = block_length
- self.mean_only = mean_only
- self.hidden_channels_enc = hidden_channels_enc
- self.hidden_channels_dec = hidden_channels_dec
- self.prenet = prenet
-
- self.encoder = TextEncoder(
- n_vocab,
- out_channels,
- hidden_channels_enc or hidden_channels,
- filter_channels,
- filter_channels_dp,
- n_heads,
- n_layers_enc,
- kernel_size,
- p_dropout,
- window_size=window_size,
- block_length=block_length,
- mean_only=mean_only,
- prenet=prenet,
- gin_channels=gin_channels,
- )
-
- self.decoder = FlowSpecDecoder(
- out_channels,
- hidden_channels_dec or hidden_channels,
- kernel_size_dec,
- dilation_rate,
- n_blocks_dec,
- n_block_layers,
- p_dropout=p_dropout_dec,
- n_split=n_split,
- n_sqz=n_sqz,
- sigmoid_scale=sigmoid_scale,
- gin_channels=gin_channels,
- )
-
- if n_speakers > 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
- nn.init.uniform_(self.emb_g.weight, -0.1, 0.1)
-
- def forward(
- self,
- x,
- x_lengths,
- y=None,
- y_lengths=None,
- g=None,
- gen=False,
- noise_scale=1.0,
- length_scale=1.0,
- ):
- if g is not None:
- g = F.normalize(self.emb_g(g)).unsqueeze(-1) # [b, h]
- x_m, x_logs, logw, x_mask = self.encoder(x, x_lengths, g=g)
-
- if gen:
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_max_length = None
- else:
- y_max_length = y.size(2)
- y, y_lengths, y_max_length = self.preprocess(y, y_lengths, y_max_length)
- z_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, y_max_length), 1).to(
- x_mask.dtype
- )
- attn_mask = torch.unsqueeze(x_mask, -1) * torch.unsqueeze(z_mask, 2)
-
- if gen:
- attn = commons.generate_path(
- w_ceil.squeeze(1), attn_mask.squeeze(1)
- ).unsqueeze(1)
- z_m = torch.matmul(
- attn.squeeze(1).transpose(1, 2), x_m.transpose(1, 2)
- ).transpose(
- 1, 2
- ) # [b, t', t], [b, t, d] -> [b, d, t']
- z_logs = torch.matmul(
- attn.squeeze(1).transpose(1, 2), x_logs.transpose(1, 2)
- ).transpose(
- 1, 2
- ) # [b, t', t], [b, t, d] -> [b, d, t']
- logw_ = torch.log(1e-8 + torch.sum(attn, -1)) * x_mask
-
- z = (z_m + torch.exp(z_logs) * torch.randn_like(z_m) * noise_scale) * z_mask
- y, logdet = self.decoder(z, z_mask, g=g, reverse=True)
- return (
- (y, z_m, z_logs, logdet, z_mask),
- (x_m, x_logs, x_mask),
- (attn, logw, logw_),
- )
- else:
- z, logdet = self.decoder(y, z_mask, g=g, reverse=False)
- with torch.no_grad():
- x_s_sq_r = torch.exp(-2 * x_logs)
- logp1 = torch.sum(-0.5 * math.log(2 * math.pi) - x_logs, [1]).unsqueeze(
- -1
- ) # [b, t, 1]
- logp2 = torch.matmul(
- x_s_sq_r.transpose(1, 2), -0.5 * (z ** 2)
- ) # [b, t, d] x [b, d, t'] = [b, t, t']
- logp3 = torch.matmul(
- (x_m * x_s_sq_r).transpose(1, 2), z
- ) # [b, t, d] x [b, d, t'] = [b, t, t']
- logp4 = torch.sum(-0.5 * (x_m ** 2) * x_s_sq_r, [1]).unsqueeze(
- -1
- ) # [b, t, 1]
- logp = logp1 + logp2 + logp3 + logp4 # [b, t, t']
-
- attn = (
- monotonic_align.maximum_path(logp, attn_mask.squeeze(1))
- .unsqueeze(1)
- .detach()
- )
- z_m = torch.matmul(
- attn.squeeze(1).transpose(1, 2), x_m.transpose(1, 2)
- ).transpose(
- 1, 2
- ) # [b, t', t], [b, t, d] -> [b, d, t']
- z_logs = torch.matmul(
- attn.squeeze(1).transpose(1, 2), x_logs.transpose(1, 2)
- ).transpose(
- 1, 2
- ) # [b, t', t], [b, t, d] -> [b, d, t']
- logw_ = torch.log(1e-8 + torch.sum(attn, -1)) * x_mask
- return (
- (z, z_m, z_logs, logdet, z_mask),
- (x_m, x_logs, x_mask),
- (attn, logw, logw_),
- )
-
- def preprocess(self, y, y_lengths, y_max_length):
- if y_max_length is not None:
- y_max_length = (y_max_length // self.n_sqz) * self.n_sqz
- y = y[:, :, :y_max_length]
- y_lengths = (y_lengths // self.n_sqz) * self.n_sqz
- return y, y_lengths, y_max_length
-
- def store_inverse(self):
- self.decoder.store_inverse()
diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/inference/gradio.sh b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/inference/gradio.sh
deleted file mode 100644
index 2b6657952c21ca7821a9a82ed0a38f7dcf78b8e1..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/inference/gradio.sh
+++ /dev/null
@@ -1,8 +0,0 @@
-gender='male'
-glowdir='../../checkpoints/glow/'$gender'/'
-hifidir='../../checkpoints/hifi/'$gender'/'
-device='cpu'
-lang='en'
-
-
-python ../../utils/inference/run_gradio.py -a $glowdir -v $hifidir -d $device -L $lang
\ No newline at end of file
diff --git a/spaces/Harveenchadha/oiTrans/subword-nmt/setup.py b/spaces/Harveenchadha/oiTrans/subword-nmt/setup.py
deleted file mode 100644
index 23d16db1a28778604a7bfacccebe5f113cf332cd..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/oiTrans/subword-nmt/setup.py
+++ /dev/null
@@ -1,38 +0,0 @@
-from setuptools import setup, find_packages
-import unittest
-import codecs
-
-def test_suite():
- test_loader = unittest.TestLoader()
- test_suite = test_loader.discover('subword_nmt/tests', pattern='test_*.py')
-
- return test_suite
-
-
-setup(
- name='subword_nmt',
- version='0.3.8',
- description='Unsupervised Word Segmentation for Neural Machine Translation and Text Generation',
- long_description=(codecs.open("README.md", encoding='utf-8').read() +
- "\n\n" + codecs.open("CHANGELOG.md", encoding='utf-8').read()),
- long_description_content_type="text/markdown",
- url='https://github.com/rsennrich/subword-nmt',
- author='Rico Sennrich',
- license='MIT',
- test_suite='setup.test_suite',
- classifiers=[
- 'Intended Audience :: Developers',
- 'Topic :: Text Processing',
- 'Topic :: Scientific/Engineering :: Artificial Intelligence',
- 'License :: OSI Approved :: MIT License',
- 'Programming Language :: Python :: 2',
- 'Programming Language :: Python :: 3',
- ],
- install_requires=['mock',
- 'tqdm'],
- packages=find_packages(),
- entry_points={
- 'console_scripts': ['subword-nmt=subword_nmt.subword_nmt:main'],
- },
- include_package_data=True
-)
diff --git a/spaces/HelloMimosa/sail-rvc-Ai_Hoshino__From_Oshi_no_Ko___RVC_v2__300_Epoch/app.py b/spaces/HelloMimosa/sail-rvc-Ai_Hoshino__From_Oshi_no_Ko___RVC_v2__300_Epoch/app.py
deleted file mode 100644
index 6a7ac02c9b47f34c7c35cd5fc3dd9c7b708003b0..0000000000000000000000000000000000000000
--- a/spaces/HelloMimosa/sail-rvc-Ai_Hoshino__From_Oshi_no_Ko___RVC_v2__300_Epoch/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/sail-rvc/Ai_Hoshino__From_Oshi_no_Ko___RVC_v2__300_Epoch").launch()
\ No newline at end of file
diff --git a/spaces/HeyAxolotl/Bio/index.html b/spaces/HeyAxolotl/Bio/index.html
deleted file mode 100644
index 58275de3b1c343a98420342baa076b9baaafa157..0000000000000000000000000000000000000000
--- a/spaces/HeyAxolotl/Bio/index.html
+++ /dev/null
@@ -1,19 +0,0 @@
-<!DOCTYPE html>
-<html>
-    <head>
-        <meta charset="utf-8" />
-        <meta name="viewport" content="width=device-width" />
-        <title>My static Space</title>
-        <link rel="stylesheet" href="style.css" />
-    </head>
-    <body>
-        <div class="card">
-            <h1>Welcome to your static Space!</h1>
-            <p>You can modify this app directly by editing <i>index.html</i> in the <b>Files and versions</b> tab.</p>
-            <p>
-                Also don't forget to check the
-                <a href="https://huggingface.co/docs/hub/spaces" target="_blank">Spaces documentation</a>.
-            </p>
-        </div>
-    </body>
-</html>
-                lines[i] = '</code></pre>'
- else:
- if i > 0:
-                lines[i] = "<br/>" + line.replace("<", "&lt;").replace(">", "&gt;")
- return "".join(lines)
-
-def predict(inputs, top_p, temperature, chat_counter, chatbot, history, request:gr.Request):
- payload = {
- "model": MODEL,
- "messages": [{"role": "user", "content": f"{inputs}"}],
- "temperature" : 1.0,
- "top_p":1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,
- }
-
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {OPENAI_API_KEY}",
- "Headers": f"{request.kwargs['headers']}"
- }
-
- # print(f"chat_counter - {chat_counter}")
- if chat_counter != 0 :
- messages = []
- for i, data in enumerate(history):
- if i % 2 == 0:
- role = 'user'
- else:
- role = 'assistant'
- message = {}
- message["role"] = role
- message["content"] = data
- messages.append(message)
-
- message = {}
- message["role"] = "user"
- message["content"] = inputs
- messages.append(message)
- payload = {
- "model": MODEL,
- "messages": messages,
- "temperature" : temperature,
- "top_p": top_p,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,
- }
-
- chat_counter += 1
-
- history.append(inputs)
- token_counter = 0
- partial_words = ""
- counter = 0
-
- try:
- # make a POST request to the API endpoint using the requests.post method, passing in stream=True
- response = requests.post(API_URL, headers=headers, json=payload, stream=True)
- response_code = f"{response}"
- #if response_code.strip() != "":
- # #print(f"response code - {response}")
- # raise Exception(f"Sorry, hitting rate limit. Please try again later. {response}")
-
- for chunk in response.iter_lines():
- #Skipping first chunk
- if counter == 0:
- counter += 1
- continue
- #counter+=1
- # check whether each line is non-empty
- if chunk.decode() :
- chunk = chunk.decode()
- # decode each line as response data is in bytes
- if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']:
- partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"]
- if token_counter == 0:
- history.append(" " + partial_words)
- else:
- history[-1] = partial_words
- token_counter += 1
- yield [(parse_codeblock(history[i]), parse_codeblock(history[i + 1])) for i in range(0, len(history) - 1, 2) ], history, chat_counter, response, gr.update(interactive=False), gr.update(interactive=False) # resembles {chatbot: chat, state: history}
- except Exception as e:
- print (f'error found: {e}')
- yield [(parse_codeblock(history[i]), parse_codeblock(history[i + 1])) for i in range(0, len(history) - 1, 2) ], history, chat_counter, response, gr.update(interactive=True), gr.update(interactive=True)
- print(json.dumps({"chat_counter": chat_counter, "payload": payload, "partial_words": partial_words, "token_counter": token_counter, "counter": counter}))
-
-
-def reset_textbox():
- return gr.update(value='', interactive=False), gr.update(interactive=False)
-
-title = """<h1 align="center">GPT-3.5 Chatbot</h1>"""
-if DISABLED:
-    title = """<h1 align="center">This app has reached OpenAI's usage limit. We are currently requesting an increase in our quota. Please check back in a few days.</h1>"""
-description = """Language models can be conditioned to act like dialogue agents through a conversational prompt that typically takes the form:
-```
-User: <utterance>
-Assistant: <utterance>
-User: <utterance>
-Assistant: <utterance>
-...
-```
-In this app, you can explore the outputs of a gpt-3.5 LLM.
-"""
-
-theme = gr.themes.Default(primary_hue="green")
-
-with gr.Blocks(css = """#col_container { margin-left: auto; margin-right: auto;}
- #chatbot {height: 520px; overflow: auto;}""",
- theme=theme) as demo:
- gr.HTML(title)
-    gr.HTML("""<center>This app provides you full access to GPT-3.5 (4096 token limit). You don't need any OPENAI API key.</center>""")
-    #gr.HTML('''<center>Duplicate the Space and run securely with your OpenAI API Key</center>''')
- with gr.Column(elem_id = "col_container", visible=False) as main_block:
- #API Key is provided by OpenAI
- #openai_api_key = gr.Textbox(type='password', label="Enter only your OpenAI API key here")
- chatbot = gr.Chatbot(elem_id='chatbot') #c
- inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter") #t
- state = gr.State([]) #s
- with gr.Row():
- with gr.Column(scale=7):
- b1 = gr.Button(visible=not DISABLED).style(full_width=True)
- with gr.Column(scale=3):
- server_status_code = gr.Textbox(label="Status code from OpenAI server", )
-
- #inputs, top_p, temperature, top_k, repetition_penalty
- with gr.Accordion("Parameters", open=False):
- top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",)
- temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",)
- #top_k = gr.Slider( minimum=1, maximum=50, value=4, step=1, interactive=True, label="Top-k",)
- #repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.03, step=0.01, interactive=True, label="Repetition Penalty", )
- chat_counter = gr.Number(value=0, visible=False, precision=0)
-
- with gr.Column(elem_id = "user_consent_container") as user_consent_block:
- # Get user consent
- accept_checkbox = gr.Checkbox(visible=False)
- js = "(x) => confirm('By clicking \"OK\", I agree that my data may be published or shared.')"
- with gr.Accordion("User Consent for Data Collection, Use, and Sharing", open=True):
-            gr.HTML("""
-            <div>
-                <p>By using our app, which is powered by OpenAI's API, you acknowledge and agree to the following terms regarding the data you provide:</p>
-                <ol>
-                    <li><b>Collection:</b> We may collect information, including the inputs you type into our app, the outputs generated by OpenAI's API, and certain technical details about your device and connection (such as browser type, operating system, and IP address) provided by your device's request headers.</li>
-                    <li><b>Use:</b> We may use the collected data for research purposes, to improve our services, and to develop new products or services, including commercial applications, and for security purposes, such as protecting against unauthorized access and attacks.</li>
-                    <li><b>Sharing and Publication:</b> Your data, including the technical details collected from your device's request headers, may be published, shared with third parties, or used for analysis and reporting purposes.</li>
-                    <li><b>Data Retention:</b> We may retain your data, including the technical details collected from your device's request headers, for as long as necessary.</li>
-                </ol>
-                <p>By continuing to use our app, you provide your explicit consent to the collection, use, and potential sharing of your data as described above. If you do not agree with our data collection, use, and sharing practices, please do not use our app.</p>
-            </div>
-            """)
- accept_button = gr.Button("I Agree")
-
- def enable_inputs():
- return user_consent_block.update(visible=False), main_block.update(visible=True)
-
- accept_button.click(None, None, accept_checkbox, _js=js, queue=False)
- accept_checkbox.change(fn=enable_inputs, inputs=[], outputs=[user_consent_block, main_block], queue=False)
-
- inputs.submit(reset_textbox, [], [inputs, b1], queue=False)
- inputs.submit(predict, [inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code, inputs, b1],) #openai_api_key
- b1.click(reset_textbox, [], [inputs, b1], queue=False)
- b1.click(predict, [inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code, inputs, b1],) #openai_api_key
-
- demo.queue(max_size=20, concurrency_count=NUM_THREADS, api_open=False).launch(share=True)
\ No newline at end of file
diff --git a/spaces/JohnC26/2.Streamlit.GraphViz.Dynamic.Architecture.Diagram/README.md b/spaces/JohnC26/2.Streamlit.GraphViz.Dynamic.Architecture.Diagram/README.md
deleted file mode 100644
index 556a1aa95f51218ca29f62610a739b9c8222179b..0000000000000000000000000000000000000000
--- a/spaces/JohnC26/2.Streamlit.GraphViz.Dynamic.Architecture.Diagram/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Streamlit.GraphViz.Dynamic.Architecture.Diagram
-emoji: 😻
-colorFrom: green
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: awacke1/Streamlit.GraphViz.Dynamic.Architecture.Diagram
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kabriske/Multilingual_Video_Subtitler/main.py b/spaces/Kabriske/Multilingual_Video_Subtitler/main.py
deleted file mode 100644
index 8ad5933a32d28eaecbec00c3922adf60013c2338..0000000000000000000000000000000000000000
--- a/spaces/Kabriske/Multilingual_Video_Subtitler/main.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import argparse
-import json
-import os
-import subprocess
-
-from audio_to_transcript import TranscribeAudio
-from translator import MyTranslator
-from utils import log
-from video_to_audio_converter import VideoToAudioConverter
-
-with open('resources/languages.json', 'r') as f:
- code2lang = json.load(f)
-
-# language code lookup by name, with a few language aliases
-lang2code = {
- **{language: code for code, language in code2lang.items()},
-}
-
-LANGS = sorted(lang2code.keys())
-
-
-class Pipeline:
- def __init__(self):
- self.video_to_audio = VideoToAudioConverter()
- self.audio_to_text = TranscribeAudio()
- self.translator = MyTranslator()
-
- def __call__(self, video_path: str, output_path: str, input_language: str, output_language: str):
- filename, ext = os.path.splitext(video_path)
-
- audio_path = self.video_to_audio.convert(video_path)
- subtitle_path = self.audio_to_text(audio_path, output_path, input_language)
- if input_language != output_language:
- subtitle_path = self.translator.translate(subtitle_path, lang2code[input_language],
- lang2code[output_language])
-        log(f"Embedding subtitles into the input video and saving the output to {filename}_{output_language}_output.mp4")
- # Use ffmpeg to add the subtitles to the input MP4 file and create the output MP4 file
-
- subtitles_cmd = ["ffmpeg", "-y", "-i", video_path, "-vf", f"subtitles={subtitle_path}", "-c:a", "copy",
- f"{filename}_{output_language}_output.mp4"]
-
- subprocess.run(subtitles_cmd, check=True)
- return f"{filename}_{output_language}_output.mp4"
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser(
- formatter_class=argparse.ArgumentDefaultsHelpFormatter)
- parser.add_argument("video", type=str,
- help="video path to transcribe")
- parser.add_argument("--output_dir", "-o", type=str,
- default=".", help="directory to save the outputs")
- parser.add_argument("--input_language", type=str, default=None, choices=LANGS,
- help="language spoken in the video, skip to perform language detection")
- parser.add_argument("--output_language", type=str, default=None, choices=LANGS,
- help="required translation language")
-
- args = parser.parse_args()
- pipeline = Pipeline()
- pipeline(args.video, args.output_dir, args.input_language, args.output_language)
diff --git a/spaces/Kabriske/Multilingual_Video_Subtitler/utils.py b/spaces/Kabriske/Multilingual_Video_Subtitler/utils.py
deleted file mode 100644
index 3e8396cde0b390f2cd14f4836cc66c6beac907f9..0000000000000000000000000000000000000000
--- a/spaces/Kabriske/Multilingual_Video_Subtitler/utils.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from datetime import datetime
-
-
-def log(message):
- timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
- print(f'[{timestamp}] {message}')
\ No newline at end of file
diff --git a/spaces/Kalvin-5/WizardLM-WizardCoder-15B-V1.0/README.md b/spaces/Kalvin-5/WizardLM-WizardCoder-15B-V1.0/README.md
deleted file mode 100644
index 824e9278de16fee0f043b407b57b55fb9b5496a3..0000000000000000000000000000000000000000
--- a/spaces/Kalvin-5/WizardLM-WizardCoder-15B-V1.0/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: WizardLM WizardCoder 15B V1.0
-emoji: 📚
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kurkur99/Sentiment_analysis/eda.py b/spaces/Kurkur99/Sentiment_analysis/eda.py
deleted file mode 100644
index 1e158c252639764a0a4e4ae419905e5d86360be3..0000000000000000000000000000000000000000
--- a/spaces/Kurkur99/Sentiment_analysis/eda.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import streamlit as st
-import pandas as pd
-import matplotlib.pyplot as plt
-from wordcloud import WordCloud
-import re
-
-def label_sentiment(rating):
- """Label sentiment based on the rating."""
- if rating in [1, 2]:
- return 'negative'
- elif rating == 3:
- return 'neutral'
- elif rating in [4, 5]:
- return 'positive'
- else:
- return 'unknown'
-
-def process_review(review):
- """Simple processing for the review text."""
- review = review.lower()
- review = re.sub(r'[^a-z\s]', '', review) # Remove non-alphabetical characters
- return review
-
-def display_eda(data):
- # Derive the 'sentiment' column from 'rating' if it doesn't exist
- if 'sentiment' not in data.columns:
- if 'rating' not in data.columns:
- st.error("The dataset does not contain a 'rating' or 'sentiment' column. Please check the data source.")
- return
- else:
- data['sentiment'] = data['rating'].apply(label_sentiment)
-
- # Distribution of sentiments
- st.subheader("Distribution of Sentiments")
- sentiment_counts = data['sentiment'].value_counts()
- fig, ax = plt.subplots()
- sentiment_counts.plot(kind='bar', ax=ax)
- ax.set_title('Distribution of Sentiments')
- ax.set_xlabel('Sentiment')
- ax.set_ylabel('Count')
- st.pyplot(fig)
-
- # Word cloud for each sentiment
- st.subheader("Word Clouds for Sentiments")
- sentiments = data['sentiment'].unique()
- for sentiment in sentiments:
- st.write(f"Word Cloud for {sentiment}")
- subset = data[data['sentiment'] == sentiment]
- text = " ".join(process_review(review) for review in subset['review_description'])
- wordcloud = WordCloud(max_words=100, background_color="white").generate(text)
- fig = plt.figure()
- plt.imshow(wordcloud, interpolation="bilinear")
- plt.axis("off")
- st.pyplot(fig)
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/centripetal_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/centripetal_head.py
deleted file mode 100644
index 18f6601ff82394864d53351b10b40f51eb2aec6b..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/centripetal_head.py
+++ /dev/null
@@ -1,459 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import List, Optional, Tuple
-
-import torch.nn as nn
-from mmcv.cnn import ConvModule
-from mmcv.ops import DeformConv2d
-from mmengine.model import normal_init
-from torch import Tensor
-
-from mmdet.registry import MODELS
-from mmdet.utils import (ConfigType, InstanceList, OptInstanceList,
- OptMultiConfig)
-from ..utils import multi_apply
-from .corner_head import CornerHead
-
-
-@MODELS.register_module()
-class CentripetalHead(CornerHead):
- """Head of CentripetalNet: Pursuing High-quality Keypoint Pairs for Object
- Detection.
-
- CentripetalHead inherits from :class:`CornerHead`. It removes the
- embedding branch and adds guiding shift and centripetal shift branches.
- More details can be found in the `paper
- `_ .
-
- Args:
- num_classes (int): Number of categories excluding the background
- category.
- in_channels (int): Number of channels in the input feature map.
- num_feat_levels (int): Levels of feature from the previous module.
- 2 for HourglassNet-104 and 1 for HourglassNet-52. HourglassNet-104
- outputs the final feature and intermediate supervision feature and
- HourglassNet-52 only outputs the final feature. Defaults to 2.
- corner_emb_channels (int): Channel of embedding vector. Defaults to 1.
- train_cfg (:obj:`ConfigDict` or dict, optional): Training config.
- Useless in CornerHead, but we keep this variable for
- SingleStageDetector.
- test_cfg (:obj:`ConfigDict` or dict, optional): Testing config of
- CornerHead.
- loss_heatmap (:obj:`ConfigDict` or dict): Config of corner heatmap
- loss. Defaults to GaussianFocalLoss.
- loss_embedding (:obj:`ConfigDict` or dict): Config of corner embedding
- loss. Defaults to AssociativeEmbeddingLoss.
- loss_offset (:obj:`ConfigDict` or dict): Config of corner offset loss.
- Defaults to SmoothL1Loss.
- loss_guiding_shift (:obj:`ConfigDict` or dict): Config of
- guiding shift loss. Defaults to SmoothL1Loss.
- loss_centripetal_shift (:obj:`ConfigDict` or dict): Config of
- centripetal shift loss. Defaults to SmoothL1Loss.
- init_cfg (:obj:`ConfigDict` or dict, optional): the config to control
- the initialization.
- """
-
- def __init__(self,
- *args,
- centripetal_shift_channels: int = 2,
- guiding_shift_channels: int = 2,
- feat_adaption_conv_kernel: int = 3,
- loss_guiding_shift: ConfigType = dict(
- type='SmoothL1Loss', beta=1.0, loss_weight=0.05),
- loss_centripetal_shift: ConfigType = dict(
- type='SmoothL1Loss', beta=1.0, loss_weight=1),
- init_cfg: OptMultiConfig = None,
- **kwargs) -> None:
- assert init_cfg is None, 'To prevent abnormal initialization ' \
- 'behavior, init_cfg is not allowed to be set'
- assert centripetal_shift_channels == 2, (
- 'CentripetalHead only support centripetal_shift_channels == 2')
- self.centripetal_shift_channels = centripetal_shift_channels
- assert guiding_shift_channels == 2, (
- 'CentripetalHead only support guiding_shift_channels == 2')
- self.guiding_shift_channels = guiding_shift_channels
- self.feat_adaption_conv_kernel = feat_adaption_conv_kernel
- super().__init__(*args, init_cfg=init_cfg, **kwargs)
- self.loss_guiding_shift = MODELS.build(loss_guiding_shift)
- self.loss_centripetal_shift = MODELS.build(loss_centripetal_shift)
-
- def _init_centripetal_layers(self) -> None:
- """Initialize centripetal layers.
-
- Including feature adaption deform convs (feat_adaption), deform offset
- prediction convs (dcn_off), guiding shift (guiding_shift) and
- centripetal shift ( centripetal_shift). Each branch has two parts:
- prefix `tl_` for top-left and `br_` for bottom-right.
- """
- self.tl_feat_adaption = nn.ModuleList()
- self.br_feat_adaption = nn.ModuleList()
- self.tl_dcn_offset = nn.ModuleList()
- self.br_dcn_offset = nn.ModuleList()
- self.tl_guiding_shift = nn.ModuleList()
- self.br_guiding_shift = nn.ModuleList()
- self.tl_centripetal_shift = nn.ModuleList()
- self.br_centripetal_shift = nn.ModuleList()
-
- for _ in range(self.num_feat_levels):
- self.tl_feat_adaption.append(
- DeformConv2d(self.in_channels, self.in_channels,
- self.feat_adaption_conv_kernel, 1, 1))
- self.br_feat_adaption.append(
- DeformConv2d(self.in_channels, self.in_channels,
- self.feat_adaption_conv_kernel, 1, 1))
-
- self.tl_guiding_shift.append(
- self._make_layers(
- out_channels=self.guiding_shift_channels,
- in_channels=self.in_channels))
- self.br_guiding_shift.append(
- self._make_layers(
- out_channels=self.guiding_shift_channels,
- in_channels=self.in_channels))
-
- self.tl_dcn_offset.append(
- ConvModule(
- self.guiding_shift_channels,
- self.feat_adaption_conv_kernel**2 *
- self.guiding_shift_channels,
- 1,
- bias=False,
- act_cfg=None))
- self.br_dcn_offset.append(
- ConvModule(
- self.guiding_shift_channels,
- self.feat_adaption_conv_kernel**2 *
- self.guiding_shift_channels,
- 1,
- bias=False,
- act_cfg=None))
-
- self.tl_centripetal_shift.append(
- self._make_layers(
- out_channels=self.centripetal_shift_channels,
- in_channels=self.in_channels))
- self.br_centripetal_shift.append(
- self._make_layers(
- out_channels=self.centripetal_shift_channels,
- in_channels=self.in_channels))
-
- def _init_layers(self) -> None:
- """Initialize layers for CentripetalHead.
-
- Including two parts: CornerHead layers and CentripetalHead layers
- """
- super()._init_layers() # using _init_layers in CornerHead
- self._init_centripetal_layers()
-
- def init_weights(self) -> None:
- super().init_weights()
- for i in range(self.num_feat_levels):
- normal_init(self.tl_feat_adaption[i], std=0.01)
- normal_init(self.br_feat_adaption[i], std=0.01)
- normal_init(self.tl_dcn_offset[i].conv, std=0.1)
- normal_init(self.br_dcn_offset[i].conv, std=0.1)
- _ = [x.conv.reset_parameters() for x in self.tl_guiding_shift[i]]
- _ = [x.conv.reset_parameters() for x in self.br_guiding_shift[i]]
- _ = [
- x.conv.reset_parameters() for x in self.tl_centripetal_shift[i]
- ]
- _ = [
- x.conv.reset_parameters() for x in self.br_centripetal_shift[i]
- ]
-
- def forward_single(self, x: Tensor, lvl_ind: int) -> List[Tensor]:
- """Forward feature of a single level.
-
- Args:
- x (Tensor): Feature of a single level.
- lvl_ind (int): Level index of current feature.
-
- Returns:
- tuple[Tensor]: A tuple of CentripetalHead's output for current
- feature level. Containing the following Tensors:
-
- - tl_heat (Tensor): Predicted top-left corner heatmap.
- - br_heat (Tensor): Predicted bottom-right corner heatmap.
- - tl_off (Tensor): Predicted top-left offset heatmap.
- - br_off (Tensor): Predicted bottom-right offset heatmap.
- - tl_guiding_shift (Tensor): Predicted top-left guiding shift
- heatmap.
- - br_guiding_shift (Tensor): Predicted bottom-right guiding
- shift heatmap.
- - tl_centripetal_shift (Tensor): Predicted top-left centripetal
- shift heatmap.
- - br_centripetal_shift (Tensor): Predicted bottom-right
- centripetal shift heatmap.
- """
- tl_heat, br_heat, _, _, tl_off, br_off, tl_pool, br_pool = super(
- ).forward_single(
- x, lvl_ind, return_pool=True)
-
- tl_guiding_shift = self.tl_guiding_shift[lvl_ind](tl_pool)
- br_guiding_shift = self.br_guiding_shift[lvl_ind](br_pool)
-
- tl_dcn_offset = self.tl_dcn_offset[lvl_ind](tl_guiding_shift.detach())
- br_dcn_offset = self.br_dcn_offset[lvl_ind](br_guiding_shift.detach())
-
- tl_feat_adaption = self.tl_feat_adaption[lvl_ind](tl_pool,
- tl_dcn_offset)
- br_feat_adaption = self.br_feat_adaption[lvl_ind](br_pool,
- br_dcn_offset)
-
- tl_centripetal_shift = self.tl_centripetal_shift[lvl_ind](
- tl_feat_adaption)
- br_centripetal_shift = self.br_centripetal_shift[lvl_ind](
- br_feat_adaption)
-
- result_list = [
- tl_heat, br_heat, tl_off, br_off, tl_guiding_shift,
- br_guiding_shift, tl_centripetal_shift, br_centripetal_shift
- ]
- return result_list
-
- def loss_by_feat(
- self,
- tl_heats: List[Tensor],
- br_heats: List[Tensor],
- tl_offs: List[Tensor],
- br_offs: List[Tensor],
- tl_guiding_shifts: List[Tensor],
- br_guiding_shifts: List[Tensor],
- tl_centripetal_shifts: List[Tensor],
- br_centripetal_shifts: List[Tensor],
- batch_gt_instances: InstanceList,
- batch_img_metas: List[dict],
- batch_gt_instances_ignore: OptInstanceList = None) -> dict:
- """Calculate the loss based on the features extracted by the detection
- head.
-
- Args:
- tl_heats (list[Tensor]): Top-left corner heatmaps for each level
- with shape (N, num_classes, H, W).
- br_heats (list[Tensor]): Bottom-right corner heatmaps for each
- level with shape (N, num_classes, H, W).
- tl_offs (list[Tensor]): Top-left corner offsets for each level
- with shape (N, corner_offset_channels, H, W).
- br_offs (list[Tensor]): Bottom-right corner offsets for each level
- with shape (N, corner_offset_channels, H, W).
- tl_guiding_shifts (list[Tensor]): Top-left guiding shifts for each
- level with shape (N, guiding_shift_channels, H, W).
- br_guiding_shifts (list[Tensor]): Bottom-right guiding shifts for
- each level with shape (N, guiding_shift_channels, H, W).
- tl_centripetal_shifts (list[Tensor]): Top-left centripetal shifts
- for each level with shape (N, centripetal_shift_channels, H,
- W).
- br_centripetal_shifts (list[Tensor]): Bottom-right centripetal
- shifts for each level with shape (N,
- centripetal_shift_channels, H, W).
- batch_gt_instances (list[:obj:`InstanceData`]): Batch of
- gt_instance. It usually includes ``bboxes`` and ``labels``
- attributes.
- batch_img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- batch_gt_instances_ignore (list[:obj:`InstanceData`], optional):
- Specify which bounding boxes can be ignored when computing
- the loss.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components. Containing the
- following losses:
-
- - det_loss (list[Tensor]): Corner keypoint losses of all
- feature levels.
- - off_loss (list[Tensor]): Corner offset losses of all feature
- levels.
- - guiding_loss (list[Tensor]): Guiding shift losses of all
- feature levels.
- - centripetal_loss (list[Tensor]): Centripetal shift losses of
- all feature levels.
- """
- gt_bboxes = [
- gt_instances.bboxes for gt_instances in batch_gt_instances
- ]
- gt_labels = [
- gt_instances.labels for gt_instances in batch_gt_instances
- ]
-
- targets = self.get_targets(
- gt_bboxes,
- gt_labels,
- tl_heats[-1].shape,
- batch_img_metas[0]['batch_input_shape'],
- with_corner_emb=self.with_corner_emb,
- with_guiding_shift=True,
- with_centripetal_shift=True)
- mlvl_targets = [targets for _ in range(self.num_feat_levels)]
- [det_losses, off_losses, guiding_losses, centripetal_losses
- ] = multi_apply(self.loss_by_feat_single, tl_heats, br_heats, tl_offs,
- br_offs, tl_guiding_shifts, br_guiding_shifts,
- tl_centripetal_shifts, br_centripetal_shifts,
- mlvl_targets)
- loss_dict = dict(
- det_loss=det_losses,
- off_loss=off_losses,
- guiding_loss=guiding_losses,
- centripetal_loss=centripetal_losses)
- return loss_dict
-
- def loss_by_feat_single(self, tl_hmp: Tensor, br_hmp: Tensor,
- tl_off: Tensor, br_off: Tensor,
- tl_guiding_shift: Tensor, br_guiding_shift: Tensor,
- tl_centripetal_shift: Tensor,
- br_centripetal_shift: Tensor,
- targets: dict) -> Tuple[Tensor, ...]:
- """Calculate the loss of a single scale level based on the features
- extracted by the detection head.
-
- Args:
- tl_hmp (Tensor): Top-left corner heatmap for current level with
- shape (N, num_classes, H, W).
- br_hmp (Tensor): Bottom-right corner heatmap for current level with
- shape (N, num_classes, H, W).
- tl_off (Tensor): Top-left corner offset for current level with
- shape (N, corner_offset_channels, H, W).
- br_off (Tensor): Bottom-right corner offset for current level with
- shape (N, corner_offset_channels, H, W).
- tl_guiding_shift (Tensor): Top-left guiding shift for current level
- with shape (N, guiding_shift_channels, H, W).
- br_guiding_shift (Tensor): Bottom-right guiding shift for current
- level with shape (N, guiding_shift_channels, H, W).
- tl_centripetal_shift (Tensor): Top-left centripetal shift for
- current level with shape (N, centripetal_shift_channels, H, W).
- br_centripetal_shift (Tensor): Bottom-right centripetal shift for
- current level with shape (N, centripetal_shift_channels, H, W).
- targets (dict): Corner target generated by `get_targets`.
-
- Returns:
- tuple[torch.Tensor]: Losses of the head's different branches
- containing the following losses:
-
- - det_loss (Tensor): Corner keypoint loss.
- - off_loss (Tensor): Corner offset loss.
- - guiding_loss (Tensor): Guiding shift loss.
- - centripetal_loss (Tensor): Centripetal shift loss.
- """
- targets['corner_embedding'] = None
-
- det_loss, _, _, off_loss = super().loss_by_feat_single(
- tl_hmp, br_hmp, None, None, tl_off, br_off, targets)
-
- gt_tl_guiding_shift = targets['topleft_guiding_shift']
- gt_br_guiding_shift = targets['bottomright_guiding_shift']
- gt_tl_centripetal_shift = targets['topleft_centripetal_shift']
- gt_br_centripetal_shift = targets['bottomright_centripetal_shift']
-
- gt_tl_heatmap = targets['topleft_heatmap']
- gt_br_heatmap = targets['bottomright_heatmap']
- # We only compute the offset loss at the real corner position.
- # The value of real corner would be 1 in heatmap ground truth.
- # The mask is computed in class agnostic mode and its shape is
- # batch * 1 * width * height.
- tl_mask = gt_tl_heatmap.eq(1).sum(1).gt(0).unsqueeze(1).type_as(
- gt_tl_heatmap)
- br_mask = gt_br_heatmap.eq(1).sum(1).gt(0).unsqueeze(1).type_as(
- gt_br_heatmap)
-
- # Guiding shift loss
- tl_guiding_loss = self.loss_guiding_shift(
- tl_guiding_shift,
- gt_tl_guiding_shift,
- tl_mask,
- avg_factor=tl_mask.sum())
- br_guiding_loss = self.loss_guiding_shift(
- br_guiding_shift,
- gt_br_guiding_shift,
- br_mask,
- avg_factor=br_mask.sum())
- guiding_loss = (tl_guiding_loss + br_guiding_loss) / 2.0
- # Centripetal shift loss
- tl_centripetal_loss = self.loss_centripetal_shift(
- tl_centripetal_shift,
- gt_tl_centripetal_shift,
- tl_mask,
- avg_factor=tl_mask.sum())
- br_centripetal_loss = self.loss_centripetal_shift(
- br_centripetal_shift,
- gt_br_centripetal_shift,
- br_mask,
- avg_factor=br_mask.sum())
- centripetal_loss = (tl_centripetal_loss + br_centripetal_loss) / 2.0
-
- return det_loss, off_loss, guiding_loss, centripetal_loss
-
- def predict_by_feat(self,
- tl_heats: List[Tensor],
- br_heats: List[Tensor],
- tl_offs: List[Tensor],
- br_offs: List[Tensor],
- tl_guiding_shifts: List[Tensor],
- br_guiding_shifts: List[Tensor],
- tl_centripetal_shifts: List[Tensor],
- br_centripetal_shifts: List[Tensor],
- batch_img_metas: Optional[List[dict]] = None,
- rescale: bool = False,
- with_nms: bool = True) -> InstanceList:
- """Transform a batch of output features extracted from the head into
- bbox results.
-
- Args:
- tl_heats (list[Tensor]): Top-left corner heatmaps for each level
- with shape (N, num_classes, H, W).
- br_heats (list[Tensor]): Bottom-right corner heatmaps for each
- level with shape (N, num_classes, H, W).
- tl_offs (list[Tensor]): Top-left corner offsets for each level
- with shape (N, corner_offset_channels, H, W).
- br_offs (list[Tensor]): Bottom-right corner offsets for each level
- with shape (N, corner_offset_channels, H, W).
- tl_guiding_shifts (list[Tensor]): Top-left guiding shifts for each
- level with shape (N, guiding_shift_channels, H, W). Useless in
- this function, we keep this arg because it's the raw output
- from CentripetalHead.
- br_guiding_shifts (list[Tensor]): Bottom-right guiding shifts for
- each level with shape (N, guiding_shift_channels, H, W).
- Useless in this function, we keep this arg because it's the
- raw output from CentripetalHead.
- tl_centripetal_shifts (list[Tensor]): Top-left centripetal shifts
- for each level with shape (N, centripetal_shift_channels, H,
- W).
- br_centripetal_shifts (list[Tensor]): Bottom-right centripetal
- shifts for each level with shape (N,
- centripetal_shift_channels, H, W).
- batch_img_metas (list[dict], optional): Batch image meta info.
- Defaults to None.
- rescale (bool): If True, return boxes in original image space.
- Defaults to False.
- with_nms (bool): If True, do nms before return boxes.
- Defaults to True.
-
- Returns:
- list[:obj:`InstanceData`]: Object detection results of each image
- after the post process. Each item usually contains following keys.
-
- - scores (Tensor): Classification scores, has a shape
- (num_instance, )
- - labels (Tensor): Labels of bboxes, has a shape
- (num_instances, ).
- - bboxes (Tensor): Has a shape (num_instances, 4),
- the last dimension 4 arrange as (x1, y1, x2, y2).
- """
- assert tl_heats[-1].shape[0] == br_heats[-1].shape[0] == len(
- batch_img_metas)
- result_list = []
- for img_id in range(len(batch_img_metas)):
- result_list.append(
- self._predict_by_feat_single(
- tl_heats[-1][img_id:img_id + 1, :],
- br_heats[-1][img_id:img_id + 1, :],
- tl_offs[-1][img_id:img_id + 1, :],
- br_offs[-1][img_id:img_id + 1, :],
- batch_img_metas[img_id],
- tl_emb=None,
- br_emb=None,
- tl_centripetal_shift=tl_centripetal_shifts[-1][
- img_id:img_id + 1, :],
- br_centripetal_shift=br_centripetal_shifts[-1][
- img_id:img_id + 1, :],
- rescale=rescale,
- with_nms=with_nms))
-
- return result_list
diff --git a/spaces/Langame/explorer/README.md b/spaces/Langame/explorer/README.md
deleted file mode 100644
index 3329e9ac1f239eef487130210d0135994b84a4ce..0000000000000000000000000000000000000000
--- a/spaces/Langame/explorer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Explorer
-emoji: 🐢
-colorFrom: green
-colorTo: red
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/LoveAsAConstruct/Stable_Diffusion/webui.py b/spaces/LoveAsAConstruct/Stable_Diffusion/webui.py
deleted file mode 100644
index 0bc1b9b2d5ec82703e85df3cfaa25f3bf9ecf110..0000000000000000000000000000000000000000
--- a/spaces/LoveAsAConstruct/Stable_Diffusion/webui.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import os
-import threading
-
-from modules.paths import script_path
-
-import torch
-from omegaconf import OmegaConf
-
-import signal
-
-from ldm.util import instantiate_from_config
-
-from modules.shared import opts, cmd_opts, state
-import modules.shared as shared
-import modules.ui
-import modules.scripts
-import modules.sd_hijack
-import modules.codeformer_model
-import modules.gfpgan_model
-import modules.face_restoration
-import modules.realesrgan_model as realesrgan
-import modules.esrgan_model as esrgan
-import modules.extras
-import modules.lowvram
-import modules.txt2img
-import modules.img2img
-
-
-modules.codeformer_model.setup_codeformer()
-modules.gfpgan_model.setup_gfpgan()
-shared.face_restorers.append(modules.face_restoration.FaceRestoration())
-
-esrgan.load_models(cmd_opts.esrgan_models_path)
-realesrgan.setup_realesrgan()
-
-
-def load_model_from_config(config, ckpt, verbose=False):
- print(f"Loading model from {ckpt}")
- pl_sd = torch.load(ckpt, map_location="cpu")
- if "global_step" in pl_sd:
- print(f"Global Step: {pl_sd['global_step']}")
- sd = pl_sd["state_dict"]
-
- model = instantiate_from_config(config.model)
- m, u = model.load_state_dict(sd, strict=False)
- if len(m) > 0 and verbose:
- print("missing keys:")
- print(m)
- if len(u) > 0 and verbose:
- print("unexpected keys:")
- print(u)
-
- model.eval()
- return model
-
-
-queue_lock = threading.Lock()
-
-
-def wrap_gradio_gpu_call(func):
- def f(*args, **kwargs):
- shared.state.sampling_step = 0
- shared.state.job_count = -1
- shared.state.job_no = 0
- shared.state.current_latent = None
- shared.state.current_image = None
- shared.state.current_image_sampling_step = 0
-
- with queue_lock:
- res = func(*args, **kwargs)
-
- shared.state.job = ""
- shared.state.job_count = 0
-
- return res
-
- return modules.ui.wrap_gradio_call(f)
-
-
-modules.scripts.load_scripts(os.path.join(script_path, "scripts"))
-
-try:
- # this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start.
-
- from transformers import logging
-
- logging.set_verbosity_error()
-except Exception:
- pass
-
-sd_config = OmegaConf.load(cmd_opts.config)
-shared.sd_model = load_model_from_config(sd_config, cmd_opts.ckpt)
-shared.sd_model = (shared.sd_model if cmd_opts.no_half else shared.sd_model.half())
-
-if cmd_opts.lowvram or cmd_opts.medvram:
- modules.lowvram.setup_for_low_vram(shared.sd_model, cmd_opts.medvram)
-else:
- shared.sd_model = shared.sd_model.to(shared.device)
-
-modules.sd_hijack.model_hijack.hijack(shared.sd_model)
-
-
-def webui():
- # make the program just exit at ctrl+c without waiting for anything
- def sigint_handler(sig, frame):
- print(f'Interrupted with signal {sig} in {frame}')
- os._exit(0)
-
- signal.signal(signal.SIGINT, sigint_handler)
-
- demo = modules.ui.create_ui(
- txt2img=wrap_gradio_gpu_call(modules.txt2img.txt2img),
- img2img=wrap_gradio_gpu_call(modules.img2img.img2img),
- run_extras=wrap_gradio_gpu_call(modules.extras.run_extras),
- run_pnginfo=modules.extras.run_pnginfo
- )
-
- demo.launch(share=cmd_opts.share, server_name="0.0.0.0" if cmd_opts.listen else None, server_port=cmd_opts.port)
-
-
-if __name__ == "__main__":
- webui()
diff --git a/spaces/LucasCodeBreak/MusicGen/audiocraft/utils/__init__.py b/spaces/LucasCodeBreak/MusicGen/audiocraft/utils/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/LucasCodeBreak/MusicGen/audiocraft/utils/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/Luelll/ChuanhuChatGPT/modules/overwrites.py b/spaces/Luelll/ChuanhuChatGPT/modules/overwrites.py
deleted file mode 100644
index d17f56873c156e9fb883d35b50e2a28740f2cf90..0000000000000000000000000000000000000000
--- a/spaces/Luelll/ChuanhuChatGPT/modules/overwrites.py
+++ /dev/null
@@ -1,101 +0,0 @@
-from __future__ import annotations
-import logging
-
-from llama_index import Prompt
-from typing import List, Tuple
-import mdtex2html
-from gradio_client import utils as client_utils
-
-from modules.presets import *
-from modules.llama_func import *
-from modules.config import render_latex
-
-def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]:
- logging.debug("Compacting text chunks...🚀🚀🚀")
- combined_str = [c.strip() for c in text_chunks if c.strip()]
- combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)]
- combined_str = "\n\n".join(combined_str)
- # resplit based on self.max_chunk_overlap
- text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1)
- return text_splitter.split_text(combined_str)
-
-
-def postprocess(
- self,
- y: List[List[str | Tuple[str] | Tuple[str, str] | None] | Tuple],
- ) -> List[List[str | Dict | None]]:
- """
- Parameters:
- y: List of lists representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. It can also be a tuple whose first element is a string filepath or URL to an image/video/audio, and second (optional) element is the alt text, in which case the media file is displayed. It can also be None, in which case that message is not displayed.
- Returns:
- List of lists representing the message and response. Each message and response will be a string of HTML, or a dictionary with media information. Or None if the message is not to be displayed.
- """
- if y is None:
- return []
- processed_messages = []
- for message_pair in y:
- assert isinstance(
- message_pair, (tuple, list)
- ), f"Expected a list of lists or list of tuples. Received: {message_pair}"
- assert (
- len(message_pair) == 2
- ), f"Expected a list of lists of length 2 or list of tuples of length 2. Received: {message_pair}"
-
- processed_messages.append(
- [
- self._postprocess_chat_messages(message_pair[0], "user"),
- self._postprocess_chat_messages(message_pair[1], "bot"),
- ]
- )
- return processed_messages
-
-def postprocess_chat_messages(
- self, chat_message: str | Tuple | List | None, message_type: str
- ) -> str | Dict | None:
- if chat_message is None:
- return None
- elif isinstance(chat_message, (tuple, list)):
- filepath = chat_message[0]
- mime_type = client_utils.get_mimetype(filepath)
- filepath = self.make_temp_copy_if_needed(filepath)
- return {
- "name": filepath,
- "mime_type": mime_type,
- "alt_text": chat_message[1] if len(chat_message) > 1 else None,
- "data": None, # These last two fields are filled in by the frontend
- "is_file": True,
- }
- elif isinstance(chat_message, str):
- if message_type == "bot":
- if not detect_converted_mark(chat_message):
- chat_message = convert_mdtext(chat_message)
- elif message_type == "user":
- if not detect_converted_mark(chat_message):
- chat_message = convert_asis(chat_message)
- return chat_message
- else:
- raise ValueError(f"Invalid message for Chatbot component: {chat_message}")
-
-with open("./assets/custom.js", "r", encoding="utf-8") as f, \
- open("./assets/external-scripts.js", "r", encoding="utf-8") as f1:
- customJS = f.read()
- externalScripts = f1.read()
-
-
-def reload_javascript():
- print("Reloading javascript...")
-    js = f'<script>{customJS}</script><script>{externalScripts}</script>'
- if render_latex:
- js += """\
-
-
- """
- def template_response(*args, **kwargs):
- res = GradioTemplateResponseOriginal(*args, **kwargs)
- res.body = res.body.replace(b'