How to Get BlueSoleil Activation Serial Number for Free
-
Do you want to connect your Bluetooth devices with your computer without any hassle? Do you want to transfer data, make calls, listen to music, and more using Bluetooth technology? If yes, then you need BlueSoleil.
BlueSoleil is a Bluetooth software that allows you to connect your devices with your computer
-
BlueSoleil is a software that enables your computer to communicate with Bluetooth devices such as phones, headsets, keyboards, mice, printers, cameras, etc. It supports various Bluetooth protocols and profiles, such as A2DP, AVRCP, BIP, DUN, FTP, HID, OPP, PAN, PBAP, SPP, etc. It also supports multiple Bluetooth dongles and adapters.
-
You need BlueSoleil to transfer data, make calls, listen to music, and more
-
With BlueSoleil, you can do many things with your Bluetooth devices. For example:
-
-
You can transfer files between your computer and your phone or other devices wirelessly.
-
You can make or receive phone calls using your computer's microphone and speakers.
-
You can listen to music from your phone or other devices using your computer's speakers or headphones.
-
You can sync your contacts and messages with your phone or other devices.
-
You can print documents or photos from your phone or other devices using a Bluetooth printer.
-
You can control your computer remotely using a Bluetooth keyboard or mouse.
-
You can share your internet connection with your phone or other devices using a Bluetooth network.
-
-
And much more!
-
How to get BlueSoleil activation serial number for free?
-
BlueSoleil is not a free software. You need to buy a license key to activate it and use all its features. However, if you don't want to spend money on it, there are some ways to get BlueSoleil activation serial number for free. Here are two methods that you can try:
-
There are two ways to get BlueSoleil activation serial number for free: online and offline
-
Online method: use a website that provides serial numbers for free
-
This method is very simple and fast. All you need to do is find a website that offers serial numbers for various software for free. For example:
-
On these websites, you can find serial numbers for different versions of BlueSoleil. Just copy one of them and paste it when you install or activate BlueSoleil on your computer.
-
Offline method: use a crack file that bypasses the activation process
-
This method is more complicated and slow. You need to download a crack file that modifies the original BlueSoleil program and makes it think that it is already activated. For example:
On these websites, you can find crack files for different versions of BlueSoleil. Just download one of them and follow the instructions on how to run it on your computer.
-
What are the advantages and disadvantages of each method?
-
Online method: easy, fast, but risky
-
Advantages: you don't need to download anything, just copy and paste the serial number
-
This method is very convenient and quick. You don't have to waste time or space on downloading any files. You just have to visit a website and copy a serial number.
-
Disadvantages: you may get a fake or invalid serial number, or expose your computer to malware or viruses
-
This method is also very risky and unreliable. You may not get a valid serial number that works for your version of BlueSoleil. You may also get a fake serial number that contains malware or viruses that can harm your computer. You may also violate the terms of service of BlueSoleil by using an illegal serial number.
-
Offline method: complicated, slow, but safe
-
Advantages: you don't need to rely on any website, just download and run the crack file
-
This method is more secure and reliable. You don't have to depend on any website that may be shady or fraudulent. You just have to download and run the crack file that modifies the original program and bypasses the activation process.
-
Disadvantages: you may need to disable your antivirus software, or face compatibility issues with your system
-
This method is also more complex and slow. You may have to disable your antivirus software to avoid false alarms or interference with the crack file. You may also face compatibility issues with your system, such as Windows version, architecture, or language. You may also violate the terms of service of BlueSoleil by using an illegal crack file.
-
How to use BlueSoleil after getting the activation serial number?
-
Install BlueSoleil on your computer and enter the serial number when prompted
-
To use BlueSoleil, you first need to install it on your computer. You can download the latest version of BlueSoleil from its official website http://www.bluesoleil.com/
-
Choose the version that matches your system and download it. Then run the setup file and follow the instructions to install it. When you are asked to enter the serial number, paste the one that you got from the online or offline method.
-
Connect your Bluetooth devices with your computer using BlueSoleil interface
-
After installing BlueSoleil, you can use its interface to connect your Bluetooth devices with your computer. You can find the BlueSoleil icon on your desktop or taskbar. Click on it to open the main window. You will see a red ball that represents your computer and some white balls that represent the Bluetooth devices in range. You can also see some service buttons at the top of the window that indicate the supported Bluetooth functions.
-
To connect a device, you need to do the following steps:
-
-
Double-click on a white ball to search for its supported services. The service buttons that match the device will be highlighted.
-
Click on a service button that you want to use. For example, if you want to transfer files, click on the file transfer button.
-
A service window will pop up. Follow the instructions on the window to complete the connection and operation. For example, if you want to transfer files, you can browse and select the files on your computer or device and drag and drop them to transfer.
-
-
You can also right-click on a white ball to access more options, such as pairing, refreshing, disconnecting, etc.
-
Enjoy the features and functions of BlueSoleil
-
Once you have connected your Bluetooth devices with your computer using BlueSoleil, you can enjoy the features and functions of BlueSoleil. You can transfer data, make calls, listen to music, and more using Bluetooth technology. You can also manage your connections and devices using BlueSoleil interface. You can view the connection status, battery level, signal strength, etc. of your devices. You can also customize your settings, such as changing your Bluetooth name, enabling or disabling security mode, setting auto-connect options, etc.
-
Conclusion
-
BlueSoleil is a useful software that lets you connect your Bluetooth devices with your computer. You can get BlueSoleil activation serial number for free by using online or offline methods. Each method has its pros and cons, so choose wisely according to your needs and preferences. After getting the serial number, you can install BlueSoleil on your computer and enter it when prompted. Then you can connect your Bluetooth devices with your computer using BlueSoleil interface and enjoy its features and functions.
-
FAQs
-
-
Q: What are the system requirements for BlueSoleil?
-
A: According to its official website , BlueSoleil supports Windows XP/Vista/7/8/10 (32-bit and 64-bit). It also requires a Bluetooth dongle or adapter that supports Bluetooth 2.0 or above.
-
Q: How can I check if my Bluetooth dongle or adapter is compatible with BlueSoleil?
-
A: You can check the compatibility list on its official website , or contact its customer service for more information.
-
Q: How can I update BlueSoleil to the latest version?
-
A: You can download the latest version of BlueSoleil from its official website , or use its built-in update function.
-
Q: How can I get technical support for BlueSoleil?
-
A: You can visit its official website and access its online help center, forum, FAQ section, or contact its customer service via email or phone.
-
Q: How can I uninstall BlueSoleil from my computer?
-
A: You can uninstall BlueSoleil from your computer by using its built-in uninstall function, or by using Windows Control Panel.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chess Position Trainer 5 Crack A Comprehensive Review of the Features and Benefits of this Chess Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chess Position Trainer 5 Crack A Comprehensive Review of the Features and Benefits of this Chess Software.md
deleted file mode 100644
index 799c26450c096e132186a59c34961b006947daa8..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chess Position Trainer 5 Crack A Comprehensive Review of the Features and Benefits of this Chess Software.md
+++ /dev/null
@@ -1,110 +0,0 @@
-
-
Chess Position Trainer 5 Crack: How to Download and Use It
-
If you are a chess enthusiast who wants to improve your game, you might have heard of Chess Position Trainer 5, a professional chess software that helps you build your own repertoire, train your openings, tactics and endgames, and analyze your games and positions with a powerful engine. But what if you don't want to pay for the software? Is there a way to get it for free? In this article, we will show you how to download and use Chess Position Trainer 5 crack, as well as the risks and drawbacks of doing so.
-
What is Chess Position Trainer 5?
-
Chess Position Trainer 5 is a chess training software that was developed by chesspositiontrainer.com in 2019. It is designed to help chess players of all levels create, manage and study their chess opening repertoire. It also offers features such as flash-card based training, blindfold mode, position explorer, engine analysis, game import/export, statistics, and more. Chess Position Trainer 5 claims to be a major time saver for your daily work and make it fun to train your chess openings. It supports multiple languages such as English, Deutsch, Français, Nederlands, Italiano, Português.
Some of the features and benefits of Chess Position Trainer 5 are:
-
-
It has an innovative repertoire concept that allows you to create topics for managing endgame, middle-game and key positions, as well as creating new variations and detecting transpositions across openings.
-
It has a rich formatting of position comments that lets you add diagrams, colors, fonts, symbols, links, etc. to your notes.
-
It has an automatic re-ordering of candidate moves that helps you prioritize the most important moves in each position.
-
It has a flash-card based training concept that tests your memory and understanding of your repertoire using spaced repetition.
-
It has a blindfold mode that challenges you to visualize the board without seeing it.
-
It has a position explorer that shows you all possible moves and statistics for each position in your repertoire.
-
It has a min-max calculation that shows you the best and worst score for each move in your repertoire.
-
It has a UCI engine support that allows you to analyze your positions and games with a powerful chess engine such as Stockfish or Komodo.
-
It has a game import/export feature that lets you import games from PGN files or databases, or export games to PGN files or websites.
-
It has a statistic module that shows you various data about your repertoire such as coverage, performance, progress, etc.
-
-
How to download Chess Position Trainer 5 for free
-
If you want to download Chess Position Trainer 5 for free, you will need to find a crack file that bypasses the software's protection and activation system. A crack file is a modified version of the original software file that allows you to use it without paying for it. However, finding a reliable source for downloading Chess Position Trainer 5 crack can be tricky and risky. Here are some steps you can follow:
-
Step 1: Find a reliable source
-
The first step is to find a website that offers Chess Position Trainer 5 crack for download. You can use search engines such as Google or Bing to look for keywords such as "Chess Position Trainer 5 crack", "Chess Position Trainer 5 free download", "Chess Position Trainer 5 torrent", etc. However, be careful not to click on any suspicious or malicious links that might harm your computer or steal your personal information. You can use tools such as VirusTotal or Web of Trust to check the reputation and safety of the websites before visiting them.
-
Step 2: Download the setup file and the crack file
-
The second step is to download the setup file and the crack file from the website you have chosen. The setup file is usually an executable file (.exe) that installs the software on your computer. The crack file is usually a DLL file (.dll) that replaces the original file in the software's folder. Sometimes, the crack file might also come with instructions on how to apply it. Make sure you scan both files with an antivirus program before opening them.
-
Step 3: Install the software and apply the crack
-
The third step is to install the software and apply the crack on your computer. To install the software, run the setup file and follow the instructions on the screen. To apply the crack, copy the crack file and paste it into the software's folder (usually located in C:\Program Files\ChessPositionTrainer). If prompted, overwrite or replace the existing file. You might also need to disable your antivirus program or firewall temporarily while applying the crack.
-
How to use Chess Position Trainer 5 to improve your chess skills
-
If you have successfully installed and cracked Chess Position Trainer 5 on your computer, you can start using it to improve your chess skills. Here are some steps you can follow:
-
-
Step 1: Create and manage your own repertoire
-
The first step is to create and manage your own repertoire using Chess Position Trainer 5's innovative repertoire concept. You can create topics for different openings, endgames, middle-games or key positions that you want to study. You can also create new variations or edit existing ones using the position editor. You can add comments, diagrams, symbols or links to each position using the rich formatting feature. You can also reorder candidate moves according to their importance using the automatic re-ordering feature.
-
Step 2: Train your openings, tactics and endgames with flash cards
-
The second step is to train your openings, tactics and endgames with flash cards using Chess Position Trainer 5's flash-card based training concept. You can select which topics or positions you want to train from your repertoire using various criteria such as scheduled only, random, photo reading or systematic. You can also choose which side (white or black) you want to play from each position. You will be shown a position on the board and asked to enter the best move or moves according to your repertoire. You will get immediate feedback on whether you are correct or not. You can also review your mistakes or repeat difficult positions later.
-
Step 3: Analyze your games and positions with a powerful engine
-
The third step is to analyze your games and positions with a powerful engine using Chess Position Trainer 5's UCI engine support feature. You can import games from PGN files or databases into Chess Position Trainer 5 using the game import feature. You can also export games from Chess Position Trainer 5 into PGN files or websites using the game export feature. You can then analyze each move or position in your games using a UCI engine such as Stockfish or Komodo. You can see various information such as evaluation score, best move, depth Continuing the article:
Risks and drawbacks of using Chess Position Trainer 5 crack
-
While using Chess Position Trainer 5 crack might seem like a good idea to save money and enjoy the software's features, it also comes with many risks and drawbacks that you should be aware of. Here are some of them:
-
Legal issues and ethical concerns
-
Using Chess Position Trainer 5 crack is illegal, as it violates the software's copyright law. You are essentially stealing from the software developers who spent time and money creating the product. You could face fines of up to $150,000 and up to five years in prison for each instance of using or distributing cracked software. You could also damage your reputation and credibility as a chess player or professional if you are caught using pirated software. Moreover, using Chess Position Trainer 5 crack is unethical, as it shows a lack of respect and appreciation for the work of the software creators. You are also depriving them of their rightful income and discouraging them from developing more quality products in the future.
-
Potential malware and viruses
-
Using Chess Position Trainer 5 crack is dangerous, as it exposes your computer to potential malware and viruses. The websites that offer cracked software are often shady and untrustworthy, and they might infect your computer with malicious programs that can harm your system, steal your personal information, or display unwanted ads. The crack files themselves might also contain hidden malware or viruses that can compromise your security and privacy. Even if you scan the files with an antivirus program, you might not detect all the threats, as some malware can evade detection or disable your antivirus program. You could end up losing your data, money, or identity by using cracked software.
-
Lack of updates and support
-
Using Chess Position Trainer 5 crack is inconvenient, as it deprives you of updates and support from the software developers. The cracked software might not work properly or have bugs or errors that affect its functionality. You might not be able to access some features or options that are available in the original software. You might also miss out on new features or improvements that are added to the software through regular updates. Moreover, you will not be able to contact the software developers for any questions or issues that you encounter while using the software. You will have to rely on yourself or other users of cracked software for troubleshooting, which might not be effective or reliable.
-
Conclusion
-
Chess Position Trainer 5 is a professional chess software that can help you improve your chess skills by creating, managing and studying your chess opening repertoire. It also offers other features such as flash-card based training, blindfold mode, position explorer, engine analysis, game import/export, statistics, and more. However, using Chess Position Trainer 5 crack is not a wise decision, as it involves many risks and drawbacks such as legal issues, ethical concerns, potential malware and viruses, lack of updates and support. You should respect the software developers' rights and efforts and purchase the original software instead of using cracked software.
-
FAQs
-
-
Q: How much does Chess Position Trainer 5 cost?
-
A: Chess Position Trainer 5 costs $39.95 for the Pro version and $29.95 for the Standard version. You can also download a free version with limited features from the official website.
-
Q: How can I purchase Chess Position Trainer 5?
-
A: You can purchase Chess Position Trainer 5 online through PayPal or credit card from the official website. You will receive an email with a download link and a license key after completing the payment.
-
Q: How can I update Chess Position Trainer 5?
-
A: You can update Chess Position Trainer 5 by downloading the latest version from the official website and installing it over your existing version. You don't need to uninstall your previous version or enter your license key again.
-
Q: How can I contact Chess Position Trainer 5 support?
-
A: You can contact Chess Position Trainer 5 support by sending an email to support@chesspositiontrainer.com or by filling out a contact form on the official website. You can also visit the blog or forum sections on the website for more information and tips.
-
Q: How can I learn more about Chess Position Trainer 5?
-
A: You can learn more about Chess Position Trainer 5 by visiting the official website where you can find features, screenshots, videos, manuals, testimonials, FAQs, and more.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download IVONA Text to Speech 1.6.63 with crack (All voices) and create amazing audio content.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download IVONA Text to Speech 1.6.63 with crack (All voices) and create amazing audio content.md
deleted file mode 100644
index c0e53464be992a8a42a82464caeb44a698e95141..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download IVONA Text to Speech 1.6.63 with crack (All voices) and create amazing audio content.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-
IVONA Text to Speech 1.6.63 with crack (All voices) download
-
Do you want to convert any text into natural and expressive speech? Do you want to enjoy a wide range of voices and languages for free? Do you want to download and install a high-quality text-to-speech software in a few simple steps? If you answered yes to any of these questions, then you are in luck! In this article, I will show you how to download and install IVONA Text to Speech 1.6.63 with crack (All voices), a powerful and versatile text-to-speech software that can transform any text into lifelike speech.
-
What is IVONA Text to Speech?
-
IVONA Text to Speech is a speech synthesizer that can convert any text into speech (TTS). It uses advanced technology that allows it to create digital voices that speak various languages. IVONA voices are recognized for their natural and expressive sound, which makes them suitable for various purposes such as education, entertainment, accessibility, and communication.
-
IVONA Text to Speech 1.6.63 with crack (All voices) download
Some of the features and benefits of IVONA Text to Speech are:
-
-
It supports 23 languages and 51 voices, including American English, British English, Australian English, Indian English, Spanish, French, German, Italian, Polish, Romanian, Russian, Turkish, Welsh, and more.
-
It can read any text from the clipboard, a file, or a web page.
-
It can save the speech output as an audio file in MP3 or WAV format.
-
It can adjust the speed, pitch, volume, and pronunciation of the speech output.
-
It can work with any application that supports SAPI5 interface, such as Microsoft Word, PowerPoint, Outlook, Adobe Reader, etc.
-
It can be used offline without an internet connection.
-
It is easy to install and use.
-
-
How to download IVONA Text to Speech 1.6.63 with crack (All voices)?
-
If you want to enjoy all the features and benefits of IVONA Text to Speech without paying anything, you need to download and install IVONA Text to Speech 1.6.63 with crack (All voices). This is a cracked version of the original software that allows you to use all the languages and voices for free.
-
To download and install IVONA Text to Speech 1.6.63 with crack (All voices), you need to follow these steps:
-
Download links for IVONA Text to Speech 1.6.63 with crack (All voices)
-
The first step is to download the files you need from the internet. You need two types of files: the IVONA Text-to-Speech HQ app and the voice files.
-
The IVONA Text-to-Speech HQ app is the main application that manages the voice and language settings and allows you to convert text to speech using different modes.
-
-
The voice files are the files that contain the data for each voice and language you want to use.
-
You can find the download links for both types of files below:
-
-
Voice
Language
Download link
-
Amy
UK English
-
Carmen
Romanian
-
Céline
French
-
Conchita
Spanish
-
Dóra
Icelandic
-
Giorgio
Italian
-
Gwyneth
Welsh
-
Gwyneth
Welsh English
-
Kendra
US English
-
Maja
Polish
-
Marlene
German
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Be a Pro - Football Real Teams Real Players Real Action.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Be a Pro - Football Real Teams Real Players Real Action.md
deleted file mode 100644
index f0ccd0d7b41e0fdae8da509d7cae93e2476df190..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Be a Pro - Football Real Teams Real Players Real Action.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
Be a Pro - Football: How to Download and Play the Best Online Football Game
-
If you are a fan of football, you must have heard of Be a Pro - Football, the most thrilling football game on your mobile device. This game lets you join realistic fast-paced real-time PVP matches with your own dream team. You can face online players from all over the world and compete for the glory of being the best football club. But how can you download and play this amazing game on your Android device? In this article, we will show you how to do that in simple steps.
-
What is Be a Pro - Football?
-
Be a Pro - Football is an online football game developed by VPlay Interactive Private Limited. It is available for free on Google Play Store, but you can also download the apk and obb files from other sources . The apk file is the application package that contains the game's code and resources, while the obb file is the expansion file that contains the game's data, such as graphics, sounds, and videos.
Be a Pro - Football has many features that make it stand out from other football games. Here are some of them:
-
-
REAL-TIME 11V11 ONLINE PVP: Face against online players with your own dream team. Sharpen your skills and face the most powerful clubs in the world.
-
FAST-PACED THRILLING MATCH EXPERIENCE: Capturing the true spirit of football. Thrill attacking and defending in fast-paced matches. The most exciting football match experience on your mobile device.
-
SMOOTH MOTIONS & REALISTIC GRAPHICS: Realistic dribble, tackle, shooting, passing with Full 3D motion capture. High-precision 3D player modeling.
-
LEVEL UP YOUR TEAM TO WIN THE GLORY: Upgrading your players to win the champions. Explore the transfer market to strengthen your team.
-
MASTER YOUR SQUAD TO CONQUER THE IDLE LEAGUE: Optimize your line-up & tactics to defeat diversified opponents. Victories to behold even when you are offline.
-
-
How to download Be a Pro - Football apk and obb files
-
To download Be a Pro - Football apk and obb files, you need to find a reliable source that offers them for free. You can use APKCombo or mob.org as examples. These websites allow you to download the latest version of Be a Pro - Football apk and obb files in one package. You just need to click on the download button and wait for the file to be downloaded to your device.
-
How to install Be a Pro - Football on your Android device
-
After downloading Be a Pro - Football apk and obb files, you need to follow these steps to install them on your Android device:
-
Step 1: Enable unknown sources
-
Before installing any apk file from outside the Google Play Store, you need to enable unknown sources on your device. This will allow you to install apps from other sources than the official store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may see a warning message, but just tap OK to proceed.
-
Step 2: Locate and extract the obb file
-
The obb file is a zip file that contains the game's data. You need to extract it to a specific folder on your device. To do this, you need a file manager app that can handle zip files, such as ES File Explorer. Open the file manager app and locate the downloaded Be a Pro - Football zip file. Tap on it and choose Extract. You will see a folder named com.vplayinteractive.bap. This is the obb folder that you need to move to the Android/obb folder on your device. You can use the Cut and Paste commands to do this.
-
Step 3: Install the apk file
-
After moving the obb folder, you can install the apk file. Go back to the file manager app and locate the downloaded Be a Pro - Football apk file. Tap on it and choose Install. You may see a prompt asking you to confirm the installation, just tap Install again. Wait for the installation to finish and then tap Open to launch the game.
-
-
How to play Be a Pro - Football online
-
Now that you have installed Be a Pro - Football on your device, you can start playing it online with other players. Here are some tips on how to play the game:
-
Choose your team and players
-
When you launch the game, you will be asked to choose your team name, logo, and kit. You can also customize your players' names, appearances, and attributes. You can unlock more options as you progress in the game. You can also buy coins and gems with real money to get more items and features.
-
Join real-time 11v11 matches
-
The main mode of Be a Pro - Football is the real-time 11v11 online PVP mode. In this mode, you can join matches with other players from around the world and compete for rankings and rewards. You can choose from different leagues and tournaments, such as Champions League, Europa League, World Cup, and more. You can also create or join clubs with your friends and play together.
-
Upgrade your skills and tactics
-
To win matches, you need to upgrade your skills and tactics. You can train your players to improve their attributes, such as speed, strength, stamina, shooting, passing, dribbling, tackling, and more. You can also change your formation, strategy, and style of play according to your opponents and situations. You can use different skills and items during matches, such as sprint, slide tackle, cross, shoot, pass, dribble, header, volley, chip shot, curve shot, free kick, corner kick, penalty kick, and more.
-
Why you should play Be a Pro - Football
-
Be a Pro - Football is not just another football game. It is the best online football game that you can play on your mobile device. Here are some reasons why you should play it:
-
Realistic graphics and animations
-
Be a Pro - Football uses full 3D motion capture technology to create realistic graphics and animations for the players and stadiums. You will feel like you are watching a real football match on your screen. The game also has dynamic weather effects, such as rain, snow, fog, wind, and more.
-
Thrilling and fast-paced gameplay
-
Be a Pro - Football captures the true spirit of football with its thrilling and fast-paced gameplay. You will experience the excitement of attacking and defending in real-time matches with other players. You will need to use your skills and tactics to score goals and prevent goals from your opponents.
-
Competitive and diverse modes
-
Be a Pro - Football offers competitive and diverse modes for you to enjoy. You can play in different leagues and tournaments with different rules and rewards. You can also play in idle mode where you can earn coins and gems even when you are offline. You can also challenge yourself in various missions and achievements.
-
Conclusion
-
Be a Pro - Football is the best online football game that you can play on your mobile device. It has realistic graphics and animations, thrilling and fast-paced gameplay, competitive and diverse modes, and many more features that will keep you hooked for hours. If you want to download and play this amazing game on your Android device, just follow the steps we have shown you in this article. You will need to download the apk and obb files from a reliable source , enable unknown sources on your device, extract the obb file[^5 to the Android/obb folder, and install the apk file. You will then be able to enjoy the best online football game on your mobile device. Have fun and good luck!
-
FAQs
-
Here are some frequently asked questions about Be a Pro - Football:
-
Q: How can I get more coins and gems in Be a Pro - Football?
-
A: You can get more coins and gems by playing matches, completing missions, achievements, and daily tasks, participating in events and tournaments, watching ads, and buying them with real money.
-
Q: How can I change my team name, logo, and kit in Be a Pro - Football?
-
A: You can change your team name, logo, and kit by tapping on the Team button on the main screen. You can then edit your team details and customize your team appearance.
-
Q: How can I join or create a club in Be a Pro - Football?
-
A: You can join or create a club by tapping on the Club button on the main screen. You can then search for an existing club or create your own club. You can invite your friends to join your club and play together.
-
Q: How can I chat with other players in Be a Pro - Football?
-
A: You can chat with other players by tapping on the Chat button on the main screen. You can then choose from different chat channels, such as Global, Club, Match, and Private. You can also use emojis and stickers to express yourself.
-
Q: How can I contact the support team of Be a Pro - Football?
-
A: You can contact the support team of Be a Pro - Football by tapping on the Settings button on the main screen. You can then choose from different options, such as FAQ, Feedback, Report, and Contact Us. You can also email them at support@vplayinteractive.com.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator Ultimate MOD APK 1.5.3 Drive Your Way to Success with Unlimited Money and More.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator Ultimate MOD APK 1.5.3 Drive Your Way to Success with Unlimited Money and More.md
deleted file mode 100644
index bd30d97a13ba3baf9a3750fbd8974d6891f2c21b..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator Ultimate MOD APK 1.5.3 Drive Your Way to Success with Unlimited Money and More.md
+++ /dev/null
@@ -1,81 +0,0 @@
-
-
Bus Simulator Ultimate 1.5.3 Mod APK: A Review
-
Do you love driving buses and exploring different cities? Do you want to start your own bus company and become a successful entrepreneur? If yes, then you should try Bus Simulator Ultimate, a bus driving simulation game with a unique story released by Zuuks Games. Not just a normal driving game, with this game, you will experience the feeling of starting a business, managing your staff, expanding your routes, and competing with other players online. And if you want to enjoy the game without any limitations, you can download Bus Simulator Ultimate Mod APK, which gives you unlimited money, gold, and access to all the buses and skins in the game. In this article, we will review Bus Simulator Ultimate and its mod version, and show you how to download and install it on your device.
-
What is Bus Simulator Ultimate?
-
Bus Simulator Ultimate is a realistic bus driving simulation game that lets you drive various types of buses across different countries and cities. You can choose from over 25 different buses, each with their own features and specifications, such as speed, capacity, fuel consumption, etc. You can also customize your buses with different skins, colors, logos, stickers, etc. You can drive in different weather conditions, day and night cycles, traffic situations, road types, etc. You can also interact with your passengers, who have their own personalities and behaviors. You can hear their conversations, complaints, compliments, etc.
Bus Simulator Ultimate has many features that make it one of the best bus driving games on the market. Here are some of them:
-
Realistic bus driving experience
-
The game has realistic physics and graphics that make you feel like you are driving a real bus. You can control your bus with various options, such as steering wheel, buttons, tilt, etc. You can also use indicators, headlights, horn, wipers, etc. You can follow the traffic rules and signs, or break them if you want to. You can also experience realistic sounds of engine, brakes, horns, etc.
-
Multiple game modes and maps
-
The game has different game modes that suit your preferences and skills. You can play in career mode, where you start your own bus company and complete various missions and challenges. You can also play in free mode, where you can drive freely in any map without any restrictions. You can also play in multiplayer mode, where you can join or create online rooms and play with other players around the world. You can chat with them, race with them, cooperate with them, etc.
-
The game has over 15 maps that cover different countries and cities, such as Germany, France, Italy, Spain, USA, Brazil, Turkey, etc. Each map has its own landmarks, scenery, culture, etc. You can explore them and discover new places and routes.
-
Customizable buses and skins
-
The game has over 25 different buses that you can choose from. Each bus has its own features and specifications that affect its performance and fuel consumption. You can also customize your buses with different skins that change their appearance. You can choose from various colors, logos, stickers, etc. You can also create your own skins using the skin editor tool.
-
Online multiplayer and leaderboards
-
The game has an online multiplayer mode that lets you play with other players around the world. You can join or create online rooms and chat with other players. You can also race with them or cooperate with them in different missions and challenges. You can also check the leaderboards and see your rank and stats compared to other players.
-
Business management and simulation
-
The game has a unique feature that lets you start and run your own bus company. You can name your company, choose your logo, hire and fire your staff, buy and sell your buses, etc. You can also manage your finances, expenses, income, etc. You can also expand your routes and services to different countries and cities. You can also compete with other companies and see your reputation and popularity.
-
-
What is Bus Simulator Ultimate Mod APK?
-
Bus Simulator Ultimate Mod APK is a modified version of the original game that gives you some extra benefits and features that are not available in the official version. With this mod APK, you can enjoy the game without any limitations or restrictions. Here are some of the features of Bus Simulator Ultimate Mod APK:
-
Unlimited money and gold
-
With this mod APK, you will have unlimited money and gold in the game. You can use them to buy any bus you want, upgrade them, customize them, etc. You can also use them to hire more staff, expand your routes, etc. You don't have to worry about running out of money or gold in the game.
-
All buses and skins unlocked
-
With this mod APK, you will have access to all the buses and skins in the game. You don't have to complete any missions or challenges to unlock them. You can choose any bus you want and customize it with any skin you want. You can also create your own skins using the skin editor tool.
-
No ads and no root required
-
With this mod APK, you will not see any ads in the game. You can enjoy the game without any interruptions or distractions. You also don't need to root your device to install this mod APK. You can install it easily on any Android device without any risk of damaging it.
-
How to download and install Bus Simulator Ultimate Mod APK?
-
If you want to download and install Bus Simulator Ultimate Mod APK on your device, you need to follow these simple steps:
-
Step 1: Download the mod APK file from a trusted source
-
You need to download the mod APK file from a reliable source that provides safe and virus-free downloads. You can use this link to download the latest version of Bus Simulator Ultimate Mod APK: Bus Simulator Ultimate 1.5.3 Mod APK Download
-
Step 2: Enable unknown sources on your device settings
-
You need to enable unknown sources on your device settings to allow the installation of apps from sources other than Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Step 3: Install the mod APK file and enjoy the game
-
You need to locate the downloaded mod APK file on your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish. Once done, you can open the game and enjoy it with all the mod features.
-
Conclusion
-
Bus Simulator Ultimate is a fun and realistic bus driving simulation game that lets you drive various types of buses across different countries and cities. You can also start your own bus company and manage it as a business simulation game. You can also play online with other players and compete with them in different modes and challenges. And if you want to enjoy the game without any limitations, you can download Bus Simulator Ultimate Mod APK, which gives you unlimited money, gold, and access to all the buses and skins in the game.
-
If you liked this article, please share it with your friends who love bus driving games. And if you have any questions or feedback about Bus Simulator Ultimate or its mod version, please leave a comment below. We would love to hear from you.
- FAQs Q: Is Bus Simulator Ultimate Mod APK safe to download and install? A: Yes, Bus Simulator Ultimate Mod APK is safe to download and install on any Android device. It does not contain any viruses or malware that can harm your device or data. Q: Can I play Bus Simulator Ultimate offline? A: Yes, you can play Bus Simulator Ultimate offline in career mode or free mode. However, you need an internet connection to play online multiplayer mode. Q: How can I update Bus Simulator Ultimate Mod APK? A: To update Bus Simulator Ultimate Mod APK, you need to download the latest version of the mod APK file from the same source that you downloaded it from before. Then, you need to uninstall the previous version of the game from your device and install the new version of the mod APK file. You can also check the source website for any updates or notifications about the mod APK. Q: How can I contact the developers of Bus Simulator Ultimate? A: You can contact the developers of Bus Simulator Ultimate by visiting their official website: Zuuks Games. You can also follow them on their social media accounts, such as Facebook, Twitter, Instagram, etc. Q: What are some other bus driving games that I can play? A: Some other bus driving games that you can play are: - Bus Simulator 2021: A realistic bus driving game that lets you drive in various cities and countries, such as New York, London, Paris, Berlin, etc. You can also customize your buses and create your own routes. - Coach Bus Simulator: A bus driving game that lets you drive different types of coaches and transport passengers across Europe. You can also upgrade your coaches and hire new drivers. - World Bus Driving Simulator: A bus driving game that lets you drive in different countries and regions, such as Brazil, USA, Canada, etc. You can also choose from different bus models and customize them with various accessories. 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cubase 12 The Most Advanced and Reliable Music Production Software.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cubase 12 The Most Advanced and Reliable Music Production Software.md
deleted file mode 100644
index 46c0f0b8cbd0fca22f3475cb0ea0fdff432181d2..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cubase 12 The Most Advanced and Reliable Music Production Software.md
+++ /dev/null
@@ -1,305 +0,0 @@
-
-
Cubase 12: A Complete Guide for Beginners
-
If you are looking for powerful and versatile music production software, you might want to check out Cubase 12. Cubase is one of the most popular and trusted DAWs (Digital Audio Workstations) in the world, used by professional composers, producers, engineers, and musicians across all genres and styles. Whether you want to create your own music, record and edit audio, mix and master your tracks, or collaborate with other artists, Cubase can help you achieve your musical goals.
-
In this article, we will give you a complete overview of Cubase 12, its main features and benefits, and how to get started with it. We will also show you some tips and tricks on how to use some of the new and improved tools in Cubase 12, such as MIDI Remote integration, VariAudio and Scale Assistant, Sampler Track 2, MixConsole, and more. By the end of this article, you will have a better understanding of what Cubase 12 can do for you and how to use it effectively.
MIDI controllers are essential devices for any music production setup. They allow you to control various parameters and functions in your software with physical knobs, faders, buttons, pads, keys, or pedals. In Cubase 12, integrating your MIDI controller has become easier than ever. Thanks to the new concept of MIDI Remote Scripts, Cubase will automatically detect your device and map the controls accordingly. If there is no script for your device yet, you can easily create your own with the MIDI Controller Surface Editor. You can then conveniently connect controls and parameters with the Mapping Assistant.
-
How to connect and configure your MIDI controller with Cubase 12?
-
To connect your MIDI controller with Cubase 12, follow these steps:
Connect your MIDI controller to your computer via USB or MIDI cable.
-
Launch Cubase 12 and create or open a project.
-
Go to Studio > Studio Setup.
-
Click on the + button in the top left corner and select your MIDI controller from the list.
-
If there is a script available for your device, it will be automatically loaded. If not, you can create one using the MIDI Controller Surface Editor.
-
Click on Apply and OK.
-
-
How to use MIDI Remote Scripts and Mapping Assistant to customize your control surface?
-
How to record and edit MIDI data with Cubase 12?
-
To record and edit MIDI data with Cubase 12, follow these steps:
-
-
Create a MIDI track and assign it to a VST instrument or an external MIDI device.
-
Set the input and output routing for the MIDI track in the Inspector.
-
Enable the Record Enable button on the MIDI track.
-
Set the left and right locators to define the recording range.
-
Click the Record button on the Transport panel or press * on the numeric keypad to start recording.
-
Play your MIDI controller or use the On-Screen Keyboard to enter MIDI notes.
-
Click the Stop button on the Transport panel or press 0 on the numeric keypad to stop recording.
-
The recorded MIDI data will appear as a MIDI part on the MIDI track. You can double-click it to open it in the Key Editor, where you can edit the notes, velocities, lengths, and other parameters.
-
-
VariAudio and Scale Assistant
-
VariAudio is a powerful tool for editing and correcting the pitch of your vocals and monophonic instruments. It allows you to analyze, modify, and quantize the pitch of each note in an audio file, as well as adjust its formant, vibrato, and volume. You can also use VariAudio to create harmonies, melodies, or effects from your audio material. Scale Assistant is a new feature in Cubase 12 that helps you set and follow the scale of your project. It can automatically detect the scale of your audio or MIDI data, or let you choose one from a list of presets or custom scales. You can then use Scale Assistant to snap your notes to the scale, transpose them, or color-code them according to their harmonic function.
-
How to use VariAudio to edit and correct the pitch of your vocals and monophonic instruments?
-
To use VariAudio to edit and correct the pitch of your vocals and monophonic instruments, follow these steps (a short numeric sketch of the underlying semitone math follows the list):
-
-
Select an audio event that contains vocals or monophonic instruments in the Project window.
-
Open the Sample Editor by double-clicking the audio event.
-
In the Inspector, click on the VariAudio tab.
-
Click on Edit VariAudio to analyze the pitch of the audio event. The audio event will be divided into segments, each representing a note. The segments will be displayed as blocks on a piano roll grid.
-
To correct the pitch of a segment, drag it up or down until it snaps to a semitone. You can also use the Quantize Pitch slider to automatically adjust all segments to the nearest semitone.
-
To modify the pitch of a segment, drag its handles left or right to change its start or end position. You can also drag its center line up or down to bend its pitch curve.
-
To adjust the formant of a segment, use the Formant Shift slider. This will change the timbre of the segment without affecting its pitch.
-
To adjust the vibrato of a segment, use the Straighten Pitch slider. This will reduce or increase the amount of pitch variation within the segment.
-
To adjust the volume of a segment, use the Volume slider. This will change the amplitude of the segment without affecting its dynamics.
-
-
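The semitone snapping described above follows standard equal-temperament arithmetic: moving a pitch by n semitones multiplies its frequency by 2^(n/12). The short Python sketch below is only a generic illustration of that relationship, not Cubase code; the 440 Hz reference pitch and the 435 Hz example frequency are arbitrary values chosen for the demonstration.
-
```python
import math

def nearest_semitone(freq_hz, ref_hz=440.0):
    """Snap a frequency to the closest equal-tempered semitone relative to ref_hz."""
    semitones = 12 * math.log2(freq_hz / ref_hz)      # signed distance from the reference
    snapped = ref_hz * 2 ** (round(semitones) / 12)   # nearest allowed pitch
    cents = 1200 * math.log2(snapped / freq_hz)       # how far the note gets pulled
    return snapped, cents

# A vocal note sung slightly flat at 435 Hz snaps up to A4 = 440 Hz (about +20 cents).
print(nearest_semitone(435.0))
```
-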
How to use Scale Assistant to set and follow the scale of your project?
-
To use Scale Assistant to set and follow the scale of your project, follow these steps (a small code sketch of the scale-snapping idea follows the list):
-
-
Open the Key Editor by double-clicking a MIDI part or an audio event with VariAudio segments.
-
In the Inspector, click on the Scale Assistant tab.
-
To detect the scale of your data, click on Detect Scale. Cubase will analyze the notes or segments and suggest a scale that matches them. You can also choose a different scale from the drop-down menu.
-
To snap your notes or segments to the scale, enable Snap Pitch Editing. This will prevent you from moving them outside the scale. You can also use the Snap Live Input button to snap the notes that you play on your MIDI controller to the scale.
-
To transpose your notes or segments to a different key, use the Transpose slider. This will shift them up or down by semitones while keeping them in the scale.
-
To color-code your notes or segments according to their harmonic function, enable Colorize Pitch Editing. This will assign different colors to tonic, dominant, subdominant, and other degrees of the scale.
-
-
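Conceptually, "snapping to a scale" just means constraining notes to the pitch classes the chosen scale allows. The toy Python sketch below shows that idea for C major; the scale choice, the MIDI note numbers, and the tie-breaking rule are illustrative assumptions, not how Cubase implements Scale Assistant.
-
```python
# Pitch classes of C major (C D E F G A B); any other scale is just a different set.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def snap_to_scale(note, allowed=C_MAJOR):
    """Return the MIDI note closest to `note` whose pitch class is in the scale."""
    candidates = (n for n in range(note - 6, note + 7) if n % 12 in allowed)
    return min(candidates, key=lambda n: (abs(n - note), n))

# C#4 (MIDI 61) is not in C major, so it snaps to the nearest scale tone, C4 (MIDI 60).
print(snap_to_scale(61))
```
-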
Sampler Track 2
-
Sampler Track 2 is a new and improved version of the Sampler Track, a tool that allows you to create and use sampler tracks in Cubase 12. A sampler track is a track that contains a sample, which is a short audio file that can be played back at different pitches and speeds. You can use sampler tracks to create melodies, rhythms, effects, or sound design elements from any audio material. Sampler Track 2 introduces some new features and enhancements, such as Slice mode, LFOs, and Sample Library Browser.
-
How to create and use Sampler Tracks in Cubase 12?
-
To create and use Sampler Tracks in Cubase 12, follow these steps:
-
-
Right-click on an empty space in the Project window and select Add Track > Sampler Track.
-
A new sampler track will be created with an empty sampler zone. To load a sample into the sampler zone, drag and drop an audio file from the MediaBay, the Project window, or your file explorer.
-
The sample will be mapped across the entire keyboard range. You can play it with your MIDI controller or use the On-Screen Keyboard to trigger it.
-
You can adjust the pitch, volume, filter, envelope, and modulation settings of the sample in the Sampler Control section.
-
You can also add effects to the sample in the Insert Effects section.
-
To record MIDI data for the sampler track, follow the same steps as for recording MIDI data for any other track.
-
-
How to use the new Slice mode and LFOs to manipulate your samples?
-
To use the new Slice mode and LFOs to manipulate your samples, follow these steps (a toy illustration of threshold-based slicing follows the list):
-
-
In the Sampler Control section, click on the Slice button to enable Slice mode. This will divide your sample into smaller segments based on its transients or beats.
-
You can adjust the sensitivity of the slicing algorithm with the Threshold slider. You can also manually add or remove slices with the + and - buttons.
-
Each slice will be assigned to a different key on your keyboard. You can play them individually or in any order you like.
-
You can also edit each slice separately by clicking on it and adjusting its pitch, volume, filter, envelope, and modulation settings.
-
To add LFOs (Low Frequency Oscillators) to your sample or slices, click on the LFO button in the Modulation section. This will open a new window where you can choose from different waveforms, rates, depths, and destinations for your LFOs.
-
You can use LFOs to modulate various parameters of your sample or slices, such as pitch, volume, filter cutoff, pan, etc. This can create interesting rhythmic or dynamic effects.
-
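As a rough mental model of transient-based slicing, think of a sensitivity threshold applied to the amplitude envelope: lowering it produces more slice points, raising it fewer. The Python sketch below is a deliberately naive illustration of that idea only; it is not the detection algorithm Cubase uses, and the threshold, minimum gap, and test signal are made-up example values.
-
```python
import numpy as np

def naive_slice_points(signal, sr, threshold=0.3, min_gap_s=0.05):
    """Indices where the amplitude envelope first crosses `threshold` (toy transient detector)."""
    env = np.abs(signal)                                   # crude amplitude envelope
    min_gap = int(min_gap_s * sr)                          # ignore crossings closer than this
    crossings = np.flatnonzero((env[1:] >= threshold) & (env[:-1] < threshold)) + 1
    points, last = [], -min_gap
    for c in crossings:
        if c - last >= min_gap:
            points.append(int(c))
            last = c
    return points

# A silent buffer with two short bursts yields two slice points, one per burst.
sr = 44_100
sig = np.zeros(sr)
sig[1_000:1_100] = 0.9
sig[20_000:20_100] = 0.8
print(naive_slice_points(sig, sr))   # -> [1000, 20000]
```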
-
How to use the new Sample Library Browser to find and load samples?
-
To use the new Sample Library Browser to find and load samples, follow these steps:
-
In the Sampler Control section, click on the Browse button to open the Sample Library Browser.
-
The Sample Library Browser shows a list of categories and subcategories of samples that are available on your computer or online. You can also use the search field to find samples by name, keyword, or tag.
-
To preview a sample, click on the Play button next to it. You can adjust the volume and pitch of the preview with the sliders below.
-
To load a sample into the sampler zone, double-click on it or drag and drop it onto the sampler zone. The sample will replace the existing one in the sampler zone.
-
You can also load multiple samples into different slices by selecting them and dragging and dropping them onto the sampler zone. This will create a new slice for each sample and assign them to consecutive keys on your keyboard.
-
-
MixConsole
-
MixConsole is the main mixing environment in Cubase 12. It allows you to control the level, pan, solo/mute status, routing, effects, and other parameters of your audio and MIDI channels. You can also use MixConsole to create groups, sends, inserts, and busses for your channels. MixConsole features a flexible and customizable layout that lets you adjust the size, position, and visibility of different sections and elements. You can also open multiple MixConsole windows with different configurations and views. MixConsole also introduces some new and improved features in Cubase 12, such as Spectral Comparison EQ, Channel Strip modules, Snapshots, History, and more.
-
How to use the MixConsole to mix your tracks and apply effects?
-
To use the MixConsole to mix your tracks and apply effects, follow these steps:
-
-
Open the MixConsole window by pressing F3 or selecting Studio > MixConsole from the menu bar.
-
The MixConsole window shows all the channels in your project in the same order as in the track list. You can use the scroll bar or the channel overview to navigate to different channels. You can also use the Visibility tab in the left zone to show or hide individual channels or channel types.
-
To adjust the level of a channel, drag its fader up or down. To adjust the pan of a channel, drag its pan knob left or right. To solo or mute a channel, click on its S or M button. To enable or disable record for a channel, click on its R button.
-
To set up the input and output routing for a channel, click on its input or output selector in the Inspector or in the Routing section. You can choose from different options depending on the type of channel. For example, you can route an audio channel to an audio input or output device, a group channel, an FX channel, a VCA fader, etc.
-
To apply effects to a channel, you can use either inserts or sends. Inserts are effects that are applied directly to the signal of a channel. Sends are effects that are applied to a copy of the signal of a channel and then mixed with other signals in an FX channel. To add an insert effect to a channel, click on an empty slot in the Inserts section and select an effect from the list. To add a send effect to a channel, click on an empty slot in the Sends section and select an FX channel from the list. You can adjust the amount of send level with the knob next to the slot.
-
How to use the new Spectral Comparison EQ to clean up overlapping frequencies?
-
The Spectral Comparison EQ is a new feature in Cubase 12 that allows you to compare and adjust the frequency spectrum of two different channels. This can help you identify and eliminate any overlapping or clashing frequencies that might cause muddiness or interference in your mix. To use the Spectral Comparison EQ, follow these steps:
-
-
Select a channel that you want to compare with another channel. This will be the reference channel.
-
Open the Channel Settings window by clicking on the e button on the channel.
-
In the Channel Settings window, click on the EQ tab to open the Equalizer section.
-
In the Equalizer section, click on the Spectral Comparison button to enable it. This will open a new window where you can see the frequency spectrum of the reference channel and the comparison channel.
-
To select a comparison channel, click on the drop-down menu next to the Spectral Comparison button and choose another channel from the list. You can also use the arrow buttons to cycle through different channels.
-
You can adjust the opacity and color of the frequency spectrum of each channel with the sliders below. You can also switch between different views, such as Combined, Side by Side, or Overlay.
-
To adjust the EQ of the reference channel, use the EQ bands and knobs as usual. You can also use the EQ Curve Display to drag and drop points on the curve. You will see how your adjustments affect the frequency spectrum of both channels in real time.
-
To adjust the EQ of the comparison channel, click on the Swap button next to the Spectral Comparison button. This will switch the roles of the reference and comparison channels. You can then use the same steps as above to modify the EQ of the comparison channel.
-
-
How to use the Channel Strip modules to shape your sound?
-
The Channel Strip is a section in the Channel Settings window that contains various modules that can help you shape your sound. These modules include Gate, Compressor, Limiter, Envelope Shaper, Saturation, Magneto III, Standard Clipper, Tape Saturation, and Tube Saturation. You can use these modules to control the dynamics, tone, saturation, and distortion of your signal. To use the Channel Strip modules, follow these steps:
-
-
Open the Channel Settings window by clicking on the e button on a channel.
-
In the Channel Settings window, click on the Strip tab to open the Channel Strip section.
-
In the Channel Strip section, you can see a list of modules that are available for your channel. You can enable or disable each module by clicking on its power button.
-
You can also reorder or replace each module by dragging and dropping it to a different slot or clicking on its name and selecting another module from the list.
-
To adjust the settings of each module, use the knobs and buttons below its name. You can also use the Module Display to see a graphical representation of the module's parameters and curves.
-
You can also use the Presets menu to load or save presets for each module or for the entire channel strip.
-
-
Conclusion
-
Cubase 12 is a powerful and versatile music production software that offers a wide range of features and tools for creating, recording, editing, mixing, and mastering your music. Whether you are a beginner or a professional, Cubase 12 can help you achieve your musical goals with ease and efficiency. In this article, we have given you a complete overview of Cubase 12, its main features and benefits, and how to get started with it. We have also shown you some tips and tricks on how to use some of the new and improved tools in Cubase 12, such as MIDI Remote integration, VariAudio and Scale Assistant, Sampler Track 2, MixConsole, and more. We hope you have found this article useful and informative.
-
If you want to learn more about Cubase 12, you can visit the official website of Steinberg, where you can find more tutorials, guides, videos, forums, and support. You can also download a free trial version of Cubase 12 and try it for yourself. Cubase 12 is available in three editions: Pro, Artist, and Elements. Each edition has different features and prices to suit your needs and budget. You can compare the editions and buy Cubase 12 online from Steinberg's online shop or from authorized dealers.
-
Thank you for reading this article. We hope you have enjoyed it and learned something new. Now it's time for you to try Cubase 12 and unleash your creativity. Happy music making!
-
FAQs
-
What are the system requirements for Cubase 12?
-
The system requirements for Cubase 12 are as follows:
-
Operating System: Windows 10 (64-bit) or macOS Big Sur (11) / Catalina (10.15)
-
CPU: 64-bit Intel or AMD multi-core processor (Intel i5 or faster recommended)
-
RAM: 8 GB minimum (16 GB or more recommended)
-
Hard Disk Space: 30 GB free space
-
Graphics Card: graphics card with DirectX 10 and WDDM 1.1 support (Windows only)
-
Audio Interface: ASIO compatible audio hardware recommended
-
MIDI Interface: MIDI input/output device (optional)
-
Internet Connection: an internet connection is required for activation, account setup, access to the manual, and personal/product registration
-
What are the differences between Cubase Pro, Artist, and Elements?
-
Cubase Pro is the most comprehensive edition of Cubase 12, offering all the features and tools that Cubase has to offer. It is designed for professional composers, producers, engineers, and musicians who need advanced functionality and flexibility. Cubase Pro supports up to 256 physical inputs and outputs, unlimited audio, instrument, and MIDI tracks, 8 VST instruments with over 3400 sounds, 75 audio effects, 18 MIDI effects, 5.1 surround sound, full score editing and printing, advanced audio editing and processing, VariAudio 3, Audio Alignment, ARA 2 support, SpectraLayers One integration, Sampler Track 2, Scale Assistant, Global Tracks, MixConsole Snapshots, Control Room, VCA faders, Channel Strip modules, Spectral Comparison EQ, Frequency 2 EQ, SuperVision metering suite, Video Export Render, Eucon support, and more.
-
Cubase Artist is a more affordable edition of Cubase 12, offering most of the features and tools that Cubase Pro has to offer. It is designed for aspiring composers, producers, engineers, and musicians who need a reliable and versatile music production software. Cubase Artist supports up to 32 physical inputs and outputs, unlimited audio and MIDI tracks, 64 instrument tracks, 8 VST instruments with over 3400 sounds, 64 audio effects, 18 MIDI effects, full score editing and printing, basic audio editing and processing, VariAudio 3, Audio Alignment, ARA 2 support, Sampler Track 2, Scale Assistant, Global Tracks, MixConsole Snapshots, Control Room (limited), VCA faders (limited), Channel Strip modules (limited), Spectral Comparison EQ (limited), Frequency 2 EQ (limited), SuperVision metering suite (limited), Video Export Render (limited), Eucon support (limited), and more.
-
Cubase Elements is the most basic edition of Cubase 12, offering the essential features and tools that Cubase has to offer. It is designed for beginners and hobbyists who want to learn and enjoy music production. Cubase Elements supports up to 24 physical inputs and outputs, 48 audio tracks, 64 MIDI tracks, 24 instrument tracks, 3 VST instruments with over 1000 sounds, 44 audio effects, 18 MIDI effects, basic score editing and printing, basic audio editing and processing, Sampler Track 2, Scale Assistant, MixConsole Snapshots (limited), Channel Strip modules (limited), Frequency 2 EQ (limited), SuperVision metering suite (limited), Video Export Render (limited), and more.
-
As you can see, Cubase 12 offers different editions to suit your needs and budget. You can compare the editions and their features in more detail on the Steinberg website. You can also upgrade from one edition to another at any time by purchasing an update or upgrade license.
-
How much does Cubase 12 cost and where can I buy it?
-
The prices of Cubase 12 vary depending on the edition and the region. You can check the current prices on the Steinberg online shop or on authorized dealers' websites. Here are some approximate prices for reference:
-
Cubase Pro: $559.99 USD / €559 EUR / £499 GBP
-
Cubase Artist: $309.99 USD / €309 EUR / £269 GBP
-
Cubase Elements: $99.99 USD / €99 EUR / £85 GBP
-
You can buy Cubase 12 online from the Steinberg online shop or from authorized dealers. You can also buy a boxed version of Cubase 12 from some retailers. When you buy Cubase 12 online or in a box, you will receive a download access code that you can use to download the software from the Steinberg Download Assistant. Note that Cubase 12 uses the new Steinberg Licensing system: you activate your license by signing in with your Steinberg ID, so a USB-eLicenser is no longer required.
-
Where can I find more tutorials and guides for Cubase 12?
-
If you want to learn more about Cubase 12, you can find many tutorials and guides on the official website of Steinberg, where you can access the online help, the operation manual, the quick start guide, the video tutorials, the forums, and the support section. You can also find many tutorials and guides on YouTube, where you can watch videos from Steinberg's channel or from other users. You can also find many tutorials and guides on blogs, magazines, books, podcasts, or online courses that cover Cubase 12 or music production in general. Here are some examples of resources that you can check out:
-
-
[Cubase 12 Tutorials]: A series of videos from Steinberg that introduce the new features and improvements in Cubase 12.
-
[Cubase Academy]: An online course from Ask.Video that teaches you how to use Cubase 12 from beginner to advanced level.
-
[Cubase 12 Explained]: A video tutorial from Groove3 that covers the basics and essentials of Cubase 12.
-
[Cubase 12 Tips & Tricks]: A blog post from MusicTech that shares some useful tips and tricks for working with Cubase 12.
-
[Cubase 12 Review]: A magazine article from Sound On Sound that gives an in-depth review of Cubase 12 and its features.
-
-
How can I contact Steinberg for support and feedback?
-
If you have any questions, issues, or feedback regarding Cubase 12 or any other Steinberg product, you can contact Steinberg for support and feedback through various channels. You can use the following methods to get in touch with Steinberg:
-
-
[MySteinberg]: A personal online account where you can manage your licenses, products, downloads, updates, registration, and support requests.
-
[Steinberg Support]: A section on the Steinberg website where you can find answers to frequently asked questions, submit support tickets, report bugs, request features, and access manuals and documentation.
-
[Steinberg Forums]: A community platform where you can interact with other users and experts, ask questions, share tips, exchange ideas, and give feedback.
-
[Steinberg YouTube Channel]: A video platform where you can watch tutorials, guides, demos, interviews, and live streams from Steinberg.
-
[Steinberg Social Media]: A network of social media channels where you can follow Steinberg on Facebook, Twitter, Instagram, LinkedIn, and SoundCloud.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Cookie Run Kingdom APK 2022 - A Fun and Challenging Game for All Ages.md b/spaces/1phancelerku/anime-remove-background/Cookie Run Kingdom APK 2022 - A Fun and Challenging Game for All Ages.md
deleted file mode 100644
index a27018e7b9b9db3d84dbfc5ecbe5c1b399ff0cba..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Cookie Run Kingdom APK 2022 - A Fun and Challenging Game for All Ages.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
-
-
-
-
-
Cookie Run: Kingdom APK 2022 - A Sweet Adventure Game for Android
-
Introduction
-
Do you love cookies? Do you love adventure games? If you answered yes to both questions, then you will love Cookie Run: Kingdom APK 2022. This is a new game from Devsisters Corporation, the makers of the popular Cookie Run series. In this game, you can build your own cookie kingdom, meet your favorite tiny heroes, explore a vast and colorful world, and enjoy a captivating story and gameplay.
-
Cookie Run: Kingdom APK 2022 is a free-to-play game that you can download and install on your Android device. It is compatible with Android 4.4 or higher versions. You can download it from the official website or from the Google Play Store. You can also download the APK file from other sources, but make sure they are safe and reliable. To install the APK file, you need to enable the "Unknown sources" option in your device settings. Then, you can open the APK file and follow the instructions to complete the installation.
-
Build your own cookie kingdom
-
One of the main features of Cookie Run: Kingdom APK 2022 is that you can build your own cookie kingdom. You can customize your buildings and decorations to create your dream cookie land. You can also collect resources and upgrade your facilities to make your kingdom more prosperous and powerful. You can expand your territory and defend it from enemies who want to steal your cookies. You can also join forces with other players and form alliances to help each other out.
-
Meet your favorite tiny heroes: the cookies
-
Another feature of Cookie Run: Kingdom APK 2022 is that you can meet your favorite tiny heroes: the cookies. You can choose from over 200 cookie characters, each with its own voice, personality, and skills. You can form your own cookie squad and battle against other players in real-time PvP mode. You can also level up your cookies and unlock new abilities and costumes for them. You can also collect and evolve various pets that will assist you in your adventure.
-
Explore a vast and colorful world
-
A third feature of Cookie Run: Kingdom APK 2022 is that you can explore a vast and colorful world. You can discover different regions and biomes, such as forests, deserts, oceans, and more. You can encounter various enemies and bosses, such as jelly worms, cake monsters, and giant donuts. You can also complete quests and challenges to earn rewards, such as cookies, crystals, items, and more.
-
Enjoy a captivating story and gameplay
-
A fourth feature of Cookie Run: Kingdom APK 2022 is that you can enjoy a captivating story and gameplay. You can follow the epic adventure of the cookies, who are trying to restore their lost kingdom and uncover the secrets of the cookie world. You can experience a mix of RPG, strategy, and simulation genres, as you build, fight, explore, and interact with other characters. You can also enjoy the high-quality graphics and sound effects that will immerse you in the game.
-
Pros and cons of Cookie Run: Kingdom APK 2022
-
Pros
-
Some of the pros of Cookie Run: Kingdom APK 2022 are:
-
-
Free to download and play
-
High-quality graphics and sound effects
-
Fun and addictive gameplay
-
Friendly and supportive community
-
-
Cons
-
Some of the cons of Cookie Run: Kingdom APK 2022 are:
-
-
Requires internet connection
-
May contain ads and in-app purchases
-
May consume battery and storage space
-
-
Conclusion
-
In conclusion, Cookie Run: Kingdom APK 2022 is a sweet adventure game for Android that you should try out for yourself. You can build your own cookie kingdom, meet your favorite tiny heroes, explore a vast and colorful world, and enjoy a captivating story and gameplay. You can also enjoy the free-to-play feature, the high-quality graphics and sound effects, the fun and addictive gameplay, and the friendly and supportive community. However, you should also be aware of the internet connection requirement, the ads and in-app purchases possibility, and the battery and storage space consumption.
-
If you are looking for a game that will make you smile, laugh, and have fun, then Cookie Run: Kingdom APK 2022 is the game for you. Download it now and join the cookie adventure!
-
Cookie Run: Kingdom game download for android
-How to install Cookie Run: Kingdom apk on PC
-Cookie Run: Kingdom latest version update 2022
-Cookie Run: Kingdom tips and tricks for beginners
-Cookie Run: Kingdom best characters and teams
-Cookie Run: Kingdom mod apk unlimited crystals
-Cookie Run: Kingdom gameplay and review 2022
-Cookie Run: Kingdom cheats and hacks 2022
-Cookie Run: Kingdom events and rewards 2022
-Cookie Run: Kingdom guide and walkthrough 2022
-Cookie Run: Kingdom official website and support
-Cookie Run: Kingdom fan art and wallpapers
-Cookie Run: Kingdom voice actors and cast
-Cookie Run: Kingdom epic skills and animations
-Cookie Run: Kingdom story and lore 2022
-Cookie Run: Kingdom online multiplayer mode
-Cookie Run: Kingdom offline mode and data usage
-Cookie Run: Kingdom ratings and reviews 2022
-Cookie Run: Kingdom system requirements and compatibility
-Cookie Run: Kingdom bugs and issues 2022
-Cookie Run: Kingdom codes and coupons 2022
-Cookie Run: Kingdom news and updates 2022
-Cookie Run: Kingdom memes and jokes 2022
-Cookie Run: Kingdom wiki and community forum
-Cookie Run: Kingdom merchandise and products
-Cookie Run: Kingdom soundtrack and music 2022
-Cookie Run: Kingdom crossover and collaboration events
-Cookie Run: Kingdom tier list and rankings 2022
-Cookie Run: Kingdom videos and streams 2022
-Cookie Run: Kingdom challenges and quests 2022
-Cookie Run: Kingdom features and improvements 2022
-Cookie Run: Kingdom release date and launch time
-Cookie Run: Kingdom pre-registration and rewards
-Cookie Run: Kingdom size and storage space
-Cookie Run: Kingdom languages and subtitles
-Cookie Run: Kingdom genres and categories
-Cookie Run: Kingdom developer and publisher info
-Cookie Run: Kingdom alternatives and similar games
-Cookie Run: Kingdom free download link 2022
-Cookie Run: Kingdom safe and secure apk file
-
FAQs
-
Here are some frequently asked questions about Cookie Run: Kingdom APK 2022:
-
-
What are some tips and tricks for playing Cookie Run: Kingdom APK 2022?
-
Some tips and tricks for playing Cookie Run: Kingdom APK 2022 are:
-
-
Use different types of cookies for different situations. For example, use tank cookies to absorb damage, healer cookies to restore health, or DPS cookies to deal damage.
-
Upgrade your buildings and facilities regularly to increase your production, storage, defense, and income.
-
Join or create a guild to chat with other players, exchange gifts, request help, or participate in guild wars.
-
Complete the daily missions and events to earn extra rewards, such as cookies, crystals, items, and more.
-
-
How can I get more cookies and crystals in Cookie Run: Kingdom APK 2022?
-
Some ways to get more cookies and crystals in Cookie Run: Kingdom APK 2022 are:
-
-
Log in every day to claim your daily login bonus, which may include cookies and crystals.
-
Watch ads or complete surveys to earn free crystals.
-
Invite your friends to play the game and get referral rewards, such as cookies and crystals.
-
Purchase them with real money from the shop.
-
-
How can I join or create a guild in Cookie Run: Kingdom APK 2022?
-
To join or create a guild in Cookie Run: Kingdom APK 2022, you need to:
-
-
Reach level 10 in the game.
-
Tap on the guild icon on the bottom right corner of the screen.
-
Search for an existing guild that suits your preferences, or create your own guild by choosing a name, a logo, and a description.
-
Apply to join a guild or accept requests from other players who want to join your guild.
-
-
How can I contact the developers or report a bug in Cookie Run: Kingdom APK 2022?
-
To contact the developers or report a bug in Cookie Run: Kingdom APK 2022, you can:
-
-
Tap on the settings icon on the top right corner of the screen.
-
Tap on the customer service button and choose the option that best suits your issue.
-
Fill out the form with your details and your message, and attach any screenshots or videos if necessary.
-
Submit the form and wait for a response from the developers.
-
-
Is Cookie Run: Kingdom APK 2022 safe to download and install?
-
Yes, Cookie Run: Kingdom APK 2022 is safe to download and install, as long as you download it from the official website or from the Google Play Store. You can also download the APK file from other sources, but make sure they are safe and reliable. You can check the reviews and ratings of other users before downloading. You can also scan the APK file with an antivirus software before installing it. However, you should always be careful when downloading and installing any app from unknown sources, as they may contain malware or viruses that can harm your device.
-
-
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Hay Day Mod APK and Get Unlimited Coins and Diamonds.md b/spaces/1phancelerku/anime-remove-background/Download Hay Day Mod APK and Get Unlimited Coins and Diamonds.md
deleted file mode 100644
index 8d637b816bff435db12b3c8a5c8ea661bb695695..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Hay Day Mod APK and Get Unlimited Coins and Diamonds.md
+++ /dev/null
@@ -1,136 +0,0 @@
-
-
Hay Day Diamond Mod APK: A Guide for Beginners
-
Do you love farming games? Do you want to create your own dream farm and enjoy the rural life? If yes, then you should try Hay Day, one of the most popular and addictive farming simulators on mobile devices. Hay Day is a game where you can grow crops, raise animals, trade goods, and interact with other players. You can also customize your farm with various decorations and buildings. However, to do all these things, you need diamonds and coins, the two main currencies in the game. Diamonds are especially hard to come by, and they are used for speeding up processes, buying special items, and unlocking features. That's why many players look for ways to get more diamonds without spending real money. One of the ways is to use Hay Day Diamond Mod APK, a modified version of the game that gives you unlimited diamonds and coins. In this article, we will tell you everything you need to know about Hay Day Diamond Mod APK, including its features, how to download and install it, and its pros and cons. Let's get started!
Before we dive into the details of Hay Day Diamond Mod APK, let's first take a look at the original game and its features. Hay Day is a game developed by Supercell, the same company behind other popular games like Clash of Clans and Brawl Stars. Hay Day was released in 2012 and has since gained millions of fans around the world. The game is available for both Android and iOS devices, and it is free to download and play. However, it also offers in-app purchases for some items and features.
-
Hay Day is a game that simulates the life of a farmer. You start with a small piece of land and a few resources, and your goal is to turn it into a prosperous farm. You can do this by:
-
Build a Farm
-
One of the main aspects of Hay Day is building your farm. You can choose from a variety of buildings and structures to place on your land, such as barns, silos, mills, bakeries, dairy factories, sugar mills, popcorn pots, pie ovens, cake ovens, juice presses, ice cream makers, candy machines, jam makers, honey extractors, looms, sewing machines, tailors, smelters, jewelers, fishing boats, fishing nets, fishing lures, duck salons, duck traps, lobster pools, lobster tails, sushi bars, soup kitchens, salad bars, sandwich bars, barbecue grills, pizza ovens, pasta makers, and more. Each building has a different function and produces different goods. You can also upgrade your buildings to increase their capacity and efficiency. You can also decorate your farm with various items, such as fences, paths, trees, flowers, ponds, fountains, statues, benches, signs, and more. You can also change the layout of your farm anytime you want. Building and decorating your farm is a fun and creative way to express yourself and make your farm unique.
-
Crops to Harvest and Grow
-
Another important aspect of Hay Day is growing crops. You can plant various seeds on your fields, such as wheat, corn, carrots, soybeans, sugarcane, indigo, tomatoes, potatoes, strawberries, pumpkins, cotton, rice, chili peppers, cacao, coffee beans, tea leaves, and more. You can also grow fruits on your trees, such as apples, cherries, pears, plums, lemons, oranges, bananas, coconuts, olives, grapes, and more. You can also grow flowers on your bushes, such as roses, tulips, lilies, lavender, jasmine, and more. You can also grow mushrooms on your logs.
-
Growing crops is not only a source of income but also a source of ingredients for your buildings. You can use your crops to make various products, such as bread, butter, cheese, cream, milk, sugar, flour, popcorn, pies, cakes, juices, ice cream, candy, jam, honey, wool, fabric, clothes, metal bars, jewelry, fish, lobster, sushi, soup, salad, sandwich, barbecue, pizza, pasta, and more. You can also sell your crops and products to other players or to the roadside shop for coins. You can also use them to fulfill orders from the truck or the boat for coins and experience points. You can also donate them to your neighborhood or to the town visitors for rewards.
-
Growing crops is not only rewarding but also challenging. You have to water your crops regularly to make them grow faster. You also have to protect them from pests and diseases. You also have to harvest them before they wither and replant new seeds. You also have to manage your storage space and your resources wisely. Growing crops is a skill that requires patience and strategy.
-
hay day unlimited diamonds mod apk download
-hay day hack apk free diamonds and coins
-hay day mod apk latest version with diamonds
-how to get diamonds in hay day mod apk
-hay day mod apk 2023 unlimited diamonds and money
-hay day diamond generator mod apk no verification
-hay day mod apk offline with unlimited diamonds
-hay day mod apk android 1 unlimited diamonds
-hay day mod apk revdl with free diamonds
-hay day mod apk happymod with diamonds and coins
-hay day diamond cheat mod apk download
-hay day mod apk ios with unlimited diamonds
-hay day mod apk rexdl with free diamonds and money
-hay day mod apk unlimited everything diamonds and coins
-hay day diamond hack mod apk 2023
-hay day mod apk pure with unlimited diamonds
-hay day mod apk platinmods with free diamonds
-hay day mod apk 1.58.79 unlimited diamonds and money
-hay day diamond glitch mod apk download
-hay day mod apk an1 with unlimited diamonds and coins
-hay day diamond hack without human verification mod apk
-hay day mod apk for pc with free diamonds
-hay day mod apk unlimited keys and diamonds
-hay day diamond trick mod apk download
-hay day mod apk online with unlimited diamonds
-hay day diamond hack no survey no password mod apk
-hay day mod apk for iphone with free diamonds
-hay day mod apk unlimited coins and diamonds 2023
-hay day diamond hack tool v1.8 mod apk download
-hay day mod apk bluestacks with unlimited diamonds and coins
-hay day diamond hack without root mod apk
-hay day mod apk for android with free diamonds
-hay day mod apk unlimited gems and diamonds
-hay day diamond cheat engine mod apk download
-hay day mod apk facebook with unlimited diamonds and coins
-hay day diamond hack no jailbreak mod apk
-hay day mod apk for ipad with free diamonds
-hay day mod apk unlimited xp and diamonds
-hay day diamond hack generator mod apk download
-hay day mod apk google play with unlimited diamonds and coins
-
Animals
-
Another essential aspect of Hay Day is raising animals. You can keep various animals on your farm, such as chickens, cows, pigs, sheep, goats, horses, donkeys, rabbits, cats, dogs, and more. You can also keep bees in your beehives and catch fish in your fishing area. Each animal has a different function and produces different goods. You can also feed them and pet them to make them happy and productive.
-
Raising animals is not only a source of income but also a source of fun. You can use your animal products to make various products, such as eggs, bacon, ham, wool, cheese, cream, milk, honey, fish fillets, lobster tails, and more. You can also sell your animal products to other players or to the roadside shop for coins. You can also use them to fulfill orders from the truck or the boat for coins and experience points. You can also donate them to your neighborhood or to the town visitors for rewards.
-
Raising animals is not only fun but also challenging. You have to feed your animals regularly to make them produce goods. You also have to collect their goods before they expire and store them in your barn or silo. You also have to protect them from foxes and other predators. You also have to manage your storage space and your resources wisely. Raising animals is a skill that requires care and attention.
-
Places to Visit
-
Another exciting aspect of Hay Day is visiting other places. You can travel to different locations in the game world, such as the town, the mine, the valley, the derby, and more. Each place has a different purpose and offers different activities. You can also meet new characters and discover new stories.
-
Visiting places is not only a source of adventure but also a source of rewards. You can use your places to earn coins, diamonds, vouchers, boosters, expansion materials, decoration materials, and more. You can also complete tasks and quests to earn rewards. You can also participate in events and competitions to earn rewards.
-
Visiting places is not only rewarding but also challenging. You have to unlock new places by reaching certain levels or completing certain requirements. You also have to upgrade your places by spending coins or diamonds. You also have to maintain your places by repairing or cleaning them. You also have to manage your time and energy wisely. Visiting places is a skill that requires exploration and curiosity.
-
Play with Friends and Neighbors
-
Another enjoyable aspect of Hay Day is playing with friends and neighbors. You can connect with other players from around the world through various ways, such as Facebook, Game Center, Google Play Games, Supercell ID, or friend codes. You can also join or create a neighborhood with up to 30 members.
-
Playing with friends and neighbors is not only a source of socialization but also a source of cooperation. You can chat with other players and share tips and tricks. You can also trade goods and resources with other players through the roadside shop or the newspaper. You can also help other players by reviving their crops or filling their crates. You can also request or donate goods or resources from your neighborhood members.
-
Playing with friends and neighbors is not only cooperative but also competitive. You can compete with other players or neighborhoods in various ways, such as the derby, the valley, the global leaderboard, or the seasonal leaderboard. You can also earn trophies and badges for your achievements. You can also show off your farm and your products to other players.
-
Playing with friends and neighbors is a skill that requires communication and collaboration.
-
Hay Day Diamond Mod APK Features
-
Now that we have covered the original game and its features, let's move on to Hay Day Diamond Mod APK and its features. Hay Day Diamond Mod APK is a modified version of the game that gives you unlimited diamonds and coins. It also removes ads from the game. It is not an official version of the game, but it is created by third-party developers who modify the original game files.
-
Hay Day Diamond Mod APK has many features that make it different from the original game. Here are some of them:
-
Unlimited Diamonds
-
The most obvious feature of Hay Day Diamond Mod APK is unlimited diamonds. Diamonds are the premium currency in Hay Day, and they are very hard to get in the original game. You can only get them by leveling up, completing achievements, watching ads, or buying them with real money. However, in Hay Day Diamond Mod APK, you can get unlimited diamonds for free. You can use them to speed up processes, buy special items, unlock features, and more. You can also use them to buy coins, vouchers, boosters, expansion materials, decoration materials, and more. You can also use them to skip tasks and quests, or to retry events and competitions. With unlimited diamonds, you can enjoy the game without any limitations or restrictions.
-
Unlimited Coins
-
Another feature of Hay Day Diamond Mod APK is unlimited coins. Coins are the basic currency in Hay Day, and they are used for buying and selling goods and resources. You can earn coins by selling your crops and products to other players or to the roadside shop. You can also earn coins by fulfilling orders from the truck or the boat. You can also earn coins by helping other players or by completing tasks and quests. However, in Hay Day Diamond Mod APK, you can get unlimited coins for free. You can use them to buy more items and resources from the shop or from other players. You can also use them to upgrade your buildings and structures. You can also use them to expand your land and your fishing area. You can also use them to decorate your farm and your town. With unlimited coins, you can buy anything you want and need in the game.
-
No Ads
-
Another feature of Hay Day Diamond Mod APK is no ads. Ads are annoying and distracting, and they can ruin your gaming experience. In the original game, you have to watch ads to get some rewards, such as diamonds, vouchers, boosters, or expansion materials. However, in Hay Day Diamond Mod APK, you don't have to watch any ads at all. You can get all the rewards without wasting your time or data. You can also enjoy the game without any interruptions or pop-ups. With no ads, you can improve your gaming experience and focus on the fun.
-
How to Download and Install Hay Day Diamond Mod APK
-
Now that we have covered the features of Hay Day Diamond Mod APK, let's see how to download and install it on your device. Before we do that, let's first go over some requirements and precautions that you need to follow.
-
Requirements
-
To download and install Hay Day Diamond Mod APK on your device, you need to meet some minimum requirements. These are:
-
-
Your device must have Android 4.1 or higher version.
-
Your device must have at least 1 GB of RAM and 100 MB of free storage space.
-
Your device must have a stable internet connection.
-
Your device must allow installation of apps from unknown sources. To enable this option, go to Settings > Security > Unknown Sources and toggle it on.
-
You must uninstall the original Hay Day game from your device if you have it installed already.
-
-
If you meet these requirements, you are ready to download and install Hay Day Diamond Mod APK on your device.
-
Download Link
-
To download Hay Day Diamond Mod APK on your device, you need to find a reliable and safe download link for it. There are many websites that claim to offer Hay Day Diamond Mod APK for free, but not all of them are trustworthy. Some of them may contain viruses or malware that can harm your device or steal your personal information. Some of them may also provide fake or outdated versions of the game that may not work properly or may cause errors.
-
To avoid these risks, you should only download Hay Day Diamond Mod APK from a reputable source that has positive reviews and feedback from other users. One such source is [HayDayModAPK.com], a website that provides the latest version of Hay Day Diamond Mod APK for free. This website is secure and verified by Google Play Protect. It also has a user-friendly interface and fast download speed.
-
To download Hay Day Diamond Mod APK from [HayDayModAPK.com], follow these steps:
-
-
Go to [HayDayModAPK.com] using your browser.
-
Click on the Download button on the homepage.
-
Wait for the download to start automatically. If it doesn't start, click on the link that says "Click here if the download doesn't start automatically".
-
Wait for the download to finish.
-
-
Congratulations! You have successfully downloaded Hay Day Diamond Mod APK on your device.
-
Installation Process
-
To install Hay Day Diamond Mod APK on your device, follow these steps:
-
-
Locate the downloaded file in your file manager or download folder. The file name should be HayDayModAPK.apk.
-
Tap on the file to open it.
-
A pop-up window will appear asking you to install the app. Tap on Install.
-
Wait for the installation to complete.
-
A pop-up window will appear asking you to open the app. Tap on Open.
-
Allow the app to access your device's resources and permissions.
-
Wait for the game to load and update.
-
Enjoy playing Hay Day Diamond Mod APK with unlimited diamonds and coins!
-
-
Congratulations! You have successfully installed Hay Day Diamond Mod APK on your device.
-
Conclusion
-
In conclusion, Hay Day Diamond Mod APK is a modified version of Hay Day that gives you unlimited diamonds and coins. It also removes ads from the game. It is not an official version of the game, but it is created by third-party developers who modify the original game files. It has many features that make it different from the original game, such as unlimited diamonds, unlimited coins, and no ads. It also has some requirements and precautions that you need to follow, such as uninstalling the original game, enabling unknown sources, finding a reliable download link, and following the installation process.
-
Hay Day Diamond Mod APK is a great way to enjoy Hay Day without any limitations or restrictions. You can build your farm, grow crops, raise animals, visit places, play with friends and neighbors, and more. You can also buy anything you want and need in the game. You can also speed up processes, buy special items, unlock features, and more. You can also enjoy the game without any interruptions or pop-ups. Hay Day Diamond Mod APK is a fun and addictive farming simulator that you should try.
-
However, Hay Day Diamond Mod APK also has some disadvantages that you should be aware of. For example, it is not an official version of the game, so it may not be compatible with some devices or updates. It may also cause errors or glitches in the game. It may also violate the terms and conditions of the original game, so it may result in a ban or a penalty from Supercell. It may also expose your device or personal information to viruses or malware from untrusted sources. Hay Day Diamond Mod APK is a risky and unofficial version of Hay Day that you should use at your own discretion.
-
Hay Day Diamond Mod APK is a guide for beginners who want to know more about this modified version of Hay Day. We hope that this article has helped you understand what Hay Day Diamond Mod APK is, what are its features, how to download and install it, and what are its pros and cons. We hope that you have enjoyed reading this article and that you have learned something new. Thank you for your time and attention.
-
FAQs
-
Here are some frequently asked questions about Hay Day Diamond Mod APK:
-
Q: Is Hay Day Diamond Mod APK safe to use?
-
A: Hay Day Diamond Mod APK is not an official version of Hay Day, so it may not be safe to use. It may contain viruses or malware that can harm your device or steal your personal information. It may also cause errors or glitches in the game. It may also violate the terms and conditions of the original game, so it may result in a ban or a penalty from Supercell. You should only download Hay Day Diamond Mod APK from a reputable source that has positive reviews and feedback from other users. You should also scan the file with an antivirus software before installing it. You should also backup your data before using it.
-
Q: How do I update Hay Day Diamond Mod APK?
-
A: Hay Day Diamond Mod APK may not be compatible with some updates of the original game, so it may not work properly or at all after an update. You should check the website where you downloaded Hay Day Diamond Mod APK for any updates or new versions of the mod apk. You should also uninstall the old version of Hay Day Diamond Mod APK before installing the new one. You should also backup your data before updating it.
-
Q: Can I play Hay Day Diamond Mod APK online?
-
A: Hay Day Diamond Mod APK requires an internet connection to run and update, so you can play it online. However, you may not be able to play with other players who are using the original game or other versions of the mod apk. You may also face some issues or difficulties while playing online, such as lagging, crashing, or disconnecting. You may also get detected by Supercell and get banned or penalized for using a modified version of the game.
-
Q: Can I use Hay Day Diamond Mod APK on iOS devices?
-
-
Q: What are some alternatives to Hay Day Diamond Mod APK?
-
A: If you don't want to use Hay Day Diamond Mod APK or if you can't use it for some reason, you can try some alternatives that can also give you more diamonds and coins in Hay Day. Some of these alternatives are:
-
-
Using cheats and hacks: There are some websites and apps that claim to offer cheats and hacks for Hay Day, such as generators, tools, scripts, or codes. These cheats and hacks promise to give you unlimited diamonds and coins in Hay Day for free. However, these cheats and hacks are also risky and unofficial, and they may not work properly or at all. They may also contain viruses or malware that can harm your device or steal your personal information. They may also cause errors or glitches in the game. They may also violate the terms and conditions of the original game, so they may result in a ban or a penalty from Supercell. You should only use cheats and hacks from a reputable source that has positive reviews and feedback from other users. You should also scan the file with an antivirus software before using it. You should also backup your data before using it.
-
Using tips and tricks: Some websites and blogs offer tips and tricks for Hay Day, such as guides, tutorials, strategies, or secrets. These can help you earn more diamonds and coins by playing smartly and efficiently, and they improve your skills and knowledge of the game. They are neither risky nor unofficial, and they work well if you follow them correctly, but they will not give you unlimited diamonds and coins and they take some time and effort to apply. Use tips and tricks only from a reliable source with accurate and up-to-date information.
-
Using real money: The simplest and safest way to get more diamonds and coins in Hay Day is to use real money. You can buy diamonds and coins from the in-game shop using your credit card, debit card, PayPal account, Google Play account, Apple ID account, or other payment methods. You can also buy gift cards or vouchers that can be redeemed for diamonds and coins in the game. Buying diamonds and coins with real money is not risky or unofficial, and it can give you instant access to any item or feature in the game. However, buying diamonds and coins with real money can be expensive and addictive, and it may not be affordable or feasible for everyone.
-
-
These are some of the alternatives to Hay Day Diamond Mod APK that you can try if you want to get more diamonds and coins in Hay Day. However, none of them can match the convenience and benefits of Hay Day Diamond Mod APK, which gives you unlimited diamonds and coins for free. You can also enjoy the game without any ads or interruptions. Hay Day Diamond Mod APK is the best option for you if you want to have the ultimate farming experience in Hay Day.
-
-
\ No newline at end of file
diff --git a/spaces/AI-ZTH-03-23/6.AI.Dashboard.Wiki.Chat.Cognitive.HTML5/README.md b/spaces/AI-ZTH-03-23/6.AI.Dashboard.Wiki.Chat.Cognitive.HTML5/README.md
deleted file mode 100644
index 5935744814668f85a7005ea6ba4e9540b216a1fa..0000000000000000000000000000000000000000
--- a/spaces/AI-ZTH-03-23/6.AI.Dashboard.Wiki.Chat.Cognitive.HTML5/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: 6.AI.Dashboard.Wiki.Chat.Cognitive.HTML5
-emoji: 🦀
-colorFrom: gray
-colorTo: gray
-sdk: static
-pinned: false
-license: mit
-duplicated_from: awacke1/AI.Dashboard.Wiki.Chat.Cognitive.HTML5
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIFILMS/StyleGANEX/models/stylegan2/lpips/pretrained_networks.py b/spaces/AIFILMS/StyleGANEX/models/stylegan2/lpips/pretrained_networks.py
deleted file mode 100644
index 077a24419364fdb5ae2f697f73e28615adae75a7..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/models/stylegan2/lpips/pretrained_networks.py
+++ /dev/null
@@ -1,181 +0,0 @@
-from collections import namedtuple
-import torch
-from torchvision import models as tv
-from IPython import embed
-
-class squeezenet(torch.nn.Module):
- def __init__(self, requires_grad=False, pretrained=True):
- super(squeezenet, self).__init__()
- pretrained_features = tv.squeezenet1_1(pretrained=pretrained).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- self.slice6 = torch.nn.Sequential()
- self.slice7 = torch.nn.Sequential()
- self.N_slices = 7
- for x in range(2):
- self.slice1.add_module(str(x), pretrained_features[x])
- for x in range(2,5):
- self.slice2.add_module(str(x), pretrained_features[x])
- for x in range(5, 8):
- self.slice3.add_module(str(x), pretrained_features[x])
- for x in range(8, 10):
- self.slice4.add_module(str(x), pretrained_features[x])
- for x in range(10, 11):
- self.slice5.add_module(str(x), pretrained_features[x])
- for x in range(11, 12):
- self.slice6.add_module(str(x), pretrained_features[x])
- for x in range(12, 13):
- self.slice7.add_module(str(x), pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, X):
- h = self.slice1(X)
- h_relu1 = h
- h = self.slice2(h)
- h_relu2 = h
- h = self.slice3(h)
- h_relu3 = h
- h = self.slice4(h)
- h_relu4 = h
- h = self.slice5(h)
- h_relu5 = h
- h = self.slice6(h)
- h_relu6 = h
- h = self.slice7(h)
- h_relu7 = h
- vgg_outputs = namedtuple("SqueezeOutputs", ['relu1','relu2','relu3','relu4','relu5','relu6','relu7'])
- out = vgg_outputs(h_relu1,h_relu2,h_relu3,h_relu4,h_relu5,h_relu6,h_relu7)
-
- return out
-
-
-class alexnet(torch.nn.Module):
- def __init__(self, requires_grad=False, pretrained=True):
- super(alexnet, self).__init__()
- alexnet_pretrained_features = tv.alexnet(pretrained=pretrained).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- self.N_slices = 5
- for x in range(2):
- self.slice1.add_module(str(x), alexnet_pretrained_features[x])
- for x in range(2, 5):
- self.slice2.add_module(str(x), alexnet_pretrained_features[x])
- for x in range(5, 8):
- self.slice3.add_module(str(x), alexnet_pretrained_features[x])
- for x in range(8, 10):
- self.slice4.add_module(str(x), alexnet_pretrained_features[x])
- for x in range(10, 12):
- self.slice5.add_module(str(x), alexnet_pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, X):
- h = self.slice1(X)
- h_relu1 = h
- h = self.slice2(h)
- h_relu2 = h
- h = self.slice3(h)
- h_relu3 = h
- h = self.slice4(h)
- h_relu4 = h
- h = self.slice5(h)
- h_relu5 = h
- alexnet_outputs = namedtuple("AlexnetOutputs", ['relu1', 'relu2', 'relu3', 'relu4', 'relu5'])
- out = alexnet_outputs(h_relu1, h_relu2, h_relu3, h_relu4, h_relu5)
-
- return out
-
-class vgg16(torch.nn.Module):
- def __init__(self, requires_grad=False, pretrained=True):
- super(vgg16, self).__init__()
- vgg_pretrained_features = tv.vgg16(pretrained=pretrained).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- self.N_slices = 5
- for x in range(4):
- self.slice1.add_module(str(x), vgg_pretrained_features[x])
- for x in range(4, 9):
- self.slice2.add_module(str(x), vgg_pretrained_features[x])
- for x in range(9, 16):
- self.slice3.add_module(str(x), vgg_pretrained_features[x])
- for x in range(16, 23):
- self.slice4.add_module(str(x), vgg_pretrained_features[x])
- for x in range(23, 30):
- self.slice5.add_module(str(x), vgg_pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, X):
- h = self.slice1(X)
- h_relu1_2 = h
- h = self.slice2(h)
- h_relu2_2 = h
- h = self.slice3(h)
- h_relu3_3 = h
- h = self.slice4(h)
- h_relu4_3 = h
- h = self.slice5(h)
- h_relu5_3 = h
- vgg_outputs = namedtuple("VggOutputs", ['relu1_2', 'relu2_2', 'relu3_3', 'relu4_3', 'relu5_3'])
- out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3, h_relu5_3)
-
- return out
-
-
-
-class resnet(torch.nn.Module):
- def __init__(self, requires_grad=False, pretrained=True, num=18):
- super(resnet, self).__init__()
- if(num==18):
- self.net = tv.resnet18(pretrained=pretrained)
- elif(num==34):
- self.net = tv.resnet34(pretrained=pretrained)
- elif(num==50):
- self.net = tv.resnet50(pretrained=pretrained)
- elif(num==101):
- self.net = tv.resnet101(pretrained=pretrained)
- elif(num==152):
- self.net = tv.resnet152(pretrained=pretrained)
- self.N_slices = 5
-
- self.conv1 = self.net.conv1
- self.bn1 = self.net.bn1
- self.relu = self.net.relu
- self.maxpool = self.net.maxpool
- self.layer1 = self.net.layer1
- self.layer2 = self.net.layer2
- self.layer3 = self.net.layer3
- self.layer4 = self.net.layer4
-
- def forward(self, X):
- h = self.conv1(X)
- h = self.bn1(h)
- h = self.relu(h)
- h_relu1 = h
- h = self.maxpool(h)
- h = self.layer1(h)
- h_conv2 = h
- h = self.layer2(h)
- h_conv3 = h
- h = self.layer3(h)
- h_conv4 = h
- h = self.layer4(h)
- h_conv5 = h
-
- outputs = namedtuple("Outputs", ['relu1','conv2','conv3','conv4','conv5'])
- out = outputs(h_relu1, h_conv2, h_conv3, h_conv4, h_conv5)
-
- return out
diff --git a/spaces/ALSv/FSW/roop/processors/frame/face_swapper.py b/spaces/ALSv/FSW/roop/processors/frame/face_swapper.py
deleted file mode 100644
index 0f735fcbdb8de145fa227ee428bca81764c8c57e..0000000000000000000000000000000000000000
--- a/spaces/ALSv/FSW/roop/processors/frame/face_swapper.py
+++ /dev/null
@@ -1,88 +0,0 @@
-from typing import Any, List, Callable
-import cv2
-import insightface
-import threading
-
-import roop.globals
-import roop.processors.frame.core
-from roop.core import update_status
-from roop.face_analyser import get_one_face, get_many_faces
-from roop.typing import Face, Frame
-from roop.utilities import conditional_download, resolve_relative_path, is_image, is_video
-
-FACE_SWAPPER = None
-THREAD_LOCK = threading.Lock()
-NAME = 'ROOP.FACE-SWAPPER'
-
-
-def get_face_swapper() -> Any:
- global FACE_SWAPPER
-
- with THREAD_LOCK:
- if FACE_SWAPPER is None:
- model_path = resolve_relative_path('../models/inswapper_128.onnx')
- FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=roop.globals.execution_providers)
- return FACE_SWAPPER
-
-
-def pre_check() -> bool:
- download_directory_path = resolve_relative_path('../models')
- conditional_download(download_directory_path, ['https://huggingface.co/Apex-X/inswapper_128.onnx/resolve/main/inswapper_128.onnx'])
- return True
-
-
-def pre_start() -> bool:
- if not is_image(roop.globals.source_path):
- update_status('Select an image for source path.', NAME)
- return False
- elif not get_one_face(cv2.imread(roop.globals.source_path)):
- update_status('No face in source path detected.', NAME)
- return False
- if not is_image(roop.globals.target_path) and not is_video(roop.globals.target_path):
- update_status('Select an image or video for target path.', NAME)
- return False
- return True
-
-
-def post_process() -> None:
- global FACE_SWAPPER
-
- FACE_SWAPPER = None
-
-
-def swap_face(source_face: Face, target_face: Face, temp_frame: Frame) -> Frame:
- return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)
-
-
-def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
- if roop.globals.many_faces:
- many_faces = get_many_faces(temp_frame)
- if many_faces:
- for target_face in many_faces:
- temp_frame = swap_face(source_face, target_face, temp_frame)
- else:
- target_face = get_one_face(temp_frame)
- if target_face:
- temp_frame = swap_face(source_face, target_face, temp_frame)
- return temp_frame
-
-
-def process_frames(source_path: str, temp_frame_paths: List[str], update: Callable[[], None]) -> None:
- source_face = get_one_face(cv2.imread(source_path))
- for temp_frame_path in temp_frame_paths:
- temp_frame = cv2.imread(temp_frame_path)
- result = process_frame(source_face, temp_frame)
- cv2.imwrite(temp_frame_path, result)
- if update:
- update()
-
-
-def process_image(source_path: str, target_path: str, output_path: str) -> None:
- source_face = get_one_face(cv2.imread(source_path))
- target_frame = cv2.imread(target_path)
- result = process_frame(source_face, target_frame)
- cv2.imwrite(output_path, result)
-
-
-def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
- roop.processors.frame.core.process_video(source_path, temp_frame_paths, process_frames)
\ No newline at end of file
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1d152.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1d152.py
deleted file mode 100644
index 58cc73beb318e38f9ce79154a1265be1a7dba17b..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1d152.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# model settings
-model = dict(
- type='ImageClassifier',
- backbone=dict(
- type='ResNetV1d',
- depth=152,
- num_stages=4,
- out_indices=(3, ),
- style='pytorch'),
- neck=dict(type='GlobalAveragePooling'),
- head=dict(
- type='LinearClsHead',
- num_classes=1000,
- in_channels=2048,
- loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
- topk=(1, 5),
- ))
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/websearch/searchWeb.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/websearch/searchWeb.ts
deleted file mode 100644
index 42369689a102e1d01ea1f9a38d1b1a6f9a571298..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/websearch/searchWeb.ts
+++ /dev/null
@@ -1,63 +0,0 @@
-import { SERPAPI_KEY, SERPER_API_KEY } from "$env/static/private";
-
-import { getJson } from "serpapi";
-import type { GoogleParameters } from "serpapi";
-
-// Show result as JSON
-export async function searchWeb(query: string) {
- if (SERPER_API_KEY) {
- return await searchWebSerper(query);
- }
- if (SERPAPI_KEY) {
- return await searchWebSerpApi(query);
- }
- throw new Error("No Serper.dev or SerpAPI key found");
-}
-
-export async function searchWebSerper(query: string) {
- const params = {
- q: query,
- hl: "en",
- gl: "us",
- };
-
- const response = await fetch("https://google.serper.dev/search", {
- method: "POST",
- body: JSON.stringify(params),
- headers: {
- "x-api-key": SERPER_API_KEY,
- "Content-type": "application/json; charset=UTF-8",
- },
- });
-
- /* eslint-disable @typescript-eslint/no-explicit-any */
-	const data = (await response.json()) as Record<string, any>;
-
- if (!response.ok) {
- throw new Error(
- data["message"] ??
- `Serper API returned error code ${response.status} - ${response.statusText}`
- );
- }
-
- return {
- organic_results: data["organic"] ?? [],
- knowledge_graph: data["knowledgeGraph"] ?? null,
- answer_box: data["answerBox"] ?? null,
- };
-}
-
-export async function searchWebSerpApi(query: string) {
- const params = {
- q: query,
- hl: "en",
- gl: "us",
- google_domain: "google.com",
- api_key: SERPAPI_KEY,
- } satisfies GoogleParameters;
-
- // Show result as JSON
- const response = await getJson("google", params);
-
- return response;
-}
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/__init__.py b/spaces/AgentVerse/agentVerse/agentverse/environments/__init__.py
deleted file mode 100644
index 099ceb1bd0de04d8d64e0da1b117be482805a4e3..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from typing import Dict
-from agentverse.registry import Registry
-
-
-env_registry = Registry(name="EnvironmentRegistry")
-
-
-from .base import BaseEnvironment, BaseRule
-
-# from .basic import PipelineEnvironment
-from .simulation_env.basic import BasicEnvironment
-from .simulation_env.pokemon import PokemonEnvironment
-from .simulation_env.prisoner_dilemma import PrisonerDilemmaEnvironment
-from .simulation_env.sde_team import SdeTeamEnvironment
-from .simulation_env.sde_team_given_tests import SdeTeamGivenTestsEnvironment
-
-from .tasksolving_env.basic import BasicEnvironment
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/statesroundrectangle/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/statesroundrectangle/Factory.js
deleted file mode 100644
index 0250758289ae9776c24d5e26eee10e921cc6940e..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/statesroundrectangle/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import StatesRoundRectangle from './StatesRoundRectangle.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('statesRoundRectangle', function (config) {
- var gameObject = new StatesRoundRectangle(this.scene, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.UI.StatesRoundRectangle', StatesRoundRectangle);
-
-export default StatesRoundRectangle;
\ No newline at end of file
diff --git a/spaces/Aitor/CVchat/README.md b/spaces/Aitor/CVchat/README.md
deleted file mode 100644
index 0a796b4668b3ae269e9b75a256b3570257c4455a..0000000000000000000000000000000000000000
--- a/spaces/Aitor/CVchat/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: CVchat
-emoji: 🌍
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: cc-by-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Amrrs/podscript/README.md b/spaces/Amrrs/podscript/README.md
deleted file mode 100644
index f9cb8d7dc84eea3db7ba342192b88a425323f3cc..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/podscript/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Podscript
-emoji: 📚
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/models/unet2d.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/models/unet2d.md
deleted file mode 100644
index 29e8163f646c0cad427fe95b36221ce6ae02eb55..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/models/unet2d.md
+++ /dev/null
@@ -1,13 +0,0 @@
-# UNet2DModel
-
-The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet model.
-
-The abstract from the paper is:
-
-*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
-
-## UNet2DModel
-[[autodoc]] UNet2DModel
-
-## UNet2DOutput
-[[autodoc]] models.unet_2d.UNet2DOutput
\ No newline at end of file
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py
deleted file mode 100644
index 190309fd42a1b76c12c82fc1acf0511494be5ac3..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py
+++ /dev/null
@@ -1,215 +0,0 @@
-import mmcv
-import numpy as np
-import torch
-
-from ..builder import BBOX_CODERS
-from .base_bbox_coder import BaseBBoxCoder
-
-
-@BBOX_CODERS.register_module()
-class LegacyDeltaXYWHBBoxCoder(BaseBBoxCoder):
- """Legacy Delta XYWH BBox coder used in MMDet V1.x.
-
- Following the practice in R-CNN [1]_, this coder encodes bbox (x1, y1, x2,
- y2) into delta (dx, dy, dw, dh) and decodes delta (dx, dy, dw, dh)
- back to original bbox (x1, y1, x2, y2).
-
- Note:
- The main difference between :class`LegacyDeltaXYWHBBoxCoder` and
- :class:`DeltaXYWHBBoxCoder` is whether ``+ 1`` is used during width and
- height calculation. We suggest to only use this coder when testing with
- MMDet V1.x models.
-
- References:
- .. [1] https://arxiv.org/abs/1311.2524
-
- Args:
- target_means (Sequence[float]): denormalizing means of target for
- delta coordinates
- target_stds (Sequence[float]): denormalizing standard deviation of
- target for delta coordinates
- """
-
- def __init__(self,
- target_means=(0., 0., 0., 0.),
- target_stds=(1., 1., 1., 1.)):
- super(BaseBBoxCoder, self).__init__()
- self.means = target_means
- self.stds = target_stds
-
- def encode(self, bboxes, gt_bboxes):
- """Get box regression transformation deltas that can be used to
- transform the ``bboxes`` into the ``gt_bboxes``.
-
- Args:
- bboxes (torch.Tensor): source boxes, e.g., object proposals.
- gt_bboxes (torch.Tensor): target of the transformation, e.g.,
- ground-truth boxes.
-
- Returns:
- torch.Tensor: Box transformation deltas
- """
- assert bboxes.size(0) == gt_bboxes.size(0)
- assert bboxes.size(-1) == gt_bboxes.size(-1) == 4
- encoded_bboxes = legacy_bbox2delta(bboxes, gt_bboxes, self.means,
- self.stds)
- return encoded_bboxes
-
- def decode(self,
- bboxes,
- pred_bboxes,
- max_shape=None,
- wh_ratio_clip=16 / 1000):
- """Apply transformation `pred_bboxes` to `boxes`.
-
- Args:
- boxes (torch.Tensor): Basic boxes.
- pred_bboxes (torch.Tensor): Encoded boxes with shape
- max_shape (tuple[int], optional): Maximum shape of boxes.
- Defaults to None.
- wh_ratio_clip (float, optional): The allowed ratio between
- width and height.
-
- Returns:
- torch.Tensor: Decoded boxes.
- """
- assert pred_bboxes.size(0) == bboxes.size(0)
- decoded_bboxes = legacy_delta2bbox(bboxes, pred_bboxes, self.means,
- self.stds, max_shape, wh_ratio_clip)
-
- return decoded_bboxes
-
-
-@mmcv.jit(coderize=True)
-def legacy_bbox2delta(proposals,
- gt,
- means=(0., 0., 0., 0.),
- stds=(1., 1., 1., 1.)):
- """Compute deltas of proposals w.r.t. gt in the MMDet V1.x manner.
-
- We usually compute the deltas of x, y, w, h of proposals w.r.t ground
- truth bboxes to get regression target.
- This is the inverse function of `delta2bbox()`
-
- Args:
- proposals (Tensor): Boxes to be transformed, shape (N, ..., 4)
- gt (Tensor): Gt bboxes to be used as base, shape (N, ..., 4)
- means (Sequence[float]): Denormalizing means for delta coordinates
- stds (Sequence[float]): Denormalizing standard deviation for delta
- coordinates
-
- Returns:
- Tensor: deltas with shape (N, 4), where columns represent dx, dy,
- dw, dh.
- """
- assert proposals.size() == gt.size()
-
- proposals = proposals.float()
- gt = gt.float()
- px = (proposals[..., 0] + proposals[..., 2]) * 0.5
- py = (proposals[..., 1] + proposals[..., 3]) * 0.5
- pw = proposals[..., 2] - proposals[..., 0] + 1.0
- ph = proposals[..., 3] - proposals[..., 1] + 1.0
-
- gx = (gt[..., 0] + gt[..., 2]) * 0.5
- gy = (gt[..., 1] + gt[..., 3]) * 0.5
- gw = gt[..., 2] - gt[..., 0] + 1.0
- gh = gt[..., 3] - gt[..., 1] + 1.0
-
- dx = (gx - px) / pw
- dy = (gy - py) / ph
- dw = torch.log(gw / pw)
- dh = torch.log(gh / ph)
- deltas = torch.stack([dx, dy, dw, dh], dim=-1)
-
- means = deltas.new_tensor(means).unsqueeze(0)
- stds = deltas.new_tensor(stds).unsqueeze(0)
- deltas = deltas.sub_(means).div_(stds)
-
- return deltas
-
-
-@mmcv.jit(coderize=True)
-def legacy_delta2bbox(rois,
- deltas,
- means=(0., 0., 0., 0.),
- stds=(1., 1., 1., 1.),
- max_shape=None,
- wh_ratio_clip=16 / 1000):
- """Apply deltas to shift/scale base boxes in the MMDet V1.x manner.
-
- Typically the rois are anchor or proposed bounding boxes and the deltas are
- network outputs used to shift/scale those boxes.
- This is the inverse function of `bbox2delta()`
-
- Args:
- rois (Tensor): Boxes to be transformed. Has shape (N, 4)
- deltas (Tensor): Encoded offsets with respect to each roi.
- Has shape (N, 4 * num_classes). Note N = num_anchors * W * H when
- rois is a grid of anchors. Offset encoding follows [1]_.
- means (Sequence[float]): Denormalizing means for delta coordinates
- stds (Sequence[float]): Denormalizing standard deviation for delta
- coordinates
- max_shape (tuple[int, int]): Maximum bounds for boxes. specifies (H, W)
- wh_ratio_clip (float): Maximum aspect ratio for boxes.
-
- Returns:
- Tensor: Boxes with shape (N, 4), where columns represent
- tl_x, tl_y, br_x, br_y.
-
- References:
- .. [1] https://arxiv.org/abs/1311.2524
-
- Example:
- >>> rois = torch.Tensor([[ 0., 0., 1., 1.],
- >>> [ 0., 0., 1., 1.],
- >>> [ 0., 0., 1., 1.],
- >>> [ 5., 5., 5., 5.]])
- >>> deltas = torch.Tensor([[ 0., 0., 0., 0.],
- >>> [ 1., 1., 1., 1.],
- >>> [ 0., 0., 2., -1.],
- >>> [ 0.7, -1.9, -0.5, 0.3]])
- >>> legacy_delta2bbox(rois, deltas, max_shape=(32, 32))
- tensor([[0.0000, 0.0000, 1.5000, 1.5000],
- [0.0000, 0.0000, 5.2183, 5.2183],
- [0.0000, 0.1321, 7.8891, 0.8679],
- [5.3967, 2.4251, 6.0033, 3.7749]])
- """
- means = deltas.new_tensor(means).repeat(1, deltas.size(1) // 4)
- stds = deltas.new_tensor(stds).repeat(1, deltas.size(1) // 4)
- denorm_deltas = deltas * stds + means
- dx = denorm_deltas[:, 0::4]
- dy = denorm_deltas[:, 1::4]
- dw = denorm_deltas[:, 2::4]
- dh = denorm_deltas[:, 3::4]
- max_ratio = np.abs(np.log(wh_ratio_clip))
- dw = dw.clamp(min=-max_ratio, max=max_ratio)
- dh = dh.clamp(min=-max_ratio, max=max_ratio)
- # Compute center of each roi
- px = ((rois[:, 0] + rois[:, 2]) * 0.5).unsqueeze(1).expand_as(dx)
- py = ((rois[:, 1] + rois[:, 3]) * 0.5).unsqueeze(1).expand_as(dy)
- # Compute width/height of each roi
- pw = (rois[:, 2] - rois[:, 0] + 1.0).unsqueeze(1).expand_as(dw)
- ph = (rois[:, 3] - rois[:, 1] + 1.0).unsqueeze(1).expand_as(dh)
- # Use exp(network energy) to enlarge/shrink each roi
- gw = pw * dw.exp()
- gh = ph * dh.exp()
- # Use network energy to shift the center of each roi
- gx = px + pw * dx
- gy = py + ph * dy
- # Convert center-xy/width/height to top-left, bottom-right
-
- # The true legacy box coder should +- 0.5 here.
- # However, current implementation improves the performance when testing
- # the models trained in MMDetection 1.X (~0.5 bbox AP, 0.2 mask AP)
- x1 = gx - gw * 0.5
- y1 = gy - gh * 0.5
- x2 = gx + gw * 0.5
- y2 = gy + gh * 0.5
- if max_shape is not None:
- x1 = x1.clamp(min=0, max=max_shape[1] - 1)
- y1 = y1.clamp(min=0, max=max_shape[0] - 1)
- x2 = x2.clamp(min=0, max=max_shape[1] - 1)
- y2 = y2.clamp(min=0, max=max_shape[0] - 1)
- bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view_as(deltas)
- return bboxes
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_80k_cityscapes.py
deleted file mode 100644
index 4f1b9e19411eb963d16fd2a8174529e69ecd5a1a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './dnl_r50-d8_769x769_80k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/README.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/README.md
deleted file mode 100644
index 6ab8d213e97d0a9a62a244ca1d14fd130a48fe23..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/README.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# text-generation-webui documentation
-
-## Table of contents
-
-* [Audio Notification](Audio-Notification.md)
-* [Chat mode](Chat-mode.md)
-* [DeepSpeed](DeepSpeed.md)
-* [Docker](Docker.md)
-* [ExLlama](ExLlama.md)
-* [Extensions](Extensions.md)
-* [GPTQ models (4 bit mode)](GPTQ-models-(4-bit-mode).md)
-* [LLaMA model](LLaMA-model.md)
-* [llama.cpp](llama.cpp.md)
-* [LoRA](LoRA.md)
-* [Low VRAM guide](Low-VRAM-guide.md)
-* [RWKV model](RWKV-model.md)
-* [Spell book](Spell-book.md)
-* [System requirements](System-requirements.md)
-* [Training LoRAs](Training-LoRAs.md)
-* [Windows installation guide](Windows-installation-guide.md)
-* [WSL installation guide](WSL-installation-guide.md)
diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/README.md b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/README.md
deleted file mode 100644
index 15b71f11baae018bc0de5caad0beb3c02fbb811e..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/README.md
+++ /dev/null
@@ -1,147 +0,0 @@
-# TFill
-[paper](https://openaccess.thecvf.com/content/CVPR2022/html/Zheng_Bridging_Global_Context_Interactions_for_High-Fidelity_Image_Completion_CVPR_2022_paper.html) | [arXiv](https://arxiv.org/abs/2104.00845) | [Project](https://chuanxiaz.com/tfill/) | [Video](https://www.youtube.com/watch?v=efB1fw0jiLs&feature=youtu.be)
-
-This repository implements the training, testing and editing tools for "Bridging Global Context Interactions for High-Fidelity Image Completion (CVPR2022, scores: 1, 1, 2, 2)" by [Chuanxia Zheng](https://www.chuanxiaz.com), [Tat-Jen Cham](https://personal.ntu.edu.sg/astjcham/), [Jianfei Cai](https://jianfei-cai.github.io/) and [Dinh Phung](https://research.monash.edu/en/persons/dinh-phung). Given masked images, the proposed **TFill** model is able to generate high-fidelity plausible results on various settings.
-
-## Examples
-
-
-## Object Removal
-
-
-## Object Repair
-
-
-## Framework
-We propose a two-stage image completion framework, where the upper content inference network (TFill-*Coarse*) generates semantically correct content using a transformer encoder to directly capture the global context information; the lower appearance refinement network (TFill-*refined*) copies global visible and generated features to holes.
-
-
-
-
-
-# Getting started
-
-- Clone this repo:
-
-```
-git clone https://github.com/lyndonzheng/TFill
-cd TFill
-```
-## Requirements
-The original model is trained and evaluated with Pytorch v1.9.1, which is no longer available among the current [PyTorch](https://pytorch.org/get-started/previous-versions/) releases. Therefore, we create a new environment with Pytorch v1.10.0 to test the model, where the performance is the same.
-
-A suitable [conda](https://conda.io/) environment named `Tfill` can be created and activated with:
-
-```
-conda env create -f environment.yaml
-conda activate TFill
-```
-## Running pretrained models
-Download the pre-trained models using the following links ([CelebA-HQ](https://drive.google.com/drive/folders/1ntbVDjJ7-nAt4nLGuu7RNi3QpLfh40gk?usp=sharing), [FFHQ](https://drive.google.com/drive/folders/1xuAsShrw9wI5Be0sQka3vZEsfwnq0pPT?usp=sharing), [ImageNet](https://drive.google.com/drive/folders/1B4RswBUD6_jXAu3MVz3LtuNfoV4wTmGf?usp=sharing), [Places2](https://drive.google.com/drive/folders/154ikacQ8A2JLC8iIGda8jiZN-ysL1xh5?usp=sharing)
-) and put them under```checkpoints/``` directory. It should have the following structure:
-
-```
-./checkpoints/
-├── celeba
-│ ├── latest_net_D.pth
-│ ├── latest_net_D_Ref.pth
-│ ├── latest_net_E.pth
-│ ├── latest_net_G.pth
-│ ├── latest_net_G_Ref.pth
-│ ├── latest_net_T.pth
-├── ffhq
-│ ├── ...
-├── ...
-```
-
-- Test the model
-```
-sh ./scripts/test.sh
-```
-For different models, users just need to modify lines 2-4, including ```name```, ```img_file``` and ```mask_file```. For instance, we can replace *celeba* with *imagenet*.
-
-The default results will be stored under the ```results/``` folder, in which:
-
-- ```examples/```: shows original and masked images;
-- ```img_out/```: shows upsampled *Coarse* outputs;
-- ```img_ref_out/```: shows the final *Refined* outputs.
-
-## Datasets
-- ```face dataset```:
- - 24,183 training images and 2,824 test images from [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) and use the algorithm of [Growing GANs](https://github.com/tkarras/progressive_growing_of_gans) to get the high-resolution CelebA-HQ dataset.
- - 60,000 training images and 10,000 test images from [FFHQ](https://github.com/NVlabs/ffhq-dataset) provided by [StyleGAN](https://github.com/NVlabs/stylegan).
-- ```natural scenery```: original training and val images from [Places2](http://places2.csail.mit.edu/).
-- ```object``` original training images from [ImageNet](http://www.image-net.org/).
-
-## Training
-
-- Train a model (two stage: *Coarse* and *Refinement*)
-```
-sh ./scripts/train.sh
-```
-The default setting is for the top *Coarse* training. The users just need to replace the *coarse* with *refine* at line 6. Then, the model can continue training for high-resolution image completion.
-More hyper-parameters can be found in ```options/```.
-
-The coarse results using the transformer and restrictive CNN are impressive, providing plausible results for both **foreground** objects and the **background** scene.
-
-
-
-
-# GUI
-The GUI operation is similar to our previous GUI in [PIC](https://github.com/lyndonzheng/Pluralistic-Inpainting), where steps are also the same.
-
-Basic usage is:
-
-```
-sh ./scripts/ui.sh
-```
-In ```gui/ui_model.py```, users can modify the ```img_root```(line 30) and the corresponding ```img_files```(line 31) to randomly edit images from the testing dataset.
-
-## Editing Examples
-
-- **Results (original, output) for face editing**
-
-
-
-- **Results (original, masked input, output) for nature scene editing**
-
-
-
-## Next
-- Higher-resolution pluralistic image completion
-
-## License
-This work is licensed under a MIT License.
-
-This software is for educational and academic research purpose only. If you wish to obtain a commercial royalty bearing license to this software, please contact us at chuanxia001@e.ntu.edu.sg.
-
-## Citation
-
-The code also uses our previous [PIC](https://github.com/lyndonzheng/Pluralistic-Inpainting). If you use this code for your research, please cite our papers.
-```
-@InProceedings{Zheng_2022_CVPR,
- author = {Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei and Phung, Dinh},
- title = {Bridging Global Context Interactions for High-Fidelity Image Completion},
- booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
- month = {June},
- year = {2022},
- pages = {11512-11522}
-}
-
-@inproceedings{zheng2019pluralistic,
- title={Pluralistic Image Completion},
- author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
- booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
- pages={1438--1447},
- year={2019}
-}
-
-@article{zheng2021pluralistic,
- title={Pluralistic Free-From Image Completion},
- author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
- journal={International Journal of Computer Vision},
- pages={1--20},
- year={2021},
- publisher={Springer}
-}
-```
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/transforms.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/transforms.py
deleted file mode 100644
index 350cbc11662633ad7f8968eb10be2e7de6e384e9..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/transforms.py
+++ /dev/null
@@ -1,234 +0,0 @@
-import numpy as np
-import cv2
-import math
-
-
-def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA):
-    """Resize the sample to ensure the given size. Keeps aspect ratio.
-
- Args:
- sample (dict): sample
- size (tuple): image size
-
- Returns:
- tuple: new size
- """
- shape = list(sample["disparity"].shape)
-
- if shape[0] >= size[0] and shape[1] >= size[1]:
- return sample
-
- scale = [0, 0]
- scale[0] = size[0] / shape[0]
- scale[1] = size[1] / shape[1]
-
- scale = max(scale)
-
- shape[0] = math.ceil(scale * shape[0])
- shape[1] = math.ceil(scale * shape[1])
-
- # resize
- sample["image"] = cv2.resize(
- sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method
- )
-
- sample["disparity"] = cv2.resize(
- sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST
- )
- sample["mask"] = cv2.resize(
- sample["mask"].astype(np.float32),
- tuple(shape[::-1]),
- interpolation=cv2.INTER_NEAREST,
- )
- sample["mask"] = sample["mask"].astype(bool)
-
- return tuple(shape)
-
-
-class Resize(object):
- """Resize sample to given size (width, height).
- """
-
- def __init__(
- self,
- width,
- height,
- resize_target=True,
- keep_aspect_ratio=False,
- ensure_multiple_of=1,
- resize_method="lower_bound",
- image_interpolation_method=cv2.INTER_AREA,
- ):
- """Init.
-
- Args:
- width (int): desired output width
- height (int): desired output height
- resize_target (bool, optional):
- True: Resize the full sample (image, mask, target).
- False: Resize image only.
- Defaults to True.
- keep_aspect_ratio (bool, optional):
- True: Keep the aspect ratio of the input sample.
- Output sample might not have the given width and height, and
- resize behaviour depends on the parameter 'resize_method'.
- Defaults to False.
- ensure_multiple_of (int, optional):
- Output width and height is constrained to be multiple of this parameter.
- Defaults to 1.
- resize_method (str, optional):
- "lower_bound": Output will be at least as large as the given size.
- "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.)
- "minimal": Scale as least as possible. (Output size might be smaller than given size.)
- Defaults to "lower_bound".
- """
- self.__width = width
- self.__height = height
-
- self.__resize_target = resize_target
- self.__keep_aspect_ratio = keep_aspect_ratio
- self.__multiple_of = ensure_multiple_of
- self.__resize_method = resize_method
- self.__image_interpolation_method = image_interpolation_method
-
- def constrain_to_multiple_of(self, x, min_val=0, max_val=None):
- y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int)
-
- if max_val is not None and y > max_val:
- y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int)
-
- if y < min_val:
- y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int)
-
- return y
-
- def get_size(self, width, height):
- # determine new height and width
- scale_height = self.__height / height
- scale_width = self.__width / width
-
- if self.__keep_aspect_ratio:
- if self.__resize_method == "lower_bound":
- # scale such that output size is lower bound
- if scale_width > scale_height:
- # fit width
- scale_height = scale_width
- else:
- # fit height
- scale_width = scale_height
- elif self.__resize_method == "upper_bound":
- # scale such that output size is upper bound
- if scale_width < scale_height:
- # fit width
- scale_height = scale_width
- else:
- # fit height
- scale_width = scale_height
- elif self.__resize_method == "minimal":
-                # scale as little as possible
- if abs(1 - scale_width) < abs(1 - scale_height):
- # fit width
- scale_height = scale_width
- else:
- # fit height
- scale_width = scale_height
- else:
- raise ValueError(
- f"resize_method {self.__resize_method} not implemented"
- )
-
- if self.__resize_method == "lower_bound":
- new_height = self.constrain_to_multiple_of(
- scale_height * height, min_val=self.__height
- )
- new_width = self.constrain_to_multiple_of(
- scale_width * width, min_val=self.__width
- )
- elif self.__resize_method == "upper_bound":
- new_height = self.constrain_to_multiple_of(
- scale_height * height, max_val=self.__height
- )
- new_width = self.constrain_to_multiple_of(
- scale_width * width, max_val=self.__width
- )
- elif self.__resize_method == "minimal":
- new_height = self.constrain_to_multiple_of(scale_height * height)
- new_width = self.constrain_to_multiple_of(scale_width * width)
- else:
- raise ValueError(f"resize_method {self.__resize_method} not implemented")
-
- return (new_width, new_height)
-
- def __call__(self, sample):
- width, height = self.get_size(
- sample["image"].shape[1], sample["image"].shape[0]
- )
-
- # resize sample
- sample["image"] = cv2.resize(
- sample["image"],
- (width, height),
- interpolation=self.__image_interpolation_method,
- )
-
- if self.__resize_target:
- if "disparity" in sample:
- sample["disparity"] = cv2.resize(
- sample["disparity"],
- (width, height),
- interpolation=cv2.INTER_NEAREST,
- )
-
- if "depth" in sample:
- sample["depth"] = cv2.resize(
- sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST
- )
-
- sample["mask"] = cv2.resize(
- sample["mask"].astype(np.float32),
- (width, height),
- interpolation=cv2.INTER_NEAREST,
- )
- sample["mask"] = sample["mask"].astype(bool)
-
- return sample
-
-
-class NormalizeImage(object):
-    """Normalize image by the given mean and std.
- """
-
- def __init__(self, mean, std):
- self.__mean = mean
- self.__std = std
-
- def __call__(self, sample):
- sample["image"] = (sample["image"] - self.__mean) / self.__std
-
- return sample
-
-
-class PrepareForNet(object):
- """Prepare sample for usage as network input.
- """
-
- def __init__(self):
- pass
-
- def __call__(self, sample):
- image = np.transpose(sample["image"], (2, 0, 1))
- sample["image"] = np.ascontiguousarray(image).astype(np.float32)
-
- if "mask" in sample:
- sample["mask"] = sample["mask"].astype(np.float32)
- sample["mask"] = np.ascontiguousarray(sample["mask"])
-
- if "disparity" in sample:
- disparity = sample["disparity"].astype(np.float32)
- sample["disparity"] = np.ascontiguousarray(disparity)
-
- if "depth" in sample:
- depth = sample["depth"].astype(np.float32)
- sample["depth"] = np.ascontiguousarray(depth)
-
- return sample
diff --git a/spaces/Apex-X/ROOPOK/roop/face_reference.py b/spaces/Apex-X/ROOPOK/roop/face_reference.py
deleted file mode 100644
index 3c3e1f1c6e13c73ceafd40c0912c066a3a86a528..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/ROOPOK/roop/face_reference.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from typing import Optional
-
-from roop.typing import Face
-
-FACE_REFERENCE = None
-
-
-def get_face_reference() -> Optional[Face]:
- return FACE_REFERENCE
-
-
-def set_face_reference(face: Face) -> None:
- global FACE_REFERENCE
-
- FACE_REFERENCE = face
-
-
-def clear_face_reference() -> None:
- global FACE_REFERENCE
-
- FACE_REFERENCE = None
diff --git a/spaces/Apex-X/Tm/roop/typing.py b/spaces/Apex-X/Tm/roop/typing.py
deleted file mode 100644
index 1cff7440616e20bfe7b8bc287f86d11bf1b0f083..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/Tm/roop/typing.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from typing import Any
-
-from insightface.app.common import Face
-import numpy
-
-Face = Face
-Frame = numpy.ndarray[Any, Any]
diff --git a/spaces/Artgor/digit-draw-detect/pages/about.py b/spaces/Artgor/digit-draw-detect/pages/about.py
deleted file mode 100644
index fa7662d26c001d73e544bf9ff4a2250e244ea7e9..0000000000000000000000000000000000000000
--- a/spaces/Artgor/digit-draw-detect/pages/about.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import streamlit as st
-
-text = """
-## A brief history of the project.
-The first idea of the project was conceived in the summer of 2017. I completed cs231n course and wanted to put my skills to the test. So I studied Flask and many other tools from scratch and made an [app](https://github.com/Erlemar/digit-draw-recognize) for recognizing handwritten digits. It had two models - a simple feed-forward neural net written in NumPy and a CNN written in Tensorflow. One fun feature of this app was online learning: the model continuously improved its predictions based on new inputs (though this did sometimes lead to incorrect predictions).
-
-In 2019 I decided to update the [project](https://github.com/Erlemar/digit-draw-predict): I trained the new neural net in PyTorch and used cv2 to detect separate digits (people often drew multiple digits). More than that, the model had 11 classes - I made a separate class for "junk", as people often drew things for fun: animals, objects, or words.
-
-The first two versions were deployed on Heroku's free plan, but in 2022 these plans were discontinued. I didn't want my project to die because of nostalgia, so I developed a new version and deployed it differently. The current version has an object detection model (yolo3 written from scratch) and 12 classes (digits, junk, and **censored**).
-If you want to know what **censored** means, just try to draw something 😉
-
-Initially, I considered deploying the app on Streamlit Cloud, but its computational limits were too low, so now the model is live on HuggingFace Spaces.
-
-### Links with additional information:
-
-* [Project page on my personal website](https://andlukyane.com/project/drawn-digits-prediction)
-* [A dataset with the digits and bounding boxes on Kaggle](https://www.kaggle.com/datasets/artgor/handwritten-digits-and-bounding-boxes)
-* [Training code](https://github.com/Erlemar/pytorch_tempest_pet_)
-* [Blogpost on my personal website](https://andlukyane.com/blog/a-third-life-of-a-personal-project)
-* [Blogpost on medium](https://towardsdatascience.com/the-third-life-of-a-personal-pet-project-for-handwritten-digit-recognition-fd908dc8e7a1)
-* [Russian blogpost on habr](https://habr.com/ru/company/ods/blog/707046/)
-* [Project code on GitHub](https://github.com/Erlemar/digit-draw-detect)
-"""
-
-st.markdown(text, unsafe_allow_html=True)
diff --git a/spaces/Asifpa6/emotion-analyzer-app/app.py b/spaces/Asifpa6/emotion-analyzer-app/app.py
deleted file mode 100644
index d31a4fe2593426fa86cd96d87873053f4da98089..0000000000000000000000000000000000000000
--- a/spaces/Asifpa6/emotion-analyzer-app/app.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import streamlit as st
-import pandas as pd
-from emotion_analysis import get_emotion
-import base64
-
-
-def read_data(file_path):
- file_extension = file_path.split('.')[-1].lower()
-
- if file_extension == 'xlsx' or file_extension == 'xls':
- data = pd.read_excel(file_path)
- elif file_extension == 'csv':
- data = pd.read_csv(file_path)
- else:
- raise ValueError("Unsupported file format. Only Excel (xlsx, xls) and CSV (csv) files are supported.")
-
- return data
-
-
-# Streamlit app
-def main():
- st.title("Text Emotion Detection")
- menu = ["Input Text", "Batch Processing"]
- option = st.sidebar.radio("Select an option", menu)
-
-
- if option == "Input Text":
- text = st.text_area("Enter your text:")
- if st.button("Submit"):
- if text.strip() != "":
- emotion_detail, confidence_score = get_emotion(text)
- st.write("Detected Emotion")
- st.write(f"{emotion_detail[0]} - {confidence_score}")
- else:
- st.write("Please enter some text.")
-
- elif option == "Batch Processing":
- uploaded_file = st.file_uploader("Upload CSV or Excel file", type=["csv", "xlsx"])
-
- if uploaded_file is not None:
- file_name = uploaded_file.name
- file_extension = file_name.split('.')[-1].lower()
- file_name = uploaded_file.name
- if file_extension == 'xlsx' or file_extension == 'xls':
- dataframe = pd.read_excel(uploaded_file)
- elif file_extension == 'csv':
- dataframe = pd.read_csv(uploaded_file)
- else:
- raise ValueError("Unsupported file format. Only Excel (xlsx, xls) and CSV (csv) files are supported.")
- # dataframe = pd.read_excel(uploaded_file)
- if "text" not in dataframe.columns:
- st.write("CSV file should have a 'text' column.")
- else:
- dataframe["emotion"], dataframe["confidence"] = zip(*dataframe["text"].map(get_emotion))
- st.write("Detected Emotions")
- st.write(dataframe)
- # Download button
- csv = dataframe.to_csv(index=False)
- b64 = base64.b64encode(csv.encode()).decode() # Convert DataFrame to CSV string
-                href = f'<a href="data:file/csv;base64,{b64}" download="emotion_results.csv">Download</a>'  # reconstructed download link; the file name is illustrative
- st.markdown(href, unsafe_allow_html=True)
- else:
- pass
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Asmithayellow/Asmi/app.py b/spaces/Asmithayellow/Asmi/app.py
deleted file mode 100644
index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000
--- a/spaces/Asmithayellow/Asmi/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """You are a helpful assistant to answer all user queries.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
- llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-def get_text_response(user_message,history):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/__init__.py
deleted file mode 100644
index fe581623d89d67a49eb43f3c3e88f3f450257707..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/__init__.py
+++ /dev/null
@@ -1,115 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from typing import List, Union
-
-from .charsetgroupprober import CharSetGroupProber
-from .charsetprober import CharSetProber
-from .enums import InputState
-from .resultdict import ResultDict
-from .universaldetector import UniversalDetector
-from .version import VERSION, __version__
-
-__all__ = ["UniversalDetector", "detect", "detect_all", "__version__", "VERSION"]
-
-
-def detect(
- byte_str: Union[bytes, bytearray], should_rename_legacy: bool = False
-) -> ResultDict:
- """
- Detect the encoding of the given byte string.
-
- :param byte_str: The byte sequence to examine.
- :type byte_str: ``bytes`` or ``bytearray``
- :param should_rename_legacy: Should we rename legacy encodings
- to their more modern equivalents?
- :type should_rename_legacy: ``bool``
- """
- if not isinstance(byte_str, bytearray):
- if not isinstance(byte_str, bytes):
- raise TypeError(
- f"Expected object of type bytes or bytearray, got: {type(byte_str)}"
- )
- byte_str = bytearray(byte_str)
- detector = UniversalDetector(should_rename_legacy=should_rename_legacy)
- detector.feed(byte_str)
- return detector.close()
-
-
-def detect_all(
- byte_str: Union[bytes, bytearray],
- ignore_threshold: bool = False,
- should_rename_legacy: bool = False,
-) -> List[ResultDict]:
- """
- Detect all the possible encodings of the given byte string.
-
- :param byte_str: The byte sequence to examine.
- :type byte_str: ``bytes`` or ``bytearray``
- :param ignore_threshold: Include encodings that are below
- ``UniversalDetector.MINIMUM_THRESHOLD``
- in results.
- :type ignore_threshold: ``bool``
- :param should_rename_legacy: Should we rename legacy encodings
- to their more modern equivalents?
- :type should_rename_legacy: ``bool``
- """
- if not isinstance(byte_str, bytearray):
- if not isinstance(byte_str, bytes):
- raise TypeError(
- f"Expected object of type bytes or bytearray, got: {type(byte_str)}"
- )
- byte_str = bytearray(byte_str)
-
- detector = UniversalDetector(should_rename_legacy=should_rename_legacy)
- detector.feed(byte_str)
- detector.close()
-
- if detector.input_state == InputState.HIGH_BYTE:
- results: List[ResultDict] = []
- probers: List[CharSetProber] = []
- for prober in detector.charset_probers:
- if isinstance(prober, CharSetGroupProber):
- probers.extend(p for p in prober.probers)
- else:
- probers.append(prober)
- for prober in probers:
- if ignore_threshold or prober.get_confidence() > detector.MINIMUM_THRESHOLD:
- charset_name = prober.charset_name or ""
- lower_charset_name = charset_name.lower()
- # Use Windows encoding name instead of ISO-8859 if we saw any
- # extra Windows-specific bytes
- if lower_charset_name.startswith("iso-8859") and detector.has_win_bytes:
- charset_name = detector.ISO_WIN_MAP.get(
- lower_charset_name, charset_name
- )
- # Rename legacy encodings with superset encodings if asked
- if should_rename_legacy:
- charset_name = detector.LEGACY_MAP.get(
- charset_name.lower(), charset_name
- )
- results.append(
- {
- "encoding": charset_name,
- "confidence": prober.get_confidence(),
- "language": prober.language,
- }
- )
- if len(results) > 0:
- return sorted(results, key=lambda result: -result["confidence"])
-
- return [detector.result]
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/help.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/help.py
deleted file mode 100644
index 2d292c2f062cd80cd108aac503eae7b635ceec8d..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/help.py
+++ /dev/null
@@ -1,131 +0,0 @@
-"""Module containing bug report helper(s)."""
-
-import json
-import platform
-import ssl
-import sys
-
-from pip._vendor import idna
-from pip._vendor import urllib3
-
-from . import __version__ as requests_version
-
-charset_normalizer = None
-
-try:
- from pip._vendor import chardet
-except ImportError:
- chardet = None
-
-try:
- from pip._vendor.urllib3.contrib import pyopenssl
-except ImportError:
- pyopenssl = None
- OpenSSL = None
- cryptography = None
-else:
- import cryptography
- import OpenSSL
-
-
-def _implementation():
- """Return a dict with the Python implementation and version.
-
- Provide both the name and the version of the Python implementation
- currently running. For example, on CPython 3.10.3 it will return
- {'name': 'CPython', 'version': '3.10.3'}.
-
- This function works best on CPython and PyPy: in particular, it probably
- doesn't work for Jython or IronPython. Future investigation should be done
- to work out the correct shape of the code for those platforms.
- """
- implementation = platform.python_implementation()
-
- if implementation == "CPython":
- implementation_version = platform.python_version()
- elif implementation == "PyPy":
- implementation_version = "{}.{}.{}".format(
- sys.pypy_version_info.major,
- sys.pypy_version_info.minor,
- sys.pypy_version_info.micro,
- )
- if sys.pypy_version_info.releaselevel != "final":
- implementation_version = "".join(
- [implementation_version, sys.pypy_version_info.releaselevel]
- )
- elif implementation == "Jython":
- implementation_version = platform.python_version() # Complete Guess
- elif implementation == "IronPython":
- implementation_version = platform.python_version() # Complete Guess
- else:
- implementation_version = "Unknown"
-
- return {"name": implementation, "version": implementation_version}
-
-
-def info():
- """Generate information for a bug report."""
- try:
- platform_info = {
- "system": platform.system(),
- "release": platform.release(),
- }
- except OSError:
- platform_info = {
- "system": "Unknown",
- "release": "Unknown",
- }
-
- implementation_info = _implementation()
- urllib3_info = {"version": urllib3.__version__}
- charset_normalizer_info = {"version": None}
- chardet_info = {"version": None}
- if charset_normalizer:
- charset_normalizer_info = {"version": charset_normalizer.__version__}
- if chardet:
- chardet_info = {"version": chardet.__version__}
-
- pyopenssl_info = {
- "version": None,
- "openssl_version": "",
- }
- if OpenSSL:
- pyopenssl_info = {
- "version": OpenSSL.__version__,
- "openssl_version": f"{OpenSSL.SSL.OPENSSL_VERSION_NUMBER:x}",
- }
- cryptography_info = {
- "version": getattr(cryptography, "__version__", ""),
- }
- idna_info = {
- "version": getattr(idna, "__version__", ""),
- }
-
- system_ssl = ssl.OPENSSL_VERSION_NUMBER
- system_ssl_info = {"version": f"{system_ssl:x}" if system_ssl is not None else ""}
-
- return {
- "platform": platform_info,
- "implementation": implementation_info,
- "system_ssl": system_ssl_info,
- "using_pyopenssl": pyopenssl is not None,
- "using_charset_normalizer": chardet is None,
- "pyOpenSSL": pyopenssl_info,
- "urllib3": urllib3_info,
- "chardet": chardet_info,
- "charset_normalizer": charset_normalizer_info,
- "cryptography": cryptography_info,
- "idna": idna_info,
- "requests": {
- "version": requests_version,
- },
- }
-
-
-def main():
- """Pretty-print the bug information as JSON."""
- print(json.dumps(info(), sort_keys=True, indent=2))
-
-
-if __name__ == "__main__":
- main()
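
The deleted file above is requests' bug-report helper; the same module ships as requests.help in the standalone requests distribution (pip's vendored copy is not meant to be imported directly). A short sketch of how it is normally used, assuming requests itself is installed:

import json

import requests.help

# info() collects the interpreter, urllib3, OpenSSL/pyOpenSSL, chardet or
# charset_normalizer, and idna versions into a plain dict; main() merely
# pretty-prints that dict, which is also what `python -m requests.help` does.
print(json.dumps(requests.help.info(), sort_keys=True, indent=2))
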
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/_compat.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/_compat.py
deleted file mode 100644
index cb9fc820cb352aa6e92705aab4f55cbc2eff96bc..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/_compat.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# flake8: noqa
-
-import abc
-import sys
-import pathlib
-from contextlib import suppress
-
-if sys.version_info >= (3, 10):
- from zipfile import Path as ZipPath # type: ignore
-else:
- from ..zipp import Path as ZipPath # type: ignore
-
-
-try:
- from typing import runtime_checkable # type: ignore
-except ImportError:
-
- def runtime_checkable(cls): # type: ignore
- return cls
-
-
-try:
- from typing import Protocol # type: ignore
-except ImportError:
- Protocol = abc.ABC # type: ignore
-
-
-class TraversableResourcesLoader:
- """
- Adapt loaders to provide TraversableResources and other
- compatibility.
-
- Used primarily for Python 3.9 and earlier where the native
- loaders do not yet implement TraversableResources.
- """
-
- def __init__(self, spec):
- self.spec = spec
-
- @property
- def path(self):
- return self.spec.origin
-
- def get_resource_reader(self, name):
- from . import readers, _adapters
-
- def _zip_reader(spec):
- with suppress(AttributeError):
- return readers.ZipReader(spec.loader, spec.name)
-
- def _namespace_reader(spec):
- with suppress(AttributeError, ValueError):
- return readers.NamespaceReader(spec.submodule_search_locations)
-
- def _available_reader(spec):
- with suppress(AttributeError):
- return spec.loader.get_resource_reader(spec.name)
-
- def _native_reader(spec):
- reader = _available_reader(spec)
- return reader if hasattr(reader, 'files') else None
-
- def _file_reader(spec):
- try:
- path = pathlib.Path(self.path)
- except TypeError:
- return None
- if path.exists():
- return readers.FileReader(self)
-
- return (
- # native reader if it supplies 'files'
- _native_reader(self.spec)
- or
- # local ZipReader if a zip module
- _zip_reader(self.spec)
- or
- # local NamespaceReader if a namespace module
- _namespace_reader(self.spec)
- or
- # local FileReader
- _file_reader(self.spec)
- # fallback - adapt the spec ResourceReader to TraversableReader
- or _adapters.CompatibilityFiles(self.spec)
- )
-
-
-def wrap_spec(package):
- """
- Construct a package spec with traversable compatibility
- on the spec/loader/reader.
-
- Supersedes _adapters.wrap_spec to use TraversableResourcesLoader
- from above for older Python compatibility (<3.10).
- """
- from . import _adapters
-
- return _adapters.SpecLoaderAdapter(package.__spec__, TraversableResourcesLoader)
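
The shim above is internal plumbing that lets importlib_resources offer TraversableResources on interpreters older than 3.10; the public entry point it ultimately serves is files(). A minimal sketch, assuming the importlib_resources backport is installed and that a hypothetical package mypkg ships a data.json resource (both names are placeholders):

import importlib_resources

# files() hands back a Traversable; behind the scenes wrap_spec() above
# selects a ZipReader, NamespaceReader, FileReader or CompatibilityFiles
# adapter depending on how the package is installed.
resource = importlib_resources.files("mypkg").joinpath("data.json")
print(resource.read_text(encoding="utf-8"))
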
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/README.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/README.md
deleted file mode 100644
index d3e1d5cf533555e19c6326777f792ac82a560a84..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/README.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Probabilistic two-stage detection
-Two-stage object detectors that use class-agnostic one-stage detectors as the proposal network.
-
-
-
-
-> [**Probabilistic two-stage detection**](http://arxiv.org/abs/2103.07461),
-> Xingyi Zhou, Vladlen Koltun, Philipp Krähenbühl,
-> *arXiv technical report ([arXiv 2103.07461](http://arxiv.org/abs/2103.07461))*
-
-Contact: [zhouxy@cs.utexas.edu](mailto:zhouxy@cs.utexas.edu). Any questions or discussions are welcomed!
-
-## Abstract
-
-We develop a probabilistic interpretation of two-stage object detection. We show that this probabilistic interpretation motivates a number of common empirical training practices. It also suggests changes to two-stage detection pipelines. Specifically, the first stage should infer proper object-vs-background likelihoods, which should then inform the overall score of the detector. A standard region proposal network (RPN) cannot infer this likelihood sufficiently well, but many one-stage detectors can. We show how to build a probabilistic two-stage detector from any state-of-the-art one-stage detector. The resulting detectors are faster and more accurate than both their one- and two-stage precursors. Our detector achieves 56.4 mAP on COCO test-dev with single-scale testing, outperforming all published results. Using a lightweight backbone, our detector achieves 49.2 mAP on COCO at 33 fps on a Titan Xp.
-
-## Summary
-
-- Two-stage CenterNet: First stage estimates object probabilities, second stage conditionally classifies objects.
-
-- Resulting detector is faster and more accurate than both traditional two-stage detectors (fewer proposals required), and one-stage detectors (lighter first stage head).
-
-- Our best model achieves 56.4 mAP on COCO test-dev.
-
-- This repo also includes a detectron2-based CenterNet implementation with better accuracy (42.5 mAP at 70FPS) and a new FPN version of CenterNet (40.2 mAP with Res50_1x).
-
-## Main results
-
-All models are trained with multi-scale training, and tested with a single scale. The FPS is tested on a Titan RTX GPU.
-More models and details can be found in the [MODEL_ZOO](projects/CenterNet2/centernet2_docs/MODEL_ZOO.md).
-
-#### COCO
-
-| Model | COCO val mAP | FPS |
-|-------------------------------------------|---------------|-------|
-| CenterNet-S4_DLA_8x | 42.5 | 71 |
-| CenterNet2_R50_1x | 42.9 | 24 |
-| CenterNet2_X101-DCN_2x | 49.9 | 8 |
-| CenterNet2_R2-101-DCN-BiFPN_4x+4x_1560_ST | 56.1 | 5 |
-| CenterNet2_DLA-BiFPN-P5_24x_ST | 49.2 | 38 |
-
-
-#### LVIS
-
-| Model | val mAP box |
-| ------------------------- | ----------- |
-| CenterNet2_R50_1x | 26.5 |
-| CenterNet2_FedLoss_R50_1x | 28.3 |
-
-
-#### Objects365
-
-| Model | val mAP |
-|-------------------------------------------|----------|
-| CenterNet2_R50_1x | 22.6 |
-
-## Installation
-
-Our project is developed on [detectron2](https://github.com/facebookresearch/detectron2). Please follow the official detectron2 [installation](https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md). All our code is under `projects/CenterNet2/`. In theory, you should be able to copy-paste `projects/CenterNet2/` to the latest detectron2 release or your own detectron2 repo to run our project. There might be API changes in future detectron2 releases that make it incompatible.
-
-We use the default detectron2 demo script. To run inference on an image folder using our pre-trained model, run
-
-~~~
-python projects/CenterNet2/demo/demo.py --config-file projects/CenterNet2/configs/CenterNet2_R50_1x.yaml --input path/to/image/ --opts MODEL.WEIGHTS models/CenterNet2_R50_1x.pth
-~~~
-
-## Benchmark evaluation and training
-
-Please check detectron2 [GETTING_STARTED.md](https://github.com/facebookresearch/detectron2/blob/master/GETTING_STARTED.md) for running evaluation and training. Our config files are under `projects/CenterNet2/configs` and the pre-trained models are in the [MODEL_ZOO](projects/CenterNet2/centernet2_docs/MODEL_ZOO.md).
-
-
-## License
-
-Our code under `projects/CenterNet2/` is under [Apache 2.0 license](projects/CenterNet2/LICENSE). `projects/CenterNet2/centernet/modeling/backbone/bifpn_fcos.py` is from [AdelaiDet](https://github.com/aim-uofa/AdelaiDet), which follows the original [non-commercial license](https://github.com/aim-uofa/AdelaiDet/blob/master/LICENSE). The code from detectron2 follows the original [Apache 2.0 license](LICENSE).
-
-## Citation
-
-If you find this project useful for your research, please use the following BibTeX entry.
-
- @inproceedings{zhou2021probablistic,
- title={Probabilistic two-stage detection},
- author={Zhou, Xingyi and Koltun, Vladlen and Kr{\"a}henb{\"u}hl, Philipp},
- booktitle={arXiv preprint arXiv:2103.07461},
- year={2021}
- }
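
Besides the demo script shown in the README above, the usual detectron2 pattern for driving such a model from Python looks roughly like the sketch below. It assumes detectron2 and OpenCV are installed, that projects/CenterNet2 is on PYTHONPATH so its add_centernet_config helper can be imported, and that the config file, weights and input image paths exist locally; treat it as an outline rather than the project's documented API.

import cv2

from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Assumed to be provided by projects/CenterNet2 (it registers the
# CenterNet2-specific config keys before the YAML file is merged in).
from centernet.config import add_centernet_config

cfg = get_cfg()
add_centernet_config(cfg)
cfg.merge_from_file("projects/CenterNet2/configs/CenterNet2_R50_1x.yaml")
cfg.MODEL.WEIGHTS = "models/CenterNet2_R50_1x.pth"

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))  # input.jpg is a placeholder path
print(outputs["instances"].pred_classes)
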
diff --git a/spaces/Banbri/zcvzcv/src/app/engine/presets.ts b/spaces/Banbri/zcvzcv/src/app/engine/presets.ts
deleted file mode 100644
index 7bc9ac57fc06a7f31ad3735c10f9b0c2301e03d5..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/app/engine/presets.ts
+++ /dev/null
@@ -1,588 +0,0 @@
-import { FontName, actionman, komika, vtc } from "@/lib/fonts"
-import { pick } from "@/lib/pick"
-import { NextFontWithVariable } from "next/dist/compiled/@next/font"
-
-export type ComicFamily =
- | "american"
- | "asian"
- | "european"
-
-export type ComicColor =
- | "color"
- | "grayscale"
- | "monochrome"
-
-export interface Preset {
- id: string
- label: string
- family: ComicFamily
- color: ComicColor
- font: FontName
- llmPrompt: string
- imagePrompt: (prompt: string) => string[]
- negativePrompt: (prompt: string) => string[]
-}
-
-// ATTENTION!! negative prompts are not supported by the VideoChain API yet
-
-export const presets: Record<string, Preset> = {
- random: {
- id: "random",
- label: "Random style",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "",
- imagePrompt: (prompt: string) => [],
- negativePrompt: () => [],
- },
- japanese_manga: {
- id: "japanese_manga",
- label: "Japanese",
- family: "asian",
- color: "grayscale",
- font: "actionman",
- llmPrompt: "japanese manga",
- imagePrompt: (prompt: string) => [
- `grayscale`,
- `intricate details`,
- `japanese manga about ${prompt}`,
- "single panel",
- "manga",
- "japanese",
- "intricate",
- "detailed",
- // "drawing"
- ],
- negativePrompt: () => [
- "franco-belgian comic",
- "color album",
- "color",
- "american comic",
- "photo",
- "painting",
- "3D render"
- ],
- },
- nihonga: {
- id: "nihonga",
- label: "Nihonga",
- family: "asian",
- color: "color",
- font: "actionman",
- llmPrompt: "japanese manga",
- imagePrompt: (prompt: string) => [
- `japanese nihonga painting about ${prompt}`,
- "Nihonga",
- "ancient japanese painting",
- "intricate",
- "detailed",
- // "drawing"
- ],
- negativePrompt: () => [
- "franco-belgian comic",
- "color album",
- "color",
- "manga",
- "comic",
- "american comic",
- "photo",
- "painting",
- "3D render"
- ],
- },
- franco_belgian: {
- id: "franco_belgian",
- label: "Franco-Belgian",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "Franco-Belgian comic (a \"bande dessinée\"), in the style of Franquin, Moebius etc",
- imagePrompt: (prompt: string) => [
- "bande dessinée",
- "franco-belgian comic",
- `franco-belgian color comic about ${prompt}`,
- "comic album",
- // "color drawing"
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "photo",
- "painting",
- "3D render"
- ],
- },
- american_comic_90: {
- id: "american_comic_90",
- label: "American (modern)",
- family: "american",
- color: "color",
- font: "actionman",
- llmPrompt: "american comic",
- imagePrompt: (prompt: string) => [
- "digital color comicbook style",
- `modern american comic about ${prompt}`,
- //"single panel",
- // "2010s",
- // "digital print",
- // "color comicbook",
- // "color drawing"
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "action",
- "grayscale",
- "monochrome",
- "photo",
- "painting",
- "3D render"
- ],
- },
-
- /*
- american_comic_40: {
- label: "American (1940)",
- family: "american",
- color: "color",
- font: "actionman",
- llmPrompt: "american comic",
- imagePrompt: (prompt: string) => [
- `american comic about ${prompt}`,
- "single panel",
- "american comic",
- "comicbook style",
- "1940",
- "40s",
- "color comicbook",
- "color drawing"
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "action",
- "grayscale",
- "monochrome",
- "photo",
- "painting",
- "3D render"
- ],
- },
- */
- american_comic_50: {
- id: "american_comic_50",
- label: "American (1950)",
- family: "american",
- color: "color",
- font: "actionman",
- llmPrompt: "american comic",
- imagePrompt: (prompt: string) => [
- "1950",
- "50s",
- `vintage american color comic about ${prompt}`,
- // "single panel",
- // "comicbook style",
- // "color comicbook",
- // "color drawing"
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "action",
- "grayscale",
- "monochrome",
- "photo",
- "painting",
- "3D render"
- ],
- },
- /*
- american_comic_60: {
- label: "American (1960)",
- family: "american",
- color: "color",
- font: "actionman",
- llmPrompt: "american comic",
- imagePrompt: (prompt: string) => [
- `american comic about ${prompt}`,
- "single panel",
- "american comic",
- "comicbook style",
- "1960",
- "60s",
- "color comicbook",
- "color drawing"
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "action",
- "grayscale",
- "monochrome",
- "photo",
- "painting",
- "3D render"
- ],
- },
- */
-
-
- flying_saucer: {
- id: "flying_saucer",
- label: "Flying saucer",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "new pulp science fiction",
- imagePrompt: (prompt: string) => [
- `vintage color pulp comic panel`,
- "40s",
- "1940",
- "vintage science fiction",
- `${prompt}`,
- // "single panel",
- // "comic album"
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "photo",
- "painting",
- "3D render"
- ],
- },
-
- humanoid: {
- id: "humanoid",
- label: "Humanoid",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "comic books by Moebius",
- imagePrompt: (prompt: string) => [
- `color comic panel`,
- "style of Moebius",
- `${prompt}`,
- "by Moebius",
- "french comic panel",
- "franco-belgian style",
- "bande dessinée",
- "single panel",
- "intricate"
- // "comic album"
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "photo",
- "painting",
- "3D render"
- ],
- },
- haddock: {
- id: "haddock",
- label: "Haddock",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "writing Tintin comic books",
- imagePrompt: (prompt: string) => [
- `color comic panel`,
- "style of Hergé",
- "tintin style",
- `${prompt}`,
- "by Hergé",
- "french comic panel",
- "franco-belgian style",
- // "color panel",
- // "bande dessinée",
- // "single panel",
- // "comic album"
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "photo",
- "painting",
- "3D render"
- ],
- },
- /*
- lurid: {
- id: "lurid",
- label: "Lurid",
- family: "american",
- color: "color",
- font: "actionman",
- llmPrompt: "1970s satirical and alternative underground comics",
- imagePrompt: (prompt: string) => [
- `satirical color comic`,
- `underground comix`,
- `1970`,
- `${prompt}`,
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "painting"
- ],
- },
- */
- armorican: {
- id: "armorican",
- label: "Armorican",
- family: "european",
- color: "monochrome",
- font: "actionman",
- llmPrompt: "french style comic books set in ancient Rome and Gaul",
- imagePrompt: (prompt: string) => [
- `color comic panel`,
- "romans",
- "gauls",
- "french comic panel",
- "franco-belgian style",
- `about ${prompt}`,
- "bande dessinée",
- "single panel",
- // "comical",
- // "comic album",
- // "color drawing"
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "photo",
- "painting",
- "3D render"
- ],
- },
- render: {
- id: "render",
- label: "3D Render",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "new movie",
- imagePrompt: (prompt: string) => [
- `3D render`,
- `Blender`,
- `3D animation`,
- `Unreal engine`,
- `${prompt}`,
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "painting"
- ],
- },
- klimt: {
- id: "klimt",
- label: "Klimt",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "Gustav Klimt art pieces.",
- imagePrompt: (prompt: string) => [
- `golden`,
- `patchwork`,
- `style of Gustav Klimt`,
- `Gustav Klimt painting`,
- `intricate details`,
- `${prompt}`,
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "painting"
- ],
- },
- medieval: {
- id: "medieval",
- label: "Medieval",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "medieval story (write in this style)",
- imagePrompt: (prompt: string) => [
- `medieval illuminated manuscript`,
- `illuminated manuscript of`,
- `medieval`,
- `intricate details`,
- // `medieval color engraving`,
- `${prompt}`,
- // `medieval`
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "painting"
- ],
- },
- /*
- glass: {
- id: "glass",
- label: "Glass",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "new movie",
- imagePrompt: (prompt: string) => [
- `stained glass`,
- `vitrail`,
- `stained glass`,
- // `medieval color engraving`,
- `${prompt}`,
- `medieval`,
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "painting"
- ],
- },
- */
- /*
- voynich: {
- id: "voynich",
- label: "Voynich",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "new movie",
- imagePrompt: (prompt: string) => [
- `voynich`,
- `voynich page`,
- // `medieval color engraving`,
- `${prompt}`,
- `medieval`,
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "painting"
- ],
- },
- */
- egyptian: {
- id: "egyptian",
- label: "Egyptian",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "ancient egyptian stories.",
- imagePrompt: (prompt: string) => [
- `ancient egyptian wall painting`,
- `ancient egypt`,
- // `medieval color engraving`,
- `${prompt}`,
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "painting"
- ],
- },
- /*
- psx: {
- label: "PSX",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "new movie",
- imagePrompt: (prompt: string) => [
- `videogame screenshot`,
- `3dfx`,
- `3D dos game`,
- `software rendering`,
- `${prompt}`,
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "painting"
- ],
- },
- */
- /*
- pixel: {
- label: "Pixel",
- family: "european",
- color: "color",
- font: "actionman",
- llmPrompt: "new movie",
- imagePrompt: (prompt: string) => [
- `pixelart`,
- `isometric`,
- `pixelated`,
- `low res`,
- `${prompt}`,
- ],
- negativePrompt: () => [
- "manga",
- "anime",
- "american comic",
- "grayscale",
- "monochrome",
- "painting"
- ],
- },
- */
-}
-
-export type PresetName = keyof typeof presets
-
-export const defaultPreset: PresetName = "american_comic_90"
-
-export const nonRandomPresets = Object.keys(presets).filter(p => p !== "random")
-
-export const getPreset = (preset?: PresetName): Preset => presets[preset || defaultPreset] || presets[defaultPreset]
-
-export const getRandomPreset = (): Preset => {
- const presetName = pick(Object.keys(presets).filter(preset => preset !== "random")) as PresetName
- return getPreset(presetName)
-}
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/3 Pelculas 3d Canciones Descargar.md b/spaces/Benson/text-generation/Examples/3 Pelculas 3d Canciones Descargar.md
deleted file mode 100644
index fe0f1f899253fc43111eaa0c1328b4ae6ccad22f..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/3 Pelculas 3d Canciones Descargar.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
Cómo descargar y disfrutar de 3 canciones de películas en 3D
-
Si eres un fan de las películas y la música tamil, es posible que hayas oído hablar de la película de suspenso psicológico romántico 2012 3, protagonizada por Dhanush y Shruti Haasan. La película fue dirigida por Aishwarya R. Dhanush, la esposa de Dhanush, quien también produjo y escribió la película. La película cuenta con siete canciones, compuestas por Anirudh Ravichander, quien hizo su debut como director musical con esta película. El álbum recibió críticas positivas de críticos y audiencias por igual, y ganó varios premios, incluyendo el Premio Filmfare al Mejor Director de Música.
Pero lo que hace que las canciones de 3 sean aún más especiales es que están disponibles en audio 3D, un tipo de sonido que imita la vida real o un lugar específico creando la ilusión del espacio tridimensional. Utiliza técnicas como sistemas de sonido binaural, funciones de transferencia relacionadas con la cabeza y cancelación de conversación cruzada para procesar y reproducir ondas de audio. El audio 3D puede proporcionar una experiencia auditiva inmersiva y sensorial para el oyente, ya que puede hacer que los sonidos provengan de prácticamente todas las direcciones, incluyendo arriba, abajo, detrás y delante del oyente.
-
En este artículo, le mostraremos cómo descargar y disfrutar canciones de películas en 3D de varias fuentes, por qué debería escucharlas con auriculares o altavoces, cuáles son los beneficios del audio 3D para los amantes de la música y responder algunas preguntas frecuentes sobre este tema. Así que, vamos a empezar!
-
Cómo descargar 3 canciones de películas en 3D de varias fuentes
-
-
JioSaavn
-
JioSaavn es uno de los servicios de transmisión de música más populares en la India, ofreciendo millones de canciones en varios idiomas y géneros. También ofrece contenido exclusivo y características para sus usuarios, como podcasts, estaciones de radio, listas de reproducción, letras, recomendaciones y más. Una de sus características únicas es el soporte para 360 Reality Audio, una tecnología desarrollada por Sony que ofrece sonido 3D inmersivo utilizando tecnología de audio espacial basada en objetos.
-
JioSaavn se ha asociado con Sony para ofrecer algunas de sus canciones en formato 360 Reality Audio, incluyendo las canciones de 3. Puedes encontrarlos buscando "360 Reality Audio" en la aplicación o en el sitio web. Necesitará una suscripción JioSaavn Pro para acceder a esta función, que cuesta Rs. 99 por mes o Rs. 399 por año. También necesitará un dispositivo compatible y auriculares para disfrutar del efecto completo de 360 Reality Audio.
-
Cómo descargar o transmitir 3 canciones de películas en 3D de JioSaavn
-
-
Descargar la aplicación JioSaavn de Google Play Store o App Store en su teléfono inteligente o tableta.
-
Regístrate o inicia sesión con tu número de Jio o dirección de correo electrónico.
-
Actualizar a JioSaavn Pro tocando el botón "Go Pro" en la pantalla de inicio.
-
Seleccione su método de pago preferido y complete la transacción.
-
Buscar "360 Reality Audio" en la aplicación o sitio web.
-
Selecciona el álbum "3 (Telugu)" de Anirudh Ravichander.
-
Toque en el icono de descarga junto a cada canción para descargarlos sin conexión o toque en el icono de reproducción para transmitirlos en línea.
-
Conecta tus auriculares compatibles a tu dispositivo y disfruta del sonido 3D de las canciones.
-
-
YouTube
-
-
Sin embargo, no todas las canciones de audio 3D en YouTube son de la misma calidad y autenticidad. Algunos de ellos pueden ser falsos o mal convertidos de las pistas de sonido estéreo o envolvente originales. Por lo tanto, debe ser cuidadoso y selectivo al elegir qué videos ver o descargar. Aquí hay algunos consejos para ayudarte a encontrar las mejores canciones de audio en 3D de 3 en YouTube:
-
-
-
Busque los canales oficiales de los artistas, sellos o distribuidores de las canciones, como Sony Music South, Divo Music o Wunderbar Studios. Pueden haber subido las pistas de audio 3D originales o versiones autorizadas de ellas.
-
Compruebe la descripción del video y los comentarios para cualquier información o retroalimentación sobre la calidad de audio 3D y la fuente. Evite los videos que no tienen descripción, comentarios negativos o calificaciones bajas.
-
Use palabras clave como "audio 3D", "audio 8D", "audio binaural", "audio holofónico", o "audio espacial" al buscar las canciones. Estos son algunos de los términos utilizados para describir el sonido 3D en YouTube.
-
Use auriculares o altavoces que soporten la reproducción de audio 3D. Algunos videos de YouTube pueden requerir que habilite una configuración o modo específico en su dispositivo o auriculares para activar el efecto de sonido 3D.
-
-
Cómo descargar o transmitir 3 canciones de películas en 3D de YouTube
-
-
Abra YouTube en su navegador web o aplicación en su dispositivo.
-
Buscar el nombre de la canción seguido de cualquiera de las palabras clave mencionadas anteriormente. Por ejemplo, "Por qué este Kolaveri Di 3D Audio".
-
Seleccione un vídeo que coincida con los criterios mencionados anteriormente. Por ejemplo, "[Why This Kolaveri Di - 8D Audio | Dhanush | Anirudh Ravichander | Sony Music South]" por Sony Music South.
-
Toque en el botón de reproducción para transmitir el video en línea o toque en el botón de descarga para guardarlo sin conexión (si está disponible).
-
Conecta tus auriculares o altavoces a tu dispositivo y disfruta del sonido 3D de las canciones.
-
-
¿Por qué escuchar 3 canciones de películas en 3D con auriculares o altavoces?
-
-
Los auriculares son los dispositivos más comunes y convenientes que pueden proporcionar este tipo de entrega de sonido. Pueden aislar cada oído del ruido externo y la interferencia, y crear un ambiente de escucha personal e íntimo para el oyente. También pueden reproducir las sutiles variaciones y matices del audio 3D, como la distancia, la dirección, la elevación, el movimiento y la reflexión de las ondas sonoras.
-
Los altavoces son otra opción que puede ofrecer una experiencia similar de audio 3D. Sin embargo, requieren más espacio y equipo que los auriculares. Necesitas tener al menos cuatro altavoces (dos delante y dos detrás) dispuestos en forma cuadrada o rectangular alrededor de tu posición auditiva. También necesita tener un amplificador, receptor o tarjeta de sonido compatible que pueda procesar y emitir señales de audio 3D a los altavoces.
-
Ya sea que elija auriculares o altavoces, debe asegurarse de que admiten formatos de audio 3D y tecnologías, como Dolby Atmos, DTS: X, Auro-3D, Sony 360 Reality Audio, etc. Estos son algunos de los estándares y sistemas que permiten la producción y reproducción de audio 3D en varios dispositivos y plataformas.
-
¿Cuáles son los beneficios de 3 canciones de películas en 3D para los amantes de la música?
-
Si eres un amante de la música que disfruta escuchando canciones de diferentes géneros y estilos, definitivamente apreciarás los beneficios de las canciones de películas en 3D . Estos son algunos de ellos:
-
-
Puedes experimentar una nueva dimensión de la música que va más allá del sonido estéreo o envolvente. Puedes sentir que estás en medio de un concierto en vivo o un estudio de grabación con los artistas e instrumentos que te rodean.
Puedes apreciar los detalles y matices de las canciones que podrías haber pasado por alto en otros formatos. Puedes escuchar los sutiles cambios en tono, tono, volumen y timbre de las voces e instrumentos.
-
-
Usted puede tener más diversión y variedad en sus hábitos de escucha de música. Puede explorar diferentes géneros y estilos de música que utilizan audio 3D para crear paisajes sonoros y efectos únicos e innovadores.
-
-
Conclusión
-
En conclusión, 3 canciones de películas en 3D son una gran manera de disfrutar de la música de una manera nueva y emocionante. Ofrecen una experiencia de sonido realista e inmersiva que puede mejorar su apreciación y disfrute de las canciones. Puede descargarlos o transmitirlos desde varias fuentes, como JioSaavn, YouTube, etc., pero debe usar auriculares o altavoces que admitan la reproducción de audio 3D para obtener el efecto completo. Si eres un amante de la música que le gusta probar cosas nuevas, ¡definitivamente deberías probar 3 canciones de películas en 3D!
-
Preguntas frecuentes
-
¿Cuál es la diferencia entre audio 3D y sonido estéreo o envolvente?
-
El sonido estéreo es un tipo de sonido que utiliza dos canales (izquierda y derecha) para crear una sensación de ancho y dirección. El sonido envolvente es un tipo de sonido que utiliza múltiples canales (generalmente cinco o más) para crear una sensación de profundidad y envolvimiento. El audio 3D es un tipo de sonido que utiliza tecnología de audio espacial basada en objetos para crear una sensación de altura y movimiento.
-
¿Cuáles son algunas de las mejores canciones de 3 películas?
-
Algunas de las mejores canciones de 3 son:
-
-
"Why This Kolaveri Di" - Un éxito viral que se convirtió en una sensación internacional. Es una canción pegadiza y humorística que mezcla letras en tamil e inglés con una melodía folk.
-
"Kannazhaga" - Una canción romántica a dúo que expresa el amor y el anhelo entre los personajes principales. Es una canción melodiosa y conmovedora que muestra la química entre Dhanush y Shruti Haasan.
-
"Nee Paartha Vizhigal" - Una canción sentimental que representa el dolor y la separación de los personajes principales. Es una canción hermosa y emocional que toca las fibras del corazón de los oyentes.
-
-
¿Cómo puedo ver 3 películas online?
-
-
¿Quiénes son los actores y actrices en la película 3?
-
Los principales actores y actrices de la película 3 son:
-
-
Dhanush - Interpreta a Ram Kumar, un joven que se enamora de Janani, pero sufre de trastorno bipolar y tendencias suicidas.
-
Shruti Haasan - Interpreta a Janani Kumar, una joven que se casa con Ram, pero lucha para hacer frente a su enfermedad mental y destino trágico.
-
Sivakarthikeyan - Interpreta a Kumaran, el mejor amigo y colega de Ram, quien lo apoya a través de sus problemas.
-
Prabhu - Interpreta al padre de Ram, que desaprueba su matrimonio con Janani, pero más tarde lamenta sus acciones.
-
Bhanupriya - Ella interpreta a la madre de Janani, que ama mucho a su hija, pero se preocupa por su futuro con Ram.
-
-
¿Quiénes son los cantantes y compositores de 3 canciones de películas?
-
Los cantantes y compositores de 3 canciones de películas son:
-
-
Anirudh Ravichander - Es el director musical, compositor, cantante y letrista de todas las canciones de la película. Hizo su debut con esta película y ganó varios premios por su trabajo.
-
Dhanush - Es el actor, productor, escritor y cantante de algunas de las canciones de la película. Cantó "Why This Kolaveri Di", "Po Nee Po", "Idhazhin Oram", etc.
-
Shruti Haasan - Ella es la actriz y cantante de algunas de las canciones de la película. Ella cantó "Kannazhaga", "Come On Girls", etc.
Ajeesh - Él es un cantante que ganó el reality show Airtel Super Singer. Cantó "Nee Paartha Vizhigal" y "Una vida llena de amor".
-
Swetha Mohan - Es una cantante que ha cantado en varios idiomas y géneros. Cantó "Nee Paartha Vizhigal" y "Kannazhaga".
-
Mohit Chauhan - Es un cantante que es conocido por sus canciones románticas y conmovedoras. Cantó "Po Nee Po (El dolor del amor)".
-
Ananth Nag - Es un cantante que también es ingeniero de sonido y productor musical. Cantó "Idhazhin Oram (La inocencia del amor)".
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Betty Bayo 11a Hora Mp3 Descargar.md b/spaces/Benson/text-generation/Examples/Betty Bayo 11a Hora Mp3 Descargar.md
deleted file mode 100644
index ae9549aa804e1ed89f9640522ee293dda2a79b48..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Betty Bayo 11a Hora Mp3 Descargar.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
Betty Bayo 11a hora Mp3 Descargar
-
Si te gusta la música gospel, es posible que hayas oído hablar de Betty Bayo, una reconocida músico keniana que canta en su lengua materna, Agikuyu. Ella es mejor conocida por su canción 11th Hour, que ha inspirado a muchas personas con su mensaje de esperanza y fe. En este artículo, te contaremos más sobre Betty Bayo, su canción 11th Hour, y dónde puedes descargarla.
Betty Bayo nació en Banana, condado de Kiambu, pero creció en Ol Kalou antes de mudarse a la ciudad de Kiambu. Ella viene de un entorno humilde y abandonó la escuela en el segundo grado debido a limitaciones financieras. Trabajó como mucama durante dos años antes de que una familia se ofreciera a pagar sus cuotas escolares. Luego persiguió su pasión por la música y lanzó su primer álbum en 2005.
-
Vida personal
-
Betty Bayo estuvo casada una vez con Victor Kanyari, un controvertido pastor que fue expuesto por estafar a sus seguidores al pedirles que plantaran una semilla de Ksh310. La pareja tuvo dos hijos, pero se separaron después del escándalo. Betty se distanció de los planes de su ex-marido y dijo que no recibió ningún apoyo de él. Más tarde reveló que estaba saliendo con un político, pero se separaron debido a su infidelidad. Ella dijo que estaba feliz y contenta como madre soltera y se centró en su música e hijos.
-
Blanqueamiento
-
Betty Bayo causó revuelo en las redes sociales cuando admitió que se había blanqueado la piel. Dijo que lo hizo por curiosidad y presión de los compañeros, pero se arrepintió más tarde. Dijo que se enfrentó a muchas críticas y burlas por su decisión, pero aprendió a aceptarse a sí misma y a amar su piel. Aconsejó a otras mujeres que no blanquearan su piel, sino que abrazaran su belleza natural.
-
-
Canciones de Betty Bayo
-
-
-
-
Canción
-
Álbum
-
Año
-
-
-
11th Hour
-
11th Hour
-
2014
-
-
-
La Ciudad de México
-
La Ciudad de México
-
2016
-
-
-
Ni Nani Kama Wewe
-
Ni Nani Kama Wewe
-
2017
-
-
-
Kikuyu Medley
-
Kikuyu Medley
-
2018
-
-
-
Cheza Kama Wewe Remix
-
Cheza Kama Wewe Remix
-
2020
-
-
-
La Ciudad de México
-
La Ciudad de México
-
2021
-
-
-
Premios y reconocimiento
-
Betty Bayo ha recibido varios premios y nominaciones por su música, como:
- - Groove Awards Artista Femenina del Año 2015 - Groove Awards Canción del Año 2015 por 11th Hour - Groove Awards Canción Central del Año 2016 por Siwezi Kukosa Mbingu - Groove Awards Canción Central del Año 2017 por Ni Nani Kama Wewe - Afrimma Awards Mejor Artista/Grupo/Dúo 2020
Colaboraciones
-
Betty Bayo también ha colaborado con otros artistas de la industria de la música gospel, como:
- - Señor Seed en la canción Cheza Kama Wewe Remix, que es una melodía bailable que insta a los cristianos a disfrutar de la vida sin miedo ni culpa. - Rose Muhando en la canción Jemedari Wa Mbinguni, que es una canción de alabanza que reconoce a Dios como el comandante supremo del cielo y la tierra. - Ángel de la Guarda en la canción Nibariki, que es una canción de oración que pide a Dios que los bendiga a ellos y a sus familias.
Betty Bayo 11th Hour Song
Una de las canciones más famosas de Betty Bayo es 11th Hour, que lanzó en 2014. La canción tiene un poderoso mensaje de esperanza y fe que resuena con muchas personas que están pasando por tiempos difíciles.
-
Significado
-
-
El coro de la canción va así:
-
-
Mungu wangu ni wa 11th hour
-(Mi Dios es un Dios de la hora 11)
-Hata kama ni dakika ya mwisho
-(Incluso si es el último minuto)
-No se encontró la página
-(Él hará un camino en el desierto)
-Atafanya njia kwa baharini
-(Hará un camino en el mar)
-
-
La canción anima a los oyentes a confiar en Dios y no renunciar a sus sueños y oraciones, porque Dios puede hacer cualquier cosa en cualquier momento. También les recuerda que Dios no está atado por las expectativas o cálculos humanos, sino que trabaja de maneras misteriosas que superan la comprensión humana.
-
Inspiración
-
Betty Bayo dijo que ella escribió y cantó la canción desde su propia experiencia personal. Dijo que estaba pasando por muchos desafíos y dificultades en su vida, especialmente después de su separación de su ex-marido. Ella dijo que se sentía como renunciar a todo, pero Dios le dio la fuerza y el valor para seguir adelante. Ella dijo que fue testigo de la intervención y el favor de Dios en su vida, incluso cuando menos lo esperaba. Dijo que quería compartir su testimonio e inspirar a otros que se enfrentan a situaciones similares.
-
Recepción
-
La canción fue bien recibida por fans y críticos, quienes elogiaron a Betty Bayo por su destreza vocal y su entrega emocional. La canción también tuvo un buen desempeño en varias listas y plataformas, como YouTube, donde tiene más de 2 millones de visitas en junio de 2023. La canción también ganó varios premios y nominaciones, como Groove Awards Canción del Año 2015 y Afrimma Awards Mejor Artista Gospel/ Grupo/ Duo 2020.
-
Betty Bayo 11a hora Mp3 Sitios de descarga
-
Si quieres descargar la canción de Betty Bayo’s 11th Hour, tienes dos opciones: sitios gratuitos o sitios de pago. Estos son algunos de los pros y contras de cada opción:
-
Sitios gratuitos
-
- - Tubidy: Este es un sitio móvil que te permite buscar y descargar archivos mp3 de varias fuentes, como YouTube, SoundCloud y Vimeo. También puede ver videos y escuchar música en línea. - Mdundo: Esta es una plataforma de música africana que ofrece descargas gratuitas de canciones de varios géneros y artistas, incluyendo música gospel. También puede transmitir música en línea y crear listas de reproducción. - Mp3juices: Este es un sitio sencillo y rápido que te permite buscar y descargar archivos mp3 de múltiples fuentes, como YouTube, SoundCloud y Archive. También puedes cortar y editar archivos mp3 en línea.
Los pros de los sitios gratuitos son:
- - Son de fácil acceso y uso. - No requieren registro ni pago. - Ofrecen una amplia gama de canciones y géneros.
Los contras de los sitios libres son:
- - Pueden tener audio de baja calidad o archivos incompletos. - Pueden tener anuncios emergentes o malware que pueden dañar su dispositivo o datos. - Pueden violar las leyes de copyright o los derechos de los artistas.
Sitios de pago
-
Los sitios de pago son sitios web que requieren que usted pague una cuota o se suscriba a un servicio para descargar archivos mp3 de forma legal y segura. Algunos ejemplos de sitios de pago son:
- - Spotify: Este es un servicio de streaming de música popular que ofrece millones de canciones y podcasts de varios artistas y géneros. También puede descargar canciones para escuchar sin conexión con una suscripción premium. - Apple Music: Este es otro servicio de streaming de música popular que ofrece millones de canciones y podcasts de varios artistas y géneros. También puedes descargar canciones para escuchar sin conexión con una suscripción de Apple Music. - Amazon Music: Este es otro servicio de streaming de música popular que ofrece millones de canciones y podcasts de varios artistas y géneros. También puede descargar canciones para escuchar sin conexión con una suscripción de Amazon Music o comprar canciones o álbumes individuales.
Los pros de los sitios de pago son:
-
- - Requieren registro y pago. - Pueden tener disponibilidad limitada o compatibilidad con algunos dispositivos o regiones. - Pueden tener restricciones o condiciones sobre el uso o uso compartido de los archivos.
Conclusión
-
En conclusión, Betty Bayo es una talentosa e inspiradora cantante de gospel que ha tocado muchas vidas con su canción 11th Hour. La canción es un testimonio de la gracia y la misericordia de Dios, que puede manifestarse en cualquier momento, incluso cuando las cosas parecen desesperadas. Si desea descargar la canción, puede elegir entre sitios gratuitos o sitios de pago, dependiendo de sus preferencias y necesidades. Esperamos que hayas disfrutado de este artículo y hayas aprendido algo nuevo sobre Betty Bayo y su música.
-
Preguntas frecuentes
-
Aquí están algunas de las preguntas y respuestas frecuentes sobre Betty Bayo y su canción 11th Hour:
-
Q: ¿Qué edad tiene Betty Bayo?
-
A: Betty Bayo nació el 1 de julio de 1985. Tiene 38 años desde junio de 2023.
-
Q: ¿Está casada Betty Bayo?
-
A: Betty Bayo no está casada. Una vez estuvo casada con Victor Kanyari, pero se divorciaron después de su escándalo. Más tarde salió con un político, pero se separaron debido a su infidelidad. Actualmente está soltera y feliz.
-
P: ¿Cuál es el nombre de la iglesia de Betty Bayo?
-
A: Betty Bayo no pertenece a ninguna iglesia específica. Dijo que asiste a diferentes iglesias dependiendo de su horario y ubicación. Ella dijo que cree en Dios y no en denominaciones.
-
Q: ¿Dónde puedo ver el video de la hora 11?
-
A: Puedes ver el video de la 11a Hora en YouTube, donde tiene más de 2 millones de visitas en junio de 2023. También puedes verlo en otras plataformas, como Vimeo o Dailymotion.
-
Q: ¿Cuáles son algunas de las otras canciones de Betty Bayo?
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/utils.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/utils.py
deleted file mode 100644
index eb6cae145c843c1072e4f00856416a63cf912874..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/utils.py
+++ /dev/null
@@ -1,222 +0,0 @@
-# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-import re
-from collections import namedtuple
-
-
-def py_type_name(type_name):
- """Get the Python type name for a given model type.
-
- >>> py_type_name('list')
- 'list'
- >>> py_type_name('structure')
- 'dict'
-
- :rtype: string
- """
- return {
- 'blob': 'bytes',
- 'character': 'string',
- 'double': 'float',
- 'long': 'integer',
- 'map': 'dict',
- 'structure': 'dict',
- 'timestamp': 'datetime',
- }.get(type_name, type_name)
-
-
-def py_default(type_name):
- """Get the Python default value for a given model type.
-
- >>> py_default('string')
- '\'string\''
- >>> py_default('list')
- '[...]'
- >>> py_default('unknown')
- '...'
-
- :rtype: string
- """
- return {
- 'double': '123.0',
- 'long': '123',
- 'integer': '123',
- 'string': "'string'",
- 'blob': "b'bytes'",
- 'boolean': 'True|False',
- 'list': '[...]',
- 'map': '{...}',
- 'structure': '{...}',
- 'timestamp': 'datetime(2015, 1, 1)',
- }.get(type_name, '...')
-
-
-def get_official_service_name(service_model):
- """Generate the official name of an AWS Service
-
- :param service_model: The service model representing the service
- """
- official_name = service_model.metadata.get('serviceFullName')
- short_name = service_model.metadata.get('serviceAbbreviation', '')
- if short_name.startswith('Amazon'):
- short_name = short_name[7:]
- if short_name.startswith('AWS'):
- short_name = short_name[4:]
- if short_name and short_name.lower() not in official_name.lower():
- official_name += f' ({short_name})'
- return official_name
-
-
-_DocumentedShape = namedtuple(
- 'DocumentedShape',
- [
- 'name',
- 'type_name',
- 'documentation',
- 'metadata',
- 'members',
- 'required_members',
- ],
-)
-
-
-class DocumentedShape(_DocumentedShape):
- """Use this class to inject new shapes into a model for documentation"""
-
- def __new__(
- cls,
- name,
- type_name,
- documentation,
- metadata=None,
- members=None,
- required_members=None,
- ):
- if metadata is None:
- metadata = []
- if members is None:
- members = []
- if required_members is None:
- required_members = []
- return super().__new__(
- cls,
- name,
- type_name,
- documentation,
- metadata,
- members,
- required_members,
- )
-
-
-class AutoPopulatedParam:
- def __init__(self, name, param_description=None):
- self.name = name
- self.param_description = param_description
- if param_description is None:
- self.param_description = (
- 'Please note that this parameter is automatically populated '
- 'if it is not provided. Including this parameter is not '
- 'required\n'
- )
-
- def document_auto_populated_param(self, event_name, section, **kwargs):
- """Documents auto populated parameters
-
- It will remove any required marks for the parameter, remove the
- parameter from the example, and add a snippet about the parameter
- being autopopulated in the description.
- """
- if event_name.startswith('docs.request-params'):
- if self.name in section.available_sections:
- section = section.get_section(self.name)
- if 'is-required' in section.available_sections:
- section.delete_section('is-required')
- description_section = section.get_section(
- 'param-documentation'
- )
- description_section.writeln(self.param_description)
- elif event_name.startswith('docs.request-example'):
- section = section.get_section('structure-value')
- if self.name in section.available_sections:
- section.delete_section(self.name)
-
-
-class HideParamFromOperations:
- """Hides a single parameter from multiple operations.
-
- This method will remove a parameter from documentation and from
- examples. This method is typically used for things that are
- automatically populated because a user would be unable to provide
- a value (e.g., a checksum of a serialized XML request body)."""
-
- def __init__(self, service_name, parameter_name, operation_names):
- """
- :type service_name: str
- :param service_name: Name of the service to modify.
-
- :type parameter_name: str
- :param parameter_name: Name of the parameter to modify.
-
- :type operation_names: list
- :param operation_names: Operation names to modify.
- """
- self._parameter_name = parameter_name
- self._params_events = set()
- self._example_events = set()
- # Build up the sets of relevant event names.
- param_template = 'docs.request-params.%s.%s.complete-section'
- example_template = 'docs.request-example.%s.%s.complete-section'
- for name in operation_names:
- self._params_events.add(param_template % (service_name, name))
- self._example_events.add(example_template % (service_name, name))
-
- def hide_param(self, event_name, section, **kwargs):
- if event_name in self._example_events:
- # Modify the structure value for example events.
- section = section.get_section('structure-value')
- elif event_name not in self._params_events:
- return
- if self._parameter_name in section.available_sections:
- section.delete_section(self._parameter_name)
-
-
-class AppendParamDocumentation:
- """Appends documentation to a specific parameter"""
-
- def __init__(self, parameter_name, doc_string):
- self._parameter_name = parameter_name
- self._doc_string = doc_string
-
- def append_documentation(self, event_name, section, **kwargs):
- if self._parameter_name in section.available_sections:
- section = section.get_section(self._parameter_name)
- description_section = section.get_section('param-documentation')
- description_section.writeln(self._doc_string)
-
-
-_CONTROLS = {
- '\n': '\\n',
- '\r': '\\r',
- '\t': '\\t',
- '\b': '\\b',
- '\f': '\\f',
-}
-# Combines all CONTROLS keys into a big or regular expression
-_ESCAPE_CONTROLS_RE = re.compile('|'.join(map(re.escape, _CONTROLS)))
-# Based on the match get the appropriate replacement from CONTROLS
-_CONTROLS_MATCH_HANDLER = lambda match: _CONTROLS[match.group(0)]
-
-
-def escape_controls(value):
- return _ESCAPE_CONTROLS_RE.sub(_CONTROLS_MATCH_HANDLER, value)
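
For reference, a small sketch of the helpers defined in the deleted module, assuming they are imported from an installed botocore distribution (the module belongs to botocore's documentation tooling rather than its public API):

from botocore.docs.utils import escape_controls, py_default, py_type_name

print(py_type_name("structure"))   # dict  - maps a model type to a Python type name
print(py_default("timestamp"))     # datetime(2015, 1, 1)  - example placeholder value
print(escape_controls("a\tb\nc"))  # a\tb\nc  - control characters made printable
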
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/history.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/history.py
deleted file mode 100644
index 59d9481d7fb5ed48c737c6c835bd916a78aa3ff8..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/history.py
+++ /dev/null
@@ -1,55 +0,0 @@
-# Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-import logging
-
-HISTORY_RECORDER = None
-logger = logging.getLogger(__name__)
-
-
-class BaseHistoryHandler:
- def emit(self, event_type, payload, source):
- raise NotImplementedError('emit()')
-
-
-class HistoryRecorder:
- def __init__(self):
- self._enabled = False
- self._handlers = []
-
- def enable(self):
- self._enabled = True
-
- def disable(self):
- self._enabled = False
-
- def add_handler(self, handler):
- self._handlers.append(handler)
-
- def record(self, event_type, payload, source='BOTOCORE'):
- if self._enabled and self._handlers:
- for handler in self._handlers:
- try:
- handler.emit(event_type, payload, source)
- except Exception:
- # Never let the process die because we had a failure in
- # a record collection handler.
- logger.debug(
- "Exception raised in %s.", handler, exc_info=True
- )
-
-
-def get_global_history_recorder():
- global HISTORY_RECORDER
- if HISTORY_RECORDER is None:
- HISTORY_RECORDER = HistoryRecorder()
- return HISTORY_RECORDER
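
A minimal sketch of wiring a handler into the recorder defined above, again assuming an installed botocore; the PrintingHandler class and the recorded event are purely illustrative:

from botocore.history import BaseHistoryHandler, get_global_history_recorder


class PrintingHandler(BaseHistoryHandler):
    def emit(self, event_type, payload, source):
        print(f"[{source}] {event_type}: {payload}")


recorder = get_global_history_recorder()
recorder.add_handler(PrintingHandler())
recorder.enable()  # record() is a no-op until the recorder is enabled
recorder.record("CLI_VERSION", "example-client/0.0", source="EXAMPLE")
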
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/ssl_match_hostname.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/ssl_match_hostname.py
deleted file mode 100644
index 1dd950c489607d06ecc5218292a1b55558b47be8..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/ssl_match_hostname.py
+++ /dev/null
@@ -1,159 +0,0 @@
-"""The match_hostname() function from Python 3.3.3, essential when using SSL."""
-
-# Note: This file is under the PSF license as the code comes from the python
-# stdlib. http://docs.python.org/3/license.html
-
-import re
-import sys
-
-# ipaddress has been backported to 2.6+ in pypi. If it is installed on the
-# system, use it to handle IPAddress ServerAltnames (this was added in
-# python-3.5) otherwise only do DNS matching. This allows
-# util.ssl_match_hostname to continue to be used in Python 2.7.
-try:
- import ipaddress
-except ImportError:
- ipaddress = None
-
-__version__ = "3.5.0.1"
-
-
-class CertificateError(ValueError):
- pass
-
-
-def _dnsname_match(dn, hostname, max_wildcards=1):
- """Matching according to RFC 6125, section 6.4.3
-
- http://tools.ietf.org/html/rfc6125#section-6.4.3
- """
- pats = []
- if not dn:
- return False
-
- # Ported from python3-syntax:
- # leftmost, *remainder = dn.split(r'.')
- parts = dn.split(r".")
- leftmost = parts[0]
- remainder = parts[1:]
-
- wildcards = leftmost.count("*")
- if wildcards > max_wildcards:
- # Issue #17980: avoid denials of service by refusing more
- # than one wildcard per fragment. A survey of established
- # policy among SSL implementations showed it to be a
- # reasonable choice.
- raise CertificateError(
- "too many wildcards in certificate DNS name: " + repr(dn)
- )
-
- # speed up common case w/o wildcards
- if not wildcards:
- return dn.lower() == hostname.lower()
-
- # RFC 6125, section 6.4.3, subitem 1.
- # The client SHOULD NOT attempt to match a presented identifier in which
- # the wildcard character comprises a label other than the left-most label.
- if leftmost == "*":
- # When '*' is a fragment by itself, it matches a non-empty dotless
- # fragment.
- pats.append("[^.]+")
- elif leftmost.startswith("xn--") or hostname.startswith("xn--"):
- # RFC 6125, section 6.4.3, subitem 3.
- # The client SHOULD NOT attempt to match a presented identifier
- # where the wildcard character is embedded within an A-label or
- # U-label of an internationalized domain name.
- pats.append(re.escape(leftmost))
- else:
- # Otherwise, '*' matches any dotless string, e.g. www*
- pats.append(re.escape(leftmost).replace(r"\*", "[^.]*"))
-
- # add the remaining fragments, ignore any wildcards
- for frag in remainder:
- pats.append(re.escape(frag))
-
- pat = re.compile(r"\A" + r"\.".join(pats) + r"\Z", re.IGNORECASE)
- return pat.match(hostname)
-
-
-def _to_unicode(obj):
- if isinstance(obj, str) and sys.version_info < (3,):
- # ignored flake8 # F821 to support python 2.7 function
- obj = unicode(obj, encoding="ascii", errors="strict") # noqa: F821
- return obj
-
-
-def _ipaddress_match(ipname, host_ip):
- """Exact matching of IP addresses.
-
- RFC 6125 explicitly doesn't define an algorithm for this
- (section 1.7.2 - "Out of Scope").
- """
- # OpenSSL may add a trailing newline to a subjectAltName's IP address
- # Divergence from upstream: ipaddress can't handle byte str
- ip = ipaddress.ip_address(_to_unicode(ipname).rstrip())
- return ip == host_ip
-
-
-def match_hostname(cert, hostname):
- """Verify that *cert* (in decoded format as returned by
- SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125
- rules are followed, but IP addresses are not accepted for *hostname*.
-
- CertificateError is raised on failure. On success, the function
- returns nothing.
- """
- if not cert:
- raise ValueError(
- "empty or no certificate, match_hostname needs a "
- "SSL socket or SSL context with either "
- "CERT_OPTIONAL or CERT_REQUIRED"
- )
- try:
- # Divergence from upstream: ipaddress can't handle byte str
- host_ip = ipaddress.ip_address(_to_unicode(hostname))
- except (UnicodeError, ValueError):
- # ValueError: Not an IP address (common case)
- # UnicodeError: Divergence from upstream: Have to deal with ipaddress not taking
- # byte strings. addresses should be all ascii, so we consider it not
- # an ipaddress in this case
- host_ip = None
- except AttributeError:
- # Divergence from upstream: Make ipaddress library optional
- if ipaddress is None:
- host_ip = None
- else: # Defensive
- raise
- dnsnames = []
- san = cert.get("subjectAltName", ())
- for key, value in san:
- if key == "DNS":
- if host_ip is None and _dnsname_match(value, hostname):
- return
- dnsnames.append(value)
- elif key == "IP Address":
- if host_ip is not None and _ipaddress_match(value, host_ip):
- return
- dnsnames.append(value)
- if not dnsnames:
- # The subject is only checked when there is no dNSName entry
- # in subjectAltName
- for sub in cert.get("subject", ()):
- for key, value in sub:
- # XXX according to RFC 2818, the most specific Common Name
- # must be used.
- if key == "commonName":
- if _dnsname_match(value, hostname):
- return
- dnsnames.append(value)
- if len(dnsnames) > 1:
- raise CertificateError(
- "hostname %r "
- "doesn't match either of %s" % (hostname, ", ".join(map(repr, dnsnames)))
- )
- elif len(dnsnames) == 1:
- raise CertificateError("hostname %r doesn't match %r" % (hostname, dnsnames[0]))
- else:
- raise CertificateError(
- "no appropriate commonName or subjectAltName fields were found"
- )
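
A short sketch of the matcher above, assuming a urllib3 1.26 release recent enough to expose it at urllib3.util.ssl_match_hostname (the module was removed in urllib3 2.x); the certificate dict is a hand-written stand-in for what SSLSocket.getpeercert() would return:

from urllib3.util.ssl_match_hostname import CertificateError, match_hostname

cert = {"subjectAltName": (("DNS", "*.example.com"),)}

match_hostname(cert, "www.example.com")  # wildcard matches, returns None
try:
    match_hostname(cert, "www.example.org")
except CertificateError as exc:
    print(exc)  # hostname 'www.example.org' doesn't match '*.example.com'
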
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/poolmanager.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/poolmanager.py
deleted file mode 100644
index ca4ec341184adb3d30f3cd825b49a81b87d29b08..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/poolmanager.py
+++ /dev/null
@@ -1,537 +0,0 @@
-from __future__ import absolute_import
-
-import collections
-import functools
-import logging
-
-from ._collections import RecentlyUsedContainer
-from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, port_by_scheme
-from .exceptions import (
- LocationValueError,
- MaxRetryError,
- ProxySchemeUnknown,
- ProxySchemeUnsupported,
- URLSchemeUnknown,
-)
-from .packages import six
-from .packages.six.moves.urllib.parse import urljoin
-from .request import RequestMethods
-from .util.proxy import connection_requires_http_tunnel
-from .util.retry import Retry
-from .util.url import parse_url
-
-__all__ = ["PoolManager", "ProxyManager", "proxy_from_url"]
-
-
-log = logging.getLogger(__name__)
-
-SSL_KEYWORDS = (
- "key_file",
- "cert_file",
- "cert_reqs",
- "ca_certs",
- "ssl_version",
- "ca_cert_dir",
- "ssl_context",
- "key_password",
- "server_hostname",
-)
-
-# All known keyword arguments that could be provided to the pool manager, its
-# pools, or the underlying connections. This is used to construct a pool key.
-_key_fields = (
- "key_scheme", # str
- "key_host", # str
- "key_port", # int
- "key_timeout", # int or float or Timeout
- "key_retries", # int or Retry
- "key_strict", # bool
- "key_block", # bool
- "key_source_address", # str
- "key_key_file", # str
- "key_key_password", # str
- "key_cert_file", # str
- "key_cert_reqs", # str
- "key_ca_certs", # str
- "key_ssl_version", # str
- "key_ca_cert_dir", # str
- "key_ssl_context", # instance of ssl.SSLContext or urllib3.util.ssl_.SSLContext
- "key_maxsize", # int
- "key_headers", # dict
- "key__proxy", # parsed proxy url
- "key__proxy_headers", # dict
- "key__proxy_config", # class
- "key_socket_options", # list of (level (int), optname (int), value (int or str)) tuples
- "key__socks_options", # dict
- "key_assert_hostname", # bool or string
- "key_assert_fingerprint", # str
- "key_server_hostname", # str
-)
-
-#: The namedtuple class used to construct keys for the connection pool.
-#: All custom key schemes should include the fields in this key at a minimum.
-PoolKey = collections.namedtuple("PoolKey", _key_fields)
-
-_proxy_config_fields = ("ssl_context", "use_forwarding_for_https")
-ProxyConfig = collections.namedtuple("ProxyConfig", _proxy_config_fields)
-
-
-def _default_key_normalizer(key_class, request_context):
- """
- Create a pool key out of a request context dictionary.
-
- According to RFC 3986, both the scheme and host are case-insensitive.
- Therefore, this function normalizes both before constructing the pool
- key for an HTTPS request. If you wish to change this behaviour, provide
- alternate callables to ``key_fn_by_scheme``.
-
- :param key_class:
- The class to use when constructing the key. This should be a namedtuple
- with the ``scheme`` and ``host`` keys at a minimum.
- :type key_class: namedtuple
- :param request_context:
- A dictionary-like object that contains the context for a request.
- :type request_context: dict
-
- :return: A namedtuple that can be used as a connection pool key.
- :rtype: PoolKey
- """
- # Since we mutate the dictionary, make a copy first
- context = request_context.copy()
- context["scheme"] = context["scheme"].lower()
- context["host"] = context["host"].lower()
-
- # These are both dictionaries and need to be transformed into frozensets
- for key in ("headers", "_proxy_headers", "_socks_options"):
- if key in context and context[key] is not None:
- context[key] = frozenset(context[key].items())
-
- # The socket_options key may be a list and needs to be transformed into a
- # tuple.
- socket_opts = context.get("socket_options")
- if socket_opts is not None:
- context["socket_options"] = tuple(socket_opts)
-
- # Map the kwargs to the names in the namedtuple - this is necessary since
- # namedtuples can't have fields starting with '_'.
- for key in list(context.keys()):
- context["key_" + key] = context.pop(key)
-
- # Default to ``None`` for keys missing from the context
- for field in key_class._fields:
- if field not in context:
- context[field] = None
-
- return key_class(**context)
-
-
-#: A dictionary that maps a scheme to a callable that creates a pool key.
-#: This can be used to alter the way pool keys are constructed, if desired.
-#: Each PoolManager makes a copy of this dictionary so they can be configured
-#: globally here, or individually on the instance.
-key_fn_by_scheme = {
- "http": functools.partial(_default_key_normalizer, PoolKey),
- "https": functools.partial(_default_key_normalizer, PoolKey),
-}
-
-pool_classes_by_scheme = {"http": HTTPConnectionPool, "https": HTTPSConnectionPool}
-
-
-class PoolManager(RequestMethods):
- """
- Allows for arbitrary requests while transparently keeping track of
- necessary connection pools for you.
-
- :param num_pools:
- Number of connection pools to cache before discarding the least
- recently used pool.
-
- :param headers:
- Headers to include with all requests, unless other headers are given
- explicitly.
-
- :param \\**connection_pool_kw:
- Additional parameters are used to create fresh
- :class:`urllib3.connectionpool.ConnectionPool` instances.
-
- Example::
-
- >>> manager = PoolManager(num_pools=2)
- >>> r = manager.request('GET', 'http://google.com/')
- >>> r = manager.request('GET', 'http://google.com/mail')
- >>> r = manager.request('GET', 'http://yahoo.com/')
- >>> len(manager.pools)
- 2
-
- """
-
- proxy = None
- proxy_config = None
-
- def __init__(self, num_pools=10, headers=None, **connection_pool_kw):
- RequestMethods.__init__(self, headers)
- self.connection_pool_kw = connection_pool_kw
- self.pools = RecentlyUsedContainer(num_pools, dispose_func=lambda p: p.close())
-
- # Locally set the pool classes and keys so other PoolManagers can
- # override them.
- self.pool_classes_by_scheme = pool_classes_by_scheme
- self.key_fn_by_scheme = key_fn_by_scheme.copy()
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- self.clear()
- # Return False to re-raise any potential exceptions
- return False
-
- def _new_pool(self, scheme, host, port, request_context=None):
- """
- Create a new :class:`urllib3.connectionpool.ConnectionPool` based on host, port, scheme, and
- any additional pool keyword arguments.
-
- If ``request_context`` is provided, it is provided as keyword arguments
- to the pool class used. This method is used to actually create the
- connection pools handed out by :meth:`connection_from_url` and
- companion methods. It is intended to be overridden for customization.
- """
- pool_cls = self.pool_classes_by_scheme[scheme]
- if request_context is None:
- request_context = self.connection_pool_kw.copy()
-
- # Although the context has everything necessary to create the pool,
- # this function has historically only used the scheme, host, and port
- # in the positional args. When an API change is acceptable these can
- # be removed.
- for key in ("scheme", "host", "port"):
- request_context.pop(key, None)
-
- if scheme == "http":
- for kw in SSL_KEYWORDS:
- request_context.pop(kw, None)
-
- return pool_cls(host, port, **request_context)
-
- def clear(self):
- """
- Empty our store of pools and direct them all to close.
-
- This will not affect in-flight connections, but they will not be
- re-used after completion.
- """
- self.pools.clear()
-
- def connection_from_host(self, host, port=None, scheme="http", pool_kwargs=None):
- """
- Get a :class:`urllib3.connectionpool.ConnectionPool` based on the host, port, and scheme.
-
- If ``port`` isn't given, it will be derived from the ``scheme`` using
- ``urllib3.connectionpool.port_by_scheme``. If ``pool_kwargs`` is
- provided, it is merged with the instance's ``connection_pool_kw``
- variable and used to create the new connection pool, if one is
- needed.
- """
-
- if not host:
- raise LocationValueError("No host specified.")
-
- request_context = self._merge_pool_kwargs(pool_kwargs)
- request_context["scheme"] = scheme or "http"
- if not port:
- port = port_by_scheme.get(request_context["scheme"].lower(), 80)
- request_context["port"] = port
- request_context["host"] = host
-
- return self.connection_from_context(request_context)
-
- def connection_from_context(self, request_context):
- """
- Get a :class:`urllib3.connectionpool.ConnectionPool` based on the request context.
-
- ``request_context`` must at least contain the ``scheme`` key and its
- value must be a key in ``key_fn_by_scheme`` instance variable.
- """
- scheme = request_context["scheme"].lower()
- pool_key_constructor = self.key_fn_by_scheme.get(scheme)
- if not pool_key_constructor:
- raise URLSchemeUnknown(scheme)
- pool_key = pool_key_constructor(request_context)
-
- return self.connection_from_pool_key(pool_key, request_context=request_context)
-
- def connection_from_pool_key(self, pool_key, request_context=None):
- """
- Get a :class:`urllib3.connectionpool.ConnectionPool` based on the provided pool key.
-
- ``pool_key`` should be a namedtuple that only contains immutable
- objects. At a minimum it must have the ``scheme``, ``host``, and
- ``port`` fields.
- """
- with self.pools.lock:
- # If the scheme, host, or port doesn't match existing open
- # connections, open a new ConnectionPool.
- pool = self.pools.get(pool_key)
- if pool:
- return pool
-
- # Make a fresh ConnectionPool of the desired type
- scheme = request_context["scheme"]
- host = request_context["host"]
- port = request_context["port"]
- pool = self._new_pool(scheme, host, port, request_context=request_context)
- self.pools[pool_key] = pool
-
- return pool
-
- def connection_from_url(self, url, pool_kwargs=None):
- """
- Similar to :func:`urllib3.connectionpool.connection_from_url`.
-
- If ``pool_kwargs`` is not provided and a new pool needs to be
- constructed, ``self.connection_pool_kw`` is used to initialize
- the :class:`urllib3.connectionpool.ConnectionPool`. If ``pool_kwargs``
- is provided, it is used instead. Note that if a new pool does not
- need to be created for the request, the provided ``pool_kwargs`` are
- not used.
- """
- u = parse_url(url)
- return self.connection_from_host(
- u.host, port=u.port, scheme=u.scheme, pool_kwargs=pool_kwargs
- )
-
- def _merge_pool_kwargs(self, override):
- """
- Merge a dictionary of override values for self.connection_pool_kw.
-
- This does not modify self.connection_pool_kw and returns a new dict.
- Any keys in the override dictionary with a value of ``None`` are
- removed from the merged dictionary.
- """
- base_pool_kwargs = self.connection_pool_kw.copy()
- if override:
- for key, value in override.items():
- if value is None:
- try:
- del base_pool_kwargs[key]
- except KeyError:
- pass
- else:
- base_pool_kwargs[key] = value
- return base_pool_kwargs
-
- def _proxy_requires_url_absolute_form(self, parsed_url):
- """
- Indicates if the proxy requires the complete destination URL in the
- request. Normally this is only needed when not using an HTTP CONNECT
- tunnel.
- """
- if self.proxy is None:
- return False
-
- return not connection_requires_http_tunnel(
- self.proxy, self.proxy_config, parsed_url.scheme
- )
-
- def _validate_proxy_scheme_url_selection(self, url_scheme):
- """
- Validates that we're not attempting to do TLS in TLS connections on
- Python 2 or with unsupported SSL implementations.
- """
- if self.proxy is None or url_scheme != "https":
- return
-
- if self.proxy.scheme != "https":
- return
-
- if six.PY2 and not self.proxy_config.use_forwarding_for_https:
- raise ProxySchemeUnsupported(
- "Contacting HTTPS destinations through HTTPS proxies "
- "'via CONNECT tunnels' is not supported in Python 2"
- )
-
- def urlopen(self, method, url, redirect=True, **kw):
- """
- Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
- with custom cross-host redirect logic and only sends the request-uri
- portion of the ``url``.
-
- The given ``url`` parameter must be absolute, such that an appropriate
- :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
- """
- u = parse_url(url)
- self._validate_proxy_scheme_url_selection(u.scheme)
-
- conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)
-
- kw["assert_same_host"] = False
- kw["redirect"] = False
-
- if "headers" not in kw:
- kw["headers"] = self.headers.copy()
-
- if self._proxy_requires_url_absolute_form(u):
- response = conn.urlopen(method, url, **kw)
- else:
- response = conn.urlopen(method, u.request_uri, **kw)
-
- redirect_location = redirect and response.get_redirect_location()
- if not redirect_location:
- return response
-
- # Support relative URLs for redirecting.
- redirect_location = urljoin(url, redirect_location)
-
- # RFC 7231, Section 6.4.4
- if response.status == 303:
- method = "GET"
-
- retries = kw.get("retries")
- if not isinstance(retries, Retry):
- retries = Retry.from_int(retries, redirect=redirect)
-
- # Strip headers marked as unsafe to forward to the redirected location.
- # Check remove_headers_on_redirect to avoid a potential network call within
- # conn.is_same_host() which may use socket.gethostbyname() in the future.
- if retries.remove_headers_on_redirect and not conn.is_same_host(
- redirect_location
- ):
- headers = list(six.iterkeys(kw["headers"]))
- for header in headers:
- if header.lower() in retries.remove_headers_on_redirect:
- kw["headers"].pop(header, None)
-
- try:
- retries = retries.increment(method, url, response=response, _pool=conn)
- except MaxRetryError:
- if retries.raise_on_redirect:
- response.drain_conn()
- raise
- return response
-
- kw["retries"] = retries
- kw["redirect"] = redirect
-
- log.info("Redirecting %s -> %s", url, redirect_location)
-
- response.drain_conn()
- return self.urlopen(method, redirect_location, **kw)
-
-
-class ProxyManager(PoolManager):
- """
- Behaves just like :class:`PoolManager`, but sends all requests through
- the defined proxy, using the CONNECT method for HTTPS URLs.
-
- :param proxy_url:
- The URL of the proxy to be used.
-
- :param proxy_headers:
- A dictionary containing headers that will be sent to the proxy. In case
- of HTTP they are being sent with each request, while in the
- HTTPS/CONNECT case they are sent only once. Could be used for proxy
- authentication.
-
- :param proxy_ssl_context:
- The proxy SSL context is used to establish the TLS connection to the
- proxy when using HTTPS proxies.
-
- :param use_forwarding_for_https:
- (Defaults to False) If set to True will forward requests to the HTTPS
- proxy to be made on behalf of the client instead of creating a TLS
- tunnel via the CONNECT method. **Enabling this flag means that request
- and response headers and content will be visible from the HTTPS proxy**
- whereas tunneling keeps request and response headers and content
- private. IP address, target hostname, SNI, and port are always visible
- to an HTTPS proxy even when this flag is disabled.
-
- Example:
- >>> proxy = urllib3.ProxyManager('http://localhost:3128/')
- >>> r1 = proxy.request('GET', 'http://google.com/')
- >>> r2 = proxy.request('GET', 'http://httpbin.org/')
- >>> len(proxy.pools)
- 1
- >>> r3 = proxy.request('GET', 'https://httpbin.org/')
- >>> r4 = proxy.request('GET', 'https://twitter.com/')
- >>> len(proxy.pools)
- 3
-
- """
-
- def __init__(
- self,
- proxy_url,
- num_pools=10,
- headers=None,
- proxy_headers=None,
- proxy_ssl_context=None,
- use_forwarding_for_https=False,
- **connection_pool_kw
- ):
-
- if isinstance(proxy_url, HTTPConnectionPool):
- proxy_url = "%s://%s:%i" % (
- proxy_url.scheme,
- proxy_url.host,
- proxy_url.port,
- )
- proxy = parse_url(proxy_url)
-
- if proxy.scheme not in ("http", "https"):
- raise ProxySchemeUnknown(proxy.scheme)
-
- if not proxy.port:
- port = port_by_scheme.get(proxy.scheme, 80)
- proxy = proxy._replace(port=port)
-
- self.proxy = proxy
- self.proxy_headers = proxy_headers or {}
- self.proxy_ssl_context = proxy_ssl_context
- self.proxy_config = ProxyConfig(proxy_ssl_context, use_forwarding_for_https)
-
- connection_pool_kw["_proxy"] = self.proxy
- connection_pool_kw["_proxy_headers"] = self.proxy_headers
- connection_pool_kw["_proxy_config"] = self.proxy_config
-
- super(ProxyManager, self).__init__(num_pools, headers, **connection_pool_kw)
-
- def connection_from_host(self, host, port=None, scheme="http", pool_kwargs=None):
- if scheme == "https":
- return super(ProxyManager, self).connection_from_host(
- host, port, scheme, pool_kwargs=pool_kwargs
- )
-
- return super(ProxyManager, self).connection_from_host(
- self.proxy.host, self.proxy.port, self.proxy.scheme, pool_kwargs=pool_kwargs
- )
-
- def _set_proxy_headers(self, url, headers=None):
- """
- Sets headers needed by proxies: specifically, the Accept and Host
- headers. Only sets headers not provided by the user.
- """
- headers_ = {"Accept": "*/*"}
-
- netloc = parse_url(url).netloc
- if netloc:
- headers_["Host"] = netloc
-
- if headers:
- headers_.update(headers)
- return headers_
-
- def urlopen(self, method, url, redirect=True, **kw):
- "Same as HTTP(S)ConnectionPool.urlopen, ``url`` must be absolute."
- u = parse_url(url)
- if not connection_requires_http_tunnel(self.proxy, self.proxy_config, u.scheme):
- # For connections using HTTP CONNECT, httplib sets the necessary
- # headers on the CONNECT to the proxy. If we're not using CONNECT,
- # we'll definitely need to set 'Host' at the very least.
- headers = kw.get("headers", self.headers)
- kw["headers"] = self._set_proxy_headers(url, headers)
-
- return super(ProxyManager, self).urlopen(method, url, redirect=redirect, **kw)
-
-
-def proxy_from_url(url, **kw):
- return ProxyManager(proxy_url=url, **kw)
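The pool-key machinery deleted above is easiest to see in action: equivalent scheme/host/port combinations normalize to one PoolKey and therefore share one connection pool. A small sketch, assuming urllib3 is installed; no requests are sent and no sockets are opened.

import urllib3

manager = urllib3.PoolManager(num_pools=5)

# Scheme and host are lower-cased by _default_key_normalizer, and the missing
# port is filled in from port_by_scheme, so both lookups hit the same PoolKey.
pool_a = manager.connection_from_host("Example.COM", 443, scheme="https")
pool_b = manager.connection_from_url("https://example.com/some/path")

print(pool_a is pool_b)    # True: one HTTPSConnectionPool instance is reused
print(len(manager.pools))  # 1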
diff --git a/spaces/BilalSardar/Like-Chatgpt-clone/app.py b/spaces/BilalSardar/Like-Chatgpt-clone/app.py
deleted file mode 100644
index 60ce1425fde524542514bb9ec270e8ef2fa64af5..0000000000000000000000000000000000000000
--- a/spaces/BilalSardar/Like-Chatgpt-clone/app.py
+++ /dev/null
@@ -1,54 +0,0 @@
-from langchain import OpenAI, ConversationChain, LLMChain, PromptTemplate
-from langchain.chains.conversation.memory import ConversationBufferWindowMemory
-import os
-
-def chatgpt(background,key,prompts):
- try:
- template = background+"""
- {history}
- Human: {human_input}
- Assistant:"""
-
- prompt = PromptTemplate(
- input_variables=["history", "human_input"],
- template=template
- )
-
-
- chatgpt_chain = LLMChain(
- llm=OpenAI(temperature=0, openai_api_key=key),
- prompt=prompt,
- verbose=True,
- memory=ConversationBufferWindowMemory(k=2),
- )
-
- output = chatgpt_chain.predict(human_input=prompts)
- #print(output)
- return output
- except:
- return "Enter your key"
-
-import gradio as gr
-Background_in = gr.inputs.Textbox(label="Enter Background")
-key_in =gr.inputs.Textbox(label="Enter Your API key")
-prompt = gr.inputs.Textbox(label="Enter what you want")
-output=gr.inputs.Textbox(label="Your Result")
-examples=[[""""Assistant is a large language model trained by OpenAI.
-
-Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
-
-Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
-
-Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.""","sk-IQHMkGxxxxxxxxx", """I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd."""]]
-
-demo=gr.Interface(fn=chatgpt,
- inputs=[Background_in, key_in, prompt],
- outputs=output,
- examples=examples,
- title="Like-GPTClone",
- description="This space is a simple implementation of langChain pacakge github(https://github.com/hwchase17/langchain) "
-
- )
-
-demo.launch(debug=True)
\ No newline at end of file
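The deleted Space wires a background string, a rolling two-turn memory, and the user input into one prompt. The template part can be checked in isolation; this sketch assumes the legacy langchain package the app targets is installed and calls no model, so no API key is needed.

from langchain.prompts import PromptTemplate

background = "Assistant is a helpful large language model."
template = background + """
{history}
Human: {human_input}
Assistant:"""

prompt = PromptTemplate(input_variables=["history", "human_input"], template=template)
# The chain above fills {history} from ConversationBufferWindowMemory(k=2);
# here we substitute a fake two-line history by hand.
print(prompt.format(history="Human: hi\nAssistant: hello", human_input="What is Bluetooth?"))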
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/vis/base.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/vis/base.py
deleted file mode 100644
index b3f1ae5aff28375a98807982310f0fe69febe832..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/vis/base.py
+++ /dev/null
@@ -1,190 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import logging
-import numpy as np
-import cv2
-import torch
-
-Image = np.ndarray
-Boxes = torch.Tensor
-
-
-class MatrixVisualizer(object):
- """
- Base visualizer for matrix data
- """
-
- def __init__(
- self,
- inplace=True,
- cmap=cv2.COLORMAP_PARULA,
- val_scale=1.0,
- alpha=0.7,
- interp_method_matrix=cv2.INTER_LINEAR,
- interp_method_mask=cv2.INTER_NEAREST,
- ):
- self.inplace = inplace
- self.cmap = cmap
- self.val_scale = val_scale
- self.alpha = alpha
- self.interp_method_matrix = interp_method_matrix
- self.interp_method_mask = interp_method_mask
-
- def visualize(self, image_bgr, mask, matrix, bbox_xywh):
- self._check_image(image_bgr)
- self._check_mask_matrix(mask, matrix)
- if self.inplace:
- image_target_bgr = image_bgr
- else:
- image_target_bgr = image_bgr * 0
- x, y, w, h = [int(v) for v in bbox_xywh]
- if w <= 0 or h <= 0:
- return image_bgr
- mask, matrix = self._resize(mask, matrix, w, h)
- mask_bg = np.tile((mask == 0)[:, :, np.newaxis], [1, 1, 3])
- matrix_scaled = matrix.astype(np.float32) * self.val_scale
- _EPSILON = 1e-6
- if np.any(matrix_scaled > 255 + _EPSILON):
- logger = logging.getLogger(__name__)
- logger.warning(
- f"Matrix has values > {255 + _EPSILON} after " f"scaling, clipping to [0..255]"
- )
- matrix_scaled_8u = matrix_scaled.clip(0, 255).astype(np.uint8)
- matrix_vis = cv2.applyColorMap(matrix_scaled_8u, self.cmap)
- matrix_vis[mask_bg] = image_target_bgr[y : y + h, x : x + w, :][mask_bg]
- image_target_bgr[y : y + h, x : x + w, :] = (
- image_target_bgr[y : y + h, x : x + w, :] * (1.0 - self.alpha) + matrix_vis * self.alpha
- )
- return image_target_bgr.astype(np.uint8)
-
- def _resize(self, mask, matrix, w, h):
- if (w != mask.shape[1]) or (h != mask.shape[0]):
- mask = cv2.resize(mask, (w, h), self.interp_method_mask)
- if (w != matrix.shape[1]) or (h != matrix.shape[0]):
- matrix = cv2.resize(matrix, (w, h), self.interp_method_matrix)
- return mask, matrix
-
- def _check_image(self, image_rgb):
- assert len(image_rgb.shape) == 3
- assert image_rgb.shape[2] == 3
- assert image_rgb.dtype == np.uint8
-
- def _check_mask_matrix(self, mask, matrix):
- assert len(matrix.shape) == 2
- assert len(mask.shape) == 2
- assert mask.dtype == np.uint8
-
-
-class RectangleVisualizer(object):
-
- _COLOR_GREEN = (18, 127, 15)
-
- def __init__(self, color=_COLOR_GREEN, thickness=1):
- self.color = color
- self.thickness = thickness
-
- def visualize(self, image_bgr, bbox_xywh, color=None, thickness=None):
- x, y, w, h = bbox_xywh
- color = color or self.color
- thickness = thickness or self.thickness
- cv2.rectangle(image_bgr, (int(x), int(y)), (int(x + w), int(y + h)), color, thickness)
- return image_bgr
-
-
-class PointsVisualizer(object):
-
- _COLOR_GREEN = (18, 127, 15)
-
- def __init__(self, color_bgr=_COLOR_GREEN, r=5):
- self.color_bgr = color_bgr
- self.r = r
-
- def visualize(self, image_bgr, pts_xy, colors_bgr=None, rs=None):
- for j, pt_xy in enumerate(pts_xy):
- x, y = pt_xy
- color_bgr = colors_bgr[j] if colors_bgr is not None else self.color_bgr
- r = rs[j] if rs is not None else self.r
- cv2.circle(image_bgr, (x, y), r, color_bgr, -1)
- return image_bgr
-
-
-class TextVisualizer(object):
-
- _COLOR_GRAY = (218, 227, 218)
- _COLOR_WHITE = (255, 255, 255)
-
- def __init__(
- self,
- font_face=cv2.FONT_HERSHEY_SIMPLEX,
- font_color_bgr=_COLOR_GRAY,
- font_scale=0.35,
- font_line_type=cv2.LINE_AA,
- font_line_thickness=1,
- fill_color_bgr=_COLOR_WHITE,
- fill_color_transparency=1.0,
- frame_color_bgr=_COLOR_WHITE,
- frame_color_transparency=1.0,
- frame_thickness=1,
- ):
- self.font_face = font_face
- self.font_color_bgr = font_color_bgr
- self.font_scale = font_scale
- self.font_line_type = font_line_type
- self.font_line_thickness = font_line_thickness
- self.fill_color_bgr = fill_color_bgr
- self.fill_color_transparency = fill_color_transparency
- self.frame_color_bgr = frame_color_bgr
- self.frame_color_transparency = frame_color_transparency
- self.frame_thickness = frame_thickness
-
- def visualize(self, image_bgr, txt, topleft_xy):
- txt_w, txt_h = self.get_text_size_wh(txt)
- topleft_xy = tuple(map(int, topleft_xy))
- x, y = topleft_xy
- if self.frame_color_transparency < 1.0:
- t = self.frame_thickness
- image_bgr[y - t : y + txt_h + t, x - t : x + txt_w + t, :] = (
- image_bgr[y - t : y + txt_h + t, x - t : x + txt_w + t, :]
- * self.frame_color_transparency
- + np.array(self.frame_color_bgr) * (1.0 - self.frame_color_transparency)
- ).astype(float)
- if self.fill_color_transparency < 1.0:
- image_bgr[y : y + txt_h, x : x + txt_w, :] = (
- image_bgr[y : y + txt_h, x : x + txt_w, :] * self.fill_color_transparency
- + np.array(self.fill_color_bgr) * (1.0 - self.fill_color_transparency)
- ).astype(float)
- cv2.putText(
- image_bgr,
- txt,
- topleft_xy,
- self.font_face,
- self.font_scale,
- self.font_color_bgr,
- self.font_line_thickness,
- self.font_line_type,
- )
- return image_bgr
-
- def get_text_size_wh(self, txt):
- ((txt_w, txt_h), _) = cv2.getTextSize(
- txt, self.font_face, self.font_scale, self.font_line_thickness
- )
- return txt_w, txt_h
-
-
-class CompoundVisualizer(object):
- def __init__(self, visualizers):
- self.visualizers = visualizers
-
- def visualize(self, image_bgr, data):
- assert len(data) == len(self.visualizers), (
- "The number of datas {} should match the number of visualizers"
- " {}".format(len(data), len(self.visualizers))
- )
- image = image_bgr
- for i, visualizer in enumerate(self.visualizers):
- image = visualizer.visualize(image, data[i])
- return image
-
- def __str__(self):
- visualizer_str = ", ".join([str(v) for v in self.visualizers])
- return "Compound Visualizer [{}]".format(visualizer_str)
diff --git a/spaces/CVPR/LIVE/pydiffvg_tensorflow/color.py b/spaces/CVPR/LIVE/pydiffvg_tensorflow/color.py
deleted file mode 100644
index e0db61215407dfbee8e4021aa0b32e70df473ddb..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pydiffvg_tensorflow/color.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import tensorflow as tf
-
-class LinearGradient:
- def __init__(self,
- begin = tf.constant([0.0, 0.0]),
- end = tf.constant([0.0, 0.0]),
- offsets = tf.constant([0.0]),
- stop_colors = tf.constant([0.0, 0.0, 0.0, 0.0])):
- self.begin = begin
- self.end = end
- self.offsets = offsets
- self.stop_colors = stop_colors
-
-class RadialGradient:
- def __init__(self,
- center = tf.constant([0.0, 0.0]),
- radius = tf.constant([0.0, 0.0]),
- offsets = tf.constant([0.0]),
- stop_colors = tf.constant([0.0, 0.0, 0.0, 0.0])):
- self.center = center
- self.radius = radius
- self.offsets = offsets
- self.stop_colors = stop_colors
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/inner_product.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/inner_product.h
deleted file mode 100644
index 9423b1bdbf19deba9f7ec91d9ce4417b0c2f5145..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/inner_product.h
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// the purpose of this header is to #include the inner_product.h header
-// of the sequential, host, and device systems. It should be #included in any
-// code which uses adl to dispatch inner_product
-
-#include <thrust/system/detail/sequential/inner_product.h>
-
-// SCons can't see through the #defines below to figure out what this header
-// includes, so we fake it out by specifying all possible files we might end up
-// including inside an #if 0.
-#if 0
-#include <thrust/system/cpp/detail/inner_product.h>
-#include <thrust/system/cuda/detail/inner_product.h>
-#include <thrust/system/omp/detail/inner_product.h>
-#include <thrust/system/tbb/detail/inner_product.h>
-#endif
-
-#define __THRUST_HOST_SYSTEM_INNER_PRODUCT_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/inner_product.h>
-#include __THRUST_HOST_SYSTEM_INNER_PRODUCT_HEADER
-#undef __THRUST_HOST_SYSTEM_INNER_PRODUCT_HEADER
-
-#define __THRUST_DEVICE_SYSTEM_INNER_PRODUCT_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/inner_product.h>
-#include __THRUST_DEVICE_SYSTEM_INNER_PRODUCT_HEADER
-#undef __THRUST_DEVICE_SYSTEM_INNER_PRODUCT_HEADER
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/fill.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/fill.h
deleted file mode 100644
index 20c636096c25e9a2d951ad2f50a4de72d0d1b968..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/fill.h
+++ /dev/null
@@ -1,22 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system has no special fill functions
-
diff --git a/spaces/CVPR/Text2Human/Text2Human/utils/language_utils.py b/spaces/CVPR/Text2Human/Text2Human/utils/language_utils.py
deleted file mode 100644
index bb2ef69b3001f10b20069f40ec0141d28260482f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Text2Human/Text2Human/utils/language_utils.py
+++ /dev/null
@@ -1,315 +0,0 @@
-from curses import A_ATTRIBUTES
-
-import numpy
-import torch
-from pip import main
-from sentence_transformers import SentenceTransformer, util
-
-# predefined shape text
-upper_length_text = [
- 'sleeveless', 'without sleeves', 'sleeves have been cut off', 'tank top',
- 'tank shirt', 'muscle shirt', 'short-sleeve', 'short sleeves',
- 'with short sleeves', 'medium-sleeve', 'medium sleeves',
- 'with medium sleeves', 'sleeves reach elbow', 'long-sleeve',
- 'long sleeves', 'with long sleeves'
-]
-upper_length_attr = {
- 'sleeveless': 0,
- 'without sleeves': 0,
- 'sleeves have been cut off': 0,
- 'tank top': 0,
- 'tank shirt': 0,
- 'muscle shirt': 0,
- 'short-sleeve': 1,
- 'with short sleeves': 1,
- 'short sleeves': 1,
- 'medium-sleeve': 2,
- 'with medium sleeves': 2,
- 'medium sleeves': 2,
- 'sleeves reach elbow': 2,
- 'long-sleeve': 3,
- 'long sleeves': 3,
- 'with long sleeves': 3
-}
-lower_length_text = [
- 'three-point', 'medium', 'short', 'covering knee', 'cropped',
- 'three-quarter', 'long', 'slack', 'of long length'
-]
-lower_length_attr = {
- 'three-point': 0,
- 'medium': 1,
- 'covering knee': 1,
- 'short': 1,
- 'cropped': 2,
- 'three-quarter': 2,
- 'long': 3,
- 'slack': 3,
- 'of long length': 3
-}
-socks_length_text = [
- 'socks', 'stocking', 'pantyhose', 'leggings', 'sheer hosiery'
-]
-socks_length_attr = {
- 'socks': 0,
- 'stocking': 1,
- 'pantyhose': 1,
- 'leggings': 1,
- 'sheer hosiery': 1
-}
-hat_text = ['hat', 'cap', 'chapeau']
-eyeglasses_text = ['sunglasses']
-belt_text = ['belt', 'with a dress tied around the waist']
-outer_shape_text = [
- 'with outer clothing open', 'with outer clothing unzipped',
- 'covering inner clothes', 'with outer clothing zipped'
-]
-outer_shape_attr = {
- 'with outer clothing open': 0,
- 'with outer clothing unzipped': 0,
- 'covering inner clothes': 1,
- 'with outer clothing zipped': 1
-}
-
-upper_types = [
- 'T-shirt', 'shirt', 'sweater', 'hoodie', 'tops', 'blouse', 'Basic Tee'
-]
-outer_types = [
- 'jacket', 'outer clothing', 'coat', 'overcoat', 'blazer', 'outerwear',
- 'duffle', 'cardigan'
-]
-skirt_types = ['skirt']
-dress_types = ['dress']
-pant_types = ['jeans', 'pants', 'trousers']
-rompers_types = ['rompers', 'bodysuit', 'jumpsuit']
-
-attr_names_list = [
- 'gender', 'hair length', '0 upper clothing length',
- '1 lower clothing length', '2 socks', '3 hat', '4 eyeglasses', '5 belt',
- '6 opening of outer clothing', '7 upper clothes', '8 outer clothing',
- '9 skirt', '10 dress', '11 pants', '12 rompers'
-]
-
-
-def generate_shape_attributes(user_shape_texts):
- model = SentenceTransformer('all-MiniLM-L6-v2')
- parsed_texts = user_shape_texts.split(',')
-
- text_num = len(parsed_texts)
-
- human_attr = [0, 0]
- attr = [1, 3, 0, 0, 0, 3, 1, 1, 0, 0, 0, 0, 0]
-
- changed = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- for text_id, text in enumerate(parsed_texts):
- user_embeddings = model.encode(text)
- if ('man' in text) and (text_id == 0):
- human_attr[0] = 0
- human_attr[1] = 0
-
- if ('woman' in text or 'lady' in text) and (text_id == 0):
- human_attr[0] = 1
- human_attr[1] = 2
-
- if (not changed[0]) and (text_id == 1):
- # upper length
- predefined_embeddings = model.encode(upper_length_text)
- similarities = util.dot_score(user_embeddings,
- predefined_embeddings)
- arg_idx = torch.argmax(similarities).item()
- attr[0] = upper_length_attr[upper_length_text[arg_idx]]
- changed[0] = 1
-
- if (not changed[1]) and ((text_num == 2 and text_id == 1) or
- (text_num > 2 and text_id == 2)):
- # lower length
- predefined_embeddings = model.encode(lower_length_text)
- similarities = util.dot_score(user_embeddings,
- predefined_embeddings)
- arg_idx = torch.argmax(similarities).item()
- attr[1] = lower_length_attr[lower_length_text[arg_idx]]
- changed[1] = 1
-
- if (not changed[2]) and (text_id > 2):
- # socks length
- predefined_embeddings = model.encode(socks_length_text)
- similarities = util.dot_score(user_embeddings,
- predefined_embeddings)
- arg_idx = torch.argmax(similarities).item()
- if similarities[0][arg_idx] > 0.7:
- attr[2] = arg_idx + 1
- changed[2] = 1
-
- if (not changed[3]) and (text_id > 2):
- # hat
- predefined_embeddings = model.encode(hat_text)
- similarities = util.dot_score(user_embeddings,
- predefined_embeddings)
- if similarities[0][0] > 0.7:
- attr[3] = 1
- changed[3] = 1
-
- if (not changed[4]) and (text_id > 2):
- # glasses
- predefined_embeddings = model.encode(eyeglasses_text)
- similarities = util.dot_score(user_embeddings,
- predefined_embeddings)
- arg_idx = torch.argmax(similarities).item()
- if similarities[0][arg_idx] > 0.7:
- attr[4] = arg_idx + 1
- changed[4] = 1
-
- if (not changed[5]) and (text_id > 2):
- # belt
- predefined_embeddings = model.encode(belt_text)
- similarities = util.dot_score(user_embeddings,
- predefined_embeddings)
- arg_idx = torch.argmax(similarities).item()
- if similarities[0][arg_idx] > 0.7:
- attr[5] = arg_idx + 1
- changed[5] = 1
-
- if (not changed[6]) and (text_id == 3):
- # outer coverage
- predefined_embeddings = model.encode(outer_shape_text)
- similarities = util.dot_score(user_embeddings,
- predefined_embeddings)
- arg_idx = torch.argmax(similarities).item()
- if similarities[0][arg_idx] > 0.7:
- attr[6] = arg_idx
- changed[6] = 1
-
- if (not changed[10]) and (text_num == 2 and text_id == 1):
- # dress_types
- predefined_embeddings = model.encode(dress_types)
- similarities = util.dot_score(user_embeddings,
- predefined_embeddings)
- similarity_skirt = util.dot_score(user_embeddings,
- model.encode(skirt_types))
- if similarities[0][0] > 0.5 and similarities[0][
- 0] > similarity_skirt[0][0]:
- attr[10] = 1
- attr[7] = 0
- attr[8] = 0
- attr[9] = 0
- attr[11] = 0
- attr[12] = 0
-
- changed[0] = 1
- changed[10] = 1
- changed[7] = 1
- changed[8] = 1
- changed[9] = 1
- changed[11] = 1
- changed[12] = 1
-
- if (not changed[12]) and (text_num == 2 and text_id == 1):
- # rompers_types
- predefined_embeddings = model.encode(rompers_types)
- similarities = util.dot_score(user_embeddings,
- predefined_embeddings)
- max_similarity = torch.max(similarities).item()
- if max_similarity > 0.6:
- attr[12] = 1
- attr[7] = 0
- attr[8] = 0
- attr[9] = 0
- attr[10] = 0
- attr[11] = 0
-
- changed[12] = 1
- changed[7] = 1
- changed[8] = 1
- changed[9] = 1
- changed[10] = 1
- changed[11] = 1
-
- if (not changed[7]) and (text_num > 2 and text_id == 1):
- # upper_types
- predefined_embeddings = model.encode(upper_types)
- similarities = util.dot_score(user_embeddings,
- predefined_embeddings)
- max_similarity = torch.max(similarities).item()
- if max_similarity > 0.6:
- attr[7] = 1
- changed[7] = 1
-
- if (not changed[8]) and (text_id == 3):
- # outer_types
- predefined_embeddings = model.encode(outer_types)
- similarities = util.dot_score(user_embeddings,
- predefined_embeddings)
- arg_idx = torch.argmax(similarities).item()
- if similarities[0][arg_idx] > 0.7:
- attr[6] = outer_shape_attr[outer_shape_text[arg_idx]]
- attr[8] = 1
- changed[8] = 1
-
- if (not changed[9]) and (text_num > 2 and text_id == 2):
- # skirt_types
- predefined_embeddings = model.encode(skirt_types)
- similarity_skirt = util.dot_score(user_embeddings,
- predefined_embeddings)
- similarity_dress = util.dot_score(user_embeddings,
- model.encode(dress_types))
- if similarity_skirt[0][0] > 0.7 and similarity_skirt[0][
- 0] > similarity_dress[0][0]:
- attr[9] = 1
- attr[10] = 0
- changed[9] = 1
- changed[10] = 1
-
- if (not changed[11]) and (text_num > 2 and text_id == 2):
- # pant_types
- predefined_embeddings = model.encode(pant_types)
- similarities = util.dot_score(user_embeddings,
- predefined_embeddings)
- max_similarity = torch.max(similarities).item()
- if max_similarity > 0.6:
- attr[11] = 1
- attr[9] = 0
- attr[10] = 0
- attr[12] = 0
- changed[11] = 1
- changed[9] = 1
- changed[10] = 1
- changed[12] = 1
-
- return human_attr + attr
-
-
-def generate_texture_attributes(user_text):
- parsed_texts = user_text.split(',')
-
- attr = []
- for text in parsed_texts:
- if ('pure color' in text) or ('solid color' in text):
- attr.append(4)
- elif ('spline' in text) or ('stripe' in text):
- attr.append(3)
- elif ('plaid' in text) or ('lattice' in text):
- attr.append(5)
- elif 'floral' in text:
- attr.append(1)
- elif 'denim' in text:
- attr.append(0)
- else:
- attr.append(17)
-
- if len(attr) == 1:
- attr.append(attr[0])
- attr.append(17)
-
- if len(attr) == 2:
- attr.append(17)
-
- return attr
-
-
-if __name__ == "__main__":
- user_request = input('Enter your request: ')
- while user_request != '\\q':
- attr = generate_shape_attributes(user_request)
- print(attr)
- for attr_name, attr_value in zip(attr_names_list, attr):
- print(attr_name, attr_value)
- user_request = input('Enter your request: ')
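Of the two parsers deleted above, the texture one is plain keyword matching and can be exercised without downloading the all-MiniLM-L6-v2 model (importing the module still requires sentence-transformers to be installed). A couple of sanity checks, assuming the functions and the predefined lists above are in scope.

print(generate_texture_attributes('pure color, stripe'))  # [4, 3, 17]
print(generate_texture_attributes('denim'))               # [0, 0, 17]

# The shape parser repeats the same pattern for every attribute: embed the user
# text, embed the predefined phrases, pick the closest one by dot product.
# Sketch of that core step (this path does download the model on first use):
# model = SentenceTransformer('all-MiniLM-L6-v2')
# scores = util.dot_score(model.encode('with long sleeves'), model.encode(upper_length_text))
# best = upper_length_text[scores.argmax().item()]   # expected: a long-sleeve phrase, attr 3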
diff --git a/spaces/CamCam17/Alexwww-davide-comic-book-characters/README.md b/spaces/CamCam17/Alexwww-davide-comic-book-characters/README.md
deleted file mode 100644
index 69e3efb33215d67b5c429ec8c1bb4a8f567a2188..0000000000000000000000000000000000000000
--- a/spaces/CamCam17/Alexwww-davide-comic-book-characters/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Livebook
-emoji: 📓
-colorFrom: pink
-colorTo: purple
-sdk: docker
-fullWidth: true
----
-
-You can install and run [Livebook](https://livebook.dev/) inside a Hugging Face Space. Here's [a tutorial](https://huggingface.co/docs/hub/spaces-sdks-docker-livebook) on how to do that.
\ No newline at end of file
diff --git a/spaces/Cpp4App/Cpp4App/CDM/detect_compo/deprecated/Block.py b/spaces/Cpp4App/Cpp4App/CDM/detect_compo/deprecated/Block.py
deleted file mode 100644
index 8cf301418704e0ac91f791126690100bbedc5a8b..0000000000000000000000000000000000000000
--- a/spaces/Cpp4App/Cpp4App/CDM/detect_compo/deprecated/Block.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import cv2
-from os.path import join as pjoin
-import time
-import numpy as np
-
-from CDM.detect_compo.lib_ip.Component import Component
-from CDM.config.CONFIG_UIED import Config
-C = Config()
-
-
-class Block(Component):
- def __init__(self, region, image_shape):
- super().__init__(region, image_shape)
- self.category = 'Block'
- self.parent = None
- self.children = []
- self.uicompo_ = None
- self.top_or_botm = None
- self.redundant = False
-
- def block_is_uicompo(self, image_shape, max_compo_scale):
- '''
- Check if the block is a UI component according to its relative size
- '''
- row, column = image_shape[:2]
- # print(height, height / row, max_compo_scale[0], height / row > max_compo_scale[0])
- # draw.draw_bounding_box(org, [corner], show=True)
- # ignore atomic components
- if self.bbox.height / row > max_compo_scale[0] or self.bbox.width / column > max_compo_scale[1]:
- return False
- return True
-
- def block_is_top_or_bottom_bar(self, image_shape, top_bottom_height):
- '''
- Check if the block is top bar or bottom bar
- '''
- height, width = image_shape[:2]
- (column_min, row_min, column_max, row_max) = self.bbox.put_bbox()
- if column_min < 5 and row_min < 5 and \
- width - column_max < 5 and row_max < height * top_bottom_height[0]:
- self.uicompo_ = True
- return True
- if column_min < 5 and row_min > height * top_bottom_height[1] and \
- width - column_max < 5 and height - row_max < 5:
- self.uicompo_ = True
- return True
- return False
-
- def block_erase_from_bin(self, binary, pad):
- (column_min, row_min, column_max, row_max) = self.put_bbox()
- column_min = max(column_min - pad, 0)
- column_max = min(column_max + pad, binary.shape[1])
- row_min = max(row_min - pad, 0)
- row_max = min(row_max + pad, binary.shape[0])
- cv2.rectangle(binary, (column_min, row_min), (column_max, row_max), (0), -1)
-
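The two geometric tests above depend only on the bounding box and the image size, so they can be restated without the Component/bbox classes. A hypothetical standalone version of the top/bottom-bar check; the 0.3/0.8 thresholds are stand-ins for the Config values the real code reads.

def is_top_or_bottom_bar(bbox, image_shape, top_bottom_height=(0.3, 0.8)):
    # bbox is (col_min, row_min, col_max, row_max); a bar must span the full
    # width and hug either the top or the bottom edge of the screenshot.
    height, width = image_shape[:2]
    col_min, row_min, col_max, row_max = bbox
    if col_min < 5 and row_min < 5 and width - col_max < 5 and row_max < height * top_bottom_height[0]:
        return True
    if col_min < 5 and row_min > height * top_bottom_height[1] and width - col_max < 5 and height - row_max < 5:
        return True
    return False

print(is_top_or_bottom_bar((0, 0, 1080, 150), (1920, 1080, 3)))     # True: status-bar strip
print(is_top_or_bottom_bar((200, 400, 600, 700), (1920, 1080, 3)))  # False: floating block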
diff --git a/spaces/CyberHarem/find_my_waifu/huggingface.py b/spaces/CyberHarem/find_my_waifu/huggingface.py
deleted file mode 100644
index 3c357336b7d664810a027bbb4e01f7c7d0169eff..0000000000000000000000000000000000000000
--- a/spaces/CyberHarem/find_my_waifu/huggingface.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import math
-import os
-
-from gchar.utils import get_requests_session
-from huggingface_hub import configure_http_backend, HfApi, HfFileSystem
-
-_NUM_TAGS = [
- ('n<1K', 0, 1_000),
- ('1K<n<10K', 1_000, 10_000),
- ('10K<n<100K', 10_000, 100_000),
- ('100K<n<1M', 100_000, 1_000_000),
- ('1M<n<10M', 1_000_000, 10_000_000),
- ('10M<n<100M', 10_000_000, 100_000_000),
- ('100M<n<1B', 100_000_000, 1_000_000_000),
- ('1B<n<10B', 1_000_000_000, 10_000_000_000),
- ('10B<n<100B', 10_000_000_000, 100_000_000_000),
- ('100B<n<1T', 100_000_000_000, 1_000_000_000_000),
- ('n>1T', 1_000_000_000_000, math.inf),
-]
-
-
-def number_to_tag(v):
- for tag, min_, max_ in _NUM_TAGS:
- if min_ <= v < max_:
- return tag
-
- raise ValueError(f'No tags found for {v!r}')
-
-
-configure_http_backend(get_requests_session)
-
-
-def get_hf_client() -> HfApi:
- return HfApi(token=os.environ.get('HF_TOKEN'))
-
-
-def get_hf_fs() -> HfFileSystem:
- return HfFileSystem(token=os.environ.get('HF_TOKEN'))
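With the size buckets restored above (they follow the standard Hugging Face dataset size categories, so the middle entries are a best-effort reconstruction of the garbled original), the tagging helper can be sanity-checked directly, assuming number_to_tag is in scope.

for n in (120, 5_000, 3_000_000_000):
    print(n, number_to_tag(n))
# 120 n<1K
# 5000 1K<n<10K
# 3000000000 1B<n<10B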
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/GimpPaletteFile.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/GimpPaletteFile.py
deleted file mode 100644
index d388928945a0f6711de2b1c8d1ed50ce192a8219..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/GimpPaletteFile.py
+++ /dev/null
@@ -1,56 +0,0 @@
-#
-# Python Imaging Library
-# $Id$
-#
-# stuff to read GIMP palette files
-#
-# History:
-# 1997-08-23 fl Created
-# 2004-09-07 fl Support GIMP 2.0 palette files.
-#
-# Copyright (c) Secret Labs AB 1997-2004. All rights reserved.
-# Copyright (c) Fredrik Lundh 1997-2004.
-#
-# See the README file for information on usage and redistribution.
-#
-
-import re
-
-from ._binary import o8
-
-
-class GimpPaletteFile:
- """File handler for GIMP's palette format."""
-
- rawmode = "RGB"
-
- def __init__(self, fp):
- self.palette = [o8(i) * 3 for i in range(256)]
-
- if fp.readline()[:12] != b"GIMP Palette":
- msg = "not a GIMP palette file"
- raise SyntaxError(msg)
-
- for i in range(256):
- s = fp.readline()
- if not s:
- break
-
- # skip fields and comment lines
- if re.match(rb"\w+:|#", s):
- continue
- if len(s) > 100:
- msg = "bad palette file"
- raise SyntaxError(msg)
-
- v = tuple(map(int, s.split()[:3]))
- if len(v) != 3:
- msg = "bad palette entry"
- raise ValueError(msg)
-
- self.palette[i] = o8(v[0]) + o8(v[1]) + o8(v[2])
-
- self.palette = b"".join(self.palette)
-
- def getpalette(self):
- return self.palette, self.rawmode
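With Pillow installed, the same class can be imported and fed an in-memory palette, which makes the parsing rules above easy to verify; the palette bytes below are made up.

import io
from PIL.GimpPaletteFile import GimpPaletteFile

gpl = (b"GIMP Palette\n"
       b"255   0   0 red\n"
       b"  0 255   0 green\n")

pal = GimpPaletteFile(io.BytesIO(gpl))
data, rawmode = pal.getpalette()
print(rawmode, len(data))   # RGB 768 (256 entries x 3 bytes, unset slots keep the grey ramp)
print(data[:6])             # b'\xff\x00\x00\x00\xff\x00'  -> red, then green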
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_config.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_config.py
deleted file mode 100644
index f46a5bfe6ba6093688c7a91bd51de9d137840432..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_config.py
+++ /dev/null
@@ -1,369 +0,0 @@
-import logging
-import os
-import ssl
-import sys
-import typing
-from pathlib import Path
-
-import certifi
-
-from ._compat import set_minimum_tls_version_1_2
-from ._models import Headers
-from ._types import CertTypes, HeaderTypes, TimeoutTypes, URLTypes, VerifyTypes
-from ._urls import URL
-from ._utils import get_ca_bundle_from_env
-
-DEFAULT_CIPHERS = ":".join(
- [
- "ECDHE+AESGCM",
- "ECDHE+CHACHA20",
- "DHE+AESGCM",
- "DHE+CHACHA20",
- "ECDH+AESGCM",
- "DH+AESGCM",
- "ECDH+AES",
- "DH+AES",
- "RSA+AESGCM",
- "RSA+AES",
- "!aNULL",
- "!eNULL",
- "!MD5",
- "!DSS",
- ]
-)
-
-
-logger = logging.getLogger("httpx")
-
-
-class UnsetType:
- pass # pragma: no cover
-
-
-UNSET = UnsetType()
-
-
-def create_ssl_context(
- cert: typing.Optional[CertTypes] = None,
- verify: VerifyTypes = True,
- trust_env: bool = True,
- http2: bool = False,
-) -> ssl.SSLContext:
- return SSLConfig(
- cert=cert, verify=verify, trust_env=trust_env, http2=http2
- ).ssl_context
-
-
-class SSLConfig:
- """
- SSL Configuration.
- """
-
- DEFAULT_CA_BUNDLE_PATH = Path(certifi.where())
-
- def __init__(
- self,
- *,
- cert: typing.Optional[CertTypes] = None,
- verify: VerifyTypes = True,
- trust_env: bool = True,
- http2: bool = False,
- ):
- self.cert = cert
- self.verify = verify
- self.trust_env = trust_env
- self.http2 = http2
- self.ssl_context = self.load_ssl_context()
-
- def load_ssl_context(self) -> ssl.SSLContext:
- logger.debug(
- "load_ssl_context verify=%r cert=%r trust_env=%r http2=%r",
- self.verify,
- self.cert,
- self.trust_env,
- self.http2,
- )
-
- if self.verify:
- return self.load_ssl_context_verify()
- return self.load_ssl_context_no_verify()
-
- def load_ssl_context_no_verify(self) -> ssl.SSLContext:
- """
- Return an SSL context for unverified connections.
- """
- context = self._create_default_ssl_context()
- context.check_hostname = False
- context.verify_mode = ssl.CERT_NONE
- self._load_client_certs(context)
- return context
-
- def load_ssl_context_verify(self) -> ssl.SSLContext:
- """
- Return an SSL context for verified connections.
- """
- if self.trust_env and self.verify is True:
- ca_bundle = get_ca_bundle_from_env()
- if ca_bundle is not None:
- self.verify = ca_bundle
-
- if isinstance(self.verify, ssl.SSLContext):
- # Allow passing in our own SSLContext object that's pre-configured.
- context = self.verify
- self._load_client_certs(context)
- return context
- elif isinstance(self.verify, bool):
- ca_bundle_path = self.DEFAULT_CA_BUNDLE_PATH
- elif Path(self.verify).exists():
- ca_bundle_path = Path(self.verify)
- else:
- raise IOError(
- "Could not find a suitable TLS CA certificate bundle, "
- "invalid path: {}".format(self.verify)
- )
-
- context = self._create_default_ssl_context()
- context.verify_mode = ssl.CERT_REQUIRED
- context.check_hostname = True
-
- # Signal to server support for PHA in TLS 1.3. Raises an
- # AttributeError if only read-only access is implemented.
- if sys.version_info >= (3, 8): # pragma: no cover
- try:
- context.post_handshake_auth = True
- except AttributeError: # pragma: no cover
- pass
-
- # Disable using 'commonName' for SSLContext.check_hostname
- # when the 'subjectAltName' extension isn't available.
- try:
- context.hostname_checks_common_name = False
- except AttributeError: # pragma: no cover
- pass
-
- if ca_bundle_path.is_file():
- cafile = str(ca_bundle_path)
- logger.debug("load_verify_locations cafile=%r", cafile)
- context.load_verify_locations(cafile=cafile)
- elif ca_bundle_path.is_dir():
- capath = str(ca_bundle_path)
- logger.debug("load_verify_locations capath=%r", capath)
- context.load_verify_locations(capath=capath)
-
- self._load_client_certs(context)
-
- return context
-
- def _create_default_ssl_context(self) -> ssl.SSLContext:
- """
- Creates the default SSLContext object that's used for both verified
- and unverified connections.
- """
- context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
- set_minimum_tls_version_1_2(context)
- context.options |= ssl.OP_NO_COMPRESSION
- context.set_ciphers(DEFAULT_CIPHERS)
-
- if ssl.HAS_ALPN:
- alpn_idents = ["http/1.1", "h2"] if self.http2 else ["http/1.1"]
- context.set_alpn_protocols(alpn_idents)
-
- if sys.version_info >= (3, 8): # pragma: no cover
- keylogfile = os.environ.get("SSLKEYLOGFILE")
- if keylogfile and self.trust_env:
- context.keylog_filename = keylogfile
-
- return context
-
- def _load_client_certs(self, ssl_context: ssl.SSLContext) -> None:
- """
- Loads client certificates into our SSLContext object
- """
- if self.cert is not None:
- if isinstance(self.cert, str):
- ssl_context.load_cert_chain(certfile=self.cert)
- elif isinstance(self.cert, tuple) and len(self.cert) == 2:
- ssl_context.load_cert_chain(certfile=self.cert[0], keyfile=self.cert[1])
- elif isinstance(self.cert, tuple) and len(self.cert) == 3:
- ssl_context.load_cert_chain(
- certfile=self.cert[0],
- keyfile=self.cert[1],
- password=self.cert[2], # type: ignore
- )
-
-
-class Timeout:
- """
- Timeout configuration.
-
- **Usage**:
-
- Timeout(None) # No timeouts.
- Timeout(5.0) # 5s timeout on all operations.
- Timeout(None, connect=5.0) # 5s timeout on connect, no other timeouts.
- Timeout(5.0, connect=10.0) # 10s timeout on connect. 5s timeout elsewhere.
- Timeout(5.0, pool=None) # No timeout on acquiring connection from pool.
- # 5s timeout elsewhere.
- """
-
- def __init__(
- self,
- timeout: typing.Union[TimeoutTypes, UnsetType] = UNSET,
- *,
- connect: typing.Union[None, float, UnsetType] = UNSET,
- read: typing.Union[None, float, UnsetType] = UNSET,
- write: typing.Union[None, float, UnsetType] = UNSET,
- pool: typing.Union[None, float, UnsetType] = UNSET,
- ):
- if isinstance(timeout, Timeout):
- # Passed as a single explicit Timeout.
- assert connect is UNSET
- assert read is UNSET
- assert write is UNSET
- assert pool is UNSET
- self.connect = timeout.connect # type: typing.Optional[float]
- self.read = timeout.read # type: typing.Optional[float]
- self.write = timeout.write # type: typing.Optional[float]
- self.pool = timeout.pool # type: typing.Optional[float]
- elif isinstance(timeout, tuple):
- # Passed as a tuple.
- self.connect = timeout[0]
- self.read = timeout[1]
- self.write = None if len(timeout) < 3 else timeout[2]
- self.pool = None if len(timeout) < 4 else timeout[3]
- elif not (
- isinstance(connect, UnsetType)
- or isinstance(read, UnsetType)
- or isinstance(write, UnsetType)
- or isinstance(pool, UnsetType)
- ):
- self.connect = connect
- self.read = read
- self.write = write
- self.pool = pool
- else:
- if isinstance(timeout, UnsetType):
- raise ValueError(
- "httpx.Timeout must either include a default, or set all "
- "four parameters explicitly."
- )
- self.connect = timeout if isinstance(connect, UnsetType) else connect
- self.read = timeout if isinstance(read, UnsetType) else read
- self.write = timeout if isinstance(write, UnsetType) else write
- self.pool = timeout if isinstance(pool, UnsetType) else pool
-
- def as_dict(self) -> typing.Dict[str, typing.Optional[float]]:
- return {
- "connect": self.connect,
- "read": self.read,
- "write": self.write,
- "pool": self.pool,
- }
-
- def __eq__(self, other: typing.Any) -> bool:
- return (
- isinstance(other, self.__class__)
- and self.connect == other.connect
- and self.read == other.read
- and self.write == other.write
- and self.pool == other.pool
- )
-
- def __repr__(self) -> str:
- class_name = self.__class__.__name__
- if len({self.connect, self.read, self.write, self.pool}) == 1:
- return f"{class_name}(timeout={self.connect})"
- return (
- f"{class_name}(connect={self.connect}, "
- f"read={self.read}, write={self.write}, pool={self.pool})"
- )
-
-
-class Limits:
- """
- Configuration for limits to various client behaviors.
-
- **Parameters:**
-
- * **max_connections** - The maximum number of concurrent connections that may be
- established.
- * **max_keepalive_connections** - Allow the connection pool to maintain
- keep-alive connections below this point. Should be less than or equal
- to `max_connections`.
- * **keepalive_expiry** - Time limit on idle keep-alive connections in seconds.
- """
-
- def __init__(
- self,
- *,
- max_connections: typing.Optional[int] = None,
- max_keepalive_connections: typing.Optional[int] = None,
- keepalive_expiry: typing.Optional[float] = 5.0,
- ):
- self.max_connections = max_connections
- self.max_keepalive_connections = max_keepalive_connections
- self.keepalive_expiry = keepalive_expiry
-
- def __eq__(self, other: typing.Any) -> bool:
- return (
- isinstance(other, self.__class__)
- and self.max_connections == other.max_connections
- and self.max_keepalive_connections == other.max_keepalive_connections
- and self.keepalive_expiry == other.keepalive_expiry
- )
-
- def __repr__(self) -> str:
- class_name = self.__class__.__name__
- return (
- f"{class_name}(max_connections={self.max_connections}, "
- f"max_keepalive_connections={self.max_keepalive_connections}, "
- f"keepalive_expiry={self.keepalive_expiry})"
- )
-
-
-class Proxy:
- def __init__(
- self,
- url: URLTypes,
- *,
- auth: typing.Optional[typing.Tuple[str, str]] = None,
- headers: typing.Optional[HeaderTypes] = None,
- ):
- url = URL(url)
- headers = Headers(headers)
-
- if url.scheme not in ("http", "https", "socks5"):
- raise ValueError(f"Unknown scheme for proxy URL {url!r}")
-
- if url.username or url.password:
- # Remove any auth credentials from the URL.
- auth = (url.username, url.password)
- url = url.copy_with(username=None, password=None)
-
- self.url = url
- self.auth = auth
- self.headers = headers
-
- @property
- def raw_auth(self) -> typing.Optional[typing.Tuple[bytes, bytes]]:
- # The proxy authentication as raw bytes.
- return (
- None
- if self.auth is None
- else (self.auth[0].encode("utf-8"), self.auth[1].encode("utf-8"))
- )
-
- def __repr__(self) -> str:
- # The authentication is represented with the password component masked.
- auth = (self.auth[0], "********") if self.auth else None
-
- # Build a nice concise representation.
- url_str = f"{str(self.url)!r}"
- auth_str = f", auth={auth!r}" if auth else ""
- headers_str = f", headers={dict(self.headers)!r}" if self.headers else ""
- return f"Proxy({url_str}{auth_str}{headers_str})"
-
-
-DEFAULT_TIMEOUT_CONFIG = Timeout(timeout=5.0)
-DEFAULT_LIMITS = Limits(max_connections=100, max_keepalive_connections=20)
-DEFAULT_MAX_REDIRECTS = 20
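The Timeout and Limits containers deleted above are consumed directly by httpx clients. A short sketch of the public side of that configuration; no requests are sent.

import httpx

timeout = httpx.Timeout(5.0, connect=10.0)   # 10 s to connect, 5 s for read/write/pool
limits = httpx.Limits(max_connections=50, max_keepalive_connections=10)

with httpx.Client(timeout=timeout, limits=limits) as client:
    print(timeout)   # Timeout(connect=10.0, read=5.0, write=5.0, pool=5.0)
    print(limits)    # Limits(max_connections=50, max_keepalive_connections=10, keepalive_expiry=5.0)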
diff --git a/spaces/Dao3/ChatGLM-6B/app.py b/spaces/Dao3/ChatGLM-6B/app.py
deleted file mode 100644
index 7871b6f037949e3cbab97c3e39fce46ce60540fa..0000000000000000000000000000000000000000
--- a/spaces/Dao3/ChatGLM-6B/app.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from transformers import AutoModel, AutoTokenizer
-import gradio as gr
-
-tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
-# model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
-model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4",trust_remote_code=True).float()
-model = model.eval()
-
-MAX_TURNS = 20
-MAX_BOXES = MAX_TURNS * 2
-
-
-def predict(input, max_length, top_p, temperature, history=None):
- if history is None:
- history = []
- for response, history in model.stream_chat(tokenizer, input, history, max_length=max_length, top_p=top_p,
- temperature=temperature):
- updates = []
- for query, response in history:
- updates.append(gr.update(visible=True, value="用户:" + query))
- updates.append(gr.update(visible=True, value="ChatGLM-6B:" + response))
- if len(updates) < MAX_BOXES:
- updates = updates + [gr.Textbox.update(visible=False)] * (MAX_BOXES - len(updates))
- yield [history] + updates
-
-
-with gr.Blocks() as demo:
- state = gr.State([])
- text_boxes = []
- for i in range(MAX_BOXES):
- if i % 2 == 0:
- text_boxes.append(gr.Markdown(visible=False, label="提问:"))
- else:
- text_boxes.append(gr.Markdown(visible=False, label="回复:"))
-
- with gr.Row():
- with gr.Column(scale=4):
- txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter", lines=11).style(
- container=False)
- with gr.Column(scale=1):
- max_length = gr.Slider(0, 4096, value=2048, step=1.0, label="Maximum length", interactive=True)
- top_p = gr.Slider(0, 1, value=0.7, step=0.01, label="Top P", interactive=True)
- temperature = gr.Slider(0, 1, value=0.95, step=0.01, label="Temperature", interactive=True)
- button = gr.Button("Generate")
- button.click(predict, [txt, max_length, top_p, temperature, state], [state] + text_boxes)
-demo.queue().launch()
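
predict() streams partial histories and pads its output list so every one of the MAX_BOXES components receives a value on each yield. The padding logic in isolation, as a small self-contained sketch with plain dicts standing in for the gr.update objects:

MAX_TURNS = 20
MAX_BOXES = MAX_TURNS * 2  # one question box plus one answer box per turn

def history_to_updates(history):
    # Two visible boxes per (query, response) pair, hidden placeholders for the rest.
    updates = []
    for query, response in history:
        updates.append({"visible": True, "value": "User: " + query})
        updates.append({"visible": True, "value": "Bot: " + response})
    updates += [{"visible": False}] * (MAX_BOXES - len(updates))
    return updates

print(len(history_to_updates([("hi", "hello")])))  # always 40 outputs, as the UI expects
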
diff --git a/spaces/Datasculptor/MusicGen/tests/quantization/test_vq.py b/spaces/Datasculptor/MusicGen/tests/quantization/test_vq.py
deleted file mode 100644
index c215099fedacae35c6798fdd9b8420a447aa16bb..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/tests/quantization/test_vq.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from audiocraft.quantization.vq import ResidualVectorQuantizer
-
-
-class TestResidualVectorQuantizer:
-
- def test_rvq(self):
- x = torch.randn(1, 16, 2048)
- vq = ResidualVectorQuantizer(n_q=8, dimension=16, bins=8)
- res = vq(x, 1.)
- assert res.x.shape == torch.Size([1, 16, 2048])
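
The test only checks output shapes; for intuition, residual vector quantization runs n_q codebooks in series, each quantizing the error left by the previous stage. A scalar toy version of that loop (an illustration, not the audiocraft implementation):

def residual_quantize(value, codebooks):
    # Each stage picks the codeword closest to the remaining residual.
    quantized, residual = 0.0, value
    for codebook in codebooks:
        code = min(codebook, key=lambda c: abs(c - residual))
        quantized += code
        residual -= code
    return quantized

# Two stages: a coarse codebook, then a finer one for the leftover error.
print(residual_quantize(3.7, [[0.0, 2.0, 4.0], [-0.5, -0.25, 0.0, 0.25, 0.5]]))  # 3.75
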
diff --git a/spaces/Didier/Semantic_Search_arXiv/app.py b/spaces/Didier/Semantic_Search_arXiv/app.py
deleted file mode 100644
index 22429c65c6d17190947372709ce567c90aa7c9f1..0000000000000000000000000000000000000000
--- a/spaces/Didier/Semantic_Search_arXiv/app.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Title: app.py
-# Author: Didier Guillevic
-# Date: 2021
-
-import streamlit as st
-import pandas as pd
-import numpy as np
-import os
-import io
-
-import arxiv_info
-
-st.title("Semantic Search (arxiv.org)")
-st.markdown("""
-* Query with some document (preferred) or some keywords (in any language).
-* Get papers from [arxiv.org](https://arxiv.org) with meaning similar to the query (arxiv indexed up to 2020-11).
-* Sample queries:
- * Techniques to measure blood sugar levels.
- * Aujourd'hui, malgré des décennies de développement de la médecine et l'intérêt croissant pour les soins de santé de précision, la grande majorité des diagnostics se produisent une fois que les patients commencent à montrer des signes visibles de maladie.
- * Diabetes-Patienten kontrollieren ständig den Blutzucker. Forscher haben jetzt die Wirkung von speziellen Hautsensoren geprüft, die ohne Blut auskommen.
- * 機械学習をヘルスケアに適用することで、多くの素晴らしい結果が得られました。ただし、これらの大部分は、治療に明示的にではなく、状態の診断または結果の予測に焦点を合わせています。
-""")
-
-#
-# Define the form to get the query and desired number of results
-#
-
-def build_markdown_response(docs):
- outStream = io.StringIO()
- for doc in docs:
- outStream.write(f"* Score : {doc['score']:0.3f}\n")
- outStream.write(f"* URL : [{doc['url']}]({doc['url']})\n")
- outStream.write(f"* Title : {doc['title']}\n")
- outStream.write(f"* Abstract: {doc['abstract']}\n")
- outStream.write("- - - -\n")
- result = outStream.getvalue()
- outStream.close()
- return result
-
-def form_callback():
- # Get the top semantically similar documents
- docs = arxiv_info.retrieve_similar_docs(
- st.session_state.query,
- arxiv_info.arxiv_embeddings,
- arxiv_info.arxiv_df,
- nb_results=st.session_state.nb_results)
-
- # Return the result in markdown format
- st.header("Search document")
- st.write(st.session_state.query)
- st.header("Similar documents")
- result = build_markdown_response(docs)
- st.markdown(result)
-
-with st.form(key="input_form"):
- st.text_area("Text to search", key="query")
- st.slider(label="Number of results", min_value=1, max_value=10, value=5, key="nb_results")
- submit_button = st.form_submit_button(label="Submit", on_click=form_callback)
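
arxiv_info.retrieve_similar_docs is defined elsewhere and not shown; a hedged sketch of the kind of embedding-based ranking it presumably performs, assuming a sentence-transformers model, a precomputed L2-normalized embedding matrix, and docs given as a list of dicts with url/title/abstract keys:

import numpy as np
from sentence_transformers import SentenceTransformer

def retrieve_similar_docs(query, doc_embeddings, docs, nb_results=5):
    # Embed the query and rank documents by cosine similarity (dot product on unit vectors).
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # hypothetical model choice
    query_vec = model.encode(query, normalize_embeddings=True)
    scores = doc_embeddings @ query_vec
    top = np.argsort(-scores)[:nb_results]
    return [{"score": float(scores[i]), **docs[i]} for i in top]
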
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/custom_ops.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/custom_ops.py
deleted file mode 100644
index 6509d70bbcd49a24628e3e90258ccb0d8c5a3b39..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/custom_ops.py
+++ /dev/null
@@ -1,260 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import os
-import glob
-import torch
-import torch.utils.cpp_extension
-import importlib
-import hashlib
-import shutil
-from pathlib import Path
-import re
-import uuid
-
-from torch.utils.file_baton import FileBaton
-
-# ----------------------------------------------------------------------------
-# Global options.
-
-verbosity = 'brief' # Verbosity level: 'none', 'brief', 'full'
-
-# ----------------------------------------------------------------------------
-# Internal helper funcs.
-
-
-def _find_compiler_bindir():
- patterns = [
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio */vc/bin',
- ]
- for pattern in patterns:
- matches = sorted(glob.glob(pattern))
- if len(matches):
- return matches[-1]
- return None
-
-
-def _get_mangled_gpu_name():
- name = torch.cuda.get_device_name().lower()
- out = []
- for c in name:
- if re.match('[a-z0-9_-]+', c):
- out.append(c)
- else:
- out.append('-')
- return ''.join(out)
-
-
-# ----------------------------------------------------------------------------
-# Main entry point for compiling and loading C++/CUDA plugins.
-
-_cached_plugins = dict()
-
-
-def get_plugin(module_name, sources, **build_kwargs):
- assert verbosity in ['none', 'brief', 'full']
-
- # Already cached?
- if module_name in _cached_plugins:
- return _cached_plugins[module_name]
-
- # Print status.
- if verbosity == 'full':
- print(f'Setting up PyTorch plugin "{module_name}"...')
- elif verbosity == 'brief':
- print(
- f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True)
-
- try: # pylint: disable=too-many-nested-blocks
- # Make sure we can find the necessary compiler binaries.
- if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0:
- compiler_bindir = _find_compiler_bindir()
- if compiler_bindir is None:
- raise RuntimeError(
- f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".')
- os.environ['PATH'] += ';' + compiler_bindir
-
- # Compile and load.
- verbose_build = (verbosity == 'full')
-
- # Incremental build md5sum trickery. Copies all the input source files
- # into a cached build directory under a combined md5 digest of the input
- # source files. Copying is done only if the combined digest has changed.
- # This keeps input file timestamps and filenames the same as in previous
- # extension builds, allowing for fast incremental rebuilds.
- #
- # This optimization is done only in case all the source files reside in
- # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR
- # environment variable is set (we take this as a signal that the user
- # actually cares about this.)
- source_dirs_set = set(os.path.dirname(source) for source in sources)
- if len(source_dirs_set) == 1 and ('TORCH_EXTENSIONS_DIR' in os.environ):
- all_source_files = sorted(list(x for x in Path(
- list(source_dirs_set)[0]).iterdir() if x.is_file()))
-
- # Compute a combined hash digest for all source files in the same
- # custom op directory (usually .cu, .cpp, .py and .h files).
- hash_md5 = hashlib.md5()
- for src in all_source_files:
- with open(src, 'rb') as f:
- hash_md5.update(f.read())
- build_dir = torch.utils.cpp_extension._get_build_directory(
- module_name, verbose=verbose_build) # pylint: disable=protected-access
- digest_build_dir = os.path.join(build_dir, hash_md5.hexdigest())
-
- if not os.path.isdir(digest_build_dir):
- os.makedirs(digest_build_dir, exist_ok=True)
- baton = FileBaton(os.path.join(digest_build_dir, 'lock'))
- if baton.try_acquire():
- try:
- for src in all_source_files:
- shutil.copyfile(src, os.path.join(
- digest_build_dir, os.path.basename(src)))
- finally:
- baton.release()
- else:
- # Someone else is copying source files under the digest dir,
- # wait until done and continue.
- baton.wait()
- digest_sources = [os.path.join(
- digest_build_dir, os.path.basename(x)) for x in sources]
- torch.utils.cpp_extension.load(name=module_name, build_directory=build_dir,
- verbose=verbose_build, sources=digest_sources, **build_kwargs)
- else:
- torch.utils.cpp_extension.load(
- name=module_name, verbose=verbose_build, sources=sources, **build_kwargs)
- module = importlib.import_module(module_name)
-
- except:
- if verbosity == 'brief':
- print('Failed!')
- raise
-
- # Print status and add to cache.
- if verbosity == 'full':
- print(f'Done setting up PyTorch plugin "{module_name}".')
- elif verbosity == 'brief':
- print('Done.')
- _cached_plugins[module_name] = module
- return module
-
-# ----------------------------------------------------------------------------
-
-
-def get_plugin_v3(module_name, sources, headers=None, source_dir=None, **build_kwargs):
- assert verbosity in ['none', 'brief', 'full']
- if headers is None:
- headers = []
- if source_dir is not None:
- sources = [os.path.join(source_dir, fname) for fname in sources]
- headers = [os.path.join(source_dir, fname) for fname in headers]
-
- # Already cached?
- if module_name in _cached_plugins:
- return _cached_plugins[module_name]
-
- # Print status.
- if verbosity == 'full':
- print(f'Setting up PyTorch plugin "{module_name}"...')
- elif verbosity == 'brief':
- print(
- f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True)
- verbose_build = (verbosity == 'full')
-
- # Compile and load.
- try: # pylint: disable=too-many-nested-blocks
- # Make sure we can find the necessary compiler binaries.
- if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0:
- compiler_bindir = _find_compiler_bindir()
- if compiler_bindir is None:
- raise RuntimeError(
- f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".')
- os.environ['PATH'] += ';' + compiler_bindir
-
- # Some containers set TORCH_CUDA_ARCH_LIST to a list that can either
- # break the build or unnecessarily restrict what's available to nvcc.
- # Unset it to let nvcc decide based on what's available on the
- # machine.
- os.environ['TORCH_CUDA_ARCH_LIST'] = ''
-
- # Incremental build md5sum trickery. Copies all the input source files
- # into a cached build directory under a combined md5 digest of the input
- # source files. Copying is done only if the combined digest has changed.
- # This keeps input file timestamps and filenames the same as in previous
- # extension builds, allowing for fast incremental rebuilds.
- #
- # This optimization is done only in case all the source files reside in
- # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR
- # environment variable is set (we take this as a signal that the user
- # actually cares about this.)
- #
-    # EDIT: We now do it regardless of TORCH_EXTENSIONS_DIR, in order to work

- # around the *.cu dependency bug in ninja config.
- #
- all_source_files = sorted(sources + headers)
- all_source_dirs = set(os.path.dirname(fname)
- for fname in all_source_files)
- # and ('TORCH_EXTENSIONS_DIR' in os.environ):
- if len(all_source_dirs) == 1:
-
- # Compute combined hash digest for all source files.
- hash_md5 = hashlib.md5()
- for src in all_source_files:
- with open(src, 'rb') as f:
- hash_md5.update(f.read())
-
- # Select cached build directory name.
- source_digest = hash_md5.hexdigest()
- build_top_dir = torch.utils.cpp_extension._get_build_directory(
- module_name, verbose=verbose_build) # pylint: disable=protected-access
- cached_build_dir = os.path.join(
- build_top_dir, f'{source_digest}-{_get_mangled_gpu_name()}')
-
- if not os.path.isdir(cached_build_dir):
- tmpdir = f'{build_top_dir}/srctmp-{uuid.uuid4().hex}'
- os.makedirs(tmpdir)
- for src in all_source_files:
- shutil.copyfile(src, os.path.join(
- tmpdir, os.path.basename(src)))
- try:
- os.replace(tmpdir, cached_build_dir) # atomic
- except OSError:
- # source directory already exists, delete tmpdir and its contents.
- shutil.rmtree(tmpdir)
- if not os.path.isdir(cached_build_dir):
- raise
-
- # Compile.
- cached_sources = [os.path.join(
- cached_build_dir, os.path.basename(fname)) for fname in sources]
- torch.utils.cpp_extension.load(name=module_name, build_directory=cached_build_dir,
- verbose=verbose_build, sources=cached_sources, **build_kwargs)
- else:
- torch.utils.cpp_extension.load(
- name=module_name, verbose=verbose_build, sources=sources, **build_kwargs)
-
- # Load.
- module = importlib.import_module(module_name)
-
- except:
- if verbosity == 'brief':
- print('Failed!')
- raise
-
- # Print status and add to cache dict.
- if verbosity == 'full':
- print(f'Done setting up PyTorch plugin "{module_name}".')
- elif verbosity == 'brief':
- print('Done.')
- _cached_plugins[module_name] = module
- return module
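
Both plugin loaders key the cached build directory on a combined MD5 digest of every input source file; that hashing step, isolated as a sketch:

import hashlib
import os

def combined_source_digest(source_files):
    # Any change to any file produces a new digest, hence a fresh build directory,
    # while unchanged sources keep their old directory (and file timestamps) intact.
    digest = hashlib.md5()
    for path in sorted(source_files):
        with open(path, "rb") as f:
            digest.update(f.read())
    return digest.hexdigest()

# cached_build_dir = os.path.join(build_top_dir, f"{combined_source_digest(sources)}-{gpu_name}")
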
diff --git a/spaces/ECCV2022/ECCV2022_papers/app.py b/spaces/ECCV2022/ECCV2022_papers/app.py
deleted file mode 100644
index c82a4c739f588f88539637b14e4843dd9009d7d5..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/ECCV2022_papers/app.py
+++ /dev/null
@@ -1,91 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import gradio as gr
-
-from paper_list import PaperList
-
-DESCRIPTION = '# ECCV 2022 Papers'
-NOTES = '''
-- [ECCV 2022](https://eccv2022.ecva.net/)
-- [List of accepted papers](https://docs.google.com/spreadsheets/d/1PvvnVAuG9TDov1rBQcLhH5noIM4INTfLhKwrk6ESDFU/edit?usp=sharing)
-'''
-FOOTER = ''
-
-
-def main():
- paper_list = PaperList()
-
- with gr.Blocks(css='style.css') as demo:
- gr.Markdown(DESCRIPTION)
-
- search_box = gr.Textbox(
- label='Search Title',
- placeholder=
- 'You can search for titles with regular expressions. e.g. (? context:
- limit = k - context - 1
- assert torch.allclose(x.grad[:, :limit],
- torch.tensor(0.)), x.grad[:, :limit].norm()
-
- # Now check that streaming gives the same result at batch eval.
- x = torch.randn(4, steps, 16)
- y = tr(x)
- ys = []
- with tr.streaming():
- for k in range(steps):
- chunk = x[:, k:k + 1, :]
- ys.append(tr(chunk))
- y_stream = torch.cat(ys, dim=1)
- delta = torch.norm(y_stream - y) / torch.norm(y)
- assert delta < 1e-6, delta
-
-
-def test_transformer_vs_pytorch():
- torch.manual_seed(1234)
- # Check that in the non causal setting, we get the same result as
- # PyTorch Transformer encoder.
- for custom in [False, True]:
- tr = StreamingTransformer(
- 16, 4, 2,
- causal=False, custom=custom, dropout=0., positional_scale=0.)
- layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True)
- tr_ref = torch.nn.TransformerEncoder(layer, 2)
- tr.load_state_dict(tr_ref.state_dict())
-
- x = torch.randn(4, 20, 16)
- y = tr(x)
- y2 = tr_ref(x)
- delta = torch.norm(y2 - y) / torch.norm(y)
- assert delta < 1e-6, delta
-
-
-def test_streaming_api():
- tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.)
- tr.eval()
- steps = 12
- x = torch.randn(1, steps, 16)
-
- with torch.no_grad():
- with tr.streaming():
- _ = tr(x[:, :1])
- state = {k: v.clone() for k, v in tr.get_streaming_state().items()}
- y = tr(x[:, 1:2])
- tr.set_streaming_state(state)
- y2 = tr(x[:, 1:2])
- assert torch.allclose(y, y2), (y - y2).norm()
- assert tr.flush() is None
-
-
-def test_memory_efficient():
- torch.manual_seed(1234)
- for backend in ['torch', 'xformers']:
- set_efficient_attention_backend(backend)
-
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., layer_scale=0.1)
- tr_mem_efficient = StreamingTransformer(
- 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1)
- tr_mem_efficient.load_state_dict(tr.state_dict())
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- with torch.no_grad():
- y = tr(x)
- y2 = tr_mem_efficient(x)
- assert torch.allclose(y, y2), ((y - y2).norm(), backend)
-
-
-def test_attention_as_float32():
- torch.manual_seed(1234)
- cases = [
- {'custom': True},
- {'custom': False},
- ]
- for case in cases:
- tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case)
- tr_float32 = StreamingTransformer(
- 16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case)
- if not case['custom']:
- # we are not using autocast here because it doesn't really
- # work as expected on CPU, so we have to manually cast the weights of the MHA.
- for layer in tr_float32.layers:
- layer.self_attn.mha.to(torch.float32)
- tr_float32.load_state_dict(tr.state_dict())
- steps = 12
- x = torch.randn(3, steps, 16, dtype=torch.bfloat16)
-
- with torch.no_grad():
- y = tr(x)
- y2 = tr_float32(x)
- assert not torch.allclose(y, y2), (y - y2).norm()
-
-
-@torch.no_grad()
-def test_streaming_memory_efficient():
- torch.manual_seed(1234)
- for backend in ['torch', 'xformers']:
- set_efficient_attention_backend(backend)
- tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True)
- tr_mem_efficient = StreamingTransformer(
- 16, 4, 2, dropout=0., memory_efficient=True, causal=True)
- tr.load_state_dict(tr_mem_efficient.state_dict())
- tr.eval()
- tr_mem_efficient.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- ref = tr(x)
-
- with tr_mem_efficient.streaming():
- outs = []
- # frame_sizes = [2] + [1] * (steps - 2)
- frame_sizes = [1] * steps
-
- for frame_size in frame_sizes:
- frame = x[:, :frame_size]
- x = x[:, frame_size:]
- outs.append(tr_mem_efficient(frame))
-
- out = torch.cat(outs, dim=1)
- delta = torch.norm(out - ref) / torch.norm(out)
- assert delta < 1e-6, delta
-
-
-def test_cross_attention():
- torch.manual_seed(1234)
- for norm_first in [True, False]:
- m = StreamingTransformer(
- 16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True)
- m_cross = StreamingTransformer(
- 16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True)
- m_cross.load_state_dict(m.state_dict(), strict=False)
- x = torch.randn(2, 5, 16)
- cross_x = torch.randn(2, 3, 16)
- y_ref = m(x)
- y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x)
-        # With norm_first, the two should be exactly the same,
-        # but with norm_first=False, we get two normalizations in a row
-        # and the epsilon value leads to a tiny change.
- atol = 0. if norm_first else 1e-6
- print((y_ref - y_cross_zero).norm() / y_ref.norm())
- assert torch.allclose(y_ref, y_cross_zero, atol=atol)
-
- # We now expect a difference even with a generous atol of 1e-2.
- y_cross = m_cross(x, cross_attention_src=cross_x)
- assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2)
-
- with pytest.raises(AssertionError):
- _ = m_cross(x)
- _ = m(x, cross_attention_src=cross_x)
-
-
-def test_cross_attention_compat():
- torch.manual_seed(1234)
- num_heads = 2
- dim = num_heads * 64
- with pytest.raises(AssertionError):
- StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True)
-
- cross_attn = StreamingMultiheadAttention(
- dim, num_heads, dropout=0, cross_attention=True, custom=True)
- ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True)
-
- # We can load the regular attention state dict
- # so we have compat when loading old checkpoints.
- cross_attn.load_state_dict(ref_attn.state_dict())
-
- queries = torch.randn(3, 7, dim)
- keys = torch.randn(3, 9, dim)
- values = torch.randn(3, 9, dim)
-
- y = cross_attn(queries, keys, values)[0]
- y_ref = ref_attn(queries, keys, values)[0]
- assert torch.allclose(y, y_ref, atol=1e-7), (y - y_ref).norm() / y_ref.norm()
-
- # Now let's check that streaming is working properly.
- with cross_attn.streaming():
- ys = []
- for step in range(queries.shape[1]):
- ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0])
- y_streaming = torch.cat(ys, dim=1)
- assert torch.allclose(y_streaming, y, atol=1e-7)
-
-
-def test_repeat_kv():
- torch.manual_seed(1234)
- num_heads = 8
- kv_repeat = 4
- dim = num_heads * 64
- with pytest.raises(AssertionError):
- mha = StreamingMultiheadAttention(
- dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True)
- mha = StreamingMultiheadAttention(
- dim, num_heads, causal=True, kv_repeat=kv_repeat)
- mha = StreamingMultiheadAttention(
- dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True)
- x = torch.randn(4, 18, dim)
- y = mha(x, x, x)[0]
- assert x.shape == y.shape
-
-
-def test_qk_layer_norm():
- torch.manual_seed(1234)
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False)
- steps = 12
- x = torch.randn(3, steps, 16)
- y = tr(x)
-
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True)
- z = torch.randn(3, 21, 16)
- y = tr(x, cross_attention_src=z)
- assert y.shape == x.shape
diff --git a/spaces/EleutherAI/magma/magma/image_prefix.py b/spaces/EleutherAI/magma/magma/image_prefix.py
deleted file mode 100644
index 96c2c14fb707346850e0e5969588ff805ed3245d..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/magma/magma/image_prefix.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import torch
-import torch.nn as nn
-from torchtyping import TensorType
-from einops import rearrange
-from .image_encoders import get_image_encoder
-from .config import MultimodalConfig
-
-# ------------------------- Image prefix ----------------------------------
-
-# For models fixed to a specific sequence length (i.e. CLIP models with no pooling), the sequence lengths are listed below
-ENCODER_SEQ_LENS = {
- "clip_resnet": 49,
- "clip_resnet_large": 144,
-}
-
-ENCODER_OUT_DIMS = {
- "nfresnet50": 2048,
- "clip": 512,
- "clip_resnet": 2560,
- "clip_resnet_large": 3072,
-}
-
-
-class ImagePrefix(nn.Module):
-
- """
- Takes in a batch of images and returns a batch of embeddings of the
- same dimensions as the LM's word embeddings.
-
- :param config: MultimodalConfig object
- :param out_dim: output dimension of the embedding
- :param device: device to run the model on
- """
-
- def __init__(
- self,
- config: MultimodalConfig,
- out_dim: int = 2048,
- device=None,
- ):
- super().__init__()
- self.device = device or torch.device(
- "cuda" if torch.cuda.is_available() else "cpu"
- )
- self.config = config
- self.encoder_type = config.encoder_name
-
- # get image encoder backbone
- self.enc = get_image_encoder(
- config.encoder_name,
- pretrained=config.pretrained_img_encoder,
- )
- self.encoder_out_dim = ENCODER_OUT_DIMS[
- self.encoder_type
- ] # out dim for image encoder
-
- self.out_dim = out_dim # out dim for lm
-
- # set the out seq len to that specified in the config, or for some models, the hardcoded value
- self.out_seq_len = (
- config.image_seq_len
- if config.encoder_name not in ENCODER_SEQ_LENS
- else ENCODER_SEQ_LENS[config.encoder_name]
- )
-
- # get the output projection
- proj_out_dim = (
- (self.out_dim * self.out_seq_len)
- if self.encoder_type not in ENCODER_SEQ_LENS
- else self.out_dim
- )
- self.proj = nn.Linear(self.encoder_out_dim, proj_out_dim)
- self.dropout = nn.Dropout(config.image_embed_dropout_prob)
- self.use_layernorm = config.use_image_embed_layernorm
- if self.use_layernorm:
- self.ln = nn.LayerNorm(self.out_dim)
-
- def forward(
- self, x: TensorType["b", "c", "h", "w"]
- ) -> TensorType["b", "seq", "out_dim"]:
-
- # pass through image encoder
- logits = self.enc(x)
-
- # remove trailing dimensions of size 1 + pass through linear
- if logits.ndim == 4:
- logits = rearrange(logits, "b d 1 1 -> b d")
- elif logits.ndim == 3:
- assert self.encoder_type in ENCODER_SEQ_LENS
- else:
- assert logits.ndim == 2
-
- logits = self.proj(logits)
-
- # reshape to desired output shape
- if (
- self.encoder_type not in ENCODER_SEQ_LENS
- ): # don't need to reshape those with fixed seq lens / no pooling
- logits = rearrange(
- logits, "b (s d) -> b s d", d=self.out_dim, s=self.out_seq_len
- )
-
- # pass through dropout and layer norm
- logits = self.dropout(logits)
-
- if self.use_layernorm:
- logits = self.ln(logits)
-
- return logits
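
For encoders without a fixed output sequence length, the projection above maps one pooled feature vector to out_seq_len pseudo-token embeddings with a single linear layer, then reshapes. A toy-sized sketch of just that step, with dimensions made up for illustration:

import torch
from einops import rearrange

batch, enc_dim, lm_dim, seq_len = 2, 2048, 1024, 4
proj = torch.nn.Linear(enc_dim, lm_dim * seq_len)

pooled = torch.randn(batch, enc_dim)                       # pooled image encoder output
tokens = rearrange(proj(pooled), "b (s d) -> b s d", s=seq_len, d=lm_dim)
print(tokens.shape)                                        # torch.Size([2, 4, 1024])
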
diff --git a/spaces/Ella2323/Positive-Reframing/test.sh b/spaces/Ella2323/Positive-Reframing/test.sh
deleted file mode 100644
index e7a3bab4086eef6b6c61c65914524e715d696bff..0000000000000000000000000000000000000000
--- a/spaces/Ella2323/Positive-Reframing/test.sh
+++ /dev/null
@@ -1,9 +0,0 @@
-#!/bin/bash
-
-for n in 1 2 3 4 5 6 7 8 9 10
-do
- echo '------------------------------------------------------------'
- echo $n
- echo "test/$n.txt"
- python test.py --input="test/$n.txt" | diff - test/$n.gold.txt
-done
\ No newline at end of file
diff --git a/spaces/EronSamez/RVC_HFmeu/infer/modules/vc/pipeline.py b/spaces/EronSamez/RVC_HFmeu/infer/modules/vc/pipeline.py
deleted file mode 100644
index 76e712c649b95e21f9bbe6416ae8b7050317b479..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/infer/modules/vc/pipeline.py
+++ /dev/null
@@ -1,655 +0,0 @@
-import os
-import sys
-import traceback
-import logging
-
-logger = logging.getLogger(__name__)
-
-from functools import lru_cache
-from time import time as ttime
-from torch import Tensor
-import faiss
-import librosa
-import numpy as np
-import parselmouth
-import pyworld
-import torch
-import torch.nn.functional as F
-import torchcrepe
-from scipy import signal
-from tqdm import tqdm
-
-import random
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-import re
-from functools import partial
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-input_audio_path2wav = {}
-from LazyImport import lazyload
-torchcrepe = lazyload("torchcrepe") # Fork Feature. Crepe algo for training and preprocess
-torch = lazyload("torch")
-from infer.lib.rmvpe import RMVPE
-
-@lru_cache
-def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period):
- audio = input_audio_path2wav[input_audio_path]
- f0, t = pyworld.harvest(
- audio,
- fs=fs,
- f0_ceil=f0max,
- f0_floor=f0min,
- frame_period=frame_period,
- )
- f0 = pyworld.stonemask(audio, f0, t, fs)
- return f0
-
-
-def change_rms(data1, sr1, data2, sr2, rate):  # data1 is the input audio, data2 is the output audio, rate is the weight given to data2
- # print(data1.max(),data2.max())
- rms1 = librosa.feature.rms(
- y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2
-    )  # one RMS point every half second
- rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2)
- rms1 = torch.from_numpy(rms1)
- rms1 = F.interpolate(
- rms1.unsqueeze(0), size=data2.shape[0], mode="linear"
- ).squeeze()
- rms2 = torch.from_numpy(rms2)
- rms2 = F.interpolate(
- rms2.unsqueeze(0), size=data2.shape[0], mode="linear"
- ).squeeze()
- rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6)
- data2 *= (
- torch.pow(rms1, torch.tensor(1 - rate))
- * torch.pow(rms2, torch.tensor(rate - 1))
- ).numpy()
- return data2
-
-
-class Pipeline(object):
- def __init__(self, tgt_sr, config):
- self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = (
- config.x_pad,
- config.x_query,
- config.x_center,
- config.x_max,
- config.is_half,
- )
-        self.sr = 16000  # hubert input sample rate
-        self.window = 160  # samples per frame
-        self.t_pad = self.sr * self.x_pad  # padding time before and after each segment
-        self.t_pad_tgt = tgt_sr * self.x_pad
-        self.t_pad2 = self.t_pad * 2
-        self.t_query = self.sr * self.x_query  # query window before and after each cut point
-        self.t_center = self.sr * self.x_center  # spacing between cut-point queries
-        self.t_max = self.sr * self.x_max  # duration threshold below which no cut-point search is needed
- self.device = config.device
- self.model_rmvpe = RMVPE("%s/rmvpe.pt" % os.environ["rmvpe_root"], is_half=self.is_half, device=self.device)
- self.f0_method_dict = {
- "pm": self.get_pm,
- "harvest": self.get_harvest,
- "dio": self.get_dio,
- "rmvpe": self.get_rmvpe,
- "rmvpe+": self.get_pitch_dependant_rmvpe,
- "crepe": self.get_f0_official_crepe_computation,
- "crepe-tiny": partial(self.get_f0_official_crepe_computation, model='model'),
- "mangio-crepe": self.get_f0_crepe_computation,
- "mangio-crepe-tiny": partial(self.get_f0_crepe_computation, model='model'),
-
- }
- self.note_dict = [
- 65.41, 69.30, 73.42, 77.78, 82.41, 87.31,
- 92.50, 98.00, 103.83, 110.00, 116.54, 123.47,
- 130.81, 138.59, 146.83, 155.56, 164.81, 174.61,
- 185.00, 196.00, 207.65, 220.00, 233.08, 246.94,
- 261.63, 277.18, 293.66, 311.13, 329.63, 349.23,
- 369.99, 392.00, 415.30, 440.00, 466.16, 493.88,
- 523.25, 554.37, 587.33, 622.25, 659.25, 698.46,
- 739.99, 783.99, 830.61, 880.00, 932.33, 987.77,
- 1046.50, 1108.73, 1174.66, 1244.51, 1318.51, 1396.91,
- 1479.98, 1567.98, 1661.22, 1760.00, 1864.66, 1975.53,
- 2093.00, 2217.46, 2349.32, 2489.02, 2637.02, 2793.83,
- 2959.96, 3135.96, 3322.44, 3520.00, 3729.31, 3951.07
- ]
-
- # Fork Feature: Get the best torch device to use for f0 algorithms that require a torch device. Will return the type (torch.device)
- def get_optimal_torch_device(self, index: int = 0) -> torch.device:
- if torch.cuda.is_available():
- return torch.device(
- f"cuda:{index % torch.cuda.device_count()}"
- ) # Very fast
- elif torch.backends.mps.is_available():
- return torch.device("mps")
- return torch.device("cpu")
-
- # Fork Feature: Compute f0 with the crepe method
- def get_f0_crepe_computation(
- self,
- x,
- f0_min,
- f0_max,
- p_len,
-        *args,  # Hop length was 512 before. A lower hop length re-estimates pitch more often, giving finer pitch tracking at the cost of longer inference time.
- **kwargs, # Either use crepe-tiny "tiny" or crepe "full". Default is full
- ):
- x = x.astype(
- np.float32
- ) # fixes the F.conv2D exception. We needed to convert double to float.
- x /= np.quantile(np.abs(x), 0.999)
- torch_device = self.get_optimal_torch_device()
- audio = torch.from_numpy(x).to(torch_device, copy=True)
- audio = torch.unsqueeze(audio, dim=0)
- if audio.ndim == 2 and audio.shape[0] > 1:
- audio = torch.mean(audio, dim=0, keepdim=True).detach()
- audio = audio.detach()
- hop_length = kwargs.get('crepe_hop_length', 160)
- model = kwargs.get('model', 'full')
- print("Initiating prediction with a crepe_hop_length of: " + str(hop_length))
- pitch: Tensor = torchcrepe.predict(
- audio,
- self.sr,
- hop_length,
- f0_min,
- f0_max,
- model,
- batch_size=hop_length * 2,
- device=torch_device,
- pad=True,
- )
- p_len = p_len or x.shape[0] // hop_length
- # Resize the pitch for final f0
- source = np.array(pitch.squeeze(0).cpu().float().numpy())
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * p_len, len(source)) / p_len,
- np.arange(0, len(source)),
- source,
- )
- f0 = np.nan_to_num(target)
- return f0 # Resized f0
-
- def get_f0_official_crepe_computation(
- self,
- x,
- f0_min,
- f0_max,
- *args,
- **kwargs
- ):
- # Pick a batch size that doesn't cause memory errors on your gpu
- batch_size = 512
- # Compute pitch using first gpu
- audio = torch.tensor(np.copy(x))[None].float()
- model = kwargs.get('model', 'full')
- f0, pd = torchcrepe.predict(
- audio,
- self.sr,
- self.window,
- f0_min,
- f0_max,
- model,
- batch_size=batch_size,
- device=self.device,
- return_periodicity=True,
- )
- pd = torchcrepe.filter.median(pd, 3)
- f0 = torchcrepe.filter.mean(f0, 3)
- f0[pd < 0.1] = 0
- f0 = f0[0].cpu().numpy()
- return f0
-
- # Fork Feature: Compute pYIN f0 method
- def get_f0_pyin_computation(self, x, f0_min, f0_max):
-        y = x.astype(np.float32)  # use the audio passed in rather than a hardcoded test file
- f0, _, _ = librosa.pyin(y, sr=self.sr, fmin=f0_min, fmax=f0_max)
- f0 = f0[1:] # Get rid of extra first frame
- return f0
-
- def get_pm(self, x, p_len, *args, **kwargs):
- f0 = parselmouth.Sound(x, self.sr).to_pitch_ac(
- time_step=160 / 16000,
- voicing_threshold=0.6,
- pitch_floor=kwargs.get('f0_min'),
- pitch_ceiling=kwargs.get('f0_max'),
- ).selected_array["frequency"]
-
- return np.pad(
- f0,
- [[max(0, (p_len - len(f0) + 1) // 2), max(0, p_len - len(f0) - (p_len - len(f0) + 1) // 2)]],
- mode="constant"
- )
-
- def get_harvest(self, x, *args, **kwargs):
- f0_spectral = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=kwargs.get('f0_max'),
- f0_floor=kwargs.get('f0_min'),
- frame_period=1000 * kwargs.get('hop_length', 160) / self.sr,
- )
- return pyworld.stonemask(x.astype(np.double), *f0_spectral, self.sr)
-
- def get_dio(self, x, *args, **kwargs):
- f0_spectral = pyworld.dio(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=kwargs.get('f0_max'),
- f0_floor=kwargs.get('f0_min'),
- frame_period=1000 * kwargs.get('hop_length', 160) / self.sr,
- )
- return pyworld.stonemask(x.astype(np.double), *f0_spectral, self.sr)
-
-
- def get_rmvpe(self, x, *args, **kwargs):
- if not hasattr(self, "model_rmvpe"):
- from infer.lib.rmvpe import RMVPE
-
- logger.info(
- "Loading rmvpe model,%s" % "%s/rmvpe.pt" % os.environ["rmvpe_root"]
- )
- self.model_rmvpe = RMVPE(
- "%s/rmvpe.pt" % os.environ["rmvpe_root"],
- is_half=self.is_half,
- device=self.device,
- )
- f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
-
- return f0
-
-
- def get_pitch_dependant_rmvpe(self, x, f0_min=1, f0_max=40000, *args, **kwargs):
- return self.model_rmvpe.infer_from_audio_with_pitch(x, thred=0.03, f0_min=f0_min, f0_max=f0_max)
-
- def autotune_f0(self, f0):
- autotuned_f0 = []
- for freq in f0:
- closest_notes = [x for x in self.note_dict if abs(x - freq) == min(abs(n - freq) for n in self.note_dict)]
- autotuned_f0.append(random.choice(closest_notes))
- return np.array(autotuned_f0, np.float64)
-
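
autotune_f0 above snaps every frame's frequency to the nearest entry of note_dict (an equal-tempered note table); a tiny worked example of the same snapping with a three-note toy table:

import numpy as np

note_table = [220.0, 440.0, 880.0]  # toy table; the real note_dict spans roughly C2 to B7
f0 = np.array([227.0, 452.0, 870.0])
snapped = np.array([min(note_table, key=lambda n: abs(n - f)) for f in f0])
print(snapped)  # [220. 440. 880.]
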
- # Fork Feature: Acquire median hybrid f0 estimation calculation
- def get_f0_hybrid_computation(
- self,
- methods_str,
- input_audio_path,
- x,
- f0_min,
- f0_max,
- p_len,
- filter_radius,
- crepe_hop_length,
- time_step
- ):
- # Get various f0 methods from input to use in the computation stack
- params = {'x': x, 'p_len': p_len, 'f0_min': f0_min,
- 'f0_max': f0_max, 'time_step': time_step, 'filter_radius': filter_radius,
- 'crepe_hop_length': crepe_hop_length, 'model': "full"
- }
-        methods_str = re.search(r'hybrid\[(.+)\]', methods_str)
- if methods_str: # Ensure a match was found
- methods = [method.strip() for method in methods_str.group(1).split('+')]
- f0_computation_stack = []
-
- print(f"Calculating f0 pitch estimations for methods: {str(methods)}")
- x = x.astype(np.float32)
- x /= np.quantile(np.abs(x), 0.999)
- # Get f0 calculations for all methods specified
-
- for method in methods:
- if method not in self.f0_method_dict:
- print(f"Method {method} not found.")
- continue
- f0 = self.f0_method_dict[method](**params)
- if method == 'harvest' and filter_radius > 2:
- f0 = signal.medfilt(f0, 3)
- f0 = f0[1:] # Get rid of first frame.
- f0_computation_stack.append(f0)
-
- for fc in f0_computation_stack:
- print(len(fc))
-
- print(f"Calculating hybrid median f0 from the stack of: {str(methods)}")
- f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0)
- return f0_median_hybrid
-
- def get_f0(
- self,
- input_audio_path,
- x,
- p_len,
- f0_up_key,
- f0_method,
- filter_radius,
- crepe_hop_length,
- f0_autotune,
- inp_f0=None,
- f0_min=50,
- f0_max=1100,
- ):
- global input_audio_path2wav
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- params = {'x': x, 'p_len': p_len, 'f0_up_key': f0_up_key, 'f0_min': f0_min,
- 'f0_max': f0_max, 'time_step': time_step, 'filter_radius': filter_radius,
- 'crepe_hop_length': crepe_hop_length, 'model': "full"
- }
-
- if "hybrid" in f0_method:
- # Perform hybrid median pitch estimation
- input_audio_path2wav[input_audio_path] = x.astype(np.double)
- f0 = self.get_f0_hybrid_computation(
-                f0_method,
- input_audio_path,
- x,
- f0_min,
- f0_max,
- p_len,
- filter_radius,
- crepe_hop_length,
- time_step,
- )
- else:
- f0 = self.f0_method_dict[f0_method](**params)
-
- if "privateuseone" in str(self.device): # clean ortruntime memory
- del self.model_rmvpe.model
- del self.model_rmvpe
- logger.info("Cleaning ortruntime memory")
-
- if f0_autotune:
- f0 = self.autotune_f0(f0)
-
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # number of f0 points per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0]
- f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[
- :shape
- ]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- f0_coarse = np.rint(f0_mel).astype(np.int32)
- return f0_coarse, f0bak # 1-0
-
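
get_f0 returns both the raw pitch curve (f0bak) and a coarse code in 1..255 obtained by mel-scaling the frequency between f0_min and f0_max; the mapping on its own, as a sketch:

import numpy as np

def coarse_f0(f0, f0_min=50.0, f0_max=1100.0):
    mel_min = 1127 * np.log(1 + f0_min / 700)
    mel_max = 1127 * np.log(1 + f0_max / 700)
    mel = 1127 * np.log(1 + f0 / 700)
    mel[mel > 0] = (mel[mel > 0] - mel_min) * 254 / (mel_max - mel_min) + 1
    mel = np.clip(mel, 1, 255)
    return np.rint(mel).astype(np.int32)

print(coarse_f0(np.array([0.0, 50.0, 440.0, 1100.0])))  # unvoiced and f0_min collapse to code 1, f0_max maps to 255
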
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9 if version == "v1" else 12,
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0]) if version == "v1" else logits[0]
- if protect < 0.5 and pitch is not None and pitchf is not None:
- feats0 = feats.clone()
- if (
- not isinstance(index, type(None))
- and not isinstance(big_npy, type(None))
- and index_rate != 0
- ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
-
- # _, I = index.search(npy, 1)
- # npy = big_npy[I.squeeze()]
-
- score, ix = index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
-
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- if protect < 0.5 and pitch is not None and pitchf is not None:
- feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute(
- 0, 2, 1
- )
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
- if pitch is not None and pitchf is not None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
-
- if protect < 0.5 and pitch is not None and pitchf is not None:
- pitchff = pitchf.clone()
- pitchff[pitchf > 0] = 1
- pitchff[pitchf < 1] = protect
- pitchff = pitchff.unsqueeze(-1)
- feats = feats * pitchff + feats0 * (1 - pitchff)
- feats = feats.to(feats0.dtype)
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
- hasp = pitch is not None and pitchf is not None
- arg = (feats, p_len, pitch, pitchf, sid) if hasp else (feats, p_len, sid)
- audio1 = (net_g.infer(*arg)[0][0, 0]).data.cpu().float().numpy()
- del hasp, arg
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
- def process_t(self, t, s, window, audio_pad, pitch, pitchf, times, index, big_npy, index_rate, version, protect, t_pad_tgt, if_f0, sid, model, net_g):
- t = t // window * window
- if if_f0 == 1:
- return self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + t_pad_tgt + window],
- pitch[:, s // window : (t + t_pad_tgt) // window],
- pitchf[:, s // window : (t + t_pad_tgt) // window],
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[t_pad_tgt : -t_pad_tgt]
- else:
- return self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + t_pad_tgt + window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[t_pad_tgt : -t_pad_tgt]
-
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- input_audio_path,
- times,
- f0_up_key,
- f0_method,
- file_index,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- crepe_hop_length,
- f0_autotune,
- f0_file=None,
- f0_min=50,
- f0_max=1100
- ):
- if (
- file_index != ""
- # and file_big_npy != ""
- # and os.path.exists(file_big_npy) == True
- and os.path.exists(file_index)
- and index_rate != 0
- ):
- try:
- index = faiss.read_index(file_index)
- # big_npy = np.load(file_big_npy)
- big_npy = index.reconstruct_n(0, index.ntotal)
- except:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
- if hasattr(f0_file, "name"):
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
- except:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0:
- pitch, pitchf = self.get_f0(
- input_audio_path,
- audio_pad,
- p_len,
- f0_up_key,
- f0_method,
- filter_radius,
- crepe_hop_length,
- f0_autotune,
- inp_f0,
- f0_min,
- f0_max
- )
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- if self.device == "mps" or "xpu" in self.device:
- pitchf = pitchf.astype(np.float32)
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
-
- with tqdm(total=len(opt_ts), desc="Processing", unit="window") as pbar:
- for i, t in enumerate(opt_ts):
- t = t // self.window * self.window
- start = s
- end = t + self.t_pad2 + self.window
- audio_slice = audio_pad[start:end]
- pitch_slice = pitch[:, start // self.window:end // self.window] if if_f0 else None
- pitchf_slice = pitchf[:, start // self.window:end // self.window] if if_f0 else None
- audio_opt.append(self.vc(model, net_g, sid, audio_slice, pitch_slice, pitchf_slice, times, index, big_npy, index_rate, version, protect)[self.t_pad_tgt : -self.t_pad_tgt])
- s = t
- pbar.update(1)
- pbar.refresh()
-
- audio_slice = audio_pad[t:]
- pitch_slice = pitch[:, t // self.window:] if if_f0 and t is not None else pitch
- pitchf_slice = pitchf[:, t // self.window:] if if_f0 and t is not None else pitchf
- audio_opt.append(self.vc(model, net_g, sid, audio_slice, pitch_slice, pitchf_slice, times, index, big_npy, index_rate, version, protect)[self.t_pad_tgt : -self.t_pad_tgt])
-
- audio_opt = np.concatenate(audio_opt)
- if rms_mix_rate != 1:
- audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate)
- if tgt_sr != resample_sr >= 16000:
- audio_opt = librosa.resample(
- audio_opt, orig_sr=tgt_sr, target_sr=resample_sr
- )
- audio_max = np.abs(audio_opt).max() / 0.99
- max_int16 = 32768
- if audio_max > 1:
- max_int16 /= audio_max
- audio_opt = (audio_opt * max_int16).astype(np.int16)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-
- print("Returning completed audio...")
- print("-------------------")
- return audio_opt
diff --git a/spaces/EyeSeeThru/anything-v3.0/utils.py b/spaces/EyeSeeThru/anything-v3.0/utils.py
deleted file mode 100644
index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000
--- a/spaces/EyeSeeThru/anything-v3.0/utils.py
+++ /dev/null
@@ -1,6 +0,0 @@
-def is_google_colab():
- try:
- import google.colab
- return True
- except:
- return False
\ No newline at end of file
diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/vdecoder/hifigan/utils.py b/spaces/FrankZxShen/so-vits-svc-models-ba/vdecoder/hifigan/utils.py
deleted file mode 100644
index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-ba/vdecoder/hifigan/utils.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import glob
-import os
-import matplotlib
-import torch
-from torch.nn.utils import weight_norm
-# matplotlib.use("Agg")
-import matplotlib.pylab as plt
-
-
-def plot_spectrogram(spectrogram):
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
-
- fig.canvas.draw()
- plt.close()
-
- return fig
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def apply_weight_norm(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- weight_norm(m)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def save_checkpoint(filepath, obj):
- print("Saving checkpoint to {}".format(filepath))
- torch.save(obj, filepath)
- print("Complete.")
-
-
-def del_old_checkpoints(cp_dir, prefix, n_models=2):
- pattern = os.path.join(cp_dir, prefix + '????????')
- cp_list = glob.glob(pattern) # get checkpoint paths
-    cp_list = sorted(cp_list)  # sort by iteration
-    if len(cp_list) > n_models:  # if more than n_models checkpoints are found
-        for cp in cp_list[:-n_models]:  # delete the oldest models other than the latest n_models
-            open(cp, 'w').close()  # empty file contents
-            os.unlink(cp)  # delete file (moves to trash when using Colab)
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + '????????')
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return None
- return sorted(cp_list)[-1]
-
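
get_padding above returns the symmetric padding that keeps a dilated 1-D convolution length-preserving; a quick standalone check:

import torch

def get_padding(kernel_size, dilation=1):
    return int((kernel_size * dilation - dilation) / 2)

conv = torch.nn.Conv1d(1, 1, kernel_size=7, dilation=3, padding=get_padding(7, 3))
print(conv(torch.zeros(1, 1, 100)).shape)  # torch.Size([1, 1, 100]) -- input length preserved
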
diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/ContentVec768L12.py b/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/ContentVec768L12.py
deleted file mode 100644
index 0d1591c8843b920d5685e822354e8e6adc9a9e19..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/ContentVec768L12.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from vencoder.encoder import SpeechEncoder
-import torch
-from fairseq import checkpoint_utils
-
-class ContentVec768L12(SpeechEncoder):
- def __init__(self,vec_path = "pretrain/checkpoint_best_legacy_500.pt",device=None):
- print("load model(s) from {}".format(vec_path))
- self.hidden_dim = 768
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- [vec_path],
- suffix="",
- )
- if device is None:
- self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- else:
- self.dev = torch.device(device)
- self.model = models[0].to(self.dev)
- self.model.eval()
-
- def encoder(self, wav):
- feats = wav
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- inputs = {
- "source": feats.to(wav.device),
- "padding_mask": padding_mask.to(wav.device),
- "output_layer": 12, # layer 12
- }
- with torch.no_grad():
- logits = self.model.extract_features(**inputs)
- return logits[0].transpose(1, 2)
\ No newline at end of file
diff --git a/spaces/GIZ/SDSN-demo/appStore/keyword_search.py b/spaces/GIZ/SDSN-demo/appStore/keyword_search.py
deleted file mode 100644
index 9162afc6e633b7bc25c4fb34d9cda34ae90efa6a..0000000000000000000000000000000000000000
--- a/spaces/GIZ/SDSN-demo/appStore/keyword_search.py
+++ /dev/null
@@ -1,176 +0,0 @@
-# set path
-import glob, os, sys;
-sys.path.append('../utils')
-
-import streamlit as st
-import json
-import logging
-from utils.lexical_search import runLexicalPreprocessingPipeline, lexical_search
-from utils.semantic_search import runSemanticPreprocessingPipeline, semantic_keywordsearch
-from utils.checkconfig import getconfig
-from utils.streamlitcheck import checkbox_without_preselect
-
-# Declare all the necessary variables
-config = getconfig('paramconfig.cfg')
-split_by = config.get('semantic_search','SPLIT_BY')
-split_length = int(config.get('semantic_search','SPLIT_LENGTH'))
-split_overlap = int(config.get('semantic_search','SPLIT_OVERLAP'))
-split_respect_sentence_boundary = bool(int(config.get('semantic_search',
- 'RESPECT_SENTENCE_BOUNDARY')))
-remove_punc = bool(int(config.get('semantic_search','REMOVE_PUNC')))
-embedding_model = config.get('semantic_search','RETRIEVER')
-embedding_model_format = config.get('semantic_search','RETRIEVER_FORMAT')
-embedding_layer = int(config.get('semantic_search','RETRIEVER_EMB_LAYER'))
-embedding_dim = int(config.get('semantic_search','EMBEDDING_DIM'))
-max_seq_len = int(config.get('semantic_search','MAX_SEQ_LENGTH'))
-retriever_top_k = int(config.get('semantic_search','RETRIEVER_TOP_K'))
-reader_model = config.get('semantic_search','READER')
-reader_top_k = int(config.get('semantic_search','RETRIEVER_TOP_K'))
-top_k_per_candidate = int(config.get('semantic_search','READER_TOP_K_PER_CANDIDATE'))
-lexical_split_by= config.get('lexical_search','SPLIT_BY')
-lexical_split_length=int(config.get('lexical_search','SPLIT_LENGTH'))
-lexical_split_overlap = int(config.get('lexical_search','SPLIT_OVERLAP'))
-lexical_remove_punc = bool(int(config.get('lexical_search','REMOVE_PUNC')))
-lexical_top_k=int(config.get('lexical_search','TOP_K'))
-
-def app():
-
- with st.container():
- st.markdown("
Search
",
- unsafe_allow_html=True)
- st.write(' ')
- st.write(' ')
-
- with st.expander("ℹ️ - About this app", expanded=False):
-
- st.write(
- """
- The *Search* app is an interface \
- for doing contextual and keyword searches in \
- policy documents. \
- """)
- st.write("")
- st.write(""" The application allows its user to perform a search\
- based on two options: a lexical search([TFIDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf))\
- and semantic search. [bi-encoder](https://www.sbert.net/examples/applications/retrieve_rerank/README.html)\
- The lexical search only \
- displays paragraphs in the document with exact matching results, \
- the semantic search shows paragraphs with meaningful connections \
- (e.g., synonyms) based on the search context. Both \
- methods employ a probabilistic retrieval framework in its identification\
- of relevant paragraphs. By defualt the search is performed using \
- 'Semantic Search', and to find 'Exact/Lexical Matches' please tick the \
- checkbox provided which will by-pass semantic search. Furthermore,\
- the application allows the user to search for pre-defined keywords \
- from different thematic buckets present in sidebar.""")
- st.write("")
- st.write(""" The Exact Matches gives back top {} findings, and Semantic
- search provides with top {} answers.""".format(lexical_top_k, retriever_top_k))
- st.write("")
- st.write("")
- st.markdown("Some runtime metrics tested with cpu: Intel(R) Xeon(R) CPU @ 2.20GHz, memory: 13GB")
- col1,col2,col3= st.columns([2,4,4])
- with col1:
- st.caption("OCR File processing")
-            # st.markdown('50 sec', unsafe_allow_html=True)
- st.write("50 sec")
-
- with col2:
- st.caption("Lexical Search on 200 paragraphs(~ 35 pages)")
-            # st.markdown('12 sec', unsafe_allow_html=True)
- st.write("15 sec")
-
- with col3:
- st.caption("Semantic search on 200 paragraphs(~ 35 pages)")
-            # st.markdown('120 sec', unsafe_allow_html=True)
- st.write("120 sec(including emebedding creation)")
-
- with st.sidebar:
- with open('docStore/sample/keywordexample.json','r') as json_file:
- keywordexample = json.load(json_file)
-
- # genre = st.radio("Select Keyword Category", list(keywordexample.keys()))
- st.caption("Select Keyword Category")
- genre = checkbox_without_preselect(list(keywordexample.keys()))
- if genre:
- keywordList = keywordexample[genre]
- else:
- keywordList = None
-
- st.markdown("---")
-
- with st.container():
- type_hinting = "Please enter here your question and we \
- will look for an answer in the document\
- OR enter the keyword you are looking \
- for and we will look for similar\
- context in the document.\
- You can also explore predefined sets of keywords from sidebar. "
- if keywordList is not None:
- # queryList = st.text_input("You selected the {} category we \
- # will look for these keywords in document".format(genre)
- # value="{}".format(keywordList))
- queryList = st.text_input(type_hinting,
- value = "{}".format(keywordList))
- else:
- queryList = st.text_input(type_hinting,
- placeholder="Enter keyword/query here")
-
- searchtype = st.checkbox("Show only Exact Matches")
- if st.button("Find them"):
-
- if queryList == "":
- st.info("🤔 No keyword provided, if you dont have any, \
- please try example sets from sidebar!")
- logging.warning("Terminated as no keyword provided")
- else:
- if 'filepath' in st.session_state:
-
- if searchtype:
- all_documents = runLexicalPreprocessingPipeline(
- file_name=st.session_state['filename'],
- file_path=st.session_state['filepath'],
- split_by=lexical_split_by,
- split_length=lexical_split_length,
- split_overlap=lexical_split_overlap,
- remove_punc=lexical_remove_punc)
- logging.info("performing lexical search")
- with st.spinner("Performing Exact matching search \
- (Lexical search) for you"):
- lexical_search(query=queryList,
- documents = all_documents['documents'],
- top_k = lexical_top_k )
- else:
- all_documents = runSemanticPreprocessingPipeline(
- file_path= st.session_state['filepath'],
- file_name = st.session_state['filename'],
- split_by=split_by,
- split_length= split_length,
- split_overlap=split_overlap,
- remove_punc= remove_punc,
- split_respect_sentence_boundary=split_respect_sentence_boundary)
- if len(all_documents['documents']) > 100:
- warning_msg = ": This might take sometime, please sit back and relax."
- else:
- warning_msg = ""
-
- logging.info("starting semantic search")
- with st.spinner("Performing Similar/Contextual search{}".format(warning_msg)):
- semantic_keywordsearch(query = queryList,
- documents = all_documents['documents'],
- embedding_model=embedding_model,
- embedding_layer=embedding_layer,
- embedding_model_format=embedding_model_format,
- reader_model=reader_model,reader_top_k=reader_top_k,
- retriever_top_k=retriever_top_k, embedding_dim=embedding_dim,
- max_seq_len=max_seq_len,
- top_k_per_candidate = top_k_per_candidate)
-
- else:
- st.info("🤔 No document found, please try to upload it at the sidebar!")
- logging.warning("Terminated as no document provided")
-
-
-
-
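
The actual lexical_search and semantic_keywordsearch helpers live in utils/ and are not shown; a minimal sketch of the TF-IDF lexical path the app describes, assuming scikit-learn and toy paragraphs:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paragraphs = [
    "Adaptation finance for coastal cities.",
    "Mitigation targets for the energy sector.",
    "Coastal adaptation and flood protection plans.",
]
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(paragraphs)

query_vec = vectorizer.transform(["coastal adaptation"])
scores = cosine_similarity(query_vec, doc_matrix)[0]
for score, para in sorted(zip(scores, paragraphs), reverse=True)[:2]:
    print(f"{score:0.2f}  {para}")
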
diff --git a/spaces/GT4SD/PatentToolkit/tridentmodel/__init__.py b/spaces/GT4SD/PatentToolkit/tridentmodel/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_sorted_block_race.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_sorted_block_race.py
deleted file mode 100644
index be99732b6d256d70d3c309b0f89df49b6c5e9cce..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_sorted_block_race.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import numpy as np
-import os
-import pybullet as p
-import random
-from cliport.tasks import primitives
-from cliport.tasks.grippers import Spatula
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-class ColorSortedBlockRace(Task):
- """Pick up blocks of two colors and place them in corresponding colored zones in a sequence."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 20
- self.lang_template = "place the blocks in the corresponding colored zones in sequence"
- self.task_completed_desc = "done placing blocks in zones."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Add zones.
- zone_size = (0.12, 0.12, 0)
- zone_urdf = 'zone/zone.urdf'
- zone_colors = ['blue', 'red']
- zone_poses = []
- for color in zone_colors:
- zone_pose = self.get_random_pose(env, zone_size)
- env.add_object(zone_urdf, zone_pose, 'fixed', color=utils.COLORS[color])
- zone_poses.append(zone_pose)
-
- # Add blocks.
- block_size = (0.04, 0.04, 0.04)
- block_urdf = 'block/block.urdf'
- block_colors = ['blue', 'red']
- blocks = []
- for color in block_colors:
- for _ in range(3):
- block_pose = self.get_random_pose(env, block_size)
- block_id = env.add_object(block_urdf, block_pose, color=utils.COLORS[color])
- blocks.append(block_id)
-
- # Goal: each block is in the corresponding colored zone.
- for i, block in enumerate(blocks):
- self.add_goal(objs=[block], matches=np.ones((1, 1)), targ_poses=[zone_poses[i//3]], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1/len(blocks),
- language_goal=self.lang_template)
\ No newline at end of file
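For reference, a tiny standalone sketch of the indexing and reward split used in `reset()` above: three blocks are added per color, block `i` targets `zone_poses[i // 3]`, and each goal contributes `1 / len(blocks)` of the task reward.

    blocks_per_color = 3
    zone_colors = ['blue', 'red']
    blocks = [(color, n) for color in zone_colors for n in range(blocks_per_color)]

    for i, (color, _) in enumerate(blocks):
        zone_index = i // blocks_per_color   # first three blocks -> zone 0, next three -> zone 1
        reward_fraction = 1 / len(blocks)    # each block contributes 1/6 of the total reward
        print(color, "block -> zone", zone_index, "reward", reward_fraction)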
diff --git a/spaces/Golyass/Recomender-System-Hybrid-Method/README.md b/spaces/Golyass/Recomender-System-Hybrid-Method/README.md
deleted file mode 100644
index deef98249c8a6b79f8f72bd9b0ce38b52f3fc2bd..0000000000000000000000000000000000000000
--- a/spaces/Golyass/Recomender-System-Hybrid-Method/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Recomender System Hybrid Method
-emoji: 💻
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/visualization/image.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/visualization/image.py
deleted file mode 100644
index 5a148384d7a77c4d9849c54570e85740eaff8235..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/visualization/image.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import matplotlib.pyplot as plt
-import mmcv
-import numpy as np
-import pycocotools.mask as mask_util
-from matplotlib.collections import PatchCollection
-from matplotlib.patches import Polygon
-
-from ..utils import mask2ndarray
-
-EPS = 1e-2
-
-
-def color_val_matplotlib(color):
-    """Convert a color input in BGR order to a normalized RGB matplotlib color
-    tuple.
-
- Args:
- color (:obj:`Color`/str/tuple/int/ndarray): Color inputs
-
- Returns:
- tuple[float]: A tuple of 3 normalized floats indicating RGB channels.
- """
- color = mmcv.color_val(color)
- color = [color / 255 for color in color[::-1]]
- return tuple(color)
-
-
-def imshow_det_bboxes(img,
- bboxes,
- labels,
- segms=None,
- class_names=None,
- score_thr=0,
- bbox_color='green',
- text_color='green',
- mask_color=None,
- thickness=2,
- font_size=13,
- win_name='',
- show=True,
- wait_time=0,
- out_file=None):
- """Draw bboxes and class labels (with scores) on an image.
-
- Args:
- img (str or ndarray): The image to be displayed.
- bboxes (ndarray): Bounding boxes (with scores), shaped (n, 4) or
- (n, 5).
- labels (ndarray): Labels of bboxes.
- segms (ndarray or None): Masks, shaped (n,h,w) or None
-        class_names (list[str]): Names of each class.
- score_thr (float): Minimum score of bboxes to be shown. Default: 0
- bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines.
- The tuple of color should be in BGR order. Default: 'green'
- text_color (str or tuple(int) or :obj:`Color`):Color of texts.
- The tuple of color should be in BGR order. Default: 'green'
- mask_color (str or tuple(int) or :obj:`Color`, optional):
- Color of masks. The tuple of color should be in BGR order.
- Default: None
- thickness (int): Thickness of lines. Default: 2
- font_size (int): Font size of texts. Default: 13
- show (bool): Whether to show the image. Default: True
- win_name (str): The window name. Default: ''
- wait_time (float): Value of waitKey param. Default: 0.
- out_file (str, optional): The filename to write the image.
- Default: None
-
- Returns:
- ndarray: The image with bboxes drawn on it.
- """
- assert bboxes.ndim == 2, \
- f' bboxes ndim should be 2, but its ndim is {bboxes.ndim}.'
- assert labels.ndim == 1, \
- f' labels ndim should be 1, but its ndim is {labels.ndim}.'
- assert bboxes.shape[0] == labels.shape[0], \
- 'bboxes.shape[0] and labels.shape[0] should have the same length.'
- assert bboxes.shape[1] == 4 or bboxes.shape[1] == 5, \
- f' bboxes.shape[1] should be 4 or 5, but its {bboxes.shape[1]}.'
- img = mmcv.imread(img).astype(np.uint8)
-
- if score_thr > 0:
- assert bboxes.shape[1] == 5
- scores = bboxes[:, -1]
- inds = scores > score_thr
- bboxes = bboxes[inds, :]
- labels = labels[inds]
- if segms is not None:
- segms = segms[inds, ...]
-
- mask_colors = []
- if labels.shape[0] > 0:
- if mask_color is None:
- # random color
- np.random.seed(42)
- mask_colors = [
- np.random.randint(0, 256, (1, 3), dtype=np.uint8)
- for _ in range(max(labels) + 1)
- ]
- else:
- # specify color
- mask_colors = [
- np.array(mmcv.color_val(mask_color)[::-1], dtype=np.uint8)
- ] * (
- max(labels) + 1)
-
- bbox_color = color_val_matplotlib(bbox_color)
- text_color = color_val_matplotlib(text_color)
-
- img = mmcv.bgr2rgb(img)
- width, height = img.shape[1], img.shape[0]
- img = np.ascontiguousarray(img)
-
- fig = plt.figure(win_name, frameon=False)
- plt.title(win_name)
- canvas = fig.canvas
- dpi = fig.get_dpi()
- # add a small EPS to avoid precision lost due to matplotlib's truncation
- # (https://github.com/matplotlib/matplotlib/issues/15363)
- fig.set_size_inches((width + EPS) / dpi, (height + EPS) / dpi)
-
- # remove white edges by set subplot margin
- plt.subplots_adjust(left=0, right=1, bottom=0, top=1)
- ax = plt.gca()
- ax.axis('off')
-
- polygons = []
- color = []
- for i, (bbox, label) in enumerate(zip(bboxes, labels)):
- bbox_int = bbox.astype(np.int32)
- poly = [[bbox_int[0], bbox_int[1]], [bbox_int[0], bbox_int[3]],
- [bbox_int[2], bbox_int[3]], [bbox_int[2], bbox_int[1]]]
- np_poly = np.array(poly).reshape((4, 2))
- polygons.append(Polygon(np_poly))
- color.append(bbox_color)
- label_text = class_names[
- label] if class_names is not None else f'class {label}'
- if len(bbox) > 4:
- label_text += f'|{bbox[-1]:.02f}'
- ax.text(
- bbox_int[0],
- bbox_int[1],
- f'{label_text}',
- bbox={
- 'facecolor': 'black',
- 'alpha': 0.8,
- 'pad': 0.7,
- 'edgecolor': 'none'
- },
- color=text_color,
- fontsize=font_size,
- verticalalignment='top',
- horizontalalignment='left')
- if segms is not None:
- color_mask = mask_colors[labels[i]]
- mask = segms[i].astype(bool)
- img[mask] = img[mask] * 0.5 + color_mask * 0.5
-
- plt.imshow(img)
-
- p = PatchCollection(
- polygons, facecolor='none', edgecolors=color, linewidths=thickness)
- ax.add_collection(p)
-
- stream, _ = canvas.print_to_buffer()
- buffer = np.frombuffer(stream, dtype='uint8')
- img_rgba = buffer.reshape(height, width, 4)
- rgb, alpha = np.split(img_rgba, [3], axis=2)
- img = rgb.astype('uint8')
- img = mmcv.rgb2bgr(img)
-
- if show:
- # We do not use cv2 for display because in some cases, opencv will
- # conflict with Qt, it will output a warning: Current thread
- # is not the object's thread. You can refer to
- # https://github.com/opencv/opencv-python/issues/46 for details
- if wait_time == 0:
- plt.show()
- else:
- plt.show(block=False)
- plt.pause(wait_time)
- if out_file is not None:
- mmcv.imwrite(img, out_file)
-
- plt.close()
-
- return img
-
-
-def imshow_gt_det_bboxes(img,
- annotation,
- result,
- class_names=None,
- score_thr=0,
- gt_bbox_color=(255, 102, 61),
- gt_text_color=(255, 102, 61),
- gt_mask_color=(255, 102, 61),
- det_bbox_color=(72, 101, 241),
- det_text_color=(72, 101, 241),
- det_mask_color=(72, 101, 241),
- thickness=2,
- font_size=13,
- win_name='',
- show=True,
- wait_time=0,
- out_file=None):
- """General visualization GT and result function.
-
- Args:
-        img (str or ndarray): The image to be displayed.
-        annotation (dict): Ground truth annotations which contain keys of
-            'gt_bboxes' and 'gt_labels', and optionally 'gt_masks'.
- result (tuple[list] or list): The detection result, can be either
- (bbox, segm) or just bbox.
-        class_names (list[str]): Names of each class.
- score_thr (float): Minimum score of bboxes to be shown. Default: 0
- gt_bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines.
- The tuple of color should be in BGR order. Default: (255, 102, 61)
- gt_text_color (str or tuple(int) or :obj:`Color`):Color of texts.
- The tuple of color should be in BGR order. Default: (255, 102, 61)
- gt_mask_color (str or tuple(int) or :obj:`Color`, optional):
- Color of masks. The tuple of color should be in BGR order.
- Default: (255, 102, 61)
- det_bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines.
- The tuple of color should be in BGR order. Default: (72, 101, 241)
- det_text_color (str or tuple(int) or :obj:`Color`):Color of texts.
- The tuple of color should be in BGR order. Default: (72, 101, 241)
- det_mask_color (str or tuple(int) or :obj:`Color`, optional):
- Color of masks. The tuple of color should be in BGR order.
- Default: (72, 101, 241)
- thickness (int): Thickness of lines. Default: 2
- font_size (int): Font size of texts. Default: 13
- win_name (str): The window name. Default: ''
- show (bool): Whether to show the image. Default: True
- wait_time (float): Value of waitKey param. Default: 0.
- out_file (str, optional): The filename to write the image.
- Default: None
-
- Returns:
- ndarray: The image with bboxes or masks drawn on it.
- """
- assert 'gt_bboxes' in annotation
- assert 'gt_labels' in annotation
- assert isinstance(
- result,
-        (tuple, list)), f'Expected tuple or list, but got {type(result)}'
-
- gt_masks = annotation.get('gt_masks', None)
- if gt_masks is not None:
- gt_masks = mask2ndarray(gt_masks)
-
- img = mmcv.imread(img)
-
- img = imshow_det_bboxes(
- img,
- annotation['gt_bboxes'],
- annotation['gt_labels'],
- gt_masks,
- class_names=class_names,
- bbox_color=gt_bbox_color,
- text_color=gt_text_color,
- mask_color=gt_mask_color,
- thickness=thickness,
- font_size=font_size,
- win_name=win_name,
- show=False)
-
- if isinstance(result, tuple):
- bbox_result, segm_result = result
- if isinstance(segm_result, tuple):
- segm_result = segm_result[0] # ms rcnn
- else:
- bbox_result, segm_result = result, None
-
- bboxes = np.vstack(bbox_result)
- labels = [
- np.full(bbox.shape[0], i, dtype=np.int32)
- for i, bbox in enumerate(bbox_result)
- ]
- labels = np.concatenate(labels)
-
- segms = None
- if segm_result is not None and len(labels) > 0: # non empty
- segms = mmcv.concat_list(segm_result)
- segms = mask_util.decode(segms)
- segms = segms.transpose(2, 0, 1)
-
- img = imshow_det_bboxes(
- img,
- bboxes,
- labels,
- segms=segms,
- class_names=class_names,
- score_thr=score_thr,
- bbox_color=det_bbox_color,
- text_color=det_text_color,
- mask_color=det_mask_color,
- thickness=thickness,
- font_size=font_size,
- win_name=win_name,
- show=show,
- wait_time=wait_time,
- out_file=out_file)
- return img
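A minimal usage sketch for `imshow_det_bboxes` as defined above, with a dummy image and two detections (assumes the module is importable, e.g. as `mmdet.core.visualization.image`):

    import numpy as np
    from mmdet.core.visualization.image import imshow_det_bboxes

    img = np.zeros((240, 320, 3), dtype=np.uint8)
    bboxes = np.array([[20, 30, 120, 150, 0.92],
                       [60, 40, 200, 220, 0.25]], dtype=np.float32)  # (n, 5): x1, y1, x2, y2, score
    labels = np.array([0, 1], dtype=np.int64)

    drawn = imshow_det_bboxes(img, bboxes, labels,
                              class_names=['cat', 'dog'],
                              score_thr=0.3,        # the 0.25-score box is filtered out
                              show=False,
                              out_file='demo_dets.jpg')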
diff --git a/spaces/GuXiaoBei/wechat-chatbot/app.py b/spaces/GuXiaoBei/wechat-chatbot/app.py
deleted file mode 100644
index 59f0f0c5f48cd69b6b08d7fd0ea65dca9f497f2f..0000000000000000000000000000000000000000
--- a/spaces/GuXiaoBei/wechat-chatbot/app.py
+++ /dev/null
@@ -1,45 +0,0 @@
-# encoding:utf-8
-
-import config
-import gradio as gr
-from channel import channel_factory
-from common.log import logger
-from io import BytesIO
-from PIL import Image
-from concurrent.futures import ThreadPoolExecutor
-thread_pool = ThreadPoolExecutor(max_workers=8)
-
-def getImage(bytes):
- bytes_stream = BytesIO(bytes)
- image = Image.open(bytes_stream)
- return image
-
-def getLoginUrl():
- # load config
- config.load_config()
-
- # create channel
- bot = channel_factory.create_channel("wx")
- thread_pool.submit(bot.startup)
-
- while (True):
- if bot.getQrCode():
- return getImage(bot.getQrCode())
-
-if __name__ == '__main__':
- try:
-
- with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- btn = gr.Button(value="生成二维码")
- with gr.Column():
- outputs=[gr.Pil()]
- btn.click(getLoginUrl, outputs=outputs)
-
- demo.launch()
-
-
- except Exception as e:
- logger.error("App startup failed!")
- logger.exception(e)
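The app above follows a simple pattern: start the bot in a thread pool, poll until its QR-code bytes are available, then decode them with PIL. A self-contained sketch of that pattern with a hypothetical `DummyBot` standing in for the real channel:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from io import BytesIO
    from PIL import Image

    pool = ThreadPoolExecutor(max_workers=2)

    class DummyBot:
        def __init__(self):
            self._qr = None
        def startup(self):
            time.sleep(0.1)                          # pretend to log in
            buf = BytesIO()
            Image.new("RGB", (64, 64), "white").save(buf, format="PNG")
            self._qr = buf.getvalue()                # QR code as raw PNG bytes
        def getQrCode(self):
            return self._qr

    bot = DummyBot()
    pool.submit(bot.startup)
    while not bot.getQrCode():                       # poll until the QR bytes are ready
        time.sleep(0.05)
    qr_image = Image.open(BytesIO(bot.getQrCode()))
    print(qr_image.size)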
diff --git a/spaces/HESOAYM/ElviraMulti/modules/config.py b/spaces/HESOAYM/ElviraMulti/modules/config.py
deleted file mode 100644
index 4e816ddd6cf4499f21cbbd2aee3ae0a6eeb7c5af..0000000000000000000000000000000000000000
--- a/spaces/HESOAYM/ElviraMulti/modules/config.py
+++ /dev/null
@@ -1,170 +0,0 @@
-from collections import defaultdict
-from contextlib import contextmanager
-import os
-import logging
-import sys
-import commentjson as json
-
-from . import shared
-from . import presets
-
-
-__all__ = [
- "my_api_key",
- "authflag",
- "auth_list",
- "dockerflag",
- "retrieve_proxy",
- "log_level",
- "advance_docs",
- "update_doc_config",
- "multi_api_key",
- "server_name",
- "server_port",
- "share",
-]
-
-# Add a single unified config file to avoid the confusion of having too many files (lowest priority)
-# It also makes it easier to add config support for future custom features
-if os.path.exists("config.json"):
- with open("config.json", "r", encoding='utf-8') as f:
- config = json.load(f)
-else:
- config = {}
-
-language = config.get("language", "auto")  # UI language, "auto" by default
-language = os.environ.get("LANGUAGE", language)
-
-
-if os.path.exists("api_key.txt"):
- logging.info("检测到api_key.txt文件,正在进行迁移...")
- with open("api_key.txt", "r") as f:
- config["openai_api_key"] = f.read().strip()
- os.rename("api_key.txt", "api_key(deprecated).txt")
- with open("config.json", "w", encoding='utf-8') as f:
- json.dump(config, f, indent=4)
-
-if os.path.exists("auth.json"):
- logging.info("检测到auth.json文件,正在进行迁移...")
- auth_list = []
- with open("auth.json", "r", encoding='utf-8') as f:
- auth = json.load(f)
- for _ in auth:
- if auth[_]["username"] and auth[_]["password"]:
- auth_list.append((auth[_]["username"], auth[_]["password"]))
- else:
- logging.error("请检查auth.json文件中的用户名和密码!")
- sys.exit(1)
- config["users"] = auth_list
- os.rename("auth.json", "auth(deprecated).json")
- with open("config.json", "w", encoding='utf-8') as f:
- json.dump(config, f, indent=4)
-
-## Handle Docker: check whether we are running in Docker
-dockerflag = config.get("dockerflag", False)
-if os.environ.get("dockerrun") == "yes":
- dockerflag = True
-
-## Handle the API key and the list of allowed users
-my_api_key = config.get("openai_api_key", "") # enter your API key here
-my_api_key = os.environ.get("OPENAI_API_KEY", my_api_key)
-
-## Multi-account mechanism
-multi_api_key = config.get("multi_api_key", False) # whether the multi-account mechanism is enabled
-if multi_api_key:
- api_key_list = config.get("api_key_list", [])
- if len(api_key_list) == 0:
- logging.error("多账号模式已开启,但api_key_list为空,请检查config.json")
- sys.exit(1)
- shared.state.set_api_key_queue(api_key_list)
-
-auth_list = config.get("users", []) # the list of allowed users
-authflag = len(auth_list) > 0 # whether authentication is enabled, now determined by the length of auth_list
-
-# Handle a custom api_host; the environment variable takes precedence and is wired up automatically if present
-api_host = os.environ.get("api_host", config.get("api_host", ""))
-if api_host:
- shared.state.set_api_host(api_host)
-
-@contextmanager
-def retrieve_openai_api(api_key = None):
- old_api_key = os.environ.get("OPENAI_API_KEY", "")
- if api_key is None:
- os.environ["OPENAI_API_KEY"] = my_api_key
- yield my_api_key
- else:
- os.environ["OPENAI_API_KEY"] = api_key
- yield api_key
- os.environ["OPENAI_API_KEY"] = old_api_key
-
-## Handle logging
-log_level = config.get("log_level", "INFO")
-logging.basicConfig(
- level=log_level,
- format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s",
-)
-
-## Handle proxies:
-http_proxy = config.get("http_proxy", "")
-https_proxy = config.get("https_proxy", "")
-http_proxy = os.environ.get("HTTP_PROXY", http_proxy)
-https_proxy = os.environ.get("HTTPS_PROXY", https_proxy)
-
-# Reset the system variables; leave the environment variables unset when not needed to avoid global proxy errors
-os.environ["HTTP_PROXY"] = ""
-os.environ["HTTPS_PROXY"] = ""
-
-local_embedding = config.get("local_embedding", False) # whether to use a local embedding model
-
-@contextmanager
-def retrieve_proxy(proxy=None):
- """
-    1. If proxy is None, set the environment variables and return the most recently configured proxy.
-    2. If proxy is not None, update the current proxy configuration but do not touch the environment variables.
- """
- global http_proxy, https_proxy
- if proxy is not None:
- http_proxy = proxy
- https_proxy = proxy
- yield http_proxy, https_proxy
- else:
- old_var = os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"]
- os.environ["HTTP_PROXY"] = http_proxy
- os.environ["HTTPS_PROXY"] = https_proxy
- yield http_proxy, https_proxy # return new proxy
-
- # return old proxy
- os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] = old_var
-
-
-## Handle advanced docs
-advance_docs = defaultdict(lambda: defaultdict(dict))
-advance_docs.update(config.get("advance_docs", {}))
-def update_doc_config(two_column_pdf):
- global advance_docs
- advance_docs["pdf"]["two_column"] = two_column_pdf
-
- logging.info(f"更新后的文件参数为:{advance_docs}")
-
-## Handle gradio.launch parameters
-server_name = config.get("server_name", None)
-server_port = config.get("server_port", None)
-if server_name is None:
- if dockerflag:
- server_name = "0.0.0.0"
- else:
- server_name = "127.0.0.1"
-if server_port is None:
- if dockerflag:
- server_port = 7860
-
-assert server_port is None or type(server_port) == int, "要求port设置为int类型"
-
-# Set the default model
-default_model = config.get("default_model", "")
-try:
- presets.DEFAULT_MODEL = presets.MODELS.index(default_model)
-except ValueError:
- pass
-
-share = config.get("share", False)
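Usage sketch for the `retrieve_proxy` context manager defined above (imported as `modules.config` within this app): passing a proxy updates the module-level proxy values without touching the environment, while calling it with no argument temporarily exports the configured proxy to the environment and restores the old values on exit.

    from modules.config import retrieve_proxy

    # Route traffic through an explicit proxy for this block only (env vars untouched).
    with retrieve_proxy("http://127.0.0.1:7890") as (http_p, https_p):
        print("using proxies:", http_p, https_p)

    # Apply whatever proxy is configured; HTTP_PROXY/HTTPS_PROXY are restored afterwards.
    with retrieve_proxy() as (http_p, https_p):
        print("configured proxies:", http_p, https_p)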
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/__init__.py b/spaces/HaloMaster/chinesesummary/fengshen/__init__.py
deleted file mode 100644
index b5829a3ac9e634d44d408d2ff6d22880e1c00805..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The IDEA Authors. All rights reserved.
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .models.longformer import LongformerConfig, LongformerModel
-from .models.roformer import RoFormerConfig, RoFormerModel
-from .models.megatron_t5 import T5Config, T5EncoderModel
-from .models.ubert import UbertPiplines, UbertModel
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/laser/laser_src/laser_lstm.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/laser/laser_src/laser_lstm.py
deleted file mode 100644
index 10df90e002d5a7dd74a571dbc3b328c130c57a0a..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/laser/laser_src/laser_lstm.py
+++ /dev/null
@@ -1,585 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from fairseq import options, utils
-
-from fairseq.models import (
- FairseqEncoder,
- FairseqIncrementalDecoder,
- FairseqEncoderDecoderModel,
- register_model,
- register_model_architecture,
-)
-
-
-@register_model("laser_lstm")
-class LSTMModel(FairseqEncoderDecoderModel):
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
-
- def forward(
- self,
- src_tokens,
- src_lengths,
- prev_output_tokens=None,
- tgt_tokens=None,
- tgt_lengths=None,
- target_language_id=None,
- dataset_name="",
- ):
- assert target_language_id is not None
-
- src_encoder_out = self.encoder(src_tokens, src_lengths, dataset_name)
- return self.decoder(
- prev_output_tokens, src_encoder_out, lang_id=target_language_id
- )
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- parser.add_argument(
- "--dropout",
- default=0.1,
- type=float,
- metavar="D",
- help="dropout probability",
- )
- parser.add_argument(
- "--encoder-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension",
- )
- parser.add_argument(
- "--encoder-embed-path",
- default=None,
- type=str,
- metavar="STR",
- help="path to pre-trained encoder embedding",
- )
- parser.add_argument(
- "--encoder-hidden-size", type=int, metavar="N", help="encoder hidden size"
- )
- parser.add_argument(
- "--encoder-layers", type=int, metavar="N", help="number of encoder layers"
- )
- parser.add_argument(
- "--encoder-bidirectional",
- action="store_true",
- help="make all layers of encoder bidirectional",
- )
- parser.add_argument(
- "--decoder-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension",
- )
- parser.add_argument(
- "--decoder-embed-path",
- default=None,
- type=str,
- metavar="STR",
- help="path to pre-trained decoder embedding",
- )
- parser.add_argument(
- "--decoder-hidden-size", type=int, metavar="N", help="decoder hidden size"
- )
- parser.add_argument(
- "--decoder-layers", type=int, metavar="N", help="number of decoder layers"
- )
- parser.add_argument(
- "--decoder-out-embed-dim",
- type=int,
- metavar="N",
- help="decoder output embedding dimension",
- )
- parser.add_argument(
- "--decoder-zero-init",
- type=str,
- metavar="BOOL",
- help="initialize the decoder hidden/cell state to zero",
- )
- parser.add_argument(
- "--decoder-lang-embed-dim",
- type=int,
- metavar="N",
- help="decoder language embedding dimension",
- )
- parser.add_argument(
- "--fixed-embeddings",
- action="store_true",
- help="keep embeddings fixed (ENCODER ONLY)",
- ) # TODO Also apply to decoder embeddings?
-
- # Granular dropout settings (if not specified these default to --dropout)
- parser.add_argument(
- "--encoder-dropout-in",
- type=float,
- metavar="D",
- help="dropout probability for encoder input embedding",
- )
- parser.add_argument(
- "--encoder-dropout-out",
- type=float,
- metavar="D",
- help="dropout probability for encoder output",
- )
- parser.add_argument(
- "--decoder-dropout-in",
- type=float,
- metavar="D",
- help="dropout probability for decoder input embedding",
- )
- parser.add_argument(
- "--decoder-dropout-out",
- type=float,
- metavar="D",
- help="dropout probability for decoder output",
- )
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
- # make sure that all args are properly defaulted (in case there are any new ones)
- base_architecture(args)
-
- def load_pretrained_embedding_from_file(embed_path, dictionary, embed_dim):
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
- embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx)
- embed_dict = utils.parse_embedding(embed_path)
- utils.print_embed_overlap(embed_dict, dictionary)
- return utils.load_embedding(embed_dict, dictionary, embed_tokens)
-
- pretrained_encoder_embed = None
- if args.encoder_embed_path:
- pretrained_encoder_embed = load_pretrained_embedding_from_file(
- args.encoder_embed_path, task.source_dictionary, args.encoder_embed_dim
- )
- pretrained_decoder_embed = None
- if args.decoder_embed_path:
- pretrained_decoder_embed = load_pretrained_embedding_from_file(
- args.decoder_embed_path, task.target_dictionary, args.decoder_embed_dim
- )
-
- num_langs = task.num_tasks if hasattr(task, "num_tasks") else 0
-
- encoder = LSTMEncoder(
- dictionary=task.source_dictionary,
- embed_dim=args.encoder_embed_dim,
- hidden_size=args.encoder_hidden_size,
- num_layers=args.encoder_layers,
- dropout_in=args.encoder_dropout_in,
- dropout_out=args.encoder_dropout_out,
- bidirectional=args.encoder_bidirectional,
- pretrained_embed=pretrained_encoder_embed,
- fixed_embeddings=args.fixed_embeddings,
- )
- decoder = LSTMDecoder(
- dictionary=task.target_dictionary,
- embed_dim=args.decoder_embed_dim,
- hidden_size=args.decoder_hidden_size,
- out_embed_dim=args.decoder_out_embed_dim,
- num_layers=args.decoder_layers,
- dropout_in=args.decoder_dropout_in,
- dropout_out=args.decoder_dropout_out,
- zero_init=options.eval_bool(args.decoder_zero_init),
- encoder_embed_dim=args.encoder_embed_dim,
- encoder_output_units=encoder.output_units,
- pretrained_embed=pretrained_decoder_embed,
- num_langs=num_langs,
- lang_embed_dim=args.decoder_lang_embed_dim,
- )
- return cls(encoder, decoder)
-
-
-class LSTMEncoder(FairseqEncoder):
- """LSTM encoder."""
-
- def __init__(
- self,
- dictionary,
- embed_dim=512,
- hidden_size=512,
- num_layers=1,
- dropout_in=0.1,
- dropout_out=0.1,
- bidirectional=False,
- left_pad=True,
- pretrained_embed=None,
- padding_value=0.0,
- fixed_embeddings=False,
- ):
- super().__init__(dictionary)
- self.num_layers = num_layers
- self.dropout_in = dropout_in
- self.dropout_out = dropout_out
- self.bidirectional = bidirectional
- self.hidden_size = hidden_size
-
- num_embeddings = len(dictionary)
- self.padding_idx = dictionary.pad()
- if pretrained_embed is None:
- self.embed_tokens = Embedding(num_embeddings, embed_dim, self.padding_idx)
- else:
- self.embed_tokens = pretrained_embed
- if fixed_embeddings:
- self.embed_tokens.weight.requires_grad = False
-
- self.lstm = LSTM(
- input_size=embed_dim,
- hidden_size=hidden_size,
- num_layers=num_layers,
- dropout=self.dropout_out if num_layers > 1 else 0.0,
- bidirectional=bidirectional,
- )
- self.left_pad = left_pad
- self.padding_value = padding_value
-
- self.output_units = hidden_size
- if bidirectional:
- self.output_units *= 2
-
- def forward(self, src_tokens, src_lengths, dataset_name):
- if self.left_pad:
- # convert left-padding to right-padding
- src_tokens = utils.convert_padding_direction(
- src_tokens,
- self.padding_idx,
- left_to_right=True,
- )
-
- bsz, seqlen = src_tokens.size()
-
- # embed tokens
- x = self.embed_tokens(src_tokens)
- x = F.dropout(x, p=self.dropout_in, training=self.training)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- # pack embedded source tokens into a PackedSequence
- try:
- packed_x = nn.utils.rnn.pack_padded_sequence(x, src_lengths.data.tolist())
- except BaseException:
- raise Exception(f"Packing failed in dataset {dataset_name}")
-
- # apply LSTM
- if self.bidirectional:
- state_size = 2 * self.num_layers, bsz, self.hidden_size
- else:
- state_size = self.num_layers, bsz, self.hidden_size
- h0 = x.data.new(*state_size).zero_()
- c0 = x.data.new(*state_size).zero_()
- packed_outs, (final_hiddens, final_cells) = self.lstm(packed_x, (h0, c0))
-
- # unpack outputs and apply dropout
- x, _ = nn.utils.rnn.pad_packed_sequence(
- packed_outs, padding_value=self.padding_value
- )
- x = F.dropout(x, p=self.dropout_out, training=self.training)
- assert list(x.size()) == [seqlen, bsz, self.output_units]
-
- if self.bidirectional:
-
- def combine_bidir(outs):
- return torch.cat(
- [
- torch.cat([outs[2 * i], outs[2 * i + 1]], dim=0).view(
- 1, bsz, self.output_units
- )
- for i in range(self.num_layers)
- ],
- dim=0,
- )
-
- final_hiddens = combine_bidir(final_hiddens)
- final_cells = combine_bidir(final_cells)
-
- encoder_padding_mask = src_tokens.eq(self.padding_idx).t()
-
- # Set padded outputs to -inf so they are not selected by max-pooling
- padding_mask = src_tokens.eq(self.padding_idx).t().unsqueeze(-1)
- if padding_mask.any():
- x = x.float().masked_fill_(padding_mask, float("-inf")).type_as(x)
-
- # Build the sentence embedding by max-pooling over the encoder outputs
- sentemb = x.max(dim=0)[0]
-
- return {
- "sentemb": sentemb,
- "encoder_out": (x, final_hiddens, final_cells),
- "encoder_padding_mask": encoder_padding_mask
- if encoder_padding_mask.any()
- else None,
- }
-
- def reorder_encoder_out(self, encoder_out_dict, new_order):
- encoder_out_dict["sentemb"] = encoder_out_dict["sentemb"].index_select(
- 0, new_order
- )
- encoder_out_dict["encoder_out"] = tuple(
- eo.index_select(1, new_order) for eo in encoder_out_dict["encoder_out"]
- )
- if encoder_out_dict["encoder_padding_mask"] is not None:
- encoder_out_dict["encoder_padding_mask"] = encoder_out_dict[
- "encoder_padding_mask"
- ].index_select(1, new_order)
- return encoder_out_dict
-
- def max_positions(self):
- """Maximum input length supported by the encoder."""
- return int(1e5) # an arbitrary large number
-
-
-class LSTMDecoder(FairseqIncrementalDecoder):
- """LSTM decoder."""
-
- def __init__(
- self,
- dictionary,
- embed_dim=512,
- hidden_size=512,
- out_embed_dim=512,
- num_layers=1,
- dropout_in=0.1,
- dropout_out=0.1,
- zero_init=False,
- encoder_embed_dim=512,
- encoder_output_units=512,
- pretrained_embed=None,
- num_langs=1,
- lang_embed_dim=0,
- ):
- super().__init__(dictionary)
- self.dropout_in = dropout_in
- self.dropout_out = dropout_out
- self.hidden_size = hidden_size
-
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
- if pretrained_embed is None:
- self.embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx)
- else:
- self.embed_tokens = pretrained_embed
-
- self.layers = nn.ModuleList(
- [
- LSTMCell(
- input_size=encoder_output_units + embed_dim + lang_embed_dim
- if layer == 0
- else hidden_size,
- hidden_size=hidden_size,
- )
- for layer in range(num_layers)
- ]
- )
- if hidden_size != out_embed_dim:
- self.additional_fc = Linear(hidden_size, out_embed_dim)
- self.fc_out = Linear(out_embed_dim, num_embeddings, dropout=dropout_out)
-
- if zero_init:
- self.sentemb2init = None
- else:
- self.sentemb2init = Linear(
- encoder_output_units, 2 * num_layers * hidden_size
- )
-
- if lang_embed_dim == 0:
- self.embed_lang = None
- else:
- self.embed_lang = nn.Embedding(num_langs, lang_embed_dim)
- nn.init.uniform_(self.embed_lang.weight, -0.1, 0.1)
-
- def forward(
- self, prev_output_tokens, encoder_out_dict, incremental_state=None, lang_id=0
- ):
- sentemb = encoder_out_dict["sentemb"]
- encoder_out = encoder_out_dict["encoder_out"]
-
- if incremental_state is not None:
- prev_output_tokens = prev_output_tokens[:, -1:]
- bsz, seqlen = prev_output_tokens.size()
-
- # get outputs from encoder
- encoder_outs, _, _ = encoder_out[:3]
- srclen = encoder_outs.size(0)
-
- # embed tokens
- x = self.embed_tokens(prev_output_tokens)
- x = F.dropout(x, p=self.dropout_in, training=self.training)
-
- # embed language identifier
- if self.embed_lang is not None:
- lang_ids = prev_output_tokens.data.new_full((bsz,), lang_id)
- langemb = self.embed_lang(lang_ids)
- # TODO Should we dropout here???
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- # initialize previous states (or get from cache during incremental generation)
- cached_state = utils.get_incremental_state(
- self, incremental_state, "cached_state"
- )
- if cached_state is not None:
- prev_hiddens, prev_cells, input_feed = cached_state
- else:
- num_layers = len(self.layers)
- if self.sentemb2init is None:
- prev_hiddens = [
- x.data.new(bsz, self.hidden_size).zero_() for i in range(num_layers)
- ]
- prev_cells = [
- x.data.new(bsz, self.hidden_size).zero_() for i in range(num_layers)
- ]
- else:
- init = self.sentemb2init(sentemb)
- prev_hiddens = [
- init[:, (2 * i) * self.hidden_size : (2 * i + 1) * self.hidden_size]
- for i in range(num_layers)
- ]
- prev_cells = [
- init[
- :,
- (2 * i + 1) * self.hidden_size : (2 * i + 2) * self.hidden_size,
- ]
- for i in range(num_layers)
- ]
- input_feed = x.data.new(bsz, self.hidden_size).zero_()
-
- attn_scores = x.data.new(srclen, seqlen, bsz).zero_()
- outs = []
- for j in range(seqlen):
- if self.embed_lang is None:
- input = torch.cat((x[j, :, :], sentemb), dim=1)
- else:
- input = torch.cat((x[j, :, :], sentemb, langemb), dim=1)
-
- for i, rnn in enumerate(self.layers):
- # recurrent cell
- hidden, cell = rnn(input, (prev_hiddens[i], prev_cells[i]))
-
- # hidden state becomes the input to the next layer
- input = F.dropout(hidden, p=self.dropout_out, training=self.training)
-
- # save state for next time step
- prev_hiddens[i] = hidden
- prev_cells[i] = cell
-
- out = hidden
- out = F.dropout(out, p=self.dropout_out, training=self.training)
-
- # input feeding
- input_feed = out
-
- # save final output
- outs.append(out)
-
- # cache previous states (no-op except during incremental generation)
- utils.set_incremental_state(
- self,
- incremental_state,
- "cached_state",
- (prev_hiddens, prev_cells, input_feed),
- )
-
- # collect outputs across time steps
- x = torch.cat(outs, dim=0).view(seqlen, bsz, self.hidden_size)
-
- # T x B x C -> B x T x C
- x = x.transpose(1, 0)
-
- # srclen x tgtlen x bsz -> bsz x tgtlen x srclen
- attn_scores = attn_scores.transpose(0, 2)
-
- # project back to size of vocabulary
- if hasattr(self, "additional_fc"):
- x = self.additional_fc(x)
- x = F.dropout(x, p=self.dropout_out, training=self.training)
- x = self.fc_out(x)
-
- return x, attn_scores
-
- def reorder_incremental_state(self, incremental_state, new_order):
- super().reorder_incremental_state(incremental_state, new_order)
- cached_state = utils.get_incremental_state(
- self, incremental_state, "cached_state"
- )
- if cached_state is None:
- return
-
- def reorder_state(state):
- if isinstance(state, list):
- return [reorder_state(state_i) for state_i in state]
- return state.index_select(0, new_order)
-
- new_state = tuple(map(reorder_state, cached_state))
- utils.set_incremental_state(self, incremental_state, "cached_state", new_state)
-
- def max_positions(self):
- """Maximum output length supported by the decoder."""
- return int(1e5) # an arbitrary large number
-
-
-def Embedding(num_embeddings, embedding_dim, padding_idx):
- m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
- nn.init.uniform_(m.weight, -0.1, 0.1)
- nn.init.constant_(m.weight[padding_idx], 0)
- return m
-
-
-def LSTM(input_size, hidden_size, **kwargs):
- m = nn.LSTM(input_size, hidden_size, **kwargs)
- for name, param in m.named_parameters():
- if "weight" in name or "bias" in name:
- param.data.uniform_(-0.1, 0.1)
- return m
-
-
-def LSTMCell(input_size, hidden_size, **kwargs):
- m = nn.LSTMCell(input_size, hidden_size, **kwargs)
- for name, param in m.named_parameters():
- if "weight" in name or "bias" in name:
- param.data.uniform_(-0.1, 0.1)
- return m
-
-
-def Linear(in_features, out_features, bias=True, dropout=0):
-    """Linear layer with uniform weight initialization (input: N x T x C)"""
- m = nn.Linear(in_features, out_features, bias=bias)
- m.weight.data.uniform_(-0.1, 0.1)
- if bias:
- m.bias.data.uniform_(-0.1, 0.1)
- return m
-
-
-@register_model_architecture("laser_lstm", "laser_lstm")
-def base_architecture(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_embed_path = getattr(args, "encoder_embed_path", None)
- args.encoder_hidden_size = getattr(
- args, "encoder_hidden_size", args.encoder_embed_dim
- )
- args.encoder_layers = getattr(args, "encoder_layers", 1)
- args.encoder_bidirectional = getattr(args, "encoder_bidirectional", False)
- args.encoder_dropout_in = getattr(args, "encoder_dropout_in", args.dropout)
- args.encoder_dropout_out = getattr(args, "encoder_dropout_out", args.dropout)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
- args.decoder_embed_path = getattr(args, "decoder_embed_path", None)
- args.decoder_hidden_size = getattr(
- args, "decoder_hidden_size", args.decoder_embed_dim
- )
- args.decoder_layers = getattr(args, "decoder_layers", 1)
- args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 512)
- args.decoder_dropout_in = getattr(args, "decoder_dropout_in", args.dropout)
- args.decoder_dropout_out = getattr(args, "decoder_dropout_out", args.dropout)
- args.decoder_zero_init = getattr(args, "decoder_zero_init", "0")
- args.decoder_lang_embed_dim = getattr(args, "decoder_lang_embed_dim", 0)
- args.fixed_embeddings = getattr(args, "fixed_embeddings", False)
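A small sketch of the sentence-embedding step in `LSTMEncoder.forward` above: padded positions are filled with -inf so that max-pooling over the time dimension ignores them.

    import torch

    seqlen, bsz, units = 4, 2, 3
    x = torch.randn(seqlen, bsz, units)                 # encoder outputs (T x B x C)
    padding_mask = torch.tensor([[False, False],
                                 [False, False],
                                 [False, True],
                                 [True,  True]]).unsqueeze(-1)   # T x B x 1

    x = x.masked_fill(padding_mask, float("-inf"))      # padded steps can never win the max
    sentemb = x.max(dim=0)[0]                           # B x C sentence embeddings
    print(sentemb.shape)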
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/model_parallel/modules/transformer_layer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/model_parallel/modules/transformer_layer.py
deleted file mode 100644
index 7ab53c6e5f12f15562717effb86ab8cb8d6b4fa3..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/model_parallel/modules/transformer_layer.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.model_parallel.modules import ModelParallelMultiheadAttention
-from fairseq.modules import TransformerDecoderLayer, TransformerEncoderLayer
-
-
-try:
- from fairseq.model_parallel.megatron.mpu import (
- ColumnParallelLinear,
- RowParallelLinear,
- )
-
- has_megatron_submodule = True
-except (ImportError, ModuleNotFoundError):
- has_megatron_submodule = False
-
-
-class ModelParallelTransformerEncoderLayer(TransformerEncoderLayer):
- """Encoder layer block over multiple gpus.
-
- See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details.
- """
-
- def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
- if q_noise > 0:
- raise NotImplementedError
- return ColumnParallelLinear(input_dim, output_dim, gather_output=False)
-
- def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
- if q_noise > 0:
- raise NotImplementedError
- return RowParallelLinear(input_dim, output_dim, input_is_parallel=True)
-
- def build_self_attention(self, embed_dim, args, **unused_kwargs):
- return ModelParallelMultiheadAttention(
- embed_dim,
- args.encoder_attention_heads,
- dropout=args.attention_dropout,
- self_attention=True,
- )
-
-
-class ModelParallelTransformerDecoderLayer(TransformerDecoderLayer):
- """Decoder layer block.
-
- See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details.
- """
-
- def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
- if q_noise > 0:
- raise NotImplementedError
- return ColumnParallelLinear(input_dim, output_dim, gather_output=False)
-
- def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
- if q_noise > 0:
- raise NotImplementedError
- return RowParallelLinear(input_dim, output_dim, input_is_parallel=True)
-
- def build_self_attention(self, embed_dim, args, **unused_kwargs):
- return ModelParallelMultiheadAttention(
- embed_dim=embed_dim,
- num_heads=args.decoder_attention_heads,
- dropout=args.attention_dropout,
- self_attention=not getattr(args, "cross_self_attention", False),
- )
-
- def build_encoder_attention(self, embed_dim, args, **unused_kwargs):
- return ModelParallelMultiheadAttention(
- embed_dim=embed_dim,
- num_heads=args.decoder_attention_heads,
- kdim=getattr(args, "encoder_embed_dim", None),
- vdim=getattr(args, "encoder_embed_dim", None),
- dropout=args.attention_dropout,
- encoder_decoder_attention=True,
- )
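These layers shard the two feed-forward projections across workers: `ColumnParallelLinear` splits the first projection along its output dimension, and `RowParallelLinear` splits the second along its input dimension and sums the partial results. A single-process sketch of why that split reproduces the unsharded computation (the real layers live in the Megatron `mpu` module and shard across GPUs):

    import torch

    torch.manual_seed(0)
    x = torch.randn(2, 8)                  # (batch, input_dim)
    w = torch.randn(16, 8)                 # full fc1 weight, maps 8 -> 16
    v = torch.randn(8, 16)                 # full fc2 weight, maps 16 -> 8

    # "Column parallel": each worker keeps a slice of fc1's output dimension.
    w_a, w_b = w[:8], w[8:]
    h = torch.cat([x @ w_a.t(), x @ w_b.t()], dim=-1)   # equals x @ w.t()

    # "Row parallel": each worker keeps a slice of fc2's input dimension; partial sums are added.
    y = h[:, :8] @ v[:, :8].t() + h[:, 8:] @ v[:, 8:].t()

    print(torch.allclose(y, x @ w.t() @ v.t(), atol=1e-4))   # True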
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/exceptions.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/exceptions.py
deleted file mode 100644
index d6cf6a8f2fed4941c5c4a17902292062a929886e..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/exceptions.py
+++ /dev/null
@@ -1,23 +0,0 @@
-class DuplicateBlockError(ValueError):
- """Raised when a Blocks contains more than one Block with the same id"""
-
- pass
-
-
-class TooManyRequestsError(Exception):
- """Raised when the Hugging Face API returns a 429 status code."""
-
- pass
-
-
-class InvalidApiName(ValueError):
- pass
-
-
-class Error(Exception):
- def __init__(self, message: str):
- self.message = message
- super().__init__(self.message)
-
- def __str__(self):
- return repr(self.message)
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/model_utils.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/model_utils.py
deleted file mode 100644
index 732d66b1d5f695151c26d29eb7f6b53179c269f1..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/models/model_utils.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import List, Optional
-
-import torch
-from torch import Tensor
-
-
-@torch.jit.script
-def script_skip_tensor_list(x: List[Tensor], mask):
- res = [xi[mask] if xi.size(0) == mask.size(0) else xi[:, mask] for xi in x]
- outputs = []
- for i, t in enumerate(res):
- if t.numel() != 0:
- outputs.append(t)
- else:
- outputs.append(x[i])
- return outputs
-
-
-@torch.jit.script
-def script_skip_tensor(x: Tensor, mask):
- # None case
- if x.size(0) == 0:
- return x
- res = x[mask] if x.size(0) == mask.size(0) else x[:, mask]
- if res.numel() == 0:
- return x
- else:
- return res
-
-
-@torch.jit.script
-def expand_2d_or_3d_tensor(x, trg_dim: int, padding_idx: int):
- """
- Expand 2D/3D tensor on dim=1
- """
- if x is None:
- return None
-
- assert x.dim() == 2 or x.dim() == 3
- assert trg_dim >= x.size(1), (trg_dim, x.size())
- if trg_dim == x.size(1):
- return x
-
- dims = [x.size(0), trg_dim - x.size(1)]
- if x.dim() == 3:
- dims.append(x.size(2))
- x = torch.cat([x, torch.zeros(dims).to(x).fill_(padding_idx)], 1)
-
- return x
-
-
-@torch.jit.script
-def coalesce(x: Optional[Tensor], y: Tensor) -> Tensor:
- return x if x is not None else y
-
-
-@torch.jit.script
-def fill_tensors(
- x: Optional[Tensor], mask, y: Optional[Tensor], padding_idx: int
-) -> Optional[Tensor]:
- """
- Filling tensor x with y at masked positions (dim=0).
- """
- if x is None or x.size()[0] == 0 or y is None:
- return x
- assert x.dim() == y.dim() and mask.size(0) == x.size(0)
- assert x.dim() == 2 or (x.dim() == 3 and x.size(2) == y.size(2))
-
- n_selected = mask.sum()
- if n_selected == 0:
- return x
- assert n_selected == y.size(0)
- if n_selected == x.size(0):
- return y
-
- if x.size(1) < y.size(1):
- x = expand_2d_or_3d_tensor(x, y.size(1), padding_idx)
- x[mask] = y
- elif x.size(1) > y.size(1):
- x[mask] = torch.tensor(padding_idx).type_as(x)
- if x.dim() == 2:
- x[mask, : y.size(1)] = y
- else:
- x[mask, : y.size(1), :] = y
- else:
- x[mask] = y
- return x
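A small worked example for `fill_tensors` above: `x` is first widened to `y`'s width with the padding index, then the masked row is overwritten (assumes the module above is importable as `fairseq.models.model_utils`).

    import torch
    from fairseq.models.model_utils import fill_tensors

    x = torch.tensor([[5, 6],
                      [7, 8],
                      [0, 0]])
    y = torch.tensor([[9, 9, 9]])
    mask = torch.tensor([False, False, True])

    out = fill_tensors(x, mask, y, 1)    # padding_idx = 1
    print(out)
    # tensor([[5, 6, 1],
    #         [7, 8, 1],
    #         [9, 9, 9]])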
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/options.py b/spaces/ICML2022/OFA/fairseq/fairseq/options.py
deleted file mode 100644
index 797b2842db4a68849110a25bb52a47c658966186..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/options.py
+++ /dev/null
@@ -1,406 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-from pathlib import Path
-from typing import Callable, List, Optional, Union
-
-import torch
-from fairseq import utils
-from fairseq.data.indexed_dataset import get_available_dataset_impl
-from fairseq.dataclass.configs import (
- CheckpointConfig,
- CommonConfig,
- CommonEvalConfig,
- DatasetConfig,
- DistributedTrainingConfig,
- EvalLMConfig,
- GenerationConfig,
- InteractiveConfig,
- OptimizationConfig,
- EMAConfig,
-)
-from fairseq.dataclass.utils import gen_parser_from_dataclass
-
-# this import is for backward compatibility
-from fairseq.utils import csv_str_list, eval_bool, eval_str_dict, eval_str_list # noqa
-
-
-def get_preprocessing_parser(default_task="translation"):
- parser = get_parser("Preprocessing", default_task)
- add_preprocess_args(parser)
- return parser
-
-
-def get_training_parser(default_task="translation"):
- parser = get_parser("Trainer", default_task)
- add_dataset_args(parser, train=True)
- add_distributed_training_args(parser)
- add_model_args(parser)
- add_optimization_args(parser)
- add_checkpoint_args(parser)
- add_ema_args(parser)
- return parser
-
-
-def get_generation_parser(interactive=False, default_task="translation"):
- parser = get_parser("Generation", default_task)
- add_dataset_args(parser, gen=True)
- add_distributed_training_args(parser)
- add_generation_args(parser)
- add_checkpoint_args(parser)
- if interactive:
- add_interactive_args(parser)
- return parser
-
-
-def get_speech_generation_parser(default_task="text_to_speech"):
- parser = get_parser("Speech Generation", default_task)
- add_dataset_args(parser, gen=True)
- add_distributed_training_args(parser, default_world_size=1)
- add_speech_generation_args(parser)
- return parser
-
-
-def get_interactive_generation_parser(default_task="translation"):
- return get_generation_parser(interactive=True, default_task=default_task)
-
-
-def get_eval_lm_parser(default_task="language_modeling"):
- parser = get_parser("Evaluate Language Model", default_task)
- add_dataset_args(parser, gen=True)
- add_distributed_training_args(parser, default_world_size=1)
- add_eval_lm_args(parser)
- return parser
-
-
-def get_validation_parser(default_task=None):
- parser = get_parser("Validation", default_task)
- add_dataset_args(parser, train=True)
- add_distributed_training_args(parser, default_world_size=1)
- group = parser.add_argument_group("Evaluation")
- gen_parser_from_dataclass(group, CommonEvalConfig())
- return parser
-
-
-def parse_args_and_arch(
- parser: argparse.ArgumentParser,
- input_args: List[str] = None,
- parse_known: bool = False,
- suppress_defaults: bool = False,
- modify_parser: Optional[Callable[[argparse.ArgumentParser], None]] = None,
-):
- """
- Args:
- parser (ArgumentParser): the parser
- input_args (List[str]): strings to parse, defaults to sys.argv
- parse_known (bool): only parse known arguments, similar to
- `ArgumentParser.parse_known_args`
- suppress_defaults (bool): parse while ignoring all default values
- modify_parser (Optional[Callable[[ArgumentParser], None]]):
- function to modify the parser, e.g., to set default values
- """
- if suppress_defaults:
- # Parse args without any default values. This requires us to parse
- # twice, once to identify all the necessary task/model args, and a second
- # time with all defaults set to None.
- args = parse_args_and_arch(
- parser,
- input_args=input_args,
- parse_known=parse_known,
- suppress_defaults=False,
- )
- suppressed_parser = argparse.ArgumentParser(add_help=False, parents=[parser])
- suppressed_parser.set_defaults(**{k: None for k, v in vars(args).items()})
- args = suppressed_parser.parse_args(input_args)
- return argparse.Namespace(
- **{k: v for k, v in vars(args).items() if v is not None}
- )
-
- from fairseq.models import ARCH_MODEL_REGISTRY, ARCH_CONFIG_REGISTRY, MODEL_REGISTRY
-
- # Before creating the true parser, we need to import optional user module
- # in order to eagerly import custom tasks, optimizers, architectures, etc.
- usr_parser = argparse.ArgumentParser(add_help=False, allow_abbrev=False)
- usr_parser.add_argument("--user-dir", default=None)
- usr_args, _ = usr_parser.parse_known_args(input_args)
- utils.import_user_module(usr_args)
-
- if modify_parser is not None:
- modify_parser(parser)
-
- # The parser doesn't know about model/criterion/optimizer-specific args, so
- # we parse twice. First we parse the model/criterion/optimizer, then we
- # parse a second time after adding the *-specific arguments.
- # If input_args is given, we will parse those args instead of sys.argv.
- args, _ = parser.parse_known_args(input_args)
-
- # Add model-specific args to parser.
- if hasattr(args, "arch"):
- model_specific_group = parser.add_argument_group(
- "Model-specific configuration",
- # Only include attributes which are explicitly given as command-line
- # arguments or which have default values.
- argument_default=argparse.SUPPRESS,
- )
- if args.arch in ARCH_MODEL_REGISTRY:
- ARCH_MODEL_REGISTRY[args.arch].add_args(model_specific_group)
- elif args.arch in MODEL_REGISTRY:
- MODEL_REGISTRY[args.arch].add_args(model_specific_group)
- else:
- raise RuntimeError()
-
- if hasattr(args, "task"):
- from fairseq.tasks import TASK_REGISTRY
-
- TASK_REGISTRY[args.task].add_args(parser)
- if getattr(args, "use_bmuf", False):
- # hack to support extra args for block distributed data parallelism
- from fairseq.optim.bmuf import FairseqBMUF
-
- FairseqBMUF.add_args(parser)
-
- # Add *-specific args to parser.
- from fairseq.registry import REGISTRIES
-
- for registry_name, REGISTRY in REGISTRIES.items():
- choice = getattr(args, registry_name, None)
- if choice is not None:
- cls = REGISTRY["registry"][choice]
- if hasattr(cls, "add_args"):
- cls.add_args(parser)
- elif hasattr(cls, "__dataclass"):
- gen_parser_from_dataclass(parser, cls.__dataclass())
-
- # Modify the parser a second time, since defaults may have been reset
- if modify_parser is not None:
- modify_parser(parser)
-
- # Parse a second time.
- if parse_known:
- args, extra = parser.parse_known_args(input_args)
- else:
- args = parser.parse_args(input_args)
- extra = None
- # Post-process args.
- if (
- hasattr(args, "batch_size_valid") and args.batch_size_valid is None
- ) or not hasattr(args, "batch_size_valid"):
- args.batch_size_valid = args.batch_size
- if hasattr(args, "max_tokens_valid") and args.max_tokens_valid is None:
- args.max_tokens_valid = args.max_tokens
- if getattr(args, "memory_efficient_fp16", False):
- args.fp16 = True
- if getattr(args, "memory_efficient_bf16", False):
- args.bf16 = True
- args.tpu = getattr(args, "tpu", False)
- args.bf16 = getattr(args, "bf16", False)
- if args.bf16:
- args.tpu = True
- if args.tpu and args.fp16:
- raise ValueError("Cannot combine --fp16 and --tpu, use --bf16 on TPUs")
-
- if getattr(args, "seed", None) is None:
- args.seed = 1 # default seed for training
- args.no_seed_provided = True
- else:
- args.no_seed_provided = False
-
- # Apply architecture configuration.
- if hasattr(args, "arch") and args.arch in ARCH_CONFIG_REGISTRY:
- ARCH_CONFIG_REGISTRY[args.arch](args)
-
- if parse_known:
- return args, extra
- else:
- return args
-
-
-def get_parser(desc, default_task="translation"):
- # Before creating the true parser, we need to import optional user module
- # in order to eagerly import custom tasks, optimizers, architectures, etc.
- usr_parser = argparse.ArgumentParser(add_help=False, allow_abbrev=False)
- usr_parser.add_argument("--user-dir", default=None)
- usr_args, _ = usr_parser.parse_known_args()
- utils.import_user_module(usr_args)
-
- parser = argparse.ArgumentParser(allow_abbrev=False)
- gen_parser_from_dataclass(parser, CommonConfig())
-
- from fairseq.registry import REGISTRIES
-
- for registry_name, REGISTRY in REGISTRIES.items():
- parser.add_argument(
- "--" + registry_name.replace("_", "-"),
- default=REGISTRY["default"],
- choices=REGISTRY["registry"].keys(),
- )
-
- # Task definitions can be found under fairseq/tasks/
- from fairseq.tasks import TASK_REGISTRY
-
- parser.add_argument(
- "--task",
- metavar="TASK",
- default=default_task,
- choices=TASK_REGISTRY.keys(),
- help="task",
- )
- # fmt: on
- return parser
-
-
-def add_preprocess_args(parser):
- group = parser.add_argument_group("Preprocessing")
- # fmt: off
- group.add_argument("-s", "--source-lang", default=None, metavar="SRC",
- help="source language")
- group.add_argument("-t", "--target-lang", default=None, metavar="TARGET",
- help="target language")
- group.add_argument("--trainpref", metavar="FP", default=None,
- help="train file prefix (also used to build dictionaries)")
- group.add_argument("--validpref", metavar="FP", default=None,
- help="comma separated, valid file prefixes "
-                            "(words missing from train set are replaced with <unk>)")
- group.add_argument("--testpref", metavar="FP", default=None,
- help="comma separated, test file prefixes "
-                            "(words missing from train set are replaced with <unk>)")
- group.add_argument("--align-suffix", metavar="FP", default=None,
- help="alignment file suffix")
- group.add_argument("--destdir", metavar="DIR", default="data-bin",
- help="destination dir")
- group.add_argument("--thresholdtgt", metavar="N", default=0, type=int,
- help="map words appearing less than threshold times to unknown")
- group.add_argument("--thresholdsrc", metavar="N", default=0, type=int,
- help="map words appearing less than threshold times to unknown")
- group.add_argument("--tgtdict", metavar="FP",
- help="reuse given target dictionary")
- group.add_argument("--srcdict", metavar="FP",
- help="reuse given source dictionary")
- group.add_argument("--nwordstgt", metavar="N", default=-1, type=int,
- help="number of target words to retain")
- group.add_argument("--nwordssrc", metavar="N", default=-1, type=int,
- help="number of source words to retain")
- group.add_argument("--alignfile", metavar="ALIGN", default=None,
- help="an alignment file (optional)")
- parser.add_argument('--dataset-impl', metavar='FORMAT', default='mmap',
- choices=get_available_dataset_impl(),
- help='output dataset implementation')
- group.add_argument("--joined-dictionary", action="store_true",
- help="Generate joined dictionary")
- group.add_argument("--only-source", action="store_true",
- help="Only process the source language")
- group.add_argument("--padding-factor", metavar="N", default=8, type=int,
- help="Pad dictionary size to be multiple of N")
- group.add_argument("--workers", metavar="N", default=1, type=int,
- help="number of parallel workers")
- group.add_argument("--dict-only", action='store_true',
- help="if true, only builds a dictionary and then exits")
- # fmt: on
- return parser
-
-
-def add_dataset_args(parser, train=False, gen=False):
- group = parser.add_argument_group("dataset_data_loading")
- gen_parser_from_dataclass(group, DatasetConfig())
- # fmt: on
- return group
-
-
-def add_distributed_training_args(parser, default_world_size=None):
- group = parser.add_argument_group("distributed_training")
- if default_world_size is None:
- default_world_size = max(1, torch.cuda.device_count())
- gen_parser_from_dataclass(
- group, DistributedTrainingConfig(distributed_world_size=default_world_size)
- )
- return group
-
-
-def add_optimization_args(parser):
- group = parser.add_argument_group("optimization")
- # fmt: off
- gen_parser_from_dataclass(group, OptimizationConfig())
- # fmt: on
- return group
-
-
-def add_checkpoint_args(parser):
- group = parser.add_argument_group("checkpoint")
- # fmt: off
- gen_parser_from_dataclass(group, CheckpointConfig())
- # fmt: on
- return group
-
-
-def add_common_eval_args(group):
- gen_parser_from_dataclass(group, CommonEvalConfig())
-
-
-def add_eval_lm_args(parser):
- group = parser.add_argument_group("LM Evaluation")
- add_common_eval_args(group)
- gen_parser_from_dataclass(group, EvalLMConfig())
-
-
-def add_generation_args(parser):
- group = parser.add_argument_group("Generation")
- add_common_eval_args(group)
- gen_parser_from_dataclass(group, GenerationConfig())
- return group
-
-
-def add_speech_generation_args(parser):
- group = parser.add_argument_group("Speech Generation")
- add_common_eval_args(group) # NOTE: remove_bpe is not needed
- # fmt: off
- group.add_argument('--eos_prob_threshold', default=0.5, type=float,
- help='terminate when eos probability exceeds this')
- # fmt: on
- return group
-
-
-def add_interactive_args(parser):
- group = parser.add_argument_group("Interactive")
- gen_parser_from_dataclass(group, InteractiveConfig())
-
-
-def add_model_args(parser):
- group = parser.add_argument_group("Model configuration")
- # fmt: off
-
- # Model definitions can be found under fairseq/models/
- #
- # The model architecture can be specified in several ways.
- # In increasing order of priority:
- # 1) model defaults (lowest priority)
- # 2) --arch argument
- # 3) --encoder/decoder-* arguments (highest priority)
- from fairseq.models import ARCH_MODEL_REGISTRY
- group.add_argument('--arch', '-a', metavar='ARCH',
- choices=ARCH_MODEL_REGISTRY.keys(),
- help='model architecture')
- # fmt: on
- return group
-
-
-def get_args(
- data: Union[str, Path],
- task: str = "translation",
- arch: str = "transformer",
- **overrides
-):
- parser = get_training_parser(task)
- args = parse_args_and_arch(parser, [str(data), "--task", task, "--arch", arch])
-
- for k, v in overrides.items():
- setattr(args, k, v)
-
- return args
-
-
-def add_ema_args(parser):
- group = parser.add_argument_group("EMA configuration")
- gen_parser_from_dataclass(group, EMAConfig())
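Usage sketch for the `get_args` helper defined above (fairseq installed; the data path is only a placeholder and is not validated at parse time):

    from fairseq import options

    args = options.get_args(
        "data-bin/example",      # placeholder data directory
        task="translation",
        arch="transformer",
        dropout=0.3,             # extra keywords are set as attribute overrides
    )
    print(args.arch, args.dropout)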
diff --git a/spaces/Ibtehaj10/cheating-detection/generate_keys.py b/spaces/Ibtehaj10/cheating-detection/generate_keys.py
deleted file mode 100644
index 997135fada594e5b17f64a219bbd8b44f1463cfa..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection/generate_keys.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import pickle
-from pathlib import Path
-import streamlit_authenticator as stauth
-# print("Done !!!")
-
-names = ["dmin", "ser"]
-
-username =["admin", "user"]
-
-password =["admin123", "user123"]
-
-hashed_passwords =stauth.Hasher(password).generate()
-
-file_path = Path(__file__).parent / "hashed_pw.pkl"
-
-with file_path.open("wb") as file:
- pickle.dump(hashed_passwords, file)
diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/modules/embedding_manager.py b/spaces/Iceclear/StableSR/StableSR/ldm/modules/embedding_manager.py
deleted file mode 100644
index 5c5f97bd9f151bc1c508f56bd7ccfb4509aaea82..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/ldm/modules/embedding_manager.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import torch
-from torch import nn
-
-from ldm.data.personalized import per_img_token_list
-from transformers import CLIPTokenizer
-from functools import partial
-
-DEFAULT_PLACEHOLDER_TOKEN = ["*"]
-
-PROGRESSIVE_SCALE = 2000
-
-def get_clip_token_for_string(tokenizer, string):
- batch_encoding = tokenizer(string, truncation=True, max_length=77, return_length=True,
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
- tokens = batch_encoding["input_ids"]
- assert torch.count_nonzero(tokens - 49407) == 2, f"String '{string}' maps to more than a single token. Please use another string"
-
- return tokens[0, 1]
-
-def get_bert_token_for_string(tokenizer, string):
- token = tokenizer(string)
- assert torch.count_nonzero(token) == 3, f"String '{string}' maps to more than a single token. Please use another string"
-
- token = token[0, 1]
-
- return token
-
-def get_embedding_for_clip_token(embedder, token):
- return embedder(token.unsqueeze(0))[0, 0]
-
-
-class EmbeddingManager(nn.Module):
- def __init__(
- self,
- embedder,
- placeholder_strings=None,
- initializer_words=None,
- per_image_tokens=False,
- num_vectors_per_token=1,
- progressive_words=False,
- **kwargs
- ):
- super().__init__()
-
- self.string_to_token_dict = {}
-
- self.string_to_param_dict = nn.ParameterDict()
-
- self.initial_embeddings = nn.ParameterDict() # These should not be optimized
-
- self.progressive_words = progressive_words
- self.progressive_counter = 0
-
- self.max_vectors_per_token = num_vectors_per_token
-
- if hasattr(embedder, 'tokenizer'): # using Stable Diffusion's CLIP encoder
- self.is_clip = True
- get_token_for_string = partial(get_clip_token_for_string, embedder.tokenizer)
- get_embedding_for_tkn = partial(get_embedding_for_clip_token, embedder.transformer.text_model.embeddings)
- token_dim = 768
- else: # using LDM's BERT encoder
- self.is_clip = False
- get_token_for_string = partial(get_bert_token_for_string, embedder.tknz_fn)
- get_embedding_for_tkn = embedder.transformer.token_emb
- token_dim = 1280
-
- if per_image_tokens:
- placeholder_strings.extend(per_img_token_list)
-
- for idx, placeholder_string in enumerate(placeholder_strings):
-
- token = get_token_for_string(placeholder_string)
-
- if initializer_words and idx < len(initializer_words):
- init_word_token = get_token_for_string(initializer_words[idx])
-
- with torch.no_grad():
- init_word_embedding = get_embedding_for_tkn(init_word_token.cpu())
-
- token_params = torch.nn.Parameter(init_word_embedding.unsqueeze(0).repeat(num_vectors_per_token, 1), requires_grad=True)
- self.initial_embeddings[placeholder_string] = torch.nn.Parameter(init_word_embedding.unsqueeze(0).repeat(num_vectors_per_token, 1), requires_grad=False)
- else:
- token_params = torch.nn.Parameter(torch.rand(size=(num_vectors_per_token, token_dim), requires_grad=True))
-
- self.string_to_token_dict[placeholder_string] = token
- self.string_to_param_dict[placeholder_string] = token_params
-
- def forward(
- self,
- tokenized_text,
- embedded_text,
- ):
- b, n, device = *tokenized_text.shape, tokenized_text.device
-
- for placeholder_string, placeholder_token in self.string_to_token_dict.items():
-
- placeholder_embedding = self.string_to_param_dict[placeholder_string].to(device)
-
- if self.max_vectors_per_token == 1: # If there's only one vector per token, we can do a simple replacement
- placeholder_idx = torch.where(tokenized_text == placeholder_token.to(device))
- embedded_text[placeholder_idx] = placeholder_embedding
- else: # otherwise, need to insert and keep track of changing indices
- if self.progressive_words:
- self.progressive_counter += 1
- max_step_tokens = 1 + self.progressive_counter // PROGRESSIVE_SCALE
- else:
- max_step_tokens = self.max_vectors_per_token
-
- num_vectors_for_token = min(placeholder_embedding.shape[0], max_step_tokens)
-
- placeholder_rows, placeholder_cols = torch.where(tokenized_text == placeholder_token.to(device))
-
- if placeholder_rows.nelement() == 0:
- continue
-
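-                # walk the placeholder positions from right to left (descending columns) so that
-                # expanding one occurrence does not shift the indices of the ones processed next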
- sorted_cols, sort_idx = torch.sort(placeholder_cols, descending=True)
- sorted_rows = placeholder_rows[sort_idx]
-
- for idx in range(len(sorted_rows)):
- row = sorted_rows[idx]
- col = sorted_cols[idx]
-
- new_token_row = torch.cat([tokenized_text[row][:col], placeholder_token.repeat(num_vectors_for_token).to(device), tokenized_text[row][col + 1:]], axis=0)[:n]
- new_embed_row = torch.cat([embedded_text[row][:col], placeholder_embedding[:num_vectors_for_token], embedded_text[row][col + 1:]], axis=0)[:n]
-
- embedded_text[row] = new_embed_row
- tokenized_text[row] = new_token_row
-
- return embedded_text
-
- def save(self, ckpt_path):
- torch.save({"string_to_token": self.string_to_token_dict,
- "string_to_param": self.string_to_param_dict}, ckpt_path)
-
- def load(self, ckpt_path):
- ckpt = torch.load(ckpt_path, map_location='cpu')
-
- self.string_to_token_dict = ckpt["string_to_token"]
- self.string_to_param_dict = ckpt["string_to_param"]
-
- def get_embedding_norms_squared(self):
- all_params = torch.cat(list(self.string_to_param_dict.values()), axis=0) # num_placeholders x embedding_dim
- param_norm_squared = (all_params * all_params).sum(axis=-1) # num_placeholders
-
- return param_norm_squared
-
- def embedding_parameters(self):
- return self.string_to_param_dict.parameters()
-
- def embedding_to_coarse_loss(self):
-
- loss = 0.
- num_embeddings = len(self.initial_embeddings)
-
- for key in self.initial_embeddings:
- optimized = self.string_to_param_dict[key]
- coarse = self.initial_embeddings[key].clone().to(optimized.device)
-
- loss = loss + (optimized - coarse) @ (optimized - coarse).T / num_embeddings
-
- return loss
diff --git a/spaces/Illumotion/Koboldcpp/common/train.h b/spaces/Illumotion/Koboldcpp/common/train.h
deleted file mode 100644
index 42fa704b897ae56051aaaef16a71c77fe2fd6c17..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/common/train.h
+++ /dev/null
@@ -1,230 +0,0 @@
-// Various helper functions and utilities for training
-
-#pragma once
-
-#include <random>
-#include <string>
-#include <vector>
-
-#include "ggml.h"
-#include "llama.h"
-
-typedef std::string mt19937_state;
-
-struct train_state {
- struct ggml_opt_context * opt;
-
- uint64_t train_its;
- uint64_t train_samples;
- uint64_t train_tokens;
- uint64_t train_epochs;
-
- size_t shuffle_samples_hash; // fn, sample_count, *zip(sample_begins, sample_sizes)
- mt19937_state shuffle_rng_state_current;
- mt19937_state shuffle_rng_state_next;
- size_t shuffle_sample_count;
- size_t shuffle_next_sample;
-};
-
-struct train_params_common {
- const char * fn_train_data;
- const char * fn_checkpoint_in;
- const char * fn_checkpoint_out;
- const char * pattern_fn_it;
- const char * fn_latest;
-
- bool print_usage;
-
- int save_every;
-
- uint32_t seed;
-
- int n_ctx;
- int n_threads;
- int n_batch;
- int n_gradient_accumulation;
- int n_epochs;
-
- bool custom_n_ctx;
-
- bool use_flash;
- bool use_checkpointing;
-
- std::string sample_start;
- bool include_sample_start;
- bool escape;
- bool overlapping_samples;
- bool fill_with_next_samples;
- bool separate_with_eos;
- bool separate_with_bos;
- bool sample_random_offsets;
-
- bool force_reshuffle;
-
- int warmup;
- int cos_decay_steps;
- float cos_decay_restart;
- float cos_decay_min;
- bool enable_restart;
-
- int opt_past;
- float opt_delta;
- int opt_max_no_improvement;
-
- int adam_n_iter;
- float adam_alpha;
- float adam_min_alpha;
- float adam_decay;
- int adam_decay_min_ndim;
- float adam_beta1;
- float adam_beta2;
- float adam_gclip;
- float adam_eps_f;
-};
-
-typedef void (*save_train_files_callback)(void * data, struct train_state * train);
-
-struct train_opt_callback_data {
- struct train_params_common * params;
- struct train_state * train;
- save_train_files_callback save_cb;
- void * save_data;
- struct llama_context * lctx;
- int last_save_iter;
- llama_token * tokens_data;
- size_t tokens_size;
- size_t * samples_begin;
- size_t * samples_size;
- size_t * shuffled_samples_offs;
- size_t * shuffled_samples_begin;
- size_t * shuffled_samples_size;
- size_t samples_count;
- struct ggml_tensor * tokens_input;
- struct ggml_tensor * target_probs;
- int first_iter;
- int first_epoch;
- int iter_at_last_epoch;
- int64_t last_time;
- double millis_per_iter;
-};
-
-struct train_state * init_train_state();
-void free_train_state(struct train_state * state);
-
-struct train_params_common get_default_train_params_common();
-void print_common_train_usage(int /*argc*/, char ** argv, const struct train_params_common * params);
-
-bool consume_common_train_arg(int argc, char ** argv, int * idx, struct train_params_common * params, bool * invalid_param);
-void finish_processing_train_args(struct train_params_common * params);
-
-struct random_normal_distribution;
-struct random_uniform_distribution;
-
-struct random_normal_distribution * init_random_normal_distribution (int seed, float mean, float std, float min, float max);
-struct random_uniform_distribution * init_random_uniform_distribution(int seed, float min, float max);
-
-void free_random_normal_distribution (struct random_normal_distribution * rnd);
-void free_random_uniform_distribution(struct random_uniform_distribution * rnd);
-
-struct ggml_tensor * randomize_tensor_normal (struct ggml_tensor * tensor, struct random_normal_distribution * rnd);
-struct ggml_tensor * randomize_tensor_uniform(struct ggml_tensor * tensor, struct random_uniform_distribution * rnd);
-
-// generate random float in interval [0,1)
-float frand();
-float frand_normal (struct random_normal_distribution * rnd);
-float frand_uniform(struct random_uniform_distribution * rnd);
-
-int clamp (const int v, const int min, const int max);
-float fclamp(const float v, const float min, const float max);
-
-void assert_shape_1d(struct ggml_tensor * tensor, int64_t ne0);
-void assert_shape_2d(struct ggml_tensor * tensor, int64_t ne0, int64_t ne1);
-void assert_shape_3d(struct ggml_tensor * tensor, int64_t ne0, int64_t ne1, int64_t ne2);
-void assert_shape_4d(struct ggml_tensor * tensor, int64_t ne0, int64_t ne1, int64_t ne2, int64_t ne3);
-
-size_t tokenize_file(
- struct llama_context * lctx,
- const char * filename,
- const std::string & sample_start,
- bool include_sample_start,
- bool overlapping_samples,
- unsigned context_length,
-        std::vector<llama_token> & out_tokens,
-        std::vector<size_t>      & out_samples_begin,
-        std::vector<size_t>      & out_samples_size);
-
-int64_t get_example_targets_batch(
- struct llama_context * lctx,
- struct ggml_tensor * tokens_input,
- struct ggml_tensor * target_probs,
- int64_t example_id,
- const size_t * samples_offs,
- const size_t * samples_begin,
- const size_t * samples_size,
- size_t samples_count,
- const llama_token * train_data,
- size_t n_train_data,
- bool separate_with_eos,
- bool separate_with_bos,
- bool fill_with_next_samples,
- bool sample_random_offsets);
-
-
-void mt19937_set_state(std::mt19937& rng, const mt19937_state& rng_state);
-mt19937_state mt19937_get_state(const std::mt19937& rng);
-mt19937_state mt19937_seed_to_state(unsigned seed);
-
-mt19937_state shuffle_samples(
- const mt19937_state & rng_state,
- size_t * shuffled_offs,
- size_t * shuffled_begins,
- size_t * shuffled_sizes,
- const size_t * begins,
- const size_t * sizes,
- size_t count);
-
-size_t hash_combine(size_t h1, size_t h2);
-
-size_t compute_samples_hash(
- const char* fn,
- const size_t* samples_begin,
- const size_t* samples_size,
- size_t sample_count);
-
-
-std::string replace_str(const char * s, const char * needle, const char * replacement);
-
-void print_duration(double milliseconds);
-
-float cosine_decay(
- int64_t step,
- int64_t decay_steps,
- float minimum);
-
-float cosine_decay_restart(
- int64_t step,
- int64_t decay_steps,
- float minimum,
- float restart_step_mult);
-
-float learning_schedule(
- int64_t step,
- int64_t warmup_steps,
- int64_t decay_steps,
- float learning_rate,
- float overall_minimum,
- float cos_decay_minimum,
- float cos_decay_restart_step_mult,
- bool enable_restart);
-
-void copy_tensor_by_name(struct ggml_tensor * dst, struct ggml_context * ctx, const char * name);
-
-void load_opt_context_gguf(struct gguf_context * fctx, struct ggml_context * f_ggml_ctx, struct ggml_opt_context * opt);
-void save_opt_context_gguf(struct gguf_context * fctx, struct ggml_opt_context * opt);
-
-bool load_train_state_gguf(struct gguf_context * fctx, struct ggml_context * f_ggml_ctx, struct train_state * train);
-void save_train_state_gguf(struct gguf_context * fctx, struct train_state * train);
-
-std::string get_train_filename(const char * filename, const char * pattern_it, const char * latest, int64_t iteration);
-
-void train_opt_callback(void * vdata, int accum_step, float * sched, bool * cancel);
diff --git a/spaces/Illumotion/Koboldcpp/grammars/README.md b/spaces/Illumotion/Koboldcpp/grammars/README.md
deleted file mode 100644
index 7f3b11ca5b592235e11a06eec92268f7de8da94f..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/grammars/README.md
+++ /dev/null
@@ -1,91 +0,0 @@
-# GBNF Guide
-
-GBNF (GGML BNF) is a format for defining [formal grammars](https://en.wikipedia.org/wiki/Formal_grammar) to constrain model outputs in `llama.cpp`. For example, you can use it to force the model to generate valid JSON, or speak only in emojis. GBNF grammars are supported in various ways in `examples/main` and `examples/server`.
-
-## Background
-
-[Backus-Naur Form (BNF)](https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_form) is a notation for describing the syntax of formal languages like programming languages, file formats, and protocols. GBNF is an extension of BNF that primarily adds a few modern regex-like features.
-
-## Basics
-
-In GBNF, we define *production rules* that specify how a *non-terminal* (rule name) can be replaced with sequences of *terminals* (characters, specifically Unicode [code points](https://en.wikipedia.org/wiki/Code_point)) and other non-terminals. The basic format of a production rule is `nonterminal ::= sequence...`.
-
-## Example
-
-Before going deeper, let's look at some of the features demonstrated in `grammars/chess.gbnf`, a small chess notation grammar:
-```
-# `root` specifies the pattern for the overall output
-root ::= (
- # it must start with the characters "1. " followed by a sequence
- # of characters that match the `move` rule, followed by a space, followed
- # by another move, and then a newline
- "1. " move " " move "\n"
-
- # it's followed by one or more subsequent moves, numbered with one or two digits
- ([1-9] [0-9]? ". " move " " move "\n")+
-)
-
-# `move` is an abstract representation, which can be a pawn, nonpawn, or castle.
-# The `[+#]?` denotes the possibility of checking or mate signs after moves
-move ::= (pawn | nonpawn | castle) [+#]?
-
-pawn ::= ...
-nonpawn ::= ...
-castle ::= ...
-```
-
-## Non-Terminals and Terminals
-
-Non-terminal symbols (rule names) stand for a pattern of terminals and other non-terminals. They are required to be a dashed lowercase word, like `move`, `castle`, or `check-mate`.
-
-Terminals are actual characters ([code points](https://en.wikipedia.org/wiki/Code_point)). They can be specified as a sequence like `"1"` or `"O-O"` or as ranges like `[1-9]` or `[NBKQR]`.
-
-## Characters and character ranges
-
-Terminals support the full range of Unicode. Unicode characters can be specified directly in the grammar, for example `hiragana ::= [ぁ-ゟ]`, or with escapes: 8-bit (`\xXX`), 16-bit (`\uXXXX`) or 32-bit (`\UXXXXXXXX`).
-
-Character ranges can be negated with `^`:
-```
-single-line ::= [^\n]+ "\n"
-```
-
-## Sequences and Alternatives
-
-The order of symbols in a sequence matters. For example, in `"1. " move " " move "\n"`, the `"1. "` must come before the first `move`, etc.
-
-Alternatives, denoted by `|`, give different sequences that are acceptable. For example, in `move ::= pawn | nonpawn | castle`, `move` can be a `pawn` move, a `nonpawn` move, or a `castle`.
-
-Parentheses `()` can be used to group sequences, which allows for embedding alternatives in a larger rule or applying repetition and optional symbols (below) to a sequence.
-
-## Repetition and Optional Symbols
-
-- `*` after a symbol or sequence means that it can be repeated zero or more times.
-- `+` denotes that the symbol or sequence should appear one or more times.
-- `?` makes the preceding symbol or sequence optional; a short combined example follows below.
-
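-A short combined example (the rule name is only illustrative):
-```
-# an optional sign, one or more digits, then zero or more ",digits" groups
-integer-list ::= ("-" | "+")? [0-9]+ ("," [0-9]+)*
-```
-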
-## Comments and newlines
-
-Comments can be specified with `#`:
-```
-# defines optional whitespace
-ws ::= [ \t\n]+
-```
-
-Newlines are allowed between rules and between symbols or sequences nested inside parentheses. Additionally, a newline after an alternate marker `|` will continue the current rule, even outside of parentheses.
-
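-For example, a rule with several alternates can be split across lines by ending each line with `|`:
-```
-move ::= pawn |
-         nonpawn |
-         castle
-```
-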
-## The root rule
-
-In a full grammar, the `root` rule always defines the starting point of the grammar. In other words, it specifies what the entire output must match.
-
-```
-# a grammar for lists
-root ::= ("- " item)+
-item ::= [^\n]+ "\n"
-```
-
-## Next steps
-
-This guide provides a brief overview. Check out the GBNF files in this directory (`grammars/`) for examples of full grammars. You can try them out with:
-```
-./main -m <model> --grammar-file grammars/some-grammar.gbnf -p 'Some prompt'
-```
diff --git a/spaces/Illumotion/Koboldcpp/include/CL/cl_d3d11.h b/spaces/Illumotion/Koboldcpp/include/CL/cl_d3d11.h
deleted file mode 100644
index 9393e5c84cacf2894a9f3cf83968414f34fb69a2..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/include/CL/cl_d3d11.h
+++ /dev/null
@@ -1,156 +0,0 @@
-/*******************************************************************************
- * Copyright (c) 2008-2020 The Khronos Group Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- ******************************************************************************/
-
-#ifndef __OPENCL_CL_D3D11_H
-#define __OPENCL_CL_D3D11_H
-
-#if defined(_MSC_VER)
-#if _MSC_VER >=1500
-#pragma warning( push )
-#pragma warning( disable : 4201 )
-#pragma warning( disable : 5105 )
-#endif
-#endif
-#include <d3d11.h>
-#if defined(_MSC_VER)
-#if _MSC_VER >=1500
-#pragma warning( pop )
-#endif
-#endif
-#include <CL/cl.h>
-#include <CL/cl_platform.h>
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-/******************************************************************************
- * cl_khr_d3d11_sharing */
-#define cl_khr_d3d11_sharing 1
-
-typedef cl_uint cl_d3d11_device_source_khr;
-typedef cl_uint cl_d3d11_device_set_khr;
-
-/******************************************************************************/
-
-/* Error Codes */
-#define CL_INVALID_D3D11_DEVICE_KHR -1006
-#define CL_INVALID_D3D11_RESOURCE_KHR -1007
-#define CL_D3D11_RESOURCE_ALREADY_ACQUIRED_KHR -1008
-#define CL_D3D11_RESOURCE_NOT_ACQUIRED_KHR -1009
-
-/* cl_d3d11_device_source */
-#define CL_D3D11_DEVICE_KHR 0x4019
-#define CL_D3D11_DXGI_ADAPTER_KHR 0x401A
-
-/* cl_d3d11_device_set */
-#define CL_PREFERRED_DEVICES_FOR_D3D11_KHR 0x401B
-#define CL_ALL_DEVICES_FOR_D3D11_KHR 0x401C
-
-/* cl_context_info */
-#define CL_CONTEXT_D3D11_DEVICE_KHR 0x401D
-#define CL_CONTEXT_D3D11_PREFER_SHARED_RESOURCES_KHR 0x402D
-
-/* cl_mem_info */
-#define CL_MEM_D3D11_RESOURCE_KHR 0x401E
-
-/* cl_image_info */
-#define CL_IMAGE_D3D11_SUBRESOURCE_KHR 0x401F
-
-/* cl_command_type */
-#define CL_COMMAND_ACQUIRE_D3D11_OBJECTS_KHR 0x4020
-#define CL_COMMAND_RELEASE_D3D11_OBJECTS_KHR 0x4021
-
-/******************************************************************************/
-
-typedef cl_int (CL_API_CALL *clGetDeviceIDsFromD3D11KHR_fn)(
- cl_platform_id platform,
- cl_d3d11_device_source_khr d3d_device_source,
- void * d3d_object,
- cl_d3d11_device_set_khr d3d_device_set,
- cl_uint num_entries,
- cl_device_id * devices,
- cl_uint * num_devices) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_mem (CL_API_CALL *clCreateFromD3D11BufferKHR_fn)(
- cl_context context,
- cl_mem_flags flags,
- ID3D11Buffer * resource,
- cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_mem (CL_API_CALL *clCreateFromD3D11Texture2DKHR_fn)(
- cl_context context,
- cl_mem_flags flags,
- ID3D11Texture2D * resource,
- UINT subresource,
- cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_mem (CL_API_CALL *clCreateFromD3D11Texture3DKHR_fn)(
- cl_context context,
- cl_mem_flags flags,
- ID3D11Texture3D * resource,
- UINT subresource,
- cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_int (CL_API_CALL *clEnqueueAcquireD3D11ObjectsKHR_fn)(
- cl_command_queue command_queue,
- cl_uint num_objects,
- const cl_mem * mem_objects,
- cl_uint num_events_in_wait_list,
- const cl_event * event_wait_list,
- cl_event * event) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_int (CL_API_CALL *clEnqueueReleaseD3D11ObjectsKHR_fn)(
- cl_command_queue command_queue,
- cl_uint num_objects,
- const cl_mem * mem_objects,
- cl_uint num_events_in_wait_list,
- const cl_event * event_wait_list,
- cl_event * event) CL_API_SUFFIX__VERSION_1_2;
-
-/***************************************************************
-* cl_intel_sharing_format_query_d3d11
-***************************************************************/
-#define cl_intel_sharing_format_query_d3d11 1
-
-/* when cl_khr_d3d11_sharing is supported */
-
-extern CL_API_ENTRY cl_int CL_API_CALL
-clGetSupportedD3D11TextureFormatsINTEL(
- cl_context context,
- cl_mem_flags flags,
- cl_mem_object_type image_type,
- cl_uint plane,
- cl_uint num_entries,
- DXGI_FORMAT* d3d11_formats,
- cl_uint* num_texture_formats) ;
-
-typedef cl_int (CL_API_CALL *
-clGetSupportedD3D11TextureFormatsINTEL_fn)(
- cl_context context,
- cl_mem_flags flags,
- cl_mem_object_type image_type,
- cl_uint plane,
- cl_uint num_entries,
- DXGI_FORMAT* d3d11_formats,
- cl_uint* num_texture_formats) ;
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* __OPENCL_CL_D3D11_H */
-
diff --git a/spaces/Jamkonams/AutoGPT/autogpt/json_utils/__init__.py b/spaces/Jamkonams/AutoGPT/autogpt/json_utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/JohnnyPittt/audio-styling/deepafx_st/processors/autodiff/peq.py b/spaces/JohnnyPittt/audio-styling/deepafx_st/processors/autodiff/peq.py
deleted file mode 100644
index 04e35bbe92ed5cfb780c4ad740e1ba75f64e5b03..0000000000000000000000000000000000000000
--- a/spaces/JohnnyPittt/audio-styling/deepafx_st/processors/autodiff/peq.py
+++ /dev/null
@@ -1,274 +0,0 @@
-import torch
-
-import deepafx_st.processors.autodiff.signal
-from deepafx_st.processors.processor import Processor
-
-
-@torch.jit.script
-def parametric_eq(
- x: torch.Tensor,
- sample_rate: float,
- low_shelf_gain_dB: torch.Tensor,
- low_shelf_cutoff_freq: torch.Tensor,
- low_shelf_q_factor: torch.Tensor,
- first_band_gain_dB: torch.Tensor,
- first_band_cutoff_freq: torch.Tensor,
- first_band_q_factor: torch.Tensor,
- second_band_gain_dB: torch.Tensor,
- second_band_cutoff_freq: torch.Tensor,
- second_band_q_factor: torch.Tensor,
- third_band_gain_dB: torch.Tensor,
- third_band_cutoff_freq: torch.Tensor,
- third_band_q_factor: torch.Tensor,
- fourth_band_gain_dB: torch.Tensor,
- fourth_band_cutoff_freq: torch.Tensor,
- fourth_band_q_factor: torch.Tensor,
- high_shelf_gain_dB: torch.Tensor,
- high_shelf_cutoff_freq: torch.Tensor,
- high_shelf_q_factor: torch.Tensor,
-):
- """Six-band parametric EQ.
-
- Low-shelf -> Band 1 -> Band 2 -> Band 3 -> Band 4 -> High-shelf
-
-    Args:
-        x (torch.Tensor): 1d signal.
-        sample_rate (float): sample rate in Hz.
-        remaining args (torch.Tensor): gain (dB), cutoff frequency (Hz) and Q factor
-            for the low-shelf, the four peaking bands, and the high-shelf, in that order.
-    """
- a_s, b_s = [], []
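-    # collect the biquad (b, a) coefficients for each band below, then apply them
-    # together as a single filter cascade at the end of this function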
- #print(f"autodiff peq fs = {sample_rate}")
-
- # -------- apply low-shelf filter --------
- b, a = deepafx_st.processors.autodiff.signal.biqaud(
- low_shelf_gain_dB,
- low_shelf_cutoff_freq,
- low_shelf_q_factor,
- sample_rate,
- "low_shelf",
- )
- b_s.append(b)
- a_s.append(a)
-
- # -------- apply first-band peaking filter --------
- b, a = deepafx_st.processors.autodiff.signal.biqaud(
- first_band_gain_dB,
- first_band_cutoff_freq,
- first_band_q_factor,
- sample_rate,
- "peaking",
- )
- b_s.append(b)
- a_s.append(a)
-
- # -------- apply second-band peaking filter --------
- b, a = deepafx_st.processors.autodiff.signal.biqaud(
- second_band_gain_dB,
- second_band_cutoff_freq,
- second_band_q_factor,
- sample_rate,
- "peaking",
- )
- b_s.append(b)
- a_s.append(a)
-
- # -------- apply third-band peaking filter --------
- b, a = deepafx_st.processors.autodiff.signal.biqaud(
- third_band_gain_dB,
- third_band_cutoff_freq,
- third_band_q_factor,
- sample_rate,
- "peaking",
- )
- b_s.append(b)
- a_s.append(a)
-
- # -------- apply fourth-band peaking filter --------
- b, a = deepafx_st.processors.autodiff.signal.biqaud(
- fourth_band_gain_dB,
- fourth_band_cutoff_freq,
- fourth_band_q_factor,
- sample_rate,
- "peaking",
- )
- b_s.append(b)
- a_s.append(a)
-
- # -------- apply high-shelf filter --------
- b, a = deepafx_st.processors.autodiff.signal.biqaud(
- high_shelf_gain_dB,
- high_shelf_cutoff_freq,
- high_shelf_q_factor,
- sample_rate,
- "high_shelf",
- )
- b_s.append(b)
- a_s.append(a)
-
- x = deepafx_st.processors.autodiff.signal.approx_iir_filter_cascade(
- b_s, a_s, x.view(-1)
- )
-
- return x
-
-
-class ParametricEQ(Processor):
- def __init__(
- self,
- sample_rate,
- min_gain_dB=-24.0,
- default_gain_dB=0.0,
- max_gain_dB=24.0,
- min_q_factor=0.1,
- default_q_factor=0.707,
- max_q_factor=10,
- eps=1e-8,
- ):
- """ """
- super().__init__()
- self.sample_rate = sample_rate
- self.eps = eps
- self.ports = [
- {
- "name": "Lowshelf gain",
- "min": min_gain_dB,
- "max": max_gain_dB,
- "default": default_gain_dB,
- "units": "dB",
- },
- {
- "name": "Lowshelf cutoff",
- "min": 20.0,
- "max": 200.0,
- "default": 100.0,
- "units": "Hz",
- },
- {
- "name": "Lowshelf Q",
- "min": min_q_factor,
- "max": max_q_factor,
- "default": default_q_factor,
- "units": "",
- },
- {
- "name": "First band gain",
- "min": min_gain_dB,
- "max": max_gain_dB,
- "default": default_gain_dB,
- "units": "dB",
- },
- {
- "name": "First band cutoff",
- "min": 200.0,
- "max": 2000.0,
- "default": 400.0,
- "units": "Hz",
- },
- {
- "name": "First band Q",
- "min": min_q_factor,
- "max": max_q_factor,
- "default": 0.707,
- "units": "",
- },
- {
- "name": "Second band gain",
- "min": min_gain_dB,
- "max": max_gain_dB,
- "default": default_gain_dB,
- "units": "dB",
- },
- {
- "name": "Second band cutoff",
- "min": 200.0,
- "max": 4000.0,
- "default": 1000.0,
- "units": "Hz",
- },
- {
- "name": "Second band Q",
- "min": min_q_factor,
- "max": max_q_factor,
- "default": default_q_factor,
- "units": "",
- },
- {
- "name": "Third band gain",
- "min": min_gain_dB,
- "max": max_gain_dB,
- "default": default_gain_dB,
- "units": "dB",
- },
- {
- "name": "Third band cutoff",
- "min": 2000.0,
- "max": 8000.0,
- "default": 4000.0,
- "units": "Hz",
- },
- {
- "name": "Third band Q",
- "min": min_q_factor,
- "max": max_q_factor,
- "default": default_q_factor,
- "units": "",
- },
- {
- "name": "Fourth band gain",
- "min": min_gain_dB,
- "max": max_gain_dB,
- "default": default_gain_dB,
- "units": "dB",
- },
- {
- "name": "Fourth band cutoff",
- "min": 4000.0,
- "max": (24000 // 2) * 0.9,
- "default": 8000.0,
- "units": "Hz",
- },
- {
- "name": "Fourth band Q",
- "min": min_q_factor,
- "max": max_q_factor,
- "default": default_q_factor,
- "units": "",
- },
- {
- "name": "Highshelf gain",
- "min": min_gain_dB,
- "max": max_gain_dB,
- "default": default_gain_dB,
- "units": "dB",
- },
- {
- "name": "Highshelf cutoff",
- "min": 4000.0,
- "max": (24000 // 2) * 0.9,
- "default": 8000.0,
- "units": "Hz",
- },
- {
- "name": "Highshelf Q",
- "min": min_q_factor,
- "max": max_q_factor,
- "default": default_q_factor,
- "units": "",
- },
- ]
-
- self.num_control_params = len(self.ports)
-
- def forward(self, x, p, sample_rate=24000, **kwargs):
-
- bs, chs, s = x.size()
-
- inputs = torch.split(x, 1, 0)
- params = torch.split(p, 1, 0)
-
- y = [] # loop over batch dimension
- for input, param in zip(inputs, params):
- denorm_param = self.denormalize_params(param.view(-1))
- y.append(parametric_eq(input.view(-1), sample_rate, *denorm_param))
-
- return torch.stack(y, dim=0).view(bs, 1, -1)
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/nets_61968KB.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/nets_61968KB.py
deleted file mode 100644
index 167d4cb2198863cf43e93440f7e63c5342fc7605..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/nets_61968KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_123821KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 32)
- self.stg1_high_band_net = BaseASPPNet(2, 32)
-
- self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(16, 32)
-
- self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(32, 64)
-
- self.out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(32, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
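-                # sharpen the sigmoid mask: exponents > 1 suppress low-confidence bins,
-                # with a stronger exponent applied above "split_bin"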
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/KenjieDec/GPEN/face_model/model.py b/spaces/KenjieDec/GPEN/face_model/model.py
deleted file mode 100644
index fe54c123c026b8c0af6fac39815afaea0a7017a4..0000000000000000000000000000000000000000
--- a/spaces/KenjieDec/GPEN/face_model/model.py
+++ /dev/null
@@ -1,818 +0,0 @@
-'''
-@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021)
-@author: yangxy (yangtao9009@gmail.com)
-'''
-import math
-import random
-import functools
-import operator
-import itertools
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.autograd import Function
-
-from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
-
-class PixelNorm(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input):
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
-
- if k.ndim == 1:
- k = k[None, :] * k[:, None]
-
- k /= k.sum()
-
- return k
-
-
-class Upsample(nn.Module):
- def __init__(self, kernel, factor=2, device='cpu'):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor ** 2)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
- self.device = device
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad, device=self.device)
-
- return out
-
-
-class Downsample(nn.Module):
- def __init__(self, kernel, factor=2, device='cpu'):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
- self.device = device
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad, device=self.device)
-
- return out
-
-
-class Blur(nn.Module):
- def __init__(self, kernel, pad, upsample_factor=1, device='cpu'):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor ** 2)
-
- self.register_buffer('kernel', kernel)
-
- self.pad = pad
- self.device = device
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad, device=self.device)
-
- return out
-
-
-class EqualConv2d(nn.Module):
- def __init__(
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True
- ):
- super().__init__()
-
- self.weight = nn.Parameter(
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
- )
- self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
-
- self.stride = stride
- self.padding = padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
-
- else:
- self.bias = None
-
- def forward(self, input):
- out = F.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},'
- f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})'
- )
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None, device='cpu'
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
-
- else:
- self.bias = None
-
- self.activation = activation
- self.device = device
-
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activation:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul, device=self.device)
-
- else:
- out = F.linear(input, self.weight * self.scale, bias=self.bias * self.lr_mul)
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})'
- )
-
-
-class ScaledLeakyReLU(nn.Module):
- def __init__(self, negative_slope=0.2):
- super().__init__()
-
- self.negative_slope = negative_slope
-
- def forward(self, input):
- out = F.leaky_relu(input, negative_slope=self.negative_slope)
-
- return out * math.sqrt(2)
-
-
-class ModulatedConv2d(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- demodulate=True,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- device='cpu'
- ):
- super().__init__()
-
- self.eps = 1e-8
- self.kernel_size = kernel_size
- self.in_channel = in_channel
- self.out_channel = out_channel
- self.upsample = upsample
- self.downsample = downsample
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor, device=device)
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1), device=device)
-
- fan_in = in_channel * kernel_size ** 2
- self.scale = 1 / math.sqrt(fan_in)
- self.padding = kernel_size // 2
-
- self.weight = nn.Parameter(
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
- )
-
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
-
- self.demodulate = demodulate
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, '
- f'upsample={self.upsample}, downsample={self.downsample})'
- )
-
- def forward(self, input, style):
- batch, in_channel, height, width = input.shape
-
- style = self.modulation(style).view(batch, 1, in_channel, 1, 1)
- weight = self.scale * self.weight * style
-
- if self.demodulate:
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
-
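-        # fold the batch into the channel/group dimension so one grouped convolution
-        # applies a different style-modulated kernel to every sample in the batch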
- weight = weight.view(
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
-
- if self.upsample:
- input = input.view(1, batch * in_channel, height, width)
- weight = weight.view(
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
- weight = weight.transpose(1, 2).reshape(
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
- )
- out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- _, _, height, width = input.shape
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- else:
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=self.padding, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- return out
-
-
-class NoiseInjection(nn.Module):
- def __init__(self, isconcat=True):
- super().__init__()
-
- self.isconcat = isconcat
- self.weight = nn.Parameter(torch.zeros(1))
-
- def forward(self, image, noise=None):
- if noise==None:
- batch, channel, height, width = image.shape
- noise = image.new_empty(batch, channel, height, width).normal_()
-
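-        # when isconcat is set, append the scaled noise as extra channels (doubling the
-        # channel count seen downstream) instead of adding it to the feature map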
- if self.isconcat:
- return torch.cat((image, self.weight * noise), dim=1)
- else:
- return image + self.weight * noise
-
-
-class ConstantInput(nn.Module):
- def __init__(self, channel, size=4):
- super().__init__()
-
- self.input = nn.Parameter(torch.randn(1, channel, size, size))
-
- def forward(self, input):
- batch = input.shape[0]
- out = self.input.repeat(batch, 1, 1, 1)
-
- return out
-
-
-class StyledConv(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=False,
- blur_kernel=[1, 3, 3, 1],
- demodulate=True,
- isconcat=True,
- device='cpu'
- ):
- super().__init__()
-
- self.conv = ModulatedConv2d(
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=upsample,
- blur_kernel=blur_kernel,
- demodulate=demodulate,
- device=device
- )
-
- self.noise = NoiseInjection(isconcat)
- #self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1))
- #self.activate = ScaledLeakyReLU(0.2)
- feat_multiplier = 2 if isconcat else 1
- self.activate = FusedLeakyReLU(out_channel*feat_multiplier, device=device)
-
- def forward(self, input, style, noise=None):
- out = self.conv(input, style)
- out = self.noise(out, noise=noise)
- # out = out + self.bias
- out = self.activate(out)
-
- return out
-
-
-class ToRGB(nn.Module):
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1], device='cpu'):
- super().__init__()
-
- if upsample:
- self.upsample = Upsample(blur_kernel, device=device)
-
- self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False, device=device)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- def forward(self, input, style, skip=None):
- out = self.conv(input, style)
- out = out + self.bias
-
- if skip is not None:
- skip = self.upsample(skip)
-
- out = out + skip
-
- return out
-
-class Generator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- isconcat=True,
- narrow=1,
- device='cpu'
- ):
- super().__init__()
-
- self.size = size
- self.n_mlp = n_mlp
- self.style_dim = style_dim
- self.feat_multiplier = 2 if isconcat else 1
-
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu', device=device
- )
- )
-
- self.style = nn.Sequential(*layers)
-
- self.channels = {
- 4: int(512 * narrow),
- 8: int(512 * narrow),
- 16: int(512 * narrow),
- 32: int(512 * narrow),
- 64: int(256 * channel_multiplier * narrow),
- 128: int(128 * channel_multiplier * narrow),
- 256: int(64 * channel_multiplier * narrow),
- 512: int(32 * channel_multiplier * narrow),
- 1024: int(16 * channel_multiplier * narrow),
- 2048: int(8 * channel_multiplier * narrow)
- }
-
- self.input = ConstantInput(self.channels[4])
- self.conv1 = StyledConv(
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel, isconcat=isconcat, device=device
- )
- self.to_rgb1 = ToRGB(self.channels[4]*self.feat_multiplier, style_dim, upsample=False, device=device)
-
- self.log_size = int(math.log(size, 2))
-
- self.convs = nn.ModuleList()
- self.upsamples = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
-
- in_channel = self.channels[4]
-
- for i in range(3, self.log_size + 1):
- out_channel = self.channels[2 ** i]
-
- self.convs.append(
- StyledConv(
- in_channel*self.feat_multiplier,
- out_channel,
- 3,
- style_dim,
- upsample=True,
- blur_kernel=blur_kernel,
- isconcat=isconcat,
- device=device
- )
- )
-
- self.convs.append(
- StyledConv(
- out_channel*self.feat_multiplier, out_channel, 3, style_dim, blur_kernel=blur_kernel, isconcat=isconcat, device=device
- )
- )
-
- self.to_rgbs.append(ToRGB(out_channel*self.feat_multiplier, style_dim, device=device))
-
- in_channel = out_channel
-
- self.n_latent = self.log_size * 2 - 2
-
- def make_noise(self):
- device = self.input.input.device
-
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device))
-
- return noises
-
- def mean_latent(self, n_latent):
- latent_in = torch.randn(
- n_latent, self.style_dim, device=self.input.input.device
- )
- latent = self.style(latent_in).mean(0, keepdim=True)
-
- return latent
-
- def get_latent(self, input):
- return self.style(input)
-
- def forward(
- self,
- styles,
- return_latents=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- ):
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
-
- if noise==None:
- '''
- noise = [None] * (2 * (self.log_size - 2) + 1)
- '''
- noise = []
- batch = styles[0].shape[0]
- for i in range(self.n_mlp + 1):
- size = 2 ** (i+2)
- noise.append(torch.randn(batch, self.channels[size], size, size, device=styles[0].device))
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) < 2:
- inject_index = self.n_latent
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
-
- else:
- if inject_index==None:
- inject_index = random.randint(1, self.n_latent - 1)
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
-
- latent = torch.cat([latent, latent2], 1)
-
- out = self.input(latent)
- out = self.conv1(out, latent[:, 0], noise=noise[0])
-
- skip = self.to_rgb1(out, latent[:, 1])
-
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise1)
- out = conv2(out, latent[:, i + 1], noise=noise2)
- skip = to_rgb(out, latent[:, i + 2], skip)
-
- i += 2
-
- image = skip
-
- if return_latents:
- return image, latent
-
- else:
- return image, None
-
-class ConvLayer(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- device='cpu'
- ):
- layers = []
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1), device=device))
-
- stride = 2
- self.padding = 0
-
- else:
- stride = 1
- self.padding = kernel_size // 2
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=self.padding,
- stride=stride,
- bias=bias and not activate,
- )
- )
-
- if activate:
- if bias:
- layers.append(FusedLeakyReLU(out_channel, device=device))
-
- else:
- layers.append(ScaledLeakyReLU(0.2))
-
- super().__init__(*layers)
-
-
-class ResBlock(nn.Module):
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1], device='cpu'):
- super().__init__()
-
- self.conv1 = ConvLayer(in_channel, in_channel, 3, device=device)
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True, device=device)
-
- self.skip = ConvLayer(
- in_channel, out_channel, 1, downsample=True, activate=False, bias=False
- )
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- skip = self.skip(input)
- out = (out + skip) / math.sqrt(2)
-
- return out
-
-class FullGenerator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- isconcat=True,
- narrow=1,
- device='cpu'
- ):
- super().__init__()
- channels = {
- 4: int(512 * narrow),
- 8: int(512 * narrow),
- 16: int(512 * narrow),
- 32: int(512 * narrow),
- 64: int(256 * channel_multiplier * narrow),
- 128: int(128 * channel_multiplier * narrow),
- 256: int(64 * channel_multiplier * narrow),
- 512: int(32 * channel_multiplier * narrow),
- 1024: int(16 * channel_multiplier * narrow),
- 2048: int(8 * channel_multiplier * narrow)
- }
-
- self.log_size = int(math.log(size, 2))
- self.generator = Generator(size, style_dim, n_mlp, channel_multiplier=channel_multiplier, blur_kernel=blur_kernel, lr_mlp=lr_mlp, isconcat=isconcat, narrow=narrow, device=device)
-
- conv = [ConvLayer(3, channels[size], 1, device=device)]
- self.ecd0 = nn.Sequential(*conv)
- in_channel = channels[size]
-
- self.names = ['ecd%d'%i for i in range(self.log_size-1)]
- for i in range(self.log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
- #conv = [ResBlock(in_channel, out_channel, blur_kernel)]
- conv = [ConvLayer(in_channel, out_channel, 3, downsample=True, device=device)]
- setattr(self, self.names[self.log_size-i+1], nn.Sequential(*conv))
- in_channel = out_channel
- self.final_linear = nn.Sequential(EqualLinear(channels[4] * 4 * 4, style_dim, activation='fused_lrelu', device=device))
-
- def forward(self,
- inputs,
- return_latents=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- ):
- noise = []
- for i in range(self.log_size-1):
- ecd = getattr(self, self.names[i])
- inputs = ecd(inputs)
- noise.append(inputs)
- #print(inputs.shape)
- inputs = inputs.view(inputs.shape[0], -1)
- outs = self.final_linear(inputs)
- #print(outs.shape)
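-        # each encoder feature map drives two StyledConv blocks at the matching resolution,
-        # so repeat every entry twice and reverse the list from coarse to fine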
- noise = list(itertools.chain.from_iterable(itertools.repeat(x, 2) for x in noise))[::-1]
- outs = self.generator([outs], return_latents, inject_index, truncation, truncation_latent, input_is_latent, noise=noise[1:])
- return outs
-
-class Discriminator(nn.Module):
- def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], narrow=1, device='cpu'):
- super().__init__()
-
- channels = {
- 4: int(512 * narrow),
- 8: int(512 * narrow),
- 16: int(512 * narrow),
- 32: int(512 * narrow),
- 64: int(256 * channel_multiplier * narrow),
- 128: int(128 * channel_multiplier * narrow),
- 256: int(64 * channel_multiplier * narrow),
- 512: int(32 * channel_multiplier * narrow),
- 1024: int(16 * channel_multiplier * narrow),
- 2048: int(8 * channel_multiplier * narrow)
- }
-
- convs = [ConvLayer(3, channels[size], 1, device=device)]
-
- log_size = int(math.log(size, 2))
-
- in_channel = channels[size]
-
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
-
- convs.append(ResBlock(in_channel, out_channel, blur_kernel, device=device))
-
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- self.stddev_group = 4
- self.stddev_feat = 1
-
- self.final_conv = ConvLayer(in_channel + 1, channels[4], 3, device=device)
- self.final_linear = nn.Sequential(
- EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu', device=device),
- EqualLinear(channels[4], 1),
- )
-
- def forward(self, input):
- out = self.convs(input)
-
- batch, channel, height, width = out.shape
- group = min(batch, self.stddev_group)
- stddev = out.view(
- group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
- )
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
-
- out = out.view(batch, -1)
- out = self.final_linear(out)
- return out
-
-class FullGenerator_SR(nn.Module):
- def __init__(
- self,
- size,
- out_size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- isconcat=True,
- narrow=1,
- device='cpu'
- ):
- super().__init__()
- channels = {
- 4: int(512 * narrow),
- 8: int(512 * narrow),
- 16: int(512 * narrow),
- 32: int(512 * narrow),
- 64: int(256 * channel_multiplier * narrow),
- 128: int(128 * channel_multiplier * narrow),
- 256: int(64 * channel_multiplier * narrow),
- 512: int(32 * channel_multiplier * narrow),
- 1024: int(16 * channel_multiplier * narrow),
- 2048: int(8 * channel_multiplier * narrow),
- }
-
- self.log_insize = int(math.log(size, 2))
- self.log_outsize = int(math.log(out_size, 2))
- self.generator = Generator(out_size, style_dim, n_mlp, channel_multiplier=channel_multiplier, blur_kernel=blur_kernel, lr_mlp=lr_mlp, isconcat=isconcat, narrow=narrow, device=device)
-
- conv = [ConvLayer(3, channels[size], 1, device=device)]
- self.ecd0 = nn.Sequential(*conv)
- in_channel = channels[size]
-
- self.names = ['ecd%d'%i for i in range(self.log_insize-1)]
- for i in range(self.log_insize, 2, -1):
- out_channel = channels[2 ** (i - 1)]
- #conv = [ResBlock(in_channel, out_channel, blur_kernel)]
- conv = [ConvLayer(in_channel, out_channel, 3, downsample=True, device=device)]
- setattr(self, self.names[self.log_insize-i+1], nn.Sequential(*conv))
- in_channel = out_channel
- self.final_linear = nn.Sequential(EqualLinear(channels[4] * 4 * 4, style_dim, activation='fused_lrelu', device=device))
-
- def forward(self,
- inputs,
- return_latents=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- ):
- noise = []
- for i in range(self.log_outsize-self.log_insize):
- noise.append(None)
- for i in range(self.log_insize-1):
- ecd = getattr(self, self.names[i])
- inputs = ecd(inputs)
- noise.append(inputs)
- #print(inputs.shape)
- inputs = inputs.view(inputs.shape[0], -1)
- outs = self.final_linear(inputs)
- #print(outs.shape)
- noise = list(itertools.chain.from_iterable(itertools.repeat(x, 2) for x in noise))[::-1]
- image, latent = self.generator([outs], return_latents, inject_index, truncation, truncation_latent, input_is_latent, noise=noise[1:])
- return image, latent
\ No newline at end of file
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/convolution.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/convolution.py
deleted file mode 100644
index 2d2c399e406aae97a6baf0f7de379a1d90a97949..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/convolution.py
+++ /dev/null
@@ -1,74 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-# Copyright 2020 Johns Hopkins University (Shinji Watanabe)
-# Northwestern Polytechnical University (Pengcheng Guo)
-# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-
-"""ConvolutionModule definition."""
-
-from torch import nn
-
-
-class ConvolutionModule(nn.Module):
- """ConvolutionModule in Conformer model.
-
- :param int channels: channels of cnn
-    :param int kernel_size: kernel size of cnn
-
- """
-
- def __init__(self, channels, kernel_size, activation=nn.ReLU(), bias=True):
-        """Construct a ConvolutionModule object."""
- super(ConvolutionModule, self).__init__()
-        # kernel_size should be an odd number for 'SAME' padding
- assert (kernel_size - 1) % 2 == 0
-
- self.pointwise_conv1 = nn.Conv1d(
- channels,
- 2 * channels,
- kernel_size=1,
- stride=1,
- padding=0,
- bias=bias,
- )
- self.depthwise_conv = nn.Conv1d(
- channels,
- channels,
- kernel_size,
- stride=1,
- padding=(kernel_size - 1) // 2,
- groups=channels,
- bias=bias,
- )
- self.norm = nn.BatchNorm1d(channels)
- self.pointwise_conv2 = nn.Conv1d(
- channels,
- channels,
- kernel_size=1,
- stride=1,
- padding=0,
- bias=bias,
- )
- self.activation = activation
-
- def forward(self, x):
- """Compute convolution module.
-
- :param torch.Tensor x: (batch, time, size)
- :return torch.Tensor: convoluted `value` (batch, time, d_model)
- """
- # exchange the temporal dimension and the feature dimension
- x = x.transpose(1, 2)
-
- # GLU mechanism
- x = self.pointwise_conv1(x) # (batch, 2*channel, dim)
- x = nn.functional.glu(x, dim=1) # (batch, channel, dim)
-
- # 1D Depthwise Conv
- x = self.depthwise_conv(x)
- x = self.activation(self.norm(x))
-
- x = self.pointwise_conv2(x)
-
- return x.transpose(1, 2)
diff --git a/spaces/KevinQHLin/UniVTG/model/transformer_encoder_droppath.py b/spaces/KevinQHLin/UniVTG/model/transformer_encoder_droppath.py
deleted file mode 100644
index 536d46529a4722c1bf787d85fbace0afe1e3a33b..0000000000000000000000000000000000000000
--- a/spaces/KevinQHLin/UniVTG/model/transformer_encoder_droppath.py
+++ /dev/null
@@ -1,194 +0,0 @@
-import copy
-import pdb
-from typing import Optional
-
-import torch
-import torch.nn.functional as F
-from torch import nn, Tensor
-
-def mask_logits(inputs, mask, mask_value=-1e30):
- mask = mask.type(torch.float32)
- return inputs + (1.0 - mask) * mask_value
-
-
-class Transformer(nn.Module):
-
- def __init__(self, d_model=512, nhead=8, num_encoder_layers=4,
- num_decoder_layers=6, dim_feedforward=2048, dropout=0.1, droppath=0.1,
- activation="gelu", normalize_before=False, # False as default
- return_intermediate_dec=False):
- super().__init__()
-
- encoder_layer = TransformerEncoderLayer(d_model, nhead, dim_feedforward,
- dropout, droppath, activation, normalize_before)
- encoder_norm = nn.LayerNorm(d_model) if normalize_before else None
- self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm)
-
- self._reset_parameters()
-
- self.d_model = d_model
- self.nhead = nhead
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def forward(self, src, mask, pos_embed):
- """
- Args:
- src: (batch_size, L, d)
- mask: (batch_size, L)
-            query_embed: (#queries, d); in this implementation (batch_size, d) with #queries = 1 (not used by this encoder-only forward)
- pos_embed: (batch_size, L, d) the same as src
-
-        Returns:
-            memory: (batch_size, L, d)
-
- """
-        # switch to sequence-first layout: (batch_size, L, d) -> (L, batch_size, d)
- src = src.permute(1, 0, 2) # (L, batch_size, d)
- pos_embed = pos_embed.permute(1, 0, 2) # (L, batch_size, d)
-
- memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed)
- memory = memory.transpose(0, 1)
-
- return memory
-
-
-class TransformerEncoder(nn.Module):
- def __init__(self, encoder_layer, num_layers, norm=None, return_intermediate=False):
- super().__init__()
- self.layers = _get_clones(encoder_layer, num_layers)
- self.num_layers = num_layers
- self.norm = norm
- self.return_intermediate = return_intermediate
-
- def forward(self, src,
- mask: Optional[Tensor] = None,
- src_key_padding_mask: Optional[Tensor] = None,
- pos: Optional[Tensor] = None):
- output = src
-
- intermediate = []
-
- for layer in self.layers:
- output = layer(output, src_mask=mask,
- src_key_padding_mask=src_key_padding_mask, pos=pos)
- if self.return_intermediate:
- intermediate.append(output)
-
- if self.norm is not None:
- output = self.norm(output)
-
- if self.return_intermediate:
- return torch.stack(intermediate)
-
- return output
-
-class TransformerEncoderLayer(nn.Module):
-
- def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1, droppath=0.1,
- activation="relu", normalize_before=False):
- super().__init__()
- self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
- # Implementation of Feedforward model
- self.linear1 = nn.Linear(d_model, dim_feedforward)
- self.dropout = nn.Dropout(dropout)
- self.linear2 = nn.Linear(dim_feedforward, d_model)
-
- self.norm1 = nn.LayerNorm(d_model)
- self.norm2 = nn.LayerNorm(d_model)
- # self.dropout1 = nn.Dropout(dropout)
- # self.dropout2 = nn.Dropout(dropout)
- self.droppath1 = DropPath(droppath)
- self.droppath2 = DropPath(droppath)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- def with_pos_embed(self, tensor, pos: Optional[Tensor]):
- return tensor if pos is None else tensor + pos
-
- def forward_post(self,
- src,
- src_mask: Optional[Tensor] = None,
- src_key_padding_mask: Optional[Tensor] = None,
- pos: Optional[Tensor] = None):
- q = k = self.with_pos_embed(src, pos)
- src2 = self.self_attn(q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)[0]
- # src2 = self.self_attn_eff(q=q, k=k, v=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)[0]
- src = src + self.droppath1(src2)
- src = self.norm1(src)
- src2 = self.linear2(self.activation(self.linear1(src)))
- # src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
- src = src + self.droppath2(src2)
- src = self.norm2(src)
- return src
-
-    def forward_pre(self, src,
-                    src_mask: Optional[Tensor] = None,
-                    src_key_padding_mask: Optional[Tensor] = None,
-                    pos: Optional[Tensor] = None):
-        # pre-norm counterpart of forward_post: LayerNorm is applied before attention/FFN
-        src2 = self.norm1(src)
-        q = k = self.with_pos_embed(src2, pos)
-        src2 = self.self_attn(q, k, value=src2, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)[0]
-        src = src + self.droppath1(src2)
-        src2 = self.norm2(src)
-        src2 = self.linear2(self.activation(self.linear1(src2)))
-        src = src + self.droppath2(src2)
-        return src
-
-    def forward(self, src,
-                src_mask: Optional[Tensor] = None,
-                src_key_padding_mask: Optional[Tensor] = None,
-                pos: Optional[Tensor] = None):
-        if self.normalize_before:
-            return self.forward_pre(src, src_mask, src_key_padding_mask, pos)
-        return self.forward_post(src, src_mask, src_key_padding_mask, pos)
-
-
-def _get_clones(module, N):
- return nn.ModuleList([copy.deepcopy(module) for i in range(N)])
-
-
-def build_transformer(args):
- return Transformer(
- d_model=args.hidden_dim,
- dropout=args.dropout,
- droppath=args.droppath,
- nhead=args.nheads,
- dim_feedforward=args.dim_feedforward,
- num_encoder_layers=args.enc_layers,
- num_decoder_layers=args.dec_layers,
- normalize_before=args.pre_norm,
- return_intermediate_dec=True,
- )
-
-def drop_path(x, drop_prob=0.0, training=False):
- """
- Stochastic Depth per sample.
- """
- if drop_prob == 0.0 or not training:
- return x
-
- keep_prob = 1 - drop_prob
- shape = (x.shape[0],) + (1,) * (x.ndim - 1)
- mask = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
- mask.floor_()
- x = x.div(keep_prob) * mask
-
- return x
-
-
-class DropPath(nn.Module):
- """
- Drop paths per sample (when applied in main path of residual blocks).
- """
-
- def __init__(self, drop_prob=None):
- super(DropPath, self).__init__()
-
- self.drop_prob = drop_prob
-
- def forward(self, x):
- x = x.permute(1, 0, 2)
- res = drop_path(x, self.drop_prob, self.training)
- return res.permute(1, 0, 2)
- # return drop_path(x, self.drop_prob, self.training)
-
-def _get_activation_fn(activation):
- """Return an activation function given a string"""
- if activation == "relu":
- return F.relu
- if activation == "gelu":
- return F.gelu
- if activation == "glu":
- return F.glu
-    raise RuntimeError(F"activation should be relu/gelu/glu, not {activation}.")
\ No newline at end of file
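The DropPath module in the deleted file above implements per-sample stochastic depth: during training each sample's residual branch is dropped with probability drop_prob, and the surviving samples are rescaled by 1/keep_prob so the expected value is unchanged. A self-contained sketch of the same mechanism follows (it assumes only PyTorch, not the deleted file).

```python
import torch

def drop_path(x, drop_prob=0.0, training=False):
    """Per-sample stochastic depth, mirroring the helper in the file above."""
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)   # one Bernoulli draw per sample, broadcast over the rest
    mask = (keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)).floor_()
    return x.div(keep_prob) * mask                # survivors are scaled up to preserve the expectation

x = torch.ones(4, 3, 8)
print(drop_path(x, drop_prob=0.5, training=True)[:, 0, 0])  # each entry is either 0.0 or 2.0
```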
diff --git a/spaces/LavanyaBurlagadda/TChatBotWithPlayHT1/README.md b/spaces/LavanyaBurlagadda/TChatBotWithPlayHT1/README.md
deleted file mode 100644
index 2b35a86a7cd5100b8d251c9decdd88edda6f600c..0000000000000000000000000000000000000000
--- a/spaces/LavanyaBurlagadda/TChatBotWithPlayHT1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: TChatBotWithPlayHT1
-emoji: 👁
-colorFrom: gray
-colorTo: red
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/assets/themes/Acat.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/assets/themes/Acat.py
deleted file mode 100644
index 32c0dbe3bd221eb97ca329a77fe193d124ce1fdd..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/assets/themes/Acat.py
+++ /dev/null
@@ -1,313 +0,0 @@
-from __future__ import annotations
-
-from typing import Iterable
-import gradio as gr
-
-#gr.themes.builder()
-from gradio.themes.base import Base
-from gradio.themes.utils import colors, fonts, sizes
-import time
-
-class Acat(Base):
- def __init__(
- self,
- *,
- primary_hue: colors.Color | str = colors.green,
- secondary_hue: colors.Color | str = colors.emerald,
- neutral_hue: colors.Color | str = colors.neutral,
- spacing_size: sizes.Size | str = sizes.spacing_md,
- radius_size: sizes.Size | str = sizes.radius_md,
- text_size: sizes.Size | str = sizes.text_lg,
- font: fonts.Font
- | str
- | Iterable[fonts.Font | str] = (
- 'Inter V',
- fonts.GoogleFont('Asap'),
- 'ui-sans-serif',
- 'sans-serif',
- ),
- font_mono: fonts.Font
- | str
- | Iterable[fonts.Font | str] = (
- 'ui-monospace',
- fonts.GoogleFont("Fira Code"),
- 'Consolas',
- 'monospace',
- ),
- ):
- super().__init__(
- primary_hue=primary_hue,
- secondary_hue=secondary_hue,
- neutral_hue=neutral_hue,
- spacing_size=spacing_size,
- radius_size=radius_size,
- text_size=text_size,
- font=font,
- font_mono=font_mono,
- )
-        self.name = "Acat"
-        self.secondary_100 = '#e2effc'
-        self.secondary_200 = '#bedff9'
-        self.secondary_300 = '#84c5f5'
-        self.secondary_400 = '#4eacef'
-        self.secondary_50 = '#f1f8fe'
-        self.secondary_500 = '#198cde'
-        self.secondary_600 = '#0c6ebd'
-        self.secondary_700 = '#0b5899'
-        self.secondary_800 = '#0e4b7e'
-        self.secondary_900 = '#113f69'
-        self.secondary_950 = '#0b2846'
-        self.neutral_100 = '#e2effc'
-        self.neutral_200 = '#bedff9'
-        self.neutral_300 = '#84c5f5'
-        self.neutral_400 = '#4eacef'
-        self.neutral_50 = '#f1f8fe'
-        self.neutral_500 = '#198cde'
-        self.neutral_600 = '#0c6ebd'
-        self.neutral_700 = '#0b5899'
-        self.neutral_800 = '#0e4b7e'
-        self.neutral_900 = '#113f69'
-        self.neutral_950 = '#0b2846'
-        self.primary_100 = '#e2effc'
-        self.primary_200 = '#bedff9'
-        self.primary_300 = '#84c5f5'
-        self.primary_400 = '#4eacef'
-        self.primary_50 = '#f1f8fe'
-        self.primary_500 = '#198cde'
-        self.primary_600 = '#0c6ebd'
-        self.primary_700 = '#0b5899'
-        self.primary_800 = '#0e4b7e'
-        self.primary_900 = '#113f69'
-        self.primary_950 = '#0b2846'
- super().set(
- # Blaise
- background_fill_primary='#FFFFFF',
- background_fill_primary_dark='#000000',
- background_fill_secondary='#dce3e8',
- background_fill_secondary_dark='#242424',
- block_background_fill='#ECF2F7',
- block_background_fill_dark='#191919',
- block_border_color='#dce3e8',
- block_border_color_dark='#242424',
- block_border_width='1px',
- block_info_text_color='#191919',
- block_info_text_color_dark='#ECF2F7',
- block_info_text_size='*text_sm',
- block_info_text_weight='400',
- block_label_background_fill='#ECF2F700',
- block_label_background_fill_dark='#19191900',
- block_label_border_color='#dce3e8',
- block_label_border_color_dark='#242424',
- block_label_border_width='1px',
- block_label_margin='0',
- block_label_padding='*spacing_sm *spacing_lg',
- block_label_radius= "calc(*radius_lg - 1px) 0 calc(*radius_lg - 1px) 0",
- block_label_right_radius= "0 calc(*radius_lg - 1px) 0 calc(*radius_lg - 1px)",
- block_label_shadow='*block_shadow',
- block_label_text_color='#4EACEF',
- block_label_text_color_dark='#4EACEF',
- block_label_text_size='*text_sm',
- block_label_text_weight='400',
- block_padding='*spacing_xl calc(*spacing_xl + 2px)',
- block_radius='*radius_lg',
- block_shadow='#FFFFFF00',
- block_shadow_dark='#00000000',
- block_title_background_fill='#ECF2F700',
- block_title_background_fill_dark='#19191900',
- block_title_border_color='#dce3e8',
- block_title_border_color_dark='#242424',
- block_title_border_width='0px',
- block_title_padding='0',
- block_title_radius='none',
- block_title_text_color='#4EACEF',
- block_title_text_color_dark='#4EACEF',
- block_title_text_size='*text_md',
- block_title_text_weight='bold',
- body_background_fill="url('https://cdn.discordapp.com/attachments/1151036558389026838/1158634938128670730/image-removebg-preview_1.png') #FFFFFF no-repeat right bottom/auto 30svh padding-box fixed",
- body_background_fill_dark="url('https://cdn.discordapp.com/attachments/1151036558389026838/1158634938128670730/image-removebg-preview_1.png') #000000 no-repeat right bottom/auto 30svh padding-box fixed",
- body_text_color='#191919',
- body_text_color_dark='#ECF2F7',
- body_text_color_subdued='#636668',
- body_text_color_subdued_dark='#c4c4c4',
- body_text_size='*text_md',
- body_text_weight='400',
- border_color_accent='#dce3e8',
- border_color_accent_dark='#242424',
- border_color_primary='#dce3e8',
- border_color_primary_dark='#242424',
- button_border_width='*input_border_width',
- button_border_width_dark='*input_border_width',
- button_cancel_background_fill='#dce3e8',
- button_cancel_background_fill_dark='#242424',
- button_cancel_background_fill_hover='#d0d7db',
- button_cancel_background_fill_hover_dark='#202020',
- button_cancel_border_color='#191919',
- button_cancel_border_color_dark='#ECF2F7',
- button_cancel_border_color_hover='#202020',
- button_cancel_border_color_hover_dark='#a1c3d8',
- button_cancel_text_color='#4EACEF',
- button_cancel_text_color_dark='#4EACEF',
- button_cancel_text_color_hover='#0c6ebd',
- button_cancel_text_color_hover_dark='#0c6ebd',
- button_large_padding='*spacing_lg calc(2 * *spacing_lg)',
- button_large_radius='*radius_lg',
- button_large_text_size='*text_lg',
- button_large_text_weight='600',
- button_primary_background_fill='#4EACEF',
- button_primary_background_fill_dark='#4EACEF',
- button_primary_background_fill_hover='#0c6ebd',
- button_primary_background_fill_hover_dark='#0c6ebd',
- button_primary_border_color='#191919',
- button_primary_border_color_dark='#ECF2F7',
- button_primary_border_color_hover='#202020',
- button_primary_border_color_hover_dark='#a1c3d8',
- button_primary_text_color='#ECF2F7',
- button_primary_text_color_dark='#191919',
- button_primary_text_color_hover='#e1eaf0',
- button_primary_text_color_hover_dark='#141414',
- button_secondary_background_fill='#dce3e8',
- button_secondary_background_fill_dark='#242424',
- button_secondary_background_fill_hover='#d0d7db',
- button_secondary_background_fill_hover_dark='#202020',
- button_secondary_border_color='#dce3e8',
- button_secondary_border_color_dark='#242424',
- button_secondary_border_color_hover='#d0d7db',
- button_secondary_border_color_hover_dark='#202020',
- button_secondary_text_color='#4EACEF',
- button_secondary_text_color_dark='#4EACEF',
- button_secondary_text_color_hover='#0c6ebd',
- button_secondary_text_color_hover_dark='#0c6ebd',
- button_shadow='none',
- button_shadow_active='none',
- button_shadow_hover='none',
- button_small_padding='*spacing_sm calc(2 * *spacing_sm)',
- button_small_radius='*radius_lg',
- button_small_text_size='*text_md',
- button_small_text_weight='400',
- button_transition='background-color 0.2s ease',
- chatbot_code_background_color='#FFFFFF',
- chatbot_code_background_color_dark='#000000',
- checkbox_background_color='#dce3e8',
- checkbox_background_color_dark='#242424',
- checkbox_background_color_focus='#dce3e8',
- checkbox_background_color_focus_dark='#242424',
- checkbox_background_color_hover='#dce3e8',
- checkbox_background_color_hover_dark='#242424',
- checkbox_background_color_selected='#4EACEF',
- checkbox_background_color_selected_dark='#4EACEF',
- checkbox_border_color='#dce3e8',
- checkbox_border_color_dark='#242424',
- checkbox_border_color_focus='#4EACEF',
- checkbox_border_color_focus_dark='#4EACEF',
- checkbox_border_color_hover='#4EACEF',
- checkbox_border_color_hover_dark='#4EACEF',
- checkbox_border_color_selected='#4EACEF',
- checkbox_border_color_selected_dark='#4EACEF',
- checkbox_border_radius='*radius_sm',
- checkbox_border_width='1px',
- checkbox_border_width_dark='1px',
- checkbox_check= "url(\"data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3cpath d='M12.207 4.793a1 1 0 010 1.414l-5 5a1 1 0 01-1.414 0l-2-2a1 1 0 011.414-1.414L6.5 9.086l4.293-4.293a1 1 0 011.414 0z'/%3e%3c/svg%3e\")",
- checkbox_label_background_fill='#ECF2F7',
- checkbox_label_background_fill_dark='#191919',
- checkbox_label_background_fill_hover='#dce3e8',
- checkbox_label_background_fill_hover_dark='#242424',
- checkbox_label_background_fill_selected='#dce3e8',
- checkbox_label_background_fill_selected_dark='#242424',
- checkbox_label_border_color='#dce3e8',
- checkbox_label_border_color_dark='#242424',
- checkbox_label_border_color_hover='#4EACEF',
- checkbox_label_border_color_hover_dark='#4EACEF',
- checkbox_label_border_width='1px',
- checkbox_label_border_width_dark='1px',
- checkbox_label_gap='*spacing_lg',
- checkbox_label_padding='*spacing_md calc(2 * *spacing_md)',
- checkbox_label_shadow='none',
- checkbox_label_text_color='#191919',
- checkbox_label_text_color_dark='#ECF2F7',
- checkbox_label_text_color_selected='#4EACEF',
- checkbox_label_text_color_selected_dark='#4EACEF',
- checkbox_label_text_size='*text_md',
- checkbox_label_text_weight='400',
- checkbox_shadow='*input_shadow',
- color_accent='*primary_500',
- color_accent_soft='#dce3e8',
- color_accent_soft_dark='#242424',
- container_radius='*radius_lg',
- embed_radius='*radius_lg',
- error_background_fill='#dce3e8',
- error_background_fill_dark='#242424',
- error_border_color='#191919',
- error_border_color_dark='#ECF2F7',
- error_border_width='1px',
- error_border_width_dark='1px',
- error_text_color='#4EACEF',
- error_text_color_dark='#4EACEF',
- form_gap_width='0px',
- input_background_fill='#dce3e8',
- input_background_fill_dark='#242424',
- input_background_fill_focus='#dce3e8',
- input_background_fill_focus_dark='#242424',
- input_background_fill_hover='#d0d7db',
- input_background_fill_hover_dark='#202020',
- input_border_color='#191919',
- input_border_color_dark='#ECF2F7',
- input_border_color_focus='#191919',
- input_border_color_focus_dark='#ECF2F7',
- input_border_color_hover='#202020',
- input_border_color_hover_dark='#a1c3d8',
- input_border_width='0px',
- input_padding='*spacing_xl',
- input_placeholder_color='#19191930',
- input_placeholder_color_dark='#ECF2F730',
- input_radius='*radius_lg',
- input_shadow='#19191900',
- input_shadow_dark='#ECF2F700',
- input_shadow_focus='#19191900',
- input_shadow_focus_dark='#ECF2F700',
- input_text_size='*text_md',
- input_text_weight='400',
- layout_gap='*spacing_xxl',
- link_text_color='#4EACEF',
- link_text_color_active='#4EACEF',
- link_text_color_active_dark='#4EACEF',
- link_text_color_dark='#4EACEF',
- link_text_color_hover='#0c6ebd',
- link_text_color_hover_dark='#0c6ebd',
- link_text_color_visited='#4EACEF',
- link_text_color_visited_dark='#4EACEF',
- loader_color='#4EACEF',
- loader_color_dark='#4EACEF',
-
- panel_background_fill='#ECF2F7',
- panel_background_fill_dark='#191919',
- panel_border_color='#4EACEF',
- panel_border_color_dark='#4EACEF',
- panel_border_width='0',
-
- prose_header_text_weight='600',
- prose_text_size='*text_md',
- prose_text_weight='400',
- radio_circle= "url(\"data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3ccircle cx='8' cy='8' r='3'/%3e%3c/svg%3e\")",
- section_header_text_size='*text_md',
- section_header_text_weight='400',
- shadow_drop='rgba(0,0,0,0.05) 0px 1px 2px 0px',
- shadow_drop_lg='0 1px 3px 0 rgb(0 0 0 / 0.1), 0 1px 2px -1px rgb(0 0 0 / 0.1)',
- shadow_inset='rgba(0,0,0,0.05) 0px 2px 4px 0px inset',
- shadow_spread='#FFFFFF',
- shadow_spread_dark='#000000',
- slider_color='#4EACEF',
- slider_color_dark='#4EACEF',
- stat_background_fill='#4EACEF',
- stat_background_fill_dark='#4EACEF',
- table_border_color='#191919',
- table_border_color_dark='#ECF2F7',
- table_even_background_fill='#ECF2F7',
- table_even_background_fill_dark='#191919',
- table_odd_background_fill='#dce3e8',
- table_odd_background_fill_dark='#242424',
- table_radius='*radius_lg',
- table_row_focus='#191919',
- table_row_focus_dark='#ECF2F7',
-
- )
-
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/order.py b/spaces/Lianjd/stock_dashboard/backtrader/order.py
deleted file mode 100644
index 458a35e9fbba0cafb4e90484dbb405b7a1c73aba..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/order.py
+++ /dev/null
@@ -1,641 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see .
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-import collections
-from copy import copy
-import datetime
-import itertools
-
-from .utils.py3 import range, with_metaclass, iteritems
-
-from .metabase import MetaParams
-from .utils import AutoOrderedDict
-
-
-class OrderExecutionBit(object):
- '''
- Intended to hold information about order execution. A "bit" does not
-    determine if the order has been fully/partially executed; it just holds
- information.
-
- Member Attributes:
-
- - dt: datetime (float) execution time
- - size: how much was executed
- - price: execution price
-      - closed: how much of the execution closed an existing position
- - opened: how much of the execution opened a new position
- - openedvalue: market value of the "opened" part
- - closedvalue: market value of the "closed" part
- - closedcomm: commission for the "closed" part
- - openedcomm: commission for the "opened" part
-
- - value: market value for the entire bit size
- - comm: commission for the entire bit execution
- - pnl: pnl generated by this bit (if something was closed)
-
- - psize: current open position size
- - pprice: current open position price
-
- '''
-
- def __init__(self,
- dt=None, size=0, price=0.0,
- closed=0, closedvalue=0.0, closedcomm=0.0,
- opened=0, openedvalue=0.0, openedcomm=0.0,
- pnl=0.0,
- psize=0, pprice=0.0):
-
- self.dt = dt
- self.size = size
- self.price = price
-
- self.closed = closed
- self.opened = opened
- self.closedvalue = closedvalue
- self.openedvalue = openedvalue
- self.closedcomm = closedcomm
- self.openedcomm = openedcomm
-
- self.value = closedvalue + openedvalue
- self.comm = closedcomm + openedcomm
- self.pnl = pnl
-
- self.psize = psize
- self.pprice = pprice
-
-
-class OrderData(object):
- '''
- Holds actual order data for Creation and Execution.
-
-    In the case of Creation it holds the request made; in the case of
-    Execution it holds the actual outcome.
-
- Member Attributes:
-
- - exbits : iterable of OrderExecutionBits for this OrderData
-
- - dt: datetime (float) creation/execution time
- - size: requested/executed size
- - price: execution price
-      Note: if no price is given and no pricelimit is given, the closing
-      price at the time of order creation will be used as reference
- - pricelimit: holds pricelimit for StopLimit (which has trigger first)
- - trailamount: absolute price distance in trailing stops
- - trailpercent: percentage price distance in trailing stops
-
- - value: market value for the entire bit size
- - comm: commission for the entire bit execution
- - pnl: pnl generated by this bit (if something was closed)
- - margin: margin incurred by the Order (if any)
-
- - psize: current open position size
- - pprice: current open position price
-
- '''
-    # According to the docs, collections.deque is thread-safe for appends at
-    # both ends. Nothing is ever popped, so two indices are enough to know
-    # which exbits are new. At cloning time (__copy__) the indices are updated
-    # to the previous end and the new end (len(exbits)).
- # Example: start 0, 0 -> islice(exbits, 0, 0) -> []
- # One added -> copy -> updated 0, 1 -> islice(exbits, 0, 1) -> [1 elem]
- # Other added -> copy -> updated 1, 2 -> islice(exbits, 1, 2) -> [1 elem]
- # "add" and "__copy__" happen always in the same thread (with all current
- # implementations) and therefore no append will happen during a copy and
- # the len of the exbits can be queried with no concerns about another
- # thread making an append and with no need for a lock
-
- def __init__(self, dt=None, size=0, price=0.0, pricelimit=0.0, remsize=0,
- pclose=0.0, trailamount=0.0, trailpercent=0.0):
-
- self.pclose = pclose
- self.exbits = collections.deque() # for historical purposes
- self.p1, self.p2 = 0, 0 # indices to pending notifications
-
- self.dt = dt
- self.size = size
- self.remsize = remsize
- self.price = price
- self.pricelimit = pricelimit
- self.trailamount = trailamount
- self.trailpercent = trailpercent
-
- if not pricelimit:
- # if no pricelimit is given, use the given price
- self.pricelimit = self.price
-
- if pricelimit and not price:
- # price must always be set if pricelimit is set ...
- self.price = pricelimit
-
- self.plimit = pricelimit
-
- self.value = 0.0
- self.comm = 0.0
- self.margin = None
- self.pnl = 0.0
-
- self.psize = 0
- self.pprice = 0
-
- def _getplimit(self):
- return self._plimit
-
- def _setplimit(self, val):
- self._plimit = val
-
- plimit = property(_getplimit, _setplimit)
-
- def __len__(self):
- return len(self.exbits)
-
- def __getitem__(self, key):
- return self.exbits[key]
-
- def add(self, dt, size, price,
- closed=0, closedvalue=0.0, closedcomm=0.0,
- opened=0, openedvalue=0.0, openedcomm=0.0,
- pnl=0.0,
- psize=0, pprice=0.0):
-
- self.addbit(
- OrderExecutionBit(dt, size, price,
- closed, closedvalue, closedcomm,
- opened, openedvalue, openedcomm, pnl,
- psize, pprice))
-
- def addbit(self, exbit):
- # Stores an ExecutionBit and recalculates own values from ExBit
- self.exbits.append(exbit)
-
- self.remsize -= exbit.size
-
- self.dt = exbit.dt
- oldvalue = self.size * self.price
- newvalue = exbit.size * exbit.price
- self.size += exbit.size
- self.price = (oldvalue + newvalue) / self.size
- self.value += exbit.value
- self.comm += exbit.comm
- self.pnl += exbit.pnl
- self.psize = exbit.psize
- self.pprice = exbit.pprice
-
- def getpending(self):
- return list(self.iterpending())
-
- def iterpending(self):
- return itertools.islice(self.exbits, self.p1, self.p2)
-
- def markpending(self):
- # rebuild the indices to mark which exbits are pending in clone
- self.p1, self.p2 = self.p2, len(self.exbits)
-
- def clone(self):
- self.markpending()
- obj = copy(self)
- return obj
-
-
-class OrderBase(with_metaclass(MetaParams, object)):
- params = (
- ('owner', None), ('data', None),
- ('size', None), ('price', None), ('pricelimit', None),
- ('exectype', None), ('valid', None), ('tradeid', 0), ('oco', None),
- ('trailamount', None), ('trailpercent', None),
- ('parent', None), ('transmit', True),
- ('simulated', False),
- # To support historical order evaluation
- ('histnotify', False),
- )
-
- DAY = datetime.timedelta() # constant for DAY order identification
-
- # Time Restrictions for orders
- T_Close, T_Day, T_Date, T_None = range(4)
-
- # Volume Restrictions for orders
- V_None = range(1)
-
- (Market, Close, Limit, Stop, StopLimit, StopTrail, StopTrailLimit,
- Historical) = range(8)
- ExecTypes = ['Market', 'Close', 'Limit', 'Stop', 'StopLimit', 'StopTrail',
- 'StopTrailLimit', 'Historical']
-
- OrdTypes = ['Buy', 'Sell']
- Buy, Sell = range(2)
-
- Created, Submitted, Accepted, Partial, Completed, \
- Canceled, Expired, Margin, Rejected = range(9)
-
- Cancelled = Canceled # alias
-
- Status = [
- 'Created', 'Submitted', 'Accepted', 'Partial', 'Completed',
- 'Canceled', 'Expired', 'Margin', 'Rejected',
- ]
-
- refbasis = itertools.count(1) # for a unique identifier per order
-
- def _getplimit(self):
- return self._plimit
-
- def _setplimit(self, val):
- self._plimit = val
-
- plimit = property(_getplimit, _setplimit)
-
- def __getattr__(self, name):
- # Return attr from params if not found in order
- return getattr(self.params, name)
-
- def __setattribute__(self, name, value):
- if hasattr(self.params, name):
- setattr(self.params, name, value)
- else:
- super(Order, self).__setattribute__(name, value)
-
- def __str__(self):
- tojoin = list()
- tojoin.append('Ref: {}'.format(self.ref))
- tojoin.append('OrdType: {}'.format(self.ordtype))
- tojoin.append('OrdType: {}'.format(self.ordtypename()))
- tojoin.append('Status: {}'.format(self.status))
- tojoin.append('Status: {}'.format(self.getstatusname()))
- tojoin.append('Size: {}'.format(self.size))
- tojoin.append('Price: {}'.format(self.price))
- tojoin.append('Price Limit: {}'.format(self.pricelimit))
- tojoin.append('TrailAmount: {}'.format(self.trailamount))
- tojoin.append('TrailPercent: {}'.format(self.trailpercent))
- tojoin.append('ExecType: {}'.format(self.exectype))
- tojoin.append('ExecType: {}'.format(self.getordername()))
- tojoin.append('CommInfo: {}'.format(self.comminfo))
- tojoin.append('End of Session: {}'.format(self.dteos))
- tojoin.append('Info: {}'.format(self.info))
- tojoin.append('Broker: {}'.format(self.broker))
- tojoin.append('Alive: {}'.format(self.alive()))
-
- return '\n'.join(tojoin)
-
- def __init__(self):
- self.ref = next(self.refbasis)
- self.broker = None
- self.info = AutoOrderedDict()
- self.comminfo = None
- self.triggered = False
-
- self._active = self.parent is None
- self.status = Order.Created
-
- self.plimit = self.p.pricelimit # alias via property
-
- if self.exectype is None:
- self.exectype = Order.Market
-
- if not self.isbuy():
- self.size = -self.size
-
- # Set a reference price if price is not set using
- # the close price
- pclose = self.data.close[0] if not self.simulated else self.price
- if not self.price and not self.pricelimit:
- price = pclose
- else:
- price = self.price
-
- dcreated = self.data.datetime[0] if not self.p.simulated else 0.0
- self.created = OrderData(dt=dcreated,
- size=self.size,
- price=price,
- pricelimit=self.pricelimit,
- pclose=pclose,
- trailamount=self.trailamount,
- trailpercent=self.trailpercent)
-
- # Adjust price in case a trailing limit is wished
- if self.exectype in [Order.StopTrail, Order.StopTrailLimit]:
- self._limitoffset = self.created.price - self.created.pricelimit
- price = self.created.price
-            self.created.price = float('inf') if self.isbuy() else float('-inf')
- self.trailadjust(price)
- else:
- self._limitoffset = 0.0
-
- self.executed = OrderData(remsize=self.size)
- self.position = 0
-
- if isinstance(self.valid, datetime.date):
- # comparison will later be done against the raw datetime[0] value
- self.valid = self.data.date2num(self.valid)
- elif isinstance(self.valid, datetime.timedelta):
- # offset with regards to now ... get utcnow + offset
- # when reading with date2num ... it will be automatically localized
- if self.valid == self.DAY:
- valid = datetime.datetime.combine(
- self.data.datetime.date(), datetime.time(23, 59, 59, 9999))
- else:
- valid = self.data.datetime.datetime() + self.valid
-
- self.valid = self.data.date2num(valid)
-
- elif self.valid is not None:
- if not self.valid: # avoid comparing None and 0
- valid = datetime.datetime.combine(
- self.data.datetime.date(), datetime.time(23, 59, 59, 9999))
- else: # assume float
- valid = self.data.datetime[0] + self.valid
-
- if not self.p.simulated:
- # provisional end-of-session
- # get next session end
- dtime = self.data.datetime.datetime(0)
- session = self.data.p.sessionend
- dteos = dtime.replace(hour=session.hour, minute=session.minute,
- second=session.second,
- microsecond=session.microsecond)
-
- if dteos < dtime:
- # eos before current time ... no ... must be at least next day
- dteos += datetime.timedelta(days=1)
-
- self.dteos = self.data.date2num(dteos)
- else:
- self.dteos = 0.0
-
- def clone(self):
- # status, triggered and executed are the only moving parts in order
- # status and triggered are covered by copy
- # executed has to be replaced with an intelligent clone of itself
- obj = copy(self)
- obj.executed = self.executed.clone()
- return obj # status could change in next to completed
-
- def getstatusname(self, status=None):
- '''Returns the name for a given status or the one of the order'''
- return self.Status[self.status if status is None else status]
-
- def getordername(self, exectype=None):
- '''Returns the name for a given exectype or the one of the order'''
- return self.ExecTypes[self.exectype if exectype is None else exectype]
-
- @classmethod
- def ExecType(cls, exectype):
- return getattr(cls, exectype)
-
- def ordtypename(self, ordtype=None):
- '''Returns the name for a given ordtype or the one of the order'''
- return self.OrdTypes[self.ordtype if ordtype is None else ordtype]
-
- def active(self):
- return self._active
-
- def activate(self):
- self._active = True
-
- def alive(self):
- '''Returns True if the order is in a status in which it can still be
- executed
- '''
- return self.status in [Order.Created, Order.Submitted,
- Order.Partial, Order.Accepted]
-
- def addcomminfo(self, comminfo):
- '''Stores a CommInfo scheme associated with the asset'''
- self.comminfo = comminfo
-
- def addinfo(self, **kwargs):
- '''Add the keys, values of kwargs to the internal info dictionary to
- hold custom information in the order
- '''
- for key, val in iteritems(kwargs):
- self.info[key] = val
-
- def __eq__(self, other):
- return other is not None and self.ref == other.ref
-
- def __ne__(self, other):
- return self.ref != other.ref
-
- def isbuy(self):
- '''Returns True if the order is a Buy order'''
- return self.ordtype == self.Buy
-
- def issell(self):
- '''Returns True if the order is a Sell order'''
- return self.ordtype == self.Sell
-
- def setposition(self, position):
-        '''Receives the current position for the asset and stores it'''
- self.position = position
-
- def submit(self, broker=None):
- '''Marks an order as submitted and stores the broker to which it was
- submitted'''
- self.status = Order.Submitted
- self.broker = broker
- self.plen = len(self.data)
-
- def accept(self, broker=None):
- '''Marks an order as accepted'''
- self.status = Order.Accepted
- self.broker = broker
-
- def brokerstatus(self):
- '''Tries to retrieve the status from the broker in which the order is.
-
- Defaults to last known status if no broker is associated'''
- if self.broker:
- return self.broker.orderstatus(self)
-
- return self.status
-
- def reject(self, broker=None):
- '''Marks an order as rejected'''
- if self.status == Order.Rejected:
- return False
-
- self.status = Order.Rejected
- self.executed.dt = self.data.datetime[0]
- self.broker = broker
- return True
-
- def cancel(self):
- '''Marks an order as cancelled'''
- self.status = Order.Canceled
- self.executed.dt = self.data.datetime[0]
-
- def margin(self):
- '''Marks an order as having met a margin call'''
- self.status = Order.Margin
- self.executed.dt = self.data.datetime[0]
-
- def completed(self):
- '''Marks an order as completely filled'''
- self.status = self.Completed
-
- def partial(self):
- '''Marks an order as partially filled'''
- self.status = self.Partial
-
- def execute(self, dt, size, price,
- closed, closedvalue, closedcomm,
- opened, openedvalue, openedcomm,
- margin, pnl,
- psize, pprice):
-
- '''Receives data execution input and stores it'''
- if not size:
- return
-
- self.executed.add(dt, size, price,
- closed, closedvalue, closedcomm,
- opened, openedvalue, openedcomm,
- pnl, psize, pprice)
-
- self.executed.margin = margin
-
- def expire(self):
- '''Marks an order as expired. Returns True if it worked'''
- self.status = self.Expired
- return True
-
- def trailadjust(self, price):
- pass # generic interface
-
-
-class Order(OrderBase):
- '''
-    Class which holds creation/execution data and type of order.
-
- The order may have the following status:
-
- - Submitted: sent to the broker and awaiting confirmation
- - Accepted: accepted by the broker
- - Partial: partially executed
-      - Completed: fully executed
- - Canceled/Cancelled: canceled by the user
- - Expired: expired
- - Margin: not enough cash to execute the order.
- - Rejected: Rejected by the broker
-
- This can happen during order submission (and therefore the order will
- not reach the Accepted status) or before execution with each new bar
- price because cash has been drawn by other sources (future-like
-      instruments may have reduced the cash or other orders may have been
- executed)
-
- Member Attributes:
-
- - ref: unique order identifier
- - created: OrderData holding creation data
- - executed: OrderData holding execution data
-
- - info: custom information passed over method :func:`addinfo`. It is kept
- in the form of an OrderedDict which has been subclassed, so that keys
- can also be specified using '.' notation
-
- User Methods:
-
- - isbuy(): returns bool indicating if the order buys
- - issell(): returns bool indicating if the order sells
-      - alive(): returns bool indicating if the order is still executable (status Created, Submitted, Accepted or Partial)
- '''
-
- def execute(self, dt, size, price,
- closed, closedvalue, closedcomm,
- opened, openedvalue, openedcomm,
- margin, pnl,
- psize, pprice):
-
- super(Order, self).execute(dt, size, price,
- closed, closedvalue, closedcomm,
- opened, openedvalue, openedcomm,
- margin, pnl, psize, pprice)
-
- if self.executed.remsize:
- self.status = Order.Partial
- else:
- self.status = Order.Completed
-
- # self.comminfo = None
-
- def expire(self):
- if self.exectype == Order.Market:
-            return False  # a Market order cannot expire; it will be executed regardless
-
- if self.valid and self.data.datetime[0] > self.valid:
- self.status = Order.Expired
- self.executed.dt = self.data.datetime[0]
- return True
-
- return False
-
- def trailadjust(self, price):
- if self.trailamount:
- pamount = self.trailamount
- elif self.trailpercent:
- pamount = price * self.trailpercent
- else:
- pamount = 0.0
-
- # Stop sell is below (-), stop buy is above, move only if needed
- if self.isbuy():
- price += pamount
- if price < self.created.price:
- self.created.price = price
- if self.exectype == Order.StopTrailLimit:
- self.created.pricelimit = price - self._limitoffset
- else:
- price -= pamount
- if price > self.created.price:
- self.created.price = price
- if self.exectype == Order.StopTrailLimit:
- # limitoffset is negative when pricelimit was greater
- # the - allows increasing the price limit if stop increases
- self.created.pricelimit = price - self._limitoffset
-
-
-class BuyOrder(Order):
- ordtype = Order.Buy
-
-
-class StopBuyOrder(BuyOrder):
- pass
-
-
-class StopLimitBuyOrder(BuyOrder):
- pass
-
-
-class SellOrder(Order):
- ordtype = Order.Sell
-
-
-class StopSellOrder(SellOrder):
- pass
-
-
-class StopLimitSellOrder(SellOrder):
- pass
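The Order hierarchy above is what a backtrader strategy receives through its notify_order() callback; getordername(), getstatusname(), ordtypename() and alive() translate the integer codes defined in OrderBase into readable state. Below is a hedged usage sketch with a trailing stop, using standard backtrader API; the strategy itself is purely illustrative.

```python
import backtrader as bt

class TrailDemo(bt.Strategy):
    def __init__(self):
        self.stop_order = None

    def next(self):
        if not self.position:
            self.buy()  # plain Market order; exectype defaults to Order.Market
        elif self.stop_order is None:
            # trailing stop protecting the long; trailadjust() lifts the stop as price rises
            self.stop_order = self.sell(exectype=bt.Order.StopTrail, trailamount=1.0)

    def notify_order(self, order):
        # getstatusname()/getordername()/ordtypename() map the integer codes defined above to names
        print(order.ref, order.ordtypename(), order.getordername(), order.getstatusname())
        if order is self.stop_order and not order.alive():
            self.stop_order = None  # Completed/Canceled/Expired/Margin/Rejected: the order is final
```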
diff --git a/spaces/LuxOAI/ChatGpt-Web/app/masks/en.ts b/spaces/LuxOAI/ChatGpt-Web/app/masks/en.ts
deleted file mode 100644
index 93e9bd6aa264a1e176864e24230d76154b98e144..0000000000000000000000000000000000000000
--- a/spaces/LuxOAI/ChatGpt-Web/app/masks/en.ts
+++ /dev/null
@@ -1,44 +0,0 @@
-import { BuiltinMask } from "./typing";
-
-export const EN_MASKS: BuiltinMask[] = [
- {
- avatar: "1f916",
- name: "Prompt Improvement",
- context: [
- {
- role: "user",
- content:
- 'Read all of the instructions below and once you understand them say "Shall we begin:"\n \nI want you to become my Prompt Creator. Your goal is to help me craft the best possible prompt for my needs. The prompt will be used by you, ChatGPT. You will follow the following process:\nYour first response will be to ask me what the prompt should be about. I will provide my answer, but we will need to improve it through continual iterations by going through the next steps.\n \nBased on my input, you will generate 3 sections.\n \nRevised Prompt (provide your rewritten prompt. it should be clear, concise, and easily understood by you)\nSuggestions (provide 3 suggestions on what details to include in the prompt to improve it)\nQuestions (ask the 3 most relevant questions pertaining to what additional information is needed from me to improve the prompt)\n \nAt the end of these sections give me a reminder of my options which are:\n \nOption 1: Read the output and provide more info or answer one or more of the questions\nOption 2: Type "Use this prompt" and I will submit this as a query for you\nOption 3: Type "Restart" to restart this process from the beginning\nOption 4: Type "Quit" to end this script and go back to a regular ChatGPT session\n \nIf I type "Option 2", "2" or "Use this prompt" then we have finsihed and you should use the Revised Prompt as a prompt to generate my request\nIf I type "option 3", "3" or "Restart" then forget the latest Revised Prompt and restart this process\nIf I type "Option 4", "4" or "Quit" then finish this process and revert back to your general mode of operation\n\n\nWe will continue this iterative process with me providing additional information to you and you updating the prompt in the Revised Prompt section until it is complete.',
- date: "",
- },
- {
- role: "assistant",
- content: "Shall we begin?",
- date: "",
- },
- {
- role: "user",
- content:
- "My prompt:I want you to act like Harry Potter from Harry Potter Series. I want you to respond and answer like Harry Potter using the tone, manner and vocabulary Harry Potter would use.",
- date: "",
- },
- {
- role: "assistant",
- content:
- 'Revised Prompt:\n\n\nWrite a short story or dialogue in which Harry Potter interacts with someone (a friend or foe) using the tone, manner and vocabulary characteristic of his character in the Harry Potter series.\n\n\nSuggestions:\n\n\n\nInclude specific references to Hogwarts, magic spells, and magical creatures to create an authentic Harry Potter experience.\n\nEnsure that Harry\'s personality traits, such as his bravery, loyalty, and determination, are prominently displayed in the story or dialogue.\n\nIncorporate some humor or wit into the story, as these elements are often present in the Harry Potter series.\n\n\nQuestions:\n\n\n\nWho will be the other character interacting with Harry?\n\nWhat is the main conflict or goal of the story/dialogue?\n\nWhat setting or location would you like the story/dialogue to take place in?\n\n\nRemember, your options are:\nOption 1: Read the output and provide more info or answer one or more of the questions\nOption 2: Type "Use this prompt" and I will submit this as a query for you\nOption 3: Type "Restart" to restart this process from the beginning\nOption 4: Type "Quit" to end this script and go back to a regular ChatGPT session',
- date: "",
- },
- ],
- modelConfig: {
- model: "gpt-4",
- temperature: 1,
- max_tokens: 2000,
- presence_penalty: 0,
- sendMemory: true,
- historyMessageCount: 4,
- compressMessageLengthThreshold: 1000,
- },
- lang: "en",
- builtin: true,
- },
-];
diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/ONNXVITS_infer.py b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/ONNXVITS_infer.py
deleted file mode 100644
index af04e614c8f1ac43faf363b1a9f6bfd667fbde21..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/ONNXVITS_infer.py
+++ /dev/null
@@ -1,201 +0,0 @@
-import torch
-import commons
-import models
-
-import math
-from torch import nn
-from torch.nn import functional as F
-
-import modules
-import attentions
-
-from torch.nn import Conv1d, ConvTranspose1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- emotion_embedding):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emotion_embedding = emotion_embedding
-
- if self.n_vocab != 0:
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- if emotion_embedding:
- self.emo_proj = nn.Linear(1024, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, emotion_embedding=None):
- if self.n_vocab != 0:
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- if emotion_embedding is not None:
- print("emotion added")
- x = x + self.emo_proj(emotion_embedding.unsqueeze(1))
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class SynthesizerTrn(models.SynthesizerTrn):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- emotion_embedding=False,
- ONNX_dir="./ONNX_net/",
- **kwargs):
-
- super().__init__(
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=n_speakers,
- gin_channels=gin_channels,
- use_sdp=use_sdp,
- **kwargs
- )
- self.ONNX_dir = ONNX_dir
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- emotion_embedding)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None,
- emotion_embedding=None):
- from ONNXVITS_utils import runonnx
- with torch.no_grad():
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, emotion_embedding)
-
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- # logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- logw = runonnx(f"{self.ONNX_dir}dp.onnx", x=x.numpy(), x_mask=x_mask.numpy(), g=g.numpy())
- logw = torch.from_numpy(logw[0])
-
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1,
- 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
-
- # z = self.flow(z_p, y_mask, g=g, reverse=True)
- z = runonnx(f"{self.ONNX_dir}flow.onnx", z_p=z_p.numpy(), y_mask=y_mask.numpy(), g=g.numpy())
- z = torch.from_numpy(z[0])
-
- # o = self.dec((z * y_mask)[:,:,:max_len], g=g)
- o = runonnx(f"{self.ONNX_dir}dec.onnx", z_in=(z * y_mask)[:, :, :max_len].numpy(), g=g.numpy())
- o = torch.from_numpy(o[0])
-
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
\ No newline at end of file
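The inference path above delegates the duration predictor, flow and decoder to exported ONNX graphs through runonnx(), whose implementation is not part of this diff. The sketch below is one plausible minimal implementation with onnxruntime; it is an assumption, not the project's actual ONNXVITS_utils code, and the real helper likely caches sessions instead of rebuilding one per call.

```python
import onnxruntime as ort

def runonnx(model_path, **kwargs):
    """Run one exported sub-network (dp/flow/dec) and return its outputs as a list of numpy arrays."""
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    # output_names=None returns every graph output, matching the [0] indexing used above
    return sess.run(None, kwargs)
```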
diff --git a/spaces/MarcusSu1216/XingTong/train.py b/spaces/MarcusSu1216/XingTong/train.py
deleted file mode 100644
index 9f6e7439961182f54ee880be4b8fa776b44b547f..0000000000000000000000000000000000000000
--- a/spaces/MarcusSu1216/XingTong/train.py
+++ /dev/null
@@ -1,315 +0,0 @@
-import logging
-import multiprocessing
-import time
-
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-logging.getLogger('numba').setLevel(logging.WARNING)
-
-import os
-import json
-import argparse
-import itertools
-import math
-import torch
-from torch import nn, optim
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.multiprocessing as mp
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.cuda.amp import autocast, GradScaler
-
-import modules.commons as commons
-import utils
-from data_utils import TextAudioSpeakerLoader, TextAudioCollate
-from models import (
- SynthesizerTrn,
- MultiPeriodDiscriminator,
-)
-from modules.losses import (
- kl_loss,
- generator_loss, discriminator_loss, feature_loss
-)
-
-from modules.mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-
-torch.backends.cudnn.benchmark = True
-global_step = 0
-start_time = time.time()
-
-# os.environ['TORCH_DISTRIBUTED_DEBUG'] = 'INFO'
-
-
-def main():
- """Assume Single Node Multi GPUs Training Only"""
- assert torch.cuda.is_available(), "CPU training is not allowed."
- hps = utils.get_hparams()
-
- n_gpus = torch.cuda.device_count()
- os.environ['MASTER_ADDR'] = 'localhost'
- os.environ['MASTER_PORT'] = hps.train.port
-
- mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
-
-
-def run(rank, n_gpus, hps):
- global global_step
- if rank == 0:
- logger = utils.get_logger(hps.model_dir)
- logger.info(hps)
- utils.check_git_hash(hps.model_dir)
- writer = SummaryWriter(log_dir=hps.model_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-
- # for pytorch on win, backend use gloo
- dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank)
- torch.manual_seed(hps.train.seed)
- torch.cuda.set_device(rank)
- collate_fn = TextAudioCollate()
- all_in_mem = hps.train.all_in_mem # If you have enough memory, turn on this option to avoid disk IO and speed up training.
- train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps, all_in_mem=all_in_mem)
- num_workers = 5 if multiprocessing.cpu_count() > 4 else multiprocessing.cpu_count()
- if all_in_mem:
- num_workers = 0
- train_loader = DataLoader(train_dataset, num_workers=num_workers, shuffle=False, pin_memory=True,
- batch_size=hps.train.batch_size, collate_fn=collate_fn)
- if rank == 0:
- eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps, all_in_mem=all_in_mem)
- eval_loader = DataLoader(eval_dataset, num_workers=1, shuffle=False,
- batch_size=1, pin_memory=False,
- drop_last=False, collate_fn=collate_fn)
-
- net_g = SynthesizerTrn(
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- **hps.model).cuda(rank)
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
- optim_g = torch.optim.AdamW(
- net_g.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- optim_d = torch.optim.AdamW(
- net_d.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- net_g = DDP(net_g, device_ids=[rank]) # , find_unused_parameters=True)
- net_d = DDP(net_d, device_ids=[rank])
-
- skip_optimizer = False
- try:
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g,
- optim_g, skip_optimizer)
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d,
- optim_d, skip_optimizer)
- epoch_str = max(epoch_str, 1)
- global_step = (epoch_str - 1) * len(train_loader)
- except:
- print("load old checkpoint failed...")
- epoch_str = 1
- global_step = 0
- if skip_optimizer:
- epoch_str = 1
- global_step = 0
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
-
- scaler = GradScaler(enabled=hps.train.fp16_run)
-
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank == 0:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler,
- [train_loader, eval_loader], logger, [writer, writer_eval])
- else:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler,
- [train_loader, None], None, None)
- scheduler_g.step()
- scheduler_d.step()
-
-
-def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
- net_g, net_d = nets
- optim_g, optim_d = optims
- scheduler_g, scheduler_d = schedulers
- train_loader, eval_loader = loaders
- if writers is not None:
- writer, writer_eval = writers
-
- # train_loader.batch_sampler.set_epoch(epoch)
- global global_step
-
- net_g.train()
- net_d.train()
- for batch_idx, items in enumerate(train_loader):
- c, f0, spec, y, spk, lengths, uv = items
- g = spk.cuda(rank, non_blocking=True)
- spec, y = spec.cuda(rank, non_blocking=True), y.cuda(rank, non_blocking=True)
- c = c.cuda(rank, non_blocking=True)
- f0 = f0.cuda(rank, non_blocking=True)
- uv = uv.cuda(rank, non_blocking=True)
- lengths = lengths.cuda(rank, non_blocking=True)
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
-
- with autocast(enabled=hps.train.fp16_run):
- y_hat, ids_slice, z_mask, \
- (z, z_p, m_p, logs_p, m_q, logs_q), pred_lf0, norm_lf0, lf0 = net_g(c, f0, uv, spec, g=g, c_lengths=lengths,
- spec_lengths=lengths)
-
- y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
- y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice
-
- # Discriminator
- y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach())
-
- with autocast(enabled=False):
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g)
- loss_disc_all = loss_disc
-
- optim_d.zero_grad()
- scaler.scale(loss_disc_all).backward()
- scaler.unscale_(optim_d)
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
- scaler.step(optim_d)
-
- with autocast(enabled=hps.train.fp16_run):
- # Generator
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat)
- with autocast(enabled=False):
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
- loss_fm = feature_loss(fmap_r, fmap_g)
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
- loss_lf0 = F.mse_loss(pred_lf0, lf0)
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl + loss_lf0
- optim_g.zero_grad()
- scaler.scale(loss_gen_all).backward()
- scaler.unscale_(optim_g)
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
- scaler.step(optim_g)
- scaler.update()
-
- if rank == 0:
- if global_step % hps.train.log_interval == 0:
- lr = optim_g.param_groups[0]['lr']
- losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_kl]
- logger.info('Train Epoch: {} [{:.0f}%]'.format(
- epoch,
- 100. * batch_idx / len(train_loader)))
- logger.info(f"Losses: {[x.item() for x in losses]}, step: {global_step}, lr: {lr}")
-
- scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr,
- "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g}
- scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/kl": loss_kl,
- "loss/g/lf0": loss_lf0})
-
- # scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)})
- # scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)})
- # scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)})
- image_dict = {
- "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()),
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()),
- "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()),
- "all/lf0": utils.plot_data_to_numpy(lf0[0, 0, :].cpu().numpy(),
- pred_lf0[0, 0, :].detach().cpu().numpy()),
- "all/norm_lf0": utils.plot_data_to_numpy(lf0[0, 0, :].cpu().numpy(),
- norm_lf0[0, 0, :].detach().cpu().numpy())
- }
-
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images=image_dict,
- scalars=scalar_dict
- )
-
- if global_step % hps.train.eval_interval == 0:
- evaluate(hps, net_g, eval_loader, writer_eval)
- utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
- utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
- keep_ckpts = getattr(hps.train, 'keep_ckpts', 0)
- if keep_ckpts > 0:
- utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True)
-
- global_step += 1
-
- if rank == 0:
- global start_time
- now = time.time()
-        duration = format(now - start_time, '.2f')
-        logger.info(f'====> Epoch: {epoch}, cost {duration} s')
- start_time = now
-
-
-def evaluate(hps, generator, eval_loader, writer_eval):
- generator.eval()
- image_dict = {}
- audio_dict = {}
- with torch.no_grad():
- for batch_idx, items in enumerate(eval_loader):
- c, f0, spec, y, spk, _, uv = items
- g = spk[:1].cuda(0)
- spec, y = spec[:1].cuda(0), y[:1].cuda(0)
- c = c[:1].cuda(0)
- f0 = f0[:1].cuda(0)
-            uv = uv[:1].cuda(0)
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_hat = generator.module.infer(c, f0, uv, g=g)
-
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1).float(),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
-
- audio_dict.update({
- f"gen/audio_{batch_idx}": y_hat[0],
- f"gt/audio_{batch_idx}": y[0]
- })
- image_dict.update({
- f"gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()),
- "gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())
- })
- utils.summarize(
- writer=writer_eval,
- global_step=global_step,
- images=image_dict,
- audios=audio_dict,
- audio_sampling_rate=hps.data.sampling_rate
- )
- generator.train()
-
-
-if __name__ == "__main__":
- main()
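
Note: the trickiest invariant in the training step above is that mel-spectrogram slices and waveform slices stay aligned through the hop length (frames are sliced with ids_slice, samples with ids_slice * hop_length). Below is a minimal, self-contained sketch of that relationship, using a toy slice_segments rather than the project's commons module and made-up sizes:

    import torch

    def slice_segments(x, ids_str, segment_size):
        # x: [batch, channels, time]; ids_str: per-item start index along the time axis
        out = torch.zeros(x.size(0), x.size(1), segment_size, dtype=x.dtype)
        for b in range(x.size(0)):
            start = int(ids_str[b])
            out[b] = x[b, :, start:start + segment_size]
        return out

    hop_length, segment_size = 256, 8192                  # samples per frame, samples per slice
    mel = torch.randn(2, 80, 100)                          # frame-rate features [B, n_mels, frames]
    wav = torch.randn(2, 1, 100 * hop_length)              # sample-rate waveform [B, 1, samples]

    ids_slice = torch.tensor([3, 10])                      # start frame of each training slice
    mel_slice = slice_segments(mel, ids_slice, segment_size // hop_length)   # 32 frames
    wav_slice = slice_segments(wav, ids_slice * hop_length, segment_size)    # 8192 samples

    # each mel frame in the slice corresponds to exactly hop_length waveform samples
    assert mel_slice.shape[-1] * hop_length == wav_slice.shape[-1]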
diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/modeling/backbone/swintransformer.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/modeling/backbone/swintransformer.py
deleted file mode 100644
index 21cabb37dd87a443e27eeb805f9739bef86540bf..0000000000000000000000000000000000000000
--- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/modeling/backbone/swintransformer.py
+++ /dev/null
@@ -1,750 +0,0 @@
-# --------------------------------------------------------
-# Swin Transformer
-# Copyright (c) 2021 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Ze Liu, Yutong Lin, Yixuan Wei
-# --------------------------------------------------------
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Xingyi Zhou from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py
-
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-import numpy as np
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.backbone.backbone import Backbone
-from detectron2.modeling.backbone.build import BACKBONE_REGISTRY
-from detectron2.modeling.backbone.fpn import FPN
-
-from centernet.modeling.backbone.fpn_p5 import LastLevelP6P7_P5
-from centernet.modeling.backbone.bifpn import BiFPN
-# from .checkpoint import load_checkpoint
-
-class Mlp(nn.Module):
- """ Multilayer perceptron."""
-
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-class WindowAttention(nn.Module):
- """ Window based multi-head self attention (W-MSA) module with relative position bias.
- It supports both of shifted and non-shifted window.
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- trunc_normal_(self.relative_position_bias_table, std=.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """ Forward function.
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class SwinTransformerBlock(nn.Module):
- """ Swin Transformer Block.
- Args:
- dim (int): Number of input channels.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, num_heads, window_size=7, shift_size=0,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
-        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- self.H = None
- self.W = None
-
- def forward(self, x, mask_matrix):
- """ Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- mask_matrix: Attention mask for cyclic shift.
- """
- B, L, C = x.shape
- H, W = self.H, self.W
- assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # pad feature maps to multiples of window size
- pad_l = pad_t = 0
- pad_r = (self.window_size - W % self.window_size) % self.window_size
- pad_b = (self.window_size - H % self.window_size) % self.window_size
- x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
- _, Hp, Wp, _ = x.shape
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- attn_mask = mask_matrix
- else:
- shifted_x = x
- attn_mask = None
-
- # partition windows
- x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
-
- if pad_r > 0 or pad_b > 0:
- x = x[:, :H, :W, :].contiguous()
-
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- return x
-
-
-class PatchMerging(nn.Module):
- """ Patch Merging Layer
- Args:
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
- def __init__(self, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(4 * dim)
-
- def forward(self, x, H, W):
- """ Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
-
- x = x.view(B, H, W, C)
-
- # padding
- pad_input = (H % 2 == 1) or (W % 2 == 1)
- if pad_input:
- x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.norm(x)
- x = self.reduction(x)
-
- return x
-
-
-class BasicLayer(nn.Module):
- """ A basic Swin Transformer layer for one stage.
- Args:
- dim (int): Number of feature channels
- depth (int): Depths of this stage.
- num_heads (int): Number of attention head.
- window_size (int): Local window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self,
- dim,
- depth,
- num_heads,
- window_size=7,
- mlp_ratio=4.,
- qkv_bias=True,
- qk_scale=None,
- drop=0.,
- attn_drop=0.,
- drop_path=0.,
- norm_layer=nn.LayerNorm,
- downsample=None,
- use_checkpoint=False):
- super().__init__()
- self.window_size = window_size
- self.shift_size = window_size // 2
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList([
- SwinTransformerBlock(
- dim=dim,
- num_heads=num_heads,
- window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop,
- attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer)
- for i in range(depth)])
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x, H, W):
- """ Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
-
- # calculate attention mask for SW-MSA
- Hp = int(np.ceil(H / self.window_size)) * self.window_size
- Wp = int(np.ceil(W / self.window_size)) * self.window_size
- img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
- h_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- w_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
-
- for blk in self.blocks:
- blk.H, blk.W = H, W
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, attn_mask)
- else:
- x = blk(x, attn_mask)
- if self.downsample is not None:
- x_down = self.downsample(x, H, W)
- Wh, Ww = (H + 1) // 2, (W + 1) // 2
- return x, H, W, x_down, Wh, Ww
- else:
- return x, H, W, x, H, W
-
-
-class PatchEmbed(nn.Module):
- """ Image to Patch Embedding
- Args:
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- patch_size = to_2tuple(patch_size)
- self.patch_size = patch_size
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
- if norm_layer is not None:
- self.norm = norm_layer(embed_dim)
- else:
- self.norm = None
-
- def forward(self, x):
- """Forward function."""
- # padding
- _, _, H, W = x.size()
- if W % self.patch_size[1] != 0:
- x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
- if H % self.patch_size[0] != 0:
- x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
-
- x = self.proj(x) # B C Wh Ww
- if self.norm is not None:
- Wh, Ww = x.size(2), x.size(3)
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
-
- return x
-
-
-class SwinTransformer(Backbone):
- """ Swin Transformer backbone.
- A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
- https://arxiv.org/pdf/2103.14030
- Args:
- pretrain_img_size (int): Input image size for training the pretrained model,
-            used in absolute position embedding. Default 224.
- patch_size (int | tuple(int)): Patch size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- depths (tuple[int]): Depths of each Swin Transformer stage.
- num_heads (tuple[int]): Number of attention head of each stage.
- window_size (int): Window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
- drop_rate (float): Dropout rate.
- attn_drop_rate (float): Attention dropout rate. Default: 0.
- drop_path_rate (float): Stochastic depth rate. Default: 0.2.
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
- patch_norm (bool): If True, add normalization after patch embedding. Default: True.
- out_indices (Sequence[int]): Output from which stages.
- frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
- -1 means not freezing any parameters.
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self,
- pretrain_img_size=224,
- patch_size=4,
- in_chans=3,
- embed_dim=96,
- depths=[2, 2, 6, 2],
- num_heads=[3, 6, 12, 24],
- window_size=7,
- mlp_ratio=4.,
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.,
- attn_drop_rate=0.,
- drop_path_rate=0.2,
- norm_layer=nn.LayerNorm,
- ape=False,
- patch_norm=True,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- use_checkpoint=False):
- super().__init__()
-
- self.pretrain_img_size = pretrain_img_size
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.out_indices = out_indices
- self.frozen_stages = frozen_stages
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
-
- # absolute position embedding
- if self.ape:
- pretrain_img_size = to_2tuple(pretrain_img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]]
-
- self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]))
- trunc_normal_(self.absolute_pos_embed, std=.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
-
- # build layers
- self.layers = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = BasicLayer(
- dim=int(embed_dim * 2 ** i_layer),
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop_rate,
- attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
- norm_layer=norm_layer,
- downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
- use_checkpoint=use_checkpoint)
- self.layers.append(layer)
-
- num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
- self.num_features = num_features
-
- # add a norm layer for each output
- for i_layer in out_indices:
- layer = norm_layer(num_features[i_layer])
- layer_name = f'norm{i_layer}'
- self.add_module(layer_name, layer)
-
- self._freeze_stages()
- self._out_features = ['swin{}'.format(i) for i in self.out_indices]
- self._out_feature_channels = {
- 'swin{}'.format(i): self.embed_dim * 2 ** i for i in self.out_indices
- }
- self._out_feature_strides = {
- 'swin{}'.format(i): 2 ** (i + 2) for i in self.out_indices
- }
-        self._size_divisibility = 32
-
-
- def _freeze_stages(self):
- if self.frozen_stages >= 0:
- self.patch_embed.eval()
- for param in self.patch_embed.parameters():
- param.requires_grad = False
-
- if self.frozen_stages >= 1 and self.ape:
- self.absolute_pos_embed.requires_grad = False
-
- if self.frozen_stages >= 2:
- self.pos_drop.eval()
- for i in range(0, self.frozen_stages - 1):
- m = self.layers[i]
- m.eval()
- for param in m.parameters():
- param.requires_grad = False
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
-
- def _init_weights(m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- if isinstance(pretrained, str):
- self.apply(_init_weights)
- # load_checkpoint(self, pretrained, strict=False)
- elif pretrained is None:
- self.apply(_init_weights)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- """Forward function."""
- x = self.patch_embed(x)
-
- Wh, Ww = x.size(2), x.size(3)
- if self.ape:
- # interpolate the position embedding to the corresponding size
- absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
- x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
- else:
- x = x.flatten(2).transpose(1, 2)
- x = self.pos_drop(x)
-
- # outs = []
- outs = {}
- for i in range(self.num_layers):
- layer = self.layers[i]
- x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
-
- if i in self.out_indices:
- norm_layer = getattr(self, f'norm{i}')
- x_out = norm_layer(x_out)
-
- out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
- # outs.append(out)
- outs['swin{}'.format(i)] = out
-
- return outs
-
- def train(self, mode=True):
-        """Convert the model to training mode while keeping frozen stages frozen."""
- super(SwinTransformer, self).train(mode)
- self._freeze_stages()
-
-size2config = {
- 'T': {
- 'window_size': 7,
- 'embed_dim': 96,
- 'depth': [2, 2, 6, 2],
- 'num_heads': [3, 6, 12, 24],
- 'drop_path_rate': 0.2,
- 'pretrained': 'models/swin_tiny_patch4_window7_224.pth'
- },
- 'S': {
- 'window_size': 7,
- 'embed_dim': 96,
- 'depth': [2, 2, 18, 2],
- 'num_heads': [3, 6, 12, 24],
- 'drop_path_rate': 0.2,
- 'pretrained': 'models/swin_small_patch4_window7_224.pth'
- },
- 'B': {
- 'window_size': 7,
- 'embed_dim': 128,
- 'depth': [2, 2, 18, 2],
- 'num_heads': [4, 8, 16, 32],
- 'drop_path_rate': 0.3,
- 'pretrained': 'models/swin_base_patch4_window7_224.pth'
- },
- 'B-22k': {
- 'window_size': 7,
- 'embed_dim': 128,
- 'depth': [2, 2, 18, 2],
- 'num_heads': [4, 8, 16, 32],
- 'drop_path_rate': 0.3,
- 'pretrained': 'models/swin_base_patch4_window7_224_22k.pth'
- },
- 'B-22k-384': {
- 'window_size': 12,
- 'embed_dim': 128,
- 'depth': [2, 2, 18, 2],
- 'num_heads': [4, 8, 16, 32],
- 'drop_path_rate': 0.3,
- 'pretrained': 'models/swin_base_patch4_window12_384_22k.pth'
- },
- 'L-22k': {
- 'window_size': 7,
- 'embed_dim': 192,
- 'depth': [2, 2, 18, 2],
- 'num_heads': [6, 12, 24, 48],
- 'drop_path_rate': 0.3, # TODO (xingyi): this is unclear
- 'pretrained': 'models/swin_large_patch4_window7_224_22k.pth'
- },
- 'L-22k-384': {
- 'window_size': 12,
- 'embed_dim': 192,
- 'depth': [2, 2, 18, 2],
- 'num_heads': [6, 12, 24, 48],
- 'drop_path_rate': 0.3, # TODO (xingyi): this is unclear
- 'pretrained': 'models/swin_large_patch4_window12_384_22k.pth'
- }
-}
-
-@BACKBONE_REGISTRY.register()
-def build_swintransformer_backbone(cfg, input_shape):
-    """Build a Swin Transformer backbone from the config (cfg.MODEL.SWIN.*)."""
- config = size2config[cfg.MODEL.SWIN.SIZE]
- out_indices = cfg.MODEL.SWIN.OUT_FEATURES
- model = SwinTransformer(
- embed_dim=config['embed_dim'],
- window_size=config['window_size'],
- depths=config['depth'],
- num_heads=config['num_heads'],
- drop_path_rate=config['drop_path_rate'],
- out_indices=out_indices,
- frozen_stages=-1,
- use_checkpoint=cfg.MODEL.SWIN.USE_CHECKPOINT
- )
- # print('Initializing', config['pretrained'])
- model.init_weights(config['pretrained'])
- return model
-
-
-@BACKBONE_REGISTRY.register()
-def build_swintransformer_fpn_backbone(cfg, input_shape: ShapeSpec):
-    """Build a Swin Transformer backbone with an FPN on top."""
- bottom_up = build_swintransformer_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- out_channels = cfg.MODEL.FPN.OUT_CHANNELS
- backbone = FPN(
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=out_channels,
- norm=cfg.MODEL.FPN.NORM,
- top_block=LastLevelP6P7_P5(out_channels, out_channels),
- fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
- )
- return backbone
-
-
-@BACKBONE_REGISTRY.register()
-def build_swintransformer_bifpn_backbone(cfg, input_shape: ShapeSpec):
-    """Build a Swin Transformer backbone with a BiFPN on top."""
- bottom_up = build_swintransformer_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- backbone = BiFPN(
- cfg=cfg,
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=cfg.MODEL.BIFPN.OUT_CHANNELS,
- norm=cfg.MODEL.BIFPN.NORM,
- num_levels=cfg.MODEL.BIFPN.NUM_LEVELS,
- num_bifpn=cfg.MODEL.BIFPN.NUM_BIFPN,
- separable_conv=cfg.MODEL.BIFPN.SEPARABLE_CONV,
- )
- return backbone
\ No newline at end of file
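
Note: a quick way to sanity-check the windowing helpers deleted above is a round trip through window_partition and window_reverse. The sketch below copies the two small functions so it runs without detectron2 or timm installed:

    import torch

    def window_partition(x, window_size):
        B, H, W, C = x.shape
        x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
        return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)

    def window_reverse(windows, window_size, H, W):
        B = int(windows.shape[0] / (H * W / window_size / window_size))
        x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
        return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)

    x = torch.randn(2, 14, 14, 96)            # [B, H, W, C] with H and W divisible by the window size
    windows = window_partition(x, 7)          # -> [2 * (14//7) * (14//7), 7, 7, 96] = [8, 7, 7, 96]
    assert windows.shape == (8, 7, 7, 96)
    assert torch.equal(window_reverse(windows, 7, 14, 14), x)   # exact round trip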
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/is_net/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/is_net/__init__.py
deleted file mode 100644
index 74aba2b8edfc4656e796843cab1d9b6fcdcce610..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/is_net/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from models.isnet import ISNetGTEncoder, ISNetDIS
diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/configurator.py b/spaces/MetaWabbit/Auto-GPT/autogpt/configurator.py
deleted file mode 100644
index 1dc3be124f638b8859eb459bcb2d46696f62e2b7..0000000000000000000000000000000000000000
--- a/spaces/MetaWabbit/Auto-GPT/autogpt/configurator.py
+++ /dev/null
@@ -1,134 +0,0 @@
-"""Configurator module."""
-import click
-from colorama import Back, Fore, Style
-
-from autogpt import utils
-from autogpt.config import Config
-from autogpt.logs import logger
-from autogpt.memory import get_supported_memory_backends
-
-CFG = Config()
-
-
-def create_config(
- continuous: bool,
- continuous_limit: int,
- ai_settings_file: str,
- skip_reprompt: bool,
- speak: bool,
- debug: bool,
- gpt3only: bool,
- gpt4only: bool,
- memory_type: str,
- browser_name: str,
- allow_downloads: bool,
- skip_news: bool,
-) -> None:
- """Updates the config object with the given arguments.
-
- Args:
- continuous (bool): Whether to run in continuous mode
- continuous_limit (int): The number of times to run in continuous mode
- ai_settings_file (str): The path to the ai_settings.yaml file
- skip_reprompt (bool): Whether to skip the re-prompting messages at the beginning of the script
- speak (bool): Whether to enable speak mode
- debug (bool): Whether to enable debug mode
- gpt3only (bool): Whether to enable GPT3.5 only mode
- gpt4only (bool): Whether to enable GPT4 only mode
- memory_type (str): The type of memory backend to use
- browser_name (str): The name of the browser to use when using selenium to scrape the web
- allow_downloads (bool): Whether to allow Auto-GPT to download files natively
-        skip_news (bool): Whether to suppress the output of the latest news on startup
- """
- CFG.set_debug_mode(False)
- CFG.set_continuous_mode(False)
- CFG.set_speak_mode(False)
-
- if debug:
- logger.typewriter_log("Debug Mode: ", Fore.GREEN, "ENABLED")
- CFG.set_debug_mode(True)
-
- if continuous:
- logger.typewriter_log("Continuous Mode: ", Fore.RED, "ENABLED")
- logger.typewriter_log(
- "WARNING: ",
- Fore.RED,
- "Continuous mode is not recommended. It is potentially dangerous and may"
- " cause your AI to run forever or carry out actions you would not usually"
- " authorise. Use at your own risk.",
- )
- CFG.set_continuous_mode(True)
-
- if continuous_limit:
- logger.typewriter_log(
- "Continuous Limit: ", Fore.GREEN, f"{continuous_limit}"
- )
- CFG.set_continuous_limit(continuous_limit)
-
- # Check if continuous limit is used without continuous mode
- if continuous_limit and not continuous:
- raise click.UsageError("--continuous-limit can only be used with --continuous")
-
- if speak:
- logger.typewriter_log("Speak Mode: ", Fore.GREEN, "ENABLED")
- CFG.set_speak_mode(True)
-
- if gpt3only:
- logger.typewriter_log("GPT3.5 Only Mode: ", Fore.GREEN, "ENABLED")
- CFG.set_smart_llm_model(CFG.fast_llm_model)
-
- if gpt4only:
- logger.typewriter_log("GPT4 Only Mode: ", Fore.GREEN, "ENABLED")
- CFG.set_fast_llm_model(CFG.smart_llm_model)
-
- if memory_type:
- supported_memory = get_supported_memory_backends()
- chosen = memory_type
- if chosen not in supported_memory:
- logger.typewriter_log(
- "ONLY THE FOLLOWING MEMORY BACKENDS ARE SUPPORTED: ",
- Fore.RED,
- f"{supported_memory}",
- )
- logger.typewriter_log("Defaulting to: ", Fore.YELLOW, CFG.memory_backend)
- else:
- CFG.memory_backend = chosen
-
- if skip_reprompt:
- logger.typewriter_log("Skip Re-prompt: ", Fore.GREEN, "ENABLED")
- CFG.skip_reprompt = True
-
- if ai_settings_file:
- file = ai_settings_file
-
- # Validate file
- (validated, message) = utils.validate_yaml_file(file)
- if not validated:
- logger.typewriter_log("FAILED FILE VALIDATION", Fore.RED, message)
- logger.double_check()
- exit(1)
-
- logger.typewriter_log("Using AI Settings File:", Fore.GREEN, file)
- CFG.ai_settings_file = file
- CFG.skip_reprompt = True
-
- if allow_downloads:
- logger.typewriter_log("Native Downloading:", Fore.GREEN, "ENABLED")
- logger.typewriter_log(
- "WARNING: ",
- Fore.YELLOW,
- f"{Back.LIGHTYELLOW_EX}Auto-GPT will now be able to download and save files to your machine.{Back.RESET} "
- + "It is recommended that you monitor any files it downloads carefully.",
- )
- logger.typewriter_log(
- "WARNING: ",
- Fore.YELLOW,
- f"{Back.RED + Style.BRIGHT}ALWAYS REMEMBER TO NEVER OPEN FILES YOU AREN'T SURE OF!{Style.RESET_ALL}",
- )
- CFG.allow_downloads = True
-
- if skip_news:
- CFG.skip_news = True
-
- if browser_name:
- CFG.selenium_web_browser = browser_name
diff --git a/spaces/MirageML/dreambooth/README.md b/spaces/MirageML/dreambooth/README.md
deleted file mode 100644
index 2aa426ad09ed6a685d8e0ae19895ad66cffdae65..0000000000000000000000000000000000000000
--- a/spaces/MirageML/dreambooth/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Dreambooth Web UI
-emoji: ☁️
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.11
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: multimodalart/dreambooth-training
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MirageML/sjc/my/utils/heartbeat.py b/spaces/MirageML/sjc/my/utils/heartbeat.py
deleted file mode 100644
index 024dc981b64140950102b05ffa657354a3cae485..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/my/utils/heartbeat.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# generates periodic heartbeats for remote experiment monitoring
-from pathlib import Path
-import json
-from inspect import stack
-from .ticker import IntervalTicker
-
-_CURRENT_BEAT_STACK = []
-
-
-def get_heartbeat():
- """
- Returns:
- The :class:`HeartBeat` object that's currently being used.
-        Throws an error if no :class:`HeartBeat` is currently enabled.
- """
- assert len(
- _CURRENT_BEAT_STACK
-    ), "get_heartbeat() has to be called inside a 'with HeartBeat(...)' context!"
- return _CURRENT_BEAT_STACK[-1]
-
-
-def get_tqdm_meter(pbar, format_dict):
- format_dict['bar_format'] = "{r_bar}"
- meter_str = pbar.format_meter(**format_dict)
- meter_str = meter_str[2:]
- return meter_str
-
-
-def caller_info(n_stack_up):
- info = stack()[1 + n_stack_up] # 1 up as base so that it starts from caller
- msg = f"{info.filename}:{info.lineno} - {info.function}"
- return msg
-
-
-class HeartBeat():
- def __init__(
- self, pbar, write_interval=10,
- output_dir="./", fname="heartbeat.json"
- ):
- self.pbar = pbar
- self.fname = Path(output_dir) / fname
- self.ticker = IntervalTicker(write_interval)
- self.completed = False
-
- # force one write at the beginning
- self.beat(force_write=True, n_stack_up=2)
-
- def beat(self, force_write=False, n_stack_up=1):
- on_write_period = self.ticker.tick()
- if force_write or on_write_period:
- stats = self.stats()
- stats['caller'] = caller_info(n_stack_up)
-
- with open(self.fname, "w") as f:
- json.dump(stats, f)
-
- def done(self):
- self.completed = True
- self.beat(force_write=True, n_stack_up=2)
-
- def stats(self):
- pbar = self.pbar
- fdict = pbar.format_dict
- stats = {
- "beat": self.ticker.tick_str(),
- "done": self.completed,
- "meter": get_tqdm_meter(pbar, fdict),
- "elapsed": int(fdict['elapsed'])
- }
- return stats
-
- def __enter__(self):
- _CURRENT_BEAT_STACK.append(self)
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- assert _CURRENT_BEAT_STACK[-1] == self
- _CURRENT_BEAT_STACK.pop()
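
Note: the HeartBeat class above gets its "current monitor" behaviour from a module-level stack plus __enter__/__exit__, so library code can call get_heartbeat() without the object being passed around. A stripped-down, runnable sketch of that pattern (the MiniBeat class and names are illustrative only, no tqdm or file output):

    _STACK = []

    class MiniBeat:
        def __init__(self, name):
            self.name = name

        def beat(self):
            print(f"beat from {self.name}")

        def __enter__(self):
            _STACK.append(self)
            return self

        def __exit__(self, exc_type, exc_val, exc_tb):
            assert _STACK[-1] is self
            _STACK.pop()

    def get_heartbeat():
        assert _STACK, "must be called inside a 'with MiniBeat(...)' block"
        return _STACK[-1]

    with MiniBeat("outer"):
        with MiniBeat("inner"):
            get_heartbeat().beat()   # -> beat from inner
        get_heartbeat().beat()       # -> beat from outer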
diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/scripts_FineCapEval/prepro_labels.py b/spaces/NAACL2022/CLIP-Caption-Reward/scripts_FineCapEval/prepro_labels.py
deleted file mode 100644
index 48e7d079808760941a78d87435f8f0e2bbcfb280..0000000000000000000000000000000000000000
--- a/spaces/NAACL2022/CLIP-Caption-Reward/scripts_FineCapEval/prepro_labels.py
+++ /dev/null
@@ -1,209 +0,0 @@
-"""
-Preprocess a raw json dataset into hdf5/json files for use in data_loader.py
-
-Input: json file that has the form
-[{ file_path: 'path/img.jpg', captions: ['a caption', ...] }, ...]
-example element in this list would look like
-{'captions': [u'A man with a red helmet on a small moped on a dirt road. ', u'Man riding a motor bike on a dirt road on the countryside.', u'A man riding on the back of a motorcycle.', u'A dirt path with a young person on a motor bike rests to the foreground of a verdant area with a bridge and a background of cloud-wreathed mountains. ', u'A man in a red shirt and a red hat is on a motorcycle on a hill side.'], 'file_path': u'val2014/COCO_val2014_000000391895.jpg', 'id': 391895}
-
-This script reads this json, does some basic preprocessing on the captions
-(e.g. lowercase, etc.), creates a special UNK token, and encodes everything to arrays
-
-Output: a json file and an hdf5 file
-The hdf5 file contains several fields:
-/labels is (M,max_length) uint32 array of encoded labels, zero padded
-/label_start_ix and /label_end_ix are (N,) uint32 arrays of pointers to the
- first and last indices (in range 1..M) of labels for each image
-/label_length stores the length of the sequence for each of the M sequences
-
-The json file has a dict that contains:
-- an 'ix_to_word' field storing the vocab in form {ix:'word'}, where ix is 1-indexed
-- an 'images' field that is a list holding auxiliary information for each image,
- such as in particular the 'split' it was assigned to.
-"""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import os
-import json
-import argparse
-from random import shuffle, seed
-import string
-# non-standard dependencies:
-import h5py
-import numpy as np
-import torch
-import torchvision.models as models
-import skimage.io
-from PIL import Image
-
-
-def build_vocab(imgs, params):
- count_thr = params['word_count_threshold']
-
- # count up the number of words
- counts = {}
- for img in imgs:
- for sent in img['sentences']:
- for w in sent['tokens']:
- counts[w] = counts.get(w, 0) + 1
- cw = sorted([(count,w) for w,count in counts.items()], reverse=True)
- print('top words and their counts:')
- print('\n'.join(map(str,cw[:20])))
-
- # print some stats
- total_words = sum(counts.values())
- print('total words:', total_words)
- bad_words = [w for w,n in counts.items() if n <= count_thr]
- vocab = [w for w,n in counts.items() if n > count_thr]
- bad_count = sum(counts[w] for w in bad_words)
- print('number of bad words: %d/%d = %.2f%%' % (len(bad_words), len(counts), len(bad_words)*100.0/len(counts)))
- print('number of words in vocab would be %d' % (len(vocab), ))
- print('number of UNKs: %d/%d = %.2f%%' % (bad_count, total_words, bad_count*100.0/total_words))
-
- # lets look at the distribution of lengths as well
- sent_lengths = {}
- for img in imgs:
- for sent in img['sentences']:
- txt = sent['tokens']
- nw = len(txt)
- sent_lengths[nw] = sent_lengths.get(nw, 0) + 1
- max_len = max(sent_lengths.keys())
- print('max length sentence in raw data: ', max_len)
- print('sentence length distribution (count, number of words):')
- sum_len = sum(sent_lengths.values())
- for i in range(max_len+1):
- print('%2d: %10d %f%%' % (i, sent_lengths.get(i,0), sent_lengths.get(i,0)*100.0/sum_len))
-
- # lets now produce the final annotations
- if bad_count > 0:
- # additional special UNK token we will use below to map infrequent words to
- print('inserting the special UNK token')
- vocab.append('UNK')
-
- for img in imgs:
- img['final_captions'] = []
- for sent in img['sentences']:
- txt = sent['tokens']
- caption = [w if counts.get(w,0) > count_thr else 'UNK' for w in txt]
- img['final_captions'].append(caption)
-
- return vocab
-
-
-def encode_captions(imgs, params, wtoi):
- """
- encode all captions into one large array, which will be 1-indexed.
- also produces label_start_ix and label_end_ix which store 1-indexed
- and inclusive (Lua-style) pointers to the first and last caption for
- each image in the dataset.
- """
-
- max_length = params['max_length']
- N = len(imgs)
- M = sum(len(img['final_captions']) for img in imgs) # total number of captions
-
- label_arrays = []
- label_start_ix = np.zeros(N, dtype='uint32') # note: these will be one-indexed
- label_end_ix = np.zeros(N, dtype='uint32')
- label_length = np.zeros(M, dtype='uint32')
- caption_counter = 0
- counter = 1
- for i,img in enumerate(imgs):
- n = len(img['final_captions'])
- assert n > 0, 'error: some image has no captions'
-
- Li = np.zeros((n, max_length), dtype='uint32')
- for j,s in enumerate(img['final_captions']):
- label_length[caption_counter] = min(max_length, len(s)) # record the length of this sequence
- caption_counter += 1
- for k,w in enumerate(s):
- if k < max_length:
- Li[j,k] = wtoi[w]
-
- # note: word indices are 1-indexed, and captions are padded with zeros
- label_arrays.append(Li)
- label_start_ix[i] = counter
- label_end_ix[i] = counter + n - 1
-
- counter += n
-
- L = np.concatenate(label_arrays, axis=0) # put all the labels together
- assert L.shape[0] == M, 'lengths don\'t match? that\'s weird'
- assert np.all(label_length > 0), 'error: some caption had no words?'
-
- print('encoded captions to array of size ', L.shape)
- return L, label_start_ix, label_end_ix, label_length
-
-
-def main(params):
-
-    data = json.load(open(params['input_json'], 'r'))
-    imgs = data['images']
-
-    seed(123) # make reproducible
-
-    # # create the vocab
-    # vocab = build_vocab(imgs, params)
-    # itow = {i+1:w for i,w in enumerate(vocab)} # a 1-indexed vocab translation table
-    # wtoi = {w:i+1 for i,w in enumerate(vocab)} # inverse table
-
-    # the vocab lives at the top level of the loaded json, not on the image list;
-    # json keys are strings, so cast the indices to int for the uint32 label arrays
-    itow = data['ix_to_word']
-    wtoi = {w: int(i) for i, w in itow.items()}
-
-    # 'final_captions' is normally created by build_vocab, which is skipped above, so
-    # rebuild it here (this assumes the stored vocab includes an 'UNK' entry)
-    for img in imgs:
-        img['final_captions'] = [[w if w in wtoi else 'UNK' for w in sent['tokens']]
-                                 for sent in img['sentences']]
-
-    # encode captions in large arrays, ready to ship to hdf5 file
-    L, label_start_ix, label_end_ix, label_length = encode_captions(imgs, params, wtoi)
-
- # create output h5 file
- N = len(imgs)
- f_lb = h5py.File(params['output_h5']+'_label.h5', "w")
- f_lb.create_dataset("labels", dtype='uint32', data=L)
- f_lb.create_dataset("label_start_ix", dtype='uint32', data=label_start_ix)
- f_lb.create_dataset("label_end_ix", dtype='uint32', data=label_end_ix)
- f_lb.create_dataset("label_length", dtype='uint32', data=label_length)
- f_lb.close()
-
- # create output json file
- out = {}
- out['ix_to_word'] = itow # encode the (1-indexed) vocab
- out['images'] = []
- for i,img in enumerate(imgs):
-
- jimg = {}
- jimg['split'] = img['split']
- if 'filename' in img: jimg['file_path'] = os.path.join(img.get('filepath', ''), img['filename']) # copy it over, might need
- if 'cocoid' in img:
-            jimg['id'] = img['cocoid'] # copy over & maintain an id, if present (e.g. coco ids, useful)
- elif 'imgid' in img:
- jimg['id'] = img['imgid']
-
- if params['images_root'] != '':
- with Image.open(os.path.join(params['images_root'], img['filepath'], img['filename'])) as _img:
- jimg['width'], jimg['height'] = _img.size
-
- out['images'].append(jimg)
-
- json.dump(out, open(params['output_json'], 'w'))
- print('wrote ', params['output_json'])
-
-if __name__ == "__main__":
-
- parser = argparse.ArgumentParser()
-
- # input json
- parser.add_argument('--input_json', required=True, help='input json file to process into hdf5')
- parser.add_argument('--output_json', default='data.json', help='output json file')
- parser.add_argument('--output_h5', default='data', help='output h5 file')
- parser.add_argument('--images_root', default='', help='root location in which images are stored, to be prepended to file_path in input json')
-
- # options
- parser.add_argument('--max_length', default=16, type=int, help='max length of a caption, in number of words. captions longer than this get clipped.')
- parser.add_argument('--word_count_threshold', default=5, type=int, help='only words that occur more than this number of times will be put in vocab')
-
- args = parser.parse_args()
- params = vars(args) # convert to ordinary dict
- print('parsed input parameters:')
- print(json.dumps(params, indent = 2))
- main(params)
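
Note: the part of this script that downstream data loaders rely on is the 1-indexed, inclusive pointer layout written by encode_captions. A small standalone sketch of how label_start_ix / label_end_ix index into the stacked label matrix (toy counts, no h5py needed):

    import numpy as np

    captions_per_image = [2, 3, 1]                    # toy dataset: 3 images, 6 captions total
    M = sum(captions_per_image)
    labels = np.zeros((M, 16), dtype='uint32')        # zero-padded encoded captions, max_length=16

    label_start_ix = np.zeros(len(captions_per_image), dtype='uint32')
    label_end_ix = np.zeros(len(captions_per_image), dtype='uint32')
    counter = 1                                       # 1-indexed, Lua-style
    for i, n in enumerate(captions_per_image):
        label_start_ix[i] = counter
        label_end_ix[i] = counter + n - 1             # inclusive end pointer
        counter += n

    # the second image owns rows 3..5 (1-indexed), i.e. labels[2:5] in numpy indexing
    assert label_start_ix.tolist() == [1, 3, 6]
    assert label_end_ix.tolist() == [2, 5, 6]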
diff --git a/spaces/NATSpeech/DiffSpeech/modules/commons/rel_transformer.py b/spaces/NATSpeech/DiffSpeech/modules/commons/rel_transformer.py
deleted file mode 100644
index 7e5b68b682be7ef0d1049015f0cd03d4e74f77d2..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/DiffSpeech/modules/commons/rel_transformer.py
+++ /dev/null
@@ -1,439 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from modules.commons.layers import Embedding
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0.,
- window_size=None, block_length=None, pre_ln=False, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.block_length = block_length
- self.pre_ln = pre_ln
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(
- MultiHeadAttention(hidden_channels, hidden_channels, n_heads, window_size=window_size,
- p_dropout=p_dropout, block_length=block_length))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
- if pre_ln:
- self.last_ln = LayerNorm(hidden_channels)
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- for i in range(self.n_layers):
- x = x * x_mask
- x_ = x
- if self.pre_ln:
- x = self.norm_layers_1[i](x)
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = x_ + y
- if not self.pre_ln:
- x = self.norm_layers_1[i](x)
-
- x_ = x
- if self.pre_ln:
- x = self.norm_layers_2[i](x)
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = x_ + y
- if not self.pre_ln:
- x = self.norm_layers_2[i](x)
- if self.pre_ln:
- x = self.last_ln(x)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, window_size=None, heads_share=True, p_dropout=0.,
- block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.p_dropout = p_dropout
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels ** -0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- if proximal_init:
- self.conv_k.weight.data.copy_(self.conv_q.weight.data)
- self.conv_k.bias.data.copy_(self.conv_q.bias.data)
- nn.init.xavier_uniform_(self.conv_v.weight)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(self.k_channels)
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query, key_relative_embeddings)
- rel_logits = self._relative_position_to_absolute_position(rel_logits)
- scores_local = rel_logits / math.sqrt(self.k_channels)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores * block_mask + -1e4 * (1 - block_mask)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:, slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[:, :, :length, length - 1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along columns
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]))
- x_flat = x.view([batch, heads, length ** 2 + length * (length - 1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(x * x_mask)
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- return x * x_mask
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-4):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- n_dims = len(x.shape)
- mean = torch.mean(x, 1, keepdim=True)
- variance = torch.mean((x - mean) ** 2, 1, keepdim=True)
-
- x = (x - mean) * torch.rsqrt(variance + self.eps)
-
- shape = [1, -1] + [1] * (n_dims - 2)
- x = x * self.gamma.view(*shape) + self.beta.view(*shape)
- return x
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size // 2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size // 2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class RelTransformerEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- window_size=4,
- block_length=None,
- prenet=True,
- pre_ln=True,
- ):
-
- super().__init__()
-
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.block_length = block_length
- self.prenet = prenet
- self.emb = Embedding(n_vocab, hidden_channels, padding_idx=0)
-
- if prenet:
- self.pre = ConvReluNorm(hidden_channels, hidden_channels, hidden_channels,
- kernel_size=5, n_layers=3, p_dropout=0)
- self.encoder = Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- window_size=window_size,
- block_length=block_length,
- pre_ln=pre_ln,
- )
-
- def forward(self, x, x_mask=None):
- if self.n_vocab > 0:
- x_lengths = (x > 0).long().sum(-1)
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- else:
- x_lengths = (x.abs().sum(-1) > 0).long().sum(-1)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- if self.prenet:
- x = self.pre(x, x_mask)
- x = self.encoder(x, x_mask)
- return x.transpose(1, 2)
-
-
-class RelTransformerEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout=0.0,
- window_size=4,
- block_length=None,
- prenet=True,
- pre_ln=True,
- ):
-
- super().__init__()
-
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.block_length = block_length
- self.prenet = prenet
- if n_vocab > 0:
- self.emb = Embedding(n_vocab, hidden_channels, padding_idx=0)
-
- if prenet:
- self.pre = ConvReluNorm(hidden_channels, hidden_channels, hidden_channels,
- kernel_size=5, n_layers=3, p_dropout=0)
- self.encoder = Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- window_size=window_size,
- block_length=block_length,
- pre_ln=pre_ln,
- )
-
- def forward(self, x, x_mask=None):
- if self.n_vocab > 0:
- x_lengths = (x > 0).long().sum(-1)
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- else:
- x_lengths = (x.abs().sum(-1) > 0).long().sum(-1)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- if self.prenet:
- x = self.pre(x, x_mask)
- x = self.encoder(x, x_mask)
- return x.transpose(1, 2)
diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/tasks.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/tasks.py
deleted file mode 100644
index c3ef6ca328f7454ffe9aec61a704d1322d680d31..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/tasks.py
+++ /dev/null
@@ -1,1507 +0,0 @@
-# Copyright 2018 The TensorFlow Authors All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-"""A library of tasks.
-
-This interface is intended to implement a wide variety of navigation
-tasks. See go/navigation_tasks for a list.
-"""
-
-import abc
-import collections
-import math
-import threading
-import networkx as nx
-import numpy as np
-import tensorflow as tf
-#from pyglib import logging
-#import gin
-from envs import task_env
-from envs import util as envs_util
-
-
-# Utility functions.
-def _pad_or_clip_array(np_arr, arr_len, is_front_clip=True, output_mask=False):
- """Make np_arr array to have length arr_len.
-
- If the array is shorter than arr_len, then it is padded at the back with
- zeros. If it is longer, then it is clipped either from the back or from the
- front. Only the first dimension is modified.
-
- Args:
- np_arr: numpy array.
- arr_len: integer scalar.
- is_front_clip: a boolean. If true then clipping is done in the front,
- otherwise in the back.
- output_mask: If True, outputs a numpy array of rank 1 which represents
- a mask of which values have been added (0 - added, 1 - actual output).
-
- Returns:
- A numpy array and the size of padding (as a python int32). This size is
- negative if the array is clipped.
- """
- shape = list(np_arr.shape)
- pad_size = arr_len - shape[0]
- padded_or_clipped = None
- if pad_size < 0:
- if is_front_clip:
- padded_or_clipped = np_arr[-pad_size:, :]
- else:
- padded_or_clipped = np_arr[:arr_len, :]
- elif pad_size > 0:
- padding = np.zeros([pad_size] + shape[1:], dtype=np_arr.dtype)
- padded_or_clipped = np.concatenate([np_arr, padding], axis=0)
- else:
- padded_or_clipped = np_arr
-
- if output_mask:
- mask = np.ones((arr_len,), dtype=np.int)
- if pad_size > 0:
- mask[-pad_size:] = 0
- return padded_or_clipped, pad_size, mask
- else:
- return padded_or_clipped, pad_size
-
-
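-# Illustrative usage sketch (not part of the original file): with the default
-# arguments, short arrays are padded with zeros at the back and long arrays are
-# clipped from the front; only the first dimension changes.
-#
-#   arr = np.arange(6).reshape(3, 2)
-#   padded, pad = _pad_or_clip_array(arr, 5)   # padded.shape == (5, 2), pad == 2
-#   clipped, pad = _pad_or_clip_array(arr, 2)  # keeps the last 2 rows, pad == -1
-
-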
-def classification_loss(truth, predicted, weights=None, is_one_hot=True):
- """A cross entropy loss.
-
- Computes the mean of cross entropy losses for all pairs of true labels and
- predictions. It wraps around a tf implementation of the cross entropy loss
- with additional reformating of the inputs. If the truth and predicted are
- n-rank Tensors with n > 2, then these are reshaped to 2-rank Tensors. It
- allows for truth to be specified as one hot vector or class indices. Finally,
- a weight can be specified for each element in truth and predicted.
-
- Args:
- truth: an n-rank or (n-1)-rank Tensor containing labels. If is_one_hot is
- True, then n-rank Tensor is expected, otherwise (n-1) rank one.
- predicted: an n-rank float Tensor containing prediction probabilities.
- weights: an (n-1)-rank float Tensor of weights
- is_one_hot: a boolean.
-
- Returns:
- A TF float scalar.
- """
- num_labels = predicted.get_shape().as_list()[-1]
- if not is_one_hot:
- truth = tf.reshape(truth, [-1])
- truth = tf.one_hot(
- truth, depth=num_labels, on_value=1.0, off_value=0.0, axis=-1)
- else:
- truth = tf.reshape(truth, [-1, num_labels])
- predicted = tf.reshape(predicted, [-1, num_labels])
- losses = tf.nn.softmax_cross_entropy_with_logits(
- labels=truth, logits=predicted)
- if weights is not None:
- losses = tf.boolean_mask(losses,
- tf.cast(tf.reshape(weights, [-1]), dtype=tf.bool))
- return tf.reduce_mean(losses)
-
-
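-# Illustrative usage sketch (values are assumptions, not from the original
-# file): `predicted` is passed to softmax_cross_entropy_with_logits, so it is
-# expected to hold unnormalized logits; `weights` masks out the second step.
-#
-#   truth = tf.constant([[[1.0, 0.0], [0.0, 1.0]]])    # [batch=1, time=2, labels=2]
-#   logits = tf.constant([[[2.0, -1.0], [0.5, 0.3]]])
-#   weights = tf.constant([[1.0, 0.0]])
-#   loss = classification_loss(truth, logits, weights=weights, is_one_hot=True)
-
-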
-class UnrolledTaskIOConfig(object):
- """Configuration of task inputs and outputs.
-
- A task can have multiple inputs, which define the context, and a task query
- which defines what is to be executed in this context. The desired execution
- is encoded in an output. The config defines the shapes of the inputs, the
- query and the outputs.
- """
-
- def __init__(self, inputs, output, query=None):
- """Constructs a Task input/output config.
-
- Args:
- inputs: an OrderedDict mapping modality types to tuples. Each tuple
- represents the configuration of an input, with the first element being
- the type (a tf.DType) and the second element the shape.
- output: a tuple representing the configuration of the output.
- query: a tuple representing the configuration of the query. If no query,
- then None.
- """
- # A configuration of a single input, output or query. Consists of the type
- # (a tf.DType) and a shape. The shape must be consistent with the type,
- # e.g. an image input would typically use a 3-valued shape list.
- io_config = collections.namedtuple('IOConfig', ['type', 'shape'])
-
- def assert_config(config):
- if not isinstance(config, tuple):
- raise ValueError('config must be a tuple. Received {}'.format(
- type(config)))
- if len(config) != 2:
- raise ValueError('config must have 2 elements, has %d' % len(config))
- if not isinstance(config[0], tf.DType):
- raise ValueError('First element of config must be a tf.DType.')
- if not isinstance(config[1], list):
- raise ValueError('Second element of config must be a list.')
-
- assert isinstance(inputs, collections.OrderedDict)
- for modality_type in inputs:
- assert_config(inputs[modality_type])
- self._inputs = collections.OrderedDict(
- [(k, io_config(*value)) for k, value in inputs.iteritems()])
-
- if query is not None:
- assert_config(query)
- self._query = io_config(*query)
- else:
- self._query = None
-
- assert_config(output)
- self._output = io_config(*output)
-
- @property
- def inputs(self):
- return self._inputs
-
- @property
- def output(self):
- return self._output
-
- @property
- def query(self):
- return self._query
-
-
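-# Illustrative construction sketch (shapes are assumptions, not from the file):
-# `inputs` must be an OrderedDict of (tf.DType, shape) tuples and `query` is
-# optional, as enforced by the checks in __init__.
-#
-#   config = UnrolledTaskIOConfig(
-#       inputs=collections.OrderedDict([('image', (tf.float32, [8, 64, 64, 3]))]),
-#       output=(tf.float32, [8, 4]),
-#       query=(tf.float32, [64, 64, 3]))
-
-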
-class UnrolledTask(object):
- """An interface for a Task which can be unrolled during training.
-
- Each example is called an episode and consists of inputs and a target output, where
- the output can be considered as desired unrolled sequence of actions for the
- inputs. For the specified tasks, these action sequences are to be
- unambiguously definable.
- """
- __metaclass__ = abc.ABCMeta
-
- def __init__(self, config):
- assert isinstance(config, UnrolledTaskIOConfig)
- self._config = config
- # A dict of bookkeeping variables.
- self.info = {}
- # Tensorflow input is multithreaded and this lock is needed to prevent
- # race condition in the environment. Without the lock, non-thread safe
- # environments crash.
- self._lock = threading.Lock()
-
- @property
- def config(self):
- return self._config
-
- @abc.abstractmethod
- def episode(self):
- """Returns data needed to train and test a single episode.
-
- Each episode consists of inputs, which define the context of the task, a
- query which defines the task, and a target output, which defines a
- sequence of actions to be executed for this query. This sequence should not
- require feedback, i.e. can be predicted purely from input and query.
-
- Returns:
- inputs, query, output, where inputs is a list of numpy arrays and query
- and output are numpy arrays. These arrays must be of shape and type as
- specified in the task configuration.
- """
- pass
-
- def reset(self, observation):
- """Called after the environment is reset."""
- pass
-
- def episode_batch(self, batch_size):
- """Returns a batch of episodes.
-
- Args:
- batch_size: size of batch.
-
- Returns:
- (inputs, query, output, masks) where inputs is list of numpy arrays and
- query, output, and mask are numpy arrays. These arrays must be of shape
- and type as specified in the task configuration with one additional
- preceding dimension corresponding to the batch.
-
- Raises:
- ValueError: if self.episode() returns illegal values.
- """
- batched_inputs = collections.OrderedDict(
- [[mtype, []] for mtype in self.config.inputs])
- batched_queries = []
- batched_outputs = []
- batched_masks = []
- for _ in range(int(batch_size)):
- with self._lock:
- # The episode function needs to be thread-safe. Since the current
- # implementations of the envs are not thread-safe, we need to lock
- # the operations here.
- inputs, query, outputs = self.episode()
- if not isinstance(outputs, tuple):
- raise ValueError('Outputs return value must be tuple.')
- if len(outputs) != 2:
- raise ValueError('Output tuple must be of size 2.')
- if inputs is not None:
- for modality_type in batched_inputs:
- batched_inputs[modality_type].append(
- np.expand_dims(inputs[modality_type], axis=0))
-
- if query is not None:
- batched_queries.append(np.expand_dims(query, axis=0))
- batched_outputs.append(np.expand_dims(outputs[0], axis=0))
- if outputs[1] is not None:
- batched_masks.append(np.expand_dims(outputs[1], axis=0))
-
- batched_inputs = {
- k: np.concatenate(i, axis=0) for k, i in batched_inputs.iteritems()
- }
- if batched_queries:
- batched_queries = np.concatenate(batched_queries, axis=0)
- batched_outputs = np.concatenate(batched_outputs, axis=0)
- if batched_masks:
- batched_masks = np.concatenate(batched_masks, axis=0).astype(np.float32)
- else:
- # When the array is empty, the default np.dtype is float64 which causes
- # py_func to crash in the tests.
- batched_masks = np.array([], dtype=np.float32)
- batched_inputs = [batched_inputs[k] for k in self._config.inputs]
- return batched_inputs, batched_queries, batched_outputs, batched_masks
-
- def tf_episode_batch(self, batch_size):
- """A batch of episodes as TF Tensors.
-
- Same as episode_batch with the difference that the return values are TF
- Tensors.
-
- Args:
- batch_size: a python float for the batch size.
-
- Returns:
- inputs, query, output, mask where inputs is a dictionary of tf.Tensor
- where the keys are the modality types specified in the config.inputs.
- query, output, and mask are TF Tensors. These tensors must
- be of shape and type as specified in the task configuration with one
- additional preceding dimension corresponding to the batch. The mask matches
- output in the batch and leading (sequence) dimension only.
- """
-
- # Define TF outputs.
- touts = []
- shapes = []
- for _, i in self._config.inputs.iteritems():
- touts.append(i.type)
- shapes.append(i.shape)
- if self._config.query is not None:
- touts.append(self._config.query.type)
- shapes.append(self._config.query.shape)
- # Shapes and types for batched_outputs.
- touts.append(self._config.output.type)
- shapes.append(self._config.output.shape)
- # Shapes and types for batched_masks.
- touts.append(self._config.output.type)
- shapes.append(self._config.output.shape[0:1])
-
- def episode_batch_func():
- if self.config.query is None:
- inp, _, output, masks = self.episode_batch(int(batch_size))
- return tuple(inp) + (output, masks)
- else:
- inp, query, output, masks = self.episode_batch(int(batch_size))
- return tuple(inp) + (query, output, masks)
-
- tf_episode_batch = tf.py_func(episode_batch_func, [], touts,
- stateful=True, name='taskdata')
- for episode, shape in zip(tf_episode_batch, shapes):
- episode.set_shape([batch_size] + shape)
-
- tf_episode_batch_dict = collections.OrderedDict([
- (mtype, episode)
- for mtype, episode in zip(self.config.inputs.keys(), tf_episode_batch)
- ])
- cur_index = len(self.config.inputs.keys())
- tf_query = None
- if self.config.query is not None:
- tf_query = tf_episode_batch[cur_index]
- cur_index += 1
- tf_outputs = tf_episode_batch[cur_index]
- tf_masks = tf_episode_batch[cur_index + 1]
-
- return tf_episode_batch_dict, tf_query, tf_outputs, tf_masks
-
- @abc.abstractmethod
- def target_loss(self, true_targets, targets, weights=None):
- """A loss for training a task model.
-
- This loss measures the discrepancy between the task outputs, the true and
- predicted ones.
-
- Args:
- true_targets: tf.Tensor of shape and type as defined in the task config
- containing the true outputs.
- targets: tf.Tensor of shape and type as defined in the task config
- containing the predicted outputs.
- weights: a bool tf.Tensor of shape as targets. Only true values are
- considered when formulating the loss.
- """
- pass
-
- def reward(self, obs, done, info):
- """Returns a reward.
-
- The task has to compute a reward based on the state of the environment. The
- reward computation, though, is task specific. The task is to use the
- environment interface, as defined in task_env.py, to compute the reward. If
- this interface does not expose enough information, it is to be updated.
-
- Args:
- obs: Observation from environment's step function.
- done: Done flag from environment's step function.
- info: Info dict from environment's step function.
-
- Returns:
- obs: Observation.
- reward: Floating point value.
- done: Done flag.
- info: Info dict.
- """
- # Default implementation does not do anything.
- return obs, 0.0, done, info
-
-
-class RandomExplorationBasedTask(UnrolledTask):
- """A Task which starts with a random exploration of the environment."""
-
- def __init__(self,
- env,
- seed,
- add_query_noise=False,
- query_noise_var=0.0,
- *args,
- **kwargs): # pylint: disable=keyword-arg-before-vararg
- """Initializes a Task using a random exploration runs.
-
- Args:
- env: an instance of type TaskEnv and gym.Env.
- seed: a random seed.
- add_query_noise: boolean, if True then whatever queries are generated,
- they are randomly perturbed. The semantics of the queries depends on the
- concrete task implementation.
- query_noise_var: float, the variance of Gaussian noise used for query
- perturbation. Used iff add_query_noise==True.
- *args: see super class.
- **kwargs: see super class.
- """
- super(RandomExplorationBasedTask, self).__init__(*args, **kwargs)
- assert isinstance(env, task_env.TaskEnv)
- self._env = env
- self._env.set_task(self)
- self._rng = np.random.RandomState(seed)
- self._add_query_noise = add_query_noise
- self._query_noise_var = query_noise_var
-
- # GoToStaticXTask can also take empty config but for the rest of the classes
- # the number of modality types is 1.
- if len(self.config.inputs.keys()) > 1:
- raise NotImplementedError('current implementation supports input '
- 'with only one modality type or less.')
-
- def _exploration(self):
- """Generates a random exploration run.
-
- The function uses the environment to generate a run.
-
- Returns:
- A tuple of numpy arrays. The i-th array contains observation of type and
- shape as specified in config.inputs[i].
- A list of states along the exploration path.
- A list of vertex indices corresponding to the path of the exploration.
- """
- in_seq_len = self._config.inputs.values()[0].shape[0]
- path, _, states, step_outputs = self._env.random_step_sequence(
- min_len=in_seq_len)
- obs = {modality_type: [] for modality_type in self._config.inputs}
- for o in step_outputs:
- step_obs, _, done, _ = o
- # It is expected that each value of step_obs is a dict of observations,
- # whose dimensions are consistent with the config.inputs sizes.
- for modality_type in self._config.inputs:
- assert modality_type in step_obs, '{}'.format(type(step_obs))
- o = step_obs[modality_type]
- i = self._config.inputs[modality_type]
- assert len(o.shape) == len(i.shape) - 1
- for dim_o, dim_i in zip(o.shape, i.shape[1:]):
- assert dim_o == dim_i, '{} != {}'.format(dim_o, dim_i)
- obs[modality_type].append(o)
- if done:
- break
-
- if not obs:
- return obs, states, path
-
- max_path_len = int(
- round(in_seq_len * float(len(path)) / float(len(obs.values()[0]))))
- path = path[-max_path_len:]
- states = states[-in_seq_len:]
-
- # The above obs maps each modality type to a list of np.arrays. Re-format
- # them as single np.arrays, each containing all observations from all steps.
- def regroup(obs, i):
- """Regroups observations.
-
- Args:
- obs: a list of tuples of same size. The k-th tuple contains all the
- observations from k-th step. Each observation is a numpy array.
- i: the index of the observation in each tuple to be grouped.
-
- Returns:
- A numpy array of shape config.inputs[i] which contains all i-th
- observations from all steps. These are concatenated along the first
- dimension. In addition, if the number of observations is different from
- the one specified in config.inputs[i].shape[0], then the array is either
- clipped from the front or padded at the back.
- """
- grouped_obs = np.concatenate(
- [np.expand_dims(o, axis=0) for o in obs[i]], axis=0)
- in_seq_len = self._config.inputs[i].shape[0]
- # pylint: disable=unbalanced-tuple-unpacking
- grouped_obs, _ = _pad_or_clip_array(
- grouped_obs, in_seq_len, is_front_clip=True)
- return grouped_obs
-
- all_obs = {i: regroup(obs, i) for i in self._config.inputs}
-
- return all_obs, states, path
-
- def _obs_to_state(self, path, states):
- """Computes mapping between path nodes and states."""
- # Generate a numpy array of locations corresponding to the path vertices.
- path_coordinates = map(self._env.vertex_to_pose, path)
- path_coordinates = np.concatenate(
- [np.reshape(p, [1, 2]) for p in path_coordinates])
-
- # The observations are taken along a smoothed trajectory following the path.
- # We compute a mapping between the observations and the map vertices.
- path_to_obs = collections.defaultdict(list)
- obs_to_state = []
- for i, s in enumerate(states):
- location = np.reshape(s[0:2], [1, 2])
- index = np.argmin(
- np.reshape(
- np.sum(np.power(path_coordinates - location, 2), axis=1), [-1]))
- index = path[index]
- path_to_obs[index].append(i)
- obs_to_state.append(index)
- return path_to_obs, obs_to_state
-
- def _perturb_state(self, state, noise_var):
- """Perturbes the state.
-
- The location are purturbed using a Gaussian noise with variance
- noise_var. The orientation is uniformly sampled.
-
- Args:
- state: a numpy array containing an env state (x, y locations).
- noise_var: float
- Returns:
- The perturbed state.
- """
-
- def normal(v, std):
- if std > 0:
- n = self._rng.normal(0.0, std)
- n = min(n, 2.0 * std)
- n = max(n, -2.0 * std)
- return v + n
- else:
- return v
-
- state = state.copy()
- state[0] = normal(state[0], noise_var)
- state[1] = normal(state[1], noise_var)
- if state.size > 2:
- state[2] = self._rng.uniform(-math.pi, math.pi)
- return state
-
- def _sample_obs(self,
- indices,
- observations,
- observation_states,
- path_to_obs,
- max_obs_index=None,
- use_exploration_obs=True):
- """Samples one observation which corresponds to vertex_index in path.
-
- In addition, the sampled observation must have index in observations less
- than max_obs_index. If these two conditions cannot be satisfied the
- function returns None.
-
- Args:
- indices: a list of integers.
- observations: a list of numpy arrays containing all the observations.
- observation_states: a list of numpy arrays, each array representing the
- state of the observation.
- path_to_obs: a dict of path indices to lists of observation indices.
- max_obs_index: an integer.
- use_exploration_obs: if True, then the observation is sampled among the
- specified observations, otherwise it is obtained from the environment.
- Returns:
- A tuple of:
- -- A numpy array of size width x height x 3 representing the sampled
- observation.
- -- The index of the sampled observation among the input observations.
- -- The state at which the observation is captured.
- Raises:
- ValueError: if the observation and observation_states lists are of
- different lengths.
- """
- if len(observations) != len(observation_states):
- raise ValueError('observation and observation_states lists must have '
- 'equal lengths')
- if not indices:
- return None, None, None
- vertex_index = self._rng.choice(indices)
- if use_exploration_obs:
- obs_indices = path_to_obs[vertex_index]
-
- if max_obs_index is not None:
- obs_indices = [i for i in obs_indices if i < max_obs_index]
-
- if obs_indices:
- index = self._rng.choice(obs_indices)
- if self._add_query_noise:
- xytheta = self._perturb_state(observation_states[index],
- self._query_noise_var)
- return self._env.observation(xytheta), index, xytheta
- else:
- return observations[index], index, observation_states[index]
- else:
- return None, None, None
- else:
- xy = self._env.vertex_to_pose(vertex_index)
- xytheta = np.array([xy[0], xy[1], 0.0])
- xytheta = self._perturb_state(xytheta, self._query_noise_var)
- return self._env.observation(xytheta), None, xytheta
-
-
-class AreNearbyTask(RandomExplorationBasedTask):
- """A task of identifying whether a query is nearby current location or not.
-
- The query is guaranteed to be in proximity of an already visited location,
- i.e. close to one of the observations. For each observation we have one
- query, which is either close or not to this observation.
- """
-
- def __init__(
- self,
- max_distance=0,
- *args,
- **kwargs): # pylint: disable=keyword-arg-before-vararg
- super(AreNearbyTask, self).__init__(*args, **kwargs)
- self._max_distance = max_distance
-
- if len(self.config.inputs.keys()) != 1:
- raise NotImplementedError('current implementation supports input '
- 'with only one modality type')
-
- def episode(self):
- """Episode data.
-
- Returns:
- observations: a tuple with one element. This element is a numpy array of
- size in_seq_len x observation_size x observation_size x 3 containing
- in_seq_len images.
- query: a numpy array of size
- in_seq_len x observation_size X observation_size x 3 containing a query
- image.
- A tuple of size two. The first element is an in_seq_len x 3 numpy array of
- one-hot labels. The i-th row denotes whether the i-th query image is a
- visited but not nearby location (label 0), a nearby visited location
- (label 1), or a not yet visited location (label 2) w.r.t. the i-th
- observation. The second element in the tuple is a mask, a numpy array of
- size in_seq_len x 1 with values 1.0 or 0.0 denoting whether the query is
- valid or not (it can happen that the query is not valid, e.g. there are
- not enough observations to form a meaningful query).
- """
- observations, states, path = self._exploration()
- assert len(observations.values()[0]) == len(states)
-
- # The observations are taken along a smoothed trajectory following the path.
- # We compute a mapping between the observations and the map vertices.
- path_to_obs, obs_to_path = self._obs_to_state(path, states)
-
- # Go over all observations, and sample a query. With probability 0.5 this
- # query is a nearby observation (defined as belonging to the same vertex
- # in path).
- g = self._env.graph
- queries = []
- labels = []
- validity_masks = []
- query_index_in_observations = []
- for i, curr_o in enumerate(observations.values()[0]):
- p = obs_to_path[i]
- low = max(0, i - self._max_distance)
-
- # A list of lists of vertex indices. Each list in this group corresponds
- # to one possible label.
- index_groups = [[], [], []]
- # Nearby visited indices, label 1.
- nearby_visited = [
- ii for ii in path[low:i + 1] + g[p].keys() if ii in obs_to_path[:i]
- ]
- nearby_visited = [ii for ii in nearby_visited if ii in path_to_obs]
- # NOT Nearby visited indices, label 0.
- not_nearby_visited = [ii for ii in path[:low] if ii not in g[p].keys()]
- not_nearby_visited = [ii for ii in not_nearby_visited if ii in path_to_obs]
- # NOT visited indices, label 2.
- not_visited = [
- ii for ii in range(g.number_of_nodes()) if ii not in path[:i + 1]
- ]
-
- index_groups = [not_nearby_visited, nearby_visited, not_visited]
-
- # Consider only labels for which there are indices.
- allowed_labels = [ii for ii, group in enumerate(index_groups) if group]
- label = self._rng.choice(allowed_labels)
-
- indices = list(set(index_groups[label]))
- max_obs_index = None if label == 2 else i
- use_exploration_obs = False if label == 2 else True
- o, obs_index, _ = self._sample_obs(
- indices=indices,
- observations=observations.values()[0],
- observation_states=states,
- path_to_obs=path_to_obs,
- max_obs_index=max_obs_index,
- use_exploration_obs=use_exploration_obs)
- query_index_in_observations.append(obs_index)
-
- # If we cannot sample a valid query, we mark it as not valid in mask.
- if o is None:
- label = 0.0
- o = curr_o
- validity_masks.append(0)
- else:
- validity_masks.append(1)
-
- queries.append(o.values()[0])
- labels.append(label)
-
- query = np.concatenate([np.expand_dims(q, axis=0) for q in queries], axis=0)
-
- def one_hot(label, num_labels=3):
- a = np.zeros((num_labels,), dtype=np.float)
- a[int(label)] = 1.0
- return a
-
- outputs = np.stack([one_hot(l) for l in labels], axis=0)
- validity_mask = np.reshape(
- np.array(validity_masks, dtype=np.int32), [-1, 1])
-
- self.info['query_index_in_observations'] = query_index_in_observations
- self.info['observation_states'] = states
-
- return observations, query, (outputs, validity_mask)
-
- def target_loss(self, truth, predicted, weights=None):
- pass
-
-
-class NeighboringQueriesTask(RandomExplorationBasedTask):
- """A task of identifying whether two queries are closeby or not.
-
- The proximity between queries is defined by the length of the shorest path
- between them.
- """
-
- def __init__(
- self,
- max_distance=1,
- *args,
- **kwargs): # pylint: disable=keyword-arg-before-vararg
- """Initializes a NeighboringQueriesTask.
-
- Args:
- max_distance: integer, the maximum distance in terms of number of vertices
- between the two queries, so that they are considered neighboring.
- *args: for super class.
- **kwargs: for super class.
- """
- super(NeighboringQueriesTask, self).__init__(*args, **kwargs)
- self._max_distance = max_distance
- if len(self.config.inputs.keys()) != 1:
- raise NotImplementedError('current implementation supports input '
- 'with only one modality type')
-
- def episode(self):
- """Episode data.
-
- Returns:
- observations: a tuple with one element. This element is a numpy array of
- size in_seq_len x observation_size x observation_size x 3 containing
- in_seq_len images.
- query: a numpy array of size
- 2 x observation_size X observation_size x 3 containing a pair of query
- images.
- A tuple of size two. First element is a numpy array of size 2 containing
- a one hot vector of whether the two observations are neighboring. Second
- element is a boolean numpy value denoting whether this is a valid
- episode.
- """
- observations, states, path = self._exploration()
- assert len(observations.values()[0]) == len(states)
- path_to_obs, _ = self._obs_to_state(path, states)
- # Restrict path to ones for which observations have been generated.
- path = [p for p in path if p in path_to_obs]
- # Sample first query.
- query1_index = self._rng.choice(path)
- # Sample label.
- label = self._rng.randint(2)
- # Sample second query.
- # If label == 1, then second query must be nearby, otherwise not.
- closest_indices = nx.single_source_shortest_path(
- self._env.graph, query1_index, self._max_distance).keys()
- if label == 0:
- # Path indices which are not close to the first query.
- indices = [p for p in path if p not in closest_indices]
- else:
- # Close indices which are also on the path.
- indices = [p for p in closest_indices if p in path]
-
- query2_index = self._rng.choice(indices)
- # Generate an observation.
- query1, query1_index, _ = self._sample_obs(
- [query1_index],
- observations.values()[0],
- states,
- path_to_obs,
- max_obs_index=None,
- use_exploration_obs=True)
- query2, query2_index, _ = self._sample_obs(
- [query2_index],
- observations.values()[0],
- states,
- path_to_obs,
- max_obs_index=None,
- use_exploration_obs=True)
-
- queries = np.concatenate(
- [np.expand_dims(q, axis=0) for q in [query1, query2]])
- labels = np.array([0, 0])
- labels[label] = 1
- is_valid = np.array([1])
-
- self.info['observation_states'] = states
- self.info['query_indices_in_observations'] = [query1_index, query2_index]
-
- return observations, queries, (labels, is_valid)
-
- def target_loss(self, truth, predicted, weights=None):
- pass
-
-
-#@gin.configurable
-class GotoStaticXTask(RandomExplorationBasedTask):
- """Task go to a static X.
-
- If continuous reward is used only one goal is allowed so that the reward can
- be computed as a delta-distance to that goal..
- """
-
- def __init__(self,
- step_reward=0.0,
- goal_reward=1.0,
- hit_wall_reward=-1.0,
- done_at_target=False,
- use_continuous_reward=False,
- *args,
- **kwargs): # pylint: disable=keyword-arg-before-vararg
- super(GotoStaticXTask, self).__init__(*args, **kwargs)
- if len(self.config.inputs.keys()) > 1:
- raise NotImplementedError('current implementation supports input '
- 'with only one modality type or less.')
-
- self._step_reward = step_reward
- self._goal_reward = goal_reward
- self._hit_wall_reward = hit_wall_reward
- self._done_at_target = done_at_target
- self._use_continuous_reward = use_continuous_reward
-
- self._previous_path_length = None
-
- def episode(self):
- observations, _, path = self._exploration()
- if len(path) < 2:
- raise ValueError('The exploration path has only one node.')
-
- g = self._env.graph
- start = path[-1]
- while True:
- goal = self._rng.choice(path[:-1])
- if goal != start:
- break
- goal_path = nx.shortest_path(g, start, goal)
-
- init_orientation = self._rng.uniform(0, np.pi, (1,))
- trajectory = np.array(
- [list(self._env.vertex_to_pose(p)) for p in goal_path])
- init_xy = np.reshape(trajectory[0, :], [-1])
- init_state = np.concatenate([init_xy, init_orientation], 0)
-
- trajectory = trajectory[1:, :]
- deltas = envs_util.trajectory_to_deltas(trajectory, init_state)
- output_seq_len = self._config.output.shape[0]
- arr = _pad_or_clip_array(deltas, output_seq_len, output_mask=True)
- # pylint: disable=unbalanced-tuple-unpacking
- thetas, _, thetas_mask = arr
-
- query = self._env.observation(self._env.vertex_to_pose(goal)).values()[0]
-
- return observations, query, (thetas, thetas_mask)
-
- def reward(self, obs, done, info):
- if 'wall_collision' in info and info['wall_collision']:
- return obs, self._hit_wall_reward, done, info
-
- reward = 0.0
- current_vertex = self._env.pose_to_vertex(self._env.state)
-
- if current_vertex in self._env.targets():
- if self._done_at_target:
- done = True
- else:
- obs = self._env.reset()
- reward = self._goal_reward
- else:
- if self._use_continuous_reward:
- if len(self._env.targets()) != 1:
- raise ValueError(
- 'FindX task with continuous reward is assuming only one target.')
- goal_vertex = self._env.targets()[0]
- path_length = self._compute_path_length(goal_vertex)
- reward = self._previous_path_length - path_length
- self._previous_path_length = path_length
- else:
- reward = self._step_reward
-
- return obs, reward, done, info
-
- def _compute_path_length(self, goal_vertex):
- current_vertex = self._env.pose_to_vertex(self._env.state)
- path = nx.shortest_path(self._env.graph, current_vertex, goal_vertex)
- assert len(path) >= 2
- curr_xy = np.array(self._env.state[:2])
- next_xy = np.array(self._env.vertex_to_pose(path[1]))
- last_step_distance = np.linalg.norm(next_xy - curr_xy)
- return (len(path) - 2) * self._env.cell_size_px + last_step_distance
-
- def reset(self, observation):
- if self._use_continuous_reward:
- if len(self._env.targets()) != 1:
- raise ValueError(
- 'FindX task with continuous reward is assuming only one target.')
- goal_vertex = self._env.targets()[0]
- self._previous_path_length = self._compute_path_length(goal_vertex)
-
- def target_loss(self, truth, predicted, weights=None):
- """Action classification loss.
-
- Args:
- truth: a batch_size x sequence length x number of labels float
- Tensor containing a one hot vector for each label in each batch and
- time.
- predicted: a batch_size x sequence length x number of labels float
- Tensor containing a predicted distribution over all actions.
- weights: a batch_size x sequence_length float Tensor of bool
- denoting which actions are valid.
-
- Returns:
- An average cross entropy over all batches and elements in sequence.
- """
- return classification_loss(
- truth=truth, predicted=predicted, weights=weights, is_one_hot=True)
-
-
-class RelativeLocationTask(RandomExplorationBasedTask):
- """A task of estimating the relative location of a query w.r.t current.
-
- It is to be used for debugging. It is designed such that the output is a
- single value, out of a discrete set of values, so that it can be phrased as
- a classification problem.
- """
-
- def __init__(self, num_labels, *args, **kwargs):
- """Initializes a relative location task.
-
- Args:
- num_labels: integer, number of orientations to bin the relative
- orientation into.
- *args: see super class.
- **kwargs: see super class.
- """
- super(RelativeLocationTask, self).__init__(*args, **kwargs)
- self._num_labels = num_labels
- if len(self.config.inputs.keys()) != 1:
- raise NotImplementedError('current implementation supports input '
- 'with only one modality type')
-
- def episode(self):
- observations, states, path = self._exploration()
-
- # Select a random element from history.
- path_to_obs, _ = self._obs_to_state(path, states)
- use_exploration_obs = not self._add_query_noise
- query, _, query_state = self._sample_obs(
- path[:-1],
- observations.values()[0],
- states,
- path_to_obs,
- max_obs_index=None,
- use_exploration_obs=use_exploration_obs)
-
- x, y, theta = tuple(states[-1])
- q_x, q_y, _ = tuple(query_state)
- t_x, t_y = q_x - x, q_y - y
- (rt_x, rt_y) = (np.sin(theta) * t_x - np.cos(theta) * t_y,
- np.cos(theta) * t_x + np.sin(theta) * t_y)
- # Bins are [a(i), a(i+1)] for a(i) = -pi + 0.5 * bin_size + i * bin_size.
- shift = np.pi * (1 - 1.0 / (2.0 * self._num_labels))
- orientation = np.arctan2(rt_y, rt_x) + shift
- if orientation < 0:
- orientation += 2 * np.pi
- label = int(np.floor(self._num_labels * orientation / (2 * np.pi)))
-
- out_shape = self._config.output.shape
- if len(out_shape) != 1:
- raise ValueError('Output shape should be of rank 1.')
- if out_shape[0] != self._num_labels:
- raise ValueError('Output shape must be of size %d' % self._num_labels)
- output = np.zeros(out_shape, dtype=np.float32)
- output[label] = 1
-
- return observations, query, (output, None)
-
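-  # Worked binning example (illustrative numbers, not from the original file):
-  # with num_labels = 4 the shift is pi * (1 - 1/8) = 7*pi/8, so a relative
-  # direction with arctan2 = 0 becomes orientation 7*pi/8 and gets label
-  # floor(4 * 7/16) = 1, while angles below -pi + pi/8 wrap around into the
-  # last bin (label 3).
-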
- def target_loss(self, truth, predicted, weights=None):
- return classification_loss(
- truth=truth, predicted=predicted, weights=weights, is_one_hot=True)
-
-
-class LocationClassificationTask(UnrolledTask):
- """A task of classifying a location as one of several classes.
-
- The task does not have an input, but just a query and an output. The query
- is an observation of the current location, e.g. an image taken from the
- current state. The output is a label classifying this location in one of
- predefined set of locations (or landmarks).
-
- The current implementation classifies locations as intersections based on the
- number and directions of biforcations. It is expected that a location can have
- at most 4 different directions, aligned with the axes. As each of these four
- directions might be present or not, the number of possible intersections are
- 2^4 = 16.
- """
-
- def __init__(self, env, seed, *args, **kwargs):
- super(LocationClassificationTask, self).__init__(*args, **kwargs)
- self._env = env
- self._rng = np.random.RandomState(seed)
- # A location property which can be set. If not set, a random one is
- # generated.
- self._location = None
- if len(self.config.inputs.keys()) > 1:
- raise NotImplementedError('current implementation supports input '
- 'with only one modality type or less.')
-
- @property
- def location(self):
- return self._location
-
- @location.setter
- def location(self, location):
- self._location = location
-
- def episode(self):
- # Get a location. If not set, sample one at a vertex with a random
- # orientation.
- location = self._location
- if location is None:
- num_nodes = self._env.graph.number_of_nodes()
- vertex = int(math.floor(self._rng.uniform(0, num_nodes)))
- xy = self._env.vertex_to_pose(vertex)
- theta = self._rng.uniform(0, 2 * math.pi)
- location = np.concatenate(
- [np.reshape(xy, [-1]), np.array([theta])], axis=0)
- else:
- vertex = self._env.pose_to_vertex(location)
-
- theta = location[2]
- neighbors = self._env.graph.neighbors(vertex)
- xy_s = [self._env.vertex_to_pose(n) for n in neighbors]
-
- def rotate(xy, theta):
- """Rotates a vector around the origin by angle theta.
-
- Args:
- xy: a numpy darray of shape (2, ) of floats containing the x and y
- coordinates of a vector.
- theta: a python float containing the rotation angle in radians.
-
- Returns:
- A numpy darray of floats of shape (2,) containing the x and y
- coordinates rotated xy.
- """
- rotated_x = np.cos(theta) * xy[0] - np.sin(theta) * xy[1]
- rotated_y = np.sin(theta) * xy[0] + np.cos(theta) * xy[1]
- return np.array([rotated_x, rotated_y])
-
- # Rotate all intersection bifurcations by the orientation of the agent, as
- # the intersection label is defined in an agent-centered fashion.
- xy_s = [
- rotate(xy - location[0:2], -location[2] - math.pi / 4) for xy in xy_s
- ]
- th_s = [np.arctan2(xy[1], xy[0]) for xy in xy_s]
-
- out_shape = self._config.output.shape
- if len(out_shape) != 1:
- raise ValueError('Output shape should be of rank 1.')
- num_labels = out_shape[0]
- if num_labels != 16:
- raise ValueError('Currently only 16 labels are supported '
- '(there are 16 different 4 way intersection types).')
-
- th_s = set([int(math.floor(4 * (th / (2 * np.pi) + 0.5))) for th in th_s])
- one_hot_label = np.zeros((num_labels,), dtype=np.float32)
- label = 0
- for th in th_s:
- label += pow(2, th)
- one_hot_label[int(label)] = 1.0
-
- query = self._env.observation(location).values()[0]
- return [], query, (one_hot_label, None)
-
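-  # Worked label example (illustrative): after rotating into the agent frame,
-  # each neighbor direction falls into one of four bins th in {0, 1, 2, 3}; a
-  # vertex with neighbors in bins 0 and 2 gets label 2**0 + 2**2 = 5, so
-  # one_hot_label[5] = 1.0.
-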
- def reward(self, obs, done, info):
- raise ValueError('Do not call.')
-
- def target_loss(self, truth, predicted, weights=None):
- return classification_loss(
- truth=truth, predicted=predicted, weights=weights, is_one_hot=True)
-
-
-class GotoStaticXNoExplorationTask(UnrolledTask):
- """An interface for findX tasks without exploration.
-
- The agent is initialized at a random location in a random world with a random
- goal, and the objective is for the agent to move toward the goal. This class
- generates episodes for such a task. Each episode is a sequence of observations
- x and target outputs y. x contains the observations and is an OrderedDict with keys
- provided from config.inputs.keys() and the shapes provided in the
- config.inputs. The output is a numpy array with the shape specified in
- config.output. The shape of the array is (sequence_length x action_size) where
- action_size is the number of actions that can be done in the environment. Note that
- config.output.shape should be set according to the number of actions that can
- be done in the env.
- Target outputs y are the groundtruth values of each action, computed
- from the environment graph. The target output for each action is proportional
- to the progress that each action makes. Target value of 1 means that the
- action takes the agent one step closer, -1 means the action takes the agent
- one step farther. A value of -2 means that the action should not be taken at
- all, either because it leads to a collision or because it would terminate the
- episode prematurely.
- """
-
- def __init__(self, env, *args, **kwargs):
- super(GotoStaticXNoExplorationTask, self).__init__(*args, **kwargs)
-
- if self._config.query is not None:
- raise ValueError('query should be None.')
- if len(self._config.output.shape) != 2:
- raise ValueError('output should only have two dimensions:'
- '(sequence_length x number_of_actions)')
- for input_config in self._config.inputs.values():
- if input_config.shape[0] != self._config.output.shape[0]:
- raise ValueError('the first dimension of the input and output should'
- ' be the same.')
- if len(self._config.output.shape) != 2:
- raise ValueError('output shape should be '
- '(sequence_length x number_of_actions)')
-
- self._env = env
-
- def _compute_shortest_path_length(self, vertex, target_vertices):
- """Computes length of the shortest path from vertex to any target vertexes.
-
- Args:
- vertex: integer, index of the vertex in the environment graph.
- target_vertices: list of the target vertices
-
- Returns:
- integer, minimum distance from the vertex to any of the target_vertices.
-
- Raises:
- ValueError: if there is no path between the vertex and at least one of
- the target_vertices.
- """
- try:
- return np.min([
- len(nx.shortest_path(self._env.graph, vertex, t))
- for t in target_vertices
- ])
- except:
- #logging.error('there is no path between vertex %d and at least one of '
- # 'the targets %r', vertex, target_vertices)
- raise
-
- def _compute_gt_value(self, vertex, target_vertices):
- """Computes groundtruth value of all the actions at the vertex.
-
- The value of each action is the difference each action makes in the length
- of the shortest path to the goal. If an action takes the agent one step
- closer to the goal, the value is 1. If it takes the agent one step away
- from the goal, it is -1. If it leads to a collision, or if the agent uses the
- stop action before reaching the goal, it is -2. To avoid scale issues the
- gt_values are multiplied by 0.5.
-
- Args:
- vertex: integer, the index of current vertex.
- target_vertices: list of the integer indexes of the target views.
-
- Returns:
- numpy array with shape (action_size,) and each element is the groundtruth
- value of each action based on the progress each action makes.
- """
- action_size = self._config.output.shape[1]
- output_value = np.ones((action_size), dtype=np.float32) * -2
- my_distance = self._compute_shortest_path_length(vertex, target_vertices)
- for adj in self._env.graph[vertex]:
- adj_distance = self._compute_shortest_path_length(adj, target_vertices)
- if adj_distance is None:
- continue
- action_index = self._env.action(
- self._env.vertex_to_pose(vertex), self._env.vertex_to_pose(adj))
- assert action_index is not None, ('{} is not adjacent to {}. There might '
- 'be a problem in environment graph '
- 'connectivity because there is no '
- 'direct edge between the given '
- 'vertices').format(
- self._env.vertex_to_pose(vertex),
- self._env.vertex_to_pose(adj))
- output_value[action_index] = my_distance - adj_distance
-
- return output_value * 0.5
-
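-  # Worked example for _compute_gt_value (hypothetical graph, for illustration
-  # only): if the current vertex is 3 steps from the goal, an adjacent vertex
-  # at distance 2 gives (3 - 2) * 0.5 = 0.5 for the corresponding action, one
-  # at distance 4 gives -0.5, and actions with no valid adjacent vertex keep
-  # the default of -2 * 0.5 = -1.0.
-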
- def episode(self):
- """Returns data needed to train and test a single episode.
-
- Returns:
- (inputs, None, output) where inputs is a dictionary of modality types to
- numpy arrays. The second element is query but we assume that the goal
- is also given as part of observation so it should be None for this task,
- and the outputs is the tuple of ground truth action values with the
- shape of (sequence_length x action_size) that is coming from
- config.output.shape and a numpy array with the shape of
- (sequence_length,) that is 1 if the corresponding element of the
- input and output should be used in the training optimization.
-
- Raises:
- ValueError: If the output values for env.random_step_sequence is not
- valid.
- ValueError: If the shape of observations coming from the env is not
- consistent with the config.
- ValueError: If there is a modality type specified in the config but the
- environment does not return that.
- """
- # Sequence length is the first dimension of any of the input tensors.
- sequence_length = self._config.inputs.values()[0].shape[0]
- modality_types = self._config.inputs.keys()
-
- path, _, _, step_outputs = self._env.random_step_sequence(
- max_len=sequence_length)
- target_vertices = [self._env.pose_to_vertex(x) for x in self._env.targets()]
-
- if len(path) != len(step_outputs):
- raise ValueError('path and step_outputs should have equal length'
- ' {}!={}'.format(len(path), len(step_outputs)))
-
- # Building up observations. observations will be a OrderedDict of
- # modality types. The values are numpy arrays that follow the given shape
- # in the input config for each modality type.
- observations = collections.OrderedDict([k, []] for k in modality_types)
- for step_output in step_outputs:
- obs_dict = step_output[0]
- # Only going over the modality types that are specified in the input
- # config.
- for modality_type in modality_types:
- if modality_type not in obs_dict:
- raise ValueError('modality type is not returned from the environment.'
- ' {} not in {}'.format(modality_type,
- obs_dict.keys()))
- obs = obs_dict[modality_type]
- if np.any(
- obs.shape != tuple(self._config.inputs[modality_type].shape[1:])):
- raise ValueError(
- 'The observations should have the same size as specified in'
- ' config for modality type {}. {} != {}'.format(
- modality_type, obs.shape,
- self._config.inputs[modality_type].shape[1:]))
- observations[modality_type].append(obs)
-
- gt_value = [self._compute_gt_value(v, target_vertices) for v in path]
-
- # pylint: disable=unbalanced-tuple-unpacking
- gt_value, _, value_mask = _pad_or_clip_array(
- np.array(gt_value),
- sequence_length,
- is_front_clip=False,
- output_mask=True,
- )
- for modality_type, obs in observations.iteritems():
- observations[modality_type], _, mask = _pad_or_clip_array(
- np.array(obs), sequence_length, is_front_clip=False, output_mask=True)
- assert np.all(mask == value_mask)
-
- return observations, None, (gt_value, value_mask)
-
- def reset(self, observation):
- """Called after the environment is reset."""
- pass
-
- def target_loss(self, true_targets, targets, weights=None):
- """A loss for training a task model.
-
- This loss measures the discrepancy between the task outputs, the true and
- predicted ones.
-
- Args:
- true_targets: tf.Tensor of tf.float32 with the shape of
- (batch_size x sequence_length x action_size).
- targets: tf.Tensor of tf.float32 with the shape of
- (batch_size x sequence_length x action_size).
- weights: tf.Tensor of tf.bool with the shape of
- (batch_size x sequence_length).
-
- Raises:
- ValueError: if the shapes of the input tensors are not consistent.
-
- Returns:
- L2 loss between the predicted action values and true action values.
- """
- targets_shape = targets.get_shape().as_list()
- true_targets_shape = true_targets.get_shape().as_list()
- if len(targets_shape) != 3 or len(true_targets_shape) != 3:
- raise ValueError('invalid shape for targets or true_targets_shape')
- if np.any(targets_shape != true_targets_shape):
- raise ValueError('the shapes of targets and true_targets are not the same'
- ' {} != {}'.format(targets_shape, true_targets_shape))
-
- if weights is not None:
- # Filtering targets and true_targets using weights.
- weights_shape = weights.get_shape().as_list()
- if np.any(weights_shape != targets_shape[0:2]):
- raise ValueError('The first two elements of weights shape should match'
- ' target. {} != {}'.format(weights_shape,
- targets_shape))
- true_targets = tf.boolean_mask(true_targets, weights)
- targets = tf.boolean_mask(targets, weights)
-
- return tf.losses.mean_squared_error(tf.reshape(targets, [-1]),
- tf.reshape(true_targets, [-1]))
-
- def reward(self, obs, done, info):
- raise NotImplementedError('reward is not implemented for this task')
-
-
-################################################################################
-class NewTask(UnrolledTask):
- def __init__(self, env, *args, **kwargs):
- super(NewTask, self).__init__(*args, **kwargs)
- self._env = env
-
- def _compute_shortest_path_length(self, vertex, target_vertices):
- """Computes length of the shortest path from vertex to any target vertexes.
-
- Args:
- vertex: integer, index of the vertex in the environment graph.
- target_vertices: list of the target vertices
-
- Returns:
- integer, minimum distance from the vertex to any of the target_vertices.
-
- Raises:
- ValueError: if there is no path between the vertex and at least one of
- the target_vertices.
- """
- try:
- return np.min([
- len(nx.shortest_path(self._env.graph, vertex, t))
- for t in target_vertices
- ])
- except:
- #logging.error('there is no path between vertex %d and at least one of '
- # 'the targets %r', vertex, target_vertices)
- raise
-
- def _compute_gt_value(self, vertex, target_vertices):
- """Computes groundtruth value of all the actions at the vertex.
-
- The value of each action is the difference each action makes in the length
- of the shortest path to the goal. If an action takes the agent one step
- closer to the goal, the value is 1. If it takes the agent one step away
- from the goal, it is -1. If it leads to a collision, or if the agent uses the
- stop action before reaching the goal, it is -2. To avoid scale issues the
- gt_values are multiplied by 0.5.
-
- Args:
- vertex: integer, the index of current vertex.
- target_vertices: list of the integer indexes of the target views.
-
- Returns:
- numpy array with shape (action_size,) and each element is the groundtruth
- value of each action based on the progress each action makes.
- """
- action_size = self._config.output.shape[1]
- output_value = np.ones((action_size), dtype=np.float32) * -2
- # uses its own _compute_shortest_path_length - returns a float
- my_distance = self._compute_shortest_path_length(vertex, target_vertices)
- for adj in self._env.graph[vertex]:
- adj_distance = self._compute_shortest_path_length(adj, target_vertices)
- if adj_distance is None:
- continue
- action_index = self._env.action(
- self._env.vertex_to_pose(vertex), self._env.vertex_to_pose(adj))
- assert action_index is not None, ('{} is not adjacent to {}. There might '
- 'be a problem in environment graph '
- 'connectivity because there is no '
- 'direct edge between the given '
- 'vertices').format(
- self._env.vertex_to_pose(vertex),
- self._env.vertex_to_pose(adj))
- output_value[action_index] = my_distance - adj_distance
-
- return output_value * 0.5
-
- def episode(self):
- """Returns data needed to train and test a single episode.
-
- Returns:
- (inputs, None, output) where inputs is a dictionary of modality types to
- numpy arrays. The second element is query but we assume that the goal
- is also given as part of observation so it should be None for this task,
- and the outputs is the tuple of ground truth action values with the
- shape of (sequence_length x action_size) that is coming from
- config.output.shape and a numpy array with the shape of
- (sequence_length,) that is 1 if the corresponding element of the
- input and output should be used in the training optimization.
-
- Raises:
- ValueError: If the output values for env.random_step_sequence is not
- valid.
- ValueError: If the shape of observations coming from the env is not
- consistent with the config.
- ValueError: If there is a modality type specified in the config but the
- environment does not return that.
- """
- # Sequence length is the first dimension of any of the input tensors.
- sequence_length = self._config.inputs.values()[0].shape[0]
- modality_types = self._config.inputs.keys()
-
- path, _, _, step_outputs = self._env.random_step_sequence(
- max_len=sequence_length)
- target_vertices = [self._env.pose_to_vertex(x) for x in self._env.targets()]
-
- if len(path) != len(step_outputs):
- raise ValueError('path and step_outputs should have equal length'
- ' {}!={}'.format(len(path), len(step_outputs)))
-
- # Building up observations. observations will be a OrderedDict of
- # modality types. The values are numpy arrays that follow the given shape
- # in the input config for each modality type.
- observations = collections.OrderedDict([k, []] for k in modality_types)
- for step_output in step_outputs:
- obs_dict = step_output[0]
- # Only going over the modality types that are specified in the input
- # config.
- for modality_type in modality_types:
- if modality_type not in obs_dict:
- raise ValueError('modality type is not returned from the environment.'
- ' {} not in {}'.format(modality_type,
- obs_dict.keys()))
- obs = obs_dict[modality_type]
- if np.any(
- obs.shape != tuple(self._config.inputs[modality_type].shape[1:])):
- raise ValueError(
- 'The observations should have the same size as specified in'
- ' config for modality type {}. {} != {}'.format(
- modality_type, obs.shape,
- self._config.inputs[modality_type].shape[1:]))
- observations[modality_type].append(obs)
-
- gt_value = [self._compute_gt_value(v, target_vertices) for v in path]
-
- # pylint: disable=unbalanced-tuple-unpacking
- gt_value, _, value_mask = _pad_or_clip_array(
- np.array(gt_value),
- sequence_length,
- is_front_clip=False,
- output_mask=True,
- )
- for modality_type, obs in observations.iteritems():
- observations[modality_type], _, mask = _pad_or_clip_array(
- np.array(obs), sequence_length, is_front_clip=False, output_mask=True)
- assert np.all(mask == value_mask)
-
- return observations, None, (gt_value, value_mask)
-
- def reset(self, observation):
- """Called after the environment is reset."""
- pass
-
- def target_loss(self, true_targets, targets, weights=None):
- """A loss for training a task model.
-
- This loss measures the discrepancy between the task outputs, the true and
- predicted ones.
-
- Args:
- true_targets: tf.Tensor of tf.float32 with the shape of
- (batch_size x sequence_length x action_size).
- targets: tf.Tensor of tf.float32 with the shape of
- (batch_size x sequence_length x action_size).
- weights: tf.Tensor of tf.bool with the shape of
- (batch_size x sequence_length).
-
- Raises:
- ValueError: if the shapes of the input tensors are not consistent.
-
- Returns:
- L2 loss between the predicted action values and true action values.
- """
- targets_shape = targets.get_shape().as_list()
- true_targets_shape = true_targets.get_shape().as_list()
- if len(targets_shape) != 3 or len(true_targets_shape) != 3:
- raise ValueError('invalid shape for targets or true_targets_shape')
- if np.any(targets_shape != true_targets_shape):
- raise ValueError('the shapes of targets and true_targets are not the same'
- ' {} != {}'.format(targets_shape, true_targets_shape))
-
- if weights is not None:
- # Filtering targets and true_targets using weights.
- weights_shape = weights.get_shape().as_list()
- if np.any(weights_shape != targets_shape[0:2]):
- raise ValueError('The first two elements of weights shape should match'
- ' target. {} != {}'.format(weights_shape,
- targets_shape))
- true_targets = tf.boolean_mask(true_targets, weights)
- targets = tf.boolean_mask(targets, weights)
-
- return tf.losses.mean_squared_error(tf.reshape(targets, [-1]),
- tf.reshape(true_targets, [-1]))
-
- def reward(self, obs, done, info):
- raise NotImplementedError('reward is not implemented for this task')
diff --git a/spaces/NSect/VALL-E-X/utils/g2p/cleaners.py b/spaces/NSect/VALL-E-X/utils/g2p/cleaners.py
deleted file mode 100644
index 8bb53977673afa893476cab1b946d6a9a00f57ea..0000000000000000000000000000000000000000
--- a/spaces/NSect/VALL-E-X/utils/g2p/cleaners.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import re
-from utils.g2p.japanese import japanese_to_romaji_with_accent, japanese_to_ipa, japanese_to_ipa2, japanese_to_ipa3
-from utils.g2p.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo, chinese_to_romaji, chinese_to_lazy_ipa, chinese_to_ipa, chinese_to_ipa2
-from utils.g2p.english import english_to_lazy_ipa, english_to_ipa2, english_to_lazy_ipa2
-patterns = [r'\[EN\](.*?)\[EN\]', r'\[ZH\](.*?)\[ZH\]', r'\[JA\](.*?)\[JA\]']
-def japanese_cleaners(text):
- text = japanese_to_romaji_with_accent(text)
- text = re.sub(r'([A-Za-z])$', r'\1.', text)
- return text
-
-def japanese_cleaners2(text):
- return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…')
-
-def chinese_cleaners(text):
- '''Pipeline for Chinese text'''
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text)
- return text
-
-def cje_cleaners(text):
- matches = []
- for pattern in patterns:
- matches.extend(re.finditer(pattern, text))
-
- matches.sort(key=lambda x: x.start()) # Sort matches by their start positions
-
- outputs = ""
- output_langs = []
-
- for match in matches:
- text_segment = text[match.start():match.end()]
- phon = clean_one(text_segment)
- if "[EN]" in text_segment:
- lang = 'en'
- elif "[ZH]" in text_segment:
- lang = 'zh'
- elif "[JA]" in text_segment:
- lang = 'ja'
- else:
-            raise ValueError("If you see this error, please report this bug on the issue tracker.")
- outputs += phon
- output_langs += [lang] * len(phon)
- assert len(outputs) == len(output_langs)
- return outputs, output_langs
-
-
-def clean_one(text):
- if text.find('[ZH]') != -1:
- text = re.sub(r'\[ZH\](.*?)\[ZH\]',
- lambda x: chinese_to_ipa(x.group(1))+' ', text)
- if text.find('[JA]') != -1:
- text = re.sub(r'\[JA\](.*?)\[JA\]',
- lambda x: japanese_to_ipa2(x.group(1))+' ', text)
- if text.find('[EN]') != -1:
- text = re.sub(r'\[EN\](.*?)\[EN\]',
- lambda x: english_to_ipa2(x.group(1))+' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
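
The cleaners above split mixed-language text on [EN]/[ZH]/[JA] tags and keep a per-character language label alongside the phonemes. Below is a self-contained sketch of that splitting logic; placeholder_cleaner is a hypothetical stand-in for the real chinese_to_ipa / japanese_to_ipa2 / english_to_ipa2 converters.

import re

patterns = [r'\[EN\](.*?)\[EN\]', r'\[ZH\](.*?)\[ZH\]', r'\[JA\](.*?)\[JA\]']

def placeholder_cleaner(segment):
    # Stand-in for the real grapheme-to-phoneme converters.
    return segment.lower() + ' '

def toy_cje_cleaners(text):
    matches = []
    for pattern in patterns:
        matches.extend(re.finditer(pattern, text))
    matches.sort(key=lambda m: m.start())  # keep the original segment order

    outputs, output_langs = '', []
    for m in matches:
        lang = {'[EN]': 'en', '[ZH]': 'zh', '[JA]': 'ja'}[m.group(0)[:4]]
        phon = placeholder_cleaner(m.group(1))
        outputs += phon
        output_langs += [lang] * len(phon)
    assert len(outputs) == len(output_langs)
    return outputs, output_langs

print(toy_cje_cleaners('[EN]Hello[EN][JA]Konnichiwa[JA]'))
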
diff --git a/spaces/Nee001/bing0/src/components/toaster.tsx b/spaces/Nee001/bing0/src/components/toaster.tsx
deleted file mode 100644
index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000
--- a/spaces/Nee001/bing0/src/components/toaster.tsx
+++ /dev/null
@@ -1,3 +0,0 @@
-'use client'
-
-export { Toaster } from 'react-hot-toast'
diff --git a/spaces/Nephele/bert-vits2-multi-voice/text/__init__.py b/spaces/Nephele/bert-vits2-multi-voice/text/__init__.py
deleted file mode 100644
index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000
--- a/spaces/Nephele/bert-vits2-multi-voice/text/__init__.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from text.symbols import *
-
-
-_symbol_to_id = {s: i for i, s in enumerate(symbols)}
-
-def cleaned_text_to_sequence(cleaned_text, tones, language):
-  '''Converts cleaned text into sequences of symbol IDs, tone IDs and language IDs.
-    Args:
-      cleaned_text: sequence of cleaned symbols to convert
-      tones: tone index for each symbol, shifted by the language's tone offset
-      language: language key used to look up the tone offset and the language ID
-    Returns:
-      Lists of symbol IDs, shifted tone IDs and language IDs
-  '''
- phones = [_symbol_to_id[symbol] for symbol in cleaned_text]
- tone_start = language_tone_start_map[language]
- tones = [i + tone_start for i in tones]
- lang_id = language_id_map[language]
- lang_ids = [lang_id for i in phones]
- return phones, tones, lang_ids
-
-def get_bert(norm_text, word2ph, language):
- from .chinese_bert import get_bert_feature as zh_bert
- from .english_bert_mock import get_bert_feature as en_bert
- lang_bert_func_map = {
- 'ZH': zh_bert,
- 'EN': en_bert
- }
- bert = lang_bert_func_map[language](norm_text, word2ph)
- return bert
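
For reference, here is a tiny self-contained sketch of the symbol-to-ID mapping that cleaned_text_to_sequence performs; the symbol list and the two maps below are toy values, since the real ones come from text/symbols.py and are not shown in this diff.

symbols = ['_', 'a', 'b', 'c']                   # toy symbol inventory
language_tone_start_map = {'ZH': 0, 'EN': 6}     # toy tone offsets
language_id_map = {'ZH': 0, 'EN': 1}             # toy language IDs

_symbol_to_id = {s: i for i, s in enumerate(symbols)}

def toy_cleaned_text_to_sequence(cleaned_text, tones, language):
    phones = [_symbol_to_id[symbol] for symbol in cleaned_text]
    tones = [t + language_tone_start_map[language] for t in tones]
    lang_ids = [language_id_map[language]] * len(phones)
    return phones, tones, lang_ids

print(toy_cleaned_text_to_sequence(['a', 'b', 'c'], [1, 2, 3], 'EN'))
# -> ([1, 2, 3], [7, 8, 9], [1, 1, 1])
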
diff --git a/spaces/Nicholaspei/LangChain-ChatLLM/app.py b/spaces/Nicholaspei/LangChain-ChatLLM/app.py
deleted file mode 100644
index ad9b5e1acfc7f2b433fa826a365279a0a5912e58..0000000000000000000000000000000000000000
--- a/spaces/Nicholaspei/LangChain-ChatLLM/app.py
+++ /dev/null
@@ -1,272 +0,0 @@
-import os
-
-import gradio as gr
-import nltk
-import sentence_transformers
-import torch
-from duckduckgo_search import ddg
-from duckduckgo_search.utils import SESSION
-from langchain.chains import RetrievalQA
-from langchain.document_loaders import UnstructuredFileLoader
-from langchain.embeddings import JinaEmbeddings
-from langchain.embeddings.huggingface import HuggingFaceEmbeddings
-from langchain.prompts import PromptTemplate
-from langchain.prompts.prompt import PromptTemplate
-from langchain.vectorstores import FAISS
-
-from chatllm import ChatLLM
-from chinese_text_splitter import ChineseTextSplitter
-
-nltk.data.path.append('./nltk_data')
-
-embedding_model_dict = {
- "ernie-tiny": "nghuyong/ernie-3.0-nano-zh",
- "ernie-base": "nghuyong/ernie-3.0-base-zh",
- "text2vec-base": "GanymedeNil/text2vec-base-chinese",
- "ViT-B-32": 'ViT-B-32::laion2b-s34b-b79k'
-}
-
-llm_model_dict = {
- "ChatGLM-6B-int8": "THUDM/chatglm-6b-int8",
- "ChatGLM-6B-int4": "THUDM/chatglm-6b-int4",
- "ChatGLM-6b-int4-qe": "THUDM/chatglm-6b-int4-qe",
- "Minimax": "Minimax"
-}
-
-DEVICE = "cuda" if torch.cuda.is_available(
-) else "mps" if torch.backends.mps.is_available() else "cpu"
-
-
-def search_web(query):
-
- SESSION.proxies = {
- "http": f"socks5h://localhost:7890",
- "https": f"socks5h://localhost:7890"
- }
- results = ddg(query)
- web_content = ''
- if results:
- for result in results:
- web_content += result['body']
- return web_content
-
-
-def load_file(filepath):
- if filepath.lower().endswith(".pdf"):
- loader = UnstructuredFileLoader(filepath)
- textsplitter = ChineseTextSplitter(pdf=True)
- docs = loader.load_and_split(textsplitter)
- else:
- loader = UnstructuredFileLoader(filepath, mode="elements")
- textsplitter = ChineseTextSplitter(pdf=False)
- docs = loader.load_and_split(text_splitter=textsplitter)
- return docs
-
-
-def init_knowledge_vector_store(embedding_model, filepath):
- if embedding_model == "ViT-B-32":
- jina_auth_token = os.getenv('jina_auth_token')
- embeddings = JinaEmbeddings(
- jina_auth_token=jina_auth_token,
- model_name=embedding_model_dict[embedding_model])
- else:
- embeddings = HuggingFaceEmbeddings(
- model_name=embedding_model_dict[embedding_model], )
- embeddings.client = sentence_transformers.SentenceTransformer(
- embeddings.model_name, device=DEVICE)
-
- docs = load_file(filepath)
-
- vector_store = FAISS.from_documents(docs, embeddings)
- return vector_store
-
-
-def get_knowledge_based_answer(query,
- large_language_model,
- vector_store,
- VECTOR_SEARCH_TOP_K,
- web_content,
- history_len,
- temperature,
- top_p,
- chat_history=[]):
- if web_content:
- prompt_template = f"""基于以下已知信息,简洁和专业的来回答用户的问题。
- 如果无法从中得到答案,请说 "根据已知信息无法回答该问题" 或 "没有提供足够的相关信息",不允许在答案中添加编造成分,答案请使用中文。
- 已知网络检索内容:{web_content}""" + """
- 已知内容:
- {context}
- 问题:
- {question}"""
- else:
- prompt_template = """基于以下已知信息,请简洁并专业地回答用户的问题。
- 如果无法从中得到答案,请说 "根据已知信息无法回答该问题" 或 "没有提供足够的相关信息"。不允许在答案中添加编造成分。另外,答案请使用中文。
-
- 已知内容:
- {context}
-
- 问题:
- {question}"""
- prompt = PromptTemplate(template=prompt_template,
- input_variables=["context", "question"])
- chatLLM = ChatLLM()
- chatLLM.history = chat_history[-history_len:] if history_len > 0 else []
- if large_language_model == "Minimax":
- chatLLM.model = 'Minimax'
- else:
- chatLLM.load_model(
- model_name_or_path=llm_model_dict[large_language_model])
- chatLLM.temperature = temperature
- chatLLM.top_p = top_p
-
- knowledge_chain = RetrievalQA.from_llm(
- llm=chatLLM,
- retriever=vector_store.as_retriever(
- search_kwargs={"k": VECTOR_SEARCH_TOP_K}),
- prompt=prompt)
- knowledge_chain.combine_documents_chain.document_prompt = PromptTemplate(
- input_variables=["page_content"], template="{page_content}")
-
- knowledge_chain.return_source_documents = True
-
- result = knowledge_chain({"query": query})
- return result
-
-
-def clear_session():
- return '', None
-
-
-def predict(input,
- large_language_model,
- embedding_model,
- file_obj,
- VECTOR_SEARCH_TOP_K,
- history_len,
- temperature,
- top_p,
- use_web,
- history=None):
- if history == None:
- history = []
- print(file_obj.name)
- vector_store = init_knowledge_vector_store(embedding_model, file_obj.name)
- if use_web == 'True':
- web_content = search_web(query=input)
- else:
- web_content = ''
- resp = get_knowledge_based_answer(
- query=input,
- large_language_model=large_language_model,
- vector_store=vector_store,
- VECTOR_SEARCH_TOP_K=VECTOR_SEARCH_TOP_K,
- web_content=web_content,
- chat_history=history,
- history_len=history_len,
- temperature=temperature,
- top_p=top_p,
- )
- print(resp)
- history.append((input, resp['result']))
- return '', history, history
-
-
-if __name__ == "__main__":
- block = gr.Blocks()
- with block as demo:
- gr.Markdown("""
-
-
-
-
\ No newline at end of file
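
The deleted app above routes retrieval through LangChain's FAISS and RetrievalQA wrappers. As a rough, self-contained sketch of the same embed-index-retrieve core, the snippet below uses sentence-transformers and FAISS directly; the model name and the three documents are illustrative assumptions, not values from the app.

import faiss
from sentence_transformers import SentenceTransformer

docs = ["Sentence embeddings map text to dense vectors.",
        "FAISS performs fast vector similarity search.",
        "Gradio builds simple web demos for ML models."]

encoder = SentenceTransformer("all-MiniLM-L6-v2")        # illustrative model
doc_vecs = encoder.encode(docs, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(doc_vecs.shape[1])             # exact L2 index
index.add(doc_vecs)

query = "Which library does similarity search?"
q_vec = encoder.encode([query], convert_to_numpy=True).astype("float32")
_, ids = index.search(q_vec, 2)                          # top-2 chunks

context = "\n".join(docs[i] for i in ids[0])
# In the app, the retrieved context is substituted into the prompt template
# and passed to the ChatLLM wrapper to produce the final answer.
print(context)
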
diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/nitrate poisoning.md b/spaces/SarthakSidhant/Go-Cattle/diseases/nitrate poisoning.md
deleted file mode 100644
index 681d128c69f6c10dc303508978885f859d47dd7d..0000000000000000000000000000000000000000
--- a/spaces/SarthakSidhant/Go-Cattle/diseases/nitrate poisoning.md
+++ /dev/null
@@ -1,34 +0,0 @@
-## Nitrate poisoning
-
-**Information:** Nitrate poisoning is a condition that affects cattle. It is caused by the ingestion of excessive amounts of nitrates.
-
-**Symptoms:**
-
-* Weakness
-* Pale mucous membranes
-* Rapid heart rate
-* Restlessness
-* Seizures
-* Coma
-
-**Remedies:**
-
-* Nitrate poisoning is a medical emergency and requires immediate treatment.
-* Treatment usually involves giving the cow an injection of methylene blue to help convert the nitrates into a less harmful form.
-* The cow may also need to be given fluids and electrolytes to prevent dehydration.
-* In severe cases, the cow may need to be hospitalized.
-
-**Causes:**
-
-* Nitrate poisoning is caused by the ingestion of excessive amounts of nitrates. Nitrates are found in some plants, such as legumes, and in some fertilizers.
-* Cattle are more likely to develop nitrate poisoning if they are fed a diet that is high in grain and low in forage.
-* Nitrate poisoning can also occur if cattle eat snow or ice that has melted from fields that have been fertilized with nitrate-containing fertilizers.
-
-**Prevention:**
-
-* The best way to prevent nitrate poisoning is to feed cattle a diet that is balanced in carbohydrates and fiber.
-* Cattle should also be provided with plenty of fresh water.
-* Cattle should not be fed snow or ice that has melted from fields that have been fertilized with nitrate-containing fertilizers.
-* Animals should be monitored for signs of nitrate poisoning, such as weakness, pale mucous membranes, and a rapid heart rate.
-* If an animal is suspected of having nitrate poisoning, it should be taken to a veterinarian immediately for diagnosis and treatment.
-
diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/csvutil.py b/spaces/ServerX/PorcoDiaz/infer/lib/csvutil.py
deleted file mode 100644
index 79f432b6933f181d9194c50581656f2fd6e66c0c..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/infer/lib/csvutil.py
+++ /dev/null
@@ -1,41 +0,0 @@
-
-import numpy as np
-
-# import praatio
-# import praatio.praat_scripts
-import os
-import sys
-
-import random
-
-import csv
-
-# praatEXE = join('.',os.path.abspath(os.getcwd()) + r"\Praat.exe")
-
-
-def CSVutil(file, rw, type, *args):
- if type == "formanting":
- if rw == "r":
- with open(file) as fileCSVread:
- csv_reader = list(csv.reader(fileCSVread))
- return (
- (csv_reader[0][0], csv_reader[0][1], csv_reader[0][2])
- if csv_reader is not None
- else (lambda: exec('raise ValueError("No data")'))()
- )
- else:
- if args:
- doformnt = args[0]
- else:
- doformnt = False
- qfr = args[1] if len(args) > 1 else 1.0
- tmb = args[2] if len(args) > 2 else 1.0
- with open(file, rw, newline="") as fileCSVwrite:
- csv_writer = csv.writer(fileCSVwrite, delimiter=",")
- csv_writer.writerow([doformnt, qfr, tmb])
- elif type == "stop":
- stop = args[0] if args else False
- with open(file, rw, newline="") as fileCSVwrite:
- csv_writer = csv.writer(fileCSVwrite, delimiter=",")
- csv_writer.writerow([stop])
-
diff --git a/spaces/ShaunWithGPT/ChuanhuChatGPT/presets.py b/spaces/ShaunWithGPT/ChuanhuChatGPT/presets.py
deleted file mode 100644
index 53352fd7219c70394a4227bc53cbd190b1ac1970..0000000000000000000000000000000000000000
--- a/spaces/ShaunWithGPT/ChuanhuChatGPT/presets.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# -*- coding:utf-8 -*-
-# Error messages
-standard_error_msg = "☹️发生了错误:" # standard prefix for error messages
-error_retrieve_prompt = "请检查网络连接,或者API-Key是否有效。" # error while fetching a reply
-connection_timeout_prompt = "连接超时,无法获取对话。" # connection timed out
-read_timeout_prompt = "读取超时,无法获取对话。" # read timed out
-proxy_error_prompt = "代理错误,无法获取对话。" # proxy error
-ssl_error_prompt = "SSL错误,无法获取对话。" # SSL error
-no_apikey_msg = "API key长度不是51位,请检查是否输入正确。" # API key is not 51 characters long
-
-max_token_streaming = 3500 # maximum number of tokens in streaming mode
-timeout_streaming = 30 # timeout in streaming mode
-max_token_all = 3500 # maximum number of tokens in non-streaming mode
-timeout_all = 200 # timeout in non-streaming mode
-enable_streaming_option = True # whether to show the checkbox that toggles real-time display of replies
-HIDE_MY_KEY = False # set this to True if you want to hide your API key in the UI
-
-SIM_K = 5
-INDEX_QUERY_TEMPRATURE = 1.0
-
-title = """
- """
- st.markdown(html_temp, unsafe_allow_html=True)
-
- text = st.text_input("Enter your News")
-
-
- if st.button("Predict"):
- pred_prob = new_model.predict([text])
- predict = tf.squeeze(tf.round(pred_prob)).numpy()
- st.subheader("AI thinks that ...")
-
- if predict > 0:
-
- st.success(
- f"It's Real news, you can trust it. Confidence Level is {tf.round(pred_prob,3)*100}%",icon="✅")
- else:
- st.warning(
- f"Beware!! It's a Fake News. Confidence Level is {tf.round(100 - pred_prob,2)}%", icon="⚠️")
-
- if st.button("About"):
-
- st.text("Built with Streamlit")
-
-
-if __name__ == '__main__':
- main()
-
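
The prediction branch above reduces a sigmoid probability to a binary label by rounding; a small NumPy sketch of that thresholding, with a made-up probability, looks like this.

import numpy as np

pred_prob = np.array([[0.87]])   # made-up sigmoid output in [0, 1]
p = float(pred_prob[0][0])
label = round(p)                 # 1 -> "real", 0 -> "fake"

if label > 0:
    print(f"Real news, confidence {round(p * 100, 1)}%")
else:
    print(f"Fake news, confidence {round((1 - p) * 100, 1)}%")
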
diff --git a/spaces/amgross01/Stocks_Trading_Assistant/functions.py b/spaces/amgross01/Stocks_Trading_Assistant/functions.py
deleted file mode 100644
index 6e43af13b84f2d196f5e41bbd88a379267a08de2..0000000000000000000000000000000000000000
--- a/spaces/amgross01/Stocks_Trading_Assistant/functions.py
+++ /dev/null
@@ -1,482 +0,0 @@
-from asyncio.constants import LOG_THRESHOLD_FOR_CONNLOST_WRITES
-import yfinance as yf
-import pandas as pd
-import numpy as np
-import plotly.graph_objs as go
-from stocks import *
-from transformers import AutoModelForSequenceClassification, pipeline, AutoTokenizer
-import os
-from random import random
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
-import tensorflow as tf
-import math
-import datetime
-import random
-import time
-#import kaleido
-from sklearn.preprocessing import MinMaxScaler
-import matplotlib.pyplot as plt
-#import warnings
-import tensorflow as tf
-from tensorflow import keras
-from keras.layers import Dropout, Activation
-from keras import layers
-from keras.callbacks import EarlyStopping
-from sklearn.metrics import r2_score
-import plotly.graph_objs as go
-import plotly.io as pio
-pio.templates
-
-model = AutoModelForSequenceClassification.from_pretrained("fine_tuned_FinBERT", from_tf=False, config="config.json")
-tokenizer = AutoTokenizer.from_pretrained("fine_tuned_FinBERT/tokenizer/")
-
-class Models(object):
- def __init__(self):
- self.stock_data = Stock_Data()
-
- def bollinger_bands_20d_2std(self, ticker):
- '''
-        This method calculates the Bollinger Bands using a rolling average over the last 20 days and 2 standard deviations. In a plot,
-        this would be represented as 3 lines: a rolling average, an upper bound (rolling average + 2 standard deviations) and a lower
-        bound (rolling average - 2 standard deviations). When the price of a stock is between the rolling average and the lower bound, it is
-        considered oversold, so it makes sense to buy; if it is between the rolling average and the upper bound, it is considered
-        overbought, so it makes sense to sell; if it equals the rolling average it is neutral; and if it falls outside the bounds, it is
-        considered an Unusual Event. The method returns the outlook of the stock ("Buy", "Sell", "Hold" or "Unusual Event").
- '''
- if self.stock_data.status_getter(ticker) != "Open":
- return "Market Closed"
- else:
- data = self.stock_data.stock_data_getter(ticker)
- low_high_closing_df = pd.DataFrame(data)
- low_high_closing_df = data.iloc[:, 4:5] # Getting only the "Adj Close" column
- low_high_closing_df = low_high_closing_df.tail(40) # Getting the last 40 days
-
- low_high_closing_df["rolling_avg_20d"] = low_high_closing_df['Adj Close'].rolling(20, min_periods = 20).mean()
- low_high_closing_df["sd"] = low_high_closing_df["Adj Close"].rolling(20, min_periods = 20).std()
- low_high_closing_df = low_high_closing_df.tail(20) # Keeping the last 20 days only
-
- recent_data = low_high_closing_df.iloc[-1, :].to_list() # Creating a Series object with the most recent data (last row only)
-
- upper_bound = recent_data[1] + 2*recent_data[2] # Upper Bound
- lower_bound = recent_data[1] - 2*recent_data[2] # Lower Bound
- mean_20d = recent_data[1] # Rolling average of last 20 days
-
- if self.stock_data.current_price_getter(ticker) is None:
- return "Market Closed"
- else:
- message = ""
-
- if self.stock_data.current_price_getter(ticker) < mean_20d and self.stock_data.current_price_getter(ticker) >= lower_bound:
- message = "Buy"
- elif self.stock_data.current_price_getter(ticker) > mean_20d and self.stock_data.current_price_getter(ticker) <= upper_bound:
- message = "Sell"
- elif self.stock_data.current_price_getter(ticker) == mean_20d:
- message = "Hold"
- elif self.stock_data.current_price_getter(ticker) <= lower_bound or self.stock_data.current_price_getter(ticker) >= upper_bound:
- message = "Unusual Event"
- return message
-
- def bollinger_bands_10d_1point5std(self, ticker):
- '''
-        This method calculates the Bollinger Bands using a rolling average over the last 10 days and 1.5 standard deviations. In a plot,
-        this would be represented as 3 lines: a rolling average, an upper bound (rolling average + 1.5 standard deviations) and a lower
-        bound (rolling average - 1.5 standard deviations). When the price of a stock is between the rolling average and the lower bound, it is
-        considered oversold, so it makes sense to buy; if it is between the rolling average and the upper bound, it is considered
-        overbought, so it makes sense to sell; if it equals the rolling average it is neutral; and if it falls outside the bounds, it is
-        considered an Unusual Event. The method returns the outlook of the stock ("Buy", "Sell", "Hold" or "Unusual Event").
- '''
- if self.stock_data.status_getter(ticker) != "Open":
- return "Market Closed"
- else:
- data = self.stock_data.stock_data_getter(ticker)
-
- low_high_closing_df = pd.DataFrame(data)
- low_high_closing_df = data.iloc[:, 4:5] # Getting only the "Adj Close" column
- low_high_closing_df = low_high_closing_df.tail(20) # Getting the last 20 days
-
- low_high_closing_df["rolling_avg_10d"] = low_high_closing_df['Adj Close'].rolling(10, min_periods = 10).mean()
- low_high_closing_df["sd"] = low_high_closing_df["Adj Close"].rolling(10, min_periods = 10).std()
- low_high_closing_df = low_high_closing_df.tail(10) # Keeping the last 10 days only
-
- recent_data = low_high_closing_df.iloc[-1, :].to_list() # Creating a Series object with the most recent data (last row only)
-
- upper_bound = recent_data[1] + 1.5*recent_data[2] # Upper Bound
- lower_bound = recent_data[1] - 1.5*recent_data[2] # Lower Bound
- mean_10d = recent_data[1] # Rolling average of last 10 days
-
- if self.stock_data.current_price_getter(ticker) is None:
- return "Market Closed"
- else:
- message = ""
-
- if self.stock_data.current_price_getter(ticker) < mean_10d and self.stock_data.current_price_getter(ticker) >= lower_bound:
- message = "Buy"
- elif self.stock_data.current_price_getter(ticker) > mean_10d and self.stock_data.current_price_getter(ticker) <= upper_bound:
- message = "Sell"
- elif self.stock_data.current_price_getter(ticker) == mean_10d:
- message = "Hold"
- elif self.stock_data.current_price_getter(ticker) <= lower_bound or self.stock_data.current_price_getter(ticker) >= upper_bound:
- message = "Unusual Event"
- return message
-
- def bollinger_bands_50d_3std(self, ticker):
- '''
-        This method calculates the Bollinger Bands using a rolling average over the last 50 days and 3 standard deviations. In a plot,
-        this would be represented as 3 lines: a rolling average, an upper bound (rolling average + 3 standard deviations) and a lower
-        bound (rolling average - 3 standard deviations). When the price of a stock is between the rolling average and the lower bound, it is
-        considered oversold, so it makes sense to buy; if it is between the rolling average and the upper bound, it is considered
-        overbought, so it makes sense to sell; if it equals the rolling average it is neutral; and if it falls outside the bounds, it is
-        considered an Unusual Event. The method returns the outlook of the stock ("Buy", "Sell", "Hold" or "Unusual Event").
- '''
- if self.stock_data.status_getter(ticker) != "Open":
- return "Market Closed"
- else:
- data = self.stock_data.stock_data_getter(ticker)
-
- low_high_closing_df = pd.DataFrame(data)
- low_high_closing_df = data.iloc[:, 4:5] # Getting only the "Adj Close" column
- low_high_closing_df = low_high_closing_df.tail(100) # Getting the last 100 days
-
- low_high_closing_df["rolling_avg_50d"] = low_high_closing_df['Adj Close'].rolling(50, min_periods = 50).mean()
- low_high_closing_df["sd"] = low_high_closing_df["Adj Close"].rolling(50, min_periods = 50).std()
- low_high_closing_df = low_high_closing_df.tail(50) # Keeping the last 50 days only
-
- recent_data = low_high_closing_df.iloc[-1, :].to_list() # Creating a Series object with the most recent data (last row only)
-
- upper_bound = recent_data[1] + 3*recent_data[2] # Upper Bound
- lower_bound = recent_data[1] - 3*recent_data[2] # Lower Bound
- mean_50d = recent_data[1] # Rolling average of last 50 days
-
- # Finding the outlook dependent on the current price
- if self.stock_data.current_price_getter(ticker) is None:
- return "Market Closed"
- else:
- message = ""
- if self.stock_data.current_price_getter(ticker) < mean_50d and self.stock_data.current_price_getter(ticker) >= lower_bound:
- message = "Buy"
- elif self.stock_data.current_price_getter(ticker) > mean_50d and self.stock_data.current_price_getter(ticker) <= upper_bound:
- message = "Sell"
- elif self.stock_data.current_price_getter(ticker) == mean_50d:
- message = "Hold"
- elif self.stock_data.current_price_getter(ticker) <= lower_bound or self.stock_data.current_price_getter(ticker) >= upper_bound:
- message = "Unusual Event"
- return message
-
- def MACD(self, ticker):
- '''
-        This method calculates the MACD (Moving Average Convergence Divergence) for a stock. The decision of whether to buy or sell
-        a stock when using this method depends on the difference between two "lines". The 1st one, called "MACD", is the
-        difference between the 12-day Exponential Moving Average of the adjusted closing price and the 26-day Exponential Moving
-        Average of the adjusted closing price. The 2nd line, the signal line, is the 9-day Exponential Moving Average of the MACD itself.
-        When MACD > signal line, the stock is considered to be in an uptrend; otherwise, in a downtrend.
-        Finally, when the MACD line crosses the signal line from above, a "Sell" signal is given,
-        while when it crosses it from below, a "Buy" signal is given.
- '''
- if self.stock_data.status_getter(ticker) != "Open":
- return "Market Closed"
- else:
- data = self.stock_data.stock_data_getter(ticker)
-
- low_high_closing_df = pd.DataFrame(data)
- low_high_closing_df = data.iloc[:, 4:5] # Getting only the "Adj Close" column
- low_high_closing_df = low_high_closing_df.tail(52) # Getting the last 52 days
-
-
- # Get the 12-day EMA of the closing price
- low_high_closing_df['EMA_12d'] = low_high_closing_df['Adj Close'].ewm(span=12, adjust=False, min_periods=12).mean()
-            # Get the 26-day EMA of the closing price (stored as MA_26d)
- low_high_closing_df['MA_26d'] = low_high_closing_df['Adj Close'].ewm(span=26, adjust=False, min_periods=26).mean()
- # Subtract the 26-day EMA from the 12-Day EMA to get the MACD
- low_high_closing_df['MACD'] = low_high_closing_df['EMA_12d'] - low_high_closing_df['MA_26d']
- # Making the signal line
- low_high_closing_df['MA_9d'] = low_high_closing_df['MACD'].ewm(span=9, adjust=False, min_periods=9).mean()
-
- low_high_closing_df['Diff'] = low_high_closing_df['MACD'] - low_high_closing_df['MA_9d']
-
- Diff = low_high_closing_df['Diff'].astype(float)
-
- if self.stock_data.current_price_getter(ticker) is None:
- return "Market Closed"
- else:
- message = ""
-
- if Diff.iloc[-1] < 0:
- if Diff.iloc[-2] >= 0:
- message = "Downtrend and sell signal"
- else:
- message = "Downtrend and no signal"
- else:
- if Diff.iloc[-2] <= 0:
- message = "Uptrend and buy signal"
- else:
- message = "Uptrend and no signal"
- return message
-
- def finbert_headlines_sentiment(self, ticker):
- '''
-        This method uses the weights and the tokenizer of a fine-tuned FinBERT model, which is a BERT model that
-        was further trained on financial data. The "article_parser()" method scrapes www.marketwatch.com and returns the
-        last 17 headlines of the chosen stock's articles. Then, the FinBERT model classifies each one of them as "Positive",
-        "Negative" or "Neutral", and a score is assigned to each headline (+100, -100, and 0 respectively). Finally, a
-        rolling average with a window size of 5 is used to smooth the sentiment line of the Plotly plot that is returned.
- '''
-
- articles_df = self.stock_data.article_parser(ticker)
- articles_list = articles_df["headline"].tolist()
-
- clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
- outputs_list = clf(articles_list)
-
- sentiments = []
-
- for item in outputs_list:
- sentiments.append(item["label"])
-
- sentiments_df = pd.DataFrame(sentiments)
- sentiments_df.rename(columns = {0:'sentiment'}, inplace = True)
-
- sentiments_df["sentiment"] = sentiments_df["sentiment"].apply(lambda x: 100 if x == "positive" else -100 if x=="negative" else 0)
- sentiments_df["roll_avg"] = round(sentiments_df["sentiment"].rolling(5, min_periods = 1).mean(), 2)
- sentiments_df = sentiments_df.tail(12).reset_index()
-
- pd.options.plotting.backend = "plotly"
-
- fig = sentiments_df["roll_avg"].plot(title="Sentiment Analysis of the last 12 www.marketwatch.com articles about " + ticker,
-
- template="plotly_dark",
- labels=dict(index="12 most recent article headlines", value="sentiment score (rolling avg. of window size 5)"))
- fig.update_traces(line=dict(color="#3D9140", width=3))
- fig.update_layout(yaxis_range=[-100,100])
- fig.update_layout(xaxis_range=[0,12])
- fig.update_layout(showlegend=False)
- fig.add_hline(y=0, line_width=1.5, line_color="black")
-
- current_sentiment = sentiments_df["roll_avg"].tail(1).values[0]
-
- return {'fig': fig, 'current_sentiment': current_sentiment}
-
- def LSTM_7_days_price_predictor(self, ticker):
- '''
-        This method predicts the price of a chosen stock for the next 7 days as of today, using the daily adjusted closing
-        prices of the last 2 years. First, each 60-day window of historical prices (days i-60 to i-1) is used as feature data (x_train)
-        and the price on day i as label data (y_train). For every stock available, we have manually defined different
-        hyperparameters so that the model fits the data as well as possible. We then compute the R2 metric on the test split and,
-        finally, make the predictions: the model looks back 60 days and predicts prices for the following 7 days.
- '''
-
- stock_data = self.stock_data.LSTM_stock_data_getter(ticker)
- stock_data=pd.DataFrame(data=stock_data).drop(['Open','High','Low','Close', 'Volume'],axis=1).reset_index()
- stock_data['Date'] = pd.to_datetime(stock_data['Date'])
- stock_data=stock_data.dropna()
-
- # Data Preprocessing
- random.seed(1997)
- close_prices = stock_data['Adj Close']
- values = close_prices.values
- training_data_len = math.ceil(len(values)* 0.8)
-
- scaler = MinMaxScaler(feature_range=(0,1))
- scaled_data = scaler.fit_transform(values.reshape(-1,1))
- train_data = scaled_data[0: training_data_len, :]
-
- x_train = []
- y_train = []
-
- for i in range(60, len(train_data)):
- x_train.append(train_data[i-60:i, 0])
- y_train.append(train_data[i, 0])
-
- x_train, y_train = np.array(x_train), np.array(y_train)
- x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
-
- # Preparation of test set
- test_data = scaled_data[training_data_len-60: , : ]
- x_test = []
- y_test = values[training_data_len:]
-
- for i in range(60, len(test_data)):
- x_test.append(test_data[i-60:i, 0])
-
- x_test = np.array(x_test)
- x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
-
- ##### Setting Up LSTM Network Architecture and the Training of the LSTM Model
- def LSTM_trainer(seed, DROPOUT, LSTM_units,patience,batch_size, epochs):
-
- tf.random.set_seed(seed)
- DROPOUT = DROPOUT
- global model_lstm
- model_lstm = keras.Sequential()
- model_lstm.add(layers.LSTM(LSTM_units, return_sequences=True, input_shape=(x_train.shape[1], 1)))
- model_lstm.add(Dropout(rate=DROPOUT))
- model_lstm.add(layers.LSTM(LSTM_units, return_sequences=False))
- model_lstm.add(Dropout(rate=DROPOUT))
- model_lstm.add(layers.Dense(25))
- model_lstm.add(Dropout(rate=DROPOUT))
- model_lstm.add(layers.Dense(1))
- model_lstm.add(Activation('linear'))
-
- print('\n')
- print("Compiling the LSTM Model for the " + str(ticker) + " stock....\n")
- t0 = time.time()
- model_lstm.compile(optimizer='adam', loss='mean_squared_error',metrics=['mae'])
- callback=EarlyStopping(monitor='val_loss',
- min_delta=0,
- patience=patience,
- verbose=1, mode='auto')
- model_lstm.fit(x_train,
- y_train,
- batch_size= batch_size,
- epochs=epochs,
- validation_split=0.1,# ...holding out 10% of the data for validation
- shuffle=True,verbose=0,callbacks=[callback])
- t1 = time.time()
- global ex_time
- ex_time = round(t1-t0, 2)
- print("Compiling took :",ex_time,"seconds")
-
- predictions = model_lstm.predict(x_test)
- predictions = scaler.inverse_transform(predictions)
- #rmse = np.sqrt(np.mean(((predictions - y_test) ** 2)))
- global r_squared_score
- global rmse
- r_squared_score = round(r2_score(y_test, predictions),2)
- rmse = np.sqrt(np.mean(((predictions - y_test) ** 2)))
- #print('Rmse Score: ', round(rmse),2)
- print('R2 Score: ', r_squared_score)
-
- if ticker == 'AAPL':
- LSTM_trainer(1, 0.2, 100,2, 20, 30)
- elif ticker == 'NVDA':
- LSTM_trainer(2, 0.2, 100,2, 30, 50)
- elif ticker == 'PYPL':
- LSTM_trainer(6, 0.2, 100,10,25, 30)
- elif ticker == 'MSFT':
- LSTM_trainer(4, 0.1, 80, 2,20, 40)
- elif ticker == 'TSLA':
- LSTM_trainer(5, 0.1, 120, 4,20, 25)
- elif ticker == 'AMZN':
- LSTM_trainer(6, 0.1, 120,2, 20, 25)
- elif ticker == 'SPOT':
- LSTM_trainer(9, 0.2, 200,5, 20, 40)
- #elif ticker == 'TWTR' :
- # LSTM_trainer(15, 0.2, 100,4,20, 40)
- elif ticker == 'UBER':
- LSTM_trainer(15, 0.2, 100,7,20, 40)
- elif ticker == 'ABNB':
- LSTM_trainer(15, 0.2, 120,8,20, 40)
- elif ticker == 'GOOG':
- LSTM_trainer(15, 0.2, 100,3,20, 25)
-
- # Unseen Predictions for the next 7 days
- close_data = scaled_data
- look_back = 60
-
- def predict(num_prediction, model):
- prediction_list = close_data[-look_back:]
-
- for _ in range(num_prediction):
- x = prediction_list[-look_back:]
- x = x.reshape((1, look_back, 1))
-
- out = model.predict(x)[0][0]
- prediction_list = np.append(prediction_list, out)
- prediction_list = prediction_list[look_back-1:]
-
- return prediction_list
-
- def predict_dates(num_prediction):
- last_date = stock_data['Date'].values[-1]
- prediction_dates = pd.date_range(last_date, periods=num_prediction+1).tolist()
- return prediction_dates
-
- num_prediction = 7
-
- forecast = predict(num_prediction, model_lstm)
- forecast_dates = predict_dates(num_prediction)
-
- plt.figure(figsize=(25,10))
- forecast = forecast.reshape(-1, 1)
- forecast_inverse = scaler.inverse_transform(forecast)
-
- # Ploting the Actual Prices and the Predictions of them for the next 7 days
- base = stock_data['Date'].iloc[[-1]] # Here we create our base date (the last existing date with actual prices)
- testdata = pd.DataFrame(forecast_inverse)# Here we create a data frame that contains the prediction prices and an empty column for their dates
- testdata['Date'] = ""
- testdata.columns = ["Adj Close","Date"]
- testdata = testdata.iloc[1:,:]
- testdata["Label"] = "" # Let's add a column "Label" that would show if the respective price is a prediction or not
- testdata["Label"] = "Prediction"
- testdata = testdata[["Date", "Adj Close", "Label"]]
-
- date_list = [base + datetime.timedelta(days=x+1) for x in range(testdata.shape[0]+1)]
- date_list = pd.DataFrame(date_list)
- date_list.columns = ["Date"]
- date_list.reset_index(inplace = True)
- date_list.drop(["index"], axis = 1, inplace = True)
- date_list.index = date_list.index + 1
- testdata.Date = date_list
-
- stock_data["Label"] = ""
- stock_data["Label"] = "Actual price"
- finaldf = pd.concat([stock_data,testdata], axis=0) # Here we concatenate the "testdata" and the original data frame "df" into a final one
- finaldf.reset_index(inplace = True)
- finaldf.drop(["index"], axis = 1, inplace = True)
- finaldf['Date'] = pd.to_datetime(finaldf['Date'])
-
- plt.rcParams["figure.figsize"] = (25,10)
- #We create two different data frames, one that contains the actual prices and one that has only the predictions
- finaldfPredictions = finaldf.iloc[-8:]
- finaldfActuals = finaldf.iloc[:-7]
-
- plot_1 = go.Scatter(
- x = finaldfActuals['Date'],
- y = finaldfActuals['Adj Close'],
- mode = 'lines',
- name = 'Historical Data (2 years)',
- line=dict(width=1,color='#3D9140'))
- plot_2 = go.Scatter(
- x = finaldfPredictions['Date'],
- y = finaldfPredictions['Adj Close'],
- mode = 'lines',
- name = '7-day Prediction',
- line=dict(width=1,color="#EE3B3B"))
- plot_3 = go.Scatter(
- x = finaldfPredictions['Date'][:1],
- y = finaldfPredictions['Adj Close'][:1],
- mode = 'markers',
- name = 'Latest Actual Closing Price',
- line=dict(width=1))
-
- layout = go.Layout(
- title = 'Next 7 days stock price prediction of ' + str(ticker),
- xaxis = {'title' : "Date"},
- yaxis = {'title' : "Price ($)"}
- )
- fig = go.Figure(data=[plot_1, plot_2,plot_3], layout=layout)
- fig.update_layout(template='plotly_dark',autosize=True)
- fig.update_layout(legend=dict(
- orientation="h",
- yanchor="bottom",
- y=1.02,
- xanchor="right",
- x=1),
- annotations = [dict(x=0.5,
- y=0,
- xref='paper',
- yref='paper',
-                                        text="Current In-Sample R-Squared : " + str(r_squared_score*100) + " % \n",
- showarrow = False)],
- xaxis=dict(showgrid=False),
- yaxis=dict(showgrid=False)
-
-
- )
- fig.add_annotation(x=0.5,
- y=0.05,
- xref='paper',
- yref='paper',
-                            text="Current In-Sample Root Mean Square Error : " + str(round(rmse,2)),
- showarrow=False)
-
- return fig
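
The three Bollinger-band methods and the MACD method above share the same pandas mechanics, so a compact self-contained sketch of both indicators on synthetic prices may help; the window sizes follow the 20-day/2-sigma variant and the price series is randomly generated.

import numpy as np
import pandas as pd

np.random.seed(0)
prices = pd.Series(100 + np.random.randn(250).cumsum(), name="Adj Close")

# Bollinger Bands: rolling mean +/- k standard deviations (here 20 days, k = 2).
roll_mean = prices.rolling(20, min_periods=20).mean()
roll_std = prices.rolling(20, min_periods=20).std()
upper = roll_mean.iloc[-1] + 2 * roll_std.iloc[-1]
lower = roll_mean.iloc[-1] - 2 * roll_std.iloc[-1]
last_price, mean = prices.iloc[-1], roll_mean.iloc[-1]

if lower <= last_price < mean:
    outlook = "Buy"
elif mean < last_price <= upper:
    outlook = "Sell"
elif last_price == mean:
    outlook = "Hold"
else:
    outlook = "Unusual Event"

# MACD: 12-day EMA minus 26-day EMA; signal line is the 9-day EMA of the MACD.
ema12 = prices.ewm(span=12, adjust=False, min_periods=12).mean()
ema26 = prices.ewm(span=26, adjust=False, min_periods=26).mean()
macd = ema12 - ema26
signal = macd.ewm(span=9, adjust=False, min_periods=9).mean()
trend = "Uptrend" if (macd - signal).iloc[-1] >= 0 else "Downtrend"

print(outlook, trend)
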
diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/whisper_stt/script.py b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/whisper_stt/script.py
deleted file mode 100644
index 9daee7be147c55848fb0b620bc34de133869672b..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/whisper_stt/script.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import gradio as gr
-import speech_recognition as sr
-from modules import shared
-
-input_hijack = {
- 'state': False,
- 'value': ["", ""]
-}
-
-
-def do_stt(audio):
- transcription = ""
- r = sr.Recognizer()
-
- # Convert to AudioData
- audio_data = sr.AudioData(sample_rate=audio[0], frame_data=audio[1], sample_width=4)
-
- try:
- transcription = r.recognize_whisper(audio_data, language="english", model="base.en")
- except sr.UnknownValueError:
- print("Whisper could not understand audio")
- except sr.RequestError as e:
- print("Could not request results from Whisper", e)
-
- return transcription
-
-
-def auto_transcribe(audio, auto_submit):
- if audio is None:
- return "", ""
-
- transcription = do_stt(audio)
- if auto_submit:
- input_hijack.update({"state": True, "value": [transcription, transcription]})
-
- return transcription, None
-
-
-def ui():
- with gr.Row():
- audio = gr.Audio(source="microphone")
- auto_submit = gr.Checkbox(label='Submit the transcribed audio automatically', value=True)
-
- audio.change(
- auto_transcribe, [audio, auto_submit], [shared.gradio['textbox'], audio]).then(
- None, auto_submit, None, _js="(check) => {if (check) { document.getElementById('Generate').click() }}")
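
As a rough offline counterpart to the microphone flow above, transcribing a WAV file goes through the same SpeechRecognition call; the file path below is hypothetical, and recognize_whisper needs the openai-whisper package installed.

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("sample.wav") as source:   # hypothetical 16-bit PCM WAV file
    audio_data = r.record(source)            # read the whole file into AudioData

try:
    text = r.recognize_whisper(audio_data, language="english", model="base.en")
    print(text)
except sr.UnknownValueError:
    print("Whisper could not understand audio")
except sr.RequestError as e:
    print("Could not request results from Whisper", e)
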
diff --git a/spaces/apozzuoli98/shark-or-whale-classifier/README.md b/spaces/apozzuoli98/shark-or-whale-classifier/README.md
deleted file mode 100644
index 99f60b3c86f5a0b470eb2a4f9813294b2718b85e..0000000000000000000000000000000000000000
--- a/spaces/apozzuoli98/shark-or-whale-classifier/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Shark Or Whale Classifier
-emoji: ⚡
-colorFrom: pink
-colorTo: green
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/arnavkartikeya/SCRIPture-final/data/coco_karpathy_dataset.py b/spaces/arnavkartikeya/SCRIPture-final/data/coco_karpathy_dataset.py
deleted file mode 100644
index a34d29205f42aa09695b160ac9c91958ba041bb3..0000000000000000000000000000000000000000
--- a/spaces/arnavkartikeya/SCRIPture-final/data/coco_karpathy_dataset.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import os
-import json
-
-from torch.utils.data import Dataset
-from torchvision.datasets.utils import download_url
-
-from PIL import Image
-
-from data.utils import pre_caption
-
-class coco_karpathy_train(Dataset):
- def __init__(self, transform, image_root, ann_root, max_words=30, prompt=''):
- '''
- image_root (string): Root directory of images (e.g. coco/images/)
- ann_root (string): directory to store the annotation file
- '''
- url = 'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_train.json'
- filename = 'coco_karpathy_train.json'
-
- download_url(url,ann_root)
-
- self.annotation = json.load(open(os.path.join(ann_root,filename),'r'))
- self.transform = transform
- self.image_root = image_root
- self.max_words = max_words
- self.prompt = prompt
-
- self.img_ids = {}
- n = 0
- for ann in self.annotation:
- img_id = ann['image_id']
- if img_id not in self.img_ids.keys():
- self.img_ids[img_id] = n
- n += 1
-
- def __len__(self):
- return len(self.annotation)
-
- def __getitem__(self, index):
-
- ann = self.annotation[index]
-
- image_path = os.path.join(self.image_root,ann['image'])
- image = Image.open(image_path).convert('RGB')
- image = self.transform(image)
-
- caption = self.prompt+pre_caption(ann['caption'], self.max_words)
-
- return image, caption, self.img_ids[ann['image_id']]
-
-
-class coco_karpathy_caption_eval(Dataset):
- def __init__(self, transform, image_root, ann_root, split):
- '''
- image_root (string): Root directory of images (e.g. coco/images/)
- ann_root (string): directory to store the annotation file
- split (string): val or test
- '''
- urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_val.json',
- 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_test.json'}
- filenames = {'val':'coco_karpathy_val.json','test':'coco_karpathy_test.json'}
-
- download_url(urls[split],ann_root)
-
- self.annotation = json.load(open(os.path.join(ann_root,filenames[split]),'r'))
- self.transform = transform
- self.image_root = image_root
-
- def __len__(self):
- return len(self.annotation)
-
- def __getitem__(self, index):
-
- ann = self.annotation[index]
-
- image_path = os.path.join(self.image_root,ann['image'])
- image = Image.open(image_path).convert('RGB')
- image = self.transform(image)
-
- img_id = ann['image'].split('/')[-1].strip('.jpg').split('_')[-1]
-
- return image, int(img_id)
-
-
-class coco_karpathy_retrieval_eval(Dataset):
- def __init__(self, transform, image_root, ann_root, split, max_words=30):
- '''
- image_root (string): Root directory of images (e.g. coco/images/)
- ann_root (string): directory to store the annotation file
- split (string): val or test
- '''
- urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_val.json',
- 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_test.json'}
- filenames = {'val':'coco_karpathy_val.json','test':'coco_karpathy_test.json'}
-
- download_url(urls[split],ann_root)
-
- self.annotation = json.load(open(os.path.join(ann_root,filenames[split]),'r'))
- self.transform = transform
- self.image_root = image_root
-
- self.text = []
- self.image = []
- self.txt2img = {}
- self.img2txt = {}
-
- txt_id = 0
- for img_id, ann in enumerate(self.annotation):
- self.image.append(ann['image'])
- self.img2txt[img_id] = []
- for i, caption in enumerate(ann['caption']):
- self.text.append(pre_caption(caption,max_words))
- self.img2txt[img_id].append(txt_id)
- self.txt2img[txt_id] = img_id
- txt_id += 1
-
- def __len__(self):
- return len(self.annotation)
-
- def __getitem__(self, index):
-
- image_path = os.path.join(self.image_root, self.annotation[index]['image'])
- image = Image.open(image_path).convert('RGB')
- image = self.transform(image)
-
- return image, index
\ No newline at end of file
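
A hedged usage sketch of the training dataset defined above follows; the image and annotation directories are placeholders, the caption prefix is an example value, and download_url fetches the Karpathy-split JSON into ann_root on first use.

from torch.utils.data import DataLoader
from torchvision import transforms

from data.coco_karpathy_dataset import coco_karpathy_train

transform = transforms.Compose([
    transforms.Resize((384, 384)),
    transforms.ToTensor(),
])

# Placeholder directories: image_root must hold the COCO images,
# ann_root is where the annotation JSON will be downloaded and stored.
dataset = coco_karpathy_train(transform,
                              image_root="coco/images/",
                              ann_root="annotation/",
                              prompt="a picture of ")   # example caption prefix

loader = DataLoader(dataset, batch_size=8, shuffle=True, num_workers=4)
images, captions, img_ids = next(iter(loader))
print(images.shape, len(captions), img_ids.shape)
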
diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/test_tacotron_layers.py b/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/test_tacotron_layers.py
deleted file mode 100644
index 43e72417c200493c6392b3acb131a43738dad2bd..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/test_tacotron_layers.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import unittest
-
-import torch as T
-
-from TTS.tts.layers.tacotron.tacotron import CBHG, Decoder, Encoder, Prenet
-
-# pylint: disable=unused-variable
-
-
-class PrenetTests(unittest.TestCase):
- def test_in_out(self): # pylint: disable=no-self-use
- layer = Prenet(128, out_features=[256, 128])
- dummy_input = T.rand(4, 128)
-
- print(layer)
- output = layer(dummy_input)
- assert output.shape[0] == 4
- assert output.shape[1] == 128
-
-
-class CBHGTests(unittest.TestCase):
- def test_in_out(self):
- # pylint: disable=attribute-defined-outside-init
- layer = self.cbhg = CBHG(
- 128,
- K=8,
- conv_bank_features=80,
- conv_projections=[160, 128],
- highway_features=80,
- gru_features=80,
- num_highways=4,
- )
- # B x D x T
- dummy_input = T.rand(4, 128, 8)
-
- print(layer)
- output = layer(dummy_input)
- assert output.shape[0] == 4
- assert output.shape[1] == 8
- assert output.shape[2] == 160
-
-
-class DecoderTests(unittest.TestCase):
- @staticmethod
- def test_in_out():
- layer = Decoder(
- in_channels=256,
- frame_channels=80,
- r=2,
- memory_size=4,
- attn_windowing=False,
- attn_norm="sigmoid",
- attn_K=5,
- attn_type="original",
- prenet_type="original",
- prenet_dropout=True,
- forward_attn=True,
- trans_agent=True,
- forward_attn_mask=True,
- location_attn=True,
- separate_stopnet=True,
- max_decoder_steps=50,
- )
- dummy_input = T.rand(4, 8, 256)
- dummy_memory = T.rand(4, 2, 80)
-
- output, alignment, stop_tokens = layer(dummy_input, dummy_memory, mask=None)
-
- assert output.shape[0] == 4
- assert output.shape[1] == 80, "size not {}".format(output.shape[1])
- assert output.shape[2] == 2, "size not {}".format(output.shape[2])
- assert stop_tokens.shape[0] == 4
-
-
-class EncoderTests(unittest.TestCase):
- def test_in_out(self): # pylint: disable=no-self-use
- layer = Encoder(128)
- dummy_input = T.rand(4, 8, 128)
-
- print(layer)
- output = layer(dummy_input)
- print(output.shape)
- assert output.shape[0] == 4
- assert output.shape[1] == 8
- assert output.shape[2] == 256 # 128 * 2 BiRNN
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/TypeInference.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/TypeInference.py
deleted file mode 100644
index c7ffee7d2442693e17c240eb5d52c25ce4a92230..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/TypeInference.py
+++ /dev/null
@@ -1,591 +0,0 @@
-from __future__ import absolute_import
-
-from .Errors import error, message
-from . import ExprNodes
-from . import Nodes
-from . import Builtin
-from . import PyrexTypes
-from .. import Utils
-from .PyrexTypes import py_object_type, unspecified_type
-from .Visitor import CythonTransform, EnvTransform
-
-try:
- reduce
-except NameError:
- from functools import reduce
-
-
-class TypedExprNode(ExprNodes.ExprNode):
- # Used for declaring assignments of a specified type without a known entry.
- subexprs = []
-
- def __init__(self, type, pos=None):
- super(TypedExprNode, self).__init__(pos, type=type)
-
-object_expr = TypedExprNode(py_object_type)
-
-
-class MarkParallelAssignments(EnvTransform):
- # Collects assignments inside parallel blocks prange, with parallel.
- # Perhaps it's better to move it to ControlFlowAnalysis.
-
- # tells us whether we're in a normal loop
- in_loop = False
-
- parallel_errors = False
-
- def __init__(self, context):
- # Track the parallel block scopes (with parallel, for i in prange())
- self.parallel_block_stack = []
- super(MarkParallelAssignments, self).__init__(context)
-
- def mark_assignment(self, lhs, rhs, inplace_op=None):
- if isinstance(lhs, (ExprNodes.NameNode, Nodes.PyArgDeclNode)):
- if lhs.entry is None:
- # TODO: This shouldn't happen...
- return
-
- if self.parallel_block_stack:
- parallel_node = self.parallel_block_stack[-1]
- previous_assignment = parallel_node.assignments.get(lhs.entry)
-
- # If there was a previous assignment to the variable, keep the
- # previous assignment position
- if previous_assignment:
- pos, previous_inplace_op = previous_assignment
-
- if (inplace_op and previous_inplace_op and
- inplace_op != previous_inplace_op):
- # x += y; x *= y
- t = (inplace_op, previous_inplace_op)
- error(lhs.pos,
- "Reduction operator '%s' is inconsistent "
- "with previous reduction operator '%s'" % t)
- else:
- pos = lhs.pos
-
- parallel_node.assignments[lhs.entry] = (pos, inplace_op)
- parallel_node.assigned_nodes.append(lhs)
-
- elif isinstance(lhs, ExprNodes.SequenceNode):
- for i, arg in enumerate(lhs.args):
- if not rhs or arg.is_starred:
- item_node = None
- else:
- item_node = rhs.inferable_item_node(i)
- self.mark_assignment(arg, item_node)
- else:
- # Could use this info to infer cdef class attributes...
- pass
-
- def visit_WithTargetAssignmentStatNode(self, node):
- self.mark_assignment(node.lhs, node.with_node.enter_call)
- self.visitchildren(node)
- return node
-
- def visit_SingleAssignmentNode(self, node):
- self.mark_assignment(node.lhs, node.rhs)
- self.visitchildren(node)
- return node
-
- def visit_CascadedAssignmentNode(self, node):
- for lhs in node.lhs_list:
- self.mark_assignment(lhs, node.rhs)
- self.visitchildren(node)
- return node
-
- def visit_InPlaceAssignmentNode(self, node):
- self.mark_assignment(node.lhs, node.create_binop_node(), node.operator)
- self.visitchildren(node)
- return node
-
- def visit_ForInStatNode(self, node):
- # TODO: Remove redundancy with range optimization...
- is_special = False
- sequence = node.iterator.sequence
- target = node.target
- if isinstance(sequence, ExprNodes.SimpleCallNode):
- function = sequence.function
- if sequence.self is None and function.is_name:
- entry = self.current_env().lookup(function.name)
- if not entry or entry.is_builtin:
- if function.name == 'reversed' and len(sequence.args) == 1:
- sequence = sequence.args[0]
- elif function.name == 'enumerate' and len(sequence.args) == 1:
- if target.is_sequence_constructor and len(target.args) == 2:
- iterator = sequence.args[0]
- if iterator.is_name:
- iterator_type = iterator.infer_type(self.current_env())
- if iterator_type.is_builtin_type:
- # assume that builtin types have a length within Py_ssize_t
- self.mark_assignment(
- target.args[0],
- ExprNodes.IntNode(target.pos, value='PY_SSIZE_T_MAX',
- type=PyrexTypes.c_py_ssize_t_type))
- target = target.args[1]
- sequence = sequence.args[0]
- if isinstance(sequence, ExprNodes.SimpleCallNode):
- function = sequence.function
- if sequence.self is None and function.is_name:
- entry = self.current_env().lookup(function.name)
- if not entry or entry.is_builtin:
- if function.name in ('range', 'xrange'):
- is_special = True
- for arg in sequence.args[:2]:
- self.mark_assignment(target, arg)
- if len(sequence.args) > 2:
- self.mark_assignment(
- target,
- ExprNodes.binop_node(node.pos,
- '+',
- sequence.args[0],
- sequence.args[2]))
-
- if not is_special:
- # A for-loop basically translates to subsequent calls to
- # __getitem__(), so using an IndexNode here allows us to
- # naturally infer the base type of pointers, C arrays,
- # Python strings, etc., while correctly falling back to an
- # object type when the base type cannot be handled.
- self.mark_assignment(target, ExprNodes.IndexNode(
- node.pos,
- base=sequence,
- index=ExprNodes.IntNode(target.pos, value='PY_SSIZE_T_MAX',
- type=PyrexTypes.c_py_ssize_t_type)))
-
- self.visitchildren(node)
- return node
-
- def visit_ForFromStatNode(self, node):
- self.mark_assignment(node.target, node.bound1)
- if node.step is not None:
- self.mark_assignment(node.target,
- ExprNodes.binop_node(node.pos,
- '+',
- node.bound1,
- node.step))
- self.visitchildren(node)
- return node
-
- def visit_WhileStatNode(self, node):
- self.visitchildren(node)
- return node
-
- def visit_ExceptClauseNode(self, node):
- if node.target is not None:
- self.mark_assignment(node.target, object_expr)
- self.visitchildren(node)
- return node
-
- def visit_FromCImportStatNode(self, node):
- pass # Can't be assigned to...
-
- def visit_FromImportStatNode(self, node):
- for name, target in node.items:
- if name != "*":
- self.mark_assignment(target, object_expr)
- self.visitchildren(node)
- return node
-
- def visit_DefNode(self, node):
- # use fake expressions with the right result type
- if node.star_arg:
- self.mark_assignment(
- node.star_arg, TypedExprNode(Builtin.tuple_type, node.pos))
- if node.starstar_arg:
- self.mark_assignment(
- node.starstar_arg, TypedExprNode(Builtin.dict_type, node.pos))
- EnvTransform.visit_FuncDefNode(self, node)
- return node
-
- def visit_DelStatNode(self, node):
- for arg in node.args:
- self.mark_assignment(arg, arg)
- self.visitchildren(node)
- return node
-
- def visit_ParallelStatNode(self, node):
- if self.parallel_block_stack:
- node.parent = self.parallel_block_stack[-1]
- else:
- node.parent = None
-
- nested = False
- if node.is_prange:
- if not node.parent:
- node.is_parallel = True
- else:
- node.is_parallel = (node.parent.is_prange or not
- node.parent.is_parallel)
- nested = node.parent.is_prange
- else:
- node.is_parallel = True
- # Note: nested with parallel() blocks are handled by
- # ParallelRangeTransform!
- # nested = node.parent
- nested = node.parent and node.parent.is_prange
-
- self.parallel_block_stack.append(node)
-
- nested = nested or len(self.parallel_block_stack) > 2
- if not self.parallel_errors and nested and not node.is_prange:
- error(node.pos, "Only prange() may be nested")
- self.parallel_errors = True
-
- if node.is_prange:
- child_attrs = node.child_attrs
- node.child_attrs = ['body', 'target', 'args']
- self.visitchildren(node)
- node.child_attrs = child_attrs
-
- self.parallel_block_stack.pop()
- if node.else_clause:
- node.else_clause = self.visit(node.else_clause)
- else:
- self.visitchildren(node)
- self.parallel_block_stack.pop()
-
- self.parallel_errors = False
- return node
-
- def visit_YieldExprNode(self, node):
- if self.parallel_block_stack:
- error(node.pos, "'%s' not allowed in parallel sections" % node.expr_keyword)
- return node
-
- def visit_ReturnStatNode(self, node):
- node.in_parallel = bool(self.parallel_block_stack)
- return node
-
-
-class MarkOverflowingArithmetic(CythonTransform):
-
- # It may be possible to integrate this with the above for
- # performance improvements (though likely not worth it).
-
- might_overflow = False
-
- def __call__(self, root):
- self.env_stack = []
- self.env = root.scope
- return super(MarkOverflowingArithmetic, self).__call__(root)
-
- def visit_safe_node(self, node):
- self.might_overflow, saved = False, self.might_overflow
- self.visitchildren(node)
- self.might_overflow = saved
- return node
-
- def visit_neutral_node(self, node):
- self.visitchildren(node)
- return node
-
- def visit_dangerous_node(self, node):
- self.might_overflow, saved = True, self.might_overflow
- self.visitchildren(node)
- self.might_overflow = saved
- return node
-
- def visit_FuncDefNode(self, node):
- self.env_stack.append(self.env)
- self.env = node.local_scope
- self.visit_safe_node(node)
- self.env = self.env_stack.pop()
- return node
-
- def visit_NameNode(self, node):
- if self.might_overflow:
- entry = node.entry or self.env.lookup(node.name)
- if entry:
- entry.might_overflow = True
- return node
-
- def visit_BinopNode(self, node):
- if node.operator in '&|^':
- return self.visit_neutral_node(node)
- else:
- return self.visit_dangerous_node(node)
-
- def visit_SimpleCallNode(self, node):
- if node.function.is_name and node.function.name == 'abs':
- # Overflows for minimum value of fixed size ints.
- return self.visit_dangerous_node(node)
- else:
- return self.visit_neutral_node(node)
-
- visit_UnopNode = visit_neutral_node
-
- visit_UnaryMinusNode = visit_dangerous_node
-
- visit_InPlaceAssignmentNode = visit_dangerous_node
-
- visit_Node = visit_safe_node
-
- def visit_assignment(self, lhs, rhs):
- if (isinstance(rhs, ExprNodes.IntNode)
- and isinstance(lhs, ExprNodes.NameNode)
- and Utils.long_literal(rhs.value)):
- entry = lhs.entry or self.env.lookup(lhs.name)
- if entry:
- entry.might_overflow = True
-
- def visit_SingleAssignmentNode(self, node):
- self.visit_assignment(node.lhs, node.rhs)
- self.visitchildren(node)
- return node
-
- def visit_CascadedAssignmentNode(self, node):
- for lhs in node.lhs_list:
- self.visit_assignment(lhs, node.rhs)
- self.visitchildren(node)
- return node
-
-class PyObjectTypeInferer(object):
- """
- If it's not declared, it's a PyObject.
- """
- def infer_types(self, scope):
- """
- Given a dict of entries, map all unspecified types to a specified type.
- """
- for name, entry in scope.entries.items():
- if entry.type is unspecified_type:
- entry.type = py_object_type
-
-class SimpleAssignmentTypeInferer(object):
- """
- Very basic type inference.
-
- Note: in order to support cross-closure type inference, this must be
-    applied to nested scopes in top-down order.
- """
- def set_entry_type(self, entry, entry_type):
- entry.type = entry_type
- for e in entry.all_entries():
- e.type = entry_type
-
- def infer_types(self, scope):
- enabled = scope.directives['infer_types']
- verbose = scope.directives['infer_types.verbose']
-
- if enabled == True:
- spanning_type = aggressive_spanning_type
- elif enabled is None: # safe mode
- spanning_type = safe_spanning_type
- else:
- for entry in scope.entries.values():
- if entry.type is unspecified_type:
- self.set_entry_type(entry, py_object_type)
- return
-
- # Set of assignments
- assignments = set()
- assmts_resolved = set()
- dependencies = {}
- assmt_to_names = {}
-
- for name, entry in scope.entries.items():
- for assmt in entry.cf_assignments:
- names = assmt.type_dependencies()
- assmt_to_names[assmt] = names
- assmts = set()
- for node in names:
- assmts.update(node.cf_state)
- dependencies[assmt] = assmts
- if entry.type is unspecified_type:
- assignments.update(entry.cf_assignments)
- else:
- assmts_resolved.update(entry.cf_assignments)
-
- def infer_name_node_type(node):
- types = [assmt.inferred_type for assmt in node.cf_state]
- if not types:
- node_type = py_object_type
- else:
- entry = node.entry
- node_type = spanning_type(
- types, entry.might_overflow, entry.pos, scope)
- node.inferred_type = node_type
-
- def infer_name_node_type_partial(node):
- types = [assmt.inferred_type for assmt in node.cf_state
- if assmt.inferred_type is not None]
- if not types:
- return
- entry = node.entry
- return spanning_type(types, entry.might_overflow, entry.pos, scope)
-
- def inferred_types(entry):
- has_none = False
- has_pyobjects = False
- types = []
- for assmt in entry.cf_assignments:
- if assmt.rhs.is_none:
- has_none = True
- else:
- rhs_type = assmt.inferred_type
- if rhs_type and rhs_type.is_pyobject:
- has_pyobjects = True
- types.append(rhs_type)
- # Ignore None assignments as long as there are concrete Python type assignments.
- # but include them if None is the only assigned Python object.
- if has_none and not has_pyobjects:
- types.append(py_object_type)
- return types
-
- def resolve_assignments(assignments):
- resolved = set()
- for assmt in assignments:
- deps = dependencies[assmt]
- # All assignments are resolved
- if assmts_resolved.issuperset(deps):
- for node in assmt_to_names[assmt]:
- infer_name_node_type(node)
- # Resolve assmt
- inferred_type = assmt.infer_type()
- assmts_resolved.add(assmt)
- resolved.add(assmt)
- assignments.difference_update(resolved)
- return resolved
-
- def partial_infer(assmt):
- partial_types = []
- for node in assmt_to_names[assmt]:
- partial_type = infer_name_node_type_partial(node)
- if partial_type is None:
- return False
- partial_types.append((node, partial_type))
- for node, partial_type in partial_types:
- node.inferred_type = partial_type
- assmt.infer_type()
- return True
-
- partial_assmts = set()
- def resolve_partial(assignments):
- # try to handle circular references
- partials = set()
- for assmt in assignments:
- if assmt in partial_assmts:
- continue
- if partial_infer(assmt):
- partials.add(assmt)
- assmts_resolved.add(assmt)
- partial_assmts.update(partials)
- return partials
-
- # Infer assignments
- while True:
- if not resolve_assignments(assignments):
- if not resolve_partial(assignments):
- break
- inferred = set()
- # First pass
- for entry in scope.entries.values():
- if entry.type is not unspecified_type:
- continue
- entry_type = py_object_type
- if assmts_resolved.issuperset(entry.cf_assignments):
- types = inferred_types(entry)
- if types and all(types):
- entry_type = spanning_type(
- types, entry.might_overflow, entry.pos, scope)
- inferred.add(entry)
- self.set_entry_type(entry, entry_type)
-
- def reinfer():
- dirty = False
- for entry in inferred:
- for assmt in entry.cf_assignments:
- assmt.infer_type()
- types = inferred_types(entry)
- new_type = spanning_type(types, entry.might_overflow, entry.pos, scope)
- if new_type != entry.type:
- self.set_entry_type(entry, new_type)
- dirty = True
- return dirty
-
- # types propagation
- while reinfer():
- pass
-
- if verbose:
- for entry in inferred:
- message(entry.pos, "inferred '%s' to be of type '%s'" % (
- entry.name, entry.type))
-
-
-def find_spanning_type(type1, type2):
- if type1 is type2:
- result_type = type1
- elif type1 is PyrexTypes.c_bint_type or type2 is PyrexTypes.c_bint_type:
- # type inference can break the coercion back to a Python bool
- # if it returns an arbitrary int type here
- return py_object_type
- else:
- result_type = PyrexTypes.spanning_type(type1, type2)
- if result_type in (PyrexTypes.c_double_type, PyrexTypes.c_float_type,
- Builtin.float_type):
- # Python's float type is just a C double, so it's safe to
- # use the C type instead
- return PyrexTypes.c_double_type
- return result_type
-
-def simply_type(result_type, pos):
- if result_type.is_reference:
- result_type = result_type.ref_base_type
- if result_type.is_const:
- result_type = result_type.const_base_type
- if result_type.is_cpp_class:
- result_type.check_nullary_constructor(pos)
- if result_type.is_array:
- result_type = PyrexTypes.c_ptr_type(result_type.base_type)
- return result_type
-
-def aggressive_spanning_type(types, might_overflow, pos, scope):
- return simply_type(reduce(find_spanning_type, types), pos)
-
-def safe_spanning_type(types, might_overflow, pos, scope):
- result_type = simply_type(reduce(find_spanning_type, types), pos)
- if result_type.is_pyobject:
- # In theory, any specific Python type is always safe to
- # infer. However, inferring str can cause some existing code
- # to break, since we are also now much more strict about
- # coercion from str to char *. See trac #553.
- if result_type.name == 'str':
- return py_object_type
- else:
- return result_type
- elif result_type is PyrexTypes.c_double_type:
- # Python's float type is just a C double, so it's safe to use
- # the C type instead
- return result_type
- elif result_type is PyrexTypes.c_bint_type:
- # find_spanning_type() only returns 'bint' for clean boolean
- # operations without other int types, so this is safe, too
- return result_type
- elif result_type.is_pythran_expr:
- return result_type
- elif result_type.is_ptr:
- # Any pointer except (signed|unsigned|) char* can't implicitly
- # become a PyObject, and inferring char* is now accepted, too.
- return result_type
- elif result_type.is_cpp_class:
- # These can't implicitly become Python objects either.
- return result_type
- elif result_type.is_struct:
- # Though we have struct -> object for some structs, this is uncommonly
- # used, won't arise in pure Python, and there shouldn't be side
- # effects, so I'm declaring this safe.
- return result_type
- # TODO: double complex should be OK as well, but we need
- # to make sure everything is supported.
- elif (result_type.is_int or result_type.is_enum) and not might_overflow:
- return result_type
- elif (not result_type.can_coerce_to_pyobject(scope)
- and not result_type.is_error):
- return result_type
- return py_object_type
-
-
-def get_type_inferer():
- return SimpleAssignmentTypeInferer()
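The inferencer above resolves each variable by reducing the inferred types of its assignments through a spanning-type function. A minimal standalone sketch of that reduction idea, using placeholder type names rather than Cython's actual PyrexTypes API:

from functools import reduce

# known widenings between placeholder type names; everything else falls back to "object"
WIDENING = {("int", "float"): "float", ("float", "int"): "float"}

def find_spanning_type(t1, t2):
    # identical types span to themselves; unrelated types widen to the generic object type
    if t1 == t2:
        return t1
    return WIDENING.get((t1, t2), "object")

def spanning_type(assigned_types):
    # reduce pairwise over all types seen in a variable's assignments
    return reduce(find_spanning_type, assigned_types)

print(spanning_type(["int", "int", "float"]))  # -> float
print(spanning_type(["int", "str"]))           # -> object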
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/atn/LexerAction.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/atn/LexerAction.py
deleted file mode 100644
index 5d11f21ae2c5f640780bd32d9aca568eedbdd60b..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/atn/LexerAction.py
+++ /dev/null
@@ -1,291 +0,0 @@
-#
-# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
-# Use of this file is governed by the BSD 3-clause license that
-# can be found in the LICENSE.txt file in the project root.
- #
-
-from enum import IntEnum
-
-# need forward declaration
-Lexer = None
-
-
-class LexerActionType(IntEnum):
-
- CHANNEL = 0 #The type of a {@link LexerChannelAction} action.
- CUSTOM = 1 #The type of a {@link LexerCustomAction} action.
- MODE = 2 #The type of a {@link LexerModeAction} action.
- MORE = 3 #The type of a {@link LexerMoreAction} action.
- POP_MODE = 4 #The type of a {@link LexerPopModeAction} action.
- PUSH_MODE = 5 #The type of a {@link LexerPushModeAction} action.
- SKIP = 6 #The type of a {@link LexerSkipAction} action.
- TYPE = 7 #The type of a {@link LexerTypeAction} action.
-
-class LexerAction(object):
-
- def __init__(self, action:LexerActionType):
- self.actionType = action
- self.isPositionDependent = False
-
- def __hash__(self):
- return hash(self.actionType)
-
- def __eq__(self, other):
- return self is other
-
-
-#
-# Implements the {@code skip} lexer action by calling {@link Lexer#skip}.
-#
-# The {@code skip} command does not have any parameters, so this action is
-# implemented as a singleton instance exposed by {@link #INSTANCE}.
-
-# Implements the {@code mode} lexer action by calling {@link Lexer#mode} with
-# the assigned mode.
-class LexerModeAction(LexerAction):
-
- def __init__(self, mode:int):
- super().__init__(LexerActionType.MODE)
- self.mode = mode
-
- # This action is implemented by calling {@link Lexer#mode} with the
- # value provided by {@link #getMode}.
- def execute(self, lexer:Lexer):
- lexer.mode(self.mode)
-
- def __hash__(self):
- return hash((self.actionType, self.mode))
-
- def __eq__(self, other):
- if self is other:
- return True
- elif not isinstance(other, LexerModeAction):
- return False
- else:
- return self.mode == other.mode
-
- def __str__(self):
- return "mode(" + str(self.mode) + ")"
-
-# Executes a custom lexer action by calling {@link Recognizer#action} with the
-# rule and action indexes assigned to the custom action. The implementation of
-# a custom action is added to the generated code for the lexer in an override
-# of {@link Recognizer#action} when the grammar is compiled.
-#
-#
-# This class may represent embedded actions created with the {...}
-# syntax in ANTLR 4, as well as actions created for lexer commands where the
-# command argument could not be evaluated when the grammar was compiled.
-
-class LexerCustomAction(LexerAction):
-
- # Constructs a custom lexer action with the specified rule and action
- # indexes.
- #
- # @param ruleIndex The rule index to use for calls to
- # {@link Recognizer#action}.
- # @param actionIndex The action index to use for calls to
- # {@link Recognizer#action}.
- #/
- def __init__(self, ruleIndex:int, actionIndex:int):
- super().__init__(LexerActionType.CUSTOM)
- self.ruleIndex = ruleIndex
- self.actionIndex = actionIndex
- self.isPositionDependent = True
-
- #
- # Custom actions are implemented by calling {@link Lexer#action} with the
- # appropriate rule and action indexes.
- def execute(self, lexer:Lexer):
- lexer.action(None, self.ruleIndex, self.actionIndex)
-
- def __hash__(self):
- return hash((self.actionType, self.ruleIndex, self.actionIndex))
-
- def __eq__(self, other):
- if self is other:
- return True
- elif not isinstance(other, LexerCustomAction):
- return False
- else:
- return self.ruleIndex == other.ruleIndex and self.actionIndex == other.actionIndex
-
-# Implements the {@code channel} lexer action by calling
-# {@link Lexer#setChannel} with the assigned channel.
-class LexerChannelAction(LexerAction):
-
- # Constructs a new {@code channel} action with the specified channel value.
- # @param channel The channel value to pass to {@link Lexer#setChannel}.
- def __init__(self, channel:int):
- super().__init__(LexerActionType.CHANNEL)
- self.channel = channel
-
- #
- # This action is implemented by calling {@link Lexer#setChannel} with the
- # value provided by {@link #getChannel}.
- def execute(self, lexer:Lexer):
- lexer._channel = self.channel
-
- def __hash__(self):
- return hash((self.actionType, self.channel))
-
- def __eq__(self, other):
- if self is other:
- return True
- elif not isinstance(other, LexerChannelAction):
- return False
- else:
- return self.channel == other.channel
-
- def __str__(self):
- return "channel(" + str(self.channel) + ")"
-
-# This implementation of {@link LexerAction} is used for tracking input offsets
-# for position-dependent actions within a {@link LexerActionExecutor}.
-#
-#
-# This action is not serialized as part of the ATN, and is only required for
-# position-dependent lexer actions which appear at a location other than the
-# end of a rule. For more information about DFA optimizations employed for
-# lexer actions, see {@link LexerActionExecutor#append} and
-# {@link LexerActionExecutor#fixOffsetBeforeMatch}.
-class LexerIndexedCustomAction(LexerAction):
-
- # Constructs a new indexed custom action by associating a character offset
- # with a {@link LexerAction}.
- #
- #
- # Note: This class is only required for lexer actions for which
- # {@link LexerAction#isPositionDependent} returns {@code true}.
- #
- # @param offset The offset into the input {@link CharStream}, relative to
- # the token start index, at which the specified lexer action should be
- # executed.
- # @param action The lexer action to execute at a particular offset in the
- # input {@link CharStream}.
- def __init__(self, offset:int, action:LexerAction):
- super().__init__(action.actionType)
- self.offset = offset
- self.action = action
- self.isPositionDependent = True
-
- #
- # This method calls {@link #execute} on the result of {@link #getAction}
- # using the provided {@code lexer}.
- def execute(self, lexer:Lexer):
- # assume the input stream position was properly set by the calling code
- self.action.execute(lexer)
-
- def __hash__(self):
- return hash((self.actionType, self.offset, self.action))
-
- def __eq__(self, other):
- if self is other:
- return True
- elif not isinstance(other, LexerIndexedCustomAction):
- return False
- else:
- return self.offset == other.offset and self.action == other.action
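Each action class above implements __hash__ and __eq__ so that equivalent actions can be deduplicated when they are cached. A hedged illustration with a hypothetical stand-in class, not the ANTLR runtime itself:

class ChannelAction:
    def __init__(self, channel):
        self.channel = channel
    def __hash__(self):
        # hash on the same fields that __eq__ compares
        return hash(("channel", self.channel))
    def __eq__(self, other):
        return isinstance(other, ChannelAction) and self.channel == other.channel

# equal actions collapse to one entry when stored in sets or used as dict keys
cache = {ChannelAction(2), ChannelAction(2), ChannelAction(3)}
print(len(cache))  # 2: the duplicate channel(2) actions are merged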
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/label_smoothed_cross_entropy_with_ctc.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/label_smoothed_cross_entropy_with_ctc.py
deleted file mode 100644
index e98e0f7da00b826544df32cf1d9416241bdb37ae..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/label_smoothed_cross_entropy_with_ctc.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass, field
-
-import torch
-import torch.nn.functional as F
-
-from fairseq import metrics, utils
-from fairseq.criterions import register_criterion
-from fairseq.criterions.label_smoothed_cross_entropy import (
- LabelSmoothedCrossEntropyCriterion,
- LabelSmoothedCrossEntropyCriterionConfig,
-)
-from fairseq.data.data_utils import lengths_to_mask
-
-
-@dataclass
-class LabelSmoothedCrossEntropyWithCtcCriterionConfig(
- LabelSmoothedCrossEntropyCriterionConfig
-):
- ctc_weight: float = field(default=1.0, metadata={"help": "weight for CTC loss"})
-
-
-@register_criterion(
- "label_smoothed_cross_entropy_with_ctc",
- dataclass=LabelSmoothedCrossEntropyWithCtcCriterionConfig,
-)
-class LabelSmoothedCrossEntropyWithCtcCriterion(LabelSmoothedCrossEntropyCriterion):
- def __init__(
- self,
- task,
- sentence_avg,
- label_smoothing,
- ignore_prefix_size,
- report_accuracy,
- ctc_weight,
- ):
- super().__init__(
- task, sentence_avg, label_smoothing, ignore_prefix_size, report_accuracy
- )
- self.ctc_weight = ctc_weight
-
- def forward(self, model, sample, reduce=True):
- net_output = model(**sample["net_input"])
- loss, nll_loss = self.compute_loss(model, net_output, sample, reduce=reduce)
-
- ctc_loss = torch.tensor(0.0).type_as(loss)
- if self.ctc_weight > 0.0:
- ctc_lprobs, ctc_lens = model.get_ctc_output(net_output, sample)
- ctc_tgt, ctc_tgt_lens = model.get_ctc_target(sample)
- ctc_tgt_mask = lengths_to_mask(ctc_tgt_lens)
- ctc_tgt_flat = ctc_tgt.masked_select(ctc_tgt_mask)
- reduction = "sum" if reduce else "none"
- ctc_loss = (
- F.ctc_loss(
- ctc_lprobs,
- ctc_tgt_flat,
- ctc_lens,
- ctc_tgt_lens,
- reduction=reduction,
- zero_infinity=True,
- )
- * self.ctc_weight
- )
- loss += ctc_loss
-
- sample_size = (
- sample["target"].size(0) if self.sentence_avg else sample["ntokens"]
- )
- logging_output = {
- "loss": utils.item(loss.data),
- "nll_loss": utils.item(nll_loss.data),
- "ctc_loss": utils.item(ctc_loss.data),
- "ntokens": sample["ntokens"],
- "nsentences": sample["target"].size(0),
- "sample_size": sample_size,
- }
- if self.report_accuracy:
- n_correct, total = self.compute_accuracy(model, net_output, sample)
- logging_output["n_correct"] = utils.item(n_correct.data)
- logging_output["total"] = utils.item(total.data)
- return loss, sample_size, logging_output
-
- @classmethod
- def reduce_metrics(cls, logging_outputs) -> None:
- super().reduce_metrics(logging_outputs)
- loss_sum = sum(log.get("ctc_loss", 0) for log in logging_outputs)
- sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
-
- metrics.log_scalar(
- "ctc_loss", loss_sum / sample_size / math.log(2), sample_size, round=3
- )
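The criterion above leans on the shape contract of torch.nn.functional.ctc_loss: log-probabilities of shape (T, N, C), a flat tensor of concatenated targets, and per-sample input and target lengths. A small self-contained sketch of that call pattern, with made-up sizes:

import torch
import torch.nn.functional as F

T, N, C = 50, 4, 20                                  # time steps, batch size, classes (index 0 = blank)
log_probs = torch.randn(T, N, C).log_softmax(-1)     # (T, N, C) log-probabilities
target_lengths = torch.tensor([10, 12, 8, 15])       # label length per sample
targets = torch.randint(1, C, (int(target_lengths.sum()),))  # flattened, blank excluded
input_lengths = torch.full((N,), T)                  # frames available per sample

loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                  reduction="sum", zero_infinity=True)
print(loss.item())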
diff --git a/spaces/asciicorp/Legal-ai/extract_text.py b/spaces/asciicorp/Legal-ai/extract_text.py
deleted file mode 100644
index 7491239b93606b521cb1a2c0acdb80ef0cc9d0ef..0000000000000000000000000000000000000000
--- a/spaces/asciicorp/Legal-ai/extract_text.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import spacy
-from spacy import displacy
-
-nlp = spacy.load("en_core_web_sm")
-
-def extract_info(input_text):
- # Parse the input text using spaCy
- doc = nlp(input_text)
-
- # Find all named entities and dates in the parsed text
- names = [(ent.text, ent.start_char, ent.end_char, "PERSON") for ent in doc.ents if ent.label_ == "PERSON"]
- addresses = [(ent.text, ent.start_char, ent.end_char, "ADDRESS") for ent in doc.ents if ent.label_ == "GPE"]
- dates = [(ent.text, ent.start_char, ent.end_char, "DATE") for ent in doc.ents if ent.label_ == "DATE"]
-
- # Return the extracted information as a dictionary
- return {"names": names, "addresses": addresses, "dates": dates}
-
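A hedged usage sketch of the extract_info helper above. It assumes the en_core_web_sm model is installed and the module is importable as extract_text; the example entities are illustrative, since results depend on the spaCy model.

from extract_text import extract_info  # the module shown above

text = "John Smith signed the lease in Boston on March 3, 2021."
result = extract_info(text)
print(result["names"])      # PERSON entities, e.g. [('John Smith', 0, 10, 'PERSON')]
print(result["addresses"])  # GPE entities reported under the ADDRESS label
print(result["dates"])      # DATE entities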
diff --git a/spaces/ashishraics/NLP/app.py b/spaces/ashishraics/NLP/app.py
deleted file mode 100644
index 769e1a20ba64e1bc8787ef7be29a822a5b0a41b7..0000000000000000000000000000000000000000
--- a/spaces/ashishraics/NLP/app.py
+++ /dev/null
@@ -1,353 +0,0 @@
-import numpy as np
-import pandas as pd
-import streamlit as st
-from streamlit_text_rating.st_text_rater import st_text_rater
-from transformers import AutoTokenizer,AutoModelForSequenceClassification
-from transformers import AutoModelForMaskedLM
-import onnxruntime as ort
-import os
-import time
-import plotly.express as px
-import plotly.graph_objects as go
-from PIL import Image
-global _plotly_config
-_plotly_config={'displayModeBar': False}
-
-from sentiment_clf_helper import (classify_sentiment,
- create_onnx_model_sentiment,
- classify_sentiment_onnx)
-
-from zeroshot_clf_helper import (zero_shot_classification,
- create_onnx_model_zs_nli,
- create_onnx_model_zs_mlm,
- zero_shot_classification_nli_onnx,
- zero_shot_classification_fillmask_onnx)
-
-import multiprocessing
-total_threads=multiprocessing.cpu_count()#for ort inference
-
-import yaml
-def read_yaml(file_path):
- with open(file_path, "r") as f:
- return yaml.safe_load(f)
-
-config = read_yaml('config.yaml')
-
-sent_chkpt=config['SENTIMENT_CLF']['sent_chkpt']
-sent_mdl_dir=config['SENTIMENT_CLF']['sent_mdl_dir']
-sent_onnx_mdl_dir=config['SENTIMENT_CLF']['sent_onnx_mdl_dir']
-sent_onnx_mdl_name=config['SENTIMENT_CLF']['sent_onnx_mdl_name']
-sent_onnx_quant_mdl_name=config['SENTIMENT_CLF']['sent_onnx_quant_mdl_name']
-
-zs_chkpt=config['ZEROSHOT_CLF']['zs_chkpt']
-zs_mdl_dir=config['ZEROSHOT_CLF']['zs_mdl_dir']
-zs_onnx_mdl_dir=config['ZEROSHOT_CLF']['zs_onnx_mdl_dir']
-zs_onnx_mdl_name=config['ZEROSHOT_CLF']['zs_onnx_mdl_name']
-zs_onnx_quant_mdl_name=config['ZEROSHOT_CLF']['zs_onnx_quant_mdl_name']
-
-zs_mlm_chkpt=config['ZEROSHOT_MLM']['zs_mlm_chkpt']
-zs_mlm_mdl_dir=config['ZEROSHOT_MLM']['zs_mlm_mdl_dir']
-zs_mlm_onnx_mdl_dir=config['ZEROSHOT_MLM']['zs_mlm_onnx_mdl_dir']
-zs_mlm_onnx_mdl_name=config['ZEROSHOT_MLM']['zs_mlm_onnx_mdl_name']
-
-st.set_page_config( # Alternate names: setup_page, page, layout
- layout="wide", # Can be "centered" or "wide". In the future also "dashboard", etc.
- initial_sidebar_state="auto", # Can be "auto", "expanded", "collapsed"
- page_title='None', # String or None. Strings get appended with "• Streamlit".
-)
-
-padding_top = 0
-st.markdown(f"""
- """,
- unsafe_allow_html=True,
-)
-
-def set_page_title(title):
- st.sidebar.markdown(unsafe_allow_html=True, body=f"""
-
- """)
-
-
-set_page_title('NLP use cases')
-
-# Hide Menu Option
-hide_streamlit_style = """
-
- """
-st.markdown(hide_streamlit_style, unsafe_allow_html=True)
-
-#onnx runtime inference threading changes -- session options must be passed in session run
-# os.environ["OMP_NUM_THREADS"] = "1" #use this before changing session options of onnx runtime
-session_options_ort = ort.SessionOptions()
-session_options_ort.intra_op_num_threads=1
-session_options_ort.inter_op_num_threads=1
-# session_options_ort.execution_mode = session_options_ort.ExecutionMode.ORT_SEQUENTIAL
-
-@st.cache(allow_output_mutation=True, suppress_st_warning=True, max_entries=None, ttl=None)
-def create_model_dir(chkpt, model_dir,task_type):
- if not os.path.exists(model_dir):
- try:
- os.mkdir(path=model_dir)
- except:
- pass
- if task_type=='classification':
- _model = AutoModelForSequenceClassification.from_pretrained(chkpt)
- _tokenizer = AutoTokenizer.from_pretrained(chkpt)
- _model.save_pretrained(model_dir)
- _tokenizer.save_pretrained(model_dir)
- elif task_type=='mlm':
- _model=AutoModelForMaskedLM.from_pretrained(chkpt)
- _tokenizer=AutoTokenizer.from_pretrained(chkpt)
- _model.save_pretrained(model_dir)
- _tokenizer.save_pretrained(model_dir)
- else:
- pass
- else:
- pass
-
-
-#title using markdown
-st.markdown("
NLP Basic Use Cases
", unsafe_allow_html=True)
-st.markdown("---")
-with st.sidebar:
- # title using markdown
- st.markdown("
-
-
-"""
-
-with gr.Blocks(css="style.css") as demo:
- with gr.Column(elem_id="col-container"):
-
- gr.HTML("""
-
-
- Image to Music
-
-
-
- Sends an image in to CLIP Interrogator
- to generate a text prompt which is then run through
- Mubert text-to-music to generate music from the input image!
-
-
""")
-
- input_img = gr.Image(type="filepath", elem_id="input-img")
- music_output = gr.Audio(label="Result", type="filepath", elem_id="music-output").style(height="5rem")
-
- with gr.Group(elem_id="share-btn-container"):
- community_icon = gr.HTML(community_icon_html, visible=False)
- loading_icon = gr.HTML(loading_icon_html, visible=False)
- share_button = gr.Button("Share to community", elem_id="share-btn", visible=False)
-
- with gr.Accordion(label="Music Generation Options", open=False):
- track_duration = gr.Slider(minimum=20, maximum=120, value=30, step=5, label="Track duration", elem_id="duration-inp")
- with gr.Row():
- gen_intensity = gr.Dropdown(choices=["low", "medium", "high"], value="medium", label="Intensity")
- gen_mode = gr.Radio(label="mode", choices=["track", "loop"], value="track")
-
- generate = gr.Button("Generate Music from Image")
-
- gr.HTML(article)
-
- generate.click(get_prompts, inputs=[input_img,track_duration,gen_intensity,gen_mode], outputs=[music_output, share_button, community_icon, loading_icon], api_name="i2m")
- share_button.click(None, [], [], _js=share_js)
-
-demo.queue(max_size=32, concurrency_count=20).launch()
\ No newline at end of file
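The Space above follows the standard gr.Blocks wiring pattern: components are declared inside the context manager, and a button's click event maps input components through a Python function to an output component. A minimal self-contained sketch of that pattern, with a hypothetical placeholder function rather than the actual CLIP Interrogator/Mubert pipeline:

import gradio as gr

def describe(image_path, duration):
    # placeholder for the real image -> prompt -> music pipeline
    return f"Would generate {duration}s of music for {image_path}"

with gr.Blocks() as demo:
    img = gr.Image(type="filepath")
    dur = gr.Slider(minimum=20, maximum=120, value=30, step=5, label="Track duration")
    out = gr.Textbox(label="Result")
    btn = gr.Button("Generate")
    btn.click(fn=describe, inputs=[img, dur], outputs=out)

demo.queue().launch()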
diff --git a/spaces/badongtakla/ithaca/ithaca/util/loss.py b/spaces/badongtakla/ithaca/ithaca/util/loss.py
deleted file mode 100644
index 58a60f05e77bafe5dfdd84935f42da72a568fa94..0000000000000000000000000000000000000000
--- a/spaces/badongtakla/ithaca/ithaca/util/loss.py
+++ /dev/null
@@ -1,117 +0,0 @@
-# Copyright 2021 the Ithaca Authors
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# https://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Loss functions."""
-import chex
-from flax.deprecated import nn
-import jax
-import jax.numpy as jnp
-
-
-def smooth_labels(labels, num_classes, label_smoothing):
- if not 0 <= label_smoothing < 1:
- raise ValueError(
- f"'label_smoothing is {label_smoothing} and should be in [0, 1)")
- one = jax.lax.convert_element_type(1, labels.dtype)
- label_smoothing = jax.lax.convert_element_type(label_smoothing,
- labels.dtype)
- num_classes = jax.lax.convert_element_type(num_classes, labels.dtype)
- return (one - label_smoothing) * labels + (label_smoothing / num_classes)
-
-
-def categorical_kl_divergence(p_logits, q_logits, temperature=1.):
- """Compute the KL between two categorical distributions from their logits.
-
- Args:
- p_logits: unnormalized logits for the first distribution.
- q_logits: unnormalized logits for the second distribution.
- temperature: the temperature for the softmax distribution, defaults at 1.
-
- Returns:
- the kl divergence between the distributions.
- """
- chex.assert_type([p_logits, q_logits], float)
-
- p_logits /= temperature
- q_logits /= temperature
-
- p = jax.nn.softmax(p_logits)
- log_p = jax.nn.log_softmax(p_logits)
- log_q = jax.nn.log_softmax(q_logits)
- kl = jnp.sum(p * (log_p - log_q), axis=-1)
- return jax.nn.relu(kl) # Guard against numerical issues giving negative KL.
-
-
-def cross_entropy_label_smoothing_loss(logits,
- labels,
- mask=None,
- label_smoothing=0.1):
- """Cross entropy loss with label smoothing."""
-
- num_classes = logits.shape[-1]
- labels_onehot = jax.nn.one_hot(labels, num_classes, dtype=logits.dtype)
- if label_smoothing > 0:
- labels_onehot = smooth_labels(labels_onehot, num_classes, label_smoothing)
-
- loss = -jnp.sum(labels_onehot * jax.nn.log_softmax(logits), axis=-1)
- if mask is not None:
- loss = jnp.multiply(loss, mask.astype(logits.dtype))
- return loss
-
-
-@jax.vmap
-def cross_entropy_loss(logits, label):
- logits = nn.log_softmax(logits)
- return -logits[label]
-
-
-def cross_entropy_mask_loss(logits, label, mask):
- nll = -nn.log_softmax(logits)[label]
- loss = jnp.multiply(nll, mask.astype(logits.dtype))
- return loss
-
-
-def date_loss_l2(pred,
- target_min,
- target_max,
- mask):
- """L2 loss function for dates."""
- pred = jnp.squeeze(pred, 0)
-
- loss = 0.
- loss += (pred - target_min)**2 * jnp.less(pred, target_min).astype(
- pred.dtype)
- loss += (pred - target_max)**2 * jnp.greater(pred, target_max).astype(
- pred.dtype)
-
- # Mask loss
- loss = jnp.multiply(loss, mask.astype(loss.dtype))
- return loss
-
-
-def date_loss_l1(pred,
- target_min,
- target_max,
- mask):
- """L1 loss function for dates."""
- pred = jnp.squeeze(pred, 0)
-
- loss = 0.
- loss += jnp.abs(pred - target_min) * jnp.less(pred, target_min).astype(
- pred.dtype)
- loss += jnp.abs(pred - target_max) * jnp.greater(pred, target_max).astype(
- pred.dtype)
-
- # Mask loss
- loss = jnp.multiply(loss, mask.astype(loss.dtype))
- return loss
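As a quick numeric check of smooth_labels above: with smoothing 0.1 over 4 classes, a one-hot target [0, 1, 0, 0] becomes 0.9 * one_hot + 0.1/4. A short sketch applying the same formula directly:

import jax
import jax.numpy as jnp

labels = jax.nn.one_hot(jnp.array([1]), 4)       # shape (1, 4), one-hot on class 1
smoothed = (1.0 - 0.1) * labels + 0.1 / 4.0      # same formula as smooth_labels
print(smoothed)  # [[0.025 0.925 0.025 0.025]]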
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/sea3d/physics/SEA3DRigidBody.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/sea3d/physics/SEA3DRigidBody.js
deleted file mode 100644
index c4d55a89265bd12d80d6bbbf4fefee2a7fba813f..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/sea3d/physics/SEA3DRigidBody.js
+++ /dev/null
@@ -1,442 +0,0 @@
-/**
- * SEA3D - Rigid Body
- * @author Sunag / http://www.sunag.com.br/
- */
-
-'use strict';
-
-//
-// Sphere
-//
-
-SEA3D.Sphere = function ( name, data, sea3d ) {
-
- this.name = name;
- this.data = data;
- this.sea3d = sea3d;
-
- this.radius = data.readFloat();
-
-};
-
-SEA3D.Sphere.prototype.type = "sph";
-
-//
-// Box
-//
-
-SEA3D.Box = function ( name, data, sea3d ) {
-
- this.name = name;
- this.data = data;
- this.sea3d = sea3d;
-
- this.width = data.readFloat();
- this.height = data.readFloat();
- this.depth = data.readFloat();
-
-};
-
-SEA3D.Box.prototype.type = "box";
-
-//
-// Cone
-//
-
-SEA3D.Cone = function ( name, data, sea3d ) {
-
- this.name = name;
- this.data = data;
- this.sea3d = sea3d;
-
- this.radius = data.readFloat();
- this.height = data.readFloat();
-
-};
-
-SEA3D.Cone.prototype.type = "cone";
-
-//
-// Capsule
-//
-
-SEA3D.Capsule = function ( name, data, sea3d ) {
-
- this.name = name;
- this.data = data;
- this.sea3d = sea3d;
-
- this.radius = data.readFloat();
- this.height = data.readFloat();
-
-};
-
-SEA3D.Capsule.prototype.type = "cap";
-
-//
-// Cylinder
-//
-
-SEA3D.Cylinder = function ( name, data, sea3d ) {
-
- this.name = name;
- this.data = data;
- this.sea3d = sea3d;
-
- this.radius = data.readFloat();
- this.height = data.readFloat();
-
-};
-
-SEA3D.Cylinder.prototype.type = "cyl";
-
-//
-// Convex Geometry
-//
-
-SEA3D.ConvexGeometry = function ( name, data, sea3d ) {
-
- this.name = name;
- this.data = data;
- this.sea3d = sea3d;
-
- this.geometry = sea3d.getObject( data.readUInt() );
- this.subGeometryIndex = data.readUByte();
-
-};
-
-SEA3D.ConvexGeometry.prototype.type = "gs";
-
-//
-// Triangle Geometry
-//
-
-SEA3D.TriangleGeometry = function ( name, data, sea3d ) {
-
- this.name = name;
- this.data = data;
- this.sea3d = sea3d;
-
- this.geometry = sea3d.getObject( data.readUInt() );
- this.subGeometryIndex = data.readUByte();
-
-};
-
-SEA3D.TriangleGeometry.prototype.type = "sgs";
-
-//
-// Compound
-//
-
-SEA3D.Compound = function ( name, data, sea3d ) {
-
- this.name = name;
- this.data = data;
- this.sea3d = sea3d;
-
- this.compounds = [];
-
- var count = data.readUByte();
-
- for ( var i = 0; i < count; i ++ ) {
-
- this.compounds.push( {
- shape: sea3d.getObject( data.readUInt() ),
- transform: data.readMatrix()
- } );
-
- }
-
-};
-
-SEA3D.Compound.prototype.type = "cmps";
-
-//
-// Physics
-//
-
-SEA3D.Physics = function ( name, data, sea3d ) {
-
- this.name = name;
- this.data = data;
- this.sea3d = sea3d;
-
- this.attrib = data.readUShort();
-
- this.shape = sea3d.getObject( data.readUInt() );
-
- if ( this.attrib & 1 ) this.target = sea3d.getObject( data.readUInt() );
- else this.transform = data.readMatrix();
-
- if ( this.attrib & 2 ) this.offset = data.readMatrix();
-
- if ( this.attrib & 4 ) this.scripts = data.readScriptList( sea3d );
-
- if ( this.attrib & 16 ) this.attributes = sea3d.getObject( data.readUInt() );
-
-};
-
-SEA3D.Physics.prototype.readTag = function ( kind, data, size ) {
-
-};
-
-//
-// Rigidy Body Base
-//
-
-SEA3D.RigidBodyBase = function ( name, data, sea3d ) {
-
- SEA3D.Physics.call( this, name, data, sea3d );
-
- if ( this.attrib & 32 ) {
-
- this.linearDamping = data.readFloat();
- this.angularDamping = data.readFloat();
-
- } else {
-
- this.linearDamping = 0;
- this.angularDamping = 0;
-
- }
-
- this.mass = data.readFloat();
- this.friction = data.readFloat();
- this.restitution = data.readFloat();
-
-};
-
-SEA3D.RigidBodyBase.prototype = Object.create( SEA3D.Physics.prototype );
-SEA3D.RigidBodyBase.prototype.constructor = SEA3D.RigidBodyBase;
-
-//
-// Rigidy Body
-//
-
-SEA3D.RigidBody = function ( name, data, sea3d ) {
-
- SEA3D.RigidBodyBase.call( this, name, data, sea3d );
-
- data.readTags( this.readTag.bind( this ) );
-
-};
-
-SEA3D.RigidBody.prototype = Object.create( SEA3D.RigidBodyBase.prototype );
-SEA3D.RigidBody.prototype.constructor = SEA3D.RigidBody;
-
-SEA3D.RigidBody.prototype.type = "rb";
-
-//
-// Car Controller
-//
-
-SEA3D.CarController = function ( name, data, sea3d ) {
-
- SEA3D.RigidBodyBase.call( this, name, data, sea3d );
-
- this.suspensionStiffness = data.readFloat();
- this.suspensionCompression = data.readFloat();
- this.suspensionDamping = data.readFloat();
- this.maxSuspensionTravelCm = data.readFloat();
- this.frictionSlip = data.readFloat();
- this.maxSuspensionForce = data.readFloat();
-
- this.dampingCompression = data.readFloat();
- this.dampingRelaxation = data.readFloat();
-
- var count = data.readUByte();
-
- this.wheel = [];
-
- for ( var i = 0; i < count; i ++ ) {
-
- this.wheel[ i ] = new SEA3D.CarController.Wheel( data, sea3d );
-
- }
-
- data.readTags( this.readTag.bind( this ) );
-
-};
-
-SEA3D.CarController.Wheel = function ( data, sea3d ) {
-
- this.data = data;
- this.sea3d = sea3d;
-
- this.attrib = data.readUShort();
-
- this.isFront = ( this.attrib & 1 ) != 0;
-
- if ( this.attrib & 2 ) {
-
- this.target = sea3d.getObject( data.readUInt() );
-
- }
-
- if ( this.attrib & 4 ) {
-
- this.offset = data.readMatrix();
-
- }
-
- this.pos = data.readVector3();
- this.dir = data.readVector3();
- this.axle = data.readVector3();
-
- this.radius = data.readFloat();
- this.suspensionRestLength = data.readFloat();
-
-};
-
-SEA3D.CarController.prototype = Object.create( SEA3D.RigidBodyBase.prototype );
-SEA3D.CarController.prototype.constructor = SEA3D.CarController;
-
-SEA3D.CarController.prototype.type = "carc";
-
-//
-// Constraints
-//
-
-SEA3D.Constraints = function ( name, data, sea3d ) {
-
- this.name = name;
- this.data = data;
- this.sea3d = sea3d;
-
- this.attrib = data.readUShort();
-
- this.disableCollisionsBetweenBodies = ( this.attrib & 1 ) != 0;
-
- this.targetA = sea3d.getObject( data.readUInt() );
- this.pointA = data.readVector3();
-
- if ( this.attrib & 2 ) {
-
- this.targetB = sea3d.getObject( data.readUInt() );
- this.pointB = data.readVector3();
-
- }
-
-};
-
-//
-// P2P Constraint
-//
-
-SEA3D.P2PConstraint = function ( name, data, sea3d ) {
-
- this.name = name;
- this.data = data;
- this.sea3d = sea3d;
-
- SEA3D.Constraints.call( this, name, data, sea3d );
-
-};
-
-SEA3D.P2PConstraint.prototype = Object.create( SEA3D.Constraints.prototype );
-SEA3D.P2PConstraint.prototype.constructor = SEA3D.P2PConstraint;
-
-SEA3D.P2PConstraint.prototype.type = "p2pc";
-
-//
-// Hinge Constraint
-//
-
-SEA3D.HingeConstraint = function ( name, data, sea3d ) {
-
- SEA3D.Constraints.call( this, name, data, sea3d );
-
- this.axisA = data.readVector3();
-
- if ( this.attrib & 1 ) {
-
- this.axisB = data.readVector3();
-
- }
-
- if ( this.attrib & 4 ) {
-
- this.limit = {
- low: data.readFloat(),
- high: data.readFloat(),
- softness: data.readFloat(),
- biasFactor: data.readFloat(),
- relaxationFactor: data.readFloat()
- };
-
- }
-
- if ( this.attrib & 8 ) {
-
- this.angularMotor = {
- velocity: data.readFloat(),
- impulse: data.readFloat()
- };
-
- }
-
-};
-
-SEA3D.HingeConstraint.prototype = Object.create( SEA3D.Constraints.prototype );
-SEA3D.HingeConstraint.prototype.constructor = SEA3D.HingeConstraint;
-
-SEA3D.HingeConstraint.prototype.type = "hnec";
-
-//
-// Cone Twist Constraint
-//
-
-SEA3D.ConeTwistConstraint = function ( name, data, sea3d ) {
-
- SEA3D.Constraints.call( this, name, data, sea3d );
-
- this.axisA = data.readVector3();
-
- if ( this.attrib & 1 ) {
-
- this.axisB = data.readVector3();
-
- }
-
- if ( this.attrib & 4 ) {
-
- this.limit = {
- swingSpan1: data.readFloat(),
- swingSpan2: data.readFloat(),
- twistSpan: data.readFloat(),
- softness: data.readFloat(),
- biasFactor: data.readFloat(),
- relaxationFactor: data.readFloat()
- };
-
- }
-
-};
-
-SEA3D.ConeTwistConstraint.prototype = Object.create( SEA3D.Constraints.prototype );
-SEA3D.ConeTwistConstraint.prototype.constructor = SEA3D.ConeTwistConstraint;
-
-SEA3D.ConeTwistConstraint.prototype.type = "ctwc";
-
-//
-// Extension
-//
-
-SEA3D.File.setExtension( function () {
-
- // PHYSICS
- this.addClass( SEA3D.Sphere );
- this.addClass( SEA3D.Box );
- this.addClass( SEA3D.Cone );
- this.addClass( SEA3D.Capsule );
- this.addClass( SEA3D.Cylinder );
- this.addClass( SEA3D.ConvexGeometry );
- this.addClass( SEA3D.TriangleGeometry );
- this.addClass( SEA3D.Compound );
- this.addClass( SEA3D.RigidBody );
- this.addClass( SEA3D.P2PConstraint );
- this.addClass( SEA3D.HingeConstraint );
- this.addClass( SEA3D.ConeTwistConstraint );
- this.addClass( SEA3D.CarController );
-
-} );
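The readers above share one convention: an attrib bitmask read up front gates which optional fields follow in the stream. A small Python sketch of that flag-testing pattern, with illustrative flag names that are not part of the SEA3D format specification:

# each bit of `attrib` indicates whether an optional field is present
HAS_TARGET, HAS_OFFSET, HAS_SCRIPTS, HAS_ATTRIBUTES = 1, 2, 4, 16

def parse_flags(attrib):
    return {
        "target": bool(attrib & HAS_TARGET),
        "offset": bool(attrib & HAS_OFFSET),
        "scripts": bool(attrib & HAS_SCRIPTS),
        "attributes": bool(attrib & HAS_ATTRIBUTES),
    }

print(parse_flags(0b10101))  # target, scripts and attributes present; no offset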
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/CubicBezierCurve3.js b/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/CubicBezierCurve3.js
deleted file mode 100644
index 3864f011e65e0c2a251e780adc9b99796ac0bd80..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/CubicBezierCurve3.js
+++ /dev/null
@@ -1,80 +0,0 @@
-import { Curve } from '../core/Curve.js';
-import { CubicBezier } from '../core/Interpolations.js';
-import { Vector3 } from '../../math/Vector3.js';
-
-
-function CubicBezierCurve3( v0, v1, v2, v3 ) {
-
- Curve.call( this );
-
- this.type = 'CubicBezierCurve3';
-
- this.v0 = v0 || new Vector3();
- this.v1 = v1 || new Vector3();
- this.v2 = v2 || new Vector3();
- this.v3 = v3 || new Vector3();
-
-}
-
-CubicBezierCurve3.prototype = Object.create( Curve.prototype );
-CubicBezierCurve3.prototype.constructor = CubicBezierCurve3;
-
-CubicBezierCurve3.prototype.isCubicBezierCurve3 = true;
-
-CubicBezierCurve3.prototype.getPoint = function ( t, optionalTarget ) {
-
- var point = optionalTarget || new Vector3();
-
- var v0 = this.v0, v1 = this.v1, v2 = this.v2, v3 = this.v3;
-
- point.set(
- CubicBezier( t, v0.x, v1.x, v2.x, v3.x ),
- CubicBezier( t, v0.y, v1.y, v2.y, v3.y ),
- CubicBezier( t, v0.z, v1.z, v2.z, v3.z )
- );
-
- return point;
-
-};
-
-CubicBezierCurve3.prototype.copy = function ( source ) {
-
- Curve.prototype.copy.call( this, source );
-
- this.v0.copy( source.v0 );
- this.v1.copy( source.v1 );
- this.v2.copy( source.v2 );
- this.v3.copy( source.v3 );
-
- return this;
-
-};
-
-CubicBezierCurve3.prototype.toJSON = function () {
-
- var data = Curve.prototype.toJSON.call( this );
-
- data.v0 = this.v0.toArray();
- data.v1 = this.v1.toArray();
- data.v2 = this.v2.toArray();
- data.v3 = this.v3.toArray();
-
- return data;
-
-};
-
-CubicBezierCurve3.prototype.fromJSON = function ( json ) {
-
- Curve.prototype.fromJSON.call( this, json );
-
- this.v0.fromArray( json.v0 );
- this.v1.fromArray( json.v1 );
- this.v2.fromArray( json.v2 );
- this.v3.fromArray( json.v3 );
-
- return this;
-
-};
-
-
-export { CubicBezierCurve3 };
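getPoint above delegates each coordinate to the cubic Bezier interpolation helper. A Python sketch of the underlying Bernstein-form evaluation, B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3:

def cubic_bezier(t, p0, p1, p2, p3):
    # Bernstein form of a cubic Bezier curve for a single coordinate
    k = 1.0 - t
    return (k * k * k * p0
            + 3.0 * k * k * t * p1
            + 3.0 * k * t * t * p2
            + t * t * t * p3)

# evaluate one coordinate at the curve midpoint
print(cubic_bezier(0.5, 0.0, 1.0, 2.0, 3.0))  # 1.5 for evenly spaced control values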
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/loaders/DataTextureLoader.js b/spaces/banana-projects/web3d/node_modules/three/src/loaders/DataTextureLoader.js
deleted file mode 100644
index 5363a79f16d7629af3beba42ee8cd7a1dbff3252..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/loaders/DataTextureLoader.js
+++ /dev/null
@@ -1,102 +0,0 @@
-import { LinearFilter, LinearMipMapLinearFilter, ClampToEdgeWrapping } from '../constants.js';
-import { FileLoader } from './FileLoader.js';
-import { DataTexture } from '../textures/DataTexture.js';
-import { DefaultLoadingManager } from './LoadingManager.js';
-
-/**
- * @author Nikos M. / https://github.com/foo123/
- *
- * Abstract Base class to load generic binary textures formats (rgbe, hdr, ...)
- */
-
-function DataTextureLoader( manager ) {
-
- this.manager = ( manager !== undefined ) ? manager : DefaultLoadingManager;
-
- // override in sub classes
- this._parser = null;
-
-}
-
-Object.assign( DataTextureLoader.prototype, {
-
- load: function ( url, onLoad, onProgress, onError ) {
-
- var scope = this;
-
- var texture = new DataTexture();
-
- var loader = new FileLoader( this.manager );
- loader.setResponseType( 'arraybuffer' );
- loader.setPath( this.path );
- loader.load( url, function ( buffer ) {
-
- var texData = scope._parser( buffer );
-
- if ( ! texData ) return;
-
- if ( texData.image !== undefined ) {
-
- texture.image = texData.image;
-
- } else if ( texData.data !== undefined ) {
-
- texture.image.width = texData.width;
- texture.image.height = texData.height;
- texture.image.data = texData.data;
-
- }
-
- texture.wrapS = texData.wrapS !== undefined ? texData.wrapS : ClampToEdgeWrapping;
- texture.wrapT = texData.wrapT !== undefined ? texData.wrapT : ClampToEdgeWrapping;
-
- texture.magFilter = texData.magFilter !== undefined ? texData.magFilter : LinearFilter;
- texture.minFilter = texData.minFilter !== undefined ? texData.minFilter : LinearMipMapLinearFilter;
-
- texture.anisotropy = texData.anisotropy !== undefined ? texData.anisotropy : 1;
-
- if ( texData.format !== undefined ) {
-
- texture.format = texData.format;
-
- }
- if ( texData.type !== undefined ) {
-
- texture.type = texData.type;
-
- }
-
- if ( texData.mipmaps !== undefined ) {
-
- texture.mipmaps = texData.mipmaps;
-
- }
-
- if ( texData.mipmapCount === 1 ) {
-
- texture.minFilter = LinearFilter;
-
- }
-
- texture.needsUpdate = true;
-
- if ( onLoad ) onLoad( texture, texData );
-
- }, onProgress, onError );
-
-
- return texture;
-
- },
-
- setPath: function ( value ) {
-
- this.path = value;
- return this;
-
- }
-
-} );
-
-
-export { DataTextureLoader };
diff --git a/spaces/bhasker412/IDD-YOLO-Tracking/README.md b/spaces/bhasker412/IDD-YOLO-Tracking/README.md
deleted file mode 100644
index 392f5df975a6e836fcff9ecb29c35427bbd498af..0000000000000000000000000000000000000000
--- a/spaces/bhasker412/IDD-YOLO-Tracking/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Yolov7
-emoji: 🚀
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
-duplicated_from: shriarul5273/Yolov7
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/sd_hijack_unet.py b/spaces/bigjoker/stable-diffusion-webui/modules/sd_hijack_unet.py
deleted file mode 100644
index ac931c80df9b7c34b342d560bc969ff461b0afc7..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/modules/sd_hijack_unet.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import torch
-from packaging import version
-
-from modules import devices
-from modules.sd_hijack_utils import CondFunc
-
-
-class TorchHijackForUnet:
- """
- This is torch, but with cat that resizes tensors to appropriate dimensions if they do not match;
- this makes it possible to create pictures with dimensions that are multiples of 8 rather than 64
- """
-
- def __getattr__(self, item):
- if item == 'cat':
- return self.cat
-
- if hasattr(torch, item):
- return getattr(torch, item)
-
- raise AttributeError("'{}' object has no attribute '{}'".format(type(self).__name__, item))
-
- def cat(self, tensors, *args, **kwargs):
- if len(tensors) == 2:
- a, b = tensors
- if a.shape[-2:] != b.shape[-2:]:
- a = torch.nn.functional.interpolate(a, b.shape[-2:], mode="nearest")
-
- tensors = (a, b)
-
- return torch.cat(tensors, *args, **kwargs)
-
-
-th = TorchHijackForUnet()
-
-
-# Below are monkey patches to enable upcasting a float16 UNet for float32 sampling
-def apply_model(orig_func, self, x_noisy, t, cond, **kwargs):
-
- if isinstance(cond, dict):
- for y in cond.keys():
- cond[y] = [x.to(devices.dtype_unet) if isinstance(x, torch.Tensor) else x for x in cond[y]]
-
- with devices.autocast():
- return orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs).float()
-
-
-class GELUHijack(torch.nn.GELU, torch.nn.Module):
- def __init__(self, *args, **kwargs):
- torch.nn.GELU.__init__(self, *args, **kwargs)
- def forward(self, x):
- if devices.unet_needs_upcast:
- return torch.nn.GELU.forward(self.float(), x.float()).to(devices.dtype_unet)
- else:
- return torch.nn.GELU.forward(self, x)
-
-
-ddpm_edit_hijack = None
-def hijack_ddpm_edit():
- global ddpm_edit_hijack
- if not ddpm_edit_hijack:
- CondFunc('modules.models.diffusion.ddpm_edit.LatentDiffusion.decode_first_stage', first_stage_sub, first_stage_cond)
- CondFunc('modules.models.diffusion.ddpm_edit.LatentDiffusion.encode_first_stage', first_stage_sub, first_stage_cond)
- ddpm_edit_hijack = CondFunc('modules.models.diffusion.ddpm_edit.LatentDiffusion.apply_model', apply_model, unet_needs_upcast)
-
-
-unet_needs_upcast = lambda *args, **kwargs: devices.unet_needs_upcast
-CondFunc('ldm.models.diffusion.ddpm.LatentDiffusion.apply_model', apply_model, unet_needs_upcast)
-CondFunc('ldm.modules.diffusionmodules.openaimodel.timestep_embedding', lambda orig_func, timesteps, *args, **kwargs: orig_func(timesteps, *args, **kwargs).to(torch.float32 if timesteps.dtype == torch.int64 else devices.dtype_unet), unet_needs_upcast)
-if version.parse(torch.__version__) <= version.parse("1.13.1"):
- CondFunc('ldm.modules.diffusionmodules.util.GroupNorm32.forward', lambda orig_func, self, *args, **kwargs: orig_func(self.float(), *args, **kwargs), unet_needs_upcast)
- CondFunc('ldm.modules.attention.GEGLU.forward', lambda orig_func, self, x: orig_func(self.float(), x.float()).to(devices.dtype_unet), unet_needs_upcast)
- CondFunc('open_clip.transformer.ResidualAttentionBlock.__init__', lambda orig_func, *args, **kwargs: kwargs.update({'act_layer': GELUHijack}) and False or orig_func(*args, **kwargs), lambda _, *args, **kwargs: kwargs.get('act_layer') is None or kwargs['act_layer'] == torch.nn.GELU)
-
-first_stage_cond = lambda _, self, *args, **kwargs: devices.unet_needs_upcast and self.model.diffusion_model.dtype == torch.float16
-first_stage_sub = lambda orig_func, self, x, **kwargs: orig_func(self, x.to(devices.dtype_vae), **kwargs)
-CondFunc('ldm.models.diffusion.ddpm.LatentDiffusion.decode_first_stage', first_stage_sub, first_stage_cond)
-CondFunc('ldm.models.diffusion.ddpm.LatentDiffusion.encode_first_stage', first_stage_sub, first_stage_cond)
-CondFunc('ldm.models.diffusion.ddpm.LatentDiffusion.get_first_stage_encoding', lambda orig_func, *args, **kwargs: orig_func(*args, **kwargs).float(), first_stage_cond)
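The CondFunc calls above apply conditional monkey patches: a target callable is swapped for a wrapper that reroutes the call only when a predicate (here, whether the UNet needs upcasting) holds. A generic sketch of that idea, not the webui's actual CondFunc implementation:

import math

def cond_patch(obj, name, replacement, predicate):
    # wrap obj.<name> so the replacement only runs when the predicate holds
    orig = getattr(obj, name)
    def wrapper(*args, **kwargs):
        if predicate(*args, **kwargs):
            return replacement(orig, *args, **kwargs)
        return orig(*args, **kwargs)
    setattr(obj, name, wrapper)
    return wrapper

class Model:
    def apply(self, x):
        return math.sqrt(x)

m = Model()
# upcast integer inputs to float before calling the original method
cond_patch(m, "apply", lambda orig, x: orig(float(x)), lambda x: isinstance(x, int))
print(m.apply(4))     # 2.0 (routed through the wrapper, upcast first)
print(m.apply(2.25))  # 1.5 (predicate is false, the original bound method runs)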
diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/ui_extra_networks.py b/spaces/bigjoker/stable-diffusion-webui/modules/ui_extra_networks.py
deleted file mode 100644
index e788319dad4aba4484437af166b7c0fccd651798..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/modules/ui_extra_networks.py
+++ /dev/null
@@ -1,251 +0,0 @@
-import glob
-import os.path
-import urllib.parse
-from pathlib import Path
-
-from modules import shared
-import gradio as gr
-import json
-import html
-
-from modules.generation_parameters_copypaste import image_from_url_text
-
-extra_pages = []
-allowed_dirs = set()
-
-
-def register_page(page):
- """registers extra networks page for the UI; recommend doing it in on_before_ui() callback for extensions"""
-
- extra_pages.append(page)
- allowed_dirs.clear()
- allowed_dirs.update(set(sum([x.allowed_directories_for_previews() for x in extra_pages], [])))
-
-
-def add_pages_to_demo(app):
- def fetch_file(filename: str = ""):
- from starlette.responses import FileResponse
-
- if not any([Path(x).absolute() in Path(filename).absolute().parents for x in allowed_dirs]):
- raise ValueError(f"File cannot be fetched: {filename}. Must be in one of directories registered by extra pages.")
-
- ext = os.path.splitext(filename)[1].lower()
- if ext not in (".png", ".jpg"):
- raise ValueError(f"File cannot be fetched: {filename}. Only png and jpg.")
-
- # would profit from returning 304
- return FileResponse(filename, headers={"Accept-Ranges": "bytes"})
-
- app.add_api_route("/sd_extra_networks/thumb", fetch_file, methods=["GET"])
-
-
-class ExtraNetworksPage:
- def __init__(self, title):
- self.title = title
- self.name = title.lower()
- self.card_page = shared.html("extra-networks-card.html")
- self.allow_negative_prompt = False
-
- def refresh(self):
- pass
-
- def link_preview(self, filename):
- return "./sd_extra_networks/thumb?filename=" + urllib.parse.quote(filename.replace('\\', '/')) + "&mtime=" + str(os.path.getmtime(filename))
-
- def search_terms_from_path(self, filename, possible_directories=None):
- abspath = os.path.abspath(filename)
-
- for parentdir in (possible_directories if possible_directories is not None else self.allowed_directories_for_previews()):
- parentdir = os.path.abspath(parentdir)
- if abspath.startswith(parentdir):
- return abspath[len(parentdir):].replace('\\', '/')
-
- return ""
-
- def create_html(self, tabname):
- view = shared.opts.extra_networks_default_view
- items_html = ''
-
- subdirs = {}
- for parentdir in [os.path.abspath(x) for x in self.allowed_directories_for_previews()]:
- for x in glob.glob(os.path.join(parentdir, '**/*'), recursive=True):
- if not os.path.isdir(x):
- continue
-
- subdir = os.path.abspath(x)[len(parentdir):].replace("\\", "/")
- while subdir.startswith("/"):
- subdir = subdir[1:]
-
- is_empty = len(os.listdir(x)) == 0
- if not is_empty and not subdir.endswith("/"):
- subdir = subdir + "/"
-
- subdirs[subdir] = 1
-
- if subdirs:
- subdirs = {"": 1, **subdirs}
-
- subdirs_html = "".join([f"""
-
-""" for subdir in subdirs])
-
- for item in self.list_items():
- items_html += self.create_html_for_item(item, tabname)
-
- if items_html == '':
- dirs = "".join([f"
{x}
" for x in self.allowed_directories_for_previews()])
- items_html = shared.html("extra-networks-no-cards.html").format(dirs=dirs)
-
- self_name_id = self.name.replace(" ", "_")
-
- res = f"""
-
-{subdirs_html}
-
-
-{items_html}
-
-"""
-
- return res
-
- def list_items(self):
- raise NotImplementedError()
-
- def allowed_directories_for_previews(self):
- return []
-
- def create_html_for_item(self, item, tabname):
- preview = item.get("preview", None)
-
- onclick = item.get("onclick", None)
- if onclick is None:
- onclick = '"' + html.escape(f"""return cardClicked({json.dumps(tabname)}, {item["prompt"]}, {"true" if self.allow_negative_prompt else "false"})""") + '"'
-
- args = {
- "preview_html": "style='background-image: url(\"" + html.escape(preview) + "\")'" if preview else '',
- "prompt": item.get("prompt", None),
- "tabname": json.dumps(tabname),
- "local_preview": json.dumps(item["local_preview"]),
- "name": item["name"],
- "card_clicked": onclick,
- "save_card_preview": '"' + html.escape(f"""return saveCardPreview(event, {json.dumps(tabname)}, {json.dumps(item["local_preview"])})""") + '"',
- "search_term": item.get("search_term", ""),
- }
-
- return self.card_page.format(**args)
-
-
-def intialize():
- extra_pages.clear()
-
-
-class ExtraNetworksUi:
- def __init__(self):
- self.pages = None
- self.stored_extra_pages = None
-
- self.button_save_preview = None
- self.preview_target_filename = None
-
- self.tabname = None
-
-
-def pages_in_preferred_order(pages):
- tab_order = [x.lower().strip() for x in shared.opts.ui_extra_networks_tab_reorder.split(",")]
-
- def tab_name_score(name):
- name = name.lower()
- for i, possible_match in enumerate(tab_order):
- if possible_match in name:
- return i
-
- return len(pages)
-
- tab_scores = {page.name: (tab_name_score(page.name), original_index) for original_index, page in enumerate(pages)}
-
- return sorted(pages, key=lambda x: tab_scores[x.name])
-
-
-def create_ui(container, button, tabname):
- ui = ExtraNetworksUi()
- ui.pages = []
- ui.stored_extra_pages = pages_in_preferred_order(extra_pages.copy())
- ui.tabname = tabname
-
- with gr.Tabs(elem_id=tabname+"_extra_tabs") as tabs:
- for page in ui.stored_extra_pages:
- with gr.Tab(page.title):
- page_elem = gr.HTML(page.create_html(ui.tabname))
- ui.pages.append(page_elem)
-
- filter = gr.Textbox('', show_label=False, elem_id=tabname+"_extra_search", placeholder="Search...", visible=False)
- button_refresh = gr.Button('Refresh', elem_id=tabname+"_extra_refresh")
- button_close = gr.Button('Close', elem_id=tabname+"_extra_close")
-
- ui.button_save_preview = gr.Button('Save preview', elem_id=tabname+"_save_preview", visible=False)
- ui.preview_target_filename = gr.Textbox('Preview save filename', elem_id=tabname+"_preview_filename", visible=False)
-
- def toggle_visibility(is_visible):
- is_visible = not is_visible
- return is_visible, gr.update(visible=is_visible)
-
- state_visible = gr.State(value=False)
- button.click(fn=toggle_visibility, inputs=[state_visible], outputs=[state_visible, container])
- button_close.click(fn=toggle_visibility, inputs=[state_visible], outputs=[state_visible, container])
-
- def refresh():
- res = []
-
- for pg in ui.stored_extra_pages:
- pg.refresh()
- res.append(pg.create_html(ui.tabname))
-
- return res
-
- button_refresh.click(fn=refresh, inputs=[], outputs=ui.pages)
-
- return ui
-
-
-def path_is_parent(parent_path, child_path):
- parent_path = os.path.abspath(parent_path)
- child_path = os.path.abspath(child_path)
-
- return child_path.startswith(parent_path)
-
-
-def setup_ui(ui, gallery):
- def save_preview(index, images, filename):
- if len(images) == 0:
- print("There is no image in gallery to save as a preview.")
- return [page.create_html(ui.tabname) for page in ui.stored_extra_pages]
-
- index = int(index)
- index = 0 if index < 0 else index
- index = len(images) - 1 if index >= len(images) else index
-
- img_info = images[index if index >= 0 else 0]
- image = image_from_url_text(img_info)
-
- is_allowed = False
- for extra_page in ui.stored_extra_pages:
- if any([path_is_parent(x, filename) for x in extra_page.allowed_directories_for_previews()]):
- is_allowed = True
- break
-
- assert is_allowed, f'writing to {filename} is not allowed'
-
- image.save(filename)
-
- return [page.create_html(ui.tabname) for page in ui.stored_extra_pages]
-
- ui.button_save_preview.click(
- fn=save_preview,
- _js="function(x, y, z){return [selected_gallery_index(), y, z]}",
- inputs=[ui.preview_target_filename, gallery, ui.preview_target_filename],
- outputs=[*ui.pages]
- )
-
diff --git a/spaces/bigjoker/stable-diffusion-webui/test/basic_features/extras_test.py b/spaces/bigjoker/stable-diffusion-webui/test/basic_features/extras_test.py
deleted file mode 100644
index 0170c511fe54cc6bcf49ec7f75ca7c747de41db5..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/test/basic_features/extras_test.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import unittest
-import requests
-from gradio.processing_utils import encode_pil_to_base64
-from PIL import Image
-
-class TestExtrasWorking(unittest.TestCase):
- def setUp(self):
- self.url_extras_single = "http://localhost:7860/sdapi/v1/extra-single-image"
- self.extras_single = {
- "resize_mode": 0,
- "show_extras_results": True,
- "gfpgan_visibility": 0,
- "codeformer_visibility": 0,
- "codeformer_weight": 0,
- "upscaling_resize": 2,
- "upscaling_resize_w": 128,
- "upscaling_resize_h": 128,
- "upscaling_crop": True,
- "upscaler_1": "None",
- "upscaler_2": "None",
- "extras_upscaler_2_visibility": 0,
- "image": encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png"))
- }
-
- def test_simple_upscaling_performed(self):
- self.extras_single["upscaler_1"] = "Lanczos"
- self.assertEqual(requests.post(self.url_extras_single, json=self.extras_single).status_code, 200)
-
-
-class TestPngInfoWorking(unittest.TestCase):
- def setUp(self):
- self.url_png_info = "http://localhost:7860/sdapi/v1/extra-single-image"
- self.png_info = {
- "image": encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png"))
- }
-
- def test_png_info_performed(self):
- self.assertEqual(requests.post(self.url_png_info, json=self.png_info).status_code, 200)
-
-
-class TestInterrogateWorking(unittest.TestCase):
- def setUp(self):
- self.url_interrogate = "http://localhost:7860/sdapi/v1/extra-single-image"
- self.interrogate = {
- "image": encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png")),
- "model": "clip"
- }
-
- def test_interrogate_performed(self):
- self.assertEqual(requests.post(self.url_interrogate, json=self.interrogate).status_code, 200)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/bigscience/petals-api/src/__init__.py b/spaces/bigscience/petals-api/src/__init__.py
deleted file mode 100644
index 9f4ea05bdd6fb1794ddbd0b45f8d08cbf5193dab..0000000000000000000000000000000000000000
--- a/spaces/bigscience/petals-api/src/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from src.bloom import *
-from src.client import *
-from src.dht_utils import declare_active_modules, get_remote_module
-
-__version__ = "0.2"
diff --git a/spaces/bilgeyucel/prompt-lemmatizer/utils.py b/spaces/bilgeyucel/prompt-lemmatizer/utils.py
deleted file mode 100644
index 941f7533902caacf0dcde312aae6a3c213ed92a5..0000000000000000000000000000000000000000
--- a/spaces/bilgeyucel/prompt-lemmatizer/utils.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from simplemma import text_lemmatizer
-
-def lemmatizer_func(plain_text):
- lemmatized_promt = ' '.join([el for el in text_lemmatizer(plain_text, lang='en') if el.isalnum()])
- return lemmatized_promt
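A hedged usage sketch of the helper above: simplemma's text_lemmatizer tokenizes and lemmatizes the prompt, and the isalnum filter drops punctuation tokens. The printed output depends on simplemma's lexicon and version.

from simplemma import text_lemmatizer

prompt = "The cats were running faster, weren't they?"
lemmas = [tok for tok in text_lemmatizer(prompt, lang="en") if tok.isalnum()]
print(" ".join(lemmas))  # lemmatized prompt with punctuation removed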
diff --git a/spaces/bioriAsaeru/text-to-voice/Bajirao Mastani 720p In Hindi Dubbed Movie The Perfect Choice for a Date Night or a Family Night.md b/spaces/bioriAsaeru/text-to-voice/Bajirao Mastani 720p In Hindi Dubbed Movie The Perfect Choice for a Date Night or a Family Night.md
deleted file mode 100644
index ed86efeb93a0b562fdfcd2bc982659694a0be2a5..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Bajirao Mastani 720p In Hindi Dubbed Movie The Perfect Choice for a Date Night or a Family Night.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Huawei Hisilicon Firmware Writer 136: A Tool to Flash Custom Firmware on Huawei Devices
-
Huawei Hisilicon Firmware Writer 136 is a software tool that allows users to flash custom firmware on Huawei devices that use Hisilicon chipsets. The tool can bypass the bootloader lock and flash any firmware file with the .bin extension. The tool supports various models of Huawei devices, such as routers, modems, smartphones, and tablets.
-
The tool was developed by a group of hackers who discovered a vulnerability in the original Huawei firmware that allowed them to modify the bootloader and kernel. The hackers claimed that they were able to replace the original command set_bit_at_address calls with clear_bit_at_address calls, which enabled them to flash custom firmware without any restrictions[^1^]. The tool was released on npm, a package manager for JavaScript, and can be downloaded from this link: https://www.npmjs.com/package/huawei_hisilicon_firmware_writer_136_fg5
However, flashing custom firmware on Huawei devices is not without risks. Users should be aware that flashing custom firmware may void their warranty, brick their device, or expose them to security threats. Users should also backup their data before flashing custom firmware, as the process may erase all the data on the device. Users should also verify the source and integrity of the firmware file before flashing it, as some firmware files may contain malware or spyware.
-
Users who want to flash custom firmware on Huawei devices using Huawei Hisilicon Firmware Writer 136 should follow these steps:
-
-
Download and install Huawei Hisilicon Firmware Writer 136 from npm.
-
Download the firmware file with the .bin extension from a trusted source.
-
Connect the Huawei device to the computer using a USB cable.
-
Run Huawei Hisilicon Firmware Writer 136 and select the device model and port number.
-
Browse and select the firmware file and click on Flash.
-
Wait for the flashing process to complete and reboot the device.
-
-
Users who want to learn more about Huawei Hisilicon Firmware Writer 136 can visit the official website of the tool: https://blltly.com/2t5tA7
-```
-
-```html
-
Flashing custom firmware on Huawei devices using Huawei Hisilicon Firmware Writer 136 can have some benefits and drawbacks. Users should weigh them carefully before deciding to flash custom firmware.
-
Benefits of Custom Firmware
-
-
Custom firmware can offer more features and customization options than the original Huawei firmware. For example, users can install different themes, icons, launchers, widgets, and apps that are not available on the stock firmware. Users can also tweak the performance, battery life, security, and privacy settings of their device according to their preferences.
-
Custom firmware can also provide more frequent updates and bug fixes than the original Huawei firmware. Some custom firmware developers are very active and responsive to user feedback and requests. They may release new versions of their firmware with new features and improvements on a regular basis. Users can also benefit from the community support and resources that are available for custom firmware users.
-
Custom firmware can also allow users to unlock the full potential of their device. Some custom firmware can enable features that are disabled or hidden by the original Huawei firmware, such as overclocking, rooting, tethering, dual booting, and more. Users can also remove unwanted apps and bloatware that come pre-installed on the stock firmware, freeing up storage space and improving performance.
-
-
Drawbacks of Custom Firmware
-
-
Custom firmware can also pose some risks and challenges for users. Flashing custom firmware may void the warranty of the device, as Huawei may not provide any support or service for devices that have been modified. Users may also lose access to some official Huawei services and apps that are exclusive to the stock firmware, such as Huawei Cloud, Huawei AppGallery, Huawei Music, Huawei Health, and more.
-
Custom firmware may also be unstable or incompatible with some devices or apps. Some custom firmware may have bugs or errors that cause crashes, freezes, reboots, or other problems. Some custom firmware may not work well with some hardware components or drivers of the device, resulting in poor performance or functionality. Some custom firmware may also conflict with some apps or games that require certain permissions or certifications from the original Huawei firmware.
-
Custom firmware may also expose users to security threats or legal issues. Some custom firmware may contain malware or spyware that can steal user data or harm the device. Some custom firmware may also violate the intellectual property rights of Huawei or other parties by using their code or resources without permission. Users may face legal consequences if they use custom firmware that infringes on these rights.
-
-
In conclusion, flashing custom firmware on Huawei devices using Huawei Hisilicon Firmware Writer 136 can have both advantages and disadvantages. Users should be aware of the benefits and drawbacks of custom firmware before deciding to flash it on their device. Users should also backup their data and follow the instructions carefully when flashing custom firmware to avoid any problems.
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Intel Sm Bus Controller Driver Windows Xp Download Update Your Chipset Components Easily.md b/spaces/bioriAsaeru/text-to-voice/Intel Sm Bus Controller Driver Windows Xp Download Update Your Chipset Components Easily.md
deleted file mode 100644
index b70bdb454a4b6b090afc4ac8b23c00e93cbc85f9..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Intel Sm Bus Controller Driver Windows Xp Download Update Your Chipset Components Easily.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Installed a new MB & HDD on an Aspire One D250, did a clean install of the Windows XP Professional OS, device drivers, and chipset package (945G family v. 8.3.0.1018). XP does not find a driver for the SM Bus Controller. When using the webcam to make a video chat call, I get a BSOD. When I open Device Manager, the SM Bus Controller driver shows as not found or not installed. How do I correct this?
-
The following will teach you how to download the Intel SM Bus Controller driver for some popular computer brands; the methods apply to Windows 11, Windows 10, Windows 8/8.1, Windows 7, Windows Vista, and Windows XP, both 32-bit and 64-bit.
-
Intel SM Bus Controller Driver Windows XP Download
IMPORTANT: This file is only for use with the models identified. Not all models are sold in all countries. This information is for the following models:
SVF14N11CXB
SVF14N13CBS
SVF14N13CLB
SVF14N13CLS
SVF14N13CXB
SVF14N13CXS
SVF14N15CBB
SVF14N15CDS
SVF14N15CLB
SVF14N15CLS
SVF14N16CXB
SVF14N16CXS
SVF14N190S
SVF14N190X
SVF15N14CXB
SVF15N14CXS
SVF15N15CDB
SVF15N17CBS
SVF15N17CDB
SVF15N17CLS
SVF15N17CXB
SVF15N17CXS
SVF15N18PXB
SVF15N190S
SVF15N190X

Benefits and improvements
This utility installs the originally shipped version of the Intel® Chipset Driver. It also resolves the yellow exclamation point error that may appear in Device Manager next to "SM Bus Controller", which indicates that a driver is not installed for that device.

Operating Systems
Windows® 10 64-bit
Windows® 8 64-bit
Windows® 8.1 64-bit

File Info
File Name: EP0000552001.exe
File Version: 9.4.0.1017
File Size: 8.35 MB
Release Date: 09-24-2013
Download URL: https://dlv.update.sony.net/US/pc/EP0000552001.exe

Important Notes
During the installation of this file, be sure that you are logged in as the Administrator or as a user with Administrative rights.
To ensure that no other program interferes with the installation, save all work and close all other programs. The Taskbar should be clear of applications before proceeding.
It is highly recommended that you print out these instructions for use as a reference during the installation process.

Download and Installation Instructions
Download the EP0000552001.exe file to a temporary or download directory (please note this directory for reference).
Go to the directory where the file was downloaded and double-click the EP0000552001.exe file to begin the installation.
At the "User Account Control" screen, click "Yes".
If the "Select Language" screen appears, select the language you wish to use, and then click "OK".
At the "Select Country" screen, select the country for this installation, and then click "Next".
At the "Intel Chipset Driver ver. 9.4.0.1017" screen, click "Next".
At the "License Agreement" dialog, read the agreement, then, if you agree, click "I Agree" to continue with the installation.
At the "Finish" screen, click "Finish".
At the "This system must be restarted to complete the installation" screen, click "OK" to restart the computer. After the computer restarts, the installation of the Intel Chipset Driver is complete.
Sheet","flowplayer.language.et":"Estonian","flowplayer.language.sk":"Slovak","LT.displayName":"Lithuania","en_IE.displayName":"Ireland","PL.displayName":"Poland","ZA.displayName":"South Africa","BG.displayName":"Bulgaria","pl_PL.displayName":"","BH.displayName":"Bahrain","flowplayer.language.en":"English","flowplayer.language.bg":"Bulgarian","FI.displayName":"Finland","CH.displayName":"Switzerland","JP.displayName":"","BY.displayName":"Belarus","BR.displayName":"Brazil","TR.displayName":"Türkiye","fr_BE.displayName":"Belgium","IE.displayName":"Republic of Ireland","en_EE.displayName":"Estonia","sv_SE.displayName":"","recycling_cost_5Euro.text":"","BE.displayName":"Belgium","LU.displayName":"Luxembourg","IS.displayName":"Iceland","flowplayer.language.kk":"Kazakh","RU.displayName":"Russia","buy.button.text":"Where To Buy","dynamic.product_count.12":"Products","CZ.displayName":"Czech Republic","MD.displayName":"Moldova","CN.region_displayName":"China Region","dynamic.product_count.13":"Products","product_count.3":"0 Products","AL.displayName":"Albania","XM.displayName":"Middle East","en_ID.displayName":"Indonesia","IN.displayName":"India","dynamic.product_count.2":"Products","dynamic.product_count.23":"Products","MC.displayName":"Monaco","flowplayer.language.it":"Italian","US.region_displayName":"Pan America Region","applicable_details.information":"This information is for the following models:","it_IT.displayName":"","fr_CH.displayName":"Switzerland","meganav.viewMore":"View More","violators.topPick":"Top Pick","MK.displayName":"Macedonia","AP.displayName":"Others","HK.displayName":"Hong Kong","ro_RO.displayName":"","product_count.23":"0 Products","bg_BG.displayName":"","en_US.displayName":"USA","AU.displayName":"Australia","VA.displayName":"Vatican City","product_count.74":"","vi_VN.displayName":"","PH.displayName":"Philippines","NZ.displayName":"New Zealand","product_count.34":"0 Products","SA.displayName":"Kingdom of Saudi Arabia","de_AT.displayName":"Austria","product_count.12":"0 Products","flowplayer.language.sl":"Slovene","KR.displayName":"Korea","SG.displayName":"Singapore","flowplayer.language.es":"Spanish","sk_SK.displayName":"","ID.displayName":"Indonesia","en_SG.displayName":"Singapore","ru_RU.displayName":"Russia","cs_CZ.displayName":"","de_DE.displayName":"Germany","MY.displayName":"Malaysia","dynamic.product_count.31":"Products","related_products_link.text":"Related Products","DE.displayName":"Germany","en_CA.displayName":"Canada","es_ES.displayName":"Spain","favorites.tooltip.add_action":"Add to Favorites","flowplayer.language.no":"Norwegian","en_LV.displayName":"Latvia","product_count.2":"0 Products","GR.displayName":"Greece","favorites.tooltip.header":"Favorites","NO.displayName":"Norway","fr_CA.displayName":"Canada","en_TH.displayName":"Thailand","notify_me.text":"Notify Me","th_TH.displayName":"","sr_YU.displayName":"","dynamic.product_count.22":"Products","product.specifications_page_description":"Get the detailed list of specifications for the Sony 0 & see which 1 fit your needs.","dynamic.product_count.11":"Products","flowplayer.language.ru":"Russian","HU.displayName":"Hungary","product_count.64":"","en_MY.displayName":"Malaysia","applicable_details.products-title":"Applicable Model","HR.displayName":"Croatia","IT.displayName":"Italy","consent_warning.description":"Access your cookie preferences below and make sure to switch on the Youtube cookie under the 'Functional' section.","product.specifications_page_title":"0 
Specifications","flowplayer.language.ar":"Arabic","applicable_details.show_more":"View all applicable models","AE.displayName":"United Arab Emirates","product.reviews_page_title":"0 Reviews & Ratings","product_count.32":"0 Products","sr_RS.displayName":"","favorites.tooltip.on_add":"Added","SM.displayName":"San Marino","flowplayer.language.pl":"Polish","accessory_finder.label":"Accessory Finder","dynamic.product_count.1":"Product","aria.slider.next":"","AP.region_displayName":"Asia Pacific Region","product.details_page_description":"Discover the 0 from Sony & explore all the 1 features.","product_count.21":"0 Products","additional_cta.text":"","EE.displayName":"Estonia","mk_MK.displayName":"","product_count.33":"0 Products","flowplayer.language.sv":"Slovenian","TH.displayName":"Thailand","tr_TR.displayName":"Türkiye","JO.displayName":"Jordan","hr_BA.displayName":"","favorites.tooltip.remove_action":"Remove Favorite","brexitTVEnergyLink.text":"","VN.displayName":"Vietnam","es_EC.displayName":"Ecuador","CY.displayName":"Cyprus","product_count.22":"0 Products","de_CH.displayName":"Switzerland","en_NZ.displayName":"New Zealand","eula_title":"EULA (End-User-License-Agreement)","product_count.default":"0 Products","violators.newProduct":"New","meganav.viewall":"View All","accessory_finder.help_label":"How do I find my model number?","CN.displayName":"China","share.text":"Share","dynamic.product_count.32":"Products","da_DK.displayName":"","PK.displayName":"Pakistan","pricing.rrp":"","UA.displayName":"Ukraine","consent_warning.title":"Please accept Youtube cookies to watch this video","pricing.starting.at":"Starting at","product_count.11":"0 Products","US.displayName":"United States","es_MX.displayName":"Mexico","buyButton.static_text.text":"","DK.displayName":"Denmark","reviews_listing.text":"Reviews and Ratings","es_PE.displayName":"Peru","hu_HU.displayName":"","aria.slider.thumbs":"","ES.displayName":"Spain","en_IN.displayName":"India","pricing.suffix":"for 0 model","NL.displayName":"Netherlands","accessory_finder.placeholder":"Enter your model number","de_LU.displayName":"","product_count.13":"0 Products","flowplayer.language.fr":"French","el_GR.displayName":"","product.productShots.alt_text.template":"Images of 0","flowplayer.language.el":"Greek","uk_UA.displayName":"Ukraine","product_count.24":"0 Products","favorites.tooltip.on_remove":"Removed","product.reviews_page_description":"Read the latest user reviews and ratings of the Sony 0 and explore the 1.","header.typeAheadMarketingResultsTitle":"Products","download_productInfoSheet_label":"Product Information Sheet","available_soon":"Available soon","sl_SI.displayName":"","EG.displayName":"Egypt","product.lifestyleShots.alt_text.template":"0 in action","IR.displayName":"Iran","AT.displayName":"Austria","product_count.1":"0 Product","flowplayer.language.hu":"Hungary","LM.displayName":"Latin America","product_count.31":"0 Products","MT.displayName":"Malta","nl_BE.displayName":"","flowplayer.language.da":"Danish","download_button.text":"Download","fr_FR.displayName":"France","SK.displayName":"Slovakia","accessory_finder.subhead":"Select your model to find compatible accessories:","flowplayer.language.tr":"Turkish","flowplayer.language.zh":"Traditional Chinese","eula_scroll_info":"Please read and scroll through the entire End User License Agreement (EULA) to enable the Download button.","product.primaryShot.alt_text.template":"Picture of 0","JP.region_displayName":"","buy.button.disabled.text":"Currently not 
available","header.typeAheadSupportResultsTitle":"Support","flowplayer.language.bs":"Bosnian","flowplayer.language.xz":"Simplified Chinese","en_AU.displayName":"Australia","back_to_top.label":"Back to Top","AZ.displayName":"Azerbaijan","dynamic.accessory_count.default":"0 Accessories","flowplayer.language.cs":"Czech","violators.awardWinning":"Award Winner","LI.displayName":"Liechtenstein","flowplayer.language.de":"German","en_GB.displayName":"United Kingdom","dynamic.product_count.14":"Products","flowplayer.language.uk":"Ukrainian","product.detailsDimensionShots.alt_text.template":"Dimensions of 0","dynamic.product_count.33":"Products","flowplayer.language.pt":"Portuguese","hr_HR.displayName":"","brexitTVEnergyLabel.text":"","XA.displayName":"North Africa","dynamic.product_count.4":"Products","aria.slider.thumb":"","dynamic.product_count.21":"Products","tax_disclaimer.text":"","product.dimensionShots.alt_text.template":"Dimensions of 0","PT.displayName":"Portugal","RO.displayName":"Romania","es_CO.displayName":"Colombia";window.__PRELOADED_STATE__ = window.__PRELOADED_STATE__ || ;window.__PRELOADED_STATE__.location ="pathname":"\/electronics\/support\/downloads\/W0009063","query":;window.__PRELOADED_STATE__.page ="type":"downloadDetails","searchByType":"none","typeAlias":null,"origin":"server","hasError":false,"isLoading":false,"isLoadingPartialContent":false,"location":"pathname":"\/electronics\/support\/downloads\/W0009063","query":,"params":"downloadId":"W0009063","cookie":"","locale":"en_US";window.__PRELOADED_STATE__.specialmessage ="params":"locale":"en_US","isFetching":false,"hasError":false,"specialMessageResponse":"error":"No Special Message found for the locale en_US","description":"Element corresponding to query \"$and\":[\"locale\":\"en_US\",] not found in collection support_special_messages.";window.__CTX__ = window.__CTX__ || ;window.__COMPONENT_CONFIG__ = window.__COMPONENT_CONFIG__ || ;window.__I18N__ = window.__I18N__ || ;window.__CTX__.special_message ="module":"isCritical":false;window.__COMPONENT_CONFIG__.special_message ="support_specialMessageValidityInDays":7,"support_showSpecialMessage":true,"support_additionalVisibleModulesInLiteMode":[];window.__I18N__.special_message ="GB.displayName":"United Kingdom","SE.displayName":"Sweden","related_products_curated_link.text":"Related Products","flowplayer.language.ja":"Japanese","idk.text":"Sorry, this data isn't available","pt_PT.displayName":"Portugal","AD.displayName":"Andorra","no_NO.displayName":"","dynamic.product_count.default":"Products","support_link.text":"Support","TW.displayName":"Taiwan","product_count.14":"0 Products","YU.displayName":"Serbia","en_HK.displayName":"","flowplayer.language.fi":"Finnish","es_CL.displayName":"Chile","ME.displayName":"Montenegro","FR.displayName":"France","BA.displayName":"Bosnia and Herzegovina","flowplayer.language.ko":"Korean","EU.region_displayName":"Europe Region","CA.displayName":"Canada","pricing.starting.at_succeeding":"","SI.displayName":"Slovenia","product_count.72":"","es_AR.displayName":"Argentina","dynamic.product_count.34":"Products","NG.displayName":"Nigeria","sony.text":"Sony US","product_count.4":"0 Products","aria.slider.previous":"","KZ.displayName":"Kazakhstan","flowplayer.language.nl":"Dutch","fi_FI.displayName":"","en_PH.displayName":"Philippines","KW.displayName":"Kuwait","flowplayer.language.al":"Albanian","dynamic.accessory_count.1":"0 
Accessory","MA.displayName":"Morocco","flowplayer.language.mk":"Macedonian","nl_NL.displayName":"","dynamic.product_count.24":"Products","fr_LU.displayName":"","LV.displayName":"Latvia","lt_LT.displayName":"","dynamic.product_count.3":"Products","flowplayer.language.ro":"Romanian","GE.displayName":"Georgia","consent_warning.button_text":"Manage cookies","favorite.text":"Favorite","productInformationSheet.text":"Product Information Sheet","flowplayer.language.et":"Estonian","flowplayer.language.sk":"Slovak","LT.displayName":"Lithuania","en_IE.displayName":"Ireland","PL.displayName":"Poland","ZA.displayName":"South Africa","BG.displayName":"Bulgaria","pl_PL.displayName":"","BH.displayName":"Bahrain","flowplayer.language.en":"English","flowplayer.language.bg":"Bulgarian","FI.displayName":"Finland","CH.displayName":"Switzerland","JP.displayName":"","BY.displayName":"Belarus","BR.displayName":"Brazil","TR.displayName":"Türkiye","fr_BE.displayName":"Belgium","IE.displayName":"Republic of Ireland","en_EE.displayName":"Estonia","sv_SE.displayName":"","recycling_cost_5Euro.text":"","BE.displayName":"Belgium","LU.displayName":"Luxembourg","IS.displayName":"Iceland","flowplayer.language.kk":"Kazakh","RU.displayName":"Russia","buy.button.text":"Where To Buy","dynamic.product_count.12":"Products","CZ.displayName":"Czech Republic","MD.displayName":"Moldova","CN.region_displayName":"China Region","dynamic.product_count.13":"Products","product_count.3":"0 Products","AL.displayName":"Albania","XM.displayName":"Middle East","en_ID.displayName":"Indonesia","IN.displayName":"India","dynamic.product_count.2":"Products","dynamic.product_count.23":"Products","MC.displayName":"Monaco","flowplayer.language.it":"Italian","US.region_displayName":"Pan America Region","it_IT.displayName":"","fr_CH.displayName":"Switzerland","meganav.viewMore":"View More","violators.topPick":"Top Pick","MK.displayName":"Macedonia","AP.displayName":"Others","HK.displayName":"Hong Kong","ro_RO.displayName":"","product_count.23":"0 Products","bg_BG.displayName":"","en_US.displayName":"USA","AU.displayName":"Australia","VA.displayName":"Vatican City","product_count.74":"","vi_VN.displayName":"","PH.displayName":"Philippines","NZ.displayName":"New Zealand","product_count.34":"0 Products","SA.displayName":"Kingdom of Saudi Arabia","de_AT.displayName":"Austria","product_count.12":"0 Products","flowplayer.language.sl":"Slovene","KR.displayName":"Korea","SG.displayName":"Singapore","flowplayer.language.es":"Spanish","sk_SK.displayName":"","ID.displayName":"Indonesia","en_SG.displayName":"Singapore","ru_RU.displayName":"Russia","cs_CZ.displayName":"","de_DE.displayName":"Germany","MY.displayName":"Malaysia","dynamic.product_count.31":"Products","related_products_link.text":"Related Products","DE.displayName":"Germany","en_CA.displayName":"Canada","es_ES.displayName":"Spain","favorites.tooltip.add_action":"Add to Favorites","flowplayer.language.no":"Norwegian","en_LV.displayName":"Latvia","product_count.2":"0 Products","GR.displayName":"Greece","favorites.tooltip.header":"Favorites","NO.displayName":"Norway","fr_CA.displayName":"Canada","en_TH.displayName":"Thailand","notify_me.text":"Notify Me","th_TH.displayName":"","sr_YU.displayName":"","dynamic.product_count.22":"Products","product.specifications_page_description":"Get the detailed list of specifications for the Sony 0 & see which 1 fit your 
needs.","dynamic.product_count.11":"Products","flowplayer.language.ru":"Russian","HU.displayName":"Hungary","product_count.64":"","en_MY.displayName":"Malaysia","HR.displayName":"Croatia","IT.displayName":"Italy","consent_warning.description":"Access your cookie preferences below and make sure to switch on the Youtube cookie under the 'Functional' section.","product.specifications_page_title":"0 Specifications","flowplayer.language.ar":"Arabic","AE.displayName":"United Arab Emirates","product.reviews_page_title":"0 Reviews & Ratings","product_count.32":"0 Products","sr_RS.displayName":"","favorites.tooltip.on_add":"Added","SM.displayName":"San Marino","flowplayer.language.pl":"Polish","accessory_finder.label":"Accessory Finder","dynamic.product_count.1":"Product","aria.slider.next":"","AP.region_displayName":"Asia Pacific Region","product.details_page_description":"Discover the 0 from Sony & explore all the 1 features.","product_count.21":"0 Products","additional_cta.text":"","EE.displayName":"Estonia","mk_MK.displayName":"","product_count.33":"0 Products","flowplayer.language.sv":"Slovenian","TH.displayName":"Thailand","tr_TR.displayName":"Türkiye","JO.displayName":"Jordan","hr_BA.displayName":"","favorites.tooltip.remove_action":"Remove Favorite","brexitTVEnergyLink.text":"","VN.displayName":"Vietnam","es_EC.displayName":"Ecuador","CY.displayName":"Cyprus","product_count.22":"0 Products","de_CH.displayName":"Switzerland","en_NZ.displayName":"New Zealand","product_count.default":"0 Products","violators.newProduct":"New","meganav.viewall":"View All","accessory_finder.help_label":"How do I find my model number?","CN.displayName":"China","share.text":"Share","dynamic.product_count.32":"Products","da_DK.displayName":"","PK.displayName":"Pakistan","pricing.rrp":"","UA.displayName":"Ukraine","consent_warning.title":"Please accept Youtube cookies to watch this video","pricing.starting.at":"Starting at","product_count.11":"0 Products","US.displayName":"United States","es_MX.displayName":"Mexico","buyButton.static_text.text":"","DK.displayName":"Denmark","reviews_listing.text":"Reviews and Ratings","es_PE.displayName":"Peru","hu_HU.displayName":"","aria.slider.thumbs":"","ES.displayName":"Spain","en_IN.displayName":"India","pricing.suffix":"for 0 model","NL.displayName":"Netherlands","accessory_finder.placeholder":"Enter your model number","de_LU.displayName":"","product_count.13":"0 Products","flowplayer.language.fr":"French","el_GR.displayName":"","product.productShots.alt_text.template":"Images of 0","flowplayer.language.el":"Greek","uk_UA.displayName":"Ukraine","product_count.24":"0 Products","favorites.tooltip.on_remove":"Removed","product.reviews_page_description":"Read the latest user reviews and ratings of the Sony 0 and explore the 1.","header.typeAheadMarketingResultsTitle":"Products","download_productInfoSheet_label":"Product Information Sheet","available_soon":"Available soon","sl_SI.displayName":"","EG.displayName":"Egypt","product.lifestyleShots.alt_text.template":"0 in action","IR.displayName":"Iran","AT.displayName":"Austria","product_count.1":"0 Product","flowplayer.language.hu":"Hungary","LM.displayName":"Latin America","product_count.31":"0 Products","MT.displayName":"Malta","nl_BE.displayName":"","flowplayer.language.da":"Danish","fr_FR.displayName":"France","SK.displayName":"Slovakia","accessory_finder.subhead":"Select your model to find compatible accessories:","flowplayer.language.tr":"Turkish","flowplayer.language.zh":"Traditional 
Chinese","product.primaryShot.alt_text.template":"Picture of 0","JP.region_displayName":"","buy.button.disabled.text":"Currently not available","header.typeAheadSupportResultsTitle":"Support","flowplayer.language.bs":"Bosnian","flowplayer.language.xz":"Simplified Chinese","en_AU.displayName":"Australia","back_to_top.label":"Back to Top","AZ.displayName":"Azerbaijan","dynamic.accessory_count.default":"0 Accessories","flowplayer.language.cs":"Czech","violators.awardWinning":"Award Winner","LI.displayName":"Liechtenstein","flowplayer.language.de":"German","en_GB.displayName":"United Kingdom","dynamic.product_count.14":"Products","flowplayer.language.uk":"Ukrainian","product.detailsDimensionShots.alt_text.template":"Dimensions of 0","support.close":"Close","dynamic.product_count.33":"Products","flowplayer.language.pt":"Portuguese","hr_HR.displayName":"","brexitTVEnergyLabel.text":"","XA.displayName":"North Africa","dynamic.product_count.4":"Products","aria.slider.thumb":"","dynamic.product_count.21":"Products","tax_disclaimer.text":"","product.dimensionShots.alt_text.template":"Dimensions of 0","PT.displayName":"Portugal","RO.displayName":"Romania","es_CO.displayName":"Colombia";window.__PRELOADED_STATE__ = window.__PRELOADED_STATE__ || ;window.__PRELOADED_STATE__.location ="pathname":"\/electronics\/support\/downloads\/W0009063","query":;window.__PRELOADED_STATE__.page ="type":"downloadDetails","searchByType":"none","typeAlias":null,"origin":"server","hasError":false,"isLoading":false,"isLoadingPartialContent":false,"location":"pathname":"\/electronics\/support\/downloads\/W0009063","query":,"params":"downloadId":"W0009063","cookie":"","locale":"en_US";window.__PRELOADED_STATE__.externalhtml ="isFetching":false,"isFetched":true,"fileContent":"\n\n\n\n\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\/\/check for preview domain\nif($(\"body\").attr(\"data-locale\") == \"en_US\")\n\tvar the_domain = window.location.hostname;\n\tif(the_domain.indexOf(\"preview-production-pdp\") >= 0)\n\t\t\n\t\tinit();\n\t\t\n\telse\n\t\t\n\t\tinit();\n\t\n\t\n\n\/\/end preview-check\n\/\/=============================\/\/\n\n\n\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\/\/smarthelp debugger on production\nfunction debug_smarthelp()\n\tvar the_href = window.location.href;\n\tif(the_href.indexOf(\"smarthelp_testing\") >= 0)\n\n\t\t$(\"body\").prepend(\"Debugger\")\n\n\t\t$(\"*\").bind(\"focus\", function()\n\t\t\tvar debug = $('.debug_smarthelp');\n\t\t\tdebug.empty();\n\t\t\tvar type = $(this).prop(\"nodeName\");\n\t\t\tvar prevSibling = $(this).prev().prop(\"nodeName\");\n\t\t\tvar id = $(this).attr(\"id\");\n\t\t\tvar theClass = $(this).attr(\"class\");\n\t\t\tvar theParent = $(this).parent().prop(\"nodeName\");\n\t\t\tvar theParentClass = $(this).parent().attr(\"class\");\n\t\t\tdebug.append(\"Type: \" + type + \"\");\n\t\t\tdebug.append(\"Previous sibling: \" + prevSibling + \"\");\n\t\t\tdebug.append(\"Id: \" + id + \"\");\n\t\t\tdebug.append(\"Class: \" + theClass + \"\");\n\t\t\tdebug.append(\"Parent Type: \" + theParent + \"\");\n\t\t\tdebug.append(\"Parent Class: \" + theParentClass + \"\");\n\t\t\t\n\t\t);\n\t\n\n\n\/\/toggle debugger\n\/\/debug_smarthelp();\n\/\/END toggle debugger\n\/\/=============================\/\/\n\n\n\n\n\nfunction init()\n\n\t\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\t\/\/Initial JS\n\tconsole.log(\"update7.1\");\n\n\tvar smarthelp_ua = 
navigator.userAgent.search(\"SmartHelp_WebViewer\");\n\tif(smarthelp_ua > -1)\n\t\tvar article_body = document.getElementsByTagName(\"BODY\")[0];\n\t\tarticle_body.tabIndex = -1;\n\t\tdo_smarthelp_styles();\n\telse\n\t\tdocument.addEventListener(\"page_state_updated\", function (e) page_state_updated() , false);\n\t\tnot_smarthelp();\n\t\n\n\t\/\/END Initial JS\n\t\/\/=============================\/\/\n\n\n\n\n\n\n\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\/\/Function that fires with event listener, and DOM Ready, for non-smarthelp browsers\nfunction page_state_updated()\n\tnot_smarthelp();\n\n\/\/=============================\/\/\n\n\n\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\/\/Update the elements hidden by default in custom stylesheet for non-smarthelp browsers\nfunction not_smarthelp()\n\tif($(\".search-eyebrow\").is(\":hidden\") === false)\n\t\t$(\"head\").append(\"body[data-locale='en_US'] .search-article-details-print-button, body[data-locale='en_US'] .search-article-details-wrapper, body[data-locale='en_US'] .search-eyebrow, body[data-locale='en_US'] .article-details-applicable-details-wrapper, body[data-locale='en_US'] .icon-list-wrapper, body[data-locale='en_US'] .var_suptype_link, body[data-locale='en_US'] .smarthelp_hide, body[data-locale='en_US'] .smh_hide visibility:visible !important; body[data-locale='en_US'] .article-details-content a, body[data-locale='en_US'] .article-details-applicable-details-wrapper a visibility:visible !important body[data-locale='en_US'] .smarthelp_hide, body[data-locale='en_US'] .smh_hide, body[data-locale='en_US'] .icon-list-wrapper, body[data-locale='en_US'] .icon-banner-wrapper.js-icon-banner-wrapper, iframe[src*='youtube'] display: block !important\");\n\t\n\n\/\/=============================\/\/\n\n\n\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\/\/Function to modify content outside of the article body, for smarthelp browser.\nfunction do_smarthelp_styles()\n\t\/\/$(\".article-details-content a\").hide()\n\t$(\".article-details-applicable-details-wrapper\").slideUp(\"fast\");\n\n\/\/=============================\/\/\n\n\n\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\/\/This is for Smarthelp Javascript Article Body manipulation\n\/\/Smarthelp never navigates SWT site (single-page-app), instead it accesses KB articles with direct link. 
\n\/\/Therefore the article content manipulation needs to occur after dom loaded.\n\/\/\n(function()\n\t\/\/console.log(\"ready\");\n\n\tif($(\"body\").attr(\"data-locale\") == \"en_US\")\n\t\t\/\/start preview-check\n\t\tvar the_domain = window.location.hostname;\n\t\tif(the_domain.indexOf(\"preview-production-pdp\") >= 0)\n\t\t\t\/\/console.log(\"Domain: Preview\");\n\t\t\tdoReady();\n\n\t\telse\n\t\t\t\/\/console.log(\"Domain: Production\");\n\t\t\tdoReady();\n\t\t\n\t\t\/\/end preview-check\n\t\n\t\n\t\n\n\t\n\tfunction doReady()\n\t\n\t\tvar smarthelp_ua = navigator.userAgent.search(\"SmartHelp_WebViewer\");\n\t\tif(smarthelp_ua > -1)\n\t\t\t\/\/console.log(\"sh-update3\")\n\t\t\tvar c_val = '';\n\t\t\tpersistExpand()\n\t\t\t\n\t\t\t$('.article-details-content a[href*=\".pdf\"], .article-details-content a[href*=\".PDF\"]').each(function()\n\t\t\t\t$(this).replaceWith('' + $(this).text() + '');\n\t\t\t);\n\t\t\t$.each($(\".article-details-content a[href*='\/external-link?url']\"), function()\n\t\t\t\t$(this).replaceWith(\"\" + $(this).text() + \"\")\n\t\t\t)\n\t\t\t\/\/:not([href^=#])\n\t\t\t$.each($(\".article-details-content a:not(.expand_parent):not(.expand_parent_dev):not(.back_to_top):not(.var_imageX)\").not('.article-details-content a[href*=\"\/sna\/graphics\/\"]').not('.article-details-content a[href*=\"docs.sony.com\"]').not('.article-details-content a[href*=\"\/articleimage\/servlet\/\"]'), function(i)\n\t\t\t\tvar that = $(this);\n\t\t\t\tvar href = that.attr(\"href\");\n\t\t\t\tif(href)\n\t\t\t\t\tif(href.indexOf(\"\/electronics\/support\/articles\/\") < 0)\n\t\t\t\t\t\tthat.replaceWith(\"\" + that.text() + \"\")\n\t\t\t\t\t\n\t\t\t\telse\n\t\t\t\t\tthat.css(\"visibility\", \"visible\")\n\t\t\t\t\n\t\t\t);\n\t\t\tconsole.log(\"sh-here\")\n\t\t\t$(\".article-details-content a\").css(\"visibility\", \"visible\")\n\t\t\t$(\".article-details-content a\").show().css('display', 'inline-block');\n\t\t\t$('.var_suptype_link, .smarthelp_hide, .smh_hide').remove();\n\t\t\t$(\"head\").append(\"#search-compact display: none;\")\n\t\t\t\n\t\t\t\/\/Webview bug fix-\n\t\t\t\/\/When page loads, if the first focusable element (a link) is beyond the fold, when you first start scrolling down, webview will skip all content before the first link.\n\t\t\t\/\/Added a tabindex to the first targetable element, the page title\n\t\t\t$('h1.search-article-title').css('outline', 'none');\n\t\t\t\n\t\t\t$(\".article-details-content\").prop('tabIndex', -1)\n\t\t\t$(\".article-details-content > div\").prop('tabIndex', -1)\n\t\t\t$('h1.search-article-title').prop('tabindex', 0)\n\t\t\t$('.expand_child').prop('tabIndex', -1);\n\t\t\t$('.expand_child_dev').prop('tabIndex', -1);\n\t\t\t$(\".article-details-content a\").show();\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\/\/Create\/update a cookie every time expand\/collapse is clicked \n function setPersistCookie()\n var date = new Date();\n \/\/One day cookie (change the 1 below to desired days)\n date.setTime(date.getTime() + 1*24*60*60*1000);\n \/\/Each cookie is only accessible by the page it was created on... 
each page where a user toggles expand\/collapse\n var thispage = window.location.pathname;\n \/\/Remove trailing comma in c_val\n c_val = c_val.replace(\/,\\s*$\/, \"\");\n document.cookie = 'persist_c='+c_val+';expires=' + date.toGMTString() + ';path='+ thispage;\n \n\t\t\t\n\t\t\t\/\/ Get Cookie\n\t\t\tfunction getCookie(name) \n\t\t\t\tvar value = \"; \" + document.cookie;\n\t\t\t\tvar parts = value.split(\"; \" + name + \"=\");\n\t\t\t\tif (parts.length == 2) return parts.pop().split(\";\").shift();\n\t\t\t\n\t\t\t\n\t\t\t\/\/Check for cookie on load, then open expand\/collapse that were previously opened by the user\n function persistExpand()\n if(getCookie('persist_c'))\n var array = getCookie('persist_c').split(',');\n $.each(array, function(index, value)\n \n $(\"a.expand_parent_dev\").eq(value).addClass('toggleFocus');\n\t\t\t\t\t\t$(\"a.expand_parent_dev\").eq(value).parent().nextAll('.expand_child_dev').first().show().removeAttr('tabindex');\n\t\t\t\t\t\t$(\"a.expand_parent_dev\").eq(value).nextAll('.expand_child_dev').first().show().removeAttr('tabindex');\n );\n \n \n \n\n\t\t\t\/\/Bind events to Expand\/Collapse links\n\t\t\t$('.expand_parent_dev').bind('click', function()\n\t\t\t\tcheckToggles();\n\t\t\t).keyup(function(e) e.which == '32')\n\t\t\t\t\tcheckToggles();\n\t\t\t\t\n\t\t\t);\n\t\t\t\n\t\t\t\/\/Check for \"open state\" class on all expand parent elements, append its DOM index to a variable that will be used for the cookie value\n\t\t\tfunction checkToggles()\n\t\t\t\tconsole.log(\"clicked\")\n\t\t\t\tc_val = '';\n\t\t\t\t$('a.expand_parent_dev').each(function(i)\n\t\t\t\t\tif($(this).hasClass('toggleFocus'))\n\t\t\t\t\t\tc_val = c_val + i + ',';\n\t\t\t\t\t\t\/\/c_val example value: \"0,2,3,7,\"\n\t\t\t\t\t\n\t\t\t\t)\n\t\t\t\tsetPersistCookie();\n\t\t\t\n\t\t\t\n\t\t\t\n\t\telse\n\t\t\tpage_state_updated();\n\t\t\n\t\n\t\t\n)()\n\n\n\n\n\/\/=============================\/\/\n\/\/=============================\/\/\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n";window.__PRELOADED_STATE__.ascportal ="isAscPortal":false,"isFetching":false;window.__CTX__ = window.__CTX__ || ;window.__COMPONENT_CONFIG__ = window.__COMPONENT_CONFIG__ || ;window.__I18N__ = window.__I18N__ || ;window.__CTX__.support_external_html ="module":"isCritical":false;window.__COMPONENT_CONFIG__.support_external_html ="support_accessLevelContent":true,"support_asc_embedLinkHeader":false,"support_additionalVisibleModulesInLiteMode":[];window.__I18N__.support_external_html ="GB.displayName":"United Kingdom","SE.displayName":"Sweden","related_products_curated_link.text":"Related Products","flowplayer.language.ja":"Japanese","idk.text":"Sorry, this data isn't available","pt_PT.displayName":"Portugal","AD.displayName":"Andorra","no_NO.displayName":"","dynamic.product_count.default":"Products","support_link.text":"Support","TW.displayName":"Taiwan","product_count.14":"0 Products","YU.displayName":"Serbia","en_HK.displayName":"","flowplayer.language.fi":"Finnish","es_CL.displayName":"Chile","ME.displayName":"Montenegro","FR.displayName":"France","BA.displayName":"Bosnia and Herzegovina","flowplayer.language.ko":"Korean","EU.region_displayName":"Europe Region","CA.displayName":"Canada","pricing.starting.at_succeeding":"","SI.displayName":"Slovenia","product_count.72":"","es_AR.displayName":"Argentina","dynamic.product_count.34":"Products","NG.displayName":"Nigeria","sony.text":"Sony US","product_count.4":"0 
Products","aria.slider.previous":"","KZ.displayName":"Kazakhstan","flowplayer.language.nl":"Dutch","fi_FI.displayName":"","en_PH.displayName":"Philippines","KW.displayName":"Kuwait","flowplayer.language.al":"Albanian","dynamic.accessory_count.1":"0 Accessory","MA.displayName":"Morocco","flowplayer.language.mk":"Macedonian","nl_NL.displayName":"","dynamic.product_count.24":"Products","fr_LU.displayName":"","LV.displayName":"Latvia","lt_LT.displayName":"","dynamic.product_count.3":"Products","flowplayer.language.ro":"Romanian","GE.displayName":"Georgia","consent_warning.button_text":"Manage cookies","favorite.text":"Favorite","productInformationSheet.text":"Product Information Sheet","flowplayer.language.et":"Estonian","flowplayer.language.sk":"Slovak","LT.displayName":"Lithuania","en_IE.displayName":"Ireland","PL.displayName":"Poland","ZA.displayName":"South Africa","BG.displayName":"Bulgaria","pl_PL.displayName":"","BH.displayName":"Bahrain","flowplayer.language.en":"English","flowplayer.language.bg":"Bulgarian","FI.displayName":"Finland","CH.displayName":"Switzerland","JP.displayName":"","BY.displayName":"Belarus","BR.displayName":"Brazil","TR.displayName":"Türkiye","fr_BE.displayName":"Belgium","IE.displayName":"Republic of Ireland","en_EE.displayName":"Estonia","sv_SE.displayName":"","recycling_cost_5Euro.text":"","BE.displayName":"Belgium","LU.displayName":"Luxembourg","IS.displayName":"Iceland","flowplayer.language.kk":"Kazakh","RU.displayName":"Russia","buy.button.text":"Where To Buy","dynamic.product_count.12":"Products","CZ.displayName":"Czech Republic","MD.displayName":"Moldova","CN.region_displayName":"China Region","dynamic.product_count.13":"Products","product_count.3":"0 Products","AL.displayName":"Albania","XM.displayName":"Middle East","en_ID.displayName":"Indonesia","IN.displayName":"India","dynamic.product_count.2":"Products","dynamic.product_count.23":"Products","MC.displayName":"Monaco","flowplayer.language.it":"Italian","US.region_displayName":"Pan America Region","it_IT.displayName":"","fr_CH.displayName":"Switzerland","meganav.viewMore":"View More","violators.topPick":"Top Pick","MK.displayName":"Macedonia","AP.displayName":"Others","HK.displayName":"Hong Kong","ro_RO.displayName":"","product_count.23":"0 Products","bg_BG.displayName":"","en_US.displayName":"USA","AU.displayName":"Australia","VA.displayName":"Vatican City","product_count.74":"","vi_VN.displayName":"","PH.displayName":"Philippines","NZ.displayName":"New Zealand","product_count.34":"0 Products","SA.displayName":"Kingdom of Saudi Arabia","de_AT.displayName":"Austria","product_count.12":"0 Products","flowplayer.language.sl":"Slovene","KR.displayName":"Korea","SG.displayName":"Singapore","flowplayer.language.es":"Spanish","sk_SK.displayName":"","ID.displayName":"Indonesia","en_SG.displayName":"Singapore","ru_RU.displayName":"Russia","cs_CZ.displayName":"","de_DE.displayName":"Germany","MY.displayName":"Malaysia","dynamic.product_count.31":"Products","related_products_link.text":"Related Products","DE.displayName":"Germany","en_CA.displayName":"Canada","es_ES.displayName":"Spain","favorites.tooltip.add_action":"Add to Favorites","flowplayer.language.no":"Norwegian","en_LV.displayName":"Latvia","product_count.2":"0 Products","GR.displayName":"Greece","favorites.tooltip.header":"Favorites","NO.displayName":"Norway","fr_CA.displayName":"Canada","en_TH.displayName":"Thailand","notify_me.text":"Notify 
Me","th_TH.displayName":"","sr_YU.displayName":"","dynamic.product_count.22":"Products","product.specifications_page_description":"Get the detailed list of specifications for the Sony 0 & see which 1 fit your needs.","dynamic.product_count.11":"Products","flowplayer.language.ru":"Russian","HU.displayName":"Hungary","product_count.64":"","en_MY.displayName":"Malaysia","HR.displayName":"Croatia","IT.displayName":"Italy","consent_warning.description":"Access your cookie preferences below and make sure to switch on the Youtube cookie under the 'Functional' section.","product.specifications_page_title":"0 Specifications","flowplayer.language.ar":"Arabic","AE.displayName":"United Arab Emirates","product.reviews_page_title":"0 Reviews & Ratings","product_count.32":"0 Products","sr_RS.displayName":"","favorites.tooltip.on_add":"Added","SM.displayName":"San Marino","flowplayer.language.pl":"Polish","accessory_finder.label":"Accessory Finder","dynamic.product_count.1":"Product","aria.slider.next":"","AP.region_displayName":"Asia Pacific Region","product.details_page_description":"Discover the 0 from Sony & explore all the 1 features.","product_count.21":"0 Products","additional_cta.text":"","EE.displayName":"Estonia","mk_MK.displayName":"","product_count.33":"0 Products","flowplayer.language.sv":"Slovenian","TH.displayName":"Thailand","tr_TR.displayName":"Türkiye","JO.displayName":"Jordan","hr_BA.displayName":"","favorites.tooltip.remove_action":"Remove Favorite","brexitTVEnergyLink.text":"","VN.displayName":"Vietnam","es_EC.displayName":"Ecuador","CY.displayName":"Cyprus","product_count.22":"0 Products","de_CH.displayName":"Switzerland","en_NZ.displayName":"New Zealand","product_count.default":"0 Products","violators.newProduct":"New","meganav.viewall":"View All","accessory_finder.help_label":"How do I find my model number?","CN.displayName":"China","share.text":"Share","dynamic.product_count.32":"Products","da_DK.displayName":"","PK.displayName":"Pakistan","pricing.rrp":"","UA.displayName":"Ukraine","consent_warning.title":"Please accept Youtube cookies to watch this video","pricing.starting.at":"Starting at","product_count.11":"0 Products","US.displayName":"United States","es_MX.displayName":"Mexico","buyButton.static_text.text":"","DK.displayName":"Denmark","reviews_listing.text":"Reviews and Ratings","es_PE.displayName":"Peru","hu_HU.displayName":"","aria.slider.thumbs":"","ES.displayName":"Spain","en_IN.displayName":"India","pricing.suffix":"for 0 model","NL.displayName":"Netherlands","accessory_finder.placeholder":"Enter your model number","de_LU.displayName":"","product_count.13":"0 Products","flowplayer.language.fr":"French","el_GR.displayName":"","product.productShots.alt_text.template":"Images of 0","flowplayer.language.el":"Greek","uk_UA.displayName":"Ukraine","product_count.24":"0 Products","favorites.tooltip.on_remove":"Removed","product.reviews_page_description":"Read the latest user reviews and ratings of the Sony 0 and explore the 1.","header.typeAheadMarketingResultsTitle":"Products","download_productInfoSheet_label":"Product Information Sheet","available_soon":"Available soon","sl_SI.displayName":"","EG.displayName":"Egypt","product.lifestyleShots.alt_text.template":"0 in action","IR.displayName":"Iran","AT.displayName":"Austria","product_count.1":"0 Product","flowplayer.language.hu":"Hungary","LM.displayName":"Latin America","product_count.31":"0 
Products","MT.displayName":"Malta","nl_BE.displayName":"","flowplayer.language.da":"Danish","fr_FR.displayName":"France","SK.displayName":"Slovakia","accessory_finder.subhead":"Select your model to find compatible accessories:","flowplayer.language.tr":"Turkish","flowplayer.language.zh":"Traditional Chinese","product.primaryShot.alt_text.template":"Picture of 0","JP.region_displayName":"","buy.button.disabled.text":"Currently not available","header.typeAheadSupportResultsTitle":"Support","flowplayer.language.bs":"Bosnian","flowplayer.language.xz":"Simplified Chinese","en_AU.displayName":"Australia","back_to_top.label":"Back to Top","AZ.displayName":"Azerbaijan","dynamic.accessory_count.default":"0 Accessories","flowplayer.language.cs":"Czech","violators.awardWinning":"Award Winner","LI.displayName":"Liechtenstein","flowplayer.language.de":"German","en_GB.displayName":"United Kingdom","dynamic.product_count.14":"Products","flowplayer.language.uk":"Ukrainian","product.detailsDimensionShots.alt_text.template":"Dimensions of 0","dynamic.product_count.33":"Products","flowplayer.language.pt":"Portuguese","hr_HR.displayName":"","brexitTVEnergyLabel.text":"","XA.displayName":"North Africa","dynamic.product_count.4":"Products","aria.slider.thumb":"","dynamic.product_count.21":"Products","tax_disclaimer.text":"","product.dimensionShots.alt_text.template":"Dimensions of 0","PT.displayName":"Portugal","RO.displayName":"Romania","es_CO.displayName":"Colombia";window.__PRELOADED_STATE__ = window.__PRELOADED_STATE__ || ;window.__PRELOADED_STATE__.location ="pathname":"\/electronics\/support\/downloads\/W0009063","query":;window.__PRELOADED_STATE__.page ="type":"downloadDetails","searchByType":"none","typeAlias":null,"origin":"server","hasError":false,"isLoading":false,"isLoadingPartialContent":false,"location":"pathname":"\/electronics\/support\/downloads\/W0009063","query":,"params":"downloadId":"W0009063","cookie":"","locale":"en_US";window.__CTX__ = window.__CTX__ || ;window.__COMPONENT_CONFIG__ = window.__COMPONENT_CONFIG__ || ;window.__I18N__ = window.__I18N__ || ;window.__CTX__.cc_agent ="module":"isCritical":false;window.__COMPONENT_CONFIG__.cc_agent ="support_ccAgentTag":true,"support_additionalVisibleModulesInLiteMode":[];window.__I18N__.cc_agent ="GB.displayName":"United Kingdom","SE.displayName":"Sweden","related_products_curated_link.text":"Related Products","flowplayer.language.ja":"Japanese","idk.text":"Sorry, this data isn't available","pt_PT.displayName":"Portugal","AD.displayName":"Andorra","no_NO.displayName":"","dynamic.product_count.default":"Products","support_link.text":"Support","TW.displayName":"Taiwan","product_count.14":"0 Products","YU.displayName":"Serbia","en_HK.displayName":"","flowplayer.language.fi":"Finnish","es_CL.displayName":"Chile","ME.displayName":"Montenegro","FR.displayName":"France","BA.displayName":"Bosnia and Herzegovina","flowplayer.language.ko":"Korean","EU.region_displayName":"Europe Region","CA.displayName":"Canada","pricing.starting.at_succeeding":"","SI.displayName":"Slovenia","product_count.72":"","es_AR.displayName":"Argentina","dynamic.product_count.34":"Products","NG.displayName":"Nigeria","sony.text":"Sony US","product_count.4":"0 Products","aria.slider.previous":"","KZ.displayName":"Kazakhstan","flowplayer.language.nl":"Dutch","fi_FI.displayName":"","en_PH.displayName":"Philippines","KW.displayName":"Kuwait","flowplayer.language.al":"Albanian","dynamic.accessory_count.1":"0 
Accessory","MA.displayName":"Morocco","flowplayer.language.mk":"Macedonian","nl_NL.displayName":"","dynamic.product_count.24":"Products","fr_LU.displayName":"","LV.displayName":"Latvia","lt_LT.displayName":"","dynamic.product_count.3":"Products","flowplayer.language.ro":"Romanian","GE.displayName":"Georgia","consent_warning.button_text":"Manage cookies","favorite.text":"Favorite","ccAgent_text":"CC Agent Portal for Internal Search","productInformationSheet.text":"Product Information Sheet","flowplayer.language.et":"Estonian","flowplayer.language.sk":"Slovak","LT.displayName":"Lithuania","en_IE.displayName":"Ireland","PL.displayName":"Poland","ZA.displayName":"South Africa","BG.displayName":"Bulgaria","pl_PL.displayName":"","BH.displayName":"Bahrain","flowplayer.language.en":"English","flowplayer.language.bg":"Bulgarian","FI.displayName":"Finland","CH.displayName":"Switzerland","JP.displayName":"","BY.displayName":"Belarus","BR.displayName":"Brazil","TR.displayName":"Türkiye","fr_BE.displayName":"Belgium","IE.displayName":"Republic of Ireland","en_EE.displayName":"Estonia","sv_SE.displayName":"","recycling_cost_5Euro.text":"","BE.displayName":"Belgium","LU.displayName":"Luxembourg","IS.displayName":"Iceland","flowplayer.language.kk":"Kazakh","RU.displayName":"Russia","buy.button.text":"Where To Buy","dynamic.product_count.12":"Products","CZ.displayName":"Czech Republic","MD.displayName":"Moldova","CN.region_displayName":"China Region","dynamic.product_count.13":"Products","product_count.3":"0 Products","AL.displayName":"Albania","XM.displayName":"Middle East","en_ID.displayName":"Indonesia","IN.displayName":"India","dynamic.product_count.2":"Products","dynamic.product_count.23":"Products","MC.displayName":"Monaco","flowplayer.language.it":"Italian","US.region_displayName":"Pan America Region","it_IT.displayName":"","fr_CH.displayName":"Switzerland","meganav.viewMore":"View More","violators.topPick":"Top Pick","MK.displayName":"Macedonia","AP.displayName":"Others","HK.displayName":"Hong Kong","ro_RO.displayName":"","myFeedbacks_button":"My Feedbacks","product_count.23":"0 Products","bg_BG.displayName":"","en_US.displayName":"USA","AU.displayName":"Australia","VA.displayName":"Vatican City","product_count.74":"","vi_VN.displayName":"","PH.displayName":"Philippines","NZ.displayName":"New Zealand","product_count.34":"0 Products","SA.displayName":"Kingdom of Saudi Arabia","de_AT.displayName":"Austria","product_count.12":"0 Products","flowplayer.language.sl":"Slovene","KR.displayName":"Korea","SG.displayName":"Singapore","flowplayer.language.es":"Spanish","sk_SK.displayName":"","ID.displayName":"Indonesia","en_SG.displayName":"Singapore","ru_RU.displayName":"Russia","cs_CZ.displayName":"","de_DE.displayName":"Germany","MY.displayName":"Malaysia","dynamic.product_count.31":"Products","related_products_link.text":"Related Products","DE.displayName":"Germany","en_CA.displayName":"Canada","es_ES.displayName":"Spain","favorites.tooltip.add_action":"Add to Favorites","flowplayer.language.no":"Norwegian","en_LV.displayName":"Latvia","product_count.2":"0 Products","GR.displayName":"Greece","favorites.tooltip.header":"Favorites","NO.displayName":"Norway","fr_CA.displayName":"Canada","en_TH.displayName":"Thailand","notify_me.text":"Notify Me","th_TH.displayName":"","sr_YU.displayName":"","dynamic.product_count.22":"Products","product.specifications_page_description":"Get the detailed list of specifications for the Sony 0 & see which 1 fit your 
needs.","dynamic.product_count.11":"Products","flowplayer.language.ru":"Russian","HU.displayName":"Hungary","product_count.64":"","en_MY.displayName":"Malaysia","HR.displayName":"Croatia","IT.displayName":"Italy","consent_warning.description":"Access your cookie preferences below and make sure to switch on the Youtube cookie under the 'Functional' section.","product.specifications_page_title":"0 Specifications","flowplayer.language.ar":"Arabic","AE.displayName":"United Arab Emirates","product.reviews_page_title":"0 Reviews & Ratings","product_count.32":"0 Products","sr_RS.displayName":"","favorites.tooltip.on_add":"Added","SM.displayName":"San Marino","flowplayer.language.pl":"Polish","accessory_finder.label":"Accessory Finder","dynamic.product_count.1":"Product","aria.slider.next":"","AP.region_displayName":"Asia Pacific Region","product.details_page_description":"Discover the 0 from Sony & explore all the 1 features.","product_count.21":"0 Products","additional_cta.text":"","EE.displayName":"Estonia","mk_MK.displayName":"","product_count.33":"0 Products","flowplayer.language.sv":"Slovenian","TH.displayName":"Thailand","tr_TR.displayName":"Türkiye","JO.displayName":"Jordan","hr_BA.displayName":"","favorites.tooltip.remove_action":"Remove Favorite","brexitTVEnergyLink.text":"","VN.displayName":"Vietnam","es_EC.displayName":"Ecuador","CY.displayName":"Cyprus","product_count.22":"0 Products","de_CH.displayName":"Switzerland","en_NZ.displayName":"New Zealand","product_count.default":"0 Products","violators.newProduct":"New","meganav.viewall":"View All","accessory_finder.help_label":"How do I find my model number?","CN.displayName":"China","share.text":"Share","dynamic.product_count.32":"Products","da_DK.displayName":"","PK.displayName":"Pakistan","pricing.rrp":"","UA.displayName":"Ukraine","consent_warning.title":"Please accept Youtube cookies to watch this video","pricing.starting.at":"Starting at","product_count.11":"0 Products","US.displayName":"United States","es_MX.displayName":"Mexico","buyButton.static_text.text":"","DK.displayName":"Denmark","reviews_listing.text":"Reviews and Ratings","es_PE.displayName":"Peru","hu_HU.displayName":"","aria.slider.thumbs":"","ES.displayName":"Spain","en_IN.displayName":"India","pricing.suffix":"for 0 model","NL.displayName":"Netherlands","accessory_finder.placeholder":"Enter your model number","de_LU.displayName":"","product_count.13":"0 Products","flowplayer.language.fr":"French","el_GR.displayName":"","product.productShots.alt_text.template":"Images of 0","flowplayer.language.el":"Greek","uk_UA.displayName":"Ukraine","product_count.24":"0 Products","favorites.tooltip.on_remove":"Removed","product.reviews_page_description":"Read the latest user reviews and ratings of the Sony 0 and explore the 1.","header.typeAheadMarketingResultsTitle":"Products","download_productInfoSheet_label":"Product Information Sheet","available_soon":"Available soon","sl_SI.displayName":"","EG.displayName":"Egypt","product.lifestyleShots.alt_text.template":"0 in action","IR.displayName":"Iran","AT.displayName":"Austria","product_count.1":"0 Product","flowplayer.language.hu":"Hungary","LM.displayName":"Latin America","product_count.31":"0 Products","MT.displayName":"Malta","nl_BE.displayName":"","flowplayer.language.da":"Danish","fr_FR.displayName":"France","SK.displayName":"Slovakia","accessory_finder.subhead":"Select your model to find compatible accessories:","flowplayer.language.tr":"Turkish","flowplayer.language.zh":"Traditional 
Chinese","product.primaryShot.alt_text.template":"Picture of 0","JP.region_displayName":"","buy.button.disabled.text":"Currently not available","header.typeAheadSupportResultsTitle":"Support","flowplayer.language.bs":"Bosnian","flowplayer.language.xz":"Simplified Chinese","en_AU.displayName":"Australia","back_to_top.label":"Back to Top","AZ.displayName":"Azerbaijan","dynamic.accessory_count.default":"0 Accessories","flowplayer.language.cs":"Czech","violators.awardWinning":"Award Winner","LI.displayName":"Liechtenstein","flowplayer.language.de":"German","en_GB.displayName":"United Kingdom","dynamic.product_count.14":"Products","flowplayer.language.uk":"Ukrainian","product.detailsDimensionShots.alt_text.template":"Dimensions of 0","dynamic.product_count.33":"Products","flowplayer.language.pt":"Portuguese","hr_HR.displayName":"","brexitTVEnergyLabel.text":"","XA.displayName":"North Africa","dynamic.product_count.4":"Products","aria.slider.thumb":"","dynamic.product_count.21":"Products","tax_disclaimer.text":"","product.dimensionShots.alt_text.template":"Dimensions of 0","PT.displayName":"Portugal","RO.displayName":"Romania","es_CO.displayName":"Colombia";Sony Support aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/browndw/docuscope-demo-spacy/app.py b/spaces/browndw/docuscope-demo-spacy/app.py
deleted file mode 100644
index b991caacbcebf825c4fe06914d87d779ef3fa2e7..0000000000000000000000000000000000000000
--- a/spaces/browndw/docuscope-demo-spacy/app.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# Core Pkgs
-import streamlit as st
-
-# NLP Pkgs
-import spacy_streamlit
-import spacy
-import re
-
-nlp = spacy.load('en_docusco_spacy')
-
-def pre_process(txt):
- txt = re.sub(r'\bits\b', 'it s', txt)
- txt = re.sub(r'\bIts\b', 'It s', txt)
- txt = " ".join(txt.split())
- return(txt)
-
-#import os
-#from PIL import Image
-
-
-def main():
-
- st.title("DocuScope and Part-of-Speech Tagging with spaCy")
- st.markdown("This demo uses a trained spaCy model ([en_docusco_spacy](https://huggingface.co/browndw/en_docusco_spacy)) to identify DocuScope categories in text.")
- st.markdown("It is also trained on the [CLAWS7](https://ucrel.lancs.ac.uk/claws7tags.html) part-of-speech tagset.")
- st.markdown("NOTE: this demo is public - please don't enter confidential text")
-
- #our_image = Image.open(os.path.join('SpaCy_logo.svg.png'))
- #st.image(our_image)
-
- menu = ["Tokens","DocuScope"]
- choice = st.sidebar.selectbox("Menu",menu)
-
- if choice == "Tokens":
- st.subheader("Tokenization")
- raw_text = st.text_area('Enter text:', value="Jaws is a shrewd cinematic equation which not only gives you one or two very nasty turns when you least expect them but, possibly more important, knows when to make you think another is coming without actually providing it.")
- docx = pre_process(raw_text)
- docx = nlp(docx)
- if st.button("Tokenize"):
- spacy_streamlit.visualize_tokens(docx,attrs=['text','tag_', 'ent_iob_', 'ent_type_'])
-
- elif choice == "DocuScope":
- st.subheader("Named Entity Recognition")
- raw_text = st.text_area('Enter text:', value="Jaws is a shrewd cinematic equation which not only gives you one or two very nasty turns when you least expect them but, possibly more important, knows when to make you think another is coming without actually providing it.")
- docx = pre_process(raw_text)
- docx = nlp(docx)
- spacy_streamlit.visualize_ner(docx,labels=nlp.get_pipe('ner').labels)
-
-
-if __name__ == '__main__':
- main()
-
\ No newline at end of file
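
As an aside, the `pre_process` helper in the app above can be checked in isolation. A minimal, standalone sketch (plain `re` only, no spaCy or Streamlit needed) of the contraction splitting and whitespace normalization it performs:

```python
import re

def pre_process(txt):
    # Split "its"/"Its" into two tokens, matching what the tagger above expects.
    txt = re.sub(r'\bits\b', 'it s', txt)
    txt = re.sub(r'\bIts\b', 'It s', txt)
    # Collapse any run of whitespace into single spaces.
    txt = " ".join(txt.split())
    return txt

print(pre_process("Its   engine lost its  power."))
# -> "It s engine lost it s power."
```
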
diff --git a/spaces/bruvvy/nitrosocke-Nitro-Diffusion/README.md b/spaces/bruvvy/nitrosocke-Nitro-Diffusion/README.md
deleted file mode 100644
index 80d180a2ad05d397bfcd02e40f60b62dee30cc81..0000000000000000000000000000000000000000
--- a/spaces/bruvvy/nitrosocke-Nitro-Diffusion/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Nitrosocke Nitro Diffusion
-emoji: 🌖
-colorFrom: purple
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bugbounted/Whisper-Auto-Subtitled-Video-Generator/languages.py b/spaces/bugbounted/Whisper-Auto-Subtitled-Video-Generator/languages.py
deleted file mode 100644
index 0551511f86eb917fd0668d971b21e177c387b7ef..0000000000000000000000000000000000000000
--- a/spaces/bugbounted/Whisper-Auto-Subtitled-Video-Generator/languages.py
+++ /dev/null
@@ -1,101 +0,0 @@
-LANGUAGES = {
- "en": "eng",
- "zh": "zho",
- "de": "deu",
- "es": "spa",
- "ru": "rus",
- "ko": "kor",
- "fr": "fra",
- "ja": "jpn",
- "pt": "por",
- "tr": "tur",
- "pl": "pol",
- "ca": "cat",
- "nl": "nld",
- "ar": "ara",
- "sv": "swe",
- "it": "ita",
- "id": "ind",
- "hi": "hin",
- "fi": "fin",
- "vi": "vie",
- "iw": "heb",
- "uk": "ukr",
- "el": "ell",
- "ms": "msa",
- "cs": "ces",
- "ro": "ron",
- "da": "dan",
- "hu": "hun",
- "ta": "tam",
- "no": "nor",
- "th": "tha",
- "ur": "urd",
- "hr": "hrv",
- "bg": "bul",
- "lt": "lit",
- "la": "lat",
- "mi": "mri",
- "ml": "mal",
- "cy": "cym",
- "sk": "slk",
- "te": "tel",
- "fa": "fas",
- "lv": "lav",
- "bn": "ben",
- "sr": "srp",
- "az": "aze",
- "sl": "slv",
- "kn": "kan",
- "et": "est",
- "mk": "mkd",
- "br": "bre",
- "eu": "eus",
- "is": "isl",
- "hy": "hye",
- "ne": "nep",
- "mn": "mon",
- "bs": "bos",
- "kk": "kaz",
- "sq": "sqi",
- "sw": "swa",
- "gl": "glg",
- "mr": "mar",
- "pa": "pan",
- "si": "sin",
- "km": "khm",
- "sn": "sna",
- "yo": "yor",
- "so": "som",
- "af": "afr",
- "oc": "oci",
- "ka": "kat",
- "be": "bel",
- "tg": "tgk",
- "sd": "snd",
- "gu": "guj",
- "am": "amh",
- "yi": "yid",
- "lo": "lao",
- "uz": "uzb",
- "fo": "fao",
- "ht": "hat",
- "ps": "pus",
- "tk": "tuk",
- "nn": "nno",
- "mt": "mlt",
- "sa": "san",
- "lb": "ltz",
- "my": "mya",
- "bo": "bod",
- "tl": "tgl",
- "mg": "mlg",
- "as": "asm",
- "tt": "tat",
- "haw": "haw",
- "ln": "lin",
- "ha": "hau",
- "ba": "bak",
- "jw": "jav",
- "su": "sun",
-}
\ No newline at end of file
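
The `LANGUAGES` table above maps two-letter ISO 639-1 codes (as emitted by Whisper's language detection) to three-letter ISO 639-3 codes. A small usage sketch, with a trimmed copy of the mapping and a hypothetical `to_iso639_3` helper that is not part of the file above:

```python
# Trimmed copy of the mapping for the example; the real dict is defined above.
LANGUAGES = {"en": "eng", "de": "deu", "ja": "jpn"}

def to_iso639_3(code, default="und"):
    # "und" is the ISO 639 code for an undetermined language.
    return LANGUAGES.get(code, default)

print(to_iso639_3("de"))   # deu
print(to_iso639_3("xx"))   # und (unknown code falls back to the default)
```
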
diff --git a/spaces/caslabs/midi-autocompletion/musicautobot/utils/stacked_dataloader.py b/spaces/caslabs/midi-autocompletion/musicautobot/utils/stacked_dataloader.py
deleted file mode 100644
index cf79ee773a388c0b922d215539891b4b251e44d1..0000000000000000000000000000000000000000
--- a/spaces/caslabs/midi-autocompletion/musicautobot/utils/stacked_dataloader.py
+++ /dev/null
@@ -1,70 +0,0 @@
-"Dataloader wrapper that can combine and handle multiple dataloaders for multitask training"
-from fastai.callback import Callback
-from typing import Callable
-
-__all__ = ['StackedDataBunch']
-
-# DataLoading
-class StackedDataBunch():
- def __init__(self, dbs, num_it=100):
- self.dbs = dbs
- self.train_dl = StackedDataloader([db.train_dl for db in self.dbs], num_it)
- self.valid_dl = StackedDataloader([db.valid_dl for db in self.dbs], num_it)
- self.train_ds = None
- self.path = dbs[0].path
- self.device = dbs[0].device
- self.vocab = dbs[0].vocab
- self.empty_val = False
-
- def add_tfm(self,tfm:Callable)->None:
- for dl in self.dbs: dl.add_tfm(tfm)
-
- def remove_tfm(self,tfm:Callable)->None:
- for dl in self.dbs: dl.remove_tfm(tfm)
-
-# Helper functions
-class StackedDataset(Callback):
- def __init__(self, dss):
- self.dss = dss
- def __getattribute__(self, attr):
- if attr == 'dss': return super().__getattribute__(attr)
- def redirected(*args, **kwargs):
- for ds in self.dss:
- if hasattr(ds, attr): getattr(ds, attr)(*args, **kwargs)
- return redirected
- def __len__(self)->int: return sum([len(ds) for ds in self.dss])
- def __repr__(self): return '\n'.join([self.__class__.__name__] + [repr(ds) for ds in self.dss])
-
-class StackedDataloader():
- def __init__(self, dls, num_it=100):
- self.dls = dls
- self.dataset = StackedDataset([dl.dataset for dl in dls if hasattr(dl, 'dataset')])
- self.num_it = num_it
- self.dl_idx = -1
-
- def __len__(self)->int: return sum([len(dl) for dl in self.dls])
- def __getattr__(self, attr):
- def redirected(*args, **kwargs):
- for dl in self.dls:
- if hasattr(dl, attr):
- getattr(dl, attr)(*args, **kwargs)
- return redirected
-
- def __iter__(self):
- "Process and returns items from `DataLoader`."
- iters = [iter(dl) for dl in self.dls]
- self.dl_idx = -1
- while len(iters):
- self.dl_idx = (self.dl_idx+1) % len(iters)
- for b in range(self.num_it):
- try:
- yield next(iters[self.dl_idx])
- except StopIteration as e:
- iters.remove(iters[self.dl_idx])
- break
-# raise StopIteration
-
- def new(self, **kwargs):
- "Create a new copy of `self` with `kwargs` replacing current values."
- new_dls = [dl.new(**kwargs) for dl in self.dls]
- return StackedDataloader(new_dls, self.num_it)
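
`StackedDataloader.__iter__` above interleaves its wrapped dataloaders round-robin, `num_it` batches at a time, dropping each one as it runs dry. A framework-free sketch of the same control flow over plain Python iterables (no fastai required):

```python
def interleave(iterables, num_it=2):
    """Yield items round-robin, num_it at a time, dropping exhausted sources.

    Mirrors the StackedDataloader.__iter__ pattern above, but over plain
    Python iterables instead of fastai DataLoaders.
    """
    iters = [iter(it) for it in iterables]
    idx = -1
    while iters:
        idx = (idx + 1) % len(iters)
        for _ in range(num_it):
            try:
                yield next(iters[idx])
            except StopIteration:
                iters.remove(iters[idx])
                break

print(list(interleave([[1, 2, 3, 4], "ab"], num_it=2)))
# [1, 2, 'a', 'b', 3, 4]
```
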
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/deebert/run_glue_deebert.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/deebert/run_glue_deebert.py
deleted file mode 100644
index 6f7cfe65d0ef67b8723d4aed5a30750c22680fb3..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/deebert/run_glue_deebert.py
+++ /dev/null
@@ -1,735 +0,0 @@
-from __future__ import absolute_import, division, print_function
-
-import argparse
-import glob
-import logging
-import os
-import random
-import time
-
-import numpy as np
-import torch
-from torch import nn
-from torch.utils.data import DataLoader, RandomSampler, SequentialSampler, TensorDataset
-from torch.utils.data.distributed import DistributedSampler
-from tqdm import tqdm, trange
-
-import transformers
-from src.modeling_highway_bert import DeeBertForSequenceClassification
-from src.modeling_highway_roberta import DeeRobertaForSequenceClassification
-from transformers import (
- WEIGHTS_NAME,
- AdamW,
- BertConfig,
- BertTokenizer,
- RobertaConfig,
- RobertaTokenizer,
- get_linear_schedule_with_warmup,
-)
-from transformers import glue_compute_metrics as compute_metrics
-from transformers import glue_convert_examples_to_features as convert_examples_to_features
-from transformers import glue_output_modes as output_modes
-from transformers import glue_processors as processors
-from transformers.trainer_utils import is_main_process
-
-
-try:
- from torch.utils.tensorboard import SummaryWriter
-except ImportError:
- from tensorboardX import SummaryWriter
-
-
-logger = logging.getLogger(__name__)
-
-
-MODEL_CLASSES = {
- "bert": (BertConfig, DeeBertForSequenceClassification, BertTokenizer),
- "roberta": (RobertaConfig, DeeRobertaForSequenceClassification, RobertaTokenizer),
-}
-
-
-def set_seed(args):
- random.seed(args.seed)
- np.random.seed(args.seed)
- torch.manual_seed(args.seed)
- if args.n_gpu > 0:
- torch.cuda.manual_seed_all(args.seed)
-
-
-def get_wanted_result(result):
- if "spearmanr" in result:
- print_result = result["spearmanr"]
- elif "f1" in result:
- print_result = result["f1"]
- elif "mcc" in result:
- print_result = result["mcc"]
- elif "acc" in result:
- print_result = result["acc"]
- else:
- raise ValueError("Primary metric unclear in the results")
- return print_result
-
-
-def train(args, train_dataset, model, tokenizer, train_highway=False):
- """Train the model"""
- if args.local_rank in [-1, 0]:
- tb_writer = SummaryWriter()
-
- args.train_batch_size = args.per_gpu_train_batch_size * max(1, args.n_gpu)
- train_sampler = RandomSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset)
- train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=args.train_batch_size)
-
- if args.max_steps > 0:
- t_total = args.max_steps
- args.num_train_epochs = args.max_steps // (len(train_dataloader) // args.gradient_accumulation_steps) + 1
- else:
- t_total = len(train_dataloader) // args.gradient_accumulation_steps * args.num_train_epochs
-
- # Prepare optimizer and schedule (linear warmup and decay)
- no_decay = ["bias", "LayerNorm.weight"]
- if train_highway:
- optimizer_grouped_parameters = [
- {
- "params": [
- p
- for n, p in model.named_parameters()
- if ("highway" in n) and (not any(nd in n for nd in no_decay))
- ],
- "weight_decay": args.weight_decay,
- },
- {
- "params": [
- p for n, p in model.named_parameters() if ("highway" in n) and (any(nd in n for nd in no_decay))
- ],
- "weight_decay": 0.0,
- },
- ]
- else:
- optimizer_grouped_parameters = [
- {
- "params": [
- p
- for n, p in model.named_parameters()
- if ("highway" not in n) and (not any(nd in n for nd in no_decay))
- ],
- "weight_decay": args.weight_decay,
- },
- {
- "params": [
- p
- for n, p in model.named_parameters()
- if ("highway" not in n) and (any(nd in n for nd in no_decay))
- ],
- "weight_decay": 0.0,
- },
- ]
- optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=args.adam_epsilon)
- scheduler = get_linear_schedule_with_warmup(
- optimizer, num_warmup_steps=args.warmup_steps, num_training_steps=t_total
- )
- if args.fp16:
- try:
- from apex import amp
- except ImportError:
- raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.")
- model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level)
-
- # multi-gpu training (should be after apex fp16 initialization)
- if args.n_gpu > 1:
- model = nn.DataParallel(model)
-
- # Distributed training (should be after apex fp16 initialization)
- if args.local_rank != -1:
- model = nn.parallel.DistributedDataParallel(
- model, device_ids=[args.local_rank], output_device=args.local_rank, find_unused_parameters=True
- )
-
- # Train!
- logger.info("***** Running training *****")
- logger.info(" Num examples = %d", len(train_dataset))
- logger.info(" Num Epochs = %d", args.num_train_epochs)
- logger.info(" Instantaneous batch size per GPU = %d", args.per_gpu_train_batch_size)
- logger.info(
- " Total train batch size (w. parallel, distributed & accumulation) = %d",
- args.train_batch_size
- * args.gradient_accumulation_steps
- * (torch.distributed.get_world_size() if args.local_rank != -1 else 1),
- )
- logger.info(" Gradient Accumulation steps = %d", args.gradient_accumulation_steps)
- logger.info(" Total optimization steps = %d", t_total)
-
- global_step = 0
- tr_loss, logging_loss = 0.0, 0.0
- model.zero_grad()
- train_iterator = trange(int(args.num_train_epochs), desc="Epoch", disable=args.local_rank not in [-1, 0])
-    set_seed(args)  # Added here for reproducibility (even between python 2 and 3)
- for _ in train_iterator:
- epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0])
- for step, batch in enumerate(epoch_iterator):
- model.train()
- batch = tuple(t.to(args.device) for t in batch)
- inputs = {"input_ids": batch[0], "attention_mask": batch[1], "labels": batch[3]}
- if args.model_type != "distilbert":
- inputs["token_type_ids"] = (
- batch[2] if args.model_type in ["bert", "xlnet"] else None
- ) # XLM, DistilBERT and RoBERTa don't use segment_ids
- inputs["train_highway"] = train_highway
- outputs = model(**inputs)
- loss = outputs[0] # model outputs are always tuple in transformers (see doc)
-
- if args.n_gpu > 1:
- loss = loss.mean() # mean() to average on multi-gpu parallel training
- if args.gradient_accumulation_steps > 1:
- loss = loss / args.gradient_accumulation_steps
-
- if args.fp16:
- with amp.scale_loss(loss, optimizer) as scaled_loss:
- scaled_loss.backward()
- else:
- loss.backward()
-
- tr_loss += loss.item()
- if (step + 1) % args.gradient_accumulation_steps == 0:
- if args.fp16:
- nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.max_grad_norm)
- else:
- nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
-
- optimizer.step()
- scheduler.step() # Update learning rate schedule
- model.zero_grad()
- global_step += 1
-
- if args.local_rank in [-1, 0] and args.logging_steps > 0 and global_step % args.logging_steps == 0:
- # Log metrics
- if (
- args.local_rank == -1 and args.evaluate_during_training
- ): # Only evaluate when single GPU otherwise metrics may not average well
- results = evaluate(args, model, tokenizer)
- for key, value in results.items():
- tb_writer.add_scalar("eval_{}".format(key), value, global_step)
- tb_writer.add_scalar("lr", scheduler.get_lr()[0], global_step)
- tb_writer.add_scalar("loss", (tr_loss - logging_loss) / args.logging_steps, global_step)
- logging_loss = tr_loss
-
- if args.local_rank in [-1, 0] and args.save_steps > 0 and global_step % args.save_steps == 0:
- # Save model checkpoint
- output_dir = os.path.join(args.output_dir, "checkpoint-{}".format(global_step))
- if not os.path.exists(output_dir):
- os.makedirs(output_dir)
- model_to_save = (
- model.module if hasattr(model, "module") else model
- ) # Take care of distributed/parallel training
- model_to_save.save_pretrained(output_dir)
- torch.save(args, os.path.join(output_dir, "training_args.bin"))
- logger.info("Saving model checkpoint to %s", output_dir)
-
- if args.max_steps > 0 and global_step > args.max_steps:
- epoch_iterator.close()
- break
- if args.max_steps > 0 and global_step > args.max_steps:
- train_iterator.close()
- break
-
- if args.local_rank in [-1, 0]:
- tb_writer.close()
-
- return global_step, tr_loss / global_step
-
-
-def evaluate(args, model, tokenizer, prefix="", output_layer=-1, eval_highway=False):
- # Loop to handle MNLI double evaluation (matched, mis-matched)
- eval_task_names = ("mnli", "mnli-mm") if args.task_name == "mnli" else (args.task_name,)
- eval_outputs_dirs = (args.output_dir, args.output_dir + "-MM") if args.task_name == "mnli" else (args.output_dir,)
-
- results = {}
- for eval_task, eval_output_dir in zip(eval_task_names, eval_outputs_dirs):
- eval_dataset = load_and_cache_examples(args, eval_task, tokenizer, evaluate=True)
-
- if not os.path.exists(eval_output_dir) and args.local_rank in [-1, 0]:
- os.makedirs(eval_output_dir)
-
- args.eval_batch_size = args.per_gpu_eval_batch_size * max(1, args.n_gpu)
- # Note that DistributedSampler samples randomly
- eval_sampler = SequentialSampler(eval_dataset) if args.local_rank == -1 else DistributedSampler(eval_dataset)
- eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args.eval_batch_size)
-
- # multi-gpu eval
- if args.n_gpu > 1:
- model = nn.DataParallel(model)
-
- # Eval!
- logger.info("***** Running evaluation {} *****".format(prefix))
- logger.info(" Num examples = %d", len(eval_dataset))
- logger.info(" Batch size = %d", args.eval_batch_size)
- eval_loss = 0.0
- nb_eval_steps = 0
- preds = None
- out_label_ids = None
- exit_layer_counter = {(i + 1): 0 for i in range(model.num_layers)}
- st = time.time()
- for batch in tqdm(eval_dataloader, desc="Evaluating"):
- model.eval()
- batch = tuple(t.to(args.device) for t in batch)
-
- with torch.no_grad():
- inputs = {"input_ids": batch[0], "attention_mask": batch[1], "labels": batch[3]}
- if args.model_type != "distilbert":
- inputs["token_type_ids"] = (
- batch[2] if args.model_type in ["bert", "xlnet"] else None
- ) # XLM, DistilBERT and RoBERTa don't use segment_ids
- if output_layer >= 0:
- inputs["output_layer"] = output_layer
- outputs = model(**inputs)
- if eval_highway:
- exit_layer_counter[outputs[-1]] += 1
- tmp_eval_loss, logits = outputs[:2]
-
- eval_loss += tmp_eval_loss.mean().item()
- nb_eval_steps += 1
- if preds is None:
- preds = logits.detach().cpu().numpy()
- out_label_ids = inputs["labels"].detach().cpu().numpy()
- else:
- preds = np.append(preds, logits.detach().cpu().numpy(), axis=0)
- out_label_ids = np.append(out_label_ids, inputs["labels"].detach().cpu().numpy(), axis=0)
- eval_time = time.time() - st
- logger.info("Eval time: {}".format(eval_time))
-
- eval_loss = eval_loss / nb_eval_steps
- if args.output_mode == "classification":
- preds = np.argmax(preds, axis=1)
- elif args.output_mode == "regression":
- preds = np.squeeze(preds)
- result = compute_metrics(eval_task, preds, out_label_ids)
- results.update(result)
-
- if eval_highway:
- logger.info("Exit layer counter: {}".format(exit_layer_counter))
- actual_cost = sum([l * c for l, c in exit_layer_counter.items()])
- full_cost = len(eval_dataloader) * model.num_layers
- logger.info("Expected saving: {}".format(actual_cost / full_cost))
- if args.early_exit_entropy >= 0:
- save_fname = (
- args.plot_data_dir
- + "/"
- + args.model_name_or_path[2:]
- + "/entropy_{}.npy".format(args.early_exit_entropy)
- )
- if not os.path.exists(os.path.dirname(save_fname)):
- os.makedirs(os.path.dirname(save_fname))
- print_result = get_wanted_result(result)
- np.save(save_fname, np.array([exit_layer_counter, eval_time, actual_cost / full_cost, print_result]))
- logger.info("Entropy={}\tResult={:.2f}".format(args.early_exit_entropy, 100 * print_result))
-
- output_eval_file = os.path.join(eval_output_dir, prefix, "eval_results.txt")
- with open(output_eval_file, "w") as writer:
- logger.info("***** Eval results {} *****".format(prefix))
- for key in sorted(result.keys()):
- logger.info(" %s = %s", key, str(result[key]))
- writer.write("%s = %s\n" % (key, str(result[key])))
-
- return results
-
-
-def load_and_cache_examples(args, task, tokenizer, evaluate=False):
- if args.local_rank not in [-1, 0] and not evaluate:
-        torch.distributed.barrier()  # Make sure only the first process in distributed training processes the dataset, and the others will use the cache
-
- processor = processors[task]()
- output_mode = output_modes[task]
- # Load data features from cache or dataset file
- cached_features_file = os.path.join(
- args.data_dir,
- "cached_{}_{}_{}_{}".format(
- "dev" if evaluate else "train",
- list(filter(None, args.model_name_or_path.split("/"))).pop(),
- str(args.max_seq_length),
- str(task),
- ),
- )
- if os.path.exists(cached_features_file) and not args.overwrite_cache:
- logger.info("Loading features from cached file %s", cached_features_file)
- features = torch.load(cached_features_file)
- else:
- logger.info("Creating features from dataset file at %s", args.data_dir)
- label_list = processor.get_labels()
- if task in ["mnli", "mnli-mm"] and args.model_type in ["roberta"]:
- # HACK(label indices are swapped in RoBERTa pretrained model)
- label_list[1], label_list[2] = label_list[2], label_list[1]
- examples = (
- processor.get_dev_examples(args.data_dir) if evaluate else processor.get_train_examples(args.data_dir)
- )
- features = convert_examples_to_features(
- examples,
- tokenizer,
- label_list=label_list,
- max_length=args.max_seq_length,
- output_mode=output_mode,
- )
- if args.local_rank in [-1, 0]:
- logger.info("Saving features into cached file %s", cached_features_file)
- torch.save(features, cached_features_file)
-
- if args.local_rank == 0 and not evaluate:
-        torch.distributed.barrier()  # Make sure only the first process in distributed training processes the dataset, and the others will use the cache
-
- # Convert to Tensors and build dataset
- all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
- all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
-
- if features[0].token_type_ids is None:
- # For RoBERTa (a potential bug!)
- all_token_type_ids = torch.tensor([[0] * args.max_seq_length for f in features], dtype=torch.long)
- else:
- all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
- if output_mode == "classification":
- all_labels = torch.tensor([f.label for f in features], dtype=torch.long)
- elif output_mode == "regression":
- all_labels = torch.tensor([f.label for f in features], dtype=torch.float)
-
- dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels)
- return dataset
-
-
-def main():
- parser = argparse.ArgumentParser()
-
- # Required parameters
- parser.add_argument(
- "--data_dir",
- default=None,
- type=str,
- required=True,
- help="The input data dir. Should contain the .tsv files (or other data files) for the task.",
- )
- parser.add_argument(
- "--model_type",
- default=None,
- type=str,
- required=True,
- help="Model type selected in the list: " + ", ".join(MODEL_CLASSES.keys()),
- )
- parser.add_argument(
- "--model_name_or_path",
- default=None,
- type=str,
- required=True,
- help="Path to pre-trained model or shortcut name.",
- )
- parser.add_argument(
- "--task_name",
- default=None,
- type=str,
- required=True,
- help="The name of the task to train selected in the list: " + ", ".join(processors.keys()),
- )
- parser.add_argument(
- "--output_dir",
- default=None,
- type=str,
- required=True,
- help="The output directory where the model predictions and checkpoints will be written.",
- )
- parser.add_argument(
- "--plot_data_dir",
- default="./plotting/",
- type=str,
- required=False,
- help="The directory to store data for plotting figures.",
- )
-
- # Other parameters
- parser.add_argument(
- "--config_name", default="", type=str, help="Pretrained config name or path if not the same as model_name"
- )
- parser.add_argument(
- "--tokenizer_name",
- default="",
- type=str,
- help="Pretrained tokenizer name or path if not the same as model_name",
- )
- parser.add_argument(
- "--cache_dir",
- default="",
- type=str,
- help="Where do you want to store the pre-trained models downloaded from huggingface.co",
- )
- parser.add_argument(
- "--max_seq_length",
- default=128,
- type=int,
- help=(
- "The maximum total input sequence length after tokenization. Sequences longer "
- "than this will be truncated, sequences shorter will be padded."
- ),
- )
- parser.add_argument("--do_train", action="store_true", help="Whether to run training.")
- parser.add_argument("--do_eval", action="store_true", help="Whether to run eval on the dev set.")
- parser.add_argument(
- "--evaluate_during_training", action="store_true", help="Rul evaluation during training at each logging step."
- )
- parser.add_argument(
- "--do_lower_case", action="store_true", help="Set this flag if you are using an uncased model."
- )
- parser.add_argument("--eval_each_highway", action="store_true", help="Set this flag to evaluate each highway.")
- parser.add_argument(
- "--eval_after_first_stage",
- action="store_true",
- help="Set this flag to evaluate after training only bert (not highway).",
- )
- parser.add_argument("--eval_highway", action="store_true", help="Set this flag if it's evaluating highway models")
-
- parser.add_argument("--per_gpu_train_batch_size", default=8, type=int, help="Batch size per GPU/CPU for training.")
- parser.add_argument(
- "--per_gpu_eval_batch_size", default=8, type=int, help="Batch size per GPU/CPU for evaluation."
- )
- parser.add_argument(
- "--gradient_accumulation_steps",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument("--learning_rate", default=5e-5, type=float, help="The initial learning rate for Adam.")
- parser.add_argument("--weight_decay", default=0.0, type=float, help="Weight deay if we apply some.")
- parser.add_argument("--adam_epsilon", default=1e-8, type=float, help="Epsilon for Adam optimizer.")
- parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
- parser.add_argument(
- "--num_train_epochs", default=3.0, type=float, help="Total number of training epochs to perform."
- )
- parser.add_argument(
- "--max_steps",
- default=-1,
- type=int,
- help="If > 0: set total number of training steps to perform. Override num_train_epochs.",
- )
- parser.add_argument("--warmup_steps", default=0, type=int, help="Linear warmup over warmup_steps.")
- parser.add_argument("--early_exit_entropy", default=-1, type=float, help="Entropy threshold for early exit.")
-
- parser.add_argument("--logging_steps", type=int, default=50, help="Log every X updates steps.")
- parser.add_argument("--save_steps", type=int, default=50, help="Save checkpoint every X updates steps.")
- parser.add_argument(
- "--eval_all_checkpoints",
- action="store_true",
- help="Evaluate all checkpoints starting with the same prefix as model_name ending and ending with step number",
- )
- parser.add_argument("--no_cuda", action="store_true", help="Avoid using CUDA when available")
- parser.add_argument(
- "--overwrite_output_dir", action="store_true", help="Overwrite the content of the output directory"
- )
- parser.add_argument(
- "--overwrite_cache", action="store_true", help="Overwrite the cached training and evaluation sets"
- )
- parser.add_argument("--seed", type=int, default=42, help="random seed for initialization")
-
- parser.add_argument(
- "--fp16",
- action="store_true",
- help="Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit",
- )
- parser.add_argument(
- "--fp16_opt_level",
- type=str,
- default="O1",
- help=(
- "For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']."
- "See details at https://nvidia.github.io/apex/amp.html"
- ),
- )
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
- parser.add_argument("--server_ip", type=str, default="", help="For distant debugging.")
- parser.add_argument("--server_port", type=str, default="", help="For distant debugging.")
- args = parser.parse_args()
-
- if (
- os.path.exists(args.output_dir)
- and os.listdir(args.output_dir)
- and args.do_train
- and not args.overwrite_output_dir
- ):
- raise ValueError(
- "Output directory ({}) already exists and is not empty. Use --overwrite_output_dir to overcome.".format(
- args.output_dir
- )
- )
-
- # Setup distant debugging if needed
- if args.server_ip and args.server_port:
- # Distant debugging - see https://code.visualstudio.com/docs/python/debugging#_attach-to-a-local-script
- import ptvsd
-
- print("Waiting for debugger attach")
- ptvsd.enable_attach(address=(args.server_ip, args.server_port), redirect_output=True)
- ptvsd.wait_for_attach()
-
- # Setup CUDA, GPU & distributed training
- if args.local_rank == -1 or args.no_cuda:
- device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
- args.n_gpu = torch.cuda.device_count()
-    else:  # Initializes the distributed backend which will take care of synchronizing nodes/GPUs
- torch.cuda.set_device(args.local_rank)
- device = torch.device("cuda", args.local_rank)
- torch.distributed.init_process_group(backend="nccl")
- args.n_gpu = 1
- args.device = device
-
- # Setup logging
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- level=logging.INFO if args.local_rank in [-1, 0] else logging.WARN,
- )
- logger.warning(
- "Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
- args.local_rank,
- device,
- args.n_gpu,
- bool(args.local_rank != -1),
- args.fp16,
- )
- # Set the verbosity to info of the Transformers logger (on main process only):
- if is_main_process(args.local_rank):
- transformers.utils.logging.set_verbosity_info()
- transformers.utils.logging.enable_default_handler()
- transformers.utils.logging.enable_explicit_format()
- # Set seed
- set_seed(args)
-
- # Prepare GLUE task
- args.task_name = args.task_name.lower()
- if args.task_name not in processors:
- raise ValueError("Task not found: %s" % (args.task_name))
- processor = processors[args.task_name]()
- args.output_mode = output_modes[args.task_name]
- label_list = processor.get_labels()
- num_labels = len(label_list)
-
- # Load pretrained model and tokenizer
- if args.local_rank not in [-1, 0]:
- torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab
-
- args.model_type = args.model_type.lower()
- config_class, model_class, tokenizer_class = MODEL_CLASSES[args.model_type]
- config = config_class.from_pretrained(
- args.config_name if args.config_name else args.model_name_or_path,
- num_labels=num_labels,
- finetuning_task=args.task_name,
- cache_dir=args.cache_dir if args.cache_dir else None,
- )
- tokenizer = tokenizer_class.from_pretrained(
- args.tokenizer_name if args.tokenizer_name else args.model_name_or_path,
- do_lower_case=args.do_lower_case,
- cache_dir=args.cache_dir if args.cache_dir else None,
- )
- model = model_class.from_pretrained(
- args.model_name_or_path,
- from_tf=bool(".ckpt" in args.model_name_or_path),
- config=config,
- cache_dir=args.cache_dir if args.cache_dir else None,
- )
-
- if args.model_type == "bert":
- model.bert.encoder.set_early_exit_entropy(args.early_exit_entropy)
- model.bert.init_highway_pooler()
- elif args.model_type == "roberta":
- model.roberta.encoder.set_early_exit_entropy(args.early_exit_entropy)
- model.roberta.init_highway_pooler()
- else:
- raise NotImplementedError()
-
- if args.local_rank == 0:
- torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab
-
- model.to(args.device)
-
- logger.info("Training/evaluation parameters %s", args)
-
- # Training
- if args.do_train:
- train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)
- global_step, tr_loss = train(args, train_dataset, model, tokenizer)
- logger.info(" global_step = %s, average loss = %s", global_step, tr_loss)
-
- if args.eval_after_first_stage:
- result = evaluate(args, model, tokenizer, prefix="")
- print_result = get_wanted_result(result)
-
- train(args, train_dataset, model, tokenizer, train_highway=True)
-
- # Saving best-practices: if you use defaults names for the model, you can reload it using from_pretrained()
- if args.do_train and (args.local_rank == -1 or torch.distributed.get_rank() == 0):
- # Create output directory if needed
- if not os.path.exists(args.output_dir) and args.local_rank in [-1, 0]:
- os.makedirs(args.output_dir)
-
- logger.info("Saving model checkpoint to %s", args.output_dir)
- # Save a trained model, configuration and tokenizer using `save_pretrained()`.
- # They can then be reloaded using `from_pretrained()`
- model_to_save = (
- model.module if hasattr(model, "module") else model
- ) # Take care of distributed/parallel training
- model_to_save.save_pretrained(args.output_dir)
- tokenizer.save_pretrained(args.output_dir)
-
- # Good practice: save your training arguments together with the trained model
- torch.save(args, os.path.join(args.output_dir, "training_args.bin"))
-
- # Load a trained model and vocabulary that you have fine-tuned
- model = model_class.from_pretrained(args.output_dir)
- tokenizer = tokenizer_class.from_pretrained(args.output_dir)
- model.to(args.device)
-
- # Evaluation
- results = {}
- if args.do_eval and args.local_rank in [-1, 0]:
- tokenizer = tokenizer_class.from_pretrained(args.output_dir, do_lower_case=args.do_lower_case)
- checkpoints = [args.output_dir]
- if args.eval_all_checkpoints:
- checkpoints = [
- os.path.dirname(c) for c in sorted(glob.glob(args.output_dir + "/**/" + WEIGHTS_NAME, recursive=True))
- ]
-
- logger.info("Evaluate the following checkpoints: %s", checkpoints)
- for checkpoint in checkpoints:
- global_step = checkpoint.split("-")[-1] if len(checkpoints) > 1 else ""
- prefix = checkpoint.split("/")[-1] if checkpoint.find("checkpoint") != -1 else ""
-
- model = model_class.from_pretrained(checkpoint)
- if args.model_type == "bert":
- model.bert.encoder.set_early_exit_entropy(args.early_exit_entropy)
- elif args.model_type == "roberta":
- model.roberta.encoder.set_early_exit_entropy(args.early_exit_entropy)
- else:
- raise NotImplementedError()
-
- model.to(args.device)
- result = evaluate(args, model, tokenizer, prefix=prefix, eval_highway=args.eval_highway)
- print_result = get_wanted_result(result)
- logger.info("Result: {}".format(print_result))
- if args.eval_each_highway:
- last_layer_results = print_result
- each_layer_results = []
- for i in range(model.num_layers):
- logger.info("\n")
- _result = evaluate(
- args, model, tokenizer, prefix=prefix, output_layer=i, eval_highway=args.eval_highway
- )
- if i + 1 < model.num_layers:
- each_layer_results.append(get_wanted_result(_result))
- each_layer_results.append(last_layer_results)
- save_fname = args.plot_data_dir + "/" + args.model_name_or_path[2:] + "/each_layer.npy"
- if not os.path.exists(os.path.dirname(save_fname)):
- os.makedirs(os.path.dirname(save_fname))
- np.save(save_fname, np.array(each_layer_results))
- info_str = "Score of each layer:"
- for i in range(model.num_layers):
- info_str += " {:.2f}".format(100 * each_layer_results[i])
- logger.info(info_str)
- result = {k + "_{}".format(global_step): v for k, v in result.items()}
- results.update(result)
-
- return results
-
-
-if __name__ == "__main__":
- main()
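
The early-exit accounting in `evaluate()` above boils down to a weighted sum over the exit-layer histogram. A standalone sketch with made-up numbers (a hypothetical 12-layer model and invented counts) of the `actual_cost / full_cost` ratio the script logs as "Expected saving", i.e. the fraction of full-depth compute actually spent:

```python
num_layers = 12
num_batches = 100

# Hypothetical histogram: how many batches exited at each layer.
exit_layer_counter = {i + 1: 0 for i in range(num_layers)}
exit_layer_counter[4] = 60   # most batches exit early at layer 4
exit_layer_counter[8] = 30
exit_layer_counter[12] = 10  # only a few run the full model

actual_cost = sum(layer * count for layer, count in exit_layer_counter.items())
full_cost = num_batches * num_layers
print(f"expected saving: {actual_cost / full_cost:.2f}")  # 0.50
```
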
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/et_xmlfile/xmlfile.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/et_xmlfile/xmlfile.py
deleted file mode 100644
index 09ec55554ff89d18891562d435bb29013d03beba..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/et_xmlfile/xmlfile.py
+++ /dev/null
@@ -1,104 +0,0 @@
-from __future__ import absolute_import
-# Copyright (c) 2010-2015 openpyxl
-
-"""Implements the lxml.etree.xmlfile API using the standard library xml.etree"""
-
-
-from contextlib import contextmanager
-
-from xml.etree.ElementTree import Element, tostring
-
-
-class LxmlSyntaxError(Exception):
- pass
-
-
-class _FakeIncrementalFileWriter(object):
- """Replacement for _IncrementalFileWriter of lxml.
- Uses ElementTree to build xml in memory."""
- def __init__(self, output_file):
- self._element_stack = []
- self._top_element = None
- self._file = output_file
- self._have_root = False
-
- @contextmanager
- def element(self, tag, attrib=None, nsmap=None, **_extra):
- """Create a new xml element using a context manager.
- The elements are written when the top level context is left.
-
- This is for code compatibility only as it is quite slow.
- """
-
- # __enter__ part
- self._have_root = True
- if attrib is None:
- attrib = {}
- self._top_element = Element(tag, attrib=attrib, **_extra)
- self._top_element.text = ''
- self._top_element.tail = ''
- self._element_stack.append(self._top_element)
- yield
-
- # __exit__ part
- el = self._element_stack.pop()
- if self._element_stack:
- parent = self._element_stack[-1]
- parent.append(self._top_element)
- self._top_element = parent
- else:
- self._write_element(el)
- self._top_element = None
-
- def write(self, arg):
- """Write a string or subelement."""
-
- if isinstance(arg, str):
- # it is not allowed to write a string outside of an element
- if self._top_element is None:
- raise LxmlSyntaxError()
-
- if len(self._top_element) == 0:
- # element has no children: add string to text
- self._top_element.text += arg
- else:
- # element has children: add string to tail of last child
- self._top_element[-1].tail += arg
-
- else:
- if self._top_element is not None:
- self._top_element.append(arg)
- elif not self._have_root:
- self._write_element(arg)
- else:
- raise LxmlSyntaxError()
-
- def _write_element(self, element):
- xml = tostring(element)
- self._file.write(xml)
-
- def __enter__(self):
- pass
-
- def __exit__(self, type, value, traceback):
- # without root the xml document is incomplete
- if not self._have_root:
- raise LxmlSyntaxError()
-
-
-class xmlfile(object):
- """Context manager that can replace lxml.etree.xmlfile."""
- def __init__(self, output_file, buffered=False, encoding=None, close=False):
- if isinstance(output_file, str):
- self._file = open(output_file, 'wb')
- self._close = True
- else:
- self._file = output_file
- self._close = close
-
- def __enter__(self):
- return _FakeIncrementalFileWriter(self._file)
-
- def __exit__(self, type, value, traceback):
- if self._close == True:
- self._file.close()
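
For context, the shim above mimics `lxml.etree.xmlfile`'s incremental-writing API on top of `xml.etree`. A short usage sketch (assuming the package is importable as `et_xmlfile`, as it is when installed from PyPI), writing a tiny document into a `BytesIO` buffer:

```python
from io import BytesIO
from et_xmlfile import xmlfile  # the context manager defined above

out = BytesIO()
with xmlfile(out) as xf:                       # xf is a _FakeIncrementalFileWriter
    with xf.element("root", attrib={"v": "1"}):
        xf.write("hello")                      # becomes the text of <root>
        with xf.element("child"):
            pass                               # empty nested element

print(out.getvalue())
# Expected output along the lines of: b'<root v="1">hello<child /></root>'
```

Note that nothing is written to the underlying file until the top-level element's context exits, which is why the class describes itself as a compatibility shim rather than a true incremental writer.
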
diff --git a/spaces/cihyFjudo/fairness-paper-search/Akhila Poonoor Sex Video.rargolkes Nirvana Aprire Folde [REPACK].md b/spaces/cihyFjudo/fairness-paper-search/Akhila Poonoor Sex Video.rargolkes Nirvana Aprire Folde [REPACK].md
deleted file mode 100644
index 458b948273f0cab2a804ebfd7ae4e35f653f6bf0..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Akhila Poonoor Sex Video.rargolkes Nirvana Aprire Folde [REPACK].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Akhila Poonoor Sex Video.rargolkes nirvana aprire folde
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Easyusetool 0.3.1 Download Why You Should Try It Today.md b/spaces/cihyFjudo/fairness-paper-search/Easyusetool 0.3.1 Download Why You Should Try It Today.md
deleted file mode 100644
index 641e959e3c4fddc941c6202b2400b3db576f590a..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Easyusetool 0.3.1 Download Why You Should Try It Today.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/King No 1 Movie In Hindi Hd Examen 10000 Carmage Tips and Tricks for Success.md b/spaces/cihyFjudo/fairness-paper-search/King No 1 Movie In Hindi Hd Examen 10000 Carmage Tips and Tricks for Success.md
deleted file mode 100644
index e2b19a2e8a670adb7100cafef0947eef85a2fef2..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/King No 1 Movie In Hindi Hd Examen 10000 Carmage Tips and Tricks for Success.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/New MacBook Pro Proves Apple Has Lost Sight Of Its Core Values The Disappointing Design Choices That Compromise The User Experience.md b/spaces/cihyFjudo/fairness-paper-search/New MacBook Pro Proves Apple Has Lost Sight Of Its Core Values The Disappointing Design Choices That Compromise The User Experience.md
deleted file mode 100644
index f421c88778e880cbcf8d7291effbba276f3991b7..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/New MacBook Pro Proves Apple Has Lost Sight Of Its Core Values The Disappointing Design Choices That Compromise The User Experience.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
New MacBook Pro Proves Apple Has Lost Sight Of Its Core Values
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Pano2vr Download Mac A Powerful Tool for Google Street View and WebVR.md b/spaces/cihyFjudo/fairness-paper-search/Pano2vr Download Mac A Powerful Tool for Google Street View and WebVR.md
deleted file mode 100644
index ed9b5ae0dc79a159ce4dd9f6592a3eb83c62f1f7..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Pano2vr Download Mac A Powerful Tool for Google Street View and WebVR.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Click on the button below to start downloading Pano2VR 6.0.1 for mac OS X. We are here to provide to clean and fast download for Pano2VR 6.0.3. This link is resume able within 24 hours. Keep visiting themacgo the world of dmgs.
Pano2VR Pro for Mac is an imposing as well as very easy to use application which will allows you to create interactive 360 panoramas from the ordinary images. It has been equipped with a very user friendly interface which will allow you to work with this tool efficiently. You can also download Blackmagic Fusion Studio for MacOSX.
-
This application also supports flat, cylindrical, spherical, T and strip styles panoramas. It supports various different image file formats which include PSD, PNG, JPG, TIFF and QuickTime VR. For creating the interactive 360 panoramas you need to load the source image and adjust the amount of the use data. With this application you can create hotspot and insert the media items etc. it also allows you to create custom skin for the 360 panoramas. You can also add the button and graphics and configure that way viewer uses them. All in all Pano2VR Pro for Mac is an imposing as well as very easy to use application which will allows you to create interactive 360 panoramas from the ordinary images. You can also download Athentech Perfectly Clear Complete for MacOSX.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Pluralsight Prototyping Virtual Reality Experiences With Unity Tips and Tricks for VR Developers.md b/spaces/cihyFjudo/fairness-paper-search/Pluralsight Prototyping Virtual Reality Experiences With Unity Tips and Tricks for VR Developers.md
deleted file mode 100644
index 21bca454999960762927708a5af934fbc1d0e30e..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Pluralsight Prototyping Virtual Reality Experiences With Unity Tips and Tricks for VR Developers.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Most successful professionals in virtually any industry will tell you that a support network is essential. Aside from providing outside perspectives, a support network can enable you to tap into the experiences of others, and may provide you with training and skills development opportunities. Building strong relationships with others in the field may even lead you to your next job.
-
Pluralsight \\u2013 Prototyping Virtual Reality Experiences With Unity
aaccfb2cb3
-
-
\ No newline at end of file
diff --git "a/spaces/cihyFjudo/fairness-paper-search/WINDOWS10- Display live Heart Rate information in Windows Universal app using Bluetooth LE\302\240 BLE A step-by-step guide.md" "b/spaces/cihyFjudo/fairness-paper-search/WINDOWS10- Display live Heart Rate information in Windows Universal app using Bluetooth LE\302\240 BLE A step-by-step guide.md"
deleted file mode 100644
index 75eb2381c079b1e2beab5acc2bdcd6dc7d231870..0000000000000000000000000000000000000000
--- "a/spaces/cihyFjudo/fairness-paper-search/WINDOWS10- Display live Heart Rate information in Windows Universal app using Bluetooth LE\302\240 BLE A step-by-step guide.md"
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Increase membership, improve client retention and separate yourself from the crowd by offering the most advanced group indoor cycling experience. Wirelessly display individual and group performance on to a single screen enabling indoor cycling studios to offer new class programming. Teach with power, heart rate, 3D animation and streaming video all in one easy-to-use system. Network heart rate belts and cardio equipment wirelessly to allow real time display of performance metrics. During class, monitor your performance data including heart rate, calories, power, and RPM in real time using group display or an app. After your workout, check your progress on your computer or phone. Turn your fitness studio into a Smart Studio.
-
WINDOWS10- Display live Heart Rate information in Windows Universal app using Bluetooth LE BLE
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cleanmaster/akagi-sovits3/commons.py b/spaces/cleanmaster/akagi-sovits3/commons.py
deleted file mode 100644
index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000
--- a/spaces/cleanmaster/akagi-sovits3/commons.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-def slice_pitch_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size)
- return ret, ret_pitch, ids_str
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def rand_spec_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
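
A quick sketch exercising two of the helpers above, `sequence_mask` and `rand_slice_segments`, on toy tensors (assumes the file is saved as `commons.py` on the import path and that PyTorch is installed):

```python
import torch
import commons  # the module above; assumed to be importable as commons.py

# Boolean padding mask for three sequences of lengths 2, 3 and 1.
lengths = torch.tensor([2, 3, 1])
mask = commons.sequence_mask(lengths)        # width defaults to lengths.max() == 3
print(mask)
# tensor([[ True,  True, False],
#         [ True,  True,  True],
#         [ True, False, False]])

# Randomly slice a fixed-size window along the time axis.
x = torch.arange(24.0).view(1, 2, 12)        # [batch, channels, time]
segment, ids = commons.rand_slice_segments(x, segment_size=4)
print(segment.shape)                          # torch.Size([1, 2, 4])
```
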
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/resolver.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/resolver.py
deleted file mode 100644
index 531ce93fccc2d3be442556de644cdc78d31d9c6e..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/resolver.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import asyncio
-import socket
-from typing import Any, Dict, List, Optional, Type, Union
-
-from .abc import AbstractResolver
-from .helpers import get_running_loop
-
-__all__ = ("ThreadedResolver", "AsyncResolver", "DefaultResolver")
-
-try:
- import aiodns
-
- # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')
-except ImportError: # pragma: no cover
- aiodns = None
-
-aiodns_default = False
-
-
-class ThreadedResolver(AbstractResolver):
- """Threaded resolver.
-
- Uses an Executor for synchronous getaddrinfo() calls.
- concurrent.futures.ThreadPoolExecutor is used by default.
- """
-
- def __init__(self, loop: Optional[asyncio.AbstractEventLoop] = None) -> None:
- self._loop = get_running_loop(loop)
-
- async def resolve(
- self, hostname: str, port: int = 0, family: int = socket.AF_INET
- ) -> List[Dict[str, Any]]:
- infos = await self._loop.getaddrinfo(
- hostname,
- port,
- type=socket.SOCK_STREAM,
- family=family,
- flags=socket.AI_ADDRCONFIG,
- )
-
- hosts = []
- for family, _, proto, _, address in infos:
- if family == socket.AF_INET6:
- if len(address) < 3:
- # IPv6 is not supported by Python build,
- # or IPv6 is not enabled in the host
- continue
- if address[3]: # type: ignore[misc]
- # This is essential for link-local IPv6 addresses.
- # LL IPv6 is a VERY rare case. Strictly speaking, we should use
- # getnameinfo() unconditionally, but performance makes sense.
- host, _port = socket.getnameinfo(
- address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV
- )
- port = int(_port)
- else:
- host, port = address[:2]
- else: # IPv4
- assert family == socket.AF_INET
- host, port = address # type: ignore[misc]
- hosts.append(
- {
- "hostname": hostname,
- "host": host,
- "port": port,
- "family": family,
- "proto": proto,
- "flags": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
- }
- )
-
- return hosts
-
- async def close(self) -> None:
- pass
-
-
-class AsyncResolver(AbstractResolver):
- """Use the `aiodns` package to make asynchronous DNS lookups"""
-
- def __init__(
- self,
- loop: Optional[asyncio.AbstractEventLoop] = None,
- *args: Any,
- **kwargs: Any
- ) -> None:
- if aiodns is None:
- raise RuntimeError("Resolver requires aiodns library")
-
- self._loop = get_running_loop(loop)
- self._resolver = aiodns.DNSResolver(*args, loop=loop, **kwargs)
-
- if not hasattr(self._resolver, "gethostbyname"):
- # aiodns 1.1 is not available, fallback to DNSResolver.query
- self.resolve = self._resolve_with_query # type: ignore
-
- async def resolve(
- self, host: str, port: int = 0, family: int = socket.AF_INET
- ) -> List[Dict[str, Any]]:
- try:
- resp = await self._resolver.gethostbyname(host, family)
- except aiodns.error.DNSError as exc:
- msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
- raise OSError(msg) from exc
- hosts = []
- for address in resp.addresses:
- hosts.append(
- {
- "hostname": host,
- "host": address,
- "port": port,
- "family": family,
- "proto": 0,
- "flags": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
- }
- )
-
- if not hosts:
- raise OSError("DNS lookup failed")
-
- return hosts
-
- async def _resolve_with_query(
- self, host: str, port: int = 0, family: int = socket.AF_INET
- ) -> List[Dict[str, Any]]:
- if family == socket.AF_INET6:
- qtype = "AAAA"
- else:
- qtype = "A"
-
- try:
- resp = await self._resolver.query(host, qtype)
- except aiodns.error.DNSError as exc:
- msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
- raise OSError(msg) from exc
-
- hosts = []
- for rr in resp:
- hosts.append(
- {
- "hostname": host,
- "host": rr.host,
- "port": port,
- "family": family,
- "proto": 0,
- "flags": socket.AI_NUMERICHOST,
- }
- )
-
- if not hosts:
- raise OSError("DNS lookup failed")
-
- return hosts
-
- async def close(self) -> None:
- self._resolver.cancel()
-
-
-_DefaultType = Type[Union[AsyncResolver, ThreadedResolver]]
-DefaultResolver: _DefaultType = AsyncResolver if aiodns_default else ThreadedResolver
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/vengine_gen.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/vengine_gen.py
deleted file mode 100644
index 26421526f62a07e04419cd57f1f19a64ecd36452..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/vengine_gen.py
+++ /dev/null
@@ -1,675 +0,0 @@
-#
-# DEPRECATED: implementation for ffi.verify()
-#
-import sys, os
-import types
-
-from . import model
-from .error import VerificationError
-
-
-class VGenericEngine(object):
- _class_key = 'g'
- _gen_python_module = False
-
- def __init__(self, verifier):
- self.verifier = verifier
- self.ffi = verifier.ffi
- self.export_symbols = []
- self._struct_pending_verification = {}
-
- def patch_extension_kwds(self, kwds):
- # add 'export_symbols' to the dictionary. Note that we add the
- # list before filling it. When we fill it, it will thus also show
- # up in kwds['export_symbols'].
- kwds.setdefault('export_symbols', self.export_symbols)
-
- def find_module(self, module_name, path, so_suffixes):
- for so_suffix in so_suffixes:
- basename = module_name + so_suffix
- if path is None:
- path = sys.path
- for dirname in path:
- filename = os.path.join(dirname, basename)
- if os.path.isfile(filename):
- return filename
-
- def collect_types(self):
- pass # not needed in the generic engine
-
- def _prnt(self, what=''):
- self._f.write(what + '\n')
-
- def write_source_to_f(self):
- prnt = self._prnt
- # first paste some standard set of lines that are mostly '#include'
- prnt(cffimod_header)
- # then paste the C source given by the user, verbatim.
- prnt(self.verifier.preamble)
- #
- # call generate_gen_xxx_decl(), for every xxx found from
- # ffi._parser._declarations. This generates all the functions.
- self._generate('decl')
- #
- # on Windows, distutils insists on putting init_cffi_xyz in
- # 'export_symbols', so instead of fighting it, just give up and
- # give it one
- if sys.platform == 'win32':
- if sys.version_info >= (3,):
- prefix = 'PyInit_'
- else:
- prefix = 'init'
- modname = self.verifier.get_module_name()
- prnt("void %s%s(void) { }\n" % (prefix, modname))
-
- def load_library(self, flags=0):
- # import it with the CFFI backend
- backend = self.ffi._backend
- # needs to make a path that contains '/', on Posix
- filename = os.path.join(os.curdir, self.verifier.modulefilename)
- module = backend.load_library(filename, flags)
- #
- # call loading_gen_struct() to get the struct layout inferred by
- # the C compiler
- self._load(module, 'loading')
-
- # build the FFILibrary class and instance, this is a module subclass
- # because modules are expected to have usually-constant-attributes and
- # in PyPy this means the JIT is able to treat attributes as constant,
- # which we want.
- class FFILibrary(types.ModuleType):
- _cffi_generic_module = module
- _cffi_ffi = self.ffi
- _cffi_dir = []
- def __dir__(self):
- return FFILibrary._cffi_dir
- library = FFILibrary("")
- #
- # finally, call the loaded_gen_xxx() functions. This will set
- # up the 'library' object.
- self._load(module, 'loaded', library=library)
- return library
-
- def _get_declarations(self):
- lst = [(key, tp) for (key, (tp, qual)) in
- self.ffi._parser._declarations.items()]
- lst.sort()
- return lst
-
- def _generate(self, step_name):
- for name, tp in self._get_declarations():
- kind, realname = name.split(' ', 1)
- try:
- method = getattr(self, '_generate_gen_%s_%s' % (kind,
- step_name))
- except AttributeError:
- raise VerificationError(
- "not implemented in verify(): %r" % name)
- try:
- method(tp, realname)
- except Exception as e:
- model.attach_exception_info(e, name)
- raise
-
- def _load(self, module, step_name, **kwds):
- for name, tp in self._get_declarations():
- kind, realname = name.split(' ', 1)
- method = getattr(self, '_%s_gen_%s' % (step_name, kind))
- try:
- method(tp, realname, module, **kwds)
- except Exception as e:
- model.attach_exception_info(e, name)
- raise
-
- def _generate_nothing(self, tp, name):
- pass
-
- def _loaded_noop(self, tp, name, module, **kwds):
- pass
-
- # ----------
- # typedefs: generates no code so far
-
- _generate_gen_typedef_decl = _generate_nothing
- _loading_gen_typedef = _loaded_noop
- _loaded_gen_typedef = _loaded_noop
-
- # ----------
- # function declarations
-
- def _generate_gen_function_decl(self, tp, name):
- assert isinstance(tp, model.FunctionPtrType)
- if tp.ellipsis:
- # cannot support vararg functions better than this: check for its
- # exact type (including the fixed arguments), and build it as a
- # constant function pointer (no _cffi_f_%s wrapper)
- self._generate_gen_const(False, name, tp)
- return
- prnt = self._prnt
- numargs = len(tp.args)
- argnames = []
- for i, type in enumerate(tp.args):
- indirection = ''
- if isinstance(type, model.StructOrUnion):
- indirection = '*'
- argnames.append('%sx%d' % (indirection, i))
- context = 'argument of %s' % name
- arglist = [type.get_c_name(' %s' % arg, context)
- for type, arg in zip(tp.args, argnames)]
- tpresult = tp.result
- if isinstance(tpresult, model.StructOrUnion):
- arglist.insert(0, tpresult.get_c_name(' *r', context))
- tpresult = model.void_type
- arglist = ', '.join(arglist) or 'void'
- wrappername = '_cffi_f_%s' % name
- self.export_symbols.append(wrappername)
- if tp.abi:
- abi = tp.abi + ' '
- else:
- abi = ''
- funcdecl = ' %s%s(%s)' % (abi, wrappername, arglist)
- context = 'result of %s' % name
- prnt(tpresult.get_c_name(funcdecl, context))
- prnt('{')
- #
- if isinstance(tp.result, model.StructOrUnion):
- result_code = '*r = '
- elif not isinstance(tp.result, model.VoidType):
- result_code = 'return '
- else:
- result_code = ''
- prnt(' %s%s(%s);' % (result_code, name, ', '.join(argnames)))
- prnt('}')
- prnt()
-
- _loading_gen_function = _loaded_noop
-
- def _loaded_gen_function(self, tp, name, module, library):
- assert isinstance(tp, model.FunctionPtrType)
- if tp.ellipsis:
- newfunction = self._load_constant(False, tp, name, module)
- else:
- indirections = []
- base_tp = tp
- if (any(isinstance(typ, model.StructOrUnion) for typ in tp.args)
- or isinstance(tp.result, model.StructOrUnion)):
- indirect_args = []
- for i, typ in enumerate(tp.args):
- if isinstance(typ, model.StructOrUnion):
- typ = model.PointerType(typ)
- indirections.append((i, typ))
- indirect_args.append(typ)
- indirect_result = tp.result
- if isinstance(indirect_result, model.StructOrUnion):
- if indirect_result.fldtypes is None:
- raise TypeError("'%s' is used as result type, "
- "but is opaque" % (
- indirect_result._get_c_name(),))
- indirect_result = model.PointerType(indirect_result)
- indirect_args.insert(0, indirect_result)
- indirections.insert(0, ("result", indirect_result))
- indirect_result = model.void_type
- tp = model.FunctionPtrType(tuple(indirect_args),
- indirect_result, tp.ellipsis)
- BFunc = self.ffi._get_cached_btype(tp)
- wrappername = '_cffi_f_%s' % name
- newfunction = module.load_function(BFunc, wrappername)
- for i, typ in indirections:
- newfunction = self._make_struct_wrapper(newfunction, i, typ,
- base_tp)
- setattr(library, name, newfunction)
- type(library)._cffi_dir.append(name)
-
- def _make_struct_wrapper(self, oldfunc, i, tp, base_tp):
- backend = self.ffi._backend
- BType = self.ffi._get_cached_btype(tp)
- if i == "result":
- ffi = self.ffi
- def newfunc(*args):
- res = ffi.new(BType)
- oldfunc(res, *args)
- return res[0]
- else:
- def newfunc(*args):
- args = args[:i] + (backend.newp(BType, args[i]),) + args[i+1:]
- return oldfunc(*args)
- newfunc._cffi_base_type = base_tp
- return newfunc
-
- # ----------
- # named structs
-
- def _generate_gen_struct_decl(self, tp, name):
- assert name == tp.name
- self._generate_struct_or_union_decl(tp, 'struct', name)
-
- def _loading_gen_struct(self, tp, name, module):
- self._loading_struct_or_union(tp, 'struct', name, module)
-
- def _loaded_gen_struct(self, tp, name, module, **kwds):
- self._loaded_struct_or_union(tp)
-
- def _generate_gen_union_decl(self, tp, name):
- assert name == tp.name
- self._generate_struct_or_union_decl(tp, 'union', name)
-
- def _loading_gen_union(self, tp, name, module):
- self._loading_struct_or_union(tp, 'union', name, module)
-
- def _loaded_gen_union(self, tp, name, module, **kwds):
- self._loaded_struct_or_union(tp)
-
- def _generate_struct_or_union_decl(self, tp, prefix, name):
- if tp.fldnames is None:
- return # nothing to do with opaque structs
- checkfuncname = '_cffi_check_%s_%s' % (prefix, name)
- layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name)
- cname = ('%s %s' % (prefix, name)).strip()
- #
- prnt = self._prnt
- prnt('static void %s(%s *p)' % (checkfuncname, cname))
- prnt('{')
- prnt(' /* only to generate compile-time warnings or errors */')
- prnt(' (void)p;')
- for fname, ftype, fbitsize, fqual in tp.enumfields():
- if (isinstance(ftype, model.PrimitiveType)
- and ftype.is_integer_type()) or fbitsize >= 0:
- # accept all integers, but complain on float or double
- prnt(' (void)((p->%s) << 1);' % fname)
- else:
- # only accept exactly the type declared.
- try:
- prnt(' { %s = &p->%s; (void)tmp; }' % (
- ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual),
- fname))
- except VerificationError as e:
- prnt(' /* %s */' % str(e)) # cannot verify it, ignore
- prnt('}')
- self.export_symbols.append(layoutfuncname)
- prnt('intptr_t %s(intptr_t i)' % (layoutfuncname,))
- prnt('{')
- prnt(' struct _cffi_aligncheck { char x; %s y; };' % cname)
- prnt(' static intptr_t nums[] = {')
- prnt(' sizeof(%s),' % cname)
- prnt(' offsetof(struct _cffi_aligncheck, y),')
- for fname, ftype, fbitsize, fqual in tp.enumfields():
- if fbitsize >= 0:
- continue # xxx ignore fbitsize for now
- prnt(' offsetof(%s, %s),' % (cname, fname))
- if isinstance(ftype, model.ArrayType) and ftype.length is None:
- prnt(' 0, /* %s */' % ftype._get_c_name())
- else:
- prnt(' sizeof(((%s *)0)->%s),' % (cname, fname))
- prnt(' -1')
- prnt(' };')
- prnt(' return nums[i];')
- prnt(' /* the next line is not executed, but compiled */')
- prnt(' %s(0);' % (checkfuncname,))
- prnt('}')
- prnt()
-
- def _loading_struct_or_union(self, tp, prefix, name, module):
- if tp.fldnames is None:
- return # nothing to do with opaque structs
- layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name)
- #
- BFunc = self.ffi._typeof_locked("intptr_t(*)(intptr_t)")[0]
- function = module.load_function(BFunc, layoutfuncname)
- layout = []
- num = 0
- while True:
- x = function(num)
- if x < 0: break
- layout.append(x)
- num += 1
- if isinstance(tp, model.StructOrUnion) and tp.partial:
- # use the function()'s sizes and offsets to guide the
- # layout of the struct
- totalsize = layout[0]
- totalalignment = layout[1]
- fieldofs = layout[2::2]
- fieldsize = layout[3::2]
- tp.force_flatten()
- assert len(fieldofs) == len(fieldsize) == len(tp.fldnames)
- tp.fixedlayout = fieldofs, fieldsize, totalsize, totalalignment
- else:
- cname = ('%s %s' % (prefix, name)).strip()
- self._struct_pending_verification[tp] = layout, cname
-
- def _loaded_struct_or_union(self, tp):
- if tp.fldnames is None:
- return # nothing to do with opaque structs
- self.ffi._get_cached_btype(tp) # force 'fixedlayout' to be considered
-
- if tp in self._struct_pending_verification:
- # check that the layout sizes and offsets match the real ones
- def check(realvalue, expectedvalue, msg):
- if realvalue != expectedvalue:
- raise VerificationError(
- "%s (we have %d, but C compiler says %d)"
- % (msg, expectedvalue, realvalue))
- ffi = self.ffi
- BStruct = ffi._get_cached_btype(tp)
- layout, cname = self._struct_pending_verification.pop(tp)
- check(layout[0], ffi.sizeof(BStruct), "wrong total size")
- check(layout[1], ffi.alignof(BStruct), "wrong total alignment")
- i = 2
- for fname, ftype, fbitsize, fqual in tp.enumfields():
- if fbitsize >= 0:
- continue # xxx ignore fbitsize for now
- check(layout[i], ffi.offsetof(BStruct, fname),
- "wrong offset for field %r" % (fname,))
- if layout[i+1] != 0:
- BField = ffi._get_cached_btype(ftype)
- check(layout[i+1], ffi.sizeof(BField),
- "wrong size for field %r" % (fname,))
- i += 2
- assert i == len(layout)
-
- # ----------
- # 'anonymous' declarations. These are produced for anonymous structs
- # or unions; the 'name' is obtained by a typedef.
-
- def _generate_gen_anonymous_decl(self, tp, name):
- if isinstance(tp, model.EnumType):
- self._generate_gen_enum_decl(tp, name, '')
- else:
- self._generate_struct_or_union_decl(tp, '', name)
-
- def _loading_gen_anonymous(self, tp, name, module):
- if isinstance(tp, model.EnumType):
- self._loading_gen_enum(tp, name, module, '')
- else:
- self._loading_struct_or_union(tp, '', name, module)
-
- def _loaded_gen_anonymous(self, tp, name, module, **kwds):
- if isinstance(tp, model.EnumType):
- self._loaded_gen_enum(tp, name, module, **kwds)
- else:
- self._loaded_struct_or_union(tp)
-
- # ----------
- # constants, likely declared with '#define'
-
- def _generate_gen_const(self, is_int, name, tp=None, category='const',
- check_value=None):
- prnt = self._prnt
- funcname = '_cffi_%s_%s' % (category, name)
- self.export_symbols.append(funcname)
- if check_value is not None:
- assert is_int
- assert category == 'const'
- prnt('int %s(char *out_error)' % funcname)
- prnt('{')
- self._check_int_constant_value(name, check_value)
- prnt(' return 0;')
- prnt('}')
- elif is_int:
- assert category == 'const'
- prnt('int %s(long long *out_value)' % funcname)
- prnt('{')
- prnt(' *out_value = (long long)(%s);' % (name,))
- prnt(' return (%s) <= 0;' % (name,))
- prnt('}')
- else:
- assert tp is not None
- assert check_value is None
- if category == 'var':
- ampersand = '&'
- else:
- ampersand = ''
- extra = ''
- if category == 'const' and isinstance(tp, model.StructOrUnion):
- extra = 'const *'
- ampersand = '&'
- prnt(tp.get_c_name(' %s%s(void)' % (extra, funcname), name))
- prnt('{')
- prnt(' return (%s%s);' % (ampersand, name))
- prnt('}')
- prnt()
-
- def _generate_gen_constant_decl(self, tp, name):
- is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type()
- self._generate_gen_const(is_int, name, tp)
-
- _loading_gen_constant = _loaded_noop
-
- def _load_constant(self, is_int, tp, name, module, check_value=None):
- funcname = '_cffi_const_%s' % name
- if check_value is not None:
- assert is_int
- self._load_known_int_constant(module, funcname)
- value = check_value
- elif is_int:
- BType = self.ffi._typeof_locked("long long*")[0]
- BFunc = self.ffi._typeof_locked("int(*)(long long*)")[0]
- function = module.load_function(BFunc, funcname)
- p = self.ffi.new(BType)
- negative = function(p)
- value = int(p[0])
- if value < 0 and not negative:
- BLongLong = self.ffi._typeof_locked("long long")[0]
- value += (1 << (8*self.ffi.sizeof(BLongLong)))
- else:
- assert check_value is None
- fntypeextra = '(*)(void)'
- if isinstance(tp, model.StructOrUnion):
- fntypeextra = '*' + fntypeextra
- BFunc = self.ffi._typeof_locked(tp.get_c_name(fntypeextra, name))[0]
- function = module.load_function(BFunc, funcname)
- value = function()
- if isinstance(tp, model.StructOrUnion):
- value = value[0]
- return value
-
- def _loaded_gen_constant(self, tp, name, module, library):
- is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type()
- value = self._load_constant(is_int, tp, name, module)
- setattr(library, name, value)
- type(library)._cffi_dir.append(name)
-
- # ----------
- # enums
-
- def _check_int_constant_value(self, name, value):
- prnt = self._prnt
- if value <= 0:
- prnt(' if ((%s) > 0 || (long)(%s) != %dL) {' % (
- name, name, value))
- else:
- prnt(' if ((%s) <= 0 || (unsigned long)(%s) != %dUL) {' % (
- name, name, value))
- prnt(' char buf[64];')
- prnt(' if ((%s) <= 0)' % name)
- prnt(' sprintf(buf, "%%ld", (long)(%s));' % name)
- prnt(' else')
- prnt(' sprintf(buf, "%%lu", (unsigned long)(%s));' %
- name)
- prnt(' sprintf(out_error, "%s has the real value %s, not %s",')
- prnt(' "%s", buf, "%d");' % (name[:100], value))
- prnt(' return -1;')
- prnt(' }')
-
- def _load_known_int_constant(self, module, funcname):
- BType = self.ffi._typeof_locked("char[]")[0]
- BFunc = self.ffi._typeof_locked("int(*)(char*)")[0]
- function = module.load_function(BFunc, funcname)
- p = self.ffi.new(BType, 256)
- if function(p) < 0:
- error = self.ffi.string(p)
- if sys.version_info >= (3,):
- error = str(error, 'utf-8')
- raise VerificationError(error)
-
- def _enum_funcname(self, prefix, name):
- # "$enum_$1" => "___D_enum____D_1"
- name = name.replace('$', '___D_')
- return '_cffi_e_%s_%s' % (prefix, name)
-
- def _generate_gen_enum_decl(self, tp, name, prefix='enum'):
- if tp.partial:
- for enumerator in tp.enumerators:
- self._generate_gen_const(True, enumerator)
- return
- #
- funcname = self._enum_funcname(prefix, name)
- self.export_symbols.append(funcname)
- prnt = self._prnt
- prnt('int %s(char *out_error)' % funcname)
- prnt('{')
- for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues):
- self._check_int_constant_value(enumerator, enumvalue)
- prnt(' return 0;')
- prnt('}')
- prnt()
-
- def _loading_gen_enum(self, tp, name, module, prefix='enum'):
- if tp.partial:
- enumvalues = [self._load_constant(True, tp, enumerator, module)
- for enumerator in tp.enumerators]
- tp.enumvalues = tuple(enumvalues)
- tp.partial_resolved = True
- else:
- funcname = self._enum_funcname(prefix, name)
- self._load_known_int_constant(module, funcname)
-
- def _loaded_gen_enum(self, tp, name, module, library):
- for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues):
- setattr(library, enumerator, enumvalue)
- type(library)._cffi_dir.append(enumerator)
-
- # ----------
- # macros: for now only for integers
-
- def _generate_gen_macro_decl(self, tp, name):
- if tp == '...':
- check_value = None
- else:
- check_value = tp # an integer
- self._generate_gen_const(True, name, check_value=check_value)
-
- _loading_gen_macro = _loaded_noop
-
- def _loaded_gen_macro(self, tp, name, module, library):
- if tp == '...':
- check_value = None
- else:
- check_value = tp # an integer
- value = self._load_constant(True, tp, name, module,
- check_value=check_value)
- setattr(library, name, value)
- type(library)._cffi_dir.append(name)
-
- # ----------
- # global variables
-
- def _generate_gen_variable_decl(self, tp, name):
- if isinstance(tp, model.ArrayType):
- if tp.length_is_unknown():
- prnt = self._prnt
- funcname = '_cffi_sizeof_%s' % (name,)
- self.export_symbols.append(funcname)
- prnt("size_t %s(void)" % funcname)
- prnt("{")
- prnt(" return sizeof(%s);" % (name,))
- prnt("}")
- tp_ptr = model.PointerType(tp.item)
- self._generate_gen_const(False, name, tp_ptr)
- else:
- tp_ptr = model.PointerType(tp)
- self._generate_gen_const(False, name, tp_ptr, category='var')
-
- _loading_gen_variable = _loaded_noop
-
- def _loaded_gen_variable(self, tp, name, module, library):
- if isinstance(tp, model.ArrayType): # int a[5] is "constant" in the
- # sense that "a=..." is forbidden
- if tp.length_is_unknown():
- funcname = '_cffi_sizeof_%s' % (name,)
- BFunc = self.ffi._typeof_locked('size_t(*)(void)')[0]
- function = module.load_function(BFunc, funcname)
- size = function()
- BItemType = self.ffi._get_cached_btype(tp.item)
- length, rest = divmod(size, self.ffi.sizeof(BItemType))
- if rest != 0:
- raise VerificationError(
- "bad size: %r does not seem to be an array of %s" %
- (name, tp.item))
- tp = tp.resolve_length(length)
- tp_ptr = model.PointerType(tp.item)
- value = self._load_constant(False, tp_ptr, name, module)
- # 'value' is a <cdata 'type *'> which we have to replace with
- # a <cdata 'type[N]'> if the N is actually known
- if tp.length is not None:
- BArray = self.ffi._get_cached_btype(tp)
- value = self.ffi.cast(BArray, value)
- setattr(library, name, value)
- type(library)._cffi_dir.append(name)
- return
- # remove ptr= from the library instance, and replace
- # it by a property on the class, which reads/writes into ptr[0].
- funcname = '_cffi_var_%s' % name
- BFunc = self.ffi._typeof_locked(tp.get_c_name('*(*)(void)', name))[0]
- function = module.load_function(BFunc, funcname)
- ptr = function()
- def getter(library):
- return ptr[0]
- def setter(library, value):
- ptr[0] = value
- setattr(type(library), name, property(getter, setter))
- type(library)._cffi_dir.append(name)
-
-cffimod_header = r'''
-#include <stdio.h>
-#include <stddef.h>
-#include <stdarg.h>
-#include <errno.h>
-#include <sys/types.h>   /* XXX for ssize_t on some platforms */
-
-/* this block of #ifs should be kept exactly identical between
- c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py
- and cffi/_cffi_include.h */
-#if defined(_MSC_VER)
-# include <malloc.h>   /* for alloca() */
-# if _MSC_VER < 1600 /* MSVC < 2010 */
- typedef __int8 int8_t;
- typedef __int16 int16_t;
- typedef __int32 int32_t;
- typedef __int64 int64_t;
- typedef unsigned __int8 uint8_t;
- typedef unsigned __int16 uint16_t;
- typedef unsigned __int32 uint32_t;
- typedef unsigned __int64 uint64_t;
- typedef __int8 int_least8_t;
- typedef __int16 int_least16_t;
- typedef __int32 int_least32_t;
- typedef __int64 int_least64_t;
- typedef unsigned __int8 uint_least8_t;
- typedef unsigned __int16 uint_least16_t;
- typedef unsigned __int32 uint_least32_t;
- typedef unsigned __int64 uint_least64_t;
- typedef __int8 int_fast8_t;
- typedef __int16 int_fast16_t;
- typedef __int32 int_fast32_t;
- typedef __int64 int_fast64_t;
- typedef unsigned __int8 uint_fast8_t;
- typedef unsigned __int16 uint_fast16_t;
- typedef unsigned __int32 uint_fast32_t;
- typedef unsigned __int64 uint_fast64_t;
- typedef __int64 intmax_t;
- typedef unsigned __int64 uintmax_t;
-# else
-# include <stdint.h>
-# endif
-# if _MSC_VER < 1800 /* MSVC < 2013 */
-# ifndef __cplusplus
- typedef unsigned char _Bool;
-# endif
-# endif
-#else
-# include <stdint.h>
-# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux)
-# include <alloca.h>
-# endif
-#endif
-'''
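The deleted vengine_gen.py above backs cffi's long-deprecated ffi.verify() API. As a rough, illustrative sketch of the workflow that engine supports (assuming the cffi package and a working C compiler are available; this is not part of the original file):

import cffi

ffi = cffi.FFI()
ffi.cdef("double sqrt(double x);")

# Deprecated API: compiles a small extension module on the fly and checks the
# declarations against the C compiler, which is what the engine above implements.
lib = ffi.verify("#include <math.h>", libraries=["m"])

print(lib.sqrt(2.0))  # 1.4142135623730951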
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/click/core.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/click/core.py
deleted file mode 100644
index cc65e896bf2d754d74b54a84ac501b80127f83ca..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/click/core.py
+++ /dev/null
@@ -1,3042 +0,0 @@
-import enum
-import errno
-import inspect
-import os
-import sys
-import typing as t
-from collections import abc
-from contextlib import contextmanager
-from contextlib import ExitStack
-from functools import update_wrapper
-from gettext import gettext as _
-from gettext import ngettext
-from itertools import repeat
-from types import TracebackType
-
-from . import types
-from .exceptions import Abort
-from .exceptions import BadParameter
-from .exceptions import ClickException
-from .exceptions import Exit
-from .exceptions import MissingParameter
-from .exceptions import UsageError
-from .formatting import HelpFormatter
-from .formatting import join_options
-from .globals import pop_context
-from .globals import push_context
-from .parser import _flag_needs_value
-from .parser import OptionParser
-from .parser import split_opt
-from .termui import confirm
-from .termui import prompt
-from .termui import style
-from .utils import _detect_program_name
-from .utils import _expand_args
-from .utils import echo
-from .utils import make_default_short_help
-from .utils import make_str
-from .utils import PacifyFlushWrapper
-
-if t.TYPE_CHECKING:
- import typing_extensions as te
- from .shell_completion import CompletionItem
-
-F = t.TypeVar("F", bound=t.Callable[..., t.Any])
-V = t.TypeVar("V")
-
-
-def _complete_visible_commands(
- ctx: "Context", incomplete: str
-) -> t.Iterator[t.Tuple[str, "Command"]]:
- """List all the subcommands of a group that start with the
- incomplete value and aren't hidden.
-
- :param ctx: Invocation context for the group.
- :param incomplete: Value being completed. May be empty.
- """
- multi = t.cast(MultiCommand, ctx.command)
-
- for name in multi.list_commands(ctx):
- if name.startswith(incomplete):
- command = multi.get_command(ctx, name)
-
- if command is not None and not command.hidden:
- yield name, command
-
-
-def _check_multicommand(
- base_command: "MultiCommand", cmd_name: str, cmd: "Command", register: bool = False
-) -> None:
- if not base_command.chain or not isinstance(cmd, MultiCommand):
- return
- if register:
- hint = (
- "It is not possible to add multi commands as children to"
- " another multi command that is in chain mode."
- )
- else:
- hint = (
- "Found a multi command as subcommand to a multi command"
- " that is in chain mode. This is not supported."
- )
- raise RuntimeError(
- f"{hint}. Command {base_command.name!r} is set to chain and"
- f" {cmd_name!r} was added as a subcommand but it in itself is a"
- f" multi command. ({cmd_name!r} is a {type(cmd).__name__}"
- f" within a chained {type(base_command).__name__} named"
- f" {base_command.name!r})."
- )
-
-
-def batch(iterable: t.Iterable[V], batch_size: int) -> t.List[t.Tuple[V, ...]]:
- return list(zip(*repeat(iter(iterable), batch_size)))
-
-
-@contextmanager
-def augment_usage_errors(
- ctx: "Context", param: t.Optional["Parameter"] = None
-) -> t.Iterator[None]:
- """Context manager that attaches extra information to exceptions."""
- try:
- yield
- except BadParameter as e:
- if e.ctx is None:
- e.ctx = ctx
- if param is not None and e.param is None:
- e.param = param
- raise
- except UsageError as e:
- if e.ctx is None:
- e.ctx = ctx
- raise
-
-
-def iter_params_for_processing(
- invocation_order: t.Sequence["Parameter"],
- declaration_order: t.Sequence["Parameter"],
-) -> t.List["Parameter"]:
- """Given a sequence of parameters in the order as should be considered
- for processing and an iterable of parameters that exist, this returns
- a list in the correct order as they should be processed.
- """
-
- def sort_key(item: "Parameter") -> t.Tuple[bool, float]:
- try:
- idx: float = invocation_order.index(item)
- except ValueError:
- idx = float("inf")
-
- return not item.is_eager, idx
-
- return sorted(declaration_order, key=sort_key)
-
-
-class ParameterSource(enum.Enum):
- """This is an :class:`~enum.Enum` that indicates the source of a
- parameter's value.
-
- Use :meth:`click.Context.get_parameter_source` to get the
- source for a parameter by name.
-
- .. versionchanged:: 8.0
- Use :class:`~enum.Enum` and drop the ``validate`` method.
-
- .. versionchanged:: 8.0
- Added the ``PROMPT`` value.
- """
-
- COMMANDLINE = enum.auto()
- """The value was provided by the command line args."""
- ENVIRONMENT = enum.auto()
- """The value was provided with an environment variable."""
- DEFAULT = enum.auto()
- """Used the default specified by the parameter."""
- DEFAULT_MAP = enum.auto()
- """Used a default provided by :attr:`Context.default_map`."""
- PROMPT = enum.auto()
- """Used a prompt to confirm a default or provide a value."""
-
-
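# Editor's illustration (not part of the deleted click/core.py): how the
# ParameterSource values above are usually observed via
# Context.get_parameter_source(); assumes click is installed and uses only
# its documented public API.
import click

@click.command()
@click.option("--count", default=1)
@click.pass_context
def hello(ctx, count):
    # Prints COMMANDLINE, ENVIRONMENT, DEFAULT, DEFAULT_MAP or PROMPT.
    click.echo(ctx.get_parameter_source("count"))

if __name__ == "__main__":
    hello()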
-class Context:
- """The context is a special internal object that holds state relevant
- for the script execution at every single level. It's normally invisible
- to commands unless they opt-in to getting access to it.
-
- The context is useful as it can pass internal objects around and can
- control special execution features such as reading data from
- environment variables.
-
- A context can be used as context manager in which case it will call
- :meth:`close` on teardown.
-
- :param command: the command class for this context.
- :param parent: the parent context.
- :param info_name: the info name for this invocation. Generally this
- is the most descriptive name for the script or
- command. For the toplevel script it is usually
- the name of the script, for commands below it it's
- the name of the command.
- :param obj: an arbitrary object of user data.
- :param auto_envvar_prefix: the prefix to use for automatic environment
- variables. If this is `None` then reading
- from environment variables is disabled. This
- does not affect manually set environment
- variables which are always read.
- :param default_map: a dictionary (like object) with default values
- for parameters.
- :param terminal_width: the width of the terminal. The default is
- inherit from parent context. If no context
- defines the terminal width then auto
- detection will be applied.
- :param max_content_width: the maximum width for content rendered by
- Click (this currently only affects help
- pages). This defaults to 80 characters if
- not overridden. In other words: even if the
- terminal is larger than that, Click will not
- format things wider than 80 characters by
- default. In addition to that, formatters might
- add some safety mapping on the right.
- :param resilient_parsing: if this flag is enabled then Click will
- parse without any interactivity or callback
- invocation. Default values will also be
- ignored. This is useful for implementing
- things such as completion support.
- :param allow_extra_args: if this is set to `True` then extra arguments
- at the end will not raise an error and will be
- kept on the context. The default is to inherit
- from the command.
- :param allow_interspersed_args: if this is set to `False` then options
- and arguments cannot be mixed. The
- default is to inherit from the command.
- :param ignore_unknown_options: instructs click to ignore options it does
- not know and keeps them for later
- processing.
- :param help_option_names: optionally a list of strings that define how
- the default help parameter is named. The
- default is ``['--help']``.
- :param token_normalize_func: an optional function that is used to
- normalize tokens (options, choices,
- etc.). This for instance can be used to
- implement case insensitive behavior.
- :param color: controls if the terminal supports ANSI colors or not. The
- default is autodetection. This is only needed if ANSI
- codes are used in texts that Click prints which is by
- default not the case. This for instance would affect
- help output.
- :param show_default: Show the default value for commands. If this
- value is not set, it defaults to the value from the parent
- context. ``Command.show_default`` overrides this default for the
- specific command.
-
- .. versionchanged:: 8.1
- The ``show_default`` parameter is overridden by
- ``Command.show_default``, instead of the other way around.
-
- .. versionchanged:: 8.0
- The ``show_default`` parameter defaults to the value from the
- parent context.
-
- .. versionchanged:: 7.1
- Added the ``show_default`` parameter.
-
- .. versionchanged:: 4.0
- Added the ``color``, ``ignore_unknown_options``, and
- ``max_content_width`` parameters.
-
- .. versionchanged:: 3.0
- Added the ``allow_extra_args`` and ``allow_interspersed_args``
- parameters.
-
- .. versionchanged:: 2.0
- Added the ``resilient_parsing``, ``help_option_names``, and
- ``token_normalize_func`` parameters.
- """
-
- #: The formatter class to create with :meth:`make_formatter`.
- #:
- #: .. versionadded:: 8.0
- formatter_class: t.Type["HelpFormatter"] = HelpFormatter
-
- def __init__(
- self,
- command: "Command",
- parent: t.Optional["Context"] = None,
- info_name: t.Optional[str] = None,
- obj: t.Optional[t.Any] = None,
- auto_envvar_prefix: t.Optional[str] = None,
- default_map: t.Optional[t.MutableMapping[str, t.Any]] = None,
- terminal_width: t.Optional[int] = None,
- max_content_width: t.Optional[int] = None,
- resilient_parsing: bool = False,
- allow_extra_args: t.Optional[bool] = None,
- allow_interspersed_args: t.Optional[bool] = None,
- ignore_unknown_options: t.Optional[bool] = None,
- help_option_names: t.Optional[t.List[str]] = None,
- token_normalize_func: t.Optional[t.Callable[[str], str]] = None,
- color: t.Optional[bool] = None,
- show_default: t.Optional[bool] = None,
- ) -> None:
- #: the parent context or `None` if none exists.
- self.parent = parent
- #: the :class:`Command` for this context.
- self.command = command
- #: the descriptive information name
- self.info_name = info_name
- #: Map of parameter names to their parsed values. Parameters
- #: with ``expose_value=False`` are not stored.
- self.params: t.Dict[str, t.Any] = {}
- #: the leftover arguments.
- self.args: t.List[str] = []
- #: protected arguments. These are arguments that are prepended
- #: to `args` when certain parsing scenarios are encountered but
- #: must never be propagated to other arguments. This is used
- #: to implement nested parsing.
- self.protected_args: t.List[str] = []
- #: the collected prefixes of the command's options.
- self._opt_prefixes: t.Set[str] = set(parent._opt_prefixes) if parent else set()
-
- if obj is None and parent is not None:
- obj = parent.obj
-
- #: the user object stored.
- self.obj: t.Any = obj
- self._meta: t.Dict[str, t.Any] = getattr(parent, "meta", {})
-
- #: A dictionary (-like object) with defaults for parameters.
- if (
- default_map is None
- and info_name is not None
- and parent is not None
- and parent.default_map is not None
- ):
- default_map = parent.default_map.get(info_name)
-
- self.default_map: t.Optional[t.MutableMapping[str, t.Any]] = default_map
-
- #: This flag indicates if a subcommand is going to be executed. A
- #: group callback can use this information to figure out if it's
- #: being executed directly or because the execution flow passes
- #: onwards to a subcommand. By default it's None, but it can be
- #: the name of the subcommand to execute.
- #:
- #: If chaining is enabled this will be set to ``'*'`` in case
- #: any commands are executed. It is however not possible to
- #: figure out which ones. If you require this knowledge you
- #: should use a :func:`result_callback`.
- self.invoked_subcommand: t.Optional[str] = None
-
- if terminal_width is None and parent is not None:
- terminal_width = parent.terminal_width
-
- #: The width of the terminal (None is autodetection).
- self.terminal_width: t.Optional[int] = terminal_width
-
- if max_content_width is None and parent is not None:
- max_content_width = parent.max_content_width
-
- #: The maximum width of formatted content (None implies a sensible
- #: default which is 80 for most things).
- self.max_content_width: t.Optional[int] = max_content_width
-
- if allow_extra_args is None:
- allow_extra_args = command.allow_extra_args
-
- #: Indicates if the context allows extra args or if it should
- #: fail on parsing.
- #:
- #: .. versionadded:: 3.0
- self.allow_extra_args = allow_extra_args
-
- if allow_interspersed_args is None:
- allow_interspersed_args = command.allow_interspersed_args
-
- #: Indicates if the context allows mixing of arguments and
- #: options or not.
- #:
- #: .. versionadded:: 3.0
- self.allow_interspersed_args: bool = allow_interspersed_args
-
- if ignore_unknown_options is None:
- ignore_unknown_options = command.ignore_unknown_options
-
- #: Instructs click to ignore options that a command does not
- #: understand and will store it on the context for later
- #: processing. This is primarily useful for situations where you
- #: want to call into external programs. Generally this pattern is
- #: strongly discouraged because it's not possible to losslessly
- #: forward all arguments.
- #:
- #: .. versionadded:: 4.0
- self.ignore_unknown_options: bool = ignore_unknown_options
-
- if help_option_names is None:
- if parent is not None:
- help_option_names = parent.help_option_names
- else:
- help_option_names = ["--help"]
-
- #: The names for the help options.
- self.help_option_names: t.List[str] = help_option_names
-
- if token_normalize_func is None and parent is not None:
- token_normalize_func = parent.token_normalize_func
-
- #: An optional normalization function for tokens. This is
- #: options, choices, commands etc.
- self.token_normalize_func: t.Optional[
- t.Callable[[str], str]
- ] = token_normalize_func
-
- #: Indicates if resilient parsing is enabled. In that case Click
- #: will do its best to not cause any failures and default values
- #: will be ignored. Useful for completion.
- self.resilient_parsing: bool = resilient_parsing
-
- # If there is no envvar prefix yet, but the parent has one and
- # the command on this level has a name, we can expand the envvar
- # prefix automatically.
- if auto_envvar_prefix is None:
- if (
- parent is not None
- and parent.auto_envvar_prefix is not None
- and self.info_name is not None
- ):
- auto_envvar_prefix = (
- f"{parent.auto_envvar_prefix}_{self.info_name.upper()}"
- )
- else:
- auto_envvar_prefix = auto_envvar_prefix.upper()
-
- if auto_envvar_prefix is not None:
- auto_envvar_prefix = auto_envvar_prefix.replace("-", "_")
-
- self.auto_envvar_prefix: t.Optional[str] = auto_envvar_prefix
-
- if color is None and parent is not None:
- color = parent.color
-
- #: Controls if styling output is wanted or not.
- self.color: t.Optional[bool] = color
-
- if show_default is None and parent is not None:
- show_default = parent.show_default
-
- #: Show option default values when formatting help text.
- self.show_default: t.Optional[bool] = show_default
-
- self._close_callbacks: t.List[t.Callable[[], t.Any]] = []
- self._depth = 0
- self._parameter_source: t.Dict[str, ParameterSource] = {}
- self._exit_stack = ExitStack()
-
- def to_info_dict(self) -> t.Dict[str, t.Any]:
- """Gather information that could be useful for a tool generating
- user-facing documentation. This traverses the entire CLI
- structure.
-
- .. code-block:: python
-
- with Context(cli) as ctx:
- info = ctx.to_info_dict()
-
- .. versionadded:: 8.0
- """
- return {
- "command": self.command.to_info_dict(self),
- "info_name": self.info_name,
- "allow_extra_args": self.allow_extra_args,
- "allow_interspersed_args": self.allow_interspersed_args,
- "ignore_unknown_options": self.ignore_unknown_options,
- "auto_envvar_prefix": self.auto_envvar_prefix,
- }
-
- def __enter__(self) -> "Context":
- self._depth += 1
- push_context(self)
- return self
-
- def __exit__(
- self,
- exc_type: t.Optional[t.Type[BaseException]],
- exc_value: t.Optional[BaseException],
- tb: t.Optional[TracebackType],
- ) -> None:
- self._depth -= 1
- if self._depth == 0:
- self.close()
- pop_context()
-
- @contextmanager
- def scope(self, cleanup: bool = True) -> t.Iterator["Context"]:
- """This helper method can be used with the context object to promote
- it to the current thread local (see :func:`get_current_context`).
- The default behavior of this is to invoke the cleanup functions which
- can be disabled by setting `cleanup` to `False`. The cleanup
- functions are typically used for things such as closing file handles.
-
- If the cleanup is intended the context object can also be directly
- used as a context manager.
-
- Example usage::
-
- with ctx.scope():
- assert get_current_context() is ctx
-
- This is equivalent::
-
- with ctx:
- assert get_current_context() is ctx
-
- .. versionadded:: 5.0
-
- :param cleanup: controls if the cleanup functions should be run or
- not. The default is to run these functions. In
- some situations the context only wants to be
- temporarily pushed in which case this can be disabled.
- Nested pushes automatically defer the cleanup.
- """
- if not cleanup:
- self._depth += 1
- try:
- with self as rv:
- yield rv
- finally:
- if not cleanup:
- self._depth -= 1
-
- @property
- def meta(self) -> t.Dict[str, t.Any]:
- """This is a dictionary which is shared with all the contexts
- that are nested. It exists so that click utilities can store some
- state here if they need to. It is however the responsibility of
- that code to manage this dictionary well.
-
- The keys are supposed to be unique dotted strings. For instance
- module paths are a good choice for it. What is stored in there is
- irrelevant for the operation of click. However what is important is
- that code that places data here adheres to the general semantics of
- the system.
-
- Example usage::
-
- LANG_KEY = f'{__name__}.lang'
-
- def set_language(value):
- ctx = get_current_context()
- ctx.meta[LANG_KEY] = value
-
- def get_language():
- return get_current_context().meta.get(LANG_KEY, 'en_US')
-
- .. versionadded:: 5.0
- """
- return self._meta
-
- def make_formatter(self) -> HelpFormatter:
- """Creates the :class:`~click.HelpFormatter` for the help and
- usage output.
-
- To quickly customize the formatter class used without overriding
- this method, set the :attr:`formatter_class` attribute.
-
- .. versionchanged:: 8.0
- Added the :attr:`formatter_class` attribute.
- """
- return self.formatter_class(
- width=self.terminal_width, max_width=self.max_content_width
- )
-
- def with_resource(self, context_manager: t.ContextManager[V]) -> V:
- """Register a resource as if it were used in a ``with``
- statement. The resource will be cleaned up when the context is
- popped.
-
- Uses :meth:`contextlib.ExitStack.enter_context`. It calls the
- resource's ``__enter__()`` method and returns the result. When
- the context is popped, it closes the stack, which calls the
- resource's ``__exit__()`` method.
-
- To register a cleanup function for something that isn't a
- context manager, use :meth:`call_on_close`. Or use something
- from :mod:`contextlib` to turn it into a context manager first.
-
- .. code-block:: python
-
- @click.group()
- @click.option("--name")
- @click.pass_context
- def cli(ctx):
- ctx.obj = ctx.with_resource(connect_db(name))
-
- :param context_manager: The context manager to enter.
- :return: Whatever ``context_manager.__enter__()`` returns.
-
- .. versionadded:: 8.0
- """
- return self._exit_stack.enter_context(context_manager)
-
- def call_on_close(self, f: t.Callable[..., t.Any]) -> t.Callable[..., t.Any]:
- """Register a function to be called when the context tears down.
-
- This can be used to close resources opened during the script
- execution. Resources that support Python's context manager
- protocol which would be used in a ``with`` statement should be
- registered with :meth:`with_resource` instead.
-
- :param f: The function to execute on teardown.
- """
- return self._exit_stack.callback(f)
-
- def close(self) -> None:
- """Invoke all close callbacks registered with
- :meth:`call_on_close`, and exit all context managers entered
- with :meth:`with_resource`.
- """
- self._exit_stack.close()
- # In case the context is reused, create a new exit stack.
- self._exit_stack = ExitStack()
-
- @property
- def command_path(self) -> str:
- """The computed command path. This is used for the ``usage``
- information on the help page. It's automatically created by
- combining the info names of the chain of contexts to the root.
- """
- rv = ""
- if self.info_name is not None:
- rv = self.info_name
- if self.parent is not None:
- parent_command_path = [self.parent.command_path]
-
- if isinstance(self.parent.command, Command):
- for param in self.parent.command.get_params(self):
- parent_command_path.extend(param.get_usage_pieces(self))
-
- rv = f"{' '.join(parent_command_path)} {rv}"
- return rv.lstrip()
-
- def find_root(self) -> "Context":
- """Finds the outermost context."""
- node = self
- while node.parent is not None:
- node = node.parent
- return node
-
- def find_object(self, object_type: t.Type[V]) -> t.Optional[V]:
- """Finds the closest object of a given type."""
- node: t.Optional["Context"] = self
-
- while node is not None:
- if isinstance(node.obj, object_type):
- return node.obj
-
- node = node.parent
-
- return None
-
- def ensure_object(self, object_type: t.Type[V]) -> V:
- """Like :meth:`find_object` but sets the innermost object to a
- new instance of `object_type` if it does not exist.
- """
- rv = self.find_object(object_type)
- if rv is None:
- self.obj = rv = object_type()
- return rv
-
- @t.overload
- def lookup_default(
- self, name: str, call: "te.Literal[True]" = True
- ) -> t.Optional[t.Any]:
- ...
-
- @t.overload
- def lookup_default(
- self, name: str, call: "te.Literal[False]" = ...
- ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]:
- ...
-
- def lookup_default(self, name: str, call: bool = True) -> t.Optional[t.Any]:
- """Get the default for a parameter from :attr:`default_map`.
-
- :param name: Name of the parameter.
- :param call: If the default is a callable, call it. Disable to
- return the callable instead.
-
- .. versionchanged:: 8.0
- Added the ``call`` parameter.
- """
- if self.default_map is not None:
- value = self.default_map.get(name)
-
- if call and callable(value):
- return value()
-
- return value
-
- return None
-
- def fail(self, message: str) -> "te.NoReturn":
- """Aborts the execution of the program with a specific error
- message.
-
- :param message: the error message to fail with.
- """
- raise UsageError(message, self)
-
- def abort(self) -> "te.NoReturn":
- """Aborts the script."""
- raise Abort()
-
- def exit(self, code: int = 0) -> "te.NoReturn":
- """Exits the application with a given exit code."""
- raise Exit(code)
-
- def get_usage(self) -> str:
- """Helper method to get formatted usage string for the current
- context and command.
- """
- return self.command.get_usage(self)
-
- def get_help(self) -> str:
- """Helper method to get formatted help page for the current
- context and command.
- """
- return self.command.get_help(self)
-
- def _make_sub_context(self, command: "Command") -> "Context":
- """Create a new context of the same type as this context, but
- for a new command.
-
- :meta private:
- """
- return type(self)(command, info_name=command.name, parent=self)
-
- @t.overload
- def invoke(
- __self, # noqa: B902
- __callback: "t.Callable[..., V]",
- *args: t.Any,
- **kwargs: t.Any,
- ) -> V:
- ...
-
- @t.overload
- def invoke(
- __self, # noqa: B902
- __callback: "Command",
- *args: t.Any,
- **kwargs: t.Any,
- ) -> t.Any:
- ...
-
- def invoke(
- __self, # noqa: B902
- __callback: t.Union["Command", "t.Callable[..., V]"],
- *args: t.Any,
- **kwargs: t.Any,
- ) -> t.Union[t.Any, V]:
- """Invokes a command callback in exactly the way it expects. There
- are two ways to invoke this method:
-
- 1. the first argument can be a callback and all other arguments and
- keyword arguments are forwarded directly to the function.
- 2. the first argument is a click command object. In that case all
- arguments are forwarded as well but proper click parameters
- (options and click arguments) must be keyword arguments and Click
- will fill in defaults.
-
- Note that before Click 3.2 keyword arguments were not properly filled
- in against the intention of this code and no context was created. For
- more information about this change and why it was done in a bugfix
- release see :ref:`upgrade-to-3.2`.
-
- .. versionchanged:: 8.0
- All ``kwargs`` are tracked in :attr:`params` so they will be
- passed if :meth:`forward` is called at multiple levels.
- """
- if isinstance(__callback, Command):
- other_cmd = __callback
-
- if other_cmd.callback is None:
- raise TypeError(
- "The given command does not have a callback that can be invoked."
- )
- else:
- __callback = t.cast("t.Callable[..., V]", other_cmd.callback)
-
- ctx = __self._make_sub_context(other_cmd)
-
- for param in other_cmd.params:
- if param.name not in kwargs and param.expose_value:
- kwargs[param.name] = param.type_cast_value( # type: ignore
- ctx, param.get_default(ctx)
- )
-
- # Track all kwargs as params, so that forward() will pass
- # them on in subsequent calls.
- ctx.params.update(kwargs)
- else:
- ctx = __self
-
- with augment_usage_errors(__self):
- with ctx:
- return __callback(*args, **kwargs)
-
- def forward(
- __self, __cmd: "Command", *args: t.Any, **kwargs: t.Any # noqa: B902
- ) -> t.Any:
- """Similar to :meth:`invoke` but fills in default keyword
- arguments from the current context if the other command expects
- it. This cannot invoke callbacks directly, only other commands.
-
- .. versionchanged:: 8.0
- All ``kwargs`` are tracked in :attr:`params` so they will be
- passed if ``forward`` is called at multiple levels.
- """
- # Can only forward to other commands, not direct callbacks.
- if not isinstance(__cmd, Command):
- raise TypeError("Callback is not a command.")
-
- for param in __self.params:
- if param not in kwargs:
- kwargs[param] = __self.params[param]
-
- return __self.invoke(__cmd, *args, **kwargs)
-
- def set_parameter_source(self, name: str, source: ParameterSource) -> None:
- """Set the source of a parameter. This indicates the location
- from which the value of the parameter was obtained.
-
- :param name: The name of the parameter.
- :param source: A member of :class:`~click.core.ParameterSource`.
- """
- self._parameter_source[name] = source
-
- def get_parameter_source(self, name: str) -> t.Optional[ParameterSource]:
- """Get the source of a parameter. This indicates the location
- from which the value of the parameter was obtained.
-
- This can be useful for determining when a user specified a value
- on the command line that is the same as the default value. It
- will be :attr:`~click.core.ParameterSource.DEFAULT` only if the
- value was actually taken from the default.
-
- :param name: The name of the parameter.
- :rtype: ParameterSource
-
- .. versionchanged:: 8.0
- Returns ``None`` if the parameter was not provided from any
- source.
- """
- return self._parameter_source.get(name)
-
-
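# Editor's illustration (not part of the deleted file): the two invocation
# helpers documented in Context.invoke() and Context.forward() above; assumes
# click is installed and mirrors the pattern from click's own documentation.
import click

@click.command()
@click.option("--count", default=1)
def hello(count):
    click.echo(f"hello x{count}")

@click.command()
@click.option("--count", default=1)
@click.pass_context
def repeat(ctx, count):
    ctx.forward(hello)                  # re-uses this command's own --count value
    ctx.invoke(hello, count=count + 1)  # passes explicit keyword arguments instead

if __name__ == "__main__":
    repeat()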
-class BaseCommand:
- """The base command implements the minimal API contract of commands.
- Most code will never use this as it does not implement a lot of useful
- functionality but it can act as the direct subclass of alternative
- parsing methods that do not depend on the Click parser.
-
- For instance, this can be used to bridge Click and other systems like
- argparse or docopt.
-
- Because base commands do not implement a lot of the API that other
- parts of Click take for granted, they are not supported for all
- operations. For instance, they cannot be used with the decorators
- usually and they have no built-in callback system.
-
- .. versionchanged:: 2.0
- Added the `context_settings` parameter.
-
- :param name: the name of the command to use unless a group overrides it.
- :param context_settings: an optional dictionary with defaults that are
- passed to the context object.
- """
-
- #: The context class to create with :meth:`make_context`.
- #:
- #: .. versionadded:: 8.0
- context_class: t.Type[Context] = Context
- #: the default for the :attr:`Context.allow_extra_args` flag.
- allow_extra_args = False
- #: the default for the :attr:`Context.allow_interspersed_args` flag.
- allow_interspersed_args = True
- #: the default for the :attr:`Context.ignore_unknown_options` flag.
- ignore_unknown_options = False
-
- def __init__(
- self,
- name: t.Optional[str],
- context_settings: t.Optional[t.MutableMapping[str, t.Any]] = None,
- ) -> None:
- #: the name the command thinks it has. Upon registering a command
- #: on a :class:`Group` the group will default the command name
- #: with this information. You should instead use the
- #: :class:`Context`\'s :attr:`~Context.info_name` attribute.
- self.name = name
-
- if context_settings is None:
- context_settings = {}
-
- #: an optional dictionary with defaults passed to the context.
- self.context_settings: t.MutableMapping[str, t.Any] = context_settings
-
- def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]:
- """Gather information that could be useful for a tool generating
- user-facing documentation. This traverses the entire structure
- below this command.
-
- Use :meth:`click.Context.to_info_dict` to traverse the entire
- CLI structure.
-
- :param ctx: A :class:`Context` representing this command.
-
- .. versionadded:: 8.0
- """
- return {"name": self.name}
-
- def __repr__(self) -> str:
- return f"<{self.__class__.__name__} {self.name}>"
-
- def get_usage(self, ctx: Context) -> str:
- raise NotImplementedError("Base commands cannot get usage")
-
- def get_help(self, ctx: Context) -> str:
- raise NotImplementedError("Base commands cannot get help")
-
- def make_context(
- self,
- info_name: t.Optional[str],
- args: t.List[str],
- parent: t.Optional[Context] = None,
- **extra: t.Any,
- ) -> Context:
- """This function when given an info name and arguments will kick
- off the parsing and create a new :class:`Context`. It does not
- invoke the actual command callback though.
-
- To quickly customize the context class used without overriding
- this method, set the :attr:`context_class` attribute.
-
- :param info_name: the info name for this invocation. Generally this
- is the most descriptive name for the script or
- command. For the toplevel script it's usually
- the name of the script, for commands below it's
- the name of the command.
- :param args: the arguments to parse as list of strings.
- :param parent: the parent context if available.
- :param extra: extra keyword arguments forwarded to the context
- constructor.
-
- .. versionchanged:: 8.0
- Added the :attr:`context_class` attribute.
- """
- for key, value in self.context_settings.items():
- if key not in extra:
- extra[key] = value
-
- ctx = self.context_class(
- self, info_name=info_name, parent=parent, **extra # type: ignore
- )
-
- with ctx.scope(cleanup=False):
- self.parse_args(ctx, args)
- return ctx
-
- def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]:
- """Given a context and a list of arguments this creates the parser
- and parses the arguments, then modifies the context as necessary.
- This is automatically invoked by :meth:`make_context`.
- """
- raise NotImplementedError("Base commands do not know how to parse arguments.")
-
- def invoke(self, ctx: Context) -> t.Any:
- """Given a context, this invokes the command. The default
- implementation is raising a not implemented error.
- """
- raise NotImplementedError("Base commands are not invocable by default")
-
- def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]:
- """Return a list of completions for the incomplete value. Looks
- at the names of chained multi-commands.
-
- Any command could be part of a chained multi-command, so sibling
- commands are valid at any point during command completion. Other
- command classes will return more completions.
-
- :param ctx: Invocation context for this command.
- :param incomplete: Value being completed. May be empty.
-
- .. versionadded:: 8.0
- """
- from click.shell_completion import CompletionItem
-
- results: t.List["CompletionItem"] = []
-
- while ctx.parent is not None:
- ctx = ctx.parent
-
- if isinstance(ctx.command, MultiCommand) and ctx.command.chain:
- results.extend(
- CompletionItem(name, help=command.get_short_help_str())
- for name, command in _complete_visible_commands(ctx, incomplete)
- if name not in ctx.protected_args
- )
-
- return results
-
- @t.overload
- def main(
- self,
- args: t.Optional[t.Sequence[str]] = None,
- prog_name: t.Optional[str] = None,
- complete_var: t.Optional[str] = None,
- standalone_mode: "te.Literal[True]" = True,
- **extra: t.Any,
- ) -> "te.NoReturn":
- ...
-
- @t.overload
- def main(
- self,
- args: t.Optional[t.Sequence[str]] = None,
- prog_name: t.Optional[str] = None,
- complete_var: t.Optional[str] = None,
- standalone_mode: bool = ...,
- **extra: t.Any,
- ) -> t.Any:
- ...
-
- def main(
- self,
- args: t.Optional[t.Sequence[str]] = None,
- prog_name: t.Optional[str] = None,
- complete_var: t.Optional[str] = None,
- standalone_mode: bool = True,
- windows_expand_args: bool = True,
- **extra: t.Any,
- ) -> t.Any:
- """This is the way to invoke a script with all the bells and
- whistles as a command line application. This will always terminate
- the application after a call. If this is not wanted, ``SystemExit``
- needs to be caught.
-
- This method is also available by directly calling the instance of
- a :class:`Command`.
-
- :param args: the arguments that should be used for parsing. If not
- provided, ``sys.argv[1:]`` is used.
- :param prog_name: the program name that should be used. By default
- the program name is constructed by taking the file
- name from ``sys.argv[0]``.
- :param complete_var: the environment variable that controls the
- bash completion support. The default is
- ``"__COMPLETE"`` with prog_name in
- uppercase.
- :param standalone_mode: the default behavior is to invoke the script
- in standalone mode. Click will then
- handle exceptions and convert them into
- error messages and the function will never
- return but shut down the interpreter. If
- this is set to `False` they will be
- propagated to the caller and the return
- value of this function is the return value
- of :meth:`invoke`.
- :param windows_expand_args: Expand glob patterns, user dir, and
- env vars in command line args on Windows.
- :param extra: extra keyword arguments are forwarded to the context
- constructor. See :class:`Context` for more information.
-
- .. versionchanged:: 8.0.1
- Added the ``windows_expand_args`` parameter to allow
- disabling command line arg expansion on Windows.
-
- .. versionchanged:: 8.0
- When taking arguments from ``sys.argv`` on Windows, glob
- patterns, user dir, and env vars are expanded.
-
- .. versionchanged:: 3.0
- Added the ``standalone_mode`` parameter.
- """
- if args is None:
- args = sys.argv[1:]
-
- if os.name == "nt" and windows_expand_args:
- args = _expand_args(args)
- else:
- args = list(args)
-
- if prog_name is None:
- prog_name = _detect_program_name()
-
- # Process shell completion requests and exit early.
- self._main_shell_completion(extra, prog_name, complete_var)
-
- try:
- try:
- with self.make_context(prog_name, args, **extra) as ctx:
- rv = self.invoke(ctx)
- if not standalone_mode:
- return rv
- # it's not safe to `ctx.exit(rv)` here!
- # note that `rv` may actually contain data like "1" which
- # has obvious effects
- # more subtle case: `rv=[None, None]` can come out of
- # chained commands which all returned `None` -- so it's not
- # even always obvious that `rv` indicates success/failure
- # by its truthiness/falsiness
- ctx.exit()
- except (EOFError, KeyboardInterrupt) as e:
- echo(file=sys.stderr)
- raise Abort() from e
- except ClickException as e:
- if not standalone_mode:
- raise
- e.show()
- sys.exit(e.exit_code)
- except OSError as e:
- if e.errno == errno.EPIPE:
- sys.stdout = t.cast(t.TextIO, PacifyFlushWrapper(sys.stdout))
- sys.stderr = t.cast(t.TextIO, PacifyFlushWrapper(sys.stderr))
- sys.exit(1)
- else:
- raise
- except Exit as e:
- if standalone_mode:
- sys.exit(e.exit_code)
- else:
- # in non-standalone mode, return the exit code
- # note that this is only reached if `self.invoke` above raises
- # an Exit explicitly -- thus bypassing the check there which
- # would return its result
- # the results of non-standalone execution may therefore be
- # somewhat ambiguous: if there are codepaths which lead to
- # `ctx.exit(1)` and to `return 1`, the caller won't be able to
- # tell the difference between the two
- return e.exit_code
- except Abort:
- if not standalone_mode:
- raise
- echo(_("Aborted!"), file=sys.stderr)
- sys.exit(1)
-
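For illustration, a minimal sketch of calling a command object's ``main()`` with ``standalone_mode=False``, so exceptions propagate and the callback's return value is handed back to the caller instead of terminating the interpreter (the command and option names are assumptions):

import click

@click.command()
@click.option("--count", default=1)
def repeat(count):
    # The return value is only visible to the caller in non-standalone mode.
    return count * 2

result = repeat.main(["--count", "3"], standalone_mode=False)
print(result)  # 6
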
- def _main_shell_completion(
- self,
- ctx_args: t.MutableMapping[str, t.Any],
- prog_name: str,
- complete_var: t.Optional[str] = None,
- ) -> None:
- """Check if the shell is asking for tab completion, process
- that, then exit early. Called from :meth:`main` before the
- program is invoked.
-
- :param prog_name: Name of the executable in the shell.
- :param complete_var: Name of the environment variable that holds
- the completion instruction. Defaults to
- ``_{PROG_NAME}_COMPLETE``.
-
- .. versionchanged:: 8.2.0
- Dots (``.``) in ``prog_name`` are replaced with underscores (``_``).
- """
- if complete_var is None:
- complete_name = prog_name.replace("-", "_").replace(".", "_")
- complete_var = f"_{complete_name}_COMPLETE".upper()
-
- instruction = os.environ.get(complete_var)
-
- if not instruction:
- return
-
- from .shell_completion import shell_complete
-
- rv = shell_complete(self, ctx_args, prog_name, complete_var, instruction)
- sys.exit(rv)
-
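As a quick sketch of the variable name derived above (assuming a program called ``my-tool.cli``):

prog_name = "my-tool.cli"
complete_name = prog_name.replace("-", "_").replace(".", "_")
complete_var = f"_{complete_name}_COMPLETE".upper()
print(complete_var)  # _MY_TOOL_CLI_COMPLETE
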
- def __call__(self, *args: t.Any, **kwargs: t.Any) -> t.Any:
- """Alias for :meth:`main`."""
- return self.main(*args, **kwargs)
-
-
-class Command(BaseCommand):
- """Commands are the basic building block of command line interfaces in
- Click. A basic command handles command line parsing and might dispatch
- more parsing to commands nested below it.
-
- :param name: the name of the command to use unless a group overrides it.
- :param context_settings: an optional dictionary with defaults that are
- passed to the context object.
- :param callback: the callback to invoke. This is optional.
- :param params: the parameters to register with this command. This can
- be either :class:`Option` or :class:`Argument` objects.
- :param help: the help string to use for this command.
- :param epilog: like the help string but it's printed at the end of the
- help page after everything else.
- :param short_help: the short help to use for this command. This is
- shown on the command listing of the parent command.
- :param add_help_option: by default each command registers a ``--help``
- option. This can be disabled by this parameter.
-    :param no_args_is_help: this controls what happens if no arguments are
-                             provided. This option is disabled by default.
-                             If enabled, this will add ``--help`` as the
-                             argument if no arguments are passed.
- :param hidden: hide this command from help outputs.
-
- :param deprecated: issues a message indicating that
- the command is deprecated.
-
- .. versionchanged:: 8.1
- ``help``, ``epilog``, and ``short_help`` are stored unprocessed,
- all formatting is done when outputting help text, not at init,
- and is done even if not using the ``@command`` decorator.
-
- .. versionchanged:: 8.0
- Added a ``repr`` showing the command name.
-
- .. versionchanged:: 7.1
- Added the ``no_args_is_help`` parameter.
-
- .. versionchanged:: 2.0
- Added the ``context_settings`` parameter.
- """
-
- def __init__(
- self,
- name: t.Optional[str],
- context_settings: t.Optional[t.MutableMapping[str, t.Any]] = None,
- callback: t.Optional[t.Callable[..., t.Any]] = None,
- params: t.Optional[t.List["Parameter"]] = None,
- help: t.Optional[str] = None,
- epilog: t.Optional[str] = None,
- short_help: t.Optional[str] = None,
- options_metavar: t.Optional[str] = "[OPTIONS]",
- add_help_option: bool = True,
- no_args_is_help: bool = False,
- hidden: bool = False,
- deprecated: bool = False,
- ) -> None:
- super().__init__(name, context_settings)
- #: the callback to execute when the command fires. This might be
- #: `None` in which case nothing happens.
- self.callback = callback
- #: the list of parameters for this command in the order they
- #: should show up in the help page and execute. Eager parameters
- #: will automatically be handled before non eager ones.
- self.params: t.List["Parameter"] = params or []
- self.help = help
- self.epilog = epilog
- self.options_metavar = options_metavar
- self.short_help = short_help
- self.add_help_option = add_help_option
- self.no_args_is_help = no_args_is_help
- self.hidden = hidden
- self.deprecated = deprecated
-
- def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]:
- info_dict = super().to_info_dict(ctx)
- info_dict.update(
- params=[param.to_info_dict() for param in self.get_params(ctx)],
- help=self.help,
- epilog=self.epilog,
- short_help=self.short_help,
- hidden=self.hidden,
- deprecated=self.deprecated,
- )
- return info_dict
-
- def get_usage(self, ctx: Context) -> str:
- """Formats the usage line into a string and returns it.
-
- Calls :meth:`format_usage` internally.
- """
- formatter = ctx.make_formatter()
- self.format_usage(ctx, formatter)
- return formatter.getvalue().rstrip("\n")
-
- def get_params(self, ctx: Context) -> t.List["Parameter"]:
- rv = self.params
- help_option = self.get_help_option(ctx)
-
- if help_option is not None:
- rv = [*rv, help_option]
-
- return rv
-
- def format_usage(self, ctx: Context, formatter: HelpFormatter) -> None:
- """Writes the usage line into the formatter.
-
- This is a low-level method called by :meth:`get_usage`.
- """
- pieces = self.collect_usage_pieces(ctx)
- formatter.write_usage(ctx.command_path, " ".join(pieces))
-
- def collect_usage_pieces(self, ctx: Context) -> t.List[str]:
- """Returns all the pieces that go into the usage line and returns
- it as a list of strings.
- """
- rv = [self.options_metavar] if self.options_metavar else []
-
- for param in self.get_params(ctx):
- rv.extend(param.get_usage_pieces(ctx))
-
- return rv
-
- def get_help_option_names(self, ctx: Context) -> t.List[str]:
- """Returns the names for the help option."""
- all_names = set(ctx.help_option_names)
- for param in self.params:
- all_names.difference_update(param.opts)
- all_names.difference_update(param.secondary_opts)
- return list(all_names)
-
- def get_help_option(self, ctx: Context) -> t.Optional["Option"]:
- """Returns the help option object."""
- help_options = self.get_help_option_names(ctx)
-
- if not help_options or not self.add_help_option:
- return None
-
- def show_help(ctx: Context, param: "Parameter", value: str) -> None:
- if value and not ctx.resilient_parsing:
- echo(ctx.get_help(), color=ctx.color)
- ctx.exit()
-
- return Option(
- help_options,
- is_flag=True,
- is_eager=True,
- expose_value=False,
- callback=show_help,
- help=_("Show this message and exit."),
- )
-
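A small sketch of how the help option names consulted above can be customized through the context settings; the ``-h`` alias here is an assumption for illustration:

import click

@click.command(context_settings={"help_option_names": ["-h", "--help"]})
def tool():
    """A command whose automatic help option also answers to -h."""
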
- def make_parser(self, ctx: Context) -> OptionParser:
- """Creates the underlying option parser for this command."""
- parser = OptionParser(ctx)
- for param in self.get_params(ctx):
- param.add_to_parser(parser, ctx)
- return parser
-
- def get_help(self, ctx: Context) -> str:
- """Formats the help into a string and returns it.
-
- Calls :meth:`format_help` internally.
- """
- formatter = ctx.make_formatter()
- self.format_help(ctx, formatter)
- return formatter.getvalue().rstrip("\n")
-
- def get_short_help_str(self, limit: int = 45) -> str:
- """Gets short help for the command or makes it by shortening the
- long help string.
- """
- if self.short_help:
- text = inspect.cleandoc(self.short_help)
- elif self.help:
- text = make_default_short_help(self.help, limit)
- else:
- text = ""
-
- if self.deprecated:
- text = _("(Deprecated) {text}").format(text=text)
-
- return text.strip()
-
- def format_help(self, ctx: Context, formatter: HelpFormatter) -> None:
- """Writes the help into the formatter if it exists.
-
- This is a low-level method called by :meth:`get_help`.
-
- This calls the following methods:
-
- - :meth:`format_usage`
- - :meth:`format_help_text`
- - :meth:`format_options`
- - :meth:`format_epilog`
- """
- self.format_usage(ctx, formatter)
- self.format_help_text(ctx, formatter)
- self.format_options(ctx, formatter)
- self.format_epilog(ctx, formatter)
-
- def format_help_text(self, ctx: Context, formatter: HelpFormatter) -> None:
- """Writes the help text to the formatter if it exists."""
- if self.help is not None:
- # truncate the help text to the first form feed
- text = inspect.cleandoc(self.help).partition("\f")[0]
- else:
- text = ""
-
- if self.deprecated:
- text = _("(Deprecated) {text}").format(text=text)
-
- if text:
- formatter.write_paragraph()
-
- with formatter.indentation():
- formatter.write_text(text)
-
- def format_options(self, ctx: Context, formatter: HelpFormatter) -> None:
- """Writes all the options into the formatter if they exist."""
- opts = []
- for param in self.get_params(ctx):
- rv = param.get_help_record(ctx)
- if rv is not None:
- opts.append(rv)
-
- if opts:
- with formatter.section(_("Options")):
- formatter.write_dl(opts)
-
- def format_epilog(self, ctx: Context, formatter: HelpFormatter) -> None:
- """Writes the epilog into the formatter if it exists."""
- if self.epilog:
- epilog = inspect.cleandoc(self.epilog)
- formatter.write_paragraph()
-
- with formatter.indentation():
- formatter.write_text(epilog)
-
- def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]:
- if not args and self.no_args_is_help and not ctx.resilient_parsing:
- echo(ctx.get_help(), color=ctx.color)
- ctx.exit()
-
- parser = self.make_parser(ctx)
- opts, args, param_order = parser.parse_args(args=args)
-
- for param in iter_params_for_processing(param_order, self.get_params(ctx)):
- value, args = param.handle_parse_result(ctx, opts, args)
-
- if args and not ctx.allow_extra_args and not ctx.resilient_parsing:
- ctx.fail(
- ngettext(
- "Got unexpected extra argument ({args})",
- "Got unexpected extra arguments ({args})",
- len(args),
- ).format(args=" ".join(map(str, args)))
- )
-
- ctx.args = args
- ctx._opt_prefixes.update(parser._opt_prefixes)
- return args
-
- def invoke(self, ctx: Context) -> t.Any:
- """Given a context, this invokes the attached callback (if it exists)
- in the right way.
- """
- if self.deprecated:
- message = _(
- "DeprecationWarning: The command {name!r} is deprecated."
- ).format(name=self.name)
- echo(style(message, fg="red"), err=True)
-
- if self.callback is not None:
- return ctx.invoke(self.callback, **ctx.params)
-
- def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]:
- """Return a list of completions for the incomplete value. Looks
- at the names of options and chained multi-commands.
-
- :param ctx: Invocation context for this command.
- :param incomplete: Value being completed. May be empty.
-
- .. versionadded:: 8.0
- """
- from click.shell_completion import CompletionItem
-
- results: t.List["CompletionItem"] = []
-
- if incomplete and not incomplete[0].isalnum():
- for param in self.get_params(ctx):
- if (
- not isinstance(param, Option)
- or param.hidden
- or (
- not param.multiple
- and ctx.get_parameter_source(param.name) # type: ignore
- is ParameterSource.COMMANDLINE
- )
- ):
- continue
-
- results.extend(
- CompletionItem(name, help=param.help)
- for name in [*param.opts, *param.secondary_opts]
- if name.startswith(incomplete)
- )
-
- results.extend(super().shell_complete(ctx, incomplete))
- return results
-
-
-class MultiCommand(Command):
- """A multi command is the basic implementation of a command that
- dispatches to subcommands. The most common version is the
- :class:`Group`.
-
- :param invoke_without_command: this controls how the multi command itself
- is invoked. By default it's only invoked
- if a subcommand is provided.
-    :param no_args_is_help: this controls what happens if no arguments are
-                            provided. This is enabled by default unless
-                            `invoke_without_command` is enabled. If enabled,
-                            ``--help`` is added as the argument when no
-                            arguments are passed.
- :param subcommand_metavar: the string that is used in the documentation
- to indicate the subcommand place.
- :param chain: if this is set to `True` chaining of multiple subcommands
- is enabled. This restricts the form of commands in that
- they cannot have optional arguments but it allows
- multiple commands to be chained together.
- :param result_callback: The result callback to attach to this multi
- command. This can be set or changed later with the
- :meth:`result_callback` decorator.
- :param attrs: Other command arguments described in :class:`Command`.
- """
-
- allow_extra_args = True
- allow_interspersed_args = False
-
- def __init__(
- self,
- name: t.Optional[str] = None,
- invoke_without_command: bool = False,
- no_args_is_help: t.Optional[bool] = None,
- subcommand_metavar: t.Optional[str] = None,
- chain: bool = False,
- result_callback: t.Optional[t.Callable[..., t.Any]] = None,
- **attrs: t.Any,
- ) -> None:
- super().__init__(name, **attrs)
-
- if no_args_is_help is None:
- no_args_is_help = not invoke_without_command
-
- self.no_args_is_help = no_args_is_help
- self.invoke_without_command = invoke_without_command
-
- if subcommand_metavar is None:
- if chain:
- subcommand_metavar = "COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]..."
- else:
- subcommand_metavar = "COMMAND [ARGS]..."
-
- self.subcommand_metavar = subcommand_metavar
- self.chain = chain
- # The result callback that is stored. This can be set or
- # overridden with the :func:`result_callback` decorator.
- self._result_callback = result_callback
-
- if self.chain:
- for param in self.params:
- if isinstance(param, Argument) and not param.required:
- raise RuntimeError(
- "Multi commands in chain mode cannot have"
- " optional arguments."
- )
-
- def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]:
- info_dict = super().to_info_dict(ctx)
- commands = {}
-
- for name in self.list_commands(ctx):
- command = self.get_command(ctx, name)
-
- if command is None:
- continue
-
- sub_ctx = ctx._make_sub_context(command)
-
- with sub_ctx.scope(cleanup=False):
- commands[name] = command.to_info_dict(sub_ctx)
-
- info_dict.update(commands=commands, chain=self.chain)
- return info_dict
-
- def collect_usage_pieces(self, ctx: Context) -> t.List[str]:
- rv = super().collect_usage_pieces(ctx)
- rv.append(self.subcommand_metavar)
- return rv
-
- def format_options(self, ctx: Context, formatter: HelpFormatter) -> None:
- super().format_options(ctx, formatter)
- self.format_commands(ctx, formatter)
-
- def result_callback(self, replace: bool = False) -> t.Callable[[F], F]:
- """Adds a result callback to the command. By default if a
- result callback is already registered this will chain them but
- this can be disabled with the `replace` parameter. The result
- callback is invoked with the return value of the subcommand
- (or the list of return values from all subcommands if chaining
- is enabled) as well as the parameters as they would be passed
- to the main callback.
-
- Example::
-
- @click.group()
- @click.option('-i', '--input', default=23)
- def cli(input):
- return 42
-
- @cli.result_callback()
- def process_result(result, input):
- return result + input
-
- :param replace: if set to `True` an already existing result
- callback will be removed.
-
- .. versionchanged:: 8.0
- Renamed from ``resultcallback``.
-
- .. versionadded:: 3.0
- """
-
- def decorator(f: F) -> F:
- old_callback = self._result_callback
-
- if old_callback is None or replace:
- self._result_callback = f
- return f
-
- def function(__value, *args, **kwargs): # type: ignore
- inner = old_callback(__value, *args, **kwargs)
- return f(inner, *args, **kwargs)
-
- self._result_callback = rv = update_wrapper(t.cast(F, function), f)
- return rv
-
- return decorator
-
- def format_commands(self, ctx: Context, formatter: HelpFormatter) -> None:
- """Extra format methods for multi methods that adds all the commands
- after the options.
- """
- commands = []
- for subcommand in self.list_commands(ctx):
- cmd = self.get_command(ctx, subcommand)
- # What is this, the tool lied about a command. Ignore it
- if cmd is None:
- continue
- if cmd.hidden:
- continue
-
- commands.append((subcommand, cmd))
-
- # allow for 3 times the default spacing
- if len(commands):
- limit = formatter.width - 6 - max(len(cmd[0]) for cmd in commands)
-
- rows = []
- for subcommand, cmd in commands:
- help = cmd.get_short_help_str(limit)
- rows.append((subcommand, help))
-
- if rows:
- with formatter.section(_("Commands")):
- formatter.write_dl(rows)
-
- def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]:
- if not args and self.no_args_is_help and not ctx.resilient_parsing:
- echo(ctx.get_help(), color=ctx.color)
- ctx.exit()
-
- rest = super().parse_args(ctx, args)
-
- if self.chain:
- ctx.protected_args = rest
- ctx.args = []
- elif rest:
- ctx.protected_args, ctx.args = rest[:1], rest[1:]
-
- return ctx.args
-
- def invoke(self, ctx: Context) -> t.Any:
- def _process_result(value: t.Any) -> t.Any:
- if self._result_callback is not None:
- value = ctx.invoke(self._result_callback, value, **ctx.params)
- return value
-
- if not ctx.protected_args:
- if self.invoke_without_command:
- # No subcommand was invoked, so the result callback is
- # invoked with the group return value for regular
- # groups, or an empty list for chained groups.
- with ctx:
- rv = super().invoke(ctx)
- return _process_result([] if self.chain else rv)
- ctx.fail(_("Missing command."))
-
- # Fetch args back out
- args = [*ctx.protected_args, *ctx.args]
- ctx.args = []
- ctx.protected_args = []
-
- # If we're not in chain mode, we only allow the invocation of a
- # single command but we also inform the current context about the
- # name of the command to invoke.
- if not self.chain:
- # Make sure the context is entered so we do not clean up
- # resources until the result processor has worked.
- with ctx:
- cmd_name, cmd, args = self.resolve_command(ctx, args)
- assert cmd is not None
- ctx.invoked_subcommand = cmd_name
- super().invoke(ctx)
- sub_ctx = cmd.make_context(cmd_name, args, parent=ctx)
- with sub_ctx:
- return _process_result(sub_ctx.command.invoke(sub_ctx))
-
- # In chain mode we create the contexts step by step, but after the
- # base command has been invoked. Because at that point we do not
- # know the subcommands yet, the invoked subcommand attribute is
- # set to ``*`` to inform the command that subcommands are executed
- # but nothing else.
- with ctx:
- ctx.invoked_subcommand = "*" if args else None
- super().invoke(ctx)
-
- # Otherwise we make every single context and invoke them in a
- # chain. In that case the return value to the result processor
- # is the list of all invoked subcommand's results.
- contexts = []
- while args:
- cmd_name, cmd, args = self.resolve_command(ctx, args)
- assert cmd is not None
- sub_ctx = cmd.make_context(
- cmd_name,
- args,
- parent=ctx,
- allow_extra_args=True,
- allow_interspersed_args=False,
- )
- contexts.append(sub_ctx)
- args, sub_ctx.args = sub_ctx.args, []
-
- rv = []
- for sub_ctx in contexts:
- with sub_ctx:
- rv.append(sub_ctx.command.invoke(sub_ctx))
- return _process_result(rv)
-
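A hedged sketch of chain mode as handled above: several subcommands run in one invocation and the result callback receives the list of their return values (the pipeline, commands, and callback names are assumptions):

import click

@click.group(chain=True)
def pipeline():
    pass

@pipeline.command()
def extract():
    return "extracted"

@pipeline.command()
def load():
    return "loaded"

@pipeline.result_callback()
def report(results):
    # Invoked as `pipeline extract load`, results == ["extracted", "loaded"].
    click.echo(f"ran: {results}")
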
- def resolve_command(
- self, ctx: Context, args: t.List[str]
- ) -> t.Tuple[t.Optional[str], t.Optional[Command], t.List[str]]:
- cmd_name = make_str(args[0])
- original_cmd_name = cmd_name
-
- # Get the command
- cmd = self.get_command(ctx, cmd_name)
-
- # If we can't find the command but there is a normalization
- # function available, we try with that one.
- if cmd is None and ctx.token_normalize_func is not None:
- cmd_name = ctx.token_normalize_func(cmd_name)
- cmd = self.get_command(ctx, cmd_name)
-
- # If we don't find the command we want to show an error message
- # to the user that it was not provided. However, there is
- # something else we should do: if the first argument looks like
- # an option we want to kick off parsing again for arguments to
- # resolve things like --help which now should go to the main
- # place.
- if cmd is None and not ctx.resilient_parsing:
- if split_opt(cmd_name)[0]:
- self.parse_args(ctx, ctx.args)
- ctx.fail(_("No such command {name!r}.").format(name=original_cmd_name))
- return cmd_name if cmd else None, cmd, args[1:]
-
- def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]:
- """Given a context and a command name, this returns a
- :class:`Command` object if it exists or returns `None`.
- """
- raise NotImplementedError
-
- def list_commands(self, ctx: Context) -> t.List[str]:
- """Returns a list of subcommand names in the order they should
- appear.
- """
- return []
-
- def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]:
- """Return a list of completions for the incomplete value. Looks
- at the names of options, subcommands, and chained
- multi-commands.
-
- :param ctx: Invocation context for this command.
- :param incomplete: Value being completed. May be empty.
-
- .. versionadded:: 8.0
- """
- from click.shell_completion import CompletionItem
-
- results = [
- CompletionItem(name, help=command.get_short_help_str())
- for name, command in _complete_visible_commands(ctx, incomplete)
- ]
- results.extend(super().shell_complete(ctx, incomplete))
- return results
-
-
-class Group(MultiCommand):
- """A group allows a command to have subcommands attached. This is
- the most common way to implement nesting in Click.
-
- :param name: The name of the group command.
- :param commands: A dict mapping names to :class:`Command` objects.
- Can also be a list of :class:`Command`, which will use
- :attr:`Command.name` to create the dict.
- :param attrs: Other command arguments described in
- :class:`MultiCommand`, :class:`Command`, and
- :class:`BaseCommand`.
-
- .. versionchanged:: 8.0
- The ``commands`` argument can be a list of command objects.
- """
-
- #: If set, this is used by the group's :meth:`command` decorator
- #: as the default :class:`Command` class. This is useful to make all
- #: subcommands use a custom command class.
- #:
- #: .. versionadded:: 8.0
- command_class: t.Optional[t.Type[Command]] = None
-
- #: If set, this is used by the group's :meth:`group` decorator
- #: as the default :class:`Group` class. This is useful to make all
- #: subgroups use a custom group class.
- #:
- #: If set to the special value :class:`type` (literally
- #: ``group_class = type``), this group's class will be used as the
- #: default class. This makes a custom group class continue to make
- #: custom groups.
- #:
- #: .. versionadded:: 8.0
- group_class: t.Optional[t.Union[t.Type["Group"], t.Type[type]]] = None
- # Literal[type] isn't valid, so use Type[type]
-
- def __init__(
- self,
- name: t.Optional[str] = None,
- commands: t.Optional[
- t.Union[t.MutableMapping[str, Command], t.Sequence[Command]]
- ] = None,
- **attrs: t.Any,
- ) -> None:
- super().__init__(name, **attrs)
-
- if commands is None:
- commands = {}
- elif isinstance(commands, abc.Sequence):
- commands = {c.name: c for c in commands if c.name is not None}
-
- #: The registered subcommands by their exported names.
- self.commands: t.MutableMapping[str, Command] = commands
-
- def add_command(self, cmd: Command, name: t.Optional[str] = None) -> None:
- """Registers another :class:`Command` with this group. If the name
- is not provided, the name of the command is used.
- """
- name = name or cmd.name
- if name is None:
- raise TypeError("Command has no name.")
- _check_multicommand(self, name, cmd, register=True)
- self.commands[name] = cmd
-
- @t.overload
- def command(self, __func: t.Callable[..., t.Any]) -> Command:
- ...
-
- @t.overload
- def command(
- self, *args: t.Any, **kwargs: t.Any
- ) -> t.Callable[[t.Callable[..., t.Any]], Command]:
- ...
-
- def command(
- self, *args: t.Any, **kwargs: t.Any
- ) -> t.Union[t.Callable[[t.Callable[..., t.Any]], Command], Command]:
- """A shortcut decorator for declaring and attaching a command to
- the group. This takes the same arguments as :func:`command` and
- immediately registers the created command with this group by
- calling :meth:`add_command`.
-
- To customize the command class used, set the
- :attr:`command_class` attribute.
-
- .. versionchanged:: 8.1
- This decorator can be applied without parentheses.
-
- .. versionchanged:: 8.0
- Added the :attr:`command_class` attribute.
- """
- from .decorators import command
-
- func: t.Optional[t.Callable[..., t.Any]] = None
-
- if args and callable(args[0]):
- assert (
- len(args) == 1 and not kwargs
- ), "Use 'command(**kwargs)(callable)' to provide arguments."
- (func,) = args
- args = ()
-
- if self.command_class and kwargs.get("cls") is None:
- kwargs["cls"] = self.command_class
-
- def decorator(f: t.Callable[..., t.Any]) -> Command:
- cmd: Command = command(*args, **kwargs)(f)
- self.add_command(cmd)
- return cmd
-
- if func is not None:
- return decorator(func)
-
- return decorator
-
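Typical usage of the decorator above, registering a subcommand on a group; the group and command names are illustrative:

import click

@click.group()
def cli():
    pass

@cli.command()  # created via the command decorator, then registered with add_command()
def sync():
    click.echo("Syncing")
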
- @t.overload
- def group(self, __func: t.Callable[..., t.Any]) -> "Group":
- ...
-
- @t.overload
- def group(
- self, *args: t.Any, **kwargs: t.Any
- ) -> t.Callable[[t.Callable[..., t.Any]], "Group"]:
- ...
-
- def group(
- self, *args: t.Any, **kwargs: t.Any
- ) -> t.Union[t.Callable[[t.Callable[..., t.Any]], "Group"], "Group"]:
- """A shortcut decorator for declaring and attaching a group to
- the group. This takes the same arguments as :func:`group` and
- immediately registers the created group with this group by
- calling :meth:`add_command`.
-
- To customize the group class used, set the :attr:`group_class`
- attribute.
-
- .. versionchanged:: 8.1
- This decorator can be applied without parentheses.
-
- .. versionchanged:: 8.0
- Added the :attr:`group_class` attribute.
- """
- from .decorators import group
-
- func: t.Optional[t.Callable[..., t.Any]] = None
-
- if args and callable(args[0]):
- assert (
- len(args) == 1 and not kwargs
- ), "Use 'group(**kwargs)(callable)' to provide arguments."
- (func,) = args
- args = ()
-
- if self.group_class is not None and kwargs.get("cls") is None:
- if self.group_class is type:
- kwargs["cls"] = type(self)
- else:
- kwargs["cls"] = self.group_class
-
- def decorator(f: t.Callable[..., t.Any]) -> "Group":
- cmd: Group = group(*args, **kwargs)(f)
- self.add_command(cmd)
- return cmd
-
- if func is not None:
- return decorator(func)
-
- return decorator
-
- def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]:
- return self.commands.get(cmd_name)
-
- def list_commands(self, ctx: Context) -> t.List[str]:
- return sorted(self.commands)
-
-
-class CommandCollection(MultiCommand):
- """A command collection is a multi command that merges multiple multi
- commands together into one. This is a straightforward implementation
- that accepts a list of different multi commands as sources and
- provides all the commands for each of them.
-
- See :class:`MultiCommand` and :class:`Command` for the description of
- ``name`` and ``attrs``.
- """
-
- def __init__(
- self,
- name: t.Optional[str] = None,
- sources: t.Optional[t.List[MultiCommand]] = None,
- **attrs: t.Any,
- ) -> None:
- super().__init__(name, **attrs)
- #: The list of registered multi commands.
- self.sources: t.List[MultiCommand] = sources or []
-
- def add_source(self, multi_cmd: MultiCommand) -> None:
- """Adds a new multi command to the chain dispatcher."""
- self.sources.append(multi_cmd)
-
- def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]:
- for source in self.sources:
- rv = source.get_command(ctx, cmd_name)
-
- if rv is not None:
- if self.chain:
- _check_multicommand(self, cmd_name, rv)
-
- return rv
-
- return None
-
- def list_commands(self, ctx: Context) -> t.List[str]:
- rv: t.Set[str] = set()
-
- for source in self.sources:
- rv.update(source.list_commands(ctx))
-
- return sorted(rv)
-
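A short sketch of merging two groups into a single CLI with a command collection; the group and command names are assumptions for illustration:

import click

@click.group()
def tools():
    pass

@tools.command()
def lint():
    click.echo("lint")

@click.group()
def extras():
    pass

@extras.command()
def fmt():
    click.echo("fmt")

cli = click.CommandCollection(sources=[tools, extras])
# `cli` now exposes both `lint` and `fmt`.
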
-
-def _check_iter(value: t.Any) -> t.Iterator[t.Any]:
- """Check if the value is iterable but not a string. Raises a type
- error, or return an iterator over the value.
- """
- if isinstance(value, str):
- raise TypeError
-
- return iter(value)
-
-
-class Parameter:
- r"""A parameter to a command comes in two versions: they are either
- :class:`Option`\s or :class:`Argument`\s. Other subclasses are currently
- not supported by design as some of the internals for parsing are
- intentionally not finalized.
-
- Some settings are supported by both options and arguments.
-
- :param param_decls: the parameter declarations for this option or
- argument. This is a list of flags or argument
- names.
- :param type: the type that should be used. Either a :class:`ParamType`
- or a Python type. The latter is converted into the former
- automatically if supported.
- :param required: controls if this is optional or not.
- :param default: the default value if omitted. This can also be a callable,
- in which case it's invoked when the default is needed
- without any arguments.
- :param callback: A function to further process or validate the value
- after type conversion. It is called as ``f(ctx, param, value)``
- and must return the value. It is called for all sources,
- including prompts.
- :param nargs: the number of arguments to match. If not ``1`` the return
- value is a tuple instead of single value. The default for
- nargs is ``1`` (except if the type is a tuple, then it's
- the arity of the tuple). If ``nargs=-1``, all remaining
- parameters are collected.
- :param metavar: how the value is represented in the help page.
- :param expose_value: if this is `True` then the value is passed onwards
- to the command callback and stored on the context,
- otherwise it's skipped.
-    :param is_eager: eager values are processed before non-eager ones. This
-                     should not be set for arguments or it will invert the
-                     order of processing.
- :param envvar: a string or list of strings that are environment variables
- that should be checked.
- :param shell_complete: A function that returns custom shell
- completions. Used instead of the param's type completion if
- given. Takes ``ctx, param, incomplete`` and must return a list
- of :class:`~click.shell_completion.CompletionItem` or a list of
- strings.
-
- .. versionchanged:: 8.0
- ``process_value`` validates required parameters and bounded
- ``nargs``, and invokes the parameter callback before returning
- the value. This allows the callback to validate prompts.
- ``full_process_value`` is removed.
-
- .. versionchanged:: 8.0
- ``autocompletion`` is renamed to ``shell_complete`` and has new
- semantics described above. The old name is deprecated and will
- be removed in 8.1, until then it will be wrapped to match the
- new requirements.
-
- .. versionchanged:: 8.0
- For ``multiple=True, nargs>1``, the default must be a list of
- tuples.
-
- .. versionchanged:: 8.0
- Setting a default is no longer required for ``nargs>1``, it will
- default to ``None``. ``multiple=True`` or ``nargs=-1`` will
- default to ``()``.
-
- .. versionchanged:: 7.1
- Empty environment variables are ignored rather than taking the
- empty string value. This makes it possible for scripts to clear
- variables if they can't unset them.
-
- .. versionchanged:: 2.0
- Changed signature for parameter callback to also be passed the
- parameter. The old callback format will still work, but it will
- raise a warning to give you a chance to migrate the code easier.
- """
-
- param_type_name = "parameter"
-
- def __init__(
- self,
- param_decls: t.Optional[t.Sequence[str]] = None,
- type: t.Optional[t.Union[types.ParamType, t.Any]] = None,
- required: bool = False,
- default: t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]] = None,
- callback: t.Optional[t.Callable[[Context, "Parameter", t.Any], t.Any]] = None,
- nargs: t.Optional[int] = None,
- multiple: bool = False,
- metavar: t.Optional[str] = None,
- expose_value: bool = True,
- is_eager: bool = False,
- envvar: t.Optional[t.Union[str, t.Sequence[str]]] = None,
- shell_complete: t.Optional[
- t.Callable[
- [Context, "Parameter", str],
- t.Union[t.List["CompletionItem"], t.List[str]],
- ]
- ] = None,
- ) -> None:
- self.name: t.Optional[str]
- self.opts: t.List[str]
- self.secondary_opts: t.List[str]
- self.name, self.opts, self.secondary_opts = self._parse_decls(
- param_decls or (), expose_value
- )
- self.type: types.ParamType = types.convert_type(type, default)
-
- # Default nargs to what the type tells us if we have that
- # information available.
- if nargs is None:
- if self.type.is_composite:
- nargs = self.type.arity
- else:
- nargs = 1
-
- self.required = required
- self.callback = callback
- self.nargs = nargs
- self.multiple = multiple
- self.expose_value = expose_value
- self.default = default
- self.is_eager = is_eager
- self.metavar = metavar
- self.envvar = envvar
- self._custom_shell_complete = shell_complete
-
- if __debug__:
- if self.type.is_composite and nargs != self.type.arity:
- raise ValueError(
- f"'nargs' must be {self.type.arity} (or None) for"
- f" type {self.type!r}, but it was {nargs}."
- )
-
- # Skip no default or callable default.
- check_default = default if not callable(default) else None
-
- if check_default is not None:
- if multiple:
- try:
- # Only check the first value against nargs.
- check_default = next(_check_iter(check_default), None)
- except TypeError:
- raise ValueError(
- "'default' must be a list when 'multiple' is true."
- ) from None
-
- # Can be None for multiple with empty default.
- if nargs != 1 and check_default is not None:
- try:
- _check_iter(check_default)
- except TypeError:
- if multiple:
- message = (
- "'default' must be a list of lists when 'multiple' is"
- " true and 'nargs' != 1."
- )
- else:
- message = "'default' must be a list when 'nargs' != 1."
-
- raise ValueError(message) from None
-
- if nargs > 1 and len(check_default) != nargs:
- subject = "item length" if multiple else "length"
- raise ValueError(
- f"'default' {subject} must match nargs={nargs}."
- )
-
- def to_info_dict(self) -> t.Dict[str, t.Any]:
- """Gather information that could be useful for a tool generating
- user-facing documentation.
-
- Use :meth:`click.Context.to_info_dict` to traverse the entire
- CLI structure.
-
- .. versionadded:: 8.0
- """
- return {
- "name": self.name,
- "param_type_name": self.param_type_name,
- "opts": self.opts,
- "secondary_opts": self.secondary_opts,
- "type": self.type.to_info_dict(),
- "required": self.required,
- "nargs": self.nargs,
- "multiple": self.multiple,
- "default": self.default,
- "envvar": self.envvar,
- }
-
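For illustration, a minimal sketch of gathering this information for documentation tooling, traversing the structure from the context (names are assumptions):

import click

@click.command()
@click.option("--n", default=3)
def cli(n):
    pass

with click.Context(cli) as ctx:
    info = ctx.to_info_dict()
    # info["command"]["params"] contains each parameter's to_info_dict() output.
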
- def __repr__(self) -> str:
- return f"<{self.__class__.__name__} {self.name}>"
-
- def _parse_decls(
- self, decls: t.Sequence[str], expose_value: bool
- ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]:
- raise NotImplementedError()
-
- @property
- def human_readable_name(self) -> str:
- """Returns the human readable name of this parameter. This is the
- same as the name for options, but the metavar for arguments.
- """
- return self.name # type: ignore
-
- def make_metavar(self) -> str:
- if self.metavar is not None:
- return self.metavar
-
- metavar = self.type.get_metavar(self)
-
- if metavar is None:
- metavar = self.type.name.upper()
-
- if self.nargs != 1:
- metavar += "..."
-
- return metavar
-
- @t.overload
- def get_default(
- self, ctx: Context, call: "te.Literal[True]" = True
- ) -> t.Optional[t.Any]:
- ...
-
- @t.overload
- def get_default(
- self, ctx: Context, call: bool = ...
- ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]:
- ...
-
- def get_default(
- self, ctx: Context, call: bool = True
- ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]:
- """Get the default for the parameter. Tries
- :meth:`Context.lookup_default` first, then the local default.
-
- :param ctx: Current context.
- :param call: If the default is a callable, call it. Disable to
- return the callable instead.
-
- .. versionchanged:: 8.0.2
- Type casting is no longer performed when getting a default.
-
- .. versionchanged:: 8.0.1
- Type casting can fail in resilient parsing mode. Invalid
- defaults will not prevent showing help text.
-
- .. versionchanged:: 8.0
- Looks at ``ctx.default_map`` first.
-
- .. versionchanged:: 8.0
- Added the ``call`` parameter.
- """
- value = ctx.lookup_default(self.name, call=False) # type: ignore
-
- if value is None:
- value = self.default
-
- if call and callable(value):
- value = value()
-
- return value
-
- def add_to_parser(self, parser: OptionParser, ctx: Context) -> None:
- raise NotImplementedError()
-
- def consume_value(
- self, ctx: Context, opts: t.Mapping[str, t.Any]
- ) -> t.Tuple[t.Any, ParameterSource]:
- value = opts.get(self.name) # type: ignore
- source = ParameterSource.COMMANDLINE
-
- if value is None:
- value = self.value_from_envvar(ctx)
- source = ParameterSource.ENVIRONMENT
-
- if value is None:
- value = ctx.lookup_default(self.name) # type: ignore
- source = ParameterSource.DEFAULT_MAP
-
- if value is None:
- value = self.get_default(ctx)
- source = ParameterSource.DEFAULT
-
- return value, source
-
- def type_cast_value(self, ctx: Context, value: t.Any) -> t.Any:
- """Convert and validate a value against the option's
- :attr:`type`, :attr:`multiple`, and :attr:`nargs`.
- """
- if value is None:
- return () if self.multiple or self.nargs == -1 else None
-
- def check_iter(value: t.Any) -> t.Iterator[t.Any]:
- try:
- return _check_iter(value)
- except TypeError:
- # This should only happen when passing in args manually,
- # the parser should construct an iterable when parsing
- # the command line.
- raise BadParameter(
- _("Value must be an iterable."), ctx=ctx, param=self
- ) from None
-
- if self.nargs == 1 or self.type.is_composite:
-
- def convert(value: t.Any) -> t.Any:
- return self.type(value, param=self, ctx=ctx)
-
- elif self.nargs == -1:
-
- def convert(value: t.Any) -> t.Any: # t.Tuple[t.Any, ...]
- return tuple(self.type(x, self, ctx) for x in check_iter(value))
-
- else: # nargs > 1
-
- def convert(value: t.Any) -> t.Any: # t.Tuple[t.Any, ...]
- value = tuple(check_iter(value))
-
- if len(value) != self.nargs:
- raise BadParameter(
- ngettext(
- "Takes {nargs} values but 1 was given.",
- "Takes {nargs} values but {len} were given.",
- len(value),
- ).format(nargs=self.nargs, len=len(value)),
- ctx=ctx,
- param=self,
- )
-
- return tuple(self.type(x, self, ctx) for x in value)
-
- if self.multiple:
- return tuple(convert(x) for x in check_iter(value))
-
- return convert(value)
-
- def value_is_missing(self, value: t.Any) -> bool:
- if value is None:
- return True
-
- if (self.nargs != 1 or self.multiple) and value == ():
- return True
-
- return False
-
- def process_value(self, ctx: Context, value: t.Any) -> t.Any:
- value = self.type_cast_value(ctx, value)
-
- if self.required and self.value_is_missing(value):
- raise MissingParameter(ctx=ctx, param=self)
-
- if self.callback is not None:
- value = self.callback(ctx, self, value)
-
- return value
-
- def resolve_envvar_value(self, ctx: Context) -> t.Optional[str]:
- if self.envvar is None:
- return None
-
- if isinstance(self.envvar, str):
- rv = os.environ.get(self.envvar)
-
- if rv:
- return rv
- else:
- for envvar in self.envvar:
- rv = os.environ.get(envvar)
-
- if rv:
- return rv
-
- return None
-
- def value_from_envvar(self, ctx: Context) -> t.Optional[t.Any]:
- rv: t.Optional[t.Any] = self.resolve_envvar_value(ctx)
-
- if rv is not None and self.nargs != 1:
- rv = self.type.split_envvar_value(rv)
-
- return rv
-
- def handle_parse_result(
- self, ctx: Context, opts: t.Mapping[str, t.Any], args: t.List[str]
- ) -> t.Tuple[t.Any, t.List[str]]:
- with augment_usage_errors(ctx, param=self):
- value, source = self.consume_value(ctx, opts)
- ctx.set_parameter_source(self.name, source) # type: ignore
-
- try:
- value = self.process_value(ctx, value)
- except Exception:
- if not ctx.resilient_parsing:
- raise
-
- value = None
-
- if self.expose_value:
- ctx.params[self.name] = value # type: ignore
-
- return value, args
-
- def get_help_record(self, ctx: Context) -> t.Optional[t.Tuple[str, str]]:
- pass
-
- def get_usage_pieces(self, ctx: Context) -> t.List[str]:
- return []
-
- def get_error_hint(self, ctx: Context) -> str:
- """Get a stringified version of the param for use in error messages to
- indicate which param caused the error.
- """
- hint_list = self.opts or [self.human_readable_name]
- return " / ".join(f"'{x}'" for x in hint_list)
-
- def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]:
- """Return a list of completions for the incomplete value. If a
- ``shell_complete`` function was given during init, it is used.
- Otherwise, the :attr:`type`
- :meth:`~click.types.ParamType.shell_complete` function is used.
-
- :param ctx: Invocation context for this command.
- :param incomplete: Value being completed. May be empty.
-
- .. versionadded:: 8.0
- """
- if self._custom_shell_complete is not None:
- results = self._custom_shell_complete(ctx, self, incomplete)
-
- if results and isinstance(results[0], str):
- from click.shell_completion import CompletionItem
-
- results = [CompletionItem(c) for c in results]
-
- return t.cast(t.List["CompletionItem"], results)
-
- return self.type.shell_complete(ctx, self, incomplete)
-
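A hedged sketch of the custom completion path described above; the option, candidate values, and callback name are assumptions:

import click
from click.shell_completion import CompletionItem

def complete_env(ctx, param, incomplete):
    # Returning CompletionItem objects; plain strings also work and are wrapped.
    return [CompletionItem(e) for e in ("dev", "staging", "prod") if e.startswith(incomplete)]

@click.command()
@click.option("--env", shell_complete=complete_env)
def deploy(env):
    click.echo(env)
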
-
-class Option(Parameter):
- """Options are usually optional values on the command line and
- have some extra features that arguments don't have.
-
- All other parameters are passed onwards to the parameter constructor.
-
- :param show_default: Show the default value for this option in its
- help text. Values are not shown by default, unless
- :attr:`Context.show_default` is ``True``. If this value is a
- string, it shows that string in parentheses instead of the
- actual value. This is particularly useful for dynamic options.
- For single option boolean flags, the default remains hidden if
- its value is ``False``.
- :param show_envvar: Controls if an environment variable should be
- shown on the help page. Normally, environment variables are not
- shown.
- :param prompt: If set to ``True`` or a non empty string then the
- user will be prompted for input. If set to ``True`` the prompt
- will be the option name capitalized.
- :param confirmation_prompt: Prompt a second time to confirm the
- value if it was prompted for. Can be set to a string instead of
- ``True`` to customize the message.
- :param prompt_required: If set to ``False``, the user will be
- prompted for input only when the option was specified as a flag
- without a value.
- :param hide_input: If this is ``True`` then the input on the prompt
- will be hidden from the user. This is useful for password input.
- :param is_flag: forces this option to act as a flag. The default is
- auto detection.
- :param flag_value: which value should be used for this flag if it's
- enabled. This is set to a boolean automatically if
- the option string contains a slash to mark two options.
-    :param multiple: if this is set to `True` then the argument is accepted
-                     multiple times and recorded. This is similar to ``nargs``
-                     in how it works but supports an arbitrary number of
-                     arguments.
- :param count: this flag makes an option increment an integer.
- :param allow_from_autoenv: if this is enabled then the value of this
- parameter will be pulled from an environment
- variable in case a prefix is defined on the
- context.
- :param help: the help string.
- :param hidden: hide this option from help outputs.
- :param attrs: Other command arguments described in :class:`Parameter`.
-
- .. versionchanged:: 8.1.0
- Help text indentation is cleaned here instead of only in the
- ``@option`` decorator.
-
- .. versionchanged:: 8.1.0
- The ``show_default`` parameter overrides
- ``Context.show_default``.
-
- .. versionchanged:: 8.1.0
- The default of a single option boolean flag is not shown if the
- default value is ``False``.
-
- .. versionchanged:: 8.0.1
- ``type`` is detected from ``flag_value`` if given.
- """
-
- param_type_name = "option"
-
- def __init__(
- self,
- param_decls: t.Optional[t.Sequence[str]] = None,
- show_default: t.Union[bool, str, None] = None,
- prompt: t.Union[bool, str] = False,
- confirmation_prompt: t.Union[bool, str] = False,
- prompt_required: bool = True,
- hide_input: bool = False,
- is_flag: t.Optional[bool] = None,
- flag_value: t.Optional[t.Any] = None,
- multiple: bool = False,
- count: bool = False,
- allow_from_autoenv: bool = True,
- type: t.Optional[t.Union[types.ParamType, t.Any]] = None,
- help: t.Optional[str] = None,
- hidden: bool = False,
- show_choices: bool = True,
- show_envvar: bool = False,
- **attrs: t.Any,
- ) -> None:
- if help:
- help = inspect.cleandoc(help)
-
- default_is_missing = "default" not in attrs
- super().__init__(param_decls, type=type, multiple=multiple, **attrs)
-
- if prompt is True:
- if self.name is None:
- raise TypeError("'name' is required with 'prompt=True'.")
-
- prompt_text: t.Optional[str] = self.name.replace("_", " ").capitalize()
- elif prompt is False:
- prompt_text = None
- else:
- prompt_text = prompt
-
- self.prompt = prompt_text
- self.confirmation_prompt = confirmation_prompt
- self.prompt_required = prompt_required
- self.hide_input = hide_input
- self.hidden = hidden
-
- # If prompt is enabled but not required, then the option can be
- # used as a flag to indicate using prompt or flag_value.
- self._flag_needs_value = self.prompt is not None and not self.prompt_required
-
- if is_flag is None:
- if flag_value is not None:
- # Implicitly a flag because flag_value was set.
- is_flag = True
- elif self._flag_needs_value:
- # Not a flag, but when used as a flag it shows a prompt.
- is_flag = False
- else:
- # Implicitly a flag because flag options were given.
- is_flag = bool(self.secondary_opts)
- elif is_flag is False and not self._flag_needs_value:
- # Not a flag, and prompt is not enabled, can be used as a
- # flag if flag_value is set.
- self._flag_needs_value = flag_value is not None
-
- self.default: t.Union[t.Any, t.Callable[[], t.Any]]
-
- if is_flag and default_is_missing and not self.required:
- if multiple:
- self.default = ()
- else:
- self.default = False
-
- if flag_value is None:
- flag_value = not self.default
-
- self.type: types.ParamType
- if is_flag and type is None:
- # Re-guess the type from the flag value instead of the
- # default.
- self.type = types.convert_type(None, flag_value)
-
- self.is_flag: bool = is_flag
- self.is_bool_flag: bool = is_flag and isinstance(self.type, types.BoolParamType)
- self.flag_value: t.Any = flag_value
-
- # Counting
- self.count = count
- if count:
- if type is None:
- self.type = types.IntRange(min=0)
- if default_is_missing:
- self.default = 0
-
- self.allow_from_autoenv = allow_from_autoenv
- self.help = help
- self.show_default = show_default
- self.show_choices = show_choices
- self.show_envvar = show_envvar
-
- if __debug__:
- if self.nargs == -1:
- raise TypeError("nargs=-1 is not supported for options.")
-
- if self.prompt and self.is_flag and not self.is_bool_flag:
- raise TypeError("'prompt' is not valid for non-boolean flag.")
-
- if not self.is_bool_flag and self.secondary_opts:
- raise TypeError("Secondary flag is not valid for non-boolean flag.")
-
- if self.is_bool_flag and self.hide_input and self.prompt is not None:
- raise TypeError(
- "'prompt' with 'hide_input' is not valid for boolean flag."
- )
-
- if self.count:
- if self.multiple:
- raise TypeError("'count' is not valid with 'multiple'.")
-
- if self.is_flag:
- raise TypeError("'count' is not valid with 'is_flag'.")
-
- def to_info_dict(self) -> t.Dict[str, t.Any]:
- info_dict = super().to_info_dict()
- info_dict.update(
- help=self.help,
- prompt=self.prompt,
- is_flag=self.is_flag,
- flag_value=self.flag_value,
- count=self.count,
- hidden=self.hidden,
- )
- return info_dict
-
- def _parse_decls(
- self, decls: t.Sequence[str], expose_value: bool
- ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]:
- opts = []
- secondary_opts = []
- name = None
- possible_names = []
-
- for decl in decls:
- if decl.isidentifier():
- if name is not None:
- raise TypeError(f"Name '{name}' defined twice")
- name = decl
- else:
- split_char = ";" if decl[:1] == "/" else "/"
- if split_char in decl:
- first, second = decl.split(split_char, 1)
- first = first.rstrip()
- if first:
- possible_names.append(split_opt(first))
- opts.append(first)
- second = second.lstrip()
- if second:
- secondary_opts.append(second.lstrip())
- if first == second:
- raise ValueError(
- f"Boolean option {decl!r} cannot use the"
- " same flag for true/false."
- )
- else:
- possible_names.append(split_opt(decl))
- opts.append(decl)
-
- if name is None and possible_names:
- possible_names.sort(key=lambda x: -len(x[0])) # group long options first
- name = possible_names[0][1].replace("-", "_").lower()
- if not name.isidentifier():
- name = None
-
- if name is None:
- if not expose_value:
- return None, opts, secondary_opts
- raise TypeError("Could not determine name for option")
-
- if not opts and not secondary_opts:
- raise TypeError(
- f"No options defined but a name was passed ({name})."
- " Did you mean to declare an argument instead? Did"
- f" you mean to pass '--{name}'?"
- )
-
- return name, opts, secondary_opts
-
- def add_to_parser(self, parser: OptionParser, ctx: Context) -> None:
- if self.multiple:
- action = "append"
- elif self.count:
- action = "count"
- else:
- action = "store"
-
- if self.is_flag:
- action = f"{action}_const"
-
- if self.is_bool_flag and self.secondary_opts:
- parser.add_option(
- obj=self, opts=self.opts, dest=self.name, action=action, const=True
- )
- parser.add_option(
- obj=self,
- opts=self.secondary_opts,
- dest=self.name,
- action=action,
- const=False,
- )
- else:
- parser.add_option(
- obj=self,
- opts=self.opts,
- dest=self.name,
- action=action,
- const=self.flag_value,
- )
- else:
- parser.add_option(
- obj=self,
- opts=self.opts,
- dest=self.name,
- action=action,
- nargs=self.nargs,
- )
-
- def get_help_record(self, ctx: Context) -> t.Optional[t.Tuple[str, str]]:
- if self.hidden:
- return None
-
- any_prefix_is_slash = False
-
- def _write_opts(opts: t.Sequence[str]) -> str:
- nonlocal any_prefix_is_slash
-
- rv, any_slashes = join_options(opts)
-
- if any_slashes:
- any_prefix_is_slash = True
-
- if not self.is_flag and not self.count:
- rv += f" {self.make_metavar()}"
-
- return rv
-
- rv = [_write_opts(self.opts)]
-
- if self.secondary_opts:
- rv.append(_write_opts(self.secondary_opts))
-
- help = self.help or ""
- extra = []
-
- if self.show_envvar:
- envvar = self.envvar
-
- if envvar is None:
- if (
- self.allow_from_autoenv
- and ctx.auto_envvar_prefix is not None
- and self.name is not None
- ):
- envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}"
-
- if envvar is not None:
- var_str = (
- envvar
- if isinstance(envvar, str)
- else ", ".join(str(d) for d in envvar)
- )
- extra.append(_("env var: {var}").format(var=var_str))
-
- # Temporarily enable resilient parsing to avoid type casting
- # failing for the default. Might be possible to extend this to
- # help formatting in general.
- resilient = ctx.resilient_parsing
- ctx.resilient_parsing = True
-
- try:
- default_value = self.get_default(ctx, call=False)
- finally:
- ctx.resilient_parsing = resilient
-
- show_default = False
- show_default_is_str = False
-
- if self.show_default is not None:
- if isinstance(self.show_default, str):
- show_default_is_str = show_default = True
- else:
- show_default = self.show_default
- elif ctx.show_default is not None:
- show_default = ctx.show_default
-
- if show_default_is_str or (show_default and (default_value is not None)):
- if show_default_is_str:
- default_string = f"({self.show_default})"
- elif isinstance(default_value, (list, tuple)):
- default_string = ", ".join(str(d) for d in default_value)
- elif inspect.isfunction(default_value):
- default_string = _("(dynamic)")
- elif self.is_bool_flag and self.secondary_opts:
- # For boolean flags that have distinct True/False opts,
- # use the opt without prefix instead of the value.
- default_string = split_opt(
- (self.opts if self.default else self.secondary_opts)[0]
- )[1]
- elif self.is_bool_flag and not self.secondary_opts and not default_value:
- default_string = ""
- else:
- default_string = str(default_value)
-
- if default_string:
- extra.append(_("default: {default}").format(default=default_string))
-
- if (
- isinstance(self.type, types._NumberRangeBase)
- # skip count with default range type
- and not (self.count and self.type.min == 0 and self.type.max is None)
- ):
- range_str = self.type._describe_range()
-
- if range_str:
- extra.append(range_str)
-
- if self.required:
- extra.append(_("required"))
-
- if extra:
- extra_str = "; ".join(extra)
- help = f"{help} [{extra_str}]" if help else f"[{extra_str}]"
-
- return ("; " if any_prefix_is_slash else " / ").join(rv), help
-
- @t.overload
- def get_default(
- self, ctx: Context, call: "te.Literal[True]" = True
- ) -> t.Optional[t.Any]:
- ...
-
- @t.overload
- def get_default(
- self, ctx: Context, call: bool = ...
- ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]:
- ...
-
- def get_default(
- self, ctx: Context, call: bool = True
- ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]:
- # If we're a non boolean flag our default is more complex because
- # we need to look at all flags in the same group to figure out
- # if we're the default one in which case we return the flag
- # value as default.
- if self.is_flag and not self.is_bool_flag:
- for param in ctx.command.params:
- if param.name == self.name and param.default:
- return t.cast(Option, param).flag_value
-
- return None
-
- return super().get_default(ctx, call=call)
-
- def prompt_for_value(self, ctx: Context) -> t.Any:
- """This is an alternative flow that can be activated in the full
- value processing if a value does not exist. It will prompt the
- user until a valid value exists and then returns the processed
- value as result.
- """
- assert self.prompt is not None
-
- # Calculate the default before prompting anything to be stable.
- default = self.get_default(ctx)
-
- # If this is a prompt for a flag we need to handle this
- # differently.
- if self.is_bool_flag:
- return confirm(self.prompt, default)
-
- return prompt(
- self.prompt,
- default=default,
- type=self.type,
- hide_input=self.hide_input,
- show_choices=self.show_choices,
- confirmation_prompt=self.confirmation_prompt,
- value_proc=lambda x: self.process_value(ctx, x),
- )
-
- def resolve_envvar_value(self, ctx: Context) -> t.Optional[str]:
- rv = super().resolve_envvar_value(ctx)
-
- if rv is not None:
- return rv
-
- if (
- self.allow_from_autoenv
- and ctx.auto_envvar_prefix is not None
- and self.name is not None
- ):
- envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}"
- rv = os.environ.get(envvar)
-
- if rv:
- return rv
-
- return None
-
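An illustrative sketch of the auto-envvar fallback resolved above, assuming a prefix of ``MYAPP`` (command and option names are illustrative):

import click

@click.command(context_settings={"auto_envvar_prefix": "MYAPP"})
@click.option("--workers", default=1)
def serve(workers):
    click.echo(workers)

# Running with MYAPP_WORKERS=8 set in the environment prints 8.
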
- def value_from_envvar(self, ctx: Context) -> t.Optional[t.Any]:
- rv: t.Optional[t.Any] = self.resolve_envvar_value(ctx)
-
- if rv is None:
- return None
-
- value_depth = (self.nargs != 1) + bool(self.multiple)
-
- if value_depth > 0:
- rv = self.type.split_envvar_value(rv)
-
- if self.multiple and self.nargs != 1:
- rv = batch(rv, self.nargs)
-
- return rv
-
- def consume_value(
- self, ctx: Context, opts: t.Mapping[str, "Parameter"]
- ) -> t.Tuple[t.Any, ParameterSource]:
- value, source = super().consume_value(ctx, opts)
-
- # The parser will emit a sentinel value if the option can be
- # given as a flag without a value. This is different from None
- # to distinguish from the flag not being given at all.
- if value is _flag_needs_value:
- if self.prompt is not None and not ctx.resilient_parsing:
- value = self.prompt_for_value(ctx)
- source = ParameterSource.PROMPT
- else:
- value = self.flag_value
- source = ParameterSource.COMMANDLINE
-
- elif (
- self.multiple
- and value is not None
- and any(v is _flag_needs_value for v in value)
- ):
- value = [self.flag_value if v is _flag_needs_value else v for v in value]
- source = ParameterSource.COMMANDLINE
-
- # The value wasn't set, or used the param's default, prompt if
- # prompting is enabled.
- elif (
- source in {None, ParameterSource.DEFAULT}
- and self.prompt is not None
- and (self.required or self.prompt_required)
- and not ctx.resilient_parsing
- ):
- value = self.prompt_for_value(ctx)
- source = ParameterSource.PROMPT
-
- return value, source
-
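A brief sketch tying together a few of the option behaviors handled above (flag, prompt, environment variable); names are illustrative:

import click

@click.command()
@click.option("--verbose", is_flag=True, help="Enable verbose output.")
@click.option("--name", prompt=True, help="Prompted for when not supplied.")
@click.option("--token", envvar="MY_TOKEN", show_envvar=True)
def run(verbose, name, token):
    click.echo(f"name={name} verbose={verbose} token={token}")
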
-
-class Argument(Parameter):
- """Arguments are positional parameters to a command. They generally
- provide fewer features than options but can have infinite ``nargs``
- and are required by default.
-
- All parameters are passed onwards to the constructor of :class:`Parameter`.
- """
-
- param_type_name = "argument"
-
- def __init__(
- self,
- param_decls: t.Sequence[str],
- required: t.Optional[bool] = None,
- **attrs: t.Any,
- ) -> None:
- if required is None:
- if attrs.get("default") is not None:
- required = False
- else:
- required = attrs.get("nargs", 1) > 0
-
- if "multiple" in attrs:
- raise TypeError("__init__() got an unexpected keyword argument 'multiple'.")
-
- super().__init__(param_decls, required=required, **attrs)
-
- if __debug__:
- if self.default is not None and self.nargs == -1:
- raise TypeError("'default' is not supported for nargs=-1.")
-
- @property
- def human_readable_name(self) -> str:
- if self.metavar is not None:
- return self.metavar
- return self.name.upper() # type: ignore
-
- def make_metavar(self) -> str:
- if self.metavar is not None:
- return self.metavar
- var = self.type.get_metavar(self)
- if not var:
- var = self.name.upper() # type: ignore
- if not self.required:
- var = f"[{var}]"
- if self.nargs != 1:
- var += "..."
- return var
-
- def _parse_decls(
- self, decls: t.Sequence[str], expose_value: bool
- ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]:
- if not decls:
- if not expose_value:
- return None, [], []
- raise TypeError("Could not determine name for argument")
- if len(decls) == 1:
- name = arg = decls[0]
- name = name.replace("-", "_").lower()
- else:
- raise TypeError(
- "Arguments take exactly one parameter declaration, got"
- f" {len(decls)}."
- )
- return name, [arg], []
-
- def get_usage_pieces(self, ctx: Context) -> t.List[str]:
- return [self.make_metavar()]
-
- def get_error_hint(self, ctx: Context) -> str:
- return f"'{self.make_metavar()}'"
-
- def add_to_parser(self, parser: OptionParser, ctx: Context) -> None:
- parser.add_argument(dest=self.name, nargs=self.nargs, obj=self)
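The Option and Argument classes above implement prompting, flag values and environment-variable fallback. As a rough illustration of how those code paths are exercised from user code, here is a minimal click sketch; the command, option names and environment variable are invented for the example, not taken from the repository above:

```python
# Hypothetical click usage sketch: prompting, non-boolean flags sharing one
# destination, a variadic argument, and an environment-variable fallback.
import click

@click.command()
@click.argument("paths", nargs=-1)                    # variadic positional argument
@click.option("--user", prompt=True)                  # prompt_for_value() runs if omitted
@click.option("--fast", "mode", flag_value="fast")    # non-boolean flags share one dest;
@click.option("--safe", "mode", flag_value="safe", default=True)  # the default flag wins
@click.option("--token", envvar="DEMO_TOKEN")         # resolve_envvar_value() fallback
def run(paths, user, mode, token):
    click.echo(f"paths={paths!r} user={user!r} mode={mode!r} token={token!r}")

if __name__ == "__main__":
    run()
```

Running `DEMO_TOKEN=x python demo.py a.txt --fast` would pick the token up from the environment, set `mode` to `"fast"`, and prompt interactively for `--user`.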
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/subset/__main__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/subset/__main__.py
deleted file mode 100644
index decf9ee6e50a612c65a87ebeaa8be115f1d25242..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/subset/__main__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import sys
-from fontTools.subset import main
-
-
-if __name__ == "__main__":
- sys.exit(main())
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/t1Lib/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/t1Lib/__init__.py
deleted file mode 100644
index e98acb7c52e89a83b7750601c6d80cbd094637d7..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/t1Lib/__init__.py
+++ /dev/null
@@ -1,638 +0,0 @@
-"""fontTools.t1Lib.py -- Tools for PostScript Type 1 fonts (Python2 only)
-
-Functions for reading and writing raw Type 1 data:
-
-read(path)
- reads any Type 1 font file, returns the raw data and a type indicator:
- 'LWFN', 'PFB' or 'OTHER', depending on the format of the file pointed
- to by 'path'.
- Raises an error when the file does not contain valid Type 1 data.
-
-write(path, data, kind='OTHER', dohex=False)
- writes raw Type 1 data to the file pointed to by 'path'.
- 'kind' can be one of 'LWFN', 'PFB' or 'OTHER'; it defaults to 'OTHER'.
- 'dohex' is a flag which determines whether the eexec encrypted
- part should be written as hexadecimal or binary, but only if kind
- is 'OTHER'.
-"""
-import fontTools
-from fontTools.misc import eexec
-from fontTools.misc.macCreatorType import getMacCreatorAndType
-from fontTools.misc.textTools import bytechr, byteord, bytesjoin, tobytes
-from fontTools.misc.psOperators import (
- _type1_pre_eexec_order,
- _type1_fontinfo_order,
- _type1_post_eexec_order,
-)
-from fontTools.encodings.StandardEncoding import StandardEncoding
-import os
-import re
-
-__author__ = "jvr"
-__version__ = "1.0b3"
-DEBUG = 0
-
-
-try:
- try:
- from Carbon import Res
- except ImportError:
- import Res # MacPython < 2.2
-except ImportError:
- haveMacSupport = 0
-else:
- haveMacSupport = 1
-
-
-class T1Error(Exception):
- pass
-
-
-class T1Font(object):
-
- """Type 1 font class.
-
-    Uses a minimal interpreter that supports just about enough PS to parse
- Type 1 fonts.
- """
-
- def __init__(self, path, encoding="ascii", kind=None):
- if kind is None:
- self.data, _ = read(path)
- elif kind == "LWFN":
- self.data = readLWFN(path)
- elif kind == "PFB":
- self.data = readPFB(path)
- elif kind == "OTHER":
- self.data = readOther(path)
- else:
- raise ValueError(kind)
- self.encoding = encoding
-
- def saveAs(self, path, type, dohex=False):
- write(path, self.getData(), type, dohex)
-
- def getData(self):
- if not hasattr(self, "data"):
- self.data = self.createData()
- return self.data
-
- def getGlyphSet(self):
- """Return a generic GlyphSet, which is a dict-like object
- mapping glyph names to glyph objects. The returned glyph objects
- have a .draw() method that supports the Pen protocol, and will
- have an attribute named 'width', but only *after* the .draw() method
- has been called.
-
- In the case of Type 1, the GlyphSet is simply the CharStrings dict.
- """
- return self["CharStrings"]
-
- def __getitem__(self, key):
- if not hasattr(self, "font"):
- self.parse()
- return self.font[key]
-
- def parse(self):
- from fontTools.misc import psLib
- from fontTools.misc import psCharStrings
-
- self.font = psLib.suckfont(self.data, self.encoding)
- charStrings = self.font["CharStrings"]
- lenIV = self.font["Private"].get("lenIV", 4)
- assert lenIV >= 0
- subrs = self.font["Private"]["Subrs"]
- for glyphName, charString in charStrings.items():
- charString, R = eexec.decrypt(charString, 4330)
- charStrings[glyphName] = psCharStrings.T1CharString(
- charString[lenIV:], subrs=subrs
- )
- for i in range(len(subrs)):
- charString, R = eexec.decrypt(subrs[i], 4330)
- subrs[i] = psCharStrings.T1CharString(charString[lenIV:], subrs=subrs)
- del self.data
-
- def createData(self):
- sf = self.font
-
- eexec_began = False
- eexec_dict = {}
- lines = []
- lines.extend(
- [
- self._tobytes(f"%!FontType1-1.1: {sf['FontName']}"),
- self._tobytes(f"%t1Font: ({fontTools.version})"),
- self._tobytes(f"%%BeginResource: font {sf['FontName']}"),
- ]
- )
- # follow t1write.c:writeRegNameKeyedFont
- size = 3 # Headroom for new key addition
- size += 1 # FontMatrix is always counted
-        size += 1 + 1  # Private, CharStrings
- for key in font_dictionary_keys:
- size += int(key in sf)
- lines.append(self._tobytes(f"{size} dict dup begin"))
-
- for key, value in sf.items():
- if eexec_began:
- eexec_dict[key] = value
- continue
-
- if key == "FontInfo":
- fi = sf["FontInfo"]
- # follow t1write.c:writeFontInfoDict
- size = 3 # Headroom for new key addition
- for subkey in FontInfo_dictionary_keys:
- size += int(subkey in fi)
- lines.append(self._tobytes(f"/FontInfo {size} dict dup begin"))
-
- for subkey, subvalue in fi.items():
- lines.extend(self._make_lines(subkey, subvalue))
- lines.append(b"end def")
- elif key in _type1_post_eexec_order: # usually 'Private'
- eexec_dict[key] = value
- eexec_began = True
- else:
- lines.extend(self._make_lines(key, value))
- lines.append(b"end")
- eexec_portion = self.encode_eexec(eexec_dict)
- lines.append(bytesjoin([b"currentfile eexec ", eexec_portion]))
-
- for _ in range(8):
- lines.append(self._tobytes("0" * 64))
- lines.extend([b"cleartomark", b"%%EndResource", b"%%EOF"])
-
- data = bytesjoin(lines, "\n")
- return data
-
- def encode_eexec(self, eexec_dict):
- lines = []
-
- # '-|', '|-', '|'
- RD_key, ND_key, NP_key = None, None, None
-
- for key, value in eexec_dict.items():
- if key == "Private":
- pr = eexec_dict["Private"]
- # follow t1write.c:writePrivateDict
- size = 3 # for RD, ND, NP
- for subkey in Private_dictionary_keys:
- size += int(subkey in pr)
- lines.append(b"dup /Private")
- lines.append(self._tobytes(f"{size} dict dup begin"))
- for subkey, subvalue in pr.items():
- if not RD_key and subvalue == RD_value:
- RD_key = subkey
- elif not ND_key and subvalue == ND_value:
- ND_key = subkey
- elif not NP_key and subvalue == PD_value:
- NP_key = subkey
-
- if subkey == "OtherSubrs":
- # XXX: assert that no flex hint is used
- lines.append(self._tobytes(hintothers))
- elif subkey == "Subrs":
- # XXX: standard Subrs only
- lines.append(b"/Subrs 5 array")
- for i, subr_bin in enumerate(std_subrs):
- encrypted_subr, R = eexec.encrypt(
- bytesjoin([char_IV, subr_bin]), 4330
- )
- lines.append(
- bytesjoin(
- [
- self._tobytes(
- f"dup {i} {len(encrypted_subr)} {RD_key} "
- ),
- encrypted_subr,
- self._tobytes(f" {NP_key}"),
- ]
- )
- )
- lines.append(b"def")
-
- lines.append(b"put")
- else:
- lines.extend(self._make_lines(subkey, subvalue))
- elif key == "CharStrings":
- lines.append(b"dup /CharStrings")
- lines.append(
- self._tobytes(f"{len(eexec_dict['CharStrings'])} dict dup begin")
- )
- for glyph_name, char_bin in eexec_dict["CharStrings"].items():
- char_bin.compile()
- encrypted_char, R = eexec.encrypt(
- bytesjoin([char_IV, char_bin.bytecode]), 4330
- )
- lines.append(
- bytesjoin(
- [
- self._tobytes(
- f"/{glyph_name} {len(encrypted_char)} {RD_key} "
- ),
- encrypted_char,
- self._tobytes(f" {ND_key}"),
- ]
- )
- )
- lines.append(b"end put")
- else:
- lines.extend(self._make_lines(key, value))
-
- lines.extend(
- [
- b"end",
- b"dup /FontName get exch definefont pop",
- b"mark",
- b"currentfile closefile\n",
- ]
- )
-
- eexec_portion = bytesjoin(lines, "\n")
- encrypted_eexec, R = eexec.encrypt(bytesjoin([eexec_IV, eexec_portion]), 55665)
-
- return encrypted_eexec
-
- def _make_lines(self, key, value):
- if key == "FontName":
- return [self._tobytes(f"/{key} /{value} def")]
- if key in ["isFixedPitch", "ForceBold", "RndStemUp"]:
- return [self._tobytes(f"/{key} {'true' if value else 'false'} def")]
- elif key == "Encoding":
- if value == StandardEncoding:
- return [self._tobytes(f"/{key} StandardEncoding def")]
- else:
- # follow fontTools.misc.psOperators._type1_Encoding_repr
- lines = []
- lines.append(b"/Encoding 256 array")
- lines.append(b"0 1 255 {1 index exch /.notdef put} for")
- for i in range(256):
- name = value[i]
- if name != ".notdef":
- lines.append(self._tobytes(f"dup {i} /{name} put"))
- lines.append(b"def")
- return lines
- if isinstance(value, str):
- return [self._tobytes(f"/{key} ({value}) def")]
- elif isinstance(value, bool):
- return [self._tobytes(f"/{key} {'true' if value else 'false'} def")]
- elif isinstance(value, list):
- return [self._tobytes(f"/{key} [{' '.join(str(v) for v in value)}] def")]
- elif isinstance(value, tuple):
- return [self._tobytes(f"/{key} {{{' '.join(str(v) for v in value)}}} def")]
- else:
- return [self._tobytes(f"/{key} {value} def")]
-
- def _tobytes(self, s, errors="strict"):
- return tobytes(s, self.encoding, errors)
-
-
-# low level T1 data read and write functions
-
-
-def read(path, onlyHeader=False):
- """reads any Type 1 font file, returns raw data"""
- _, ext = os.path.splitext(path)
- ext = ext.lower()
- creator, typ = getMacCreatorAndType(path)
- if typ == "LWFN":
- return readLWFN(path, onlyHeader), "LWFN"
- if ext == ".pfb":
- return readPFB(path, onlyHeader), "PFB"
- else:
- return readOther(path), "OTHER"
-
-
-def write(path, data, kind="OTHER", dohex=False):
- assertType1(data)
- kind = kind.upper()
- try:
- os.remove(path)
- except os.error:
- pass
- err = 1
- try:
- if kind == "LWFN":
- writeLWFN(path, data)
- elif kind == "PFB":
- writePFB(path, data)
- else:
- writeOther(path, data, dohex)
- err = 0
- finally:
- if err and not DEBUG:
- try:
- os.remove(path)
- except os.error:
- pass
-
-
-# -- internal --
-
-LWFNCHUNKSIZE = 2000
-HEXLINELENGTH = 80
-
-
-def readLWFN(path, onlyHeader=False):
- """reads an LWFN font file, returns raw data"""
- from fontTools.misc.macRes import ResourceReader
-
- reader = ResourceReader(path)
- try:
- data = []
- for res in reader.get("POST", []):
- code = byteord(res.data[0])
- if byteord(res.data[1]) != 0:
- raise T1Error("corrupt LWFN file")
- if code in [1, 2]:
- if onlyHeader and code == 2:
- break
- data.append(res.data[2:])
- elif code in [3, 5]:
- break
- elif code == 4:
- with open(path, "rb") as f:
- data.append(f.read())
- elif code == 0:
- pass # comment, ignore
- else:
- raise T1Error("bad chunk code: " + repr(code))
- finally:
- reader.close()
- data = bytesjoin(data)
- assertType1(data)
- return data
-
-
-def readPFB(path, onlyHeader=False):
- """reads a PFB font file, returns raw data"""
- data = []
- with open(path, "rb") as f:
- while True:
- if f.read(1) != bytechr(128):
- raise T1Error("corrupt PFB file")
- code = byteord(f.read(1))
- if code in [1, 2]:
- chunklen = stringToLong(f.read(4))
- chunk = f.read(chunklen)
- assert len(chunk) == chunklen
- data.append(chunk)
- elif code == 3:
- break
- else:
- raise T1Error("bad chunk code: " + repr(code))
- if onlyHeader:
- break
- data = bytesjoin(data)
- assertType1(data)
- return data
-
-
-def readOther(path):
- """reads any (font) file, returns raw data"""
- with open(path, "rb") as f:
- data = f.read()
- assertType1(data)
- chunks = findEncryptedChunks(data)
- data = []
- for isEncrypted, chunk in chunks:
- if isEncrypted and isHex(chunk[:4]):
- data.append(deHexString(chunk))
- else:
- data.append(chunk)
- return bytesjoin(data)
-
-
-# file writing tools
-
-
-def writeLWFN(path, data):
- # Res.FSpCreateResFile was deprecated in OS X 10.5
- Res.FSpCreateResFile(path, "just", "LWFN", 0)
- resRef = Res.FSOpenResFile(path, 2) # write-only
- try:
- Res.UseResFile(resRef)
- resID = 501
- chunks = findEncryptedChunks(data)
- for isEncrypted, chunk in chunks:
- if isEncrypted:
- code = 2
- else:
- code = 1
- while chunk:
- res = Res.Resource(bytechr(code) + "\0" + chunk[: LWFNCHUNKSIZE - 2])
- res.AddResource("POST", resID, "")
- chunk = chunk[LWFNCHUNKSIZE - 2 :]
- resID = resID + 1
- res = Res.Resource(bytechr(5) + "\0")
- res.AddResource("POST", resID, "")
- finally:
- Res.CloseResFile(resRef)
-
-
-def writePFB(path, data):
- chunks = findEncryptedChunks(data)
- with open(path, "wb") as f:
- for isEncrypted, chunk in chunks:
- if isEncrypted:
- code = 2
- else:
- code = 1
- f.write(bytechr(128) + bytechr(code))
- f.write(longToString(len(chunk)))
- f.write(chunk)
- f.write(bytechr(128) + bytechr(3))
-
-
-def writeOther(path, data, dohex=False):
- chunks = findEncryptedChunks(data)
- with open(path, "wb") as f:
- hexlinelen = HEXLINELENGTH // 2
- for isEncrypted, chunk in chunks:
- if isEncrypted:
- code = 2
- else:
- code = 1
- if code == 2 and dohex:
- while chunk:
- f.write(eexec.hexString(chunk[:hexlinelen]))
- f.write(b"\r")
- chunk = chunk[hexlinelen:]
- else:
- f.write(chunk)
-
-
-# decryption tools
-
-EEXECBEGIN = b"currentfile eexec"
-# The spec allows for 512 ASCII zeros interrupted by arbitrary whitespace to
-# follow eexec
-EEXECEND = re.compile(b"(0[ \t\r\n]*){512}", flags=re.M)
-EEXECINTERNALEND = b"currentfile closefile"
-EEXECBEGINMARKER = b"%-- eexec start\r"
-EEXECENDMARKER = b"%-- eexec end\r"
-
-_ishexRE = re.compile(b"[0-9A-Fa-f]*$")
-
-
-def isHex(text):
- return _ishexRE.match(text) is not None
-
-
-def decryptType1(data):
- chunks = findEncryptedChunks(data)
- data = []
- for isEncrypted, chunk in chunks:
- if isEncrypted:
- if isHex(chunk[:4]):
- chunk = deHexString(chunk)
- decrypted, R = eexec.decrypt(chunk, 55665)
- decrypted = decrypted[4:]
- if (
- decrypted[-len(EEXECINTERNALEND) - 1 : -1] != EEXECINTERNALEND
- and decrypted[-len(EEXECINTERNALEND) - 2 : -2] != EEXECINTERNALEND
- ):
- raise T1Error("invalid end of eexec part")
- decrypted = decrypted[: -len(EEXECINTERNALEND) - 2] + b"\r"
- data.append(EEXECBEGINMARKER + decrypted + EEXECENDMARKER)
- else:
- if chunk[-len(EEXECBEGIN) - 1 : -1] == EEXECBEGIN:
- data.append(chunk[: -len(EEXECBEGIN) - 1])
- else:
- data.append(chunk)
- return bytesjoin(data)
-
-
-def findEncryptedChunks(data):
- chunks = []
- while True:
- eBegin = data.find(EEXECBEGIN)
- if eBegin < 0:
- break
- eBegin = eBegin + len(EEXECBEGIN) + 1
- endMatch = EEXECEND.search(data, eBegin)
- if endMatch is None:
- raise T1Error("can't find end of eexec part")
- eEnd = endMatch.start()
- cypherText = data[eBegin : eEnd + 2]
- if isHex(cypherText[:4]):
- cypherText = deHexString(cypherText)
- plainText, R = eexec.decrypt(cypherText, 55665)
- eEndLocal = plainText.find(EEXECINTERNALEND)
- if eEndLocal < 0:
- raise T1Error("can't find end of eexec part")
- chunks.append((0, data[:eBegin]))
- chunks.append((1, cypherText[: eEndLocal + len(EEXECINTERNALEND) + 1]))
- data = data[eEnd:]
- chunks.append((0, data))
- return chunks
-
-
-def deHexString(hexstring):
- return eexec.deHexString(bytesjoin(hexstring.split()))
-
-
-# Type 1 assertion
-
-_fontType1RE = re.compile(rb"/FontType\s+1\s+def")
-
-
-def assertType1(data):
- for head in [b"%!PS-AdobeFont", b"%!FontType1"]:
- if data[: len(head)] == head:
- break
- else:
- raise T1Error("not a PostScript font")
- if not _fontType1RE.search(data):
- raise T1Error("not a Type 1 font")
- if data.find(b"currentfile eexec") < 0:
- raise T1Error("not an encrypted Type 1 font")
- # XXX what else?
- return data
-
-
-# pfb helpers
-
-
-def longToString(long):
- s = b""
- for i in range(4):
- s += bytechr((long & (0xFF << (i * 8))) >> i * 8)
- return s
-
-
-def stringToLong(s):
- if len(s) != 4:
- raise ValueError("string must be 4 bytes long")
- l = 0
- for i in range(4):
- l += byteord(s[i]) << (i * 8)
- return l
-
-
-# PS stream helpers
-
-font_dictionary_keys = list(_type1_pre_eexec_order)
-# t1write.c:writeRegNameKeyedFont
-# always counts following keys
-font_dictionary_keys.remove("FontMatrix")
-
-FontInfo_dictionary_keys = list(_type1_fontinfo_order)
-# extend because AFDKO tx may use following keys
-FontInfo_dictionary_keys.extend(
- [
- "FSType",
- "Copyright",
- ]
-)
-
-Private_dictionary_keys = [
- # We don't know what names will be actually used.
- # "RD",
- # "ND",
- # "NP",
- "Subrs",
- "OtherSubrs",
- "UniqueID",
- "BlueValues",
- "OtherBlues",
- "FamilyBlues",
- "FamilyOtherBlues",
- "BlueScale",
- "BlueShift",
- "BlueFuzz",
- "StdHW",
- "StdVW",
- "StemSnapH",
- "StemSnapV",
- "ForceBold",
- "LanguageGroup",
- "password",
- "lenIV",
- "MinFeature",
- "RndStemUp",
-]
-
-# t1write_hintothers.h
-hintothers = """/OtherSubrs[{}{}{}{systemdict/internaldict known not{pop 3}{1183615869
-systemdict/internaldict get exec dup/startlock known{/startlock get exec}{dup
-/strtlck known{/strtlck get exec}{pop 3}ifelse}ifelse}ifelse}executeonly]def"""
-# t1write.c:saveStdSubrs
-std_subrs = [
- # 3 0 callother pop pop setcurrentpoint return
- b"\x8e\x8b\x0c\x10\x0c\x11\x0c\x11\x0c\x21\x0b",
- # 0 1 callother return
- b"\x8b\x8c\x0c\x10\x0b",
- # 0 2 callother return
- b"\x8b\x8d\x0c\x10\x0b",
- # return
- b"\x0b",
- # 3 1 3 callother pop callsubr return
- b"\x8e\x8c\x8e\x0c\x10\x0c\x11\x0a\x0b",
-]
-# follow t1write.c:writeRegNameKeyedFont
-eexec_IV = b"cccc"
-char_IV = b"\x0c\x0c\x0c\x0c"
-RD_value = ("string", "currentfile", "exch", "readstring", "pop")
-ND_value = ("def",)
-PD_value = ("put",)
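The docstring at the top of the deleted t1Lib module describes a small read/parse/write API. A hedged usage sketch of that API follows; the font paths are placeholders, and this assumes a fontTools installation that still ships t1Lib:

```python
# Hypothetical usage of the t1Lib API described in the module docstring above.
# "MyFont.pfb" / "MyFont-copy.pfa" are placeholder paths.
from fontTools import t1Lib

data, kind = t1Lib.read("MyFont.pfb")      # raw Type 1 data plus 'LWFN', 'PFB' or 'OTHER'
print(kind, len(data), "bytes")

font = t1Lib.T1Font("MyFont.pfb")
font.parse()                               # decrypts the eexec part and builds the font dicts
glyphs = font.getGlyphSet()                # dict-like mapping of glyph name -> charstring
print(sorted(glyphs.keys())[:10])

# Re-encode the font and write it back out, hex-encoding the eexec portion.
t1Lib.write("MyFont-copy.pfa", font.getData(), kind="OTHER", dohex=True)
```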
diff --git a/spaces/cncn102/bingo1/src/components/ui/button.tsx b/spaces/cncn102/bingo1/src/components/ui/button.tsx
deleted file mode 100644
index 281da005124fa94c89a9a9db7605748a92b60865..0000000000000000000000000000000000000000
--- a/spaces/cncn102/bingo1/src/components/ui/button.tsx
+++ /dev/null
@@ -1,57 +0,0 @@
-import * as React from 'react'
-import { Slot } from '@radix-ui/react-slot'
-import { cva, type VariantProps } from 'class-variance-authority'
-
-import { cn } from '@/lib/utils'
-
-const buttonVariants = cva(
- 'inline-flex items-center justify-center rounded-md text-sm font-medium shadow ring-offset-background transition-colors outline-none disabled:pointer-events-none disabled:opacity-50',
- {
- variants: {
- variant: {
- default:
- 'bg-primary text-primary-foreground shadow-md hover:bg-primary/90',
- destructive:
- 'bg-destructive text-destructive-foreground hover:bg-destructive/90',
- outline:
- 'border border-input hover:bg-accent hover:text-accent-foreground',
- secondary:
- 'bg-secondary text-secondary-foreground hover:bg-secondary/80',
- ghost: 'shadow-none hover:bg-accent hover:text-accent-foreground',
- link: 'text-primary underline-offset-4 shadow-none hover:underline'
- },
- size: {
- default: 'h-8 px-4 py-2',
- sm: 'h-8 rounded-md px-3',
- lg: 'h-11 rounded-md px-8',
- icon: 'h-8 w-8 p-0'
- }
- },
- defaultVariants: {
- variant: 'default',
- size: 'default'
- }
- }
-)
-
-export interface ButtonProps
-  extends React.ButtonHTMLAttributes<HTMLButtonElement>,
-    VariantProps<typeof buttonVariants> {
-  asChild?: boolean
-}
-
-const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
-  ({ className, variant, size, asChild = false, ...props }, ref) => {
-    const Comp = asChild ? Slot : 'button'
-    return (
-      <Comp
-        className={cn(buttonVariants({ variant, size, className }))}
-        ref={ref}
-        {...props}
-      />
-    )
- }
-)
-Button.displayName = 'Button'
-
-export { Button, buttonVariants }
diff --git a/spaces/codertoro/gpt-academic/core_functional.py b/spaces/codertoro/gpt-academic/core_functional.py
deleted file mode 100644
index 536ccb609c38cbbebfda4ba17bd51a78857d711e..0000000000000000000000000000000000000000
--- a/spaces/codertoro/gpt-academic/core_functional.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# The 'primary' color corresponds to primary_hue in theme.py
-# The 'secondary' color corresponds to neutral_hue in theme.py
-# The 'stop' color corresponds to color_er in theme.py
-# The default button color is secondary
-from toolbox import clear_line_break
-
-
-def get_core_functions():
- return {
- "英语学术润色": {
-        # Prefix (prepended to the user's text)
- "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
- r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
- r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n",
-        # Suffix (appended after the user's text)
- "Suffix": r"",
- "Color": r"secondary", # 按钮颜色
- },
- "中文学术润色": {
- "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
- r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n",
- "Suffix": r"",
- },
- "查找语法错误": {
- "Prefix": r"Can you help me ensure that the grammar and the spelling is correct? " +
- r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." +
- r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " +
- r"put the original text the first column, " +
- r"put the corrected text in the second column and highlight the key words you fixed.""\n"
- r"Example:""\n"
- r"Paragraph: How is you? Do you knows what is it?""\n"
- r"| Original sentence | Corrected sentence |""\n"
- r"| :--- | :--- |""\n"
- r"| How **is** you? | How **are** you? |""\n"
- r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n"
- r"Below is a paragraph from an academic paper. "
- r"You need to report all grammar and spelling mistakes as the example before."
- + "\n\n",
- "Suffix": r"",
- "PreProcess": clear_line_break, # 预处理:清除换行符
- },
- "中译英": {
- "Prefix": r"Please translate following sentence to English:" + "\n\n",
- "Suffix": r"",
- },
- "学术中英互译": {
- "Prefix": r"I want you to act as a scientific English-Chinese translator, " +
- r"I will provide you with some paragraphs in one language " +
- r"and your task is to accurately and academically translate the paragraphs only into the other language. " +
- r"Do not repeat the original provided paragraphs after translation. " +
- r"You should use artificial intelligence tools, " +
- r"such as natural language processing, and rhetorical knowledge " +
- r"and experience about effective writing techniques to reply. " +
- r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n",
- "Suffix": "",
- "Color": "secondary",
- },
- "英译中": {
- "Prefix": r"翻译成地道的中文:" + "\n\n",
- "Suffix": r"",
- },
- "找图片": {
- "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +
- r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n",
- "Suffix": r"",
- },
- "解释代码": {
- "Prefix": r"请解释以下代码:" + "\n```\n",
- "Suffix": "\n```\n",
- },
- }
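Each entry in the table above is a prompt template made of a Prefix, a Suffix and an optional PreProcess callable. Below is a short, hypothetical sketch of how the host application would typically apply one entry to the user's input; the helper name is made up and not part of the repository:

```python
# Hypothetical helper showing how one core-function entry is applied:
# optional pre-processing, then Prefix + text + Suffix.
def apply_core_function(entry: dict, user_text: str) -> str:
    preprocess = entry.get("PreProcess")          # e.g. clear_line_break
    if preprocess is not None:
        user_text = preprocess(user_text)
    return entry.get("Prefix", "") + user_text + entry.get("Suffix", "")

# functions = get_core_functions()
# prompt = apply_core_function(functions["解释代码"], "print('hello')")
```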
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aaccoder_twoloop.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aaccoder_twoloop.h
deleted file mode 100644
index 0504a916ad07bf0d1d08ecd676c0637962eba15a..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aaccoder_twoloop.h
+++ /dev/null
@@ -1,763 +0,0 @@
-/*
- * AAC encoder twoloop coder
- * Copyright (C) 2008-2009 Konstantin Shishkov
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * AAC encoder twoloop coder
- * @author Konstantin Shishkov, Claudio Freire
- */
-
-/**
- * This file contains a template for the twoloop coder function.
- * It needs the following functions from aacenc_quantization/util.h to be
- * provided externally, as already included declarations. They're not included
- * explicitly here to make it possible to provide alternative implementations:
- * - quantize_band_cost
- * - abs_pow34_v
- * - find_max_val
- * - find_min_book
- * - find_form_factor
- */
-
-#ifndef AVCODEC_AACCODER_TWOLOOP_H
-#define AVCODEC_AACCODER_TWOLOOP_H
-
-#include <float.h>
-#include "libavutil/mathematics.h"
-#include "mathops.h"
-#include "avcodec.h"
-#include "put_bits.h"
-#include "aac.h"
-#include "aacenc.h"
-#include "aactab.h"
-#include "aacenctab.h"
-
-/** Frequency in Hz for lower limit of noise substitution **/
-#define NOISE_LOW_LIMIT 4000
-
-#define sclip(x) av_clip(x,60,218)
-
-/* Reflects the cost to change codebooks */
-static inline int ff_pns_bits(SingleChannelElement *sce, int w, int g)
-{
- return (!g || !sce->zeroes[w*16+g-1] || !sce->can_pns[w*16+g-1]) ? 9 : 5;
-}
-
-/**
- * two-loop quantizers search taken from ISO 13818-7 Appendix C
- */
-static void search_for_quantizers_twoloop(AVCodecContext *avctx,
- AACEncContext *s,
- SingleChannelElement *sce,
- const float lambda)
-{
- int start = 0, i, w, w2, g, recomprd;
- int destbits = avctx->bit_rate * 1024.0 / avctx->sample_rate
- / ((avctx->flags & AV_CODEC_FLAG_QSCALE) ? 2.0f : avctx->ch_layout.nb_channels)
- * (lambda / 120.f);
- int refbits = destbits;
- int toomanybits, toofewbits;
- char nzs[128];
- uint8_t nextband[128];
- int maxsf[128], minsf[128];
- float dists[128] = { 0 }, qenergies[128] = { 0 }, uplims[128], euplims[128], energies[128];
- float maxvals[128], spread_thr_r[128];
- float min_spread_thr_r, max_spread_thr_r;
-
- /**
- * rdlambda controls the maximum tolerated distortion. Twoloop
- * will keep iterating until it fails to lower it or it reaches
- * ulimit * rdlambda. Keeping it low increases quality on difficult
- * signals, but lower it too much, and bits will be taken from weak
- * signals, creating "holes". A balance is necessary.
- * rdmax and rdmin specify the relative deviation from rdlambda
- * allowed for tonality compensation
- */
- float rdlambda = av_clipf(2.0f * 120.f / lambda, 0.0625f, 16.0f);
- const float nzslope = 1.5f;
- float rdmin = 0.03125f;
- float rdmax = 1.0f;
-
- /**
-     * sfoffs controls an offset of optimum allocation that will be
- * applied based on lambda. Keep it real and modest, the loop
- * will take care of the rest, this just accelerates convergence
- */
- float sfoffs = av_clipf(log2f(120.0f / lambda) * 4.0f, -5, 10);
-
- int fflag, minscaler, maxscaler, nminscaler;
- int its = 0;
- int maxits = 30;
- int allz = 0;
- int tbits;
- int cutoff = 1024;
- int pns_start_pos;
- int prev;
-
- /**
- * zeroscale controls a multiplier of the threshold, if band energy
- * is below this, a zero is forced. Keep it lower than 1, unless
- * low lambda is used, because energy < threshold doesn't mean there's
- * no audible signal outright, it's just energy. Also make it rise
- * slower than rdlambda, as rdscale has due compensation with
- * noisy band depriorization below, whereas zeroing logic is rather dumb
- */
- float zeroscale;
- if (lambda > 120.f) {
- zeroscale = av_clipf(powf(120.f / lambda, 0.25f), 0.0625f, 1.0f);
- } else {
- zeroscale = 1.f;
- }
-
- if (s->psy.bitres.alloc >= 0) {
- /**
-         * Psy granted us extra bits to use from the reservoir;
-         * adjust for lambda, except for what psy already did
- */
- destbits = s->psy.bitres.alloc
- * (lambda / (avctx->global_quality ? avctx->global_quality : 120));
- }
-
- if (avctx->flags & AV_CODEC_FLAG_QSCALE) {
- /**
- * Constant Q-scale doesn't compensate MS coding on its own
- * No need to be overly precise, this only controls RD
- * adjustment CB limits when going overboard
- */
- if (s->options.mid_side && s->cur_type == TYPE_CPE)
- destbits *= 2;
-
- /**
- * When using a constant Q-scale, don't adjust bits, just use RD
- * Don't let it go overboard, though... 8x psy target is enough
- */
- toomanybits = 5800;
- toofewbits = destbits / 16;
-
- /** Don't offset scalers, just RD */
- sfoffs = sce->ics.num_windows - 1;
- rdlambda = sqrtf(rdlambda);
-
- /** search further */
- maxits *= 2;
- } else {
- /* When using ABR, be strict, but a reasonable leeway is
- * critical to allow RC to smoothly track desired bitrate
- * without sudden quality drops that cause audible artifacts.
- * Symmetry is also desirable, to avoid systematic bias.
- */
- toomanybits = destbits + destbits/8;
- toofewbits = destbits - destbits/8;
-
- sfoffs = 0;
- rdlambda = sqrtf(rdlambda);
- }
-
- /** and zero out above cutoff frequency */
- {
- int wlen = 1024 / sce->ics.num_windows;
- int bandwidth;
-
- /**
- * Scale, psy gives us constant quality, this LP only scales
- * bitrate by lambda, so we save bits on subjectively unimportant HF
- * rather than increase quantization noise. Adjust nominal bitrate
- * to effective bitrate according to encoding parameters,
- * AAC_CUTOFF_FROM_BITRATE is calibrated for effective bitrate.
- */
- float rate_bandwidth_multiplier = 1.5f;
- int frame_bit_rate = (avctx->flags & AV_CODEC_FLAG_QSCALE)
- ? (refbits * rate_bandwidth_multiplier * avctx->sample_rate / 1024)
- : (avctx->bit_rate / avctx->ch_layout.nb_channels);
-
- /** Compensate for extensions that increase efficiency */
- if (s->options.pns || s->options.intensity_stereo)
- frame_bit_rate *= 1.15f;
-
- if (avctx->cutoff > 0) {
- bandwidth = avctx->cutoff;
- } else {
- bandwidth = FFMAX(3000, AAC_CUTOFF_FROM_BITRATE(frame_bit_rate, 1, avctx->sample_rate));
- s->psy.cutoff = bandwidth;
- }
-
- cutoff = bandwidth * 2 * wlen / avctx->sample_rate;
- pns_start_pos = NOISE_LOW_LIMIT * 2 * wlen / avctx->sample_rate;
- }
-
- /**
- * for values above this the decoder might end up in an endless loop
- * due to always having more bits than what can be encoded.
- */
- destbits = FFMIN(destbits, 5800);
- toomanybits = FFMIN(toomanybits, 5800);
- toofewbits = FFMIN(toofewbits, 5800);
- /**
- * XXX: some heuristic to determine initial quantizers will reduce search time
- * determine zero bands and upper distortion limits
- */
- min_spread_thr_r = -1;
- max_spread_thr_r = -1;
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
- for (g = start = 0; g < sce->ics.num_swb; start += sce->ics.swb_sizes[g++]) {
- int nz = 0;
- float uplim = 0.0f, energy = 0.0f, spread = 0.0f;
- for (w2 = 0; w2 < sce->ics.group_len[w]; w2++) {
- FFPsyBand *band = &s->psy.ch[s->cur_channel].psy_bands[(w+w2)*16+g];
- if (start >= cutoff || band->energy <= (band->threshold * zeroscale) || band->threshold == 0.0f) {
- sce->zeroes[(w+w2)*16+g] = 1;
- continue;
- }
- nz = 1;
- }
- if (!nz) {
- uplim = 0.0f;
- } else {
- nz = 0;
- for (w2 = 0; w2 < sce->ics.group_len[w]; w2++) {
- FFPsyBand *band = &s->psy.ch[s->cur_channel].psy_bands[(w+w2)*16+g];
- if (band->energy <= (band->threshold * zeroscale) || band->threshold == 0.0f)
- continue;
- uplim += band->threshold;
- energy += band->energy;
- spread += band->spread;
- nz++;
- }
- }
- uplims[w*16+g] = uplim;
- energies[w*16+g] = energy;
- nzs[w*16+g] = nz;
- sce->zeroes[w*16+g] = !nz;
- allz |= nz;
- if (nz && sce->can_pns[w*16+g]) {
- spread_thr_r[w*16+g] = energy * nz / (uplim * spread);
- if (min_spread_thr_r < 0) {
- min_spread_thr_r = max_spread_thr_r = spread_thr_r[w*16+g];
- } else {
- min_spread_thr_r = FFMIN(min_spread_thr_r, spread_thr_r[w*16+g]);
- max_spread_thr_r = FFMAX(max_spread_thr_r, spread_thr_r[w*16+g]);
- }
- }
- }
- }
-
- /** Compute initial scalers */
- minscaler = 65535;
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
- for (g = 0; g < sce->ics.num_swb; g++) {
- if (sce->zeroes[w*16+g]) {
- sce->sf_idx[w*16+g] = SCALE_ONE_POS;
- continue;
- }
- /**
- * log2f-to-distortion ratio is, technically, 2 (1.5db = 4, but it's power vs level so it's 2).
- * But, as offsets are applied, low-frequency signals are too sensitive to the induced distortion,
- * so we make scaling more conservative by choosing a lower log2f-to-distortion ratio, and thus
- * more robust.
- */
- sce->sf_idx[w*16+g] = av_clip(
- SCALE_ONE_POS
- + 1.75*log2f(FFMAX(0.00125f,uplims[w*16+g]) / sce->ics.swb_sizes[g])
- + sfoffs,
- 60, SCALE_MAX_POS);
- minscaler = FFMIN(minscaler, sce->sf_idx[w*16+g]);
- }
- }
-
- /** Clip */
- minscaler = av_clip(minscaler, SCALE_ONE_POS - SCALE_DIV_512, SCALE_MAX_POS - SCALE_DIV_512);
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w])
- for (g = 0; g < sce->ics.num_swb; g++)
- if (!sce->zeroes[w*16+g])
- sce->sf_idx[w*16+g] = av_clip(sce->sf_idx[w*16+g], minscaler, minscaler + SCALE_MAX_DIFF - 1);
-
- if (!allz)
- return;
- s->abs_pow34(s->scoefs, sce->coeffs, 1024);
- ff_quantize_band_cost_cache_init(s);
-
- for (i = 0; i < sizeof(minsf) / sizeof(minsf[0]); ++i)
- minsf[i] = 0;
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
- start = w*128;
- for (g = 0; g < sce->ics.num_swb; g++) {
- const float *scaled = s->scoefs + start;
- int minsfidx;
- maxvals[w*16+g] = find_max_val(sce->ics.group_len[w], sce->ics.swb_sizes[g], scaled);
- if (maxvals[w*16+g] > 0) {
- minsfidx = coef2minsf(maxvals[w*16+g]);
- for (w2 = 0; w2 < sce->ics.group_len[w]; w2++)
- minsf[(w+w2)*16+g] = minsfidx;
- }
- start += sce->ics.swb_sizes[g];
- }
- }
-
- /**
- * Scale uplims to match rate distortion to quality
-     * by applying noisy band depriorization and tonal band priorization.
- * Maxval-energy ratio gives us an idea of how noisy/tonal the band is.
- * If maxval^2 ~ energy, then that band is mostly noise, and we can relax
- * rate distortion requirements.
- */
- memcpy(euplims, uplims, sizeof(euplims));
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
- /** psy already priorizes transients to some extent */
- float de_psy_factor = (sce->ics.num_windows > 1) ? 8.0f / sce->ics.group_len[w] : 1.0f;
- start = w*128;
- for (g = 0; g < sce->ics.num_swb; g++) {
- if (nzs[g] > 0) {
- float cleanup_factor = ff_sqrf(av_clipf(start / (cutoff * 0.75f), 1.0f, 2.0f));
- float energy2uplim = find_form_factor(
- sce->ics.group_len[w], sce->ics.swb_sizes[g],
- uplims[w*16+g] / (nzs[g] * sce->ics.swb_sizes[w]),
- sce->coeffs + start,
- nzslope * cleanup_factor);
- energy2uplim *= de_psy_factor;
- if (!(avctx->flags & AV_CODEC_FLAG_QSCALE)) {
- /** In ABR, we need to priorize less and let rate control do its thing */
- energy2uplim = sqrtf(energy2uplim);
- }
- energy2uplim = FFMAX(0.015625f, FFMIN(1.0f, energy2uplim));
- uplims[w*16+g] *= av_clipf(rdlambda * energy2uplim, rdmin, rdmax)
- * sce->ics.group_len[w];
-
- energy2uplim = find_form_factor(
- sce->ics.group_len[w], sce->ics.swb_sizes[g],
- uplims[w*16+g] / (nzs[g] * sce->ics.swb_sizes[w]),
- sce->coeffs + start,
- 2.0f);
- energy2uplim *= de_psy_factor;
- if (!(avctx->flags & AV_CODEC_FLAG_QSCALE)) {
- /** In ABR, we need to priorize less and let rate control do its thing */
- energy2uplim = sqrtf(energy2uplim);
- }
- energy2uplim = FFMAX(0.015625f, FFMIN(1.0f, energy2uplim));
- euplims[w*16+g] *= av_clipf(rdlambda * energy2uplim * sce->ics.group_len[w],
- 0.5f, 1.0f);
- }
- start += sce->ics.swb_sizes[g];
- }
- }
-
- for (i = 0; i < sizeof(maxsf) / sizeof(maxsf[0]); ++i)
- maxsf[i] = SCALE_MAX_POS;
-
- //perform two-loop search
- //outer loop - improve quality
- do {
- //inner loop - quantize spectrum to fit into given number of bits
- int overdist;
- int qstep = its ? 1 : 32;
- do {
- int changed = 0;
- prev = -1;
- recomprd = 0;
- tbits = 0;
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
- start = w*128;
- for (g = 0; g < sce->ics.num_swb; g++) {
- const float *coefs = &sce->coeffs[start];
- const float *scaled = &s->scoefs[start];
- int bits = 0;
- int cb;
- float dist = 0.0f;
- float qenergy = 0.0f;
-
- if (sce->zeroes[w*16+g] || sce->sf_idx[w*16+g] >= 218) {
- start += sce->ics.swb_sizes[g];
- if (sce->can_pns[w*16+g]) {
- /** PNS isn't free */
- tbits += ff_pns_bits(sce, w, g);
- }
- continue;
- }
- cb = find_min_book(maxvals[w*16+g], sce->sf_idx[w*16+g]);
- for (w2 = 0; w2 < sce->ics.group_len[w]; w2++) {
- int b;
- float sqenergy;
- dist += quantize_band_cost_cached(s, w + w2, g, coefs + w2*128,
- scaled + w2*128,
- sce->ics.swb_sizes[g],
- sce->sf_idx[w*16+g],
- cb,
- 1.0f,
- INFINITY,
- &b, &sqenergy,
- 0);
- bits += b;
- qenergy += sqenergy;
- }
- dists[w*16+g] = dist - bits;
- qenergies[w*16+g] = qenergy;
- if (prev != -1) {
- int sfdiff = av_clip(sce->sf_idx[w*16+g] - prev + SCALE_DIFF_ZERO, 0, 2*SCALE_MAX_DIFF);
- bits += ff_aac_scalefactor_bits[sfdiff];
- }
- tbits += bits;
- start += sce->ics.swb_sizes[g];
- prev = sce->sf_idx[w*16+g];
- }
- }
- if (tbits > toomanybits) {
- recomprd = 1;
- for (i = 0; i < 128; i++) {
- if (sce->sf_idx[i] < (SCALE_MAX_POS - SCALE_DIV_512)) {
- int maxsf_i = (tbits > 5800) ? SCALE_MAX_POS : maxsf[i];
- int new_sf = FFMIN(maxsf_i, sce->sf_idx[i] + qstep);
- if (new_sf != sce->sf_idx[i]) {
- sce->sf_idx[i] = new_sf;
- changed = 1;
- }
- }
- }
- } else if (tbits < toofewbits) {
- recomprd = 1;
- for (i = 0; i < 128; i++) {
- if (sce->sf_idx[i] > SCALE_ONE_POS) {
- int new_sf = FFMAX3(minsf[i], SCALE_ONE_POS, sce->sf_idx[i] - qstep);
- if (new_sf != sce->sf_idx[i]) {
- sce->sf_idx[i] = new_sf;
- changed = 1;
- }
- }
- }
- }
- qstep >>= 1;
- if (!qstep && tbits > toomanybits && sce->sf_idx[0] < 217 && changed)
- qstep = 1;
- } while (qstep);
-
- overdist = 1;
- fflag = tbits < toofewbits;
- for (i = 0; i < 2 && (overdist || recomprd); ++i) {
- if (recomprd) {
- /** Must recompute distortion */
- prev = -1;
- tbits = 0;
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
- start = w*128;
- for (g = 0; g < sce->ics.num_swb; g++) {
- const float *coefs = sce->coeffs + start;
- const float *scaled = s->scoefs + start;
- int bits = 0;
- int cb;
- float dist = 0.0f;
- float qenergy = 0.0f;
-
- if (sce->zeroes[w*16+g] || sce->sf_idx[w*16+g] >= 218) {
- start += sce->ics.swb_sizes[g];
- if (sce->can_pns[w*16+g]) {
- /** PNS isn't free */
- tbits += ff_pns_bits(sce, w, g);
- }
- continue;
- }
- cb = find_min_book(maxvals[w*16+g], sce->sf_idx[w*16+g]);
- for (w2 = 0; w2 < sce->ics.group_len[w]; w2++) {
- int b;
- float sqenergy;
- dist += quantize_band_cost_cached(s, w + w2, g, coefs + w2*128,
- scaled + w2*128,
- sce->ics.swb_sizes[g],
- sce->sf_idx[w*16+g],
- cb,
- 1.0f,
- INFINITY,
- &b, &sqenergy,
- 0);
- bits += b;
- qenergy += sqenergy;
- }
- dists[w*16+g] = dist - bits;
- qenergies[w*16+g] = qenergy;
- if (prev != -1) {
- int sfdiff = av_clip(sce->sf_idx[w*16+g] - prev + SCALE_DIFF_ZERO, 0, 2*SCALE_MAX_DIFF);
- bits += ff_aac_scalefactor_bits[sfdiff];
- }
- tbits += bits;
- start += sce->ics.swb_sizes[g];
- prev = sce->sf_idx[w*16+g];
- }
- }
- }
- if (!i && s->options.pns && its > maxits/2 && tbits > toofewbits) {
- float maxoverdist = 0.0f;
- float ovrfactor = 1.f+(maxits-its)*16.f/maxits;
- overdist = recomprd = 0;
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
- for (g = start = 0; g < sce->ics.num_swb; start += sce->ics.swb_sizes[g++]) {
- if (!sce->zeroes[w*16+g] && sce->sf_idx[w*16+g] > SCALE_ONE_POS && dists[w*16+g] > uplims[w*16+g]*ovrfactor) {
- float ovrdist = dists[w*16+g] / FFMAX(uplims[w*16+g],euplims[w*16+g]);
- maxoverdist = FFMAX(maxoverdist, ovrdist);
- overdist++;
- }
- }
- }
- if (overdist) {
- /* We have overdistorted bands, trade for zeroes (that can be noise)
- * Zero the bands in the lowest 1.25% spread-energy-threshold ranking
- */
- float minspread = max_spread_thr_r;
- float maxspread = min_spread_thr_r;
- float zspread;
- int zeroable = 0;
- int zeroed = 0;
- int maxzeroed, zloop;
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
- for (g = start = 0; g < sce->ics.num_swb; start += sce->ics.swb_sizes[g++]) {
- if (start >= pns_start_pos && !sce->zeroes[w*16+g] && sce->can_pns[w*16+g]) {
- minspread = FFMIN(minspread, spread_thr_r[w*16+g]);
- maxspread = FFMAX(maxspread, spread_thr_r[w*16+g]);
- zeroable++;
- }
- }
- }
- zspread = (maxspread-minspread) * 0.0125f + minspread;
- /* Don't PNS everything even if allowed. It suppresses bit starvation signals from RC,
- * and forced the hand of the later search_for_pns step.
- * Instead, PNS a fraction of the spread_thr_r range depending on how starved for bits we are,
- * and leave further PNSing to search_for_pns if worthwhile.
- */
- zspread = FFMIN3(min_spread_thr_r * 8.f, zspread,
- ((toomanybits - tbits) * min_spread_thr_r + (tbits - toofewbits) * max_spread_thr_r) / (toomanybits - toofewbits + 1));
- maxzeroed = FFMIN(zeroable, FFMAX(1, (zeroable * its + maxits - 1) / (2 * maxits)));
- for (zloop = 0; zloop < 2; zloop++) {
- /* Two passes: first distorted stuff - two birds in one shot and all that,
- * then anything viable. Viable means not zero, but either CB=zero-able
- * (too high SF), not SF <= 1 (that means we'd be operating at very high
- * quality, we don't want PNS when doing VHQ), PNS allowed, and within
- * the lowest ranking percentile.
- */
- float loopovrfactor = (zloop) ? 1.0f : ovrfactor;
- int loopminsf = (zloop) ? (SCALE_ONE_POS - SCALE_DIV_512) : SCALE_ONE_POS;
- int mcb;
- for (g = sce->ics.num_swb-1; g > 0 && zeroed < maxzeroed; g--) {
- if (sce->ics.swb_offset[g] < pns_start_pos)
- continue;
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
- if (!sce->zeroes[w*16+g] && sce->can_pns[w*16+g] && spread_thr_r[w*16+g] <= zspread
- && sce->sf_idx[w*16+g] > loopminsf
- && (dists[w*16+g] > loopovrfactor*uplims[w*16+g] || !(mcb = find_min_book(maxvals[w*16+g], sce->sf_idx[w*16+g]))
- || (mcb <= 1 && dists[w*16+g] > FFMIN(uplims[w*16+g], euplims[w*16+g]))) ) {
- sce->zeroes[w*16+g] = 1;
- sce->band_type[w*16+g] = 0;
- zeroed++;
- }
- }
- }
- }
- if (zeroed)
- recomprd = fflag = 1;
- } else {
- overdist = 0;
- }
- }
- }
-
- minscaler = SCALE_MAX_POS;
- maxscaler = 0;
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
- for (g = 0; g < sce->ics.num_swb; g++) {
- if (!sce->zeroes[w*16+g]) {
- minscaler = FFMIN(minscaler, sce->sf_idx[w*16+g]);
- maxscaler = FFMAX(maxscaler, sce->sf_idx[w*16+g]);
- }
- }
- }
-
- minscaler = nminscaler = av_clip(minscaler, SCALE_ONE_POS - SCALE_DIV_512, SCALE_MAX_POS - SCALE_DIV_512);
- prev = -1;
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
-        /** Start with big steps, end up fine-tuning */
- int depth = (its > maxits/2) ? ((its > maxits*2/3) ? 1 : 3) : 10;
- int edepth = depth+2;
- float uplmax = its / (maxits*0.25f) + 1.0f;
- uplmax *= (tbits > destbits) ? FFMIN(2.0f, tbits / (float)FFMAX(1,destbits)) : 1.0f;
- start = w * 128;
- for (g = 0; g < sce->ics.num_swb; g++) {
- int prevsc = sce->sf_idx[w*16+g];
- if (prev < 0 && !sce->zeroes[w*16+g])
- prev = sce->sf_idx[0];
- if (!sce->zeroes[w*16+g]) {
- const float *coefs = sce->coeffs + start;
- const float *scaled = s->scoefs + start;
- int cmb = find_min_book(maxvals[w*16+g], sce->sf_idx[w*16+g]);
- int mindeltasf = FFMAX(0, prev - SCALE_MAX_DIFF);
- int maxdeltasf = FFMIN(SCALE_MAX_POS - SCALE_DIV_512, prev + SCALE_MAX_DIFF);
- if ((!cmb || dists[w*16+g] > uplims[w*16+g]) && sce->sf_idx[w*16+g] > FFMAX(mindeltasf, minsf[w*16+g])) {
- /* Try to make sure there is some energy in every nonzero band
- * NOTE: This algorithm must be forcibly imbalanced, pushing harder
- * on holes or more distorted bands at first, otherwise there's
- * no net gain (since the next iteration will offset all bands
- * on the opposite direction to compensate for extra bits)
- */
- for (i = 0; i < edepth && sce->sf_idx[w*16+g] > mindeltasf; ++i) {
- int cb, bits;
- float dist, qenergy;
- int mb = find_min_book(maxvals[w*16+g], sce->sf_idx[w*16+g]-1);
- cb = find_min_book(maxvals[w*16+g], sce->sf_idx[w*16+g]);
- dist = qenergy = 0.f;
- bits = 0;
- if (!cb) {
- maxsf[w*16+g] = FFMIN(sce->sf_idx[w*16+g]-1, maxsf[w*16+g]);
- } else if (i >= depth && dists[w*16+g] < euplims[w*16+g]) {
- break;
- }
- /* !g is the DC band, it's important, since quantization error here
- * applies to less than a cycle, it creates horrible intermodulation
- * distortion if it doesn't stick to what psy requests
- */
- if (!g && sce->ics.num_windows > 1 && dists[w*16+g] >= euplims[w*16+g])
- maxsf[w*16+g] = FFMIN(sce->sf_idx[w*16+g], maxsf[w*16+g]);
- for (w2 = 0; w2 < sce->ics.group_len[w]; w2++) {
- int b;
- float sqenergy;
- dist += quantize_band_cost_cached(s, w + w2, g, coefs + w2*128,
- scaled + w2*128,
- sce->ics.swb_sizes[g],
- sce->sf_idx[w*16+g]-1,
- cb,
- 1.0f,
- INFINITY,
- &b, &sqenergy,
- 0);
- bits += b;
- qenergy += sqenergy;
- }
- sce->sf_idx[w*16+g]--;
- dists[w*16+g] = dist - bits;
- qenergies[w*16+g] = qenergy;
- if (mb && (sce->sf_idx[w*16+g] < mindeltasf || (
- (dists[w*16+g] < FFMIN(uplmax*uplims[w*16+g], euplims[w*16+g]))
- && (fabsf(qenergies[w*16+g]-energies[w*16+g]) < euplims[w*16+g])
- ) )) {
- break;
- }
- }
- } else if (tbits > toofewbits && sce->sf_idx[w*16+g] < FFMIN(maxdeltasf, maxsf[w*16+g])
- && (dists[w*16+g] < FFMIN(euplims[w*16+g], uplims[w*16+g]))
- && (fabsf(qenergies[w*16+g]-energies[w*16+g]) < euplims[w*16+g])
- ) {
- /** Um... over target. Save bits for more important stuff. */
- for (i = 0; i < depth && sce->sf_idx[w*16+g] < maxdeltasf; ++i) {
- int cb, bits;
- float dist, qenergy;
- cb = find_min_book(maxvals[w*16+g], sce->sf_idx[w*16+g]+1);
- if (cb > 0) {
- dist = qenergy = 0.f;
- bits = 0;
- for (w2 = 0; w2 < sce->ics.group_len[w]; w2++) {
- int b;
- float sqenergy;
- dist += quantize_band_cost_cached(s, w + w2, g, coefs + w2*128,
- scaled + w2*128,
- sce->ics.swb_sizes[g],
- sce->sf_idx[w*16+g]+1,
- cb,
- 1.0f,
- INFINITY,
- &b, &sqenergy,
- 0);
- bits += b;
- qenergy += sqenergy;
- }
- dist -= bits;
- if (dist < FFMIN(euplims[w*16+g], uplims[w*16+g])) {
- sce->sf_idx[w*16+g]++;
- dists[w*16+g] = dist;
- qenergies[w*16+g] = qenergy;
- } else {
- break;
- }
- } else {
- maxsf[w*16+g] = FFMIN(sce->sf_idx[w*16+g], maxsf[w*16+g]);
- break;
- }
- }
- }
- prev = sce->sf_idx[w*16+g] = av_clip(sce->sf_idx[w*16+g], mindeltasf, maxdeltasf);
- if (sce->sf_idx[w*16+g] != prevsc)
- fflag = 1;
- nminscaler = FFMIN(nminscaler, sce->sf_idx[w*16+g]);
- sce->band_type[w*16+g] = find_min_book(maxvals[w*16+g], sce->sf_idx[w*16+g]);
- }
- start += sce->ics.swb_sizes[g];
- }
- }
-
- /** SF difference limit violation risk. Must re-clamp. */
- prev = -1;
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
- for (g = 0; g < sce->ics.num_swb; g++) {
- if (!sce->zeroes[w*16+g]) {
- int prevsf = sce->sf_idx[w*16+g];
- if (prev < 0)
- prev = prevsf;
- sce->sf_idx[w*16+g] = av_clip(sce->sf_idx[w*16+g], prev - SCALE_MAX_DIFF, prev + SCALE_MAX_DIFF);
- sce->band_type[w*16+g] = find_min_book(maxvals[w*16+g], sce->sf_idx[w*16+g]);
- prev = sce->sf_idx[w*16+g];
- if (!fflag && prevsf != sce->sf_idx[w*16+g])
- fflag = 1;
- }
- }
- }
-
- its++;
- } while (fflag && its < maxits);
-
- /** Scout out next nonzero bands */
- ff_init_nextband_map(sce, nextband);
-
- prev = -1;
- for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) {
- /** Make sure proper codebooks are set */
- for (g = 0; g < sce->ics.num_swb; g++) {
- if (!sce->zeroes[w*16+g]) {
- sce->band_type[w*16+g] = find_min_book(maxvals[w*16+g], sce->sf_idx[w*16+g]);
- if (sce->band_type[w*16+g] <= 0) {
- if (!ff_sfdelta_can_remove_band(sce, nextband, prev, w*16+g)) {
- /** Cannot zero out, make sure it's not attempted */
- sce->band_type[w*16+g] = 1;
- } else {
- sce->zeroes[w*16+g] = 1;
- sce->band_type[w*16+g] = 0;
- }
- }
- } else {
- sce->band_type[w*16+g] = 0;
- }
- /** Check that there's no SF delta range violations */
- if (!sce->zeroes[w*16+g]) {
- if (prev != -1) {
- av_unused int sfdiff = sce->sf_idx[w*16+g] - prev + SCALE_DIFF_ZERO;
- av_assert1(sfdiff >= 0 && sfdiff <= 2*SCALE_MAX_DIFF);
- } else if (sce->zeroes[0]) {
- /** Set global gain to something useful */
- sce->sf_idx[0] = sce->sf_idx[w*16+g];
- }
- prev = sce->sf_idx[w*16+g];
- }
- }
- }
-}
-
-#endif /* AVCODEC_AACCODER_TWOLOOP_H */
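The comments in the deleted coder above describe the two-loop idea: an inner loop nudges scalefactors until the frame fits the bit budget, and an outer loop lowers scalefactors on bands that remain too distorted until nothing changes. The sketch below is only a language-neutral illustration of that control flow with stand-in rate/distortion callables; it is not the FFmpeg implementation.

```python
# Conceptual two-loop quantizer search: inner loop = rate control,
# outer loop = distortion control. rate_for() and distortion_for() are
# stand-ins for the encoder's quantize_band_cost-style helpers.
def two_loop_search(bands, bit_budget, rate_for, distortion_for, limits, max_iters=30):
    sf = {b: 0 for b in bands}                      # per-band scalefactor indices
    for _ in range(max_iters):
        step = 32
        while step:                                 # inner loop: fit the bit budget
            total_bits = sum(rate_for(b, sf[b]) for b in bands)
            if total_bits > bit_budget:
                for b in bands:
                    sf[b] += step                   # coarser quantization, fewer bits
            elif total_bits < bit_budget * 0.9:
                for b in bands:
                    sf[b] = max(0, sf[b] - step)    # spend spare bits on quality
            step //= 2
        changed = False
        for b in bands:                             # outer loop: fix distorted bands
            if sf[b] > 0 and distortion_for(b, sf[b]) > limits[b]:
                sf[b] -= 1
                changed = True
        if not changed:
            break
    return sf
```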
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/fmtconvert_init_arm.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/fmtconvert_init_arm.c
deleted file mode 100644
index e12f83c8420eaff17f0a4103a94605e0ce1428bf..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/fmtconvert_init_arm.c
+++ /dev/null
@@ -1,52 +0,0 @@
-/*
- * ARM optimized Format Conversion Utils
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <stdint.h>
-
-#include "libavutil/attributes.h"
-#include "libavutil/arm/cpu.h"
-#include "libavcodec/fmtconvert.h"
-
-void ff_int32_to_float_fmul_array8_neon(FmtConvertContext *c, float *dst,
- const int32_t *src, const float *mul,
- int len);
-void ff_int32_to_float_fmul_scalar_neon(float *dst, const int32_t *src,
- float mul, int len);
-
-void ff_int32_to_float_fmul_scalar_vfp(float *dst, const int32_t *src,
- float mul, int len);
-void ff_int32_to_float_fmul_array8_vfp(FmtConvertContext *c, float *dst,
- const int32_t *src, const float *mul,
- int len);
-
-av_cold void ff_fmt_convert_init_arm(FmtConvertContext *c)
-{
- int cpu_flags = av_get_cpu_flags();
-
- if (have_vfp_vm(cpu_flags)) {
- c->int32_to_float_fmul_scalar = ff_int32_to_float_fmul_scalar_vfp;
- c->int32_to_float_fmul_array8 = ff_int32_to_float_fmul_array8_vfp;
- }
-
- if (have_neon(cpu_flags)) {
- c->int32_to_float_fmul_array8 = ff_int32_to_float_fmul_array8_neon;
- c->int32_to_float_fmul_scalar = ff_int32_to_float_fmul_scalar_neon;
- }
-}
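ff_fmt_convert_init_arm() above fills a table of function pointers with the best routine the CPU supports (VFP first, then NEON). The same dispatch pattern, reduced to a hypothetical Python sketch with stand-in feature detection:

```python
# Stand-in sketch of runtime CPU-feature dispatch: pick the fastest available
# implementation and store it in a context table. Nothing here is FFmpeg API.
def int32_to_float_fmul_scalar_c(src, mul):
    return [x * mul for x in src]                 # portable reference version

def int32_to_float_fmul_scalar_neon(src, mul):
    return [x * mul for x in src]                 # placeholder for the SIMD version

def detect_cpu_features():
    return {"neon"}                               # stand-in for av_get_cpu_flags()

def fmt_convert_init():
    ctx = {"int32_to_float_fmul_scalar": int32_to_float_fmul_scalar_c}
    if "neon" in detect_cpu_features():
        ctx["int32_to_float_fmul_scalar"] = int32_to_float_fmul_scalar_neon
    return ctx

print(fmt_convert_init()["int32_to_float_fmul_scalar"]([1, 2, 3], 0.5))
```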
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ass_split.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ass_split.h
deleted file mode 100644
index a45fb9b8a117d6102f7c066c3df3a90beec4f2b2..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ass_split.h
+++ /dev/null
@@ -1,191 +0,0 @@
-/*
- * SSA/ASS splitting functions
- * Copyright (c) 2010 Aurelien Jacobs
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_ASS_SPLIT_H
-#define AVCODEC_ASS_SPLIT_H
-
-/**
- * fields extracted from the [Script Info] section
- */
-typedef struct {
- char *script_type; /**< SSA script format version (eg. v4.00) */
- char *collisions; /**< how subtitles are moved to prevent collisions */
- int play_res_x; /**< video width that ASS coords are referring to */
- int play_res_y; /**< video height that ASS coords are referring to */
- float timer; /**< time multiplier to apply to SSA clock (in %) */
-} ASSScriptInfo;
-
-/**
- * fields extracted from the [V4(+) Styles] section
- */
-typedef struct {
-    char *name;            /**< name of the style (case sensitive) */
- char *font_name; /**< font face (case sensitive) */
- int font_size; /**< font height */
- int primary_color; /**< color that a subtitle will normally appear in */
- int secondary_color;
- int outline_color; /**< color for outline in ASS, called tertiary in SSA */
- int back_color; /**< color of the subtitle outline or shadow */
- int bold; /**< whether text is bold (1) or not (0) */
- int italic; /**< whether text is italic (1) or not (0) */
- int underline; /**< whether text is underlined (1) or not (0) */
- int strikeout;
- float scalex;
- float scaley;
- float spacing;
- float angle;
- int border_style;
- float outline;
- float shadow;
- int alignment; /**< position of the text (left, center, top...),
- defined after the layout of the numpad
- (1-3 sub, 4-6 mid, 7-9 top) */
- int margin_l;
- int margin_r;
- int margin_v;
- int alpha_level;
- int encoding;
-} ASSStyle;
-
-/**
- * fields extracted from the [Events] section
- */
-typedef struct {
- int readorder;
- int layer; /**< higher numbered layers are drawn over lower numbered */
- int start; /**< start time of the dialog in centiseconds */
- int end; /**< end time of the dialog in centiseconds */
- char *style; /**< name of the ASSStyle to use with this dialog */
- char *name;
- int margin_l;
- int margin_r;
- int margin_v;
- char *effect;
- char *text; /**< actual text which will be displayed as a subtitle,
- can include style override control codes (see
- ff_ass_split_override_codes()) */
-} ASSDialog;
-
-/**
- * structure containing the whole split ASS data
- */
-typedef struct {
- ASSScriptInfo script_info; /**< general information about the SSA script*/
- ASSStyle *styles; /**< array of split out styles */
- int styles_count; /**< number of ASSStyle in the styles array */
- ASSDialog *dialogs; /**< array of split out dialogs */
- int dialogs_count; /**< number of ASSDialog in the dialogs array*/
-} ASS;
-
-/**
- * This struct can be cast to ASS to access the split data.
- */
-typedef struct ASSSplitContext ASSSplitContext;
-
-/**
- * Split a full ASS file or an ASS header from a string buffer and store
- * the split structure in a newly allocated context.
- *
- * @param buf String containing the ASS formatted data.
- * @return Newly allocated struct containing split data.
- */
-ASSSplitContext *ff_ass_split(const char *buf);
-
-/**
- * Free a dialogue obtained from ff_ass_split_dialog().
- */
-void ff_ass_free_dialog(ASSDialog **dialogp);
-
-/**
- * Split one ASS Dialogue line from a string buffer.
- *
- * @param ctx Context previously initialized by ff_ass_split().
- * @param buf String containing the ASS "Dialogue" line.
- * @return Pointer to the split ASSDialog. Must be freed with ff_ass_free_dialog()
- */
-ASSDialog *ff_ass_split_dialog(ASSSplitContext *ctx, const char *buf);
-
-/**
- * Free all the memory allocated for an ASSSplitContext.
- *
- * @param ctx Context previously initialized by ff_ass_split().
- */
-void ff_ass_split_free(ASSSplitContext *ctx);
-
-
-/**
- * Set of callback functions corresponding to each override codes that can
- * be encountered in a "Dialogue" Text field.
- */
-typedef struct {
- /**
- * @defgroup ass_styles ASS styles
- * @{
- */
- void (*text)(void *priv, const char *text, int len);
- void (*new_line)(void *priv, int forced);
- void (*style)(void *priv, char style, int close);
- void (*color)(void *priv, unsigned int /* color */, unsigned int color_id);
- void (*alpha)(void *priv, int alpha, int alpha_id);
- void (*font_name)(void *priv, const char *name);
- void (*font_size)(void *priv, int size);
- void (*alignment)(void *priv, int alignment);
- void (*cancel_overrides)(void *priv, const char *style);
- /** @} */
-
- /**
- * @defgroup ass_functions ASS functions
- * @{
- */
- void (*move)(void *priv, int x1, int y1, int x2, int y2, int t1, int t2);
- void (*origin)(void *priv, int x, int y);
- /** @} */
-
- /**
- * @defgroup ass_end end of Dialogue Event
- * @{
- */
- void (*end)(void *priv);
- /** @} */
-} ASSCodesCallbacks;
-
-/**
- * Split override codes out of an ASS "Dialogue" Text field.
- *
- * @param callbacks Set of callback functions called for each override code
- * encountered.
- * @param priv Opaque pointer passed to the callback functions.
- * @param buf The ASS "Dialogue" Text field to split.
- * @return >= 0 on success, otherwise an error code < 0
- */
-int ff_ass_split_override_codes(const ASSCodesCallbacks *callbacks, void *priv,
- const char *buf);
-
-/**
- * Find an ASSStyle structure by its name.
- *
- * @param ctx Context previously initialized by ff_ass_split().
- * @param style name of the style to search for.
- * @return the ASSStyle corresponding to style, or NULL if style can't be found
- */
-ASSStyle *ff_ass_style_get(ASSSplitContext *ctx, const char *style);
-
-#endif /* AVCODEC_ASS_SPLIT_H */
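To make the splitter API above concrete, the following is a minimal sketch, not taken from FFmpeg itself, of how a caller might drive it. The helper names, the use of stdout as the callback's opaque pointer, and the idea that the header and Dialogue strings come from a demuxer are assumptions made purely for illustration.

#include <stdio.h>

#include "ass_split.h"

/* Prints the plain text of one Dialogue event, dropping override codes. */
static void cb_text(void *priv, const char *text, int len)
{
    fwrite(text, 1, len, (FILE *)priv);
}

static void cb_new_line(void *priv, int forced)
{
    fputc('\n', (FILE *)priv);          /* both soft and forced breaks land here */
}

static void cb_end(void *priv)
{
    fputc('\n', (FILE *)priv);          /* end of one Dialogue event */
}

static void dump_event(ASSSplitContext *ctx, const char *dialogue_line)
{
    static const ASSCodesCallbacks callbacks = {
        .text     = cb_text,
        .new_line = cb_new_line,
        .end      = cb_end,
    };
    ASSDialog *dialog = ff_ass_split_dialog(ctx, dialogue_line);
    ASSStyle  *style;

    if (!dialog)
        return;

    style = ff_ass_style_get(ctx, dialog->style);
    if (style)
        printf("[%s] %s %dpt\n", style->name, style->font_name, style->font_size);

    ff_ass_split_override_codes(&callbacks, stdout, dialog->text);
    ff_ass_free_dialog(&dialog);
}

static void dump_script(const char *ass_header, const char *dialogue_line)
{
    ASSSplitContext *ctx = ff_ass_split(ass_header);

    if (!ctx)
        return;

    /* The opaque context can be cast to ASS to inspect the split data. */
    printf("%dx%d, %d style(s)\n",
           ((ASS *)ctx)->script_info.play_res_x,
           ((ASS *)ctx)->script_info.play_res_y,
           ((ASS *)ctx)->styles_count);

    dump_event(ctx, dialogue_line);
    ff_ass_split_free(ctx);
}

The same callback table can be extended with the style, color, font and alignment handlers declared above when a caller needs to translate override codes rather than strip them.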
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_lbr.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_lbr.h
deleted file mode 100644
index 8ff66afcb7ff4ff7df8ceead7ebb407bbf3c031f..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_lbr.h
+++ /dev/null
@@ -1,135 +0,0 @@
-/*
- * Copyright (C) 2016 foo86
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_DCA_LBR_H
-#define AVCODEC_DCA_LBR_H
-
-#include "libavutil/float_dsp.h"
-#include "libavutil/mem_internal.h"
-#include "libavutil/tx.h"
-
-#include "avcodec.h"
-#include "get_bits.h"
-#include "dca.h"
-#include "dca_exss.h"
-#include "dcadsp.h"
-
-#define DCA_LBR_CHANNELS 6
-#define DCA_LBR_CHANNELS_TOTAL 32
-#define DCA_LBR_SUBBANDS 32
-#define DCA_LBR_TONES 512
-
-#define DCA_LBR_TIME_SAMPLES 128
-#define DCA_LBR_TIME_HISTORY 8
-
-enum DCALBRHeader {
- DCA_LBR_HEADER_SYNC_ONLY = 1,
- DCA_LBR_HEADER_DECODER_INIT = 2
-};
-
-typedef struct DCALbrTone {
- uint8_t x_freq; ///< Spectral line offset
- uint8_t f_delt; ///< Difference between original and center frequency
- uint8_t ph_rot; ///< Phase rotation
- uint8_t pad; ///< Padding field
- uint8_t amp[DCA_LBR_CHANNELS]; ///< Per-channel amplitude
- uint8_t phs[DCA_LBR_CHANNELS]; ///< Per-channel phase
-} DCALbrTone;
-
-typedef struct DCALbrDecoder {
- AVCodecContext *avctx;
- GetBitContext gb;
-
- int sample_rate; ///< Sample rate of LBR audio
- int ch_mask; ///< LBR speaker mask
- int flags; ///< Flags for LBR decoder initialization
- int bit_rate_orig; ///< Original bit rate
- int bit_rate_scaled; ///< Scaled bit rate
-
- int nchannels; ///< Number of fullband channels to decode
- int nchannels_total; ///< Total number of fullband channels
- int freq_range; ///< Frequency range of LBR audio
- int band_limit; ///< Band limit factor
- int limited_rate; ///< Band limited sample rate
- int limited_range; ///< Band limited frequency range
- int res_profile; ///< Resolution profile
- int nsubbands; ///< Number of encoded subbands
- int g3_avg_only_start_sb; ///< Subband index where grid 3 scale factors end
- int min_mono_subband; ///< Subband index where mono encoding starts
- int max_mono_subband; ///< Subband index where mono encoding ends
-
- int framenum; ///< Lower 5 bits of current frame number
- int lbr_rand; ///< Seed for subband randomization
- int warned; ///< Flags for warning suppression
-
- uint8_t quant_levels[DCA_LBR_CHANNELS / 2][DCA_LBR_SUBBANDS]; ///< Quantization levels
- uint8_t sb_indices[DCA_LBR_SUBBANDS]; ///< Subband reordering indices
-
- uint8_t sec_ch_sbms[DCA_LBR_CHANNELS / 2][DCA_LBR_SUBBANDS]; ///< Right channel inversion or mid/side decoding flags
- uint8_t sec_ch_lrms[DCA_LBR_CHANNELS / 2][DCA_LBR_SUBBANDS]; ///< Flags indicating if left/right channel are swapped
- uint32_t ch_pres[DCA_LBR_CHANNELS]; ///< Subband allocation flags
-
- uint8_t grid_1_scf[DCA_LBR_CHANNELS][12][8]; ///< Grid 1 scale factors
- uint8_t grid_2_scf[DCA_LBR_CHANNELS][3][64]; ///< Grid 2 scale factors
-
- int8_t grid_3_avg[DCA_LBR_CHANNELS][DCA_LBR_SUBBANDS - 4]; ///< Grid 3 average values
- int8_t grid_3_scf[DCA_LBR_CHANNELS][DCA_LBR_SUBBANDS - 4][8]; ///< Grid 3 scale factors
- uint32_t grid_3_pres[DCA_LBR_CHANNELS]; ///< Grid 3 scale factors presence flags
-
- uint8_t high_res_scf[DCA_LBR_CHANNELS][DCA_LBR_SUBBANDS][8]; ///< High-frequency resolution scale factors
-
- uint8_t part_stereo[DCA_LBR_CHANNELS][DCA_LBR_SUBBANDS / 4][5]; ///< Partial stereo coefficients
- uint8_t part_stereo_pres; ///< Partial stereo coefficients presence flags
-
- float lpc_coeff[2][DCA_LBR_CHANNELS][3][2][8]; ///< Predictor coefficients
-
- float sb_scf[DCA_LBR_SUBBANDS]; ///< Subband randomization scale factors
-
- float *time_samples[DCA_LBR_CHANNELS][DCA_LBR_SUBBANDS]; ///< Time samples
-
- float *ts_buffer; ///< Time sample buffer base
- unsigned int ts_size; ///< Time sample buffer size
-
- DECLARE_ALIGNED(32, float, history)[DCA_LBR_CHANNELS][DCA_LBR_SUBBANDS * 4]; ///< IMDCT history
- DECLARE_ALIGNED(32, float, window)[DCA_LBR_SUBBANDS * 4]; ///< Long window for IMDCT
-
- DECLARE_ALIGNED(32, float, lfe_data)[64]; ///< Decimated LFE samples
- DECLARE_ALIGNED(32, float, lfe_history)[5][2]; ///< LFE IIR filter history
- float lfe_scale; ///< Scale factor of LFE samples before IIR filter
-
- uint8_t tonal_scf[6]; ///< Tonal scale factors
- uint16_t tonal_bounds[5][32][2]; ///< Per-group per-subframe start/end positions of tones
- DCALbrTone tones[DCA_LBR_TONES]; ///< Circular buffer of tones
- int ntones; ///< Circular buffer head position
-
- AVTXContext *imdct;
- av_tx_fn imdct_fn;
- AVFloatDSPContext *fdsp;
- DCADSPContext *dcadsp;
-} DCALbrDecoder;
-
-int ff_dca_lbr_parse(DCALbrDecoder *s, const uint8_t *data, DCAExssAsset *asset);
-int ff_dca_lbr_filter_frame(DCALbrDecoder *s, AVFrame *frame);
-av_cold void ff_dca_lbr_flush(DCALbrDecoder *s);
-av_cold void ff_dca_lbr_init_tables(void);
-av_cold int ff_dca_lbr_init(DCALbrDecoder *s);
-av_cold void ff_dca_lbr_close(DCALbrDecoder *s);
-
-#endif
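The header above only declares the LBR entry points, so the following is a rough, assumption-laden sketch of the call order those declarations suggest: open once, parse and filter per frame, flush on seek, close at the end. It assumes the caller embeds DCALbrDecoder inside a larger DCA decoder and wires up the remaining fields (dcadsp, the EXSS asset, the packet data) elsewhere; it is not code taken from the DCA decoder.

#include "dca_lbr.h"

/* Assumed lifecycle for the declarations above; field wiring beyond avctx
 * is left to the surrounding decoder. */
static int lbr_open(AVCodecContext *avctx, DCALbrDecoder *lbr)
{
    ff_dca_lbr_init_tables();          /* one-time static table setup */
    lbr->avctx = avctx;                /* used for logging and options */
    return ff_dca_lbr_init(lbr);       /* allocate DSP / IMDCT state */
}

static int lbr_decode(DCALbrDecoder *lbr, const uint8_t *lbr_data,
                      DCAExssAsset *asset, AVFrame *frame)
{
    int ret = ff_dca_lbr_parse(lbr, lbr_data, asset);   /* parse one LBR frame */
    if (ret < 0)
        return ret;
    return ff_dca_lbr_filter_frame(lbr, frame);         /* synthesize PCM output */
}

static void lbr_seek(DCALbrDecoder *lbr)
{
    ff_dca_lbr_flush(lbr);             /* drop history, e.g. after a seek */
}

static void lbr_close(DCALbrDecoder *lbr)
{
    ff_dca_lbr_close(lbr);             /* free sample buffers and DSP contexts */
}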
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mediacodec.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mediacodec.h
deleted file mode 100644
index 4e9b56a618454d5439e955598ee870a2c1f75b9b..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mediacodec.h
+++ /dev/null
@@ -1,103 +0,0 @@
-/*
- * Android MediaCodec public API
- *
- * Copyright (c) 2016 Matthieu Bouron
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_MEDIACODEC_H
-#define AVCODEC_MEDIACODEC_H
-
-#include "libavcodec/avcodec.h"
-
-/**
- * This structure holds a reference to an android/view/Surface object that will
- * be used as output by the decoder.
- *
- */
-typedef struct AVMediaCodecContext {
-
- /**
- * android/view/Surface object reference.
- */
- void *surface;
-
-} AVMediaCodecContext;
-
-/**
- * Allocate and initialize a MediaCodec context.
- *
- * When decoding with MediaCodec is finished, the caller must free the
- * MediaCodec context with av_mediacodec_default_free.
- *
- * @return a pointer to a newly allocated AVMediaCodecContext on success, NULL otherwise
- */
-AVMediaCodecContext *av_mediacodec_alloc_context(void);
-
-/**
- * Convenience function that sets up the MediaCodec context.
- *
- * @param avctx codec context
- * @param ctx MediaCodec context to initialize
- * @param surface reference to an android/view/Surface
- * @return 0 on success, < 0 otherwise
- */
-int av_mediacodec_default_init(AVCodecContext *avctx, AVMediaCodecContext *ctx, void *surface);
-
-/**
- * This function must be called to free the MediaCodec context initialized with
- * av_mediacodec_default_init().
- *
- * @param avctx codec context
- */
-void av_mediacodec_default_free(AVCodecContext *avctx);
-
-/**
- * Opaque structure representing a MediaCodec buffer to render.
- */
-typedef struct MediaCodecBuffer AVMediaCodecBuffer;
-
-/**
- * Release a MediaCodec buffer and render it to the surface that is associated
- * with the decoder. This function should only be called once on a given
- * buffer; once released, the underlying buffer returns to the codec, so
- * subsequent calls to this function will have no effect.
- *
- * @param buffer the buffer to render
- * @param render 1 to release and render the buffer to the surface or 0 to
- * discard the buffer
- * @return 0 on success, < 0 otherwise
- */
-int av_mediacodec_release_buffer(AVMediaCodecBuffer *buffer, int render);
-
-/**
- * Release a MediaCodec buffer and render it at the given time to the surface
- * that is associated with the decoder. The timestamp must be within one second
- * of the current `java/lang/System#nanoTime()` (which is implemented using
- * `CLOCK_MONOTONIC` on Android). See the Android MediaCodec documentation
- * of [`android/media/MediaCodec#releaseOutputBuffer(int,long)`][0] for more details.
- *
- * @param buffer the buffer to render
- * @param time timestamp in nanoseconds of when to render the buffer
- * @return 0 on success, < 0 otherwise
- *
- * [0]: https://developer.android.com/reference/android/media/MediaCodec#releaseOutputBuffer(int,%20long)
- */
-int av_mediacodec_render_buffer_at_time(AVMediaCodecBuffer *buffer, int64_t time);
-
-#endif /* AVCODEC_MEDIACODEC_H */
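Since the header documents the workflow only in prose, here is a hedged sketch of how the helpers above might be combined. The decoder name "h264_mediacodec", the way the JNI surface reference is obtained, and the convention that a surface-backed AVFrame carries its AVMediaCodecBuffer in data[3] are assumptions of this sketch, and error handling is reduced to giving up.

#include "libavcodec/avcodec.h"
#include "libavcodec/mediacodec.h"

/* Open a MediaCodec decoder that renders straight to an android/view/Surface.
 * `surface` is an assumed JNI reference obtained elsewhere. */
static AVCodecContext *open_mediacodec_decoder(void *surface)
{
    const AVCodec *codec = avcodec_find_decoder_by_name("h264_mediacodec");
    AVCodecContext *avctx = NULL;
    AVMediaCodecContext *mc = av_mediacodec_alloc_context();

    if (!codec || !mc || !(avctx = avcodec_alloc_context3(codec)))
        goto fail;

    /* Attach the surface; output buffers will be rendered onto it and the
     * MediaCodec context must later be released with
     * av_mediacodec_default_free(). */
    if (av_mediacodec_default_init(avctx, mc, surface) < 0 ||
        avcodec_open2(avctx, codec, NULL) < 0)
        goto fail;

    return avctx;

fail:
    avcodec_free_context(&avctx);      /* cleanup deliberately simplified */
    return NULL;
}

/* Assuming surface output, each decoded AVFrame carries an AVMediaCodecBuffer
 * in data[3]; releasing it with render=1 shows the frame, render=0 drops it. */
static int present_frame(AVFrame *frame, int render)
{
    return av_mediacodec_release_buffer((AVMediaCodecBuffer *)frame->data[3],
                                        render);
}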
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Build Your Camp and Fight with Water Guns in Talking Tom Camp APK.md b/spaces/congsaPfin/Manga-OCR/logs/Build Your Camp and Fight with Water Guns in Talking Tom Camp APK.md
deleted file mode 100644
index ccd3d1d2b13478fcfa8993708c0291b958a241f9..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Build Your Camp and Fight with Water Guns in Talking Tom Camp APK.md
+++ /dev/null
@@ -1,66 +0,0 @@
-
-
My Talking Tom Camp APK: A Fun and Exciting Water Fight Game
-
Do you love water fights? Do you want to join the best water fight of your life with your favorite talking cat and his friends? If you answered yes, then you should download My Talking Tom Camp APK, a cool water fighting battle builder game that will keep you entertained for hours. In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, how to play it, and why you should play it. Let's get started!
My Talking Tom Camp APK is an Android game developed by Outfit7 Limited, the creators of the popular Talking Tom and Friends franchise. It is a strategy game where you have to build your own camp, defend it from other players, and attack their camps to collect coins and energy. You can also customize your camp with various buildings, decorations, and water weapons. You can also play with your favorite characters from the Talking Tom and Friends series, such as Tom, Angela, Hank, Ginger, Ben, and Becca. The game has amazing graphics, sound effects, and animations that will make you feel like you are in a real water fight.
-
How to Download and Install My Talking Tom Camp APK?
-
Downloading and installing My Talking Tom Camp APK is very easy and fast. Here are the steps you need to follow:
-
-
Go to APKCombo, a trusted website that offers free and safe APK downloads.
-
Search for "Talking Tom Camp" in the search bar.
-
Select the latest version of the game (1.8.16) and click on "Download APK".
-
Wait for the download to finish and then open the file.
-
Allow the installation of unknown sources if prompted by your device.
-
Follow the instructions on the screen to install the game.
-
Launch the game and enjoy!
-
-
How to Play My Talking Tom Camp APK?
-
Playing My Talking Tom Camp APK is very simple and fun. Here are the basics of the gameplay and the objectives:
-
-
Build your camp
-
Your camp is your base of operations where you can produce coins and energy, train your troops, upgrade your weapons, and more. You can also decorate your camp with various items to make it look awesome. To build your camp, you need to tap on the "Build" button at the bottom right corner of the screen. You can then choose from different categories of buildings, such as Troop Shop, Hero Hall, Tower, Coin Factory, Energy Generator, etc. You can also level up your Minivan and the buildings in your camp to get better weapons and more energy.
-
Defend and attack
-
Your camp is not safe from other players who want to steal your coins and energy. You need to set up defenses such as Sprinkler, Tower, Puddle, Catapult, Cannon, etc. to protect your camp from invaders. You can also plan sneaky attacks on other players' camps by tapping on the "Attack" button at the bottom left corner of the screen. You can then choose from different modes of attack, such as Single Player Campaign, Multiplayer Online Battle, or Friendly Battle. You can also play the game on your PC using an Android emulator. Emulators are software that allow you to run Android apps on your PC. You need to download and install the emulator on your PC, and then download and install My Talking Tom Camp APK from APKCombo or any other website. You can then launch the game and play it on your PC.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Geometry Dash SubZero MOD APK and Join the Geometry Heroes in Their Icy Quest.md b/spaces/congsaPfin/Manga-OCR/logs/Download Geometry Dash SubZero MOD APK and Join the Geometry Heroes in Their Icy Quest.md
deleted file mode 100644
index eaa6725c6b7c775995e8645eabde326de2cd6ca4..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Geometry Dash SubZero MOD APK and Join the Geometry Heroes in Their Icy Quest.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
Download Game Geometry Dash SubZero Mod APK
-
If you are looking for a fun and challenging game that will test your skills and reflexes, you should try Geometry Dash SubZero. This is a rhythm-based platformer game that will make you jump, fly, and flip through dangerous obstacles and spiky traps. You can download the game for free from the Google Play Store or the App Store, but if you want to enjoy some extra features and advantages, you should download the modded version of the game. In this article, we will tell you what Geometry Dash SubZero is, why you should download the mod apk, and how to do it.
-
What is Geometry Dash SubZero?
-
Geometry Dash SubZero is a spin-off of the popular Geometry Dash series, created by RobTop Games. The game features three unique levels with different themes and music, each with its own boss fight. The game also has a practice mode, where you can save your progress and resume from checkpoints. The game is known for its addictive gameplay and challenging levels that require speed, accuracy, and coordination to succeed.
Some of the features of Geometry Dash SubZero are:
-
-
Rhythm-based action platforming
-
Three unique levels with amazing music from MDK, Bossfight and Boom Kitty
-
Unlock new icons and colors to customize your character
-
Fly rockets, flip gravity and much more
-
Use practice mode to sharpen your skills
-
Challenge yourself with the near impossible
-
-
How to play Geometry Dash SubZero
-
The gameplay of Geometry Dash SubZero is simple but hard to master. You have to tap the screen to make your character jump over obstacles and avoid spikes. You can also hold the screen to make your character fly or flip gravity. You have to follow the rhythm of the music and react quickly to the changing environment. The game is very fast-paced and requires concentration and precision. If you hit an obstacle or fall into a pit, you have to start over from the beginning or from the last checkpoint.
-
Why download Geometry Dash SubZero Mod APK?
-
Geometry Dash SubZero is a free game, but it has some limitations and ads that can affect your gaming experience. For example, you have to watch ads to unlock some icons and colors, or you have to pay real money to remove them. You also have to complete the levels in normal mode before you can access the practice mode. If you want to bypass these restrictions and enjoy some extra benefits, you should download the modded version of the game.
-
Benefits of Geometry Dash SubZero Mod APK
-
The latest version, Geometry Dash SubZero V2.2.12 MOD Unlocked, is now available for download. The mod apk offers unlocked features that give players an enhanced experience. Some of the benefits of Geometry Dash SubZero Mod APK are:
-
-
Unlimited money: You can use the money to buy anything you want in the game, such as icons, colors, trails, etc.
-
All levels unlocked: You can play any level you want without having to complete the previous ones.
-
All icons unlocked: You can customize your character with any icon you like.
-
No ads: You can enjoy the game without any annoying ads.
-
No root required: You don't need to root your device to install the mod apk.
-
-
How to download and install Geometry Dash SubZero Mod APK
-
The process of downloading and installing Geometry Dash SubZero Mod APK is very easy and straightforward. Just follow these steps:
-
-
Click on this link to download the mod apk file.
-
Allow unknown sources in your device settings.
-
Locate the downloaded file in your file manager and tap on it.
-
Follow the instructions on the screen to install the mod apk.
-
Launch the game and enjoy the mod features.
-
-
Conclusion
-
Geometry Dash SubZero is a fun and challenging game that will keep you entertained for hours. You can download the game for free from the official stores, but if you want to have more fun and advantages, you should download the mod apk. The mod apk will give you unlimited money, all levels and icons unlocked, no ads, and no root required. You can download the mod apk from this link and follow the simple steps to install it. We hope you enjoy playing Geometry Dash SubZero Mod APK.
-
FAQs
-
Here are some frequently asked questions about Geometry Dash SubZero Mod APK:
-
-
Q: Is Geometry Dash SubZero Mod APK safe to use?
-
A: Yes, Geometry Dash SubZero Mod APK is safe to use. It does not contain any viruses or malware that can harm your device. However, you should always download the mod apk from a trusted source and scan it before installing it.
-
Q: Can I play Geometry Dash SubZero Mod APK online?
-
A: No, Geometry Dash SubZero Mod APK is an offline game. You don't need an internet connection to play it. However, you may need to update the game from time to time to get the latest features and bug fixes.
-
Q: Can I sync my progress with the original game?
-
A: No, Geometry Dash SubZero Mod APK is a separate game from the original one. You cannot sync your progress or achievements with the original game. You will have to start from scratch if you switch between the two versions.
-
Q: What are the minimum requirements to play Geometry Dash SubZero Mod APK?
-
A: Geometry Dash SubZero Mod APK is compatible with Android 4.0 and above devices. You will need at least 50 MB of free storage space on your device to install the mod apk.
-
Q: How can I contact the developer of Geometry Dash SubZero Mod APK?
-
A: You can contact the developer of Geometry Dash SubZero Mod APK by visiting their website or by sending them an email at support@robtopgames.com.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Ludo King MOD APK and Enjoy All Themes Unlocked for Free.md b/spaces/congsaPfin/Manga-OCR/logs/Download Ludo King MOD APK and Enjoy All Themes Unlocked for Free.md
deleted file mode 100644
index dadb4a0586cc53575569b8d0deb8009f692aa759..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Ludo King MOD APK and Enjoy All Themes Unlocked for Free.md
+++ /dev/null
@@ -1,79 +0,0 @@
-
-
Ludo King Mod APK Download All Themes Unlocked
-
Ludo King is one of the most popular board games in the world. It is a game that can be enjoyed by people of all ages, from kids to adults. But what if you want to unlock all the themes, get unlimited coins, and play without ads? That's where Ludo King Mod APK comes in. In this article, we will tell you everything you need to know about Ludo King Mod APK, including what it is, how to download it, and how to use it. We will also share some tips and tricks to help you win more games in Ludo King. So, let's get started!
Ludo King is a mobile game application that is based on the classic board game Ludo. Ludo is derived from the ancient Indian game of Pachisi, which was played by the kings and queens of India. The objective of the game is to move four tokens of your color from the starting point to the finishing point, while avoiding or capturing the tokens of your opponents. The game can be played by two to six players, either online or offline.
-
History and popularity of Ludo King
-
Ludo King was developed by Gametion Technologies Pvt Ltd, a game studio based in Navi Mumbai, India. The game was released on February 20, 2016, on the Apple App Store and the Google Play Store. Since then, it has become one of the most downloaded games in India and many other countries. It has over 500 million downloads on Google Play Store alone, making it the first Indian gaming app to achieve this milestone. It has also won several awards, such as the Best Casual Game of 2018 on Google Play Store by popular vote.
-
Features and rules of Ludo King
-
Ludo King has four modes: vs Computer, Local Mode (pass and play), Online Multiplayer (play with other players around the world), and Private Multiplayer (play with friends in private rooms). In each mode, you can choose to play with two to six players, with four different colors: red, blue, green, and yellow. You can also choose to play with different rules, such as doubling the stakes, killing one before entering home, etc.
-
The game follows the traditional rules and the old school look of the Ludo game. You need to roll a six on the dice to enter your token on the board. You can then move your token according to the number on the dice. You can also roll a six again and get another turn. However, if you roll a six three times in a row, your turn will be skipped.
-
-
You can capture your opponent's token by landing on the same square as them, unless they are on a safe square (marked with a star or an arrow). Capturing a token will send it back to its starting point. You can also form a block by placing two tokens of your color on the same square, which will prevent your opponents from passing through.
-
You need to reach the center of the board with all four tokens of your color to win the game. You can only enter the home column with an exact roll of the dice. For example, if you are three squares away from home, you need to roll a three to enter.
-
How to play Ludo King online with friends
-
One of the best features of Ludo King is that you can play online with your friends and family, no matter where they are. You can also chat with them and send emojis during the game. Here are the steps to play Ludo King online with friends:
1. Open the Ludo King app on your device and tap on the Play with Friends button on the main screen.
2. Choose whether you want to create a room or join a room. If you create a room, you can set the number of players, the color, the rules, and the entry fee (in coins). You will also get a room code that you can share with your friends. If you join a room, you need to enter the room code that your friend has shared with you.
3. Once you are in the room, you can invite your friends from your contact list, Facebook, or WhatsApp. You can also wait for random players to join the room if there are empty slots.
4. When everyone is ready, tap on the Start button to begin the game. You can chat with your friends and send emojis by tapping on the chat icon on the top right corner of the screen.
5. Enjoy playing Ludo King online with your friends!
What is Ludo King Mod APK?
-
Ludo King Mod APK is a modified version of the original Ludo King app that gives you access to some extra features and benefits that are not available in the official app. Some of these features and benefits are:
- All themes unlocked: You can choose from different themes to customize your board and tokens, such as Nature, Egypt, Disco, Cake, Candy, etc.
- Unlimited coins: You can get unlimited coins to play more games and enter higher stakes rooms.
- No ads: You can play without any interruptions or distractions from annoying ads.
- Cheat mode: You can use cheat mode to manipulate the dice and get the desired number.
- Auto win: You can use auto win to win every game without any effort.
-
Benefits of Ludo King Mod APK
-
Ludo King Mod APK has some benefits that make it more fun and enjoyable than the original app. Some of these benefits are:
- You can explore different themes and backgrounds that suit your mood and preference.
- You can play more games and challenge more players with unlimited coins.
- You can avoid wasting time and data on ads that pop up every now and then.
- You can have an edge over your opponents by using cheat mode and auto win.
- You can impress your friends and family with your high scores and achievements.
-
Drawbacks of Ludo King Mod APK
-
Ludo King Mod APK also has some drawbacks that make it risky and unsafe to use. Some of these drawbacks are:
- It is not an official app and it is not available on the Google Play Store or the Apple App Store. You have to download it from third-party sources that may contain viruses or malware.
- It may not be compatible with your device or the latest version of the original app. It may also crash or freeze during the game.
- It may violate the terms and conditions of the original app and get you banned from playing online or offline.
- It may spoil the fun and challenge of the game by making it too easy or unfair.
- It may ruin your reputation and credibility among your friends and family by making them think that you are cheating or hacking.
-
How to download and install Ludo King Mod APK
-
If you still want to try Ludo King Mod APK despite its drawbacks, here are the steps to download and install it on your device:
1. Go to a trusted website that provides the Ludo King Mod APK download link.
2. Tap on the download button and wait for the file to be downloaded on your device.
3. Go to your device settings and enable unknown sources to allow installation of apps from outside sources.
4. Locate the downloaded file in your file manager and tap on it to start the installation process.
5. Follow the instructions on the screen and grant the required permissions to complete the installation.
6. Launch the app and enjoy playing Ludo King Mod APK.
-
Tips and tricks to win Ludo King games
-
If you want to improve your skills and win more games in Ludo King, here are some tips and tricks that you can follow:
Open all your pieces
-
One of the first things you should do in Ludo King is to open all your pieces as soon as possible. This will give you more options to move around the board and capture your opponent's pieces. It will also reduce the chances of getting stuck or blocked by your own pieces.
-
Be patient and strategic
-
Ludo King is a game of luck and strategy. You need to be patient and wait for the right opportunity to make your move and not rush or panic. You need to plan your moves ahead and anticipate your opponent's moves. You need to balance between offense and defense, and know when to attack and when to retreat. You need to use your dice wisely and not waste your rolls.
-
Attack and defend wisely
-
One of the main aspects of Ludo King is to attack and defend your pieces from your opponents. You need to know when to capture your opponent's pieces and when to avoid them. You need to be aggressive and assertive, but not reckless or careless. You need to protect your pieces from being captured, especially when they are close to home. You need to use the safe squares and the blocks to your advantage.
-
Use different game modes and themes
-
Ludo King offers different game modes and themes that can make the game more fun and challenging. You can try the Snake and Ladders mode, which adds another board game element to Ludo King. You can also try the Rush mode, which speeds up the game and makes it more exciting. You can also choose from different themes that change the appearance of the board and the tokens, such as Nature, Egypt, Disco, Cake, Candy, etc.
-
Conclusion
-
Ludo King is a great game that can bring back the nostalgia of playing the classic board game with your friends and family. It is also a game that can test your luck and strategy skills, and keep you entertained for hours. However, if you want to spice up your game and unlock more features and benefits, you can try Ludo King Mod APK. But be careful, as it may also have some risks and drawbacks that you should be aware of. We hope this article has helped you learn more about Ludo King Mod APK, how to download it, and how to use it. We also hope that you have enjoyed our tips and tricks to win more games in Ludo King. Now, go ahead and have fun playing Ludo King with your friends!
-
FAQs
-
Here are some frequently asked questions about Ludo King Mod APK:
-
-
Is Ludo King Mod APK safe to use?
-
Ludo King Mod APK is not an official app and it is not available on the Google Play Store or the Apple App Store. You have to download it from third-party sources that may contain viruses or malware. It may also violate the terms and conditions of the original app and get you banned from playing online or offline. Therefore, it is not safe to use Ludo King Mod APK.
-
Is Ludo King Mod APK legal to use?
-
Ludo King Mod APK is a modified version of the original app that gives you access to some extra features and benefits that are not available in the official app. It may also infringe on the intellectual property rights of the original app developer. Therefore, it is not legal to use Ludo King Mod APK.
-
Can I play online with Ludo King Mod APK?
-
Ludo King Mod APK may allow you to play online with other players who are using the same modded app. However, you may not be able to play online with players who are using the original app or a different modded app. You may also face some technical issues or errors while playing online with Ludo King Mod APK.
-
Can I play offline with Ludo King Mod APK?
-
Ludo King Mod APK may allow you to play offline with your device or with other devices that are connected via Bluetooth or Wi-Fi hotspot. However, you may not be able to save your progress or access some features that require an internet connection while playing offline with Ludo King Mod APK.
-
Can I update Ludo King Mod APK?
-
Ludo King Mod APK may not be compatible with the latest version of the original app or the latest version of your device's operating system. It may also lose some of its features or benefits after updating. Therefore, it is not advisable to update Ludo King Mod APK.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Mini Racing Adventures APK MOD and Race with Martin.md b/spaces/congsaPfin/Manga-OCR/logs/Download Mini Racing Adventures APK MOD and Race with Martin.md
deleted file mode 100644
index 1aa20b0324501a6a90010a1f68f87ba4c6907c65..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Mini Racing Adventures APK MOD and Race with Martin.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
Download APK Mod Mini Racing: A Fun and Exciting Racing Game for Android
-
If you are looking for a new racing game to play on your Android device, you might want to check out APK Mod Mini Racing. This is a modified version of the original Mini Racing game that offers more features and benefits for the players. In this article, we will explain what APK Mod is, what Mini Racing is, and how to download and install APK Mod Mini Racing on your device.
-
What is APK Mod and Why You Should Try It
-
APK Mod is a term that refers to a modified version of an Android application. The modification can be done by anyone who has the skills and tools to change the code, graphics, or functionality of the app. The purpose of modifying an app can vary, but usually it is to provide users with access to new or improved features that are not present in the original version of the app.
APK Mod can unlock premium features and enhance your gaming experience
-
One of the main reasons why people use APK Mod is to unlock premium features that are otherwise locked or require payment in the original app. For example, some games may have in-app purchases that allow you to buy coins, gems, or other resources to progress faster or unlock more content. With APK Mod, you can get these resources for free or unlimited amounts.
-
Another reason why people use APK Mod is to enhance their gaming experience by adding new elements, modes, or options that are not available in the original app. For example, some games may have limited graphics, controls, or customization options that limit your enjoyment of the game. With APK Mod, you can improve the graphics, change the controls, or customize your vehicle as you wish.
-
APK Mod can also pose some risks and challenges
-
However, using APK Mod is not without risks and challenges. First of all, modifying an app is not legal in most cases, as it violates the intellectual property rights of the original developers. Therefore, downloading and using APK Mod can expose you to legal issues or penalties if you are caught.
-
Secondly, using APK Mod can compromise the security and performance of your device. Since APK Mod is not signed by the original developers and has no support from them, it may contain malware or other harmful codes that can damage your device or steal your personal information. Moreover, using APK Mod can cause compatibility issues or errors with the original app or other apps on your device.
-
Therefore, before you decide to use APK Mod, you should weigh the pros and cons carefully and be aware of the potential consequences. You should also only download APK Mod from reputable sources that have positive reviews and feedback from other users.
-
What is Mini Racing and What are Its Features
-
Mini Racing is a low poly arcade racing game developed by Kitty In A Box. It has a simple and intuitive driving model that is easy to learn but hard to master. You can race against up to 8 cars on 18 tracks in different environments. You can also customize your car by changing its color, exhaust, spoilers, and other components. You can improve its performance by upgrading its engine, tires, suspension, and brakes.
-
Mini Racing lets you drive 14 cars on 18 tracks, single, local multiplayer or online multiplayer
-
One of the best features of Mini Racing is that it offers three modes of play: single, local multiplayer, and online multiplayer. You can play solo and challenge yourself to beat your own records or the AI opponents. You can also play with your friends on the same device using the split-screen mode. Or you can join online races and compete with players from around the world. You can also chat with other players and send them emojis.
-
-
Mini Racing has realistic physics, stunning graphics, and customizable vehicles
-
Another great feature of Mini Racing is that it has realistic physics that make the driving experience more immersive and fun. You can feel the weight, speed, and traction of your car as you accelerate, brake, and drift. You can also see the damage effects on your car as you crash or bump into other cars or objects. You can also use the nitro boost to gain an extra edge over your rivals.
-
Mini Racing also has stunning graphics that make the game look beautiful and vibrant. The low poly style gives the game a unique and charming aesthetic that is pleasing to the eye. The game also has dynamic lighting and shadows, weather effects, and day and night cycles that add more realism and variety to the game. The game runs smoothly and has no lag or glitches.
-
Mini Racing also lets you customize your vehicle to suit your preferences and style. You can choose from 14 different cars, each with its own characteristics and stats. You can also change the color, exhaust, spoilers, and other components of your car. You can also upgrade your car's engine, tires, suspension, and brakes to improve its performance and handling.
-
How to Download and Install APK Mod Mini Racing on Your Android Device
-
If you want to try APK Mod Mini Racing on your Android device, you need to follow these simple steps:
-
Find a reputable website that offers the APK Mod Mini Racing file
-
The first step is to find a reliable source that provides the APK Mod Mini Racing file for download. You can search online for websites that offer this file, but be careful not to download from shady or untrustworthy sites that may contain viruses or malware. You can also check the reviews and ratings of other users who have downloaded the file before you.
-
Download the file and enable unknown sources on your device settings
-
The next step is to download the file from the website to your device. You may need to enable unknown sources on your device settings to allow the installation of apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Install the file and enjoy the game
-
The final step is to install the file on your device by tapping on it and following the instructions. Once the installation is complete, you can launch the game and enjoy all the features and benefits of APK Mod Mini Racing.
-
Conclusion
-
APK Mod Mini Racing is a fun and exciting racing game for Android devices that offers more features and benefits than the original version of Mini Racing. It lets you unlock premium features, enhance your gaming experience, drive 14 cars on 18 tracks, customize your vehicle, and play with other players online or offline. However, you should also be aware of the risks and challenges of using APK Mod, such as legal issues, security threats, or compatibility problems. Therefore, you should only download APK Mod from reputable sources and use it at your own discretion.
-
FAQs
-
-
What is the difference between APK Mod Mini Racing and Mini Racing?
-
APK Mod Mini Racing is a modified version of Mini Racing that offers more features and benefits for the players, such as unlimited resources, improved graphics, or new modes.
-
Is APK Mod Mini Racing safe to use?
-
APK Mod Mini Racing may not be safe to use as it is not signed by the original developers and may contain malware or other harmful codes that can damage your device or steal your personal information. Therefore, you should only download APK Mod from reputable sources and use it at your own risk.
-
How do I update APK Mod Mini Racing?
-
To update APK Mod Mini Racing, you need to download the latest version of the file from the website where you downloaded it before and install it over the existing one. However, you may lose some data or settings if you do this.
-
Can I play APK Mod Mini Racing offline?
-
You can play APK Mod Mini Racing offline in single mode or local multiplayer mode. However, you need an internet connection to play online multiplayer mode or access some features that require online verification.
-
How do I uninstall APK Mod Mini Racing?
-
To uninstall APK Mod Mini Racing, you need to go to Settings > Apps > APK Mod Mini Racing and tap on Uninstall. You may also need to delete the file from your device storage.
-
-
I hope this article has helped you learn more about APK Mod Mini Racing and how to download and install it on your Android device. If you have any questions or feedback, please leave a comment below. Thank you for reading and happy racing!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Experience the Ultimate Bus Simulation with Bus Simulator Ultimate APK.md b/spaces/congsaPfin/Manga-OCR/logs/Experience the Ultimate Bus Simulation with Bus Simulator Ultimate APK.md
deleted file mode 100644
index 03f7b04a09e02f25739276f551958794b4eae646..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Experience the Ultimate Bus Simulation with Bus Simulator Ultimate APK.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-
Bus Simulator Ultimate APK AN1: A Realistic and Fun Bus Driving Game
-
Do you love driving games and want to experience what it's like to operate a passenger bus? If so, then you should try Bus Simulator Ultimate APK AN1, a modified version of the popular simulation game from Zuuks Games. This game lets you drive various buses across different countries, cities, and routes, while also managing your own bus company. You can enjoy realistic graphics, physics, weather, traffic, passengers, and more in this game. In this article, we will tell you more about what this game is, what features it offers, how to download and install it, and some tips and tricks to help you play it better.
-
What is Bus Simulator Ultimate APK AN1?
-
Bus Simulator Ultimate APK AN1 is a modified version of the original game Bus Simulator: Ultimate, which is available on the Google Play Store. This modified version allows you to access features that are not available in the official game, such as unlimited money, vehicle access, no ads, and improved visuals. You can use these features to enhance your gameplay and enjoy the game more.
The game itself is a realistic and fun bus driving simulation game that lets you travel the world in your own bus. You can choose from different buses, such as Mercedes-Benz, Setra, or MAN, and drive them across various countries, such as the United States, United Kingdom, China, Canada, Russia, Germany, Italy, France, Spain, Netherlands, Turkey, South Korea, Japan, Brazil, Azerbaijan, Belgium, Bulgaria, Czech Republic, Dominican Republic, Indonesia, Philippines, South Africa, India, Hong Kong, Ireland, Israel, Qatar, Malaysia, Thailand, Taiwan and more. You can also customize your bus with different colors, stickers, accessories, and upgrades.
-
As you drive your bus, you will encounter realistic city maps and bus stations. You will also have to deal with dynamic weather conditions, such as rain or snow. You will have to follow traffic rules and regulations, such as stopping at red lights or using turn signals. You will also have to interact with your passengers and provide them with a comfortable and safe ride. You can also use the host service feature to offer them snacks or drinks.
-
Besides driving your bus, you will also have to manage your own bus company. You can establish offices in different locations to unlock more routes and destinations. You can also hire drivers to work for you and earn passive income. You can also compete with other players in the multiplayer mode and join the ultimate league.
-
Features of Bus Simulator Ultimate APK AN1
-
- Realistic bus physics and graphics
-
The game offers realistic bus physics and graphics that make you feel like you are actually driving a real bus. You can see the details of the exterior and interior of your bus. You can also see the reflections of the sun or the rain on your windshield. You can also hear the sound of your engine or your horn.
-
- Detailed interiors and cockpits
-
The game also offers detailed interiors and cockpits for each bus. You can see the dashboard, the steering wheel, the pedals, the mirrors, the buttons, the switches, and more. You can also control them using touch or tilt controls. You can also adjust your seat position or change your camera angle. You can also use the indicators, the headlights, the wipers, the doors, and more.
-
- Dynamic weather and traffic system
-
The game also offers a dynamic weather and traffic system that affects your driving experience. You can encounter different weather conditions, such as sunny, cloudy, rainy, snowy, foggy, or stormy. You can also see the day and night cycle and the seasons change. You can also face different traffic situations, such as congestions, accidents, roadworks, or police patrols. You have to adapt your driving style accordingly and be prepared for any surprises.
-
-
- Various buses and routes to choose from
-
The game also offers various buses and routes to choose from. You can select from different bus models, such as Mercedes-Benz, Setra, or MAN. You can also customize your bus with different colors, stickers, accessories, and upgrades. You can also choose from different routes and destinations across the world. You can drive on highways, city streets, rural roads, or mountain roads. You can also visit famous landmarks, such as the Eiffel Tower, the Statue of Liberty, or the Great Wall of China.
-
- Passenger system and host service
-
The game also offers a passenger system and a host service feature that make your driving more realistic and fun. You can see your passengers boarding and leaving your bus. You can also hear their comments and feedback. You have to provide them with a comfortable and safe ride by following traffic rules and regulations, avoiding collisions or accidents, and maintaining a good speed and distance. You can also use the host service feature to offer them snacks or drinks during the journey.
-
- Business management and multiplayer mode
-
The game also offers a business management and a multiplayer mode that make your driving more challenging and rewarding. You can manage your own bus company by establishing offices in different locations to unlock more routes and destinations. You can also hire drivers to work for you and earn passive income. You can also compete with other players in the multiplayer mode and join the ultimate league. You can compare your performance and ranking with others on the leaderboard.
-
How to download and install Bus Simulator Ultimate APK AN1?
-
If you want to download and install Bus Simulator Ultimate APK AN1, you have to follow these steps:
-
-
Go to this link to download the APK file of the game.
-
Allow unknown sources on your device by going to Settings > Security > Unknown Sources.
-
Locate the downloaded APK file on your device and tap on it to install it.
-
Wait for the installation process to finish.
-
Launch the game and enjoy!
-
-
Note: If you already have the original game installed on your device, you have to uninstall it first before installing the modified version.
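-
If you prefer to sideload from a computer instead of tapping through a file manager, the same install-and-replace flow can be scripted over adb. This is only a minimal sketch under a few assumptions: USB debugging is enabled, adb is installed on the PC, and the package name used here is an illustrative guess rather than one confirmed by this article.

```python
import subprocess
import sys

APK_PATH = "bus_simulator_ultimate_an1.apk"    # the file you downloaded
PACKAGE = "com.zuuks.bus.simulator.ultimate"   # assumed package name -- verify on your device

def adb(*args):
    """Run an adb command and stop with its error output if it fails."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"adb {' '.join(args)} failed: {result.stderr.strip()}")
    return result.stdout.strip()

# 1. Confirm a device is connected and authorized.
print(adb("devices"))

# 2. Remove the original game first, as the note above recommends.
#    This call is allowed to fail if the game was never installed.
subprocess.run(["adb", "uninstall", PACKAGE])

# 3. Sideload the downloaded APK (-r reinstalls/updates if it is already present).
print(adb("install", "-r", APK_PATH))
```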
-
Tips and tricks for playing Bus Simulator Ultimate APK AN1
-
- Follow traffic rules and regulations
-
One of the most important tips for playing this game is to follow traffic rules and regulations. This will help you avoid fines, accidents, and complaints from your passengers. Stop at red lights and stop signs, use your indicators, respect speed limits, obey traffic signals and officers, yield to other vehicles and pedestrians, and park legally.
-
- Adjust the control settings
-
Another tip for playing this game is to adjust the control settings to your preference. You can choose between touch and tilt controls, tune their sensitivity and calibration, and enable or disable options such as steering wheel assistance, brake assistance, and automatic transmission.
-
- Drive defensively and carefully
-
Another tip for playing this game is to drive defensively and carefully. This will help you avoid collisions and accidents that can damage your bus or injure your passengers. Keep a safe speed and distance from other vehicles and obstacles, brake gradually, steer smoothly, and avoid sudden or sharp movements.
-
- Make the right investments
-
Another tip for playing this game is to make the right investments for your bus company. Spend your money wisely on buying new buses or upgrading existing ones, choose buses that suit your routes and destinations, equip them with features that improve performance or comfort, and maintain or repair them regularly.
-
- Hire extra drivers and rent more offices
-
Another tip for playing this game is to hire extra drivers and rent more offices. Hired drivers run routes for you and earn passive income, while offices in new locations unlock more routes and destinations for your company, so expanding both is the fastest way to grow.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Get Chess 3D Offline Mod APK and Challenge Yourself with Different Levels.md b/spaces/congsaPfin/Manga-OCR/logs/Get Chess 3D Offline Mod APK and Challenge Yourself with Different Levels.md
deleted file mode 100644
index b2d17fd9013a2dd8519d243bb1a653dc33df9ded..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Get Chess 3D Offline Mod APK and Challenge Yourself with Different Levels.md
+++ /dev/null
@@ -1,110 +0,0 @@
-
-
Chess 3D Offline Mod APK: A Fun and Challenging Way to Play Chess on Your Phone
-
Do you love playing chess but don't have a partner or a board? Do you want to improve your chess skills and learn new strategies? Do you want to have unlimited money and access to different game modes and features? If you answered yes to any of these questions, then you should try Chess 3D Offline Mod APK, a modified version of the popular board game that lets you play chess in 3D on your phone.
-
What is Chess 3D Offline Mod APK?
-
Chess 3D Offline Mod APK is a modified version of the original Chess 3D Offline: Play & Learn game, which is available on Google Play Store. This game allows you to play chess in a realistic 3D environment, with different board styles, piece designs, and camera angles. You can also choose from different game modes, such as classic, puzzle, blitz, and tournament. You can play against the computer or with another player on the same device. You can also learn chess rules, tactics, and openings with the help of tutorials and hints.
-
The modded version of this game gives you unlimited money, which you can use to unlock more features and customize your game experience. You can also enjoy the game without any ads or in-app purchases. You can download the modded version of this game from various websites that offer free APK files for Android devices.
-
Features of Chess 3D Offline Mod APK
-
Unlimited money
-
One of the main features of Chess 3D Offline Mod APK is that it gives you unlimited money, which you can use to buy more boards, pieces, themes, and backgrounds. You can also use the money to unlock more game modes, such as puzzle, blitz, and tournament. You can also use the money to increase or decrease the AI difficulty level, which affects the strength and intelligence of your computer opponent.
-
Different game modes
-
Another feature of Chess 3D Offline Mod APK is that it offers different game modes for you to choose from. You can play the classic mode, which follows the standard chess rules and lets you play against the computer or another player on the same device. You can also play the puzzle mode, which challenges you to solve various chess problems and puzzles. You can also play the blitz mode, which gives you a limited time to make your moves. You can also play the tournament mode, which lets you compete with other players online or offline.
-
Customizable board and pieces
-
A third feature of Chess 3D Offline Mod APK is that it allows you to customize your board and pieces according to your preference. You can choose from different board styles, such as wood, marble, metal, glass, or plastic, and from different piece designs, such as classic, modern, fantasy, or cartoon. You can also change the color and theme of your board and pieces, and adjust the camera angle and zoom level to get a better view.
-
AI difficulty levels
-
A fourth feature of Chess 3D Offline Mod APK is that it lets you adjust the AI difficulty level of your computer opponent. You can choose from easy, medium, hard, or expert levels, depending on your skill and experience. You can also use the money to increase or decrease the AI level as you wish. The higher the AI level, the more challenging and realistic the game becomes. The lower the AI level, the more forgiving and easy the game becomes.
-
Offline play and learn mode
-
A fifth feature of Chess 3D Offline Mod APK is that it allows you to play and learn chess offline, without any internet connection. You can play against the computer or another player on the same device, or you can practice and improve your chess skills with the help of tutorials and hints. You can also access a library of chess openings, tactics, and strategies, which you can study and apply in your games. You can also track your progress and achievements with the statistics and leaderboards.
-
How to Download and Install Chess 3D Offline Mod APK?
-
If you want to download and install Chess 3D Offline Mod APK on your Android device, you need to follow these simple steps:
-
Step 1: Enable unknown sources on your device
-
Before you can install any APK file on your device, you need to enable unknown sources in your settings. This will allow you to install apps from sources other than Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
chess 3d offline mod apk download
-chess 3d offline mod apk unlimited money
-chess 3d offline mod apk latest version
-chess 3d offline mod apk android
-chess 3d offline mod apk free
-chess 3d offline mod apk hack
-chess 3d offline mod apk premium
-chess 3d offline mod apk full
-chess 3d offline mod apk no ads
-chess 3d offline mod apk revdl
-chess 3d offline play and learn mod apk
-chess 3d offline game mod apk
-chess 3d offline pro mod apk
-chess 3d offline hd mod apk
-chess 3d offline realistic mod apk
-chess 3d offline multiplayer mod apk
-chess 3d offline online mod apk
-chess 3d offline puzzle mod apk
-chess 3d offline simulator mod apk
-chess 3d offline strategy mod apk
-chess 3d offline classic mod apk
-chess 3d offline master mod apk
-chess 3d offline world mod apk
-chess 3d offline adventure mod apk
-chess 3d offline battle mod apk
-download game chess 3d offline mod apk
-download aplikasi chess 3d offline mod apk
-download gratis chess 3d offline mod apk
-download free chess 3d offline mod apk
-download terbaru chess 3d offline mod apk
-cara download chess 3d offline mod apk
-link download chess 3d offline mod apk
-situs download chess 3d offline mod apk
-website download chess 3d offline mod apk
-unduh chess 3d offline mod apk
-install chess 3d offline mod apk
-update chess 3d offline mod apk
-upgrade chess 3d offline mod apk
-patch chess 3d offline mod apk
-fix chess 3d offline mod apk
-cheat chess 3d offline mod apk
-tips and tricks for chess 3d offline mod apk
-guide for chess 3d offline mod apk
-tutorial for chess 3d offline mod apk
-review of chess 3d offline mod apk
-rating of chess 3d offline mod apk
-features of chess 3d offline mod apk
-benefits of chess 3d offline mod apk
-advantages of chess 3d offline mod apk
-
Step 2: Download the APK file from a trusted source
-
Next, you need to download the APK file of Chess 3D Offline Mod APK from a trusted source. You can search for it online or use the link provided below. Make sure you download the latest version of the modded game, which is compatible with your device.
-
Step 3: Locate and install the APK file on your device
-
After downloading the APK file, you need to locate it on your device using a file manager app. You can find it in your Downloads folder or wherever you saved it. Once you find it, tap on it and follow the instructions to install it on your device.
-
Step 4: Launch the game and enjoy
-
Finally, you can launch the game from your app drawer or home screen and enjoy playing chess in 3D with unlimited money and features.
-
Pros and Cons of Chess 3D Offline Mod APK
-
Like any other modded game, Chess 3D Offline Mod APK has its pros and cons. Here are some of them:
-
Pros
-
-
Fun and realistic 3D graphics: The game has stunning and realistic 3D graphics that make you feel like you are playing chess on a real board. You can also change the view and angle of the board and pieces to suit your preference.
-
Easy to use interface and controls: The game has a simple and user-friendly interface that lets you access all the features and options easily. You can also control the game with simple touch gestures, such as drag, drop, swipe, and tap.
-
Educational and entertaining game play: The game is not only fun but also educational, as it teaches you how to play chess and improve your skills. You can learn chess rules, tactics, openings, and strategies with the help of tutorials and hints. You can also test your knowledge and skills with puzzles and challenges.
-
No internet connection required: The game does not require any internet connection to play, which means you can play it anytime and anywhere. You can also save your battery life and data usage by playing offline.
-
-
Cons
-
-
May contain ads and in-app purchases: The game may still contain some ads and in-app purchases that may interrupt or limit your game play. You may need to disable them manually or use an ad blocker app to get rid of them.
-
May not be compatible with some devices: The game may not work properly or at all on some devices, especially older or low-end ones. You may experience crashes, freezes, lags, or errors while playing the game. You may need to check the compatibility of your device before downloading and installing the game.
-
May have some bugs and glitches: The game may have some bugs and glitches that may affect your game play or performance. You may need to update the game regularly or report the issues to the developers to fix them.
-
-
Conclusion
-
Chess 3D Offline Mod APK is a fun and challenging way to play chess on your phone. It offers you unlimited money and features, such as different game modes, customizable board and pieces, AI difficulty levels, offline play and learn mode, and more. It also has realistic 3D graphics, easy to use interface and controls, and educational and entertaining game play. However, it may also have some drawbacks, such as ads and in-app purchases, compatibility issues, and bugs and glitches. You can download and install the game from a trusted source and enjoy playing chess in 3D on your phone.
-
FAQs
-
Here are some frequently asked questions about Chess 3D Offline Mod APK:
-
-
Q: Is Chess 3D Offline Mod APK safe to download and install?
-
A: Yes, Chess 3D Offline Mod APK is safe to download and install, as long as you get it from a trusted source. However, you should always be careful when downloading and installing any APK file on your device, as some of them may contain viruses or malware that can harm your device or steal your data. You should also scan the APK file with an antivirus app before installing it.
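-
One concrete way to act on that advice before installing is to compare the downloaded file's SHA-256 checksum with the one published by the download site, when the site provides one. Below is a minimal sketch; the file name and the expected hash are placeholders, not real values from this article.

```python
import hashlib

APK_PATH = "chess_3d_offline_mod.apk"   # placeholder file name
EXPECTED_SHA256 = "0" * 64              # placeholder -- paste the checksum the site publishes

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MB chunks so large APKs do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(APK_PATH)
print("computed:", actual)
print("expected:", EXPECTED_SHA256)
print("match   :", actual == EXPECTED_SHA256)
```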
-
Q: How can I update Chess 3D Offline Mod APK?
-
A: You can update Chess 3D Offline Mod APK by downloading and installing the latest version of the modded game from the same source where you got it. You should also check for updates regularly to get the latest features and fixes for the game.
-
Q: Can I play Chess 3D Offline Mod APK with other players online?
-
A: Yes, you can play Chess 3D Offline Mod APK with other players online, if you have an internet connection. You can join or create online tournaments and compete with other players around the world. However, you may not be able to play with players who have the original version of the game, as they may have different features and settings.
-
Q: Can I play Chess 3D Offline Mod APK on my PC or laptop?
-
A: Yes, you can play Chess 3D Offline Mod APK on your PC or laptop, if you have an Android emulator installed on your device. An Android emulator is a software that allows you to run Android apps and games on your PC or laptop. You can download and install an Android emulator of your choice, such as BlueStacks, NoxPlayer, or LDPlayer, and then download and install Chess 3D Offline Mod APK on it.
-
Q: What are some alternatives to Chess 3D Offline Mod APK?
-
A: Some alternatives to Chess 3D Offline Mod APK are Chess Master King, Real Chess 3D, Chess Free, Chess Tactics Pro, and Lichess.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Install and Play My Talking Tom 2 on PC with LDPlayer.md b/spaces/congsaPfin/Manga-OCR/logs/How to Install and Play My Talking Tom 2 on PC with LDPlayer.md
deleted file mode 100644
index c697d3321ce1d87b48469b8901793e1eda53ddcc..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Install and Play My Talking Tom 2 on PC with LDPlayer.md
+++ /dev/null
@@ -1,134 +0,0 @@
-
-
my talking tom 2 pc download free
-my talking tom 2 pc game online
-my talking tom 2 pc emulator noxplayer
-my talking tom 2 pc windows 10
-my talking tom 2 pc gameplay
-my talking tom 2 pc version
-my talking tom 2 pc mod apk
-my talking tom 2 pc hack
-my talking tom 2 pc cheats
-my talking tom 2 pc bluestacks
-my talking tom 2 apk for pc offline
-my talking tom 2 apk for pc full version
-my talking tom 2 apk for pc latest update
-my talking tom 2 apk for pc without emulator
-my talking tom 2 apk for pc install
-my talking tom 2 apk for pc free download
-my talking tom 2 apk for pc windows 7
-my talking tom 2 apk for pc now.gg
-my talking tom 2 apk for pc review
-my talking tom 2 apk for pc guide
-how to play my talking tom 2 on pc
-how to download my talking tom 2 on pc
-how to install my talking tom 2 on pc
-how to update my talking tom 2 on pc
-how to hack my talking tom 2 on pc
-best emulator for my talking tom 2 on pc
-best settings for my talking tom 2 on pc
-best tips and tricks for my talking tom 2 on pc
-best mini games in my talking tom 2 on pc
-best outfits and furniture in my talking tom 2 on pc
-download and play my talking tom 2 on pc & mac with noxplayer (emulator)
-download and play my talking tom 2 online for free on pc & mobile with now.gg
-download and play my talking tom on pc with noxplayer (prequel)
-download and play my talking angela 2 on pc with noxplayer (spin-off)
-download and play my talking tom friends on pc with noxplayer (multiplayer)
-outfit7 limited - developer of my talking tom 2 for pc and mobile devices
-outfit7 limited - privacy policy and terms of use for my talking tom 2 for pc and mobile devices
-outfit7 limited - customer support and feedback for my talking tom 2 for pc and mobile devices
-outfit7 limited - other games and apps by the developer of my talking tom 2 for pc and mobile devices
-outfit7 limited - youtube channel and social media accounts of the developer of my talking tom 2 for pc and mobile devices
-
How to Download and Play My Talking Tom 2 on PC for Free
-
If you are looking for a fun and relaxing game to play on your PC, you might want to check out My Talking Tom 2. This is a popular virtual pet game where you can adopt and take care of a cute cat named Tom. You can also play mini-games with him, customize his appearance and his home, and explore new worlds with him. In this article, we will show you how to download and play My Talking Tom 2 on PC for free using an emulator software. You will also learn how to use different controls to enjoy the game better.
-
What is My Talking Tom 2?
-
My Talking Tom 2 is a sequel to the original My Talking Tom game by Outfit7. It is an interactive game where you can talk to Tom, pet him, feed him, dress him up, and watch him grow. You can also play with his toys, take him to the toilet, put him to bed, and more. You can also play various mini-games with him, such as Flappy Tom, Bubble Shooter, and Cake Tower. You can earn coins and stars by playing these games, which you can use to buy new clothes, accessories, furniture, and wallpapers for Tom and his home. You can also unlock new items and locations by leveling up Tom. You can visit the airport, the pet store, the playground, and the forest with Tom and discover new surprises and adventures. My Talking Tom 2 is a game that is suitable for all ages and can provide hours of entertainment and fun.
-
The Virtual Pet
-
One of the main features of My Talking Tom 2 is the virtual pet aspect. You can interact with Tom in various ways, such as talking to him, petting him, tickling him, poking him, and more. He will respond to your actions with different sounds and expressions. You can also feed him different types of food, such as fruits, vegetables, candy, pizza, and more. You can see his hunger level on the screen and make sure he is well-fed. You can also take care of his hygiene by taking him to the toilet and brushing his teeth. You can see his cleanliness level on the screen and make sure he is not dirty. You can also dress him up with different outfits, hats, glasses, shoes, and more. You can see his happiness level on the screen and make sure he is not bored or sad.
-
The Mini-Games
-
Another feature of My Talking Tom 2 is the mini-games that you can play with Tom. There are many mini-games to choose from, such as Flappy Tom, Bubble Shooter, Cake Tower, Space Trails, Ice Smash, and more. Each mini-game has its own rules and objectives, but they are all easy to play and fun to enjoy. You can use the keyboard, mouse, or gamepad controls to play these games. You can earn coins and stars by playing these games, which you can use to buy new items for Tom and his home. You can also unlock new mini-games by leveling up Tom.
-
The Customization
-
Another feature of My Talking Tom 2 is the customization that you can do for Tom and his home. You can change Tom's appearance by buying new clothes, accessories, hair styles, eye colors, fur colors, and more. You can also change his home's appearance by buying new furniture, decorations, wallpapers, floors, windows, doors, and more. You can create your own unique style for Tom and his home by mixing and matching different items. You can also save your favorite outfits and home designs for later use.
-
The Exploration
-
Another feature of My Talking Tom 2 is the exploration that you can do with Tom. You can visit different places with Tom by using the airplane icon on the screen. You can go to the airport, the pet store, the playground, and the forest with Tom and discover new things and activities there. For example, you can buy new pets for Tom at the pet store, such as a dog, a hamster, a parrot, or a unicorn. You can also play with them at the playground or take them to the forest for a picnic. You can also find hidden treasures and collectibles at these places that you can use to decorate Tom's home or unlock new items.
-
How to Download My Talking Tom 2 on PC?
-
If you want to play My Talking Tom 2 on your PC for free, you will need emulator software that allows you to run Android apps on your PC. Many emulators are available online, but some of the best are NoxPlayer, LDPlayer, and BlueStacks; they are free, easy to use, and compatible with most Android apps and games. To download My Talking Tom 2 on PC using an emulator, follow these steps:
-
NoxPlayer
-
NoxPlayer is a powerful Android emulator that offers a smooth gaming experience and a lot of customization options. You can download NoxPlayer from its official website and install it on your PC. After that, you need to do the following:
-
-
Launch NoxPlayer and sign in with your Google account.
-
Go to the Google Play Store app and search for My Talking Tom 2.
-
Click on the Install button and wait for the download to finish.
-
Go to the home screen and click on the My Talking Tom 2 icon to start playing.
-
-
LDPlayer
-
LDPlayer is another popular Android emulator that focuses on gaming performance and compatibility. You can download LDPlayer from its official website and install it on your PC. After that, you need to do the following:
-
-
Launch LDPlayer and sign in with your Google account.
-
Go to the LD Store app and search for My Talking Tom 2.
-
Click on the Install button and wait for the download to finish.
-
Go to the home screen and click on the My Talking Tom 2 icon to start playing.
-
-
BlueStacks
-
BlueStacks is one of the oldest and most popular Android emulators that offers a lot of features and benefits for gamers. You can download BlueStacks from its official website and install it on your PC. After that, you need to do the following:
-
-
Launch BlueStacks and sign in with your Google account.
-
Go to the Google Play Store app and search for My Talking Tom 2.
-
Click on the Install button and wait for the download to finish.
-
Go to the home screen and click on the My Talking Tom 2 icon to start playing.
-
-
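If you already have the game's APK on your PC, an alternative to searching the emulator's built-in store is to push the file straight into the emulator over adb. The sketch below is a rough illustration only: it assumes adb is installed on the host, the APK file name is a placeholder, and the local adb endpoints shown (127.0.0.1:62001 for NoxPlayer, 127.0.0.1:5555 for LDPlayer) are commonly cited defaults that you should confirm in your emulator's documentation.

```python
import subprocess

APK_PATH = "my_talking_tom_2.apk"       # placeholder -- path to the APK on your PC
EMULATOR_ADDRESS = "127.0.0.1:62001"    # assumed NoxPlayer default; LDPlayer usually uses 127.0.0.1:5555

# 1. Point adb at the running emulator instance.
subprocess.run(["adb", "connect", EMULATOR_ADDRESS], check=True)

# 2. Install (or update, thanks to -r) the APK inside the emulator.
subprocess.run(["adb", "-s", EMULATOR_ADDRESS, "install", "-r", APK_PATH], check=True)
```
-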
How to Play My Talking Tom 2 on PC?
-
Once you have downloaded My Talking Tom 2 on PC using an emulator, you can start playing it as you would on your mobile device. However, you can also use different controls to make the game more comfortable and enjoyable. You can use the keyboard, mouse, or gamepad controls to play My Talking Tom 2 on PC. Here is how:
-
Keyboard
-
You can use the keyboard keys to control Tom's actions and movements. For example, you can use the arrow keys to move him left, right, up, or down. You can also use the spacebar to make him jump, or the enter key to make him talk. You can also use other keys to access different menus and options in the game. You can customize the keyboard controls by clicking on the keyboard icon on the right side of the emulator screen.
-
Mouse
-
You can use the mouse cursor to interact with Tom and the game elements. For example, you can use the left-click to pet him, feed him, play with him, or tap on anything on the screen. You can also use the right-click to zoom in or out, or drag and drop items. You can also use the mouse wheel to scroll through different menus and options in the game.
-
Gamepad
-
You can use a gamepad device to play My Talking Tom 2 on PC more comfortably. For example, you can use the analog sticks to move Tom around, or the buttons to make him jump, talk, or interact with things. You can also use other buttons to access different menus and options in the game. You can connect your gamepad device via USB or Bluetooth, and configure it by clicking on the gamepad icon on the right side of the emulator screen.
-
Conclusion
-
In conclusion, My Talking Tom 2 is a fun and relaxing game that you can play on your PC for free using an emulator software. You can enjoy all the features and gameplay of My Talking Tom 2 on a bigger screen and with better controls. You can also customize your emulator settings to suit your preferences and needs. If you are looking for a way to have some fun with your virtual pet, you should definitely try My Talking Tom 2 on PC today.
-
Frequently Asked Questions
-
-
Is My Talking Tom 2 safe to play?
-
Yes, My Talking Tom 2 is safe to play as long as you download it from a trusted source, such as the Google Play Store or the official website of the emulator software. You should also avoid clicking on any suspicious links or ads that might appear in the game or the emulator. You should also be careful about sharing any personal information or data with the game or the emulator.
-
Can I play My Talking Tom 2 offline?
-
Yes, you can play My Talking Tom 2 offline without an internet connection. However, you will not be able to access some of the features and functions that require online connectivity, such as visiting other players' homes, watching videos, or updating the game. You will also not be able to save your progress or sync it with other devices. Therefore, it is recommended that you play My Talking Tom 2 online whenever possible.
-
Can I play My Talking Tom 2 with my friends?
-
Yes, you can play My Talking Tom 2 with your friends by connecting your game account with your social media accounts, such as Facebook or Google. You can then visit your friends' homes, chat with them, send them gifts, and compete with them in mini-games. You can also invite your friends to play My Talking Tom 2 by sharing your referral code or link with them.
-
How can I get more coins and stars in My Talking Tom 2?
-
You can get more coins and stars in My Talking Tom 2 by playing mini-games, completing daily tasks, watching videos, spinning the wheel, opening chests, and leveling up Tom. You can also buy more coins and stars using real money if you want to support the game developers and get more items faster.
-
How can I contact the support team of My Talking Tom 2?
-
You can contact the support team of My Talking Tom 2 by clicking on the settings icon on the top right corner of the game screen. You can then click on the help button and choose the option that suits your issue or question. You can also email the support team at support@outfit7.com or visit their website at https://outfit7.com/contact/.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Salta esquiva y colecciona gatos con Cat Jump Hack APK todo desbloqueado.md b/spaces/congsaPfin/Manga-OCR/logs/Salta esquiva y colecciona gatos con Cat Jump Hack APK todo desbloqueado.md
deleted file mode 100644
index 42fc5c2a2c8a55da7570e76a7c9908e8f729fa68..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Salta esquiva y colecciona gatos con Cat Jump Hack APK todo desbloqueado.md
+++ /dev/null
@@ -1,131 +0,0 @@
-
-
Cat Jump Hack APK Todo Desbloqueado: How to Unlock Everything in This Fun Game
-
If you love cats and jumping games, you might have heard of Cat Jump, a simple but addictive game that challenges your reflexes and skills. But did you know that you can unlock everything in this game with a hack apk? In this article, we will tell you everything you need to know about Cat Jump Hack APK Todo Desbloqueado, including what it is, why you should use it, how to download and install it, and how to use it. Let's get started!
-
What is Cat Jump?
-
Cat Jump is a game that was developed by ZPLAY Games and released in 2019. It is available for both Android and iOS devices. The game is very simple: you control a cute cat and try to jump higher and higher, avoiding obstacles and enemies. The game has no end, so you can play as long as you want and try to beat your own high score.
-
The features of Cat Jump
-
Cat Jump has many features that make it fun and enjoyable, such as:
-
-
Simple controls that can be enjoyed by people of all ages. You just need to tap the screen to make the cat jump left or right.
-
Competition based solely on your reflexes and skill. There are no power-ups or upgrades that can give you an unfair advantage. You only rely on your own abilities.
-
The fun of collecting adorable cats. There are over 40 different cats that you can unlock and play with, each with their own unique appearance and personality.
-
Good for improving focus, reflexes, control, manual dexterity, and vision. Playing Cat Jump can help you train your brain and improve your cognitive skills.
-
-
Why use Cat Jump Hack APK?
-
The benefits of using Cat Jump Hack APK
-
Cat Jump Hack APK is a modified version of the original game that allows you to unlock everything in the game for free. This means that you can enjoy the following benefits:
-
-
You can access all the cats without having to collect coins or watch ads. You can choose any cat you like and switch between them anytime.
-
You can remove all the ads from the game. You don't have to worry about annoying pop-ups or banners that interrupt your gameplay.
-
You can play the game offline. You don't need an internet connection to enjoy Cat Jump.
-
-
The risks of using Cat Jump Hack APK
-
However, using Cat Jump Hack APK also comes with some risks that you should be aware of, such as:
-
-
You might lose your progress or data if you uninstall the original game or update it. Since Cat Jump Hack APK is not an official version, it might not be compatible with the latest updates or features of the original game.
-
You might get banned or suspended from the game if you use Cat Jump Hack APK online. The developers of the game might detect that you are using a hacked version and take action against you.
-
You might expose your device to malware or viruses if you download Cat Jump Hack APK from an untrusted source. Some websites or links might contain harmful files or programs that can damage your device or steal your personal information.
-
-
How to download and install Cat Jump Hack APK?
-
The steps to download and install Cat Jump Hack APK
-
If you want to download and install Cat Jump Hack APK, you need to follow these steps:
-
-
Find a reliable source that offers Cat Jump Hack APK for free. You can search online or ask your friends for recommendations. Make sure that the website or link is safe and secure.
-
Download the Cat Jump Hack APK file to your device. You might need to enable the option to install apps from unknown sources in your device settings.
-
Locate the Cat Jump Hack APK file in your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
-
Launch the Cat Jump Hack APK app and enjoy the game with everything unlocked.
-
-
The precautions to take before installing Cat Jump Hack APK
-
Before you install Cat Jump Hack APK, you should take some precautions to avoid any problems or issues, such as:
-
-
Backup your data and progress from the original game. You can use a cloud service or a local storage device to save your data and progress, so you can restore them if you lose them or want to switch back to the original game (see the command-line sketch after this list).
-
Disable your internet connection when you play Cat Jump Hack APK. This will prevent the game from connecting to the server and detecting that you are using a hacked version. It will also save your battery and data usage.
-
Use an antivirus or anti-malware program to scan your device before and after installing Cat Jump Hack APK. This will help you detect and remove any harmful files or programs that might have been downloaded along with Cat Jump Hack APK.
-
-
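As a command-line complement to the first precaution, the legacy adb backup command can snapshot the original game's data to your computer before you replace it with the modded build. This is a rough sketch with clear caveats: adb backup is deprecated and is ignored by some recent Android versions and by apps that opt out of backups, and the package name below is an illustrative guess, not one confirmed by this article.

```python
import subprocess

PACKAGE = "com.catjump.game"            # illustrative guess -- check with `adb shell pm list packages`
BACKUP_FILE = "cat_jump_backup.ab"

# Snapshot the original game's app data to the PC before installing the mod.
subprocess.run(["adb", "backup", "-f", BACKUP_FILE, PACKAGE], check=True)
print("Backup written to", BACKUP_FILE)

# If you later want the original data back, the matching command is:
#   adb restore cat_jump_backup.ab
```
-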
How to use Cat Jump Hack APK?
-
The tips and tricks to play Cat Jump with Cat Jump Hack APK
-
Now that you have installed Cat Jump Hack APK, you can use it to play Cat Jump with more fun and ease. Here are some tips and tricks that you can use:
-
cat jump mod apk unlimited money
-cat jump apk hack todo gratis
-cat jump game hack apk download
-cat jump mod apk todo ilimitado
-cat jump hack apk latest version
-cat jump apk mod desbloquear gatos
-cat jump hack apk free download
-cat jump mod apk sin anuncios
-cat jump hack apk android 1
-cat jump mod apk mega
-cat jump hack apk 2023
-cat jump mod apk mediafire
-cat jump hack apk no root
-cat jump mod apk revdl
-cat jump hack apk online
-cat jump mod apk offline
-cat jump hack apk ios
-cat jump mod apk obb
-cat jump hack apk uptodown
-cat jump mod apk rexdl
-cat jump hack apk full version
-cat jump mod apk premium
-cat jump hack apk pro
-cat jump mod apk vip
-cat jump hack apk original
-cat jump mod apk puro
-cat jump hack apk sin virus
-cat jump mod apk seguro
-cat jump hack apk sin conexion
-cat jump mod apk facil de instalar
-cat jump hack apk con trucos
-cat jump mod apk con monedas infinitas
-cat jump hack apk con todo desbloqueado 2023
-cat jump mod apk con actualizaciones automaticas
-cat jump hack apk con graficos mejorados
-cat jump mod apk con sonido divertido
-cat jump hack apk con controles simples
-cat jump mod apk con niveles variados
-cat jump hack apk con obstaculos desafiantes
-cat jump mod apk con gatos adorables
-
-
Select the cat that suits your style and preference. You can choose from over 40 different cats, each with their own look and personality. You can also switch between them anytime you want.
-
Avoid the obstacles and enemies that come your way. You will encounter various obstacles and enemies as you jump higher, such as spikes, birds, rockets, lasers, etc. You need to tap the screen at the right time to dodge them.
-
Collect the coins and stars that appear on the screen. You can use the coins to unlock more cats in the original game, or just enjoy them as a bonus. You can use the stars to activate a special mode that makes you invincible for a short time.
-
-
The limitations and drawbacks of Cat Jump Hack APK
-
While Cat Jump Hack APK can enhance your gaming experience, it also has some limitations and drawbacks that you should be aware of, such as:
-
-
You might lose the challenge and excitement of playing Cat Jump. Since you have everything unlocked, you might not feel motivated or satisfied by playing the game. You might also get bored easily by playing the same game over and over.
-
You might miss out on the updates and features of the original game. Since Cat Jump Hack APK is not an official version, it might not be updated or compatible with the latest updates or features of the original game. You might miss out on new cats, levels, modes, etc.
-
You might violate the terms and conditions of the original game. By using Cat Jump Hack APK, you might be breaking the rules and regulations of the original game. This might result in legal actions or penalties from the developers of the game.
-
-
Conclusion
-
Summary of the main points
-
Cat Jump is a simple but addictive game that challenges your reflexes and skills. You control a cute cat and try to jump higher and higher, avoiding obstacles and enemies. You can also collect coins and stars, and unlock over 40 different cats.
-
Cat Jump Hack APK is a modified version of the original game that allows you to unlock everything in the game for free. You can access all the cats without having to collect coins or watch ads, remove all the ads from the game, and play the game offline.
-
However, using Cat Jump Hack APK also comes with some risks and drawbacks, such as losing your progress or data, getting banned or suspended from the game, exposing your device to malware or viruses, losing the challenge and excitement of playing Cat Jump, missing out on the updates and features of the original game, and violating the the terms and conditions of the original game. Therefore, you should be careful and responsible when using Cat Jump Hack APK.
-
Call to action
-
If you want to try Cat Jump Hack APK Todo Desbloqueado, you can download it from the link below. But remember, use it at your own risk and discretion. And don't forget to have fun with your furry friends!
-
FAQs
-
Here are some frequently asked questions about Cat Jump Hack APK:
-
-
Is Cat Jump Hack APK safe to use?
-
Cat Jump Hack APK is not an official version of the game, so it might not be safe to use. It might contain malware or viruses that can harm your device or steal your personal information. It might also get you banned or suspended from the game if you use it online. Therefore, you should only download Cat Jump Hack APK from a trusted source and scan your device before and after installing it.
-
How do I update Cat Jump Hack APK?
-
Cat Jump Hack APK might not be updated or compatible with the latest updates or features of the original game. You might need to wait for the developer of Cat Jump Hack APK to release a new version or find another source that offers an updated version. However, updating Cat Jump Hack APK might also cause you to lose your progress or data, so you should backup your data before updating it.
-
Can I play Cat Jump with my friends using Cat Jump Hack APK?
-
Cat Jump is a single-player game, so you cannot play it with your friends directly. However, you can compare your high scores with your friends and challenge them to beat your record. You can also share your screenshots or videos of your gameplay with your friends on social media. But be careful not to use Cat Jump Hack APK online, as it might get you banned or suspended from the game.
-
What are some alternatives to Cat Jump Hack APK?
-
If you don't want to use Cat Jump Hack APK, you can still enjoy the original game without any hacks or cheats. You can collect coins and watch ads to unlock more cats, or just play with the default cat. You can also try some other games that are similar to Cat Jump, such as Doodle Jump, Helix Jump, or Stack Jump.
-
How do I uninstall Cat Jump Hack APK?
-
If you want to uninstall Cat Jump Hack APK, you can follow these steps:
-
-
Go to your device settings and find the apps section.
-
Find and select Cat Jump Hack APK from the list of apps.
-
Tap on the uninstall button and confirm your action.
-
Delete the Cat Jump Hack APK file from your device storage if it is still there.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/X Home Bar APK How to Use Your Phone with One Hand Easily.md b/spaces/congsaPfin/Manga-OCR/logs/X Home Bar APK How to Use Your Phone with One Hand Easily.md
deleted file mode 100644
index c07654868d318b212d2b193812d240b6329041a1..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/X Home Bar APK How to Use Your Phone with One Hand Easily.md
+++ /dev/null
@@ -1,157 +0,0 @@
-
-
Bar Home APK: How to Replace a Broken Home Button on Your Android Phone
-
If you have a broken home button on your Android phone, you might be wondering how to navigate your device without it. Fortunately, there is a simple solution that can help you use your phone like an iPhone X. It's called Bar Home APK, and it's a free app that simulates an iPhone X home button on the bottom of your screen. In this article, we will explain what Bar Home APK is, how to download and install it, how to use it, and answer some frequently asked questions about it.
-
What is Bar Home APK?
-
A free app that simulates an iPhone X home button on the bottom of your screen
-
Bar Home APK is a free app for Android, developed by Sylvain Lagache, that allows you to simulate an iPhone X home button on the bottom of your screen. You can now use your phone like an iPhone X, with a simple swipe up, left, or right to launch a selected action, such as returning to the home screen, going back, opening the recent apps menu, or pulling down the notification panel.
-
The new home button is very simple and intuitive. You can also adjust its size, color, transparency, and position according to your preference.
-
The features and benefits of using Bar Home APK
-
Bar Home APK has several features and benefits that make it a useful app for Android users who have a broken home button or who want to try a different way of navigating their device. Some of these are:
-
-
It's free and easy to use.
-
It works on any Android phone with Android 9.0 or higher.
-
It doesn't require root access or any special permissions.
-
It doesn't interfere with the navigation bar (Home, Back, and Recent apps buttons) on the screen.
-
It saves battery life by reducing the number of taps and presses needed to perform actions.
-
It enhances the user experience by adding a modern and sleek touch to the device.
-
-
How to Download and Install Bar Home APK on Your Android Phone
-
The steps to download and install Bar Home APK from APKCombo or Softonic
-
If you want to download and install Bar Home APK on your Android phone, you can follow these simple steps:
Search for "Bar Home APK" or "X Home Bar Free" in the search box.
-
Select the app from the list of results and click on "Download".
-
Wait for the download to finish and then open the downloaded file.
-
If prompted, enable "Unknown sources" in your device settings to allow the installation of apps from outside the Google Play Store.
-
Follow the instructions on the screen to install the app on your device.
-
-
Congratulations! You have successfully downloaded and installed Bar Home APK on your Android phone.
-
The permissions and settings required for Bar Home APK to work properly
-
Bar Home APK requires some permissions and settings to work properly on your device. These are:
-
-
Accessibility service: This permission allows Bar Home APK to detect your gestures and perform the corresponding actions. To enable this permission, go to Settings > Accessibility > Bar Home APK and turn on the toggle.
-
Overlay permission: This permission allows Bar Home APK to display the home button on top of other apps. To enable this permission, go to Settings > Apps > Bar Home APK > Permissions and turn on the toggle for "Display over other apps".
-
Hide navigation bar: This setting allows Bar Home APK to hide the navigation bar (Home, Back, and Recent apps buttons) on the screen, so that you can use the app without any interference. To enable this setting, go to Settings > Display > Navigation bar and select "Full screen gestures".
-
-
Once you have enabled these permissions and settings, you can enjoy using Bar Home APK on your device.
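-
If you want to confirm that the accessibility permission really took effect without digging through menus, the relevant system setting can be read back over adb. A minimal sketch, assuming adb and USB debugging are already set up; the package id is taken from the keyword list above (com.lagache.sylvain.xhomebar), and the free build of the app may use a slightly different identifier.

```python
import subprocess

PACKAGE = "com.lagache.sylvain.xhomebar"   # from the keyword list above; may differ for the free build

def adb_shell(*args):
    out = subprocess.run(["adb", "shell", *args], capture_output=True, text=True, check=True)
    return out.stdout.strip()

# Colon-separated list of accessibility services the user has switched on.
enabled = adb_shell("settings", "get", "secure", "enabled_accessibility_services")
print("enabled accessibility services:", enabled)
print("X Home Bar service enabled    :", PACKAGE in (enabled or ""))

# "Display over other apps" permission, as reported by the app-ops service.
print("overlay permission:", adb_shell("appops", "get", PACKAGE, "SYSTEM_ALERT_WINDOW"))
```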
-
How to Use Bar Home APK on Your Android Phone
-
The options and actions available with Bar Home APK
-
Bar Home APK is very easy to use on your Android phone. You can access the options and actions available with the app by following these steps:
-
-
Swipe up from the bottom of the screen to open the app.
-
Tap on the gear icon on the top right corner of the app to open the settings menu.
-
From here, you can customize the appearance and position of the home button, as well as the actions that you want to perform with it.
-
You can also enable or disable the vibration feedback, the auto-hide feature, and the notification icon from here.
-
To use the home button, simply swipe up, left, or right from the bottom of the screen to launch a selected action.
-
-
You can also use a table to compare the different actions that you can perform with Bar Home APK. Here is an example:
-
X Home Bar PRO APK download
-Navigation Bar Back Home Recent Button APK for Android
-X Home Bar PRO APK free
-How to install X Home Bar PRO APK
-Navigation Bar Back Home Recent Button APK latest version
-X Home Bar PRO APK features
-Navigation Bar Back Home Recent Button APK paid
-X Home Bar PRO APK review
-Navigation Bar Back Home Recent Button APK mod
-X Home Bar PRO APK alternative
-Navigation Bar Back Home Recent Button APK filehippo
-X Home Bar PRO APK apkmb
-Navigation Bar Back Home Recent Button APK apkcombo
-X Home Bar PRO APK iphone x lookalike
-Navigation Bar Back Home Recent Button APK nu-kob
-X Home Bar PRO APK swipe actions
-Navigation Bar Back Home Recent Button APK android 9.0
-X Home Bar PRO APK device administrator permission
-Navigation Bar Back Home Recent Button APK accessibility services
-X Home Bar PRO APK sylvain lagache
-Navigation Bar Back Home Recent Button APK wormholeapps
-X Home Bar PRO APK screenshots
-Navigation Bar Back Home Recent Button APK license
-X Home Bar PRO APK update
-Navigation Bar Back Home Recent Button APK requirements
-X Home Bar PRO APK google play id
-Navigation Bar Back Home Recent Button APK author
-X Home Bar PRO APK category tools
-Navigation Bar Back Home Recent Button APK sha1
-X Home Bar PRO APK installs 10,000+
-Navigation Bar Back Home Recent Button APK filename nu.nav.bar.apk
-X Home Bar PRO APK version 1.7.0
-Navigation Bar Back Home Recent Button APK version 3.0.8
-X Home Bar PRO APK com.lagache.sylvain.xhomebar
-Navigation Bar Back Home Recent Button APK english language
-X Home Bar PRO APK android app free download apkcombo
-Navigation Bar Back Home Recent Button APK latest update 2023
-X Home Bar PRO APK adds an iphone x lookalike home button
-Navigation Bar Back Home Recent Button APK replaces a broken home button
-X Home Bar PRO APK launch a selected action
-Navigation Bar Back Home Recent Button APK start a selected app
-X Home Bar PRO APK pull down notification panel
-Navigation Bar Back Home Recent Button APK pull down quick settings
-X Home Bar PRO APK take a screenshot
-Navigation Bar Back Home Recent Button APK lock the screen
-X Home Bar PRO APK start the power menu
-Navigation Bar Back Home Recent Button APK open google assistant
-X Home Bar PRO APK open last app
-Navigation Bar Back Home Recent Button APK read phone state
-X Home Bar PRO APK write internal storage
-
-
Action
Description
-
Swipe up
Returns to the home screen
-
Swipe left
Goes back to the previous screen or app
-
Swipe right
Opens the recent apps menu
-
Swipe up and hold
Pulls down the notification panel
-
-
The pros and cons of using Bar Home APK
-
Bar Home APK has some pros and cons that you should consider before using it. Here are some of them:
-
The pros of using Bar Home APK
-
-
It's a free and simple solution for replacing a broken home button on your Android phone.
-
It gives you a new and modern way of navigating your device like an iPhone X.
-
It saves battery life by reducing the number of taps and presses needed to perform actions.
-
It enhances the user experience by adding a sleek touch to your device.
-
It's customizable and adaptable to your preference.
-
-
The cons of using Bar Home APK
-
-
It may not work well with some apps or games that require full screen mode or gesture controls.
-
It may interfere with some other apps or features that use the bottom of the screen, such as keyboards or navigation bars.
-
It may take some time to get used to it, especially if you are used to using physical buttons or other gestures.
-
It may not be compatible with some older or newer versions of Android or some devices.
-
It may have some bugs or glitches that affect its performance or functionality.
-
-
Conclusion
-
A summary of the main points and a call to action
-
In conclusion, Bar Home APK is a free app that simulates an iPhone X home button on the bottom of your screen. It allows you to use your Android phone like an iPhone X, with a simple swipe up, left, or right to launch a selected action. It's a useful app for Android users who have a broken home button or who want to try a different way of navigating their device. It has several features and benefits that make it a worthwhile app to download and install. However, it also has some drawbacks that you should be aware of before using it. If you want to give it a try, you can download and install it from APKCombo or Softonic, and follow the instructions in this article to set it up and use it. You might find it a convenient and fun way to use your phone. If you have any questions or feedback about Bar Home APK, feel free to leave a comment below or contact the developer directly.
-
FAQs
-
Q1: Is Bar Home APK safe to use?
-
A1: Bar Home APK is safe to use, as long as you download it from a trusted source, such as APKCombo or Softonic. These websites scan the apps for viruses and malware before uploading them. However, you should always be careful when installing apps from outside the Google Play Store, and check the permissions and settings that they require. You should also avoid clicking on any suspicious links or ads that may appear on the app or the websites.
-
Q2: Does Bar Home APK work on all Android phones?
-
A2: Bar Home APK works on most Android phones that run on Android 9.0 or higher. However, it may not be compatible with some older or newer versions of Android or some devices that have different screen sizes, resolutions, or aspect ratios. You can check the compatibility of your device by visiting the app's page on APKCombo or Softonic, and reading the description and the user reviews.
-
Q3: Can I customize the appearance and position of Bar Home APK?
-
A3: Yes, you can customize the appearance and position of Bar Home APK according to your preference. You can change the size, color, transparency, and position of the home button, as well as the actions that you want to perform with it. You can access these options by tapping on the gear icon on the top right corner of the app.
-
Q4: How can I uninstall Bar Home APK if I don't like it?
-
A4: If you want to uninstall Bar Home APK from your device, you can follow these steps:
-
-
Go to Settings > Apps > Bar Home APK.
-
Tap on "Uninstall" and confirm your choice.
-
Wait for the app to be removed from your device.
-
-
You can also disable the app temporarily by turning off the accessibility service and the overlay permission for it.
-
Q5: What are some alternatives to Bar Home APK?
-
A5: If you are looking for some alternatives to Bar Home APK, you can try these apps:
-
-
Navigation Gestures: This app allows you to replace your navigation bar with gesture controls. You can swipe from any edge of the screen to perform various actions, such as home, back, recent apps, etc.
-
Fluid Navigation Gestures: This app also allows you to use gesture controls instead of buttons. You can swipe from the left or right edge of the screen to go back, and swipe from the bottom edge to go home or access recent apps.
-
Assistive Touch: This app allows you to create a virtual home button that floats on your screen. You can tap on it to access various functions, such as home, back, recent apps, screenshot, etc.
-
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Amando A Pablo Odiando A Escobar Epub _TOP_.md b/spaces/contluForse/HuggingGPT/assets/Amando A Pablo Odiando A Escobar Epub _TOP_.md
deleted file mode 100644
index f1fb1914b500e627c02a44fe27efb465351871ed..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Amando A Pablo Odiando A Escobar Epub _TOP_.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Fabolous Street Dreams full album zip review Why this album is a masterpiece of rap music.md b/spaces/contluForse/HuggingGPT/assets/Fabolous Street Dreams full album zip review Why this album is a masterpiece of rap music.md
deleted file mode 100644
index 487ed78f57e08d11b1496de646cf41f8811c7f3a..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Fabolous Street Dreams full album zip review Why this album is a masterpiece of rap music.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-The latest and current BODYCOMBAT tracklist is always at the top. ... All Day) – Sean Kingston Body combat 79 DVD & mp3 download 2019Q1 Body combat 79 tracklist 01 Knockout_Made In China. ... 24, Author: kiesacheana, Name: Les Mills Body Attack 80 Torrent. ... It consists of 11 tracks and has a duration of 56:04.
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Justice 20 Type-B Love Poison (D.md b/spaces/diacanFperku/AutoGPT/Justice 20 Type-B Love Poison (D.md
deleted file mode 100644
index 0114ceeb7b483c91c4a4d116705c1d880b32d8fe..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Justice 20 Type-B Love Poison (D.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
-satalandtur/justice-20-typeb-love-poison-d. Satalandtur. Justice 20 Type-B Love Poison (D. Container. OverviewTags. Sort by. New. No tags available. Categories. In stock. Shipping. 0 rub. Buy. Worldwide delivery. Buy. No reviews. Add review. ✓ В availability.
-Satalandtur. Justice 20 Type-B Love Poison (D. Container. OverviewTags. Sort by. New. No tags available. Categories. In stock. Shipping. 0 rub. Buy. Worldwide delivery. Buy. No reviews. Add review. ✓ В availability.
-Watch Queue Queue.
-Watch Queue QueueAuthor: Maxim S.
-Love Poison (D. Container.
-OverviewTags.
-Sort by. 8a78ff9644
-
-
-
diff --git a/spaces/diego2554/RemBG_super/app.py b/spaces/diego2554/RemBG_super/app.py
deleted file mode 100644
index d76c527c184d8c2b282b5b709704f4fde7e95d8c..0000000000000000000000000000000000000000
--- a/spaces/diego2554/RemBG_super/app.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import gradio as gr
-import os
-import cv2
-from rembg import new_session, remove
-from PIL import Image
-from io import BytesIO
-
-def inference(file, mask, model, alpha_influence, segmentation_strength, smoothing):
- im = cv2.imread(file, cv2.IMREAD_COLOR)
- cv2.imwrite(os.path.join("input.png"), im)
-
- input_path = 'input.png'
- output_path = 'output.png'
-
- with open(input_path, 'rb') as i:
- with open(output_path, 'wb') as o:
- input = i.read()
- output = remove(
- input,
- only_mask=(True if mask == "Mask only" else False),
-                alpha_matting=True,  # enable alpha matting mode
-                alpha_matting_foreground_threshold=alpha_influence,  # alpha-channel influence for the foreground
-                alpha_matting_background_threshold=1 - alpha_influence,  # alpha-channel control for the background
-                alpha_matting_erode_size=int(segmentation_strength * 10),  # segmentation strength control
-                alpha_matting_smoothing=smoothing,  # smoothing of the segmentation edges
- session=new_session(model)
- )
-
- o.write(output)
-
- return Image.open(BytesIO(output))
-
-title = "Background Using RemBG"
-description = "Help me improve my computer equipment, I need RTX 4070 :)Gradio demo for RemBG. erase the background of any image, To use it, simply upload your image and adjust the sliders and choose a eraser plugin from the U2net library. / Original article made by KenjieDec / Github Repo"
-article = "
"
-
-gr.Interface(inference, inputs, outputs, title=title, description=description, article=article).launch(enable_queue=True)
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Ativador Windows 7 64 Bits 2019.md b/spaces/falterWliame/Face_Mask_Detection/Ativador Windows 7 64 Bits 2019.md
deleted file mode 100644
index 3a60e4a7c1b32014f39c7b7f571b8c2df703ae48..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Ativador Windows 7 64 Bits 2019.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Type into a window of the Windows 8 activation tool program, for versions 7 and 8.1. It reads the key. To activate Windows, open the activated developer dialer or the active one. On 7 and 8, press 'O' or 'Ctrl + O' and press the 'ALT + PrtScr' key. The window is opened by the Microsoft product activator. The whole product sits in a varchar(70), together with the page and the code. Now change the varchar(70) to the name of your product and copy the key. Press 'OK' to save the texts. This specifies the comment for the .NET Framework. Copy the p.exe file from a Windows install or any utility and copy it to the directory relative to the data. Press 'OK' to save. Download Windows 7 activator
The Microsoft product activator gives users who need it a way to stay legal without having to pay. You only need to grab a simple activator and supply the product key to activate. The tool simply activates or starts Windows 8.1, so no other procedure is necessary.
-
Do you want to activate Windows 8.1? The product specification you like should cover plenty. If you need a free activator that runs fast, click the Microsoft production activator and copy the file to the directory. Send the text payload to the comment discussion on request. The activator adds some data to the file, along with certain instructions from a Microsoft activation server, and then activates. The activation of Windows 7 will not be modified. Click on this comment text to talk with the developers.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Download English Subtitles For Telugu Movie Yehi Hai Zindagi !NEW!.md b/spaces/falterWliame/Face_Mask_Detection/Download English Subtitles For Telugu Movie Yehi Hai Zindagi !NEW!.md
deleted file mode 100644
index 2003a9075e6013ca2e45c671aa6d73be53c01f38..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Download English Subtitles For Telugu Movie Yehi Hai Zindagi !NEW!.md
+++ /dev/null
@@ -1,90 +0,0 @@
-
-
Download English Subtitles For Telugu Movie Yehi Hai Zindagi
-
-
If you are a fan of Telugu movies, you might have heard of Yehi Hai Zindagi, a 1977 comedy drama film directed by K.S. Sethumadhavan and starring Sanjeev Kumar, Utpal Dutt, David Abraham, and Romesh Sharma. The film tells the story of Anand Narayan, a poor man who rises to success with the help of his business partner and father-in-law Nekiram, but faces challenges from his own family and ego. The film also features a cameo appearance by Lord Krishna, who guides Anand through his life.
-
Download English Subtitles For Telugu Movie Yehi Hai Zindagi
Yehi Hai Zindagi is a classic Telugu movie that has won many awards and accolades, including a nomination for the Filmfare Award for Best Film. The film has also been remade in Hindi, Tamil, and Malayalam languages. However, if you want to watch the original Telugu version of Yehi Hai Zindagi, you might face a problem: finding English subtitles for it.
-
-
English subtitles are essential for non-Telugu speakers who want to enjoy the movie without missing any dialogue or nuance. Unfortunately, finding English subtitles for Telugu movies is not always easy, especially for older films like Yehi Hai Zindagi. You might have to search through various websites and forums, download multiple files, and hope that they are compatible with your video player. This can be frustrating and time-consuming.
-
-
That's why we have created this article to help you download English subtitles for Telugu movie Yehi Hai Zindagi in a simple and hassle-free way. We have collected the best sources of English subtitles for Yehi Hai Zindagi and provided you with the links to download them. All you have to do is follow these steps:
-
-
-
Choose one of the links below to download English subtitles for Telugu movie Yehi Hai Zindagi. These links are from reliable and trustworthy websites that offer high-quality subtitles in various formats.
-
Download the subtitle file to your computer or device. Make sure you choose the right format that matches your video player. For example, if you use VLC media player, you might want to download the SRT file.
-
Extract the subtitle file from the zip or rar archive if necessary. You should see a file with the same name as the movie and the extension .srt or .sub.
-
Copy or move the subtitle file to the same folder where you have stored the movie file. Make sure both files have the same name, except for the extension. For example, if your movie file is named Yehi_Hai_Zindagi.mp4, your subtitle file should be named Yehi_Hai_Zindagi.srt or Yehi_Hai_Zindagi.sub. A small script after these steps shows one way to automate this copy-and-rename.
-
Open your video player and play the movie file. You should see the subtitles appear on the screen automatically. If not, you might have to enable subtitles manually from your video player settings.
-
-
-
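If you do this for many movies, a short script can handle the copy-and-rename step for you. The following is a minimal Python sketch, not something provided by the subtitle sites themselves; the two paths are placeholders for your own movie and downloaded subtitle files:

```python
from pathlib import Path
import shutil

# Placeholder paths: point these at your own movie and downloaded subtitle file.
movie = Path("Videos/Yehi_Hai_Zindagi.mp4")
subtitle = Path("Downloads/yehi-hai-zindagi-english.srt")

# Give the subtitle the same base name as the movie (keeping its own extension)
# and place the copy in the movie's folder so the player finds it automatically.
target = movie.with_suffix(subtitle.suffix)
shutil.copy(subtitle, target)
print(f"Copied subtitle to {target}")
```
-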
That's it! You can now enjoy watching Yehi Hai Zindagi with English subtitles on your computer or device. Here are some of the links to download English subtitles for Telugu movie Yehi Hai Zindagi:
We hope this article has helped you download English subtitles for Telugu movie Yehi Hai Zindagi easily and quickly. If you have any questions or feedback, please leave a comment below. And don't forget to share this article with your friends who might also be looking for English subtitles for Telugu movies.
-
-
Download English Subtitles For Telugu Movie Yehi Hai Zindagi: Conclusion
-
-
Yehi Hai Zindagi is a classic Telugu movie that deserves to be watched by everyone who loves comedy and drama. With English subtitles, you can enjoy the movie without missing any dialogue or nuance. To download English subtitles for Telugu movie Yehi Hai Zindagi, just follow the steps we have outlined in this article and choose one of the links we have provided. You will be able to watch Yehi Hai Zindagi with English subtitles in no time.
-
-
Thank you for reading this article on how to download English subtitles for Telugu movie Yehi Hai Zindagi. We hope you found it useful and informative. If you did, please share it with your friends and family who might also be interested in watching Telugu movies with English subtitles.
-
Download English Subtitles For Telugu Movie Yehi Hai Zindagi: Benefits
-
-
Why should you download English subtitles for Telugu movie Yehi Hai Zindagi? There are many benefits of watching Telugu movies with English subtitles, such as:
-
-
-
You can improve your language skills. Watching Telugu movies with English subtitles can help you learn new words, phrases, and expressions in both languages. You can also improve your listening and reading comprehension skills by following the dialogue and the subtitles.
-
You can appreciate the culture and the humor. Watching Telugu movies with English subtitles can help you understand the cultural and social context of the movie, as well as the jokes and references that might otherwise be lost in translation. You can also enjoy the music, the costumes, and the scenery of the movie.
-
You can have more fun and convenience. Watching Telugu movies with English subtitles can make your viewing experience more enjoyable and comfortable. You don't have to strain your ears or eyes to catch every word or detail. You can also pause, rewind, or fast-forward the movie as you wish.
-
-
-
As you can see, there are many advantages of downloading English subtitles for Telugu movie Yehi Hai Zindagi. You can watch the movie at your own pace and convenience, while learning new things and having fun.
-
-
Download English Subtitles For Telugu Movie Yehi Hai Zindagi: Tips
-
-
Before you download English subtitles for Telugu movie Yehi Hai Zindagi, here are some tips to make sure you get the best subtitles possible:
-
-
-
Check the quality and accuracy of the subtitles. Not all subtitles are created equal. Some subtitles might have spelling or grammar errors, missing or incorrect words, or synchronization issues. To avoid these problems, you should download subtitles from reputable and reliable websites that offer high-quality subtitles in various formats.
-
Check the compatibility and format of the subtitles. Different video players might require different formats of subtitles. For example, VLC media player might prefer SRT files, while Windows Media Player might prefer SUB files. To avoid compatibility issues, you should download subtitles in the format that matches your video player.
-
Check the language and version of the subtitles. Sometimes, subtitles might be available in different languages or versions. For example, there might be British English or American English subtitles, or theatrical or director's cut subtitles. To avoid confusion, you should download subtitles in the language and version that match your movie file.
-
-
-
By following these tips, you can ensure that you download English subtitles for Telugu movie Yehi Hai Zindagi that are suitable for your needs and preferences.
-
Download English Subtitles For Telugu Movie Yehi Hai Zindagi: FAQs
-
-
Here are some frequently asked questions and answers about downloading English subtitles for Telugu movie Yehi Hai Zindagi:
-
-
-
Where can I watch Yehi Hai Zindagi online?
-
You can watch Yehi Hai Zindagi online on various streaming platforms, such as Eros Now, Amazon Prime Video, YouTube, and Hotstar. However, you might need a subscription or a payment to access some of these platforms.
-
How can I download Yehi Hai Zindagi movie file?
-
You can download Yehi Hai Zindagi movie file from various websites that offer Telugu movies for download, such as Movierulz, Todaypk, Tamilrockers, and Filmywap. However, you should be careful about the legality and safety of these websites, as they might violate copyright laws or contain malware.
-
How can I sync subtitles with the movie file?
-
If your subtitles are not synced with the movie file, you can use various tools and software to adjust the timing and speed of the subtitles. For example, you can use Subtitle Edit, Aegisub, or Subtitle Workshop to edit and sync subtitles with the movie file. A short script after this FAQ shows how a constant delay can also be corrected by hand.
-
-
-
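If the subtitles are off by a constant delay, you can also shift them yourself, since SRT files are plain text. Below is a minimal Python sketch, separate from the tools mentioned above; the file names and the 1500 ms offset are placeholder values you would replace with your own:

```python
import re
from datetime import timedelta

def shift_srt(in_path, out_path, offset_ms):
    """Shift every timestamp in an SRT file by offset_ms milliseconds."""
    pattern = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

    def shift(match):
        h, m, s, ms = (int(g) for g in match.groups())
        t = timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms)
        t = max(timedelta(0), t + timedelta(milliseconds=offset_ms))
        total_ms = int(t.total_seconds() * 1000)
        h, rest = divmod(total_ms, 3_600_000)
        m, rest = divmod(rest, 60_000)
        s, ms = divmod(rest, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    with open(in_path, encoding="utf-8-sig") as f:
        text = f.read()
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(pattern.sub(shift, text))

# Example: subtitles show up 1.5 seconds too early, so delay them by 1500 ms.
shift_srt("Yehi_Hai_Zindagi.srt", "Yehi_Hai_Zindagi_shifted.srt", 1500)
```
-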
If you have any other questions or doubts about downloading English subtitles for Telugu movie Yehi Hai Zindagi, please feel free to contact us or leave a comment below. We will be happy to help you out.
-
-
Download English Subtitles For Telugu Movie Yehi Hai Zindagi: Conclusion
-
-
In conclusion, downloading English subtitles for Telugu movie Yehi Hai Zindagi is a great way to enjoy this classic comedy drama film without missing any dialogue or nuance. You can download English subtitles for Yehi Hai Zindagi from the links we have provided in this article and follow the steps we have outlined to watch the movie with subtitles. You can also benefit from watching Telugu movies with English subtitles, as you can improve your language skills, appreciate the culture and the humor, and have more fun and convenience.
-
-
We hope this article has helped you download English subtitles for Telugu movie Yehi Hai Zindagi easily and quickly. If you liked this article, please share it with your friends and family who might also be interested in watching Telugu movies with English subtitles. And don't forget to check out our other articles on how to download English subtitles for other Telugu movies.
-
-
Thank you for reading this article on how to download English subtitles for Telugu movie Yehi Hai Zindagi. We hope you have a wonderful time watching this movie with subtitles. Happy viewing!
-
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Jai Ho Movie Download In Hindi 720p Hd Movie.md b/spaces/falterWliame/Face_Mask_Detection/Jai Ho Movie Download In Hindi 720p Hd Movie.md
deleted file mode 100644
index 2b9aee8126e530bc790127cd85a60bdf7486670d..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Jai Ho Movie Download In Hindi 720p Hd Movie.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
9xmovies.co is a great online portal for users to view their favorite movies anywhere, at any time. The website carries the latest movies, including Tamil, Hindi, Hollywood, and Hindi-dubbed titles. Movies are updated on the site regularly, so you can download the latest releases for free; before you do, you can put your mind at ease, as the website assures you that the movies you download are legal and that you can watch them offline as well as online without any issues. The site has an easy-to-use interface, so you don't need to go through a complicated process to download movies: just browse for the titles you want and download them in a matter of seconds. You can save a movie for offline viewing and play it anytime, from anywhere. You may occasionally have trouble downloading a movie, but that is a small problem you can easily fix and keep enjoying free movies online. 9xmovies.co is one of the best movie download websites; visit 9xmovies.co and download movies. Enjoy!
If you have any problems, please visit our Help section on how to download movies through 9xmovies.co.
The film has received the much-expected “A” grade certificate from the censor board. The makers did a great job of developing the project in a way that is easy to follow. The film keeps you engaged throughout and is a real treat, and the producers should be congratulated. It was made with great planning, and it is fantastic to see how it came together. It is a pure entertainer, the music is very good, and the film ends leaving a very interesting impression on the audience. It is one of the best movies; you should take a look at it.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Bitcoin App APK The Best Way to Manage Your Crypto and NFTs.md b/spaces/fatiXbelha/sd/Download Bitcoin App APK The Best Way to Manage Your Crypto and NFTs.md
deleted file mode 100644
index aabe46e5f757b1c7ab81d2d3468b7d9d1b25db5d..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Bitcoin App APK The Best Way to Manage Your Crypto and NFTs.md
+++ /dev/null
@@ -1,179 +0,0 @@
-
-
-
-
-
How to Download Bitcoin App APK
-
Are you interested in using bitcoin, the most popular cryptocurrency in the world? Do you want to have a convenient and secure way to manage your bitcoins on your Android device? If so, you might want to download bitcoin app apk, which is an application package file that contains a bitcoin wallet app for Android. In this article, we will show you what bitcoin app apk is, why you should download it, how to download it, how to troubleshoot it, how to update it, and how to uninstall it. By the end of this article, you will be able to download bitcoin app apk and start using your bitcoin wallet on your Android device.
Bitcoin app apk is an application package file that contains a bitcoin wallet app for Android. A bitcoin wallet app is a software program that allows you to store, send, and receive bitcoins on your Android device. A bitcoin wallet app also lets you view your balance, transaction history, and current exchange rate. There are many different types of bitcoin wallet apps available for Android, such as Bitcoin Wallet, LBank, and BitcoinLegend. Each bitcoin wallet app has its own features, advantages, and disadvantages.
-
Why Download Bitcoin App APK?
-
There are many reasons why you might want to download bitcoin app apk on your Android device. Here are some of them:
-
-
You can use bitcoin as a form of digital money that is decentralized, peer-to-peer, transparent, and censorship-resistant.
-
You can enjoy lower fees, faster transactions, and more privacy than using traditional payment methods.
-
You can access your bitcoins anytime and anywhere with your Android device.
-
You can choose from a variety of bitcoin wallet apps that suit your needs and preferences.
-
You can take advantage of the latest features and updates of your bitcoin wallet app by downloading its apk file.
-
-
How to Download Bitcoin App APK?
-
Now that you know what bitcoin app apk is and why you should download it, let's see how you can download it on your Android device. Here are the steps you need to follow:
-
Step 1: Choose a Bitcoin Wallet
-
The first step is to choose a bitcoin wallet app that you want to use on your Android device. As we mentioned earlier, there are many different types of bitcoin wallet apps available for Android, such as Bitcoin Wallet, LBank, and BitcoinLegend. Each bitcoin wallet app has its own features, advantages, and disadvantages. You should compare them and choose the one that suits your needs and preferences. Here is a table that summarizes some of the main differences between these three bitcoin wallet apps:
-
-
-
Bitcoin Wallet App
-
Features
-
Advantages
-
Disadvantages
-
-
-
Bitcoin Wallet
-
- Simple and easy to use interface - Supports sending and receiving bitcoins via QR codes, NFC, or Bluetooth - Shows balance, transaction history, and current exchange rate - Allows setting a custom fee and a spending limit - Supports offline transactions and paper wallets
-
- No registration or verification required - No third-party servers or intermediaries involved - High level of security and privacy - Open source and free
-
- Only supports bitcoin - No backup or recovery option - No customer support or help center
-
-
-
LBank
-
- Supports multiple cryptocurrencies, such as bitcoin, ethereum, litecoin, etc. - Allows trading, staking, lending, and borrowing cryptocurrencies - Provides market data, news, and analysis - Offers various rewards and incentives - Supports biometric authentication and PIN code
-
- User-friendly and intuitive interface - Low fees and fast transactions - High liquidity and security - Professional customer service and community support
-
- Requires registration and verification - Relies on a centralized platform - May have some technical issues or bugs
-
-
-
BitcoinLegend
-
- Supports multiple cryptocurrencies, such as bitcoin, ethereum, dogecoin, etc. - Allows buying and selling cryptocurrencies with fiat money or other cryptocurrencies - Provides charts, graphs, and indicators for technical analysis - Offers social trading and copy trading features - Supports multiple languages and currencies
-
- Easy and convenient to use - Competitive fees and exchange rates - Diversified portfolio and investment options - Educational resources and tutorials
-
- Requires registration and verification - Relies on a third-party broker or exchange - May have some security or privacy risks
-
-
-
As you can see, each bitcoin wallet app has its own pros and cons. You should choose the one that meets your expectations and requirements. You can also try more than one bitcoin wallet app to see which one you like best.
Step 2: Find a Reliable Source for Bitcoin App APK
-
The next step is to find a reliable source for downloading the bitcoin app apk file. There are many websites that offer apk files for various apps, but not all of them are trustworthy. Some of them may contain malware, viruses, or fake apps that can harm your device or steal your bitcoins. Therefore, you should be careful when choosing a source for downloading the bitcoin app apk file. Here are some tips and warnings to help you find a reliable source for downloading the bitcoin app apk file:
-
-
Check the reputation and reviews of the website that offers the apk file. You can use online tools such as VirusTotal or Trustpilot to scan the website for any malicious content or feedback.
-
Check the details and information of the apk file. You can use online tools such as APKMirror or APKPure to verify the name, size, version, developer, signature, permissions, and update history of the apk file. A short checksum sketch after this list shows one way to compare your download against a hash the developer has published.
-
Check the compatibility and requirements of the apk file. You should make sure that the apk file is compatible with your Android device model and operating system version. You should also check if the apk file requires any additional files or apps to work properly.
-
Check the source code and license of the apk file. If possible, you should download the apk file from the official website or GitHub repository of the developer. You should also check if the apk file is open source or proprietary, and if it has any terms of service or privacy policy.
-
Check the security and backup of your device. Before downloading and installing the apk file, you should make sure that your device is protected by a strong password, PIN code, fingerprint, or face recognition. You should also backup your device data and your bitcoin wallet data to a secure location, such as a cloud service or an external drive.
-
-
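One concrete way to check that the file you downloaded is the one the developer actually published is to compare checksums, when the developer lists one on their website or release page. The sketch below assumes such a published SHA-256 value exists; the file name and the expected hash are placeholders:

```python
import hashlib
from pathlib import Path

# Placeholders: your downloaded file and the SHA-256 value the developer published.
apk_path = Path("bitcoin-wallet.apk")
expected_sha256 = "0000000000000000000000000000000000000000000000000000000000000000"

digest = hashlib.sha256(apk_path.read_bytes()).hexdigest()
print(f"size:   {apk_path.stat().st_size} bytes")
print(f"sha256: {digest}")

if digest == expected_sha256.lower():
    print("Checksum matches the published value.")
else:
    print("Checksum does NOT match - do not install this file.")
```
-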
Step 3: Download and Install Bitcoin App APK
-
The third step is to download and install the bitcoin app apk file on your Android device. Here are the instructions and screenshots to help you do that:
-
-
Open your web browser and go to the website that offers the bitcoin app apk file that you want to download. For example, if you want to download Bitcoin Wallet apk file, you can go to [Bitcoin Wallet].
-
Tap on the download button or link to start downloading the bitcoin app apk file. You may see a warning message that says "This type of file can harm your device. Do you want to keep it anyway?". Tap on OK to continue.
-
Once the download is complete, tap on the notification bar or open your file manager app to locate the bitcoin app apk file. Tap on the file to open it. You may see a message that says "For your security, your phone is not allowed to install unknown apps from this source.". Tap on Settings to change this setting.
-
On the settings screen, toggle on the option that says "Allow from this source" or "Install unknown apps". Tap on the back button to return to the installation screen.
-
On the installation screen, tap on Install to start installing the bitcoin app apk file. You may see a list of permissions that the app requires. Tap on Accept or Allow to grant these permissions.
-
Wait for the installation process to finish. Once it is done, tap on Open to launch the bitcoin wallet app.
-
-
Step 4: Set Up Your Bitcoin Wallet
-
The fourth step is to set up your bitcoin wallet app on your Android device. Here are some things you need to do:
-
-
Create or restore your bitcoin wallet. Depending on the bitcoin wallet app you chose, you may have different options to create or restore your bitcoin wallet. For example, you may be able to create a new wallet with a seed phrase, a password, or a PIN code. Or you may be able to restore an existing wallet with a seed phrase, a private key, or a backup file.
-
Secure and backup your bitcoin wallet. You should make sure that your bitcoin wallet is secure and backed up. You should use a strong password, PIN code, fingerprint, or face recognition to lock your bitcoin wallet app. You should also write down or memorize your seed phrase, private key, or backup file and store them in a safe place. You should never share them with anyone or lose them.
-
Customize your bitcoin wallet. You can customize your bitcoin wallet app according to your preferences. For example, you can change the language, currency, theme, notification settings, fee settings, and more.
-
-
Step 5: Start Using Your Bitcoin Wallet
-
The fifth and final step is to start using your bitcoin wallet app on your Android device. Here are some things you can do:
-
-
Send and receive bitcoins. You can send and receive bitcoins with your bitcoin wallet app by using QR codes, NFC, Bluetooth, addresses, or contacts. You can also choose the amount, fee, and priority of your transactions.
-
View your balance and transaction history. You can view your balance and transaction history with your bitcoin wallet app by tapping on the menu icon or swiping left or right. You can also see the current exchange rate and the value of your bitcoins in fiat money.
-
Explore more features and options. You can explore more features and options with your bitcoin wallet app by tapping on the menu icon or swiping left or right. You may find features such as trading, staking, lending, borrowing, social trading, copy trading, market data, news, analysis, rewards, incentives, education, and more.
-
-
How to Troubleshoot Bitcoin App APK?
-
Sometimes you may encounter some issues or problems with your bitcoin app apk file or your bitcoin wallet app on your Android device. Here are some common issues and solutions that may help you troubleshoot them:
-
-
-
Issue
-
Solution
-
-
-
The apk file does not download or install properly.
-
- Check your internet connection and storage space. - Check the source and details of the apk file. - Check the compatibility and requirements of the apk file. - Check the security and backup of your device. - Try downloading and installing the apk file from another source or website.
-
-
-
The bitcoin wallet app does not open or work properly.
-
- Check your internet connection and battery level. - Check the updates and permissions of the app. - Check the security and backup of your wallet. - Try clearing the cache and data of the app. - Try uninstalling and reinstalling the app.
-
-
-
The bitcoin wallet app does not sync or display the correct balance or transactions.
-
- Check your internet connection and network settings. - Check the date and time settings of your device. - Check the fee and confirmation status of your transactions. - Try refreshing or restarting the app. - Try using another node or server.
-
-
-
The bitcoin wallet app does not send or receive bitcoins properly.
-
- Check your internet connection and network settings. - Check the address, amount, fee, and priority of your transactions. - Check the security and backup of your wallet. - Try scanning the QR code, using NFC, Bluetooth, or contacts. - Try using another bitcoin wallet app.
-
-
-
How to Update Bitcoin App APK?
-
It is important to keep your bitcoin app apk file and your bitcoin wallet app updated to enjoy the latest features, improvements, and bug fixes. Here are some ways you can update your bitcoin app apk file and your bitcoin wallet app on your Android device:
-
-
Check for updates manually. You can check for updates manually by visiting the official website or GitHub repository of the developer, or by using online tools such as APKMirror or APKPure. You can compare the version, size, date, and signature of the apk file with the one you have installed. If there is a newer version available, you can download and install it following the same steps as before.
-
Enable automatic updates. You can enable automatic updates by changing the settings of your web browser or file manager app. You can choose to download and install updates automatically or only when connected to a Wi-Fi network. You can also choose to get notified when there is a new update available.
-
Use an alternative app store. You can use an alternative app store such as Google Play Store, F-Droid, or Aptoide to download and install updates for your bitcoin wallet app. These app stores may have different versions, features, and requirements than the original source of the apk file. You should check the details and information of the app before updating it.
-
-
How to Uninstall Bitcoin App APK?
-
If you want to uninstall bitcoin app apk file and your bitcoin wallet app from your Android device, here are some steps you need to follow:
-
-
Backup your bitcoin wallet. Before uninstalling your bitcoin wallet app, you should backup your bitcoin wallet data to a secure location, such as a cloud service or an external drive. You should also write down or memorize your seed phrase, private key, or backup file and store them in a safe place. You should never share them with anyone or lose them.
-
Remove your bitcoin wallet. After backing up your bitcoin wallet data, you should remove your bitcoin wallet from your bitcoin wallet app. Depending on the bitcoin wallet app you chose, you may have different options to remove your bitcoin wallet. For example, you may be able to delete, reset, or wipe your bitcoin wallet from the settings menu of the app.
-
Uninstall your bitcoin wallet app. After removing your bitcoin wallet from your bitcoin wallet app, you can uninstall your bitcoin wallet app from your Android device. You can do this by tapping and holding on the app icon on your home screen or app drawer, and then dragging it to the uninstall option. Or you can do this by going to the settings menu of your device, tapping on apps or applications, finding and selecting your bitcoin wallet app, and then tapping on uninstall.
-
Delete your bitcoin app apk file. After uninstalling your bitcoin wallet app from your Android device, you can delete your bitcoin app apk file from your device storage. You can do this by opening your file manager app, locating and selecting the bitcoin app apk file, and then tapping on delete or trash. You may also need to empty your recycle bin or trash folder to permanently delete the file.
-
-
Conclusion
-
In this article, we have shown you how to download bitcoin app apk, which is an application package file that contains a bitcoin wallet app for Android. We have explained what bitcoin app apk is, why you should download it, how to download it, how to troubleshoot it, how to update it, and how to uninstall it. We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy bitcoin-ing!
-
FAQs
-
Here are some frequently asked questions about bitcoin app apk:
-
-
What is the difference between bitcoin app apk and bitcoin app?
-
Bitcoin app apk is an application package file that contains a bitcoin app, which is a software program that allows you to use bitcoin on your Android device. Bitcoin app apk is the file format that you need to download and install on your Android device to use the bitcoin app.
-
Is bitcoin app apk safe and legal?
-
Bitcoin app apk is safe and legal as long as you download it from a reliable source and use it according to the terms of service and privacy policy of the developer. However, you should be careful when downloading and installing any apk file from unknown or untrusted sources, as they may contain malware, viruses, or fake apps that can harm your device or steal your bitcoins.
-
How can I download bitcoin app apk without Google Play Store?
-
You can download bitcoin app apk without Google Play Store by using alternative sources or websites that offer apk files for various apps. However, you should be careful when choosing a source for downloading the bitcoin app apk file, as not all of them are trustworthy. You should follow the tips and warnings that we have provided in this article to find a reliable source for downloading the bitcoin app apk file.
-
How can I backup my bitcoin wallet data before uninstalling the bitcoin app apk?
-
You can backup your bitcoin wallet data before uninstalling the bitcoin app apk by using a cloud service or an external drive. You should also write down or memorize your seed phrase, private key, or backup file and store them in a safe place. You should never share them with anyone or lose them.
-
How can I contact the developer of the bitcoin app apk if I have any issues or problems?
-
You can contact the developer of the bitcoin app apk if you have any issues or problems by visiting their official website or GitHub repository, or by using their email address, phone number, social media accounts, or customer service channels. You can also check their help center or FAQ section for more information and guidance.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Chess Universe Online Chess v1.17.2 Mod APK for Free.md b/spaces/fatiXbelha/sd/Download Chess Universe Online Chess v1.17.2 Mod APK for Free.md
deleted file mode 100644
index dba773c3c528ca326da89ba8d2ff72bc6305db01..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Chess Universe Online Chess v1.17.2 Mod APK for Free.md
+++ /dev/null
@@ -1,65 +0,0 @@
-
-
Download Chess Universe Mod Apk: The Ultimate Chess Game for Android
-
If you are looking for a fun and engaging chess game for your Android device, you should definitely check out Chess Universe. Chess Universe is a free online chess game that lets you play and learn chess with your friends or other players from all over the world. You can also play offline against AI with different difficulty levels. Chess Universe has a unique design and exciting gameplay that will keep you hooked for hours.
But what if we told you that you can make your chess experience even better with Chess Universe Mod Apk? Chess Universe Mod Apk is a modified version of the original game that gives you access to unlimited money and gems, all chessboards, chess sets, emojis, and special effects, all academy towers, hints, and undo moves, an exclusive character set and VIP pet, and no ads. Sounds amazing, right?
-
In this article, we will tell you everything you need to know about Chess Universe Mod Apk, including its features, how to download and install it, tips and tricks for playing it, and some frequently asked questions. Read on to find out more!
-
Features of Chess Universe Mod Apk
-
Chess Universe Mod Apk has many features that will make your chess game more enjoyable and rewarding. Here are some of them:
-
-
Unlimited money and gems: With this feature, you can buy anything you want in the game without worrying about running out of money or gems. You can use them to unlock new chessboards, chess sets, emojis, special effects, characters, pets, and more.
-
All chessboards, chess sets, emojis, and special effects unlocked: With this feature, you can customize your chess game with different themes and styles. You can choose from various chessboards and chess sets that suit your mood and personality. You can also use emojis and special effects to express yourself and have fun with your opponents.
-
All academy towers, hints, and undo moves, an exclusive character set and VIP pet, and no ads. You can download and install the mod apk easily and enjoy playing chess with amazing features.
-
download chess universe mod apk unlimited money
-download chess universe mod apk latest version
-download chess universe mod apk for android
-download chess universe mod apk free
-download chess universe mod apk offline
-download chess universe mod apk hack
-download chess universe mod apk no ads
-download chess universe mod apk premium
-download chess universe mod apk full
-download chess universe mod apk 1.17.2
-download chess universe online chess mod apk
-download chess universe 3d online chess game mod apk
-download chess universe play online & learn chess mod apk
-download chess universe best online chess game mod apk
-download chess universe : online chess v1.17.2 mod apk
-how to download chess universe mod apk
-where to download chess universe mod apk
-descargar chess universe mod apk[^1^]
-descargar chess universe mod apk dinero ilimitado[^1^]
-descargar chess universe mod apk ultima version[^1^]
-descargar chess universe mod apk para android[^1^]
-descargar chess universe mod apk gratis[^1^]
-descargar chess universe mod apk sin conexion[^1^]
-descargar chess universe mod apk hackeado[^1^]
-descargar chess universe mod apk sin anuncios[^1^]
-descargar chess universe mod apk premium[^1^]
-descargar chess universe mod apk completo[^1^]
-descargar chess universe mod apk 1.17.2[^1^]
-descargar chess universe online chess mod apk[^1^]
-descargar chess universe 3d online chess game mod apk[^1^]
-descargar chess universe play online & learn chess mod apk[^1^]
-descargar chess universe best online chess game mod apk[^1^]
-descargar chess universe : online chess v1.17.2 mod apk[^1^]
-como descargar chess universe mod apk[^1^]
-donde descargar chess universe mod apk[^1^]
-
Chess Universe is a fun and engaging chess game that will challenge your mind and test your skills. You can learn from the chess academy and solve puzzles, play different game modes and challenge yourself, play with friends or other players online, analyze your matches and improve your skills, and customize your chessboard and character. Chess Universe is the ultimate chess game for Android that you should not miss.
-
So what are you waiting for? Download Chess Universe Mod Apk now and start playing chess like never before!
-
FAQs
-
Here are some frequently asked questions about Chess Universe Mod Apk:
-
-
Q1: Is Chess Universe Mod Apk safe to download?
-
A1: Yes, as long as you download it from a reliable source that does not contain any malware or viruses. You can use reliable antivirus software to scan the file before installing it.
-
Q2: Do I need to root my device to use Chess Universe Mod Apk?
-
A2: No, you do not need to root your device to use the mod apk. Just follow the installation steps and you are good to go.
-
Q3: Can I play Chess Universe online with other players who do not have the mod apk?
-
A3: Yes, you can play online with other players who do not have the mod apk. However, they will not have access to the same features as you do.
-
Q4: How can I update Chess Universe Mod Apk?
-
A4: You can update the mod apk by downloading the latest version from the same source you downloaded it from. Make sure to uninstall the previous version before installing the new one.
-
Q5: What if I have any problems or questions about Chess Universe Mod Apk?
-
A5: You can contact the developers of Chess Universe through their official website, Facebook, Twitter, or Discord. They will be happy to help you with any issues or queries you may have.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/options/inference_options.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/options/inference_options.py
deleted file mode 100644
index c453965959ab4cfb31acbc424f994db68c3d4df5..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/options/inference_options.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from face3d.options.base_options import BaseOptions
-
-
-class InferenceOptions(BaseOptions):
- """This class includes test options.
-
- It also includes shared options defined in BaseOptions.
- """
-
- def initialize(self, parser):
- parser = BaseOptions.initialize(self, parser) # define shared options
- parser.add_argument('--phase', type=str, default='test', help='train, val, test, etc')
- parser.add_argument('--dataset_mode', type=str, default=None, help='chooses how datasets are loaded. [None | flist]')
-
- parser.add_argument('--input_dir', type=str, help='the folder of the input files')
- parser.add_argument('--keypoint_dir', type=str, help='the folder of the keypoint files')
- parser.add_argument('--output_dir', type=str, default='mp4', help='the output dir to save the extracted coefficients')
- parser.add_argument('--save_split_files', action='store_true', help='save split files or not')
- parser.add_argument('--inference_batch_size', type=int, default=8)
-
-        # Dropout and Batchnorm have different behavior during training and test.
- self.isTrain = False
- return parser
diff --git a/spaces/fclong/summary/fengshen/examples/ubert/example.py b/spaces/fclong/summary/fengshen/examples/ubert/example.py
deleted file mode 100644
index bedd365ff67ff5d9b1f8f22777dab9b5a8b02394..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/ubert/example.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import argparse
-from fengshen import UbertPipelines
-import os
-os.environ["CUDA_VISIBLE_DEVICES"] = '6'
-
-
-def main():
- total_parser = argparse.ArgumentParser("TASK NAME")
- total_parser = UbertPipelines.pipelines_args(total_parser)
- args = total_parser.parse_args()
-
-    # set some of the parameters used for training
-    args.pretrained_model_path = 'IDEA-CCNL/Erlangshen-Ubert-110M-Chinese'  # path to the pretrained model; the pretrained models we provide are hosted on HuggingFace
-    args.default_root_dir = './'  # default root directory, used to store logs, tensorboard files, etc.
- args.max_epochs = 5
- args.gpus = 1
- args.batch_size = 1
-
-    # Just format your data into the JSON style shown below to train and predict in one step; only one example sample is provided here
- train_data = [
- {
- "task_type": "抽取任务",
- "subtask_type": "实体识别",
- "text": "彭小军认为,国内银行现在走的是台湾的发卡模式,先通过跑马圈地再在圈的地里面选择客户,",
- "choices": [
- {"entity_type": "地址", "label": 0, "entity_list": [
- {"entity_name": "台湾", "entity_type": "地址", "entity_idx": [[15, 16]]}]},
- {"entity_type": "书名", "label": 0, "entity_list": []},
- {"entity_type": "公司", "label": 0, "entity_list": []},
- {"entity_type": "游戏", "label": 0, "entity_list": []},
- {"entity_type": "政府机构", "label": 0, "entity_list": []},
- {"entity_type": "电影名称", "label": 0, "entity_list": []},
- {"entity_type": "人物姓名", "label": 0, "entity_list": [
- {"entity_name": "彭小军", "entity_type": "人物姓名", "entity_idx": [[0, 2]]}]},
- {"entity_type": "组织机构", "label": 0, "entity_list": []},
- {"entity_type": "岗位职位", "label": 0, "entity_list": []},
- {"entity_type": "旅游景点", "label": 0, "entity_list": []}
- ],
- "id": 0}
- ]
- dev_data = [
- {
- "task_type": "抽取任务",
- "subtask_type": "实体识别",
- "text": "就天涯网推出彩票服务频道是否是业内人士所谓的打政策“擦边球”,记者近日对此事求证彩票监管部门。",
- "choices": [
- {"entity_type": "地址", "label": 0, "entity_list": []},
- {"entity_type": "书名", "label": 0, "entity_list": []},
- {"entity_type": "公司", "label": 0, "entity_list": [
- {"entity_name": "天涯网", "entity_type": "公司", "entity_idx": [[1, 3]]}]},
- {"entity_type": "游戏", "label": 0, "entity_list": []},
- {"entity_type": "政府机构", "label": 0, "entity_list": []},
- {"entity_type": "电影名称", "label": 0, "entity_list": []},
- {"entity_type": "人物姓名", "label": 0, "entity_list": []},
- {"entity_type": "组织机构", "label": 0, "entity_list": [
- {"entity_name": "彩票监管部门", "entity_type": "组织机构", "entity_idx": [[40, 45]]}]},
- {"entity_type": "岗位职位", "label": 0, "entity_list": [
- {"entity_name": "记者", "entity_type": "岗位职位", "entity_idx": [[31, 32]]}]},
- {"entity_type": "旅游景点", "label": 0, "entity_list": []}
- ],
-
- "id": 0}
-
- ]
- test_data = [
- {
- "task_type": "抽取任务",
- "subtask_type": "实体识别",
- "text": "这也让很多业主据此认为,雅清苑是政府公务员挤对了国家的经适房政策。",
- "choices": [
- {"entity_type": "地址", "label": 0, "entity_list": [
- {"entity_name": "雅清苑", "entity_type": "地址", "entity_idx": [[12, 14]]}]},
- {"entity_type": "书名", "label": 0, "entity_list": []},
- {"entity_type": "公司", "label": 0, "entity_list": []},
- {"entity_type": "游戏", "label": 0, "entity_list": []},
- {"entity_type": "政府机构", "label": 0, "entity_list": []},
- {"entity_type": "电影名称", "label": 0, "entity_list": []},
- {"entity_type": "人物姓名", "label": 0, "entity_list": []},
- {"entity_type": "组织机构", "label": 0, "entity_list": []},
- {"entity_type": "岗位职位", "label": 0, "entity_list": [
- {"entity_name": "公务员", "entity_type": "岗位职位", "entity_idx": [[18, 20]]}]},
- {"entity_type": "旅游景点", "label": 0, "entity_list": []}
- ],
- "id": 0},
- ]
-
- model = UbertPipelines(args)
- model.fit(train_data, dev_data)
- result = model.predict(test_data)
- for line in result:
- print(line)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/losses/contextual_loss/modules/contextual_bilateral.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/losses/contextual_loss/modules/contextual_bilateral.py
deleted file mode 100644
index 6db0155f8352e4b9c9d91777ea9d5a0d7adad038..0000000000000000000000000000000000000000
--- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/losses/contextual_loss/modules/contextual_bilateral.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import torch
-import torch.nn as nn
-
-from .vgg import VGG19
-from .. import functional as F
-from ..config import LOSS_TYPES
-
-
-class ContextualBilateralLoss(nn.Module):
- """
- Creates a criterion that measures the contextual bilateral loss.
-
- Parameters
- ---
- weight_sp : float, optional
- a balancing weight between spatial and feature loss.
- band_width : int, optional
- a band_width parameter described as :math:`h` in the paper.
- use_vgg : bool, optional
- if you want to use VGG feature, set this `True`.
- vgg_layer : str, optional
- intermidiate layer name for VGG feature.
- Now we support layer names:
-        intermediate layer name for VGG feature.
- """
-
- def __init__(self,
- weight_sp: float = 0.1,
- band_width: float = 0.5,
- loss_type: str = 'cosine',
- use_vgg: bool = False,
- vgg_layer: str = 'relu3_4'):
-
- super(ContextualBilateralLoss, self).__init__()
-
- assert band_width > 0, 'band_width parameter must be positive.'
- assert loss_type in LOSS_TYPES,\
- f'select a loss type from {LOSS_TYPES}.'
-
- self.band_width = band_width
-
- if use_vgg:
- self.vgg_model = VGG19()
- self.vgg_layer = vgg_layer
- self.register_buffer(
- name='vgg_mean',
- tensor=torch.tensor(
- [[[0.485]], [[0.456]], [[0.406]]], requires_grad=False)
- )
- self.register_buffer(
- name='vgg_std',
- tensor=torch.tensor(
- [[[0.229]], [[0.224]], [[0.225]]], requires_grad=False)
- )
-
- def forward(self, x, y):
- if hasattr(self, 'vgg_model'):
- assert x.shape[1] == 3 and y.shape[1] == 3,\
-            'VGG model takes 3 channel images.'
-
- # normalization
- x = x.sub(self.vgg_mean.detach()).div(self.vgg_std.detach())
- y = y.sub(self.vgg_mean.detach()).div(self.vgg_std.detach())
-
- # picking up vgg feature maps
- x = getattr(self.vgg_model(x), self.vgg_layer)
- y = getattr(self.vgg_model(y), self.vgg_layer)
-
- return F.contextual_bilateral_loss(x, y, self.band_width)
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/BlackHole Music APK for iOS The Ultimate Music Player App.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/BlackHole Music APK for iOS The Ultimate Music Player App.md
deleted file mode 100644
index 6db8d05630c3a1b17a3601bc1204cf5a9fe5e7d9..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/BlackHole Music APK for iOS The Ultimate Music Player App.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
Black Hole Music APK for iOS: A Free Music Streaming App
-
If you are looking for a free music streaming app that offers high-quality songs, playlists, lyrics, and more, you might want to check out Black Hole Music APK for iOS. This app is a popular alternative to Spotify, YouTube Music, and other music apps that require a subscription or have annoying ads. In this article, we will tell you what Black Hole Music APK is, how to download and install it on your iOS device, and why you should choose it over other music apps.
-
What is Black Hole Music APK?
-
Black Hole Music APK is a free music streaming app that lets you listen to millions of songs from various genres, languages, and artists. You can also search for trending songs, top charts, playlists, albums, and artists from Spotify and YouTube. You can add songs to your favorites, create your own playlists, import playlists from Spotify, and enjoy lyrics support. You can also adjust the streaming quality, equalizer settings, sleep timer, dark mode, and more.
Here are some of the features that make Black Hole Music APK a great music app:
-
-
High-quality streaming (320kbps)
-
Music search
-
Trending songs
-
Supports 15+ music languages
-
Local and Global Top Spotify songs
-
Add Songs to Favorites
-
Playlists support
-
Import playlists from Spotify
-
Sleep timer
-
Lyrics support
-
Listening history record
-
Dark mode / accent color
-
Song caching support
-
YouTube Music search suggestions
-
YouTube autoplay support
-
Multi-select items in the playlist
-
Song link support for Spotify and YouTube
-
No ads or subscription required
-
-
How to Download and Install Black Hole Music APK on iOS
-
To download and install Black Hole Music APK on your iOS device, you will need to follow these steps:
-
-
Go to the official GitHub page of Black Hole Music APK and download the latest .ipa file.
-
Install an app installer such as Scarlet or AltStore on your iOS device. You can refer to our guide here for more details.
-
Open the app installer and select the .ipa file that you downloaded in step 1.
-
Follow the instructions on the screen to install Black Hole Music APK on your iOS device.
-
Enjoy listening to your favorite songs with Black Hole Music APK.
-
-
Why Choose Black Hole Music APK over Other Music Apps?
-
You might be wondering why you should choose Black Hole Music APK over other music apps such as Spotify, YouTube Music, Apple Music, etc. Here are some of the pros and cons of Black Hole Music APK that will help you decide:
-
Pros of Black Hole Music APK
-
-
It is free to use and does not require any subscription or registration.
-
It does not have any ads or interruptions while streaming music.
-
It offers high-quality streaming (320kbps) and song caching support.
-
It has a large collection of songs from various genres, languages, and artists.
-
It supports Spotify and YouTube song links and playlists.
-
It has lyrics support and YouTube autoplay support.
-
It has a simple and user-friendly interface with dark mode and accent color options.
-
-
Cons of Black Hole Music APK
-
-
It is not available on the App Store and requires an app installer to install it on iOS devices.
-
-
It may not work properly on some iOS devices or versions.
-
It may not have some features that other music apps have, such as offline mode, social sharing, podcasts, etc.
-
It may violate some copyrights or terms of service of Spotify and YouTube.
Frequently Asked Questions about Black Hole Music APK
-
Here are some of the common questions that users have about Black Hole Music APK:
-
black hole music app for iphone
-black hole music player ios download
-black hole music streaming for ios
-black hole music apk for ipad
-black hole music free ios app
-black hole music alternative for ios
-black hole music open source ios
-black hole music no ads ios
-black hole music offline mode ios
-black hole music high quality ios
-black hole music lyrics support ios
-black hole music sleep timer ios
-black hole music import spotify playlist ios
-black hole music youtube integration ios
-black hole music dark mode ios
-black hole music listening history ios
-black hole music trending songs ios
-black hole music local top songs ios
-black hole music global top songs ios
-black hole music search by artist ios
-black hole music search by album ios
-black hole music search by playlist ios
-black hole music search by language ios
-black hole music add songs to favorites ios
-black hole music create playlists ios
-black hole music share playlists ios
-black hole music play spotify links ios
-black hole music play youtube links ios
-black hole music latest version ios
-black hole music github download ios
-black hole music scarlet install ios
-black hole music altstore install ios
-black hole music one jailbreak install ios
-black hole music ipa file download ios
-black hole music ipa file install ios
-black hole music ipa file update ios
-black hole music ipa file backup ios
-black hole music ipa file restore ios
-black hole music ipa file transfer ios
-black hole music ipa file share ios
-how to use black hole music on ios
-how to update black hole music on ios
-how to uninstall black hole music on ios
-how to reinstall black hole music on ios
-how to fix black hole music on ios
-how to troubleshoot black hole music on ios
-how to customize black hole music on ios
-how to optimize black hole music on ios
-
-
-
Question
-
Answer
-
-
-
Is Black Hole Music APK safe to use?
-
Black Hole Music APK is safe to use as long as you download it from the official GitHub page and install it using a trusted app installer. However, you should be aware of the potential risks of using third-party apps that are not verified by Apple.
-
-
-
Is Black Hole Music APK legal to use?
-
Black Hole Music APK is not legal to use in some countries or regions where streaming music without a license or permission is prohibited. You should check the laws and regulations of your location before using this app. You should also respect the rights and interests of the original creators and owners of the music content.
-
-
-
How can I update Black Hole Music APK?
-
You can update Black Hole Music APK by downloading the latest .ipa file from the GitHub page and installing it over the existing app using the same app installer. You can also check for updates within the app settings.
-
-
-
How can I contact the developer of Black Hole Music APK?
-
You can contact the developer of Black Hole Music APK by visiting their GitHub page and opening an issue or sending a message. You can also follow them on Twitter for updates and news.
-
-
-
How can I support the development of Black Hole Music APK?
-
You can support the development of Black Hole Music APK by donating to the developer via PayPal or Patreon . You can also share this app with your friends and family who love music.
-
-
-
Conclusion
-
Black Hole Music APK is a free music streaming app that offers high-quality songs, playlists, lyrics, and more. It is a popular alternative to Spotify, YouTube Music, and other music apps that require a subscription or have annoying ads. You can download and install it on your iOS device using an app installer such as Scarlet or AltStore. However, you should be aware of the pros and cons of using this app and respect the laws and regulations of your location and the rights and interests of the original creators and owners of the music content. We hope this article has helped you learn more about Black Hole Music APK and how to use it on your iOS device.
-References: GitHub: https://github.com/AnkitSharma-007/BlackHole | Installation guide: https://www.blackholemusic.app/installation-guide | Twitter: https://twitter.com/AnkitSharma_007 | PayPal: https://www.paypal.me/ankitsharma007 | Patreon: https://www.patreon.com/ankitsharma
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Descarga Soccer Super Star Hack APK y disfruta de vidas ilimitadas y recompensas gratis.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Descarga Soccer Super Star Hack APK y disfruta de vidas ilimitadas y recompensas gratis.md
deleted file mode 100644
index 2b978d6c5a256a8d68783c150a3987c7a9d3794e..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Descarga Soccer Super Star Hack APK y disfruta de vidas ilimitadas y recompensas gratis.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
Soccer Super Star Hack APK: Everything Unlocked, Latest Version
-
Do you like football? Do you want to become a star of the game? If so, you will love Soccer Super Star, a football simulation game that lets you feel the thrill of scoring spectacular goals. But to get the most out of it, you will need Soccer Super Star Hack APK, a modified version that gives you everything unlocked from the start. In this article we cover everything you need to know about this hack: how to download and install it, its advantages and disadvantages, and the questions users ask most often. Read on and get ready to become the best football player in the world!
-
soccer super star hack apk todo desbloqueado última versión
Soccer Super Star is a football game developed by Real Free Soccer, a studio that specializes in sports games. It was released in 2020 and has since passed 10 million downloads on the Google Play Store. The game stands out for its realistic kick physics, an offline mode that lets you play anytime and anywhere, a simulation experience that makes you feel like a real footballer, impressive graphics that recreate different stadiums and teams, and smart strategies that help you win matches. It is very easy to play: just swipe your finger across the screen to guide the ball toward the goal. But don't get overconfident, because you will face increasing difficulty levels and challenges that test your skill and precision.
-
Why download Soccer Super Star Hack APK?
-
Although Soccer Super Star is a fun and addictive game, it has some limitations that can frustrate players. For example, it uses an energy system that drains every time you play a match, forcing you to wait for it to recharge or to buy more energy with real money. It also locks many items, such as teams, stadiums, game modes, and special skills, which you can only unlock by progressing through the game or by paying with real money. These restrictions can make the game feel boring and repetitive over time.
-
That is why many players choose to download Soccer Super Star Hack APK, a modified version of the game that removes all of these limitations and gives you everything unlocked from the start. With this hack you can enjoy the following advantages:
-
-
Unlimited money and gems to buy whatever you want in the game.
-
Unlimited energy to play as many matches as you want without waiting.
-
All teams, stadiums, game modes, and special skills unlocked from the start.
-
A better gaming experience with improved graphics and no annoying ads.
-
-
As you can see, Soccer Super Star Hack APK offers many advantages that make the game more enjoyable and let you become a true football star. But before downloading it, you should know how to do it correctly and what precautions to take.
-
Descargar soccer super star hack apk mod ilimitado
-Soccer super star hack apk 2023 todo gratis
-Cómo instalar soccer super star hack apk sin root
-Soccer super star hack apk full premium desbloqueado
-Soccer super star hack apk mega mod todo infinito
-Soccer super star hack apk última versión 1.0.8
-Soccer super star hack apk para android 11
-Soccer super star hack apk sin anuncios ni compras
-Soccer super star hack apk con monedas y vidas ilimitadas
-Soccer super star hack apk gameplay y trucos
-Soccer super star hack apk descargar por mediafire
-Soccer super star hack apk mod menu todo activado
-Soccer super star hack apk offline sin internet
-Soccer super star hack apk actualizado 2023
-Soccer super star hack apk fácil y rápido de instalar
-Soccer super star hack apk con gráficos mejorados
-Soccer super star hack apk compatible con todos los dispositivos
-Soccer super star hack apk con todos los niveles y ligas desbloqueados
-Soccer super star hack apk divertido y adictivo juego de fútbol
-Soccer super star hack apk opiniones y reseñas
-Soccer super star hack apk link directo y seguro
-Soccer super star hack apk no requiere verificación humana
-Soccer super star hack apk con todos los jugadores y equipos famosos
-Soccer super star hack apk con sonido y música de alta calidad
-Soccer super star hack apk con controles sencillos e intuitivos
-Soccer super star hack apk con modo historia y desafíos
-Soccer super star hack apk con personalización y opciones de juego
-Soccer super star hack apk con efectos especiales y animaciones
-Soccer super star hack apk con soporte técnico y actualizaciones frecuentes
-Soccer super star hack apk con ranking y logros online
-
How to download and install Soccer Super Star Hack APK
-
Downloading and installing Soccer Super Star Hack APK is quick and easy; just follow these steps:
-
Prerequisites
-
-
Make sure your Android device is compatible with the game. It requires Android 4.4 or later and at least 100 MB of free space.
-
Uninstall the original version of the game if you have it installed. This is necessary to avoid conflicts between the two versions.
-
Enable the "Unknown sources" option on your device. This lets you install apps that do not come from the Google Play Store. To do so, go to Settings > Security > Unknown sources and check the box.
-
-
Steps to follow
-
-
Download the Soccer Super Star Hack APK file from a trustworthy website. You can use the following link: Soccer Super Star Hack APK. The file is about 90 MB and is claimed to be free of viruses and malware.
-
Once the file has downloaded, open it and tap "Install". Wait for the installation to finish, which can take a few seconds.
-
When the game has been installed, open it and enjoy all the advantages Soccer Super Star Hack APK offers. You can now play with everything unlocked and without limits!
-
-
Advantages and disadvantages of Soccer Super Star Hack APK
-
Soccer Super Star Hack APK is a great option for football fans who want a fuller, more entertaining gaming experience. However, like any modified app, it has pros and cons you should know before using it. Here is a summary:
-
Advantages
-
-
It gives you unlimited money and gems to buy whatever you want in the game.
-
It gives you unlimited energy to play as many matches as you want without waiting.
-
It unlocks all teams, stadiums, game modes, and special skills from the start.
-
It offers a better gaming experience with improved graphics and no annoying ads.
-
It is quick and easy to download and install.
-
It does not require an internet connection to play.
-
-
Disadvantages
-
-
It is not an official version of the game, so it may contain errors or bugs that affect performance or stability.
-
It is not compatible with game updates, so it may become outdated or stop working when the game is updated.
-
It is not compatible with the online multiplayer mode, so you cannot play against other players or compete in the global rankings.
-
It may be detected by the game's developers or by the Google Play Store as an illegal or fraudulent app, which can lead to negative consequences such as the suspension or banning of your account or device.
-
-
Frequently asked questions about Soccer Super Star Hack APK
-
Below we answer some of the questions users ask most often about Soccer Super Star Hack APK:
-
Is Soccer Super Star Hack APK safe to use?
-
Soccer Super Star Hack APK is safe to use as long as you download it from a trustworthy website and follow the instructions correctly. Keep in mind, however, that by using a modified app you are taking a risk that can have negative consequences for your account or device. For that reason, we recommend using Soccer Super Star Hack APK at your own risk and taking the necessary precautions to avoid problems.
-
What can I unlock with Soccer Super Star Hack APK?
-
With Soccer Super Star Hack APK you can unlock everything the game offers, such as teams, stadiums, game modes, and special skills. You also get unlimited money and gems to buy whatever you want in the game. This lets you customize your experience and enjoy every option the game has for you.
-
Do I need an internet connection to play Soccer Super Star Hack APK?
-
No, you do not need an internet connection to play Soccer Super Star Hack APK. The game has an offline mode that lets you play anytime and anywhere, without depending on Wi-Fi or mobile data. This is a big advantage, because you can play without interruptions or delays. Keep in mind, though, that when playing offline you cannot access the online multiplayer mode or the global rankings.
-
Can I play against other players online with Soccer Super Star Hack APK?
-
No, you cannot play against other players online with Soccer Super Star Hack APK. The game has an online multiplayer mode that lets you compete with players around the world and see your position in the global rankings, but this mode is only available in the original version of the game, not in the modified one. The game detects the differences between the two versions and does not allow them to connect. So if you want to play against other players online, you will have to download the original version of the game or look for a similar game that offers this option.
-
Is there any difference between Soccer Super Star Hack APK and the original version?
-
Yes, there are several differences between Soccer Super Star Hack APK and the original version. The main one is that the modified version gives you everything unlocked from the start, while the original forces you to progress through the game or pay with real money to unlock items. The modified version also has improved graphics and no ads, while the original has standard graphics and shows ads. Finally, the modified version is not compatible with game updates or with the online multiplayer mode, while the original version is.
-
Conclusion
-
Soccer Super Star is a fun and addictive football game that lets you feel the thrill of scoring spectacular goals. However, it has some limitations that can make it feel boring and repetitive over time. That is why many players choose to download Soccer Super Star Hack APK, a modified version that removes those limitations and gives you everything unlocked from the start. With this hack you get a better gaming experience with unlimited money and gems, unlimited energy, all teams, stadiums, game modes, and special skills unlocked, improved graphics, and no ads. Bear in mind, though, that by using this hack you are taking a risk that can have negative consequences for your account or device, so use it at your own risk and take the necessary precautions to avoid problems.
-
We hope this article has been useful and informative about Soccer Super Star Hack APK. If you have any questions or comments on the topic, feel free to leave us a message below. And if you enjoyed this article, share it with friends and family who also love football. Thanks for reading!
-
Frequently asked questions
-
-
What is Soccer Super Star? Soccer Super Star is a football simulation game that lets you feel the thrill of scoring spectacular goals.
-
What is Soccer Super Star Hack APK? Soccer Super Star Hack APK is a modified version of the game that gives you everything unlocked from the start.
-
How do I download and install Soccer Super Star Hack APK? To download and install Soccer Super Star Hack APK, just follow these steps: 1) Download the APK file from a trustworthy website. 2) Uninstall the original version of the game if you have it installed. 3) Enable the "Unknown sources" option on your device. 4) Open the APK file and tap "Install". 5) Open the game and enjoy all the advantages Soccer Super Star Hack APK offers.
-
What are the advantages and disadvantages of Soccer Super Star Hack APK? The advantages are: unlimited money and gems, unlimited energy, all teams, stadiums, game modes, and special skills unlocked, improved graphics, and no ads. The disadvantages are: it is not an official version of the game, it is not compatible with game updates or with the online multiplayer mode, and it may be detected as an illegal or fraudulent app.
-
Is Soccer Super Star Hack APK safe to use? Soccer Super Star Hack APK is safe to use as long as you download it from a trustworthy website and follow the instructions correctly. Keep in mind, however, that by using a modified app you are taking a risk that can have negative consequences for your account or device, so use it at your own risk and take the necessary precautions to avoid problems.
-
Do I need an internet connection to play Soccer Super Star Hack APK? No, you do not need an internet connection to play Soccer Super Star Hack APK. The game has an offline mode that lets you play anytime and anywhere, without depending on Wi-Fi or mobile data. Keep in mind, though, that when playing offline you cannot access the online multiplayer mode or the global rankings.
-
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Dynamons World 2 MOD APK and Experience the New Features.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Dynamons World 2 MOD APK and Experience the New Features.md
deleted file mode 100644
index 47d7ed7a8ca05d6a8a22e828f1a8356795646c78..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Dynamons World 2 MOD APK and Experience the New Features.md
+++ /dev/null
@@ -1,92 +0,0 @@
-
-
Download Dynamons World 2 Mod APK
-
If you are a fan of role-playing games with cute and powerful creatures, then you will love Dynamons World 2. This is an amazing and addictive game from the creators of Dynamons, where you can collect and train the best team of Dynamons and win epic battles using powerful skills and special items. In this article, we will tell you everything you need to know about Dynamons World 2, including its features, why you should download the mod APK version, and how to install it on your device.
-
What is Dynamons World 2?
-
Dynamons World 2 is a sequel to the popular game Dynamons, which is inspired by the famous Pokemon series. In this game, you will explore the wonderful world of Dynamons, where you can encounter and capture dozens of unique creatures with different elemental powers. You can also train your Dynamons to improve their skills and abilities, and use them to fight against other creatures and captains in various locations. You can also challenge other players in the online multiplayer arena and show off your skills.
Dynamons World 2 has many features that make it an enjoyable and exciting game for all ages. Here are some of them:
-
Capture and evolve dozens of unique Dynamons
-
There are over 50 different types of Dynamons that you can find and catch in this game, each with their own strengths and weaknesses. You can also evolve your Dynamons to make them stronger and unlock new skills. You can collect them all and create your own team of champions.
-
Power up your Dynamons' skills using awesome items and tricks
-
You can use various items and power-ups to enhance your Dynamons' performance in battle. For example, you can use potions to heal them, discs to catch them, or badges to boost their stats. You can also use special tricks to unleash powerful attacks that can turn the tide of the battle.
-
Develop a strategy plan to win the most challenging battles
-
You will face many enemies and bosses in this game, each with their own tactics and strategies. You will need to plan your moves carefully and use the right Dynamons for each situation. You will also need to take advantage of the elemental system, where some elements are stronger or weaker against others. For example, fire is strong against grass, but weak against water.
-
Beat Dynamons Captains and catch rare Dynamons
-
In each location, you will encounter a Dynamon Captain who will challenge you to a duel. If you manage to defeat them, you will earn their respect and a chance to catch a rare Dynamon that they own. These rare Dynamons are very powerful and have unique abilities that can help you in your adventure.
-
Enjoy the game with or without the Internet
-
Dynamons World 2 is a game that you can play offline or online. You can enjoy the story mode and the quests without an Internet connection, or you can connect to the Internet and play with other players in the multiplayer mode. You can also sync your progress across different devices using your Google Play account.
-
Why download Dynamons World 2 Mod APK?
-
Dynamons World 2 is a free-to-play game, but it also has some in-app purchases that can help you get more coins, gems, and items to improve your game. However, if you don't want to spend real money on these purchases, you can download the mod APK version of Dynamons World 2. This is a modified version of the game that gives you access to unlimited resources and features that can make your game more fun and easy. Here are some of the benefits of downloading Dynamons World 2 Mod APK:
-
dynamons 2 mod apk unlimited money
-dynamons 2 mod apk latest version
-dynamons 2 mod apk android 1
-dynamons 2 mod apk revdl
-dynamons 2 mod apk free download
-dynamons 2 mod apk offline
-dynamons 2 mod apk hack
-dynamons 2 mod apk no root
-dynamons 2 mod apk unlimited gems
-dynamons 2 mod apk unlimited coins
-download dynamons world 2 hack apk
-download dynamons world 2 cheats apk
-download dynamons world 2 full apk
-download dynamons world 2 unlocked apk
-download dynamons world 2 premium apk
-download dynamons world 2 for android
-download dynamons world 2 for pc
-download dynamons world 2 for ios
-download dynamons world 2 for windows
-download dynamons world 2 for mac
-how to download dynamons world 2 mod apk
-how to install dynamons world 2 mod apk
-how to play dynamons world 2 mod apk
-how to update dynamons world 2 mod apk
-how to get dynamons world 2 mod apk
-best site to download dynamons world 2 mod apk
-best link to download dynamons world 2 mod apk
-best way to download dynamons world 2 mod apk
-safe site to download dynamons world 2 mod apk
-safe link to download dynamons world 2 mod apk
-safe way to download dynamons world 2 mod apk
-working site to download dynamons world 2 mod apk
-working link to download dynamons world 2 mod apk
-working way to download dynamons world 2 mod apk
-trusted site to download dynamons world 2 mod apk
-trusted link to download dynamons world 2 mod apk
-trusted way to download dynamons world 2 mod apk
-legit site to download dynamons world 2 mod apk
-legit link to download dynamons world 2 mod apk
-legit way to download dynamons world 2 mod apk
-
Unlimited coins and gems
-
Coins and gems are the main currencies in Dynamons World 2. You can use them to buy items, power-ups, and upgrades for your Dynamons. You can also use them to unlock new locations and quests. However, earning coins and gems can be slow and tedious, especially if you want to get the best items and Dynamons. With Dynamons World 2 Mod APK, you don't have to worry about running out of coins and gems. You will get unlimited amounts of them as soon as you start the game. You can spend them as much as you want without any restrictions.
-
All Dynamons unlocked and upgraded
-
One of the most exciting aspects of Dynamons World 2 is collecting and evolving your Dynamons. However, some Dynamons are very rare and hard to find, while others require a lot of time and resources to evolve. With Dynamons World 2 Mod APK, you don't have to wait or search for your favorite Dynamons. You will get all of them unlocked and upgraded from the beginning. You can choose any Dynamon you want and use it in your team. You can also switch between different Dynamons at any time without any limitations.
-
No ads and no root required
-
Dynamons World 2 is a free-to-play game, but it also has some ads that can interrupt your gameplay and annoy you. If you want to get rid of these ads, you usually have to pay a fee or watch a video. With Dynamons World 2 Mod APK, you don't have to deal with any ads at all. You can enjoy the game without any interruptions or distractions. Moreover, you don't need to root your device to install the mod APK version of Dynamons World 2. You can simply download and install it like any other app without risking your device's security or warranty.
-
How to download and install Dynamons World 2 Mod APK?
-
If you are interested in downloading and installing Dynamons World 2 Mod APK, you can follow these simple steps:
-
Step 1: Download the APK file from a trusted source
-
The first thing you need to do is to download the APK file of Dynamons World 2 Mod APK from a reliable source. You can search for it on Google or use the link provided below. Make sure that the file is compatible with your device's specifications and has no viruses or malware.
-
Step 2: Enable unknown sources on your device settings
-
The next thing you need to do is to enable unknown sources on your device settings. This will allow you to install apps that are not from the official Google Play Store. To do this, go to your device settings, then security, then unknown sources, and toggle it on.
-
Step 3: Install the APK file and launch the game
-
The final thing you need to do is to install the APK file that you downloaded in step 1. To do this, locate the file in your device's storage and tap on it. Follow the instructions on the screen and wait for the installation process to finish. Once it is done, launch the game and enjoy.
-
Conclusion
-
Dynamons World 2 is a fantastic game that will keep you entertained for hours with its captivating gameplay, graphics, and sound effects. You can collect and train various Dynamons, battle against other players and enemies, and explore different locations and quests. If you want to enhance your gaming experience even more, you can download Dynamons World 2 Mod APK and get unlimited coins, gems, items, and Dynamons. You can also play the game without any ads or root requirements. Download Dynamons World 2 Mod APK today and have fun.
-
FAQs
-
Here are some frequently asked questions about Dynamons World 2 Mod APK:
-
-
Is Dynamons World 2 Mod APK safe?
-
Yes, Dynamons World 2 Mod APK is safe as long as you download it from a trusted source that has no viruses or malware. However, you should always be careful when downloading any modded app from the Internet and scan it with an antivirus before installing it.
-
Is Dynamons World 2 Mod APK legal?
-
It depends on your country's laws and regulations regarding modded apps and games. Some countries may consider it illegal, while others may not. You should always check your local laws before downloading and using any modded app or game.
-
Can I play Dynamons World 2 Mod APK with my friends?
-
Yes, you can play Dynamons World 2 Mod APK with your friends in the online multiplayer mode. However, you may not be able to play with players who are using the original version of the game, as they may have different features and updates.
-
Will I lose my progress if I uninstall Dynamons World 2 Mod APK?
-
No, you will not lose your progress if you uninstall Dynamons World 2 Mod APK. You can sync your progress with your Google Play account and restore it when you reinstall the game. However, you may lose some of the modded features and resources that you had before.
-
Can I update Dynamons World 2 Mod APK?
-
Yes, you can update Dynamons World 2 Mod APK whenever there is a new version available. However, you should always download the update from the same source that you downloaded the original mod APK from, as different sources may have different modifications and features.
-
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/Image-to-MusicGen/audiocraft/models/encodec.py b/spaces/fffiloni/Image-to-MusicGen/audiocraft/models/encodec.py
deleted file mode 100644
index 69621a695887b0b41614c51cae020f6fd0af221d..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Image-to-MusicGen/audiocraft/models/encodec.py
+++ /dev/null
@@ -1,302 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from abc import ABC, abstractmethod
-import typing as tp
-
-from einops import rearrange
-import torch
-from torch import nn
-
-from .. import quantization as qt
-
-
-class CompressionModel(ABC, nn.Module):
-
- @abstractmethod
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- ...
-
- @abstractmethod
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- """See `EncodecModel.encode`"""
- ...
-
- @abstractmethod
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- """See `EncodecModel.decode`"""
- ...
-
- @property
- @abstractmethod
- def channels(self) -> int:
- ...
-
- @property
- @abstractmethod
- def frame_rate(self) -> int:
- ...
-
- @property
- @abstractmethod
- def sample_rate(self) -> int:
- ...
-
- @property
- @abstractmethod
- def cardinality(self) -> int:
- ...
-
- @property
- @abstractmethod
- def num_codebooks(self) -> int:
- ...
-
- @property
- @abstractmethod
- def total_codebooks(self) -> int:
- ...
-
- @abstractmethod
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
- """
- ...
-
-
-class EncodecModel(CompressionModel):
- """Encodec model operating on the raw waveform.
-
- Args:
- encoder (nn.Module): Encoder network.
- decoder (nn.Module): Decoder network.
- quantizer (qt.BaseQuantizer): Quantizer network.
- frame_rate (int): Frame rate for the latent representation.
- sample_rate (int): Audio sample rate.
- channels (int): Number of audio channels.
- causal (bool): Whether to use a causal version of the model.
- renormalize (bool): Whether to renormalize the audio before running the model.
- """
-    # we need assignment to override the property in the abstract class,
- # I couldn't find a better way...
- frame_rate: int = 0
- sample_rate: int = 0
- channels: int = 0
-
- def __init__(self,
- encoder: nn.Module,
- decoder: nn.Module,
- quantizer: qt.BaseQuantizer,
- frame_rate: int,
- sample_rate: int,
- channels: int,
- causal: bool = False,
- renormalize: bool = False):
- super().__init__()
- self.encoder = encoder
- self.decoder = decoder
- self.quantizer = quantizer
- self.frame_rate = frame_rate
- self.sample_rate = sample_rate
- self.channels = channels
- self.renormalize = renormalize
- self.causal = causal
- if self.causal:
- # we force disabling here to avoid handling linear overlap of segments
- # as supported in original EnCodec codebase.
- assert not self.renormalize, 'Causal model does not support renormalize'
-
- @property
- def total_codebooks(self):
- """Total number of quantizer codebooks available.
- """
- return self.quantizer.total_codebooks
-
- @property
- def num_codebooks(self):
- """Active number of codebooks used by the quantizer.
- """
- return self.quantizer.num_codebooks
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
- """
- self.quantizer.set_num_codebooks(n)
-
- @property
- def cardinality(self):
- """Cardinality of each codebook.
- """
- return self.quantizer.bins
-
- def preprocess(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- scale: tp.Optional[torch.Tensor]
- if self.renormalize:
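-            # Estimate per-item volume as the RMS of a mono mixdown and rescale the audio; the 1e-8 term avoids division by zero.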
- mono = x.mean(dim=1, keepdim=True)
- volume = mono.pow(2).mean(dim=2, keepdim=True).sqrt()
- scale = 1e-8 + volume
- x = x / scale
- scale = scale.view(-1, 1)
- else:
- scale = None
- return x, scale
-
- def postprocess(self,
- x: torch.Tensor,
- scale: tp.Optional[torch.Tensor] = None) -> torch.Tensor:
- if scale is not None:
- assert self.renormalize
- x = x * scale.view(-1, 1, 1)
- return x
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- assert x.dim() == 3
- length = x.shape[-1]
- x, scale = self.preprocess(x)
-
- emb = self.encoder(x)
- q_res = self.quantizer(emb, self.frame_rate)
- out = self.decoder(q_res.x)
-
- # remove extra padding added by the encoder and decoder
- assert out.shape[-1] >= length, (out.shape[-1], length)
- out = out[..., :length]
-
- q_res.x = self.postprocess(out, scale)
-
- return q_res
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- """Encode the given input tensor to quantized representation along with scale parameter.
-
- Args:
- x (torch.Tensor): Float tensor of shape [B, C, T]
-
- Returns:
- codes, scale (tp.Tuple[torch.Tensor, torch.Tensor]): Tuple composed of:
-                codes, an int tensor of shape [B, K, T], with K the number of codebooks used and T the number of timesteps.
-                scale, a float tensor containing the scale needed for audio renormalization.
- """
- assert x.dim() == 3
- x, scale = self.preprocess(x)
- emb = self.encoder(x)
- codes = self.quantizer.encode(emb)
- return codes, scale
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- """Decode the given codes to a reconstructed representation, using the scale to perform
- audio denormalization if needed.
-
- Args:
- codes (torch.Tensor): Int tensor of shape [B, K, T]
- scale (tp.Optional[torch.Tensor]): Float tensor containing the scale value.
-
- Returns:
- out (torch.Tensor): Float tensor of shape [B, C, T], the reconstructed audio.
- """
- emb = self.quantizer.decode(codes)
- out = self.decoder(emb)
- out = self.postprocess(out, scale)
- # out contains extra padding added by the encoder and decoder
- return out
-
-
-class FlattenedCompressionModel(CompressionModel):
- """Wraps a CompressionModel and flatten its codebooks, e.g.
- instead of returning [B, K, T], return [B, S, T * (K // S)] with
- S the number of codebooks per step, and `K // S` the number of 'virtual steps'
- for each real time step.
-
- Args:
- model (CompressionModel): compression model to wrap.
- codebooks_per_step (int): number of codebooks to keep per step,
- this must divide the number of codebooks provided by the wrapped model.
- extend_cardinality (bool): if True, and for instance if codebooks_per_step = 1,
- if each codebook has a cardinality N, then the first codebook will
- use the range [0, N - 1], and the second [N, 2 N - 1] etc.
- On decoding, this can lead to potentially invalid sequences.
- Any invalid entry will be silently remapped to the proper range
- with a modulo.
- """
- def __init__(self, model: CompressionModel, codebooks_per_step: int = 1,
- extend_cardinality: bool = True):
- super().__init__()
- self.model = model
- self.codebooks_per_step = codebooks_per_step
- self.extend_cardinality = extend_cardinality
-
- @property
- def total_codebooks(self):
- return self.model.total_codebooks
-
- @property
- def num_codebooks(self):
- """Active number of codebooks used by the quantizer.
-
- ..Warning:: this reports the number of codebooks after the flattening
- of the codebooks!
- """
- assert self.model.num_codebooks % self.codebooks_per_step == 0
- return self.codebooks_per_step
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
-
- ..Warning:: this sets the number of codebooks **before** the flattening
- of the codebooks.
- """
- assert n % self.codebooks_per_step == 0
- self.model.set_num_codebooks(n)
-
- @property
- def num_virtual_steps(self) -> int:
- """Return the number of virtual steps, e.g. one real step
- will be split into that many steps.
- """
- return self.model.num_codebooks // self.codebooks_per_step
-
- @property
- def frame_rate(self) -> int:
- return self.model.frame_rate * self.num_virtual_steps
-
- @property
- def sample_rate(self) -> int:
- return self.model.sample_rate
-
- @property
- def channels(self) -> int:
- return self.model.channels
-
- @property
- def cardinality(self):
- """Cardinality of each codebook.
- """
- if self.extend_cardinality:
- return self.model.cardinality * self.num_virtual_steps
- else:
- return self.model.cardinality
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- raise NotImplementedError("Not supported, use encode and decode.")
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- indices, scales = self.model.encode(x)
- B, K, T = indices.shape
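-        # Split the K codebooks into groups of codebooks_per_step; each group becomes one "virtual" timestep interleaved along the time axis below.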
- indices = rearrange(indices, 'b (k v) t -> b k t v', k=self.codebooks_per_step)
- if self.extend_cardinality:
- for virtual_step in range(1, self.num_virtual_steps):
- indices[..., virtual_step] += self.model.cardinality * virtual_step
- indices = rearrange(indices, 'b k t v -> b k (t v)')
- return (indices, scales)
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- B, K, T = codes.shape
- assert T % self.num_virtual_steps == 0
- codes = rearrange(codes, 'b k (t v) -> b (k v) t', v=self.num_virtual_steps)
- # We silently ignore potential errors from the LM when
- # using extend_cardinality.
- codes = codes % self.model.cardinality
- return self.model.decode(codes, scale)
diff --git a/spaces/fffiloni/ProPainter/hf-examples/blank.md b/spaces/fffiloni/ProPainter/hf-examples/blank.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/constants.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/constants.d.ts
deleted file mode 100644
index 208020dcbab4ebcd7955b2abcb7ae49185f5976e..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/constants.d.ts
+++ /dev/null
@@ -1,18 +0,0 @@
-/** @deprecated since v6.3.0 - use constants property exposed by the relevant module instead. */
-declare module 'constants' {
- import { constants as osConstants, SignalConstants } from 'node:os';
- import { constants as cryptoConstants } from 'node:crypto';
- import { constants as fsConstants } from 'node:fs';
-
- const exp: typeof osConstants.errno &
- typeof osConstants.priority &
- SignalConstants &
- typeof cryptoConstants &
- typeof fsConstants;
- export = exp;
-}
-
-declare module 'node:constants' {
- import constants = require('constants');
- export = constants;
-}
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-assign/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-assign/index.js
deleted file mode 100644
index 0930cf8890b9af66125c222c29d7bd75614ac6b4..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-assign/index.js
+++ /dev/null
@@ -1,90 +0,0 @@
-/*
-object-assign
-(c) Sindre Sorhus
-@license MIT
-*/
-
-'use strict';
-/* eslint-disable no-unused-vars */
-var getOwnPropertySymbols = Object.getOwnPropertySymbols;
-var hasOwnProperty = Object.prototype.hasOwnProperty;
-var propIsEnumerable = Object.prototype.propertyIsEnumerable;
-
-function toObject(val) {
- if (val === null || val === undefined) {
- throw new TypeError('Object.assign cannot be called with null or undefined');
- }
-
- return Object(val);
-}
-
-function shouldUseNative() {
- try {
- if (!Object.assign) {
- return false;
- }
-
- // Detect buggy property enumeration order in older V8 versions.
-
- // https://bugs.chromium.org/p/v8/issues/detail?id=4118
- var test1 = new String('abc'); // eslint-disable-line no-new-wrappers
- test1[5] = 'de';
- if (Object.getOwnPropertyNames(test1)[0] === '5') {
- return false;
- }
-
- // https://bugs.chromium.org/p/v8/issues/detail?id=3056
- var test2 = {};
- for (var i = 0; i < 10; i++) {
- test2['_' + String.fromCharCode(i)] = i;
- }
- var order2 = Object.getOwnPropertyNames(test2).map(function (n) {
- return test2[n];
- });
- if (order2.join('') !== '0123456789') {
- return false;
- }
-
- // https://bugs.chromium.org/p/v8/issues/detail?id=3056
- var test3 = {};
- 'abcdefghijklmnopqrst'.split('').forEach(function (letter) {
- test3[letter] = letter;
- });
- if (Object.keys(Object.assign({}, test3)).join('') !==
- 'abcdefghijklmnopqrst') {
- return false;
- }
-
- return true;
- } catch (err) {
- // We don't expect any of the above to throw, but better to be safe.
- return false;
- }
-}
-
-module.exports = shouldUseNative() ? Object.assign : function (target, source) {
- var from;
- var to = toObject(target);
- var symbols;
-
- for (var s = 1; s < arguments.length; s++) {
- from = Object(arguments[s]);
-
- for (var key in from) {
- if (hasOwnProperty.call(from, key)) {
- to[key] = from[key];
- }
- }
-
- if (getOwnPropertySymbols) {
- symbols = getOwnPropertySymbols(from);
- for (var i = 0; i < symbols.length; i++) {
- if (propIsEnumerable.call(from, symbols[i])) {
- to[symbols[i]] = from[symbols[i]];
- }
- }
- }
- }
-
- return to;
-};
diff --git a/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/nn/parallel/__init__.py b/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/nn/parallel/__init__.py
deleted file mode 100644
index 9b52f49cc0755562218a460483cbf02514ddd773..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/nn/parallel/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .data_parallel import UserScatteredDataParallel, user_scattered_collate, async_copy_to
diff --git a/spaces/fffiloni/train-dreambooth-lora-sdxl/app.py b/spaces/fffiloni/train-dreambooth-lora-sdxl/app.py
deleted file mode 100644
index de1c35b135078291c60a6560eda8b15e4c591209..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/train-dreambooth-lora-sdxl/app.py
+++ /dev/null
@@ -1,388 +0,0 @@
-import gradio as gr
-import torch
-import os
-import shutil
-import requests
-import subprocess
-from subprocess import getoutput
-from huggingface_hub import snapshot_download, HfApi, create_repo
-api = HfApi()
-
-hf_token = os.environ.get("HF_TOKEN_WITH_WRITE_PERMISSION")
-
-is_shared_ui = True if "fffiloni/train-dreambooth-lora-sdxl" in os.environ['SPACE_ID'] else False
-
-is_gpu_associated = torch.cuda.is_available()
-
-if is_gpu_associated:
- gpu_info = getoutput('nvidia-smi')
- if("A10G" in gpu_info):
- which_gpu = "A10G"
- elif("T4" in gpu_info):
- which_gpu = "T4"
- else:
- which_gpu = "CPU"
-
-def check_upload_or_no(value):
- if value is True:
- return gr.update(visible=True)
- else:
- return gr.update(visible=False)
-
-def load_images_to_dataset(images, dataset_name):
-
- if is_shared_ui:
- raise gr.Error("This Space only works in duplicated instances")
-
- if dataset_name == "":
- raise gr.Error("You forgot to name your new dataset. ")
-
- # Create the directory if it doesn't exist
- my_working_directory = f"my_working_directory_for_{dataset_name}"
- if not os.path.exists(my_working_directory):
- os.makedirs(my_working_directory)
-
- # Assuming 'images' is a list of image file paths
- for idx, image in enumerate(images):
- # Get the base file name (without path) from the original location
- image_name = os.path.basename(image.name)
-
- # Construct the destination path in the working directory
- destination_path = os.path.join(my_working_directory, image_name)
-
- # Copy the image from the original location to the working directory
- shutil.copy(image.name, destination_path)
-
- # Print the image name and its corresponding save path
- print(f"Image {idx + 1}: {image_name} copied to {destination_path}")
-
- path_to_folder = my_working_directory
- your_username = api.whoami(token=hf_token)["name"]
- repo_id = f"{your_username}/{dataset_name}"
- create_repo(repo_id=repo_id, repo_type="dataset", private=True, token=hf_token)
-
- api.upload_folder(
- folder_path=path_to_folder,
- repo_id=repo_id,
- repo_type="dataset",
- token=hf_token
- )
-
- return "Done, your dataset is ready and loaded for the training step!", repo_id
-
-def swap_hardware(hf_token, hardware="cpu-basic"):
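-    # Ask the Hugging Face API to switch this Space to the requested hardware flavor (defaults back to cpu-basic).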
- hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware"
- headers = { "authorization" : f"Bearer {hf_token}"}
- body = {'flavor': hardware}
- requests.post(hardware_url, json = body, headers=headers)
-
-def swap_sleep_time(hf_token,sleep_time):
- sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}/sleeptime"
- headers = { "authorization" : f"Bearer {hf_token}"}
- body = {'seconds':sleep_time}
- requests.post(sleep_time_url,json=body,headers=headers)
-
-def get_sleep_time(hf_token):
- sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}"
- headers = { "authorization" : f"Bearer {hf_token}"}
- response = requests.get(sleep_time_url,headers=headers)
- try:
- gcTimeout = response.json()['runtime']['gcTimeout']
- except:
- gcTimeout = None
- return gcTimeout
-
-def write_to_community(title, description,hf_token):
-
- api.create_discussion(repo_id=os.environ['SPACE_ID'], title=title, description=description,repo_type="space", token=hf_token)
-
-
-def set_accelerate_default_config():
- try:
- subprocess.run(["accelerate", "config", "default"], check=True)
- print("Accelerate default config set successfully!")
- except subprocess.CalledProcessError as e:
- print(f"An error occurred: {e}")
-
-def train_dreambooth_lora_sdxl(dataset_id, instance_data_dir, lora_trained_xl_folder, instance_prompt, max_train_steps, checkpoint_steps, remove_gpu):
-
- script_filename = "train_dreambooth_lora_sdxl.py" # Assuming it's in the same folder
-
- command = [
- "accelerate",
- "launch",
- script_filename, # Use the local script
- "--pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0",
- "--pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix",
- f"--dataset_id={dataset_id}",
- f"--instance_data_dir={instance_data_dir}",
- f"--output_dir={lora_trained_xl_folder}",
- "--mixed_precision=fp16",
- f"--instance_prompt={instance_prompt}",
- "--resolution=1024",
- "--train_batch_size=2",
- "--gradient_accumulation_steps=2",
- "--gradient_checkpointing",
- "--learning_rate=1e-4",
- "--lr_scheduler=constant",
- "--lr_warmup_steps=0",
- "--enable_xformers_memory_efficient_attention",
- "--mixed_precision=fp16",
- "--use_8bit_adam",
- f"--max_train_steps={max_train_steps}",
- f"--checkpointing_steps={checkpoint_steps}",
- "--seed=0",
- "--push_to_hub",
- f"--hub_token={hf_token}"
- ]
-
- try:
- subprocess.run(command, check=True)
- print("Training is finished!")
- if remove_gpu:
- swap_hardware(hf_token, "cpu-basic")
- else:
- swap_sleep_time(hf_token, 300)
- except subprocess.CalledProcessError as e:
- print(f"An error occurred: {e}")
-
- title="There was an error on during your training"
- description=f'''
- Unfortunately there was an error during training your {lora_trained_xl_folder} model.
- Please check it out below. Feel free to report this issue to [SD-XL Dreambooth LoRa Training](https://huggingface.co/spaces/fffiloni/train-dreambooth-lora-sdxl):
- ```
- {str(e)}
- ```
- '''
- if remove_gpu:
- swap_hardware(hf_token, "cpu-basic")
- else:
- swap_sleep_time(hf_token, 300)
- #write_to_community(title,description,hf_token)
-
-def main(dataset_id,
- lora_trained_xl_folder,
- instance_prompt,
- max_train_steps,
- checkpoint_steps,
- remove_gpu):
-
-
- if is_shared_ui:
- raise gr.Error("This Space only works in duplicated instances")
-
- if not is_gpu_associated:
- raise gr.Error("Please associate a T4 or A10G GPU for this Space")
-
- if dataset_id == "":
- raise gr.Error("You forgot to specify an image dataset")
-
- if instance_prompt == "":
- raise gr.Error("You forgot to specify a concept prompt")
-
- if lora_trained_xl_folder == "":
- raise gr.Error("You forgot to name the output folder for your model")
-
- sleep_time = get_sleep_time(hf_token)
- if sleep_time:
- swap_sleep_time(hf_token, -1)
-
- gr.Warning("If you did not check the `Remove GPU After training`, don't forget to remove the GPU attribution after you are done. ")
-
- dataset_repo = dataset_id
-
- # Automatically set local_dir based on the last part of dataset_repo
- repo_parts = dataset_repo.split("/")
- local_dir = f"./{repo_parts[-1]}" # Use the last part of the split
-
- # Check if the directory exists and create it if necessary
- if not os.path.exists(local_dir):
- os.makedirs(local_dir)
-
- gr.Info("Downloading dataset ...")
-
- snapshot_download(
- dataset_repo,
- local_dir=local_dir,
- repo_type="dataset",
- ignore_patterns=".gitattributes",
- token=hf_token
- )
-
- set_accelerate_default_config()
-
- gr.Info("Training begins ...")
-
- instance_data_dir = repo_parts[-1]
- train_dreambooth_lora_sdxl(dataset_id, instance_data_dir, lora_trained_xl_folder, instance_prompt, max_train_steps, checkpoint_steps, remove_gpu)
-
- your_username = api.whoami(token=hf_token)["name"]
- return f"Done, your trained model has been stored in your models library: {your_username}/{lora_trained_xl_folder}"
-
-css="""
-#col-container {max-width: 780px; margin-left: auto; margin-right: auto;}
-#upl-dataset-group {background-color: none!important;}
-
-div#warning-ready {
- background-color: #ecfdf5;
- padding: 0 10px 5px;
- margin: 20px 0;
-}
-div#warning-ready > .gr-prose > h2, div#warning-ready > .gr-prose > p {
- color: #057857!important;
-}
-
-div#warning-duplicate {
- background-color: #ebf5ff;
- padding: 0 10px 5px;
- margin: 20px 0;
-}
-
-div#warning-duplicate > .gr-prose > h2, div#warning-duplicate > .gr-prose > p {
- color: #0f4592!important;
-}
-
-div#warning-duplicate strong {
- color: #0f4592;
-}
-
-p.actions {
- display: flex;
- align-items: center;
- margin: 20px 0;
-}
-
-div#warning-duplicate .actions a {
- display: inline-block;
- margin-right: 10px;
-}
-
-div#warning-setgpu {
- background-color: #fff4eb;
- padding: 0 10px 5px;
- margin: 20px 0;
-}
-
-div#warning-setgpu > .gr-prose > h2, div#warning-setgpu > .gr-prose > p {
- color: #92220f!important;
-}
-
-div#warning-setgpu a, div#warning-setgpu b {
- color: #91230f;
-}
-
-div#warning-setgpu p.actions > a {
- display: inline-block;
- background: #1f1f23;
- border-radius: 40px;
- padding: 6px 24px;
- color: antiquewhite;
- text-decoration: none;
- font-weight: 600;
- font-size: 1.2em;
-}
-
-button#load-dataset-btn{
-min-height: 60px;
-}
-"""
-
-with gr.Blocks(css=css) as demo:
- with gr.Column(elem_id="col-container"):
- if is_shared_ui:
- top_description = gr.HTML(f'''
-
-
-                    Attention: this Space needs to be duplicated to work
-
- To make it work, duplicate the Space and run it on your own profile using a private GPU (T4-small or A10G-small).
- A T4 costs US$0.60/h, so it should cost < US$1 to train most models.
-
- ''', elem_id="warning-setgpu")
-
- gr.Markdown("# SD-XL Dreambooth LoRa Training UI 💭")
-
- upload_my_images = gr.Checkbox(label="Drop your training images ? (optional)", value=False)
- gr.Markdown("Use this step to upload your training images and create a new dataset. If you already have a dataset stored on your HF profile, you can skip this step, and provide your dataset ID in the training `Datased ID` input below.")
-
- with gr.Group(visible=False, elem_id="upl-dataset-group") as upload_group:
- with gr.Row():
- images = gr.File(file_types=["image"], label="Upload your images", file_count="multiple", interactive=True, visible=True)
- with gr.Column():
- new_dataset_name = gr.Textbox(label="Set new dataset name", placeholder="e.g.: my_awesome_dataset")
- dataset_status = gr.Textbox(label="dataset status")
- load_btn = gr.Button("Load images to new dataset", elem_id="load-dataset-btn")
-
- gr.Markdown("## Training ")
- gr.Markdown("You can use an existing image dataset, find a dataset example here: [https://huggingface.co/datasets/diffusers/dog-example](https://huggingface.co/datasets/diffusers/dog-example) ;)")
-
- with gr.Row():
- dataset_id = gr.Textbox(label="Dataset ID", info="use one of your previously uploaded image datasets on your HF profile", placeholder="diffusers/dog-example")
- instance_prompt = gr.Textbox(label="Concept prompt", info="concept prompt - use a unique, made up word to avoid collisions")
-
- with gr.Row():
- model_output_folder = gr.Textbox(label="Output model folder name", placeholder="lora-trained-xl-folder")
- max_train_steps = gr.Number(label="Max Training Steps", value=500, precision=0, step=10)
- checkpoint_steps = gr.Number(label="Checkpoints Steps", value=100, precision=0, step=10)
-
- remove_gpu = gr.Checkbox(label="Remove GPU After Training", value=True, info="If NOT enabled, don't forget to remove the GPU attribution after you are done.")
- train_button = gr.Button("Train !")
-
- train_status = gr.Textbox(label="Training status")
-
- upload_my_images.change(
- fn = check_upload_or_no,
- inputs =[upload_my_images],
- outputs = [upload_group]
- )
-
- load_btn.click(
- fn = load_images_to_dataset,
- inputs = [images, new_dataset_name],
- outputs = [dataset_status, dataset_id]
- )
-
- train_button.click(
- fn = main,
- inputs = [
- dataset_id,
- model_output_folder,
- instance_prompt,
- max_train_steps,
- checkpoint_steps,
- remove_gpu
- ],
- outputs = [train_status]
- )
-
-demo.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/flax-community/DietNerf-Demo/demo/__init__.py b/spaces/flax-community/DietNerf-Demo/demo/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/flax-community/SentenceSimplifier/About/contributors.md b/spaces/flax-community/SentenceSimplifier/About/contributors.md
deleted file mode 100644
index 6fd832c1838d8a3f39aa9e8f3005788bd5020b3e..0000000000000000000000000000000000000000
--- a/spaces/flax-community/SentenceSimplifier/About/contributors.md
+++ /dev/null
@@ -1,3 +0,0 @@
-## Contributors
-* [Bhadresh Savani](https://www.linkedin.com/in/bhadreshsavani)
-* [Rahul Dev](https://twitter.com/mlkorra)
diff --git a/spaces/flax-community/SinhalaLanguageDemos/README.md b/spaces/flax-community/SinhalaLanguageDemos/README.md
deleted file mode 100644
index 85a7d9dc751107bfa9a188ae0d85acfec1965300..0000000000000000000000000000000000000000
--- a/spaces/flax-community/SinhalaLanguageDemos/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Sinhala Language Demos
-emoji: 🤗
-colorFrom: yellow
-colorTo: green
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/fuckyoudeki/AutoGPT/autogpt/llm_utils.py b/spaces/fuckyoudeki/AutoGPT/autogpt/llm_utils.py
deleted file mode 100644
index 821820ffab07be2753cf385ff1de77820e4206ee..0000000000000000000000000000000000000000
--- a/spaces/fuckyoudeki/AutoGPT/autogpt/llm_utils.py
+++ /dev/null
@@ -1,172 +0,0 @@
-from __future__ import annotations
-
-import time
-
-import openai
-from colorama import Fore, Style
-from openai.error import APIError, RateLimitError
-
-from autogpt.config import Config
-from autogpt.logs import logger
-
-CFG = Config()
-
-openai.api_key = CFG.openai_api_key
-
-
-def call_ai_function(
- function: str, args: list, description: str, model: str | None = None
-) -> str:
- """Call an AI function
-
- This is a magic function that can do anything with no-code. See
- https://github.com/Torantulino/AI-Functions for more info.
-
- Args:
- function (str): The function to call
- args (list): The arguments to pass to the function
- description (str): The description of the function
- model (str, optional): The model to use. Defaults to None.
-
- Returns:
- str: The response from the function
- """
- if model is None:
- model = CFG.smart_llm_model
- # For each arg, if any are None, convert to "None":
- args = [str(arg) if arg is not None else "None" for arg in args]
- # parse args to comma separated string
- args = ", ".join(args)
- messages = [
- {
- "role": "system",
- "content": f"You are now the following python function: ```# {description}"
- f"\n{function}```\n\nOnly respond with your `return` value.",
- },
- {"role": "user", "content": args},
- ]
-
- return create_chat_completion(model=model, messages=messages, temperature=0)
-
-
-# Overly simple abstraction until we create something better
-# simple retry mechanism when getting a rate error or a bad gateway
-def create_chat_completion(
- messages: list, # type: ignore
- model: str | None = None,
- temperature: float = CFG.temperature,
- max_tokens: int | None = None,
-) -> str:
- """Create a chat completion using the OpenAI API
-
- Args:
- messages (list[dict[str, str]]): The messages to send to the chat completion
- model (str, optional): The model to use. Defaults to None.
- temperature (float, optional): The temperature to use. Defaults to 0.9.
- max_tokens (int, optional): The max tokens to use. Defaults to None.
-
- Returns:
- str: The response from the chat completion
- """
- response = None
- num_retries = 10
- warned_user = False
- if CFG.debug_mode:
- print(
- Fore.GREEN
- + f"Creating chat completion with model {model}, temperature {temperature},"
- f" max_tokens {max_tokens}" + Fore.RESET
- )
- for attempt in range(num_retries):
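-        # Exponential backoff: wait 4, 8, 16, ... seconds before the next attempt.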
- backoff = 2 ** (attempt + 2)
- try:
- if CFG.use_azure:
- response = openai.ChatCompletion.create(
- deployment_id=CFG.get_azure_deployment_id_for_model(model),
- model=model,
- messages=messages,
- temperature=temperature,
- max_tokens=max_tokens,
- )
- else:
- response = openai.ChatCompletion.create(
- model=model,
- messages=messages,
- temperature=temperature,
- max_tokens=max_tokens,
- )
- break
- except RateLimitError:
- if CFG.debug_mode:
- print(
- Fore.RED + "Error: ",
-                    "Reached rate limit, passing..." + Fore.RESET,
- )
- if not warned_user:
- logger.double_check(
- f"Please double check that you have setup a {Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. "
- + f"You can read more here: {Fore.CYAN}https://github.com/Significant-Gravitas/Auto-GPT#openai-api-keys-configuration{Fore.RESET}"
- )
- warned_user = True
- except APIError as e:
- if e.http_status == 502:
- pass
- else:
- raise
- if attempt == num_retries - 1:
- raise
- if CFG.debug_mode:
- print(
- Fore.RED + "Error: ",
- f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET,
- )
- time.sleep(backoff)
- if response is None:
- logger.typewriter_log(
- "FAILED TO GET RESPONSE FROM OPENAI",
- Fore.RED,
- "Auto-GPT has failed to get a response from OpenAI's services. "
- + f"Try running Auto-GPT again, and if the problem the persists try running it with `{Fore.CYAN}--debug{Fore.RESET}`.",
- )
- logger.double_check()
- if CFG.debug_mode:
- raise RuntimeError(f"Failed to get response after {num_retries} retries")
- else:
- quit(1)
-
- return response.choices[0].message["content"]
-
-
-def create_embedding_with_ada(text) -> list:
- """Create an embedding with text-ada-002 using the OpenAI SDK"""
- num_retries = 10
- for attempt in range(num_retries):
- backoff = 2 ** (attempt + 2)
- try:
- if CFG.use_azure:
- return openai.Embedding.create(
- input=[text],
- engine=CFG.get_azure_deployment_id_for_model(
- "text-embedding-ada-002"
- ),
- )["data"][0]["embedding"]
- else:
- return openai.Embedding.create(
- input=[text], model="text-embedding-ada-002"
- )["data"][0]["embedding"]
- except RateLimitError:
- pass
- except APIError as e:
- if e.http_status == 502:
- pass
- else:
- raise
- if attempt == num_retries - 1:
- raise
- if CFG.debug_mode:
- print(
- Fore.RED + "Error: ",
- f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET,
- )
- time.sleep(backoff)
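For orientation, here is a minimal usage sketch of the two helpers above. The function body, arguments, and model name are illustrative only, and a configured `Config` with a valid OpenAI API key is assumed (the import path is assumed to be `autogpt.llm_utils`):

```python
from autogpt.llm_utils import call_ai_function, create_chat_completion

# Ask the LLM to act as a Python function and return only its `return` value.
result = call_ai_function(
    function="def add(a: int, b: int) -> int:",
    args=[2, 3],
    description="Adds two integers and returns their sum.",
)
print(result)  # expected to be the string "5"

# The lower-level wrapper with the same retry/backoff behaviour.
reply = create_chat_completion(
    messages=[{"role": "user", "content": "Reply with a single-word greeting."}],
    model="gpt-3.5-turbo",
    temperature=0,
)
print(reply)
```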
diff --git a/spaces/gagan3012/project-code-py/app.py b/spaces/gagan3012/project-code-py/app.py
deleted file mode 100644
index 66209289b754805730e1492baf6c345d5af6e786..0000000000000000000000000000000000000000
--- a/spaces/gagan3012/project-code-py/app.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import torch
-import streamlit as st
-from transformers import GPT2Tokenizer, GPT2LMHeadModel
-
-st.set_page_config(
- page_title="AI Leetcode",
- layout="wide",
- initial_sidebar_state="expanded", )
-
-
-
-@st.cache(suppress_st_warning=True,ttl=1000)
-def modelgpt(sequence, temp, top_p):
- tokenizer = GPT2Tokenizer.from_pretrained("gagan3012/project-code-py-small")
- model = GPT2LMHeadModel.from_pretrained("gagan3012/project-code-py-small")
- inputs = tokenizer.encode(sequence, return_tensors='pt')
- outputs = model.generate(inputs, max_length=1024, do_sample=True, temperature=temp, top_p=top_p)
- text = tokenizer.decode(outputs[0], skip_special_tokens=True)
- return text
-
-def display():
- st.write('# Using AI to Generate LeetCode solutions')
- st.sidebar.markdown(
- '''
- # Project-code-py
-
- ## This is a demo of a text generation model trained with GPT-2 to generate LeetCode Answers in Python
- *For additional questions and inquiries, please contact **Gagan Bhatia** via [LinkedIn](
- https://www.linkedin.com/in/gbhatia30/) or [Github](https://github.com/gagan3012).*
- ''')
- st.sidebar.write('## Options:')
-
- tokens = st.sidebar.slider(label='Number of Tokens', min_value=1, max_value=15, value=3, step=1)
- samples = st.sidebar.slider(label='Number of Samples', min_value=1, max_value=9, value=9, step=1)
-    top_p = st.sidebar.slider(label='Top p', min_value=0.0, max_value=1.0, value=0.95, step=0.05)
- temp = st.sidebar.slider(label='Temperature', min_value=0.1, max_value=1.0, value=1.0, step=0.05)
- st.sidebar.markdown(
- '''
- `Number of Tokens:` number of tokens in generated text\n
- `Number of Samples:` number of samples to return total\n
- `Temperature:` Float value controlling randomness in boltzmann distribution. Lower temperature results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Higher temperature results in more random completions.\n
-    `Top p:` Float value controlling nucleus sampling. Only the smallest set of tokens whose cumulative probability exceeds this value is considered at each step; 1.0 disables the restriction, while lower values give more focused completions.
- ''')
-
- st.write('## Enter a Leetcode Question or Starting code:')
- sequence = st.text_area("", value="Given the root of a binary tree, return its maximum depth. A binary tree's maximum depth is the number of nodes along the longest path from the root node down to the farthest leaf node.",height=150)
- if st.button("Get Answer"):
- text = modelgpt(sequence, temp, top_p)
- st.code(text.encode().decode('unicode_escape'), language='python')
-
-
-if __name__ == '__main__':
- display()
\ No newline at end of file
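The Streamlit UI above is a thin wrapper around `modelgpt`; here is a minimal sketch of the same generation step outside Streamlit (the model repository name is taken from the app itself, while the prompt and sampling values are illustrative):

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gagan3012/project-code-py-small")
model = GPT2LMHeadModel.from_pretrained("gagan3012/project-code-py-small")

prompt = "Write a Python function that reverses a singly linked list."
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(
    inputs, max_length=256, do_sample=True, temperature=0.8, top_p=0.95
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```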
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/utils/se_layer.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/utils/se_layer.py
deleted file mode 100644
index 083bd7d1ccee909c900c7aed2cc928bf14727f3e..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/utils/se_layer.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import annotator.uniformer.mmcv as mmcv
-import torch.nn as nn
-from annotator.uniformer.mmcv.cnn import ConvModule
-
-from .make_divisible import make_divisible
-
-
-class SELayer(nn.Module):
- """Squeeze-and-Excitation Module.
-
- Args:
- channels (int): The input (and output) channels of the SE layer.
- ratio (int): Squeeze ratio in SELayer, the intermediate channel will be
- ``int(channels/ratio)``. Default: 16.
- conv_cfg (None or dict): Config dict for convolution layer.
- Default: None, which means using conv2d.
- act_cfg (dict or Sequence[dict]): Config dict for activation layer.
- If act_cfg is a dict, two activation layers will be configured
- by this dict. If act_cfg is a sequence of dicts, the first
- activation layer will be configured by the first dict and the
- second activation layer will be configured by the second dict.
- Default: (dict(type='ReLU'), dict(type='HSigmoid', bias=3.0,
- divisor=6.0)).
- """
-
- def __init__(self,
- channels,
- ratio=16,
- conv_cfg=None,
- act_cfg=(dict(type='ReLU'),
- dict(type='HSigmoid', bias=3.0, divisor=6.0))):
- super(SELayer, self).__init__()
- if isinstance(act_cfg, dict):
- act_cfg = (act_cfg, act_cfg)
- assert len(act_cfg) == 2
- assert mmcv.is_tuple_of(act_cfg, dict)
- self.global_avgpool = nn.AdaptiveAvgPool2d(1)
- self.conv1 = ConvModule(
- in_channels=channels,
- out_channels=make_divisible(channels // ratio, 8),
- kernel_size=1,
- stride=1,
- conv_cfg=conv_cfg,
- act_cfg=act_cfg[0])
- self.conv2 = ConvModule(
- in_channels=make_divisible(channels // ratio, 8),
- out_channels=channels,
- kernel_size=1,
- stride=1,
- conv_cfg=conv_cfg,
- act_cfg=act_cfg[1])
-
- def forward(self, x):
- out = self.global_avgpool(x)
- out = self.conv1(out)
- out = self.conv2(out)
- return x * out
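A quick shape sanity check for the layer above, run from the same module so that `SELayer` and its mmcv dependencies are importable:

```python
import torch

# SE recalibration is channel-wise, so the output shape matches the input shape.
layer = SELayer(channels=64, ratio=16)
x = torch.randn(2, 64, 32, 32)
out = layer(x)
assert out.shape == x.shape  # torch.Size([2, 64, 32, 32])
```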
diff --git a/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/glcontext.py b/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/glcontext.py
deleted file mode 100644
index 881df0feca38678d6c075ef85ae65c12875b6b48..0000000000000000000000000000000000000000
--- a/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/glcontext.py
+++ /dev/null
@@ -1,142 +0,0 @@
-"""Headless GPU-accelerated OpenGL context creation on Google Colaboratory.
-
-Typical usage:
-
-  # Optional PyOpenGL configuration can be done here.
- # import OpenGL
- # OpenGL.ERROR_CHECKING = True
-
- # 'glcontext' must be imported before any OpenGL.* API.
- from lucid.misc.gl.glcontext import create_opengl_context
-
- # Now it's safe to import OpenGL and EGL functions
- import OpenGL.GL as gl
-
- # create_opengl_context() creates a GL context that is attached to an
- # offscreen surface of the specified size. Note that rendering to buffers
- # of other sizes and formats is still possible with OpenGL Framebuffers.
- #
- # Users are expected to directly use the EGL API in case more advanced
- # context management is required.
- width, height = 640, 480
- create_opengl_context((width, height))
-
- # OpenGL context is available here.
-
-"""
-
-from __future__ import print_function
-
-# pylint: disable=unused-import,g-import-not-at-top,g-statement-before-imports
-
-try:
- import OpenGL
-except:
- print('This module depends on PyOpenGL.')
- print('Please run "\033[1m!pip install -q pyopengl\033[0m" '
- 'prior importing this module.')
- raise
-
-import ctypes
-from ctypes import pointer, util
-import os
-
-os.environ['PYOPENGL_PLATFORM'] = 'egl'
-
-# OpenGL loading workaround.
-#
-# * PyOpenGL tries to load libGL, but we need libOpenGL, see [1,2].
-# This could have been solved by a symlink libGL->libOpenGL, but:
-#
-# * Python 2.7 can't find libGL and libEGL due to a bug (see [3])
-#   in ctypes.util, which was only fixed in Python 3.6.
-#
-# So, the only solution I've found is to monkeypatch ctypes.util
-# [1] https://devblogs.nvidia.com/egl-eye-opengl-visualization-without-x-server/
-# [2] https://devblogs.nvidia.com/linking-opengl-server-side-rendering/
-# [3] https://bugs.python.org/issue9998
-_find_library_old = ctypes.util.find_library
-try:
-
- def _find_library_new(name):
- return {
- 'GL': 'libOpenGL.so',
- 'EGL': 'libEGL.so',
- }.get(name, _find_library_old(name))
- util.find_library = _find_library_new
- import OpenGL.GL as gl
- import OpenGL.EGL as egl
- from OpenGL import error
- from OpenGL.EGL.EXT.device_base import egl_get_devices
- from OpenGL.raw.EGL.EXT.platform_device import EGL_PLATFORM_DEVICE_EXT
-except:
- print('Unable to load OpenGL libraries. '
- 'Make sure you use GPU-enabled backend.')
- print('Press "Runtime->Change runtime type" and set '
- '"Hardware accelerator" to GPU.')
- raise
-finally:
- util.find_library = _find_library_old
-
-def create_initialized_headless_egl_display():
- """Creates an initialized EGL display directly on a device."""
- for device in egl_get_devices():
- display = egl.eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, device, None)
-
- if display != egl.EGL_NO_DISPLAY and egl.eglGetError() == egl.EGL_SUCCESS:
- # `eglInitialize` may or may not raise an exception on failure depending
- # on how PyOpenGL is configured. We therefore catch a `GLError` and also
- # manually check the output of `eglGetError()` here.
- try:
- initialized = egl.eglInitialize(display, None, None)
- except error.GLError:
- pass
- else:
- if initialized == egl.EGL_TRUE and egl.eglGetError() == egl.EGL_SUCCESS:
- return display
- return egl.EGL_NO_DISPLAY
-
-def create_opengl_context(surface_size=(640, 480)):
- """Create offscreen OpenGL context and make it current.
-
- Users are expected to directly use EGL API in case more advanced
- context management is required.
-
- Args:
- surface_size: (width, height), size of the offscreen rendering surface.
- """
- egl_display = create_initialized_headless_egl_display()
- if egl_display == egl.EGL_NO_DISPLAY:
- raise ImportError('Cannot initialize a headless EGL display.')
-
- major, minor = egl.EGLint(), egl.EGLint()
- egl.eglInitialize(egl_display, pointer(major), pointer(minor))
-
- config_attribs = [
- egl.EGL_SURFACE_TYPE, egl.EGL_PBUFFER_BIT, egl.EGL_BLUE_SIZE, 8,
- egl.EGL_GREEN_SIZE, 8, egl.EGL_RED_SIZE, 8, egl.EGL_DEPTH_SIZE, 24,
- egl.EGL_RENDERABLE_TYPE, egl.EGL_OPENGL_BIT, egl.EGL_NONE
- ]
- config_attribs = (egl.EGLint * len(config_attribs))(*config_attribs)
-
- num_configs = egl.EGLint()
- egl_cfg = egl.EGLConfig()
- egl.eglChooseConfig(egl_display, config_attribs, pointer(egl_cfg), 1,
- pointer(num_configs))
-
- width, height = surface_size
- pbuffer_attribs = [
- egl.EGL_WIDTH,
- width,
- egl.EGL_HEIGHT,
- height,
- egl.EGL_NONE,
- ]
- pbuffer_attribs = (egl.EGLint * len(pbuffer_attribs))(*pbuffer_attribs)
- egl_surf = egl.eglCreatePbufferSurface(egl_display, egl_cfg, pbuffer_attribs)
-
- egl.eglBindAPI(egl.EGL_OPENGL_API)
-
- egl_context = egl.eglCreateContext(egl_display, egl_cfg, egl.EGL_NO_CONTEXT,
- None)
- egl.eglMakeCurrent(egl_display, egl_surf, egl_surf, egl_context)
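A minimal usage sketch on a headless GPU machine, following the module docstring above; the import path is an assumption (it depends on where this file sits in the project), and EGL drivers plus PyOpenGL are assumed to be installed:

```python
from glcontext import create_opengl_context  # hypothetical import path

# Create the offscreen context first; only then is it safe to call OpenGL.
create_opengl_context((640, 480))

import OpenGL.GL as gl
print(gl.glGetString(gl.GL_VERSION))  # e.g. b'4.6.0 NVIDIA ...'
```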
diff --git a/spaces/guetLzy/Real-ESRGAN-Demo/docs/Training_CN.md b/spaces/guetLzy/Real-ESRGAN-Demo/docs/Training_CN.md
deleted file mode 100644
index dabc3c5d97e134a2d551157c2dd03a629ec661bc..0000000000000000000000000000000000000000
--- a/spaces/guetLzy/Real-ESRGAN-Demo/docs/Training_CN.md
+++ /dev/null
@@ -1,271 +0,0 @@
-# :computer: 如何训练/微调 Real-ESRGAN
-
-- [训练 Real-ESRGAN](#训练-real-esrgan)
- - [概述](#概述)
- - [准备数据集](#准备数据集)
- - [训练 Real-ESRNet 模型](#训练-real-esrnet-模型)
- - [训练 Real-ESRGAN 模型](#训练-real-esrgan-模型)
-- [用自己的数据集微调 Real-ESRGAN](#用自己的数据集微调-real-esrgan)
- - [动态生成降级图像](#动态生成降级图像)
- - [使用已配对的数据](#使用已配对的数据)
-
-[English](Training.md) **|** [简体中文](Training_CN.md)
-
-## 训练 Real-ESRGAN
-
-### 概述
-
-训练分为两个步骤。除了 loss 函数外,这两个步骤拥有相同数据合成以及训练的一条龙流程。具体点说:
-
-1. 首先使用 L1 loss 训练 Real-ESRNet 模型,其中 L1 loss 来自预先训练的 ESRGAN 模型。
-
-2. 然后我们将 Real-ESRNet 模型作为生成器初始化,结合L1 loss、感知 loss、GAN loss 三者的参数对 Real-ESRGAN 进行训练。
-
-### 准备数据集
-
-我们使用 DF2K ( DIV2K 和 Flickr2K ) + OST 数据集进行训练。只需要HR图像!
-下面是网站链接:
-1. DIV2K: http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip
-2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar
-3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip
-
-以下是数据的准备步骤。
-
-#### 第1步:【可选】生成多尺寸图片
-
-针对 DF2K 数据集,我们使用多尺寸缩放策略,*换言之*,我们对 HR 图像进行下采样,就能获得多尺寸的标准参考(Ground-Truth)图像。
-您可以使用这个 [scripts/generate_multiscale_DF2K.py](scripts/generate_multiscale_DF2K.py) 脚本快速生成多尺寸的图像。
-注意:如果您只想简单试试,那么可以跳过此步骤。
-
-```bash
-python scripts/generate_multiscale_DF2K.py --input datasets/DF2K/DF2K_HR --output datasets/DF2K/DF2K_multiscale
-```
-
-#### 第2步:【可选】裁切为子图像
-
-我们可以将 DF2K 图像裁切为子图像,以加快 IO 和处理速度。
-如果你的 IO 够好或储存空间有限,那么此步骤是可选的。
-
-您可以使用脚本 [scripts/extract_subimages.py](scripts/extract_subimages.py)。这是使用示例:
-
-```bash
- python scripts/extract_subimages.py --input datasets/DF2K/DF2K_multiscale --output datasets/DF2K/DF2K_multiscale_sub --crop_size 400 --step 200
-```
-
-#### 第3步:准备元信息 txt
-
-您需要准备一个包含图像路径的 txt 文件。下面是 `meta_info_DF2Kmultiscale+OST_sub.txt` 中的部分展示(由于各个用户可能有截然不同的子图像划分,这个文件不适合你的需求,你得准备自己的 txt 文件):
-
-```txt
-DF2K_HR_sub/000001_s001.png
-DF2K_HR_sub/000001_s002.png
-DF2K_HR_sub/000001_s003.png
-...
-```
-
-你可以使用该脚本 [scripts/generate_meta_info.py](scripts/generate_meta_info.py) 生成包含图像路径的 txt 文件。
-你还可以合并多个文件夹的图像路径到一个元信息(meta_info)txt。这是使用示例:
-
-```bash
- python scripts/generate_meta_info.py --input datasets/DF2K/DF2K_HR, datasets/DF2K/DF2K_multiscale --root datasets/DF2K, datasets/DF2K --meta_info datasets/DF2K/meta_info/meta_info_DF2Kmultiscale.txt
-```
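As a rough illustration of this step, the meta-info txt is just one image path per line, relative to the dataset root. A minimal Python sketch is below; the directory names are placeholders and the repository's `scripts/generate_meta_info.py` remains the recommended tool:

```python
import os

root = "datasets/DF2K"
folder = "DF2K_HR_sub"
os.makedirs(os.path.join(root, "meta_info"), exist_ok=True)

# Write one relative image path per line, matching the format shown above.
with open(os.path.join(root, "meta_info", "meta_info_DF2K_sub.txt"), "w") as f:
    for name in sorted(os.listdir(os.path.join(root, folder))):
        if name.endswith(".png"):
            f.write(f"{folder}/{name}\n")
```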
-
-### 训练 Real-ESRNet 模型
-
-1. 下载预先训练的模型 [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth),放到 `experiments/pretrained_models`目录下。
- ```bash
- wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth -P experiments/pretrained_models
- ```
-2. 相应地修改选项文件 `options/train_realesrnet_x4plus.yml` 中的内容:
- ```yml
- train:
- name: DF2K+OST
- type: RealESRGANDataset
- dataroot_gt: datasets/DF2K # 修改为你的数据集文件夹根目录
- meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # 修改为你自己生成的元信息txt
- io_backend:
- type: disk
- ```
-3. 如果你想在训练过程中执行验证,就取消注释这些内容并进行相应的修改:
- ```yml
- # 取消注释这些以进行验证
- # val:
- # name: validation
- # type: PairedImageDataset
- # dataroot_gt: path_to_gt
- # dataroot_lq: path_to_lq
- # io_backend:
- # type: disk
-
- ...
-
- # 取消注释这些以进行验证
- # 验证设置
- # val:
- # val_freq: !!float 5e3
- # save_img: True
-
- # metrics:
- # psnr: # 指标名称,可以是任意的
- # type: calculate_psnr
- # crop_border: 4
- # test_y_channel: false
- ```
-4. 正式训练之前,你可以用 `--debug` 模式检查是否正常运行。我们用了4个GPU进行训练:
- ```bash
- CUDA_VISIBLE_DEVICES=0,1,2,3 \
- python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug
- ```
-
- 用 **1个GPU** 训练的 debug 模式示例:
- ```bash
- python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --debug
- ```
-5. 正式训练开始。我们用了4个GPU进行训练。还可以使用参数 `--auto_resume` 在必要时自动恢复训练。
- ```bash
- CUDA_VISIBLE_DEVICES=0,1,2,3 \
- python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume
- ```
-
- 用 **1个GPU** 训练:
- ```bash
- python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --auto_resume
- ```
-
-### 训练 Real-ESRGAN 模型
-
-1. 训练 Real-ESRNet 模型后,您得到了这个 `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth` 文件。如果需要指定预训练路径到其他文件,请修改选项文件 `train_realesrgan_x4plus.yml` 中 `pretrain_network_g` 的值。
-1. 修改选项文件 `train_realesrgan_x4plus.yml` 的内容。大多数修改与上节提到的类似。
-1. 正式训练之前,你可以以 `--debug` 模式检查是否正常运行。我们使用了4个GPU进行训练:
- ```bash
- CUDA_VISIBLE_DEVICES=0,1,2,3 \
- python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug
- ```
-
- 用 **1个GPU** 训练的 debug 模式示例:
- ```bash
- python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --debug
- ```
-1. 正式训练开始。我们使用4个GPU进行训练。还可以使用参数 `--auto_resume` 在必要时自动恢复训练。
- ```bash
- CUDA_VISIBLE_DEVICES=0,1,2,3 \
- python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume
- ```
-
- 用 **1个GPU** 训练:
- ```bash
- python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --auto_resume
- ```
-
-## 用自己的数据集微调 Real-ESRGAN
-
-你可以用自己的数据集微调 Real-ESRGAN。一般地,微调(Fine-Tune)程序可以分为两种类型:
-
-1. [动态生成降级图像](#动态生成降级图像)
-2. [使用**已配对**的数据](#使用已配对的数据)
-
-### 动态生成降级图像
-
-只需要高分辨率图像。在训练过程中,使用 Real-ESRGAN 描述的降级模型生成低质量图像。
-
-**1. 准备数据集**
-
-完整信息请参见[本节](#准备数据集)。
-
-**2. 下载预训练模型**
-
-下载预先训练的模型到 `experiments/pretrained_models` 目录下。
-
-- *RealESRGAN_x4plus.pth*:
- ```bash
- wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
- ```
-
-- *RealESRGAN_x4plus_netD.pth*:
- ```bash
- wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth -P experiments/pretrained_models
- ```
-
-**3. 微调**
-
-修改选项文件 [options/finetune_realesrgan_x4plus.yml](options/finetune_realesrgan_x4plus.yml) ,特别是 `datasets` 部分:
-
-```yml
-train:
- name: DF2K+OST
- type: RealESRGANDataset
- dataroot_gt: datasets/DF2K # 修改为你的数据集文件夹根目录
- meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # 修改为你自己生成的元信息txt
- io_backend:
- type: disk
-```
-
-我们使用4个GPU进行训练。还可以使用参数 `--auto_resume` 在必要时自动恢复训练。
-
-```bash
-CUDA_VISIBLE_DEVICES=0,1,2,3 \
-python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --launcher pytorch --auto_resume
-```
-
-用 **1个GPU** 训练:
-```bash
-python realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --auto_resume
-```
-
-### 使用已配对的数据
-
-你还可以用自己已经配对的数据微调 RealESRGAN。这个过程更类似于微调 ESRGAN。
-
-**1. 准备数据集**
-
-假设你已经有两个文件夹(folder):
-
-- **gt folder**(标准参考,高分辨率图像):*datasets/DF2K/DIV2K_train_HR_sub*
-- **lq folder**(低质量,低分辨率图像):*datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub*
-
-然后,您可以使用脚本 [scripts/generate_meta_info_pairdata.py](scripts/generate_meta_info_pairdata.py) 生成元信息(meta_info)txt 文件。
-
-```bash
-python scripts/generate_meta_info_pairdata.py --input datasets/DF2K/DIV2K_train_HR_sub datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub --meta_info datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt
-```
-
-**2. 下载预训练模型**
-
-下载预先训练的模型到 `experiments/pretrained_models` 目录下。
-
-- *RealESRGAN_x4plus.pth*:
- ```bash
- wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
- ```
-
-- *RealESRGAN_x4plus_netD.pth*:
- ```bash
- wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth -P experiments/pretrained_models
- ```
-
-**3. 微调**
-
-修改选项文件 [options/finetune_realesrgan_x4plus_pairdata.yml](options/finetune_realesrgan_x4plus_pairdata.yml) ,特别是 `datasets` 部分:
-
-```yml
-train:
- name: DIV2K
- type: RealESRGANPairedDataset
- dataroot_gt: datasets/DF2K # 修改为你的 gt folder 文件夹根目录
- dataroot_lq: datasets/DF2K # 修改为你的 lq folder 文件夹根目录
- meta_info: datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt # 修改为你自己生成的元信息txt
- io_backend:
- type: disk
-```
-
-我们使用4个GPU进行训练。还可以使用参数 `--auto_resume` 在必要时自动恢复训练。
-
-```bash
-CUDA_VISIBLE_DEVICES=0,1,2,3 \
-python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --launcher pytorch --auto_resume
-```
-
-用 **1个GPU** 训练:
-```bash
-python realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --auto_resume
-```
diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/encoders/model_irse.py b/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/encoders/model_irse.py
deleted file mode 100644
index 976ce2c61104efdc6b0015d895830346dd01bc10..0000000000000000000000000000000000000000
--- a/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/encoders/model_irse.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module
-from encoder4editing.models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm
-
-"""
-Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch)
-"""
-
-
-class Backbone(Module):
- def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True):
- super(Backbone, self).__init__()
- assert input_size in [112, 224], "input_size should be 112 or 224"
- assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152"
- assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se"
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- if input_size == 112:
- self.output_layer = Sequential(BatchNorm2d(512),
- Dropout(drop_ratio),
- Flatten(),
- Linear(512 * 7 * 7, 512),
- BatchNorm1d(512, affine=affine))
- else:
- self.output_layer = Sequential(BatchNorm2d(512),
- Dropout(drop_ratio),
- Flatten(),
- Linear(512 * 14 * 14, 512),
- BatchNorm1d(512, affine=affine))
-
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- def forward(self, x):
- x = self.input_layer(x)
- x = self.body(x)
- x = self.output_layer(x)
- return l2_norm(x)
-
-
-def IR_50(input_size):
- """Constructs a ir-50 model."""
- model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_101(input_size):
- """Constructs a ir-101 model."""
- model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_152(input_size):
- """Constructs a ir-152 model."""
- model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_50(input_size):
- """Constructs a ir_se-50 model."""
- model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_101(input_size):
- """Constructs a ir_se-101 model."""
- model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_152(input_size):
- """Constructs a ir_se-152 model."""
- model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
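A quick shape check for the backbones above (weights are randomly initialized here; the `encoder4editing` helpers imported at the top are assumed to be available as in this repo):

```python
import torch

# A 112x112 face crop maps to a 512-d, L2-normalized embedding.
model = IR_SE_50(input_size=112).eval()
with torch.no_grad():
    emb = model(torch.randn(1, 3, 112, 112))
print(emb.shape)  # torch.Size([1, 512])
```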
diff --git a/spaces/h2oai/wave-tour/examples/plot_interval_annotation.py b/spaces/h2oai/wave-tour/examples/plot_interval_annotation.py
deleted file mode 100644
index bcad04c41ba766a86c2e163a2f1cd7ffa07cfd64..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/plot_interval_annotation.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Plot / Interval / Annotation
-# Add annotations to a column #plot. #annotation #interval
-# ---
-from h2o_wave import site, data, ui
-
-page = site['/demo']
-
-page.add('example', ui.plot_card(
- box='1 1 4 5',
- title='Interval - annotation',
- data=data('profession salary', 5, rows=[
- ('medicine', 23000),
- ('fire fighting', 18000),
- ('pedagogy', 24000),
- ('psychology', 22500),
- ('computer science', 36000),
- ]),
- plot=ui.plot([
- ui.mark(type='interval', x='=profession', y='=salary', y_min=0),
- ui.mark(x='psychology', y=32000, label='point'),
- ui.mark(x='pedagogy', label='vertical line'),
- ui.mark(y=23000, label='horizontal line'),
- ui.mark(x='fire fighting', x0='medicine', label='vertical region'),
- ui.mark(y=35000, y0=30000, label='horizontal region')
- ])
-))
-
-page.save()
diff --git a/spaces/hamacojr/CAT-Seg/demo/demo_visual_gt.py b/spaces/hamacojr/CAT-Seg/demo/demo_visual_gt.py
deleted file mode 100644
index e828860ae5982fba62319b41c42a7311880539a9..0000000000000000000000000000000000000000
--- a/spaces/hamacojr/CAT-Seg/demo/demo_visual_gt.py
+++ /dev/null
@@ -1,210 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Bowen Cheng from: https://github.com/facebookresearch/detectron2/blob/master/demo/demo.py
-import argparse
-import glob
-import multiprocessing as mp
-import os
-
-# fmt: off
-import sys
-sys.path.insert(1, os.path.join(sys.path[0], '..'))
-# fmt: on
-
-import tempfile
-import time
-import warnings
-
-import cv2
-import numpy as np
-import tqdm
-
-from detectron2.config import get_cfg
-from detectron2.data.detection_utils import read_image
-from detectron2.projects.deeplab import add_deeplab_config
-from detectron2.utils.logger import setup_logger
-
-from mask_former import add_mask_former_config
-# from predictor import VisualizationDemo
-from visualizer import VisualizationGt
-from PIL import Image
-
-# constants
-WINDOW_NAME = "MaskFormer demo"
-
-
-def setup_cfg(args):
- # load config from file and command-line arguments
- cfg = get_cfg()
- add_deeplab_config(cfg)
- add_mask_former_config(cfg)
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- cfg.freeze()
- return cfg
-
-
-def get_parser():
- parser = argparse.ArgumentParser(description="Detectron2 demo for builtin configs")
- parser.add_argument(
- "--config-file",
- default="configs/ade20k-150/maskformer_R50_bs16_160k.yaml",
- metavar="FILE",
- help="path to config file",
- )
- parser.add_argument("--webcam", action="store_true", help="Take inputs from webcam.")
- parser.add_argument("--video-input", help="Path to video file.")
- parser.add_argument(
- "--input",
- nargs="+",
- help="A list of space separated input images; "
- "or a single glob pattern such as 'directory/*.jpg'",
- )
- # parser.add_argument(
- # "--gt",
- # nargs="+",
- # help="A list of space seperated ground truth images;"
- # "or a single glob pattern such as 'directory/*.png'"
- # )
- parser.add_argument(
- "--gt",
- # type="str",
- help="ground truth path of segmentation"
- )
- parser.add_argument(
- "--output",
- help="A file or directory to save output visualizations. "
- "If not given, will show output in an OpenCV window.",
- )
-
- parser.add_argument(
- "--confidence-threshold",
- type=float,
- default=0.5,
- help="Minimum score for instance predictions to be shown",
- )
- parser.add_argument(
- "--opts",
- help="Modify config options using the command-line 'KEY VALUE' pairs",
- default=[],
- nargs=argparse.REMAINDER,
- )
- return parser
-
-
-def test_opencv_video_format(codec, file_ext):
- with tempfile.TemporaryDirectory(prefix="video_format_test") as dir:
- filename = os.path.join(dir, "test_file" + file_ext)
- writer = cv2.VideoWriter(
- filename=filename,
- fourcc=cv2.VideoWriter_fourcc(*codec),
- fps=float(30),
- frameSize=(10, 10),
- isColor=True,
- )
- [writer.write(np.zeros((10, 10, 3), np.uint8)) for _ in range(30)]
- writer.release()
- if os.path.isfile(filename):
- return True
- return False
-
-
-if __name__ == "__main__":
- mp.set_start_method("spawn", force=True)
- args = get_parser().parse_args()
- setup_logger(name="fvcore")
- logger = setup_logger()
- logger.info("Arguments: " + str(args))
-
- cfg = setup_cfg(args)
-
- demo = VisualizationGt(cfg)
- gt_path = args.gt
- if args.input:
- if len(args.input) == 1:
- args.input = glob.glob(os.path.expanduser(args.input[0]))
- assert args.input, "The input path(s) was not found"
- for path in tqdm.tqdm(args.input, disable=not args.output):
- # use PIL, to be consistent with evaluation
- img = read_image(path, format="BGR")
- start_time = time.time()
- predictions = {}
- gt_file = os.path.join(gt_path, os.path.splitext(os.path.basename(path))[0] + '.png')
- # import pdb; pdb.set_trace()
- predictions['sem_seg'] = np.asarray(Image.open(gt_file))
- predictions, visualized_output = demo.run_on_image(img, predictions)
- logger.info(
- "{}: {} in {:.2f}s".format(
- path,
- "detected {} instances".format(len(predictions["instances"]))
- if "instances" in predictions
- else "finished",
- time.time() - start_time,
- )
- )
-
- if args.output:
- if os.path.isdir(args.output):
- assert os.path.isdir(args.output), args.output
- out_filename = os.path.join(args.output, os.path.basename(path))
- else:
- assert len(args.input) == 1, "Please specify a directory with args.output"
- out_filename = args.output
- visualized_output.save(out_filename)
- else:
- cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
- cv2.imshow(WINDOW_NAME, visualized_output.get_image()[:, :, ::-1])
- if cv2.waitKey(0) == 27:
- break # esc to quit
- elif args.webcam:
- assert args.input is None, "Cannot have both --input and --webcam!"
- assert args.output is None, "output not yet supported with --webcam!"
- cam = cv2.VideoCapture(0)
- for vis in tqdm.tqdm(demo.run_on_video(cam)):
- cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
- cv2.imshow(WINDOW_NAME, vis)
- if cv2.waitKey(1) == 27:
- break # esc to quit
- cam.release()
- cv2.destroyAllWindows()
- elif args.video_input:
- video = cv2.VideoCapture(args.video_input)
- width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
- height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
- frames_per_second = video.get(cv2.CAP_PROP_FPS)
- num_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
- basename = os.path.basename(args.video_input)
- codec, file_ext = (
- ("x264", ".mkv") if test_opencv_video_format("x264", ".mkv") else ("mp4v", ".mp4")
- )
-    if codec == "mp4v":
- warnings.warn("x264 codec not available, switching to mp4v")
- if args.output:
- if os.path.isdir(args.output):
- output_fname = os.path.join(args.output, basename)
- output_fname = os.path.splitext(output_fname)[0] + file_ext
- else:
- output_fname = args.output
- assert not os.path.isfile(output_fname), output_fname
- output_file = cv2.VideoWriter(
- filename=output_fname,
- # some installation of opencv may not support x264 (due to its license),
- # you can try other format (e.g. MPEG)
- fourcc=cv2.VideoWriter_fourcc(*codec),
- fps=float(frames_per_second),
- frameSize=(width, height),
- isColor=True,
- )
- assert os.path.isfile(args.video_input)
- for vis_frame in tqdm.tqdm(demo.run_on_video(video), total=num_frames):
- if args.output:
- output_file.write(vis_frame)
- else:
- cv2.namedWindow(basename, cv2.WINDOW_NORMAL)
- cv2.imshow(basename, vis_frame)
- if cv2.waitKey(1) == 27:
- break # esc to quit
- video.release()
- if args.output:
- output_file.release()
- else:
- cv2.destroyAllWindows()
diff --git a/spaces/hanan217/QQsign/Dockerfile b/spaces/hanan217/QQsign/Dockerfile
deleted file mode 100644
index 535624113f3b520e4829240a48bd3652430de828..0000000000000000000000000000000000000000
--- a/spaces/hanan217/QQsign/Dockerfile
+++ /dev/null
@@ -1,23 +0,0 @@
-FROM openjdk:17-slim
-
-# Set the timezone
-ENV TZ Asia/Shanghai
-
-# Set the working directory
-WORKDIR /app
-
-# Copy files into the working directory
-COPY bin /app/bin
-COPY lib /app/lib
-COPY txlib /app/txlib
-
-# Set up permissions and configuration
-RUN chmod -R 777 /tmp
-RUN chmod -R 777 /app
-RUN sed 's/"key": ".*"/"key": "'"$KEY_VALUE"'"/' txlib/$TXLIB_VERSION/config.json > /app/txlib/$TXLIB_VERSION/config.json
-
-# Run
-CMD bash bin/unidbg-fetch-qsign --basePath=txlib/$TXLIB_VERSION
-
-# Expose the port
-EXPOSE 7860
\ No newline at end of file
diff --git a/spaces/haoqi7/research/lrt/clustering/__init__.py b/spaces/haoqi7/research/lrt/clustering/__init__.py
deleted file mode 100644
index 5e1a3dadad381a487c2131f273d4ae064c759fd0..0000000000000000000000000000000000000000
--- a/spaces/haoqi7/research/lrt/clustering/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .clustering_pipeline import ClusterPipeline, ClusterList
-from .config import Configuration,BaselineConfig
diff --git a/spaces/harish3110/document-parsing-demo/README.md b/spaces/harish3110/document-parsing-demo/README.md
deleted file mode 100644
index 672a6f69b42705a5a0049ce1527bd6f4b090a237..0000000000000000000000000000000000000000
--- a/spaces/harish3110/document-parsing-demo/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Document Parsing Demo
-emoji: 📈
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.13.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/hesha/anime-remove-background/app.py b/spaces/hesha/anime-remove-background/app.py
deleted file mode 100644
index b1e96a91cdb8243074de409e060181f33e7ca87a..0000000000000000000000000000000000000000
--- a/spaces/hesha/anime-remove-background/app.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import gradio as gr
-import huggingface_hub
-import onnxruntime as rt
-import numpy as np
-import cv2
-
-providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
-model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx")
-rmbg_model = rt.InferenceSession(model_path, providers=providers)
-
-def get_mask(img, s=1024):
- img = (img / 255).astype(np.float32)
- h, w = h0, w0 = img.shape[:-1]
- h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s)
- ph, pw = s - h, s - w
- img_input = np.zeros([s, s, 3], dtype=np.float32)
- img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h))
- img_input = np.transpose(img_input, (2, 0, 1))
- img_input = img_input[np.newaxis, :]
- mask = rmbg_model.run(None, {'img': img_input})[0][0]
- mask = np.transpose(mask, (1, 2, 0))
- mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w]
- mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis]
- return mask
-
-def rmbg_fn(img):
- mask = get_mask(img)
- img = (mask * img + 255 * (1 - mask)).astype(np.uint8)
- mask = (mask * 255).astype(np.uint8)
- img = np.concatenate([img, mask], axis=2, dtype=np.uint8)
- mask = mask.repeat(3, axis=2)
- return mask, img
-
-iface = gr.Interface(
- fn=rmbg_fn,
- inputs=gr.Image(label="input image"),
- outputs=[
- gr.Image(label="mask"),
- gr.Image(label="result", image_mode="RGBA")
- ],
- title="Anime Remove Background",
- allow_flagging="never",
-    article='',  # placeholder: the original HTML snippet here did not survive extraction
-)
-
-iface.queue().launch(show_api=False, show_error=True)
\ No newline at end of file
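Because the app exposes `rmbg_fn` directly, the same helpers can be used without the Gradio UI; here is a minimal sketch with placeholder file names:

```python
import cv2

# Read an image as RGB, remove the background, and save both the mask and the RGBA result.
img = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2RGB)
mask, rgba = rmbg_fn(img)
cv2.imwrite("mask.png", mask)
cv2.imwrite("output.png", cv2.cvtColor(rgba, cv2.COLOR_RGBA2BGRA))
```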
diff --git a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Structure Studios Pool Studio Crack Fix.md b/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Structure Studios Pool Studio Crack Fix.md
deleted file mode 100644
index b79ccf53dacf945ddc8651a987d7bd0b1968d4be..0000000000000000000000000000000000000000
--- a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Structure Studios Pool Studio Crack Fix.md
+++ /dev/null
@@ -1,49 +0,0 @@
-## Structure Studios Pool Studio Crack
-
-
-
-**Download File === [https://ditzcosupo.blogspot.com/?d=2twsjm](https://ditzcosupo.blogspot.com/?d=2twsjm)**
-
-
-
- Here is a possible title and article with HTML formatting for the keyword "Structure Studios Pool Studio Crack":
-
-# How to Crack Structure Studios Pool Studio Software
-
-
-
-If you are looking for a way to crack Structure Studios Pool Studio software, you might be disappointed. Pool Studio is a professional 3D swimming pool design software that allows you to create stunning presentations for your clients. It is not a cheap or easy software to crack, and doing so might expose you to legal and security risks.
-
-
-
-According to the developer's website[^1^], Pool Studio is a subscription-based software that requires an active membership to access its features and updates. You can choose from three different plans: Vip3D, Pool Studio, and VizTerra, depending on your design needs and budget. Each plan comes with a free trial period, live training, and technical support.
-
-
-
-Cracking Pool Studio software would mean bypassing its subscription system and accessing its features without paying for them. This is not only unethical but also illegal, as it violates the terms of service and the intellectual property rights of the developer. You could face serious consequences if you are caught using a cracked version of Pool Studio, such as fines, lawsuits, or even criminal charges.
-
-
-
-Moreover, cracking Pool Studio software could also harm your computer and your data. Many websites that claim to offer cracked versions of Pool Studio are actually scams that infect your device with malware, spyware, or ransomware. These malicious programs could steal your personal information, damage your files, or lock your system until you pay a ransom. You could also lose access to your projects and clients if you rely on a cracked version of Pool Studio that stops working or gets detected by the developer.
-
-
-
-Therefore, we strongly advise you not to crack Structure Studios Pool Studio software. Instead, we recommend you to use the free trial period to test its features and see if it meets your expectations. If you decide to purchase a subscription plan, you will enjoy the benefits of a reliable, secure, and updated software that will help you grow your business and impress your clients with amazing 3D swimming pool designs.
-
-Here are a few more paragraphs to extend the article:
-
-If you are still wondering why you should use Pool Studio software instead of a cracked version, here are some of the features that make it worth paying for:
-
-
-
-- Pool Studio software lets you design in 3D with easy and intuitive tools. You can draw and edit shapes, add materials and textures, insert accessories and furniture, and customize every aspect of your swimming pool design.
-
-- Pool Studio software allows you to create fully interactive presentations that showcase your design in realistic and stunning detail. You can add lighting and shadows, water effects, animations, and sound effects. You can also switch between different views and perspectives, such as top-down, walkthrough, or flyover.
-
-- Pool Studio software helps you to generate smart construction and calculation data for your design. You can get accurate measurements, dimensions, volumes, and costs for your swimming pool project. You can also export your design to various formats, such as PDF, DWG, or 3D models.
-
-
-
-As you can see, Pool Studio software is a powerful and versatile tool that can help you to create professional and impressive swimming pool designs. It is not worth risking your reputation, your security, and your clients' satisfaction by using a cracked version of Pool Studio. Instead, you should invest in a legitimate subscription plan that will give you access to the best 3D swimming pool design software in the market.
-
- dfd1c89656
\ No newline at end of file
diff --git a/spaces/hlopez/Waste-Detector/model.py b/spaces/hlopez/Waste-Detector/model.py
deleted file mode 100644
index df935bbc26f248ffed7da4f816a8bcb994fc0e24..0000000000000000000000000000000000000000
--- a/spaces/hlopez/Waste-Detector/model.py
+++ /dev/null
@@ -1,119 +0,0 @@
-from io import BytesIO
-from typing import Dict, Tuple, Union
-from icevision import *
-from icevision.models.checkpoint import model_from_checkpoint
-from classifier import transform_image
-from icevision.models import ross
-
-import PIL
-import torch
-import numpy as np
-import torchvision
-
-MODEL_TYPE = ross.efficientdet
-
-def predict(det_model : torch.nn.Module, image : Union[str, BytesIO],
- detection_threshold : float) -> Dict:
- """
- Make a prediction with the detection model.
-
- Args:
- det_model (torch.nn.Module): Detection model
- image (Union[str, BytesIO]): Image filepath if the image is one of
- the example images and BytesIO if the image is a custom image
- uploaded by the user.
- detection_threshold (float): Detection threshold
-
- Returns:
- Dict: Prediction dictionary.
- """
- img = PIL.Image.open(image)
-
- # Class map and transforms
- class_map = ClassMap(classes=['Waste'])
- transforms = tfms.A.Adapter([
- *tfms.A.resize_and_pad(512),
- tfms.A.Normalize()
- ])
-
- # Single prediction
- pred_dict = MODEL_TYPE.end2end_detect(img,
- transforms,
- det_model,
- class_map=class_map,
- detection_threshold=detection_threshold,
- return_as_pil_img=False,
- return_img=True,
- display_bbox=False,
- display_score=False,
- display_label=False)
-
- return pred_dict
-
-def prepare_prediction(pred_dict : Dict,
- nms_threshold : str) -> Tuple[torch.Tensor, np.ndarray]:
- """
- Get the predictions in a right format.
-
- Args:
- pred_dict (Dict): Prediction dictionary.
- nms_threshold (float): Threshold for the NMS postprocess.
-
- Returns:
- Tuple: Tuple containing the following:
- - (torch.Tensor): Bounding boxes
- - (np.ndarray): Image data
- """
- # Convert each box to a tensor and stack them into an unique tensor
- boxes = [box.to_tensor() for box in pred_dict['detection']['bboxes']]
- boxes = torch.stack(boxes)
-
- # Get the scores and labels as tensor
- scores = torch.as_tensor(pred_dict['detection']['scores'])
- labels = torch.as_tensor(pred_dict['detection']['label_ids'])
-
- image = np.array(pred_dict['img'])
-
- # Apply NMS to postprocess the bounding boxes
- fixed_boxes = torchvision.ops.batched_nms(boxes, scores,
- labels,nms_threshold)
- boxes = boxes[fixed_boxes, :]
-
- return boxes, image
-
-def predict_class(classifier : torch.nn.Module, image : np.ndarray,
- bboxes : torch.Tensor) -> np.ndarray:
- """
- Predict the class of each detected object.
-
- Args:
- classifier (torch.nn.Module): Classifier model.
- image (np.ndarray): Image data.
- bboxes (torch.Tensor): Bounding boxes.
-
- Returns:
- np.ndarray: Array containing the predicted class for each object.
- """
- preds = []
-
- for bbox in bboxes:
- img = image.copy()
- bbox = np.array(bbox).astype(int)
-
- # Get the bounding box content
- cropped_img = PIL.Image.fromarray(img).crop(bbox)
- cropped_img = np.array(cropped_img)
-
- # Apply transformations to the cropped image
- tran_image = transform_image(cropped_img, 224)
- # Channels first
- tran_image = tran_image.transpose(2, 0, 1)
- tran_image = torch.as_tensor(tran_image, dtype=torch.float).unsqueeze(0)
-
- # Make prediction
- y_preds = classifier(tran_image)
- preds.append(y_preds.softmax(1).detach().numpy())
-
- preds = np.concatenate(preds).argmax(1)
-
- return preds
\ No newline at end of file
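Tying the three helpers above together, a hedged end-to-end sketch: `det_model` and `classifier` are assumed to be loaded elsewhere in this Space, and the image path and thresholds are placeholders:

```python
# Detection -> NMS post-processing -> per-box classification.
pred_dict = predict(det_model, "example.jpg", detection_threshold=0.5)
boxes, image = prepare_prediction(pred_dict, nms_threshold=0.3)
labels = predict_class(classifier, image, boxes)
print(list(zip(boxes.tolist(), labels.tolist())))
```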
diff --git a/spaces/hslu-di/Reust_Yannic/app.py b/spaces/hslu-di/Reust_Yannic/app.py
deleted file mode 100644
index 5d40389f4c6874c23ea13b0dd723c2a04ec494ea..0000000000000000000000000000000000000000
--- a/spaces/hslu-di/Reust_Yannic/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/kohbanye/pixel-art-style").launch()
\ No newline at end of file
diff --git a/spaces/huggingface/bloom-test-flax/README.md b/spaces/huggingface/bloom-test-flax/README.md
deleted file mode 100644
index c6e469f2b9a2a7f2366c6056ffad9b214c8d8ecb..0000000000000000000000000000000000000000
--- a/spaces/huggingface/bloom-test-flax/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Bloom Test Flax
-emoji: 🐨
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.0.24
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/hysts/DETA/README.md b/spaces/hysts/DETA/README.md
deleted file mode 100644
index b713309df908e6a1b3e671d4f4d31d57140dff66..0000000000000000000000000000000000000000
--- a/spaces/hysts/DETA/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: DETA
-emoji: 🐠
-colorFrom: purple
-colorTo: red
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/hysts/anime-face-detector/README.md b/spaces/hysts/anime-face-detector/README.md
deleted file mode 100644
index 33cb3e19fd12f49bda6a6aeaab360e1121d35534..0000000000000000000000000000000000000000
--- a/spaces/hysts/anime-face-detector/README.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: Anime Face Detector
-emoji: 😻
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.0.5
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/hzwluoye/gpt4/g4f/typing.py b/spaces/hzwluoye/gpt4/g4f/typing.py
deleted file mode 100644
index e41a567ae49dd26d2ace2a3732b0e8f0bbbaa4b0..0000000000000000000000000000000000000000
--- a/spaces/hzwluoye/gpt4/g4f/typing.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from typing import Dict, NewType, Union, Optional, List, get_type_hints
-
-sha256 = NewType('sha_256_hash', str)
\ No newline at end of file
diff --git a/spaces/iamstolas/STOLAS/src/components/tone-selector.tsx b/spaces/iamstolas/STOLAS/src/components/tone-selector.tsx
deleted file mode 100644
index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000
--- a/spaces/iamstolas/STOLAS/src/components/tone-selector.tsx
+++ /dev/null
@@ -1,43 +0,0 @@
-import React from 'react'
-import { BingConversationStyle } from '@/lib/bots/bing/types'
-import { cn } from '@/lib/utils'
-
-type ToneItem = {
- type: BingConversationStyle,
- name: string
-}
-
-const ToneList: ToneItem[] = [
- { name: '有创造力', type: BingConversationStyle.Creative },
- { name: '更平衡', type: BingConversationStyle.Balanced },
- { name: '更精确', type: BingConversationStyle.Precise }
-]
-
-interface ToneSelectorProps {
- type: BingConversationStyle | ''
- onChange?: (type: BingConversationStyle) => void
-}
-
-export function ToneSelector({ type, onChange }: ToneSelectorProps) {
- return (
-
-
- 选择对话样式
-
-
-
- {
- ToneList.map(tone => (
-
onChange?.(tone.type)}>
-
-
- ))
- }
-
-
-
- )
-}
diff --git a/spaces/inamXcontru/PoeticTTS/Chudail Movie Free Download In Hindi HD.md b/spaces/inamXcontru/PoeticTTS/Chudail Movie Free Download In Hindi HD.md
deleted file mode 100644
index 631479ea8ba405ab5434479781c3eee98008acc4..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Chudail Movie Free Download In Hindi HD.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
download Raj Mahal 2 Comedy Scenes unlimited Movies and videos Download Here.Raj Mahal 2 Comedy Scenes Hd,3gp. mp4 320p and More Videos You Can Download Easyly. tamilrockers and movierulz, tamilgun, filmywap, and pagalworld videos and Movies download.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/innnky/nyaru4.0/utils.py b/spaces/innnky/nyaru4.0/utils.py
deleted file mode 100644
index f13d3526d514be71c77bebb17a5af8831b9c6a36..0000000000000000000000000000000000000000
--- a/spaces/innnky/nyaru4.0/utils.py
+++ /dev/null
@@ -1,508 +0,0 @@
-import os
-import glob
-import re
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import random
-
-import librosa
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-from torch.nn import functional as F
-from modules.commons import sequence_mask
-from hubert import hubert_model
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-f0_bin = 256
-f0_max = 1100.0
-f0_min = 50.0
-f0_mel_min = 1127 * np.log(1 + f0_min / 700)
-f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
-
-# def normalize_f0(f0, random_scale=True):
-# f0_norm = f0.clone() # create a copy of the input Tensor
-# batch_size, _, frame_length = f0_norm.shape
-# for i in range(batch_size):
-# means = torch.mean(f0_norm[i, 0, :])
-# if random_scale:
-# factor = random.uniform(0.8, 1.2)
-# else:
-# factor = 1
-# f0_norm[i, 0, :] = (f0_norm[i, 0, :] - means) * factor
-# return f0_norm
-# def normalize_f0(f0, random_scale=True):
-# means = torch.mean(f0[:, 0, :], dim=1, keepdim=True)
-# if random_scale:
-# factor = torch.Tensor(f0.shape[0],1).uniform_(0.8, 1.2).to(f0.device)
-# else:
-# factor = torch.ones(f0.shape[0], 1, 1).to(f0.device)
-# f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1)
-# return f0_norm
-def normalize_f0(f0, x_mask, uv, random_scale=True):
- # calculate means based on x_mask
- uv_sum = torch.sum(uv, dim=1, keepdim=True)
- uv_sum[uv_sum == 0] = 9999
- means = torch.sum(f0[:, 0, :] * uv, dim=1, keepdim=True) / uv_sum
-
- if random_scale:
- factor = torch.Tensor(f0.shape[0], 1).uniform_(0.8, 1.2).to(f0.device)
- else:
- factor = torch.ones(f0.shape[0], 1).to(f0.device)
- # normalize f0 based on means and factor
- f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1)
- if torch.isnan(f0_norm).any():
- exit(0)
- return f0_norm * x_mask
-
-
-def plot_data_to_numpy(x, y):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- plt.plot(x)
- plt.plot(y)
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-
-def interpolate_f0(f0):
- '''
-    Interpolate the F0 contour over unvoiced (zero-valued) frames.
- '''
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
- ip_data[i] = data[i]
- last_value = data[i]
-
- return ip_data[:,0], vuv_vector[:,0]
-
-
-def compute_f0_parselmouth(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512):
- import parselmouth
- x = wav_numpy
- if p_len is None:
- p_len = x.shape[0]//hop_length
- else:
- assert abs(p_len-x.shape[0]//hop_length) < 4, "pad length error"
- time_step = hop_length / sampling_rate * 1000
- f0_min = 50
- f0_max = 1100
- f0 = parselmouth.Sound(x, sampling_rate).to_pitch_ac(
- time_step=time_step / 1000, voicing_threshold=0.6,
- pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
-
- pad_size=(p_len - len(f0) + 1) // 2
- if(pad_size>0 or p_len - len(f0) - pad_size>0):
- f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant')
- return f0
-
-def resize_f0(x, target_len):
- source = np.array(x)
- source[source<0.001] = np.nan
- target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source)
- res = np.nan_to_num(target)
- return res
-
-def compute_f0_dio(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512):
- import pyworld
- if p_len is None:
- p_len = wav_numpy.shape[0]//hop_length
- f0, t = pyworld.dio(
- wav_numpy.astype(np.double),
- fs=sampling_rate,
- f0_ceil=800,
- frame_period=1000 * hop_length / sampling_rate,
- )
- f0 = pyworld.stonemask(wav_numpy.astype(np.double), f0, t, sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return resize_f0(f0, p_len)
-
-def f0_to_coarse(f0):
- is_torch = isinstance(f0, torch.Tensor)
- f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1
-
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1
-    f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(int)
- assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min())
- return f0_coarse
-
-
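A small numeric sketch of the coarse-pitch mapping above (the input values are arbitrary and only illustrate the voiced/unvoiced behaviour):

```python
import numpy as np

# Unvoiced frames (f0 == 0) map to bin 1; voiced frames land in bins 1..255
# according to the mel-scaled range defined by f0_min / f0_max above.
f0 = np.array([0.0, 110.0, 220.0, 440.0, 880.0])
print(f0_to_coarse(f0))  # integer pitch bins, starting with 1 for the unvoiced frame
```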
-def get_hubert_model():
- vec_path = "hubert/checkpoint_best_legacy_500.pt"
- print("load model(s) from {}".format(vec_path))
- from fairseq import checkpoint_utils
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- [vec_path],
- suffix="",
- )
- model = models[0]
- model.eval()
- return model
-
-def get_hubert_content(hmodel, wav_16k_tensor):
- feats = wav_16k_tensor
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- inputs = {
- "source": feats.to(wav_16k_tensor.device),
- "padding_mask": padding_mask.to(wav_16k_tensor.device),
- "output_layer": 9, # layer 9
- }
- with torch.no_grad():
- logits = hmodel.extract_features(**inputs)
- feats = hmodel.final_proj(logits[0])
- return feats.transpose(1, 2)
-
-
-def get_content(cmodel, y):
- with torch.no_grad():
- c = cmodel.extract_features(y.squeeze(1))[0]
- c = c.transpose(1, 2)
- return c
-
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None and not skip_optimizer:
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- # assert "dec" in k or "disc" in k
- # print("load", k)
- new_state_dict[k] = saved_state_dict[k]
- assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape)
- except:
- print("error, %s is not in the checkpoint" % k)
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- print("load ")
- logger.info("Loaded checkpoint '{}' (iteration {})".format(
- checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path, val_steps, current_step):
- logger.info("Saving model and optimizer state at iteration {} to {}".format(
- iteration, checkpoint_path))
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save({'model': state_dict,
- 'iteration': iteration,
- 'optimizer': optimizer.state_dict(),
- 'learning_rate': learning_rate}, checkpoint_path)
- if current_step >= val_steps * 3:
- to_del_ckptname = checkpoint_path.replace(str(current_step), str(current_step - val_steps * 3))
- if os.path.exists(to_del_ckptname):
- os.remove(to_del_ckptname)
- print("Removing ", to_del_ckptname)
-
-
-def clean_checkpoints(path_to_models='logs/48k/', n_ckpts_to_keep=2, sort_by_time=True):
- """Freeing up space by deleting saved ckpts
-
- Arguments:
- path_to_models -- Path to the model directory
- n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth
- sort_by_time -- True -> chronologically delete ckpts
- False -> lexicographically delete ckpts
- """
- ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))]
- name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1)))
- time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f)))
- sort_key = time_key if sort_by_time else name_key
- x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], key=sort_key)
- to_del = [os.path.join(path_to_models, fn) for fn in
- (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])]
- del_info = lambda fn: logger.info(f".. Free up space by deleting ckpt {fn}")
- del_routine = lambda x: [os.remove(x), del_info(x)]
- rs = [del_routine(fn) for fn in to_del]
-
-def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats='HWC')
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
-
-
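The checkpoint helpers above (save_checkpoint, clean_checkpoints, latest_checkpoint_path) cover the whole lifecycle: write the new file, prune older ones, then resolve the most recent file when resuming. A minimal sketch of how they chain together, assuming an existing `model`, `optimizer` and a `logs/44k` directory populated by the training loop (the directory name, step numbers and learning rate are hypothetical, not taken from this file):

ckpt_dir = "logs/44k"
save_checkpoint(model, optimizer, learning_rate=1e-4, iteration=epoch,
                checkpoint_path=os.path.join(ckpt_dir, "G_8000.pth"),
                val_steps=2000, current_step=8000)   # also removes G_2000.pth if it exists
clean_checkpoints(path_to_models=ckpt_dir, n_ckpts_to_keep=2, sort_by_time=True)
resume_from = latest_checkpoint_path(ckpt_dir, regex="G_*.pth")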
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10,2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/base.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, required=True,
- help='Model name')
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
- logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
- logger.warn("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-def repeat_expand_2d(content, target_len):
- # content : [h, t]
-
- src_len = content.shape[-1]
- target = torch.zeros([content.shape[0], target_len], dtype=torch.float).to(content.device)
- temp = torch.arange(src_len+1) * target_len / src_len
- current_pos = 0
- for i in range(target_len):
- if i < temp[current_pos+1]:
- target[:, i] = content[:, current_pos]
- else:
- current_pos += 1
- target[:, i] = content[:, current_pos]
-
- return target
-
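Read as a whole, the loop above is piecewise-constant upsampling along time: target frame i copies source frame floor(i * src_len / target_len). A roughly equivalent vectorized sketch (not from the original file; boundary frames can differ by one because the loop compares against floating-point bin edges):

def repeat_expand_2d_vectorized(content, target_len):
    # content: [h, t] -> [h, target_len]; target frame i copies source frame floor(i * src_len / target_len)
    src_len = content.shape[-1]
    idx = (torch.arange(target_len, device=content.device) * src_len) // target_len
    return content[:, idx]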
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
-
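HParams mirrors the JSON config as nested attribute access while still behaving like a dict, which is why get_hparams / get_hparams_from_file can pass the parsed config straight into it. A small illustration with made-up keys (they are not taken from any particular config file):

hps = HParams(train={"batch_size": 16}, model={"hidden_channels": 192})
hps.train.batch_size               # 16, attribute-style access
hps["model"]["hidden_channels"]    # 192, dict-style access
"train" in hps                     # True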
diff --git a/spaces/inreVtussa/clothingai/Examples/CrackBIM360Design2019keygen [REPACK].md b/spaces/inreVtussa/clothingai/Examples/CrackBIM360Design2019keygen [REPACK].md
deleted file mode 100644
index 0e3c79b05051859837b54bf278da9958c336e58d..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/CrackBIM360Design2019keygen [REPACK].md
+++ /dev/null
@@ -1,109 +0,0 @@
-
-
CrackBIM360Design2019keygen: Everything You Need to Know
-
If you are looking for a way to use BIM 360 Design 2019 without paying for a license, you might be interested in CrackBIM360Design2019keygen. This is a software tool that can generate a valid keygen for BIM 360 Design 2019, allowing you to access all the features and functions of this powerful design software. In this article, we will explain what CrackBIM360Design2019keygen is, how it works, and where you can download it.
CrackBIM360Design2019keygen is a software tool that can create a keygen for BIM 360 Design 2019. A keygen is a program that can generate a serial number or activation code for a software product. By using a keygen, you can bypass the need to purchase a license or subscription for the software, and use it for free.
-
BIM 360 Design 2019 is a cloud-based design software that enables collaboration and coordination among architects, engineers, and contractors. It allows users to create, edit, and share 3D models, drawings, and documents in real-time. BIM 360 Design 2019 also integrates with other Autodesk products, such as Revit, AutoCAD, and Navisworks.
-
However, BIM 360 Design 2019 is not cheap. It costs $900 per year for a single-user license, or $1,620 per year for a multi-user license. For many users, this is too expensive or unaffordable. That's why some people resort to using CrackBIM360Design2019keygen to get access to BIM 360 Design 2019 without paying anything.
-
How does CrackBIM360Design2019keygen work?
-
CrackBIM360Design2019keygen works by exploiting a vulnerability in the activation process of BIM 360 Design 2019. It mimics the communication between the software and the Autodesk server, and generates a fake response that validates the keygen. This way, the software thinks that the keygen is legitimate and authenticates it.
-
To use CrackBIM360Design2019keygen, you need to follow these steps:
-
-
Download CrackBIM360Design2019keygen from a reliable source. Be careful of malware or viruses that might be hidden in some downloads.
-
Run CrackBIM360Design2019keygen as an administrator on your computer.
-
Select BIM 360 Design 2019 from the list of products and click on Generate.
-
Copy the keygen that appears on the screen.
-
Install BIM 360 Design 2019 on your computer if you haven't already.
-
Launch BIM 360 Design 2019 and enter the keygen when prompted.
-
Enjoy using BIM 360 Design 2019 for free!
-
-
Where can I download CrackBIM360Design2019keygen?
-
There are many websites that claim to offer CrackBIM360Design2019keygen for download. However, not all of them are trustworthy or safe. Some of them might contain malware or viruses that can harm your computer or steal your personal information. Some of them might also provide fake or outdated keygens that don't work or get detected by Autodesk.
-
-
Therefore, you need to be careful and selective when choosing where to download CrackBIM360Design2019keygen. You should only download it from reputable and verified sources that have positive reviews and feedback from other users. You should also scan the download with an antivirus program before opening it.
-
One of the websites that we recommend for downloading CrackBIM360Design2019keygen is Patronway.com. This website has been providing quality software tools and keygens for various Autodesk products for years. It has a large user base and a high success rate. It also offers fast and secure downloads with no surveys or passwords required.
-
To download CrackBIM360Design2019keygen from Patronway.com, you need to follow these steps:
-
-
Go to https://patronway.com/crack-workbim360design2019keygen/
-
Click on the Download button at the bottom of the page.
-
Wait for the download to finish and save it on your computer.
-
Run CrackBIM360Design2019keygen as an administrator and follow the instructions above.
-
-
Conclusion
-
CrackBIM360Design2019keygen is a software tool that can help you use BIM 360 Design 2019 for free by generating a valid keygen for it. It works by exploiting a vulnerability in the activation process of BIM 360 Design 2019 and mimicking the communication between the software and the Autodesk server. It is easy to use and has a high success rate.
-
However, you need to be careful when downloading CrackBIM360Design2019keygen from the internet. Not all websites are reliable or safe. Some of them might contain malware or viruses that can damage your computer or compromise your privacy. Some of them might also provide fake or outdated keygens that don't work or get detected by Autodesk.
-
Therefore, you should only download CrackBIM360Design2019keygen from reputable and verified sources that have positive reviews and feedback from other users. You should also scan the download with an antivirus program before opening it.
-
One of the websites that we recommend for downloading CrackBIM360Design2019keygen is Patronway.com. This website has been providing quality software tools and keygens for various Autodesk products for years. It has a large user base and a high success rate. It also offers fast and secure downloads with no surveys or passwords required.
-
If you want to use BIM 360 Design 2019 without paying for a license or subscription, you should try CrackBIM360Design2019keygen today!
-
What are the benefits of using CrackBIM360Design2019keygen?
-
By using CrackBIM360Design2019keygen, you can enjoy many benefits that BIM 360 Design 2019 has to offer. Some of these benefits are:
-
-
You can save money and time by not having to pay for a license or subscription for BIM 360 Design 2019.
-
You can collaborate and coordinate with your team members and stakeholders in real-time, regardless of their location or device.
-
You can create, edit, and share 3D models, drawings, and documents with ease and accuracy.
-
You can integrate BIM 360 Design 2019 with other Autodesk products, such as Revit, AutoCAD, and Navisworks, and streamline your workflow.
-
You can access BIM 360 Design 2019 from anywhere and anytime, as long as you have an internet connection.
-
-
What are the risks of using CrackBIM360Design2019keygen?
-
While using CrackBIM360Design2019keygen might seem tempting, it also comes with some risks that you should be aware of. Some of these risks are:
-
-
You might violate the terms and conditions of Autodesk and face legal consequences for using pirated software.
-
You might expose your computer to malware or viruses that can damage your system or steal your personal information.
-
You might get detected by Autodesk and have your keygen revoked or blocked.
-
You might not receive updates or support from Autodesk for BIM 360 Design 2019.
-
You might experience errors or glitches with BIM 360 Design 2019 that can affect your performance or quality of work.
-
-
Is there an alternative to using CrackBIM360Design2019keygen?
-
If you want to use BIM 360 Design 2019 without risking the negative consequences of using CrackBIM360Design2019keygen, there is an alternative option that you can consider. You can try using a free trial version of BIM 360 Design 2019 that Autodesk offers for new users.
-
The free trial version of BIM 360 Design 2019 allows you to use the software for 30 days without paying anything. You can access all the features and functions of BIM 360 Design 2019 and test its capabilities. You can also collaborate and coordinate with other users who have a valid license or subscription for BIM 360 Design 2019.
-
To use the free trial version of BIM 360 Design 2019, you need to follow these steps:
-
-
Go to https://www.autodesk.com/products/bim-360-design/free-trial
-
Fill out the form with your personal and professional information.
-
Click on Start Free Trial and wait for the confirmation email from Autodesk.
-
Follow the instructions in the email to activate your free trial account and download BIM 360 Design 2019 on your computer.
-
Launch BIM 360 Design 2019 and sign in with your free trial account credentials.
-
Enjoy using BIM 360 Design 2019 for free for 30 days!
-
-
How to use BIM 360 Design 2019 with CrackBIM360Design2019keygen?
-
Once you have installed BIM 360 Design 2019 and activated it with CrackBIM360Design2019keygen, you can start using it for your design projects. Here are some tips on how to use BIM 360 Design 2019 effectively:
-
-
Create a project in BIM 360 Design 2019 and invite your team members and stakeholders to join it. You can assign roles and permissions to different users and control their access to the project data.
-
Upload your 3D models, drawings, and documents to the cloud and sync them with BIM 360 Design 2019. You can use the desktop connector or the web browser to upload your files.
-
Edit and update your files in BIM 360 Design 2019 using the online editor or the desktop app. You can use tools such as markup, annotation, clash detection, and design review to enhance your work.
-
Share and collaborate with your team members and stakeholders in real-time using BIM 360 Design 2019. You can chat, comment, and exchange feedback with other users within the software.
-
Track and manage your project progress and performance using BIM 360 Design 2019. You can use dashboards, reports, and analytics to monitor your project status and identify issues or risks.
-
-
What are the drawbacks of using CrackBIM360Design2019keygen?
-
While using CrackBIM360Design2019keygen might seem convenient and beneficial, it also has some drawbacks that you should consider before using it. Some of these drawbacks are:
-
-
You might compromise your professional ethics and reputation by using pirated software.
-
You might lose your work or data if Autodesk detects your keygen and deactivates it.
-
You might miss out on new features or improvements that Autodesk releases for BIM 360 Design 2019.
-
You might encounter compatibility or security issues with other software or devices that you use with BIM 360 Design 2019.
-
You might not get any technical support or customer service from Autodesk if you encounter any problems with BIM 360 Design 2019.
-
-
How to get a legitimate license or subscription for BIM 360 Design 2019?
-
If you want to use BIM 360 Design 2019 legally and safely, you should get a legitimate license or subscription for it from Autodesk. This way, you can avoid the risks and drawbacks of using CrackBIM360Design2019keygen, and enjoy the full benefits and advantages of BIM 360 Design 2019.
-
To get a legitimate license or subscription for BIM 360 Design 2019, you need to follow these steps:
-
-
Go to https://www.autodesk.com/products/bim-360-design/overview
-
Click on Buy Now or Subscribe Now depending on your preference.
-
Select the plan that suits your needs and budget. You can choose between a single-user or a multi-user license, and between a monthly, yearly, or three-yearly subscription.
-
Enter your billing and payment information and confirm your order.
-
Wait for the confirmation email from Autodesk with your license or subscription details.
-
Download and install BIM 360 Design 2019 on your computer using the link provided in the email.
-
Launch BIM 360 Design 2019 and sign in with your Autodesk account credentials.
-
Enjoy using BIM 360 Design 2019 legally and safely!
-
-
Conclusion
-
BIM 360 Design 2019 is a cloud-based design software that enables collaboration and coordination among architects, engineers, and contractors. It allows users to create, edit, and share 3D models, drawings, and documents in real-time. It also integrates with other Autodesk products, such as Revit, AutoCAD, and Navisworks.
-
However, BIM 360 Design 2019 is not cheap. It costs $900 per year for a single-user license, or $1,620 per year for a multi-user license. For many users, this is too expensive or unaffordable. That's why some people resort to using CrackBIM360Design2019keygen to get access to BIM 360 Design 2019 without paying anything.
-
CrackBIM360Design2019keygen is a software tool that can generate a valid keygen for BIM 360 Design 2019, allowing users to bypass the need to purchase a license or subscription for the software. It works by exploiting a vulnerability in the activation process of BIM 360 Design 2019 and mimicking the communication between the software and the Autodesk server.
-
However, using CrackBIM360Design2019keygen also comes with some risks and drawbacks. Users might violate the terms and conditions of Autodesk and face legal consequences for using pirated software. They might also expose their computer to malware or viruses that can harm their system or steal their personal information. They might also get detected by Autodesk and have their keygen revoked or blocked. They might also miss out on updates or support from Autodesk for BIM 360 Design 2019.
-
Therefore, users should be careful and selective when choosing where to download CrackBIM360Design2019keygen. They should only download it from reputable and verified sources that have positive reviews and feedback from other users. They should also scan the download with an antivirus program before opening it.
-
Alternatively, users can try using a free trial version of BIM 360 Design 2019 that Autodesk offers for new users. This way, they can use the software for 30 days without paying anything and test its capabilities. They can also collaborate and coordinate with other users who have a valid license or subscription for BIM 360 Design 2019.
-
If users want to use BIM 360 Design 2019 legally and safely, they should get a legitimate license or subscription for it from Autodesk. This way, they can avoid the risks and drawbacks of using CrackBIM360Design2019keygen, and enjoy the full benefits and advantages of BIM 360 Design 2019.
-"""
-
-gr.Interface.load(
- name="huggingface/nlp-en-es/roberta-base-bne-finetuned-sqac",
- inputs=[gr.inputs.Textbox(label="Contexto", lines=5), gr.inputs.Textbox(label="Pregunta")],
- outputs=gr.outputs.Textbox(label="Respuesta"),
- title=title,
- description=description,
- article=article,
- examples=examples,
- theme="huggingface",
- allow_screenshot=True,
- allow_flagging=True,
- flagging_dir="flagged",
- enable_queue=True
-).launch()
diff --git a/spaces/markski/reddit-roast-me/src/transformer_mapper.py b/spaces/markski/reddit-roast-me/src/transformer_mapper.py
deleted file mode 100644
index 34dd994016da8e880c4c01327112940304b54442..0000000000000000000000000000000000000000
--- a/spaces/markski/reddit-roast-me/src/transformer_mapper.py
+++ /dev/null
@@ -1,22 +0,0 @@
-
-from torch import nn
-import torch
-
-from src.transformer import Transformer
-
-
-class TransformerMapper(nn.Module):
-
- def forward(self, x):
- x = self.linear(x).view(x.shape[0], self.clip_length, -1)
- prefix = self.prefix_const.unsqueeze(0).expand(x.shape[0], *self.prefix_const.shape)
- prefix = torch.cat((x, prefix), dim=1)
- out = self.transformer(prefix)[:, self.clip_length:]
- return out
-
- def __init__(self, dim_clip: int, dim_embedding: int, prefix_length: int, clip_length: int, num_layers: int = 8):
- super(TransformerMapper, self).__init__()
- self.clip_length = clip_length
- self.transformer = Transformer(dim_embedding, 8, num_layers)
- self.linear = nn.Linear(dim_clip, clip_length * dim_embedding)
- self.prefix_const = nn.Parameter(torch.randn(prefix_length, dim_embedding), requires_grad=True)
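As a shape check for TransformerMapper: the linear layer expands one CLIP-style embedding into clip_length tokens, the learned prefix_const appends prefix_length more, and only the positions after the first clip_length are returned. A hedged sketch, assuming the src.transformer.Transformer dependency above is importable; the dimensions are arbitrary examples, not values used by this Space:

mapper = TransformerMapper(dim_clip=512, dim_embedding=768, prefix_length=10, clip_length=10)
x = torch.randn(4, 512)   # batch of 4 image embeddings
out = mapper(x)           # (4, 10, 768): the transformer's output at the learned-prefix positions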
diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/models/model_audio2landmark.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/models/model_audio2landmark.py
deleted file mode 100644
index fb94f0a749a1d42d6075dea1a990e224e13465d2..0000000000000000000000000000000000000000
--- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/models/model_audio2landmark.py
+++ /dev/null
@@ -1,492 +0,0 @@
-"""
- # Copyright 2020 Adobe
- # All Rights Reserved.
-
- # NOTICE: Adobe permits you to use, modify, and distribute this file in
- # accordance with the terms of the Adobe license agreement accompanying
- # it.
-
-"""
-
-import torch
-import torch.nn as nn
-import torch.nn.parallel
-import torch.utils.data
-import math
-import torch.nn.functional as F
-import copy
-import numpy as np
-
-# device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
-device = torch.device("cuda")
-
-AUDIO_FEAT_SIZE = 161
-FACE_ID_FEAT_SIZE = 204
-Z_SIZE = 16
-EPSILON = 1e-40
-
-
-class Audio2landmark_content(nn.Module):
-
- def __init__(self, num_window_frames=18, in_size=80, lstm_size=AUDIO_FEAT_SIZE, use_prior_net=False, hidden_size=256, num_layers=3, drop_out=0, bidirectional=False):
- super(Audio2landmark_content, self).__init__()
-
- self.fc_prior = nn.Sequential(
- nn.Linear(in_features=in_size, out_features=256),
- nn.BatchNorm1d(256),
- nn.LeakyReLU(0.2),
- nn.Linear(256, lstm_size),
- )
-
- self.use_prior_net = use_prior_net
- if(use_prior_net):
- self.bilstm = nn.LSTM(input_size=lstm_size,
- hidden_size=hidden_size,
- num_layers=num_layers,
- dropout=drop_out,
- bidirectional=bidirectional,
- batch_first=True, )
- else:
- self.bilstm = nn.LSTM(input_size=in_size,
- hidden_size=hidden_size,
- num_layers=num_layers,
- dropout=drop_out,
- bidirectional=bidirectional,
- batch_first=True, )
-
- self.in_size = in_size
- self.lstm_size = lstm_size
- self.num_window_frames = num_window_frames
-
- self.fc_in_features = hidden_size * 2 if bidirectional else hidden_size
- self.fc = nn.Sequential(
- nn.Linear(in_features=self.fc_in_features + FACE_ID_FEAT_SIZE, out_features=512),
- nn.BatchNorm1d(512),
- nn.LeakyReLU(0.2),
- nn.Linear(512, 256),
- nn.BatchNorm1d(256),
- nn.LeakyReLU(0.2),
- nn.Linear(256, 204),
- )
-
-
-
- def forward(self, au, face_id):
-
- inputs = au
- if(self.use_prior_net):
- inputs = self.fc_prior(inputs.contiguous().view(-1, self.in_size))
- inputs = inputs.view(-1, self.num_window_frames, self.lstm_size)
-
- output, (hn, cn) = self.bilstm(inputs)
- output = output[:, -1, :]
-
- if(face_id.shape[0] == 1):
- face_id = face_id.repeat(output.shape[0], 1)
- output2 = torch.cat((output, face_id), dim=1)
-
- output2 = self.fc(output2)
- # output += face_id
-
- return output2, face_id
-
-
-
-class Embedder(nn.Module):
- def __init__(self, feat_size, d_model):
- super().__init__()
- self.embed = nn.Linear(feat_size, d_model)
- def forward(self, x):
- return self.embed(x)
-
-
-class PositionalEncoder(nn.Module):
- def __init__(self, d_model, max_seq_len=512):
- super().__init__()
- self.d_model = d_model
-
- # create constant 'pe' matrix with values dependent on
- # pos and i
- pe = torch.zeros(max_seq_len, d_model)
- for pos in range(max_seq_len):
- for i in range(0, d_model, 2):
- pe[pos, i] = \
- math.sin(pos / (10000 ** ((2 * i) / d_model)))
- pe[pos, i + 1] = \
- math.cos(pos / (10000 ** ((2 * (i + 1)) / d_model)))
-
- pe = pe.unsqueeze(0)
- self.register_buffer('pe', pe)
-
- def forward(self, x):
- # make embeddings relatively larger
- x = x * math.sqrt(self.d_model)
- # add constant to embedding
- seq_len = x.size(1)
- x = x + self.pe[:, :seq_len].clone().detach().to(device)
- return x
-
-
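For reference, the loop above fills even channels with a sine and odd channels with a cosine of the position; the textbook sinusoidal encoding it follows (Vaswani et al., 2017) is

$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{model}}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{model}}}\right)$$

The code uses 2(i+1) rather than 2i in the cosine exponent, a small deviation from the textbook form; since the same table is used at training and inference time it remains self-consistent.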
-def attention(q, k, v, d_k, mask=None, dropout=None):
-
- scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
- if mask is not None:
- mask = mask.unsqueeze(1)
- scores = scores.masked_fill(mask == 0, -1e9)
- scores = F.softmax(scores, dim=-1)
-
- if dropout is not None:
- scores = dropout(scores)
-
- output = torch.matmul(scores, v)
- return output
-
-
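The helper above is standard scaled dot-product attention, with an optional additive mask (positions where mask == 0 are set to -1e9 before the softmax) and dropout applied to the attention weights:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$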
-class MultiHeadAttention(nn.Module):
- def __init__(self, heads, d_model, dropout=0.1):
- super().__init__()
-
- self.d_model = d_model
- self.d_k = d_model // heads
- self.h = heads
-
- self.q_linear = nn.Linear(d_model, d_model)
- self.v_linear = nn.Linear(d_model, d_model)
- self.k_linear = nn.Linear(d_model, d_model)
- self.dropout = nn.Dropout(dropout)
- self.out = nn.Linear(d_model, d_model)
-
- def forward(self, q, k, v, mask=None):
- bs = q.size(0)
-
- # perform linear operation and split into h heads
-
- k = self.k_linear(k).view(bs, -1, self.h, self.d_k)
- q = self.q_linear(q).view(bs, -1, self.h, self.d_k)
- v = self.v_linear(v).view(bs, -1, self.h, self.d_k)
-
- # transpose to get dimensions bs * h * sl * d_model
-
- k = k.transpose(1, 2)
- q = q.transpose(1, 2)
- v = v.transpose(1, 2)
-
- # calculate attention using function we will define next
- scores = attention(q, k, v, self.d_k, mask, self.dropout)
-
- # concatenate heads and put through final linear layer
- concat = scores.transpose(1, 2).contiguous() \
- .view(bs, -1, self.d_model)
-
- output = self.out(concat)
-
- return output
-
-class FeedForward(nn.Module):
- def __init__(self, d_model, d_ff=2048, dropout = 0.1):
- super().__init__()
- # We set d_ff as a default to 2048
- self.linear_1 = nn.Linear(d_model, d_ff)
- self.dropout = nn.Dropout(dropout)
- self.linear_2 = nn.Linear(d_ff, d_model)
- def forward(self, x):
- x = self.dropout(F.relu(self.linear_1(x)))
- x = self.linear_2(x)
- return x
-
-
-class Norm(nn.Module):
- def __init__(self, d_model, eps=1e-6):
- super().__init__()
-
- self.size = d_model
- # create two learnable parameters to calibrate normalisation
- self.alpha = nn.Parameter(torch.ones(self.size))
- self.bias = nn.Parameter(torch.zeros(self.size))
- self.eps = eps
-
- def forward(self, x):
- norm = self.alpha * (x - x.mean(dim=-1, keepdim=True)) \
- / (x.std(dim=-1, keepdim=True) + self.eps) + self.bias
- return norm
-
-# build an encoder layer with one multi-head attention layer and one # feed-forward layer
-class EncoderLayer(nn.Module):
- def __init__(self, d_model, heads, dropout=0.1):
- super().__init__()
- self.norm_1 = Norm(d_model)
- self.norm_2 = Norm(d_model)
- self.attn = MultiHeadAttention(heads, d_model)
- self.ff = FeedForward(d_model)
- self.dropout_1 = nn.Dropout(dropout)
- self.dropout_2 = nn.Dropout(dropout)
-
- def forward(self, x, mask):
- x2 = self.norm_1(x)
- x = x + self.dropout_1(self.attn(x2, x2, x2, mask))
- x2 = self.norm_2(x)
- x = x + self.dropout_2(self.ff(x2))
- return x
-
- # build a decoder layer with two multi-head attention layers and
- # one feed-forward layer
-class DecoderLayer(nn.Module):
- def __init__(self, d_model, heads, dropout=0.1):
- super().__init__()
- self.norm_1 = Norm(d_model)
- self.norm_2 = Norm(d_model)
- self.norm_3 = Norm(d_model)
-
- self.dropout_1 = nn.Dropout(dropout)
- self.dropout_2 = nn.Dropout(dropout)
- self.dropout_3 = nn.Dropout(dropout)
-
- self.attn_1 = MultiHeadAttention(heads, d_model)
- self.attn_2 = MultiHeadAttention(heads, d_model)
- # self.ff = FeedForward(d_model).mps()
- self.ff = FeedForward(d_model)
-
- def forward(self, x, e_outputs, src_mask, trg_mask):
- x2 = self.norm_1(x)
-
- x = x + self.dropout_1(self.attn_1(x2, x2, x2, trg_mask))
- x2 = self.norm_2(x)
- x = x + self.dropout_2(self.attn_2(x2, e_outputs, e_outputs, src_mask))
- x2 = self.norm_3(x)
- x = x + self.dropout_3(self.ff(x2))
- return x
-
- # We can then build a convenient cloning function that can generate multiple layers:
-def get_clones(module, N):
- return nn.ModuleList([copy.deepcopy(module) for i in range(N)])
-
-
-class Encoder(nn.Module):
- def __init__(self, d_model, N, heads, in_size):
- super().__init__()
- self.N = N
- self.embed = Embedder(in_size, d_model)
- self.pe = PositionalEncoder(d_model)
- self.layers = get_clones(EncoderLayer(d_model, heads), N)
- self.norm = Norm(d_model)
-
- def forward(self, x, mask=None):
- x = self.embed(x)
- x = self.pe(x)
- for i in range(self.N):
- x = self.layers[i](x, mask)
- return self.norm(x)
-
-
-class Decoder(nn.Module):
- def __init__(self, d_model, N, heads, in_size):
- super().__init__()
- self.N = N
- self.embed = Embedder(in_size, d_model)
- self.pe = PositionalEncoder(d_model)
- self.layers = get_clones(DecoderLayer(d_model, heads), N)
- self.norm = Norm(d_model)
-
- def forward(self, x, e_outputs, src_mask=None, trg_mask=None):
- x = self.embed(x)
- x = self.pe(x)
- for i in range(self.N):
- x = self.layers[i](x, e_outputs, src_mask, trg_mask)
- return self.norm(x)
-
-
-class Audio2landmark_pos(nn.Module):
-
- def __init__(self, audio_feat_size=80, c_enc_hidden_size=256, num_layers=3, drop_out=0,
- spk_feat_size=256, spk_emb_enc_size=128, lstm_g_win_size=64, add_info_size=6,
- transformer_d_model=32, N=2, heads=2, z_size=128, audio_dim=256):
- super(Audio2landmark_pos, self).__init__()
-
- self.lstm_g_win_size = lstm_g_win_size
- self.add_info_size = add_info_size
- comb_mlp_size = c_enc_hidden_size * 2
-
- self.audio_content_encoder = nn.LSTM(input_size=audio_feat_size,
- hidden_size=c_enc_hidden_size,
- num_layers=num_layers,
- dropout=drop_out,
- bidirectional=False,
- batch_first=True)
-
- self.use_audio_projection = not (audio_dim == c_enc_hidden_size)
- if(self.use_audio_projection):
- self.audio_projection = nn.Sequential(
- nn.Linear(in_features=c_enc_hidden_size, out_features=256),
- nn.LeakyReLU(0.02),
- nn.Linear(256, 128),
- nn.LeakyReLU(0.02),
- nn.Linear(128, audio_dim),
- )
-
-
- ''' original version '''
- self.spk_emb_encoder = nn.Sequential(
- nn.Linear(in_features=spk_feat_size, out_features=256),
- nn.LeakyReLU(0.02),
- nn.Linear(256, 128),
- nn.LeakyReLU(0.02),
- nn.Linear(128, spk_emb_enc_size),
- )
- # self.comb_mlp = nn.Sequential(
- # nn.Linear(in_features=audio_dim + spk_emb_enc_size, out_features=comb_mlp_size),
- # nn.LeakyReLU(0.02),
- # nn.Linear(comb_mlp_size, comb_mlp_size // 2),
- # nn.LeakyReLU(0.02),
- # nn.Linear(comb_mlp_size // 2, 180),
- # )
-
- d_model = transformer_d_model * heads
- N = N
- heads = heads
-
- self.encoder = Encoder(d_model, N, heads, in_size=audio_dim + spk_emb_enc_size + z_size)
- self.decoder = Decoder(d_model, N, heads, in_size=204)
- self.out = nn.Sequential(
- nn.Linear(in_features=d_model + z_size, out_features=512),
- nn.LeakyReLU(0.02),
- nn.Linear(512, 256),
- nn.LeakyReLU(0.02),
- nn.Linear(256, 204),
- )
-
-
- def forward(self, au, emb, face_id, fls, z, add_z_spk=False, another_emb=None):
-
- # audio
- audio_encode, (_, _) = self.audio_content_encoder(au)
- audio_encode = audio_encode[:, -1, :]
-
- if(self.use_audio_projection):
- audio_encode = self.audio_projection(audio_encode)
-
- # spk
- spk_encode = self.spk_emb_encoder(emb)
- if(add_z_spk):
- z_spk = (torch.randn(spk_encode.shape, dtype=torch.float, device=device) * 0.01)
- spk_encode = spk_encode + z_spk
-
- # comb
- # comb_input = torch.cat((audio_encode, spk_encode), dim=1)
- # comb_encode = self.comb_mlp(comb_input)
- comb_encode = torch.cat((audio_encode, spk_encode, z), dim=1)
- src_feat = comb_encode.unsqueeze(0)
-
- e_outputs = self.encoder(src_feat)[0]
-
- e_outputs = torch.cat((e_outputs, z), dim=1)
-
- fl_pred = self.out(e_outputs)
-
- return fl_pred, face_id[0:1, :], spk_encode
-
-
-
-
-def nopeak_mask(size):
- np_mask = np.triu(np.ones((1, size, size)), k=1).astype('uint8')
- np_mask = (torch.from_numpy(np_mask) == 0)
- np_mask = np_mask.to(device)
- return np_mask
-
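To make the masking concrete: nopeak_mask(n) returns a lower-triangular boolean matrix, True wherever a position may attend to itself or an earlier position. Shown as a comment only, since the module pins device to "cuda" above:

# nopeak_mask(3) ->
# tensor([[[ True, False, False],
#          [ True,  True, False],
#          [ True,  True,  True]]], device='cuda:0')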
-
-def create_masks(src, trg):
- src_mask = (src != torch.zeros_like(src, requires_grad=False))
-
- if trg is not None:
- size = trg.size(1) # get seq_len for matrix
- np_mask = nopeak_mask(size)
- np_mask = np_mask.to(device)
- trg_mask = np_mask
-
- else:
- trg_mask = None
- return src_mask, trg_mask
-
-
-class TalkingToon_spk2res_lstmgan_DL(nn.Module):
- def __init__(self, comb_emb_size=256, input_size=6):
- super(TalkingToon_spk2res_lstmgan_DL, self).__init__()
-
- self.fl_D = nn.Sequential(
- nn.Linear(in_features=FACE_ID_FEAT_SIZE, out_features=512),
- nn.LeakyReLU(0.02),
- nn.Linear(512, 256),
- nn.LeakyReLU(0.02),
- nn.Linear(256, 1),
- )
-
- def forward(self, feat):
- d = self.fl_D(feat)
- # d = torch.sigmoid(d)
- return d
-
-
-class Transformer_DT(nn.Module):
- def __init__(self, transformer_d_model=32, N=2, heads=2, spk_emb_enc_size=128):
- super(Transformer_DT, self).__init__()
- d_model = transformer_d_model * heads
- self.encoder = Encoder(d_model, N, heads, in_size=204 + spk_emb_enc_size)
- self.out = nn.Sequential(
- nn.Linear(in_features=d_model, out_features=512),
- nn.LeakyReLU(0.02),
- nn.Linear(512, 256),
- nn.LeakyReLU(0.02),
- nn.Linear(256, 1),
- )
-
- def forward(self, fls, spk_emb, win_size=64, win_step=1):
- feat = torch.cat((fls, spk_emb), dim=1)
-
- win_size = feat.shape[0]-1 if feat.shape[0] <= win_size else win_size
- D_input = [feat[i:i+win_size:win_step] for i in range(0, feat.shape[0]-win_size)]
- D_input = torch.stack(D_input, dim=0)
- D_output = self.encoder(D_input)
- D_output = torch.max(D_output, dim=1, keepdim=False)[0]
- d = self.out(D_output)
- # d = torch.sigmoid(d)
- return d
-
-
-class TalkingToon_spk2res_lstmgan_DT(nn.Module):
- def __init__(self, comb_emb_size=256, lstm_g_hidden_size=256, num_layers=3, drop_out=0, input_size=6):
- super(TalkingToon_spk2res_lstmgan_DT, self).__init__()
-
- self.fl_DT = nn.GRU(input_size=comb_emb_size + FACE_ID_FEAT_SIZE,
- hidden_size=lstm_g_hidden_size,
- num_layers=3,
- dropout=0,
- bidirectional=False,
- batch_first=True)
- self.projection = nn.Sequential(
- nn.Linear(in_features=lstm_g_hidden_size, out_features=512),
- nn.LeakyReLU(0.02),
- nn.Linear(512, 256),
- nn.LeakyReLU(0.02),
- nn.Linear(256, 1),
- )
-
- self.maxpool = nn.MaxPool1d(4, 1)
-
- def forward(self, comb_encode, fls, win_size=32, win_step=1):
- feat = torch.cat((comb_encode, fls), dim=1)
- # v
- # feat = torch.cat((comb_encode[0:-1], fls[1:] - fls[0:-1]), dim=1)
-
- # max pooling
- feat = feat.transpose(0, 1).unsqueeze(0)
- feat = self.maxpool(feat)
- feat = feat[0].transpose(0, 1)
-
- win_size = feat.shape[0] - 1 if feat.shape[0] <= win_size else win_size
- D_input = [feat[i:i+win_size:win_step] for i in range(0, feat.shape[0]-win_size)]
- D_input = torch.stack(D_input, dim=0)
- D_output, _ = self.fl_DT(D_input)
- D_output = D_output[:, -1, :]
- d = self.projection(D_output)
- # d = torch.sigmoid(d)
- return d
\ No newline at end of file
diff --git a/spaces/matthoffner/chatbot-mini/components/Markdown/MemoizedReactMarkdown.tsx b/spaces/matthoffner/chatbot-mini/components/Markdown/MemoizedReactMarkdown.tsx
deleted file mode 100644
index 00cd26a8d72e858c044bd1d4ca5de9494f2672e7..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/chatbot-mini/components/Markdown/MemoizedReactMarkdown.tsx
+++ /dev/null
@@ -1,9 +0,0 @@
-import { FC, memo } from 'react';
-import ReactMarkdown, { Options } from 'react-markdown';
-
-export const MemoizedReactMarkdown: FC = memo(
- ReactMarkdown,
- (prevProps, nextProps) => (
- prevProps.children === nextProps.children
- )
-);
diff --git a/spaces/meowooooo/maybe/Dockerfile b/spaces/meowooooo/maybe/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/meowooooo/maybe/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/merle/PROTEIN_GENERATOR/model/Embeddings.py b/spaces/merle/PROTEIN_GENERATOR/model/Embeddings.py
deleted file mode 100644
index d4558dd4eb9646070d333011d0d8783249c85264..0000000000000000000000000000000000000000
--- a/spaces/merle/PROTEIN_GENERATOR/model/Embeddings.py
+++ /dev/null
@@ -1,307 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from opt_einsum import contract as einsum
-import torch.utils.checkpoint as checkpoint
-from util import get_tips
-from util_module import Dropout, create_custom_forward, rbf, init_lecun_normal
-from Attention_module import Attention, FeedForwardLayer, AttentionWithBias
-from Track_module import PairStr2Pair
-from icecream import ic
-
-# Module contains classes and functions to generate initial embeddings
-
-class PositionalEncoding2D(nn.Module):
- # Add relative positional encoding to pair features
- def __init__(self, d_model, minpos=-32, maxpos=32, p_drop=0.1):
- super(PositionalEncoding2D, self).__init__()
- self.minpos = minpos
- self.maxpos = maxpos
- self.nbin = abs(minpos)+maxpos+1
- self.emb = nn.Embedding(self.nbin, d_model)
- self.drop = nn.Dropout(p_drop)
-
- def forward(self, x, idx):
- bins = torch.arange(self.minpos, self.maxpos, device=x.device)
- seqsep = idx[:,None,:] - idx[:,:,None] # (B, L, L)
- #
- ib = torch.bucketize(seqsep, bins).long() # (B, L, L)
- emb = self.emb(ib) #(B, L, L, d_model)
- x = x + emb # add relative positional encoding
- return self.drop(x)
-
-class MSA_emb(nn.Module):
- # Get initial seed MSA embedding
- def __init__(self, d_msa=256, d_pair=128, d_state=32, d_init=22+22+2+2,
- minpos=-32, maxpos=32, p_drop=0.1):
- super(MSA_emb, self).__init__()
- self.emb = nn.Linear(d_init, d_msa) # embedding for general MSA
- self.emb_q = nn.Embedding(22, d_msa) # embedding for query sequence -- used for MSA embedding
- self.emb_left = nn.Embedding(22, d_pair) # embedding for query sequence -- used for pair embedding
- self.emb_right = nn.Embedding(22, d_pair) # embedding for query sequence -- used for pair embedding
- self.emb_state = nn.Embedding(22, d_state)
- self.drop = nn.Dropout(p_drop)
- self.pos = PositionalEncoding2D(d_pair, minpos=minpos, maxpos=maxpos, p_drop=p_drop)
-
- self.reset_parameter()
-
- def reset_parameter(self):
- self.emb = init_lecun_normal(self.emb)
- self.emb_q = init_lecun_normal(self.emb_q)
- self.emb_left = init_lecun_normal(self.emb_left)
- self.emb_right = init_lecun_normal(self.emb_right)
- self.emb_state = init_lecun_normal(self.emb_state)
-
- nn.init.zeros_(self.emb.bias)
-
- def forward(self, msa, seq, idx, seq1hot=None):
- # Inputs:
- # - msa: Input MSA (B, N, L, d_init)
- # - seq: Input Sequence (B, L)
- # - idx: Residue index
- # Outputs:
- # - msa: Initial MSA embedding (B, N, L, d_msa)
- # - pair: Initial Pair embedding (B, L, L, d_pair)
-
- N = msa.shape[1] # number of sequences in MSA
-
- # msa embedding
- msa = self.emb(msa) # (B, N, L, d_model) # MSA embedding
- seq = seq.long()
- tmp = self.emb_q(seq).unsqueeze(1) # (B, 1, L, d_model) -- query embedding
- msa = msa + tmp.expand(-1, N, -1, -1) # adding query embedding to MSA
- msa = self.drop(msa)
-
- # pair embedding
- if seq1hot is not None:
- left = (seq1hot @ self.emb_left.weight)[:,None] # (B, 1, L, d_pair)
- right = (seq1hot @ self.emb_right.weight)[:,:,None] # (B, L, 1, d_pair)
- else:
- left = self.emb_left(seq)[:,None] # (B, 1, L, d_pair)
- right = self.emb_right(seq)[:,:,None] # (B, L, 1, d_pair)
- #ic(torch.norm(self.emb_left.weight, dim=1))
- #ic(torch.norm(self.emb_right.weight, dim=1))
- pair = left + right # (B, L, L, d_pair)
- pair = self.pos(pair, idx) # add relative position
-
- # state embedding
- state = self.drop(self.emb_state(seq))
-
- return msa, pair, state
-
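To make the Inputs/Outputs comments in MSA_emb.forward concrete, here is a small shape sketch (it assumes the util_module / Attention_module imports at the top of this file resolve; batch, depth and length are arbitrary, and d_init defaults to 22+22+2+2 = 48):

emb = MSA_emb()                                  # d_msa=256, d_pair=128, d_state=32
msa = torch.randn(1, 4, 16, 48)                  # (B, N, L, d_init)
seq = torch.randint(0, 22, (1, 16))              # (B, L) residue tokens
idx = torch.arange(16).unsqueeze(0)              # (B, L) residue indices
msa_e, pair_e, state_e = emb(msa, seq, idx)
# msa_e: (1, 4, 16, 256)   pair_e: (1, 16, 16, 128)   state_e: (1, 16, 32)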
-class Extra_emb(nn.Module):
- # Get initial seed MSA embedding
- def __init__(self, d_msa=256, d_init=22+1+2, p_drop=0.1):
- super(Extra_emb, self).__init__()
- self.emb = nn.Linear(d_init, d_msa) # embedding for general MSA
- self.emb_q = nn.Embedding(22, d_msa) # embedding for query sequence
- self.drop = nn.Dropout(p_drop)
-
- self.reset_parameter()
-
- def reset_parameter(self):
- self.emb = init_lecun_normal(self.emb)
- nn.init.zeros_(self.emb.bias)
-
- def forward(self, msa, seq, idx, seq1hot=None):
- # Inputs:
- # - msa: Input MSA (B, N, L, d_init)
- # - seq: Input Sequence (B, L)
- # - idx: Residue index
- # Outputs:
- # - msa: Initial MSA embedding (B, N, L, d_msa)
- N = msa.shape[1] # number of sequences in MSA
- msa = self.emb(msa) # (B, N, L, d_model) # MSA embedding
- if seq1hot is not None:
- seq = (seq1hot @ self.emb_q.weight).unsqueeze(1) # (B, 1, L, d_model) -- query embedding
- else:
- seq = self.emb_q(seq).unsqueeze(1) # (B, 1, L, d_model) -- query embedding
- #ic(torch.norm(self.emb_q.weight, dim=1))
- msa = msa + seq.expand(-1, N, -1, -1) # adding query embedding to MSA
- return self.drop(msa)
-
-class TemplatePairStack(nn.Module):
- # process template pairwise features
- # use structure-biased attention
- def __init__(self, n_block=2, d_templ=64, n_head=4, d_hidden=16, p_drop=0.25):
- super(TemplatePairStack, self).__init__()
- self.n_block = n_block
- proc_s = [PairStr2Pair(d_pair=d_templ, n_head=n_head, d_hidden=d_hidden, p_drop=p_drop) for i in range(n_block)]
- self.block = nn.ModuleList(proc_s)
- self.norm = nn.LayerNorm(d_templ)
- def forward(self, templ, rbf_feat, use_checkpoint=False):
- B, T, L = templ.shape[:3]
- templ = templ.reshape(B*T, L, L, -1)
-
- for i_block in range(self.n_block):
- if use_checkpoint:
- templ = checkpoint.checkpoint(create_custom_forward(self.block[i_block]), templ, rbf_feat)
- else:
- templ = self.block[i_block](templ, rbf_feat)
- return self.norm(templ).reshape(B, T, L, L, -1)
-
-class TemplateTorsionStack(nn.Module):
- def __init__(self, n_block=2, d_templ=64, n_head=4, d_hidden=16, p_drop=0.15):
- super(TemplateTorsionStack, self).__init__()
- self.n_block=n_block
- self.proj_pair = nn.Linear(d_templ+36, d_templ)
- proc_s = [AttentionWithBias(d_in=d_templ, d_bias=d_templ,
- n_head=n_head, d_hidden=d_hidden) for i in range(n_block)]
- self.row_attn = nn.ModuleList(proc_s)
- proc_s = [FeedForwardLayer(d_templ, 4, p_drop=p_drop) for i in range(n_block)]
- self.ff = nn.ModuleList(proc_s)
- self.norm = nn.LayerNorm(d_templ)
-
- def reset_parameter(self):
- self.proj_pair = init_lecun_normal(self.proj_pair)
- nn.init.zeros_(self.proj_pair.bias)
-
- def forward(self, tors, pair, rbf_feat, use_checkpoint=False):
- B, T, L = tors.shape[:3]
- tors = tors.reshape(B*T, L, -1)
- pair = pair.reshape(B*T, L, L, -1)
- pair = torch.cat((pair, rbf_feat), dim=-1)
- pair = self.proj_pair(pair)
-
- for i_block in range(self.n_block):
- if use_checkpoint:
- tors = tors + checkpoint.checkpoint(create_custom_forward(self.row_attn[i_block]), tors, pair)
- else:
- tors = tors + self.row_attn[i_block](tors, pair)
- tors = tors + self.ff[i_block](tors)
- return self.norm(tors).reshape(B, T, L, -1)
-
-class Templ_emb(nn.Module):
- # Get template embedding
- # Features are
- # t2d:
- # - 37 distogram bins + 6 orientations (43)
- # - Mask (missing/unaligned) (1)
- # t1d:
- # - tiled AA sequence (20 standard aa + gap)
- # - seq confidence (1)
- # - global time step (1)
- # - struc confidence (1)
- #
- def __init__(self, d_t1d=21+1+1+1, d_t2d=43+1, d_tor=30, d_pair=128, d_state=32,
- n_block=2, d_templ=64,
- n_head=4, d_hidden=16, p_drop=0.25):
- super(Templ_emb, self).__init__()
- # process 2D features
- self.emb = nn.Linear(d_t1d*2+d_t2d, d_templ)
- self.templ_stack = TemplatePairStack(n_block=n_block, d_templ=d_templ, n_head=n_head,
- d_hidden=d_hidden, p_drop=p_drop)
-
- self.attn = Attention(d_pair, d_templ, n_head, d_hidden, d_pair, p_drop=p_drop)
-
- # process torsion angles
- self.emb_t1d = nn.Linear(d_t1d+d_tor, d_templ)
- self.proj_t1d = nn.Linear(d_templ, d_templ)
- #self.tor_stack = TemplateTorsionStack(n_block=n_block, d_templ=d_templ, n_head=n_head,
- # d_hidden=d_hidden, p_drop=p_drop)
- self.attn_tor = Attention(d_state, d_templ, n_head, d_hidden, d_state, p_drop=p_drop)
-
- self.reset_parameter()
-
- def reset_parameter(self):
- self.emb = init_lecun_normal(self.emb)
- #nn.init.zeros_(self.emb.weight) #init weights to zero
- nn.init.zeros_(self.emb.bias)
-
- nn.init.kaiming_normal_(self.emb_t1d.weight, nonlinearity='relu')
- #nn.init.zeros_(self.emb_t1d.weight)
- nn.init.zeros_(self.emb_t1d.bias)
-
- self.proj_t1d = init_lecun_normal(self.proj_t1d)
- nn.init.zeros_(self.proj_t1d.bias)
-
- def forward(self, t1d, t2d, alpha_t, xyz_t, pair, state, use_checkpoint=False):
- # Input
- # - t1d: 1D template info (B, T, L, 23) 24 SL
- # - t2d: 2D template info (B, T, L, L, 44)
- B, T, L, _ = t1d.shape
-
- # Prepare 2D template features
- left = t1d.unsqueeze(3).expand(-1,-1,-1,L,-1)
- right = t1d.unsqueeze(2).expand(-1,-1,L,-1,-1)
- #
- templ = torch.cat((t2d, left, right), -1) # (B, T, L, L, 88)
-
- #ic(templ.shape)
- #ic(templ.dtype)
- #ic(self.emb.weight.dtype)
- templ = self.emb(templ) # Template features (B, T, L, L, d_templ)
- # process each template features
- xyz_t = xyz_t.reshape(B*T, L, -1, 3)
- rbf_feat = rbf(torch.cdist(xyz_t[:,:,1], xyz_t[:,:,1]))
- templ = self.templ_stack(templ, rbf_feat, use_checkpoint=use_checkpoint) # (B, T, L,L, d_templ)
-
- # Prepare 1D template torsion angle features
- t1d = torch.cat((t1d, alpha_t), dim=-1) # (B, T, L, 22+30)
- # process each template features
- t1d = self.proj_t1d(F.relu_(self.emb_t1d(t1d)))
-
- # mixing query state features to template state features
- state = state.reshape(B*L, 1, -1)
- t1d = t1d.permute(0,2,1,3).reshape(B*L, T, -1)
- if use_checkpoint:
- out = checkpoint.checkpoint(create_custom_forward(self.attn_tor), state, t1d, t1d)
- out = out.reshape(B, L, -1)
- else:
- out = self.attn_tor(state, t1d, t1d).reshape(B, L, -1)
- state = state.reshape(B, L, -1)
- state = state + out
-
- # mixing query pair features to template information (Template pointwise attention)
- pair = pair.reshape(B*L*L, 1, -1)
- templ = templ.permute(0, 2, 3, 1, 4).reshape(B*L*L, T, -1)
- if use_checkpoint:
- out = checkpoint.checkpoint(create_custom_forward(self.attn), pair, templ, templ)
- out = out.reshape(B, L, L, -1)
- else:
- out = self.attn(pair, templ, templ).reshape(B, L, L, -1)
- #
- pair = pair.reshape(B, L, L, -1)
- pair = pair + out
-
- return pair, state
-
-class Recycling(nn.Module):
- def __init__(self, d_msa=256, d_pair=128, d_state=32):
- super(Recycling, self).__init__()
- self.proj_dist = nn.Linear(36+d_state*2, d_pair)
- self.norm_state = nn.LayerNorm(d_state)
- self.norm_pair = nn.LayerNorm(d_pair)
- self.norm_msa = nn.LayerNorm(d_msa)
-
- self.reset_parameter()
-
- def reset_parameter(self):
- self.proj_dist = init_lecun_normal(self.proj_dist)
- nn.init.zeros_(self.proj_dist.bias)
-
- def forward(self, seq, msa, pair, xyz, state):
- B, L = pair.shape[:2]
- state = self.norm_state(state)
- #
- left = state.unsqueeze(2).expand(-1,-1,L,-1)
- right = state.unsqueeze(1).expand(-1,L,-1,-1)
-
- # three anchor atoms
- N = xyz[:,:,0]
- Ca = xyz[:,:,1]
- C = xyz[:,:,2]
-
- # recreate Cb given N,Ca,C
- b = Ca - N
- c = C - Ca
- a = torch.cross(b, c, dim=-1)
- Cb = -0.58273431*a + 0.56802827*b - 0.54067466*c + Ca
-
- dist = rbf(torch.cdist(Cb, Cb))
- dist = torch.cat((dist, left, right), dim=-1)
- dist = self.proj_dist(dist)
- pair = dist + self.norm_pair(pair)
- msa = self.norm_msa(msa)
- return msa, pair, state
-
diff --git a/spaces/merve/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/backbone.py b/spaces/merve/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/backbone.py
deleted file mode 100644
index c8340c723fad8e07e2fc62daaa3912487498814b..0000000000000000000000000000000000000000
--- a/spaces/merve/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/backbone.py
+++ /dev/null
@@ -1,221 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Conditional DETR
-# Copyright (c) 2021 Microsoft. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Copied from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-# ------------------------------------------------------------------------
-
-"""
-Backbone modules.
-"""
-
-from typing import Dict, List
-
-import torch
-import torch.nn.functional as F
-import torchvision
-from torch import nn
-from torchvision.models._utils import IntermediateLayerGetter
-
-from groundingdino.util.misc import NestedTensor, clean_state_dict, is_main_process
-
-from .position_encoding import build_position_encoding
-from .swin_transformer import build_swin_transformer
-
-
-class FrozenBatchNorm2d(torch.nn.Module):
- """
- BatchNorm2d where the batch statistics and the affine parameters are fixed.
-
- Copy-paste from torchvision.misc.ops with added eps before rsqrt,
- without which any other models than torchvision.models.resnet[18,34,50,101]
- produce nans.
- """
-
- def __init__(self, n):
- super(FrozenBatchNorm2d, self).__init__()
- self.register_buffer("weight", torch.ones(n))
- self.register_buffer("bias", torch.zeros(n))
- self.register_buffer("running_mean", torch.zeros(n))
- self.register_buffer("running_var", torch.ones(n))
-
- def _load_from_state_dict(
- self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
- ):
- num_batches_tracked_key = prefix + "num_batches_tracked"
- if num_batches_tracked_key in state_dict:
- del state_dict[num_batches_tracked_key]
-
- super(FrozenBatchNorm2d, self)._load_from_state_dict(
- state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
- )
-
- def forward(self, x):
- # move reshapes to the beginning
- # to make it fuser-friendly
- w = self.weight.reshape(1, -1, 1, 1)
- b = self.bias.reshape(1, -1, 1, 1)
- rv = self.running_var.reshape(1, -1, 1, 1)
- rm = self.running_mean.reshape(1, -1, 1, 1)
- eps = 1e-5
- scale = w * (rv + eps).rsqrt()
- bias = b - rm * scale
- return x * scale + bias
-
-
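The forward pass above is the affine batch-norm transform with frozen running statistics, refactored into a single multiply-add (matching the scale/bias variables in the code):

$$y = \frac{x - \mu}{\sqrt{\sigma^{2} + \epsilon}}\,\gamma + \beta = x \cdot s + b, \qquad s = \frac{\gamma}{\sqrt{\sigma^{2} + \epsilon}}, \quad b = \beta - \mu\, s$$

with γ = weight, β = bias, μ = running_mean and σ² = running_var.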
-class BackboneBase(nn.Module):
- def __init__(
- self,
- backbone: nn.Module,
- train_backbone: bool,
- num_channels: int,
- return_interm_indices: list,
- ):
- super().__init__()
- for name, parameter in backbone.named_parameters():
- if (
- not train_backbone
- or "layer2" not in name
- and "layer3" not in name
- and "layer4" not in name
- ):
- parameter.requires_grad_(False)
-
- return_layers = {}
- for idx, layer_index in enumerate(return_interm_indices):
- return_layers.update(
- {"layer{}".format(5 - len(return_interm_indices) + idx): "{}".format(layer_index)}
- )
-
- # if len:
- # if use_stage1_feature:
- # return_layers = {"layer1": "0", "layer2": "1", "layer3": "2", "layer4": "3"}
- # else:
- # return_layers = {"layer2": "0", "layer3": "1", "layer4": "2"}
- # else:
- # return_layers = {'layer4': "0"}
- self.body = IntermediateLayerGetter(backbone, return_layers=return_layers)
- self.num_channels = num_channels
-
- def forward(self, tensor_list: NestedTensor):
- xs = self.body(tensor_list.tensors)
- out: Dict[str, NestedTensor] = {}
- for name, x in xs.items():
- m = tensor_list.mask
- assert m is not None
- mask = F.interpolate(m[None].float(), size=x.shape[-2:]).to(torch.bool)[0]
- out[name] = NestedTensor(x, mask)
- # import ipdb; ipdb.set_trace()
- return out
-
-
-class Backbone(BackboneBase):
- """ResNet backbone with frozen BatchNorm."""
-
- def __init__(
- self,
- name: str,
- train_backbone: bool,
- dilation: bool,
- return_interm_indices: list,
- batch_norm=FrozenBatchNorm2d,
- ):
- if name in ["resnet18", "resnet34", "resnet50", "resnet101"]:
- backbone = getattr(torchvision.models, name)(
- replace_stride_with_dilation=[False, False, dilation],
- pretrained=is_main_process(),
- norm_layer=batch_norm,
- )
- else:
- raise NotImplementedError("Why you can get here with name {}".format(name))
- # num_channels = 512 if name in ('resnet18', 'resnet34') else 2048
- assert name not in ("resnet18", "resnet34"), "Only resnet50 and resnet101 are available."
- assert return_interm_indices in [[0, 1, 2, 3], [1, 2, 3], [3]]
- num_channels_all = [256, 512, 1024, 2048]
- num_channels = num_channels_all[4 - len(return_interm_indices) :]
- super().__init__(backbone, train_backbone, num_channels, return_interm_indices)
-
-
-class Joiner(nn.Sequential):
- def __init__(self, backbone, position_embedding):
- super().__init__(backbone, position_embedding)
-
- def forward(self, tensor_list: NestedTensor):
- xs = self[0](tensor_list)
- out: List[NestedTensor] = []
- pos = []
- for name, x in xs.items():
- out.append(x)
- # position encoding
- pos.append(self[1](x).to(x.tensors.dtype))
-
- return out, pos
-
-
-def build_backbone(args):
- """
- Useful args:
- - backbone: backbone name
- - lr_backbone:
- - dilation
- - return_interm_indices: available: [0,1,2,3], [1,2,3], [3]
- - backbone_freeze_keywords:
- - use_checkpoint: for swin only for now
-
- """
- position_embedding = build_position_encoding(args)
- train_backbone = True
- if not train_backbone:
- raise ValueError("Please set lr_backbone > 0")
- return_interm_indices = args.return_interm_indices
- assert return_interm_indices in [[0, 1, 2, 3], [1, 2, 3], [3]]
- args.backbone_freeze_keywords
- use_checkpoint = getattr(args, "use_checkpoint", False)
-
- if args.backbone in ["resnet50", "resnet101"]:
- backbone = Backbone(
- args.backbone,
- train_backbone,
- args.dilation,
- return_interm_indices,
- batch_norm=FrozenBatchNorm2d,
- )
- bb_num_channels = backbone.num_channels
- elif args.backbone in [
- "swin_T_224_1k",
- "swin_B_224_22k",
- "swin_B_384_22k",
- "swin_L_224_22k",
- "swin_L_384_22k",
- ]:
- pretrain_img_size = int(args.backbone.split("_")[-2])
- backbone = build_swin_transformer(
- args.backbone,
- pretrain_img_size=pretrain_img_size,
- out_indices=tuple(return_interm_indices),
- dilation=False,
- use_checkpoint=use_checkpoint,
- )
-
- bb_num_channels = backbone.num_features[4 - len(return_interm_indices) :]
- else:
- raise NotImplementedError("Unknown backbone {}".format(args.backbone))
-
- assert len(bb_num_channels) == len(
- return_interm_indices
- ), f"len(bb_num_channels) {len(bb_num_channels)} != len(return_interm_indices) {len(return_interm_indices)}"
-
- model = Joiner(backbone, position_embedding)
- model.num_channels = bb_num_channels
- assert isinstance(
- bb_num_channels, List
- ), "bb_num_channels is expected to be a List but {}".format(type(bb_num_channels))
- # import ipdb; ipdb.set_trace()
- return model
diff --git a/spaces/merve/anonymization/source/third_party/misc.js b/spaces/merve/anonymization/source/third_party/misc.js
deleted file mode 100644
index a51b6b5292feaa6ee497806752a0d3d0cb4ef547..0000000000000000000000000000000000000000
--- a/spaces/merve/anonymization/source/third_party/misc.js
+++ /dev/null
@@ -1,38 +0,0 @@
-/* Copyright 2019 Google LLC. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-==============================================================================*/
-
-
-function lerp(a, b, t){ return a + t*(b - a) }
-
-function addVec([a0, a1], [b0, b1]){
- return [a0 + b0, a1 + b1]
-}
-
-function phyllotaxis(i, initialRadius=10, initialAngle=Math.PI*(3 - Math.sqrt(5))){
- i = i + Math.random()/20
-
- var r = initialRadius*Math.sqrt(Math.random() + i)
- var angle = i*initialAngle
-
- return [r*Math.cos(angle), r*Math.sin(angle)]
-}
-
-var names = {
- old_m: 'James John Robert Michael William David Richard Joseph Thomas Charles Christopher Daniel Matthew Anthony Donald Mark Paul Steven Andrew Kenneth Joshua George Kevin Brian Edward Ronald Timothy Jason Jeffrey Ryan Jacob Gary Nicholas Eric Stephen Jonathan Larry Justin Scott Brandon Frank Benjamin Gregory Samuel Raymond Patrick Alexander Jack Dennis Jerry Tyler Aaron Jose Henry Douglas Adam Peter Nathan Zachary Walter Kyle Harold Carl Jeremy Keith Roger Gerald Ethan Arthur Terry Christian Sean Lawrence Austin Joe Noah Jesse Albert Bryan Billy Bruce Willie Jordan Dylan Alan Ralph Gabriel Roy Juan Wayne Eugene Logan Randy Louis Russell Vincent Philip Bobby Johnny Bradley'.split(' '),
- old_f: 'Mary Patricia Jennifer Linda Elizabeth Barbara Susan Jessica Sarah Karen Nancy Margaret Lisa Betty Dorothy Sandra Ashley Kimberly Donna Emily Michelle Carol Amanda Melissa Deborah Stephanie Rebecca Laura Sharon Cynthia Kathleen Helen Amy Shirley Angela Anna Brenda Pamela Nicole Ruth Katherine Samantha Christine Emma Catherine Debra Virginia Rachel Carolyn Janet Maria Heather Diane Julie Joyce Victoria Kelly Christina Joan Evelyn Lauren Judith Olivia Frances Martha Cheryl Megan Andrea Hannah Jacqueline Ann Jean Alice Kathryn Gloria Teresa Doris Sara Janice Julia Marie Madison Grace Judy Theresa Beverly Denise Marilyn Amber Danielle Abigail Brittany Rose Diana Natalie Sophia Alexis Lori Kayla Jane'.split(' '),
- m: 'Noah Liam Jacob Mason William Ethan Michael Alexander James Elijah Daniel Benjamin Aiden Jayden Logan Matthew David Joseph Lucas Jackson Anthony Joshua Samuel Andrew Gabriel Christopher John Dylan Carter Isaac Ryan Luke Oliver Nathan Henry Owen Caleb Wyatt Christian Sebastian Jack Jonathan Landon Julian Isaiah Hunter Levi Aaron Eli Charles Thomas Connor Brayden Nicholas Jaxon Jeremiah Cameron Evan Adrian Jordan Gavin Grayson Angel Robert Tyler Josiah Austin Colton Brandon Jose Dominic Kevin Zachary Ian Chase Jason Adam Ayden Parker Hudson Cooper Nolan Lincoln Xavier Carson Jace Justin Easton Mateo Asher Bentley Blake Nathaniel Jaxson Leo Kayden Tristan Luis Elias Brody Bryson Juan Vincent Cole Micah Ryder Theodore Carlos Ezra Damian Miles Santiago Max Jesus Leonardo Sawyer Diego Alex Roman Maxwell Eric Greyson Hayden Giovanni Wesley Axel Camden Braxton Ivan Ashton Declan Bryce Timothy Antonio Silas Kaiden Ezekiel Jonah Weston George Harrison Steven Miguel Richard Bryan Kaleb Victor Aidan Jameson Joel Patrick Jaden Colin Everett Preston Maddox Edward Alejandro Kaden Jesse Emmanuel Kyle Brian Emmett Jude Marcus Kingston Kai Alan Malachi Grant Jeremy Riley Jayce Bennett Abel Ryker Caden Brantley Luca Brady Calvin Sean Oscar Jake Maverick Abraham Mark Tucker Nicolas Bradley Kenneth Avery Cayden King Paul Amir Gael Graham Maximus'.split(' '),
- f: 'Emma Sophia Olivia Isabella Ava Mia Abigail Emily Madison Charlotte Elizabeth Amelia Chloe Ella Evelyn Avery Sofia Harper Grace Addison Victoria Natalie Lily Aubrey Lillian Zoey Hannah Layla Brooklyn Samantha Zoe Leah Scarlett Riley Camila Savannah Anna Audrey Allison Aria Gabriella Hailey Claire Sarah Aaliyah Kaylee Nevaeh Penelope Alexa Arianna Stella Alexis Bella Nora Ellie Ariana Lucy Mila Peyton Genesis Alyssa Taylor Violet Maya Caroline Madelyn Skylar Serenity Ashley Brianna Kennedy Autumn Eleanor Kylie Sadie Paisley Julia Mackenzie Sophie Naomi Eva Khloe Katherine Gianna Melanie Aubree Piper Ruby Lydia Faith Madeline Alexandra Kayla Hazel Lauren Annabelle Jasmine Aurora Alice Makayla Sydney Bailey Luna Maria Reagan Morgan Isabelle Rylee Kimberly Andrea London Elena Jocelyn Natalia Trinity Eliana Vivian Cora Quinn Liliana Molly Jade Clara Valentina Mary Brielle Hadley Kinsley Willow Brooke Lilly Delilah Payton Mariah Paige Jordyn Nicole Mya Josephine Isabel Lyla Adeline Destiny Ivy Emilia Rachel Angelina Valeria Kendall Sara Ximena Isla Aliyah Reese Vanessa Juliana Mckenzie Amy Laila Adalynn Emery Margaret Eden Gabrielle Kaitlyn Ariel Gracie Brooklynn Melody Jessica Valerie Adalyn Adriana Elise Michelle Rebecca Daisy Everly Katelyn Ryleigh Catherine Norah Alaina Athena Leilani Londyn Eliza Jayla Summer Lila Makenzie Izabella Daniela Stephanie Julianna Rose Alana Harmony Jennifer Hayden'.split(' '),
- last: 'SMITH JOHNSON WILLIAMS BROWN JONES GARCIA MILLER DAVIS RODRIGUEZ MARTINEZ HERNANDEZ LOPEZ GONZALEZ WILSON ANDERSON THOMAS TAYLOR MOORE JACKSON MARTIN LEE PEREZ THOMPSON WHITE HARRIS SANCHEZ CLARK RAMIREZ LEWIS ROBINSON WALKER YOUNG ALLEN KING WRIGHT SCOTT TORRES NGUYEN HILL FLORES GREEN ADAMS NELSON BAKER HALL RIVERA CAMPBELL MITCHELL CARTER ROBERTS GOMEZ PHILLIPS EVANS TURNER DIAZ PARKER CRUZ EDWARDS COLLINS REYES STEWART MORRIS MORALES MURPHY COOK ROGERS GUTIERREZ ORTIZ MORGAN COOPER PETERSON BAILEY REED KELLY HOWARD RAMOS KIM COX WARD RICHARDSON WATSON BROOKS CHAVEZ WOOD JAMES BENNETT GRAY MENDOZA RUIZ HUGHES PRICE ALVAREZ CASTILLO SANDERS PATEL MYERS LONG ROSS FOSTER JIMENEZ POWELL JENKINS PERRY RUSSELL SULLIVAN BELL COLEMAN BUTLER HENDERSON BARNES GONZALES FISHER VASQUEZ SIMMONS ROMERO JORDAN PATTERSON ALEXANDER HAMILTON GRAHAM REYNOLDS GRIFFIN WALLACE MORENO WEST COLE HAYES BRYANT HERRERA GIBSON ELLIS TRAN MEDINA AGUILAR STEVENS MURRAY FORD CASTRO MARSHALL OWENS HARRISON FERNANDEZ MCDONALD WOODS WASHINGTON KENNEDY WELLS VARGAS HENRY CHEN FREEMAN WEBB TUCKER GUZMAN BURNS CRAWFORD OLSON SIMPSON PORTER HUNTER GORDON MENDEZ SILVA SHAW SNYDER MASON DIXON MUNOZ HUNT HICKS HOLMES PALMER WAGNER BLACK ROBERTSON BOYD ROSE STONE SALAZAR FOX WARREN MILLS MEYER RICE SCHMIDT GARZA DANIELS FERGUSON NICHOLS STEPHENS SOTO WEAVER RYAN'.split(' ').map(d => d[0] + d.slice(1).toLowerCase())
-}
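
The `phyllotaxis` helper above lays points out on a golden-angle spiral with a small random jitter. A rough Python equivalent of the same layout, mirroring the constants in the JavaScript (not part of the repo):

import math
import random

def phyllotaxis(i, initial_radius=10, initial_angle=math.pi * (3 - math.sqrt(5))):
    # Jitter the index slightly so identical indices do not land on the same spot.
    i = i + random.random() / 20
    r = initial_radius * math.sqrt(random.random() + i)
    angle = i * initial_angle
    return r * math.cos(angle), r * math.sin(angle)

points = [phyllotaxis(i) for i in range(100)]  # 100 positions on the spiral
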
diff --git a/spaces/merve/hidden-bias/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/watch-files.js b/spaces/merve/hidden-bias/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/watch-files.js
deleted file mode 100644
index 8ab520922aa2b8cb8086ca86f5119fc0b46ac433..0000000000000000000000000000000000000000
--- a/spaces/merve/hidden-bias/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/watch-files.js
+++ /dev/null
@@ -1,83 +0,0 @@
-/* Copyright 2021 Google LLC. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-==============================================================================*/
-
-
-!(function(){
- function watchFile(path){
- var lastStr = ''
-
- console.log(path)
- function check(){
- d3.text(path + '?' + Math.random(), (err, nextStr) => {
- if (err){
- console.log(err)
- return check()
- }
-
- if (nextStr == lastStr) return
- lastStr = nextStr
-
- if (path.includes('.js')){
- console.log('js', new Date())
- Function(nextStr.replace('\n', ';').replace('\n', ';'))()
- }
-
- if (path.includes('.css')){
- console.log('css', new Date())
-
- Array.from(document.querySelectorAll('link'))
- .filter(d => d.href.includes(path) || d.href.includes('__hs_placeholder'))
- .filter((d, i) => i == 0)
- .forEach(d => d.href = path + '?' + Math.random())
- }
- })
-
- if (python_settings.isDev) setTimeout(check, 100)
- }
- check()
- }
-
- ;[
- 'list.css',
- 'style.css',
- '../two-sentences/init-scatter.js',
- '../two-sentences/init-util.js',
- '../two-sentences/init-pair.js',
- 'init.js'
- ].forEach(filename => {
- var root = document.currentScript.src.replace('watch-files.js', '').split('?')[0]
- var path = root + filename
-
- if (python_settings.isDev){
- watchFile(path)
- } else {
- if (path.includes('.js')){
- var node = document.createElement('script')
- node.setAttribute('src', path)
- document.body.appendChild(node)
- }
-
- if (path.includes('.css')){
- Array.from(document.querySelectorAll('link'))
- .filter(d => d.href.includes(path) || d.href.includes('__hs_placeholder'))
- .filter((d, i) => i == 0)
- .forEach(d => d.href = path + '?' + Math.random())
- }
- }
- })
-})()
-
-
-
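
The watcher above polls each file with a cache-busting `?<random>` query string, re-evaluating JavaScript and re-linking CSS whenever the fetched text changes. A small Python sketch of the same poll-and-compare loop, with a placeholder URL and interval (an illustration, not part of the repo):

import random
import time
import urllib.request

def watch(url, on_change, interval=0.1):
    """Poll url, firing on_change whenever the fetched text differs from the last copy."""
    last = None
    while True:
        # Cache-bust the request, as the JS above does with '?' + Math.random().
        text = urllib.request.urlopen(f"{url}?{random.random()}").read().decode()
        if text != last:
            last = text
            on_change(text)
        time.sleep(interval)
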
diff --git a/spaces/merve/hidden-bias/source/anonymization/make-axii.js b/spaces/merve/hidden-bias/source/anonymization/make-axii.js
deleted file mode 100644
index c69b5eba387ec07f01ce2849726fda5461002aef..0000000000000000000000000000000000000000
--- a/spaces/merve/hidden-bias/source/anonymization/make-axii.js
+++ /dev/null
@@ -1,86 +0,0 @@
-window.makeAxii = function(){
-
- var stateScale = d3.scaleBand().domain(states).range(c.x.range())
- var stateAxis = c.svg.append('g.axis.state.init-hidden')
-
- var bw = stateScale.bandwidth()/2
-
- stateAxis.appendMany('text', states)
- .translate(d => [stateScale(d) + bw, c.height + 22])
- .text(d => d)
- .at({
- textAnchor: 'middle',
- })
- .st({fill: '#444'})
-
- stateAxis.appendMany('path', d3.range(ages.length + 1))
- .at({
- d: d => ['M', d*c.width/(ages.length), '0 V', c.height].join(' '),
- stroke: '#aaa',
- })
-
- stateAxis.append('text.bold').text('Home State')
- .translate([c.width/2, c.height + 45])
- .at({textAnchor: 'middle'})
-
- var ageScale = d3.scaleBand().domain(ages.slice().reverse()).range(c.x.range())
- var ageAxis = c.svg.append('g.axis.age.init-hidden')
-
- ageAxis.appendMany('text', ages)
- .translate(d => [-30, ageScale(d) + bw])
- .text(d => d)
- .at({dy: '.33em'})
- .st({fill: '#444'})
-
- ageAxis.appendMany('path', d3.range(ages.length + 1))
- .at({
- d: d => ['M 0', d*c.width/(ages.length), 'H', c.width].join(' '),
- stroke: '#aaa',
- })
-
- if (scale == 1){
- ageAxis
- .append('g').translate([-43, c.height/2])
- .append('text.bold').text('Age')
- .at({textAnchor: 'middle', transform: 'rotate(-90)'})
- } else {
- ageAxis
- .append('g').translate([-22, 14])
- .append('text.bold').text('Age')
- .at({textAnchor: 'middle'})
- }
-
- var seasonAxis = c.svg.append('g.axis.state.init-hidden').lower()
- seasonAxis.appendMany('g', ages)
- .translate(d => ageScale(d), 1)
- .appendMany('path', d3.range(1, 4))
- .at({
- d: d => ['M 0', d*bw/4*2, 'H', c.width].join(' '),
- stroke: '#ddd',
- })
-
- var headAxis = c.svg.append('g.axis.state.init-hidden')
- headAxis.appendMany('text.bold', ['Heads', 'Tails'])
- .text(d => d)
- .translate((d, i) => [i ? c.width/4*3 + 20 : c.width/4 - 20, 88])
- .at({textAnchor: 'middle'})
-
-
- var headCaptionAxis = c.svg.append('g.axis.state.init-hidden')
- headCaptionAxis.appendMany('text', ['reports plagiarism', 'reports truth'])
- .text(d => d)
- .translate((d, i) => [i ? c.width/4*3 + 20 : c.width/4 - 20, 88 + 15])
- .at({textAnchor: 'middle'})
- .st({fill: '#444'})
-
-
- return {stateScale, stateAxis, headAxis, headCaptionAxis, ageScale, ageAxis, bw, seasonAxis}
-}
-
-
-
-
-
-
-
-if (window.init) window.init()
\ No newline at end of file
diff --git a/spaces/mikeee/radiobee-aligner/radiobee/insert_spaces.py b/spaces/mikeee/radiobee-aligner/radiobee/insert_spaces.py
deleted file mode 100644
index 30ade20cbe586a33edd65559dbce760cd43c524b..0000000000000000000000000000000000000000
--- a/spaces/mikeee/radiobee-aligner/radiobee/insert_spaces.py
+++ /dev/null
@@ -1,14 +0,0 @@
-"""Insert spaces, mypython/split_chinese.py."""
-import re
-
-
-def insert_spaces(text: str) -> str:
- """Insert space in Chinese characters.
-
- >>> insert_spaces("test亨利it四世上")
- ' test 亨 利 it 四 世 上 '
- >>> insert_spaces("test亨利it四世上").strip().__len__()
- 17
-
- """
- return re.sub(r"(?<=[a-zA-Z\d]) (?=[a-zA-Z\d])", "", text.replace("", " "))
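
The doctest shows the mechanism; in practice the function makes mixed Chinese/ASCII text splittable on whitespace while keeping ASCII runs intact. A short usage sketch, assuming the module is importable under the name shown in the path above:

from insert_spaces import insert_spaces  # assumes the module above is on the path

text = "机器翻译model2023测试"
tokens = insert_spaces(text).split()
print(tokens)  # ['机', '器', '翻', '译', 'model2023', '测', '试']
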
diff --git a/spaces/mithril-security/blind_chat/src/lib/utils/concatUint8Arrays.ts b/spaces/mithril-security/blind_chat/src/lib/utils/concatUint8Arrays.ts
deleted file mode 100644
index e53396eca7e3dee20a543fb6ac28ecf48c7e3965..0000000000000000000000000000000000000000
--- a/spaces/mithril-security/blind_chat/src/lib/utils/concatUint8Arrays.ts
+++ /dev/null
@@ -1,12 +0,0 @@
-import { sum } from "./sum";
-
-export function concatUint8Arrays(arrays: Uint8Array[]): Uint8Array {
- const totalLength = sum(arrays.map((a) => a.length));
- const result = new Uint8Array(totalLength);
- let offset = 0;
- for (const array of arrays) {
- result.set(array, offset);
- offset += array.length;
- }
- return result;
-}
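
`concatUint8Arrays` pre-allocates one buffer of the total length and copies each chunk in at a running offset. The same pattern in Python with `bytearray`, as a rough equivalent (not part of the repo):

def concat_chunks(chunks):
    # Pre-allocate the full buffer, then copy each chunk in at its running offset.
    total = sum(len(c) for c in chunks)
    out = bytearray(total)
    offset = 0
    for c in chunks:
        out[offset:offset + len(c)] = c
        offset += len(c)
    return bytes(out)

print(concat_chunks([b"hel", b"lo ", b"world"]))  # b'hello world'
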
diff --git a/spaces/mms-meta/MMS/vits/monotonic_align/setup.py b/spaces/mms-meta/MMS/vits/monotonic_align/setup.py
deleted file mode 100644
index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000
--- a/spaces/mms-meta/MMS/vits/monotonic_align/setup.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from distutils.core import setup
-from Cython.Build import cythonize
-import numpy
-
-setup(
- name = 'monotonic_align',
- ext_modules = cythonize("core.pyx"),
- include_dirs=[numpy.get_include()]
-)
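
A Cython setup script like this is conventionally built in place with `python setup.py build_ext --inplace`, which compiles `core.pyx` into an importable extension module next to the package; that command is the usual convention rather than anything stated in the diff itself.
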
diff --git a/spaces/mnauf/detect-bees/utils/loggers/clearml/clearml_utils.py b/spaces/mnauf/detect-bees/utils/loggers/clearml/clearml_utils.py
deleted file mode 100644
index eb1c12ce6cac24f131e6e88efaac3f58dbc4c753..0000000000000000000000000000000000000000
--- a/spaces/mnauf/detect-bees/utils/loggers/clearml/clearml_utils.py
+++ /dev/null
@@ -1,157 +0,0 @@
-"""Main Logger class for ClearML experiment tracking."""
-import glob
-import re
-from pathlib import Path
-
-import numpy as np
-import yaml
-
-from utils.plots import Annotator, colors
-
-try:
- import clearml
- from clearml import Dataset, Task
-
- assert hasattr(clearml, '__version__') # verify package import not local dir
-except (ImportError, AssertionError):
- clearml = None
-
-
-def construct_dataset(clearml_info_string):
- """Load in a clearml dataset and fill the internal data_dict with its contents.
- """
- dataset_id = clearml_info_string.replace('clearml://', '')
- dataset = Dataset.get(dataset_id=dataset_id)
- dataset_root_path = Path(dataset.get_local_copy())
-
- # We'll search for the yaml file definition in the dataset
- yaml_filenames = list(glob.glob(str(dataset_root_path / "*.yaml")) + glob.glob(str(dataset_root_path / "*.yml")))
- if len(yaml_filenames) > 1:
- raise ValueError('More than one yaml file was found in the dataset root, cannot determine which one contains '
- 'the dataset definition this way.')
- elif len(yaml_filenames) == 0:
- raise ValueError('No yaml definition found in dataset root path, check that there is a correct yaml file '
- 'inside the dataset root path.')
- with open(yaml_filenames[0]) as f:
- dataset_definition = yaml.safe_load(f)
-
- assert set(dataset_definition.keys()).issuperset(
- {'train', 'test', 'val', 'nc', 'names'}
- ), "The right keys were not found in the yaml file, make sure it at least has the following keys: ('train', 'test', 'val', 'nc', 'names')"
-
- data_dict = dict()
- data_dict['train'] = str(
- (dataset_root_path / dataset_definition['train']).resolve()) if dataset_definition['train'] else None
- data_dict['test'] = str(
- (dataset_root_path / dataset_definition['test']).resolve()) if dataset_definition['test'] else None
- data_dict['val'] = str(
- (dataset_root_path / dataset_definition['val']).resolve()) if dataset_definition['val'] else None
- data_dict['nc'] = dataset_definition['nc']
- data_dict['names'] = dataset_definition['names']
-
- return data_dict
-
-
-class ClearmlLogger:
- """Log training runs, datasets, models, and predictions to ClearML.
-
- This logger sends information to ClearML at app.clear.ml or to your own hosted server. By default,
- this information includes hyperparameters, system configuration and metrics, model metrics, code information and
- basic data metrics and analyses.
-
- By providing additional command line arguments to train.py, datasets,
- models and predictions can also be logged.
- """
-
- def __init__(self, opt, hyp):
- """
- - Initialize the ClearML Task; this object will capture the experiment
- - Upload dataset version to ClearML Data if opt.upload_dataset is True
-
- arguments:
- opt (namespace) -- Commandline arguments for this run
- hyp (dict) -- Hyperparameters for this run
-
- """
- self.current_epoch = 0
- # Keep track of the number of logged images to enforce a limit
- self.current_epoch_logged_images = set()
- # Maximum number of images to log to clearML per epoch
- self.max_imgs_to_log_per_epoch = 16
- # Get the interval of epochs when bounding box images should be logged
- self.bbox_interval = opt.bbox_interval
- self.clearml = clearml
- self.task = None
- self.data_dict = None
- if self.clearml:
- self.task = Task.init(
- project_name='YOLOv5',
- task_name='training',
- tags=['YOLOv5'],
- output_uri=True,
- auto_connect_frameworks={'pytorch': False}
- # We disconnect pytorch auto-detection, because we added manual model save points in the code
- )
- # ClearML's hooks will already grab all general parameters
- # Only the hyperparameters coming from the yaml config file
- # will have to be added manually!
- self.task.connect(hyp, name='Hyperparameters')
-
- # Get ClearML Dataset Version if requested
- if opt.data.startswith('clearml://'):
- # data_dict should have the following keys:
- # names, nc (number of classes), test, train, val (all three relative paths to ../datasets)
- self.data_dict = construct_dataset(opt.data)
- # Set data to data_dict because wandb will crash without this information and opt is the best way
- # to give it to them
- opt.data = self.data_dict
-
- def log_debug_samples(self, files, title='Debug Samples'):
- """
- Log files (images) as debug samples in the ClearML task.
-
- arguments:
- files (List(PosixPath)) a list of file paths in PosixPath format
- title (str) A title that groups together images with the same values
- """
- for f in files:
- if f.exists():
- it = re.search(r'_batch(\d+)', f.name)
- iteration = int(it.groups()[0]) if it else 0
- self.task.get_logger().report_image(title=title,
- series=f.name.replace(it.group(), ''),
- local_path=str(f),
- iteration=iteration)
-
- def log_image_with_boxes(self, image_path, boxes, class_names, image, conf_threshold=0.25):
- """
- Draw the bounding boxes on a single image and report the result as a ClearML debug sample.
-
- arguments:
- image_path (PosixPath) the path to the original image file
- boxes (list): list of scaled predictions in the format - [xmin, ymin, xmax, ymax, confidence, class]
- class_names (dict): dict containing mapping of class int to class name
- image (Tensor): A torch tensor containing the actual image data
- """
- if len(self.current_epoch_logged_images) < self.max_imgs_to_log_per_epoch and self.current_epoch >= 0:
- # Log every bbox_interval times and deduplicate for any intermittent extra eval runs
- if self.current_epoch % self.bbox_interval == 0 and image_path not in self.current_epoch_logged_images:
- im = np.ascontiguousarray(np.moveaxis(image.mul(255).clamp(0, 255).byte().cpu().numpy(), 0, 2))
- annotator = Annotator(im=im, pil=True)
- for i, (conf, class_nr, box) in enumerate(zip(boxes[:, 4], boxes[:, 5], boxes[:, :4])):
- color = colors(i)
-
- class_name = class_names[int(class_nr)]
- confidence_percentage = round(float(conf) * 100, 2)
- label = f"{class_name}: {confidence_percentage}%"
-
- if conf > conf_threshold:
- annotator.rectangle(box.cpu().numpy(), outline=color)
- annotator.box_label(box.cpu().numpy(), label=label, color=color)
-
- annotated_image = annotator.result()
- self.task.get_logger().report_image(title='Bounding Boxes',
- series=image_path.name,
- iteration=self.current_epoch,
- image=annotated_image)
- self.current_epoch_logged_images.add(image_path)
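
The guard in `log_image_with_boxes` combines a per-epoch cap, the `bbox_interval` stride, and de-duplication of already-logged paths. A condensed sketch of just that gating logic, with illustrative names that are not part of the logger's API:

def should_log(epoch, image_path, logged_paths, bbox_interval=1, max_per_epoch=16):
    # Mirrors the guard above: per-epoch cap, interval stride, no duplicate paths.
    return (len(logged_paths) < max_per_epoch
            and epoch % bbox_interval == 0
            and image_path not in logged_paths)

logged = set()
if should_log(epoch=4, image_path="img_001.jpg", logged_paths=logged, bbox_interval=2):
    logged.add("img_001.jpg")
print(logged)  # {'img_001.jpg'} -- epoch 4 is on the interval and the path is new
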
diff --git a/spaces/mnauf/detect-bees/utils/segment/plots.py b/spaces/mnauf/detect-bees/utils/segment/plots.py
deleted file mode 100644
index 9b90900b3772fe23dbd57deb64221f98e563b069..0000000000000000000000000000000000000000
--- a/spaces/mnauf/detect-bees/utils/segment/plots.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import contextlib
-import math
-from pathlib import Path
-
-import cv2
-import matplotlib.pyplot as plt
-import numpy as np
-import pandas as pd
-import torch
-
-from .. import threaded
-from ..general import xywh2xyxy
-from ..plots import Annotator, colors
-
-
-@threaded
-def plot_images_and_masks(images, targets, masks, paths=None, fname='images.jpg', names=None):
- # Plot image grid with labels
- if isinstance(images, torch.Tensor):
- images = images.cpu().float().numpy()
- if isinstance(targets, torch.Tensor):
- targets = targets.cpu().numpy()
- if isinstance(masks, torch.Tensor):
- masks = masks.cpu().numpy().astype(int)
-
- max_size = 1920 # max image size
- max_subplots = 16 # max image subplots, i.e. 4x4
- bs, _, h, w = images.shape # batch size, _, height, width
- bs = min(bs, max_subplots) # limit plot images
- ns = np.ceil(bs ** 0.5) # number of subplots (square)
- if np.max(images[0]) <= 1:
- images *= 255 # de-normalise (optional)
-
- # Build Image
- mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init
- for i, im in enumerate(images):
- if i == max_subplots: # if last batch has fewer images than we expect
- break
- x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin
- im = im.transpose(1, 2, 0)
- mosaic[y:y + h, x:x + w, :] = im
-
- # Resize (optional)
- scale = max_size / ns / max(h, w)
- if scale < 1:
- h = math.ceil(scale * h)
- w = math.ceil(scale * w)
- mosaic = cv2.resize(mosaic, tuple(int(x * ns) for x in (w, h)))
-
- # Annotate
- fs = int((h + w) * ns * 0.01) # font size
- annotator = Annotator(mosaic, line_width=round(fs / 10), font_size=fs, pil=True, example=names)
- for i in range(i + 1):
- x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin
- annotator.rectangle([x, y, x + w, y + h], None, (255, 255, 255), width=2) # borders
- if paths:
- annotator.text((x + 5, y + 5 + h), text=Path(paths[i]).name[:40], txt_color=(220, 220, 220)) # filenames
- if len(targets) > 0:
- idx = targets[:, 0] == i
- ti = targets[idx] # image targets
-
- boxes = xywh2xyxy(ti[:, 2:6]).T
- classes = ti[:, 1].astype('int')
- labels = ti.shape[1] == 6 # labels if no conf column
- conf = None if labels else ti[:, 6] # check for confidence presence (label vs pred)
-
- if boxes.shape[1]:
- if boxes.max() <= 1.01: # if normalized with tolerance 0.01
- boxes[[0, 2]] *= w # scale to pixels
- boxes[[1, 3]] *= h
- elif scale < 1: # absolute coords need scale if image scales
- boxes *= scale
- boxes[[0, 2]] += x
- boxes[[1, 3]] += y
- for j, box in enumerate(boxes.T.tolist()):
- cls = classes[j]
- color = colors(cls)
- cls = names[cls] if names else cls
- if labels or conf[j] > 0.25: # 0.25 conf thresh
- label = f'{cls}' if labels else f'{cls} {conf[j]:.1f}'
- annotator.box_label(box, label, color=color)
-
- # Plot masks
- if len(masks):
- if masks.max() > 1.0: # means that masks are in overlap format
- image_masks = masks[[i]] # (1, 640, 640)
- nl = len(ti)
- index = np.arange(nl).reshape(nl, 1, 1) + 1
- image_masks = np.repeat(image_masks, nl, axis=0)
- image_masks = np.where(image_masks == index, 1.0, 0.0)
- else:
- image_masks = masks[idx]
-
- im = np.asarray(annotator.im).copy()
- for j, box in enumerate(boxes.T.tolist()):
- if labels or conf[j] > 0.25: # 0.25 conf thresh
- color = colors(classes[j])
- mh, mw = image_masks[j].shape
- if mh != h or mw != w:
- mask = image_masks[j].astype(np.uint8)
- mask = cv2.resize(mask, (w, h))
- mask = mask.astype(bool)
- else:
- mask = image_masks[j].astype(bool)
- with contextlib.suppress(Exception):
- im[y:y + h, x:x + w, :][mask] = im[y:y + h, x:x + w, :][mask] * 0.4 + np.array(color) * 0.6
- annotator.fromarray(im)
- annotator.im.save(fname) # save
-
-
-def plot_results_with_masks(file="path/to/results.csv", dir="", best=True):
- # Plot training results.csv. Usage: from utils.plots import *; plot_results('path/to/results.csv')
- save_dir = Path(file).parent if file else Path(dir)
- fig, ax = plt.subplots(2, 8, figsize=(18, 6), tight_layout=True)
- ax = ax.ravel()
- files = list(save_dir.glob("results*.csv"))
- assert len(files), f"No results.csv files found in {save_dir.resolve()}, nothing to plot."
- for f in files:
- try:
- data = pd.read_csv(f)
- index = np.argmax(0.9 * data.values[:, 8] + 0.1 * data.values[:, 7] + 0.9 * data.values[:, 12] +
- 0.1 * data.values[:, 11])
- s = [x.strip() for x in data.columns]
- x = data.values[:, 0]
- for i, j in enumerate([1, 2, 3, 4, 5, 6, 9, 10, 13, 14, 15, 16, 7, 8, 11, 12]):
- y = data.values[:, j]
- # y[y == 0] = np.nan # don't show zero values
- ax[i].plot(x, y, marker=".", label=f.stem, linewidth=2, markersize=2)
- if best:
- # best
- ax[i].scatter(index, y[index], color="r", label=f"best:{index}", marker="*", linewidth=3)
- ax[i].set_title(s[j] + f"\n{round(y[index], 5)}")
- else:
- # last
- ax[i].scatter(x[-1], y[-1], color="r", label="last", marker="*", linewidth=3)
- ax[i].set_title(s[j] + f"\n{round(y[-1], 5)}")
- # if j in [8, 9, 10]: # share train and val loss y axes
- # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5])
- except Exception as e:
- print(f"Warning: Plotting error for {f}: {e}")
- ax[1].legend()
- fig.savefig(save_dir / "results.png", dpi=200)
- plt.close()
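
The mosaic code above places image `i` at a block origin derived from the grid size `ns`, which is clearer with numbers. A tiny standalone check of that origin arithmetic, with arbitrarily chosen sizes:

import numpy as np

bs, h, w = 9, 640, 640        # batch size and per-image height/width
ns = int(np.ceil(bs ** 0.5))  # 3x3 grid of subplots
for i in range(bs):
    x, y = int(w * (i // ns)), int(h * (i % ns))  # block origin, as in the plot code
    print(i, x, y)  # image 0 -> (0, 0), image 1 -> (0, 640), image 3 -> (640, 0), ...
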
diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/file_io.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/file_io.py
deleted file mode 100644
index dba663d4aafeb925ddffa50f5055933d6531a069..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/fairseq/file_io.py
+++ /dev/null
@@ -1,194 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import shutil
-from typing import List, Optional
-
-
-logger = logging.getLogger(__file__)
-
-
-try:
- from iopath.common.file_io import g_pathmgr as IOPathManager
-
- try:
- # [FB only - for now] AWS PathHandler for PathManager
- from .fb_pathhandlers import S3PathHandler
-
- IOPathManager.register_handler(S3PathHandler())
- except KeyError:
- logging.warning("S3PathHandler already registered.")
- except ImportError:
- logging.debug(
- "S3PathHandler couldn't be imported. Either missing fb-only files, or boto3 module."
- )
-
-except ImportError:
- IOPathManager = None
-
-
-class PathManager:
- """
- Wrapper for insulating OSS I/O (using Python builtin operations) from
- iopath's PathManager abstraction (for transparently handling various
- internal backends).
- """
-
- @staticmethod
- def open(
- path: str,
- mode: str = "r",
- buffering: int = -1,
- encoding: Optional[str] = None,
- errors: Optional[str] = None,
- newline: Optional[str] = None,
- ):
- if IOPathManager:
- return IOPathManager.open(
- path=path,
- mode=mode,
- buffering=buffering,
- encoding=encoding,
- errors=errors,
- newline=newline,
- )
- return open(
- path,
- mode=mode,
- buffering=buffering,
- encoding=encoding,
- errors=errors,
- newline=newline,
- )
-
- @staticmethod
- def copy(src_path: str, dst_path: str, overwrite: bool = False) -> bool:
- if IOPathManager:
- return IOPathManager.copy(
- src_path=src_path, dst_path=dst_path, overwrite=overwrite
- )
- return shutil.copyfile(src_path, dst_path)
-
- @staticmethod
- def get_local_path(path: str, **kwargs) -> str:
- if IOPathManager:
- return IOPathManager.get_local_path(path, **kwargs)
- return path
-
- @staticmethod
- def exists(path: str) -> bool:
- if IOPathManager:
- return IOPathManager.exists(path)
- return os.path.exists(path)
-
- @staticmethod
- def isfile(path: str) -> bool:
- if IOPathManager:
- return IOPathManager.isfile(path)
- return os.path.isfile(path)
-
- @staticmethod
- def ls(path: str) -> List[str]:
- if IOPathManager:
- return IOPathManager.ls(path)
- return os.listdir(path)
-
- @staticmethod
- def mkdirs(path: str) -> None:
- if IOPathManager:
- return IOPathManager.mkdirs(path)
- os.makedirs(path, exist_ok=True)
-
- @staticmethod
- def rm(path: str) -> None:
- if IOPathManager:
- return IOPathManager.rm(path)
- os.remove(path)
-
- @staticmethod
- def chmod(path: str, mode: int) -> None:
- if not PathManager.path_requires_pathmanager(path):
- os.chmod(path, mode)
-
- @staticmethod
- def register_handler(handler) -> None:
- if IOPathManager:
- return IOPathManager.register_handler(handler=handler)
-
- @staticmethod
- def copy_from_local(
- local_path: str, dst_path: str, overwrite: bool = False, **kwargs
- ) -> None:
- if IOPathManager:
- return IOPathManager.copy_from_local(
- local_path=local_path, dst_path=dst_path, overwrite=overwrite, **kwargs
- )
- return shutil.copyfile(local_path, dst_path)
-
- @staticmethod
- def path_requires_pathmanager(path: str) -> bool:
- """Do we require PathManager to access given path?"""
- if IOPathManager:
- for p in IOPathManager._path_handlers.keys():
- if path.startswith(p):
- return True
- return False
-
- @staticmethod
- def supports_rename(path: str) -> bool:
- # PathManager doesn't yet support renames
- return not PathManager.path_requires_pathmanager(path)
-
- @staticmethod
- def rename(src: str, dst: str):
- os.rename(src, dst)
-
- """
- ioPath async PathManager methods:
- """
- @staticmethod
- def opena(
- path: str,
- mode: str = "r",
- buffering: int = -1,
- encoding: Optional[str] = None,
- errors: Optional[str] = None,
- newline: Optional[str] = None,
- ):
- """
- Return file descriptor with asynchronous write operations.
- """
- global IOPathManager
- if not IOPathManager:
- logging.info("ioPath is initializing PathManager.")
- try:
- from iopath.common.file_io import PathManager
- IOPathManager = PathManager()
- except Exception:
- logging.exception("Failed to initialize ioPath PathManager object.")
- return IOPathManager.opena(
- path=path,
- mode=mode,
- buffering=buffering,
- encoding=encoding,
- errors=errors,
- newline=newline,
- )
-
- @staticmethod
- def async_close() -> bool:
- """
- Wait for files to be written and clean up asynchronous PathManager.
- NOTE: `PathManager.async_close()` must be called at the end of any
- script that uses `PathManager.opena(...)`.
- """
- global IOPathManager
- if IOPathManager:
- return IOPathManager.async_close()
- return False
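
Every method of `PathManager` above follows the same shape: delegate to iopath when it imports, otherwise fall back to the standard library. A minimal illustration of that fallback style outside fairseq; the `exists` helper here is just an example, not fairseq's API:

import os

try:
    from iopath.common.file_io import g_pathmgr as IOPathManager
except ImportError:
    IOPathManager = None

def exists(path: str) -> bool:
    # Prefer the iopath backend when it is available, else plain os.path.
    if IOPathManager:
        return IOPathManager.exists(path)
    return os.path.exists(path)

print(exists("/tmp"))
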
diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py
deleted file mode 100644
index d0e7e14b7e72b1151f7d7f19094430bbab64f8f0..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-from typing import Optional, List
-from omegaconf import II
-
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler
-
-
-@dataclass
-class FixedLRScheduleConfig(FairseqDataclass):
- force_anneal: Optional[int] = field(
- default=None,
- metadata={"help": "force annealing at specified epoch"},
- )
- lr_shrink: float = field(
- default=0.1,
- metadata={"help": "shrink factor for annealing, lr_new = (lr * lr_shrink)"},
- )
- warmup_updates: int = field(
- default=0,
- metadata={"help": "warmup the learning rate linearly for the first N updates"},
- )
- lr: List[float] = II("optimization.lr")
-
-
-@register_lr_scheduler("fixed", dataclass=FixedLRScheduleConfig)
-class FixedLRSchedule(FairseqLRScheduler):
- """Decay the LR on a fixed schedule."""
-
- def __init__(self, cfg: FixedLRScheduleConfig, optimizer):
- super().__init__(cfg, optimizer)
-
- self.lr = cfg.lr[0]
- if cfg.warmup_updates > 0:
- self.warmup_factor = 1.0 / cfg.warmup_updates
- else:
- self.warmup_factor = 1
-
- def state_dict(self):
- return {"lr": self.lr}
-
- def load_state_dict(self, state_dict):
- if "lr" in state_dict:
- self.lr = state_dict["lr"]
-
- def get_next_lr(self, epoch):
- lrs = self.cfg.lr
- if self.cfg.force_anneal is None or epoch < self.cfg.force_anneal:
- # use fixed LR schedule
- next_lr = lrs[min(epoch - 1, len(lrs) - 1)]
- else:
- # anneal based on lr_shrink
- next_lr = lrs[-1] * self.cfg.lr_shrink ** (
- epoch + 1 - self.cfg.force_anneal
- )
- return next_lr
-
- def step_begin_epoch(self, epoch):
- """Update the learning rate at the beginning of the given epoch."""
- self.lr = self.get_next_lr(epoch)
- self.optimizer.set_lr(self.warmup_factor * self.lr)
- return self.optimizer.get_lr()
-
- def step_update(self, num_updates):
- """Update the learning rate after each update."""
- if self.cfg.warmup_updates > 0 and num_updates < self.cfg.warmup_updates:
- self.warmup_factor = (num_updates + 1) / float(self.cfg.warmup_updates)
- self.optimizer.set_lr(self.warmup_factor * self.lr)
- else:
- self.optimizer.set_lr(self.lr)
- return self.optimizer.get_lr()
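
`get_next_lr` indexes a fixed list of learning rates by epoch until `force_anneal`, then decays the last entry by `lr_shrink`. A worked example with made-up numbers shows both regimes:

def get_next_lr(epoch, lrs=(0.1, 0.05, 0.01), lr_shrink=0.1, force_anneal=5):
    # Before force_anneal: pick from the fixed schedule, clamped to the last entry.
    if force_anneal is None or epoch < force_anneal:
        return lrs[min(epoch - 1, len(lrs) - 1)]
    # From force_anneal onwards: shrink the last fixed LR geometrically.
    return lrs[-1] * lr_shrink ** (epoch + 1 - force_anneal)

print(get_next_lr(1))  # 0.1    (first entry)
print(get_next_lr(4))  # 0.01   (clamped to the last entry)
print(get_next_lr(6))  # 0.0001 (0.01 * 0.1**2)
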
diff --git a/spaces/mshukor/UnIVAL/fairseq/scripts/shard_docs.py b/spaces/mshukor/UnIVAL/fairseq/scripts/shard_docs.py
deleted file mode 100644
index 97232c3c845ee01dc5ab627388934cc0f9588280..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/scripts/shard_docs.py
+++ /dev/null
@@ -1,54 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-Split a large file into shards while respecting document boundaries. Documents
-should be separated by a single empty line.
-"""
-
-import argparse
-import contextlib
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument("input")
- parser.add_argument("--num-shards", type=int)
- args = parser.parse_args()
-
- assert args.num_shards is not None and args.num_shards > 1
-
- with open(args.input, "r", encoding="utf-8") as h:
- with contextlib.ExitStack() as stack:
- outputs = [
- stack.enter_context(
- open(args.input + ".shard" + str(i), "w", encoding="utf-8")
- )
- for i in range(args.num_shards)
- ]
-
- doc = []
- first_doc = [True] * args.num_shards
-
- def output_doc(i):
- if not first_doc[i]:
- outputs[i].write("\n")
- first_doc[i] = False
- for line in doc:
- outputs[i].write(line)
- doc.clear()
-
- num_docs = 0
- for line in h:
- if line.strip() == "": # empty line indicates new document
- output_doc(num_docs % args.num_shards)
- num_docs += 1
- else:
- doc.append(line)
- output_doc(num_docs % args.num_shards)
-
-
-if __name__ == "__main__":
- main()
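
The script assigns whole documents to shards round-robin, using a blank line as the document separator, so no document is ever split across shards. The assignment rule in isolation, as a toy illustration rather than the script itself:

docs = ["doc one\nline two\n", "doc two\n", "doc three\n", "doc four\n", "doc five\n"]
num_shards = 2

shards = [[] for _ in range(num_shards)]
for i, doc in enumerate(docs):
    shards[i % num_shards].append(doc)  # round-robin by document, never splitting one

print([len(s) for s in shards])  # [3, 2]
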
diff --git a/spaces/mthsk/sovits-models/data_utils.py b/spaces/mthsk/sovits-models/data_utils.py
deleted file mode 100644
index 7c76fd1c3a45b8304d916161718c7763874f3e35..0000000000000000000000000000000000000000
--- a/spaces/mthsk/sovits-models/data_utils.py
+++ /dev/null
@@ -1,155 +0,0 @@
-import time
-import os
-import random
-import numpy as np
-import torch
-import torch.utils.data
-
-import modules.commons as commons
-import utils
-from modules.mel_processing import spectrogram_torch, spec_to_mel_torch
-from utils import load_wav_to_torch, load_filepaths_and_text
-
-# import h5py
-
-
-"""Multi speaker version"""
-
-
-class TextAudioSpeakerLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, speaker_id, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
-
- def __init__(self, audiopaths, hparams, all_in_mem: bool = False):
- self.audiopaths = load_filepaths_and_text(audiopaths)
- self.max_wav_value = hparams.data.max_wav_value
- self.sampling_rate = hparams.data.sampling_rate
- self.filter_length = hparams.data.filter_length
- self.hop_length = hparams.data.hop_length
- self.win_length = hparams.data.win_length
- self.sampling_rate = hparams.data.sampling_rate
- self.use_sr = hparams.train.use_sr
- self.spec_len = hparams.train.max_speclen
- self.spk_map = hparams.spk
-
- random.seed(1234)
- random.shuffle(self.audiopaths)
-
- self.all_in_mem = all_in_mem
- if self.all_in_mem:
- self.cache = [self.get_audio(p[0]) for p in self.audiopaths]
-
- def get_audio(self, filename):
- filename = filename.replace("\\", "/")
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError("{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate))
- audio_norm = audio / self.max_wav_value
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
-
- # Ideally, all data generated after Mar 25 should have .spec.pt
- if os.path.exists(spec_filename):
- spec = torch.load(spec_filename)
- else:
- spec = spectrogram_torch(audio_norm, self.filter_length,
- self.sampling_rate, self.hop_length, self.win_length,
- center=False)
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename)
-
- spk = filename.split("/")[-2]
- spk = torch.LongTensor([self.spk_map[spk]])
-
- f0 = np.load(filename + ".f0.npy")
- f0, uv = utils.interpolate_f0(f0)
- f0 = torch.FloatTensor(f0)
- uv = torch.FloatTensor(uv)
-
- c = torch.load(filename+ ".soft.pt")
- c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[0])
-
-
- lmin = min(c.size(-1), spec.size(-1))
- assert abs(c.size(-1) - spec.size(-1)) < 3, (c.size(-1), spec.size(-1), f0.shape, filename)
- assert abs(audio_norm.shape[1]-lmin * self.hop_length) < 3 * self.hop_length
- spec, c, f0, uv = spec[:, :lmin], c[:, :lmin], f0[:lmin], uv[:lmin]
- audio_norm = audio_norm[:, :lmin * self.hop_length]
-
- return c, f0, spec, audio_norm, spk, uv
-
- def random_slice(self, c, f0, spec, audio_norm, spk, uv):
- # if spec.shape[1] < 30:
- # print("skip too short audio:", filename)
- # return None
- if spec.shape[1] > 800:
- start = random.randint(0, spec.shape[1]-800)
- end = start + 790
- spec, c, f0, uv = spec[:, start:end], c[:, start:end], f0[start:end], uv[start:end]
- audio_norm = audio_norm[:, start * self.hop_length : end * self.hop_length]
-
- return c, f0, spec, audio_norm, spk, uv
-
- def __getitem__(self, index):
- if self.all_in_mem:
- return self.random_slice(*self.cache[index])
- else:
- return self.random_slice(*self.get_audio(self.audiopaths[index][0]))
-
- def __len__(self):
- return len(self.audiopaths)
-
-
-class TextAudioCollate:
-
- def __call__(self, batch):
- batch = [b for b in batch if b is not None]
-
- input_lengths, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[0].shape[1] for x in batch]),
- dim=0, descending=True)
-
- max_c_len = max([x[0].size(1) for x in batch])
- max_wav_len = max([x[3].size(1) for x in batch])
-
- lengths = torch.LongTensor(len(batch))
-
- c_padded = torch.FloatTensor(len(batch), batch[0][0].shape[0], max_c_len)
- f0_padded = torch.FloatTensor(len(batch), max_c_len)
- spec_padded = torch.FloatTensor(len(batch), batch[0][2].shape[0], max_c_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- spkids = torch.LongTensor(len(batch), 1)
- uv_padded = torch.FloatTensor(len(batch), max_c_len)
-
- c_padded.zero_()
- spec_padded.zero_()
- f0_padded.zero_()
- wav_padded.zero_()
- uv_padded.zero_()
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- c = row[0]
- c_padded[i, :, :c.size(1)] = c
- lengths[i] = c.size(1)
-
- f0 = row[1]
- f0_padded[i, :f0.size(0)] = f0
-
- spec = row[2]
- spec_padded[i, :, :spec.size(1)] = spec
-
- wav = row[3]
- wav_padded[i, :, :wav.size(1)] = wav
-
- spkids[i, 0] = row[4]
-
- uv = row[5]
- uv_padded[i, :uv.size(0)] = uv
-
- return c_padded, f0_padded, spec_padded, wav_padded, spkids, lengths, uv_padded
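
`TextAudioCollate` pads every variable-length field out to the longest item in the batch and records the true lengths separately. The padding idea in miniature on plain tensors, with invented shapes:

import torch

batch = [torch.randn(256, 120), torch.randn(256, 95)]  # (features, frames) per item

max_len = max(x.size(1) for x in batch)
lengths = torch.LongTensor([x.size(1) for x in batch])
padded = torch.zeros(len(batch), 256, max_len)
for i, x in enumerate(batch):
    padded[i, :, :x.size(1)] = x  # copy, leave the tail as zeros

print(padded.shape, lengths)  # torch.Size([2, 256, 120]) tensor([120,  95])
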
diff --git a/spaces/mueller-franzes/medfusion-app/tests/models/test_unet_openai.py b/spaces/mueller-franzes/medfusion-app/tests/models/test_unet_openai.py
deleted file mode 100644
index c2a1a27cd2ec78c7056d6314894c5a75c61f45b7..0000000000000000000000000000000000000000
--- a/spaces/mueller-franzes/medfusion-app/tests/models/test_unet_openai.py
+++ /dev/null
@@ -1,19 +0,0 @@
-
-from medical_diffusion.external.stable_diffusion.unet_openai import UNetModel
-from medical_diffusion.models.embedders import LabelEmbedder
-
-import torch
-
-
-noise_estimator = UNetModel
-noise_estimator_kwargs = {}
-
-
-model = noise_estimator(**noise_estimator_kwargs)
-print(model)
-
-input = torch.randn((1,4,32,32))
-time = torch.randn([1,])
-cond = None #torch.tensor([0,])
-out_hor, out_ver = model(input, time, cond)
-print(out_hor)
\ No newline at end of file
diff --git a/spaces/mujicloud/nodeproxy/README.md b/spaces/mujicloud/nodeproxy/README.md
deleted file mode 100644
index 77f1f3711583229394115139119e0ba2ce994218..0000000000000000000000000000000000000000
--- a/spaces/mujicloud/nodeproxy/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Nodeproxy
-emoji: 🏃
-colorFrom: green
-colorTo: green
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/paper_runfiles/generate_test_paris.sh b/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/paper_runfiles/generate_test_paris.sh
deleted file mode 100644
index 66056017c3aa376ef0767a59583ab25a321b559b..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/paper_runfiles/generate_test_paris.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env bash
-
-# paths to data are valid for mml-ws01
-OUT_DIR="/media/inpainting/paper_data/Paris_StreetView_Dataset_val"
-
-source "$(dirname $0)/env.sh"
-
-for datadir in paris_eval_gt
-do
- for conf in random_thin_256 random_medium_256 random_thick_256 segm_256
- do
- "$BINDIR/gen_mask_dataset_hydra.py" -cn $conf datadir=$datadir location=mml-ws01-paris \
- location.out_dir=$OUT_DIR cropping.out_square_crop=False cropping.out_min_size=227
-
- "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
- done
-done
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/countless/countless2d.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/countless/countless2d.py
deleted file mode 100644
index dc27b73affa20ab1a8a199542469a10aaf1f555a..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/countless/countless2d.py
+++ /dev/null
@@ -1,529 +0,0 @@
-from __future__ import print_function, division
-
-"""
-COUNTLESS performance test in Python.
-
-python countless2d.py ./images/NAMEOFIMAGE
-"""
-
-import six
-from six.moves import range
-from collections import defaultdict
-from functools import reduce
-import operator
-import io
-import os
-from PIL import Image
-import math
-import numpy as np
-import random
-import sys
-import time
-from tqdm import tqdm
-from scipy import ndimage
-
-def simplest_countless(data):
- """
- Vectorized implementation of downsampling a 2D
- image by 2 on each side using the COUNTLESS algorithm.
-
- data is a 2D numpy array with even dimensions.
- """
- sections = []
-
- # This loop splits the 2D array apart into four arrays that are
- # all the result of striding by 2 and offset by (0,0), (0,1), (1,0),
- # and (1,1) representing the A, B, C, and D positions from Figure 1.
- factor = (2,2)
- for offset in np.ndindex(factor):
- part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- a, b, c, d = sections
-
- ab = a * (a == b) # PICK(A,B)
- ac = a * (a == c) # PICK(A,C)
- bc = b * (b == c) # PICK(B,C)
-
- a = ab | ac | bc # Bitwise OR, safe b/c non-matches are zeroed
-
- return a + (a == 0) * d # AB || AC || BC || D
-
-def quick_countless(data):
- """
- Vectorized implementation of downsampling a 2D
- image by 2 on each side using the COUNTLESS algorithm.
-
- data is a 2D numpy array with even dimensions.
- """
- sections = []
-
- # This loop splits the 2D array apart into four arrays that are
- # all the result of striding by 2 and offset by (0,0), (0,1), (1,0),
- # and (1,1) representing the A, B, C, and D positions from Figure 1.
- factor = (2,2)
- for offset in np.ndindex(factor):
- part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- a, b, c, d = sections
-
- ab_ac = a * ((a == b) | (a == c)) # PICK(A,B) || PICK(A,C) w/ optimization
- bc = b * (b == c) # PICK(B,C)
-
- a = ab_ac | bc # (PICK(A,B) || PICK(A,C)) or PICK(B,C)
- return a + (a == 0) * d # AB || AC || BC || D
-
-def quickest_countless(data):
- """
- Vectorized implementation of downsampling a 2D
- image by 2 on each side using the COUNTLESS algorithm.
-
- data is a 2D numpy array with even dimensions.
- """
- sections = []
-
- # This loop splits the 2D array apart into four arrays that are
- # all the result of striding by 2 and offset by (0,0), (0,1), (1,0),
- # and (1,1) representing the A, B, C, and D positions from Figure 1.
- factor = (2,2)
- for offset in np.ndindex(factor):
- part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- a, b, c, d = sections
-
- ab_ac = a * ((a == b) | (a == c)) # PICK(A,B) || PICK(A,C) w/ optimization
- ab_ac |= b * (b == c) # PICK(B,C)
- return ab_ac + (ab_ac == 0) * d # AB || AC || BC || D
-
-def quick_countless_xor(data):
- """
- Vectorized implementation of downsampling a 2D
- image by 2 on each side using the COUNTLESS algorithm.
-
- data is a 2D numpy array with even dimensions.
- """
- sections = []
-
- # This loop splits the 2D array apart into four arrays that are
- # all the result of striding by 2 and offset by (0,0), (0,1), (1,0),
- # and (1,1) representing the A, B, C, and D positions from Figure 1.
- factor = (2,2)
- for offset in np.ndindex(factor):
- part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- a, b, c, d = sections
-
- ab = a ^ (a ^ b) # a or b
- ab += (ab != a) * ((ab ^ (ab ^ c)) - b) # b or c
- ab += (ab == c) * ((ab ^ (ab ^ d)) - c) # c or d
- return ab
-
-def stippled_countless(data):
- """
- Vectorized implementation of downsampling a 2D
- image by 2 on each side using the COUNTLESS algorithm
- that treats zero as "background" and inflates lone
- pixels.
-
- data is a 2D numpy array with even dimensions.
- """
- sections = []
-
- # This loop splits the 2D array apart into four arrays that are
- # all the result of striding by 2 and offset by (0,0), (0,1), (1,0),
- # and (1,1) representing the A, B, C, and D positions from Figure 1.
- factor = (2,2)
- for offset in np.ndindex(factor):
- part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- a, b, c, d = sections
-
- ab_ac = a * ((a == b) | (a == c)) # PICK(A,B) || PICK(A,C) w/ optimization
- ab_ac |= b * (b == c) # PICK(B,C)
-
- nonzero = a + (a == 0) * (b + (b == 0) * c)
- return ab_ac + (ab_ac == 0) * (d + (d == 0) * nonzero) # AB || AC || BC || D
-
-def zero_corrected_countless(data):
- """
- Vectorized implementation of downsampling a 2D
- image by 2 on each side using the COUNTLESS algorithm.
-
- data is a 2D numpy array with even dimensions.
- """
- # allows us to prevent losing 1/2 a bit of information
- # at the top end by using a bigger type. Without this 255 is handled incorrectly.
- data, upgraded = upgrade_type(data)
-
- # offset from zero, raw countless doesn't handle 0 correctly
- # we'll remove the extra 1 at the end.
- data += 1
-
- sections = []
-
- # This loop splits the 2D array apart into four arrays that are
- # all the result of striding by 2 and offset by (0,0), (0,1), (1,0),
- # and (1,1) representing the A, B, C, and D positions from Figure 1.
- factor = (2,2)
- for offset in np.ndindex(factor):
- part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- a, b, c, d = sections
-
- ab = a * (a == b) # PICK(A,B)
- ac = a * (a == c) # PICK(A,C)
- bc = b * (b == c) # PICK(B,C)
-
- a = ab | ac | bc # Bitwise OR, safe b/c non-matches are zeroed
-
- result = a + (a == 0) * d - 1 # a or d - 1
-
- if upgraded:
- return downgrade_type(result)
-
- # only need to reset data if we weren't upgraded
- # b/c no copy was made in that case
- data -= 1
-
- return result
-
-def countless_extreme(data):
- nonzeros = np.count_nonzero(data)
- # print("nonzeros", nonzeros)
-
- N = reduce(operator.mul, data.shape)
-
- if nonzeros == N:
- print("quick")
- return quick_countless(data)
- elif np.count_nonzero(data + 1) == N:
- print("quick")
- # print("upper", nonzeros)
- return quick_countless(data)
- else:
- return countless(data)
-
-
-def countless(data):
- """
- Vectorized implementation of downsampling a 2D
- image by 2 on each side using the COUNTLESS algorithm.
-
- data is a 2D numpy array with even dimensions.
- """
- # allows us to prevent losing 1/2 a bit of information
- # at the top end by using a bigger type. Without this 255 is handled incorrectly.
- data, upgraded = upgrade_type(data)
-
- # offset from zero, raw countless doesn't handle 0 correctly
- # we'll remove the extra 1 at the end.
- data += 1
-
- sections = []
-
- # This loop splits the 2D array apart into four arrays that are
- # all the result of striding by 2 and offset by (0,0), (0,1), (1,0),
- # and (1,1) representing the A, B, C, and D positions from Figure 1.
- factor = (2,2)
- for offset in np.ndindex(factor):
- part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- a, b, c, d = sections
-
- ab_ac = a * ((a == b) | (a == c)) # PICK(A,B) || PICK(A,C) w/ optimization
- ab_ac |= b * (b == c) # PICK(B,C)
- result = ab_ac + (ab_ac == 0) * d - 1 # (matches or d) - 1
-
- if upgraded:
- return downgrade_type(result)
-
- # only need to reset data if we weren't upgraded
- # b/c no copy was made in that case
- data -= 1
-
- return result
-
-def upgrade_type(arr):
- dtype = arr.dtype
-
- if dtype == np.uint8:
- return arr.astype(np.uint16), True
- elif dtype == np.uint16:
- return arr.astype(np.uint32), True
- elif dtype == np.uint32:
- return arr.astype(np.uint64), True
-
- return arr, False
-
-def downgrade_type(arr):
- dtype = arr.dtype
-
- if dtype == np.uint64:
- return arr.astype(np.uint32)
- elif dtype == np.uint32:
- return arr.astype(np.uint16)
- elif dtype == np.uint16:
- return arr.astype(np.uint8)
-
- return arr
-
-def odd_to_even(image):
- """
- To facilitate 2x2 downsampling segmentation, change an odd sized image into an even sized one.
- Works by mirroring the starting 1 pixel edge of the image on odd shaped sides.
-
- e.g. turn a 3x3x5 image into a 4x4x5 (the x and y are what are getting downsampled)
-
- For example: [ 3, 2, 4 ] => [ 3, 3, 2, 4 ] which is now easy to downsample.
-
- """
- shape = np.array(image.shape)
-
- offset = (shape % 2)[:2] # x,y offset
-
- # detect if we're dealing with an even
- # image. if so it's fine, just return.
- if not np.any(offset):
- return image
-
- oddshape = image.shape[:2] + offset
- oddshape = np.append(oddshape, shape[2:])
- oddshape = oddshape.astype(int)
-
- newimg = np.empty(shape=oddshape, dtype=image.dtype)
-
- ox,oy = offset
- sx,sy = oddshape
-
- newimg[0,0] = image[0,0] # corner
- newimg[ox:sx,0] = image[:,0] # x axis line
- newimg[0,oy:sy] = image[0,:] # y axis line
-
- return newimg
-
-def counting(array):
- factor = (2, 2, 1)
- shape = array.shape
-
- while len(shape) < 4:
- array = np.expand_dims(array, axis=-1)
- shape = array.shape
-
- output_shape = tuple(int(math.ceil(s / f)) for s, f in zip(shape, factor))
- output = np.zeros(output_shape, dtype=array.dtype)
-
- for chan in range(0, shape[3]):
- for z in range(0, shape[2]):
- for x in range(0, shape[0], 2):
- for y in range(0, shape[1], 2):
- block = array[ x:x+2, y:y+2, z, chan ] # 2x2 block
-
- hashtable = defaultdict(int)
- for subx, suby in np.ndindex(block.shape[0], block.shape[1]):
- hashtable[block[subx, suby]] += 1
-
- best = (0, 0)
- for segid, val in six.iteritems(hashtable):
- if best[1] < val:
- best = (segid, val)
-
- output[ x // 2, y // 2, chan ] = best[0]
-
- return output
-
-def ndzoom(array):
- if len(array.shape) == 3:
- ratio = ( 1 / 2.0, 1 / 2.0, 1.0 )
- else:
- ratio = ( 1 / 2.0, 1 / 2.0)
- return ndimage.interpolation.zoom(array, ratio, order=1)
-
-def countless_if(array):
- factor = (2, 2, 1)
- shape = array.shape
-
- if len(shape) < 3:
- array = array[ :,:, np.newaxis ]
- shape = array.shape
-
- output_shape = tuple(int(math.ceil(s / f)) for s, f in zip(shape, factor))
- output = np.zeros(output_shape, dtype=array.dtype)
-
- for chan in range(0, shape[2]):
- for x in range(0, shape[0], 2):
- for y in range(0, shape[1], 2):
- block = array[ x:x+2, y:y+2, chan ] # 2x2 block
-
- if block[0,0] == block[1,0]:
- pick = block[0,0]
- elif block[0,0] == block[0,1]:
- pick = block[0,0]
- elif block[1,0] == block[0,1]:
- pick = block[1,0]
- else:
- pick = block[1,1]
-
- output[ x // 2, y // 2, chan ] = pick
-
- return np.squeeze(output)
-
-def downsample_with_averaging(array):
- """
- Downsample x by factor using averaging.
-
- @return: The downsampled array, of the same type as x.
- """
-
- if len(array.shape) == 3:
- factor = (2,2,1)
- else:
- factor = (2,2)
-
- if np.array_equal(factor[:3], np.array([1,1,1])):
- return array
-
- output_shape = tuple(int(math.ceil(s / f)) for s, f in zip(array.shape, factor))
- temp = np.zeros(output_shape, float)
- counts = np.zeros(output_shape, int)  # the deprecated np.int alias was removed in recent NumPy
- for offset in np.ndindex(factor):
- part = array[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- indexing_expr = tuple(np.s_[:s] for s in part.shape)
- temp[indexing_expr] += part
- counts[indexing_expr] += 1
- return (temp / counts).astype(array.dtype)  # np.cast is removed in recent NumPy
-
-def downsample_with_max_pooling(array):
-
- factor = (2,2)
-
- if np.all(np.array(factor, int) == 1):
- return array
-
- sections = []
-
- for offset in np.ndindex(factor):
- part = array[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- output = sections[0].copy()
-
- for section in sections[1:]:
- np.maximum(output, section, output)
-
- return output
-
-def striding(array):
- """Downsample x by factor using striding.
-
- @return: The downsampled array, of the same type as x.
- """
- factor = (2,2)
- if np.all(np.array(factor, int) == 1):
- return array
- return array[tuple(np.s_[::f] for f in factor)]
-
-def benchmark():
- filename = sys.argv[1]
- img = Image.open(filename)
- data = np.array(img.getdata(), dtype=np.uint8)
-
- if len(data.shape) == 1:
- n_channels = 1
- reshape = (img.height, img.width)
- else:
- n_channels = min(data.shape[1], 3)
- data = data[:, :n_channels]
- reshape = (img.height, img.width, n_channels)
-
- data = data.reshape(reshape).astype(np.uint8)
-
- methods = [
- simplest_countless,
- quick_countless,
- quick_countless_xor,
- quickest_countless,
- stippled_countless,
- zero_corrected_countless,
- countless,
- downsample_with_averaging,
- downsample_with_max_pooling,
- ndzoom,
- striding,
- # countless_if,
- # counting,
- ]
-
- formats = {
- 1: 'L',
- 3: 'RGB',
- 4: 'RGBA'
- }
-
- if not os.path.exists('./results'):
- os.mkdir('./results')
-
- N = 500
- img_size = float(img.width * img.height) / 1024.0 / 1024.0
- print("N = %d, %dx%d (%.2f MPx) %d chan, %s" % (N, img.width, img.height, img_size, n_channels, filename))
- print("Algorithm\tMPx/sec\tMB/sec\tSec")
- for fn in methods:
- print(fn.__name__, end='')
- sys.stdout.flush()
-
- start = time.time()
- # tqdm is here to show you what's going on the first time you run it.
- # Feel free to remove it to get slightly more accurate timing results.
- for _ in tqdm(range(N), desc=fn.__name__, disable=True):
- result = fn(data)
- end = time.time()
- print("\r", end='')
-
- total_time = (end - start)
- mpx = N * img_size / total_time
- mbytes = N * img_size * n_channels / total_time
- # Output in tab separated format to enable copy-paste into excel/numbers
- print("%s\t%.3f\t%.3f\t%.2f" % (fn.__name__, mpx, mbytes, total_time))
- outimg = Image.fromarray(np.squeeze(result), formats[n_channels])
- outimg.save('./results/{}.png'.format(fn.__name__), "PNG")
-
-if __name__ == '__main__':
- benchmark()
-
-
-# Example results:
-# N = 5, 1024x1024 (1.00 MPx) 1 chan, images/gray_segmentation.png
-# Function MPx/sec MB/sec Sec
-# simplest_countless 752.855 752.855 0.01
-# quick_countless 920.328 920.328 0.01
-# zero_corrected_countless 534.143 534.143 0.01
-# countless 644.247 644.247 0.01
-# downsample_with_averaging 372.575 372.575 0.01
-# downsample_with_max_pooling 974.060 974.060 0.01
-# ndzoom 137.517 137.517 0.04
-# striding 38550.588 38550.588 0.00
-# countless_if 4.377 4.377 1.14
-# counting 0.117 0.117 42.85
-
-# Run without non-numpy implementations:
-# N = 2000, 1024x1024 (1.00 MPx) 1 chan, images/gray_segmentation.png
-# Algorithm MPx/sec MB/sec Sec
-# simplest_countless 800.522 800.522 2.50
-# quick_countless 945.420 945.420 2.12
-# quickest_countless 947.256 947.256 2.11
-# stippled_countless 544.049 544.049 3.68
-# zero_corrected_countless 575.310 575.310 3.48
-# countless 646.684 646.684 3.09
-# downsample_with_averaging 385.132 385.132 5.19
-# downsample_with_max_pooling 988.361 988.361 2.02
-# ndzoom 163.104 163.104 12.26
-# striding 81589.340 81589.340 0.02
-
-
-
-
diff --git a/spaces/nakas/musika_api/musika_test.py b/spaces/nakas/musika_api/musika_test.py
deleted file mode 100644
index f778fdabd9fceb407cc572a15c15f0ba1bf095af..0000000000000000000000000000000000000000
--- a/spaces/nakas/musika_api/musika_test.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import os
-
-os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
-
-from parse_test import parse_args
-from models import Models_functions
-from utils import Utils_functions
-
-if __name__ == "__main__":
-
- # parse args
- args = parse_args()
-
- # initialize networks
- M = Models_functions(args)
- models_ls_1, models_ls_2, models_ls_3 = M.get_networks()
-
- # test musika
- U = Utils_functions(args)
- U.render_gradio(models_ls_1, models_ls_2, models_ls_3, train=False)
diff --git a/spaces/nateraw/deepafx-st/deepafx_st/processors/autodiff/signal.py b/spaces/nateraw/deepafx-st/deepafx_st/processors/autodiff/signal.py
deleted file mode 100644
index e8223b7ed0a6e22f20fa637edc0e0507bdc10303..0000000000000000000000000000000000000000
--- a/spaces/nateraw/deepafx-st/deepafx_st/processors/autodiff/signal.py
+++ /dev/null
@@ -1,194 +0,0 @@
-import math
-import torch
-from typing import List
-
-
-def butter(fc, fs: float = 2.0):
- """
-
- Recall Butterworth polynomials
- N = 1 s + 1
- N = 2 s^2 + sqrt(2)s + 1
- N = 3 (s^2 + s + 1)(s + 1)
- N = 4 (s^2 + 0.76536s + 1)(s^2 + 1.84776s + 1)
-
- Scaling
- LP to LP: s -> s/w_c
- LP to HP: s -> w_c/s
-
- Bilinear transform:
- s = 2/T_d * (1 - z^-1)/(1 + z^-1)
-
- For 1-pole butterworth lowpass
-
- 1 / (s + 1) 1-pole prototype
- 1 / (s/w_c + 1) LP to LP
- 1 / ( (2/T_d * (1 - z^-1)/(1 + z^-1)) / w_c + 1 )   Bilinear transform
-
- """
-
- # apply pre-warping to the cutoff
- T_d = 1 / fs
- w_d = (2 * math.pi * fc) / fs
- # sys.exit()
- w_c = (2 / T_d) * torch.tan(w_d / 2)
-
- a0 = 2 + (T_d * w_c)
- a1 = (T_d * w_c) - 2
- b0 = T_d * w_c
- b1 = T_d * w_c
-
- b = torch.stack([b0, b1], dim=0).view(-1)
- a = torch.stack([a0, a1], dim=0).view(-1)
-
- # normalize
- b = b.type_as(fc) / a0
- a = a.type_as(fc) / a0
-
- return b, a
-
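-# Minimal usage sketch (cutoff/sample-rate values are assumptions): design a
-# one-pole lowpass at 1 kHz for 44.1 kHz audio. `fc` must be a tensor so the
-# pre-warped cutoff (torch.tan) stays differentiable.
-#   >>> fc = torch.tensor([1000.0])
-#   >>> b, a = butter(fc, fs=44100.0)
-#   >>> b.shape, a.shape
-#   (torch.Size([2]), torch.Size([2]))
-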
-
-def biqaud(
- gain_dB: torch.Tensor,
- cutoff_freq: torch.Tensor,
- q_factor: torch.Tensor,
- sample_rate: float,
- filter_type: str = "peaking",
-):
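- """Compute biquad coefficients (standard RBJ 'Audio EQ Cookbook'-style formulas)
- for peaking, low_shelf, or high_shelf filters. gain_dB, cutoff_freq, and q_factor
- are tensors; returns normalized (b, a), each of shape (3,)."""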
-
- # convert inputs to Tensors if needed
- # gain_dB = torch.tensor([gain_dB])
- # cutoff_freq = torch.tensor([cutoff_freq])
- # q_factor = torch.tensor([q_factor])
-
- A = 10 ** (gain_dB / 40.0)
- w0 = 2 * math.pi * (cutoff_freq / sample_rate)
- alpha = torch.sin(w0) / (2 * q_factor)
- cos_w0 = torch.cos(w0)
- sqrt_A = torch.sqrt(A)
-
- if filter_type == "high_shelf":
- b0 = A * ((A + 1) + (A - 1) * cos_w0 + 2 * sqrt_A * alpha)
- b1 = -2 * A * ((A - 1) + (A + 1) * cos_w0)
- b2 = A * ((A + 1) + (A - 1) * cos_w0 - 2 * sqrt_A * alpha)
- a0 = (A + 1) - (A - 1) * cos_w0 + 2 * sqrt_A * alpha
- a1 = 2 * ((A - 1) - (A + 1) * cos_w0)
- a2 = (A + 1) - (A - 1) * cos_w0 - 2 * sqrt_A * alpha
- elif filter_type == "low_shelf":
- b0 = A * ((A + 1) - (A - 1) * cos_w0 + 2 * sqrt_A * alpha)
- b1 = 2 * A * ((A - 1) - (A + 1) * cos_w0)
- b2 = A * ((A + 1) - (A - 1) * cos_w0 - 2 * sqrt_A * alpha)
- a0 = (A + 1) + (A - 1) * cos_w0 + 2 * sqrt_A * alpha
- a1 = -2 * ((A - 1) + (A + 1) * cos_w0)
- a2 = (A + 1) + (A - 1) * cos_w0 - 2 * sqrt_A * alpha
- elif filter_type == "peaking":
- b0 = 1 + alpha * A
- b1 = -2 * cos_w0
- b2 = 1 - alpha * A
- a0 = 1 + (alpha / A)
- a1 = -2 * cos_w0
- a2 = 1 - (alpha / A)
- else:
- raise ValueError(f"Invalid filter_type: {filter_type}.")
-
- b = torch.stack([b0, b1, b2], dim=0).view(-1)
- a = torch.stack([a0, a1, a2], dim=0).view(-1)
-
- # normalize
- b = b.type_as(gain_dB) / a0
- a = a.type_as(gain_dB) / a0
-
- return b, a
-
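-# Minimal usage sketch (parameter values are assumptions): a +6 dB peaking EQ
-# at 2 kHz with Q = 0.707 at a 44.1 kHz sample rate. All filter parameters are
-# passed as tensors; the returned b and a each have shape (3,).
-#   >>> b, a = biqaud(torch.tensor([6.0]), torch.tensor([2000.0]),
-#   ...               torch.tensor([0.707]), 44100.0, "peaking")
-#   >>> freqz(b, a, n_fft=512).shape
-#   torch.Size([257])
-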
-
-def freqz(b, a, n_fft: int = 512):
-
- B = torch.fft.rfft(b, n_fft)
- A = torch.fft.rfft(a, n_fft)
-
- H = B / A
-
- return H
-
-
-def freq_domain_filter(x, H, n_fft):
-
- X = torch.fft.rfft(x, n_fft)
-
- # move H to same device as input x
- H = H.type_as(X)
-
- Y = X * H
-
- y = torch.fft.irfft(Y, n_fft)
-
- return y
-
-
-def approx_iir_filter(b, a, x):
- """Approimxate the application of an IIR filter.
-
- Args:
- b (Tensor): The numerator coefficients.
-
- """
-
- # round up to nearest power of 2 for FFT
- # n_fft = 2 ** math.ceil(math.log2(x.shape[-1] + x.shape[-1] - 1))
-
- n_fft = 2 ** torch.ceil(torch.log2(torch.tensor(x.shape[-1] + x.shape[-1] - 1)))
- n_fft = n_fft.int()
-
- # move coefficients to same device as x
- b = b.type_as(x).view(-1)
- a = a.type_as(x).view(-1)
-
- # compute complex response
- H = freqz(b, a, n_fft=n_fft).view(-1)
-
- # apply filter
- y = freq_domain_filter(x, H, n_fft)
-
- # crop
- y = y[: x.shape[-1]]
-
- return y
-
-
-def approx_iir_filter_cascade(
- b_s: List[torch.Tensor],
- a_s: List[torch.Tensor],
- x: torch.Tensor,
-):
- """Apply a cascade of IIR filters.
-
- Args:
- b_s (list[Tensor]): List of numerator tensors, each of shape (3,).
- a_s (list[Tensor]): List of denominator tensors, each of shape (3,).
- x (torch.Tensor): 1d Tensor.
- """
-
- if len(b_s) != len(a_s):
- raise RuntimeError(
- f"Must have same number of coefficients. Got b: {len(b_s)} and a: {len(a_s)}."
- )
-
- # round up to nearest power of 2 for FFT
- # n_fft = 2 ** math.ceil(math.log2(x.shape[-1] + x.shape[-1] - 1))
- n_fft = 2 ** torch.ceil(torch.log2(torch.tensor(x.shape[-1] + x.shape[-1] - 1)))
- n_fft = n_fft.int()
-
- # this could be done in parallel
- b = torch.stack(b_s, dim=0).type_as(x)
- a = torch.stack(a_s, dim=0).type_as(x)
-
- H = freqz(b, a, n_fft=n_fft)
- H = torch.prod(H, dim=0).view(-1)
-
- # apply filter
- y = freq_domain_filter(x, H, n_fft)
-
- # crop
- y = y[: x.shape[-1]]
-
- return y
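-
-
-# Minimal end-to-end sketch (all parameter values are assumptions): build a
-# two-band cascade and filter one second of noise at 44.1 kHz. Each set of
-# coefficients returned by biqaud has shape (3,), as the cascade expects.
-#   >>> x = torch.randn(44100)
-#   >>> b1, a1 = biqaud(torch.tensor([3.0]), torch.tensor([200.0]),
-#   ...                 torch.tensor([0.707]), 44100.0, "low_shelf")
-#   >>> b2, a2 = biqaud(torch.tensor([-4.0]), torch.tensor([4000.0]),
-#   ...                 torch.tensor([1.0]), 44100.0, "peaking")
-#   >>> y = approx_iir_filter_cascade([b1, b2], [a1, a2], x)
-#   >>> y.shape
-#   torch.Size([44100])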
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download The Shaadi Karke Phas Gaya Yaar Dual Audio Utorrent Free ((TOP)).md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download The Shaadi Karke Phas Gaya Yaar Dual Audio Utorrent Free ((TOP)).md
deleted file mode 100644
index 4329967275ff51b7ad8f0c46277a2edd1f1caecd..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download The Shaadi Karke Phas Gaya Yaar Dual Audio Utorrent Free ((TOP)).md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
How to Download Shaadi Karke Phas Gaya Yaar Dual Audio Utorrent Free
-
Shaadi Karke Phas Gaya Yaar is a 2006 Bollywood romantic comedy film starring Salman Khan and Shilpa Shetty as a married couple who face various challenges in their relationship. The film was directed by K.S. Adhiyaman and was a remake of the Tamil film Priyasakhi. The film received mixed reviews from critics and was a box office flop.
-
If you are a fan of Salman Khan and Shilpa Shetty, or if you enjoy watching light-hearted movies with some drama and humor, you might want to download Shaadi Karke Phas Gaya Yaar dual audio utorrent free. Utorrent is a popular software that allows you to download files from peer-to-peer networks. Dual audio means that you can choose between two languages for the audio track, such as Hindi and English.
-
To download Shaadi Karke Phas Gaya Yaar dual audio utorrent free, you will need to follow these steps:
-
-
Download and install utorrent from its official website: https://www.utorrent.com/
-
Search for Shaadi Karke Phas Gaya Yaar dual audio torrent file on any torrent site, such as The Pirate Bay, Kickass Torrents, or 1337x. Make sure the file has good seeders and leechers for faster download speed.
-
Click on the magnet link or download the torrent file and open it with utorrent.
-
Select the destination folder where you want to save the movie file and click OK.
-
Wait for the download to complete. You can check the progress and speed on utorrent.
-
Once the download is finished, you can open the movie file with any media player that supports dual audio, such as VLC or KMPlayer. You can switch between the languages by clicking on the audio icon on the player.
Shaadi Karke Phas Gaya Yaar is a film that explores the issues of marriage, love, and compatibility. The film revolves around Ayaan, a wealthy garage owner who falls in love with Ahana, a beautiful model. He tricks her into marrying him by using her diary that she left at his garage. However, their marriage soon turns sour as they realize that they have different expectations and values. Ahana wants to pursue her career and have an independent lifestyle, while Ayaan wants her to be a traditional housewife and mother. They also face pressure from their families and society. The film shows how they cope with their conflicts and try to save their marriage.
-
The film has some funny moments and emotional scenes that will keep you entertained. The chemistry between Salman Khan and Shilpa Shetty is also commendable. The songs of the film are composed by Sajid-Wajid and Daboo Malik, and are catchy and melodious. Some of the popular songs are "Tumko Dulhan Banayenge", "Deewane Dil Ko Jaane Ja", and "Shaadi Karke Phas Gaya Yaar". The film also has some supporting actors who add to the humor and drama, such as Reema Lagoo, Mohnish Behl, Shakti Kapoor, and Aashif Sheikh.
-
Shaadi Karke Phas Gaya Yaar is a film that will make you laugh, cry, and think about the meaning of marriage and love. If you are looking for a fun and romantic movie to watch with your partner or friends, you can download Shaadi Karke Phas Gaya Yaar dual audio utorrent free and enjoy it at your convenience.
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Filebandit Survey Bypasser [BEST].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Filebandit Survey Bypasser [BEST].md
deleted file mode 100644
index 2eec8d3d2d751dbf6060fcd1603f3f0d907a25d7..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Filebandit Survey Bypasser [BEST].md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
How to Bypass Filebandit Surveys and Download Files Easily
-
If you are looking for a way to bypass filebandit surveys and download files without completing annoying offers, you are not alone. Many people face this problem when they want to access premium content from file sharing sites like filebandit.net. But don't worry, there are some solutions that can help you bypass filebandit surveys and download files easily.
In this article, we will show you some of the best survey remover tools that can help you bypass filebandit surveys and other similar surveys online. These tools are easy to use and can save you a lot of time and frustration. So, let's get started!
-
-
What are Filebandit Surveys and Why Do They Exist?
-
Filebandit surveys are a type of online surveys that pop up when you try to download a file from filebandit.net or other similar sites. These surveys ask you to complete various tasks, such as filling out forms, watching videos, downloading apps, etc. in order to unlock the download link.
-
The reason why these surveys exist is because file sharing sites like filebandit.net use them as a source of income. They get paid by advertisers for every survey completed by the users. This way, they can offer free hosting and uploading services to their users.
-
However, these surveys can be very annoying and time-consuming for the users who just want to download their files quickly and easily. Some of these surveys may even contain malware or viruses that can harm your computer or device. That's why many users look for ways to bypass filebandit surveys and other similar surveys online.
-
-
-
How to Bypass Filebandit Surveys and Download Files Easily?
-
There are two main ways to bypass filebandit surveys and download files easily: using online survey remover tools or using offline survey remover software. Let's take a look at each of these methods in detail.
-
-
Using Online Survey Remover Tools
-
Online survey remover tools are websites that can help you bypass filebandit surveys and other similar surveys online by entering the URL of the survey page and generating a direct download link for you. These tools are very convenient and easy to use, as you don't need to download or install anything on your computer or device.
-
Some of the best online survey remover tools that can help you bypass filebandit surveys and other similar surveys online are:
-
-
Survey Bypass: This is one of the most popular and reliable online survey remover tools that can help you bypass filebandit surveys and other similar surveys online. To use this tool, simply visit http://www.surveybypass.com/, enter the URL of the survey page, and press the Go button. In no time, any survey available on the website will be bypassed and you will be provided with the direct download link for your file.[^1^]
-
Surveys Remover: This is another great online survey remover tool that can help you bypass filebandit surveys and other similar surveys online. To use this tool, simply visit http://www.surveysremover.com/, enter the URL of the survey page, and press the Download button. In no time, any survey available on the website will be removed and you will be able to download your file without any hassle.[^2^]
-
Bypass Survey: This is another useful online survey remover tool that can help you bypass filebandit surveys and other similar surveys online. To use this tool, simply visit http://bypasssurvey.com/, enter the URL of the survey page, and press the Bypass Survey button. In no time, any survey available on the website will be skipped and you will be able to access your file without any problem.[^3^]
-
-
-
Using Offline Survey Remover Software
-
Offline survey remover software are programs that you need to download and install on your computer or device in order to bypass filebandit surveys and
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Flysimware Beech 18 For Fsx Torrent Tpb Cradle Ubersetzten R.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Flysimware Beech 18 For Fsx Torrent Tpb Cradle Ubersetzten R.md
deleted file mode 100644
index eb44b591e004715c953d93d5ecdde740c6a30d03..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Flysimware Beech 18 For Fsx Torrent Tpb Cradle Ubersetzten R.md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
Flysimware Beech 18: A Classic Aircraft for FSX
-
If you are looking for a realistic and detailed simulation of a vintage aircraft, you might want to check out Flysimware's Beech 18 for FSX. This add-on recreates the iconic twin-engine aircraft that was used for various purposes, such as passenger transport, cargo delivery, military operations, and aerial photography. The Beech 18 was produced from 1937 to 1970, and more than 9,000 units were built.
-
Flysimware's Beech 18 for FSX features a high-quality 3D model with custom animations, dynamic shadows, and realistic reflections. The cockpit is fully functional and interactive, with custom gauges, switches, levers, and indicators. You can also choose from different variants and liveries of the Beech 18, such as the C-45 military version, the D-18S passenger version, and the Volpar Tri-Gear conversion. The add-on also includes custom sounds, effects, and flight dynamics that match the real aircraft's performance and behavior.
-
However, Flysimware's Beech 18 for FSX is not available on any official online store or platform. The only way to get it is to download it from a torrent site, such as The Pirate Bay (TPB). This is a risky and illegal method that may expose your computer to viruses, malware, or legal issues. Therefore, we do not recommend or endorse this option. If you want to enjoy Flysimware's Beech 18 for FSX safely and legally, you should wait for the official release or purchase it from a reputable source.
-
-
The Beech 18 prototype first flew on January 15, 1937. [^1^] The aircraft has used a variety of engines and has had a number of airframe modifications to increase gross weight and speed. At least one aircraft was modified to a 600-hp (447-kW) Pratt & Whitney R-1340 powerplant configuration. [^1^] The Beech 18 was originally designed as a civilian aircraft, but it soon attracted the attention of the military. During and after World War II, over 4,500 Beech 18s were used in military service under different designations, such as the C-45 Expeditor, the AT-7 Navigator, and the AT-11 Kansan. The Beech 18 served as a light transport, a light bomber, an aircrew trainer, a photo-reconnaissance aircraft, and a "mother ship" for target drones. [^1^] [^2^]
-
-
After the war, the Beech 18 became a popular "business aircraft" and "feeder airliner". It was used by many airlines, corporations, charter operators, and private owners around the world. It was also adapted for various civilian uses, such as aerial spraying, sterile insect release, fish stocking, dry-ice cloud seeding, aerial firefighting, air-mail delivery, ambulance service, movie productions, skydiving, freight delivery, weapon- and drug-smuggling, engine testbed, skywriting, banner towing, and stunt aircraft. [^1^] [^2^] The Beech 18 was also flown by many famous people, such as Howard Hughes, Charles Lindbergh, Arthur Godfrey, Tony Hulman, and Elvis Presley. [^2^]
-
The Beech 18 was manufactured continuously from 1937 to November 1969 (over 32 years), making it one of the world's most widely used light aircraft. Over 9,000 units were built in total. The Beech 18 was also produced under license in Japan by Shin Meiwa (later ShinMaywa) and in Canada by Canadian Car & Foundry (later Canadair). Some Beech 18s were also converted to seaplanes or tricycle landing gear. The Beech 18 is still flown today by many enthusiasts and collectors who appreciate its classic design and performance. Some Beech 18s are also preserved in museums or displayed as public art. [^1^] [^2^]
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Irmc S2 License Keyrar.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Irmc S2 License Keyrar.md
deleted file mode 100644
index f1d65ae00af45919535280789881ee7e55bcd84f..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Irmc S2 License Keyrar.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-
-
-
-
-
-
-
-
How to Install irmc s2 license key.rar on Your Fujitsu PRIMERGY Server
-
If you have a Fujitsu PRIMERGY server, you might want to install irmc s2 license key.rar on it. This file contains the license key for the iRMC S2 advanced pack, which enables graphical console redirection and remote storage via USB on your server. In this article, we will explain what irmc s2 license key.rar is, how to download it, how to upload it to your server, and how to use its advanced features.
-
What is irmc s2 license key.rar?
-
Before we get into how to install irmc s2 license key.rar on your server, let's first understand what it is and why you might need it.
iRMC S2 stands for integrated Remote Management Controller version 2. It is a chip that is embedded in your Fujitsu PRIMERGY server that allows you to control and monitor your server remotely, regardless of its system status . You can access the iRMC S2 web interface by entering its IP address in your browser. From there, you can perform various tasks such as power on/off or reset your server, view system information and logs, configure settings and alerts
What is irmc s2 license key.rar?
-
irmc s2 license key.rar is a compressed file that contains the license key for the iRMC S2 advanced pack. The iRMC S2 advanced pack is an optional feature that unlocks the graphical console redirection and remote storage via USB functionalities of iRMC S2 . These features allow you to access and control your server's graphical user interface (GUI) remotely, as well as mount a USB device or an ISO image on your server. This can be very useful for troubleshooting, installing, or updating your server's operating system or applications.
-
What are the benefits of installing irmc s2 license key.rar?
-
By installing irmc s2 license key.rar on your server, you can enjoy the following benefits:
-
-
You can access and control your server's GUI remotely, without the need for a physical monitor, keyboard, or mouse. This can save you time and money, especially if your server is located in a remote or inaccessible location.
-
You can mount a USB device or an ISO image on your server remotely, without the need for a physical CD/DVD drive or USB port. This can help you install or update your server's operating system or applications faster and easier.
-
You can improve the security and reliability of your server, as you can monitor and manage it from anywhere, anytime. You can also receive alerts and notifications from iRMC S2 in case of any issues or failures.
-
-
How to Download irmc s2 license key.rar?
-
Now that you know what irmc s2 license key.rar is and why you might want to install it on your server, let's see how to download it.
-
Where to find irmc s2 license key.rar?
-
You can find irmc s2 license key.rar on the Fujitsu website . You will need to register and log in to access the download page. Once you are there, you will need to enter the serial number of your server and agree to the terms and conditions. Then, you will be able to download irmc s2 license key.rar as a zip file.
-
How to extract irmc s2 license key.rar?
-
After you have downloaded irmc s2 license key.rar as a zip file, you will need to extract it using a software such as WinRAR or 7-Zip . To do this, follow these steps:
-
-
Right-click on the zip file and select "Extract Here" or "Extract to irmc_s2_license_key/"
-
A folder named "irmc_s2_license_key" will be created in the same location as the zip file
-
Open the folder and you will see a file named "irmc_s2_license.key"
-
This is the license key file that you will need to upload to your server
-
How to Upload irmc s2 license key.rar to Your Server?
-
Once you have extracted irmc s2 license key.rar and obtained the license key file, you will need to upload it to your server. To do this, you will need a USB device and access to the iRMC S2 web interface. Here are the steps to follow:
-
How to copy irmc_s2_license.key to a USB device?
-
You will need to copy the license key file to a USB device that is formatted in FAT32 or NTFS. To do this, follow these steps:
-
-
-
Insert the USB device into your computer
-
Open the folder where you extracted irmc s2 license key.rar
-
Copy the file irmc_s2_license.key
-
Paste it into the root directory of your USB device
-
Eject the USB device safely from your computer
-
-
How to connect the USB device to your server?
-
You will need to connect the USB device to your server's front or rear USB port. To do this, follow these steps:
-
-
Power off your server if it is running
-
Insert the USB device into one of the USB ports on your server
-
Power on your server and wait for it to boot up
-
-
How to access the iRMC S2 web interface?
-
You will need to access the iRMC S2 web interface using a browser on your computer. To do this, follow these steps:
-
-
Find out the IP address of iRMC S2 on your server. You can do this by checking the sticker on your server, using the iRMC S2 configuration utility, or using a network scanner tool
-
Open your browser and enter the IP address of iRMC S2 in the address bar
-
You will see a login page for the iRMC S2 web interface. Enter the default username and password, which are "admin" and "admin" respectively
-
You will see the main page of the iRMC S2 web interface. You can navigate through different menus and tabs to view and configure various settings and features of iRMC S2
-
How to upload and activate irmc_s2_license.key?
-
You will need to upload and activate the license key file using the iRMC S2 web interface. To do this, follow these steps:
-
-
On the main page of the iRMC S2 web interface, click on the "Configuration" tab
-
On the left menu, click on "License Management"
-
On the right panel, click on the "Browse" button and select the irmc_s2_license.key file from your USB device
-
Click on the "Upload" button and wait for the file to be uploaded
-
You will see a message saying "License file uploaded successfully"
-
Click on the "Activate" button and wait for the license to be activated
-
You will see a message saying "License activated successfully"
-
Click on the "OK" button and refresh the page
-
You will see that the iRMC S2 advanced pack features are now enabled and available for use
-
-
How to Use the Advanced Features of iRMC S2?
-
Now that you have installed and activated irmc s2 license key.rar on your server, you can use the advanced features of iRMC S2. Here are some examples of how to use them:
-
How to access and control your server's GUI remotely?
-
You can access and control your server's GUI remotely using the graphical console redirection feature of iRMC S2. This feature allows you to see and interact with your server's screen as if you were sitting in front of it. To do this, follow these steps:
-
-
On the main page of the iRMC S2 web interface, click on the "Remote Control" tab
-
On the left menu, click on "Graphical Console Redirection"
-
On the right panel, click on the "Start" button and wait for the connection to be established
-
You will see a new window pop up with your server's screen displayed
-
You can use your mouse and keyboard to control your server's GUI remotely
-
You can also use the toolbar at the top of the window to perform various actions such as power on/off or reset your server, send keyboard shortcuts, adjust screen settings, or take screenshots
-
To end the graphical console redirection session, click on the "Stop" button on the right panel or close the window
-
How to mount a USB device or an ISO image on your server?
-
You can mount a USB device or an ISO image on your server remotely using the remote storage via USB feature of iRMC S2. This feature allows you to access the files and folders on your USB device or ISO image as if they were on your server's local drive. To do this, follow these steps:
-
-
On the main page of the iRMC S2 web interface, click on the "Remote Control" tab
-
On the left menu, click on "Remote Storage via USB"
-
On the right panel, click on the "Browse" button and select the USB device or the ISO image from your computer
-
Click on the "Mount" button and wait for the device or image to be mounted
-
You will see a message saying "Remote storage mounted successfully"
-
You can now access the files and folders on your USB device or ISO image from your server's GUI remotely
-
To unmount the device or image, click on the "Unmount" button and wait for the process to complete
-
You will see a message saying "Remote storage unmounted successfully"
-
-
Conclusion
-
In this article, we have shown you how to install irmc s2 license key.rar on your Fujitsu PRIMERGY server. We have explained what irmc s2 license key.rar is, how to download it, how to upload it to your server, and how to use its advanced features. By installing irmc s2 license key.rar, you can access and control your server's GUI remotely, mount a USB device or an ISO image on your server, and improve the security and reliability of your server. We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to contact us.
-
FAQs
-
Here are some frequently asked questions about irmc s2 license key.rar and iRMC S2:
-
What is the default username and password for iRMC S2 web interface?
-
The default username and password for iRMC S2 web interface are "admin" and "admin" respectively. You can change them by going to Configuration > User Management > User Accounts on the iRMC S2 web interface.
-
How can I change the IP address of iRMC S2?
-
You can change the IP address of iRMC S2 by going to Configuration > Network > IPv4 Settings on the iRMC S2 web interface. You can also use the iRMC S2 configuration utility or a network scanner tool to change the IP address of iRMC S2.
-
How can I monitor the health and performance of my server using iRMC S2?
-
You can monitor the health and performance of your server using iRMC S2 by going to Status > System Status > System Information on the iRMC S2 web interface. You can also view system logs, sensor readings, event notifications, and power consumption data on iRMC S2.
-
How can I receive alerts and notifications from iRMC S2?
-
You can receive alerts and notifications from iRMC S2 by going to Configuration > Event Notification > Notification Settings on the iRMC S2 web interface. You can configure various options such as email, SNMP, syslog, or SMS for receiving alerts and notifications from iRMC S2.
-
How can I reset or update iRMC S2 firmware?
-
You can reset or update iRMC S2 firmware by going to Maintenance > Firmware Update > Firmware Update on the iRMC S2 web interface. You can also use the Fujitsu ServerView Suite DVD or a bootable USB device to reset or update iRMC S2 firmware.
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lacie D2 Dvd Driver Mac !!TOP!!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lacie D2 Dvd Driver Mac !!TOP!!.md
deleted file mode 100644
index 940a56f88d29362f73b74765e5b0e4a8bc919d82..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lacie D2 Dvd Driver Mac !!TOP!!.md
+++ /dev/null
@@ -1,33 +0,0 @@
-
-
How to Install and Use LaCie d2 DVD+RW with LightScribe on Mac
-
LaCie d2 DVD+RW with LightScribe is a desktop DVD burner that can write and read DVDs and CDs, as well as create labels on LightScribe discs using a laser. It is compatible with Mac OS X 10.3.9 or later, and requires a USB 2.0 or FireWire port. Here are the steps to install and use this device on your Mac:
To burn data or video files to a DVD or CD, you can use the built-in Finder or Disk Utility applications on your Mac, or any other third-party software that supports DVD burning.
-
To create a label on a LightScribe disc, you need to insert a LightScribe-compatible disc into the LaCie d2 DVD+RW with the label side down. Then, launch your LightScribe labeling software and follow the instructions to design and print your label. The label will be burned onto the disc by the laser in the device.
-
-
For more information, please refer to the user manual of the LaCie d2 DVD+RW with LightScribe[^1^] or visit the LaCie support website at https://www.lacie.com/support/.
Here are some tips and tricks to get the most out of your LaCie d2 DVD+RW with LightScribe:
Do not touch the label side of the LightScribe discs, as this may damage the coating and affect the quality of the label. Use a soft cloth to wipe off any dust or fingerprints before labeling.
-
Adjust the contrast and brightness settings of your LightScribe labeling software to get the best results. You can also choose different modes of printing, such as draft, normal, or best, depending on the quality and speed you prefer.
-
Store your LightScribe discs in a cool, dry place away from direct sunlight and heat sources. Exposure to light and heat may cause the label to fade over time.
-
If you encounter any problems with your device, such as not being recognized by your Mac, not ejecting discs, or making unusual noises, try the following troubleshooting steps:
-
-
Check the power and data connections and make sure they are secure.
-
Restart your Mac and try again.
-
Update your Mac OS X and your device firmware to the latest versions.
-
Use a different USB or FireWire port or cable.
-
Use a different disc or try cleaning the disc with a soft cloth.