How to Use Anti Deep Freeze V6 61 020 2822 to Unfreeze Your Computer
-
Deep Freeze is software that protects your computer from unwanted changes by restoring it to a frozen state every time you reboot. Sometimes, however, you may need to make permanent changes to your system or access files stored on the frozen drive. In that case, you can use Anti Deep Freeze V6 61 020 2822, a tool that can disable Deep Freeze and unfreeze your computer without requiring a password.
Anti Deep Freeze V6 61 020 2822 is compatible with Windows XP, Windows Vista, and Windows 7 (32 or 64 bit), and requires 10% free hard drive space[^1^]. It works with Deep Freeze Standard 6.61.020.2822, a version of Deep Freeze designed to protect computers from malware and accidental changes[^1^]. To use Anti Deep Freeze V6 61 020 2822, follow these steps:
-
-
Download Anti Deep Freeze V6 61 020 2822 from a reliable source. You can find it on some websites or online platforms that offer software downloads[^2^] [^3^] [^4^]. Make sure you scan the file for viruses before opening it.
-
Run Anti Deep Freeze V6 61 020 2822 as an administrator. You will see a window with a list of drives that are frozen by Deep Freeze. Select the drive that you want to unfreeze and click on "Unfreeze".
-
Wait for the process to complete. You will see a message that says "Unfreeze Successful". Click on "OK" and restart your computer.
-
After rebooting, you will notice that your computer is no longer frozen by Deep Freeze. You can now make any changes or access any files that you want. However, be careful not to delete or modify any important system files or settings.
-
If you want to freeze your computer again, you can run Deep Freeze Standard 6.61.020.2822 and enable it on the drive that you want to protect. You will need to enter a password to do so.
-
-
Anti Deep Freeze V6 61 020 2822 is a handy tool that can help you unfreeze your computer when you need to. However, it should be used with caution and only when necessary. Deep Freeze is a useful piece of software that can prevent your computer from being damaged by viruses, malware, or unwanted changes. Therefore, you should always keep it enabled unless you have a valid reason to disable it.
-
-
Some of the benefits of using Deep Freeze are that it can save you time and money by reducing the need for IT support and maintenance. It can also improve your security and privacy by preventing unauthorized access to your data and files. Moreover, it can enhance your productivity and performance by ensuring that your computer always runs smoothly and efficiently.
-
However, there are also some drawbacks of using Deep Freeze that you should be aware of. For example, it can prevent you from installing new software or updates that may be beneficial for your system. It can also erase any personal files or settings that you may have saved on the frozen drive. Furthermore, it can cause some problems if you forget your password or lose the tool that can disable it.
-
-
Therefore, you should always use Deep Freeze with care and responsibility. You should only freeze the drives that contain your system files and applications, and leave some space for your personal files on a separate drive or partition. You should also backup your important data regularly and keep a record of your password and the tool that can unfreeze your computer. Finally, you should only use Anti Deep Freeze V6 61 020 2822 when you absolutely need to, and not abuse it for malicious purposes.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bluedio Bluetooth Headset Driver Windows 7l ((LINK)).md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bluedio Bluetooth Headset Driver Windows 7l ((LINK)).md
deleted file mode 100644
index b5a2d0913117bac1054ee8b8ad160906f7e52507..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bluedio Bluetooth Headset Driver Windows 7l ((LINK)).md
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
How to Install Bluedio Bluetooth Headset Driver on Windows 7
-
If you have a Bluedio Bluetooth headset and want to use it with your Windows 7 computer, you may need to install the driver for it. A driver is a software program that allows your operating system to communicate with a Bluetooth device. Without the driver, your headset may not work properly or at all.
In this article, we will show you how to download and install the Bluedio Bluetooth headset driver on Windows 7. We will also provide some troubleshooting tips in case you encounter any problems.
-
Step 1: Download the Driver
-
The first step is to download the driver for your Bluedio Bluetooth headset. You can find the driver on the official website of Bluedio or on a third-party website that offers drivers for various devices. For example, you can use the link below to download the driver from Dell:
Alternatively, you can use a driver update tool that can automatically scan your computer and find the best driver for your headset. This can save you the time and hassle of searching for the right driver manually.
-
Step 2: Install the Driver
-
Once you have downloaded the driver, you need to install it on your computer. To do this, follow these steps:
-
-
Double-click on the downloaded file to launch the installation wizard.
-
Follow the on-screen instructions to complete the installation process.
-
Restart your computer if prompted.
-
-
After installing the driver, you should be able to use your Bluedio Bluetooth headset with your Windows 7 computer.
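If you want to double-check that the driver package actually landed on the system, you can enumerate the third-party drivers Windows knows about. The sketch below is one way to do that from Python by calling the built-in pnputil tool (included since Windows Vista/7); the search keywords are only assumptions, since the exact name Bluedio uses for its driver package may differ.

```python
# Minimal sketch (not Bluedio-specific): list installed third-party driver
# packages and look for anything Bluetooth/audio related. The keywords below
# are assumptions; adjust them to the driver name you actually installed.
import subprocess

def find_driver_packages(keywords=("bluetooth", "audio", "bluedio")):
    output = subprocess.run(
        ["pnputil", "-e"],  # enumerate installed third-party (OEM) driver packages
        capture_output=True, text=True, check=True
    ).stdout
    return [line.strip() for line in output.splitlines()
            if any(k in line.lower() for k in keywords)]

if __name__ == "__main__":
    for line in find_driver_packages():
        print(line)
```

If nothing relevant shows up, the installer may not have completed correctly and it is worth running it again as an administrator.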
-
Step 3: Pair the Headset with the Computer
-
The final step is to pair your headset with your computer. This means establishing a wireless connection between them so that they can communicate with each other. To do this, follow these steps:
-
-
Turn on your Bluedio Bluetooth headset and make sure it is in pairing mode. You can usually do this by pressing and holding a button on the headset until you hear a beep or see a flashing light.
-
On your Windows 7 computer, click on the Start button and then click on Devices and Printers.
-
Click on Add a Device and wait for your computer to scan for nearby Bluetooth devices.
-
Select your Bluedio Bluetooth headset from the list of devices and click on Next.
-
If prompted, enter a passcode or confirm a pairing request on your headset and/or computer.
-
Click on Finish to complete the pairing process.
-
-
Once paired, you should be able to use your Bluedio Bluetooth headset as an audio device on your Windows 7 computer. You can adjust the volume, mute, or switch between different audio sources using the controls on your headset or computer.
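If the headset pairs but you hear no sound, it can help to confirm that Windows actually registered it as a sound device. The small sketch below queries WMI through the built-in wmic tool (available on Windows 7); "bluedio" is an assumed substring of the device name, so adjust it to whatever name appears in Devices and Printers on your machine.

```python
# Minimal sketch: check whether the paired headset shows up as a Windows
# sound device. "bluedio" is an assumed name substring, not the exact
# device name Windows will report.
import subprocess

result = subprocess.run(
    ["wmic", "sounddev", "get", "Name,Status"],
    capture_output=True, text=True
)
for line in result.stdout.splitlines():
    if "bluedio" in line.lower() or "bluetooth" in line.lower():
        print("Found audio device:", line.strip())
```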
-
Troubleshooting Tips
-
If you encounter any problems while installing or using your Bluedio Bluetooth headset on Windows 7, here are some tips that may help you:
-
-
Make sure your Bluedio Bluetooth headset is fully charged before using it.
-
Make sure your Windows 7 computer has a Bluetooth adapter or dongle that supports Bluetooth 4.0 or higher.
-
Make sure your Windows 7 computer has the latest updates installed.
-
Make sure your Bluedio Bluetooth headset and your Windows 7 computer are within range of each other and there are no obstructions or interference between them.
-
Make sure your Bluedio Bluetooth headset is not paired with another device at the same time as your Windows 7 computer.
-
If you have multiple audio devices connected to your Windows 7 computer, make sure you select your Bluedio Bluetooth headset as the default playback and recording device in the Sound settings.
-
If you have any other drivers or software that may conflict with your Bluedio Bluetooth headset driver, try uninstalling or disabling them temporarily.
-
If none of the above tips work, you may want to contact Bluedio's customer support or the seller of your headset for further assistance.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chennai vs China (7aum Arivu) Full Movie Download Link in Hindi 720p Watch the Epic Saga of a Martial Arts Legend.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chennai vs China (7aum Arivu) Full Movie Download Link in Hindi 720p Watch the Epic Saga of a Martial Arts Legend.md
deleted file mode 100644
index f53d95981bee3a230d17bb38d2765529692b34bc..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chennai vs China (7aum Arivu) Full Movie Download Link in Hindi 720p Watch the Epic Saga of a Martial Arts Legend.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
Chennai vs China: A Thrilling Action Movie You Don't Want to Miss
-
If you are a fan of action, sci-fi and thriller movies, then you should definitely check out Chennai vs China, a 2011 Hindi dubbed movie that will keep you on the edge of your seat. Chennai vs China is the Hindi version of the Tamil movie 7aum Arivu, which means "seventh sense". The movie is directed by A.R. Murugadoss, who is known for his blockbuster movies like Ghajini, Thuppakki and Holiday. The movie stars Suriya, Shruti Haasan and Johnny Nguyen in the lead roles.
-
What is Chennai vs China about?
-
The plot of the movie
-
The movie revolves around two parallel stories that are connected by a common thread. The first story is about Bodhidharma, a legendary martial arts master and healer who lived in the 6th century. He was sent by his guru to China to spread Buddhism and teach martial arts. He became a revered figure in China and his teachings are still followed by many people. He also discovered a way to manipulate his body's energy and activate his seventh sense, which gave him extraordinary abilities.
-
The second story is about Aravind, a genetic engineering student who lives in Chennai in the present day. He is in love with Subha, a research scholar who is working on a project to revive the DNA of Bodhidharma from a sample preserved in a Chinese temple. She believes that Bodhidharma's DNA can help India fight against a deadly virus that is being unleashed by China as a biological weapon. However, she also has a hidden agenda that involves Aravind.
-
The movie follows how Aravind and Subha try to stop the virus attack and how they discover their connection to Bodhidharma. The movie also shows how Bodhidharma's legacy is being misused by some people for their own selfish motives.
-
The cast and crew of the movie
-
The movie features some talented actors who have delivered impressive performances. Suriya plays the dual role of Bodhidharma and Aravind with ease and charisma. He showcases his versatility as an actor by portraying two different characters with different personalities and emotions. He also does some amazing stunts and fights that will leave you awestruck.
-
Shruti Haasan plays the role of Subha, who is a smart and strong-willed woman who has a passion for science and history. She also has a romantic side that she expresses with Aravind. She looks beautiful and confident in her role.
-
Johnny Nguyen plays the role of Dong Lee, who is a Chinese assassin sent by his government to spread the virus in India. He is also a skilled martial artist who can match Bodhidharma's moves. He is ruthless and cunning in his mission.
-
The movie also has some supporting actors like Ashwin Kakumanu, Abhinaya, Avinash, Guinnes Pakru and others who play important roles in the story.
-
The movie is directed by A.R. Murugadoss, who has a knack for making engaging and entertaining movies that have a social message. He has also written the story and screenplay of the movie along with Subha (the duo who also wrote Thuppakki). The movie has some stunning visuals and cinematography by Ravi K. Chandran. The music of the movie is composed by Harris Jayaraj, who has given some catchy and melodious songs that suit the mood of the movie. The editing of the movie is done by Anthony Gonsalves.
-
chennai vs china hindi movie online watch zee5
-chennai vs china 2011 hindi tamil dual audio download pogolinks
-chennai vs china suriya shruti haasan action thriller film
-chennai vs china bodhidharma legend 6th century martial arts
-chennai vs china 2014 hindi dubbed version 7 aum arivu
-chennai vs china full hd movie free download filmyzilla
-chennai vs china genetic engineering student virus attack by china
-chennai vs china johnny nguyen villain role martial arts expert
-chennai vs china ar murugadoss director science fiction film
-chennai vs china hindi tamil mkv format multiple qualities
-chennai vs china watch online free streaming mx player
-chennai vs china 480p 720p 1080p web dl esub download
-chennai vs china south indian movie hindi audio best quality
-chennai vs china ddp5.1 h.264 high resolution download links
-chennai vs china ancient skills legend modern hero story
-chennai vs china action scenes suriya stunts choreography
-chennai vs china shruti haasan research scholar love interest
-chennai vs china reviews ratings imdb rotten tomatoes
-chennai vs china trailer teaser songs videos youtube
-chennai vs china cast crew details wiki bio
-chennai vs china box office collection budget hit or flop
-chennai vs china awards nominations national filmfare siima
-chennai vs china behind the scenes making of the film
-chennai vs china interesting facts trivia unknown secrets
-chennai vs china fan reactions memes tweets social media
-chennai vs china similar movies recommendations suggestions
-chennai vs china sequel plans updates news rumors
-chennai vs china netflix amazon prime hotstar availability
-chennai vs china subtitles english hindi tamil download srt
-chennai vs china torrent magnet link direct download link
-chennai vs china movie scenes clips highlights best moments
-chennai vs china movie quotes dialogues punchlines one liners
-chennai vs china movie poster wallpaper images photos gallery
-chennai vs china movie theme music background score composer
-chennai vs china movie analysis breakdown explanation review
-chennai vs china movie comparison original vs dubbed version
-chennai vs china movie controversy issues criticism backlash
-chennai vs china movie inspired by true events history facts
-chennai vs china movie references easter eggs hidden clues
-chennai vs china movie mistakes errors goofs bloopers
-
Why should you watch Chennai vs China?
-
The action scenes and stunts
-
One of the main reasons to watch Chennai vs China is the action scenes and stunts that are performed by Suriya and Johnny Nguyen. The movie has some breathtaking sequences that involve hand-to-hand combat, sword fighting, parkour, car chases, explosions and more. The action choreography is done by Peter Hein, who is one of the best stunt directors in India. The action scenes are realistic and thrilling without being over-the-top or unrealistic.
-
The sci-fi and thriller elements
-
Another reason to watch Chennai vs China is the sci-fi and thriller elements that add to the excitement and suspense of the movie. The movie deals with some interesting concepts like genetic engineering, DNA manipulation, bio-warfare, hypnosis, mind control and more. The movie also has some twists and turns that will keep you guessing till the end. The movie also raises some questions about ethics, morality, patriotism and history that will make you think.
-
The cultural and historical references
-
A third reason to watch Chennai vs China is the cultural and historical references that enrich the story and give it a unique flavor. The movie showcases some aspects of Indian and Chinese culture that are fascinating and informative. The movie also pays tribute to Bodhidharma, who is an important figure in both countries' history. The movie also explores some themes like Buddhism, martial arts, medicine, spirituality and more that are relevant to both cultures.
-
How to download Chennai vs China in Hindi 720p?
-
The legal and safe way
-
If you want to download Chennai vs China in Hindi 720p in a legal and safe way, then you should opt for an online streaming platform that has the rights to show the movie. One such platform is ZEE5, which is a popular OTT service that offers a variety of content across genres and languages. You can watch Chennai vs China on ZEE5 by subscribing to one of their plans that suit your budget and preferences.
-
To download Chennai vs China on ZEE5, you need to follow these steps:
-
-
Download the ZEE5 app on your device or visit their website.
-
Create an account or log in with your existing account.
-
Select your preferred plan and make the payment.
-
Search for Chennai vs China on ZEE5 or browse through their categories.
-
Click on the download icon on the bottom right corner of the screen.
-
Select your desired video quality (720p) and click on download again.
-
Wait for the download to finish and enjoy watching Chennai vs China offline.
-
-
The illegal and risky way
-
If you want to download Chennai vs China in Hindi 720p in an illegal and risky way, then you should be aware of the consequences that may follow. Downloading movies from unauthorized sources like torrent sites or piracy websites is not only unethical but also illegal. You may face legal action from the makers or distributors of the movie or from the government authorities for violating copyright laws. You may also expose your device to malware or viruses that can harm your data or privacy.
-
However, if you still want to take this risk, then you can try searching for Chennai vs China on torrent sites or piracy websites that claim to offer free downloads of movies. But be careful as these sites may not have genuine links or may have low-quality videos or audio.
-
To download Chennai vs China from these sites, you need to follow these steps:
-
-
Find a torrent site or piracy website that has Chennai vs China available for download.
-
Click on the link or magnet link that corresponds to Chennai vs China in Hindi 720p.
-
Download a torrent client software like uTorrent or BitTorrent on your device if you don't have one already.
-
Open the torrent client software and add the link or magnet link to it.
-
Wait for the download to finish and enjoy watching Chennai vs China offline.
-
-
Conclusion
-
FAQs
-
Here are some frequently asked questions about Chennai vs China that you may have.
-
-
Q: Is Chennai vs China based on a true story?
-
A: No, Chennai vs China is not based on a true story. It is a fictional story that is inspired by some historical and scientific facts. The movie uses the legend of Bodhidharma as a base for its plot, but it is not a biopic or a documentary of his life.
-
Q: Is Chennai vs China a remake of any other movie?
-
A: No, Chennai vs China is not a remake of any other movie. It is an original movie that is made in Tamil and dubbed in Hindi. However, some scenes and concepts of the movie may be similar to some Hollywood movies like The Matrix, Inception, The Bourne Identity and others.
-
Q: What is the meaning of 7aum Arivu?
-
A: 7aum Arivu is the original title of the movie in Tamil. It means "seventh sense" in English. It refers to the ability to manipulate one's body's energy and activate one's latent potential, which is shown by Bodhidharma and Aravind in the movie.
-
Q: What is the box office collection of Chennai vs China?
-
A: Chennai vs China was a commercial success at the box office. It collected around ₹220 crore worldwide, making it one of the highest-grossing Tamil movies of all time. It also received positive reviews from critics and audiences for its story, direction, performances and action.
-
Q: Where can I watch Chennai vs China online?
-
A: You can watch Chennai vs China online on ZEE5, which is an online streaming platform that has the rights to show the movie. You can also download it from ZEE5 in Hindi 720p quality. However, you should avoid downloading it from illegal sources like torrent sites or piracy websites, as they may have low-quality videos or audio or may contain malware or viruses.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Easy Anti Cheat Download Mac.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Easy Anti Cheat Download Mac.md
deleted file mode 100644
index e83e4df221344c59f090783a19819957209ca54f..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Easy Anti Cheat Download Mac.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
How to Install Easy Anti-Cheat on Mac
-
Easy Anti-Cheat is a service that prevents cheating in multiplayer PC games. It is used by many popular games, such as Fortnite, Apex Legends, Rust, and more. If you want to play these games on your Mac, you will need to install Easy Anti-Cheat first.
-
Unfortunately, Easy Anti-Cheat does not have a native Mac version. However, there are some ways to run it on your Mac using third-party software. Here are two methods you can try:
Method 1: Use CrossOver
-
CrossOver is a program that allows you to run Windows applications on your Mac without installing Windows. It works by creating a virtual Windows environment on your Mac and running the applications inside it. You can use CrossOver to install and run Easy Anti-Cheat on your Mac.
-
To use CrossOver, follow these steps:
-
-
Download and install CrossOver from the CodeWeavers website.
-
Launch CrossOver and click on the "Install a Windows Application" button.
-
Type "Easy Anti-Cheat" in the search box and select it from the list.
-
Click on the "Choose Installer File" button and browse to the location of the EasyAntiCheat_Setup.exe file. You can find this file inside the game's installation folder, under the "EasyAntiCheat" subfolder.
-
Click on the "Install" button and follow the instructions on the screen.
-
Once the installation is complete, you can launch Easy Anti-Cheat from the CrossOver interface or from the game itself.
-
-
Method 2: Use Boot Camp
-
Boot Camp is a utility that comes with your Mac and allows you to install Windows on a separate partition of your hard drive. You can then switch between Mac OS and Windows by restarting your computer. You can use Boot Camp to install and run Easy Anti-Cheat on your Mac.
-
To use Boot Camp, follow these steps:
-
-
Open Boot Camp Assistant, which comes preinstalled on your Mac (you can find it in the Utilities folder); see https://support.apple.com/boot-camp for documentation and Windows support drivers. You will also need a Windows installation disc or USB drive.
-
Launch Boot Camp Assistant and follow the instructions on the screen. You will need to create a partition for Windows, format it, and install Windows on it.
-
Once Windows is installed, restart your computer and select Windows as your startup disk.
-
Download and install Easy Anti-Cheat from https://www.easy.ac/en-us/. You can also find it inside the game's installation folder, under the "EasyAntiCheat" subfolder.
-
Launch Easy Anti-Cheat from the Windows Start menu or from the game itself.
-
-
Note that using Boot Camp will require more disk space and may affect your Mac's performance and battery life. You will also need to restart your computer every time you want to switch between Mac OS and Windows.
-
Conclusion
-
Easy Anti-Cheat is a service that prevents cheating in multiplayer PC games. It does not have a native Mac version, but you can use third-party software to run it on your Mac. You can use CrossOver to run Easy Anti-Cheat in a virtual Windows environment, or use Boot Camp to install Windows on a separate partition of your hard drive. Both methods have their pros and cons, so you can choose the one that suits you best.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Easyvcw V1.51.7z Learn How to Use This Voice Chat Software in Minutes.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Easyvcw V1.51.7z Learn How to Use This Voice Chat Software in Minutes.md
deleted file mode 100644
index 106850519da37e4f7b921806c3e2c7feda9d8c95..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Easyvcw V1.51.7z Learn How to Use This Voice Chat Software in Minutes.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Easyvcw V1.51.7z: A Simple and Powerful Tool for Creating Virtual Credit Cards
-
Have you ever wanted to shop online without exposing your real credit card information? Have you ever wished you could create multiple credit cards with different limits and expiration dates for different purposes? Have you ever wondered how to protect yourself from identity theft, fraud, and unauthorized charges when using your credit card online?
If you answered yes to any of these questions, then you might be interested in Easyvcw V1.51.7z, a simple and powerful tool for creating virtual credit cards.
-
What is Easyvcw V1.51.7z and why do you need it?
-
Easyvcw V1.51.7z is a software application that allows you to generate virtual credit cards (VCCs) that can be used for online transactions. A VCC is a temporary and disposable credit card number that is linked to your real credit card account, but has its own limit, expiration date, and security code.
-
By using a VCC, you can avoid exposing your real credit card information to merchants, hackers, or third parties, thus reducing the risk of identity theft, fraud, and unauthorized charges. You can also create multiple VCCs for different purposes, such as online shopping, subscriptions, trials, or donations, and control how much and how long you want to spend on each one.
-
The features and benefits of Easyvcw V1.51.7z
-
Easyvcw V1.51.7z is one of the best tools for creating VCCs because it offers the following features and benefits:
-
-
It supports various types of credit cards, such as Visa, Mastercard, American Express, Discover, JCB, Diners Club, etc.
-
It allows you to customize the limit, expiration date, and security code of each VCC.
-
It generates valid and working VCCs that can pass verification checks by most online merchants.
-
It provides a user-friendly interface that is easy to use and navigate.
-
It does not require any registration or personal information to use.
-
It is compatible with Windows XP/Vista/7/8/10 operating systems.
-
It is free to download and use.
-
-
How to download and install Easyvcw V1.51.7z
-
To download and install Easyvcw V1.51.7z on your computer, follow these simple steps:
Save the file "Easyvcw_V1_51_7z.exe" on your computer.
-
Run the file "Easyvcw_V1_51_7z.exe" and follow the instructions on the screen.
-
Once the installation is complete, launch the program from your desktop or start menu.
-
-
How to use Easyvcw V1.51.7z to create virtual credit cards
-
To use Easyvcw V1.51.7z to create VCCs, follow these simple steps:
-
-
Select the type of credit card you want to create from the drop-down menu.
-
Enter the limit, expiration date, and security code of the VCC you want to create.
-
Click on the "Generate" button to create the VCC.
-
Copy the VCC number, expiration date, and security code from the program window.
-
Use the VCC for your online transaction as you would use a regular credit card.
-
-
The advantages and disadvantages of using virtual credit cards
-
Using VCCs can have many advantages and disadvantages depending on your needs and preferences. Here are some of them:
-
The pros of using virtual credit cards
-
Security and privacy
-
VCCs can enhance your security and privacy when shopping online by preventing your real credit card information from being exposed or stolen by hackers or third parties. You can also avoid unwanted charges or subscriptions by setting limits or expiration dates on your VCCs.
-
Convenience and flexibility
-
VCCs can offer you convenience and flexibility when shopping online by allowing you to create multiple credit cards for different purposes or merchants without affecting your real credit card account or limit. You can also use VCCs for international transactions without worrying about currency conversion fees or exchange rates.
-
Savings and discounts
-
VCCs can help you save money when shopping online by allowing you to take advantage of discounts, coupons, cashback offers, or free trials that require a credit card number without committing to a long-term contract or subscription. You can also avoid annual fees or interest charges that may apply to your real credit card account.
-
The cons of using virtual credit cards
-
Limited acceptance and compatibility
-
VCCs may not be accepted or compatible with some online merchants or platforms that require additional verification or authentication methods such as 3D Secure or Verified by Visa/Mastercard SecureCode. You may also encounter issues with refunds or cancellations if you use a VCC that has expired or reached its limit.
-
Potential fees and charges
-
VCCs may incur fees or charges depending on your real credit card provider or issuer such as foreign transaction fees or cash advance fees if you use a debit card or a prepaid card as the source of funds for your VCCs. You may also be liable for any fraudulent or unauthorized transactions that occur on your real credit card account if you do not report them in time.
-
Customer service and dispute resolution
-
VCCs may not provide adequate customer service or dispute resolution options if you encounter any problems or issues with your online transactions such as delivery delays, defective products, incorrect charges, etc. You may have to contact your real credit card provider or issuer instead of the online merchant or platform for assistance or resolution.
-
The best practices and tips for using virtual credit cards
-
To make the most out of using VCCs, here are some best practices and tips that you should follow:
-
Choose a reliable and reputable provider
-
Not all providers or issuers of VCCs are trustworthy or legitimate. Some may offer fake or invalid VCCs that do not work or may compromise your security or privacy. Therefore, you should choose a reliable and reputable provider such as Easyvcw V1.51.7z that offers valid and working VCCs that can pass verification checks by most online merchants.
-
Set appropriate limits and expiration dates
-
You should set appropriate limits and expiration dates on your VCCs based on your needs and preferences. For example, if you want to use a VCC for a one-time purchase, you should set a low limit and a short expiration date to avoid unwanted charges or subscriptions. If you want to use a VCC for a recurring payment, you should set a higher limit and a longer expiration date to ensure continuity of service.
-
Keep track of your transactions
You should keep track of your transactions and statements involving your VCCs as well as your real credit card account to monitor your spending habits and budget accordingly. You should also check for any errors or discrepancies in your transactions or statements such as double charges, unauthorized charges, or incorrect amounts and report them to your provider or issuer as soon as possible.
-
Report any issues or frauds immediately
-
You should report any issues or frauds involving your VCCs or your real credit card account to your provider or issuer immediately. You should also notify the online merchant or platform where you used your VCC and request a refund or cancellation if applicable. You should also change your VCC number, limit, expiration date, or security code if you suspect that your VCC has been compromised or misused.
-
Conclusion
-
Easyvcw V1.51.7z is a simple and powerful tool for creating virtual credit cards that can help you shop online securely and conveniently. By using Easyvcw V1.51.7z, you can generate valid and working VCCs that can be used for online transactions without exposing your real credit card information. You can also customize the limit, expiration date, and security code of each VCC according to your needs and preferences. However, you should also be aware of the advantages and disadvantages of using VCCs and follow some best practices and tips to make the most out of them.
-
FAQs
-
Here are some frequently asked questions about Easyvcw V1.51.7z and virtual credit cards:
-
-
Q: Is Easyvcw V1.51.7z safe to use?
-A: Yes, Easyvcw V1.51.7z is safe to use as it does not require any registration or personal information to use. It also does not store or share your real credit card information or your VCCs with anyone.
-
Q: How many VCCs can I create with Easyvcw V1.51.7z?
-A: You can create as many VCCs as you want with Easyvcw V1.51.7z as long as you have enough funds in your real credit card account.
-
Q: Can I use my VCCs for offline transactions?
-A: No, you can only use your VCCs for online transactions that do not require a physical card or a chip reader.
-
Q: Can I reuse my VCCs for multiple transactions?
-A: Yes, you can reuse your VCCs for multiple transactions as long as they have not expired or reached their limit.
-
Q: Can I delete or cancel my VCCs?
-A: Yes, you can delete or cancel your VCCs by changing their limit, expiration date, or security code to zero or invalid values.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/A Pdf Data Extractor Keygen 15 !FULL!.md b/spaces/1gistliPinn/ChatGPT4/Examples/A Pdf Data Extractor Keygen 15 !FULL!.md
deleted file mode 100644
index b0d173b82323484fa7f814e27d85b8f04ce35967..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/A Pdf Data Extractor Keygen 15 !FULL!.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
-Why not use the DSP's PDF Data Extractor software...? Get it now and keep on working....
-
-IBM DB2 for i Extracts Data from PDF Files from PDF files in the same way as the DB2 for i Extracts from PDFs in DB2 for i Edition, which includes a PDF Reader.. The DB2 for i Extracts from PDF in DB2 for i Edition provides functionality that is similar to the DB2 for i Extracts from PDF in DB2 for. DB2 for i Extracts from PDF. with DB2 for i v9 Release 3.. The db2 extract operation is similar to the db2 extract operation that is used in. All you need is the DB2 for i PDF Reader,. If you are ready to start extracting from PDF, click on Start Extracting..
-
-PDF data extraction tools and utilities, specially designed for business data extraction from PDF. An easy-to-use tool for the. You should not use this utility to extract data from other formats.. In PDF Data Extractor (PDFE), you can extract data from a PDF file. in PDF Data Extractor. Since it is a small utility, PDFE. For more information, please refer to the following URL:.
-
-PDF Converter & Extractor 3.0: This is a professional PDF tool to extract text information from PDF file. You can freely select the area you want to extract, then press the Extract button to get the text information. PDF Converter & Extractor is a handy software tool to convert PDF files into text files or HTML files.. For more information, please refer to the following URL:.
-
-PDF Data Extractor:. The PDF Extractor utility allows you to extract or copy text from a PDF file in just one mouse click.. Our PDF Data Extractor utility is completely. A panel appears with all the. All you need is the PDF Data Extractor utility,. The PDF Data Extractor utility converts any PDF file into a. This software works on all types of PDF files: Adobe PDF,.
-
-pdf extractor: Open the PDF with software like Adobe Reader, then use the PDF extractor software to extract all text from the PDF file. (You can get a PDF extractor here.)
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Clc Genomics Workbench 7 Crack Full.md b/spaces/1gistliPinn/ChatGPT4/Examples/Clc Genomics Workbench 7 Crack Full.md
deleted file mode 100644
index b0032d5c11bb3c137a853655789051c3f528f410..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Clc Genomics Workbench 7 Crack Full.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-bioinformatics software, CLC bio, Geneious, genome assembly, ... than once I have bought programs anew at full price because I let the maintenance period expire. ... With software like CLC Genomics Workbench v7, I have been able to ... Torrent sequence data from an Escherichia coli—BWA (0.6.2-r126), ...
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Emc Style Works Xt !EXCLUSIVE! Download Full 441.md b/spaces/1gistliPinn/ChatGPT4/Examples/Emc Style Works Xt !EXCLUSIVE! Download Full 441.md
deleted file mode 100644
index a8670c19da49b8175692981e76b7b5135fce922d..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Emc Style Works Xt !EXCLUSIVE! Download Full 441.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Epson printer prompt unit end? Service request? ... EPSON Download RESET Link Address 2 ... Contact us, buy printer reset program, easy reset the printer.
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become a Dynamons Master in this Pokmon-Like Android Game.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become a Dynamons Master in this Pokmon-Like Android Game.md
deleted file mode 100644
index 36a4aeacf09ac6f14891f511a64493d548902e87..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become a Dynamons Master in this Pokmon-Like Android Game.md
+++ /dev/null
@@ -1,137 +0,0 @@
-
-
Dynamons World APK: A Fun and Fast-Paced Pokemon-like Game for Android
-
If you are a fan of Pokemon games, you might be interested in trying out Dynamons World APK, a game that plays much like Pokemon but at a faster pace and without requiring deep knowledge of every creature you summon. In this game, you can catch, train, and battle with dozens of unique Dynamons, explore an open world full of surprises, and challenge your friends and players worldwide in online PvP battles. Sounds exciting, right? In this article, we will tell you everything you need to know about Dynamons World APK, including how to download and install it on your Android device, how to play it, what features it offers, how it compares with other Pokemon games, and more. So, let's get started!
-
How to Download and Install Dynamons World APK on Your Android Device
-
One of the best things about Dynamons World APK is that it is free to download and play. You can get it from APKCombo, a website that provides safe and fast downloads of various Android games and apps. Here are the steps you need to follow to get Dynamons World APK on your device:
Go to the APKCombo website and search for "Dynamons World".
-
Select the game from the search results and click on "Download APK".
-
Choose a download server that is closest to your location and wait for the download to finish.
-
Once the download is complete, locate the file on your device and tap on it to install it.
-
If you see a warning message that says "Install blocked", go to your device settings and enable "Unknown sources" under security options.
-
After enabling unknown sources, go back to the file and tap on it again to install it.
-
Wait for the installation to finish and then launch the game from your app drawer or home screen.
-
-
Congratulations! You have successfully installed Dynamons World APK on your Android device. Now you can enjoy playing this fun and addictive game anytime you want.
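As an alternative to tapping through the on-device installer, you can also sideload the same APK from a computer with adb. The sketch below is only a rough example under the assumption that adb (Android platform-tools) is installed and USB debugging is enabled on your device; the file name is a placeholder for whatever name your download was saved under.

```python
# Minimal sketch: sideload a downloaded APK over adb instead of the on-device
# installer. Assumes adb is on the PATH and USB debugging is enabled; the APK
# file name below is a hypothetical placeholder.
import subprocess

APK_PATH = "dynamons_world.apk"  # hypothetical file name

subprocess.run(["adb", "devices"], check=True)                   # confirm the device is visible
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)   # -r replaces/updates an existing install
```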
-
How to Play Dynamons World APK
-
Dynamons World APK is a game that is easy to learn but hard to master. It has simple gameplay mechanics, intuitive controls, and helpful tips that will guide you through your adventure. Here are some basic things you need to know about how to play Dynamons World APK:
-
Choose Your Starter Dynamon and Explore the World
-
At the beginning of the game, you will be asked to choose your starter Dynamon from three options: Water, Fire, or Plant. Each type has its own strengths and weaknesses, so choose wisely. Water Dynamons are good against Fire, but weak against Plant. Fire Dynamons are good against Plant, but weak against Water. Plant Dynamons are good against Water, but weak against Fire. You can also see the stats and skills of each Dynamon before you make your decision.
-
After choosing your starter Dynamon, you will enter the world of Dynamons, a vast and colorful land full of mysteries and dangers. You will meet various characters, such as your mentor Captain Dave, your rival Ryan, and your friends Eva and Jake. You will also encounter different enemies, such as the evil Dynamon Masters, who want to use Dynamons for their own selfish purposes. You will have to stop them and save the world from their evil plans.
-
The world of Dynamons is divided into several regions, each with its own theme and environment. You can explore these regions by moving around the map and interacting with objects and people. You can also find hidden items, secrets, and surprises along the way. Some regions are locked at first, but you can unlock them by completing certain quests or reaching certain levels.
-
Catch and Train Dozens of Unique Dynamons
-
One of the main goals of the game is to catch and train as many Dynamons as you can. There are over 50 different Dynamons in the game, each with its own appearance, personality, type, stats, and skills. You can find them in various locations, such as grasslands, forests, caves, deserts, mountains, and more. Some Dynamons are common, while others are rare or even legendary.
-
To catch a Dynamon, you need to weaken it first by battling it with your own Dynamon. You can use skill cards to attack, defend, heal, or buff your Dynamon. Each skill card has a cost and a cooldown time, so use them wisely. Once the enemy Dynamon's health is low enough, you can throw a capture device at it to try to catch it. The capture device has a success rate that depends on several factors, such as the enemy's health, level, type, and rarity. If you succeed, you will add the captured Dynamon to your team. If you fail, you can try again until you run out of capture devices or the enemy escapes.
-
dynamons world apk pokemon mod
-dynamons world apk pokemon download
-dynamons world apk pokemon game
-dynamons world apk pokemon online
-dynamons world apk pokemon hack
-dynamons world apk pokemon cheats
-dynamons world apk pokemon free
-dynamons world apk pokemon latest version
-dynamons world apk pokemon update
-dynamons world apk pokemon play store
-dynamons world apk pokemon adventure
-dynamons world apk pokemon battle
-dynamons world apk pokemon evolution
-dynamons world apk pokemon trainer
-dynamons world apk pokemon characters
-dynamons world apk pokemon types
-dynamons world apk pokemon skills
-dynamons world apk pokemon guide
-dynamons world apk pokemon tips
-dynamons world apk pokemon walkthrough
-dynamons world apk pokemon review
-dynamons world apk pokemon rating
-dynamons world apk pokemon gameplay
-dynamons world apk pokemon graphics
-dynamons world apk pokemon sound
-dynamons world apk pokemon music
-dynamons world apk pokemon story
-dynamons world apk pokemon fun
-dynamons world apk pokemon challenge
-dynamons world apk pokemon strategy
-dynamons world apk pokemon level up
-dynamons world apk pokemon unlock
-dynamons world apk pokemon collect
-dynamons world apk pokemon explore
-dynamons world apk pokemon map
-dynamons world apk pokemon quests
-dynamons world apk pokemon rewards
-dynamons world apk pokemon items
-dynamons world apk pokemon equipment
-dynamons world apk pokemon gems
-dynamons world apk pokemon coins
-dynamons world apk pokemon shop
-dynamons world apk pokemon offline
-dynamons world apk pokemon no ads
-dynamons world apk pokemon safe
-dynamons world apk pokemon virus free
-dynamons world apk pokemon compatible devices
-dynamons world apk pokemon install instructions
-
After catching a Dynamon, you can train it by battling other Dynamons or players. Each battle will give you experience points (XP) that will help you level up your Dynamon. When your Dynamon levels up, it will increase its stats and learn new skills. Some Dynamons can also evolve into stronger forms when they reach certain levels or meet certain conditions.
-
Unleash Powerful Skills and Brilliant Tactics in Battles
-
Battles are an essential part of the game, as they test your skills and strategies as a Dynamon Master. You can battle against wild Dynamons, enemy trainers, bosses, friends, or other players online. Each battle is turn-based and allows you to use up to three Dynamons in your team. You can switch between them at any time during the battle.
-
To win a battle, you need to use your skill cards effectively and take advantage of your Dynamons' types and elements. Each skill card has a type and an element that determine its power and effect. There are six types of skill cards: Attack, Defense, Heal, Buff, Debuff, and Special. There are also six elements of skill cards: Water, Fire, Plant, Electric, Wind, and Earth. Each element has a strength and a weakness against another element: Water beats Fire; Fire beats Plant; Plant beats Water; Electric beats Water; Wind beats Electric; Earth beats Wind.
-
You can use these elemental strengths and weaknesses to deal more damage or reduce damage from your enemies. For example, if you use a Water skill card against a Fire enemy, you will deal double damage. But if you use a Fire skill card against a Water enemy, you will deal half damage. You can also use Buff and Debuff skill cards to enhance your own stats or lower your enemy's stats. For example, if you use a Buff skill card that increases your speed stat , you will be able to act faster than your enemy. But if you use a Debuff skill card that decreases your enemy's defense stat, you will be able to deal more damage to them.
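To make the double/half damage rule above concrete, here is a tiny Python sketch of how such a multiplier could be computed. It encodes only the matchups listed in this article and treats everything else as neutral; the game's actual damage formula is not documented here, so this is purely illustrative.

```python
# Illustrative sketch only: the "beats" pairs come straight from the article;
# unlisted matchups are treated as neutral. Not the game's real formula.
BEATS = {"Water": "Fire", "Fire": "Plant", "Plant": "Water",
         "Electric": "Water", "Wind": "Electric", "Earth": "Wind"}

def damage_multiplier(attack_element, defender_element):
    if BEATS.get(attack_element) == defender_element:
        return 2.0   # attacker's element beats the defender's: double damage
    if BEATS.get(defender_element) == attack_element:
        return 0.5   # defender's element beats the attacker's: half damage
    return 1.0       # neutral matchup

print(damage_multiplier("Water", "Fire"))  # 2.0
print(damage_multiplier("Fire", "Water"))  # 0.5
```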
-
You can also use Special skill cards that have unique effects, such as stunning, poisoning, freezing, or burning your enemy. These effects can cause additional damage or prevent your enemy from acting for a certain number of turns. However, Special skill cards usually have a lower success rate and a higher cost than other skill cards, so use them sparingly and strategically.
-
Battles are not only about skills, but also about tactics. You need to plan your moves ahead and anticipate your enemy's moves. You need to know when to attack, when to defend, when to heal, when to switch, and when to use your ultimate skill. Your ultimate skill is a powerful skill that can only be used once per battle. It can turn the tide of the battle in your favor, but it also requires a lot of energy to activate. You can charge your energy by using regular skill cards or by taking damage from your enemy.
-
Battles are fun and challenging, and they will reward you with XP, coins, items, and sometimes new Dynamons. You can use these rewards to improve your team and prepare for the next battle.
-
Challenge Your Friends and Players Worldwide in Online PvP Battles
-
If you want to test your skills and strategies against other Dynamon Masters, you can join the online battle arena and compete with players from all over the world. You can access the online battle arena from the main menu of the game. You can choose to play in ranked mode or casual mode. Ranked mode will match you with players of similar skill level and rank, while casual mode will match you with random players for fun.
-
In online PvP battles, you can use up to three Dynamons in your team, just like in offline battles. However, you cannot use items or ultimate skills in online battles. You also have a limited time to choose your actions each turn, so you need to think fast and act smart. Online battles are more challenging and unpredictable than offline battles, as you will face different players with different teams and strategies.
-
Online battles are also more rewarding than offline battles, as you will earn trophies and badges for winning. Trophies will increase your rank and unlock new rewards, while badges will show off your achievements and skills. You can also chat with other players and make friends or rivals in the online community.
-
Features of Dynamons World APK
-
Dynamons World APK is not just a Pokemon-like game, but a game that has its own unique features and charm. Here are some of the features that make Dynamons World APK stand out from other games:
-
Stunning Graphics and Animations
-
Dynamons World APK has beautiful graphics and animations that will make you feel like you are in a cartoon world. The game has bright colors, smooth movements, and cute designs that will appeal to both kids and adults. The game also has dynamic weather effects, such as rain, snow, fog, and night time, that will change the atmosphere of the game. The game also has amazing sound effects and music that will enhance your gaming experience.
-
Engaging Story and Characters
-
Dynamons World APK has an engaging story and characters that will make you care about what happens next. The game has a lot of humor, drama, and action that will keep you entertained throughout your adventure. The game also has a lot of dialogue and cutscenes that will reveal more about the world and the characters. The game also has multiple endings that will depend on your choices and actions.
-
Diverse Locations and Quests
-
Dynamons World APK has diverse locations and quests that will offer variety and challenge in your gameplay. The game has over 10 regions to explore, each with its own theme and environment. You can visit tropical islands, snowy mountains, ancient ruins, futuristic cities, and more. The game also has over 100 quests to complete, each with its own objectives and rewards. You can help people in need, solve puzzles, find secrets, fight bosses, and more.
-
New Dynamons and Types
-
Dynamons World APK has new Dynamons and types that will introduce new possibilities and combinations in your gameplay. The game has over 50 Dynamons to collect , each with its own appearance, personality, type, stats, and skills. You can find them in various locations, such as grasslands, forests, caves, deserts, mountains, and more. Some Dynamons are common, while others are rare or even legendary.
-
The game also has six new types of Dynamons that are not found in the original Pokemon games: Electric, Wind, Earth, Ghost, Light, and Dark. Each type has its own strengths and weaknesses against other types, as well as unique skills and effects. For example, Electric Dynamons can paralyze their enemies with their attacks, while Ghost Dynamons can pass through walls and obstacles. You can mix and match different types of Dynamons to create your own team and strategy.
-
Comparison of Dynamons World APK with Other Pokemon Games
-
Dynamons World APK is a game that is inspired by Pokemon, but it is not a copy or a clone of it. It has its own features and charm that make it different from other Pokemon games and spin-offs. Here are some of the similarities and differences between Dynamons World APK and other Pokemon games:
-
Similarities
-
Some of the aspects of Dynamons World APK that are similar to Pokemon are:
-
-
The concept of catching, training, and battling with creatures that have different types and elements.
-
The use of skill cards to perform various actions in battles.
-
The use of capture devices to catch wild creatures.
-
The use of items to heal, revive, or enhance your creatures.
-
The use of evolution to transform your creatures into stronger forms.
-
The use of online battles to compete with other players worldwide.
-
-
Differences
-
Some of the aspects of Dynamons World APK that are different from Pokemon are:
-
-
The game is faster and simpler than Pokemon, as it does not require you to go too deep into the knowledge of each creature.
-
The game has a more cartoonish and humorous style than Pokemon, as it does not take itself too seriously.
-
The game has more variety and challenge than Pokemon, as it offers more regions, quests, enemies, bosses, and secrets to explore.
-
The game has more updates and new content than Pokemon, as it introduces new creatures and types in its updates.
-
The game has more customization and personalization than Pokemon, as it allows you to choose your character's gender, name, appearance, and outfit.
-
-
Conclusion
-
Dynamons World APK is a fun and fast-paced game that is similar to Pokemon but has its own features and charm. It is a game that will appeal to both kids and adults who love adventure, fantasy, and strategy. It is a game that will keep you entertained for hours with its stunning graphics, engaging story, diverse locations, new creatures, and online battles. It is a game that you can download and play for free on your Android device. So what are you waiting for? Download Dynamons World APK today and become the best Dynamon Master in the world!
-
FAQs
-
Here are some frequently asked questions about Dynamons World APK:
-
-
Is Dynamons World APK safe to download?
-
Yes, Dynamons World APK is safe to download from APKCombo, a website that provides safe and fast downloads of various Android games and apps. You can also scan the file with your antivirus software before installing it on your device.
-
Is Dynamons World APK compatible with my device?
-
Dynamons World APK is compatible with most Android devices that have Android 4.1 or higher. However, some devices may experience performance issues or bugs due to different specifications or settings. If you encounter any problems while playing the game, you can contact the developer at support@kizi.com for assistance.
-
How can I get more coins and items in Dynamons World APK?
-
You can get more coins and items in Dynamons World APK by completing quests , winning battles, finding hidden items, or watching ads. You can also buy coins and items with real money through in-app purchases, but this is optional and not necessary to enjoy the game.
-
How can I get more Dynamons and types in Dynamons World APK?
-
You can get more Dynamons and types in Dynamons World APK by exploring different regions, catching wild Dynamons, evolving your Dynamons, or completing special events. You can also get new Dynamons and types in the game's updates, which are released regularly by the developer.
-
How can I get more skill cards and ultimate skills in Dynamons World APK?
-
You can get more skill cards and ultimate skills in Dynamons World APK by leveling up your Dynamons, buying skill cards from the shop, finding skill cards in chests, or completing certain quests. You can also get new skill cards and ultimate skills in the game's updates, which are released regularly by the developer.
-
How can I contact the developer of Dynamons World APK?
-
You can contact the developer of Dynamons World APK by sending an email to support@kizi.com or by visiting their website at https://kizi.com/. You can also follow them on Facebook, Twitter, Instagram, and YouTube for the latest news and updates about the game.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/A Year in the Life of Mitsuki and Kyouko The Language of Love APK Review.md b/spaces/1phancelerku/anime-remove-background/A Year in the Life of Mitsuki and Kyouko The Language of Love APK Review.md
deleted file mode 100644
index e2116bfbdd7cbe0d837435f38c3e9baf61547062..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/A Year in the Life of Mitsuki and Kyouko The Language of Love APK Review.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
The Language of Love APK: A Visual Novel About Finding Romance
-
Do you enjoy reading stories that make you feel warm and fuzzy inside? Do you like playing games that let you shape your own destiny? Do you want to experience a romance that is both sweet and realistic? If you answered yes to any of these questions, then you might want to check out The Language of Love APK, a visual novel game that will tug at your heartstrings.
The Language of Love is a visual novel game developed by ebi-hime, a popular indie developer who has created many other games in the genre. A visual novel is a type of game that combines text, images, sound, and sometimes choices to create an interactive story. The Language of Love is available on Steam, but you can also download and install it on your Android device using an APK file.
-
A story of love and friendship
-
The Language of Love follows the life of Mitsuki, a 23-year-old man who has put his education and career on hold to take care of his parents after his mother's accident. He moves to Tokyo to attend a cram school, hoping to pass his university entrance exams next year. However, he feels out of place among his younger classmates, who treat him like an outsider. He resigns himself to a lonely existence, until he meets Kyouko, a single mother who lives in his apartment complex. Mitsuki offers to help Kyouko babysit her daughter, Tama, and in return, Kyouko helps Mitsuki study for his exams. Over the course of a year, the two develop a bond that goes beyond friendship, but also faces many challenges and obstacles.
-
A game of choices and consequences
-
The Language of Love is a kinetic novel, which means that it does not have any choices or branches in the story. However, this does not mean that the game is linear or boring. On the contrary, the game has a rich and complex narrative that explores the themes of love, family, society, and self-discovery. The game also has multiple endings, depending on how you interact with Kyouko and Tama throughout the story. Your actions and words will have an impact on their feelings and opinions of you, as well as the outcome of your relationship. Will you be able to find your happy ending with Kyouko, or will you end up alone?
-
A visual novel with beautiful art and music
-
The Language of Love is not only a captivating story, but also a feast for the eyes and ears. The game features detailed background and character art that bring the scenes to life. The game also has a custom soundtrack that matches the mood and tone of the story. The game supports full audio and subtitles in English, German, and Russian languages. You can enjoy the game in full HD resolution on your Android device.
-
How to download and install The Language of Love APK?
-
If you are interested in playing The Language of Love on your Android device, you will need to download and install an APK file. An APK file is an application package file that contains all the data and files needed to run an app on your device. Here are the steps to download and install The Language of Love APK:
-
the language of love steam apk
-the language of love visual novel apk
-the language of love ebi-hime apk
-the language of love game download apk
-the language of love android apk
-the language of love free apk
-the language of love full apk
-the language of love mod apk
-the language of love cracked apk
-the language of love english apk
-the language of love pc apk
-the language of love romance apk
-the language of love anime apk
-the language of love cute apk
-the language of love ost apk
-the language of love review apk
-the language of love walkthrough apk
-the language of love guide apk
-the language of love tips apk
-the language of love cheats apk
-the language of love endings apk
-the language of love characters apk
-the language of love mitsuki apk
-the language of love kyouko apk
-the language of love tama apk
-the language of love story apk
-the language of love plot apk
-the language of love theme apk
-the language of love soundtrack apk
-the language of love art apk
-the language of love graphics apk
-the language of love screenshots apk
-the language of love trailer apk
-the language of love video apk
-the language of love gameplay apk
-the language of love demo apk
-the language of love beta apk
-the language of love update apk
-the language of love patch apk
-the language of love bug fix apk
-the language of love system requirements apk
-the language of love size apk
-the language of love offline apk
-the language of love online apk
-the language of love multiplayer apk
-the language of love co-op apk
-the language of love community hub apk
-the language of love steam community apk
-the language of love user reviews apk
-
Requirements and compatibility
-
Before you download and install The Language of Love APK, you will need to make sure that your device meets the following requirements:
-
-
Your device must run Android 4.4 or higher.
-
Your device must have at least 500 MB of free storage space.
-
Your device must have a stable internet connection to download the APK file.
-
-
The Language of Love APK is compatible with most Android devices, including smartphones and tablets. However, some devices may experience performance issues or errors due to different specifications or settings. If you encounter any problems while playing the game, you can contact the developer for support.
-
Steps to download and install
-
Once you have confirmed that your device meets the requirements above, follow these steps to download and install The Language of Love APK (a command-line alternative for installing from a computer is sketched after the list):
-
-
Go to the official website of The Language of Love APK and click on the download button. You can also use this link: The Language of Love APK Download.
-
Wait for the download to complete. The APK file size is about 400 MB, so it may take some time depending on your internet speed.
-
After the download is finished, locate the APK file on your device. You can use a file manager app to find it in your downloads folder.
-
Before you install the APK file, you will need to enable the installation of apps from unknown sources on your device. To do this, go to your device settings and look for the security or privacy option. Then, toggle on the option that allows you to install apps from unknown sources. This will allow you to install The Language of Love APK without any issues.
-
Tap on the APK file and follow the instructions on the screen to install The Language of Love APK on your device. It may take a few minutes for the installation to complete.
-
Once the installation is done, you can launch The Language of Love APK from your app drawer or home screen. Enjoy playing the game!
-
-
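If you would rather install the file from a computer instead of tapping through the steps above, the sketch below shows one way to push it over USB with adb. This is only an illustration, not an official install method: it assumes the Android SDK platform tools (adb) are installed on the computer, USB debugging is enabled on the phone, and the downloaded file is saved as language_of_love.apk (a placeholder name, not the real file name).

```python
import subprocess
import sys

APK_PATH = "language_of_love.apk"  # placeholder name for the downloaded file

def sideload(apk_path: str) -> None:
    """Install an APK on the connected Android device via adb (requires USB debugging)."""
    # 'adb install -r' installs the package, replacing an existing copy if one is present.
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        sys.exit(f"Install failed:\n{result.stderr}")
    print(result.stdout.strip())

if __name__ == "__main__":
    sideload(APK_PATH)
```

If more than one device is plugged in, adb will ask you to pick one; the on-device steps in the list above work exactly the same either way.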
Tips and tricks for playing The Language of Love APK
-
If you want to make the most out of your experience playing The Language of Love APK, here are some tips and tricks that you can use:
-
-
Save often. The game has an auto-save feature that saves your progress every time you reach a new scene, but it is always a good idea to save manually as well. You can save up to 100 slots in the game, so you can always go back to a previous point in the story if you want to.
-
Use the skip function. If you want to speed up the text or skip scenes that you have already seen, you can use the skip function in the game. You can access it by tapping on the screen and selecting the skip icon on the bottom right corner. You can also adjust the skip speed in the settings menu.
-
Explore different endings. The game has multiple endings depending on how you interact with Kyouko and Tama throughout the story. You can try different choices and actions to see how they affect the outcome of your relationship. You can also use the gallery function in the game to view all the endings and scenes that you have unlocked.
-
Enjoy the extras. The game has some extra features that you can access from the main menu, such as bonus scenes, character profiles, achievements, and wallpapers. You can unlock these extras by playing through the game and completing certain tasks.
-
-
Why should you play The Language of Love APK?
-
The Language of Love APK is not just another visual novel game. It is a game that will make you feel a range of emotions, from happiness to sadness, from laughter to tears, from excitement to anxiety. It is a game that will make you think about life, love, and yourself. Here are some reasons why you should play The Language of Love APK:
-
A unique and realistic romance story
-
The Language of Love is not your typical romance story. It does not follow the clichés or tropes that are common in many other games or media. It does not have a perfect hero or heroine who falls in love at first sight and lives happily ever after. It does not have a dramatic plot twist or a villain who tries to ruin the relationship. Instead, it has a realistic and relatable story that shows how two ordinary people, each with their own struggles and flaws, find comfort and happiness in each other's company. It shows how love can grow slowly and naturally over time, but also how it can face challenges and difficulties along the way. It shows how love is not always easy or simple, but also how it is worth fighting for.
-
A diverse and relatable cast of characters
-
The Language of Love has a small but memorable cast of characters who add depth and flavor to the story. Each character has their own personality, background, and role in the story. You will meet Kyouko, a single mother who works hard to provide for her daughter while trying to balance her work and personal life. You will meet Tama, a cute and cheerful girl who loves her mother and Mitsuki. You will meet Ryou, a friendly and helpful classmate who has a crush on Mitsuki. You will meet Yui, a shy and quiet girl who has a hidden talent for singing. You will meet Shizuka, a strict and cold teacher who has a soft spot for Mitsuki. You will meet Mitsuki's parents, who have their own opinions and expectations of him. You will also meet some other characters who will influence the story in different ways. You will be able to relate to these characters and their situations, as they are realistic and believable.
-
A mature and emotional narrative
-
The Language of Love is not a game for children or the faint of heart. It is a game that deals with mature and sensitive topics, such as parenthood, education, career, society, culture, and sexuality. It is a game that does not shy away from showing the dark and ugly sides of life, such as poverty, abuse, discrimination, and violence. It is a game that does not sugarcoat or romanticize the hardships and struggles that the characters face. It is a game that makes you feel a range of emotions, from joy to sorrow, from anger to compassion, from fear to hope. It is a game that makes you cry, laugh, smile, and think.
-
Conclusion
-
The Language of Love APK is a visual novel game that you should not miss if you are looking for a romance story that is realistic, diverse, and emotional. It is a game that will make you fall in love with the characters and their story. It is a game that will make you appreciate the value of love and friendship. It is a game that will make you reflect on your own life and choices. It is a game that will make you happy.
-
If you want to play The Language of Love APK on your Android device, you can download and install it using the steps mentioned above. You can also visit the official website of The Language of Love APK for more information and updates. You can also support the developer by buying the game on Steam or donating on Patreon.
-
What are you waiting for? Download The Language of Love APK today and experience a romance that will touch your heart!
-
FAQs
-
-
Q: How long is The Language of Love APK?
-
A: The Language of Love APK has about 10 hours of gameplay, depending on your reading speed and choices.
-
Q: Is The Language of Love APK safe to download and install?
-
A: Yes, The Language of Love APK is safe to download and install, as long as you use the official link provided in this article. However, you should always be careful when downloading and installing apps from unknown sources, as they may contain viruses or malware.
-
Q: Is The Language of Love APK free to play?
-
A: Yes, The Language of Love APK is free to play on your Android device. However, if you want to support the developer and enjoy some extra features, you can buy the game on Steam or donate on Patreon.
-
Q: Is The Language of Love APK suitable for all ages?
-
A: No, The Language of Love APK is not suitable for all ages. It contains mature and sensitive content that may not be appropriate for younger audiences or people who are easily offended or disturbed by such topics.
-
Q: Is The Language of Love APK based on a true story?
-
A: No, The Language of Love APK is not based on a true story. It is a fictional story created by ebi-hime, an indie developer who has written many other visual novels in different genres.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Bubble Shooter Mod APK A Colorful and Challenging Game with Unlimited Money and Levels.md b/spaces/1phancelerku/anime-remove-background/Bubble Shooter Mod APK A Colorful and Challenging Game with Unlimited Money and Levels.md
deleted file mode 100644
index 4fd008e0b8361e53383d2f6131dab38410fdb540..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Bubble Shooter Mod APK A Colorful and Challenging Game with Unlimited Money and Levels.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
Bubble Shooter Mod APK (Unlimited Everything)
-
Do you love playing casual games that are easy to learn but hard to master? Do you enjoy popping colorful bubbles and solving puzzles? If you answered yes, then you might be a fan of Bubble Shooter, one of the most popular and addictive games in the world. But what if you could make this game even more fun and exciting by getting unlimited everything? That's right, with Bubble Shooter Mod APK, you can enjoy this game without any limitations or restrictions. In this article, we will tell you everything you need to know about Bubble Shooter Mod APK, including what it is, what it offers, how to get it, and what are its pros and cons. Let's get started!
Bubble Shooter is a classic puzzle game that has been around for decades. The game was originally developed by Taito Corporation in 1994 and released as Puzzle Bobble. Since then, it has spawned many sequels, spin-offs, and clones, and has been adapted for various platforms, such as PC, mobile, web, and console.
-
The gameplay of Bubble Shooter is simple but addictive. You have to shoot bubbles from a cannon at the bottom of the screen and match three or more bubbles of the same color to pop them. The more bubbles you pop at once, the higher your score. You also have to clear all the bubbles on the screen before they reach the bottom or you lose. The game has hundreds of levels with different layouts, obstacles, and challenges.
-
How to play Bubble Shooter?
-
Playing Bubble Shooter is easy and fun. Here are the basic steps to follow:
-
-
Aim your cannon by moving your mouse or finger on the screen.
-
Tap or click to shoot a bubble.
-
Try to match three or more bubbles of the same color to pop them.
-
Use the walls to bounce your bubbles and reach tricky spots.
-
Use special bubbles, such as bombs, stars, and rainbows, to clear more bubbles at once.
-
Clear all the bubbles on the screen to complete the level.
-
Earn stars and coins by completing levels and achievements.
-
Use coins to buy boosters, such as extra moves, fireballs, and color changers.
-
Use boosters to help you overcome difficult levels.
-
-
Why do people love Bubble Shooter?
-
Bubble Shooter is a game that appeals to people of all ages and backgrounds. Here are some of the reasons why people love this game:
-
-
It is relaxing and stress-relieving. Popping bubbles is satisfying and calming.
-
It is challenging and rewarding. The game tests your skills and strategy as you progress through harder levels.
-
It is colorful and cute. The game has bright graphics and adorable characters that make it appealing and cheerful.
-
It is fun and entertaining. The game has varied gameplay and features that keep it interesting and enjoyable.
-
-
What is Bubble Shooter Mod APK?
-
Bubble Shooter Mod APK is a modified version of the original Bubble Shooter game that gives you unlimited everything. This means that you can get unlimited money, lives, boosters, and no ads in the game. With this mod apk, you can play the game without any worries or limitations. You can buy any booster you want, play any level you want, and enjoy the game without any interruptions or distractions. Sounds amazing, right? But how do you get this mod apk and what are its features? Let's find out!
-
What are the features of Bubble Shooter Mod APK?
-
Bubble Shooter Mod APK has many features that make it superior to the original game. Here are some of them:
-
Unlimited money
-
With this mod apk, you can get unlimited money in the game. Money is used to buy boosters, such as extra moves, fireballs, and color changers. Boosters can help you clear difficult levels and get higher scores. Normally, you have to earn money by completing levels and achievements, or by watching ads. But with this mod apk, you can get as much money as you want without any effort or hassle.
-
bubble shooter hack apk unlimited coins and gems
-bubble shooter mod apk download latest version
-bubble shooter mod apk unlimited lives and boosters
-bubble shooter mod apk unlimited money and stars
-bubble shooter mod apk no ads
-bubble shooter mod apk unlimited gold and diamonds
-bubble shooter mod apk all levels unlocked
-bubble shooter mod apk unlimited bubbles and power-ups
-bubble shooter mod apk free shopping
-bubble shooter mod apk unlimited keys and hearts
-bubble shooter premium mod apk
-bubble shooter mod apk offline
-bubble shooter mod apk unlimited time and moves
-bubble shooter mod apk unlimited coins and energy
-bubble shooter mod apk unlimited hints and bombs
-bubble shooter pro mod apk
-bubble shooter mod apk unlimited everything android 1
-bubble shooter mod apk unlimited coins and balls
-bubble shooter mod apk unlimited fireballs and rockets
-bubble shooter mod apk unlimited coins and tickets
-bubble shooter deluxe mod apk
-bubble shooter mod apk online
-bubble shooter mod apk unlimited coins and stars 2021
-bubble shooter mod apk unlimited everything revdl
-bubble shooter mod apk unlimited coins and gems 2021
-bubble shooter classic mod apk
-bubble shooter mod apk unlimited everything happymod
-bubble shooter mod apk unlimited coins and stars android 1
-bubble shooter mod apk unlimited everything rexdl
-bubble shooter legend mod apk
-bubble shooter 2 mod apk unlimited everything
-bubble shooter mod apk unlimited everything 2021
-bubble shooter adventure mod apk
-bubble shooter puzzle mod apk unlimited everything
-bubble shooter blast mania mod apk
-bubble shooter world cup mod apk unlimited everything
-bubble shooter frenzy mod apk
-bubble shooter pop bubbles mod apk unlimited everything
-bubble shooter space adventure mod apk
-bubble shooter jungle pop mod apk unlimited everything
-bubble shooter original bear pop mod apk
-bubble shooter butterfly garden adventure mod apk unlimited everything
-bubble shooter candy wheel 2 mod apk
-bubble shooter frozen puzzle adventure mod apk
-bubble shooter dragon pop mania 2021 new games free offline without wifi or internet connection no ads or in app purchases fun games for kids boys girls adults family arcade games for android phone or tablet devices play now!
-
Unlimited lives
-
With this mod apk, you can also get unlimited lives in the game. Lives are used to play levels in the game. Normally, you have a limited number of lives that regenerate over time, or you can buy more lives with money or by watching ads. But with this mod apk, you can play as many levels as you want without worrying about running out of lives or waiting for them to refill.
-
Unlimited boosters
-
With this mod apk, you can also get unlimited boosters in the game. Boosters are special bubbles that have different effects, such as bombs, stars, and rainbows. Boosters can help you clear more bubbles at once and create amazing combos. Normally, you have a limited number of boosters that you can use per level, or you can buy more boosters with money. But with this mod apk, you can use as many boosters as you want without any limitation or cost.
-
No ads
-
With this mod apk, you can also enjoy the game without any ads. Ads are annoying and distracting interruptions that appear in the game from time to time. They can ruin your mood and your flow. Normally, you have to watch ads to earn money or lives, or to access some features in the game. But with this mod apk, you can get rid of all the ads and play the game smoothly and peacefully.
-
How to download and install Bubble Shooter Mod APK?
-
Downloading and installing Bubble Shooter Mod APK is quick and easy. Here are the steps to follow (a simple way to check the downloaded file before installing it is sketched after the list):
-
-
Click on this link to download the Bubble Shooter Mod APK file on your device.
-
Go to your device settings and enable the installation of apps from unknown sources.
-
Locate the downloaded file and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the game and enjoy unlimited everything!
-
-
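Because mod APKs come from third-party sites, it is worth checking the download before you open it, as the warnings later in this article point out. One quick check, sketched below, is to compare the file's SHA-256 hash against the hash published by the download page, when one is provided. The file name and the expected hash here are placeholders, not real values.

```python
import hashlib

APK_PATH = "bubble_shooter_mod.apk"              # placeholder file name
EXPECTED_SHA256 = "replace-with-published-hash"  # hash listed by the download site, if any

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks to keep memory use low."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    print("SHA-256:", actual)
    if EXPECTED_SHA256 != "replace-with-published-hash":
        print("Match!" if actual == EXPECTED_SHA256 else "WARNING: hash does not match the published value.")
```

A matching hash only proves the file was not corrupted or swapped after the hash was published; it is not a substitute for an antivirus scan.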
What are the pros and cons of Bubble Shooter Mod APK?
-
Bubble Shooter Mod APK has many advantages but also some disadvantages. Here are some of them:
-
Pros
-
-
More fun and challenge. With unlimited everything, you can play the game with more freedom and creativity. You can try different strategies and combinations and see how far you can go.
-
No need to spend real money. With unlimited everything, you don't have to spend any real money on the game. You can save your money for other things and still enjoy the game fully.
-
No annoying interruptions. With no ads, you don't have to deal with any annoying interruptions that spoil your fun. You can play the game without any distractions or delays.
-
-
Cons
-
-
Possible security risks. Since Bubble Shooter Mod APK is a modified version of the original game, it may not be safe or secure for your device. It may contain viruses, malware, or spyware that can harm your device or steal your data. You should always download mod apks from trusted sources and scan them before installing them.
-
May affect game balance and fairness. Since Bubble Shooter Mod APK gives you unlimited everything, it may affect the game balance and fairness. It may make the game too easy or too hard for you or other players. It may also cause glitches or errors in the game performance or functionality.
-
May lose interest in the original game. Since Bubble Shooter Mod APK gives you unlimited everything, it may make you lose interest in the original game. You may not feel the same excitement or satisfaction as playing the original game with its rules and limitations. You may also miss out on some features or updates that are only available in the original game.
-
-
Conclusion
-
Bubble Shooter is a fun and addictive puzzle game that has millions of fans around the world. But if you want to take your gaming experience to the next level, you can try Bubble Shooter Mod APK, which gives you unlimited everything in the game. With this mod apk, you can enjoy the game without any worries or limitations. You can buy any booster you want, play any level you want, and enjoy the game without any ads. However, you should also be aware of the possible risks and drawbacks of using this mod apk. It may not be safe or secure for your device, it may affect the game balance and fairness, and it may make you lose interest in the original game. Therefore, you should always download mod apks from trusted sources and use them at your own discretion. We hope this article has helped you learn more about Bubble Shooter Mod APK and how to get it. Have fun popping bubbles and solving puzzles!
-
FAQs
-
Here are some frequently asked questions about Bubble Shooter Mod APK:
-
-
Q: Is Bubble Shooter Mod APK free?
-
A: Yes, Bubble Shooter Mod APK is free to download and use. You don't have to pay anything to get unlimited everything in the game.
-
Q: Is Bubble Shooter Mod APK compatible with my device?
-
A: Bubble Shooter Mod APK is compatible with most Android devices that support the original game. However, some devices may not be able to run the mod apk properly or at all. You should check the compatibility before downloading and installing the mod apk.
-
Q: Is Bubble Shooter Mod APK legal?
-
A: Bubble Shooter Mod APK is not legal or authorized by the original game developers or publishers. It is a modified version of the original game that violates its terms and conditions. Using this mod apk may result in legal actions or penalties from the original game owners or authorities.
-
Q: Is Bubble Shooter Mod APK safe?
-
A: Bubble Shooter Mod APK is not safe or secure for your device or data. It may contain viruses, malware, or spyware that can harm your device or steal your data. You should always scan the mod apk file before installing it and use a reliable antivirus software to protect your device.
-
Q: Can I play online with Bubble Shooter Mod APK?
-
A: No, you cannot play online with Bubble Shooter Mod APK. The mod apk is only for offline mode and does not support online features or functions. You may also get banned or blocked from the original game if you try to play online with the mod apk.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Experience Different Cars and Scenarios with Driving School Sim 2020 APK for Android.md b/spaces/1phancelerku/anime-remove-background/Experience Different Cars and Scenarios with Driving School Sim 2020 APK for Android.md
deleted file mode 100644
index 75a09cc98bbeeff344c7ed66cc0dd5b98ecc1192..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Experience Different Cars and Scenarios with Driving School Sim 2020 APK for Android.md
+++ /dev/null
@@ -1,128 +0,0 @@
-
-
Driving School Sim 2020: A Free Driving Simulator Game for Android
-
Do you want to learn how to drive different cars and explore various locations? Do you want to challenge yourself with realistic driving scenarios and rules? Do you want to have fun with racing and multiplayer modes? If you answered yes to any of these questions, then you might want to check out Driving School Sim 2020, a free driving simulator game for Android devices.
-
In this article, we will review Driving School Sim 2020 by Ovidiu Pop, a popular developer of simulation games. We will cover the following topics:
What is Driving School Sim 2020 and what are its features?
-
How to download and install Driving School Sim 2020 apk on your Android device?
-
How to play Driving School Sim 2020 and what are some tips and tricks?
-
What are the pros and cons of Driving School Sim 2020?
-
What are some alternatives to Driving School Sim 2020?
-
-
By the end of this article, you will have a better idea of whether Driving School Sim 2020 is the right game for you or not. Let's get started!
-
What is Driving School Sim 2020 and what are its features?
-
Driving School Sim 2020 is a driving simulation game where you can learn how to drive various cars and follow the rules of the road. You can choose from over 150 vehicles, including sports cars, trucks, buses, SUVs, and more. You can also customize your car with different colors, wheels, spoilers, and stickers.
-
The game has several modes that you can enjoy. You can start with the career mode, where you can take driving lessons and exams for different classes. You can learn how to park, turn, overtake, use signals, follow traffic signs, and more. You can also unlock new cities and landscapes to explore, such as mountain roads, deserts, ice roads, and highways.
-
If you want to have more fun, you can try the online multiplayer mode, where you can race against other players or join them in free roam. You can also take part in special events and challenges that reward you with coins and gems. You can use these currencies to buy new cars or upgrade your existing ones.
-
The game has realistic graphics and sounds that make you feel like you are driving a real car. The game also supports different control options, such as tilt steering, buttons, or virtual wheel. You can also adjust the camera angle and view your car from different perspectives.
-
How to download and install Driving School Sim 2020 apk on your Android device?
-
If you want to play Driving School Sim 2020 on your Android device, you have two options. You can either download it from Google Play Store or from a third-party website that offers apk files.
-
Option 1: Download from Google Play Store
-
This is the easiest and safest option to get Driving School Sim 2020 on your Android device. All you need to do is follow these steps:
-
driving school sim 2020 apk download
-driving school 2020 mod apk unlimited money
-driving school 2020 game apk
-driving school 2020 simulator apk
-driving school 2020 pro apk
-driving school 2020 apk free download
-driving school 2020 hack apk
-driving school 2020 offline apk
-driving school 2020 latest version apk
-driving school 2020 full apk
-driving school 2020 car simulator apk
-driving school 2020 premium apk
-driving school 2020 android apk
-driving school 2020 online apk
-driving school 2020 real car simulator apk
-driving school 2020 mod apk android 1
-driving school 2020 mod apk revdl
-driving school 2020 mod apk rexdl
-driving school 2020 mod apk happymod
-driving school 2020 mod apk an1
-driving school sim - 2020 for android - filehippo[^1^]
-driving school sim - 2020 for android - apkpure
-driving school sim - 2020 for android - uptodown
-driving school sim - 2020 for android - play store
-driving school sim - 2020 for android - mob.org
-driving school sim - 2020 for android - malavida
-driving school sim - 2020 for android - softonic
-driving school sim - 2020 for android - apkmirror
-driving school sim - 2020 for android - apkmody
-driving school sim - 2020 for android - apknite
-how to install driving school sim - 2020 on android
-how to play driving school sim - 2020 on android
-how to update driving school sim - 2020 on android
-how to uninstall driving school sim - 2020 on android
-how to hack driving school sim - 2020 on android
-how to get unlimited money in driving school sim - 2020 on android
-how to unlock all cars in driving school sim - 2020 on android
-how to change language in driving school sim - 2020 on android
-how to fix lag in driving school sim - 2020 on android
-how to connect controller in driving school sim - 2020 on android
-best settings for driving school sim - 2020 on android
-best tips and tricks for driving school sim - 2020 on android
-best cars in driving school sim - 2020 on android
-best maps in driving school sim - 2020 on android
-best missions in driving school sim - 2020 on android
-best reviews for driving school sim - 2020 on android
-best alternatives for driving school sim - 2020 on android
-best cheats for driving school sim - 2020 on android
-best glitches for driving school sim - 2020 on android
-
-
Open Google Play Store on your device and search for "Driving School Sim 2020".
-
Select the game from the list of results and tap on "Install".
-
Wait for the game to download and install on your device.
-
Once the installation is complete, tap on "Open" to launch the game.
-
-
Option 2: Download from a third-party website
-
This option is for those who want to get the Driving School Sim 2020 apk file from a source other than Google Play Store. It is not recommended, as it may expose your device to malware or viruses. If you still want to try it, make sure you download the apk file from a trusted and reputable website and enable "Unknown Sources" in your device settings before installing it. Here are the steps to follow (a quick sanity check on the downloaded file is sketched after the note that follows the list):
-
-
Download the Driving School Sim 2020 apk file to your device.
-
Go to your device settings and enable "Unknown Sources" under security or privacy options.
-
Locate the apk file on your device and tap on it to install it.
-
Wait for the installation to finish and then tap on "Open" to launch the game.
-
-
Note: You may need to disable "Unknown Sources" after installing the apk file for security reasons.
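Since an APK is just a ZIP archive with a fixed internal layout, one quick sanity check on a file from a third-party site is to confirm it at least looks like a real Android package before you enable "Unknown Sources" and install it. The sketch below assumes the download was saved as driving_school_sim_2020.apk (a placeholder name); it does not detect malware, it only catches files that are not Android packages at all.

```python
import zipfile

APK_PATH = "driving_school_sim_2020.apk"  # placeholder name for the downloaded file

def looks_like_apk(path: str) -> bool:
    """Return True if the file is a ZIP archive containing the core APK entries."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        names = set(zf.namelist())
        # A typical app APK carries a binary manifest and at least one compiled dex file.
        return "AndroidManifest.xml" in names and any(n.endswith(".dex") for n in names)

if __name__ == "__main__":
    print("Looks like a valid APK" if looks_like_apk(APK_PATH) else "WARNING: not a valid APK archive")
```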
-
How to play Driving School Sim 2020 and what are some tips and tricks?
-
Driving School Sim 2020 is a fun and easy game to play, but it can also be challenging and rewarding. Here are some tips and tricks to help you master the game and enjoy it more:
-
-
Choose the right car for your driving style. Each car has different features, such as speed, handling, braking, and fuel consumption. You can also upgrade your car with better parts and accessories to improve its performance.
-
Follow the rules of the road. The game has realistic traffic laws and signs that you need to obey. If you break the rules, you will lose points and money. You will also face penalties, such as fines, tickets, or license suspension.
-
Use the map and GPS to navigate. The game has a large map that shows you the locations of your missions, events, and other points of interest. You can also use the GPS to guide you to your destination. You can zoom in and out of the map and switch between different views.
-
Try different modes and challenges. The game has a variety of modes and challenges that you can play. You can learn new skills in the career mode, race against other players in the multiplayer mode, or have fun in the free roam mode. You can also participate in special events and challenges that offer rewards and bonuses.
-
Earn coins and gems. The game has two currencies that you can use to buy new cars or upgrade your existing ones. You can earn coins by completing missions, events, challenges, or races. You can earn gems by watching ads, completing achievements, or buying them with real money.
-
-
What are the pros and cons of Driving School Sim 2020?
-
Driving School Sim 2020 is a great game for anyone who loves driving simulation games. However, like any other game, it has its pros and cons. Here are some of them:
-
-
Pros
Cons
-
- It has realistic graphics and sounds that create an immersive driving experience.
- It has some bugs and glitches that may affect the gameplay.
-
- It has a large selection of cars and customization options that suit different preferences.
- It has some ads and in-app purchases that may be annoying or expensive.
-
- It has a variety of modes and challenges that offer fun and excitement.
- It has some difficulty spikes and unfair penalties that may be frustrating or discouraging.
-
- It has a multiplayer mode that allows you to interact with other players.
- It requires a stable internet connection to play online.
-
- It teaches you how to drive safely and responsibly.
- It may not be suitable for younger children or sensitive users.
-
-
What are some alternatives to Driving School Sim 2020?
-
If you like Driving School Sim 2020, you might also like some other driving simulation games that are available for Android devices. Here are some of them:
-
-
Real Driving Sim: This is another game by Ovidiu Pop that lets you drive over 80 cars across Europe. You can explore different cities, roads, and landmarks. You can also customize your car with various parts and accessories. The game has realistic physics, weather, traffic, and sounds.
-
Car Parking Multiplayer: This is a game by olzhass that lets you park various cars in different scenarios. You can also drive around in an open world with other players. You can customize your car with different colors, wheels, stickers, and more. The game has realistic graphics, sounds, and controls.
-
Extreme Car Driving Simulator: This is a game by AxesInMotion Racing that lets you drive various sports cars in a huge open world. You can perform stunts, drifts, jumps, and crashes. You can also customize your car with different paint jobs, vinyls, spoilers, and more. The game has realistic physics, damage system, and sounds.
-
-
Conclusion
-
Driving School Sim 2020 is a driving simulation game that lets you learn how to drive different cars and follow the rules of the road. You can also have fun with racing and multiplayer modes. The game has realistic graphics and sounds, a large selection of cars and customization options, and a variety of modes and challenges. However, the game also has some drawbacks, such as bugs, ads, difficulty spikes, and internet requirement. If you are looking for a free driving simulator game for Android devices, you might want to give Driving School Sim 2020 a try. You can download it from Google Play Store or from a third-party website that offers apk files.
-
FAQs
-
Here are some frequently asked questions about Driving School Sim 2020:
-
Q: How can I get more coins and gems in Driving School Sim 2020?
-
A: You can get more coins and gems by completing missions, events, challenges, or races. You can also watch ads, complete achievements, or buy them with real money.
-
Q: How can I change the language of Driving School Sim 2020?
-
A: You can change the language of Driving School Sim 2020 by going to the settings menu and selecting the language option. You can choose from English, French, German, Italian, Spanish, Portuguese, Russian, Turkish, Indonesian, or Vietnamese.
-
Q: How can I contact the developer of Driving School Sim 2020?
-
A: You can contact the developer of Driving School Sim 2020 by sending an email to support@ovidiupop.com or by visiting their website at https://www.ovidiupop.com/.
-
Q: How can I update Driving School Sim 2020?
-
A: You can update Driving School Sim 2020 by going to Google Play Store and checking for updates. If you downloaded the apk file from a third-party website, you may need to download the latest version from the same source and install it over the existing one.
-
Q: How can I uninstall Driving School Sim 2020?
-
A: You can uninstall Driving School Sim 2020 by going to your device settings and selecting the apps option. Then, find Driving School Sim 2020 and tap on it. Then, tap on uninstall and confirm your choice.
Airport Security APK Download: A Fun and Exciting Puzzle Game
-
Have you ever wondered what it is like to be an airport security officer? Do you like solving puzzles and finding hidden clues? If so, you may want to try Airport Security APK, a free puzzle game for Android devices that puts you in the shoes of a security officer assigned to watch over an airport. In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, what its features and benefits are, and what its drawbacks and limitations are. Let's get started!
Airport Security APK is a puzzle game developed by Kwalee Ltd, a UK-based mobile games studio that specializes in casual games. The game was released in May 2023 and has received positive reviews from players and critics alike. Here are some of the aspects that make this game unique and appealing:
-
A free puzzle game for Android devices
-
Airport Security APK is completely free to download and play on your Android device. You do not need to pay anything to enjoy it, although it offers some in-app purchases for extra features and benefits. You can download the APK file from several online sources, such as Softonic, Google Play, or Filehippo. We will explain how to download and install the APK file later in this article.
-
A realistic and challenging simulation of airport security duties
-
-
A casual and entertaining game with simple graphics and varied puzzles
-
Airport Security APK is designed to be a fun, easy-to-play game that anyone can enjoy. It has simple graphics that keep the game clear and colorful without distracting from the gameplay. It also has a variety of puzzles and activities that keep things interesting and exciting. You can choose to approve or deny someone's passport, check their belongings for prohibited items, question them or look for clues, or even use a lie detector or an X-ray machine. The game is full of surprises and challenges that will test your logic and creativity.
-
How to download and install Airport Security APK?
-
If you want to try Airport Security APK on your Android device, you will need to download and install the APK file from a trusted source. An APK file is an Android application package that contains all the files needed to run an app on your device. Here are the steps to follow to download and install Airport Security APK:
-
Download the APK file from a trusted source
-
The first step is to find a reliable website that offers the Airport Security APK file for download. You can use any of the sources mentioned above, such as Softonic, Google Play, or Filehippo. You can also use other websites you trust, but make sure you scan the file for viruses and malware before downloading it. Once you find the Airport Security APK file, click the download button and save it to your device.
-
Enable unknown sources in your device settings
-
-
Install the APK file and launch the game
-
The final step is to install the Airport Security APK file and launch the game. To do this, locate the file on your device using a file manager app or your browser's downloads folder. Then, tap the file and follow the on-screen instructions to install it. You may see a pop-up asking for permission to access certain features or data on your device, such as storage, camera, or microphone. Grant the permission if you want to use those features in the game, or deny it if you do not. Once the installation is complete, you can open the game and start playing.
-
-
What are the features and benefits of Airport Security APK?
-
Airport Security APK is a fun and exciting puzzle game that offers many features and benefits for its players. Here are some of them:
-
A fun and immersive gaming experience
-
Airport Security APK is a game that will keep you entertained and engaged for hours. You will feel like a real airport security officer as you carry out various tasks and solve different puzzles. You will have to deal with different scenarios and situations, such as angry passengers, suspicious travelers, hidden bombs, fake passports, and more. You will also face different challenges and difficulties, such as time limits, random events, multiple choices, and consequences. You will never get bored or frustrated with this game, as it offers a fun and immersive gaming experience.
-
A variety of puzzles and activities to test your skills and intuition
-
-
A simplified art style that makes the game easy to play and enjoy
-
Airport Security APK has a simplified art style that makes the game easy to play and enjoy. The graphics are simple, clear, and colorful without being overly realistic or detailed. The game also has simple animations and sound effects that add to its charm and humor. It does not require a high-end device or graphics card to run smoothly, as it is optimized for low-end devices and older Android versions. It also has a simple user interface and controls that make it easy to navigate and operate.
-
What are the drawbacks and limitations of Airport Security APK?
-
While Airport Security APK is a great puzzle game that offers many features and benefits for its players, it also has some drawbacks and limitations you should be aware of before downloading it. Here are some of them:
-
Frequent, unskippable ads that interrupt the flow of the game
-
Airport Security APK is a free game that relies on ad revenue. This means you will see frequent, unskippable ads while you play. These ads can interrupt the flow of the game and break your immersion. They can also be annoying and distracting, especially if they are loud or irrelevant. Some of these ads may even contain inappropriate or malicious content that could harm your device or data. If you want to get rid of the ads, you will have to pay for an in-app purchase that removes them.
-
An internet connection requirement that uses up data and battery
-
-
Possible compatibility issues with some devices and Android versions
-
Airport Security APK may not work well on some devices and Android versions. This is because the game is relatively new and may not yet be fully optimized or tested for every device and Android version. Some of the compatibility issues you may run into are crashes, glitches, bugs, or poor performance. If you experience any of these problems, you may need to update your device or Android version, clear your cache and data, reinstall the game, or contact the developer for support.
-
Conclusion and FAQs
-
Airport Security APK is a fun and exciting puzzle game that lets you experience what it is like to be an airport security officer. You will have to carry out various tasks and solve different puzzles to prevent anything illegal from happening at the airport. You will also enjoy the game's features and benefits, such as its fun and immersive gameplay, its variety of puzzles and activities, and its simplified art style. However, you should also be aware of its drawbacks and limitations, such as its frequent, unskippable ads, its internet connection requirement, and its possible compatibility issues. If you are looking for a casual and entertaining puzzle game that will test your skills and intuition, you should give Airport Security APK a try.
-
Here are some of the most frequently asked questions about Airport Security APK:
-
Q: Is Airport Security APK safe to download and install?
-
A: Yes, Airport Security APK is safe to download and install, as long as you download it from a trusted source and scan it for viruses and malware before installing it. However, you should also be careful with the ads the game shows, as some of them may contain inappropriate or malicious content.
-
Q: How can I remove the ads from Airport Security APK?
-
A: You can remove the ads by paying for the in-app purchase that disables them, as mentioned above.
-
Q: How can I save my progress in Airport Security APK?
-
A: Airport Security APK automatically saves your progress to the cloud when you are connected to the internet. You can also sync your progress across multiple devices by signing in with your Google Play account. However, if you lose your internet connection or uninstall the game, you may lose your progress.
-
Q: How can I get more coins and gems in Airport Security APK?
-
A: Coins and gems are the in-game currencies you can use to buy extra features and benefits in Airport Security APK. You can get more coins and gems by completing puzzles and activities, watching ads, or buying them with real money.
-
Q: How can I contact the developer of Airport Security APK?
-
A: You can contact the developer of Airport Security APK by visiting their website, emailing them at support@kwalee.com, or following them on their social media accounts. You can also leave a review or feedback on their Google Play page.
-
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descarga Nba Juegos Liga Pasar.md b/spaces/Benson/text-generation/Examples/Descarga Nba Juegos Liga Pasar.md
deleted file mode 100644
index 878d6d8d8288371901617dace86011a4e70e9c67..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descarga Nba Juegos Liga Pasar.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-
How to Download NBA Games with NBA League Pass
-
If you are a basketball fan, you probably want to watch as many NBA games as possible. Sometimes, though, you may not be able to catch the action live for various reasons, such as time-zone differences, busy schedules, or limited TV channels. That is where NBA League Pass comes in handy. NBA League Pass is a subscription service that lets you stream NBA games live and on demand on your favorite devices. You can also download NBA games and watch them offline whenever you want. In this article, we will show you how to download NBA games with NBA League Pass and enjoy the best of basketball.
-
What is NBA League Pass?
-
NBA League Pass is the NBA's official streaming service, giving you access to hundreds of live and on-demand games from the regular season, playoffs, and Finals. You can also watch classic games from the archives, original shows, studio analysis, and more. With NBA League Pass, you can follow your favorite teams and players, customize your viewing experience, and get closer to the game with exclusive features.
You can watch every team's games, including out-of-market and nationally televised games.
-
You can choose between home and away team broadcasts, or alternative streams with guest broadcasters, in-language commentary, or different camera angles.
-
You can watch games in high-definition quality without ads (Premium plan only).
-
You can watch games on multiple devices simultaneously (Premium plan only).
-
You can watch games on the go with the NBA app, or on the big screen with smart TVs and game consoles.
-
You can watch condensed games, game recaps, highlights, and stats without leaving the live game.
-
-
-
NBA League Pass plans and pricing
-
NBA League Pass offers different plans and pricing options depending on your preferences and budget. You can choose between monthly and annual billing, and cancel at any time. Here are the current plans and prices for the 2022-23 season:
-
-
| Plan | Price | Features |
| --- | --- | --- |
| League Pass | $14.99/month or $99.99/year | Live games; NBA TV studio shows and live games; classic games from the archives; in-language streams; alternative streams; guest broadcasters for select games |
| League Pass Premium | $19.99/month or $149.99/year | All League Pass features plus: no commercials; watch on two devices simultaneously |
| Team Pass | $13.99/month or $89.99/year | Live games for one team only; NBA TV studio shows and live games; classic games, in-language streams, alternative streams, and guest broadcasters for select games, all for one team only |
| NBA TV | $6.99/month or $39.99/year | NBA TV studio shows and live games; classic games from the archives; original programming |
-
-
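A quick bit of arithmetic shows how the two billing options in the table compare. The sketch below simply uses the prices listed above; if the plans change, substitute the current figures.

```python
# Compare paying month-by-month for a year versus the annual price (prices from the table above).
plans = {
    "League Pass": (14.99, 99.99),
    "League Pass Premium": (19.99, 149.99),
    "Team Pass": (13.99, 89.99),
    "NBA TV": (6.99, 39.99),
}

for name, (monthly, annual) in plans.items():
    twelve_months = 12 * monthly
    savings = twelve_months - annual
    print(f"{name}: 12 x ${monthly:.2f} = ${twelve_months:.2f} vs ${annual:.2f}/year "
          f"(annual billing saves ${savings:.2f})")
```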
You can also get a free 7-day trial of NBA League Pass and see whether you like it before committing. Keep in mind that some games may be subject to blackouts or restrictions depending on your location and device. You can check game availability on the NBA League Pass website or app.
-
How to subscribe to NBA League Pass
-
There are two ways to subscribe to NBA League Pass: online or in the app. Here is how to do it:
-
Online
-
To subscribe online, follow these steps:
-
-
-
Go to the NBA League Pass website and click on "Start Free Trial" or "Buy Now".
-
Select the plan you want and click on "Continue".
-
-
Enter your payment details and confirm your purchase.
-
Enjoy watching NBA games on any device with your NBA League Pass subscription.
-
-
In the app
-
To subscribe in the app, follow these steps:
-
-
Download the NBA app on your iOS or Android device and open it.
-
Tap the menu icon and select "NBA League Pass".
-
Select the plan you want and tap "Subscribe".
-
Follow the instructions to complete your purchase with your Apple ID or Google Play account.
-
Enjoy watching NBA games on your mobile device with your NBA League Pass subscription.
-
-
How to watch NBA games on NBA League Pass
-
Once you have subscribed to NBA League Pass, you can watch NBA games on a range of devices, including web browsers, mobile apps, smart TVs, and game consoles. Here is how to do it:
-
Web browser
-
To watch NBA games in a web browser, follow these steps:
-
-
Go to the NBA League Pass website and sign in with your NBA account.
-
Select the game you want to watch from the schedule or the video library.
-
Choose the stream you prefer, such as the home or away broadcast, an alternative stream, or an in-language stream.
-
Enjoy watching the game on your computer screen.
-
-
Mobile app
-
To watch NBA games in the mobile app, follow these steps:
-
-
Open the NBA app on your iOS or Android device and sign in with your NBA account.
-
Select the game you want to watch from the schedule or the video library.
-
Choose the stream you prefer, such as the home or away broadcast, an alternative stream, or an in-language stream.
-
Enjoy watching the game on your mobile device.
-
-
Smart TV
-
To watch NBA games on a smart TV, follow these steps:
-
-
Download the NBA app on your smart TV from its app store and open it.
-
-
Select the game you want to watch from the schedule or the video library.
-
Choose the stream you prefer, such as the home or away broadcast, an alternative stream, or an in-language stream.
-
Enjoy watching the game on your big screen.
-
-
Game console
-
To watch NBA games on a game console, follow these steps:
-
-
Download the NBA app on your PlayStation 4, PlayStation 5, Xbox One, or Xbox Series X/S from its app store and open it.
-
Sign in with your NBA account using the activation code shown on the screen.
-
Select the game you want to watch from the schedule or the video library.
-
Choose the stream you prefer, such as the home or away broadcast, an alternative stream, or an in-language stream.
-
Enjoy watching the game on your console.
-
-
How to download NBA games on NBA League Pass
-
If you want to watch NBA games offline, you can download them in your web browser or mobile app. Here is how to do it:
-
Web browser
-
To download NBA games in a web browser, follow these steps:
-
-
Go to the NBA League Pass website and sign in with your NBA account.
-
Select the game you want to download from the video library.
-
Click the download icon next to the stream you prefer, such as the home or away broadcast, an alternative stream, or an in-language stream.
-
Choose the quality and file size you want and click on "Download".
-
Wait for the download to finish and find the file in your downloads folder.
-
Enjoy watching the game offline on your computer.
-
-
Mobile app
-
To download NBA games in the mobile app, follow these steps:
-
-
Open the NBA app on your iOS or Android device and sign in with your NBA account.
-
Select the game you want to download from the video library.
-
-
Choose the quality and file size you want and tap "Download".
-
Wait for the download to finish and find the file in the downloads section of the app.
-
Enjoy watching the game offline on your mobile device.
-
-
Tips and tricks for NBA League Pass users
-
To get the most out of your NBA League Pass subscription, here are some tips and tricks you can use:
-
-
You can switch between different streams during a live game by clicking or tapping the stream selector icon at the bottom of the screen.
-
You can watch several games at once using the split-screen feature in your web browser or on your smart TV. You can also use picture-in-picture mode on your mobile device to watch a game while using another app.
-
You can control a game's playback speed by clicking or tapping the settings icon at the bottom of the screen and selecting the speed you want.
-
You can jump to key moments of a game using the timeline bar at the bottom of the screen. You can also use the mini box score to jump to specific quarters, plays, or stats.
-
You can get notifications for upcoming games, scores, highlights, and news by enabling push notifications in the NBA app settings.
-
-
Conclusion
-
NBA League Pass is a great way to watch NBA games live and on demand on your favorite devices. You can also download NBA games and watch them offline whenever you want. With NBA League Pass, you can enjoy the best of basketball with exclusive features, customization options, and high-quality streams. To get started, simply subscribe to a plan that fits your needs and budget, then sign in with your NBA account. You can also take the 7-day free trial and see whether you like it. What are you waiting for? Get NBA League Pass today and don't miss a single moment of the action!
-
FAQs
-
Here are some frequently asked questions about NBA League Pass:
-
Q: How can I cancel my NBA League Pass subscription?
-
A: You can cancel your NBA League Pass subscription at any time by going to your account settings on the NBA League Pass website or app. You will still have access to NBA League Pass until the end of your current billing period. If you subscribed through a third-party provider, such as Apple or Google, you will need to cancel through them.
-
Q: How can I watch NBA League Pass on multiple devices?
-
A: You can watch NBA League Pass on up to five devices using the same NBA account. However, even with a Premium plan you can only watch on two devices simultaneously. If you try to watch on more than two devices at once, you will get an error message.
-
Q: How can I avoid blackouts or restrictions on NBA League Pass?
-
A: Blackouts or restrictions may apply to some games depending on your location and device. This is due to contractual agreements between the NBA and its broadcast partners. To get around blackouts or restrictions, some people use a VPN service that masks their IP address and lets them access geo-blocked content. However, this may violate the NBA League Pass terms of service, so do so at your own risk.
-
Q: How can I contact customer support for NBA League Pass?
-
A: You can contact customer support for NBA League Pass by visiting the help center on the NBA League Pass website or app. You can also chat with an agent online, call 1-866-622-5999 (United States) or +44 20 3884 2656 (International), or email nbasupport@neulion.com.
-
Q: How can I get a refund for NBA League Pass?
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Diablo Puede Llorar Pico De Combate Tienda De Juegos.md b/spaces/Benson/text-generation/Examples/Descargar Diablo Puede Llorar Pico De Combate Tienda De Juegos.md
deleted file mode 100644
index 988a545d1c6cf708ddd27ad7e2428633bc80f7bb..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Diablo Puede Llorar Pico De Combate Tienda De Juegos.md
+++ /dev/null
@@ -1,66 +0,0 @@
-
-
How to Download Devil May Cry: Peak of Combat on the Play Store
-
If you are a fan of the Devil May Cry series, you may be excited to learn that there is a mobile game based on it called Devil May Cry: Peak of Combat. This is a licensed mobile game created by NebulaJoy with the deep involvement of CAPCOM's official Devil May Cry team. The game inherits the free, flexible, strategic, spectacular, and unrestricted fighting style of the original series, and it also delivers an immersive combo experience with its industry-leading motion-capture technology. In this article, we will tell you what Devil May Cry: Peak of Combat is, how to download it on the Play Store, and some tips and tricks for playing it.
-
What is Devil May Cry: Peak of Combat?
-
A brief introduction to the game
-
Devil May Cry: Peak of Combat is an action role-playing game that follows the story of Dante, Vergil, Nero, and other characters from the Devil May Cry series. The game also features a new plot that reveals some previously untold secrets of the original series. You can choose your favorite character and customize their skills to your preference. You can also switch between different weapons such as swords, guns, fists, and more.
-
Some of the features of Devil May Cry: Peak of Combat are:
-
-
High-quality graphics and sound effects that recreate the gothic world of Devil May Cry.
-
Smooth, responsive controls that let you perform stylish combos and moves.
-
A variety of game modes such as story mode, chaos crisis mode, secret file mode, sky arena mode, and more.
-
A rich collection of characters, scenes, weapons, and bosses from the Devil May Cry series.
-
A social system that lets you chat with other players, join guilds, and cooperate or compete with them.
-
-
The game's requirements
-
-
-
Operating system: Microsoft Windows 7 or higher, or Android devices with a Kirin 960 or Snapdragon 660 chipset and above
-
Processor: Intel or AMD processor
-
RAM: at least 4GB
-
HDD: 5GB of free disk space
-
An active and stable Internet connection
-
-
If you have a compatible device, you can proceed to the next step.
-
How to download Devil May Cry: Peak of Combat on the Play Store
-
Step 1: Pre-register on the official website or the Google Play Store
-
The first thing you need to do is pre-register for the game on the official website or the Google Play Store. Pre-registering will give you some benefits such as in-game rewards, notifications about the game's launch, and the chance to join the closed beta test. To pre-register, visit the official website or the Google Play Store page and click the "Pre-register" button. You will need to enter your email address and accept the terms and conditions. You will also need to verify your email address by clicking the link sent to you.
-
Step 2: Wait for the official release date
-
The next step is to wait for the game's official release date. The game is expected to launch globally in 2023, but the exact date has not yet been announced. You can follow the game's official social media accounts for the latest updates and news. You can find them on Twitter, YouTube, Discord, and Facebook. You can also watch the game's trailer and videos to see what to expect.
-
Step 3: Install the game on your device
-
Once the game is released, you can install it on your device by following these steps:
-
-
Open the Google Play Store app on your device and search for "Devil May Cry: Peak of Combat".
-
Select the game from the search results and tap "Install".
-
-
Once the installation is complete, tap "Open" to launch the game.
-
-
Step 4: Enjoy the game
-
Congratulations, you have successfully downloaded Devil May Cry: Peak of Combat from the Play Store. You can now enjoy the game and experience its thrilling action and combat. You can log in with your email account or create a new one. You can also choose your preferred language and server. Then you can start playing by choosing your character and following the tutorial. Have fun!
-
Tips and tricks for playing Devil May Cry: Peak of Combat
-
Choose your favorite character and customize your skills
-
In Devil May Cry: Peak of Combat, you can choose from different characters such as Dante, Vergil, Nero, Lady, and more. Each character has their own unique skills, weapons, and fighting style. You can customize your skills by unlocking and upgrading them with skill points. You can also switch between different weapons such as swords, guns, fists, and more during combat. Experiment with different combinations and find your own style.
-
-
Master the combo system and unleash the Devil Trigger
-
The game features a combo system that lets you perform stylish combos and moves by chaining different attacks together. The more combos you perform, the higher your style rank. Your style rank affects your score, rewards, and Devil Trigger gauge. The Devil Trigger gauge is a special meter that fills up as you fight. When it is full, you can activate Devil Trigger mode, which boosts your power, speed, defense, and healing. You can also perform special moves such as the Devil Breaker for Nero or Judgement Cut for Vergil.
-
Explore the gothic world and discover hidden secrets
-
-
Challenge other players in PVP mode or cooperate with them in PVE mode
-
The game also offers several online modes that let you play with or against other players from around the world. You can choose from different modes, such as PVP mode, where you can fight other players in 1v1, 2v2, or 3v3 matches, or PVE mode, where you can cooperate with other players to complete missions, raids, or dungeons. You can also join guilds and chat with other players. Playing online can help you improve your skills, earn rewards, and make friends.
-
Conclusion
-
Devil May Cry: Peak of Combat is a mobile game that brings the essence of the Devil May Cry series to your device. You can enjoy the game's stunning graphics, smooth controls, diverse game modes, and rich content. You can also download the game from the Play Store by following the steps we have provided in this article. If you are a fan of Devil May Cry or of action games in general, you should give this game a try. You won't regret it.
-
Frequently Asked Questions
-
Q: Is Devil May Cry: Peak of Combat free to play?
-
A: Yes, the game is free to download and play. However, it may contain some optional in-game purchases that can enhance your gaming experience.
-
Q: Is Devil May Cry: Peak of Combat available for iOS devices?
-
A: No, the game is currently only available for Android devices. However, the developers have stated that they are working on an iOS version and will release it in the future.
-
Q: How can I get more skill points in Devil May Cry: Peak of Combat?
-
A: You can get more skill points by completing missions, leveling up your character, or using certain items such as skill books.
-
Q: How can I change my character's appearance in Devil May Cry: Peak of Combat?
-
-
Q: How can I contact customer service for Devil May Cry: Peak of Combat?
-
A: You can contact customer service for Devil May Cry: Peak of Combat using the in-game feedback system or by sending an email to dmc@nebulajoy.com.
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat/src/routes/conversation/[id]/stop-generating/+server.ts b/spaces/BetterAPI/BetterChat/src/routes/conversation/[id]/stop-generating/+server.ts
deleted file mode 100644
index b27c0ccf2aaafda990d853d34e1f5432c8ad5eaf..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat/src/routes/conversation/[id]/stop-generating/+server.ts
+++ /dev/null
@@ -1,27 +0,0 @@
-import { collections } from "$lib/server/database";
-import { error } from "@sveltejs/kit";
-import { ObjectId } from "mongodb";
-
-/**
- * Ideally, we'd be able to detect the client-side abort, see https://github.com/huggingface/chat-ui/pull/88#issuecomment-1523173850
- */
-export async function POST({ params, locals }) {
- const conversationId = new ObjectId(params.id);
-
- const conversation = await collections.conversations.findOne({
- _id: conversationId,
- sessionId: locals.sessionId,
- });
-
- if (!conversation) {
- throw error(404, "Conversation not found");
- }
-
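-	// Mark this conversation's generation as aborted; the upsert keeps a single record per conversation and refreshes updatedAt on repeated requests.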
- await collections.abortedGenerations.updateOne(
- { conversationId },
- { $set: { updatedAt: new Date() }, $setOnInsert: { createdAt: new Date() } },
- { upsert: true }
- );
-
- return new Response();
-}
diff --git a/spaces/BreadBytes1/CC-Dashboard/app.py b/spaces/BreadBytes1/CC-Dashboard/app.py
deleted file mode 100644
index 0d9e14aff167653d310c2ed65172fbed20a74c2a..0000000000000000000000000000000000000000
--- a/spaces/BreadBytes1/CC-Dashboard/app.py
+++ /dev/null
@@ -1,730 +0,0 @@
-# ---
-# jupyter:
-# jupytext:
-# text_representation:
-# extension: .py
-# format_name: light
-# format_version: '1.5'
-# jupytext_version: 1.14.2
-# kernelspec:
-# display_name: Python [conda env:bbytes] *
-# language: python
-# name: conda-env-bbytes-py
-# ---
-
-# +
-import csv
-import pandas as pd
-from datetime import datetime, timedelta
-import numpy as np
-import datetime as dt
-import matplotlib.pyplot as plt
-from pathlib import Path
-import time
-import plotly.graph_objects as go
-import plotly.io as pio
-from PIL import Image
-
-import streamlit as st
-import plotly.express as px
-import altair as alt
-import dateutil.parser
-from matplotlib.colors import LinearSegmentedColormap
-
-
-# +
-class color:
- PURPLE = '\033[95m'
- CYAN = '\033[96m'
- DARKCYAN = '\033[36m'
- BLUE = '\033[94m'
- GREEN = '\033[92m'
- YELLOW = '\033[93m'
- RED = '\033[91m'
- BOLD = '\033[1m'
- UNDERLINE = '\033[4m'
- END = '\033[0m'
-
-@st.experimental_memo
-def print_PL(amnt, thresh, extras = "" ):
- if amnt > 0:
- return color.BOLD + color.GREEN + str(amnt) + extras + color.END
- elif amnt < 0:
- return color.BOLD + color.RED + str(amnt)+ extras + color.END
- elif np.isnan(amnt):
- return str(np.nan)
- else:
-        return str(amnt) + extras
-
-@st.experimental_memo
-def get_headers(logtype):
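-    # Map an exchange/log type to its open-time, contract, and P/L column names plus the datetime format used in that log.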
- otimeheader = ""
- cheader = ""
- plheader = ""
- fmat = '%Y-%m-%d %H:%M:%S'
-
- if logtype == "ByBit":
- otimeheader = 'Create Time'
- cheader = 'Contracts'
- plheader = 'Closed P&L'
- fmat = '%Y-%m-%d %H:%M:%S'
-
- if logtype == "BitGet":
- otimeheader = 'Date'
- cheader = 'Futures'
- plheader = 'Realized P/L'
- fmat = '%Y-%m-%d %H:%M:%S'
-
- if logtype == "MEXC":
- otimeheader = 'Trade time'
- cheader = 'Futures'
- plheader = 'closing position'
- fmat = '%Y/%m/%d %H:%M'
-
- if logtype == "Binance":
- otimeheader = 'Date'
- cheader = 'Symbol'
- plheader = 'Realized Profit'
- fmat = '%Y-%m-%d %H:%M:%S'
-
- #if logtype == "Kucoin":
- # otimeheader = 'Time'
- # cheader = 'Contract'
- # plheader = ''
- # fmat = '%Y/%m/%d %H:%M:%S'
-
-
- if logtype == "Kraken":
- otimeheader = 'time'
- cheader = 'asset'
- plheader = 'amount'
- fmat = '%Y-%m-%d %H:%M:%S.%f'
-
- if logtype == "OkX":
- otimeheader = '\ufeffOrder Time'
- cheader = '\ufeffInstrument'
- plheader = '\ufeffPL'
- fmat = '%Y-%m-%d %H:%M:%S'
-
- return otimeheader.lower(), cheader.lower(), plheader.lower(), fmat
-
-@st.experimental_memo
-def get_coin_info(df_coin, principal_balance,plheader):
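-    # Per-coin summary: trade counts, win rate, profit factor, and cumulative/mean P/L in dollars and as % of the principal balance.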
- numtrades = int(len(df_coin))
- numwin = int(sum(df_coin[plheader] > 0))
- numloss = int(sum(df_coin[plheader] < 0))
- winrate = np.round(100*numwin/numtrades,2)
-
- grosswin = sum(df_coin[df_coin[plheader] > 0][plheader])
- grossloss = sum(df_coin[df_coin[plheader] < 0][plheader])
- if grossloss != 0:
- pfactor = -1*np.round(grosswin/grossloss,2)
- else:
- pfactor = np.nan
-
- cum_PL = np.round(sum(df_coin[plheader].values),2)
- cum_PL_perc = np.round(100*cum_PL/principal_balance,2)
- mean_PL = np.round(sum(df_coin[plheader].values/len(df_coin)),2)
- mean_PL_perc = np.round(100*mean_PL/principal_balance,2)
-
- return numtrades, numwin, numloss, winrate, pfactor, cum_PL, cum_PL_perc, mean_PL, mean_PL_perc
-
-@st.experimental_memo
-def get_hist_info(df_coin, principal_balance,plheader):
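-    # Overall trade statistics: number of trades, wins, losses, win rate, and profit factor (gross win / gross loss).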
- numtrades = int(len(df_coin))
- numwin = int(sum(df_coin[plheader] > 0))
- numloss = int(sum(df_coin[plheader] < 0))
- if numtrades != 0:
- winrate = int(np.round(100*numwin/numtrades,2))
- else:
- winrate = np.nan
-
- grosswin = sum(df_coin[df_coin[plheader] > 0][plheader])
- grossloss = sum(df_coin[df_coin[plheader] < 0][plheader])
- if grossloss != 0:
- pfactor = -1*np.round(grosswin/grossloss,2)
- else:
- pfactor = np.nan
- return numtrades, numwin, numloss, winrate, pfactor
-
-@st.experimental_memo
-def get_rolling_stats(df, lev, otimeheader, days):
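-    # Compounded % return of trades in the trailing `days`-day window; NaN if the log does not span that many days.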
- max_roll = (df[otimeheader].max() - df[otimeheader].min()).days
-
- if max_roll >= days:
- rollend = df[otimeheader].max()-timedelta(days=days)
- rolling_df = df[df[otimeheader] >= rollend]
-
- if len(rolling_df) > 0:
- rolling_perc = rolling_df['Return Per Trade'].dropna().cumprod().values[-1]-1
- else:
- rolling_perc = np.nan
- else:
- rolling_perc = np.nan
- return 100*rolling_perc
-@st.experimental_memo
-def cc_coding(row):
- return ['background-color: lightgrey'] * len(row) if row['Exit Date'] <= datetime.strptime('2022-12-16 00:00:00','%Y-%m-%d %H:%M:%S').date() else [''] * len(row)
-def ctt_coding(row):
- return ['background-color: lightgrey'] * len(row) if row['Exit Date'] <= datetime.strptime('2023-01-02 00:00:00','%Y-%m-%d %H:%M:%S').date() else [''] * len(row)
-
-@st.experimental_memo
-def my_style(v, props=''):
- props = 'color:red' if v < 0 else 'color:green'
- return props
-
-def filt_df(df, cheader, symbol_selections):
-
- df = df.copy()
- df = df[df[cheader].isin(symbol_selections)]
-
- return df
-
-def tv_reformat(close50filename):
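-    # Reshape a TradingView export: pair entry/exit rows, propagate "Close 50% of Position" exits, and average partial closes into one row per trade.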
- try:
- data = pd.read_csv(open(close50filename,'r'), sep='[,|\t]', engine='python')
- except:
- data = pd.DataFrame([])
-
- if data.empty:
- return data
- else:
- entry_df = data[data['Type'].str.contains("Entry")]
- exit_df = data[data['Type'].str.contains("Exit")]
-
- entry_df.index = range(len(entry_df))
- exit_df.index = range(len(exit_df))
-
- df = pd.DataFrame([], columns=['Trade','Entry Date','Buy Price', 'Sell Price','Exit Date', 'P/L per token', 'P/L %', 'Drawdown %'])
-
- df['Signal'] = [string.split(' ')[1] for string in entry_df['Type']]
- df['Trade'] = entry_df.index
- df['Entry Date'] = entry_df['Date/Time']
- df['Buy Price'] = entry_df['Price USDT']
-
- df['Sell Price'] = exit_df['Price USDT']
- df['Exit Date'] = exit_df['Date/Time']
- df['P/L per token'] = df['Sell Price'] - df['Buy Price']
- df['P/L %'] = exit_df['Profit %']
- df['Drawdown %'] = exit_df['Drawdown %']
- df['Close 50'] = [int(i == "Close 50% of Position") for i in exit_df['Signal']]
- df = df.sort_values(['Entry Date','Close 50'], ascending = [False, True])
- df.index = range(len(df))
-
- df.loc[df['Close 50'] == 1, 'Exit Date'] = np.copy(df.loc[df[df['Close 50'] == 1].index.values -1]['Exit Date'])
-
- grouped_df = df.groupby('Entry Date').agg({'Signal' : 'first', 'Entry Date': 'min', 'Buy Price':'mean',
- 'Sell Price' : 'mean',
- 'Exit Date': 'max',
- 'P/L per token': 'mean',
- 'P/L %' : 'mean'})
-
- grouped_df.insert(0,'Trade', range(len(grouped_df)))
- grouped_df.index = range(len(grouped_df))
- return grouped_df
-
-def load_data(filename, otimeheader, fmat):
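-    # Load a bot trade log, clean price/percent columns, merge any "-50" partial-close file, parse dates, and number DCA legs for the Cinnamon Toast log.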
- df = pd.read_csv(open(filename,'r'), sep='\t') # so as not to mutate cached value
- close50filename = filename.split('.')[0] + '-50.' + filename.split('.')[1]
- df2 = tv_reformat(close50filename)
-
- if filename == "CT-Trade-Log.csv":
- df.columns = ['Trade','Entry Date','Buy Price', 'Sell Price','Exit Date', 'P/L per token', 'P/L %', 'Drawdown %']
- df.insert(1, 'Signal', ['Long']*len(df))
- elif filename == "CC-Trade-Log.csv":
- df.columns = ['Trade','Signal','Entry Date','Buy Price', 'Sell Price','Exit Date', 'P/L per token', 'P/L %', 'Drawdown %']
- else:
- df.columns = ['Trade','Signal','Entry Date','Buy Price', 'Sell Price','Exit Date', 'P/L per token', 'P/L %']
-
- if filename != "CT-Toasted-Trade-Log.csv":
- df['Signal'] = df['Signal'].str.replace(' ', '', regex=True)
- df['Buy Price'] = df['Buy Price'].str.replace('$', '', regex=True)
- df['Sell Price'] = df['Sell Price'].str.replace('$', '', regex=True)
- df['Buy Price'] = df['Buy Price'].str.replace(',', '', regex=True)
- df['Sell Price'] = df['Sell Price'].str.replace(',', '', regex=True)
- df['P/L per token'] = df['P/L per token'].str.replace('$', '', regex=True)
- df['P/L per token'] = df['P/L per token'].str.replace(',', '', regex=True)
- df['P/L %'] = df['P/L %'].str.replace('%', '', regex=True)
-
- df['Buy Price'] = pd.to_numeric(df['Buy Price'])
- df['Sell Price'] = pd.to_numeric(df['Sell Price'])
- df['P/L per token'] = pd.to_numeric(df['P/L per token'])
- df['P/L %'] = pd.to_numeric(df['P/L %'])
-
- if df2.empty:
- df = df
- else:
- df = pd.concat([df,df2], axis=0, ignore_index=True)
-
- if filename == "CT-Trade-Log.csv":
- df['Signal'] = ['Long']*len(df)
-
- dateheader = 'Date'
- theader = 'Time'
-
- df[dateheader] = [tradetimes.split(" ")[0] for tradetimes in df[otimeheader].values]
- df[theader] = [tradetimes.split(" ")[1] for tradetimes in df[otimeheader].values]
-
- df[otimeheader]= [dateutil.parser.parse(date+' '+time)
- for date,time in zip(df[dateheader],df[theader])]
- df[otimeheader] = pd.to_datetime(df[otimeheader])
- df['Exit Date'] = pd.to_datetime(df['Exit Date'])
- df.sort_values(by=otimeheader, inplace=True)
-
- df[dateheader] = [dateutil.parser.parse(date).date() for date in df[dateheader]]
- df[theader] = [dateutil.parser.parse(time).time() for time in df[theader]]
- df['Trade'] = df.index + 1 #reindex
-
- if filename == "CT-Trade-Log.csv":
- df['DCA'] = np.nan
-
- for exit in pd.unique(df['Exit Date']):
- df_exit = df[df['Exit Date']==exit]
- if dateutil.parser.parse(str(exit)) < dateutil.parser.parse('2023-02-07 13:00:00'):
- for i in range(len(df_exit)):
- ind = df_exit.index[i]
- df.loc[ind,'DCA'] = i+1
-
- else:
- for i in range(len(df_exit)):
- ind = df_exit.index[i]
- df.loc[ind,'DCA'] = i+1.1
- return df
-
-
-def get_sd_df(sd_df, sd, bot_selections, dca1, dca2, dca3, dca4, dca5, dca6, fees, lev, dollar_cap, principal_balance):
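-    # Build '+'/'-' scenario columns by shifting prices by +/-sd (two std. devs.), then compound those returns into balance and P/L columns.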
- sd = 2*.00026
- # ------ Standard Dev. Calculations.
- if bot_selections == "Cinnamon Toast":
- dca_map = {1: dca1/100, 2: dca2/100, 3: dca3/100, 4: dca4/100, 1.1: dca5/100, 2.1: dca6/100}
- sd_df['DCA %'] = sd_df['DCA'].map(dca_map)
- sd_df['Calculated Return % (+)'] = df['Signal'].map(signal_map)*(df['DCA %'])*(1-fees)*((df['Sell Price']*(1+df['Signal'].map(signal_map)*sd) - df['Buy Price']*(1-df['Signal'].map(signal_map)*sd))/df['Buy Price']*(1-df['Signal'].map(signal_map)*sd) - fees) #accounts for fees on open and close of trade
- sd_df['Calculated Return % (-)'] = df['Signal'].map(signal_map)*(df['DCA %'])*(1-fees)*((df['Sell Price']*(1-df['Signal'].map(signal_map)*sd)-df['Buy Price']*(1+df['Signal'].map(signal_map)*sd))/df['Buy Price']*(1+df['Signal'].map(signal_map)*sd) - fees) #accounts for fees on open and close of trade
- sd_df['DCA'] = np.floor(sd_df['DCA'].values)
-
- sd_df['Return Per Trade (+)'] = np.nan
- sd_df['Return Per Trade (-)'] = np.nan
- sd_df['Balance used in Trade (+)'] = np.nan
- sd_df['Balance used in Trade (-)'] = np.nan
- sd_df['New Balance (+)'] = np.nan
- sd_df['New Balance (-)'] = np.nan
-
- g1 = sd_df.groupby('Exit Date').sum(numeric_only=True)['Calculated Return % (+)'].reset_index(name='Return Per Trade (+)')
- g2 = sd_df.groupby('Exit Date').sum(numeric_only=True)['Calculated Return % (-)'].reset_index(name='Return Per Trade (-)')
- sd_df.loc[sd_df['DCA']==1.0,'Return Per Trade (+)'] = 1+lev*g1['Return Per Trade (+)'].values
- sd_df.loc[sd_df['DCA']==1.0,'Return Per Trade (-)'] = 1+lev*g2['Return Per Trade (-)'].values
-
- sd_df['Compounded Return (+)'] = sd_df['Return Per Trade (+)'].cumprod()
- sd_df['Compounded Return (-)'] = sd_df['Return Per Trade (-)'].cumprod()
- sd_df.loc[sd_df['DCA']==1.0,'New Balance (+)'] = [min(dollar_cap/lev, bal*principal_balance) for bal in sd_df.loc[sd_df['DCA']==1.0,'Compounded Return (+)']]
- sd_df.loc[sd_df['DCA']==1.0,'Balance used in Trade (+)'] = np.concatenate([[principal_balance], sd_df.loc[sd_df['DCA']==1.0,'New Balance (+)'].values[:-1]])
-
- sd_df.loc[sd_df['DCA']==1.0,'New Balance (-)'] = [min(dollar_cap/lev, bal*principal_balance) for bal in sd_df.loc[sd_df['DCA']==1.0,'Compounded Return (-)']]
- sd_df.loc[sd_df['DCA']==1.0,'Balance used in Trade (-)'] = np.concatenate([[principal_balance], sd_df.loc[sd_df['DCA']==1.0,'New Balance (-)'].values[:-1]])
- else:
- sd_df['Calculated Return % (+)'] = df['Signal'].map(signal_map)*(1-fees)*((df['Sell Price']*(1+df['Signal'].map(signal_map)*sd) - df['Buy Price']*(1-df['Signal'].map(signal_map)*sd))/df['Buy Price']*(1-df['Signal'].map(signal_map)*sd) - fees) #accounts for fees on open and close of trade
- sd_df['Calculated Return % (-)'] = df['Signal'].map(signal_map)*(1-fees)*((df['Sell Price']*(1-df['Signal'].map(signal_map)*sd)-df['Buy Price']*(1+df['Signal'].map(signal_map)*sd))/df['Buy Price']*(1+df['Signal'].map(signal_map)*sd) - fees) #accounts for fees on open and close of trade
- sd_df['Return Per Trade (+)'] = np.nan
- sd_df['Return Per Trade (-)'] = np.nan
-
- g1 = sd_df.groupby('Exit Date').sum(numeric_only=True)['Calculated Return % (+)'].reset_index(name='Return Per Trade (+)')
- g2 = sd_df.groupby('Exit Date').sum(numeric_only=True)['Calculated Return % (-)'].reset_index(name='Return Per Trade (-)')
- sd_df['Return Per Trade (+)'] = 1+lev*g1['Return Per Trade (+)'].values
- sd_df['Return Per Trade (-)'] = 1+lev*g2['Return Per Trade (-)'].values
-
- sd_df['Compounded Return (+)'] = sd_df['Return Per Trade (+)'].cumprod()
- sd_df['Compounded Return (-)'] = sd_df['Return Per Trade (-)'].cumprod()
- sd_df['New Balance (+)'] = [min(dollar_cap/lev, bal*principal_balance) for bal in sd_df['Compounded Return (+)']]
- sd_df['Balance used in Trade (+)'] = np.concatenate([[principal_balance], sd_df['New Balance (+)'].values[:-1]])
-
- sd_df['New Balance (-)'] = [min(dollar_cap/lev, bal*principal_balance) for bal in sd_df['Compounded Return (-)']]
- sd_df['Balance used in Trade (-)'] = np.concatenate([[principal_balance], sd_df['New Balance (-)'].values[:-1]])
-
- sd_df['Net P/L Per Trade (+)'] = (sd_df['Return Per Trade (+)']-1)*sd_df['Balance used in Trade (+)']
- sd_df['Cumulative P/L (+)'] = sd_df['Net P/L Per Trade (+)'].cumsum()
-
- sd_df['Net P/L Per Trade (-)'] = (sd_df['Return Per Trade (-)']-1)*sd_df['Balance used in Trade (-)']
- sd_df['Cumulative P/L (-)'] = sd_df['Net P/L Per Trade (-)'].cumsum()
- return sd_df
-
-def runapp() -> None:
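-    # Render the Streamlit dashboard: load the bot's trade log, read user settings from the form, compound returns, and plot/tabulate the results.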
- bot_selections = "Cosmic Cupcake"
- otimeheader = 'Exit Date'
- fmat = '%Y-%m-%d %H:%M:%S'
- fees = .075/100
-
- st.header(f"{bot_selections} Performance Dashboard :bread: :moneybag:")
- no_errors = True
- st.write("Welcome to the Trading Bot Dashboard by BreadBytes! You can use this dashboard to track " +
- "the performance of our trading bots.")
-
- if bot_selections == "Cinnamon Toast":
- lev_cap = 5
- dollar_cap = 1000000000.00
- data = load_data("CT-Trade-Log.csv",otimeheader, fmat)
- if bot_selections == "French Toast":
- lev_cap = 3
- dollar_cap = 10000000000.00
- data = load_data("FT-Trade-Log.csv",otimeheader, fmat)
- if bot_selections == "Short Bread":
- lev_cap = 5
- dollar_cap = 1000000000.00
- data = load_data("SB-Trade-Log.csv",otimeheader, fmat)
- if bot_selections == "Cosmic Cupcake":
- lev_cap = 3
- dollar_cap = 1000000000.00
- data = load_data("CC-Trade-Log.csv",otimeheader, fmat)
- if bot_selections == "CT Toasted":
- lev_cap = 5
- dollar_cap = 1000000000.00
- data = load_data("CT-Toasted-Trade-Log.csv",otimeheader, fmat)
-
- df = data.copy(deep=True)
-
- dateheader = 'Date'
- theader = 'Time'
-
- st.subheader("Choose your settings:")
- with st.form("user input", ):
- if no_errors:
- with st.container():
- col1, col2 = st.columns(2)
- with col1:
- try:
- startdate = st.date_input("Start Date", value=pd.to_datetime(df[otimeheader]).min())
- except:
- st.error("Please select your exchange or upload a supported trade log file.")
- no_errors = False
- with col2:
- try:
- enddate = st.date_input("End Date", value=datetime.today())
- except:
- st.error("Please select your exchange or upload a supported trade log file.")
- no_errors = False
- #st.sidebar.subheader("Customize your Dashboard")
-
- if no_errors and (enddate < startdate):
- st.error("End Date must be later than Start date. Please try again.")
- no_errors = False
- with st.container():
- col1,col2 = st.columns(2)
- with col2:
- lev = st.number_input('Leverage', min_value=1, value=1, max_value= lev_cap, step=1)
- with col1:
- principal_balance = st.number_input('Starting Balance', min_value=0.00, value=1000.00, max_value= dollar_cap, step=.01)
-
- if bot_selections == "Cinnamon Toast":
- st.write("Choose your DCA setup (for trades before 02/07/2023)")
- with st.container():
- col1, col2, col3, col4 = st.columns(4)
- with col1:
- dca1 = st.number_input('DCA 1 Allocation', min_value=0, value=25, max_value= 100, step=1)
- with col2:
- dca2 = st.number_input('DCA 2 Allocation', min_value=0, value=25, max_value= 100, step=1)
- with col3:
- dca3 = st.number_input('DCA 3 Allocation', min_value=0, value=25, max_value= 100, step=1)
- with col4:
- dca4 = st.number_input('DCA 4 Allocation', min_value=0, value=25, max_value= 100, step=1)
- st.write("Choose your DCA setup (for trades on or after 02/07/2023)")
- with st.container():
- col1, col2 = st.columns(2)
- with col1:
- dca5 = st.number_input('DCA 1 Allocation', min_value=0, value=50, max_value= 100, step=1)
- with col2:
- dca6 = st.number_input('DCA 2 Allocation', min_value=0, value=50, max_value= 100, step=1)
-
- #hack way to get button centered
- c = st.columns(9)
- with c[4]:
- submitted = st.form_submit_button("Get Cookin'!")
-
- if submitted and principal_balance * lev > dollar_cap:
- lev = np.floor(dollar_cap/principal_balance)
- st.error(f"WARNING: (Starting Balance)*(Leverage) exceeds the ${dollar_cap} limit. Using maximum available leverage of {lev}")
-
- if submitted and no_errors:
- df = df[(df[dateheader] >= startdate) & (df[dateheader] <= enddate)]
- signal_map = {'Long': 1, 'Short':-1}
-
-
- if len(df) == 0:
- st.error("There are no available trades matching your selections. Please try again!")
- no_errors = False
-
- if no_errors:
- if bot_selections == "Cinnamon Toast":
- dca_map = {1: dca1/100, 2: dca2/100, 3: dca3/100, 4: dca4/100, 1.1: dca5/100, 2.1: dca6/100}
- df['DCA %'] = df['DCA'].map(dca_map)
- df['Calculated Return %'] = df['Signal'].map(signal_map)*(df['DCA %'])*(1-fees)*((df['Sell Price']-df['Buy Price'])/df['Buy Price'] - fees) #accounts for fees on open and close of trade
- df['DCA'] = np.floor(df['DCA'].values)
-
- df['Return Per Trade'] = np.nan
- df['Balance used in Trade'] = np.nan
- df['New Balance'] = np.nan
-
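-                    # Legs sharing an Exit Date are summed into a single per-trade return, then scaled by leverage and compounded.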
- g = df.groupby('Exit Date').sum(numeric_only=True)['Calculated Return %'].reset_index(name='Return Per Trade')
- df.loc[df['DCA']==1.0,'Return Per Trade'] = 1+lev*g['Return Per Trade'].values
-
- df['Compounded Return'] = df['Return Per Trade'].cumprod()
- df.loc[df['DCA']==1.0,'New Balance'] = [min(dollar_cap/lev, bal*principal_balance) for bal in df.loc[df['DCA']==1.0,'Compounded Return']]
- df.loc[df['DCA']==1.0,'Balance used in Trade'] = np.concatenate([[principal_balance], df.loc[df['DCA']==1.0,'New Balance'].values[:-1]])
- else:
- df['Calculated Return %'] = df['Signal'].map(signal_map)*(1-fees)*((df['Sell Price']-df['Buy Price'])/df['Buy Price'] - fees) #accounts for fees on open and close of trade
- df['Return Per Trade'] = np.nan
- g = df.groupby('Exit Date').sum(numeric_only=True)['Calculated Return %'].reset_index(name='Return Per Trade')
- df['Return Per Trade'] = 1+lev*g['Return Per Trade'].values
-
- df['Compounded Return'] = df['Return Per Trade'].cumprod()
- df['New Balance'] = [min(dollar_cap/lev, bal*principal_balance) for bal in df['Compounded Return']]
- df['Balance used in Trade'] = np.concatenate([[principal_balance], df['New Balance'].values[:-1]])
- df['Net P/L Per Trade'] = (df['Return Per Trade']-1)*df['Balance used in Trade']
- df['Cumulative P/L'] = df['Net P/L Per Trade'].cumsum()
-
- if bot_selections == "Cinnamon Toast" or bot_selections == "Cosmic Cupcake":
- cum_pl = df.loc[df.drop('Drawdown %', axis=1).dropna().index[-1],'Cumulative P/L'] + principal_balance
- #cum_sdp = sd_df.loc[sd_df.drop('Drawdown %', axis=1).dropna().index[-1],'Cumulative P/L (+)'] + principal_balance
- #cum_sdm = sd_df.loc[sd_df.drop('Drawdown %', axis=1).dropna().index[-1],'Cumulative P/L (-)'] + principal_balance
- else:
- cum_pl = df.loc[df.dropna().index[-1],'Cumulative P/L'] + principal_balance
- #cum_sdp = sd_df.loc[sd_df.dropna().index[-1],'Cumulative P/L (+)'] + principal_balance
- #cum_sdm = sd_df.loc[sd_df.dropna().index[-1],'Cumulative P/L (-)'] + principal_balance
- #sd = 2*.00026
- #sd_df = get_sd_df(get_sd_df(df.copy(), sd, bot_selections, dca1, dca2, dca3, dca4, dca5, dca6, fees, lev, dollar_cap, principal_balance)
-
- effective_return = 100*((cum_pl - principal_balance)/principal_balance)
-
- st.header(f"{bot_selections} Results")
- with st.container():
-
- if len(bot_selections) > 1:
- col1, col2 = st.columns(2)
- with col1:
- st.metric(
- "Total Account Balance",
- f"${cum_pl:.2f}",
- f"{100*(cum_pl-principal_balance)/(principal_balance):.2f} %",
- )
-
-# with col2:
-# st.write("95% of trades should fall within this 2 std. dev. range.")
-# st.metric(
-# "High Range (+ 2 std. dev.)",
-# f"", #${cum_sdp:.2f}
-# f"{100*(cum_sdp-principal_balance)/(principal_balance):.2f} %",
-# )
-# st.metric(
-# "Low Range (- 2 std. dev.)",
-# f"" ,#${cum_sdm:.2f}"
-# f"{100*(cum_sdm-principal_balance)/(principal_balance):.2f} %",
-# )
- if bot_selections == "Cinnamon Toast" or bot_selections == "Cosmic Cupcake":
- #st.line_chart(data=df.drop('Drawdown %', axis=1).dropna(), x='Exit Date', y='Cumulative P/L', use_container_width=True)
- dfdata = df.drop('Drawdown %', axis=1).dropna()
- #sd_df = sd_df.drop('Drawdown %', axis=1).dropna()
- else:
- #st.line_chart(data=df.dropna(), x='Exit Date', y='Cumulative P/L', use_container_width=True)
- dfdata = df.dropna()
- #sd_df = sd_df.dropna()
-
- # Create figure
- fig = go.Figure()
-
- pyLogo = Image.open("logo.png")
-
-# fig.add_traces(go.Scatter(x=sd_df['Exit Date'], y = sd_df['Cumulative P/L (+)'],line_shape='spline',
-# line = dict(smoothing = 1.3, color='rgba(31, 119, 200,0)'), showlegend = False)
-# )
-
-# fig.add_traces(go.Scatter(x=sd_df['Exit Date'], y = sd_df['Cumulative P/L (-)'],
-# line = dict(smoothing = 1.3, color='rgba(31, 119, 200,0)'), line_shape='spline',
-# fill='tonexty',
-# fillcolor = 'rgba(31, 119, 200,.2)', name = '+/- Standard Deviation')
-# )
-
- # Add trace
- fig.add_trace(
- go.Scatter(x=dfdata['Exit Date'], y=np.round(dfdata['Cumulative P/L'].values,2), line_shape='spline',
- line = {'smoothing': 1.0, 'color' : 'rgba(31, 119, 200,.8)'},
- name='Cumulative P/L')
- )
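-                    # Buy-and-hold baseline: P/L from putting the full starting balance into the asset at the first trade's entry price.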
- buyhold = (principal_balance/dfdata['Buy Price'][dfdata.index[0]])*(dfdata['Buy Price']-dfdata['Buy Price'][dfdata.index[0]])
- fig.add_trace(go.Scatter(x=dfdata['Exit Date'], y=np.round(buyhold.values,2), line_shape='spline',
- line = {'smoothing': 1.0, 'color' :'red'}, name = 'Buy & Hold Return')
- )
-
- fig.add_layout_image(
- dict(
- source=pyLogo,
- xref="paper",
- yref="paper",
- x = 0.05, #dfdata['Exit Date'].astype('int64').min() // 10**9,
- y = .85, #dfdata['Cumulative P/L'].max(),
- sizex= .9, #(dfdata['Exit Date'].astype('int64').max() - dfdata['Exit Date'].astype('int64').min()) // 10**9,
- sizey= .9, #(dfdata['Cumulative P/L'].max() - dfdata['Cumulative P/L'].min()),
- sizing="contain",
- opacity=0.2,
- layer = "below")
- )
-
- #style layout
- fig.update_layout(
- height = 600,
- xaxis=dict(
- title="Exit Date",
- tickmode='array',
- ),
- yaxis=dict(
- title="Cumulative P/L"
- ) )
-
- st.plotly_chart(fig, theme=None, use_container_width=True,height=600)
- st.write()
- df['Per Trade Return Rate'] = df['Return Per Trade']-1
-
- totals = pd.DataFrame([], columns = ['# of Trades', 'Wins', 'Losses', 'Win Rate', 'Profit Factor'])
- if bot_selections == "Cinnamon Toast" or bot_selections == "Cosmic Cupcake":
- data = get_hist_info(df.drop('Drawdown %', axis=1).dropna(), principal_balance,'Per Trade Return Rate')
- else:
- data = get_hist_info(df.dropna(), principal_balance,'Per Trade Return Rate')
- totals.loc[len(totals)] = list(i for i in data)
-
- totals['Cum. P/L'] = cum_pl-principal_balance
- totals['Cum. P/L (%)'] = 100*(cum_pl-principal_balance)/principal_balance
-
- if df.empty:
- st.error("Oops! None of the data provided matches your selection(s). Please try again.")
- else:
- with st.container():
- for row in totals.itertuples():
- col1, col2, col3, col4= st.columns(4)
- c1, c2, c3, c4 = st.columns(4)
- with col1:
- st.metric(
- "Total Trades",
- f"{row._1:.0f}",
- )
- with c1:
- st.metric(
- "Profit Factor",
- f"{row._5:.2f}",
- )
- with col2:
- st.metric(
- "Wins",
- f"{row.Wins:.0f}",
- )
- with c2:
- st.metric(
- "Cumulative P/L",
- f"${row._6:.2f}",
- f"{row._7:.2f} %",
- )
- with col3:
- st.metric(
- "Losses",
- f"{row.Losses:.0f}",
- )
- with c3:
- st.metric(
- "Rolling 7 Days",
- "",#f"{(1+get_rolling_stats(df,otimeheader, 30))*principal_balance:.2f}",
- f"{get_rolling_stats(df,lev, otimeheader, 7):.2f}%",
- )
- st.metric(
- "Rolling 30 Days",
- "",#f"{(1+get_rolling_stats(df,otimeheader, 30))*principal_balance:.2f}",
- f"{get_rolling_stats(df,lev, otimeheader, 30):.2f}%",
- )
-
- with col4:
- st.metric(
- "Win Rate",
- f"{row._4:.1f}%",
- )
- with c4:
- st.metric(
- "Rolling 90 Days",
- "",#f"{(1+get_rolling_stats(df,otimeheader, 30))*principal_balance:.2f}",
- f"{get_rolling_stats(df,lev, otimeheader, 90):.2f}%",
- )
- st.metric(
- "Rolling 180 Days",
- "",#f"{(1+get_rolling_stats(df,otimeheader, 30))*principal_balance:.2f}",
- f"{get_rolling_stats(df,lev, otimeheader, 180):.2f}%",
- )
-
- if bot_selections == "Cinnamon Toast":
- if submitted:
- grouped_df = df.groupby('Exit Date').agg({'Signal':'min','Entry Date': 'min','Exit Date': 'max','Buy Price': 'mean',
- 'Sell Price' : 'max',
- 'Net P/L Per Trade': 'mean',
- 'Calculated Return %' : lambda x: np.round(100*lev*x.sum(),2),
- 'DCA': lambda x: int(np.floor(x.max()))})
- grouped_df.index = range(1, len(grouped_df)+1)
- grouped_df.rename(columns={'DCA' : '# of DCAs', 'Buy Price':'Avg. Buy Price',
- 'Net P/L Per Trade':'Net P/L',
- 'Calculated Return %':'P/L %'}, inplace=True)
- else:
- dca_map = {1: 25/100, 2: 25/100, 3: 25/100, 4: 25/100, 1.1: 50/100, 2.1: 50/100}
- df['DCA %'] = df['DCA'].map(dca_map)
- df['Calculated Return %'] = (df['DCA %'])*(1-fees)*((df['Sell Price']-df['Buy Price'])/df['Buy Price'] - fees) #accounts for fees on open and close of trade
-
- grouped_df = df.groupby('Exit Date').agg({'Signal':'min','Entry Date': 'min','Exit Date': 'max','Buy Price': 'mean',
- 'Sell Price' : 'max',
- 'P/L per token': 'mean',
- 'Calculated Return %' : lambda x: np.round(100*x.sum(),2),
- 'DCA': lambda x: int(np.floor(x.max()))})
- grouped_df.index = range(1, len(grouped_df)+1)
- grouped_df.rename(columns={'DCA' : '# of DCAs', 'Buy Price':'Avg. Buy Price',
- 'Calculated Return %':'P/L %',
- 'P/L per token':'Net P/L'}, inplace=True)
-
- else:
- if submitted:
- grouped_df = df.groupby('Exit Date').agg({'Signal':'min','Entry Date': 'min','Exit Date': 'max','Buy Price': 'mean',
- 'Sell Price' : 'max',
- 'Net P/L Per Trade': 'mean',
- 'Calculated Return %' : lambda x: np.round(100*lev*x.sum(),2)})
- grouped_df.index = range(1, len(grouped_df)+1)
- grouped_df.rename(columns={'Buy Price':'Avg. Buy Price',
- 'Net P/L Per Trade':'Net P/L',
- 'Calculated Return %':'P/L %'}, inplace=True)
- else:
- grouped_df = df.groupby('Exit Date').agg({'Signal':'min','Entry Date': 'min','Exit Date': 'max','Buy Price': 'mean',
- 'Sell Price' : 'max',
- 'P/L per token': 'mean',
- 'P/L %':'mean'})
- grouped_df.index = range(1, len(grouped_df)+1)
- grouped_df.rename(columns={'Buy Price':'Avg. Buy Price',
- 'P/L per token':'Net P/L'}, inplace=True)
- st.subheader("Trade Logs")
- grouped_df['Entry Date'] = pd.to_datetime(grouped_df['Entry Date'])
- grouped_df['Exit Date'] = pd.to_datetime(grouped_df['Exit Date'])
- if bot_selections == "Cosmic Cupcake" or bot_selections == "CT Toasted":
- coding = cc_coding if bot_selections == "Cosmic Cupcake" else ctt_coding
- st.dataframe(grouped_df.style.format({'Entry Date':'{:%m-%d-%Y %H:%M:%S}','Exit Date':'{:%m-%d-%Y %H:%M:%S}','Avg. Buy Price': '${:.2f}', 'Sell Price': '${:.2f}', 'Net P/L':'${:.2f}', 'P/L %':'{:.2f}%'})\
- .apply(coding, axis=1)\
- .applymap(my_style,subset=['Net P/L'])\
- .applymap(my_style,subset=['P/L %']), use_container_width=True)
-                    # new_title = 'Not Live Traded'
- # st.markdown(new_title, unsafe_allow_html=True)
- else:
- st.dataframe(grouped_df.style.format({'Entry Date':'{:%m-%d-%Y %H:%M:%S}','Exit Date':'{:%m-%d-%Y %H:%M:%S}','Avg. Buy Price': '${:.2f}', 'Sell Price': '${:.2f}', 'Net P/L':'${:.2f}', 'P/L %':'{:.2f}%'})\
- .applymap(my_style,subset=['Net P/L'])\
- .applymap(my_style,subset=['P/L %']), use_container_width=True)
-
-# st.subheader("Checking Status")
-# if submitted:
-# st.dataframe(sd_df)
-
-if __name__ == "__main__":
- st.set_page_config(
- "Trading Bot Dashboard",
- layout="wide",
- )
- runapp()
-# -
-
-
-
-
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/changelog.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/changelog.md
deleted file mode 100644
index 9f67c16de6355f4eec9502356191c7aed5132484..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/changelog.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# Change Log
-
-### Releases
-See release log at
-[https://github.com/facebookresearch/detectron2/releases](https://github.com/facebookresearch/detectron2/releases)
-
-### Notable Backward Incompatible Changes:
-
-* 03/30/2020: Custom box head's `output_size` changed to `output_shape`.
-* 02/14/2020,02/18/2020: Mask head and keypoint head now include logic for losses & inference. Custom heads
- should overwrite the feature computation by `layers()` method.
-* 11/11/2019: `detectron2.data.detection_utils.read_image` transposes images with exif information.
-
-### Config Version Change Log
-
-* v1: Rename `RPN_HEAD.NAME` to `RPN.HEAD_NAME`.
-* v2: A batch of rename of many configurations before release.
-
-### Known Bugs in Historical Versions:
-* 03/30/2020 - 04/01/2020: ResNets are not correctly built.
-* 12/19/2019 - 12/26/2019: Using aspect ratio grouping causes a drop in accuracy.
-* release - 11/9/2019: Test time augmentation does not predict the last category.
diff --git a/spaces/CVPR/LIVE/thrust/thrust/tabulate.h b/spaces/CVPR/LIVE/thrust/thrust/tabulate.h
deleted file mode 100644
index 1dcd2c9ee388056d338cfe689deb8ebbb70a96d3..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/tabulate.h
+++ /dev/null
@@ -1,129 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file tabulate.h
- * \brief Fills a range with the tabulation of a function
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execution_policy.h>
-
-namespace thrust
-{
-
-
-/*! \addtogroup transformations
- * \{
- */
-
-
-/*! \p tabulate fills the range [first, last) with the value of a function applied to each
- * element's index.
- *
- * For each iterator \c i in the range [first, last), \p tabulate performs the assignment
- * *i = unary_op(i - first).
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the range.
- * \param last The end of the range.
- * \param unary_op The unary operation to apply.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable,
- * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined,
- * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined.
- * \tparam UnaryOperation is a model of Unary Function
- * and \c UnaryFunction's \c result_type is convertible to \c OutputIterator's \c value_type.
- *
- * The following code snippet demonstrates how to use \p tabulate to generate the first \c n non-positive integers
- * using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/tabulate.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * const int N = 10;
- * int A[N];
- * thrust::tabulate(thrust::host, A, A + 10, thrust::negate<int>());
- * // A is now {0, -1, -2, -3, -4, -5, -6, -7, -8, -9}
- * \endcode
- *
- * \see thrust::fill
- * \see thrust::generate
- * \see thrust::sequence
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename UnaryOperation>
-__host__ __device__
- void tabulate(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- UnaryOperation unary_op);
-
-
-/*! \p tabulate fills the range [first, last) with the value of a function applied to each
- * element's index.
- *
- * For each iterator \c i in the range [first, last), \p tabulate performs the assignment
- * *i = unary_op(i - first).
- *
- * \param first The beginning of the range.
- * \param last The end of the range.
- * \param unary_op The unary operation to apply.
- *
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable,
- * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined,
- * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined.
- * \tparam UnaryOperation is a model of Unary Function
- * and \c UnaryFunction's \c result_type is convertible to \c OutputIterator's \c value_type.
- *
- * The following code snippet demonstrates how to use \p tabulate to generate the first \c n non-positive integers:
- *
- * \code
- * #include <thrust/tabulate.h>
- * #include <thrust/functional.h>
- * ...
- * const int N = 10;
- * int A[N];
- * thrust::tabulate(A, A + 10, thrust::negate<int>());
- * // A is now {0, -1, -2, -3, -4, -5, -6, -7, -8, -9}
- * \endcode
- *
- * \see thrust::fill
- * \see thrust::generate
- * \see thrust::sequence
- */
-template<typename ForwardIterator, typename UnaryOperation>
- void tabulate(ForwardIterator first,
- ForwardIterator last,
- UnaryOperation unary_op);
-
-
-/*! \} // end transformations
- */
-
-
-} // end namespace thrust
-
-#include <thrust/detail/tabulate.inl>
-
diff --git a/spaces/CVPR/drawings-to-human/static/_app/immutable/start-62e3dfe2.js b/spaces/CVPR/drawings-to-human/static/_app/immutable/start-62e3dfe2.js
deleted file mode 100644
index e8486feda24e806a6d62a32ccbf85ebd5ea52c9f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/drawings-to-human/static/_app/immutable/start-62e3dfe2.js
+++ /dev/null
@@ -1 +0,0 @@
-import{S as Ye,i as Ge,s as Me,e as Fe,c as Xe,a as He,d as D,b as me,f as K,g as V,t as Ze,h as Qe,j as et,k as tt,l as P,m as nt,n as Y,o as C,p as G,q as T,r as st,u as rt,v as ye,w as z,x as ne,y as q,z as se,A as re,B as J,C as ie,D as Ce}from"./chunks/index-bcf2726a.js";import{s as it,w as ce,a as at}from"./chunks/paths-d3bcbd10.js";function ot(s){let e,t,i;const l=[s[1]||{}];var c=s[0][0];function f(n){let r={};for(let a=0;a{J(d,1)}),G()}c?(e=new c(f()),z(e.$$.fragment),T(e.$$.fragment,1),q(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(n){i||(e&&T(e.$$.fragment,n),i=!0)},o(n){e&&C(e.$$.fragment,n),i=!1},d(n){n&&D(t),e&&J(e,n)}}}function ct(s){let e,t,i;const l=[s[1]||{}];var c=s[0][0];function f(n){let r={$$slots:{default:[dt]},$$scope:{ctx:n}};for(let a=0;a{J(d,1)}),G()}c?(e=new c(f(n)),z(e.$$.fragment),T(e.$$.fragment,1),q(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(n){i||(e&&T(e.$$.fragment,n),i=!0)},o(n){e&&C(e.$$.fragment,n),i=!1},d(n){n&&D(t),e&&J(e,n)}}}function lt(s){let e,t,i;const l=[s[2]||{}];var c=s[0][1];function f(n){let r={};for(let a=0;a{J(d,1)}),G()}c?(e=new c(f()),z(e.$$.fragment),T(e.$$.fragment,1),q(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(n){i||(e&&T(e.$$.fragment,n),i=!0)},o(n){e&&C(e.$$.fragment,n),i=!1},d(n){n&&D(t),e&&J(e,n)}}}function ft(s){let e,t,i;const l=[s[2]||{}];var c=s[0][1];function f(n){let r={$$slots:{default:[ut]},$$scope:{ctx:n}};for(let a=0;a{J(d,1)}),G()}c?(e=new c(f(n)),z(e.$$.fragment),T(e.$$.fragment,1),q(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(n){i||(e&&T(e.$$.fragment,n),i=!0)},o(n){e&&C(e.$$.fragment,n),i=!1},d(n){n&&D(t),e&&J(e,n)}}}function ut(s){let e,t,i;const l=[s[3]||{}];var c=s[0][2];function f(n){let r={};for(let a=0;a{J(d,1)}),G()}c?(e=new c(f()),z(e.$$.fragment),T(e.$$.fragment,1),q(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(n){i||(e&&T(e.$$.fragment,n),i=!0)},o(n){e&&C(e.$$.fragment,n),i=!1},d(n){n&&D(t),e&&J(e,n)}}}function dt(s){let e,t,i,l;const c=[ft,lt],f=[];function n(r,a){return r[0][2]?0:1}return e=n(s),t=f[e]=c[e](s),{c(){t.c(),i=P()},l(r){t.l(r),i=P()},m(r,a){f[e].m(r,a),V(r,i,a),l=!0},p(r,a){let d=e;e=n(r),e===d?f[e].p(r,a):(Y(),C(f[d],1,1,()=>{f[d]=null}),G(),t=f[e],t?t.p(r,a):(t=f[e]=c[e](r),t.c()),T(t,1),t.m(i.parentNode,i))},i(r){l||(T(t),l=!0)},o(r){C(t),l=!1},d(r){f[e].d(r),r&&D(i)}}}function Te(s){let e,t=s[5]&&je(s);return{c(){e=Fe("div"),t&&t.c(),this.h()},l(i){e=Xe(i,"DIV",{id:!0,"aria-live":!0,"aria-atomic":!0,style:!0});var l=He(e);t&&t.l(l),l.forEach(D),this.h()},h(){me(e,"id","svelte-announcer"),me(e,"aria-live","assertive"),me(e,"aria-atomic","true"),K(e,"position","absolute"),K(e,"left","0"),K(e,"top","0"),K(e,"clip","rect(0 0 0 0)"),K(e,"clip-path","inset(50%)"),K(e,"overflow","hidden"),K(e,"white-space","nowrap"),K(e,"width","1px"),K(e,"height","1px")},m(i,l){V(i,e,l),t&&t.m(e,null)},p(i,l){i[5]?t?t.p(i,l):(t=je(i),t.c(),t.m(e,null)):t&&(t.d(1),t=null)},d(i){i&&D(e),t&&t.d()}}}function je(s){let e;return{c(){e=Ze(s[6])},l(t){e=Qe(t,s[6])},m(t,i){V(t,e,i)},p(t,i){i&64&&et(e,t[6])},d(t){t&&D(e)}}}function pt(s){let e,t,i,l,c;const f=[ct,ot],n=[];function r(d,L){return d[0][1]?0:1}e=r(s),t=n[e]=f[e](s);let a=s[4]&&Te(s);return{c(){t.c(),i=tt(),a&&a.c(),l=P()},l(d){t.l(d),i=nt(d),a&&a.l(d),l=P()},m(d,L){n[e].m(d,L),V(d,i,L),a&&a.m(d,L),V(d,l,L),c=!0},p(d,[L]){let 
E=e;e=r(d),e===E?n[e].p(d,L):(Y(),C(n[E],1,1,()=>{n[E]=null}),G(),t=n[e],t?t.p(d,L):(t=n[e]=f[e](d),t.c()),T(t,1),t.m(i.parentNode,i)),d[4]?a?a.p(d,L):(a=Te(d),a.c(),a.m(l.parentNode,l)):a&&(a.d(1),a=null)},i(d){c||(T(t),c=!0)},o(d){C(t),c=!1},d(d){n[e].d(d),d&&D(i),a&&a.d(d),d&&D(l)}}}function ht(s,e,t){let{stores:i}=e,{page:l}=e,{components:c}=e,{props_0:f=null}=e,{props_1:n=null}=e,{props_2:r=null}=e;st("__svelte__",i),rt(i.page.notify);let a=!1,d=!1,L=null;return ye(()=>{const E=i.page.subscribe(()=>{a&&(t(5,d=!0),t(6,L=document.title||"untitled page"))});return t(4,a=!0),E}),s.$$set=E=>{"stores"in E&&t(7,i=E.stores),"page"in E&&t(8,l=E.page),"components"in E&&t(0,c=E.components),"props_0"in E&&t(1,f=E.props_0),"props_1"in E&&t(2,n=E.props_1),"props_2"in E&&t(3,r=E.props_2)},s.$$.update=()=>{s.$$.dirty&384&&i.page.set(l)},[c,f,n,r,a,d,L,i,l]}class _t extends Ye{constructor(e){super(),Ge(this,e,ht,pt,Me,{stores:7,page:8,components:0,props_0:1,props_1:2,props_2:3})}}const mt="modulepreload",Ie={},gt="/static/_app/immutable/",ge=function(e,t){return!t||t.length===0?e():Promise.all(t.map(i=>{if(i=`${gt}${i}`,i in Ie)return;Ie[i]=!0;const l=i.endsWith(".css"),c=l?'[rel="stylesheet"]':"";if(document.querySelector(`link[href="${i}"]${c}`))return;const f=document.createElement("link");if(f.rel=l?"stylesheet":mt,l||(f.as="script",f.crossOrigin=""),f.href=i,document.head.appendChild(f),l)return new Promise((n,r)=>{f.addEventListener("load",n),f.addEventListener("error",()=>r(new Error(`Unable to preload CSS for ${i}`)))})})).then(()=>e())},wt={},le=[()=>ge(()=>import("./pages/__layout.svelte-d07d8fed.js"),["pages/__layout.svelte-d07d8fed.js","assets/pages/__layout.svelte-cc9dd261.css","chunks/index-bcf2726a.js"]),()=>ge(()=>import("./error.svelte-d9523301.js"),["error.svelte-d9523301.js","chunks/index-bcf2726a.js"]),()=>ge(()=>import("./pages/index.svelte-b5d75a5f.js"),["pages/index.svelte-b5d75a5f.js","assets/pages/index.svelte-7bf249dc.css","chunks/index-bcf2726a.js","chunks/paths-d3bcbd10.js"])],bt={"":[[0,2],[1]]};function yt(s){s.client}function De(s){return s instanceof Error||s&&s.name&&s.message?s:new Error(JSON.stringify(s))}function Ve(s){if(s.fallthrough)throw new Error("fallthrough is no longer supported. Use matchers instead: https://kit.svelte.dev/docs/routing#advanced-routing-matching");if("maxage"in s)throw new Error("maxage should be replaced with cache: { maxage }");const e=s.status&&s.status>=400&&s.status<=599&&!s.redirect;if(s.error||e){const t=s.status;if(!s.error&&e)return{status:t||500,error:new Error};const i=typeof s.error=="string"?new Error(s.error):s.error;return i instanceof Error?!t||t<400||t>599?(console.warn('"error" returned from load() without a valid status code \u2014 defaulting to 500'),{status:500,error:i}):{status:t,error:i}:{status:500,error:new Error(`"error" property returned from load() must be a string or instance of Error, received type "${typeof i}"`)}}if(s.redirect){if(!s.status||Math.floor(s.status/100)!==3)throw new Error('"redirect" property returned from load() must be accompanied by a 3xx status code');if(typeof s.redirect!="string")throw new Error('"redirect" property returned from load() must be a string')}if(s.dependencies&&(!Array.isArray(s.dependencies)||s.dependencies.some(t=>typeof t!="string")))throw new Error('"dependencies" property returned from load() must be of type string[]');if(s.context)throw new Error('You are returning "context" from a load function. 
"context" was renamed to "stuff", please adjust your code accordingly.');return s}function vt(s,e){return s==="/"||e==="ignore"?s:e==="never"?s.endsWith("/")?s.slice(0,-1):s:e==="always"&&!s.endsWith("/")?s+"/":s}class $t extends URL{get hash(){throw new Error("url.hash is inaccessible from load. Consider accessing hash from the page store within the script tag of your component.")}}function ze(s){let e=s.baseURI;if(!e){const t=s.getElementsByTagName("base");e=t.length?t[0].href:s.URL}return e}function ve(){return{x:pageXOffset,y:pageYOffset}}function qe(s){return s.composedPath().find(t=>t instanceof Node&&t.nodeName.toUpperCase()==="A")}function Je(s){return s instanceof SVGAElement?new URL(s.href.baseVal,document.baseURI):new URL(s.href)}function Ke(s){const e=ce(s);let t=!0;function i(){t=!0,e.update(f=>f)}function l(f){t=!1,e.set(f)}function c(f){let n;return e.subscribe(r=>{(n===void 0||t&&r!==n)&&f(n=r)})}return{notify:i,set:l,subscribe:c}}function kt(){const{set:s,subscribe:e}=ce(!1),t="1666723871078";let i;async function l(){clearTimeout(i);const f=await fetch(`${at}/_app/version.json`,{headers:{pragma:"no-cache","cache-control":"no-cache"}});if(f.ok){const{version:n}=await f.json(),r=n!==t;return r&&(s(!0),clearTimeout(i)),r}else throw new Error(`Version check failed: ${f.status}`)}return{subscribe:e,check:l}}function Et(s){let e=5381,t=s.length;if(typeof s=="string")for(;t;)e=e*33^s.charCodeAt(--t);else for(;t;)e=e*33^s[--t];return(e>>>0).toString(36)}const $e=window.fetch;function Rt(s,e){let i=`script[sveltekit\\:data-type="data"][sveltekit\\:data-url=${JSON.stringify(typeof s=="string"?s:s.url)}]`;e&&typeof e.body=="string"&&(i+=`[sveltekit\\:data-body="${Et(e.body)}"]`);const l=document.querySelector(i);if(l&&l.textContent){const{body:c,...f}=JSON.parse(l.textContent);return Promise.resolve(new Response(c,f))}return $e(s,e)}const Lt=/^(\.\.\.)?(\w+)(?:=(\w+))?$/;function St(s){const e=[],t=[];let i=!0;return{pattern:s===""?/^\/$/:new RegExp(`^${decodeURIComponent(s).split(/(?:@[a-zA-Z0-9_-]+)?(?:\/|$)/).map((c,f,n)=>{const r=/^\[\.\.\.(\w+)(?:=(\w+))?\]$/.exec(c);if(r)return e.push(r[1]),t.push(r[2]),"(?:/(.*))?";const a=f===n.length-1;return c&&"/"+c.split(/\[(.+?)\]/).map((d,L)=>{if(L%2){const[,E,X,M]=Lt.exec(d);return e.push(X),t.push(M),E?"(.*?)":"([^/]+?)"}return a&&d.includes(".")&&(i=!1),d.normalize().replace(/%5[Bb]/g,"[").replace(/%5[Dd]/g,"]").replace(/#/g,"%23").replace(/\?/g,"%3F").replace(/[.*+?^${}()|[\]\\]/g,"\\$&")}).join("")}).join("")}${i?"/?":""}$`),names:e,types:t}}function Ut(s,e,t,i){const l={};for(let c=0;c{const{pattern:r,names:a,types:d}=St(l);return{id:l,exec:L=>{const E=r.exec(L);if(E)return Ut(E,a,d,t)},a:c.map(L=>s[L]),b:f.map(L=>s[L]),has_shadow:!!n}})}const We="sveltekit:scroll",B="sveltekit:index",we=At(le,bt,wt),Nt=le[0](),Ot=le[1](),Be={};let te={};try{te=JSON.parse(sessionStorage[We])}catch{}function be(s){te[s]=ve()}function xt({target:s,session:e,base:t,trailing_slash:i}){var xe;const l=new Map,c=[],f={url:Ke({}),page:Ke({}),navigating:ce(null),session:ce(e),updated:kt()},n={id:null,promise:null},r={before_navigate:[],after_navigate:[]};let a={branch:[],error:null,session_id:0,stuff:Be,url:null},d=!1,L=!0,E=!1,X=1,M=null,ke,Ee,Re=!1;f.session.subscribe(async o=>{Ee=o,Re&&(X+=1,pe(new URL(location.href),[],!0))}),Re=!0;let F=!0,j=(xe=history.state)==null?void 0:xe[B];j||(j=Date.now(),history.replaceState({...history.state,[B]:j},"",location.href));const fe=te[j];fe&&(history.scrollRestoration="manual",scrollTo(fe.x,fe.y));let 
ue=!1,de,Le;async function Se(o,{noscroll:p=!1,replaceState:w=!1,keepfocus:u=!1,state:h={}},b){if(typeof o=="string"&&(o=new URL(o,ze(document))),F)return _e({url:o,scroll:p?ve():null,keepfocus:u,redirect_chain:b,details:{state:h,replaceState:w},accepted:()=>{},blocked:()=>{}});await Q(o)}async function Ue(o){const p=Oe(o);if(!p)throw new Error("Attempted to prefetch a URL that does not belong to this app");return n.promise=Ne(p,!1),n.id=p.id,n.promise}async function pe(o,p,w,u,h){var R,S,N;const b=Oe(o),v=Le={};let _=b&&await Ne(b,w);if(!_&&o.origin===location.origin&&o.pathname===location.pathname&&(_=await Z({status:404,error:new Error(`Not found: ${o.pathname}`),url:o,routeId:null})),!_)return await Q(o),!1;if(Le!==v)return!1;if(c.length=0,_.redirect)if(p.length>10||p.includes(o.pathname))_=await Z({status:500,error:new Error("Redirect loop"),url:o,routeId:null});else return F?Se(new URL(_.redirect,o).href,{},[...p,o.pathname]):await Q(new URL(_.redirect,location.href)),!1;else((S=(R=_.props)==null?void 0:R.page)==null?void 0:S.status)>=400&&await f.updated.check()&&await Q(o);if(E=!0,u&&u.details){const{details:$}=u,y=$.replaceState?0:1;$.state[B]=j+=y,history[$.replaceState?"replaceState":"pushState"]($.state,"",o)}if(d?(a=_.state,_.props.page&&(_.props.page.url=o),ke.$set(_.props)):Ae(_),u){const{scroll:$,keepfocus:y}=u;if(!y){const U=document.body,g=U.getAttribute("tabindex");(N=getSelection())==null||N.removeAllRanges(),U.tabIndex=-1,U.focus({preventScroll:!0}),g!==null?U.setAttribute("tabindex",g):U.removeAttribute("tabindex")}if(await Ce(),L){const U=o.hash&&document.getElementById(o.hash.slice(1));$?scrollTo($.x,$.y):U?U.scrollIntoView():scrollTo(0,0)}}else await Ce();n.promise=null,n.id=null,L=!0,_.props.page&&(de=_.props.page);const m=_.state.branch[_.state.branch.length-1];F=(m==null?void 0:m.module.router)!==!1,h&&h(),E=!1}function Ae(o){a=o.state;const p=document.querySelector("style[data-sveltekit]");if(p&&p.remove(),de=o.props.page,ke=new _t({target:s,props:{...o.props,stores:f},hydrate:!0}),F){const w={from:null,to:new URL(location.href)};r.after_navigate.forEach(u=>u(w))}d=!0}async function he({url:o,params:p,stuff:w,branch:u,status:h,error:b,routeId:v}){var y,U;const _=u.filter(Boolean),m=_.find(g=>{var O;return(O=g.loaded)==null?void 0:O.redirect}),R={redirect:(y=m==null?void 0:m.loaded)==null?void 0:y.redirect,state:{url:o,params:p,branch:u,error:b,stuff:w,session_id:X},props:{components:_.map(g=>g.module.default)}};for(let g=0;g<_.length;g+=1){const O=_[g].loaded;R.props[`props_${g}`]=O?await O.props:null}if(!a.url||o.href!==a.url.href||a.error!==b||a.stuff!==w){R.props.page={error:b,params:p,routeId:v,status:h,stuff:w,url:o};const g=(O,k)=>{Object.defineProperty(R.props.page,O,{get:()=>{throw new Error(`$page.${O} has been replaced by $page.url.${k}`)}})};g("origin","origin"),g("path","pathname"),g("query","searchParams")}const N=_[_.length-1],$=(U=N==null?void 0:N.loaded)==null?void 0:U.cache;if($){const g=o.pathname+o.search;let O=!1;const k=()=>{l.get(g)===R&&l.delete(g),x(),clearTimeout(A)},A=setTimeout(k,$.maxage*1e3),x=f.session.subscribe(()=>{O&&k()});O=!0,l.set(g,R)}return R}async function H({status:o,error:p,module:w,url:u,params:h,stuff:b,props:v,routeId:_}){const m={module:w,uses:{params:new Set,url:!1,session:!1,stuff:!1,dependencies:new Set},loaded:null,stuff:b};function R(y){const{href:U}=new URL(y,u);m.uses.dependencies.add(U)}v&&m.uses.dependencies.add(u.href);const S={};for(const y in h)Object.defineProperty(S,y,{get(){return 
m.uses.params.add(y),h[y]},enumerable:!0});const N=Ee,$=new $t(u);if(w.load){const y={routeId:_,params:S,props:v||{},get url(){return m.uses.url=!0,$},get session(){return m.uses.session=!0,N},get stuff(){return m.uses.stuff=!0,{...b}},async fetch(g,O){let k;typeof g=="string"?k=g:(k=g.url,O={body:g.method==="GET"||g.method==="HEAD"?void 0:await g.blob(),cache:g.cache,credentials:g.credentials,headers:g.headers,integrity:g.integrity,keepalive:g.keepalive,method:g.method,mode:g.mode,redirect:g.redirect,referrer:g.referrer,referrerPolicy:g.referrerPolicy,signal:g.signal,...O});const A=new URL(k,u).href;return R(A),d?$e(A,O):Rt(k,O)},status:o!=null?o:null,error:p!=null?p:null};let U;if(U=await w.load.call(null,y),!U)throw new Error("load function must return a value");m.loaded=Ve(U),m.loaded.stuff&&(m.stuff=m.loaded.stuff),m.loaded.dependencies&&m.loaded.dependencies.forEach(R)}else v&&(m.loaded=Ve({props:v}));return m}async function Ne({id:o,url:p,params:w,route:u},h){var U,g,O;if(n.id===o&&n.promise)return n.promise;if(!h){const k=l.get(o);if(k)return k}const{a:b,b:v,has_shadow:_}=u,m=a.url&&{url:o!==a.url.pathname+a.url.search,params:Object.keys(w).filter(k=>a.params[k]!==w[k]),session:X!==a.session_id};let R=[],S=Be,N=!1,$=200,y=null;b.forEach(k=>k().catch(()=>{}));e:for(let k=0;kI.uses.params.has(W))||m.session&&I.uses.session||Array.from(I.uses.dependencies).some(W=>c.some(oe=>oe(W)))||N&&I.uses.stuff){let W={};const oe=_&&k===b.length-1;if(oe){const ee=await $e(`${p.pathname}${p.pathname.endsWith("/")?"":"/"}__data.json${p.search}`,{headers:{"x-sveltekit-load":"true"}});if(ee.ok){const Pe=ee.headers.get("x-sveltekit-location");if(Pe)return{redirect:Pe,props:{},state:a};W=ee.status===204?{}:await ee.json()}else $=ee.status,y=new Error("Failed to load data")}if(y||(A=await H({module:x,url:p,params:w,props:W,stuff:S,routeId:u.id})),A&&(oe&&(A.uses.url=!0),A.loaded)){if(A.loaded.error&&($=A.loaded.status,y=A.loaded.error),A.loaded.redirect)return{redirect:A.loaded.redirect,props:{},state:a};A.loaded.stuff&&(N=!0)}}else A=I}catch(x){$=500,y=De(x)}if(y){for(;k--;)if(v[k]){let x,I,ae=k;for(;!(I=R[ae]);)ae-=1;try{if(x=await H({status:$,error:y,module:await v[k](),url:p,params:w,stuff:I.stuff,routeId:u.id}),(U=x==null?void 0:x.loaded)!=null&&U.error)continue;(g=x==null?void 0:x.loaded)!=null&&g.stuff&&(S={...S,...x.loaded.stuff}),R=R.slice(0,ae+1).concat(x);break e}catch{continue}}return await Z({status:$,error:y,url:p,routeId:u.id})}else(O=A==null?void 0:A.loaded)!=null&&O.stuff&&(S={...S,...A.loaded.stuff}),R.push(A)}return await he({url:p,params:w,stuff:S,branch:R,status:$,error:y,routeId:u.id})}async function Z({status:o,error:p,url:w,routeId:u}){var _,m;const h={},b=await H({module:await Nt,url:w,params:h,stuff:{},routeId:u}),v=await H({status:o,error:p,module:await Ot,url:w,params:h,stuff:b&&b.loaded&&b.loaded.stuff||{},routeId:u});return await he({url:w,params:h,stuff:{...(_=b==null?void 0:b.loaded)==null?void 0:_.stuff,...(m=v==null?void 0:v.loaded)==null?void 0:m.stuff},branch:[b,v],status:o,error:p,routeId:u})}function Oe(o){if(o.origin!==location.origin||!o.pathname.startsWith(t))return;const p=decodeURI(o.pathname.slice(t.length)||"/");for(const w of we){const u=w.exec(p);if(u)return{id:o.pathname+o.search,route:w,params:u,url:o}}}async function _e({url:o,scroll:p,keepfocus:w,redirect_chain:u,details:h,accepted:b,blocked:v}){const _=a.url;let m=!1;const R={from:_,to:o,cancel:()=>m=!0};if(r.before_navigate.forEach($=>$(R)),m){v();return}const S=vt(o.pathname,i),N=new 
URL(o.origin+S+o.search+o.hash);be(j),b(),d&&f.navigating.set({from:a.url,to:N}),await pe(N,u,!1,{scroll:p,keepfocus:w,details:h},()=>{const $={from:_,to:N};r.after_navigate.forEach(y=>y($)),f.navigating.set(null)})}function Q(o){return location.href=o.href,new Promise(()=>{})}return{after_navigate:o=>{ye(()=>(r.after_navigate.push(o),()=>{const p=r.after_navigate.indexOf(o);r.after_navigate.splice(p,1)}))},before_navigate:o=>{ye(()=>(r.before_navigate.push(o),()=>{const p=r.before_navigate.indexOf(o);r.before_navigate.splice(p,1)}))},disable_scroll_handling:()=>{(E||!d)&&(L=!1)},goto:(o,p={})=>Se(o,p,[]),invalidate:o=>{if(typeof o=="function")c.push(o);else{const{href:p}=new URL(o,location.href);c.push(w=>w===p)}return M||(M=Promise.resolve().then(async()=>{await pe(new URL(location.href),[],!0),M=null})),M},prefetch:async o=>{const p=new URL(o,ze(document));await Ue(p)},prefetch_routes:async o=>{const w=(o?we.filter(u=>o.some(h=>u.exec(h))):we).map(u=>Promise.all(u.a.map(h=>h())));await Promise.all(w)},_start_router:()=>{history.scrollRestoration="manual",addEventListener("beforeunload",u=>{let h=!1;const b={from:a.url,to:null,cancel:()=>h=!0};r.before_navigate.forEach(v=>v(b)),h?(u.preventDefault(),u.returnValue=""):history.scrollRestoration="auto"}),addEventListener("visibilitychange",()=>{if(document.visibilityState==="hidden"){be(j);try{sessionStorage[We]=JSON.stringify(te)}catch{}}});const o=u=>{const h=qe(u);h&&h.href&&h.hasAttribute("sveltekit:prefetch")&&Ue(Je(h))};let p;const w=u=>{clearTimeout(p),p=setTimeout(()=>{var h;(h=u.target)==null||h.dispatchEvent(new CustomEvent("sveltekit:trigger_prefetch",{bubbles:!0}))},20)};addEventListener("touchstart",o),addEventListener("mousemove",w),addEventListener("sveltekit:trigger_prefetch",o),addEventListener("click",u=>{if(!F||u.button||u.which!==1||u.metaKey||u.ctrlKey||u.shiftKey||u.altKey||u.defaultPrevented)return;const h=qe(u);if(!h||!h.href)return;const b=h instanceof SVGAElement,v=Je(h);if(!b&&v.origin==="null")return;const _=(h.getAttribute("rel")||"").split(/\s+/);if(h.hasAttribute("download")||_.includes("external")||h.hasAttribute("sveltekit:reload")||(b?h.target.baseVal:h.target))return;const[m,R]=v.href.split("#");if(R!==void 0&&m===location.href.split("#")[0]){ue=!0,be(j),f.page.set({...de,url:v}),f.page.notify();return}_e({url:v,scroll:h.hasAttribute("sveltekit:noscroll")?ve():null,keepfocus:!1,redirect_chain:[],details:{state:{},replaceState:v.href===location.href},accepted:()=>u.preventDefault(),blocked:()=>u.preventDefault()})}),addEventListener("popstate",u=>{if(u.state&&F){if(u.state[B]===j)return;_e({url:new URL(location.href),scroll:te[u.state[B]],keepfocus:!1,redirect_chain:[],details:null,accepted:()=>{j=u.state[B]},blocked:()=>{const h=j-u.state[B];history.go(h)}})}}),addEventListener("hashchange",()=>{ue&&(ue=!1,history.replaceState({...history.state,[B]:++j},"",location.href))})},_hydrate:async({status:o,error:p,nodes:w,params:u,routeId:h})=>{const b=new URL(location.href),v=[];let _={},m,R;try{for(let S=0;S str:
- """Generate a random string of the given length."""
- return "".join(random.choice(string.ascii_letters) for _ in range(length))
-
- def setUp(self) -> None:
- """Set up the test environment."""
- cfg = Config()
- cfg.milvus_addr = "localhost:19530"
- self.memory = MilvusMemory(cfg)
- self.memory.clear()
-
- # Add example texts to the cache
- self.example_texts = [
- "The quick brown fox jumps over the lazy dog",
- "I love machine learning and natural language processing",
- "The cake is a lie, but the pie is always true",
- "ChatGPT is an advanced AI model for conversation",
- ]
-
- for text in self.example_texts:
- self.memory.add(text)
-
- # Add some random strings to test noise
- for _ in range(5):
- self.memory.add(self.random_string(10))
-
- def test_get_relevant(self) -> None:
- """Test getting relevant texts from the cache."""
- query = "I'm interested in artificial intelligence and NLP"
- num_relevant = 3
- relevant_texts = self.memory.get_relevant(query, num_relevant)
-
- print(f"Top {k} relevant texts for the query '{query}':")
- for i, text in enumerate(relevant_texts, start=1):
- print(f"{i}. {text}")
-
-        self.assertEqual(len(relevant_texts), num_relevant)
- self.assertIn(self.example_texts[1], relevant_texts)
-
-except:
- print(
- "Skipping tests/integration/milvus_memory_tests.py as Milvus is not installed."
- )
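For reference, the deleted integration tests above can be driven with plain unittest discovery. This is only an illustrative sketch; it assumes a Milvus server is reachable at localhost:19530 (the address hard-coded in setUp) and that the autogpt and pymilvus dependencies are installed.

```python
import unittest

# Illustrative: discover and run only the Milvus integration tests.
# Assumes Milvus is running locally at localhost:19530.
suite = unittest.defaultTestLoader.discover(
    "tests/integration", pattern="milvus_memory_tests.py"
)
unittest.TextTestRunner(verbosity=2).run(suite)
```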
diff --git a/spaces/Cletrason/Cletrason-toad-mario-movie/style (1).css b/spaces/Cletrason/Cletrason-toad-mario-movie/style (1).css
deleted file mode 100644
index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000
--- a/spaces/Cletrason/Cletrason-toad-mario-movie/style (1).css
+++ /dev/null
@@ -1,3 +0,0 @@
-h1 {
- text-align: center;
-}
diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/Ezcht.py b/spaces/CofAI/chat/g4f/Provider/Providers/Ezcht.py
deleted file mode 100644
index baec214f7e0e936ea06bffa357e1bd2b77cd4089..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/g4f/Provider/Providers/Ezcht.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import requests
-import os
-import json
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://gpt4.ezchat.top'
-model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
- headers = {
- 'Content-Type': 'application/json',
- }
- data = {
- 'model': model,
-        'temperature': temperature,
- 'presence_penalty': 0,
- 'messages': messages,
- }
-    response = requests.post(url + '/api/openai/v1/chat/completions',
-                             headers=headers, json=data, stream=stream)
-
- if stream:
- for chunk in response.iter_content(chunk_size=None):
- chunk = chunk.decode('utf-8')
- if chunk.strip():
- message = json.loads(chunk)['choices'][0]['message']['content']
- yield message
- else:
- message = response.json()['choices'][0]['message']['content']
- yield message
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
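A minimal sketch of how a g4f provider generator like the one above is typically consumed. The import path and message content are assumptions for illustration only.

```python
from g4f.Provider.Providers import Ezcht  # assumed import path

messages = [{"role": "user", "content": "Say hello in one sentence."}]

# _create_completion is a generator: it yields one or more message chunks.
for chunk in Ezcht._create_completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    print(chunk, end="", flush=True)
```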
diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/Zeabur.py b/spaces/CofAI/chat/g4f/Provider/Providers/Zeabur.py
deleted file mode 100644
index e412720bd9a0c88860f6ea8a657cb0a24bcce63f..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/g4f/Provider/Providers/Zeabur.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import os
-import requests
-from ...typing import sha256, Dict, get_type_hints
-
-url = "https://gptleg.zeabur.app"
-model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-0301',
- 'gpt-3.5-turbo-16k', 'gpt-4', 'gpt-4-0613']
-supports_stream = True
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- headers = {
- 'Authority': 'chat.dfehub.com',
- 'Content-Type': 'application/json',
- 'Method': 'POST',
- 'Path': '/api/openai/v1/chat/completions',
- 'Scheme': 'https',
- 'Accept': 'text/event-stream',
- 'Accept-Language': 'pt-BR,pt;q=0.9,en-US;q=0.8,en;q=0.7,zh-CN;q=0.6,zh;q=0.5',
- 'Origin': 'https://gptleg.zeabur.app',
- 'Referer': 'https://gptleg.zeabur.app/',
- 'Sec-Ch-Ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
- 'Sec-Ch-Ua-Mobile': '?0',
- 'Sec-Ch-Ua-Platform': '"Windows"',
- 'Sec-Fetch-Dest': 'empty',
- 'Sec-Fetch-Mode': 'cors',
- 'Sec-Fetch-Site': 'same-origin',
- 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
- 'X-Requested-With': 'XMLHttpRequest',
- }
-
- data = {
- 'model': model,
- 'temperature': 0.7,
-        'max_tokens': 16000,
- 'presence_penalty': 0,
- 'messages': messages,
- }
-
- response = requests.post(url + '/api/openai/v1/chat/completions',
- headers=headers, json=data, stream=stream)
-
- yield response.json()['choices'][0]['message']['content']
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/SigmoidFocalLoss.h b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/SigmoidFocalLoss.h
deleted file mode 100644
index 308861e44774dffd89b3f5ebff7cc6c5491fe3a5..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/SigmoidFocalLoss.h
+++ /dev/null
@@ -1,41 +0,0 @@
-#pragma once
-
-#include "cpu/vision.h"
-
-#ifdef WITH_CUDA
-#include "cuda/vision.h"
-#endif
-
-// Interface for Python
-at::Tensor SigmoidFocalLoss_forward(
- const at::Tensor& logits,
- const at::Tensor& targets,
- const int num_classes,
- const float gamma,
- const float alpha) {
- if (logits.type().is_cuda()) {
-#ifdef WITH_CUDA
- return SigmoidFocalLoss_forward_cuda(logits, targets, num_classes, gamma, alpha);
-#else
- AT_ERROR("Not compiled with GPU support");
-#endif
- }
- AT_ERROR("Not implemented on the CPU");
-}
-
-at::Tensor SigmoidFocalLoss_backward(
- const at::Tensor& logits,
- const at::Tensor& targets,
- const at::Tensor& d_losses,
- const int num_classes,
- const float gamma,
- const float alpha) {
- if (logits.type().is_cuda()) {
-#ifdef WITH_CUDA
- return SigmoidFocalLoss_backward_cuda(logits, targets, d_losses, num_classes, gamma, alpha);
-#else
- AT_ERROR("Not compiled with GPU support");
-#endif
- }
- AT_ERROR("Not implemented on the CPU");
-}
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/assets/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/assets/__init__.py
deleted file mode 100644
index 9075930dc8f9a382c0bd7663e546fa2a93a4d257..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/assets/__init__.py
+++ /dev/null
@@ -1,1440 +0,0 @@
-# -*- coding: utf-8 -*-
-from typing import Dict, List
-
-# Language labels that contain the em dash "—"
-# character are to be considered alternative sequences of the original label
-FREQUENCIES: Dict[str, List[str]] = {
- "English": [
- "e",
- "a",
- "t",
- "i",
- "o",
- "n",
- "s",
- "r",
- "h",
- "l",
- "d",
- "c",
- "u",
- "m",
- "f",
- "p",
- "g",
- "w",
- "y",
- "b",
- "v",
- "k",
- "x",
- "j",
- "z",
- "q",
- ],
- "English—": [
- "e",
- "a",
- "t",
- "i",
- "o",
- "n",
- "s",
- "r",
- "h",
- "l",
- "d",
- "c",
- "m",
- "u",
- "f",
- "p",
- "g",
- "w",
- "b",
- "y",
- "v",
- "k",
- "j",
- "x",
- "z",
- "q",
- ],
- "German": [
- "e",
- "n",
- "i",
- "r",
- "s",
- "t",
- "a",
- "d",
- "h",
- "u",
- "l",
- "g",
- "o",
- "c",
- "m",
- "b",
- "f",
- "k",
- "w",
- "z",
- "p",
- "v",
- "ü",
- "ä",
- "ö",
- "j",
- ],
- "French": [
- "e",
- "a",
- "s",
- "n",
- "i",
- "t",
- "r",
- "l",
- "u",
- "o",
- "d",
- "c",
- "p",
- "m",
- "é",
- "v",
- "g",
- "f",
- "b",
- "h",
- "q",
- "à",
- "x",
- "è",
- "y",
- "j",
- ],
- "Dutch": [
- "e",
- "n",
- "a",
- "i",
- "r",
- "t",
- "o",
- "d",
- "s",
- "l",
- "g",
- "h",
- "v",
- "m",
- "u",
- "k",
- "c",
- "p",
- "b",
- "w",
- "j",
- "z",
- "f",
- "y",
- "x",
- "ë",
- ],
- "Italian": [
- "e",
- "i",
- "a",
- "o",
- "n",
- "l",
- "t",
- "r",
- "s",
- "c",
- "d",
- "u",
- "p",
- "m",
- "g",
- "v",
- "f",
- "b",
- "z",
- "h",
- "q",
- "è",
- "à",
- "k",
- "y",
- "ò",
- ],
- "Polish": [
- "a",
- "i",
- "o",
- "e",
- "n",
- "r",
- "z",
- "w",
- "s",
- "c",
- "t",
- "k",
- "y",
- "d",
- "p",
- "m",
- "u",
- "l",
- "j",
- "ł",
- "g",
- "b",
- "h",
- "ą",
- "ę",
- "ó",
- ],
- "Spanish": [
- "e",
- "a",
- "o",
- "n",
- "s",
- "r",
- "i",
- "l",
- "d",
- "t",
- "c",
- "u",
- "m",
- "p",
- "b",
- "g",
- "v",
- "f",
- "y",
- "ó",
- "h",
- "q",
- "í",
- "j",
- "z",
- "á",
- ],
- "Russian": [
- "о",
- "а",
- "е",
- "и",
- "н",
- "с",
- "т",
- "р",
- "в",
- "л",
- "к",
- "м",
- "д",
- "п",
- "у",
- "г",
- "я",
- "ы",
- "з",
- "б",
- "й",
- "ь",
- "ч",
- "х",
- "ж",
- "ц",
- ],
- # Jap-Kanji
- "Japanese": [
- "人",
- "一",
- "大",
- "亅",
- "丁",
- "丨",
- "竹",
- "笑",
- "口",
- "日",
- "今",
- "二",
- "彳",
- "行",
- "十",
- "土",
- "丶",
- "寸",
- "寺",
- "時",
- "乙",
- "丿",
- "乂",
- "气",
- "気",
- "冂",
- "巾",
- "亠",
- "市",
- "目",
- "儿",
- "見",
- "八",
- "小",
- "凵",
- "県",
- "月",
- "彐",
- "門",
- "間",
- "木",
- "東",
- "山",
- "出",
- "本",
- "中",
- "刀",
- "分",
- "耳",
- "又",
- "取",
- "最",
- "言",
- "田",
- "心",
- "思",
- "刂",
- "前",
- "京",
- "尹",
- "事",
- "生",
- "厶",
- "云",
- "会",
- "未",
- "来",
- "白",
- "冫",
- "楽",
- "灬",
- "馬",
- "尸",
- "尺",
- "駅",
- "明",
- "耂",
- "者",
- "了",
- "阝",
- "都",
- "高",
- "卜",
- "占",
- "厂",
- "广",
- "店",
- "子",
- "申",
- "奄",
- "亻",
- "俺",
- "上",
- "方",
- "冖",
- "学",
- "衣",
- "艮",
- "食",
- "自",
- ],
- # Jap-Katakana
- "Japanese—": [
- "ー",
- "ン",
- "ス",
- "・",
- "ル",
- "ト",
- "リ",
- "イ",
- "ア",
- "ラ",
- "ッ",
- "ク",
- "ド",
- "シ",
- "レ",
- "ジ",
- "タ",
- "フ",
- "ロ",
- "カ",
- "テ",
- "マ",
- "ィ",
- "グ",
- "バ",
- "ム",
- "プ",
- "オ",
- "コ",
- "デ",
- "ニ",
- "ウ",
- "メ",
- "サ",
- "ビ",
- "ナ",
- "ブ",
- "ャ",
- "エ",
- "ュ",
- "チ",
- "キ",
- "ズ",
- "ダ",
- "パ",
- "ミ",
- "ェ",
- "ョ",
- "ハ",
- "セ",
- "ベ",
- "ガ",
- "モ",
- "ツ",
- "ネ",
- "ボ",
- "ソ",
- "ノ",
- "ァ",
- "ヴ",
- "ワ",
- "ポ",
- "ペ",
- "ピ",
- "ケ",
- "ゴ",
- "ギ",
- "ザ",
- "ホ",
- "ゲ",
- "ォ",
- "ヤ",
- "ヒ",
- "ユ",
- "ヨ",
- "ヘ",
- "ゼ",
- "ヌ",
- "ゥ",
- "ゾ",
- "ヶ",
- "ヂ",
- "ヲ",
- "ヅ",
- "ヵ",
- "ヱ",
- "ヰ",
- "ヮ",
- "ヽ",
- "゠",
- "ヾ",
- "ヷ",
- "ヿ",
- "ヸ",
- "ヹ",
- "ヺ",
- ],
- # Jap-Hiragana
- "Japanese——": [
- "の",
- "に",
- "る",
- "た",
- "と",
- "は",
- "し",
- "い",
- "を",
- "で",
- "て",
- "が",
- "な",
- "れ",
- "か",
- "ら",
- "さ",
- "っ",
- "り",
- "す",
- "あ",
- "も",
- "こ",
- "ま",
- "う",
- "く",
- "よ",
- "き",
- "ん",
- "め",
- "お",
- "け",
- "そ",
- "つ",
- "だ",
- "や",
- "え",
- "ど",
- "わ",
- "ち",
- "み",
- "せ",
- "じ",
- "ば",
- "へ",
- "び",
- "ず",
- "ろ",
- "ほ",
- "げ",
- "む",
- "べ",
- "ひ",
- "ょ",
- "ゆ",
- "ぶ",
- "ご",
- "ゃ",
- "ね",
- "ふ",
- "ぐ",
- "ぎ",
- "ぼ",
- "ゅ",
- "づ",
- "ざ",
- "ぞ",
- "ぬ",
- "ぜ",
- "ぱ",
- "ぽ",
- "ぷ",
- "ぴ",
- "ぃ",
- "ぁ",
- "ぇ",
- "ぺ",
- "ゞ",
- "ぢ",
- "ぉ",
- "ぅ",
- "ゐ",
- "ゝ",
- "ゑ",
- "゛",
- "゜",
- "ゎ",
- "ゔ",
- "゚",
- "ゟ",
- "゙",
- "ゕ",
- "ゖ",
- ],
- "Portuguese": [
- "a",
- "e",
- "o",
- "s",
- "i",
- "r",
- "d",
- "n",
- "t",
- "m",
- "u",
- "c",
- "l",
- "p",
- "g",
- "v",
- "b",
- "f",
- "h",
- "ã",
- "q",
- "é",
- "ç",
- "á",
- "z",
- "í",
- ],
- "Swedish": [
- "e",
- "a",
- "n",
- "r",
- "t",
- "s",
- "i",
- "l",
- "d",
- "o",
- "m",
- "k",
- "g",
- "v",
- "h",
- "f",
- "u",
- "p",
- "ä",
- "c",
- "b",
- "ö",
- "å",
- "y",
- "j",
- "x",
- ],
- "Chinese": [
- "的",
- "一",
- "是",
- "不",
- "了",
- "在",
- "人",
- "有",
- "我",
- "他",
- "这",
- "个",
- "们",
- "中",
- "来",
- "上",
- "大",
- "为",
- "和",
- "国",
- "地",
- "到",
- "以",
- "说",
- "时",
- "要",
- "就",
- "出",
- "会",
- "可",
- "也",
- "你",
- "对",
- "生",
- "能",
- "而",
- "子",
- "那",
- "得",
- "于",
- "着",
- "下",
- "自",
- "之",
- "年",
- "过",
- "发",
- "后",
- "作",
- "里",
- "用",
- "道",
- "行",
- "所",
- "然",
- "家",
- "种",
- "事",
- "成",
- "方",
- "多",
- "经",
- "么",
- "去",
- "法",
- "学",
- "如",
- "都",
- "同",
- "现",
- "当",
- "没",
- "动",
- "面",
- "起",
- "看",
- "定",
- "天",
- "分",
- "还",
- "进",
- "好",
- "小",
- "部",
- "其",
- "些",
- "主",
- "样",
- "理",
- "心",
- "她",
- "本",
- "前",
- "开",
- "但",
- "因",
- "只",
- "从",
- "想",
- "实",
- ],
- "Ukrainian": [
- "о",
- "а",
- "н",
- "і",
- "и",
- "р",
- "в",
- "т",
- "е",
- "с",
- "к",
- "л",
- "у",
- "д",
- "м",
- "п",
- "з",
- "я",
- "ь",
- "б",
- "г",
- "й",
- "ч",
- "х",
- "ц",
- "ї",
- ],
- "Norwegian": [
- "e",
- "r",
- "n",
- "t",
- "a",
- "s",
- "i",
- "o",
- "l",
- "d",
- "g",
- "k",
- "m",
- "v",
- "f",
- "p",
- "u",
- "b",
- "h",
- "å",
- "y",
- "j",
- "ø",
- "c",
- "æ",
- "w",
- ],
- "Finnish": [
- "a",
- "i",
- "n",
- "t",
- "e",
- "s",
- "l",
- "o",
- "u",
- "k",
- "ä",
- "m",
- "r",
- "v",
- "j",
- "h",
- "p",
- "y",
- "d",
- "ö",
- "g",
- "c",
- "b",
- "f",
- "w",
- "z",
- ],
- "Vietnamese": [
- "n",
- "h",
- "t",
- "i",
- "c",
- "g",
- "a",
- "o",
- "u",
- "m",
- "l",
- "r",
- "à",
- "đ",
- "s",
- "e",
- "v",
- "p",
- "b",
- "y",
- "ư",
- "d",
- "á",
- "k",
- "ộ",
- "ế",
- ],
- "Czech": [
- "o",
- "e",
- "a",
- "n",
- "t",
- "s",
- "i",
- "l",
- "v",
- "r",
- "k",
- "d",
- "u",
- "m",
- "p",
- "í",
- "c",
- "h",
- "z",
- "á",
- "y",
- "j",
- "b",
- "ě",
- "é",
- "ř",
- ],
- "Hungarian": [
- "e",
- "a",
- "t",
- "l",
- "s",
- "n",
- "k",
- "r",
- "i",
- "o",
- "z",
- "á",
- "é",
- "g",
- "m",
- "b",
- "y",
- "v",
- "d",
- "h",
- "u",
- "p",
- "j",
- "ö",
- "f",
- "c",
- ],
- "Korean": [
- "이",
- "다",
- "에",
- "의",
- "는",
- "로",
- "하",
- "을",
- "가",
- "고",
- "지",
- "서",
- "한",
- "은",
- "기",
- "으",
- "년",
- "대",
- "사",
- "시",
- "를",
- "리",
- "도",
- "인",
- "스",
- "일",
- ],
- "Indonesian": [
- "a",
- "n",
- "e",
- "i",
- "r",
- "t",
- "u",
- "s",
- "d",
- "k",
- "m",
- "l",
- "g",
- "p",
- "b",
- "o",
- "h",
- "y",
- "j",
- "c",
- "w",
- "f",
- "v",
- "z",
- "x",
- "q",
- ],
- "Turkish": [
- "a",
- "e",
- "i",
- "n",
- "r",
- "l",
- "ı",
- "k",
- "d",
- "t",
- "s",
- "m",
- "y",
- "u",
- "o",
- "b",
- "ü",
- "ş",
- "v",
- "g",
- "z",
- "h",
- "c",
- "p",
- "ç",
- "ğ",
- ],
- "Romanian": [
- "e",
- "i",
- "a",
- "r",
- "n",
- "t",
- "u",
- "l",
- "o",
- "c",
- "s",
- "d",
- "p",
- "m",
- "ă",
- "f",
- "v",
- "î",
- "g",
- "b",
- "ș",
- "ț",
- "z",
- "h",
- "â",
- "j",
- ],
- "Farsi": [
- "ا",
- "ی",
- "ر",
- "د",
- "ن",
- "ه",
- "و",
- "م",
- "ت",
- "ب",
- "س",
- "ل",
- "ک",
- "ش",
- "ز",
- "ف",
- "گ",
- "ع",
- "خ",
- "ق",
- "ج",
- "آ",
- "پ",
- "ح",
- "ط",
- "ص",
- ],
- "Arabic": [
- "ا",
- "ل",
- "ي",
- "م",
- "و",
- "ن",
- "ر",
- "ت",
- "ب",
- "ة",
- "ع",
- "د",
- "س",
- "ف",
- "ه",
- "ك",
- "ق",
- "أ",
- "ح",
- "ج",
- "ش",
- "ط",
- "ص",
- "ى",
- "خ",
- "إ",
- ],
- "Danish": [
- "e",
- "r",
- "n",
- "t",
- "a",
- "i",
- "s",
- "d",
- "l",
- "o",
- "g",
- "m",
- "k",
- "f",
- "v",
- "u",
- "b",
- "h",
- "p",
- "å",
- "y",
- "ø",
- "æ",
- "c",
- "j",
- "w",
- ],
- "Serbian": [
- "а",
- "и",
- "о",
- "е",
- "н",
- "р",
- "с",
- "у",
- "т",
- "к",
- "ј",
- "в",
- "д",
- "м",
- "п",
- "л",
- "г",
- "з",
- "б",
- "a",
- "i",
- "e",
- "o",
- "n",
- "ц",
- "ш",
- ],
- "Lithuanian": [
- "i",
- "a",
- "s",
- "o",
- "r",
- "e",
- "t",
- "n",
- "u",
- "k",
- "m",
- "l",
- "p",
- "v",
- "d",
- "j",
- "g",
- "ė",
- "b",
- "y",
- "ų",
- "š",
- "ž",
- "c",
- "ą",
- "į",
- ],
- "Slovene": [
- "e",
- "a",
- "i",
- "o",
- "n",
- "r",
- "s",
- "l",
- "t",
- "j",
- "v",
- "k",
- "d",
- "p",
- "m",
- "u",
- "z",
- "b",
- "g",
- "h",
- "č",
- "c",
- "š",
- "ž",
- "f",
- "y",
- ],
- "Slovak": [
- "o",
- "a",
- "e",
- "n",
- "i",
- "r",
- "v",
- "t",
- "s",
- "l",
- "k",
- "d",
- "m",
- "p",
- "u",
- "c",
- "h",
- "j",
- "b",
- "z",
- "á",
- "y",
- "ý",
- "í",
- "č",
- "é",
- ],
- "Hebrew": [
- "י",
- "ו",
- "ה",
- "ל",
- "ר",
- "ב",
- "ת",
- "מ",
- "א",
- "ש",
- "נ",
- "ע",
- "ם",
- "ד",
- "ק",
- "ח",
- "פ",
- "ס",
- "כ",
- "ג",
- "ט",
- "צ",
- "ן",
- "ז",
- "ך",
- ],
- "Bulgarian": [
- "а",
- "и",
- "о",
- "е",
- "н",
- "т",
- "р",
- "с",
- "в",
- "л",
- "к",
- "д",
- "п",
- "м",
- "з",
- "г",
- "я",
- "ъ",
- "у",
- "б",
- "ч",
- "ц",
- "й",
- "ж",
- "щ",
- "х",
- ],
- "Croatian": [
- "a",
- "i",
- "o",
- "e",
- "n",
- "r",
- "j",
- "s",
- "t",
- "u",
- "k",
- "l",
- "v",
- "d",
- "m",
- "p",
- "g",
- "z",
- "b",
- "c",
- "č",
- "h",
- "š",
- "ž",
- "ć",
- "f",
- ],
- "Hindi": [
- "क",
- "र",
- "स",
- "न",
- "त",
- "म",
- "ह",
- "प",
- "य",
- "ल",
- "व",
- "ज",
- "द",
- "ग",
- "ब",
- "श",
- "ट",
- "अ",
- "ए",
- "थ",
- "भ",
- "ड",
- "च",
- "ध",
- "ष",
- "इ",
- ],
- "Estonian": [
- "a",
- "i",
- "e",
- "s",
- "t",
- "l",
- "u",
- "n",
- "o",
- "k",
- "r",
- "d",
- "m",
- "v",
- "g",
- "p",
- "j",
- "h",
- "ä",
- "b",
- "õ",
- "ü",
- "f",
- "c",
- "ö",
- "y",
- ],
- "Thai": [
- "า",
- "น",
- "ร",
- "อ",
- "ก",
- "เ",
- "ง",
- "ม",
- "ย",
- "ล",
- "ว",
- "ด",
- "ท",
- "ส",
- "ต",
- "ะ",
- "ป",
- "บ",
- "ค",
- "ห",
- "แ",
- "จ",
- "พ",
- "ช",
- "ข",
- "ใ",
- ],
- "Greek": [
- "α",
- "τ",
- "ο",
- "ι",
- "ε",
- "ν",
- "ρ",
- "σ",
- "κ",
- "η",
- "π",
- "ς",
- "υ",
- "μ",
- "λ",
- "ί",
- "ό",
- "ά",
- "γ",
- "έ",
- "δ",
- "ή",
- "ω",
- "χ",
- "θ",
- "ύ",
- ],
- "Tamil": [
- "க",
- "த",
- "ப",
- "ட",
- "ர",
- "ம",
- "ல",
- "ன",
- "வ",
- "ற",
- "ய",
- "ள",
- "ச",
- "ந",
- "இ",
- "ண",
- "அ",
- "ஆ",
- "ழ",
- "ங",
- "எ",
- "உ",
- "ஒ",
- "ஸ",
- ],
- "Kazakh": [
- "а",
- "ы",
- "е",
- "н",
- "т",
- "р",
- "л",
- "і",
- "д",
- "с",
- "м",
- "қ",
- "к",
- "о",
- "б",
- "и",
- "у",
- "ғ",
- "ж",
- "ң",
- "з",
- "ш",
- "й",
- "п",
- "г",
- "ө",
- ],
-}
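The FREQUENCIES table above lists each language's most common characters in descending order. As a rough illustration (this is not charset_normalizer's actual scoring algorithm), such a table can be used to guess a language by checking how many of a text's most frequent letters appear in the reference list:

```python
from collections import Counter

def naive_language_score(text: str, language: str) -> float:
    """Illustrative only: fraction of the text's 10 most frequent letters
    that also appear in the language's reference frequency list."""
    letters = [ch.lower() for ch in text if ch.isalpha()]
    if not letters:
        return 0.0
    top = [ch for ch, _ in Counter(letters).most_common(10)]
    reference = set(FREQUENCIES.get(language, []))
    return sum(ch in reference for ch in top) / len(top)

# e.g. naive_language_score("Guten Tag, wie geht es Ihnen?", "German")
```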
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/dictTools.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/dictTools.py
deleted file mode 100644
index 259613b27048c458980986167d429847d270691f..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/dictTools.py
+++ /dev/null
@@ -1,83 +0,0 @@
-"""Misc dict tools."""
-
-
-__all__ = ["hashdict"]
-
-# https://stackoverflow.com/questions/1151658/python-hashable-dicts
-class hashdict(dict):
- """
- hashable dict implementation, suitable for use as a key into
- other dicts.
-
- >>> h1 = hashdict({"apples": 1, "bananas":2})
- >>> h2 = hashdict({"bananas": 3, "mangoes": 5})
- >>> h1+h2
- hashdict(apples=1, bananas=3, mangoes=5)
- >>> d1 = {}
- >>> d1[h1] = "salad"
- >>> d1[h1]
- 'salad'
- >>> d1[h2]
- Traceback (most recent call last):
- ...
- KeyError: hashdict(bananas=3, mangoes=5)
-
- based on answers from
- http://stackoverflow.com/questions/1151658/python-hashable-dicts
-
- """
-
- def __key(self):
- return tuple(sorted(self.items()))
-
- def __repr__(self):
- return "{0}({1})".format(
- self.__class__.__name__,
- ", ".join("{0}={1}".format(str(i[0]), repr(i[1])) for i in self.__key()),
- )
-
- def __hash__(self):
- return hash(self.__key())
-
- def __setitem__(self, key, value):
- raise TypeError(
- "{0} does not support item assignment".format(self.__class__.__name__)
- )
-
- def __delitem__(self, key):
- raise TypeError(
- "{0} does not support item assignment".format(self.__class__.__name__)
- )
-
- def clear(self):
- raise TypeError(
- "{0} does not support item assignment".format(self.__class__.__name__)
- )
-
- def pop(self, *args, **kwargs):
- raise TypeError(
- "{0} does not support item assignment".format(self.__class__.__name__)
- )
-
- def popitem(self, *args, **kwargs):
- raise TypeError(
- "{0} does not support item assignment".format(self.__class__.__name__)
- )
-
- def setdefault(self, *args, **kwargs):
- raise TypeError(
- "{0} does not support item assignment".format(self.__class__.__name__)
- )
-
- def update(self, *args, **kwargs):
- raise TypeError(
- "{0} does not support item assignment".format(self.__class__.__name__)
- )
-
- # update is not ok because it mutates the object
- # __add__ is ok because it creates a new object
- # while the new object is under construction, it's ok to mutate it
- def __add__(self, right):
- result = hashdict(self)
- dict.update(result, right)
- return result
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/duplicate_button.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/duplicate_button.py
deleted file mode 100644
index e4ecc25c9b3d1f405dcd8cbbaa96c6b536e96d80..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/duplicate_button.py
+++ /dev/null
@@ -1,61 +0,0 @@
-""" Predefined buttons with bound events that can be included in a gr.Blocks for convenience. """
-
-from __future__ import annotations
-
-from typing import Literal
-
-from gradio_client.documentation import document, set_documentation_group
-
-from gradio.components import Button
-from gradio.utils import get_space
-
-set_documentation_group("component")
-
-
-@document()
-class DuplicateButton(Button):
- """
- Button that triggers a Spaces Duplication, when the demo is on Hugging Face Spaces. Does nothing locally.
- Preprocessing: passes the button value as a {str} into the function
- Postprocessing: expects a {str} to be returned from a function, which is set as the label of the button
- """
-
- is_template = True
-
- def __init__(
- self,
- *,
- value: str = "Duplicate Space",
- variant: Literal["primary", "secondary", "stop"] = "secondary",
- size: Literal["sm", "lg"] | None = "sm",
- visible: bool = True,
- interactive: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- scale: int | None = 0,
- min_width: int | None = None,
- _activate: bool = True,
- **kwargs,
- ):
- super().__init__(
- value,
- variant=variant,
- size=size,
- visible=visible,
- interactive=interactive,
- elem_id=elem_id,
- elem_classes=elem_classes,
- scale=scale,
- min_width=min_width,
- **kwargs,
- )
- if _activate:
- self.activate()
-
- def activate(self):
- space_name = get_space()
- if space_name is not None:
- self.click(
- fn=None,
- _js=f"() => {{ window.open(`https://huggingface.co/spaces/{space_name}?duplicate=true`, '_blank') }}",
- )
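A minimal sketch of how the component deleted above is normally used inside a Blocks app. The demo contents are illustrative, and this assumes a gradio version that ships DuplicateButton:

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("Duplicate this Space to run it with your own hardware and secrets.")
    # On Hugging Face Spaces this opens the duplication dialog; locally it does nothing.
    gr.DuplicateButton(value="Duplicate Space for private use")

demo.launch()
```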
diff --git a/spaces/Datasculptor/MusicGen/tests/modules/test_transformer.py b/spaces/Datasculptor/MusicGen/tests/modules/test_transformer.py
deleted file mode 100644
index ff7dfe4c2de05112aec55ddea9c8fd978668f80b..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/tests/modules/test_transformer.py
+++ /dev/null
@@ -1,253 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-
-import pytest
-import torch
-
-from audiocraft.modules.transformer import (
- StreamingMultiheadAttention, StreamingTransformer, set_efficient_attention_backend)
-
-
-def test_transformer_causal_streaming():
- torch.manual_seed(1234)
-
- for context, custom in product([None, 10], [False, True]):
- # Test that causality and receptive fields are properly handled.
-        # by looking at the gradients.
- tr = StreamingTransformer(
- 16, 4, 1 if context else 2,
- causal=True, past_context=context, custom=custom,
- dropout=0.)
- steps = 20
- for k in [0, 10, 15, 19]:
- x = torch.randn(4, steps, 16, requires_grad=True)
- y = tr(x)
- y[:, k].abs().sum().backward()
- if k + 1 < steps:
- assert torch.allclose(x.grad[:, k + 1:], torch.tensor(0.)), x.grad[:, k + 1:].norm()
- assert not torch.allclose(x.grad[:, :k + 1], torch.tensor(0.)), x.grad[:, :k + 1].norm()
- if context is not None and k > context:
- limit = k - context - 1
- assert torch.allclose(x.grad[:, :limit],
- torch.tensor(0.)), x.grad[:, :limit].norm()
-
- # Now check that streaming gives the same result at batch eval.
- x = torch.randn(4, steps, 16)
- y = tr(x)
- ys = []
- with tr.streaming():
- for k in range(steps):
- chunk = x[:, k:k + 1, :]
- ys.append(tr(chunk))
- y_stream = torch.cat(ys, dim=1)
- delta = torch.norm(y_stream - y) / torch.norm(y)
- assert delta < 1e-6, delta
-
-
-def test_transformer_vs_pytorch():
- torch.manual_seed(1234)
- # Check that in the non causal setting, we get the same result as
- # PyTorch Transformer encoder.
- for custom in [False, True]:
- tr = StreamingTransformer(
- 16, 4, 2,
- causal=False, custom=custom, dropout=0., positional_scale=0.)
- layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True)
- tr_ref = torch.nn.TransformerEncoder(layer, 2)
- tr.load_state_dict(tr_ref.state_dict())
-
- x = torch.randn(4, 20, 16)
- y = tr(x)
- y2 = tr_ref(x)
- delta = torch.norm(y2 - y) / torch.norm(y)
- assert delta < 1e-6, delta
-
-
-def test_streaming_api():
- tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.)
- tr.eval()
- steps = 12
- x = torch.randn(1, steps, 16)
-
- with torch.no_grad():
- with tr.streaming():
- _ = tr(x[:, :1])
- state = {k: v.clone() for k, v in tr.get_streaming_state().items()}
- y = tr(x[:, 1:2])
- tr.set_streaming_state(state)
- y2 = tr(x[:, 1:2])
- assert torch.allclose(y, y2), (y - y2).norm()
- assert tr.flush() is None
-
-
-def test_memory_efficient():
- for backend in ['torch', 'xformers']:
- torch.manual_seed(1234)
- set_efficient_attention_backend(backend)
-
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., layer_scale=0.1)
- tr_mem_efficient = StreamingTransformer(
- 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1)
- tr_mem_efficient.load_state_dict(tr.state_dict())
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- with torch.no_grad():
- y = tr(x)
- y2 = tr_mem_efficient(x)
- assert torch.allclose(y, y2), ((y - y2).norm(), backend)
-
-
-def test_attention_as_float32():
- torch.manual_seed(1234)
- cases = [
- {'custom': True},
- {'custom': False},
- ]
- for case in cases:
- tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case)
- tr_float32 = StreamingTransformer(
- 16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case)
- if not case['custom']:
- # we are not using autocast here because it doesn't really
- # work as expected on CPU, so we have to manually cast the weights of the MHA.
- for layer in tr_float32.layers:
- layer.self_attn.mha.to(torch.float32)
- tr_float32.load_state_dict(tr.state_dict())
- steps = 12
- x = torch.randn(3, steps, 16, dtype=torch.bfloat16)
-
- with torch.no_grad():
- y = tr(x)
- y2 = tr_float32(x)
- assert not torch.allclose(y, y2), (y - y2).norm()
-
-
-@torch.no_grad()
-def test_streaming_memory_efficient():
- for backend in ['torch', 'xformers']:
- torch.manual_seed(1234)
- set_efficient_attention_backend(backend)
- tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True)
- tr_mem_efficient = StreamingTransformer(
- 16, 4, 2, dropout=0., memory_efficient=True, causal=True)
- tr.load_state_dict(tr_mem_efficient.state_dict())
- tr.eval()
- tr_mem_efficient.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- ref = tr(x)
-
- with tr_mem_efficient.streaming():
- outs = []
- # frame_sizes = [2] + [1] * (steps - 2)
- frame_sizes = [1] * steps
-
- for frame_size in frame_sizes:
- frame = x[:, :frame_size]
- x = x[:, frame_size:]
- outs.append(tr_mem_efficient(frame))
-
- out = torch.cat(outs, dim=1)
- delta = torch.norm(out - ref) / torch.norm(out)
- assert delta < 1e-6, delta
-
-
-def test_cross_attention():
- torch.manual_seed(1234)
- for norm_first in [True, False]:
- m = StreamingTransformer(
- 16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True)
- m_cross = StreamingTransformer(
- 16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True)
- m_cross.load_state_dict(m.state_dict(), strict=False)
- x = torch.randn(2, 5, 16)
- cross_x = torch.randn(2, 3, 16)
- y_ref = m(x)
- y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x)
-        # With norm_first, the two should be exactly the same,
-        # but with norm_first=False, we get 2 normalizations in a row
-        # and the epsilon value leads to a tiny change.
- atol = 0. if norm_first else 1e-6
- print((y_ref - y_cross_zero).norm() / y_ref.norm())
- assert torch.allclose(y_ref, y_cross_zero, atol=atol)
-
- # We now expect a difference even with a generous atol of 1e-2.
- y_cross = m_cross(x, cross_attention_src=cross_x)
- assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2)
-
- with pytest.raises(AssertionError):
- _ = m_cross(x)
- _ = m(x, cross_attention_src=cross_x)
-
-
-def test_cross_attention_compat():
- torch.manual_seed(1234)
- num_heads = 2
- dim = num_heads * 64
- with pytest.raises(AssertionError):
- StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True)
-
- cross_attn = StreamingMultiheadAttention(
- dim, num_heads, dropout=0, cross_attention=True, custom=True)
- ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True)
-
- # We can load the regular attention state dict
- # so we have compat when loading old checkpoints.
- cross_attn.load_state_dict(ref_attn.state_dict())
-
- queries = torch.randn(3, 7, dim)
- keys = torch.randn(3, 9, dim)
- values = torch.randn(3, 9, dim)
-
- y = cross_attn(queries, keys, values)[0]
- y_ref = ref_attn(queries, keys, values)[0]
- assert torch.allclose(y, y_ref, atol=1e-7), (y - y_ref).norm() / y_ref.norm()
-
- # Now let's check that streaming is working properly.
- with cross_attn.streaming():
- ys = []
- for step in range(queries.shape[1]):
- ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0])
- y_streaming = torch.cat(ys, dim=1)
- assert torch.allclose(y_streaming, y, atol=1e-7)
-
-
-def test_repeat_kv():
- torch.manual_seed(1234)
- num_heads = 8
- kv_repeat = 4
- dim = num_heads * 64
- with pytest.raises(AssertionError):
- mha = StreamingMultiheadAttention(
- dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True)
- mha = StreamingMultiheadAttention(
- dim, num_heads, causal=True, kv_repeat=kv_repeat)
- mha = StreamingMultiheadAttention(
- dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True)
- x = torch.randn(4, 18, dim)
- y = mha(x, x, x)[0]
- assert x.shape == y.shape
-
-
-def test_qk_layer_norm():
- torch.manual_seed(1234)
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False)
- steps = 12
- x = torch.randn(3, steps, 16)
- y = tr(x)
-
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True)
- z = torch.randn(3, 21, 16)
- y = tr(x, cross_attention_src=z)
- assert y.shape == x.shape
diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/progress.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/progress.py
deleted file mode 100644
index 702b24cf6668e6caad38d3c315eb658b6af4d230..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/progress.py
+++ /dev/null
@@ -1,98 +0,0 @@
-'''
-Utilities for showing progress bars, controlling default verbosity, etc.
-'''
-
-# If the tqdm package is not available, then do not show progress bars;
-# just connect print_progress to print.
-try:
- from tqdm import tqdm, tqdm_notebook
-except:
- tqdm = None
-
-default_verbosity = False
-
-def verbose_progress(verbose):
- '''
- Sets default verbosity level. Set to True to see progress bars.
- '''
- global default_verbosity
- default_verbosity = verbose
-
-def tqdm_terminal(it, *args, **kwargs):
- '''
- Some settings for tqdm that make it run better in resizable terminals.
- '''
- return tqdm(it, *args, dynamic_ncols=True, ascii=True,
- leave=(not nested_tqdm()), **kwargs)
-
-def in_notebook():
- '''
- True if running inside a Jupyter notebook.
- '''
- # From https://stackoverflow.com/a/39662359/265298
- try:
- shell = get_ipython().__class__.__name__
- if shell == 'ZMQInteractiveShell':
- return True # Jupyter notebook or qtconsole
- elif shell == 'TerminalInteractiveShell':
- return False # Terminal running IPython
- else:
- return False # Other type (?)
- except NameError:
- return False # Probably standard Python interpreter
-
-def nested_tqdm():
- '''
- True if there is an active tqdm progress loop on the stack.
- '''
- return hasattr(tqdm, '_instances') and len(tqdm._instances) > 0
-
-def post_progress(**kwargs):
- '''
- When within a progress loop, post_progress(k=str) will display
- the given k=str status on the right-hand-side of the progress
- status bar. If not within a visible progress bar, does nothing.
- '''
- if nested_tqdm():
- innermost = max(tqdm._instances, key=lambda x: x.pos)
- innermost.set_postfix(**kwargs)
-
-def desc_progress(desc):
- '''
- When within a progress loop, desc_progress(str) changes the
-    left-hand-side description of the loop to the given description.
- '''
- if nested_tqdm():
- innermost = max(tqdm._instances, key=lambda x: x.pos)
- innermost.set_description(desc)
-
-def print_progress(*args):
- '''
-    Prints the given arguments when default verbosity is enabled.
-    Uses tqdm.write when tqdm is available so that any active
-    progress bar is not disrupted; otherwise falls back to print.
- '''
- if default_verbosity:
- printfn = print if tqdm is None else tqdm.write
- printfn(' '.join(str(s) for s in args))
-
-def default_progress(verbose=None, iftop=False):
- '''
- Returns a progress function that can wrap iterators to print
- progress messages, if verbose is True.
-
- If verbose is False or if iftop is True and there is already
- a top-level tqdm loop being reported, then a quiet non-printing
- identity function is returned.
-
-    verbose can also be set to a specific progress function rather
- than True, and that function will be used.
- '''
- global default_verbosity
- if verbose is None:
- verbose = default_verbosity
- if not verbose or (iftop and nested_tqdm()) or tqdm is None:
- return lambda x, *args, **kw: x
- if verbose == True:
- return tqdm_notebook if in_notebook() else tqdm_terminal
- return verbose
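A short usage sketch for the helpers above, assuming the module is importable as netdissect.progress; the loop body is illustrative:

```python
from netdissect.progress import verbose_progress, default_progress, post_progress

verbose_progress(True)            # turn on progress bars globally
progress = default_progress()     # tqdm in a terminal/notebook, identity when quiet

for step in progress(range(100), desc="dissecting"):
    post_progress(step=step)      # shown on the right of the active bar
```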
diff --git a/spaces/DunnBC22/Password_Strength_Classifier_with_CodeBERT/README.md b/spaces/DunnBC22/Password_Strength_Classifier_with_CodeBERT/README.md
deleted file mode 100644
index bb1186de10cb1b00bfdacbdd93326180a3fadc8a..0000000000000000000000000000000000000000
--- a/spaces/DunnBC22/Password_Strength_Classifier_with_CodeBERT/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: DunnBC22-Password_Strength_Classifier_with_CodeBERT
-emoji: 💻
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.41.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Edisonymy/buy-or-rent/src/utils/ga.py b/spaces/Edisonymy/buy-or-rent/src/utils/ga.py
deleted file mode 100644
index 3ec9cfa43a914924fa1b22f590ca5f3a07010a19..0000000000000000000000000000000000000000
--- a/spaces/Edisonymy/buy-or-rent/src/utils/ga.py
+++ /dev/null
@@ -1,37 +0,0 @@
-from bs4 import BeautifulSoup
-import shutil
-import pathlib
-import logging
-import streamlit as st
-
-
-def add_analytics_tag():
-    # replace G-QSN0R08N2M with your web app's ID
-
- analytics_js = """
-
-
-
- """
- analytics_id = "G-QSN0R08N2M"
-
-
- # Identify html path of streamlit
- index_path = pathlib.Path(st.__file__).parent / "static" / "index.html"
- logging.info(f'editing {index_path}')
- soup = BeautifulSoup(index_path.read_text(), features="html.parser")
- if not soup.find(id=analytics_id): # if id not found within html file
- bck_index = index_path.with_suffix('.bck')
- if bck_index.exists():
- shutil.copy(bck_index, index_path) # backup recovery
- else:
- shutil.copy(index_path, bck_index) # save backup
- html = str(soup)
-        new_html = html.replace('<head>', '<head>\n' + analytics_js)
- index_path.write_text(new_html)
\ No newline at end of file
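The helper above patches Streamlit's bundled index.html once per environment, so it is meant to be called a single time near the top of the app before anything is rendered. A minimal sketch (the app body is illustrative):

```python
import streamlit as st
from src.utils.ga import add_analytics_tag  # path as in the deleted module

add_analytics_tag()  # injects the gtag snippet into Streamlit's index.html
st.title("Buy or Rent?")  # rest of the app follows
```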
diff --git a/spaces/Enigma007/Classifier-Fasttext/app.py b/spaces/Enigma007/Classifier-Fasttext/app.py
deleted file mode 100644
index 293b0952bae737e78d5e28e7684b88398a41148e..0000000000000000000000000000000000000000
--- a/spaces/Enigma007/Classifier-Fasttext/app.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import streamlit as st
-import re
-import fasttext
-
-model = fasttext.load_model("fasttext_model.bin")
-
-def preprocess_input(text):
- text = re.sub(r'[^\w\s\']|\n', ' ', text)
- text = re.sub(' +', ' ', text)
- return text.strip().lower()
-
-def classify_transcript(transcript):
- preprocessed_transcript = preprocess_input(transcript)
-
- prediction = model.predict(preprocessed_transcript)
-
- predicted_label = prediction[0][0].replace('__label__', '')
-
- return predicted_label
-
-def main():
- st.title("FASTTEXT MENTAL HEALTH CLASSIFIER")
- st.write("Type 'exit' in the input box below to end the conversation.")
-
- user_input = st.text_area("Please enter the transcript of the patient:", "")
-
- if st.button("Classify"):
- if user_input.lower() == 'exit':
- st.stop()
- else:
- predicted_disease = classify_transcript(user_input)
- st.write(f"Based on the transcript, the predicted disease category is: {predicted_disease}")
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/EronSamez/RVC_HFmeu/infer/modules/onnx/export.py b/spaces/EronSamez/RVC_HFmeu/infer/modules/onnx/export.py
deleted file mode 100644
index ed4a4162ff04b7e12642fcbe96847f8ea9db06aa..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/infer/modules/onnx/export.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import torch
-
-from infer.lib.infer_pack.models_onnx import SynthesizerTrnMsNSFsidM
-
-
-def export_onnx(ModelPath, ExportedPath):
- cpt = torch.load(ModelPath, map_location="cpu")
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]
- vec_channels = 256 if cpt.get("version", "v1") == "v1" else 768
-
- test_phone = torch.rand(1, 200, vec_channels) # hidden unit
-    test_phone_lengths = torch.tensor([200]).long() # hidden unit length (doesn't seem to matter)
-    test_pitch = torch.randint(size=(1, 200), low=5, high=255) # fundamental frequency (in Hz)
-    test_pitchf = torch.rand(1, 200) # NSF fundamental frequency
-    test_ds = torch.LongTensor([0]) # speaker ID
-    test_rnd = torch.rand(1, 192, 200) # noise (adds a random factor)
-
-    device = "cpu" # device used for export (does not affect how the model is used)
-
- net_g = SynthesizerTrnMsNSFsidM(
- *cpt["config"], is_half=False, version=cpt.get("version", "v1")
-    ) # export in fp32 (fp16 support in C++ would require manually rearranging memory, so fp16 is not used for now)
- net_g.load_state_dict(cpt["weight"], strict=False)
- input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"]
- output_names = [
- "audio",
- ]
-    # net_g.construct_spkmixmap(n_speaker) export with a multi-speaker mixing track
- torch.onnx.export(
- net_g,
- (
- test_phone.to(device),
- test_phone_lengths.to(device),
- test_pitch.to(device),
- test_pitchf.to(device),
- test_ds.to(device),
- test_rnd.to(device),
- ),
- ExportedPath,
- dynamic_axes={
- "phone": [1],
- "pitch": [1],
- "pitchf": [1],
- "rnd": [2],
- },
- do_constant_folding=False,
- opset_version=13,
- verbose=False,
- input_names=input_names,
- output_names=output_names,
- )
- return "Finished"
diff --git a/spaces/Farazquraishi/pendora/networks/layers.py b/spaces/Farazquraishi/pendora/networks/layers.py
deleted file mode 100644
index d419574b3a14c8f2138e52512eb0479456a704b5..0000000000000000000000000000000000000000
--- a/spaces/Farazquraishi/pendora/networks/layers.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import tensorflow as tf
-from tensorflow.keras.layers import Layer, Dense
-
-
-def sin_activation(x, omega=30):
- return tf.math.sin(omega * x)
-
-
-class AdaIN(Layer):
- def __init__(self, **kwargs):
- super(AdaIN, self).__init__(**kwargs)
-
- def build(self, input_shapes):
- x_shape = input_shapes[0]
- w_shape = input_shapes[1]
-
- self.w_channels = w_shape[-1]
- self.x_channels = x_shape[-1]
-
- self.dense_1 = Dense(self.x_channels)
- self.dense_2 = Dense(self.x_channels)
-
- def call(self, inputs):
- x, w = inputs
- ys = tf.reshape(self.dense_1(w), (-1, 1, 1, self.x_channels))
- yb = tf.reshape(self.dense_2(w), (-1, 1, 1, self.x_channels))
- return ys * x + yb
-
- def get_config(self):
- config = {
- #'w_channels': self.w_channels,
- #'x_channels': self.x_channels
- }
- base_config = super(AdaIN, self).get_config()
- return dict(list(base_config.items()) + list(config.items()))
-
-
-class AdaptiveAttention(Layer):
-
- def __init__(self, **kwargs):
- super(AdaptiveAttention, self).__init__(**kwargs)
-
- def call(self, inputs):
- m, a, i = inputs
- return (1 - m) * a + m * i
-
- def get_config(self):
- base_config = super(AdaptiveAttention, self).get_config()
- return base_config
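A shape-level sketch of the two layers deleted above: AdaIN modulates a feature map with a style vector, and AdaptiveAttention blends two images under a mask. The tensor shapes are illustrative and assume both classes are in scope:

```python
import tensorflow as tf

x = tf.random.normal((1, 32, 32, 64))    # feature map
w = tf.random.normal((1, 256))           # style / identity code
styled = AdaIN()([x, w])                 # per-channel scale and shift of x, same shape as x

m = tf.random.uniform((1, 32, 32, 1))    # blending mask in [0, 1]
a = tf.random.normal((1, 32, 32, 3))     # generated attributes
i = tf.random.normal((1, 32, 32, 3))     # source identity
blended = AdaptiveAttention()([m, a, i]) # (1 - m) * a + m * i
```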
diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/utils/realesrgan_utils.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/utils/realesrgan_utils.py
deleted file mode 100644
index ff94523b7ddd61f0b72280950fd36e1b8133bf4c..0000000000000000000000000000000000000000
--- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/utils/realesrgan_utils.py
+++ /dev/null
@@ -1,296 +0,0 @@
-import cv2
-import math
-import numpy as np
-import os
-import queue
-import threading
-import torch
-from basicsr.utils.download_util import load_file_from_url
-from torch.nn import functional as F
-
-# ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-
-class RealESRGANer():
- """A helper class for upsampling images with RealESRGAN.
-
- Args:
- scale (int): Upsampling scale factor used in the networks. It is usually 2 or 4.
-        model_path (str): The path to the pretrained model. It can also be a URL (the file is downloaded automatically first).
-        model (nn.Module): The defined network. Default: None.
-        tile (int): Since overly large inputs can run out of GPU memory, this tile option first crops the
-            input image into tiles, processes each tile, and finally merges them back into one image.
-            0 means tiling is not used. Default: 0.
-        tile_pad (int): The pad size for each tile, to remove border artifacts. Default: 10.
-        pre_pad (int): Pad the input images to avoid border artifacts. Default: 10.
-        half (bool): Whether to use half precision during inference. Default: False.
- """
-
- def __init__(self,
- scale,
- model_path,
- model=None,
- tile=0,
- tile_pad=10,
- pre_pad=10,
- half=False,
- device=None,
- gpu_id=None):
- self.scale = scale
- self.tile_size = tile
- self.tile_pad = tile_pad
- self.pre_pad = pre_pad
- self.mod_scale = None
- self.half = half
-
- # initialize model
- if gpu_id:
- self.device = torch.device(
- f'cuda:{gpu_id}' if torch.cuda.is_available() else 'cpu') if device is None else device
- else:
- self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') if device is None else device
- # if the model_path starts with https, it will first download models to the folder: realesrgan/weights
- if model_path.startswith('https://'):
- model_path = load_file_from_url(
- url=model_path, model_dir=os.path.join('weights/realesrgan'), progress=True, file_name=None)
- loadnet = torch.load(model_path, map_location=torch.device('cpu'))
- # prefer to use params_ema
- if 'params_ema' in loadnet:
- keyname = 'params_ema'
- else:
- keyname = 'params'
- model.load_state_dict(loadnet[keyname], strict=True)
- model.eval()
- self.model = model.to(self.device)
- if self.half:
- self.model = self.model.half()
-
- def pre_process(self, img):
- """Pre-process, such as pre-pad and mod pad, so that the images can be divisible
- """
- img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float()
- self.img = img.unsqueeze(0).to(self.device)
- if self.half:
- self.img = self.img.half()
-
- # pre_pad
- if self.pre_pad != 0:
- self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), 'reflect')
- # mod pad for divisible borders
- if self.scale == 2:
- self.mod_scale = 2
- elif self.scale == 1:
- self.mod_scale = 4
- if self.mod_scale is not None:
- self.mod_pad_h, self.mod_pad_w = 0, 0
- _, _, h, w = self.img.size()
- if (h % self.mod_scale != 0):
- self.mod_pad_h = (self.mod_scale - h % self.mod_scale)
- if (w % self.mod_scale != 0):
- self.mod_pad_w = (self.mod_scale - w % self.mod_scale)
- self.img = F.pad(self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), 'reflect')
-
- def process(self):
- # model inference
- self.output = self.model(self.img)
-
- def tile_process(self):
- """It will first crop input images to tiles, and then process each tile.
-        Finally, all the processed tiles are merged into one image.
-
- Modified from: https://github.com/ata4/esrgan-launcher
- """
- batch, channel, height, width = self.img.shape
- output_height = height * self.scale
- output_width = width * self.scale
- output_shape = (batch, channel, output_height, output_width)
-
- # start with black image
- self.output = self.img.new_zeros(output_shape)
- tiles_x = math.ceil(width / self.tile_size)
- tiles_y = math.ceil(height / self.tile_size)
-
- # loop over all tiles
- for y in range(tiles_y):
- for x in range(tiles_x):
- # extract tile from input image
- ofs_x = x * self.tile_size
- ofs_y = y * self.tile_size
- # input tile area on total image
- input_start_x = ofs_x
- input_end_x = min(ofs_x + self.tile_size, width)
- input_start_y = ofs_y
- input_end_y = min(ofs_y + self.tile_size, height)
-
- # input tile area on total image with padding
- input_start_x_pad = max(input_start_x - self.tile_pad, 0)
- input_end_x_pad = min(input_end_x + self.tile_pad, width)
- input_start_y_pad = max(input_start_y - self.tile_pad, 0)
- input_end_y_pad = min(input_end_y + self.tile_pad, height)
-
- # input tile dimensions
- input_tile_width = input_end_x - input_start_x
- input_tile_height = input_end_y - input_start_y
- tile_idx = y * tiles_x + x + 1
- input_tile = self.img[:, :, input_start_y_pad:input_end_y_pad, input_start_x_pad:input_end_x_pad]
-
- # upscale tile
- try:
- with torch.no_grad():
- output_tile = self.model(input_tile)
- except RuntimeError as error:
- print('Error', error)
- # print(f'\tTile {tile_idx}/{tiles_x * tiles_y}')
-
- # output tile area on total image
- output_start_x = input_start_x * self.scale
- output_end_x = input_end_x * self.scale
- output_start_y = input_start_y * self.scale
- output_end_y = input_end_y * self.scale
-
- # output tile area without padding
- output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale
- output_end_x_tile = output_start_x_tile + input_tile_width * self.scale
- output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale
- output_end_y_tile = output_start_y_tile + input_tile_height * self.scale
-
- # put tile into output image
- self.output[:, :, output_start_y:output_end_y,
- output_start_x:output_end_x] = output_tile[:, :, output_start_y_tile:output_end_y_tile,
- output_start_x_tile:output_end_x_tile]
-
- def post_process(self):
- # remove extra pad
- if self.mod_scale is not None:
- _, _, h, w = self.output.size()
- self.output = self.output[:, :, 0:h - self.mod_pad_h * self.scale, 0:w - self.mod_pad_w * self.scale]
- # remove prepad
- if self.pre_pad != 0:
- _, _, h, w = self.output.size()
- self.output = self.output[:, :, 0:h - self.pre_pad * self.scale, 0:w - self.pre_pad * self.scale]
- return self.output
-
- @torch.no_grad()
- def enhance(self, img, outscale=None, alpha_upsampler='realesrgan'):
- h_input, w_input = img.shape[0:2]
- # img: numpy
- img = img.astype(np.float32)
- if np.max(img) > 256: # 16-bit image
- max_range = 65535
- print('\tInput is a 16-bit image')
- else:
- max_range = 255
- img = img / max_range
- if len(img.shape) == 2: # gray image
- img_mode = 'L'
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
- elif img.shape[2] == 4: # RGBA image with alpha channel
- img_mode = 'RGBA'
- alpha = img[:, :, 3]
- img = img[:, :, 0:3]
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
- if alpha_upsampler == 'realesrgan':
- alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB)
- else:
- img_mode = 'RGB'
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
-
- # ------------------- process image (without the alpha channel) ------------------- #
- with torch.no_grad():
- self.pre_process(img)
- if self.tile_size > 0:
- self.tile_process()
- else:
- self.process()
- output_img_t = self.post_process()
- output_img = output_img_t.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0))
- if img_mode == 'L':
- output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY)
- del output_img_t
- torch.cuda.empty_cache()
-
- # ------------------- process the alpha channel if necessary ------------------- #
- if img_mode == 'RGBA':
- if alpha_upsampler == 'realesrgan':
- self.pre_process(alpha)
- if self.tile_size > 0:
- self.tile_process()
- else:
- self.process()
- output_alpha = self.post_process()
- output_alpha = output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0))
- output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY)
- else: # use the cv2 resize for alpha channel
- h, w = alpha.shape[0:2]
- output_alpha = cv2.resize(alpha, (w * self.scale, h * self.scale), interpolation=cv2.INTER_LINEAR)
-
- # merge the alpha channel
- output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA)
- output_img[:, :, 3] = output_alpha
-
- # ------------------------------ return ------------------------------ #
- if max_range == 65535: # 16-bit image
- output = (output_img * 65535.0).round().astype(np.uint16)
- else:
- output = (output_img * 255.0).round().astype(np.uint8)
-
- if outscale is not None and outscale != float(self.scale):
- output = cv2.resize(
- output, (
- int(w_input * outscale),
- int(h_input * outscale),
- ), interpolation=cv2.INTER_LANCZOS4)
-
- return output, img_mode
-
-
-class PrefetchReader(threading.Thread):
- """Prefetch images.
-
- Args:
-        img_list (list[str]): A list of image paths to be read.
- num_prefetch_queue (int): Number of prefetch queue.
- """
-
- def __init__(self, img_list, num_prefetch_queue):
- super().__init__()
- self.que = queue.Queue(num_prefetch_queue)
- self.img_list = img_list
-
- def run(self):
- for img_path in self.img_list:
- img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED)
- self.que.put(img)
-
- self.que.put(None)
-
- def __next__(self):
- next_item = self.que.get()
- if next_item is None:
- raise StopIteration
- return next_item
-
- def __iter__(self):
- return self
-
-
-class IOConsumer(threading.Thread):
-
- def __init__(self, opt, que, qid):
- super().__init__()
- self._queue = que
- self.qid = qid
- self.opt = opt
-
- def run(self):
- while True:
- msg = self._queue.get()
- if isinstance(msg, str) and msg == 'quit':
- break
-
- output = msg['output']
- save_path = msg['save_path']
- cv2.imwrite(save_path, output)
- print(f'IO worker {self.qid} is done.')
\ No newline at end of file
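The two worker classes at the end of this deleted module handle image prefetching and asynchronous saving. A minimal sketch of how they can be wired together (file names, queue size, and the pass-through `opt=None` below are illustrative assumptions, not taken from the repository):

```python
import queue

# Hypothetical input paths; any list of image files works.
img_list = ['inputs/0001.png', 'inputs/0002.png']

reader = PrefetchReader(img_list, num_prefetch_queue=4)
reader.start()                      # background thread decodes images into its queue

save_que = queue.Queue()
writer = IOConsumer(opt=None, que=save_que, qid=0)
writer.start()                      # background thread writes whatever lands in save_que

for idx, img in enumerate(reader):  # __next__ pops decoded images; a None sentinel ends iteration
    # ... run the upscaler on `img` here ...
    save_que.put({'output': img, 'save_path': f'results/{idx:04d}.png'})

save_que.put('quit')                # IOConsumer.run() breaks on this sentinel
writer.join()
reader.join()
```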
diff --git a/spaces/Fengbinbin/gpt-academic/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/Fengbinbin/gpt-academic/.github/ISSUE_TEMPLATE/feature_request.md
deleted file mode 100644
index e46a4c01e804aa4b649bd40af6c13d5981c873d4..0000000000000000000000000000000000000000
--- a/spaces/Fengbinbin/gpt-academic/.github/ISSUE_TEMPLATE/feature_request.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-name: Feature request
-about: Suggest an idea for this project
-title: ''
-labels: ''
-assignees: ''
-
----
-
-
diff --git a/spaces/GirishKiran/sentiment/README.md b/spaces/GirishKiran/sentiment/README.md
deleted file mode 100644
index 5f406668b2a0c1a9b9cf467b61a0d9a3817cbc02..0000000000000000000000000000000000000000
--- a/spaces/GirishKiran/sentiment/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Sentiment
-emoji: 🐨
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/archs/srvgg_arch.py b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/archs/srvgg_arch.py
deleted file mode 100644
index 23b2f372a2975b499b6c05bf213cf7dec1a1cea6..0000000000000000000000000000000000000000
--- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/archs/srvgg_arch.py
+++ /dev/null
@@ -1,77 +0,0 @@
-from basicsr.utils.registry import ARCH_REGISTRY
-from torch import nn as nn
-from torch.nn import functional as F
-
-
-@ARCH_REGISTRY.register()
-class SRVGGNetCompact(nn.Module):
- """A compact VGG-style network structure for super-resolution.
-
-    It is a compact network structure that performs upsampling only in the last layer, so no convolution is
-    conducted on the HR feature space.
-
- Args:
- num_in_ch (int): Channel number of inputs. Default: 3.
- num_out_ch (int): Channel number of outputs. Default: 3.
- num_feat (int): Channel number of intermediate features. Default: 64.
- num_conv (int): Number of convolution layers in the body network. Default: 16.
- upscale (int): Upsampling factor. Default: 4.
- act_type (str): Activation type, options: 'relu', 'prelu', 'leakyrelu'. Default: prelu.
- """
-
- def __init__(
- self,
- num_in_ch=3,
- num_out_ch=3,
- num_feat=64,
- num_conv=16,
- upscale=4,
- act_type="prelu",
- ):
- super(SRVGGNetCompact, self).__init__()
- self.num_in_ch = num_in_ch
- self.num_out_ch = num_out_ch
- self.num_feat = num_feat
- self.num_conv = num_conv
- self.upscale = upscale
- self.act_type = act_type
-
- self.body = nn.ModuleList()
- # the first conv
- self.body.append(nn.Conv2d(num_in_ch, num_feat, 3, 1, 1))
- # the first activation
- if act_type == "relu":
- activation = nn.ReLU(inplace=True)
- elif act_type == "prelu":
- activation = nn.PReLU(num_parameters=num_feat)
- elif act_type == "leakyrelu":
- activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
- self.body.append(activation)
-
- # the body structure
- for _ in range(num_conv):
- self.body.append(nn.Conv2d(num_feat, num_feat, 3, 1, 1))
- # activation
- if act_type == "relu":
- activation = nn.ReLU(inplace=True)
- elif act_type == "prelu":
- activation = nn.PReLU(num_parameters=num_feat)
- elif act_type == "leakyrelu":
- activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
- self.body.append(activation)
-
- # the last conv
- self.body.append(nn.Conv2d(num_feat, num_out_ch * upscale * upscale, 3, 1, 1))
- # upsample
- self.upsampler = nn.PixelShuffle(upscale)
-
- def forward(self, x):
- out = x
- for i in range(0, len(self.body)):
- out = self.body[i](out)
-
- out = self.upsampler(out)
- # add the nearest upsampled image, so that the network learns the residual
- base = F.interpolate(x, scale_factor=self.upscale, mode="nearest")
- out += base
- return out
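A quick sanity-check sketch for the network above (assuming `basicsr` is installed so the `ARCH_REGISTRY` import resolves): with `upscale=4`, the last convolution produces `num_out_ch * 16` channels, the pixel shuffle folds them back to `num_out_ch` channels at 4x resolution, and the nearest-neighbour upsampled input is added as the residual base.

```python
import torch

# Instantiate the compact SR network defined above with its default settings.
model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16,
                        upscale=4, act_type="prelu").eval()

x = torch.randn(1, 3, 32, 32)  # dummy low-resolution input
with torch.no_grad():
    y = model(x)
print(y.shape)  # expected: torch.Size([1, 3, 128, 128])
```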
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/schedules/schedule_160k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/schedules/schedule_160k.py
deleted file mode 100644
index 52603890b10f25faf8eec9f9e5a4468fae09b811..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/schedules/schedule_160k.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# optimizer
-optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
-optimizer_config = dict()
-# learning policy
-lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False)
-# runtime settings
-runner = dict(type='IterBasedRunner', max_iters=160000)
-checkpoint_config = dict(by_epoch=False, interval=16000)
-evaluation = dict(interval=16000, metric='mIoU')
diff --git a/spaces/GranataDizzyDive/dizzydive/README.md b/spaces/GranataDizzyDive/dizzydive/README.md
deleted file mode 100644
index e1f0b0bbf84a12842f95243dcc1016c18ab0fa48..0000000000000000000000000000000000000000
--- a/spaces/GranataDizzyDive/dizzydive/README.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Argilla Space Template
-emoji: 🏷️
-colorFrom: purple
-colorTo: red
-sdk: docker
-app_port: 6900
-fullWidth: true
-tags:
-- argilla
-duplicated_from: argilla/argilla-template-space
----
-
-This is the Argilla Space Template you can use to deploy and run your own instance of Argilla on the Hugging Face Hub, for labeling, fun, and active learning loops!
-
-Login with:
-
-user: argilla
-password: 1234
\ No newline at end of file
diff --git a/spaces/GroveStreet/GTA_SOVITS/diffusion/how to export onnx.md b/spaces/GroveStreet/GTA_SOVITS/diffusion/how to export onnx.md
deleted file mode 100644
index 6d22719fd1a8e9d034e6224cc95f4b50d44a0320..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/diffusion/how to export onnx.md
+++ /dev/null
@@ -1,4 +0,0 @@
-- Open [onnx_export](onnx_export.py)
-- Change `project_name = "dddsp"` to your own project name
-- Change `model_path = f'{project_name}/model_500000.pt'` to the path of your model checkpoint
-- Run
\ No newline at end of file
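A minimal sketch of the two edits the note above asks for (the project name and checkpoint file below are placeholders; the rest of `onnx_export.py` is assumed to stay as shipped):

```python
# In onnx_export.py, adjust the two assignments the note refers to:
project_name = "my_project"                      # was "dddsp"
model_path = f"{project_name}/model_500000.pt"   # point this at your own checkpoint
```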
diff --git a/spaces/HESOAYM/ElviraMulti/modules/pdf_func.py b/spaces/HESOAYM/ElviraMulti/modules/pdf_func.py
deleted file mode 100644
index 0aba6b7b891fc527c79b887256b0cbaa81ae5b3d..0000000000000000000000000000000000000000
--- a/spaces/HESOAYM/ElviraMulti/modules/pdf_func.py
+++ /dev/null
@@ -1,180 +0,0 @@
-from types import SimpleNamespace
-import pdfplumber
-import logging
-from llama_index import Document
-
-def prepare_table_config(crop_page):
-    """Prepare the table search boundaries; requires that `page` be the original (uncropped) page
-
- From https://github.com/jsvine/pdfplumber/issues/242
- """
- page = crop_page.root_page # root/parent
- cs = page.curves + page.edges
- def curves_to_edges():
- """See https://github.com/jsvine/pdfplumber/issues/127"""
- edges = []
- for c in cs:
- edges += pdfplumber.utils.rect_to_edges(c)
- return edges
- edges = curves_to_edges()
- return {
- "vertical_strategy": "explicit",
- "horizontal_strategy": "explicit",
- "explicit_vertical_lines": edges,
- "explicit_horizontal_lines": edges,
- "intersection_y_tolerance": 10,
- }
-
-def get_text_outside_table(crop_page):
- ts = prepare_table_config(crop_page)
- if len(ts["explicit_vertical_lines"]) == 0 or len(ts["explicit_horizontal_lines"]) == 0:
- return crop_page
-
- ### Get the bounding boxes of the tables on the page.
- bboxes = [table.bbox for table in crop_page.root_page.find_tables(table_settings=ts)]
- def not_within_bboxes(obj):
- """Check if the object is in any of the table's bbox."""
- def obj_in_bbox(_bbox):
- """See https://github.com/jsvine/pdfplumber/blob/stable/pdfplumber/table.py#L404"""
- v_mid = (obj["top"] + obj["bottom"]) / 2
- h_mid = (obj["x0"] + obj["x1"]) / 2
- x0, top, x1, bottom = _bbox
- return (h_mid >= x0) and (h_mid < x1) and (v_mid >= top) and (v_mid < bottom)
- return not any(obj_in_bbox(__bbox) for __bbox in bboxes)
-
- return crop_page.filter(not_within_bboxes)
-# Use LaTeX for formulas: wrap inline formulas in $ and display formulas in $$
-
-extract_words = lambda page: page.extract_words(keep_blank_chars=True, y_tolerance=0, x_tolerance=1, extra_attrs=["fontname", "size", "object_type"])
-# dict_keys(['text', 'x0', 'x1', 'top', 'doctop', 'bottom', 'upright', 'direction', 'fontname', 'size'])
-
-def get_title_with_cropped_page(first_page):
-    title = [] # collect the title
-    x0,top,x1,bottom = first_page.bbox # get the page bounding box
-
- for word in extract_words(first_page):
- word = SimpleNamespace(**word)
-
- if word.size >= 14:
- title.append(word.text)
- title_bottom = word.bottom
-        elif word.text == "Abstract": # locate the page abstract
- top = word.top
-
- user_info = [i["text"] for i in extract_words(first_page.within_bbox((x0,title_bottom,x1,top)))]
-    # crop away the upper part; within_bbox: fully included, crop: partially included
- return title, user_info, first_page.within_bbox((x0,top,x1,bottom))
-
-def get_column_cropped_pages(pages, two_column=True):
- new_pages = []
- for page in pages:
- if two_column:
- left = page.within_bbox((0, 0, page.width/2, page.height),relative=True)
- right = page.within_bbox((page.width/2, 0, page.width, page.height), relative=True)
- new_pages.append(left)
- new_pages.append(right)
- else:
- new_pages.append(page)
-
- return new_pages
-
-def parse_pdf(filename, two_column = True):
- level = logging.getLogger().level
- if level == logging.getLevelName("DEBUG"):
- logging.getLogger().setLevel("INFO")
-
- with pdfplumber.open(filename) as pdf:
- title, user_info, first_page = get_title_with_cropped_page(pdf.pages[0])
- new_pages = get_column_cropped_pages([first_page] + pdf.pages[1:], two_column)
-
- chapters = []
- # tuple (chapter_name, [pageid] (start,stop), chapter_text)
- create_chapter = lambda page_start,name_top,name_bottom: SimpleNamespace(
- name=[],
- name_top=name_top,
- name_bottom=name_bottom,
- record_chapter_name = True,
-
- page_start=page_start,
- page_stop=None,
-
- text=[],
- )
- cur_chapter = None
-
-        # iterate over the PDF document page by page
- for idx, page in enumerate(new_pages):
- page = get_text_outside_table(page)
-
-            # iterate over the page text line by line
- for word in extract_words(page):
- word = SimpleNamespace(**word)
-
-                # check whether the word is printed in a heading-sized font; if so, treat it as the start of a new chapter
-                if word.size >= 11: # a chapter name appears
- if cur_chapter is None:
- cur_chapter = create_chapter(page.page_number, word.top, word.bottom)
-                    elif not cur_chapter.record_chapter_name or (cur_chapter.name_bottom != word.bottom and cur_chapter.name_top != word.top):
-                        # stop appending to the chapter name
-                        cur_chapter.page_stop = page.page_number # stop id
-                        chapters.append(cur_chapter)
-                        # reset the current chapter info
-                        cur_chapter = create_chapter(page.page_number, word.top, word.bottom)
-
- # print(word.size, word.top, word.bottom, word.text)
- cur_chapter.name.append(word.text)
- else:
-                    cur_chapter.record_chapter_name = False # end of the chapter name
- cur_chapter.text.append(word.text)
- else:
-            # handle the last chapter
- cur_chapter.page_stop = page.page_number # stop id
- chapters.append(cur_chapter)
-
- for i in chapters:
- logging.info(f"section: {i.name} pages:{i.page_start, i.page_stop} word-count:{len(i.text)}")
- logging.debug(" ".join(i.text))
-
- title = " ".join(title)
- user_info = " ".join(user_info)
- text = f"Article Title: {title}, Information:{user_info}\n"
- for idx, chapter in enumerate(chapters):
- chapter.name = " ".join(chapter.name)
- text += f"The {idx}th Chapter {chapter.name}: " + " ".join(chapter.text) + "\n"
-
- logging.getLogger().setLevel(level)
- return Document(text=text, extra_info={"title": title})
-
-BASE_POINTS = """
-1. Who are the authors?
-2. What is the process of the proposed method?
-3. What is the performance of the proposed method? Please note down its performance metrics.
-4. What are the baseline models and their performances? Please note down these baseline methods.
-5. What dataset did this paper use?
-"""
-
-READING_PROMPT = """
-You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n
-Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n
-When you are reading, You need to focus on these key points:{}
-"""
-
-READING_PROMT_V2 = """
-You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n
-Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n
-When you are reading, You need to focus on these key points:{},
-
-And You need to generate a brief but informative title for this part.
-Your return format:
-- title: '...'
-- summary: '...'
-"""
-
-SUMMARY_PROMPT = "You are a researcher helper bot. Now you need to read the summaries of a research paper."
-
-
-if __name__ == '__main__':
- # Test code
- z = parse_pdf("./build/test.pdf")
-    print(z.extra_info["title"])
-    print(z.text)
\ No newline at end of file
diff --git a/spaces/Hamish/openai_demo/app_old.py b/spaces/Hamish/openai_demo/app_old.py
deleted file mode 100644
index 57e2e6dcbab6af8b6c134b57ab19e5f49e2e1e93..0000000000000000000000000000000000000000
--- a/spaces/Hamish/openai_demo/app_old.py
+++ /dev/null
@@ -1,126 +0,0 @@
-# import os
-
-# import streamlit as st
-# from langchain.embeddings.openai import OpenAIEmbeddings
-# from langchain.vectorstores import Chroma
-# from langchain.document_loaders import TextLoader
-# from langchain.text_splitter import CharacterTextSplitter
-# from langchain.chat_models import ChatOpenAI
-
-# from langchain.chains import RetrievalQA
-# # from langchain.llms import OpenAI
-
-# import pandas as pd
-# import umap
-# import matplotlib.pyplot as plt
-
-# import extra_streamlit_components as stx
-
-# import fitz
-
-
-# st.set_page_config(page_title="CoreMind AI", layout="wide")
-
-# st.header("CoreMind AI")
-
-# # ====================================================================================================
-
-# # SIDEBAR
-# st.sidebar.title("Options")
-
-# openai_key = st.sidebar.text_input("OpenAI API Key", type="password", key="openai_api_key")
-
-# os.environ["OPENAI_API_KEY"] = openai_key
-
-
-# qa_temperature = st.sidebar.slider("QA Temperature", min_value=0.0, max_value=2.0, value=0.8, step=0.01, key="temperature")
-# qa_model = st.sidebar.selectbox("QA Model", ["gpt-3.5-turbo"], key="model")
-
-# # ====================================================================================================
-
-# if openai_key:
-# loader = TextLoader("raw_data.txt")
-# embeddings = OpenAIEmbeddings()
-# docsearch = Chroma(persist_directory="data", embedding_function=embeddings)
-
-# # ====================================================================================================
-
-# def question_answer(user_text, qa_temperature):
-# qa = RetrievalQA.from_chain_type(
-# llm=ChatOpenAI(temperature=qa_temperature, model_name=qa_model),
-# retriever=docsearch.as_retriever()
-# )
-# response = qa.run(user_text)
-# return response
-
-
-# # MAIN TABS
-# # add 3 tabs to the main part of the streamlit app
-# qa_tab, understanding_tab = st.tabs(["Document Querying", "Understanding"])
-
-# with qa_tab:
-# st.header("Question Answering")
-# st.write("Find the information you need right from your documents.")
-
-# qa_query = st.text_area("Enter your query", value="What is GEICO?", key="qa_query", help="Got a question you think your docs can answer? Just ask!")
-# qa_button = st.button("Query docs", disabled=not (openai_key and qa_query), key="qa_button", help="Make sure you have entered your OpenAI API key and a query.")
-
-# if qa_query and qa_button:
-# response = question_answer(qa_query, qa_temperature)
-# # response = "GEICO is the seventh largest auto insurer in the United States, with about 3.7 million cars insured. It is a low-cost operator and its competitive strength flows directly from this position. It is now a wholly-owned subsidiary of Berkshire Hathaway."
-# st.write(response)
-
-
-
-# with understanding_tab:
-# st.header("PDF Understanding")
-# st.write("Understand your PDFs better.")
-
-# pdf_file = st.file_uploader("Upload a PDF", type=["pdf"], key="pdf_file")
-
-# # save file
-# if pdf_file:
-# # with open("your_file.pdf", "wb") as f:
-# # f.write(pdf_file.getbuffer())
-
-# # # Open the PDF file
-# # # with open('your_file.pdf', 'rb') as file:
-# # # Create a PDF reader object
-# # with fitz.open('your_file.pdf') as doc:
-# # all_text = ""
-
-# # # Iterate over each page
-# # for page in doc:
-# # # Extract the text from the page
-# # text = page.get_text()
-# # all_text += text
-# # all_text += "\n\n"
-
-# # with open("pdf_data.txt", "a") as f:
-# # f.write(all_text)
-
-# # # Print the extracted text
-# # st.write("file uploaded")
-
-# # # chat = ChatAnthropic()
-
-# # loader = TextLoader("pdf_data.txt")
-# # documents = loader.load()
-# # text_splitter = CharacterTextSplitter(chunk_size=3000, chunk_overlap=300)
-# # texts = text_splitter.split_documents(documents)
-# # docsearch.add_documents(texts)
-# # docsearch.persist()
-
-# pdf_query = st.text_area("Query your pdf", key="pdf_query")
-
-# if pdf_query:
-# pdf_llm = RetrievalQA.from_chain_type(
-# llm=ChatOpenAI(temperature=0.8, model_name=qa_model),
-# retriever=docsearch.as_retriever(),
-# # reduce_k_below_max_tokens=True,
-# # return_source_documents=True,
-# # max_tokens = 2000
-# )
-# pdf_response = pdf_llm.run(pdf_query)
-# # response = "GEICO is the seventh largest auto insurer in the United States, with about 3.7 million cars insured. It is a low-cost operator and its competitive strength flows directly from this position. It is now a wholly-owned subsidiary of Berkshire Hathaway."
-# st.write(pdf_response)
diff --git a/spaces/ICML2022/OFA/fairseq/examples/latent_depth/README.md b/spaces/ICML2022/OFA/fairseq/examples/latent_depth/README.md
deleted file mode 100644
index 7774c333053b95d15b180fdfc3ee3cd817790520..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/latent_depth/README.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# Deep Transformers with Latent Depth (Li et al., 2020)
-
-[https://arxiv.org/abs/2009.13102](https://arxiv.org/abs/2009.13102).
-
-## Introduction
-
-We present a probabilistic framework to automatically learn which layer(s) to use by learning the posterior distributions of layer selection. As an extension of this framework, we propose a novel method to train one shared Transformer network for multilingual machine translation with different layer selection posteriors for each language pair.
-
-## Training a multilingual model with latent depth
-
-Below is an example of training with latent depth in decoder for one-to-many (O2M) related languages. We use the same preprocessed (numberized and binarized) TED8 dataset as in [Balancing Training for Multilingual Neural Machine Translation (Wang et al., 2020)](https://github.com/cindyxinyiwang/multiDDS), which could be generated by [the script](https://github.com/cindyxinyiwang/multiDDS/blob/multiDDS/util_scripts/prepare_multilingual_data.sh) the author provided.
-```bash
-lang_pairs_str="eng-aze,eng-bel,eng-ces,eng-glg,eng-por,eng-rus,eng-slk,eng-tur"
-databin_dir=
-
-fairseq-train ${databin_dir} \
- --user-dir examples/latent_depth/latent_depth_src \
- --lang-pairs "${lang_pairs_str}" \
- --arch multilingual_transformer_iwslt_de_en \
- --task multilingual_translation_latent_depth \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --share-encoders \
- --share-decoders \
- --decoder-langtok \
- --share-decoder-input-output-embed \
- --dropout 0.3 --attention-dropout 0.3 \
- --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \
- --lr-scheduler inverse_sqrt --stop-min-lr 1e-9 --warmup-init-lr 1e-7 --warmup-updates 8000 \
- --max-tokens 4096 --update-freq 1 \
- --lr 0.0015 \
- --clip-norm 1.0 \
- --seed 2 \
- --ddp-backend=legacy_ddp \
- --encoder-layers 12 \
- --decoder-layers 24 \
- --decoder-latent-layer \
- --sparsity-weight 0.1 \
- --anneal-updates 5000 \
- --soft-update 500 \
- --target-layers 12 \
- --share-weight 0.1
-```
-## Inference command
-
-```bash
-lang_pairs_str="eng-aze,eng-bel,eng-ces,eng-glg,eng-por,eng-rus,eng-slk,eng-tur"
-databin_dir=
-model_path=
-src_lang=
-tgt_lang=
-gen_data=
-
-fairseq-generate ${databin_dir} \
- --path ${model_path} \
- --task multilingual_translation_latent_depth \
- --decoder-latent-layer \
- --lang-pairs "${lang_pairs_str}" \
- -s ${src_lang} -t ${tgt_lang} \
- --gen-subset $gen_data \
- --scoring sacrebleu \
- --remove-bpe 'sentencepiece' \
- --lenpen 1.0 \
- --beam 5 \
- --decoder-langtok \
- --max-tokens 4096
-```
-
-
-## Citation
-```bibtex
-@article{li2020deep,
- title={Deep Transformers with Latent Depth},
- author={Li, Xian and Stickland, Asa Cooper and Tang, Yuqing and Kong, Xiang},
- journal={arXiv preprint arXiv:2009.13102},
- year={2020}
-}
-```
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/checkpoint_utils.py b/spaces/ICML2022/OFA/fairseq/fairseq/checkpoint_utils.py
deleted file mode 100644
index ef5d4c9022c3c35722f0bc9150260c7a65d35e5f..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/checkpoint_utils.py
+++ /dev/null
@@ -1,858 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import ast
-import collections
-import contextlib
-import logging
-import numpy as np
-import os
-import re
-import time
-import traceback
-from collections import OrderedDict
-from typing import Any, Dict, Optional, Union
-
-import torch
-from fairseq.data import data_utils
-from fairseq.dataclass.configs import CheckpointConfig
-from fairseq.dataclass.utils import (
- convert_namespace_to_omegaconf,
- overwrite_args_by_name,
-)
-from fairseq.distributed.fully_sharded_data_parallel import FSDP, has_FSDP
-from fairseq.file_io import PathManager
-from fairseq.models import FairseqDecoder, FairseqEncoder
-from omegaconf import DictConfig, open_dict, OmegaConf
-
-
-logger = logging.getLogger(__name__)
-
-
-def save_checkpoint(cfg: CheckpointConfig, trainer, epoch_itr, val_loss):
- from fairseq import meters
-
- # only one worker should attempt to create the required dir
- if trainer.data_parallel_rank == 0:
- os.makedirs(cfg.save_dir, exist_ok=True)
-
- prev_best = getattr(save_checkpoint, "best", val_loss)
- if val_loss is not None:
- best_function = max if cfg.maximize_best_checkpoint_metric else min
- save_checkpoint.best = best_function(val_loss, prev_best)
-
- if cfg.no_save:
- return
-
- trainer.consolidate_optimizer() # TODO(SS): do we need this if no_save_optimizer_state
-
- if not trainer.should_save_checkpoint_on_current_rank:
- if trainer.always_call_state_dict_during_save_checkpoint:
- trainer.state_dict()
- return
-
- write_timer = meters.StopwatchMeter()
- write_timer.start()
-
- epoch = epoch_itr.epoch
- end_of_epoch = epoch_itr.end_of_epoch()
- updates = trainer.get_num_updates()
-
- logger.info(f"Preparing to save checkpoint for epoch {epoch} @ {updates} updates")
-
- def is_better(a, b):
- return a >= b if cfg.maximize_best_checkpoint_metric else a <= b
-
- suffix = trainer.checkpoint_suffix
- checkpoint_conds = collections.OrderedDict()
- checkpoint_conds["checkpoint{}{}.pt".format(epoch, suffix)] = (
- end_of_epoch and not cfg.no_epoch_checkpoints and epoch % cfg.save_interval == 0
- )
- checkpoint_conds["checkpoint_{}_{}{}.pt".format(epoch, updates, suffix)] = (
- not end_of_epoch
- and cfg.save_interval_updates > 0
- and updates % cfg.save_interval_updates == 0
- )
- checkpoint_conds["checkpoint_best{}.pt".format(suffix)] = val_loss is not None and (
- not hasattr(save_checkpoint, "best")
- or is_better(val_loss, save_checkpoint.best)
- )
- if val_loss is not None and cfg.keep_best_checkpoints > 0:
- worst_best = getattr(save_checkpoint, "best", None)
- chkpts = checkpoint_paths(
- cfg.save_dir,
- pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format(
- cfg.best_checkpoint_metric, suffix
- ),
- )
- if len(chkpts) > 0:
- p = chkpts[-1] if cfg.maximize_best_checkpoint_metric else chkpts[0]
- worst_best = float(p.rsplit("_")[-1].replace("{}.pt".format(suffix), ""))
- # add random digits to resolve ties
- with data_utils.numpy_seed(epoch, updates, val_loss):
- rand_sfx = np.random.randint(0, cfg.keep_best_checkpoints)
-
- checkpoint_conds[
- "checkpoint.best_{}_{:.3f}{}{}.pt".format(
- cfg.best_checkpoint_metric,
- val_loss,
- rand_sfx,
- suffix
- )
- ] = worst_best is None or is_better(val_loss, worst_best)
- checkpoint_conds[
- "checkpoint_last{}.pt".format(suffix)
- ] = not cfg.no_last_checkpoints
-
- extra_state = {"train_iterator": epoch_itr.state_dict(), "val_loss": val_loss}
- if hasattr(save_checkpoint, "best"):
- extra_state.update({"best": save_checkpoint.best})
-
- checkpoints = [
- os.path.join(cfg.save_dir, fn) for fn, cond in checkpoint_conds.items() if cond
- ]
- if len(checkpoints) > 0:
- trainer.save_checkpoint(checkpoints[0], extra_state)
- for cp in checkpoints[1:]:
- if cfg.write_checkpoints_asynchronously:
- # TODO[ioPath]: Need to implement a delayed asynchronous
- # file copying/moving feature.
- logger.warning(
- f"ioPath is not copying {checkpoints[0]} to {cp} "
- "since async write mode is on."
- )
- else:
- assert PathManager.copy(
- checkpoints[0], cp, overwrite=True
- ), f"Failed to copy {checkpoints[0]} to {cp}"
-
- write_timer.stop()
- logger.info(
- "Saved checkpoint {} (epoch {} @ {} updates, score {}) (writing took {} seconds)".format(
- checkpoints[0], epoch, updates, val_loss, write_timer.sum
- )
- )
-
- if not end_of_epoch and cfg.keep_interval_updates > 0:
- # remove old checkpoints; checkpoints are sorted in descending order
- if cfg.keep_interval_updates_pattern == -1:
- checkpoints = checkpoint_paths(
- cfg.save_dir, pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix)
- )
- else:
- checkpoints = checkpoint_paths(
- cfg.save_dir,
- pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix),
- keep_match=True,
- )
- checkpoints = [
- x[0]
- for x in checkpoints
- if x[1] % cfg.keep_interval_updates_pattern != 0
- ]
-
- for old_chk in checkpoints[cfg.keep_interval_updates :]:
- if os.path.lexists(old_chk):
- os.remove(old_chk)
- elif PathManager.exists(old_chk):
- PathManager.rm(old_chk)
-
- if cfg.keep_last_epochs > 0:
- # remove old epoch checkpoints; checkpoints are sorted in descending order
- checkpoints = checkpoint_paths(
- cfg.save_dir, pattern=r"checkpoint(\d+){}\.pt".format(suffix)
- )
- for old_chk in checkpoints[cfg.keep_last_epochs :]:
- if os.path.lexists(old_chk):
- os.remove(old_chk)
- elif PathManager.exists(old_chk):
- PathManager.rm(old_chk)
-
- if cfg.keep_best_checkpoints > 0:
- # only keep the best N checkpoints according to validation metric
- checkpoints = checkpoint_paths(
- cfg.save_dir,
- pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format(
- cfg.best_checkpoint_metric, suffix
- ),
- )
- if not cfg.maximize_best_checkpoint_metric:
- checkpoints = checkpoints[::-1]
- for old_chk in checkpoints[cfg.keep_best_checkpoints :]:
- if os.path.lexists(old_chk):
- os.remove(old_chk)
- elif PathManager.exists(old_chk):
- PathManager.rm(old_chk)
-
-
-def load_checkpoint(cfg: CheckpointConfig, trainer, **passthrough_args):
- """
- Load a checkpoint and restore the training iterator.
-
- *passthrough_args* will be passed through to
- ``trainer.get_train_iterator``.
- """
-
- reset_optimizer = cfg.reset_optimizer
- reset_lr_scheduler = cfg.reset_lr_scheduler
- optimizer_overrides = ast.literal_eval(cfg.optimizer_overrides)
- reset_meters = cfg.reset_meters
- reset_dataloader = cfg.reset_dataloader
-
- if cfg.finetune_from_model is not None and (
- reset_optimizer or reset_lr_scheduler or reset_meters or reset_dataloader
- ):
- raise ValueError(
- "--finetune-from-model can not be set together with either --reset-optimizer"
- " or reset_lr_scheduler or reset_meters or reset_dataloader"
- )
-
- suffix = trainer.checkpoint_suffix
- if (
- cfg.restore_file == "checkpoint_last.pt"
- ): # default value of restore_file is 'checkpoint_last.pt'
- checkpoint_path = os.path.join(
- cfg.save_dir, "checkpoint_last{}.pt".format(suffix)
- )
- first_launch = not PathManager.exists(checkpoint_path)
- if cfg.finetune_from_model is not None and first_launch:
- # if there is no last checkpoint to restore, start the finetune from pretrained model
- # else just use usual logic to load checkpoint, e.g. restart from last checkpoint and etc.
- if PathManager.exists(cfg.finetune_from_model):
- checkpoint_path = cfg.finetune_from_model
- reset_optimizer = True
- reset_lr_scheduler = True
- reset_meters = True
- reset_dataloader = True
- logger.info(
- f"loading pretrained model from {checkpoint_path}: "
- "optimizer, lr scheduler, meters, dataloader will be reset"
- )
- else:
- raise ValueError(
-                    f"--finetune-from-model {cfg.finetune_from_model} does not exist"
- )
- elif suffix is not None:
- checkpoint_path = cfg.restore_file.replace(".pt", suffix + ".pt")
- else:
- checkpoint_path = cfg.restore_file
-
- if cfg.restore_file != "checkpoint_last.pt" and cfg.finetune_from_model:
- raise ValueError(
- "--finetune-from-model and --restore-file (non-default value) "
- "can not be specified together: " + str(cfg)
- )
-
- extra_state = trainer.load_checkpoint(
- checkpoint_path,
- reset_optimizer,
- reset_lr_scheduler,
- optimizer_overrides,
- reset_meters=reset_meters,
- )
-
- if (
- extra_state is not None
- and "best" in extra_state
- and not reset_optimizer
- and not reset_meters
- ):
- save_checkpoint.best = extra_state["best"]
-
- if extra_state is not None and not reset_dataloader:
- # restore iterator from checkpoint
- itr_state = extra_state["train_iterator"]
- epoch_itr = trainer.get_train_iterator(
- epoch=itr_state["epoch"], load_dataset=True, **passthrough_args
- )
- epoch_itr.load_state_dict(itr_state)
- else:
- epoch_itr = trainer.get_train_iterator(
- epoch=1, load_dataset=True, **passthrough_args
- )
-
- trainer.lr_step(epoch_itr.epoch)
-
- return extra_state, epoch_itr
-
-
-def load_checkpoint_to_cpu(path, arg_overrides=None, load_on_all_ranks=False):
- """Loads a checkpoint to CPU (with upgrading for backward compatibility).
-
- If doing single-GPU training or if the checkpoint is only being loaded by at
- most one process on each node (current default behavior is for only rank 0
- to read the checkpoint from disk), load_on_all_ranks should be False to
- avoid errors from torch.distributed not having been initialized or
- torch.distributed.barrier() hanging.
-
- If all processes on each node may be loading the checkpoint
- simultaneously, load_on_all_ranks should be set to True to avoid I/O
- conflicts.
-
- There's currently no support for > 1 but < all processes loading the
- checkpoint on each node.
- """
- local_path = PathManager.get_local_path(path)
- # The locally cached file returned by get_local_path() may be stale for
- # remote files that are periodically updated/overwritten (ex:
- # checkpoint_last.pt) - so we remove the local copy, sync across processes
- # (if needed), and then download a fresh copy.
- if local_path != path and PathManager.path_requires_pathmanager(path):
- try:
- os.remove(local_path)
- except FileNotFoundError:
- # With potentially multiple processes removing the same file, the
- # file being missing is benign (missing_ok isn't available until
- # Python 3.8).
- pass
- if load_on_all_ranks:
- torch.distributed.barrier()
- local_path = PathManager.get_local_path(path)
-
- with open(local_path, "rb") as f:
- state = torch.load(f, map_location=torch.device("cpu"))
-
- if "args" in state and state["args"] is not None and arg_overrides is not None:
- args = state["args"]
- for arg_name, arg_val in arg_overrides.items():
- setattr(args, arg_name, arg_val)
-
- if "cfg" in state and state["cfg"] is not None:
-
- # hack to be able to set Namespace in dict config. this should be removed when we update to newer
- # omegaconf version that supports object flags, or when we migrate all existing models
- from omegaconf import _utils
-
- old_primitive = _utils.is_primitive_type
- _utils.is_primitive_type = lambda _: True
-
- state["cfg"] = OmegaConf.create(state["cfg"])
-
- _utils.is_primitive_type = old_primitive
- OmegaConf.set_struct(state["cfg"], True)
-
- if arg_overrides is not None:
- overwrite_args_by_name(state["cfg"], arg_overrides)
-
- state = _upgrade_state_dict(state)
- return state
-
-
-def load_model_ensemble(
- filenames,
- arg_overrides: Optional[Dict[str, Any]] = None,
- task=None,
- strict=True,
- suffix="",
- num_shards=1,
- state=None,
-):
- """Loads an ensemble of models.
-
- Args:
- filenames (List[str]): checkpoint files to load
- arg_overrides (Dict[str,Any], optional): override model args that
- were used during model training
- task (fairseq.tasks.FairseqTask, optional): task to use for loading
- """
- assert not (
- strict and num_shards > 1
- ), "Cannot load state dict with strict=True and checkpoint shards > 1"
- ensemble, args, _task = load_model_ensemble_and_task(
- filenames,
- arg_overrides,
- task,
- strict,
- suffix,
- num_shards,
- state,
- )
- return ensemble, args
-
-
-def get_maybe_sharded_checkpoint_filename(
- filename: str, suffix: str, shard_idx: int, num_shards: int
-) -> str:
- orig_filename = filename
- filename = filename.replace(".pt", suffix + ".pt")
- fsdp_filename = filename[:-3] + f"-shard{shard_idx}.pt"
- model_parallel_filename = orig_filename[:-3] + f"_part{shard_idx}.pt"
- if PathManager.exists(fsdp_filename):
- return fsdp_filename
- elif num_shards > 1:
- return model_parallel_filename
- else:
- return filename
-
-
-def load_model_ensemble_and_task(
- filenames,
- arg_overrides: Optional[Dict[str, Any]] = None,
- task=None,
- strict=True,
- suffix="",
- num_shards=1,
- state=None,
-):
- assert state is None or len(filenames) == 1
-
- from fairseq import tasks
-
- assert not (
- strict and num_shards > 1
- ), "Cannot load state dict with strict=True and checkpoint shards > 1"
- ensemble = []
- cfg = None
- for filename in filenames:
- orig_filename = filename
- model_shard_state = {"shard_weights": [], "shard_metadata": []}
- assert num_shards > 0
- st = time.time()
- for shard_idx in range(num_shards):
- filename = get_maybe_sharded_checkpoint_filename(
- orig_filename, suffix, shard_idx, num_shards
- )
-
- if not PathManager.exists(filename):
- raise IOError("Model file not found: {}".format(filename))
- if state is None:
- state = load_checkpoint_to_cpu(filename, arg_overrides)
- if "args" in state and state["args"] is not None:
- cfg = convert_namespace_to_omegaconf(state["args"])
- elif "cfg" in state and state["cfg"] is not None:
- cfg = state["cfg"]
- else:
- raise RuntimeError(
- f"Neither args nor cfg exist in state keys = {state.keys()}"
- )
-
- if task is None:
- task = tasks.setup_task(cfg.task)
-
- if "task_state" in state:
- task.load_state_dict(state["task_state"])
-
- if "fsdp_metadata" in state and num_shards > 1:
- model_shard_state["shard_weights"].append(state["model"])
- model_shard_state["shard_metadata"].append(state["fsdp_metadata"])
- # check FSDP import before the code goes too far
- if not has_FSDP:
- raise ImportError(
- "Cannot find FullyShardedDataParallel. "
- "Please install fairscale with: pip install fairscale"
- )
- if shard_idx == num_shards - 1:
- consolidated_model_state = FSDP.consolidate_shard_weights(
- shard_weights=model_shard_state["shard_weights"],
- shard_metadata=model_shard_state["shard_metadata"],
- )
- model = task.build_model(cfg.model)
- model.load_state_dict(
- consolidated_model_state, strict=strict, model_cfg=cfg.model
- )
- else:
- # model parallel checkpoint or unsharded checkpoint
- model = task.build_model(cfg.model)
- model.load_state_dict(
- state["model"], strict=strict, model_cfg=cfg.model
- )
-
- # reset state so it gets loaded for the next model in ensemble
- state = None
- if shard_idx % 10 == 0 and shard_idx > 0:
- elapsed = time.time() - st
- logger.info(
- f"Loaded {shard_idx} shards in {elapsed:.2f}s, {elapsed / (shard_idx+1):.2f}s/shard"
- )
-
- # build model for ensemble
- ensemble.append(model)
- return ensemble, cfg, task
-
-
-def checkpoint_paths(path, pattern=r"checkpoint(\d+)\.pt", keep_match=False):
- """Retrieves all checkpoints found in `path` directory.
-
- Checkpoints are identified by matching filename to the specified pattern. If
- the pattern contains groups, the result will be sorted by the first group in
- descending order.
- """
- pt_regexp = re.compile(pattern)
- files = PathManager.ls(path)
-
- entries = []
- for i, f in enumerate(files):
- m = pt_regexp.fullmatch(f)
- if m is not None:
- idx = float(m.group(1)) if len(m.groups()) > 0 else i
- entries.append((idx, m.group(0)))
- if keep_match:
- return [(os.path.join(path, x[1]), x[0]) for x in sorted(entries, reverse=True)]
- else:
- return [os.path.join(path, x[1]) for x in sorted(entries, reverse=True)]
-
-
-def torch_persistent_save(obj, filename, async_write: bool = False):
- if async_write:
- with PathManager.opena(filename, "wb") as f:
- _torch_persistent_save(obj, f)
- else:
- if PathManager.supports_rename(filename):
- # do atomic save
- with PathManager.open(filename + ".tmp", "wb") as f:
- _torch_persistent_save(obj, f)
- PathManager.rename(filename + ".tmp", filename)
- else:
- # fallback to non-atomic save
- with PathManager.open(filename, "wb") as f:
- _torch_persistent_save(obj, f)
-
-
-def _torch_persistent_save(obj, f):
- if isinstance(f, str):
- with PathManager.open(f, "wb") as h:
- torch_persistent_save(obj, h)
- return
- for i in range(3):
- try:
- return torch.save(obj, f)
- except Exception:
- if i == 2:
- logger.error(traceback.format_exc())
- raise
-
-
-def _upgrade_state_dict(state):
- """Helper for upgrading old model checkpoints."""
-
- # add optimizer_history
- if "optimizer_history" not in state:
- state["optimizer_history"] = [
- {"criterion_name": "CrossEntropyCriterion", "best_loss": state["best_loss"]}
- ]
- state["last_optimizer_state"] = state["optimizer"]
- del state["optimizer"]
- del state["best_loss"]
- # move extra_state into sub-dictionary
- if "epoch" in state and "extra_state" not in state:
- state["extra_state"] = {
- "epoch": state["epoch"],
- "batch_offset": state["batch_offset"],
- "val_loss": state["val_loss"],
- }
- del state["epoch"]
- del state["batch_offset"]
- del state["val_loss"]
- # reduce optimizer history's memory usage (only keep the last state)
- if "optimizer" in state["optimizer_history"][-1]:
- state["last_optimizer_state"] = state["optimizer_history"][-1]["optimizer"]
- for optim_hist in state["optimizer_history"]:
- del optim_hist["optimizer"]
- # record the optimizer class name
- if "optimizer_name" not in state["optimizer_history"][-1]:
- state["optimizer_history"][-1]["optimizer_name"] = "FairseqNAG"
- # move best_loss into lr_scheduler_state
- if "lr_scheduler_state" not in state["optimizer_history"][-1]:
- state["optimizer_history"][-1]["lr_scheduler_state"] = {
- "best": state["optimizer_history"][-1]["best_loss"]
- }
- del state["optimizer_history"][-1]["best_loss"]
- # keep track of number of updates
- if "num_updates" not in state["optimizer_history"][-1]:
- state["optimizer_history"][-1]["num_updates"] = 0
- # old model checkpoints may not have separate source/target positions
- if (
- "args" in state
- and hasattr(state["args"], "max_positions")
- and not hasattr(state["args"], "max_source_positions")
- ):
- state["args"].max_source_positions = state["args"].max_positions
- state["args"].max_target_positions = state["args"].max_positions
- # use stateful training data iterator
- if "train_iterator" not in state["extra_state"]:
- state["extra_state"]["train_iterator"] = {
- "epoch": state["extra_state"]["epoch"],
- "iterations_in_epoch": state["extra_state"].get("batch_offset", 0),
- }
-
- # backward compatibility, cfg updates
- if "args" in state and state["args"] is not None:
- # default to translation task
- if not hasattr(state["args"], "task"):
- state["args"].task = "translation"
- # --raw-text and --lazy-load are deprecated
- if getattr(state["args"], "raw_text", False):
- state["args"].dataset_impl = "raw"
- elif getattr(state["args"], "lazy_load", False):
- state["args"].dataset_impl = "lazy"
- # epochs start at 1
- if state["extra_state"]["train_iterator"] is not None:
- state["extra_state"]["train_iterator"]["epoch"] = max(
- state["extra_state"]["train_iterator"].get("epoch", 1), 1
- )
- # --remove-bpe ==> --postprocess
- if hasattr(state["args"], "remove_bpe"):
- state["args"].post_process = state["args"].remove_bpe
- # --min-lr ==> --stop-min-lr
- if hasattr(state["args"], "min_lr"):
- state["args"].stop_min_lr = state["args"].min_lr
- del state["args"].min_lr
- # binary_cross_entropy / kd_binary_cross_entropy => wav2vec criterion
- if (
- hasattr(state["args"], "criterion")
- and state["args"].criterion in [
- "binary_cross_entropy",
- "kd_binary_cross_entropy",
- ]
- ):
- state["args"].criterion = "wav2vec"
- # remove log_keys if it's None (criteria will supply a default value of [])
- if hasattr(state["args"], "log_keys") and state["args"].log_keys is None:
- delattr(state["args"], "log_keys")
- # speech_pretraining => audio pretraining
- if (
- hasattr(state["args"], "task")
- and state["args"].task == "speech_pretraining"
- ):
- state["args"].task = "audio_pretraining"
- # audio_cpc => wav2vec
- if hasattr(state["args"], "arch") and state["args"].arch == "audio_cpc":
- state["args"].arch = "wav2vec"
- # convert legacy float learning rate to List[float]
- if hasattr(state["args"], "lr") and isinstance(state["args"].lr, float):
- state["args"].lr = [state["args"].lr]
- # convert task data arg to a string instead of List[string]
- if (
- hasattr(state["args"], "data")
- and isinstance(state["args"].data, list)
- and len(state["args"].data) > 0
- ):
- state["args"].data = state["args"].data[0]
- # remove keys in state["args"] related to teacher-student learning
- for key in [
- "static_teachers",
- "static_teacher_weights",
- "dynamic_teachers",
- "dynamic_teacher_weights",
- ]:
- if key in state["args"]:
- delattr(state["args"], key)
-
- state["cfg"] = convert_namespace_to_omegaconf(state["args"])
-
- if "cfg" in state and state["cfg"] is not None:
- cfg = state["cfg"]
- with open_dict(cfg):
- # any upgrades for Hydra-based configs
- if (
- "task" in cfg
- and "eval_wer_config" in cfg.task
- and isinstance(cfg.task.eval_wer_config.print_alignment, bool)
- ):
- cfg.task.eval_wer_config.print_alignment = "hard"
- if "generation" in cfg and isinstance(cfg.generation.print_alignment, bool):
- cfg.generation.print_alignment = "hard" if cfg.generation.print_alignment else None
- if (
- "model" in cfg
- and "w2v_args" in cfg.model
- and cfg.model.w2v_args is not None
- and (
- hasattr(cfg.model.w2v_args, "task") or "task" in cfg.model.w2v_args
- )
- and hasattr(cfg.model.w2v_args.task, "eval_wer_config")
- and cfg.model.w2v_args.task.eval_wer_config is not None
- and isinstance(
- cfg.model.w2v_args.task.eval_wer_config.print_alignment, bool
- )
- ):
- cfg.model.w2v_args.task.eval_wer_config.print_alignment = "hard"
-
- return state
-
-
-def prune_state_dict(state_dict, model_cfg: Optional[DictConfig]):
- """Prune the given state_dict if desired for LayerDrop
- (https://arxiv.org/abs/1909.11556).
-
- Training with LayerDrop allows models to be robust to pruning at inference
- time. This function prunes state_dict to allow smaller models to be loaded
- from a larger model and re-maps the existing state_dict for this to occur.
-
- It's called by functions that load models from checkpoints and does not
- need to be called directly.
- """
- arch = None
- if model_cfg is not None:
- arch = (
- model_cfg._name
- if isinstance(model_cfg, DictConfig)
- else getattr(model_cfg, "arch", None)
- )
-
- if not model_cfg or arch is None or arch == "ptt_transformer":
- # args should not be none, but don't crash if it is.
- return state_dict
-
- encoder_layers_to_keep = getattr(model_cfg, "encoder_layers_to_keep", None)
- decoder_layers_to_keep = getattr(model_cfg, "decoder_layers_to_keep", None)
-
- if not encoder_layers_to_keep and not decoder_layers_to_keep:
- return state_dict
-
- # apply pruning
- logger.info(
- "Pruning model to specified layer configuration - this works best if the model was trained with LayerDrop"
- )
-
- def create_pruning_pass(layers_to_keep, layer_name):
- keep_layers = sorted(
- int(layer_string) for layer_string in layers_to_keep.split(",")
- )
- mapping_dict = {}
- for i in range(len(keep_layers)):
- mapping_dict[str(keep_layers[i])] = str(i)
-
- regex = re.compile(r"^{layer}.*\.layers\.(\d+)".format(layer=layer_name))
- return {"substitution_regex": regex, "mapping_dict": mapping_dict}
-
- pruning_passes = []
- if encoder_layers_to_keep:
- pruning_passes.append(create_pruning_pass(encoder_layers_to_keep, "encoder"))
- if decoder_layers_to_keep:
- pruning_passes.append(create_pruning_pass(decoder_layers_to_keep, "decoder"))
-
- new_state_dict = {}
- for layer_name in state_dict.keys():
- match = re.search(r"\.layers\.(\d+)\.", layer_name)
- # if layer has no number in it, it is a supporting layer, such as an
- # embedding
- if not match:
- new_state_dict[layer_name] = state_dict[layer_name]
- continue
-
- # otherwise, layer should be pruned.
- original_layer_number = match.group(1)
- # figure out which mapping dict to replace from
- for pruning_pass in pruning_passes:
- if original_layer_number in pruning_pass["mapping_dict"] and pruning_pass[
- "substitution_regex"
- ].search(layer_name):
- new_layer_number = pruning_pass["mapping_dict"][original_layer_number]
- substitution_match = pruning_pass["substitution_regex"].search(
- layer_name
- )
- new_state_key = (
- layer_name[: substitution_match.start(1)]
- + new_layer_number
- + layer_name[substitution_match.end(1) :]
- )
- new_state_dict[new_state_key] = state_dict[layer_name]
-
- # Since layers are now pruned, *_layers_to_keep are no longer needed.
-    # This is more of a "make it work" fix than a proper fix.
- if isinstance(model_cfg, DictConfig):
- context = open_dict(model_cfg)
- else:
- context = contextlib.ExitStack()
- with context:
- if hasattr(model_cfg, "encoder_layers_to_keep"):
- model_cfg.encoder_layers_to_keep = None
- if hasattr(model_cfg, "decoder_layers_to_keep"):
- model_cfg.decoder_layers_to_keep = None
-
- return new_state_dict
-
-
-def load_pretrained_component_from_model(
- component: Union[FairseqEncoder, FairseqDecoder], checkpoint: str
-):
- """
- Load a pretrained FairseqEncoder or FairseqDecoder from checkpoint into the
- provided `component` object. If state_dict fails to load, there may be a
- mismatch in the architecture of the corresponding `component` found in the
- `checkpoint` file.
- """
- if not PathManager.exists(checkpoint):
- raise IOError("Model file not found: {}".format(checkpoint))
- state = load_checkpoint_to_cpu(checkpoint)
- if isinstance(component, FairseqEncoder):
- component_type = "encoder"
- elif isinstance(component, FairseqDecoder):
- component_type = "decoder"
- else:
- raise ValueError(
- "component to load must be either a FairseqEncoder or "
-            "FairseqDecoder. Loading other component types is not supported."
- )
- component_state_dict = OrderedDict()
- for key in state["model"].keys():
- if key.startswith(component_type):
- # encoder.input_layers.0.0.weight --> input_layers.0.0.weight
- component_subkey = key[len(component_type) + 1 :]
- component_state_dict[component_subkey] = state["model"][key]
- component.load_state_dict(component_state_dict, strict=True)
- return component
-
-
-def verify_checkpoint_directory(save_dir: str) -> None:
- if not os.path.exists(save_dir):
- os.makedirs(save_dir, exist_ok=True)
- temp_file_path = os.path.join(save_dir, "dummy")
- try:
- with open(temp_file_path, "w"):
- pass
- except OSError as e:
- logger.warning(
- "Unable to access checkpoint save directory: {}".format(save_dir)
- )
- raise e
- else:
- os.remove(temp_file_path)
-
-
-def load_ema_from_checkpoint(fpath):
- """Loads exponential moving averaged (EMA) checkpoint from input and
- returns a model with ema weights.
-
- Args:
- fpath: A string path of checkpoint to load from.
-
- Returns:
- A dict of string keys mapping to various values. The 'model' key
- from the returned dict should correspond to an OrderedDict mapping
- string parameter names to torch Tensors.
- """
- params_dict = collections.OrderedDict()
- new_state = None
-
- with PathManager.open(fpath, 'rb') as f:
- new_state = torch.load(
- f,
- map_location=(
- lambda s, _: torch.serialization.default_restore_location(s, 'cpu')
- ),
- )
-
- # EMA model is stored in a separate "extra state"
- model_params = new_state['extra_state']['ema']
-
- for key in list(model_params.keys()):
- p = model_params[key]
- if isinstance(p, torch.HalfTensor):
- p = p.float()
- if key not in params_dict:
- params_dict[key] = p.clone()
-            # NOTE: clone() is needed in case p is a shared parameter
- else:
- raise ValueError("Key {} is repeated in EMA model params.".format(key))
-
- if len(params_dict) == 0:
- raise ValueError(
- f"Input checkpoint path '{fpath}' does not contain "
- "ema model weights, is this model trained with EMA?"
- )
-
- new_state['model'] = params_dict
- return new_state
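One helper in the module above that is easy to misread is `checkpoint_paths()`: the number captured by the pattern's first group is the sort key, and matches are returned newest-first. A small sketch of that behaviour (assuming `PathManager.ls` falls back to a plain directory listing for local paths):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as save_dir:
    for name in ("checkpoint3.pt", "checkpoint12.pt", "checkpoint_best.pt"):
        open(os.path.join(save_dir, name), "w").close()

    # With the default pattern r"checkpoint(\d+)\.pt", only the epoch checkpoints
    # match, sorted by epoch number in descending order; checkpoint_best.pt is skipped.
    print(checkpoint_paths(save_dir))
    # expected: ['.../checkpoint12.pt', '.../checkpoint3.pt']
```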
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/dynamic_crf_layer.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/dynamic_crf_layer.py
deleted file mode 100644
index 8fcc6b8d2672d2eacc6d01b9688bac44d5e1ce26..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/dynamic_crf_layer.py
+++ /dev/null
@@ -1,189 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-This file re-implements the low-rank and beam approximation of the CRF layer
-Proposed by:
-
-Sun, Zhiqing, et al.
-Fast Structured Decoding for Sequence Models
-https://arxiv.org/abs/1910.11555
-
-The CRF implementation is mainly borrowed from
-https://github.com/kmkurn/pytorch-crf/blob/master/torchcrf/__init__.py
-
-"""
-
-import numpy as np
-import torch
-import torch.nn as nn
-
-
-def logsumexp(x, dim=1):
- return torch.logsumexp(x.float(), dim=dim).type_as(x)
-
-
-class DynamicCRF(nn.Module):
- """Dynamic CRF layer is used to approximate the traditional
- Conditional Random Fields (CRF)
- $P(y | x) = 1/Z(x) exp(sum_i s(y_i, x) + sum_i t(y_{i-1}, y_i, x))$
-
-    where in this function, we assume the emission scores (s) are given,
- and the transition score is a |V| x |V| matrix $M$
-
-    The approximation differs from the exact CRF in the following two aspects:
-    (1) it uses a low-rank approximation for the transition matrix:
-        $M = E_1 E_2^T$
-    (2) it uses a beam to estimate the normalizing factor Z(x)
- """
-
- def __init__(self, num_embedding, low_rank=32, beam_size=64):
- super().__init__()
-
- self.E1 = nn.Embedding(num_embedding, low_rank)
- self.E2 = nn.Embedding(num_embedding, low_rank)
-
- self.vocb = num_embedding
- self.rank = low_rank
- self.beam = beam_size
-
- def extra_repr(self):
- return "vocab_size={}, low_rank={}, beam_size={}".format(
- self.vocb, self.rank, self.beam
- )
-
- def forward(self, emissions, targets, masks, beam=None):
- """
- Compute the conditional log-likelihood of a sequence of target tokens given emission scores
-
- Args:
-            emissions (`~torch.Tensor`): Emission scores are usually the unnormalized decoder output
- ``(batch_size, seq_len, vocab_size)``. We assume batch-first
- targets (`~torch.LongTensor`): Sequence of target token indices
-                ``(batch_size, seq_len)``
- masks (`~torch.ByteTensor`): Mask tensor with the same size as targets
-
- Returns:
- `~torch.Tensor`: approximated log-likelihood
- """
- numerator = self._compute_score(emissions, targets, masks)
- denominator = self._compute_normalizer(emissions, targets, masks, beam)
- return numerator - denominator
-
- def forward_decoder(self, emissions, masks=None, beam=None):
- """
- Find the most likely output sequence using Viterbi algorithm.
-
- Args:
-            emissions (`~torch.Tensor`): Emission scores are usually the unnormalized decoder output
- ``(batch_size, seq_len, vocab_size)``. We assume batch-first
- masks (`~torch.ByteTensor`): Mask tensor with the same size as targets
-
- Returns:
- `~torch.LongTensor`: decoded sequence from the CRF model
- """
- return self._viterbi_decode(emissions, masks, beam)
-
- def _compute_score(self, emissions, targets, masks=None):
- batch_size, seq_len = targets.size()
- emission_scores = emissions.gather(2, targets[:, :, None])[:, :, 0] # B x T
- transition_scores = (self.E1(targets[:, :-1]) * self.E2(targets[:, 1:])).sum(2)
-
- scores = emission_scores
- scores[:, 1:] += transition_scores
-
- if masks is not None:
- scores = scores * masks.type_as(scores)
- return scores.sum(-1)
-
- def _compute_normalizer(self, emissions, targets=None, masks=None, beam=None):
-        # HACK: we include "target" which is a heuristic for training
- # HACK: we use a beam of tokens to approximate the normalizing factor (which is bad?)
-
- beam = beam if beam is not None else self.beam
- batch_size, seq_len = emissions.size()[:2]
- if targets is not None:
-            _emissions = emissions.scatter(2, targets[:, :, None], float("inf"))
- beam_targets = _emissions.topk(beam, 2)[1]
- beam_emission_scores = emissions.gather(2, beam_targets)
- else:
- beam_emission_scores, beam_targets = emissions.topk(beam, 2)
- beam_transition_score1 = self.E1(beam_targets[:, :-1]) # B x (T-1) x K x D
- beam_transition_score2 = self.E2(beam_targets[:, 1:]) # B x (T-1) x K x D
- beam_transition_matrix = torch.bmm(
- beam_transition_score1.view(-1, beam, self.rank),
- beam_transition_score2.view(-1, beam, self.rank).transpose(1, 2),
- )
- beam_transition_matrix = beam_transition_matrix.view(batch_size, -1, beam, beam)
-
- # compute the normalizer in the log-space
- score = beam_emission_scores[:, 0] # B x K
- for i in range(1, seq_len):
- next_score = score[:, :, None] + beam_transition_matrix[:, i - 1]
- next_score = logsumexp(next_score, dim=1) + beam_emission_scores[:, i]
-
- if masks is not None:
- score = torch.where(masks[:, i : i + 1], next_score, score)
- else:
- score = next_score
-
- # Sum (log-sum-exp) over all possible tags
- return logsumexp(score, dim=1)
-
- def _viterbi_decode(self, emissions, masks=None, beam=None):
- # HACK: we use a beam of tokens to approximate the normalizing factor (which is bad?)
-
- beam = beam if beam is not None else self.beam
- batch_size, seq_len = emissions.size()[:2]
- beam_emission_scores, beam_targets = emissions.topk(beam, 2)
- beam_transition_score1 = self.E1(beam_targets[:, :-1]) # B x (T-1) x K x D
- beam_transition_score2 = self.E2(beam_targets[:, 1:]) # B x (T-1) x K x D
- beam_transition_matrix = torch.bmm(
- beam_transition_score1.view(-1, beam, self.rank),
- beam_transition_score2.view(-1, beam, self.rank).transpose(1, 2),
- )
- beam_transition_matrix = beam_transition_matrix.view(batch_size, -1, beam, beam)
-
- traj_tokens, traj_scores = [], []
- finalized_tokens, finalized_scores = [], []
-
- # compute the normalizer in the log-space
- score = beam_emission_scores[:, 0] # B x K
- dummy = (
- torch.arange(beam, device=score.device).expand(*score.size()).contiguous()
- )
-
- for i in range(1, seq_len):
- traj_scores.append(score)
- _score = score[:, :, None] + beam_transition_matrix[:, i - 1]
- _score, _index = _score.max(dim=1)
- _score = _score + beam_emission_scores[:, i]
-
- if masks is not None:
- score = torch.where(masks[:, i : i + 1], _score, score)
- index = torch.where(masks[:, i : i + 1], _index, dummy)
- else:
- score, index = _score, _index
- traj_tokens.append(index)
-
- # now running the back-tracing and find the best
- best_score, best_index = score.max(dim=1)
- finalized_tokens.append(best_index[:, None])
- finalized_scores.append(best_score[:, None])
-
- for idx, scs in zip(reversed(traj_tokens), reversed(traj_scores)):
- previous_index = finalized_tokens[-1]
- finalized_tokens.append(idx.gather(1, previous_index))
- finalized_scores.append(scs.gather(1, previous_index))
-
- finalized_tokens.reverse()
- finalized_tokens = torch.cat(finalized_tokens, 1)
- finalized_tokens = beam_targets.gather(2, finalized_tokens[:, :, None])[:, :, 0]
-
- finalized_scores.reverse()
- finalized_scores = torch.cat(finalized_scores, 1)
- finalized_scores[:, 1:] = finalized_scores[:, 1:] - finalized_scores[:, :-1]
-
- return finalized_scores, finalized_tokens
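A hedged usage sketch for the layer above (not part of the deleted file; shapes and sizes are illustrative): the forward pass returns the beam-approximated log-likelihood per sequence, and `forward_decoder` returns per-step scores together with the decoded tokens.

```python
import torch

vocab_size, batch, seq_len = 1000, 2, 6
crf = DynamicCRF(num_embedding=vocab_size, low_rank=32, beam_size=64)

emissions = torch.randn(batch, seq_len, vocab_size)  # unnormalized decoder scores, batch-first
targets = torch.randint(0, vocab_size, (batch, seq_len))
masks = torch.ones(batch, seq_len, dtype=torch.bool)

log_likelihood = crf(emissions, targets, masks)      # shape: (batch,)
scores, tokens = crf.forward_decoder(emissions, masks)
print(log_likelihood.shape, tokens.shape)            # torch.Size([2]) torch.Size([2, 6])
```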
diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/modules/encoders/transformer_utils.py b/spaces/Iceclear/StableSR/StableSR/ldm/modules/encoders/transformer_utils.py
deleted file mode 100644
index e3d90de216a12938c5f79336e8916d06f40988ef..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/ldm/modules/encoders/transformer_utils.py
+++ /dev/null
@@ -1,181 +0,0 @@
-import torch
-import torch.utils.checkpoint
-from torch import nn
-
-from transformers.models.clip.modeling_clip import CLIPTextTransformer
-from transformers.modeling_outputs import BaseModelOutput, BaseModelOutputWithPooling
-from transformers.models.clip.configuration_clip import CLIPConfig, CLIPTextConfig, CLIPVisionConfig
-from typing import Any, Optional, Tuple, Union
-from transformers.utils import (
- add_start_docstrings_to_model_forward,
- replace_return_docstrings,
-)
-
-
-CLIP_START_DOCSTRING = r"""
- This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
- library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
- etc.)
- This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
- Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
- and behavior.
- Parameters:
- config ([`CLIPConfig`]): Model configuration class with all the parameters of the model.
- Initializing with a config file does not load the weights associated with the model, only the
- configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
-"""
-
-CLIP_TEXT_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
- it.
- Indices can be obtained using [`CLIPTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- [What are attention masks?](../glossary#attention-mask)
- position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
- config.max_position_embeddings - 1]`.
- [What are position IDs?](../glossary#position-ids)
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-CLIP_VISION_INPUTS_DOCSTRING = r"""
- Args:
- pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
- Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
- [`CLIPFeatureExtractor`]. See [`CLIPFeatureExtractor.__call__`] for details.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-CLIP_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
- it.
- Indices can be obtained using [`CLIPTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- [What are attention masks?](../glossary#attention-mask)
- position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
- config.max_position_embeddings - 1]`.
- [What are position IDs?](../glossary#position-ids)
- pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
- Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
- [`CLIPFeatureExtractor`]. See [`CLIPFeatureExtractor.__call__`] for details.
- return_loss (`bool`, *optional*):
- Whether or not to return the contrastive loss.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-class CLIPTextTransformer_M(CLIPTextTransformer):
-
- @add_start_docstrings_to_model_forward(CLIP_TEXT_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=CLIPTextConfig)
- def forward(
- self,
- input_ids: Optional[torch.Tensor] = None,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.Tensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, BaseModelOutputWithPooling]:
- r"""
- Returns:
- """
-
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if input_ids is None:
- raise ValueError("You have to specify input_ids")
-
- input_shape = input_ids.size()
- # input_ids = input_ids.view(-1, input_shape[-1])
-
- # hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
- hidden_states = input_ids
-
- bsz, seq_len, _ = input_shape
- # CLIP's text model uses causal mask, prepare it here.
- # https://github.com/openai/CLIP/blob/cfcffb90e69f37bf2ff1e988237a0fbe41f33c04/clip/model.py#L324
- causal_attention_mask = self._build_causal_attention_mask(bsz, seq_len, hidden_states.dtype).to(
- hidden_states.device
- )
- # expand attention_mask
- if attention_mask is not None:
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- attention_mask = _expand_mask(attention_mask, hidden_states.dtype)
-
- encoder_outputs = self.encoder(
- inputs_embeds=hidden_states,
- attention_mask=attention_mask,
- causal_attention_mask=causal_attention_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- last_hidden_state = encoder_outputs[0]
- last_hidden_state = self.final_layer_norm(last_hidden_state)
-
- # text_embeds.shape = [batch_size, sequence_length, transformer.width]
- # take features from the eot embedding (eot_token is the highest number in each sequence)
- # casting to torch.int for onnx compatibility: argmax doesn't support int64 inputs with opset 14
- pooled_output = last_hidden_state[
- torch.arange(last_hidden_state.shape[0], device=input_ids.device), torch.mean(input_ids, -1).to(torch.int).argmax(dim=-1)
- ]
-
- if not return_dict:
- return (last_hidden_state, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPooling(
- last_hidden_state=last_hidden_state,
- pooler_output=pooled_output,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- )
-
- def _build_causal_attention_mask(self, bsz, seq_len, dtype):
- # lazily create causal attention mask, with full attention between the vision tokens
- # pytorch uses additive attention mask; fill with -inf
- mask = torch.empty(bsz, seq_len, seq_len, dtype=dtype)
- mask.fill_(torch.tensor(torch.finfo(dtype).min))
- mask.triu_(1) # zero out the lower diagonal
- mask = mask.unsqueeze(1) # expand mask
- return mask
diff --git a/spaces/Igor2004/newSpace/README.md b/spaces/Igor2004/newSpace/README.md
deleted file mode 100644
index 718dd56a462b3c55b97774c98bee99d595d52239..0000000000000000000000000000000000000000
--- a/spaces/Igor2004/newSpace/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: NewSpace
-emoji: 💻
-colorFrom: yellow
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Illumotion/Koboldcpp/include/CL/cl_icd.h b/spaces/Illumotion/Koboldcpp/include/CL/cl_icd.h
deleted file mode 100644
index 360b870305523f2fef692d24a8ce0f9672e6c65d..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/include/CL/cl_icd.h
+++ /dev/null
@@ -1,1294 +0,0 @@
-/*******************************************************************************
- * Copyright (c) 2019-2020 The Khronos Group Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- ******************************************************************************/
-
-#ifndef OPENCL_CL_ICD_H
-#define OPENCL_CL_ICD_H
-
-#include <CL/cl.h>
-#include <CL/cl_egl.h>
-#include <CL/cl_ext.h>
-#include <CL/cl_gl.h>
-
-#if defined(_WIN32)
-#include <CL/cl_d3d11.h>
-#include <CL/cl_d3d10.h>
-#include <CL/cl_dx9_media_sharing.h>
-#endif
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-/*
- * This file contains pointer type definitions for each of the CL API calls as
- * well as a type definition for the dispatch table used by the Khronos ICD
- * loader (see cl_khr_icd extension specification for background).
- */
-
-/* API function pointer definitions */
-
-// Platform APIs
-typedef cl_int(CL_API_CALL *cl_api_clGetPlatformIDs)(
- cl_uint num_entries, cl_platform_id *platforms,
- cl_uint *num_platforms) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetPlatformInfo)(
- cl_platform_id platform, cl_platform_info param_name,
- size_t param_value_size, void *param_value,
- size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-// Device APIs
-typedef cl_int(CL_API_CALL *cl_api_clGetDeviceIDs)(
- cl_platform_id platform, cl_device_type device_type, cl_uint num_entries,
- cl_device_id *devices, cl_uint *num_devices) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetDeviceInfo)(
- cl_device_id device, cl_device_info param_name, size_t param_value_size,
- void *param_value, size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_1_2
-
-typedef cl_int(CL_API_CALL *cl_api_clCreateSubDevices)(
- cl_device_id in_device,
- const cl_device_partition_property *partition_properties,
- cl_uint num_entries, cl_device_id *out_devices, cl_uint *num_devices);
-
-typedef cl_int(CL_API_CALL *cl_api_clRetainDevice)(
- cl_device_id device) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_int(CL_API_CALL *cl_api_clReleaseDevice)(
- cl_device_id device) CL_API_SUFFIX__VERSION_1_2;
-
-#else
-
-typedef void *cl_api_clCreateSubDevices;
-typedef void *cl_api_clRetainDevice;
-typedef void *cl_api_clReleaseDevice;
-
-#endif
-
-// Context APIs
-typedef cl_context(CL_API_CALL *cl_api_clCreateContext)(
- const cl_context_properties *properties, cl_uint num_devices,
- const cl_device_id *devices,
- void(CL_CALLBACK *pfn_notify)(const char *, const void *, size_t, void *),
- void *user_data, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_context(CL_API_CALL *cl_api_clCreateContextFromType)(
- const cl_context_properties *properties, cl_device_type device_type,
- void(CL_CALLBACK *pfn_notify)(const char *, const void *, size_t, void *),
- void *user_data, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clRetainContext)(
- cl_context context) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clReleaseContext)(
- cl_context context) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetContextInfo)(
- cl_context context, cl_context_info param_name, size_t param_value_size,
- void *param_value, size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-// Command Queue APIs
-typedef cl_command_queue(CL_API_CALL *cl_api_clCreateCommandQueue)(
- cl_context context, cl_device_id device,
- cl_command_queue_properties properties,
- cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_2_0
-
-typedef
-cl_command_queue(CL_API_CALL *cl_api_clCreateCommandQueueWithProperties)(
- cl_context /* context */, cl_device_id /* device */,
- const cl_queue_properties * /* properties */,
- cl_int * /* errcode_ret */) CL_API_SUFFIX__VERSION_2_0;
-
-#else
-
-typedef void *cl_api_clCreateCommandQueueWithProperties;
-
-#endif
-
-typedef cl_int(CL_API_CALL *cl_api_clRetainCommandQueue)(
- cl_command_queue command_queue) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clReleaseCommandQueue)(
- cl_command_queue command_queue) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetCommandQueueInfo)(
- cl_command_queue command_queue, cl_command_queue_info param_name,
- size_t param_value_size, void *param_value,
- size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-// Memory Object APIs
-typedef cl_mem(CL_API_CALL *cl_api_clCreateBuffer)(
- cl_context context, cl_mem_flags flags, size_t size, void *host_ptr,
- cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_1_2
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateImage)(
- cl_context context, cl_mem_flags flags, const cl_image_format *image_format,
- const cl_image_desc *image_desc, void *host_ptr,
- cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_2;
-
-#else
-
-typedef void *cl_api_clCreateImage;
-
-#endif
-
-#ifdef CL_VERSION_3_0
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateBufferWithProperties)(
- cl_context context, const cl_mem_properties *properties, cl_mem_flags flags,
- size_t size, void *host_ptr,
- cl_int *errcode_ret) CL_API_SUFFIX__VERSION_3_0;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateImageWithProperties)(
- cl_context context, const cl_mem_properties *properties, cl_mem_flags flags,
- const cl_image_format *image_format, const cl_image_desc *image_desc,
- void *host_ptr, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_3_0;
-
-typedef cl_int(CL_API_CALL* cl_api_clSetContextDestructorCallback)(
- cl_context context,
- void(CL_CALLBACK* pfn_notify)(cl_context context, void* user_data),
- void* user_data) CL_API_SUFFIX__VERSION_3_0;
-
-#else
-
-typedef void *cl_api_clCreateBufferWithProperties;
-typedef void *cl_api_clCreateImageWithProperties;
-typedef void *cl_api_clSetContextDestructorCallback;
-
-#endif
-
-typedef cl_int(CL_API_CALL *cl_api_clRetainMemObject)(
- cl_mem memobj) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clReleaseMemObject)(
- cl_mem memobj) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetSupportedImageFormats)(
- cl_context context, cl_mem_flags flags, cl_mem_object_type image_type,
- cl_uint num_entries, cl_image_format *image_formats,
- cl_uint *num_image_formats) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetMemObjectInfo)(
- cl_mem memobj, cl_mem_info param_name, size_t param_value_size,
- void *param_value, size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetImageInfo)(
- cl_mem image, cl_image_info param_name, size_t param_value_size,
- void *param_value, size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_2_0
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreatePipe)(
- cl_context /* context */, cl_mem_flags /* flags */,
- cl_uint /* pipe_packet_size */, cl_uint /* pipe_max_packets */,
- const cl_pipe_properties * /* properties */,
- cl_int * /* errcode_ret */) CL_API_SUFFIX__VERSION_2_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetPipeInfo)(
- cl_mem /* pipe */, cl_pipe_info /* param_name */,
- size_t /* param_value_size */, void * /* param_value */,
- size_t * /* param_value_size_ret */) CL_API_SUFFIX__VERSION_2_0;
-
-typedef void *(CL_API_CALL *cl_api_clSVMAlloc)(
- cl_context /* context */, cl_svm_mem_flags /* flags */, size_t /* size */,
- unsigned int /* alignment */)CL_API_SUFFIX__VERSION_2_0;
-
-typedef void(CL_API_CALL *cl_api_clSVMFree)(
- cl_context /* context */,
- void * /* svm_pointer */) CL_API_SUFFIX__VERSION_2_0;
-
-#else
-
-typedef void *cl_api_clCreatePipe;
-typedef void *cl_api_clGetPipeInfo;
-typedef void *cl_api_clSVMAlloc;
-typedef void *cl_api_clSVMFree;
-
-#endif
-
-// Sampler APIs
-typedef cl_sampler(CL_API_CALL *cl_api_clCreateSampler)(
- cl_context context, cl_bool normalized_coords,
- cl_addressing_mode addressing_mode, cl_filter_mode filter_mode,
- cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clRetainSampler)(
- cl_sampler sampler) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clReleaseSampler)(
- cl_sampler sampler) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetSamplerInfo)(
- cl_sampler sampler, cl_sampler_info param_name, size_t param_value_size,
- void *param_value, size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_2_0
-
-typedef
-cl_sampler(CL_API_CALL *cl_api_clCreateSamplerWithProperties)(
- cl_context /* context */,
- const cl_sampler_properties * /* sampler_properties */,
- cl_int * /* errcode_ret */) CL_API_SUFFIX__VERSION_2_0;
-
-#else
-
-typedef void *cl_api_clCreateSamplerWithProperties;
-
-#endif
-
-// Program Object APIs
-typedef cl_program(CL_API_CALL *cl_api_clCreateProgramWithSource)(
- cl_context context, cl_uint count, const char **strings,
- const size_t *lengths, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_program(CL_API_CALL *cl_api_clCreateProgramWithBinary)(
- cl_context context, cl_uint num_devices, const cl_device_id *device_list,
- const size_t *lengths, const unsigned char **binaries,
- cl_int *binary_status, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_1_2
-
-typedef
-cl_program(CL_API_CALL *cl_api_clCreateProgramWithBuiltInKernels)(
- cl_context context, cl_uint num_devices, const cl_device_id *device_list,
- const char *kernel_names, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_2;
-
-#else
-
-typedef void *cl_api_clCreateProgramWithBuiltInKernels;
-
-#endif
-
-typedef cl_int(CL_API_CALL *cl_api_clRetainProgram)(
- cl_program program) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clReleaseProgram)(
- cl_program program) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clBuildProgram)(
- cl_program program, cl_uint num_devices, const cl_device_id *device_list,
- const char *options,
- void(CL_CALLBACK *pfn_notify)(cl_program program, void *user_data),
- void *user_data) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_1_2
-
-typedef cl_int(CL_API_CALL *cl_api_clCompileProgram)(
- cl_program program, cl_uint num_devices, const cl_device_id *device_list,
- const char *options, cl_uint num_input_headers,
- const cl_program *input_headers, const char **header_include_names,
- void(CL_CALLBACK *pfn_notify)(cl_program program, void *user_data),
- void *user_data) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_program(CL_API_CALL *cl_api_clLinkProgram)(
- cl_context context, cl_uint num_devices, const cl_device_id *device_list,
- const char *options, cl_uint num_input_programs,
- const cl_program *input_programs,
- void(CL_CALLBACK *pfn_notify)(cl_program program, void *user_data),
- void *user_data, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_2;
-
-#else
-
-typedef void *cl_api_clCompileProgram;
-typedef void *cl_api_clLinkProgram;
-
-#endif
-
-#ifdef CL_VERSION_2_2
-
-typedef
-cl_int(CL_API_CALL *cl_api_clSetProgramSpecializationConstant)(
- cl_program program, cl_uint spec_id, size_t spec_size,
- const void *spec_value) CL_API_SUFFIX__VERSION_2_2;
-
-typedef cl_int(CL_API_CALL *cl_api_clSetProgramReleaseCallback)(
- cl_program program,
- void(CL_CALLBACK *pfn_notify)(cl_program program, void *user_data),
- void *user_data) CL_API_SUFFIX__VERSION_2_2;
-
-#else
-
-typedef void *cl_api_clSetProgramSpecializationConstant;
-typedef void *cl_api_clSetProgramReleaseCallback;
-
-#endif
-
-#ifdef CL_VERSION_1_2
-
-typedef cl_int(CL_API_CALL *cl_api_clUnloadPlatformCompiler)(
- cl_platform_id platform) CL_API_SUFFIX__VERSION_1_2;
-
-#else
-
-typedef void *cl_api_clUnloadPlatformCompiler;
-
-#endif
-
-typedef cl_int(CL_API_CALL *cl_api_clGetProgramInfo)(
- cl_program program, cl_program_info param_name, size_t param_value_size,
- void *param_value, size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetProgramBuildInfo)(
- cl_program program, cl_device_id device, cl_program_build_info param_name,
- size_t param_value_size, void *param_value,
- size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-// Kernel Object APIs
-typedef cl_kernel(CL_API_CALL *cl_api_clCreateKernel)(
- cl_program program, const char *kernel_name,
- cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clCreateKernelsInProgram)(
- cl_program program, cl_uint num_kernels, cl_kernel *kernels,
- cl_uint *num_kernels_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clRetainKernel)(
- cl_kernel kernel) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clReleaseKernel)(
- cl_kernel kernel) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clSetKernelArg)(
- cl_kernel kernel, cl_uint arg_index, size_t arg_size,
- const void *arg_value) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetKernelInfo)(
- cl_kernel kernel, cl_kernel_info param_name, size_t param_value_size,
- void *param_value, size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_1_2
-
-typedef cl_int(CL_API_CALL *cl_api_clGetKernelArgInfo)(
- cl_kernel kernel, cl_uint arg_indx, cl_kernel_arg_info param_name,
- size_t param_value_size, void *param_value,
- size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_2;
-
-#else
-
-typedef void *cl_api_clGetKernelArgInfo;
-
-#endif
-
-typedef cl_int(CL_API_CALL *cl_api_clGetKernelWorkGroupInfo)(
- cl_kernel kernel, cl_device_id device, cl_kernel_work_group_info param_name,
- size_t param_value_size, void *param_value,
- size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_2_0
-
-typedef cl_int(CL_API_CALL *cl_api_clSetKernelArgSVMPointer)(
- cl_kernel /* kernel */, cl_uint /* arg_index */,
- const void * /* arg_value */) CL_API_SUFFIX__VERSION_2_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clSetKernelExecInfo)(
- cl_kernel /* kernel */, cl_kernel_exec_info /* param_name */,
- size_t /* param_value_size */,
- const void * /* param_value */) CL_API_SUFFIX__VERSION_2_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetKernelSubGroupInfoKHR)(
- cl_kernel /* in_kernel */, cl_device_id /*in_device*/,
- cl_kernel_sub_group_info /* param_name */, size_t /*input_value_size*/,
- const void * /*input_value*/, size_t /*param_value_size*/,
- void * /*param_value*/,
- size_t * /*param_value_size_ret*/) CL_API_SUFFIX__VERSION_2_0;
-
-#else
-
-typedef void *cl_api_clSetKernelArgSVMPointer;
-typedef void *cl_api_clSetKernelExecInfo;
-typedef void *cl_api_clGetKernelSubGroupInfoKHR;
-
-#endif
-
-// Event Object APIs
-typedef cl_int(CL_API_CALL *cl_api_clWaitForEvents)(
- cl_uint num_events, const cl_event *event_list) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetEventInfo)(
- cl_event event, cl_event_info param_name, size_t param_value_size,
- void *param_value, size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clRetainEvent)(cl_event event)
- CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clReleaseEvent)(cl_event event)
- CL_API_SUFFIX__VERSION_1_0;
-
-// Profiling APIs
-typedef cl_int(CL_API_CALL *cl_api_clGetEventProfilingInfo)(
- cl_event event, cl_profiling_info param_name, size_t param_value_size,
- void *param_value, size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-// Flush and Finish APIs
-typedef cl_int(CL_API_CALL *cl_api_clFlush)(
- cl_command_queue command_queue) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clFinish)(
- cl_command_queue command_queue) CL_API_SUFFIX__VERSION_1_0;
-
-// Enqueued Commands APIs
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueReadBuffer)(
- cl_command_queue command_queue, cl_mem buffer, cl_bool blocking_read,
- size_t offset, size_t cb, void *ptr, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_1_1
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueReadBufferRect)(
- cl_command_queue command_queue, cl_mem buffer, cl_bool blocking_read,
- const size_t *buffer_origin, const size_t *host_origin,
- const size_t *region, size_t buffer_row_pitch, size_t buffer_slice_pitch,
- size_t host_row_pitch, size_t host_slice_pitch, void *ptr,
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_1;
-
-#else
-
-typedef void *cl_api_clEnqueueReadBufferRect;
-
-#endif
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueWriteBuffer)(
- cl_command_queue command_queue, cl_mem buffer, cl_bool blocking_write,
- size_t offset, size_t cb, const void *ptr, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_1_1
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueWriteBufferRect)(
- cl_command_queue command_queue, cl_mem buffer, cl_bool blocking_read,
- const size_t *buffer_origin, const size_t *host_origin,
- const size_t *region, size_t buffer_row_pitch, size_t buffer_slice_pitch,
- size_t host_row_pitch, size_t host_slice_pitch, const void *ptr,
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_1;
-
-#else
-
-typedef void *cl_api_clEnqueueWriteBufferRect;
-
-#endif
-
-#ifdef CL_VERSION_1_2
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueFillBuffer)(
- cl_command_queue command_queue, cl_mem buffer, const void *pattern,
- size_t pattern_size, size_t offset, size_t cb,
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_2;
-
-#else
-
-typedef void *cl_api_clEnqueueFillBuffer;
-
-#endif
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueCopyBuffer)(
- cl_command_queue command_queue, cl_mem src_buffer, cl_mem dst_buffer,
- size_t src_offset, size_t dst_offset, size_t cb,
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_1_1
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueCopyBufferRect)(
- cl_command_queue command_queue, cl_mem src_buffer, cl_mem dst_buffer,
- const size_t *src_origin, const size_t *dst_origin, const size_t *region,
- size_t src_row_pitch, size_t src_slice_pitch, size_t dst_row_pitch,
- size_t dst_slice_pitch, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_1;
-
-#else
-
-typedef void *cl_api_clEnqueueCopyBufferRect;
-
-#endif
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueReadImage)(
- cl_command_queue command_queue, cl_mem image, cl_bool blocking_read,
- const size_t *origin, const size_t *region, size_t row_pitch,
- size_t slice_pitch, void *ptr, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueWriteImage)(
- cl_command_queue command_queue, cl_mem image, cl_bool blocking_write,
- const size_t *origin, const size_t *region, size_t input_row_pitch,
- size_t input_slice_pitch, const void *ptr, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_1_2
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueFillImage)(
- cl_command_queue command_queue, cl_mem image, const void *fill_color,
- const size_t origin[3], const size_t region[3],
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_2;
-
-#else
-
-typedef void *cl_api_clEnqueueFillImage;
-
-#endif
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueCopyImage)(
- cl_command_queue command_queue, cl_mem src_image, cl_mem dst_image,
- const size_t *src_origin, const size_t *dst_origin, const size_t *region,
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueCopyImageToBuffer)(
- cl_command_queue command_queue, cl_mem src_image, cl_mem dst_buffer,
- const size_t *src_origin, const size_t *region, size_t dst_offset,
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueCopyBufferToImage)(
- cl_command_queue command_queue, cl_mem src_buffer, cl_mem dst_image,
- size_t src_offset, const size_t *dst_origin, const size_t *region,
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-typedef void *(CL_API_CALL *cl_api_clEnqueueMapBuffer)(
- cl_command_queue command_queue, cl_mem buffer, cl_bool blocking_map,
- cl_map_flags map_flags, size_t offset, size_t cb,
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event, cl_int *errcode_ret)CL_API_SUFFIX__VERSION_1_0;
-
-typedef void *(CL_API_CALL *cl_api_clEnqueueMapImage)(
- cl_command_queue command_queue, cl_mem image, cl_bool blocking_map,
- cl_map_flags map_flags, const size_t *origin, const size_t *region,
- size_t *image_row_pitch, size_t *image_slice_pitch,
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event, cl_int *errcode_ret)CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueUnmapMemObject)(
- cl_command_queue command_queue, cl_mem memobj, void *mapped_ptr,
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_1_2
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueMigrateMemObjects)(
- cl_command_queue command_queue, cl_uint num_mem_objects,
- const cl_mem *mem_objects, cl_mem_migration_flags flags,
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_2;
-
-#else
-
-typedef void *cl_api_clEnqueueMigrateMemObjects;
-
-#endif
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueNDRangeKernel)(
- cl_command_queue command_queue, cl_kernel kernel, cl_uint work_dim,
- const size_t *global_work_offset, const size_t *global_work_size,
- const size_t *local_work_size, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueTask)(
- cl_command_queue command_queue, cl_kernel kernel,
- cl_uint num_events_in_wait_list, const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueNativeKernel)(
- cl_command_queue command_queue, void(CL_CALLBACK *user_func)(void *),
- void *args, size_t cb_args, cl_uint num_mem_objects, const cl_mem *mem_list,
- const void **args_mem_loc, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-#ifdef CL_VERSION_1_2
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueMarkerWithWaitList)(
- cl_command_queue command_queue, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueBarrierWithWaitList)(
- cl_command_queue command_queue, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_2;
-
-typedef void *(
- CL_API_CALL *cl_api_clGetExtensionFunctionAddressForPlatform)(
- cl_platform_id platform,
- const char *function_name)CL_API_SUFFIX__VERSION_1_2;
-
-#else
-
-typedef void *cl_api_clEnqueueMarkerWithWaitList;
-typedef void *cl_api_clEnqueueBarrierWithWaitList;
-typedef void *cl_api_clGetExtensionFunctionAddressForPlatform;
-
-#endif
-
-// Shared Virtual Memory APIs
-
-#ifdef CL_VERSION_2_0
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueSVMFree)(
- cl_command_queue /* command_queue */, cl_uint /* num_svm_pointers */,
- void ** /* svm_pointers */,
- void(CL_CALLBACK *pfn_free_func)(cl_command_queue /* queue */,
- cl_uint /* num_svm_pointers */,
- void ** /* svm_pointers[] */,
- void * /* user_data */),
- void * /* user_data */, cl_uint /* num_events_in_wait_list */,
- const cl_event * /* event_wait_list */,
- cl_event * /* event */) CL_API_SUFFIX__VERSION_2_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueSVMMemcpy)(
- cl_command_queue /* command_queue */, cl_bool /* blocking_copy */,
- void * /* dst_ptr */, const void * /* src_ptr */, size_t /* size */,
- cl_uint /* num_events_in_wait_list */,
- const cl_event * /* event_wait_list */,
- cl_event * /* event */) CL_API_SUFFIX__VERSION_2_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueSVMMemFill)(
- cl_command_queue /* command_queue */, void * /* svm_ptr */,
- const void * /* pattern */, size_t /* pattern_size */, size_t /* size */,
- cl_uint /* num_events_in_wait_list */,
- const cl_event * /* event_wait_list */,
- cl_event * /* event */) CL_API_SUFFIX__VERSION_2_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueSVMMap)(
- cl_command_queue /* command_queue */, cl_bool /* blocking_map */,
- cl_map_flags /* map_flags */, void * /* svm_ptr */, size_t /* size */,
- cl_uint /* num_events_in_wait_list */,
- const cl_event * /* event_wait_list */,
- cl_event * /* event */) CL_API_SUFFIX__VERSION_2_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueSVMUnmap)(
- cl_command_queue /* command_queue */, void * /* svm_ptr */,
- cl_uint /* num_events_in_wait_list */,
- const cl_event * /* event_wait_list */,
- cl_event * /* event */) CL_API_SUFFIX__VERSION_2_0;
-
-#else
-
-typedef void *cl_api_clEnqueueSVMFree;
-typedef void *cl_api_clEnqueueSVMMemcpy;
-typedef void *cl_api_clEnqueueSVMMemFill;
-typedef void *cl_api_clEnqueueSVMMap;
-typedef void *cl_api_clEnqueueSVMUnmap;
-
-#endif
-
-// Deprecated APIs
-typedef cl_int(CL_API_CALL *cl_api_clSetCommandQueueProperty)(
- cl_command_queue command_queue, cl_command_queue_properties properties,
- cl_bool enable, cl_command_queue_properties *old_properties)
- CL_API_SUFFIX__VERSION_1_0_DEPRECATED;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateImage2D)(
- cl_context context, cl_mem_flags flags, const cl_image_format *image_format,
- size_t image_width, size_t image_height, size_t image_row_pitch,
- void *host_ptr, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_1_DEPRECATED;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateImage3D)(
- cl_context context, cl_mem_flags flags, const cl_image_format *image_format,
- size_t image_width, size_t image_height, size_t image_depth,
- size_t image_row_pitch, size_t image_slice_pitch, void *host_ptr,
- cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_1_DEPRECATED;
-
-typedef cl_int(CL_API_CALL *cl_api_clUnloadCompiler)(void)
- CL_API_SUFFIX__VERSION_1_1_DEPRECATED;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueMarker)(
- cl_command_queue command_queue,
- cl_event *event) CL_API_SUFFIX__VERSION_1_1_DEPRECATED;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueWaitForEvents)(
- cl_command_queue command_queue, cl_uint num_events,
- const cl_event *event_list) CL_API_SUFFIX__VERSION_1_1_DEPRECATED;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueBarrier)(
- cl_command_queue command_queue) CL_API_SUFFIX__VERSION_1_1_DEPRECATED;
-
-typedef void *(CL_API_CALL *cl_api_clGetExtensionFunctionAddress)(
- const char *function_name)CL_API_SUFFIX__VERSION_1_1_DEPRECATED;
-
-// GL and other APIs
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromGLBuffer)(
- cl_context context, cl_mem_flags flags, cl_GLuint bufobj,
- int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromGLTexture)(
- cl_context context, cl_mem_flags flags, cl_GLenum target, cl_GLint miplevel,
- cl_GLuint texture, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromGLTexture2D)(
- cl_context context, cl_mem_flags flags, cl_GLenum target, cl_GLint miplevel,
- cl_GLuint texture, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromGLTexture3D)(
- cl_context context, cl_mem_flags flags, cl_GLenum target, cl_GLint miplevel,
- cl_GLuint texture, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromGLRenderbuffer)(
- cl_context context, cl_mem_flags flags, cl_GLuint renderbuffer,
- cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetGLObjectInfo)(
- cl_mem memobj, cl_gl_object_type *gl_object_type,
- cl_GLuint *gl_object_name) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetGLTextureInfo)(
- cl_mem memobj, cl_gl_texture_info param_name, size_t param_value_size,
- void *param_value, size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueAcquireGLObjects)(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueReleaseGLObjects)(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-/* cl_khr_gl_sharing */
-typedef cl_int(CL_API_CALL *cl_api_clGetGLContextInfoKHR)(
- const cl_context_properties *properties, cl_gl_context_info param_name,
- size_t param_value_size, void *param_value, size_t *param_value_size_ret);
-
-/* cl_khr_gl_event */
-typedef cl_event(CL_API_CALL *cl_api_clCreateEventFromGLsyncKHR)(
- cl_context context, cl_GLsync sync, cl_int *errcode_ret);
-
-#if defined(_WIN32)
-
-/* cl_khr_d3d10_sharing */
-
-typedef cl_int(CL_API_CALL *cl_api_clGetDeviceIDsFromD3D10KHR)(
- cl_platform_id platform, cl_d3d10_device_source_khr d3d_device_source,
- void *d3d_object, cl_d3d10_device_set_khr d3d_device_set,
- cl_uint num_entries, cl_device_id *devices,
- cl_uint *num_devices) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromD3D10BufferKHR)(
- cl_context context, cl_mem_flags flags, ID3D10Buffer *resource,
- cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromD3D10Texture2DKHR)(
- cl_context context, cl_mem_flags flags, ID3D10Texture2D *resource,
- UINT subresource, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromD3D10Texture3DKHR)(
- cl_context context, cl_mem_flags flags, ID3D10Texture3D *resource,
- UINT subresource, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_0;
-
-typedef
-cl_int(CL_API_CALL *cl_api_clEnqueueAcquireD3D10ObjectsKHR)(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-typedef
-cl_int(CL_API_CALL *cl_api_clEnqueueReleaseD3D10ObjectsKHR)(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_0;
-
-extern CL_API_ENTRY cl_int CL_API_CALL clGetDeviceIDsFromD3D10KHR(
- cl_platform_id platform, cl_d3d10_device_source_khr d3d_device_source,
- void *d3d_object, cl_d3d10_device_set_khr d3d_device_set,
- cl_uint num_entries, cl_device_id *devices, cl_uint *num_devices);
-
-extern CL_API_ENTRY cl_mem CL_API_CALL
-clCreateFromD3D10BufferKHR(cl_context context, cl_mem_flags flags,
- ID3D10Buffer *resource, cl_int *errcode_ret);
-
-extern CL_API_ENTRY cl_mem CL_API_CALL clCreateFromD3D10Texture2DKHR(
- cl_context context, cl_mem_flags flags, ID3D10Texture2D *resource,
- UINT subresource, cl_int *errcode_ret);
-
-extern CL_API_ENTRY cl_mem CL_API_CALL clCreateFromD3D10Texture3DKHR(
- cl_context context, cl_mem_flags flags, ID3D10Texture3D *resource,
- UINT subresource, cl_int *errcode_ret);
-
-extern CL_API_ENTRY cl_int CL_API_CALL clEnqueueAcquireD3D10ObjectsKHR(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list, cl_event *event);
-
-extern CL_API_ENTRY cl_int CL_API_CALL clEnqueueReleaseD3D10ObjectsKHR(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list, cl_event *event);
-
-/* cl_khr_d3d11_sharing */
-typedef cl_int(CL_API_CALL *cl_api_clGetDeviceIDsFromD3D11KHR)(
- cl_platform_id platform, cl_d3d11_device_source_khr d3d_device_source,
- void *d3d_object, cl_d3d11_device_set_khr d3d_device_set,
- cl_uint num_entries, cl_device_id *devices,
- cl_uint *num_devices) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromD3D11BufferKHR)(
- cl_context context, cl_mem_flags flags, ID3D11Buffer *resource,
- cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromD3D11Texture2DKHR)(
- cl_context context, cl_mem_flags flags, ID3D11Texture2D *resource,
- UINT subresource, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromD3D11Texture3DKHR)(
- cl_context context, cl_mem_flags flags, ID3D11Texture3D *resource,
- UINT subresource, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_2;
-
-typedef
-cl_int(CL_API_CALL *cl_api_clEnqueueAcquireD3D11ObjectsKHR)(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_2;
-
-typedef
-cl_int(CL_API_CALL *cl_api_clEnqueueReleaseD3D11ObjectsKHR)(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_2;
-
-/* cl_khr_dx9_media_sharing */
-typedef
-cl_int(CL_API_CALL *cl_api_clGetDeviceIDsFromDX9MediaAdapterKHR)(
- cl_platform_id platform, cl_uint num_media_adapters,
- cl_dx9_media_adapter_type_khr *media_adapters_type, void *media_adapters,
- cl_dx9_media_adapter_set_khr media_adapter_set, cl_uint num_entries,
- cl_device_id *devices, cl_uint *num_devices) CL_API_SUFFIX__VERSION_1_2;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromDX9MediaSurfaceKHR)(
- cl_context context, cl_mem_flags flags,
- cl_dx9_media_adapter_type_khr adapter_type, void *surface_info,
- cl_uint plane, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_1_2;
-
-typedef
-cl_int(CL_API_CALL *cl_api_clEnqueueAcquireDX9MediaSurfacesKHR)(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_2;
-
-typedef
-cl_int(CL_API_CALL *cl_api_clEnqueueReleaseDX9MediaSurfacesKHR)(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_1_2;
-
-/* cl_khr_d3d11_sharing */
-extern CL_API_ENTRY cl_int CL_API_CALL clGetDeviceIDsFromD3D11KHR(
- cl_platform_id platform, cl_d3d11_device_source_khr d3d_device_source,
- void *d3d_object, cl_d3d11_device_set_khr d3d_device_set,
- cl_uint num_entries, cl_device_id *devices, cl_uint *num_devices);
-
-extern CL_API_ENTRY cl_mem CL_API_CALL
-clCreateFromD3D11BufferKHR(cl_context context, cl_mem_flags flags,
- ID3D11Buffer *resource, cl_int *errcode_ret);
-
-extern CL_API_ENTRY cl_mem CL_API_CALL clCreateFromD3D11Texture2DKHR(
- cl_context context, cl_mem_flags flags, ID3D11Texture2D *resource,
- UINT subresource, cl_int *errcode_ret);
-
-extern CL_API_ENTRY cl_mem CL_API_CALL clCreateFromD3D11Texture3DKHR(
- cl_context context, cl_mem_flags flags, ID3D11Texture3D *resource,
- UINT subresource, cl_int *errcode_ret);
-
-extern CL_API_ENTRY cl_int CL_API_CALL clEnqueueAcquireD3D11ObjectsKHR(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list, cl_event *event);
-
-extern CL_API_ENTRY cl_int CL_API_CALL clEnqueueReleaseD3D11ObjectsKHR(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list, cl_event *event);
-
-/* cl_khr_dx9_media_sharing */
-extern CL_API_ENTRY cl_int CL_API_CALL clGetDeviceIDsFromDX9MediaAdapterKHR(
- cl_platform_id platform, cl_uint num_media_adapters,
- cl_dx9_media_adapter_type_khr *media_adapter_type, void *media_adapters,
- cl_dx9_media_adapter_set_khr media_adapter_set, cl_uint num_entries,
- cl_device_id *devices, cl_uint *num_devices);
-
-extern CL_API_ENTRY cl_mem CL_API_CALL clCreateFromDX9MediaSurfaceKHR(
- cl_context context, cl_mem_flags flags,
- cl_dx9_media_adapter_type_khr adapter_type, void *surface_info,
- cl_uint plane, cl_int *errcode_ret);
-
-extern CL_API_ENTRY cl_int CL_API_CALL clEnqueueAcquireDX9MediaSurfacesKHR(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list, cl_event *event);
-
-extern CL_API_ENTRY cl_int CL_API_CALL clEnqueueReleaseDX9MediaSurfacesKHR(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list, cl_event *event);
-
-#else
-
-/* cl_khr_d3d10_sharing */
-typedef void *cl_api_clGetDeviceIDsFromD3D10KHR;
-typedef void *cl_api_clCreateFromD3D10BufferKHR;
-typedef void *cl_api_clCreateFromD3D10Texture2DKHR;
-typedef void *cl_api_clCreateFromD3D10Texture3DKHR;
-typedef void *cl_api_clEnqueueAcquireD3D10ObjectsKHR;
-typedef void *cl_api_clEnqueueReleaseD3D10ObjectsKHR;
-
-/* cl_khr_d3d11_sharing */
-typedef void *cl_api_clGetDeviceIDsFromD3D11KHR;
-typedef void *cl_api_clCreateFromD3D11BufferKHR;
-typedef void *cl_api_clCreateFromD3D11Texture2DKHR;
-typedef void *cl_api_clCreateFromD3D11Texture3DKHR;
-typedef void *cl_api_clEnqueueAcquireD3D11ObjectsKHR;
-typedef void *cl_api_clEnqueueReleaseD3D11ObjectsKHR;
-
-/* cl_khr_dx9_media_sharing */
-typedef void *cl_api_clCreateFromDX9MediaSurfaceKHR;
-typedef void *cl_api_clEnqueueAcquireDX9MediaSurfacesKHR;
-typedef void *cl_api_clEnqueueReleaseDX9MediaSurfacesKHR;
-typedef void *cl_api_clGetDeviceIDsFromDX9MediaAdapterKHR;
-
-#endif
-
-/* OpenCL 1.1 */
-
-#ifdef CL_VERSION_1_1
-
-typedef cl_int(CL_API_CALL *cl_api_clSetEventCallback)(
- cl_event /* event */, cl_int /* command_exec_callback_type */,
- void(CL_CALLBACK * /* pfn_notify */)(cl_event, cl_int, void *),
- void * /* user_data */) CL_API_SUFFIX__VERSION_1_1;
-
-typedef cl_mem(CL_API_CALL *cl_api_clCreateSubBuffer)(
- cl_mem /* buffer */, cl_mem_flags /* flags */,
- cl_buffer_create_type /* buffer_create_type */,
- const void * /* buffer_create_info */,
- cl_int * /* errcode_ret */) CL_API_SUFFIX__VERSION_1_1;
-
-typedef
-cl_int(CL_API_CALL *cl_api_clSetMemObjectDestructorCallback)(
- cl_mem /* memobj */,
- void(CL_CALLBACK * /*pfn_notify*/)(cl_mem /* memobj */,
- void * /*user_data*/),
- void * /*user_data */) CL_API_SUFFIX__VERSION_1_1;
-
-typedef cl_event(CL_API_CALL *cl_api_clCreateUserEvent)(
- cl_context /* context */,
- cl_int * /* errcode_ret */) CL_API_SUFFIX__VERSION_1_1;
-
-typedef cl_int(CL_API_CALL *cl_api_clSetUserEventStatus)(
- cl_event /* event */,
- cl_int /* execution_status */) CL_API_SUFFIX__VERSION_1_1;
-
-#else
-
-typedef void *cl_api_clSetEventCallback;
-typedef void *cl_api_clCreateSubBuffer;
-typedef void *cl_api_clSetMemObjectDestructorCallback;
-typedef void *cl_api_clCreateUserEvent;
-typedef void *cl_api_clSetUserEventStatus;
-
-#endif
-
-typedef cl_int(CL_API_CALL *cl_api_clCreateSubDevicesEXT)(
- cl_device_id in_device,
- const cl_device_partition_property_ext *partition_properties,
- cl_uint num_entries, cl_device_id *out_devices, cl_uint *num_devices);
-
-typedef cl_int(CL_API_CALL *cl_api_clRetainDeviceEXT)(
- cl_device_id device) CL_API_SUFFIX__VERSION_1_0;
-
-typedef cl_int(CL_API_CALL *cl_api_clReleaseDeviceEXT)(
- cl_device_id device) CL_API_SUFFIX__VERSION_1_0;
-
-/* cl_khr_egl_image */
-typedef cl_mem(CL_API_CALL *cl_api_clCreateFromEGLImageKHR)(
- cl_context context, CLeglDisplayKHR display, CLeglImageKHR image,
- cl_mem_flags flags, const cl_egl_image_properties_khr *properties,
- cl_int *errcode_ret);
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueAcquireEGLObjectsKHR)(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list, cl_event *event);
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueReleaseEGLObjectsKHR)(
- cl_command_queue command_queue, cl_uint num_objects,
- const cl_mem *mem_objects, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list, cl_event *event);
-
-/* cl_khr_egl_event */
-typedef cl_event(CL_API_CALL *cl_api_clCreateEventFromEGLSyncKHR)(
- cl_context context, CLeglSyncKHR sync, CLeglDisplayKHR display,
- cl_int *errcode_ret);
-
-#ifdef CL_VERSION_2_1
-
-typedef cl_int(CL_API_CALL *cl_api_clSetDefaultDeviceCommandQueue)(
- cl_context context, cl_device_id device,
- cl_command_queue command_queue) CL_API_SUFFIX__VERSION_2_1;
-
-typedef cl_program(CL_API_CALL *cl_api_clCreateProgramWithIL)(
- cl_context context, const void *il, size_t length,
- cl_int *errcode_ret) CL_API_SUFFIX__VERSION_2_1;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetKernelSubGroupInfo)(
- cl_kernel kernel, cl_device_id device, cl_kernel_sub_group_info param_name,
- size_t input_value_size, const void *input_value, size_t param_value_size,
- void *param_value, size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_2_1;
-
-typedef cl_kernel(CL_API_CALL *cl_api_clCloneKernel)(
- cl_kernel source_kernel, cl_int *errcode_ret) CL_API_SUFFIX__VERSION_2_1;
-
-typedef cl_int(CL_API_CALL *cl_api_clEnqueueSVMMigrateMem)(
- cl_command_queue command_queue, cl_uint num_svm_pointers,
- const void **svm_pointers, const size_t *sizes,
- cl_mem_migration_flags flags, cl_uint num_events_in_wait_list,
- const cl_event *event_wait_list,
- cl_event *event) CL_API_SUFFIX__VERSION_2_1;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetDeviceAndHostTimer)(
- cl_device_id device, cl_ulong *device_timestamp,
- cl_ulong *host_timestamp) CL_API_SUFFIX__VERSION_2_1;
-
-typedef cl_int(CL_API_CALL *cl_api_clGetHostTimer)(
- cl_device_id device, cl_ulong *host_timestamp) CL_API_SUFFIX__VERSION_2_1;
-
-#else
-
-typedef void *cl_api_clSetDefaultDeviceCommandQueue;
-typedef void *cl_api_clCreateProgramWithIL;
-typedef void *cl_api_clGetKernelSubGroupInfo;
-typedef void *cl_api_clCloneKernel;
-typedef void *cl_api_clEnqueueSVMMigrateMem;
-typedef void *cl_api_clGetDeviceAndHostTimer;
-typedef void *cl_api_clGetHostTimer;
-
-#endif
-
-/* Vendor dispatch table structure */
-
-typedef struct _cl_icd_dispatch {
- /* OpenCL 1.0 */
- cl_api_clGetPlatformIDs clGetPlatformIDs;
- cl_api_clGetPlatformInfo clGetPlatformInfo;
- cl_api_clGetDeviceIDs clGetDeviceIDs;
- cl_api_clGetDeviceInfo clGetDeviceInfo;
- cl_api_clCreateContext clCreateContext;
- cl_api_clCreateContextFromType clCreateContextFromType;
- cl_api_clRetainContext clRetainContext;
- cl_api_clReleaseContext clReleaseContext;
- cl_api_clGetContextInfo clGetContextInfo;
- cl_api_clCreateCommandQueue clCreateCommandQueue;
- cl_api_clRetainCommandQueue clRetainCommandQueue;
- cl_api_clReleaseCommandQueue clReleaseCommandQueue;
- cl_api_clGetCommandQueueInfo clGetCommandQueueInfo;
- cl_api_clSetCommandQueueProperty clSetCommandQueueProperty;
- cl_api_clCreateBuffer clCreateBuffer;
- cl_api_clCreateImage2D clCreateImage2D;
- cl_api_clCreateImage3D clCreateImage3D;
- cl_api_clRetainMemObject clRetainMemObject;
- cl_api_clReleaseMemObject clReleaseMemObject;
- cl_api_clGetSupportedImageFormats clGetSupportedImageFormats;
- cl_api_clGetMemObjectInfo clGetMemObjectInfo;
- cl_api_clGetImageInfo clGetImageInfo;
- cl_api_clCreateSampler clCreateSampler;
- cl_api_clRetainSampler clRetainSampler;
- cl_api_clReleaseSampler clReleaseSampler;
- cl_api_clGetSamplerInfo clGetSamplerInfo;
- cl_api_clCreateProgramWithSource clCreateProgramWithSource;
- cl_api_clCreateProgramWithBinary clCreateProgramWithBinary;
- cl_api_clRetainProgram clRetainProgram;
- cl_api_clReleaseProgram clReleaseProgram;
- cl_api_clBuildProgram clBuildProgram;
- cl_api_clUnloadCompiler clUnloadCompiler;
- cl_api_clGetProgramInfo clGetProgramInfo;
- cl_api_clGetProgramBuildInfo clGetProgramBuildInfo;
- cl_api_clCreateKernel clCreateKernel;
- cl_api_clCreateKernelsInProgram clCreateKernelsInProgram;
- cl_api_clRetainKernel clRetainKernel;
- cl_api_clReleaseKernel clReleaseKernel;
- cl_api_clSetKernelArg clSetKernelArg;
- cl_api_clGetKernelInfo clGetKernelInfo;
- cl_api_clGetKernelWorkGroupInfo clGetKernelWorkGroupInfo;
- cl_api_clWaitForEvents clWaitForEvents;
- cl_api_clGetEventInfo clGetEventInfo;
- cl_api_clRetainEvent clRetainEvent;
- cl_api_clReleaseEvent clReleaseEvent;
- cl_api_clGetEventProfilingInfo clGetEventProfilingInfo;
- cl_api_clFlush clFlush;
- cl_api_clFinish clFinish;
- cl_api_clEnqueueReadBuffer clEnqueueReadBuffer;
- cl_api_clEnqueueWriteBuffer clEnqueueWriteBuffer;
- cl_api_clEnqueueCopyBuffer clEnqueueCopyBuffer;
- cl_api_clEnqueueReadImage clEnqueueReadImage;
- cl_api_clEnqueueWriteImage clEnqueueWriteImage;
- cl_api_clEnqueueCopyImage clEnqueueCopyImage;
- cl_api_clEnqueueCopyImageToBuffer clEnqueueCopyImageToBuffer;
- cl_api_clEnqueueCopyBufferToImage clEnqueueCopyBufferToImage;
- cl_api_clEnqueueMapBuffer clEnqueueMapBuffer;
- cl_api_clEnqueueMapImage clEnqueueMapImage;
- cl_api_clEnqueueUnmapMemObject clEnqueueUnmapMemObject;
- cl_api_clEnqueueNDRangeKernel clEnqueueNDRangeKernel;
- cl_api_clEnqueueTask clEnqueueTask;
- cl_api_clEnqueueNativeKernel clEnqueueNativeKernel;
- cl_api_clEnqueueMarker clEnqueueMarker;
- cl_api_clEnqueueWaitForEvents clEnqueueWaitForEvents;
- cl_api_clEnqueueBarrier clEnqueueBarrier;
- cl_api_clGetExtensionFunctionAddress clGetExtensionFunctionAddress;
- cl_api_clCreateFromGLBuffer clCreateFromGLBuffer;
- cl_api_clCreateFromGLTexture2D clCreateFromGLTexture2D;
- cl_api_clCreateFromGLTexture3D clCreateFromGLTexture3D;
- cl_api_clCreateFromGLRenderbuffer clCreateFromGLRenderbuffer;
- cl_api_clGetGLObjectInfo clGetGLObjectInfo;
- cl_api_clGetGLTextureInfo clGetGLTextureInfo;
- cl_api_clEnqueueAcquireGLObjects clEnqueueAcquireGLObjects;
- cl_api_clEnqueueReleaseGLObjects clEnqueueReleaseGLObjects;
- cl_api_clGetGLContextInfoKHR clGetGLContextInfoKHR;
-
- /* cl_khr_d3d10_sharing */
- cl_api_clGetDeviceIDsFromD3D10KHR clGetDeviceIDsFromD3D10KHR;
- cl_api_clCreateFromD3D10BufferKHR clCreateFromD3D10BufferKHR;
- cl_api_clCreateFromD3D10Texture2DKHR clCreateFromD3D10Texture2DKHR;
- cl_api_clCreateFromD3D10Texture3DKHR clCreateFromD3D10Texture3DKHR;
- cl_api_clEnqueueAcquireD3D10ObjectsKHR clEnqueueAcquireD3D10ObjectsKHR;
- cl_api_clEnqueueReleaseD3D10ObjectsKHR clEnqueueReleaseD3D10ObjectsKHR;
-
- /* OpenCL 1.1 */
- cl_api_clSetEventCallback clSetEventCallback;
- cl_api_clCreateSubBuffer clCreateSubBuffer;
- cl_api_clSetMemObjectDestructorCallback clSetMemObjectDestructorCallback;
- cl_api_clCreateUserEvent clCreateUserEvent;
- cl_api_clSetUserEventStatus clSetUserEventStatus;
- cl_api_clEnqueueReadBufferRect clEnqueueReadBufferRect;
- cl_api_clEnqueueWriteBufferRect clEnqueueWriteBufferRect;
- cl_api_clEnqueueCopyBufferRect clEnqueueCopyBufferRect;
-
- /* cl_ext_device_fission */
- cl_api_clCreateSubDevicesEXT clCreateSubDevicesEXT;
- cl_api_clRetainDeviceEXT clRetainDeviceEXT;
- cl_api_clReleaseDeviceEXT clReleaseDeviceEXT;
-
- /* cl_khr_gl_event */
- cl_api_clCreateEventFromGLsyncKHR clCreateEventFromGLsyncKHR;
-
- /* OpenCL 1.2 */
- cl_api_clCreateSubDevices clCreateSubDevices;
- cl_api_clRetainDevice clRetainDevice;
- cl_api_clReleaseDevice clReleaseDevice;
- cl_api_clCreateImage clCreateImage;
- cl_api_clCreateProgramWithBuiltInKernels clCreateProgramWithBuiltInKernels;
- cl_api_clCompileProgram clCompileProgram;
- cl_api_clLinkProgram clLinkProgram;
- cl_api_clUnloadPlatformCompiler clUnloadPlatformCompiler;
- cl_api_clGetKernelArgInfo clGetKernelArgInfo;
- cl_api_clEnqueueFillBuffer clEnqueueFillBuffer;
- cl_api_clEnqueueFillImage clEnqueueFillImage;
- cl_api_clEnqueueMigrateMemObjects clEnqueueMigrateMemObjects;
- cl_api_clEnqueueMarkerWithWaitList clEnqueueMarkerWithWaitList;
- cl_api_clEnqueueBarrierWithWaitList clEnqueueBarrierWithWaitList;
- cl_api_clGetExtensionFunctionAddressForPlatform
- clGetExtensionFunctionAddressForPlatform;
- cl_api_clCreateFromGLTexture clCreateFromGLTexture;
-
- /* cl_khr_d3d11_sharing */
- cl_api_clGetDeviceIDsFromD3D11KHR clGetDeviceIDsFromD3D11KHR;
- cl_api_clCreateFromD3D11BufferKHR clCreateFromD3D11BufferKHR;
- cl_api_clCreateFromD3D11Texture2DKHR clCreateFromD3D11Texture2DKHR;
- cl_api_clCreateFromD3D11Texture3DKHR clCreateFromD3D11Texture3DKHR;
- cl_api_clCreateFromDX9MediaSurfaceKHR clCreateFromDX9MediaSurfaceKHR;
- cl_api_clEnqueueAcquireD3D11ObjectsKHR clEnqueueAcquireD3D11ObjectsKHR;
- cl_api_clEnqueueReleaseD3D11ObjectsKHR clEnqueueReleaseD3D11ObjectsKHR;
-
- /* cl_khr_dx9_media_sharing */
- cl_api_clGetDeviceIDsFromDX9MediaAdapterKHR
- clGetDeviceIDsFromDX9MediaAdapterKHR;
- cl_api_clEnqueueAcquireDX9MediaSurfacesKHR
- clEnqueueAcquireDX9MediaSurfacesKHR;
- cl_api_clEnqueueReleaseDX9MediaSurfacesKHR
- clEnqueueReleaseDX9MediaSurfacesKHR;
-
- /* cl_khr_egl_image */
- cl_api_clCreateFromEGLImageKHR clCreateFromEGLImageKHR;
- cl_api_clEnqueueAcquireEGLObjectsKHR clEnqueueAcquireEGLObjectsKHR;
- cl_api_clEnqueueReleaseEGLObjectsKHR clEnqueueReleaseEGLObjectsKHR;
-
- /* cl_khr_egl_event */
- cl_api_clCreateEventFromEGLSyncKHR clCreateEventFromEGLSyncKHR;
-
- /* OpenCL 2.0 */
- cl_api_clCreateCommandQueueWithProperties clCreateCommandQueueWithProperties;
- cl_api_clCreatePipe clCreatePipe;
- cl_api_clGetPipeInfo clGetPipeInfo;
- cl_api_clSVMAlloc clSVMAlloc;
- cl_api_clSVMFree clSVMFree;
- cl_api_clEnqueueSVMFree clEnqueueSVMFree;
- cl_api_clEnqueueSVMMemcpy clEnqueueSVMMemcpy;
- cl_api_clEnqueueSVMMemFill clEnqueueSVMMemFill;
- cl_api_clEnqueueSVMMap clEnqueueSVMMap;
- cl_api_clEnqueueSVMUnmap clEnqueueSVMUnmap;
- cl_api_clCreateSamplerWithProperties clCreateSamplerWithProperties;
- cl_api_clSetKernelArgSVMPointer clSetKernelArgSVMPointer;
- cl_api_clSetKernelExecInfo clSetKernelExecInfo;
-
- /* cl_khr_sub_groups */
- cl_api_clGetKernelSubGroupInfoKHR clGetKernelSubGroupInfoKHR;
-
- /* OpenCL 2.1 */
- cl_api_clCloneKernel clCloneKernel;
- cl_api_clCreateProgramWithIL clCreateProgramWithIL;
- cl_api_clEnqueueSVMMigrateMem clEnqueueSVMMigrateMem;
- cl_api_clGetDeviceAndHostTimer clGetDeviceAndHostTimer;
- cl_api_clGetHostTimer clGetHostTimer;
- cl_api_clGetKernelSubGroupInfo clGetKernelSubGroupInfo;
- cl_api_clSetDefaultDeviceCommandQueue clSetDefaultDeviceCommandQueue;
-
- /* OpenCL 2.2 */
- cl_api_clSetProgramReleaseCallback clSetProgramReleaseCallback;
- cl_api_clSetProgramSpecializationConstant clSetProgramSpecializationConstant;
-
- /* OpenCL 3.0 */
- cl_api_clCreateBufferWithProperties clCreateBufferWithProperties;
- cl_api_clCreateImageWithProperties clCreateImageWithProperties;
- cl_api_clSetContextDestructorCallback clSetContextDestructorCallback;
-
-} cl_icd_dispatch;
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* #ifndef OPENCL_CL_ICD_H */
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/paper_runfiles/env.sh b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/paper_runfiles/env.sh
deleted file mode 100644
index f3052f0ea1672a569e7775f8c54967d730a7b5ec..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/paper_runfiles/env.sh
+++ /dev/null
@@ -1,8 +0,0 @@
-DIRNAME="$(dirname "$0")"
-DIRNAME="$(realpath "$DIRNAME")"
-
-BINDIR="$DIRNAME/.."
-SRCDIR="$BINDIR/.."
-CONFIGDIR="$SRCDIR/configs"
-
-export PYTHONPATH="$SRCDIR:$PYTHONPATH"
diff --git a/spaces/Intoval/privateChatGPT/assets/custom.js b/spaces/Intoval/privateChatGPT/assets/custom.js
deleted file mode 100644
index b8071034f3618c541e3f4169c7fc6d6593d56f44..0000000000000000000000000000000000000000
--- a/spaces/Intoval/privateChatGPT/assets/custom.js
+++ /dev/null
@@ -1,224 +0,0 @@
-
-// custom javascript here
-
-const MAX_HISTORY_LENGTH = 32;
-
-var key_down_history = [];
-var currentIndex = -1;
-var user_input_ta;
-
-var gradioContainer = null;
-var user_input_ta = null;
-var user_input_tb = null;
-var userInfoDiv = null;
-var appTitleDiv = null;
-var chatbot = null;
-var apSwitch = null;
-
-var ga = document.getElementsByTagName("gradio-app");
-var targetNode = ga[0];
-var isInIframe = (window.self !== window.top);
-
-// Is the gradio page loaded yet??? Can we touch its elements now??
-function gradioLoaded(mutations) {
- for (var i = 0; i < mutations.length; i++) {
- if (mutations[i].addedNodes.length) {
- gradioContainer = document.querySelector(".gradio-container");
- user_input_tb = document.getElementById('user_input_tb');
- userInfoDiv = document.getElementById("user_info");
- appTitleDiv = document.getElementById("app_title");
- chatbot = document.querySelector('#chuanhu_chatbot');
- apSwitch = document.querySelector('.apSwitch input[type="checkbox"]');
-
-            if (gradioContainer && apSwitch) { // has gradioContainer loaded yet?
- adjustDarkMode();
- }
-            if (user_input_tb) { // has user_input_tb loaded yet?
- selectHistory();
- }
-            if (userInfoDiv && appTitleDiv) { // have userInfoDiv and appTitleDiv loaded yet?
-                setTimeout(showOrHideUserInfo, 2000); // pass the function itself so the call is actually deferred
- }
-            if (chatbot) { // has chatbot loaded yet?
- setChatbotHeight()
- }
- }
- }
-}
-
-function selectHistory() {
- user_input_ta = user_input_tb.querySelector("textarea");
- if (user_input_ta) {
-        observer.disconnect(); // stop observing
-        // listen for keydown events on the textarea
- user_input_ta.addEventListener("keydown", function (event) {
- var value = user_input_ta.value.trim();
-            // check whether an arrow key was pressed
- if (event.code === 'ArrowUp' || event.code === 'ArrowDown') {
-                // if an arrow key was pressed while the input box has content that is not in the history, do nothing
- if (value && key_down_history.indexOf(value) === -1)
- return;
-                // for the actions we do handle, prevent the default behavior
- event.preventDefault();
- var length = key_down_history.length;
- if (length === 0) {
-                    currentIndex = -1; // if the history is empty, just reset the current selection
- return;
- }
- if (currentIndex === -1) {
- currentIndex = length;
- }
- if (event.code === 'ArrowUp' && currentIndex > 0) {
- currentIndex--;
- user_input_ta.value = key_down_history[currentIndex];
- } else if (event.code === 'ArrowDown' && currentIndex < length - 1) {
- currentIndex++;
- user_input_ta.value = key_down_history[currentIndex];
- }
- user_input_ta.selectionStart = user_input_ta.value.length;
- user_input_ta.selectionEnd = user_input_ta.value.length;
- const input_event = new InputEvent("input", { bubbles: true, cancelable: true });
- user_input_ta.dispatchEvent(input_event);
- } else if (event.code === "Enter") {
- if (value) {
- currentIndex = -1;
- if (key_down_history.indexOf(value) === -1) {
- key_down_history.push(value);
- if (key_down_history.length > MAX_HISTORY_LENGTH) {
- key_down_history.shift();
- }
- }
- }
- }
- });
- }
-}
-
-function toggleUserInfoVisibility(shouldHide) {
- if (userInfoDiv) {
- if (shouldHide) {
- userInfoDiv.classList.add("hideK");
- } else {
- userInfoDiv.classList.remove("hideK");
- }
- }
-}
-function showOrHideUserInfo() {
- var sendBtn = document.getElementById("submit_btn");
-
- // Bind mouse/touch events to show/hide user info
- appTitleDiv.addEventListener("mouseenter", function () {
- toggleUserInfoVisibility(false);
- });
- userInfoDiv.addEventListener("mouseenter", function () {
- toggleUserInfoVisibility(false);
- });
- sendBtn.addEventListener("mouseenter", function () {
- toggleUserInfoVisibility(false);
- });
-
- appTitleDiv.addEventListener("mouseleave", function () {
- toggleUserInfoVisibility(true);
- });
- userInfoDiv.addEventListener("mouseleave", function () {
- toggleUserInfoVisibility(true);
- });
- sendBtn.addEventListener("mouseleave", function () {
- toggleUserInfoVisibility(true);
- });
-
- appTitleDiv.ontouchstart = function () {
- toggleUserInfoVisibility(false);
- };
- userInfoDiv.ontouchstart = function () {
- toggleUserInfoVisibility(false);
- };
- sendBtn.ontouchstart = function () {
- toggleUserInfoVisibility(false);
- };
-
- appTitleDiv.ontouchend = function () {
- setTimeout(function () {
- toggleUserInfoVisibility(true);
- }, 3000);
- };
- userInfoDiv.ontouchend = function () {
- setTimeout(function () {
- toggleUserInfoVisibility(true);
- }, 3000);
- };
- sendBtn.ontouchend = function () {
- setTimeout(function () {
- toggleUserInfoVisibility(true);
-        }, 3000); // Delay 3 seconds before hiding user info
- };
-
-    // Hide user info after 2 seconds
- setTimeout(function () {
- toggleUserInfoVisibility(true);
- }, 2000);
-}
-
-function toggleDarkMode(isEnabled) {
- if (isEnabled) {
- gradioContainer.classList.add("dark");
- document.body.style.setProperty("background-color", "var(--neutral-950)", "important");
- } else {
- gradioContainer.classList.remove("dark");
- document.body.style.backgroundColor = "";
- }
-}
-function adjustDarkMode() {
- const darkModeQuery = window.matchMedia("(prefers-color-scheme: dark)");
-
-    // set the initial state based on the current color scheme
- apSwitch.checked = darkModeQuery.matches;
- toggleDarkMode(darkModeQuery.matches);
-    // listen for changes to the preferred color scheme
- darkModeQuery.addEventListener("change", (e) => {
- apSwitch.checked = e.matches;
- toggleDarkMode(e.matches);
- });
- // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]');
- apSwitch.addEventListener("change", (e) => {
- toggleDarkMode(e.target.checked);
- });
-}
-
-function setChatbotHeight() {
- const screenWidth = window.innerWidth;
- const statusDisplay = document.querySelector('#status_display');
- const statusDisplayHeight = statusDisplay ? statusDisplay.offsetHeight : 0;
- const wrap = chatbot.querySelector('.wrap');
- const vh = window.innerHeight * 0.01;
- document.documentElement.style.setProperty('--vh', `${vh}px`);
- if (isInIframe) {
- chatbot.style.height = `700px`;
- wrap.style.maxHeight = `calc(700px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`
- } else {
- if (screenWidth <= 320) {
- chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`;
- wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
- } else if (screenWidth <= 499) {
- chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`;
- wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
- } else {
- chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`;
- wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
- }
- }
-}
-
-// watch for DOM changes inside the page
-var observer = new MutationObserver(function (mutations) {
- gradioLoaded(mutations);
-});
-observer.observe(targetNode, { childList: true, subtree: true });
-
-// watch for page changes
-window.addEventListener("DOMContentLoaded", function () {
- isInIframe = (window.self !== window.top);
-});
-window.addEventListener('resize', setChatbotHeight);
-window.addEventListener('scroll', setChatbotHeight);
-window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode);
\ No newline at end of file
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/dynamic_modules_utils.py b/spaces/Jackflack09/diffuse-custom/diffusers/dynamic_modules_utils.py
deleted file mode 100644
index 31f3bed2ecf9794b1bf9dab265af32f98dbb7afc..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/dynamic_modules_utils.py
+++ /dev/null
@@ -1,428 +0,0 @@
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Utilities to dynamically load objects from the Hub."""
-
-import importlib
-import inspect
-import os
-import re
-import shutil
-import sys
-from pathlib import Path
-from typing import Dict, Optional, Union
-
-from huggingface_hub import HfFolder, cached_download, hf_hub_download, model_info
-
-from .utils import DIFFUSERS_DYNAMIC_MODULE_NAME, HF_MODULES_CACHE, logging
-
-
-COMMUNITY_PIPELINES_URL = (
- "https://raw.githubusercontent.com/huggingface/diffusers/main/examples/community/{pipeline}.py"
-)
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-def init_hf_modules():
- """
- Creates the cache directory for modules with an init, and adds it to the Python path.
- """
- # This function has already been executed if HF_MODULES_CACHE already is in the Python path.
- if HF_MODULES_CACHE in sys.path:
- return
-
- sys.path.append(HF_MODULES_CACHE)
- os.makedirs(HF_MODULES_CACHE, exist_ok=True)
- init_path = Path(HF_MODULES_CACHE) / "__init__.py"
- if not init_path.exists():
- init_path.touch()
-
-
-def create_dynamic_module(name: Union[str, os.PathLike]):
- """
- Creates a dynamic module in the cache directory for modules.
- """
- init_hf_modules()
- dynamic_module_path = Path(HF_MODULES_CACHE) / name
- # If the parent module does not exist yet, recursively create it.
- if not dynamic_module_path.parent.exists():
- create_dynamic_module(dynamic_module_path.parent)
- os.makedirs(dynamic_module_path, exist_ok=True)
- init_path = dynamic_module_path / "__init__.py"
- if not init_path.exists():
- init_path.touch()
-
-
-def get_relative_imports(module_file):
- """
- Get the list of modules that are relatively imported in a module file.
-
- Args:
- module_file (`str` or `os.PathLike`): The module file to inspect.
- """
- with open(module_file, "r", encoding="utf-8") as f:
- content = f.read()
-
- # Imports of the form `import .xxx`
- relative_imports = re.findall("^\s*import\s+\.(\S+)\s*$", content, flags=re.MULTILINE)
- # Imports of the form `from .xxx import yyy`
- relative_imports += re.findall("^\s*from\s+\.(\S+)\s+import", content, flags=re.MULTILINE)
- # Unique-ify
- return list(set(relative_imports))
-
-
-def get_relative_import_files(module_file):
- """
- Get the list of all files that are needed for a given module. Note that this function recurses through the relative
- imports (if a imports b and b imports c, it will return module files for b and c).
-
- Args:
- module_file (`str` or `os.PathLike`): The module file to inspect.
- """
- no_change = False
- files_to_check = [module_file]
- all_relative_imports = []
-
- # Let's recurse through all relative imports
- while not no_change:
- new_imports = []
- for f in files_to_check:
- new_imports.extend(get_relative_imports(f))
-
- module_path = Path(module_file).parent
- new_import_files = [str(module_path / m) for m in new_imports]
- new_import_files = [f for f in new_import_files if f not in all_relative_imports]
- files_to_check = [f"{f}.py" for f in new_import_files]
-
- no_change = len(new_import_files) == 0
- all_relative_imports.extend(files_to_check)
-
- return all_relative_imports
-
-
-def check_imports(filename):
- """
- Check if the current Python environment contains all the libraries that are imported in a file.
- """
- with open(filename, "r", encoding="utf-8") as f:
- content = f.read()
-
- # Imports of the form `import xxx`
- imports = re.findall("^\s*import\s+(\S+)\s*$", content, flags=re.MULTILINE)
- # Imports of the form `from xxx import yyy`
- imports += re.findall("^\s*from\s+(\S+)\s+import", content, flags=re.MULTILINE)
- # Only keep the top-level module
- imports = [imp.split(".")[0] for imp in imports if not imp.startswith(".")]
-
- # Unique-ify and test we got them all
- imports = list(set(imports))
- missing_packages = []
- for imp in imports:
- try:
- importlib.import_module(imp)
- except ImportError:
- missing_packages.append(imp)
-
- if len(missing_packages) > 0:
- raise ImportError(
- "This modeling file requires the following packages that were not found in your environment: "
- f"{', '.join(missing_packages)}. Run `pip install {' '.join(missing_packages)}`"
- )
-
- return get_relative_imports(filename)
-
-
-def get_class_in_module(class_name, module_path):
- """
- Import a module on the cache directory for modules and extract a class from it.
- """
- module_path = module_path.replace(os.path.sep, ".")
- module = importlib.import_module(module_path)
-
- if class_name is None:
- return find_pipeline_class(module)
- return getattr(module, class_name)
-
-
-def find_pipeline_class(loaded_module):
- """
- Retrieve pipeline class that inherits from `DiffusionPipeline`. Note that there has to be exactly one class
- inheriting from `DiffusionPipeline`.
- """
- from .pipeline_utils import DiffusionPipeline
-
- cls_members = dict(inspect.getmembers(loaded_module, inspect.isclass))
-
- pipeline_class = None
- for cls_name, cls in cls_members.items():
- if (
- cls_name != DiffusionPipeline.__name__
- and issubclass(cls, DiffusionPipeline)
- and cls.__module__.split(".")[0] != "diffusers"
- ):
- if pipeline_class is not None:
- raise ValueError(
- f"Multiple classes that inherit from {DiffusionPipeline.__name__} have been found:"
- f" {pipeline_class.__name__}, and {cls_name}. Please make sure to define only one in"
- f" {loaded_module}."
- )
- pipeline_class = cls
-
- return pipeline_class
-
-
-def get_cached_module_file(
- pretrained_model_name_or_path: Union[str, os.PathLike],
- module_file: str,
- cache_dir: Optional[Union[str, os.PathLike]] = None,
- force_download: bool = False,
- resume_download: bool = False,
- proxies: Optional[Dict[str, str]] = None,
- use_auth_token: Optional[Union[bool, str]] = None,
- revision: Optional[str] = None,
- local_files_only: bool = False,
-):
- """
-    Downloads a module from a local folder or a remote repo and returns its path inside the cached
- Transformers module.
-
- Args:
- pretrained_model_name_or_path (`str` or `os.PathLike`):
- This can be either:
-
- - a string, the *model id* of a pretrained model configuration hosted inside a model repo on
- huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced
- under a user or organization name, like `dbmdz/bert-base-german-cased`.
- - a path to a *directory* containing a configuration file saved using the
- [`~PreTrainedTokenizer.save_pretrained`] method, e.g., `./my_model_directory/`.
-
- module_file (`str`):
- The name of the module file containing the class to look for.
- cache_dir (`str` or `os.PathLike`, *optional*):
- Path to a directory in which a downloaded pretrained model configuration should be cached if the standard
- cache should not be used.
- force_download (`bool`, *optional*, defaults to `False`):
- Whether or not to force to (re-)download the configuration files and override the cached versions if they
- exist.
- resume_download (`bool`, *optional*, defaults to `False`):
-            Whether or not to resume downloading from an incompletely received file, if such a file
-            exists, instead of deleting it and starting over.
- proxies (`Dict[str, str]`, *optional*):
- A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
- 'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
- use_auth_token (`str` or *bool*, *optional*):
- The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `transformers-cli login` (stored in `~/.huggingface`).
- revision (`str`, *optional*, defaults to `"main"`):
- The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
- git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
- identifier allowed by git.
- local_files_only (`bool`, *optional*, defaults to `False`):
- If `True`, will only try to load the tokenizer configuration from local files.
-
-
-
-    You may pass a token in `use_auth_token` if you are not logged in (`huggingface-cli login`) and want to use private
- or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models).
-
-
-
- Returns:
- `str`: The path to the module inside the cache.
- """
-    # Download and cache module_file from the repo `pretrained_model_name_or_path` or grab it if it's a local file.
- pretrained_model_name_or_path = str(pretrained_model_name_or_path)
-
- module_file_or_url = os.path.join(pretrained_model_name_or_path, module_file)
-
- if os.path.isfile(module_file_or_url):
- resolved_module_file = module_file_or_url
- submodule = "local"
- elif pretrained_model_name_or_path.count("/") == 0:
- # community pipeline on GitHub
- github_url = COMMUNITY_PIPELINES_URL.format(pipeline=pretrained_model_name_or_path)
- try:
- resolved_module_file = cached_download(
- github_url,
- cache_dir=cache_dir,
- force_download=force_download,
- proxies=proxies,
- resume_download=resume_download,
- local_files_only=local_files_only,
- use_auth_token=False,
- )
- submodule = "git"
- module_file = pretrained_model_name_or_path + ".py"
- except EnvironmentError:
- logger.error(f"Could not locate the {module_file} inside {pretrained_model_name_or_path}.")
- raise
- else:
- try:
- # Load from URL or cache if already cached
- resolved_module_file = hf_hub_download(
- pretrained_model_name_or_path,
- module_file,
- cache_dir=cache_dir,
- force_download=force_download,
- proxies=proxies,
- resume_download=resume_download,
- local_files_only=local_files_only,
- use_auth_token=use_auth_token,
- )
- submodule = os.path.join("local", "--".join(pretrained_model_name_or_path.split("/")))
- except EnvironmentError:
- logger.error(f"Could not locate the {module_file} inside {pretrained_model_name_or_path}.")
- raise
-
- # Check we have all the requirements in our environment
- modules_needed = check_imports(resolved_module_file)
-
- # Now we move the module inside our cached dynamic modules.
- full_submodule = DIFFUSERS_DYNAMIC_MODULE_NAME + os.path.sep + submodule
- create_dynamic_module(full_submodule)
- submodule_path = Path(HF_MODULES_CACHE) / full_submodule
- if submodule == "local" or submodule == "git":
- # We always copy local files (we could hash the file to see if there was a change, and give them the name of
- # that hash, to only copy when there is a modification but it seems overkill for now).
- # The only reason we do the copy is to avoid putting too many folders in sys.path.
- shutil.copy(resolved_module_file, submodule_path / module_file)
- for module_needed in modules_needed:
- module_needed = f"{module_needed}.py"
- shutil.copy(os.path.join(pretrained_model_name_or_path, module_needed), submodule_path / module_needed)
- else:
- # Get the commit hash
- # TODO: we will get this info in the etag soon, so retrieve it from there and not here.
- if isinstance(use_auth_token, str):
- token = use_auth_token
- elif use_auth_token is True:
- token = HfFolder.get_token()
- else:
- token = None
-
- commit_hash = model_info(pretrained_model_name_or_path, revision=revision, token=token).sha
-
- # The module file will end up being placed in a subfolder with the git hash of the repo. This way we get the
- # benefit of versioning.
- submodule_path = submodule_path / commit_hash
- full_submodule = full_submodule + os.path.sep + commit_hash
- create_dynamic_module(full_submodule)
-
- if not (submodule_path / module_file).exists():
- shutil.copy(resolved_module_file, submodule_path / module_file)
-        # Make sure we also have every file needed by the relative imports
- for module_needed in modules_needed:
- if not (submodule_path / module_needed).exists():
- get_cached_module_file(
- pretrained_model_name_or_path,
- f"{module_needed}.py",
- cache_dir=cache_dir,
- force_download=force_download,
- resume_download=resume_download,
- proxies=proxies,
- use_auth_token=use_auth_token,
- revision=revision,
- local_files_only=local_files_only,
- )
- return os.path.join(full_submodule, module_file)
-
-
-def get_class_from_dynamic_module(
- pretrained_model_name_or_path: Union[str, os.PathLike],
- module_file: str,
- class_name: Optional[str] = None,
- cache_dir: Optional[Union[str, os.PathLike]] = None,
- force_download: bool = False,
- resume_download: bool = False,
- proxies: Optional[Dict[str, str]] = None,
- use_auth_token: Optional[Union[bool, str]] = None,
- revision: Optional[str] = None,
- local_files_only: bool = False,
- **kwargs,
-):
- """
- Extracts a class from a module file, present in the local folder or repository of a model.
-
-
-
- Calling this function will execute the code in the module file found locally or downloaded from the Hub. It should
- therefore only be called on trusted repos.
-
-
-
- Args:
- pretrained_model_name_or_path (`str` or `os.PathLike`):
- This can be either:
-
- - a string, the *model id* of a pretrained model configuration hosted inside a model repo on
- huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced
- under a user or organization name, like `dbmdz/bert-base-german-cased`.
- - a path to a *directory* containing a configuration file saved using the
- [`~PreTrainedTokenizer.save_pretrained`] method, e.g., `./my_model_directory/`.
-
- module_file (`str`):
- The name of the module file containing the class to look for.
- class_name (`str`):
- The name of the class to import in the module.
- cache_dir (`str` or `os.PathLike`, *optional*):
- Path to a directory in which a downloaded pretrained model configuration should be cached if the standard
- cache should not be used.
- force_download (`bool`, *optional*, defaults to `False`):
- Whether or not to force to (re-)download the configuration files and override the cached versions if they
- exist.
- resume_download (`bool`, *optional*, defaults to `False`):
-            Whether or not to resume downloading from an incompletely received file, if such a file
-            exists, instead of deleting it and starting over.
- proxies (`Dict[str, str]`, *optional*):
- A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
- 'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
- use_auth_token (`str` or `bool`, *optional*):
- The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `transformers-cli login` (stored in `~/.huggingface`).
- revision (`str`, *optional*, defaults to `"main"`):
- The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
- git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
- identifier allowed by git.
- local_files_only (`bool`, *optional*, defaults to `False`):
- If `True`, will only try to load the tokenizer configuration from local files.
-
-
-
-    You may pass a token in `use_auth_token` if you are not logged in (`huggingface-cli login`) and want to use private
- or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models).
-
-
-
- Returns:
- `type`: The class, dynamically imported from the module.
-
- Examples:
-
- ```python
-    # Download module `modeling.py` from huggingface.co, cache it, then extract the class `MyBertModel` from
-    # the module.
- cls = get_class_from_dynamic_module("sgugger/my-bert-model", "modeling.py", "MyBertModel")
- ```"""
- # And lastly we get the class inside our newly created module
- final_module = get_cached_module_file(
- pretrained_model_name_or_path,
- module_file,
- cache_dir=cache_dir,
- force_download=force_download,
- resume_download=resume_download,
- proxies=proxies,
- use_auth_token=use_auth_token,
- revision=revision,
- local_files_only=local_files_only,
- )
- return get_class_in_module(class_name, final_module.replace(".py", ""))
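
The deleted `dynamic_modules_utils.py` above reduces to one pattern: resolve a module file (locally, from the community-pipelines URL, or from the Hub), copy it into an importable cache package, import it, and pull a class out of it. The snippet below is a minimal, hedged sketch of that pattern only; `CACHE_PKG` and `load_class_from_file` are illustrative names, not part of the deleted API, and the Hub download and commit-hash versioning are omitted.

```python
import importlib
import shutil
import sys
from pathlib import Path

# Illustrative cache location; the deleted code uses HF_MODULES_CACHE instead.
CACHE_PKG = Path.home() / ".cache" / "my_dynamic_modules"


def load_class_from_file(module_file: str, class_name: str):
    """Copy module_file into a cache package, import it, and return class_name."""
    CACHE_PKG.mkdir(parents=True, exist_ok=True)
    (CACHE_PKG / "__init__.py").touch(exist_ok=True)
    target = CACHE_PKG / Path(module_file).name
    shutil.copy(module_file, target)
    # Make the cache package importable, as init_hf_modules() does above.
    if str(CACHE_PKG.parent) not in sys.path:
        sys.path.append(str(CACHE_PKG.parent))
    module = importlib.import_module(f"{CACHE_PKG.name}.{target.stem}")
    return getattr(module, class_name)


# Hypothetical usage, assuming ./my_pipeline.py defines MyPipeline:
# MyPipeline = load_class_from_file("./my_pipeline.py", "MyPipeline")
```

This mirrors the `shutil.copy` + `importlib.import_module` + `getattr` sequence used by `get_cached_module_file` and `get_class_in_module` above.
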
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stochastic_karras_ve/__init__.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stochastic_karras_ve/__init__.py
deleted file mode 100644
index db2582043781130794e01b96b3e6beecbfe9f369..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stochastic_karras_ve/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# flake8: noqa
-from .pipeline_stochastic_karras_ve import KarrasVePipeline
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/shared.py b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/shared.py
deleted file mode 100644
index 89f0779459225957c13865ef7f7448efae6d1998..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/shared.py
+++ /dev/null
@@ -1,65 +0,0 @@
-from modules.presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST
-import os
-import queue
-import openai
-
-class State:
- interrupted = False
- multi_api_key = False
- completion_url = COMPLETION_URL
- balance_api_url = BALANCE_API_URL
- usage_api_url = USAGE_API_URL
-
- def interrupt(self):
- self.interrupted = True
-
- def recover(self):
- self.interrupted = False
-
- def set_api_host(self, api_host: str):
- api_host = api_host.rstrip("/")
- if not api_host.startswith("http"):
- api_host = f"https://{api_host}"
- if api_host.endswith("/v1"):
- api_host = api_host[:-3]
- self.completion_url = f"{api_host}/v1/chat/completions"
- self.balance_api_url = f"{api_host}/dashboard/billing/credit_grants"
- self.usage_api_url = f"{api_host}/dashboard/billing/usage"
- os.environ["OPENAI_API_BASE"] = api_host
-
- def reset_api_host(self):
- self.completion_url = COMPLETION_URL
- self.balance_api_url = BALANCE_API_URL
- self.usage_api_url = USAGE_API_URL
- os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}"
- return API_HOST
-
- def reset_all(self):
- self.interrupted = False
- self.completion_url = COMPLETION_URL
-
- def set_api_key_queue(self, api_key_list):
- self.multi_api_key = True
- self.api_key_queue = queue.Queue()
- for api_key in api_key_list:
- self.api_key_queue.put(api_key)
-
- def switching_api_key(self, func):
- if not hasattr(self, "api_key_queue"):
- return func
-
- def wrapped(*args, **kwargs):
- api_key = self.api_key_queue.get()
- args[0].api_key = api_key
- ret = func(*args, **kwargs)
- self.api_key_queue.put(api_key)
- return ret
-
- return wrapped
-
-
-state = State()
-
-modules_path = os.path.dirname(os.path.realpath(__file__))
-chuanhu_path = os.path.dirname(modules_path)
-assets_path = os.path.join(chuanhu_path, "web_assets")
\ No newline at end of file
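
The only non-obvious logic in the deleted `shared.py` above is the host normalization in `State.set_api_host`: strip a trailing slash, force an `https://` scheme, and drop a trailing `/v1`. The helper below restates that logic in isolation so the behavior can be checked without the module's other imports; it is an illustration, not part of the original file.

```python
def normalize_api_host(api_host: str) -> str:
    """Mirror of the normalization performed by State.set_api_host above."""
    api_host = api_host.rstrip("/")
    if not api_host.startswith("http"):
        api_host = f"https://{api_host}"
    if api_host.endswith("/v1"):
        api_host = api_host[:-3]
    return api_host


assert normalize_api_host("api.openai.com/v1/") == "https://api.openai.com"
assert normalize_api_host("https://example.com") == "https://example.com"
```
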
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/toolbox/utterance.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/toolbox/utterance.py
deleted file mode 100644
index 844c8a2adb0c8eba2992eaf5ea357d7add3c1896..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/toolbox/utterance.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from collections import namedtuple
-
-Utterance = namedtuple("Utterance", "name speaker_name wav spec embed partial_embeds synth")
-Utterance.__eq__ = lambda x, y: x.name == y.name
-Utterance.__hash__ = lambda x: hash(x.name)
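
Because the deleted `utterance.py` above overrides `__eq__` and `__hash__` to use only the `name` field, two utterances with the same name but different payloads compare equal and collapse to a single entry in sets and dicts, which is presumably what lets the toolbox refresh an utterance by re-adding it under the same name. A self-contained illustration (the field values are placeholders):

```python
from collections import namedtuple

Utterance = namedtuple("Utterance", "name speaker_name wav spec embed partial_embeds synth")
Utterance.__eq__ = lambda x, y: x.name == y.name
Utterance.__hash__ = lambda x: hash(x.name)

a = Utterance("utt_01", "alice", None, None, None, None, None)
b = Utterance("utt_01", "bob", None, None, None, None, None)
print(a == b)       # True: identity is based on the name only
print(len({a, b}))  # 1: entries with the same name collapse in a set
```
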
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/api/audio.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/api/audio.py
deleted file mode 100644
index b30e5dd9ad3a249c2a6e73d9f42372f0ed098b5a..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/api/audio.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import os
-from pathlib import Path
-from flask_restx import Namespace, Resource, fields
-from flask import Response, current_app
-
-api = Namespace('audios', description='Audios related operations')
-
-audio = api.model('Audio', {
- 'name': fields.String(required=True, description='The audio name'),
-})
-
-def generate(wav_path):
- with open(wav_path, "rb") as fwav:
- data = fwav.read(1024)
- while data:
- yield data
- data = fwav.read(1024)
-
-@api.route('/')
-class AudioList(Resource):
- @api.doc('list_audios')
- @api.marshal_list_with(audio)
- def get(self):
- '''List all audios'''
- audio_samples = []
- AUDIO_SAMPLES_DIR = current_app.config.get("AUDIO_SAMPLES_DIR")
- if os.path.isdir(AUDIO_SAMPLES_DIR):
- audio_samples = list(Path(AUDIO_SAMPLES_DIR).glob("*.wav"))
- return list(a.name for a in audio_samples)
-
-@api.route('/<name>')
-@api.param('name', 'The name of audio')
-@api.response(404, 'audio not found')
-class Audio(Resource):
- @api.doc('get_audio')
- @api.marshal_with(audio)
- def get(self, name):
-        '''Fetch an audio file given its name'''
- AUDIO_SAMPLES_DIR = current_app.config.get("AUDIO_SAMPLES_DIR")
- if Path(AUDIO_SAMPLES_DIR + name).exists():
- return Response(generate(AUDIO_SAMPLES_DIR + name), mimetype="audio/x-wav")
- api.abort(404)
-
\ No newline at end of file
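
The `generate` helper in the deleted `audio.py` above streams a WAV file to the client in 1 KiB chunks instead of reading it into memory at once. Below is a standalone sketch of the same pattern, decoupled from Flask; the function name and the file path in the usage comment are illustrative.

```python
def iter_file_chunks(path: str, chunk_size: int = 1024):
    """Yield a file's bytes in fixed-size chunks, mirroring generate() above."""
    with open(path, "rb") as fh:
        while True:
            data = fh.read(chunk_size)
            if not data:
                break
            yield data


# Hypothetical usage:
# total_bytes = sum(len(chunk) for chunk in iter_file_chunks("sample.wav"))
```
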
diff --git a/spaces/KevinQHLin/UniVTG/main/__init__.py b/spaces/KevinQHLin/UniVTG/main/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Kirihasan/rvc-jjjo/vc_infer_pipeline.py b/spaces/Kirihasan/rvc-jjjo/vc_infer_pipeline.py
deleted file mode 100644
index c26d45068f9b6bf2b194b13c3c89f8a06347c124..0000000000000000000000000000000000000000
--- a/spaces/Kirihasan/rvc-jjjo/vc_infer_pipeline.py
+++ /dev/null
@@ -1,306 +0,0 @@
-import numpy as np, parselmouth, torch, pdb
-from time import time as ttime
-import torch.nn.functional as F
-from config import x_pad, x_query, x_center, x_max
-import scipy.signal as signal
-import pyworld, os, traceback, faiss
-from scipy import signal
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-
-class VC(object):
- def __init__(self, tgt_sr, device, is_half):
-        self.sr = 16000  # hubert input sample rate
-        self.window = 160  # samples per frame
-        self.t_pad = self.sr * x_pad  # padding duration before and after each segment
- self.t_pad_tgt = tgt_sr * x_pad
- self.t_pad2 = self.t_pad * 2
-        self.t_query = self.sr * x_query  # search window on either side of a candidate cut point
-        self.t_center = self.sr * x_center  # spacing between candidate cut points
-        self.t_max = self.sr * x_max  # duration threshold below which no cut-point search is done
- self.device = device
- self.is_half = is_half
-
- def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None):
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # number of f0 points per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
- f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(int)  # np.int alias was removed in NumPy >= 1.24
- return f0_coarse, f0bak # 1-0
-
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9, # layer 9
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0])
-
- if (
- isinstance(index, type(None)) == False
- and isinstance(big_npy, type(None)) == False
- and index_rate != 0
- ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
- _, I = index.search(npy, 1)
- npy = big_npy[I.squeeze()]
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
- if pitch != None and pitchf != None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
- if pitch != None and pitchf != None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768)
- .data.cpu()
- .float()
- .numpy()
- .astype(np.int16)
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768)
- .data.cpu()
- .float()
- .numpy()
- .astype(np.int16)
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- file_big_npy,
- index_rate,
- if_f0,
- f0_file=None,
- ):
- if (
- file_big_npy != ""
- and file_index != ""
- and os.path.exists(file_big_npy) == True
- and os.path.exists(file_index) == True
- and index_rate != 0
- ):
- try:
- index = faiss.read_index(file_index)
- big_npy = np.load(file_big_npy)
- except:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- print("Feature retrieval library doesn't exist or ratio is 0")
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
- if hasattr(f0_file, "name") == True:
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
- except:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
- pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0)
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- audio_opt = np.concatenate(audio_opt)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
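
The pitch post-processing in `VC.get_f0` above maps an f0 contour in Hz onto 255 coarse bins on the mel scale: `f0_mel = 1127 * ln(1 + f0 / 700)`, rescaled linearly between the mel values of 50 Hz and 1100 Hz and clipped to [1, 255]. The function below is a standalone restatement of just that mapping for illustration; it is not the deleted class.

```python
import numpy as np


def f0_to_coarse(f0: np.ndarray, f0_min: float = 50.0, f0_max: float = 1100.0) -> np.ndarray:
    """Quantize an f0 contour (Hz) into 1..255 mel-scale bins, as in VC.get_f0 above."""
    f0_mel = 1127 * np.log(1 + f0 / 700)
    mel_min = 1127 * np.log(1 + f0_min / 700)
    mel_max = 1127 * np.log(1 + f0_max / 700)
    f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - mel_min) * 254 / (mel_max - mel_min) + 1
    f0_mel[f0_mel <= 1] = 1
    f0_mel[f0_mel > 255] = 255
    return np.rint(f0_mel).astype(np.int64)


print(f0_to_coarse(np.array([0.0, 50.0, 220.0, 1100.0])))  # unvoiced frames (f0 = 0) land in bin 1
```
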
diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/transforms/formatting.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/transforms/formatting.py
deleted file mode 100644
index e4d331636a883ce602e419e0867aea7b513b4d87..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/transforms/formatting.py
+++ /dev/null
@@ -1,353 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from collections import defaultdict
-from collections.abc import Sequence
-
-import cv2
-import numpy as np
-import torch
-import torchvision.transforms.functional as F
-from mmcv.transforms import BaseTransform
-from mmengine.utils import is_str
-from PIL import Image
-
-from mmpretrain.registry import TRANSFORMS
-from mmpretrain.structures import DataSample, MultiTaskDataSample
-
-
-def to_tensor(data):
- """Convert objects of various python types to :obj:`torch.Tensor`.
-
- Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`,
- :class:`Sequence`, :class:`int` and :class:`float`.
- """
- if isinstance(data, torch.Tensor):
- return data
- elif isinstance(data, np.ndarray):
- return torch.from_numpy(data)
- elif isinstance(data, Sequence) and not is_str(data):
- return torch.tensor(data)
- elif isinstance(data, int):
- return torch.LongTensor([data])
- elif isinstance(data, float):
- return torch.FloatTensor([data])
- else:
- raise TypeError(
- f'Type {type(data)} cannot be converted to tensor.'
- 'Supported types are: `numpy.ndarray`, `torch.Tensor`, '
- '`Sequence`, `int` and `float`')
-
-
-@TRANSFORMS.register_module()
-class PackInputs(BaseTransform):
- """Pack the inputs data.
-
- **Required Keys:**
-
- - ``input_key``
- - ``*algorithm_keys``
- - ``*meta_keys``
-
- **Deleted Keys:**
-
- All other keys in the dict.
-
- **Added Keys:**
-
- - inputs (:obj:`torch.Tensor`): The forward data of models.
- - data_samples (:obj:`~mmpretrain.structures.DataSample`): The
- annotation info of the sample.
-
- Args:
- input_key (str): The key of element to feed into the model forwarding.
- Defaults to 'img'.
- algorithm_keys (Sequence[str]): The keys of custom elements to be used
- in the algorithm. Defaults to an empty tuple.
- meta_keys (Sequence[str]): The keys of meta information to be saved in
- the data sample. Defaults to :attr:`PackInputs.DEFAULT_META_KEYS`.
-
- .. admonition:: Default algorithm keys
-
- Besides the specified ``algorithm_keys``, we will set some default keys
- into the output data sample and do some formatting. Therefore, you
- don't need to set these keys in the ``algorithm_keys``.
-
- - ``gt_label``: The ground-truth label. The value will be converted
- into a 1-D tensor.
- - ``gt_score``: The ground-truth score. The value will be converted
- into a 1-D tensor.
- - ``mask``: The mask for some self-supervise tasks. The value will
- be converted into a tensor.
-
- .. admonition:: Default meta keys
-
- - ``sample_idx``: The id of the image sample.
- - ``img_path``: The path to the image file.
- - ``ori_shape``: The original shape of the image as a tuple (H, W).
- - ``img_shape``: The shape of the image after the pipeline as a
- tuple (H, W).
- - ``scale_factor``: The scale factor between the resized image and
- the original image.
- - ``flip``: A boolean indicating if image flip transform was used.
- - ``flip_direction``: The flipping direction.
- """
-
- DEFAULT_META_KEYS = ('sample_idx', 'img_path', 'ori_shape', 'img_shape',
- 'scale_factor', 'flip', 'flip_direction')
-
- def __init__(self,
- input_key='img',
- algorithm_keys=(),
- meta_keys=DEFAULT_META_KEYS):
- self.input_key = input_key
- self.algorithm_keys = algorithm_keys
- self.meta_keys = meta_keys
-
- @staticmethod
- def format_input(input_):
- if isinstance(input_, list):
- return [PackInputs.format_input(item) for item in input_]
- elif isinstance(input_, np.ndarray):
- if input_.ndim == 2: # For grayscale image.
- input_ = np.expand_dims(input_, -1)
- if input_.ndim == 3 and not input_.flags.c_contiguous:
- input_ = np.ascontiguousarray(input_.transpose(2, 0, 1))
- input_ = to_tensor(input_)
- elif input_.ndim == 3:
- # convert to tensor first to accelerate, see
- # https://github.com/open-mmlab/mmdetection/pull/9533
- input_ = to_tensor(input_).permute(2, 0, 1).contiguous()
- else:
- # convert input with other shape to tensor without permute,
- # like video input (num_crops, C, T, H, W).
- input_ = to_tensor(input_)
- elif isinstance(input_, Image.Image):
- input_ = F.pil_to_tensor(input_)
- elif not isinstance(input_, torch.Tensor):
- raise TypeError(f'Unsupported input type {type(input_)}.')
-
- return input_
-
- def transform(self, results: dict) -> dict:
- """Method to pack the input data."""
-
- packed_results = dict()
- if self.input_key in results:
- input_ = results[self.input_key]
- packed_results['inputs'] = self.format_input(input_)
-
- data_sample = DataSample()
-
- # Set default keys
- if 'gt_label' in results:
- data_sample.set_gt_label(results['gt_label'])
- if 'gt_score' in results:
- data_sample.set_gt_score(results['gt_score'])
- if 'mask' in results:
- data_sample.set_mask(results['mask'])
-
- # Set custom algorithm keys
- for key in self.algorithm_keys:
- if key in results:
- data_sample.set_field(results[key], key)
-
- # Set meta keys
- for key in self.meta_keys:
- if key in results:
- data_sample.set_field(results[key], key, field_type='metainfo')
-
- packed_results['data_samples'] = data_sample
- return packed_results
-
- def __repr__(self) -> str:
- repr_str = self.__class__.__name__
- repr_str += f"(input_key='{self.input_key}', "
- repr_str += f'algorithm_keys={self.algorithm_keys}, '
- repr_str += f'meta_keys={self.meta_keys})'
- return repr_str
-
-
-@TRANSFORMS.register_module()
-class PackMultiTaskInputs(BaseTransform):
- """Convert all image labels of multi-task dataset to a dict of tensor.
-
- Args:
-        multi_task_fields (Sequence[str]): The fields whose values are dicts
-            keyed by task name and should be split into per-task results.
-        input_key (str): The key of element to feed into the model forwarding.
-            Defaults to 'img'.
-        task_handlers (dict): A mapping from task name to the pack transform
-            used for that task; tasks without an entry fall back to
-            ``PackInputs``. Defaults to an empty dict.
- """
-
- def __init__(self,
- multi_task_fields,
- input_key='img',
- task_handlers=dict()):
- self.multi_task_fields = multi_task_fields
- self.input_key = input_key
- self.task_handlers = defaultdict(PackInputs)
- for task_name, task_handler in task_handlers.items():
- self.task_handlers[task_name] = TRANSFORMS.build(task_handler)
-
- def transform(self, results: dict) -> dict:
- """Method to pack the input data.
-
-        Example of an input ``results`` dict::
-
-            results = {'img_path': 'a.png', 'gt_label': {'task1': 1, 'task3': 3},
-                       'img': np.array(...)}
- """
- packed_results = dict()
- results = results.copy()
-
- if self.input_key in results:
- input_ = results[self.input_key]
- packed_results['inputs'] = PackInputs.format_input(input_)
-
- task_results = defaultdict(dict)
- for field in self.multi_task_fields:
- if field in results:
- value = results.pop(field)
- for k, v in value.items():
- task_results[k].update({field: v})
-
- data_sample = MultiTaskDataSample()
- for task_name, task_result in task_results.items():
- task_handler = self.task_handlers[task_name]
- task_pack_result = task_handler({**results, **task_result})
- data_sample.set_field(task_pack_result['data_samples'], task_name)
-
- packed_results['data_samples'] = data_sample
- return packed_results
-
- def __repr__(self):
- repr = self.__class__.__name__
- task_handlers = ', '.join(
- f"'{name}': {handler.__class__.__name__}"
- for name, handler in self.task_handlers.items())
- repr += f'(multi_task_fields={self.multi_task_fields}, '
- repr += f"input_key='{self.input_key}', "
- repr += f'task_handlers={{{task_handlers}}})'
- return repr
-
-
-@TRANSFORMS.register_module()
-class Transpose(BaseTransform):
- """Transpose numpy array.
-
- **Required Keys:**
-
- - ``*keys``
-
- **Modified Keys:**
-
- - ``*keys``
-
- Args:
- keys (List[str]): The fields to convert to tensor.
- order (List[int]): The output dimensions order.
- """
-
- def __init__(self, keys, order):
- self.keys = keys
- self.order = order
-
- def transform(self, results):
- """Method to transpose array."""
- for key in self.keys:
- results[key] = results[key].transpose(self.order)
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + \
- f'(keys={self.keys}, order={self.order})'
-
-
-@TRANSFORMS.register_module(('NumpyToPIL', 'ToPIL'))
-class NumpyToPIL(BaseTransform):
- """Convert the image from OpenCV format to :obj:`PIL.Image.Image`.
-
- **Required Keys:**
-
- - ``img``
-
- **Modified Keys:**
-
- - ``img``
-
- Args:
-        to_rgb (bool): Whether to convert the image from BGR to RGB. Defaults to False.
- """
-
- def __init__(self, to_rgb: bool = False) -> None:
- self.to_rgb = to_rgb
-
- def transform(self, results: dict) -> dict:
- """Method to convert images to :obj:`PIL.Image.Image`."""
- img = results['img']
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) if self.to_rgb else img
-
- results['img'] = Image.fromarray(img)
- return results
-
- def __repr__(self) -> str:
- return self.__class__.__name__ + f'(to_rgb={self.to_rgb})'
-
-
-@TRANSFORMS.register_module(('PILToNumpy', 'ToNumpy'))
-class PILToNumpy(BaseTransform):
- """Convert img to :obj:`numpy.ndarray`.
-
- **Required Keys:**
-
- - ``img``
-
- **Modified Keys:**
-
- - ``img``
-
- Args:
-        to_bgr (bool): Whether to convert the image from RGB to BGR. Defaults to False.
- dtype (str, optional): The dtype of the converted numpy array.
- Defaults to None.
- """
-
- def __init__(self, to_bgr: bool = False, dtype=None) -> None:
- self.to_bgr = to_bgr
- self.dtype = dtype
-
- def transform(self, results: dict) -> dict:
- """Method to convert img to :obj:`numpy.ndarray`."""
- img = np.array(results['img'], dtype=self.dtype)
- img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR) if self.to_bgr else img
-
- results['img'] = img
- return results
-
- def __repr__(self) -> str:
- return self.__class__.__name__ + \
- f'(to_bgr={self.to_bgr}, dtype={self.dtype})'
-
-
-@TRANSFORMS.register_module()
-class Collect(BaseTransform):
- """Collect and only reserve the specified fields.
-
- **Required Keys:**
-
- - ``*keys``
-
- **Deleted Keys:**
-
- All keys except those in the argument ``*keys``.
-
- Args:
- keys (Sequence[str]): The keys of the fields to be collected.
- """
-
- def __init__(self, keys):
- self.keys = keys
-
- def transform(self, results):
- data = {}
- for key in self.keys:
- data[key] = results[key]
- return data
-
- def __repr__(self):
- return self.__class__.__name__ + f'(keys={self.keys})'
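
The central convention enforced by `PackInputs.format_input` above is that an `(H, W, C)` numpy image, or an `(H, W)` grayscale image, comes out as a contiguous `(C, H, W)` tensor ready for the model. The snippet below is a standalone numpy/torch sketch of that conversion, independent of mmpretrain's registry and data samples.

```python
import numpy as np
import torch


def to_chw_tensor(img: np.ndarray) -> torch.Tensor:
    """Convert an (H, W[, C]) image array into a contiguous (C, H, W) tensor."""
    if img.ndim == 2:            # grayscale: add a trailing channel axis
        img = np.expand_dims(img, -1)
    return torch.from_numpy(img).permute(2, 0, 1).contiguous()


img = np.zeros((224, 224, 3), dtype=np.uint8)  # dummy HWC image
print(to_chw_tensor(img).shape)                # torch.Size([3, 224, 224])
```
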
diff --git a/spaces/Lamai/LAMAIGPT/autogpt/commands/file_operations.py b/spaces/Lamai/LAMAIGPT/autogpt/commands/file_operations.py
deleted file mode 100644
index ad145ec956dd9dafd39e09c2244d001cf5febd2f..0000000000000000000000000000000000000000
--- a/spaces/Lamai/LAMAIGPT/autogpt/commands/file_operations.py
+++ /dev/null
@@ -1,267 +0,0 @@
-"""File operations for AutoGPT"""
-from __future__ import annotations
-
-import os
-import os.path
-from typing import Generator
-
-import requests
-from colorama import Back, Fore
-from requests.adapters import HTTPAdapter, Retry
-
-from autogpt.spinner import Spinner
-from autogpt.utils import readable_file_size
-from autogpt.workspace import WORKSPACE_PATH, path_in_workspace
-
-LOG_FILE = "file_logger.txt"
-LOG_FILE_PATH = WORKSPACE_PATH / LOG_FILE
-
-
-def check_duplicate_operation(operation: str, filename: str) -> bool:
- """Check if the operation has already been performed on the given file
-
- Args:
- operation (str): The operation to check for
- filename (str): The name of the file to check for
-
- Returns:
- bool: True if the operation has already been performed on the file
- """
- log_content = read_file(LOG_FILE)
- log_entry = f"{operation}: {filename}\n"
- return log_entry in log_content
-
-
-def log_operation(operation: str, filename: str) -> None:
- """Log the file operation to the file_logger.txt
-
- Args:
- operation (str): The operation to log
- filename (str): The name of the file the operation was performed on
- """
- log_entry = f"{operation}: {filename}\n"
-
- # Create the log file if it doesn't exist
- if not os.path.exists(LOG_FILE_PATH):
- with open(LOG_FILE_PATH, "w", encoding="utf-8") as f:
- f.write("File Operation Logger ")
-
- append_to_file(LOG_FILE, log_entry, shouldLog=False)
-
-
-def split_file(
- content: str, max_length: int = 4000, overlap: int = 0
-) -> Generator[str, None, None]:
- """
- Split text into chunks of a specified maximum length with a specified overlap
- between chunks.
-
- :param content: The input text to be split into chunks
- :param max_length: The maximum length of each chunk,
- default is 4000 (about 1k token)
- :param overlap: The number of overlapping characters between chunks,
- default is no overlap
- :return: A generator yielding chunks of text
- """
- start = 0
- content_length = len(content)
-
- while start < content_length:
- end = start + max_length
- if end + overlap < content_length:
- chunk = content[start : end + overlap - 1]
- else:
- chunk = content[start:content_length]
-
- # Account for the case where the last chunk is shorter than the overlap, so it has already been consumed
- if len(chunk) <= overlap:
- break
-
- yield chunk
- start += max_length - overlap
-
-
-def read_file(filename: str) -> str:
- """Read a file and return the contents
-
- Args:
- filename (str): The name of the file to read
-
- Returns:
- str: The contents of the file
- """
- try:
- filepath = path_in_workspace(filename)
- with open(filepath, "r", encoding="utf-8") as f:
- content = f.read()
- return content
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def ingest_file(
- filename: str, memory, max_length: int = 4000, overlap: int = 200
-) -> None:
- """
- Ingest a file by reading its content, splitting it into chunks with a specified
- maximum length and overlap, and adding the chunks to the memory storage.
-
- :param filename: The name of the file to ingest
- :param memory: An object with an add() method to store the chunks in memory
- :param max_length: The maximum length of each chunk, default is 4000
- :param overlap: The number of overlapping characters between chunks, default is 200
- """
- try:
- print(f"Working with file {filename}")
- content = read_file(filename)
- content_length = len(content)
- print(f"File length: {content_length} characters")
-
- chunks = list(split_file(content, max_length=max_length, overlap=overlap))
-
- num_chunks = len(chunks)
- for i, chunk in enumerate(chunks):
- print(f"Ingesting chunk {i + 1} / {num_chunks} into memory")
- memory_to_add = (
- f"Filename: {filename}\n" f"Content part#{i + 1}/{num_chunks}: {chunk}"
- )
-
- memory.add(memory_to_add)
-
- print(f"Done ingesting {num_chunks} chunks from {filename}.")
- except Exception as e:
- print(f"Error while ingesting file '{filename}': {str(e)}")
-
-
-def write_to_file(filename: str, text: str) -> str:
- """Write text to a file
-
- Args:
- filename (str): The name of the file to write to
- text (str): The text to write to the file
-
- Returns:
- str: A message indicating success or failure
- """
- if check_duplicate_operation("write", filename):
- return "Error: File has already been updated."
- try:
- filepath = path_in_workspace(filename)
- directory = os.path.dirname(filepath)
- if not os.path.exists(directory):
- os.makedirs(directory)
- with open(filepath, "w", encoding="utf-8") as f:
- f.write(text)
- log_operation("write", filename)
- return "File written to successfully."
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def append_to_file(filename: str, text: str, shouldLog: bool = True) -> str:
- """Append text to a file
-
- Args:
- filename (str): The name of the file to append to
- text (str): The text to append to the file
- shouldLog (bool): Whether to record the operation in file_logger.txt (defaults to True)
-
- Returns:
- str: A message indicating success or failure
- """
- try:
- filepath = path_in_workspace(filename)
- with open(filepath, "a") as f:
- f.write(text)
-
- if shouldLog:
- log_operation("append", filename)
-
- return "Text appended successfully."
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def delete_file(filename: str) -> str:
- """Delete a file
-
- Args:
- filename (str): The name of the file to delete
-
- Returns:
- str: A message indicating success or failure
- """
- if check_duplicate_operation("delete", filename):
- return "Error: File has already been deleted."
- try:
- filepath = path_in_workspace(filename)
- os.remove(filepath)
- log_operation("delete", filename)
- return "File deleted successfully."
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def search_files(directory: str) -> list[str]:
- """Search for files in a directory
-
- Args:
- directory (str): The directory to search in
-
- Returns:
- list[str]: A list of files found in the directory
- """
- found_files = []
-
- if directory in {"", "/"}:
- search_directory = WORKSPACE_PATH
- else:
- search_directory = path_in_workspace(directory)
-
- for root, _, files in os.walk(search_directory):
- for file in files:
- if file.startswith("."):
- continue
- relative_path = os.path.relpath(os.path.join(root, file), WORKSPACE_PATH)
- found_files.append(relative_path)
-
- return found_files
-
-
-def download_file(url, filename):
- """Downloads a file
- Args:
- url (str): URL of the file to download
- filename (str): Filename to save the file as
- """
- safe_filename = path_in_workspace(filename)
- try:
- message = f"{Fore.YELLOW}Downloading file from {Back.LIGHTBLUE_EX}{url}{Back.RESET}{Fore.RESET}"
- with Spinner(message) as spinner:
- session = requests.Session()
- retry = Retry(total=3, backoff_factor=1, status_forcelist=[502, 503, 504])
- adapter = HTTPAdapter(max_retries=retry)
- session.mount("http://", adapter)
- session.mount("https://", adapter)
-
- total_size = 0
- downloaded_size = 0
-
- with session.get(url, allow_redirects=True, stream=True) as r:
- r.raise_for_status()
- total_size = int(r.headers.get("Content-Length", 0))
- downloaded_size = 0
-
- with open(safe_filename, "wb") as f:
- for chunk in r.iter_content(chunk_size=8192):
- f.write(chunk)
- downloaded_size += len(chunk)
-
- # Update the progress message
- progress = f"{readable_file_size(downloaded_size)} / {readable_file_size(total_size)}"
- spinner.update_message(f"{message} {progress}")
-
- return f'Successfully downloaded and locally stored file: "{filename}"! (Size: {readable_file_size(total_size)})'
- except requests.HTTPError as e:
- return f"Got an HTTP Error whilst trying to download file: {e}"
- except Exception as e:
- return "Error: " + str(e)
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/demucs/model.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/demucs/model.py
deleted file mode 100644
index e9d932f4d014f7b95b394d2e24ed5edc379ded8d..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/demucs/model.py
+++ /dev/null
@@ -1,202 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import julius
-from torch import nn
-
-from .utils import capture_init, center_trim
-
-
-class BLSTM(nn.Module):
- def __init__(self, dim, layers=1):
- super().__init__()
- self.lstm = nn.LSTM(bidirectional=True, num_layers=layers, hidden_size=dim, input_size=dim)
- self.linear = nn.Linear(2 * dim, dim)
-
- def forward(self, x):
- x = x.permute(2, 0, 1)
- x = self.lstm(x)[0]
- x = self.linear(x)
- x = x.permute(1, 2, 0)
- return x
-
-
-def rescale_conv(conv, reference):
- std = conv.weight.std().detach()
- scale = (std / reference)**0.5
- conv.weight.data /= scale
- if conv.bias is not None:
- conv.bias.data /= scale
-
-
-def rescale_module(module, reference):
- for sub in module.modules():
- if isinstance(sub, (nn.Conv1d, nn.ConvTranspose1d)):
- rescale_conv(sub, reference)
-
-
-class Demucs(nn.Module):
- @capture_init
- def __init__(self,
- sources,
- audio_channels=2,
- channels=64,
- depth=6,
- rewrite=True,
- glu=True,
- rescale=0.1,
- resample=True,
- kernel_size=8,
- stride=4,
- growth=2.,
- lstm_layers=2,
- context=3,
- normalize=False,
- samplerate=44100,
- segment_length=4 * 10 * 44100):
- """
- Args:
- sources (list[str]): list of source names
- audio_channels (int): stereo or mono
- channels (int): first convolution channels
- depth (int): number of encoder/decoder layers
- rewrite (bool): add 1x1 convolution to each encoder layer
- and a convolution to each decoder layer.
- For the decoder layer, `context` gives the kernel size.
- glu (bool): use glu instead of ReLU
- resample (bool): upsample x2 the input and downsample /2 the output.
- rescale (int): rescale initial weights of convolutions
- to get their standard deviation closer to `rescale`
- kernel_size (int): kernel size for convolutions
- stride (int): stride for convolutions
- growth (float): multiply (resp divide) number of channels by that
- for each layer of the encoder (resp decoder)
- lstm_layers (int): number of lstm layers, 0 = no lstm
- context (int): kernel size of the convolution in the
- decoder before the transposed convolution. If > 1,
- will provide some context from neighboring time
- steps.
- samplerate (int): stored as meta information for easing
- future evaluations of the model.
- segment_length (int): stored as meta information for easing
- future evaluations of the model. Length of the segments on which
- the model was trained.
- """
-
- super().__init__()
- self.audio_channels = audio_channels
- self.sources = sources
- self.kernel_size = kernel_size
- self.context = context
- self.stride = stride
- self.depth = depth
- self.resample = resample
- self.channels = channels
- self.normalize = normalize
- self.samplerate = samplerate
- self.segment_length = segment_length
-
- self.encoder = nn.ModuleList()
- self.decoder = nn.ModuleList()
-
- if glu:
- activation = nn.GLU(dim=1)
- ch_scale = 2
- else:
- activation = nn.ReLU()
- ch_scale = 1
- in_channels = audio_channels
- for index in range(depth):
- encode = []
- encode += [nn.Conv1d(in_channels, channels, kernel_size, stride), nn.ReLU()]
- if rewrite:
- encode += [nn.Conv1d(channels, ch_scale * channels, 1), activation]
- self.encoder.append(nn.Sequential(*encode))
-
- decode = []
- if index > 0:
- out_channels = in_channels
- else:
- out_channels = len(self.sources) * audio_channels
- if rewrite:
- decode += [nn.Conv1d(channels, ch_scale * channels, context), activation]
- decode += [nn.ConvTranspose1d(channels, out_channels, kernel_size, stride)]
- if index > 0:
- decode.append(nn.ReLU())
- self.decoder.insert(0, nn.Sequential(*decode))
- in_channels = channels
- channels = int(growth * channels)
-
- channels = in_channels
-
- if lstm_layers:
- self.lstm = BLSTM(channels, lstm_layers)
- else:
- self.lstm = None
-
- if rescale:
- rescale_module(self, reference=rescale)
-
- def valid_length(self, length):
- """
- Return the nearest valid length to use with the model so that
- no time steps are left over after the convolutions, i.e. for all
- layers, (size of the input - kernel_size) % stride = 0.
-
- If the mixture has a valid length, the estimated sources
- will have exactly the same length when context = 1. If context > 1,
- the two signals can be center trimmed to match.
-
- For training, extracts should have a valid length. For evaluation
- on full tracks we recommend passing `pad = True` to :method:`forward`.
- """
- if self.resample:
- length *= 2
- for _ in range(self.depth):
- length = math.ceil((length - self.kernel_size) / self.stride) + 1
- length = max(1, length)
- length += self.context - 1
- for _ in range(self.depth):
- length = (length - 1) * self.stride + self.kernel_size
-
- if self.resample:
- length = math.ceil(length / 2)
- return int(length)
-
- def forward(self, mix):
- x = mix
-
- if self.normalize:
- mono = mix.mean(dim=1, keepdim=True)
- mean = mono.mean(dim=-1, keepdim=True)
- std = mono.std(dim=-1, keepdim=True)
- else:
- mean = 0
- std = 1
-
- x = (x - mean) / (1e-5 + std)
-
- if self.resample:
- x = julius.resample_frac(x, 1, 2)
-
- saved = []
- for encode in self.encoder:
- x = encode(x)
- saved.append(x)
- if self.lstm:
- x = self.lstm(x)
- for decode in self.decoder:
- skip = center_trim(saved.pop(-1), x)
- x = x + skip
- x = decode(x)
-
- if self.resample:
- x = julius.resample_frac(x, 2, 1)
- x = x * std + mean
- x = x.view(x.size(0), len(self.sources), self.audio_channels, x.size(-1))
- return x
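A minimal sketch of how valid_length is meant to be used (not part of the deleted file; it assumes torch plus the julius and utils dependencies imported above): pad the mixture to the nearest valid length, run the model, then center-trim the estimates back to the original length.

    import torch
    import torch.nn.functional as F

    model = Demucs(sources=["drums", "bass", "other", "vocals"])
    mix = torch.randn(1, 2, 44100 * 5)                 # (batch, audio_channels, time)
    target_len = model.valid_length(mix.shape[-1])     # nearest length with no leftover steps
    padded = F.pad(mix, (0, target_len - mix.shape[-1]))
    with torch.no_grad():
        sources = center_trim(model(padded), mix)      # (batch, n_sources, audio_channels, time)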
diff --git a/spaces/Legal-ease/legal-ease/base/document_search.py b/spaces/Legal-ease/legal-ease/base/document_search.py
deleted file mode 100644
index 5e1c68cca4f9ad1083602c2525ff2c46378db348..0000000000000000000000000000000000000000
--- a/spaces/Legal-ease/legal-ease/base/document_search.py
+++ /dev/null
@@ -1,212 +0,0 @@
-import os
-import cohere
-from typing import List
-from qdrant_client import QdrantClient
-from qdrant_client import models
-
-from .constants import (
- MULTILINGUAL_EMBEDDING_MODEL,
- ENGLISH_EMBEDDING_MODEL,
- SEARCH_QDRANT_COLLECTION_NAME,
- TRANSLATE_BASED_ON_USER_QUERY,
- TEXT_GENERATION_MODEL,
- USE_MULTILINGUAL_EMBEDDING,
-)
-
-# load environment variables
-QDRANT_HOST = os.environ.get("QDRANT_HOST")
-QDRANT_API_KEY = os.environ.get("QDRANT_API_KEY")
-COHERE_API_KEY = os.environ.get("COHERE_API_KEY")
-
-# create qdrant and cohere client
-cohere_client = cohere.Client(COHERE_API_KEY)
-
-qdrant_client = QdrantClient(
- host=QDRANT_HOST,
- prefer_grpc=False,
- api_key=QDRANT_API_KEY,
- port=443,
-)
-
-
-def embed_user_query(user_query: str) -> List:
- """
- Create an embedding for the given query by the user using Cohere's Embed API.
-
- Args:
- user_query (`str`):
- The input query by the user based on which search will be performed with the help of Qdrant.
-
- Returns:
- query_embedding (`List`):
- A list of numbers or vector representing the user query.
- """
- if USE_MULTILINGUAL_EMBEDDING:
- model_name = MULTILINGUAL_EMBEDDING_MODEL
- else:
- model_name = ENGLISH_EMBEDDING_MODEL
-
- embeddings = cohere_client.embed(
- texts=[user_query],
- model=model_name,
- )
- query_embedding = embeddings.embeddings[0]
- return query_embedding
-
-
-def search_docs_for_query(
- query_embedding: List,
- num_results: int,
- user_query: str,
- languages: List,
- match_text: List,
-) -> List:
- """
- Perform search on the collection of documents for the given user query using Qdrant's search API.
- Args:
- query_embedding (`List`):
- A vector representing the user query.
- num_results (`int`):
- The number of expected search results.
- user_query (`str`):
- The user input based on which search will be performed.
- languages (`List`):
- The list of languages based on which search results must be filtered.
- match_text (`List`):
- A field based on which it is decided whether to perform full-text-match while performing search.
- Returns:
- results (`List[ScoredPoint]`):
- A list of `ScoredPoint` objects returned via Qdrant's search API.
- """
-
- filters = []
-
- language_mapping = {
- "Dutch": "nl",
- "English": "en",
- "French": "fr",
- "Hungarian": "hu",
- "Italian": "it",
- "Norwegian": "nb",
- "Polish": "pl",
- }
-
- # prepare filters to narrow down search results
-
- # if the `match_text` list is not empty then create filter to find exact matching text in the documents
- if match_text:
- filters.append(
- models.FieldCondition(
- key="text",
- match=models.MatchText(text=user_query),
- )
- )
-
- # filter documents based on language before performing search:
- if languages:
- for lang in languages:
- filters.append(
- models.FieldCondition(
- key="language",
- match=models.MatchValue(
- value=language_mapping[lang],
- ),
- )
- )
-
- # perform search and get results
- results = qdrant_client.search(
- collection_name=SEARCH_QDRANT_COLLECTION_NAME,
- query_filter=models.Filter(should=filters),
- search_params=models.SearchParams(hnsw_ef=128, exact=False),
- query_vector=query_embedding,
- limit=num_results,
- )
- return results
-
-
-def translate_search_result(input_sentence, user_query):
- """
- Translate a given input sentence to the required target language. The required target language is `English` by default.
- The target language can be changed to match the language that the user used to type his search query by setting the `TRANSLATE_BASED_ON_USER_QUERY` to `True`.
- Args:
- input_sentence (`str`):
- The sentence which needs to be translated into the required target language.
- user_query (`str`):
- The user input based on which the target language for translation will be determined if `TRANSLATE_BASED_ON_USER_QUERY` is set to `True`.
- Returns:
- translation (`str`):
- The final translation result obtained using Cohere's Generate API.
- """
- response = cohere_client.tokenize(text=input_sentence)
-
- src_detected_lang = cohere_client.detect_language(texts=[input_sentence])
- src_current_lang = src_detected_lang.results[0].language_name
-
- if TRANSLATE_BASED_ON_USER_QUERY:
- target_detected_lang = cohere_client.detect_language(texts=[user_query])
- target_current_lang = target_detected_lang.results[0].language_name
- else:
- target_current_lang = "English"
-
- if target_current_lang == src_current_lang:
- return input_sentence
-
- prompt = f"""
- Translate this sentence from {src_current_lang} to {target_current_lang}: '{input_sentence}'.
-
- Don't include the above prompt in the final translation. The final output should only include the translation of the input sentence.
- """
-
- response = cohere_client.generate(
- model=TEXT_GENERATION_MODEL,
- prompt=prompt,
- max_tokens=len(response.tokens) * 3,
- temperature=0.6,
- stop_sequences=["--"],
- )
-
- translation = response.generations[0].text
-
- return translation
-
-
-def cross_lingual_document_search(
- user_input: str, num_results: int, languages, text_match
-) -> List:
- """
- Wrapper function for performing search on the collection of documents for the given user query.
- Prepares query embedding, retrieves search results, checks if expected number of search results are being returned.
- Args:
- user_input (`str`):
- The user input based on which search will be performed.
- num_results (`int`):
- The number of expected search results.
- languages (`List`):
- The list of languages based on which search results must be filtered.
- text_match (`List`):
- A field based on which it is decided whether to perform full-text-match while performing search.
- Returns:
- final_results (`List[str]`):
- A list containing the final search results corresponding to the given user input.
- """
- # create an embedding for the input query
- query_embedding = embed_user_query(user_input)
-
- # retrieve search results
- result = search_docs_for_query(
- query_embedding,
- num_results,
- user_input,
- languages,
- text_match,
- )
- final_results = [result[i].payload["text"] for i in range(len(result))]
-
- # check if number of search results obtained (i.e. `final_results`) is matching with number of expected search results i.e. `num_results`
- if num_results > len(final_results):
- remaining_inputs = num_results - len(final_results)
- for input in range(remaining_inputs):
- final_results.append("")
-
- return final_results
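A minimal usage sketch for the wrapper above (not part of the deleted file; it assumes the Cohere and Qdrant credentials read at the top of this module are set, and the query is a placeholder):

    results = cross_lingual_document_search(
        "termination clauses in employment contracts",
        num_results=3,
        languages=["English", "French"],
        text_match=[],
    )
    # Always returns exactly num_results strings; missing hits are padded with "".
    print(results)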
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/sizers/__init__.py b/spaces/Lianjd/stock_dashboard/backtrader/sizers/__init__.py
deleted file mode 100644
index 4250db5eac8b587ab958014026e02bcd58523d4d..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/sizers/__init__.py
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see .
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-# The modules below should/must define __all__ with the objects wishes
-# or prepend an "_" (underscore) to private classes/variables
-
-from .fixedsize import *
-from .percents_sizer import *
diff --git a/spaces/LightAI/README/README.md b/spaces/LightAI/README/README.md
deleted file mode 100644
index 316459c24cbf729bbc5c711668bee87c7d31c30a..0000000000000000000000000000000000000000
--- a/spaces/LightAI/README/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: 'Light AI '
-emoji: 💡
-colorFrom: purple
-colorTo: indigo
-sdk: static
-pinned: false
-license: apache-2.0
----
-
-# Light AI
-
- Light Research is an arm of Light AI exploring next generation interfaces for Conversational AI
-
-# Repo
-
-https://github.com/light-hq
diff --git a/spaces/MRiwu/Collection/modules.py b/spaces/MRiwu/Collection/modules.py
deleted file mode 100644
index 3484f6a1f4c1c06855c37a1ff4e66c58864acb38..0000000000000000000000000000000000000000
--- a/spaces/MRiwu/Collection/modules.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 0."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels =hidden_channels
- self.kernel_size = kernel_size,
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
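A minimal round-trip sketch (not part of the deleted file; it assumes this module's commons and transforms dependencies import cleanly): ResidualCouplingLayer above is an invertible flow step, so applying it forward and then with reverse=True should recover the input.

    import torch

    layer = ResidualCouplingLayer(channels=4, hidden_channels=8, kernel_size=3,
                                  dilation_rate=1, n_layers=2)
    x = torch.randn(1, 4, 20)
    x_mask = torch.ones(1, 1, 20)
    y, _logdet = layer(x, x_mask)
    x_recovered = layer(y, x_mask, reverse=True)
    print(torch.allclose(x, x_recovered, atol=1e-5))  # expected: True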
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/utils/onnx.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/utils/onnx.py
deleted file mode 100644
index 4297b31291e036700d6ad0b818afb7dd72da3054..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/utils/onnx.py
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-
-from typing import Tuple
-
-from ..modeling import Sam
-from .amg import calculate_stability_score
-
-
-class SamOnnxModel(nn.Module):
- """
- This model should not be called directly, but is used in ONNX export.
- It combines the prompt encoder, mask decoder, and mask postprocessing of Sam,
- with some functions modified to enable model tracing. Also supports extra
- options controlling what information is returned. See the ONNX export script for details.
- """
-
- def __init__(
- self,
- model: Sam,
- return_single_mask: bool,
- use_stability_score: bool = False,
- return_extra_metrics: bool = False,
- ) -> None:
- super().__init__()
- self.mask_decoder = model.mask_decoder
- self.model = model
- self.img_size = model.image_encoder.img_size
- self.return_single_mask = return_single_mask
- self.use_stability_score = use_stability_score
- self.stability_score_offset = 1.0
- self.return_extra_metrics = return_extra_metrics
-
- @staticmethod
- def resize_longest_image_size(
- input_image_size: torch.Tensor, longest_side: int
- ) -> torch.Tensor:
- input_image_size = input_image_size.to(torch.float32)
- scale = longest_side / torch.max(input_image_size)
- transformed_size = scale * input_image_size
- transformed_size = torch.floor(transformed_size + 0.5).to(torch.int64)
- return transformed_size
-
- def _embed_points(self, point_coords: torch.Tensor, point_labels: torch.Tensor) -> torch.Tensor:
- point_coords = point_coords + 0.5
- point_coords = point_coords / self.img_size
- point_embedding = self.model.prompt_encoder.pe_layer._pe_encoding(point_coords)
- point_labels = point_labels.unsqueeze(-1).expand_as(point_embedding)
-
- point_embedding = point_embedding * (point_labels != -1)
- point_embedding = point_embedding + self.model.prompt_encoder.not_a_point_embed.weight * (
- point_labels == -1
- )
-
- for i in range(self.model.prompt_encoder.num_point_embeddings):
- point_embedding = point_embedding + self.model.prompt_encoder.point_embeddings[
- i
- ].weight * (point_labels == i)
-
- return point_embedding
-
- def _embed_masks(self, input_mask: torch.Tensor, has_mask_input: torch.Tensor) -> torch.Tensor:
- mask_embedding = has_mask_input * self.model.prompt_encoder.mask_downscaling(input_mask)
- mask_embedding = mask_embedding + (
- 1 - has_mask_input
- ) * self.model.prompt_encoder.no_mask_embed.weight.reshape(1, -1, 1, 1)
- return mask_embedding
-
- def mask_postprocessing(self, masks: torch.Tensor, orig_im_size: torch.Tensor) -> torch.Tensor:
- masks = F.interpolate(
- masks,
- size=(self.img_size, self.img_size),
- mode="bilinear",
- align_corners=False,
- )
-
- prepadded_size = self.resize_longest_image_size(orig_im_size, self.img_size)
- masks = masks[..., : int(prepadded_size[0]), : int(prepadded_size[1])]
-
- orig_im_size = orig_im_size.to(torch.int64)
- h, w = orig_im_size[0], orig_im_size[1]
- masks = F.interpolate(masks, size=(h, w), mode="bilinear", align_corners=False)
- return masks
-
- def select_masks(
- self, masks: torch.Tensor, iou_preds: torch.Tensor, num_points: int
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- # Determine if we should return the multiclick mask or not from the number of points.
- # The reweighting is used to avoid control flow.
- score_reweight = torch.tensor(
- [[1000] + [0] * (self.model.mask_decoder.num_mask_tokens - 1)]
- ).to(iou_preds.device)
- score = iou_preds + (num_points - 2.5) * score_reweight
- best_idx = torch.argmax(score, dim=1)
- masks = masks[torch.arange(masks.shape[0]), best_idx, :, :].unsqueeze(1)
- iou_preds = iou_preds[torch.arange(masks.shape[0]), best_idx].unsqueeze(1)
-
- return masks, iou_preds
-
- @torch.no_grad()
- def forward(
- self,
- image_embeddings: torch.Tensor,
- point_coords: torch.Tensor,
- point_labels: torch.Tensor,
- mask_input: torch.Tensor,
- has_mask_input: torch.Tensor,
- orig_im_size: torch.Tensor,
- ):
- sparse_embedding = self._embed_points(point_coords, point_labels)
- dense_embedding = self._embed_masks(mask_input, has_mask_input)
-
- masks, scores = self.model.mask_decoder.predict_masks(
- image_embeddings=image_embeddings,
- image_pe=self.model.prompt_encoder.get_dense_pe(),
- sparse_prompt_embeddings=sparse_embedding,
- dense_prompt_embeddings=dense_embedding,
- )
-
- if self.use_stability_score:
- scores = calculate_stability_score(
- masks, self.model.mask_threshold, self.stability_score_offset
- )
-
- if self.return_single_mask:
- masks, scores = self.select_masks(masks, scores, point_coords.shape[1])
-
- upscaled_masks = self.mask_postprocessing(masks, orig_im_size)
-
- if self.return_extra_metrics:
- stability_scores = calculate_stability_score(
- upscaled_masks, self.model.mask_threshold, self.stability_score_offset
- )
- areas = (upscaled_masks > self.model.mask_threshold).sum(-1).sum(-1)
- return upscaled_masks, scores, stability_scores, areas, masks
-
- return upscaled_masks, scores, masks
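A small worked check of the resizing helper above (not part of the deleted file; it assumes torch is available): a 600x800 input whose longest side is scaled to 1024 keeps its aspect ratio.

    import torch

    size = SamOnnxModel.resize_longest_image_size(torch.tensor([600, 800]), longest_side=1024)
    print(size)  # tensor([ 768, 1024])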
diff --git a/spaces/Matthijs/mms-tts-demo/uroman/lib/NLP/utilities.pm b/spaces/Matthijs/mms-tts-demo/uroman/lib/NLP/utilities.pm
deleted file mode 100644
index 7be117449190533d826bd63b9266c1434d00408f..0000000000000000000000000000000000000000
--- a/spaces/Matthijs/mms-tts-demo/uroman/lib/NLP/utilities.pm
+++ /dev/null
@@ -1,3652 +0,0 @@
-################################################################
-# #
-# utilities #
-# #
-################################################################
-
-package NLP::utilities;
-
-use File::Spec;
-use Time::HiRes qw(time);
-use Time::Local;
-use NLP::English;
-use NLP::UTF8;
-
-$utf8 = NLP::UTF8;
-$englishPM = NLP::English;
-
-%empty_ht = ();
-
-use constant DEBUGGING => 0;
-
-sub member {
- local($this,$elem,@array) = @_;
-
- my $a;
- if (defined($elem)) {
- foreach $a (@array) {
- if (defined($a)) {
- return 1 if $elem eq $a;
- } else {
- $DB::single = 1; # debugger breakpoint
- print STDERR "\nWarning: Undefined variable utilities::member::a\n";
- }
- }
- } else {
- $DB::single = 1; # debugger breakpoint
- print STDERR "\nWarning: Undefined variable utilities::member::elem\n";
- }
- return 0;
-}
-
-sub dual_member {
- local($this,$elem1,$elem2,*array1,*array2) = @_;
- # returns 1 if there exists a position $n
- # such that $elem1 occurs at position $n in @array1
- # and $elem2 occurs at same position $n in @array2
-
- return 0 unless defined($elem1) && defined($elem2);
- my $last_index = ($#array1 < $#array2) ? $#array1 : $#array2; #min
- my $a;
- my $b;
- foreach $i ((0 .. $last_index)) {
- return 1 if defined($a = $array1[$i]) && defined($b = $array2[$i]) && ($a eq $elem1) && ($b eq $elem2);
- }
- return 0;
-}
-
-sub sorted_list_equal {
- local($this,*list1,*list2) = @_;
-
- return 0 unless $#list1 == $#list2;
- foreach $i ((0 .. $#list1)) {
- return 0 unless $list1[$i] eq $list2[$i];
- }
- return 1;
-}
-
-sub trim {
- local($this, $s) = @_;
-
- $s =~ s/^\s*//;
- $s =~ s/\s*$//;
- $s =~ s/\s+/ /g;
- return $s;
-}
-
-sub trim2 {
- local($this, $s) = @_;
-
- $s =~ s/^\s*//;
- $s =~ s/\s*$//;
- return $s;
-}
-
-sub trim_left {
- local($this, $s) = @_;
- $s =~ s/^\s*//;
- return $s;
-}
-
-sub cap_member {
- local($this,$elem,@array) = @_;
-
- my $a;
- my $lc_elem = lc $elem;
- foreach $a (@array) {
- return $a if $lc_elem eq lc $a;
- }
- return "";
-}
-
-sub remove_elem {
- local($this,$elem,@array) = @_;
-
- return @array unless $this->member($elem, @array);
- @rm_list = ();
- foreach $a (@array) {
- push(@rm_list, $a) unless $elem eq $a;
- }
- return @rm_list;
-}
-
-sub intersect_p {
- local($this,*list1,*list2) = @_;
-
- foreach $elem1 (@list1) {
- if (defined($elem1)) {
- foreach $elem2 (@list2) {
- if (defined($elem2)) {
- return 1 if $elem1 eq $elem2;
- } else {
- $DB::single = 1; # debugger breakpoint
- print STDERR "\nWarning: Undefined variable utilities::intersect_p::elem2\n";
- }
- }
- } else {
- $DB::single = 1; # debugger breakpoint
- print STDERR "\nWarning: Undefined variable utilities::intersect_p::elem1\n";
- }
- }
- return 0;
-}
-
-sub intersect_expl_p {
- local($this,*list1,@list2) = @_;
-
- foreach $elem1 (@list1) {
- foreach $elem2 (@list2) {
- return 1 if $elem1 eq $elem2;
- }
- }
- return 0;
-}
-
-sub intersection {
- local($this,*list1,*list2) = @_;
-
- @intersection_list = ();
- foreach $elem1 (@list1) {
- foreach $elem2 (@list2) {
- push(@intersection_list, $elem1) if ($elem1 eq $elem2) && ! $this->member($elem1, @intersection_list);
- }
- }
- return @intersection_list;
-}
-
-sub cap_intersect_p {
- local($this,*list1,*list2) = @_;
-
- foreach $elem1 (@list1) {
- $lc_elem1 = lc $elem1;
- foreach $elem2 (@list2) {
- return 1 if $lc_elem1 eq lc $elem2;
- }
- }
- return 0;
-}
-
-sub subset_p {
- local($this,*list1,*list2) = @_;
-
- foreach $elem1 (@list1) {
- return 0 unless $this->member($elem1, @list2);
- }
- return 1;
-}
-
-sub cap_subset_p {
- local($this,*list1,*list2) = @_;
-
- foreach $elem1 (@list1) {
- return 0 unless $this->cap_member($elem1, @list2);
- }
- return 1;
-}
-
-sub unique {
- local($this, @list) = @_;
-
- my %seen = ();
- @uniq = ();
- foreach $item (@list) {
- push(@uniq, $item) unless $seen{$item}++;
- }
- return @uniq;
-}
-
-sub position {
- local($this,$elem,@array) = @_;
- $i = 0;
- foreach $a (@array) {
- return $i if $elem eq $a;
- $i++;
- }
- return -1;
-}
-
-sub positions {
- local($this,$elem,@array) = @_;
- $i = 0;
- @positions_in_list = ();
- foreach $a (@array) {
- push(@positions_in_list, $i) if $elem eq $a;
- $i++;
- }
- return @positions_in_list;
-}
-
-sub last_position {
- local($this,$elem,@array) = @_;
-
- $result = -1;
- $i = 0;
- foreach $a (@array) {
- $result = $i if $elem eq $a;
- $i++;
- }
- return $result;
-}
-
-sub rand_n_digit_number {
- local($this,$n) = @_;
-
- return 0 unless $n =~ /^[1-9]\d*$/;
- $ten_power_n = 10 ** ($n - 1);
- return int(rand(9 * $ten_power_n)) + $ten_power_n;
-}
-
-# Consider File::Temp
-sub new_tmp_filename {
- local($this,$filename) = @_;
-
- $loop_limit = 1000;
- ($dir,$simple_filename) = ($filename =~ /^(.+)\/([^\/]+)$/);
- $simple_filename = $filename unless defined($simple_filename);
- $new_filename = "$dir/tmp-" . $this->rand_n_digit_number(8) . "-$simple_filename";
- while ((-e $new_filename) && ($loop_limit-- >= 0)) {
- $new_filename = "$dir/tmp-" . $this->rand_n_digit_number(8) . "-$simple_filename";
- }
- return $new_filename;
-}
-
-# support sorting order: "8", "8.0", "8.5", "8.5.1.", "8.10", "10", "10-12"
-
-sub compare_complex_numeric {
- local($this,$a,$b) = @_;
-
- (my $a_num,my $a_rest) = ($a =~ /^(\d+)\D*(.*)$/);
- (my $b_num,my $b_rest) = ($b =~ /^(\d+)\D*(.*)$/);
-
- if (defined($a_rest) && defined($b_rest)) {
- return ($a_num <=> $b_num)
- || $this->compare_complex_numeric($a_rest,$b_rest);
- } else {
- return $a cmp $b;
- }
-}
-
-# support sorting order: "lesson8-ps-v1.9.xml", "Lesson 10_ps-v_1.11.xml"
-# approach: segment strings into alphabetic and numerical sections and compare pairwise
-
-sub compare_mixed_alpha_numeric {
- local($this,$a,$b) = @_;
-
- ($a_alpha,$a_num,$a_rest) = ($a =~ /^(\D*)(\d[-\d\.]*)(.*)$/);
- ($b_alpha,$b_num,$b_rest) = ($b =~ /^(\D*)(\d[-\d\.]*)(.*)$/);
-
- ($a_alpha) = ($a =~ /^(\D*)/) unless defined $a_alpha;
- ($b_alpha) = ($b =~ /^(\D*)/) unless defined $b_alpha;
-
- # ignore non-alphabetic characters in alpha sections
- $a_alpha =~ s/\W|_//g;
- $b_alpha =~ s/\W|_//g;
-
- if ($alpha_cmp = lc $a_alpha cmp lc $b_alpha) {
- return $alpha_cmp;
- } elsif (defined($a_rest) && defined($b_rest)) {
- return $this->compare_complex_numeric($a_num,$b_num)
- || $this->compare_mixed_alpha_numeric ($a_rest,$b_rest);
- } else {
- return (defined($a_num) <=> defined($b_num)) || ($a cmp $b);
- }
-}
-
-# @sorted_lessons = sort { NLP::utilities->compare_mixed_alpha_numeric($a,$b) } @lessons;
-
-sub html_guarded_p {
- local($this,$string) = @_;
-
- return 0 if $string =~ /[<>"]/;
- $string .= " ";
- @segs = split('&',$string);
- shift @segs;
- foreach $seg (@segs) {
- next if $seg =~ /^[a-z]{2,6};/i;
- # next if $seg =~ /^amp;/;
- # next if $seg =~ /^quot;/;
- # next if $seg =~ /^nbsp;/;
- # next if $seg =~ /^gt;/;
- # next if $seg =~ /^lt;/;
- next if $seg =~ /^#(\d+);/;
- next if $seg =~ /^#x([0-9a-fA-F]+);/;
- return 0;
- }
- return 1;
-}
-
-sub guard_tooltip_text {
- local($this,$string) = @_;
-
- $string =~ s/\xCB\x88/'/g;
- return $string;
-}
-
-sub guard_html {
- local($this,$string,$control_string) = @_;
-
- return "" unless defined($string);
- my $guarded_string;
- $control_string = "" unless defined($control_string);
- return $string if ($string =~ /&/)
- && (! ($control_string =~ /\bstrict\b/))
- && $this->html_guarded_p($string);
- $guarded_string = $string;
- $guarded_string =~ s/&/&amp;/g;
- if ($control_string =~ /slash quote/) {
- $guarded_string =~ s/"/\\"/g;
- } elsif ($control_string =~ /keep quote/) {
- } else {
- $guarded_string =~ s/\"/"/g;
- }
- if ($control_string =~ /escape-slash/) {
- $guarded_string =~ s/\//&x2F;/g;
- }
- $guarded_string =~ s/>/&gt;/g;
- $guarded_string =~ s/</&lt;/g;
- return $guarded_string;
-}
-
-sub unguard_html {
- local($this,$string) = @_;
-
- return undef unless defined($string);
- $string=~ s[&(\S*?);]{
- local $_ = $1;
- /^amp$/i ? "&" :
- /^quot$/i ? '"' :
- /^apos$/i ? "'" :
- /^gt$/i ? ">" :
- /^lt$/i ? "<" :
- /^x2F$/i ? "/" :
- /^nbsp$/i ? "\xC2\xA0" :
- /^#(\d+)$/ ? $this->chr($1) :
- /^#x([0-9a-f]+)$/i ? $this->chr(hex($1)) :
- $_
- }gex;
- return $string;
-}
-
-sub unguard_html_r {
- local($this,$string) = @_;
-
- return undef unless defined($string);
-
- $string =~ s/&amp;/&/g;
- $string =~ s/&quot;/'/g;
- $string =~ s/&lt;/</g;
- $string =~ s/&gt;/>/g;
-
- ($d) = ($string =~ /&#(\d+);/);
- while (defined($d)) {
- $c = $this->chr($d);
- $string =~ s/&#$d;/$c/g;
- ($d) = ($string =~ /&#(\d+);/);
- }
- ($x) = ($string =~ /&#x([0-9a-f]+);/i);
- while (defined($x)) {
- $c = $this->chr(hex($x));
- $string =~ s/&#x$x;/$c/g;
- ($x) = ($string =~ /&#x([0-9a-f]+);/i);
- }
- $string0 = $string;
- ($x) = ($string =~ /(?:https?|www|\.com)\S*\%([0-9a-f]{2,2})/i);
- while (defined($x)) {
- $c = $this->chr("%" . hex($x));
- $string =~ s/\%$x/$c/g;
- ($x) = ($string =~ /(?:https?|www|\.com)\S*\%([0-9a-f]{2,2})/i);
- }
- return $string;
-}
-
-sub unguard_html_l {
- local($caller,$string) = @_;
-
- return undef unless defined($string);
-
- my $pre;
- my $core;
- my $post;
- my $repl;
- my $s = $string;
- if (($pre,$core,$post) = ($s =~ /^(.*)&(amp|quot|lt|gt|#\d+|#x[0-9a-f]+);(.*)$/i)) {
- $repl = "?";
- $repl = "&" if $core =~ /^amp$/i;
- $repl = "'" if $core =~ /^quot$/i;
- $repl = "<" if $core =~ /^lt$/i;
- $repl = ">" if $core =~ /^gt$/i;
- if ($core =~ /^#\d+$/i) {
- $core2 = substr($core,1);
- $repl = $caller->chr($core2);
- }
- $repl = $caller->chr(hex(substr($core,2))) if $core =~ /^#x[0-9a-f]+$/i;
- $s = $pre . $repl . $post;
- }
- return $s;
-}
-
-sub guard_html_quote {
- local($caller,$string) = @_;
-
- $string =~ s/"/&quot;/g;
- return $string;
-}
-
-sub unguard_html_quote {
- local($caller,$string) = @_;
-
- $string =~ s/&quot;/"/g;
- return $string;
-}
-
-sub uri_encode {
- local($caller,$string) = @_;
-
- $string =~ s/([^^A-Za-z0-9\-_.!~*()'])/ sprintf "%%%02x", ord $1 /eg;
- return $string;
-}
-
-sub uri_decode {
- local($caller,$string) = @_;
-
- $string =~ s/%([0-9A-Fa-f]{2})/chr(hex($1))/eg;
- return $string;
-}
-
-sub remove_xml_tags {
- local($caller,$string) = @_;
-
- $string =~ s/<\/?[a-zA-Z][-_:a-zA-Z0-9]*(\s+[a-zA-Z][-_:a-zA-Z0-9]*=\"[^"]*\")*\s*\/?>//g;
- return $string;
-}
-
-sub remove_any_tokenization_at_signs_around_xml_tags {
- local($caller,$string) = @_;
-
- $string =~ s/(?:\@ \@)?(<[^<>]+>)(?:\@ \@)?/$1/g;
- $string =~ s/\@?(<[^<>]+>)\@?/$1/g;
- return $string;
-}
-
-sub remove_xml_tags_and_any_bordering_at_signs {
- # at-signs from tokenization
- local($caller,$string) = @_;
-
- $string =~ s/\@?<\/?[a-zA-Z][-_:a-zA-Z0-9]*(\s+[a-zA-Z][-_:a-zA-Z0-9]*=\"[^"]*\")*\s*\/?>\@?//g;
- return $string;
-}
-
-sub chr {
- local($caller,$i) = @_;
-
- return undef unless $i =~ /^\%?\d+$/;
- if ($i =~ /^%/) {
- $i =~ s/^\%//;
- return chr($i) if $i < 128;
- return "\x80" | chr($i - 128) if $i < 256;
- } else {
- return chr($i) if $i < 128;
- return ("\xC0" | chr(($i / 64) % 32))
- . ("\x80" | chr($i % 64)) if $i < 2048;
- return ("\xE0" | chr(int($i / 4096) % 16))
- . ("\x80" | chr(int($i / 64) % 64))
- . ("\x80" | chr($i % 64)) if $i < 65536;
- return ("\xF0" | chr(int($i / 262144) % 8))
- . ("\x80" | chr(int($i / 4096) % 64))
- . ("\x80" | chr(int($i / 64) % 64))
- . ("\x80" | chr($i % 64)) if $i < 2097152;
- }
- return "?";
-}
-
-sub guard_cgi {
- local($caller, $string) = @_;
-
- $guarded_string = $string;
- if ($string =~ /[\x80-\xFF]/) {
- $guarded_string = "";
- while ($string ne "") {
- $char = substr($string, 0, 1);
- $string = substr($string, 1);
- if ($char =~ /^[\\ ;\#\&\:\=\"\'\+\?\x00-\x1F\x80-\xFF]$/) {
- $hex = sprintf("%2.2x",ord($char));
- $guarded_string .= uc "%$hex";
- } else {
- $guarded_string .= $char;
- }
- }
- } else {
- $guarded_string = $string;
- $guarded_string =~ s/%/%25/g;
- $guarded_string =~ s/\n/%5Cn/g;
- $guarded_string =~ s/\t/%5Ct/g;
- $guarded_string =~ s/ /%20/g;
- $guarded_string =~ s/"/%22/g;
- $guarded_string =~ s/#/%23/g;
- $guarded_string =~ s/&/%26/g;
- $guarded_string =~ s/'/%27/g;
- $guarded_string =~ s/\+/%2B/g;
- $guarded_string =~ s/\//%2F/g;
- $guarded_string =~ s/:/%3A/g;
- $guarded_string =~ s/;/%3B/g;
- $guarded_string =~ s/</%3C/g;
- $guarded_string =~ s/=/%3D/g;
- $guarded_string =~ s/>/%3E/g;
- $guarded_string =~ s/\?/%3F/g;
- }
- return $guarded_string;
-}
-
-sub repair_cgi_guard {
- local($caller,$string) = @_;
- # undo second cgi-guard, e.g. "Jo%25C3%25ABlle_Aubron" -> "Jo%C3%ABlle_Aubron"
-
- $string =~ s/(%)25([CD][0-9A-F]%)25([89AB][0-9A-F])/$1$2$3/g;
- $string =~ s/(%)25(E[0-9A-F]%)25([89AB][0-9A-F]%)25([89AB][0-9A-F])/$1$2$3$4/g;
- return $string;
-}
-
-sub unguard_cgi {
- local($caller,$string) = @_;
-
- $unguarded_string = $string;
- $unguarded_string =~ s/%5Cn/\n/g;
- $unguarded_string =~ s/%5Ct/\t/g;
- $unguarded_string =~ s/%20/ /g;
- $unguarded_string =~ s/%23/#/g;
- $unguarded_string =~ s/%26/&/g;
- $unguarded_string =~ s/%2B/+/g;
- $unguarded_string =~ s/%2C/,/g;
- $unguarded_string =~ s/%3A/:/g;
- $unguarded_string =~ s/%3D/=/g;
- $unguarded_string =~ s/%3F/?/g;
- $unguarded_string =~ s/%C3%A9/\xC3\xA9/g;
-
- # more general
- ($code) = ($unguarded_string =~ /%([0-9A-F]{2,2})/);
- while (defined($code)) {
- $percent_code = "%" . $code;
- $hex_code = sprintf("%c", hex($code));
- $unguarded_string =~ s/$percent_code/$hex_code/g;
- ($code) = ($unguarded_string =~ /%([0-9A-F]{2,2})/);
- }
-
- return $unguarded_string;
-}
-
-sub regex_guard {
- local($caller,$string) = @_;
-
- $guarded_string = $string;
- $guarded_string =~ s/([\\\/\^\|\(\)\{\}\$\@\*\+\?\.\[\]])/\\$1/g
- if $guarded_string =~ /[\\\/\^\|\(\)\{\}\$\@\*\+\?\.\[\]]/;
-
- return $guarded_string;
-}
-
-sub g_regex_spec_tok_p {
- local($this,$string) = @_;
-
- # specials: ( ) (?: ) [ ]
- return ($string =~ /^(\(\?:|[()\[\]])$/);
-}
-
-sub regex_guard_norm {
- local($this,$string) = @_;
-
- return $string unless $string =~ /[\[\]\\()$@?+]/;
- my $rest = $string;
- my @stack = ("");
- while ($rest ne "") {
- # specials: ( ) (?: ) [ ] ? +
- if (($pre, $special, $post) = ($rest =~ /^((?:\\.|[^\[\]()?+])*)(\(\?:|[\[\]()?+])(.*)$/)) {
- # print STDERR "Special: $pre *$special* $post\n";
- unless ($pre eq "") {
- push(@stack, $pre);
- while (($#stack >= 1) && (! $this->g_regex_spec_tok_p($stack[$#stack-1]))
- && (! $this->g_regex_spec_tok_p($stack[$#stack]))) {
- $s1 = pop @stack;
- $s2 = pop @stack;
- push(@stack, "$s2$s1");
- }
- }
- if ($special =~ /^[?+]$/) {
- push(@stack, "\\") if ($stack[$#stack] eq "")
- || ($this->g_regex_spec_tok_p($stack[$#stack]) && ($stack[$#stack] ne "["));
- push(@stack, $special);
- } elsif ($special eq "]") {
- if (($#stack >= 1) && ($stack[$#stack-1] eq "[") && ! $this->g_regex_spec_tok_p($stack[$#stack])) {
- $char_expression = pop @stack;
- pop @stack;
- push(@stack, "[$char_expression]");
- } else {
- push(@stack, $special);
- }
- } elsif (($special =~ /^[()]/) && (($stack[$#stack] eq "[")
- || (($#stack >= 1)
- && ($stack[$#stack-1] eq "[")
- && ! $this->g_regex_spec_tok_p($stack[$#stack])))) {
- push(@stack, "\\$special");
- } elsif ($special eq ")") {
- if (($#stack >= 1) && ($stack[$#stack-1] =~ /^\((\?:)?$/) && ! $this->g_regex_spec_tok_p($stack[$#stack])) {
- $alt_expression = pop @stack;
- $open_para = pop @stack;
- if ($open_para eq "(") {
- push(@stack, "(?:$alt_expression)");
- } else {
- push(@stack, "$open_para$alt_expression)");
- }
- } else {
- push(@stack, $special);
- }
- } else {
- push(@stack, $special);
- }
- while (($#stack >= 1) && (! $this->g_regex_spec_tok_p($stack[$#stack-1]))
- && (! $this->g_regex_spec_tok_p($stack[$#stack]))) {
- $s1 = pop @stack;
- $s2 = pop @stack;
- push(@stack, "$s2$s1");
- }
- $rest = $post;
- } else {
- push(@stack, $rest);
- $rest = "";
- }
- }
- # print STDERR "Stack: " . join(";", @stack) . "\n";
- foreach $i ((0 .. $#stack)) {
- $stack_elem = $stack[$i];
- if ($stack_elem =~ /^[()\[\]]$/) {
- $stack[$i] = "\\" . $stack[$i];
- }
- }
- return join("", @stack);
-}
-
-sub string_guard {
- local($caller,$string) = @_;
-
- return "" unless defined($string);
- $guarded_string = $string;
- $guarded_string =~ s/([\\"])/\\$1/g
- if $guarded_string =~ /[\\"]/;
-
- return $guarded_string;
-}
-
-sub json_string_guard {
- local($caller,$string) = @_;
-
- return "" unless defined($string);
- $guarded_string = $string;
- $guarded_string =~ s/([\\"])/\\$1/g
- if $guarded_string =~ /[\\"]/;
- $guarded_string =~ s/\r*\n/\\n/g
- if $guarded_string =~ /\n/;
-
- return $guarded_string;
-}
-
-sub json_string_unguard {
- local($caller,$string) = @_;
-
- return "" unless defined($string);
- $string =~ s/\\n/\n/g
- if $string =~ /\\n/;
- return $string;
-}
-
-sub guard_javascript_arg {
- local($caller,$string) = @_;
-
- return "" unless defined($string);
- $guarded_string = $string;
- $guarded_string =~ s/\\/\\\\/g;
- $guarded_string =~ s/'/\\'/g;
- return $guarded_string;
-}
-
-sub guard_substitution_right_hand_side {
- # "$1x" => "$1 . \"x\""
- local($caller,$string) = @_;
-
- my $result = "";
- ($pre,$var,$post) = ($string =~ /^([^\$]*)(\$\d)(.*)$/);
- while (defined($var)) {
- $result .= " . " if $result;
- $result .= "\"$pre\" . " unless $pre eq "";
- $result .= $var;
- $string = $post;
- ($pre,$var,$post) = ($string =~ /^([^\$]*)(\$\d)(.*)$/);
- }
- $result .= " . \"$string\"" if $string;
- return $result;
-}
-
-sub string_starts_with_substring {
- local($caller,$string,$substring) = @_;
-
- $guarded_substring = $caller->regex_guard($substring);
- return $string =~ /^$guarded_substring/;
-}
-
-sub one_string_starts_with_the_other {
- local($caller,$s1,$s2) = @_;
-
- return ($s1 eq $s2)
- || $caller->string_starts_with_substring($s1,$s2)
- || $caller->string_starts_with_substring($s2,$s1);
-}
-
-sub string_ends_in_substring {
- local($caller,$string,$substring) = @_;
-
- $guarded_substring = $caller->regex_guard($substring);
- return $string =~ /$guarded_substring$/;
-}
-
-sub string_equal_ignore_leading_multiple_or_trailing_blanks {
- local($caller,$string1,$string2) = @_;
-
- return 1 if $string1 eq $string2;
- $string1 =~ s/\s+/ /;
- $string2 =~ s/\s+/ /;
- $string1 =~ s/^\s+//;
- $string2 =~ s/^\s+//;
- $string1 =~ s/\s+$//;
- $string2 =~ s/\s+$//;
-
- return $string1 eq $string2;
-}
-
-sub strip_substring_from_start_of_string {
- local($caller,$string,$substring,$error_code) = @_;
-
- $error_code = "ERROR" unless defined($error_code);
- my $reg_surf = $caller->regex_guard($substring);
- if ($string =~ /^$guarded_substring/) {
- $string =~ s/^$reg_surf//;
- return $string;
- } else {
- return $error_code;
- }
-}
-
-sub strip_substring_from_end_of_string {
- local($caller,$string,$substring,$error_code) = @_;
-
- $error_code = "ERROR" unless defined($error_code);
- my $reg_surf = $caller->regex_guard($substring);
- if ($string =~ /$reg_surf$/) {
- $string =~ s/$reg_surf$//;
- return $string;
- } else {
- return $error_code;
- }
-}
-
-# to be deprecated
-sub lang_code {
- local($caller,$language) = @_;
-
- $langPM = NLP::Language->new();
- return $langPM->lang_code($language);
-}
-
-sub full_language {
- local($caller,$lang_code) = @_;
-
- return "Arabic" if $lang_code eq "ar";
- return "Chinese" if $lang_code eq "zh";
- return "Czech" if $lang_code eq "cs";
- return "Danish" if $lang_code eq "da";
- return "Dutch" if $lang_code eq "nl";
- return "English" if $lang_code eq "en";
- return "Finnish" if $lang_code eq "fi";
- return "French" if $lang_code eq "fr";
- return "German" if $lang_code eq "de";
- return "Greek" if $lang_code eq "el";
- return "Hebrew" if $lang_code eq "he";
- return "Hindi" if $lang_code eq "hi";
- return "Hungarian" if $lang_code eq "hu";
- return "Icelandic" if $lang_code eq "is";
- return "Indonesian" if $lang_code eq "id";
- return "Italian" if $lang_code eq "it";
- return "Japanese" if $lang_code eq "ja";
- return "Kinyarwanda" if $lang_code eq "rw";
- return "Korean" if $lang_code eq "ko";
- return "Latin" if $lang_code eq "la";
- return "Malagasy" if $lang_code eq "mg";
- return "Norwegian" if $lang_code eq "no";
- return "Pashto" if $lang_code eq "ps";
- return "Persian" if $lang_code eq "fa";
- return "Polish" if $lang_code eq "pl";
- return "Portuguese" if $lang_code eq "pt";
- return "Romanian" if $lang_code eq "ro";
- return "Russian" if $lang_code eq "ru";
- return "Spanish" if $lang_code eq "es";
- return "Swedish" if $lang_code eq "sv";
- return "Turkish" if $lang_code eq "tr";
- return "Urdu" if $lang_code eq "ur";
- return "";
-}
-
-# to be deprecated
-sub short_lang_name {
- local($caller,$lang_code) = @_;
-
- $langPM = NLP::Language->new();
- return $langPM->shortname($lang_code);
-}
-
-sub ml_dir {
- local($caller,$language,$type) = @_;
-
- $type = "MSB" unless defined($type);
- $lang_code = $langPM->lang_code($language);
- return $caller->ml_dir($lang_code, "lex") . "/corpora" if $type eq "corpora";
- return "" unless defined($rc);
- $ml_home = $rc->ml_home_dir();
- return File::Spec->catfile($ml_home, "arabic")
- if ($lang_code eq "ar-iq") && ! $caller->member(lc $type,"lex","onto","dict");
- $langPM = NLP::Language->new();
- $lexdir = $langPM->lexdir($lang_code);
- return $lexdir if defined($lexdir);
- return "";
-}
-
-sub language_lex_filename {
- local($caller,$language,$type) = @_;
-
- $langPM = NLP::Language->new();
- if (($lang_code = $langPM->lang_code($language))
- && ($ml_dir = $caller->ml_dir($lang_code,$type))
- && ($norm_language = $caller->short_lang_name($lang_code))) {
- return "$ml_dir/$norm_language-lex" if ($type eq "lex");
- return "$ml_dir/onto" if ($type eq "onto");
- return "$ml_dir/$norm_language-english-dict" if ($type eq "dict") && !($lang_code eq "en");
- return "";
- } else {
- return "";
- }
-}
-
-# filename_without_path is obsolete - replace with
-# use File::Basename;
-# basename($filename)
-sub filename_without_path {
- local($caller,$filename) = @_;
-
- $filename =~ s/^.*\/([^\/]+)$/$1/;
- return $filename;
-}
-
-sub option_string {
- local($caller,$input_name,$default,*values,*labels) = @_;
-
- my $s = "";
- return $s;
-}
-
-sub pes_subseq_surf {
- local($this,$start,$length,$langCode,@pes) = @_;
-
- my $surf = "";
- if ($start+$length-1 <= $#pes) {
- foreach $i ($start .. $start + $length - 1) {
- my $pe = $pes[$i];
- $surf .= $pe->get("surf","");
- $surf .= " " if $langCode =~ /^(ar|en|fr)$/;
- }
- }
- $surf =~ s/\s+$//;
- return $surf;
-}
-
-sub copyList {
- local($this,@list) = @_;
-
- @copy_list = ();
- foreach $elem (@list) {
- push(@copy_list,$elem);
- }
- return @copy_list;
-}
-
-sub list_with_same_elem {
- local($this,$size,$elem) = @_;
-
- @list = ();
- foreach $i (0 .. $size-1) {
- push(@list,$elem);
- }
- return @list;
-}
-
-sub count_occurrences {
- local($this,$s,$substring) = @_;
-
- $occ = 0;
- $new = $s;
- $guarded_substring = $this->regex_guard($substring);
- $new =~ s/$guarded_substring//;
- while ($new ne $s) {
- $occ++;
- $s = $new;
- $new =~ s/$guarded_substring//;
- }
- return $occ;
-}
-
-sub position_of_nth_occurrence {
- local($this,$s,$substring,$occ) = @_;
-
- return -1 unless $occ > 0;
- my $pos = 0;
- while (($pos = index($s, $substring, $pos)) >= 0) {
- return $pos if $occ == 1;
- $occ--;
- $pos = $pos + length($substring);
- }
- return -1;
-}
-
-sub has_diff_elements_p {
- local($this,@array) = @_;
-
- return 0 if $#array < 1;
- $elem = $array[0];
-
- foreach $a (@array) {
- return 1 if $elem ne $a;
- }
- return 0;
-}
-
-sub init_log {
- local($this,$logfile, $control) = @_;
-
- $control = "" unless defined($control);
- if ((DEBUGGING || ($control =~ /debug/i)) && $logfile) {
- system("rm -f $logfile");
- system("date > $logfile; chmod 777 $logfile");
- }
-}
-
-sub time_stamp_log {
- local($this,$logfile, $control) = @_;
-
- $control = "" unless defined($control);
- if ((DEBUGGING || ($control =~ /debug/i)) && $logfile) {
- system("date >> $logfile; chmod 777 $logfile");
- }
-}
-
-sub log {
- local($this,$message,$logfile,$control) = @_;
-
- $control = "" unless defined($control);
- if ((DEBUGGING || ($control =~ /debug/i)) && $logfile) {
- $this->init_log($logfile, $control) unless -w $logfile;
- if ($control =~ /timestamp/i) {
- $this->time_stamp_log($logfile, $control);
- }
- $guarded_message = $message;
- $guarded_message =~ s/"/\\"/g;
- system("echo \"$guarded_message\" >> $logfile");
- }
-}
-
-sub month_name_to_month_number {
- local($this,$month_name) = @_;
-
- $month_name_init = lc substr($month_name,0,3);
- return $this->position($month_name_init, "jan", "feb", "mar", "apr", "may", "jun", "jul", "aug", "sep", "oct", "nov", "dec") + 1;
-}
-
-my @short_month_names = ("Jan.","Febr.","March","April","May","June","July","Aug.","Sept.","Oct.","Nov.","Dec.");
-my @full_month_names = ("January","February","March","April","May","June","July","August","September","October","November","December");
-
-sub month_number_to_month_name {
- local($this,$month_number, $control) = @_;
-
- $month_number =~ s/^0//;
- if ($month_number =~ /^([1-9]|1[0-2])$/) {
- return ($control && ($control =~ /short/i))
- ? $short_month_names[$month_number-1]
- : $full_month_names[$month_number-1];
- } else {
- return "";
- }
-}
-
-sub leap_year {
- local($this,$year) = @_;
-
- return 0 if $year % 4 != 0;
- return 1 if $year % 400 == 0;
- return 0 if $year % 100 == 0;
- return 1;
-}
-
-sub datetime {
- local($this,$format,$time_in_secs, $command) = @_;
-
- $command = "" unless defined($command);
- $time_in_secs = time unless defined($time_in_secs) && $time_in_secs;
- @time_vector = ($command =~ /\b(gm|utc)\b/i) ? gmtime($time_in_secs) : localtime($time_in_secs);
- ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst)=@time_vector;
- $thisyear = $year + 1900;
- $thismon=(Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec)[$mon];
- $thismon2=("Jan.","Febr.","March","April","May","June","July","Aug.","Sept.","Oct.","Nov.","Dec.")[$mon];
- $thismonth = $mon + 1;
- $thisday=(Sun,Mon,Tue,Wed,Thu,Fri,Sat)[$wday];
- $milliseconds = int(($time_in_secs - int($time_in_secs)) * 1000);
- $date="$thisday $thismon $mday, $thisyear";
- $sdate="$thismon $mday, $thisyear";
- $dashedDate = sprintf("%04d-%02d-%02d",$thisyear,$thismonth,$mday);
- $slashedDate = sprintf("%02d/%02d/%04d",$mday,$thismonth,$thisyear);
- $time=sprintf("%02d:%02d:%02d",$hour,$min,$sec);
- $shorttime=sprintf("%d:%02d",$hour,$min);
- $shortdatetime = "$thismon2 $mday, $shorttime";
-
- if ($date =~ /undefined/) {
- return "";
- } elsif ($format eq "date at time") {
- return "$date at $time";
- } elsif ($format eq "date") {
- return "$date";
- } elsif ($format eq "sdate") {
- return "$sdate";
- } elsif ($format eq "ddate") {
- return "$dashedDate";
- } elsif ($format eq "time") {
- return "$time";
- } elsif ($format eq "dateTtime+ms") {
- return $dashedDate . "T" . $time . "." . $milliseconds;
- } elsif ($format eq "dateTtime") {
- return $dashedDate . "T" . $time;
- } elsif ($format eq "yyyymmdd") {
- return sprintf("%04d%02d%02d",$thisyear,$thismonth,$mday);
- } elsif ($format eq "short date at time") {
- return $shortdatetime;
- } else {
- return "$date at $time";
- }
-}
-
-sub datetime_of_last_file_modification {
- local($this,$format,$filename) = @_;
-
- return $this->datetime($format,(stat($filename))[9]);
-}
-
-sub add_1sec {
- local($this,$datetime) = @_;
-
- if (($year,$month,$day,$hour,$minute,$second) = ($datetime =~ /^(\d\d\d\d)-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)$/)) {
- $second++;
- if ($second >= 60) { $second -= 60; $minute++; }
- if ($minute >= 60) { $minute -= 60; $hour++; }
- if ($hour >= 24) { $hour -= 24; $day++; }
- if ($month =~ /^(01|03|05|07|08|10|12)$/) {
- if ($day > 31) { $day -= 31; $month++; }
- } elsif ($month =~ /^(04|06|09|11)$/) {
- if ($day > 30) { $day -= 30; $month++; }
- } elsif (($month eq "02") && $this->leap_year($year)) {
- if ($day > 29) { $day -= 29; $month++; }
- } elsif ($month eq "02") {
- if ($day > 28) { $day -= 28; $month++; }
- }
- if ($month > 12) { $month -= 12; $year++; }
- return sprintf("%04d-%02d-%02dT%02d:%02d:%02d", $year,$month,$day,$hour,$minute,$second);
- } else {
- return "";
- }
-}
-
-sub stopwatch {
- local($this, $function, $id, *ht, *OUT) = @_;
- # function: start|end|count|report; start/end times are absolute (in secs.)
-
- my $current_time = time;
- # print OUT "Point S stopwatch $function $id $current_time\n";
- if ($function eq "start") {
- if ($ht{STOPWATCH_START}->{$id}) {
- $ht{STOPWATCH_N_RESTARTS}->{$id} = ($ht{STOPWATCH_N_RESTARTS}->{$id} || 0) + 1;
- } else {
- $ht{STOPWATCH_START}->{$id} = $current_time;
- }
- } elsif ($function eq "end") {
- if ($start_time = $ht{STOPWATCH_START}->{$id}) {
- $ht{STOPWATCH_TIME}->{$id} = ($ht{STOPWATCH_TIME}->{$id} || 0) + ($current_time - $start_time);
- $ht{STOPWATCH_START}->{$id} = "";
- } else {
- $ht{STOPWATCH_N_DEAD_ENDS}->{$id} = ($ht{STOPWATCH_N_DEAD_ENDS}->{$id} || 0) + 1;
- }
- } elsif ($function eq "count") {
- $ht{STOPWATCH_COUNT}->{$id} = ($ht{STOPWATCH_COUNT}->{$id} || 0) + 1;
- } elsif ($function eq "report") {
- my $id2;
- foreach $id2 (keys %{$ht{STOPWATCH_START}}) {
- if ($start_time = $ht{STOPWATCH_START}->{$id2}) {
- $ht{STOPWATCH_TIME}->{$id2} = ($ht{STOPWATCH_TIME}->{$id2} || 0) + ($current_time - $start_time);
- $ht{STOPWATCH_START}->{$id2} = $current_time;
- }
- }
- print OUT "Time report:\n";
- foreach $id2 (sort { $ht{STOPWATCH_TIME}->{$b} <=> $ht{STOPWATCH_TIME}->{$a} }
- keys %{$ht{STOPWATCH_TIME}}) {
- my $stopwatch_time = $ht{STOPWATCH_TIME}->{$id2};
- $stopwatch_time = $this->round_to_n_decimal_places($stopwatch_time, 3);
- my $n_restarts = $ht{STOPWATCH_N_RESTARTS}->{$id2};
- my $n_dead_ends = $ht{STOPWATCH_N_DEAD_ENDS}->{$id2};
- my $start_time = $ht{STOPWATCH_START}->{$id2};
- print OUT " $id2: $stopwatch_time seconds";
- print OUT " with $n_restarts restart(s)" if $n_restarts;
- print OUT " with $n_dead_ends dead end(s)" if $n_dead_ends;
- print OUT " (active)" if $start_time;
- print OUT "\n";
- }
- foreach $id2 (sort { $ht{STOPWATCH_COUNT}->{$b} <=> $ht{STOPWATCH_COUNT}->{$a} }
- keys %{$ht{STOPWATCH_COUNT}}) {
- $count = $ht{STOPWATCH_COUNT}->{$id2};
- print OUT " C $id2: $count\n";
- }
- }
-}
-
-sub print_html_banner {
- local($this,$text,$bgcolor,*OUT,$control) = @_;
-
- $control = "" unless defined($control);
- $bgcolor = "#BBCCFF" unless defined($bgcolor);
- print OUT "
";
- print OUT " " unless $text =~ /^\s*<(table|nobr)/;
- print OUT $text;
- print OUT "
- r = re.compile('<p>Raw output: <a href="(.*)">(.*)</a>')
- html = '\n'.join(l for l in html.splitlines() if not r.match(l))
- return html
-
-__doc__ = __doc__.format(
- # rST doesn't see the -+ flag as part of an option list, so we
- # hide it from the module-level docstring.
- CYTHON_DOC=dedent(CythonMagics.cython.__doc__\
- .replace('-+, --cplus', '--cplus ')),
- CYTHON_INLINE_DOC=dedent(CythonMagics.cython_inline.__doc__),
- CYTHON_PYXIMPORT_DOC=dedent(CythonMagics.cython_pyximport.__doc__),
-)
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/TokenStreamRewriter.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/TokenStreamRewriter.py
deleted file mode 100644
index 04a3af657dbf1d4819207301dd7a05a0710ce06d..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/TokenStreamRewriter.py
+++ /dev/null
@@ -1,251 +0,0 @@
-#
-# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
-# Use of this file is governed by the BSD 3-clause license that
-# can be found in the LICENSE.txt file in the project root.
-#
-
-from io import StringIO
-from antlr4.Token import Token
-
-from antlr4.CommonTokenStream import CommonTokenStream
-
-
-class TokenStreamRewriter(object):
- DEFAULT_PROGRAM_NAME = "default"
- PROGRAM_INIT_SIZE = 100
- MIN_TOKEN_INDEX = 0
-
- def __init__(self, tokens):
- """
- :type tokens: antlr4.BufferedTokenStream.BufferedTokenStream
- :param tokens:
- :return:
- """
- super(TokenStreamRewriter, self).__init__()
- self.tokens = tokens
- self.programs = {self.DEFAULT_PROGRAM_NAME: []}
- self.lastRewriteTokenIndexes = {}
-
- def getTokenStream(self):
- return self.tokens
-
- def rollback(self, instruction_index, program_name):
- ins = self.programs.get(program_name, None)
- if ins:
- self.programs[program_name] = ins[self.MIN_TOKEN_INDEX: instruction_index]
-
- def deleteProgram(self, program_name=DEFAULT_PROGRAM_NAME):
- self.rollback(self.MIN_TOKEN_INDEX, program_name)
-
- def insertAfterToken(self, token, text, program_name=DEFAULT_PROGRAM_NAME):
- self.insertAfter(token.tokenIndex, text, program_name)
-
- def insertAfter(self, index, text, program_name=DEFAULT_PROGRAM_NAME):
- op = self.InsertAfterOp(self.tokens, index + 1, text)
- rewrites = self.getProgram(program_name)
- op.instructionIndex = len(rewrites)
- rewrites.append(op)
-
- def insertBeforeIndex(self, index, text):
- self.insertBefore(self.DEFAULT_PROGRAM_NAME, index, text)
-
- def insertBeforeToken(self, token, text, program_name=DEFAULT_PROGRAM_NAME):
- self.insertBefore(program_name, token.tokenIndex, text)
-
- def insertBefore(self, program_name, index, text):
- op = self.InsertBeforeOp(self.tokens, index, text)
- rewrites = self.getProgram(program_name)
- op.instructionIndex = len(rewrites)
- rewrites.append(op)
-
- def replaceIndex(self, index, text):
- self.replace(self.DEFAULT_PROGRAM_NAME, index, index, text)
-
- def replaceRange(self, from_idx, to_idx, text):
- self.replace(self.DEFAULT_PROGRAM_NAME, from_idx, to_idx, text)
-
- def replaceSingleToken(self, token, text):
- self.replace(self.DEFAULT_PROGRAM_NAME, token.tokenIndex, token.tokenIndex, text)
-
- def replaceRangeTokens(self, from_token, to_token, text, program_name=DEFAULT_PROGRAM_NAME):
- self.replace(program_name, from_token.tokenIndex, to_token.tokenIndex, text)
-
- def replace(self, program_name, from_idx, to_idx, text):
- if any((from_idx > to_idx, from_idx < 0, to_idx < 0, to_idx >= len(self.tokens.tokens))):
- raise ValueError(
- 'replace: range invalid: {}..{}(size={})'.format(from_idx, to_idx, len(self.tokens.tokens)))
- op = self.ReplaceOp(from_idx, to_idx, self.tokens, text)
- rewrites = self.getProgram(program_name)
- op.instructionIndex = len(rewrites)
- rewrites.append(op)
-
- def deleteToken(self, token):
- self.delete(self.DEFAULT_PROGRAM_NAME, token, token)
-
- def deleteIndex(self, index):
- self.delete(self.DEFAULT_PROGRAM_NAME, index, index)
-
- def delete(self, program_name, from_idx, to_idx):
- if isinstance(from_idx, Token):
- self.replace(program_name, from_idx.tokenIndex, to_idx.tokenIndex, "")
- else:
- self.replace(program_name, from_idx, to_idx, "")
-
- def lastRewriteTokenIndex(self, program_name=DEFAULT_PROGRAM_NAME):
- return self.lastRewriteTokenIndexes.get(program_name, -1)
-
- def setLastRewriteTokenIndex(self, program_name, i):
- self.lastRewriteTokenIndexes[program_name] = i
-
- def getProgram(self, program_name):
- return self.programs.setdefault(program_name, [])
-
- def getDefaultText(self):
- return self.getText(self.DEFAULT_PROGRAM_NAME, 0, len(self.tokens.tokens) - 1)
-
- def getText(self, program_name, start:int, stop:int):
- """
- :return: the text in tokens[start, stop](closed interval)
- """
- rewrites = self.programs.get(program_name)
-
- # ensure start/end are in range
- if stop > len(self.tokens.tokens) - 1:
- stop = len(self.tokens.tokens) - 1
- if start < 0:
- start = 0
-
- # if no instructions to execute
- if not rewrites: return self.tokens.getText(start, stop)
- buf = StringIO()
- indexToOp = self._reduceToSingleOperationPerIndex(rewrites)
- i = start
- while all((i <= stop, i < len(self.tokens.tokens))):
- op = indexToOp.pop(i, None)
- token = self.tokens.get(i)
- if op is None:
- if token.type != Token.EOF: buf.write(token.text)
- i += 1
- else:
- i = op.execute(buf)
-
- if stop == len(self.tokens.tokens)-1:
- for op in indexToOp.values():
- if op.index >= len(self.tokens.tokens)-1: buf.write(op.text)
-
- return buf.getvalue()
-
- def _reduceToSingleOperationPerIndex(self, rewrites):
- # Walk replaces
- for i, rop in enumerate(rewrites):
- if any((rop is None, not isinstance(rop, TokenStreamRewriter.ReplaceOp))):
- continue
- # Wipe prior inserts within range
- inserts = [op for op in rewrites[:i] if isinstance(op, TokenStreamRewriter.InsertBeforeOp)]
- for iop in inserts:
- if iop.index == rop.index:
- rewrites[iop.instructionIndex] = None
- rop.text = '{}{}'.format(iop.text, rop.text)
- elif all((iop.index > rop.index, iop.index <= rop.last_index)):
- rewrites[iop.instructionIndex] = None
-
- # Drop any prior replaces contained within
- prevReplaces = [op for op in rewrites[:i] if isinstance(op, TokenStreamRewriter.ReplaceOp)]
- for prevRop in prevReplaces:
- if all((prevRop.index >= rop.index, prevRop.last_index <= rop.last_index)):
- rewrites[prevRop.instructionIndex] = None
- continue
- isDisjoint = any((prevRop.last_index < rop.index, prevRop.index > rop.last_index))
- if all((prevRop.text is None, rop.text is None, not isDisjoint)):
- rewrites[prevRop.instructionIndex] = None
- rop.index = min(prevRop.index, rop.index)
- rop.last_index = min(prevRop.last_index, rop.last_index)
- print('New rop {}'.format(rop))
- elif (not(isDisjoint)):
- raise ValueError("replace op boundaries of {} overlap with previous {}".format(rop, prevRop))
-
- # Walk inserts
- for i, iop in enumerate(rewrites):
- if any((iop is None, not isinstance(iop, TokenStreamRewriter.InsertBeforeOp))):
- continue
- prevInserts = [op for op in rewrites[:i] if isinstance(op, TokenStreamRewriter.InsertBeforeOp)]
- for prev_index, prevIop in enumerate(prevInserts):
- if prevIop.index == iop.index and type(prevIop) is TokenStreamRewriter.InsertBeforeOp:
- iop.text += prevIop.text
- rewrites[prev_index] = None
- elif prevIop.index == iop.index and type(prevIop) is TokenStreamRewriter.InsertAfterOp:
- iop.text = prevIop.text + iop.text
- rewrites[prev_index] = None
- # look for replaces where iop.index is in range; error
- prevReplaces = [op for op in rewrites[:i] if isinstance(op, TokenStreamRewriter.ReplaceOp)]
- for rop in prevReplaces:
- if iop.index == rop.index:
- rop.text = iop.text + rop.text
- rewrites[i] = None
- continue
- if all((iop.index >= rop.index, iop.index <= rop.last_index)):
- raise ValueError("insert op {} within boundaries of previous {}".format(iop, rop))
-
- reduced = {}
- for i, op in enumerate(rewrites):
- if op is None: continue
- if reduced.get(op.index): raise ValueError('should be only one op per index')
- reduced[op.index] = op
-
- return reduced
-
- class RewriteOperation(object):
-
- def __init__(self, tokens, index, text=""):
- """
- :type tokens: CommonTokenStream
- :param tokens:
- :param index:
- :param text:
- :return:
- """
- self.tokens = tokens
- self.index = index
- self.text = text
- self.instructionIndex = 0
-
- def execute(self, buf):
- """
- :type buf: StringIO.StringIO
- :param buf:
- :return:
- """
- return self.index
-
- def __str__(self):
- return '<{}@{}:"{}">'.format(self.__class__.__name__, self.tokens.get(self.index), self.text)
-
- class InsertBeforeOp(RewriteOperation):
-
- def __init__(self, tokens, index, text=""):
- super(TokenStreamRewriter.InsertBeforeOp, self).__init__(tokens, index, text)
-
- def execute(self, buf):
- buf.write(self.text)
- if self.tokens.get(self.index).type != Token.EOF:
- buf.write(self.tokens.get(self.index).text)
- return self.index + 1
-
- class InsertAfterOp(InsertBeforeOp):
- pass
-
- class ReplaceOp(RewriteOperation):
-
- def __init__(self, from_idx, to_idx, tokens, text):
- super(TokenStreamRewriter.ReplaceOp, self).__init__(tokens, from_idx, text)
- self.last_index = to_idx
-
- def execute(self, buf):
- if self.text:
- buf.write(self.text)
- return self.last_index + 1
-
- def __str__(self):
- if self.text:
- return '<ReplaceOp@{}..{}:"{}">'.format(self.tokens.get(self.index), self.tokens.get(self.last_index),
- self.text)
- return '<DeleteOp@{}..{}>'.format(self.tokens.get(self.index), self.tokens.get(self.last_index))
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/distributed/distributed_timeout_wrapper.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/distributed/distributed_timeout_wrapper.py
deleted file mode 100644
index 6e06b4b6dd9a5fedd5d72bde02ceb7aaf74833d7..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/distributed/distributed_timeout_wrapper.py
+++ /dev/null
@@ -1,97 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import signal
-import threading
-
-from torch import nn
-
-
-logger = logging.getLogger(__name__)
-
-
-class DistributedTimeoutWrapper(nn.Module):
- """
- A wrapper that kills the process if no progress is made within a given
- *timeout*. The timer is reset every time :func:`forward` is called.
-
- Usage::
-
- module = DistributedTimeoutWrapper(module, timeout=30)
- x = module(input)
- time.sleep(20) # safe
- x = module(input)
- time.sleep(45) # job will be killed before this returns
-
- Args:
- module (nn.Module): module to wrap
- timeout (int): number of seconds before killing the process
- (set to a value <= 0 to disable the timeout)
- signal (Optional): signal to send once timeout is triggered
- """
-
- def __init__(self, module: nn.Module, timeout: int, signal=signal.SIGINT):
- super().__init__()
- self.module = module
- self.timeout = timeout
- self.signal = signal
-
- if timeout > 0:
- self._heartbeat = threading.Event()
- self._heartbeat_thread = threading.Thread(
- target=self._check_heartbeat,
- args=(os.getpid(),),
- daemon=True,
- )
- self._heartbeat_thread.start()
- self._terminated = False
- else:
- self._heartbeat = None
- self._heartbeat_thread = None
-
- def __del__(self):
- self.stop_timeout()
-
- def __getattr__(self, name):
- """Forward missing attributes to wrapped module."""
- try:
- return super().__getattr__(name) # defer to nn.Module's logic
- except AttributeError:
- return getattr(self.module, name)
-
- def stop_timeout(self):
- if self._heartbeat_thread is not None:
- self._terminated = True
- self._heartbeat_thread.join()
-
- def state_dict(self, *args, **kwargs):
- return self.module.state_dict(*args, **kwargs)
-
- def load_state_dict(self, *args, **kwargs):
- return self.module.load_state_dict(*args, **kwargs)
-
- def forward(self, *args, **kwargs):
- if self._heartbeat is not None:
- self._heartbeat.set()
- return self.module(*args, **kwargs)
-
- def _check_heartbeat(self, parent_pid):
- self._heartbeat.wait() # wait for the first forward pass
- while True:
- self._heartbeat.clear()
- success = self._heartbeat.wait(timeout=self.timeout)
- if self._terminated:
- break
- elif not success:
- logger.error(
- (
- "Killing job for not making progress in {} seconds. "
- "Set --heartbeat-timeout=-1 to disable this timeout."
- ).format(int(self.timeout))
- )
- os.kill(parent_pid, self.signal)
- return
diff --git a/spaces/arxnov/anotest/text/korean.py b/spaces/arxnov/anotest/text/korean.py
deleted file mode 100644
index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000
--- a/spaces/arxnov/anotest/text/korean.py
+++ /dev/null
@@ -1,210 +0,0 @@
-import re
-from jamo import h2j, j2hcj
-import ko_pron
-
-
-# This is a list of Korean classifiers preceded by pure Korean numerals.
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통'
-
-# List of (hangul, hangul divided) pairs:
-_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄳ', 'ㄱㅅ'),
- ('ㄵ', 'ㄴㅈ'),
- ('ㄶ', 'ㄴㅎ'),
- ('ㄺ', 'ㄹㄱ'),
- ('ㄻ', 'ㄹㅁ'),
- ('ㄼ', 'ㄹㅂ'),
- ('ㄽ', 'ㄹㅅ'),
- ('ㄾ', 'ㄹㅌ'),
- ('ㄿ', 'ㄹㅍ'),
- ('ㅀ', 'ㄹㅎ'),
- ('ㅄ', 'ㅂㅅ'),
- ('ㅘ', 'ㅗㅏ'),
- ('ㅙ', 'ㅗㅐ'),
- ('ㅚ', 'ㅗㅣ'),
- ('ㅝ', 'ㅜㅓ'),
- ('ㅞ', 'ㅜㅔ'),
- ('ㅟ', 'ㅜㅣ'),
- ('ㅢ', 'ㅡㅣ'),
- ('ㅑ', 'ㅣㅏ'),
- ('ㅒ', 'ㅣㅐ'),
- ('ㅕ', 'ㅣㅓ'),
- ('ㅖ', 'ㅣㅔ'),
- ('ㅛ', 'ㅣㅗ'),
- ('ㅠ', 'ㅣㅜ')
-]]
-
-# List of (Latin alphabet, hangul) pairs:
-_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', '에이'),
- ('b', '비'),
- ('c', '시'),
- ('d', '디'),
- ('e', '이'),
- ('f', '에프'),
- ('g', '지'),
- ('h', '에이치'),
- ('i', '아이'),
- ('j', '제이'),
- ('k', '케이'),
- ('l', '엘'),
- ('m', '엠'),
- ('n', '엔'),
- ('o', '오'),
- ('p', '피'),
- ('q', '큐'),
- ('r', '아르'),
- ('s', '에스'),
- ('t', '티'),
- ('u', '유'),
- ('v', '브이'),
- ('w', '더블유'),
- ('x', '엑스'),
- ('y', '와이'),
- ('z', '제트')
-]]
-
-# List of (ipa, lazy ipa) pairs:
-_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('t͡ɕ','ʧ'),
- ('d͡ʑ','ʥ'),
- ('ɲ','n^'),
- ('ɕ','ʃ'),
- ('ʷ','w'),
- ('ɭ','l`'),
- ('ʎ','ɾ'),
- ('ɣ','ŋ'),
- ('ɰ','ɯ'),
- ('ʝ','j'),
- ('ʌ','ə'),
- ('ɡ','g'),
- ('\u031a','#'),
- ('\u0348','='),
- ('\u031e',''),
- ('\u0320',''),
- ('\u0339','')
-]]
-
-
-def latin_to_hangul(text):
- for regex, replacement in _latin_to_hangul:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def divide_hangul(text):
- text = j2hcj(h2j(text))
- for regex, replacement in _hangul_divided:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def hangul_number(num, sino=True):
- '''Reference https://github.com/Kyubyong/g2pK'''
- num = re.sub(',', '', num)
-
- if num == '0':
- return '영'
- if not sino and num == '20':
- return '스무'
-
- digits = '123456789'
- names = '일이삼사오육칠팔구'
- digit2name = {d: n for d, n in zip(digits, names)}
-
- modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉'
- decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔'
- digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())}
- digit2dec = {d: dec for d, dec in zip(digits, decimals.split())}
-
- spelledout = []
- for i, digit in enumerate(num):
- i = len(num) - i - 1
- if sino:
- if i == 0:
- name = digit2name.get(digit, '')
- elif i == 1:
- name = digit2name.get(digit, '') + '십'
- name = name.replace('일십', '십')
- else:
- if i == 0:
- name = digit2mod.get(digit, '')
- elif i == 1:
- name = digit2dec.get(digit, '')
- if digit == '0':
- if i % 4 == 0:
- last_three = spelledout[-min(3, len(spelledout)):]
- if ''.join(last_three) == '':
- spelledout.append('')
- continue
- else:
- spelledout.append('')
- continue
- if i == 2:
- name = digit2name.get(digit, '') + '백'
- name = name.replace('일백', '백')
- elif i == 3:
- name = digit2name.get(digit, '') + '천'
- name = name.replace('일천', '천')
- elif i == 4:
- name = digit2name.get(digit, '') + '만'
- name = name.replace('일만', '만')
- elif i == 5:
- name = digit2name.get(digit, '') + '십'
- name = name.replace('일십', '십')
- elif i == 6:
- name = digit2name.get(digit, '') + '백'
- name = name.replace('일백', '백')
- elif i == 7:
- name = digit2name.get(digit, '') + '천'
- name = name.replace('일천', '천')
- elif i == 8:
- name = digit2name.get(digit, '') + '억'
- elif i == 9:
- name = digit2name.get(digit, '') + '십'
- elif i == 10:
- name = digit2name.get(digit, '') + '백'
- elif i == 11:
- name = digit2name.get(digit, '') + '천'
- elif i == 12:
- name = digit2name.get(digit, '') + '조'
- elif i == 13:
- name = digit2name.get(digit, '') + '십'
- elif i == 14:
- name = digit2name.get(digit, '') + '백'
- elif i == 15:
- name = digit2name.get(digit, '') + '천'
- spelledout.append(name)
- return ''.join(elem for elem in spelledout)
-
-
-def number_to_hangul(text):
- '''Reference https://github.com/Kyubyong/g2pK'''
- tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text))
- for token in tokens:
- num, classifier = token
- if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers:
- spelledout = hangul_number(num, sino=False)
- else:
- spelledout = hangul_number(num, sino=True)
- text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}')
- # digit by digit for remaining digits
- digits = '0123456789'
- names = '영일이삼사오육칠팔구'
- for d, n in zip(digits, names):
- text = text.replace(d, n)
- return text
-
-
-def korean_to_lazy_ipa(text):
- text = latin_to_hangul(text)
- text = number_to_hangul(text)
- text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text)
- for regex, replacement in _ipa_to_lazy_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def korean_to_ipa(text):
- text = korean_to_lazy_ipa(text)
- return text.replace('ʧ','tʃ').replace('ʥ','dʑ')
diff --git a/spaces/ashercn97/AsherTesting/docs/Training-LoRAs.md b/spaces/ashercn97/AsherTesting/docs/Training-LoRAs.md
deleted file mode 100644
index 83e6d5a7251eea080cd7dfe8d19a2e42d6d3a822..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/docs/Training-LoRAs.md
+++ /dev/null
@@ -1,174 +0,0 @@
-## Training Your Own LoRAs
-
-The WebUI seeks to make training your own LoRAs as easy as possible. It comes down to just a few simple steps:
-
-### **Step 1**: Make a plan.
-- What base model do you want to use? The LoRA you make has to be matched up to a single architecture (eg LLaMA-13B) and cannot be transferred to others (eg LLaMA-7B, StableLM, etc. would all be different). Derivatives of the same model (eg Alpaca finetune of LLaMA-13B) might be transferrable, but even then it's best to train exactly on what you plan to use.
-- What model format do you want? At time of writing, 8-bit models are most stable, and 4-bit are supported but experimental. In the near future it is likely that 4-bit will be the best option for most users.
-- What are you training it on? Do you want it to learn real information, a simple format, ...?
-
-### **Step 2**: Gather a dataset.
-- If you use a dataset similar to the [Alpaca](https://github.com/gururise/AlpacaDataCleaned/blob/main/alpaca_data_cleaned.json) format, that is natively supported by the `Formatted Dataset` input in the WebUI, with premade formatter options.
-- If you use a dataset that isn't matched to Alpaca's format, but uses the same basic JSON structure, you can make your own format file by copying `training/formats/alpaca-format.json` to a new file and [editing its content](#format-files).
-- If you can get the dataset into a simple text file, that works too! You can train using the `Raw text file` input option.
- - This means you can for example just copy/paste a chatlog/documentation page/whatever you want, shove it in a plain text file, and train on it.
-- If you use a structured dataset not in this format, you may have to find an external way to convert it - or open an issue to request native support.
-
-### **Step 3**: Do the training.
-- **3.1**: Load the WebUI, and your model.
- - Make sure you don't have any LoRAs already loaded (unless you want to train for multi-LoRA usage).
-- **3.2**: Open the `Training` tab at the top, `Train LoRA` sub-tab.
-- **3.3**: Fill in the name of the LoRA, select your dataset in the dataset options.
-- **3.4**: Select other parameters to your preference. See [parameters below](#parameters).
-- **3.5**: click `Start LoRA Training`, and wait.
- It can take a few hours for a large dataset, or just a few minutes if doing a small run.
- - You may want to monitor your [loss value](#loss) while it goes.
-
-### **Step 4**: Evaluate your results.
-- Load the LoRA under the Models Tab.
-- You can go test-drive it on the `Text generation` tab, or you can use the `Perplexity evaluation` sub-tab of the `Training` tab.
-- If you used the `Save every n steps` option, you can grab prior copies of the model from sub-folders within the LoRA model's folder and try them instead.
-
-### **Step 5**: Re-run if you're unhappy.
-- Make sure to unload the LoRA before training it.
-- You can simply resume a prior run - use `Copy parameters from` to select your LoRA, and edit parameters. Note that you cannot change the `Rank` of an already created LoRA.
- - If you want to resume from a checkpoint saved along the way, simply copy the contents of the checkpoint folder into the LoRA's folder.
- - (Note: `adapter_model.bin` is the important file that holds the actual LoRA content).
- This will reset the Learning Rate and Steps back to the start. If you want to resume as if you were midway through, you can adjust your Learning Rate to the last reported LR in the logs and reduce your epochs.
-- Or, you can start over entirely if you prefer.
-- If your model is producing corrupted outputs, you probably need to start over and use a lower Learning Rate.
-- If your model isn't learning detailed information but you want it to, you might need to just run more epochs, or you might need a higher Rank.
-- If your model is enforcing a format you didn't want, you may need to tweak your dataset, or start over and not train as far.
-
-## Format Files
-
-If using JSON formatted datasets, they are presumed to be in the following approximate format:
-
-```json
-[
- {
- "somekey": "somevalue",
- "key2": "value2"
- },
- {
- // etc
- }
-]
-```
-
-Where the keys (eg `somekey`, `key2` above) are standardized, and relatively consistent across the dataset, and the values (eg `somevalue`, `value2`) contain the content actually intended to be trained.
-
-For Alpaca, the keys are `instruction`, `input`, and `output`, wherein `input` is sometimes blank.
-
-A simple format file for Alpaca to be used as a chat bot is:
-
-```json
-{
- "instruction,output": "User: %instruction%\nAssistant: %output%",
- "instruction,input,output": "User: %instruction%: %input%\nAssistant: %output%"
-}
-```
-
-Note that the keys (eg `instruction,output`) are a comma-separated list of dataset keys, and the values are a simple string that use those keys with `%%`.
-
-So for example if a dataset has `"instruction": "answer my question"`, then the format file's `User: %instruction%\n` will be automatically filled in as `User: answer my question\n`.
-
-If you have different sets of key inputs, you can make your own format file to match it. This format-file is designed to be as simple as possible to enable easy editing to match your needs.
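-
-As a purely illustrative aside (this is not the WebUI's actual loader; the helper name and the matching logic are assumptions), a format file of this shape could be applied to a single dataset row roughly like this:
-
-```python
-# Hypothetical sketch: pick the template whose key list matches the row's
-# non-blank fields, then substitute each %key% placeholder.
-import json
-
-def apply_format(row: dict, format_spec: dict) -> str:
-    present = [k for k, v in row.items() if str(v).strip()]
-    for key_list, template in format_spec.items():
-        if sorted(key_list.split(",")) == sorted(present):
-            text = template
-            for k in present:
-                text = text.replace(f"%{k}%", str(row[k]))
-            return text
-    raise ValueError(f"no template matches keys {present}")
-
-format_spec = json.loads('{"instruction,output": "User: %instruction%\\nAssistant: %output%"}')
-row = {"instruction": "answer my question", "input": "", "output": "Sure."}
-print(apply_format(row, format_spec))  # User: answer my question\nAssistant: Sure.
-```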
-
-## Raw Text File Settings
-
-When using raw text files as your dataset, the text is automatically split into chunks based on your `Cutoff Length`, and you get a few basic options to configure them (a rough sketch of the chunking follows the list below).
-- `Overlap Length` is how much to overlap chunks by. Overlapping chunks helps prevent the model from learning strange mid-sentence cuts, and instead learn continual sentences that flow from earlier text.
-- `Prefer Newline Cut Length` sets a maximum distance in characters to shift the chunk cut towards newlines. Doing this helps prevent lines from starting or ending mid-sentence, preventing the model from learning to cut off sentences randomly.
-- `Hard Cut String` sets a string that indicates there must be a hard cut without overlap. This defaults to `\n\n\n`, meaning 3 newlines. No trained chunk will ever contain this string. This allows you to insert unrelated sections of text in the same text file, but still ensure the model won't be taught to randomly change the subject.
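-
-For orientation only, here is a rough sketch of that chunking behavior (character-based for simplicity, while the real `Cutoff Length` counts tokens; the function name and defaults are assumptions, not the WebUI's code):
-
-```python
-# Hypothetical illustration of overlap, newline-preferring cuts, and hard cuts.
-def chunk_raw_text(text, cutoff_len=256, overlap_len=32,
-                   newline_favor_len=30, hard_cut_string="\n\n\n"):
-    chunks = []
-    for section in text.split(hard_cut_string):   # never train across a hard cut
-        step = max(cutoff_len - overlap_len, 1)   # overlap keeps sentences flowing
-        for start in range(0, len(section), step):
-            chunk = section[start:start + cutoff_len]
-            nl = chunk.rfind("\n")
-            if nl != -1 and len(chunk) - nl <= newline_favor_len:
-                chunk = chunk[:nl]                # shift the cut back to a newline
-            if chunk.strip():
-                chunks.append(chunk)
-    return chunks
-```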
-
-## Parameters
-
-The basic purpose and function of each parameter is documented on-page in the WebUI, so read through them in the UI to understand your options.
-
-That said, here's a guide to the most important parameter choices you should consider:
-
-### VRAM
-
-- First, you must consider your VRAM availability.
- Generally, VRAM usage for training with default parameters is very close to that of generating text (with 1000+ tokens of context) (ie, if you can generate text, you can train LoRAs).
- - Note: worse by default in the 4-bit monkeypatch currently. Reduce `Micro Batch Size` to `1` to restore this to expectations.
- - If you have VRAM to spare, setting higher batch sizes will use more VRAM and get you better quality training in exchange.
- - If you have large data, setting a higher cutoff length may be beneficial, but will cost significant VRAM. If you can spare some, set your batch size to `1` and see how high you can push your cutoff length.
- - If you're low on VRAM, reducing batch size or cutoff length will of course improve that.
- - Don't be afraid to just try it and see what happens. If it's too much, it will just error out, and you can lower settings and try again.
-
-### Rank
-
-- Second, you want to consider the amount of learning you want.
- - For example, you may wish to just learn a dialogue format (as in the case of Alpaca) in which case setting a low `Rank` value (32 or lower) works great.
- Or, you might be training on project documentation you want the bot to understand and be able to answer questions about, in which case the higher the rank, the better.
- - Generally, higher Rank = more precise learning = more total content learned = more VRAM usage while training.
-
-### Learning Rate and Epochs
-
-- Third, how carefully you want it to be learned.
- - In other words, how okay or not you are with the model losing unrelated understandings.
- - You can control this with 3 key settings: the Learning Rate, its scheduler, and your total epochs.
- - The learning rate controls how much change is made to the model by each token it sees.
- - It's in scientific notation normally, so for example `3e-4` means `3 * 10^-4` which is `0.0003`. The number after `e-` controls how many `0`s are in the number.
- - Higher values let training run faster, but also are more likely to corrupt prior data in the model.
- - You essentially have two variables to balance: the LR, and Epochs.
- - If you make LR higher, you can set Epochs equally lower to match. High LR + low epochs = very fast, low quality training.
- - If you make LR low, set epochs high. Low LR + high epochs = slow but high-quality training.
- The scheduler controls how the learning rate changes over time as you train - it starts high, and then goes low (see the sketch after this list). This helps balance getting data in, and having decent quality, at the same time.
- - You can see graphs of the different scheduler options [in the HuggingFace docs here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_1/en/main_classes/optimizer_schedules#transformers.SchedulerType)
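-
-As a sketch under stated assumptions (the numbers are placeholders, not recommendations, and the tiny model is a stand-in), the LR arithmetic and a decaying scheduler look roughly like this:
-
-```python
-# 3e-4 means 3 * 10^-4 == 0.0003; the scheduler starts high and decays.
-import torch
-from transformers import get_cosine_schedule_with_warmup
-
-lr = 3e-4
-steps_per_epoch, epochs = 1000, 3
-model = torch.nn.Linear(8, 8)                     # stand-in for the real model
-optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
-scheduler = get_cosine_schedule_with_warmup(
-    optimizer, num_warmup_steps=100,
-    num_training_steps=steps_per_epoch * epochs)
-for step in range(steps_per_epoch * epochs):
-    # ... forward pass, loss.backward(), optimizer.step() would go here ...
-    scheduler.step()                              # learning rate drifts toward zero
-```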
-
-## Loss
-
-When you're running training, the WebUI's console window will log reports that include, among other things, a numeric value named `Loss`. It will start as a high number, and gradually get lower and lower as it goes.
-
-"Loss" in the world of AI training theoretically means "how close is the model to perfect", with `0` meaning "absolutely perfect". This is calculated by measuring the difference between the model outputting exactly the text you're training it to output, and what it actually outputs.
-
-In practice, a good LLM should have a complex and varied range of ideas running in its artificial head, so a loss of `0` would indicate that the model has broken and forgotten how to think about anything other than what you trained it on.
-
-So, in effect, Loss is a balancing game: you want to get it low enough that it understands your data, but high enough that it isn't forgetting everything else. Generally, if it goes below `1.0`, it's going to start forgetting its prior memories, and you should stop training. In some cases you may prefer to take it as low as `0.5` (if you want it to be very very predictable). Different goals have different needs, so don't be afraid to experiment and see what works best for you.
-
-Note: if you see Loss start at or suddenly jump to exactly `0`, it is likely something has gone wrong in your training process (eg model corruption).
-
-## Note: 4-Bit Monkeypatch
-
-The [4-bit LoRA monkeypatch](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode) works for training, but has side effects:
-- VRAM usage is higher currently. You can reduce the `Micro Batch Size` to `1` to compensate.
- Models do funky things. LoRAs apply themselves, or refuse to apply, or spontaneously error out, etc. It can be helpful to reload the base model or restart the WebUI between training/usage to minimize the chances of anything going haywire.
-- Loading or working with multiple LoRAs at the same time doesn't currently work.
-- Generally, recognize and treat the monkeypatch as the dirty temporary hack it is - it works, but isn't very stable. It will get better in time when everything is merged upstream for full official support.
-
-## Legacy notes
-
-LoRA training was contributed by [mcmonkey4eva](https://github.com/mcmonkey4eva) in PR [#570](https://github.com/oobabooga/text-generation-webui/pull/570).
-
-### Using the original alpaca-lora code
-
-Kept here for reference. The Training tab has much more features than this method.
-
-```
-conda activate textgen
-git clone https://github.com/tloen/alpaca-lora
-```
-
-Edit those two lines in `alpaca-lora/finetune.py` to use your existing model folder instead of downloading everything from decapoda:
-
-```
-model = LlamaForCausalLM.from_pretrained(
- "models/llama-7b",
- load_in_8bit=True,
- device_map="auto",
-)
-tokenizer = LlamaTokenizer.from_pretrained(
- "models/llama-7b", add_eos_token=True
-)
-```
-
-Run the script with:
-
-```
-python finetune.py
-```
-
-It just works. It runs at 22.32s/it, with 1170 iterations in total, so about seven and a half hours to train a LoRA. RTX 3090, 18153MiB VRAM used, drawing maximum power (350W, room heater mode).
diff --git a/spaces/ashercn97/AsherTesting/extensions/character_bias/script.py b/spaces/ashercn97/AsherTesting/extensions/character_bias/script.py
deleted file mode 100644
index ff12f3afdc28be4ead12ffab90bd9fbd783514a2..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/extensions/character_bias/script.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import os
-
-import gradio as gr
-
-# get the current directory of the script
-current_dir = os.path.dirname(os.path.abspath(__file__))
-
-# check if the bias_options.txt file exists, if not, create it
-bias_file = os.path.join(current_dir, "bias_options.txt")
-if not os.path.isfile(bias_file):
- with open(bias_file, "w") as f:
- f.write("*I am so happy*\n*I am so sad*\n*I am so excited*\n*I am so bored*\n*I am so angry*")
-
-# read bias options from the text file
-with open(bias_file, "r") as f:
- bias_options = [line.strip() for line in f.readlines()]
-
-params = {
- "activate": True,
- "bias string": " *I am so happy*",
- "use custom string": False,
-}
-
-
-def input_modifier(string):
- """
- This function is applied to your text inputs before
- they are fed into the model.
- """
- return string
-
-
-def output_modifier(string):
- """
- This function is applied to the model outputs.
- """
- return string
-
-
-def bot_prefix_modifier(string):
- """
- This function is only applied in chat mode. It modifies
- the prefix text for the Bot and can be used to bias its
- behavior.
- """
- if params['activate']:
- if params['use custom string']:
- return f'{string} {params["custom string"].strip()} '
- else:
- return f'{string} {params["bias string"].strip()} '
- else:
- return string
-
-
-def ui():
- # Gradio elements
- activate = gr.Checkbox(value=params['activate'], label='Activate character bias')
- dropdown_string = gr.Dropdown(choices=bias_options, value=params["bias string"], label='Character bias', info='To edit the options in this dropdown edit the "bias_options.txt" file')
- use_custom_string = gr.Checkbox(value=False, label='Use custom bias textbox instead of dropdown')
- custom_string = gr.Textbox(value="", placeholder="Enter custom bias string", label="Custom Character Bias", info='To use this textbox activate the checkbox above')
-
- # Event functions to update the parameters in the backend
- def update_bias_string(x):
- if x:
- params.update({"bias string": x})
- else:
- params.update({"bias string": dropdown_string.get()})
- return x
-
- def update_custom_string(x):
- params.update({"custom string": x})
-
- dropdown_string.change(update_bias_string, dropdown_string, None)
- custom_string.change(update_custom_string, custom_string, None)
- activate.change(lambda x: params.update({"activate": x}), activate, None)
- use_custom_string.change(lambda x: params.update({"use custom string": x}), use_custom_string, None)
-
- # Group elements together depending on the selected option
- def bias_string_group():
- if use_custom_string.value:
- return gr.Group([use_custom_string, custom_string])
- else:
- return dropdown_string
diff --git a/spaces/ashzzf/vits-uma-genshin-honkai/Docker/Dockerfile b/spaces/ashzzf/vits-uma-genshin-honkai/Docker/Dockerfile
deleted file mode 100644
index 4d39cdf02a2ec151686cc1d61234bf723068fed8..0000000000000000000000000000000000000000
--- a/spaces/ashzzf/vits-uma-genshin-honkai/Docker/Dockerfile
+++ /dev/null
@@ -1,12 +0,0 @@
-FROM python:3.9-bullseye
-VOLUME ["/app"]
-WORKDIR /app
-# Set apt to Chinese mirror
-RUN sed -i 's/deb.debian.org/mirrors.ustc.edu.cn/g' /etc/apt/sources.list
-RUN apt-get update && apt-get -y install cmake git
-RUN git clone https://huggingface.co/spaces/ikechan8370/vits-uma-genshin-honkai
-WORKDIR /app/vits-uma-genshin-honkai
-RUN sed -i "s/\.launch()/\.launch(server_name=\"0.0.0.0\")/" /app/vits-uma-genshin-honkai/app.py
-ADD vits.sh /app/vits.sh
-EXPOSE 7860
-ENTRYPOINT [ "/app/vits.sh" ]
\ No newline at end of file
diff --git a/spaces/asimokby/cv-parser-huggingface/README.md b/spaces/asimokby/cv-parser-huggingface/README.md
deleted file mode 100644
index 79896d024889ab273eea4c8099754235066cf174..0000000000000000000000000000000000000000
--- a/spaces/asimokby/cv-parser-huggingface/README.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: Cv Parser
-emoji: 💩
-colorFrom: green
-colorTo: red
-sdk: gradio
-app_file: app.py
-pinned: false
-license: mit
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`models`: _List[string]_
-HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`datasets`: _List[string]_
-HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/atimughal662/InfoFusion/src/utils_langchain.py b/spaces/atimughal662/InfoFusion/src/utils_langchain.py
deleted file mode 100644
index 7483cca69443691de773196ba6c5134438e113aa..0000000000000000000000000000000000000000
--- a/spaces/atimughal662/InfoFusion/src/utils_langchain.py
+++ /dev/null
@@ -1,152 +0,0 @@
-import copy
-import os
-import types
-import uuid
-from typing import Any, Dict, List, Union, Optional
-import time
-import queue
-import pathlib
-from datetime import datetime
-
-from src.utils import hash_file, get_sha
-
-from langchain.callbacks.base import BaseCallbackHandler
-from langchain.schema import LLMResult
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain.docstore.document import Document
-
-
-class StreamingGradioCallbackHandler(BaseCallbackHandler):
- """
- Similar to H2OTextIteratorStreamer that is for HF backend, but here LangChain backend
- """
- def __init__(self, timeout: Optional[float] = None, block=True):
- super().__init__()
- self.text_queue = queue.SimpleQueue()
- self.stop_signal = None
- self.do_stop = False
- self.timeout = timeout
- self.block = block
-
- def on_llm_start(
- self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
- ) -> None:
- """Run when LLM starts running. Clean the queue."""
- while not self.text_queue.empty():
- try:
- self.text_queue.get(block=False)
- except queue.Empty:
- continue
-
- def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
- """Run on new LLM token. Only available when streaming is enabled."""
- self.text_queue.put(token)
-
- def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
- """Run when LLM ends running."""
- self.text_queue.put(self.stop_signal)
-
- def on_llm_error(
- self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
- ) -> None:
- """Run when LLM errors."""
- self.text_queue.put(self.stop_signal)
-
- def __iter__(self):
- return self
-
- def __next__(self):
- while True:
- try:
- value = self.stop_signal # value looks unused in pycharm, not true
- if self.do_stop:
- print("hit stop", flush=True)
- # could raise or break, maybe best to raise and make parent see if any exception in thread
- raise StopIteration()
- # break
- value = self.text_queue.get(block=self.block, timeout=self.timeout)
- break
- except queue.Empty:
- time.sleep(0.01)
- if value == self.stop_signal:
- raise StopIteration()
- else:
- return value
-
-
-def _chunk_sources(sources, chunk=True, chunk_size=512, language=None, db_type=None):
- assert db_type is not None
-
- if not isinstance(sources, (list, tuple, types.GeneratorType)) and not callable(sources):
- # if just one document
- sources = [sources]
- if not chunk:
- [x.metadata.update(dict(chunk_id=0)) for chunk_id, x in enumerate(sources)]
- if db_type in ['chroma', 'chroma_old']:
- # make copy so can have separate summarize case
- source_chunks = [Document(page_content=x.page_content,
- metadata=copy.deepcopy(x.metadata) or {})
- for x in sources]
- else:
- source_chunks = sources # just same thing
- else:
- if language and False:
- # Bug in langchain, keep separator=True not working
- # https://github.com/hwchase17/langchain/issues/2836
- # so avoid this for now
- keep_separator = True
- separators = RecursiveCharacterTextSplitter.get_separators_for_language(language)
- else:
- separators = ["\n\n", "\n", " ", ""]
- keep_separator = False
- splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=0, keep_separator=keep_separator,
- separators=separators)
- source_chunks = splitter.split_documents(sources)
-
- # currently in order, but when pull from db won't be, so mark order and document by hash
- [x.metadata.update(dict(chunk_id=chunk_id)) for chunk_id, x in enumerate(source_chunks)]
-
- if db_type in ['chroma', 'chroma_old']:
- # also keep original source for summarization and other tasks
-
- # assign chunk_id=-1 for original content
- # this assumes, as is currently true, that splitter makes new documents and list and metadata is deepcopy
- [x.metadata.update(dict(chunk_id=-1)) for chunk_id, x in enumerate(sources)]
-
- # in some cases sources is generator, so convert to list
- return list(sources) + source_chunks
- else:
- return source_chunks
-
-
-def add_parser(docs1, parser):
- [x.metadata.update(dict(parser=x.metadata.get('parser', parser))) for x in docs1]
-
-
-def _add_meta(docs1, file, headsize=50, filei=0, parser='NotSet'):
- if os.path.isfile(file):
- file_extension = pathlib.Path(file).suffix
- hashid = hash_file(file)
- else:
- file_extension = str(file) # not file, just show full thing
- hashid = get_sha(file)
- doc_hash = str(uuid.uuid4())[:10]
- if not isinstance(docs1, (list, tuple, types.GeneratorType)):
- docs1 = [docs1]
- [x.metadata.update(dict(input_type=file_extension,
- parser=x.metadata.get('parser', parser),
- date=str(datetime.now()),
- time=time.time(),
- order_id=order_id,
- hashid=hashid,
- doc_hash=doc_hash,
- file_id=filei,
- head=x.page_content[:headsize].strip())) for order_id, x in enumerate(docs1)]
-
-
-def fix_json_meta(docs1):
- if not isinstance(docs1, (list, tuple, types.GeneratorType)):
- docs1 = [docs1]
- # fix meta, chroma doesn't like None, only str, int, float for values
- [x.metadata.update(dict(sender_name=x.metadata.get('sender_name') or '')) for x in docs1]
- [x.metadata.update(dict(timestamp_ms=x.metadata.get('timestamp_ms') or '')) for x in docs1]
diff --git a/spaces/auto-academic/auto-draft/latex_templates/AAAI2023/introduction.tex b/spaces/auto-academic/auto-draft/latex_templates/AAAI2023/introduction.tex
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/static/index.html b/spaces/awaawawawa/iurf7irfuyytruyyugb/static/index.html
deleted file mode 100644
index a33806e93fbb159f8384e4e9becbb55e07599d64..0000000000000000000000000000000000000000
--- a/spaces/awaawawawa/iurf7irfuyytruyyugb/static/index.html
+++ /dev/null
@@ -1,1919 +0,0 @@
-
-
-
-
-
- Fast API 🤗 Space served with Uvicorn
-
-
-
-
-
-
- """
-
- with gr.Blocks() as demo:
- gr.Markdown(
- """
-
Balacoon🦝 Revoice
-
-
- Welcome to the live demo of Balacoon's Revoice service.
- Check out our [website](https://balacoon.com/products/) to learn more.
- Zero-shot speech generation allows to generate speech with any voice
- given just a single sample as a reference.
- For optimal results, we recommend using clean audio files in English.
-
- Here's how it works:
-
- 1. Provide your credentials (API key and secret).
- 2. Recording or upload your voice for conversion, or provide text for synthesis.
- 3. Select an audio sample that represents the target voice you want to convert to.
- 4. Click the "Generate" button and listen to the result!
-
- If providing your own audio files, please use WAVE PCM.
- Service works with 16kHz, 16 bit, mono audio.
- """
- )
- gr.Markdown(badges)
- with gr.Row():
- apikey = gr.Textbox(label="API key", placeholder="Enter API key")
- with gr.Row():
- apisecret = gr.Textbox(label="API secret", placeholder="Enter API secret")
- with gr.Row():
- with gr.Column(variant="panel"):
- src_audio_mic = gr.Audio(source="microphone", label="Record your voice")
- src_audio_file = gr.Audio(
- source="upload", label="Or upload audio to convert"
- )
- src_text = gr.Textbox(label="Text", placeholder="Or provide text to synthesize")
-
- with gr.Column(variant="panel"):
- tgt_audio_file = gr.Audio(
- source="upload", label="Select audio with target voice"
- )
- tgt_examples_paths = glob.glob(
- os.path.join(script_dir, "references", "*.wav")
- )
- gr.Examples(
- tgt_examples_paths,
- inputs=[tgt_audio_file],
- )
-
- with gr.Row():
- convert_btn = gr.Button("Generate")
- with gr.Row():
- result_audio = gr.Audio()
-
- def speech_generation(src_from_mic_, src_from_file_, src_text_, tgt_from_file_, api_key_, api_secret_, request_: gr.Request):
- """
- helper function which checks where source come from
- """
- src_ = None
- if src_from_mic_:
- src_ = src_from_mic_
- elif src_from_file_:
- src_ = src_from_file_
- tgt_ = tgt_from_file_
- if (not src_ and not src_text_) or not tgt_:
- logging.warning("source or target are not provided")
- return
- return service_request(src_text_, src_, tgt_, api_key_, api_secret_)
-
- convert_btn.click(
- speech_generation,
- inputs=[src_audio_mic, src_audio_file, src_text, tgt_audio_file, apikey, apisecret],
- outputs=result_audio,
- )
-
- demo.queue(concurrency_count=1).launch()
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/controls/MapControls.js b/spaces/banana-projects/web3d/node_modules/three/examples/jsm/controls/MapControls.js
deleted file mode 100644
index 973fa443b0031dd3e18a27db1dc66fa3965c62c6..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/controls/MapControls.js
+++ /dev/null
@@ -1,1166 +0,0 @@
-/**
- * @author qiao / https://github.com/qiao
- * @author mrdoob / http://mrdoob.com
- * @author alteredq / http://alteredqualia.com/
- * @author WestLangley / http://github.com/WestLangley
- * @author erich666 / http://erichaines.com
- * @author moroine / https://github.com/moroine
- */
-
-import {
- EventDispatcher,
- MOUSE,
- Quaternion,
- Spherical,
- Vector2,
- Vector3
-} from "../../../build/three.module.js";
-
-// This set of controls performs orbiting, dollying (zooming), and panning.
-// Unlike TrackballControls, it maintains the "up" direction object.up (+Y by default).
-// This is very similar to OrbitControls, another set of touch behavior
-//
-// Orbit - right mouse, or left mouse + ctrl/meta/shiftKey / touch: two-finger rotate
-// Zoom - middle mouse, or mousewheel / touch: two-finger spread or squish
-// Pan - left mouse, or arrow keys / touch: one-finger move
-
-var MapControls = function ( object, domElement ) {
-
- this.object = object;
-
- this.domElement = ( domElement !== undefined ) ? domElement : document;
-
- // Set to false to disable this control
- this.enabled = true;
-
- // "target" sets the location of focus, where the object orbits around
- this.target = new Vector3();
-
- // How far you can dolly in and out ( PerspectiveCamera only )
- this.minDistance = 0;
- this.maxDistance = Infinity;
-
- // How far you can zoom in and out ( OrthographicCamera only )
- this.minZoom = 0;
- this.maxZoom = Infinity;
-
- // How far you can orbit vertically, upper and lower limits.
- // Range is 0 to Math.PI radians.
- this.minPolarAngle = 0; // radians
- this.maxPolarAngle = Math.PI; // radians
-
- // How far you can orbit horizontally, upper and lower limits.
- // If set, must be a sub-interval of the interval [ - Math.PI, Math.PI ].
- this.minAzimuthAngle = - Infinity; // radians
- this.maxAzimuthAngle = Infinity; // radians
-
- // Set to true to enable damping (inertia)
- // If damping is enabled, you must call controls.update() in your animation loop
- this.enableDamping = false;
- this.dampingFactor = 0.25;
-
- // This option actually enables dollying in and out; left as "zoom" for backwards compatibility.
- // Set to false to disable zooming
- this.enableZoom = true;
- this.zoomSpeed = 1.0;
-
- // Set to false to disable rotating
- this.enableRotate = true;
- this.rotateSpeed = 1.0;
-
- // Set to false to disable panning
- this.enablePan = true;
- this.panSpeed = 1.0;
- this.screenSpacePanning = false; // if true, pan in screen-space
- this.keyPanSpeed = 7.0; // pixels moved per arrow key push
-
- // Set to true to automatically rotate around the target
- // If auto-rotate is enabled, you must call controls.update() in your animation loop
- this.autoRotate = false;
- this.autoRotateSpeed = 2.0; // 30 seconds per round when fps is 60
-
- // Set to false to disable use of the keys
- this.enableKeys = true;
-
- // The four arrow keys
- this.keys = { LEFT: 37, UP: 38, RIGHT: 39, BOTTOM: 40 };
-
- // Mouse buttons
- this.mouseButtons = { LEFT: MOUSE.LEFT, MIDDLE: MOUSE.MIDDLE, RIGHT: MOUSE.RIGHT };
-
- // for reset
- this.target0 = this.target.clone();
- this.position0 = this.object.position.clone();
- this.zoom0 = this.object.zoom;
-
- //
- // public methods
- //
-
- this.getPolarAngle = function () {
-
- return spherical.phi;
-
- };
-
- this.getAzimuthalAngle = function () {
-
- return spherical.theta;
-
- };
-
- this.saveState = function () {
-
- scope.target0.copy( scope.target );
- scope.position0.copy( scope.object.position );
- scope.zoom0 = scope.object.zoom;
-
- };
-
- this.reset = function () {
-
- scope.target.copy( scope.target0 );
- scope.object.position.copy( scope.position0 );
- scope.object.zoom = scope.zoom0;
-
- scope.object.updateProjectionMatrix();
- scope.dispatchEvent( changeEvent );
-
- scope.update();
-
- state = STATE.NONE;
-
- };
-
- // this method is exposed, but perhaps it would be better if we can make it private...
- this.update = function () {
-
- var offset = new Vector3();
-
- // so camera.up is the orbit axis
- var quat = new Quaternion().setFromUnitVectors( object.up, new Vector3( 0, 1, 0 ) );
- var quatInverse = quat.clone().inverse();
-
- var lastPosition = new Vector3();
- var lastQuaternion = new Quaternion();
-
- return function update() {
-
- var position = scope.object.position;
-
- offset.copy( position ).sub( scope.target );
-
- // rotate offset to "y-axis-is-up" space
- offset.applyQuaternion( quat );
-
- // angle from z-axis around y-axis
- spherical.setFromVector3( offset );
-
- if ( scope.autoRotate && state === STATE.NONE ) {
-
- rotateLeft( getAutoRotationAngle() );
-
- }
-
- spherical.theta += sphericalDelta.theta;
- spherical.phi += sphericalDelta.phi;
-
- // restrict theta to be between desired limits
- spherical.theta = Math.max( scope.minAzimuthAngle, Math.min( scope.maxAzimuthAngle, spherical.theta ) );
-
- // restrict phi to be between desired limits
- spherical.phi = Math.max( scope.minPolarAngle, Math.min( scope.maxPolarAngle, spherical.phi ) );
-
- spherical.makeSafe();
-
-
- spherical.radius *= scale;
-
- // restrict radius to be between desired limits
- spherical.radius = Math.max( scope.minDistance, Math.min( scope.maxDistance, spherical.radius ) );
-
- // move target to panned location
- scope.target.add( panOffset );
-
- offset.setFromSpherical( spherical );
-
- // rotate offset back to "camera-up-vector-is-up" space
- offset.applyQuaternion( quatInverse );
-
- position.copy( scope.target ).add( offset );
-
- scope.object.lookAt( scope.target );
-
- if ( scope.enableDamping === true ) {
-
- sphericalDelta.theta *= ( 1 - scope.dampingFactor );
- sphericalDelta.phi *= ( 1 - scope.dampingFactor );
-
- panOffset.multiplyScalar( 1 - scope.dampingFactor );
-
- } else {
-
- sphericalDelta.set( 0, 0, 0 );
-
- panOffset.set( 0, 0, 0 );
-
- }
-
- scale = 1;
-
- // update condition is:
- // min(camera displacement, camera rotation in radians)^2 > EPS
- // using small-angle approximation cos(x/2) = 1 - x^2 / 8
-
- if ( zoomChanged ||
- lastPosition.distanceToSquared( scope.object.position ) > EPS ||
- 8 * ( 1 - lastQuaternion.dot( scope.object.quaternion ) ) > EPS ) {
-
- scope.dispatchEvent( changeEvent );
-
- lastPosition.copy( scope.object.position );
- lastQuaternion.copy( scope.object.quaternion );
- zoomChanged = false;
-
- return true;
-
- }
-
- return false;
-
- };
-
- }();
-
- this.dispose = function () {
-
- scope.domElement.removeEventListener( 'contextmenu', onContextMenu, false );
- scope.domElement.removeEventListener( 'mousedown', onMouseDown, false );
- scope.domElement.removeEventListener( 'wheel', onMouseWheel, false );
-
- scope.domElement.removeEventListener( 'touchstart', onTouchStart, false );
- scope.domElement.removeEventListener( 'touchend', onTouchEnd, false );
- scope.domElement.removeEventListener( 'touchmove', onTouchMove, false );
-
- document.removeEventListener( 'mousemove', onMouseMove, false );
- document.removeEventListener( 'mouseup', onMouseUp, false );
-
- window.removeEventListener( 'keydown', onKeyDown, false );
-
- //scope.dispatchEvent( { type: 'dispose' } ); // should this be added here?
-
- };
-
- //
- // internals
- //
-
- var scope = this;
-
- var changeEvent = { type: 'change' };
- var startEvent = { type: 'start' };
- var endEvent = { type: 'end' };
-
- var STATE = {
- NONE: 0,
- ROTATE_UP: 1,
- ROTATE_LEFT: 2,
- ROTATE: 3, // ROTATE_UP | ROTATE_LEFT
- DOLLY: 4,
- DOLLY_ROTATE: 7, // ROTATE | DOLLY
- PAN: 8,
- DOLLY_PAN: 12, // DOLLY | PAN
- };
-
- var state = STATE.NONE;
-
- var EPS = 0.000001;
-
- // current position in spherical coordinates
- var spherical = new Spherical();
- var sphericalDelta = new Spherical();
-
- var scale = 1;
- var panOffset = new Vector3();
- var zoomChanged = false;
-
- var rotateStart = new Vector2();
- var rotateStart2 = new Vector2();
- var rotateEnd = new Vector2();
- var rotateEnd2 = new Vector2();
- var rotateDelta = new Vector2();
- var rotateDelta2 = new Vector2();
- var rotateDeltaStartFingers = new Vector2();
- var rotateDeltaEndFingers = new Vector2();
-
- var panStart = new Vector2();
- var panEnd = new Vector2();
- var panDelta = new Vector2();
-
- var dollyStart = new Vector2();
- var dollyEnd = new Vector2();
- var dollyDelta = new Vector2();
-
- function getAutoRotationAngle() {
-
- return 2 * Math.PI / 60 / 60 * scope.autoRotateSpeed;
-
- }
-
- function getZoomScale() {
-
- return Math.pow( 0.95, scope.zoomSpeed );
-
- }
-
- function rotateLeft( angle ) {
-
- sphericalDelta.theta -= angle;
-
- }
-
- function rotateUp( angle ) {
-
- sphericalDelta.phi -= angle;
-
- }
-
- var panLeft = function () {
-
- var v = new Vector3();
-
- return function panLeft( distance, objectMatrix ) {
-
- v.setFromMatrixColumn( objectMatrix, 0 ); // get X column of objectMatrix
- v.multiplyScalar( - distance );
-
- panOffset.add( v );
-
- };
-
- }();
-
- var panUp = function () {
-
- var v = new Vector3();
-
- return function panUp( distance, objectMatrix ) {
-
- if ( scope.screenSpacePanning === true ) {
-
- v.setFromMatrixColumn( objectMatrix, 1 );
-
- } else {
-
- v.setFromMatrixColumn( objectMatrix, 0 );
- v.crossVectors( scope.object.up, v );
-
- }
-
- v.multiplyScalar( distance );
-
- panOffset.add( v );
-
- };
-
- }();
-
- // deltaX and deltaY are in pixels; right and down are positive
- var pan = function () {
-
- var offset = new Vector3();
-
- return function pan( deltaX, deltaY ) {
-
- var element = scope.domElement === document ? scope.domElement.body : scope.domElement;
-
- if ( scope.object.isPerspectiveCamera ) {
-
- // perspective
- var position = scope.object.position;
- offset.copy( position ).sub( scope.target );
- var targetDistance = offset.length();
-
- // half of the fov is center to top of screen
- targetDistance *= Math.tan( ( scope.object.fov / 2 ) * Math.PI / 180.0 );
-
- // we use only clientHeight here so aspect ratio does not distort speed
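-			// so a drag of clientHeight pixels pans by the full height visible at the target ( 2 * targetDistance )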
- panLeft( 2 * deltaX * targetDistance / element.clientHeight, scope.object.matrix );
- panUp( 2 * deltaY * targetDistance / element.clientHeight, scope.object.matrix );
-
- } else if ( scope.object.isOrthographicCamera ) {
-
- // orthographic
- panLeft( deltaX * ( scope.object.right - scope.object.left ) / scope.object.zoom / element.clientWidth, scope.object.matrix );
- panUp( deltaY * ( scope.object.top - scope.object.bottom ) / scope.object.zoom / element.clientHeight, scope.object.matrix );
-
- } else {
-
- // camera neither orthographic nor perspective
- console.warn( 'WARNING: MapControls.js encountered an unknown camera type - pan disabled.' );
- scope.enablePan = false;
-
- }
-
- };
-
- }();
-
- function dollyIn( dollyScale ) {
-
- if ( scope.object.isPerspectiveCamera ) {
-
- scale /= dollyScale;
-
- } else if ( scope.object.isOrthographicCamera ) {
-
- scope.object.zoom = Math.max( scope.minZoom, Math.min( scope.maxZoom, scope.object.zoom * dollyScale ) );
- scope.object.updateProjectionMatrix();
- zoomChanged = true;
-
- } else {
-
- console.warn( 'WARNING: MapControls.js encountered an unknown camera type - dolly/zoom disabled.' );
- scope.enableZoom = false;
-
- }
-
- }
-
- function dollyOut( dollyScale ) {
-
- if ( scope.object.isPerspectiveCamera ) {
-
- scale *= dollyScale;
-
- } else if ( scope.object.isOrthographicCamera ) {
-
- scope.object.zoom = Math.max( scope.minZoom, Math.min( scope.maxZoom, scope.object.zoom / dollyScale ) );
- scope.object.updateProjectionMatrix();
- zoomChanged = true;
-
- } else {
-
- console.warn( 'WARNING: MapControls.js encountered an unknown camera type - dolly/zoom disabled.' );
- scope.enableZoom = false;
-
- }
-
- }
-
- //
- // event callbacks - update the object state
- //
-
- function handleMouseDownRotate( event ) {
-
- //console.log( 'handleMouseDownRotate' );
-
- rotateStart.set( event.clientX, event.clientY );
-
- }
-
- function handleMouseDownDolly( event ) {
-
- //console.log( 'handleMouseDownDolly' );
-
- dollyStart.set( event.clientX, event.clientY );
-
- }
-
- function handleMouseDownPan( event ) {
-
- //console.log( 'handleMouseDownPan' );
-
- panStart.set( event.clientX, event.clientY );
-
- }
-
- function handleMouseMoveRotate( event ) {
-
- //console.log( 'handleMouseMoveRotate' );
-
- rotateEnd.set( event.clientX, event.clientY );
-
- rotateDelta.subVectors( rotateEnd, rotateStart ).multiplyScalar( scope.rotateSpeed );
-
- var element = scope.domElement === document ? scope.domElement.body : scope.domElement;
-
- rotateLeft( 2 * Math.PI * rotateDelta.x / element.clientHeight ); // yes, height
-
- rotateUp( 2 * Math.PI * rotateDelta.y / element.clientHeight );
-
- rotateStart.copy( rotateEnd );
-
- scope.update();
-
- }
-
- function handleMouseMoveDolly( event ) {
-
- //console.log( 'handleMouseMoveDolly' );
-
- dollyEnd.set( event.clientX, event.clientY );
-
- dollyDelta.subVectors( dollyEnd, dollyStart );
-
- if ( dollyDelta.y > 0 ) {
-
- dollyIn( getZoomScale() );
-
- } else if ( dollyDelta.y < 0 ) {
-
- dollyOut( getZoomScale() );
-
- }
-
- dollyStart.copy( dollyEnd );
-
- scope.update();
-
- }
-
- function handleMouseMovePan( event ) {
-
- //console.log( 'handleMouseMovePan' );
-
- panEnd.set( event.clientX, event.clientY );
-
- panDelta.subVectors( panEnd, panStart ).multiplyScalar( scope.panSpeed );
-
- pan( panDelta.x, panDelta.y );
-
- panStart.copy( panEnd );
-
- scope.update();
-
- }
-
- function handleMouseUp( event ) {
-
- // console.log( 'handleMouseUp' );
-
- }
-
- function handleMouseWheel( event ) {
-
- // console.log( 'handleMouseWheel' );
-
- if ( event.deltaY < 0 ) {
-
- dollyOut( getZoomScale() );
-
- } else if ( event.deltaY > 0 ) {
-
- dollyIn( getZoomScale() );
-
- }
-
- scope.update();
-
- }
-
- function handleKeyDown( event ) {
-
- //console.log( 'handleKeyDown' );
-
- switch ( event.keyCode ) {
-
- case scope.keys.UP:
- pan( 0, scope.keyPanSpeed );
- scope.update();
- break;
-
- case scope.keys.BOTTOM:
- pan( 0, - scope.keyPanSpeed );
- scope.update();
- break;
-
- case scope.keys.LEFT:
- pan( scope.keyPanSpeed, 0 );
- scope.update();
- break;
-
- case scope.keys.RIGHT:
- pan( - scope.keyPanSpeed, 0 );
- scope.update();
- break;
-
- }
-
- }
-
- function handleTouchStartRotate( event ) {
-
- // console.log( 'handleTouchStartRotate' );
-
- // First finger
- rotateStart.set( event.touches[ 0 ].pageX, event.touches[ 0 ].pageY );
-
- // Second finger
- rotateStart2.set( event.touches[ 1 ].pageX, event.touches[ 1 ].pageY );
-
- }
-
- function handleTouchStartDolly( event ) {
-
- if ( scope.enableZoom ) {
-
- // console.log( 'handleTouchStartDolly' );
-
- var dx = event.touches[ 0 ].pageX - event.touches[ 1 ].pageX;
- var dy = event.touches[ 0 ].pageY - event.touches[ 1 ].pageY;
-
- var distance = Math.sqrt( dx * dx + dy * dy );
-
- dollyStart.set( 0, distance );
-
- }
-
- }
-
- function handleTouchStartPan( event ) {
-
- if ( scope.enablePan ) {
-
- // console.log( 'handleTouchStartPan' );
-
- panStart.set( event.touches[ 0 ].pageX, event.touches[ 0 ].pageY );
-
- }
-
- }
-
- function handleTouchMoveRotate( event ) {
-
- if ( scope.enableRotate === false ) return;
- if ( ( state & STATE.ROTATE ) === 0 ) return;
-
- // First finger
- rotateEnd.set( event.touches[ 0 ].pageX, event.touches[ 0 ].pageY );
-
- // Second finger
- rotateEnd2.set( event.touches[ 1 ].pageX, event.touches[ 1 ].pageY );
-
- rotateDelta.subVectors( rotateEnd, rotateStart );
- rotateDelta2.subVectors( rotateEnd2, rotateStart2 );
- rotateDeltaStartFingers.subVectors( rotateStart2, rotateStart );
- rotateDeltaEndFingers.subVectors( rotateEnd2, rotateEnd );
-
- if ( isRotateUp() ) {
-
- var element = scope.domElement === document ? scope.domElement.body : scope.domElement;
-
-			// rotating up and down across the whole screen attempts a full 360 degrees, but is limited to 180
- rotateUp( 2 * Math.PI * rotateDelta.y / element.clientHeight );
-
- // Start rotateUp ==> disable all movement to prevent flickering
- state = STATE.ROTATE_UP;
-
- } else if ( ( state & STATE.ROTATE_LEFT ) !== 0 ) {
-
- rotateLeft( ( rotateDeltaStartFingers.angle() - rotateDeltaEndFingers.angle() ) * scope.rotateSpeed );
-
- }
-
- rotateStart.copy( rotateEnd );
- rotateStart2.copy( rotateEnd2 );
-
- }
-
- function isRotateUp() {
-
-		// At the start, are the two fingers aligned horizontally?
- if ( ! isHorizontal( rotateDeltaStartFingers ) ) {
-
- return false;
-
- }
-
-		// At the end, are the two fingers still aligned horizontally?
- if ( ! isHorizontal( rotateDeltaEndFingers ) ) {
-
- return false;
-
- }
-
-		// Did the first finger move vertically between start and end?
- if ( ! isVertical( rotateDelta ) ) {
-
- return false;
-
- }
-
-		// Did the second finger move vertically between start and end?
- if ( ! isVertical( rotateDelta2 ) ) {
-
- return false;
-
- }
-
-		// Did both fingers move in the same direction? (prevents one finger moving up while the other moves down)
- return rotateDelta.dot( rotateDelta2 ) > 0;
-
- }
-
- var isHorizontal = function () {
-
- var precision = Math.sin( Math.PI / 6 );
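-		// sin( 30 deg ) = 0.5: a vector counts as horizontal when its angle is within 30 degrees of the x-axis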
-
- return function isHorizontal( vector ) {
-
- return Math.abs( Math.sin( vector.angle() ) ) < precision;
-
- };
-
- }();
-
- var isVertical = function () {
-
- var precision = Math.cos( Math.PI / 2 - Math.PI / 6 );
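-		// cos( 60 deg ) = 0.5: a vector counts as vertical when its angle is within 30 degrees of the y-axis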
-
- return function isVertical( vector ) {
-
- return Math.abs( Math.cos( vector.angle() ) ) < precision;
-
- };
-
- }();
-
- function handleTouchMoveDolly( event ) {
-
- if ( scope.enableZoom === false ) return;
- if ( ( state & STATE.DOLLY ) === 0 ) return;
-
- // console.log( 'handleTouchMoveDolly' );
-
- var dx = event.touches[ 0 ].pageX - event.touches[ 1 ].pageX;
- var dy = event.touches[ 0 ].pageY - event.touches[ 1 ].pageY;
-
- var distance = Math.sqrt( dx * dx + dy * dy );
-
- dollyEnd.set( 0, distance );
-
- dollyDelta.set( 0, Math.pow( dollyEnd.y / dollyStart.y, scope.zoomSpeed ) );
-
- dollyIn( dollyDelta.y );
-
- dollyStart.copy( dollyEnd );
-
- }
-
- function handleTouchMovePan( event ) {
-
- if ( scope.enablePan === false ) return;
- if ( ( state & STATE.PAN ) === 0 ) return;
-
- // console.log( 'handleTouchMovePan' );
-
- panEnd.set( event.touches[ 0 ].pageX, event.touches[ 0 ].pageY );
-
- panDelta.subVectors( panEnd, panStart ).multiplyScalar( scope.panSpeed );
-
- pan( panDelta.x, panDelta.y );
-
- panStart.copy( panEnd );
-
- }
-
- function handleTouchEnd( event ) {
-
- //console.log( 'handleTouchEnd' );
-
- }
-
- //
- // event handlers - FSM: listen for events and reset state
- //
-
- function onMouseDown( event ) {
-
- if ( scope.enabled === false ) return;
-
- event.preventDefault();
-
- switch ( event.button ) {
-
- case scope.mouseButtons.LEFT:
-
- if ( event.ctrlKey || event.metaKey || event.shiftKey ) {
-
- if ( scope.enableRotate === false ) return;
-
- handleMouseDownRotate( event );
-
- state = STATE.ROTATE;
-
- } else {
-
- if ( scope.enablePan === false ) return;
-
- handleMouseDownPan( event );
-
- state = STATE.PAN;
-
- }
-
- break;
-
- case scope.mouseButtons.MIDDLE:
-
- if ( scope.enableZoom === false ) return;
-
- handleMouseDownDolly( event );
-
- state = STATE.DOLLY;
-
- break;
-
- case scope.mouseButtons.RIGHT:
-
- if ( scope.enableRotate === false ) return;
-
- handleMouseDownRotate( event );
-
- state = STATE.ROTATE;
-
- break;
-
- }
-
- if ( state !== STATE.NONE ) {
-
- document.addEventListener( 'mousemove', onMouseMove, false );
- document.addEventListener( 'mouseup', onMouseUp, false );
-
- scope.dispatchEvent( startEvent );
-
- }
-
- }
-
- function onMouseMove( event ) {
-
- if ( scope.enabled === false ) return;
-
- event.preventDefault();
-
- switch ( state ) {
-
- case STATE.ROTATE:
-
- if ( scope.enableRotate === false ) return;
-
- handleMouseMoveRotate( event );
-
- break;
-
- case STATE.DOLLY:
-
- if ( scope.enableZoom === false ) return;
-
- handleMouseMoveDolly( event );
-
- break;
-
- case STATE.PAN:
-
- if ( scope.enablePan === false ) return;
-
- handleMouseMovePan( event );
-
- break;
-
- }
-
- }
-
- function onMouseUp( event ) {
-
- if ( scope.enabled === false ) return;
-
- handleMouseUp( event );
-
- document.removeEventListener( 'mousemove', onMouseMove, false );
- document.removeEventListener( 'mouseup', onMouseUp, false );
-
- scope.dispatchEvent( endEvent );
-
- state = STATE.NONE;
-
- }
-
- function onMouseWheel( event ) {
-
- if ( scope.enabled === false || scope.enableZoom === false || ( state !== STATE.NONE && state !== STATE.ROTATE ) ) return;
-
- event.preventDefault();
- event.stopPropagation();
-
- scope.dispatchEvent( startEvent );
-
- handleMouseWheel( event );
-
- scope.dispatchEvent( endEvent );
-
- }
-
- function onKeyDown( event ) {
-
- if ( scope.enabled === false || scope.enableKeys === false || scope.enablePan === false ) return;
-
- handleKeyDown( event );
-
- }
-
- function onTouchStart( event ) {
-
- if ( scope.enabled === false ) return;
-
- event.preventDefault();
-
- switch ( event.touches.length ) {
-
- case 1: // one-fingered touch: pan
-
- if ( scope.enablePan === false ) return;
-
- handleTouchStartPan( event );
-
- state = STATE.PAN;
-
- break;
-
- case 2: // two-fingered touch: rotate-dolly
-
- if ( scope.enableZoom === false && scope.enableRotate === false ) return;
-
- handleTouchStartRotate( event );
- handleTouchStartDolly( event );
-
- state = STATE.DOLLY_ROTATE;
-
- break;
-
- default:
-
- state = STATE.NONE;
-
- }
-
- if ( state !== STATE.NONE ) {
-
- scope.dispatchEvent( startEvent );
-
- }
-
- }
-
- function onTouchMove( event ) {
-
- if ( scope.enabled === false ) return;
-
- event.preventDefault();
- event.stopPropagation();
-
- switch ( event.touches.length ) {
-
- case 1: // one-fingered touch: pan
-
- if ( scope.enablePan === false ) return;
- if ( state !== STATE.PAN ) return; // is this needed?
-
- handleTouchMovePan( event );
-
- scope.update();
-
- break;
-
- case 2: // two-fingered touch: rotate-dolly
-
- if ( scope.enableZoom === false && scope.enableRotate === false ) return;
- if ( ( state & STATE.DOLLY_ROTATE ) === 0 ) return; // is this needed?
-
- handleTouchMoveRotate( event );
- handleTouchMoveDolly( event );
-
- scope.update();
-
- break;
-
- default:
-
- state = STATE.NONE;
-
- }
-
- }
-
- function onTouchEnd( event ) {
-
- if ( scope.enabled === false ) return;
-
- handleTouchEnd( event );
-
- scope.dispatchEvent( endEvent );
-
- state = STATE.NONE;
-
- }
-
- function onContextMenu( event ) {
-
- if ( scope.enabled === false ) return;
-
- event.preventDefault();
-
- }
-
- //
-
- scope.domElement.addEventListener( 'contextmenu', onContextMenu, false );
-
- scope.domElement.addEventListener( 'mousedown', onMouseDown, false );
- scope.domElement.addEventListener( 'wheel', onMouseWheel, false );
-
- scope.domElement.addEventListener( 'touchstart', onTouchStart, false );
- scope.domElement.addEventListener( 'touchend', onTouchEnd, false );
- scope.domElement.addEventListener( 'touchmove', onTouchMove, false );
-
- window.addEventListener( 'keydown', onKeyDown, false );
-
- // force an update at start
-
- this.update();
-
-};
-
-MapControls.prototype = Object.create( EventDispatcher.prototype );
-MapControls.prototype.constructor = MapControls;
-
-Object.defineProperties( MapControls.prototype, {
-
- center: {
-
- get: function () {
-
- console.warn( 'THREE.MapControls: .center has been renamed to .target' );
- return this.target;
-
- }
-
- },
-
- // backward compatibility
-
- noZoom: {
-
- get: function () {
-
- console.warn( 'THREE.MapControls: .noZoom has been deprecated. Use .enableZoom instead.' );
- return ! this.enableZoom;
-
- },
-
- set: function ( value ) {
-
- console.warn( 'THREE.MapControls: .noZoom has been deprecated. Use .enableZoom instead.' );
- this.enableZoom = ! value;
-
- }
-
- },
-
- noRotate: {
-
- get: function () {
-
- console.warn( 'THREE.MapControls: .noRotate has been deprecated. Use .enableRotate instead.' );
- return ! this.enableRotate;
-
- },
-
- set: function ( value ) {
-
- console.warn( 'THREE.MapControls: .noRotate has been deprecated. Use .enableRotate instead.' );
- this.enableRotate = ! value;
-
- }
-
- },
-
- noPan: {
-
- get: function () {
-
- console.warn( 'THREE.MapControls: .noPan has been deprecated. Use .enablePan instead.' );
- return ! this.enablePan;
-
- },
-
- set: function ( value ) {
-
- console.warn( 'THREE.MapControls: .noPan has been deprecated. Use .enablePan instead.' );
- this.enablePan = ! value;
-
- }
-
- },
-
- noKeys: {
-
- get: function () {
-
- console.warn( 'THREE.MapControls: .noKeys has been deprecated. Use .enableKeys instead.' );
- return ! this.enableKeys;
-
- },
-
- set: function ( value ) {
-
- console.warn( 'THREE.MapControls: .noKeys has been deprecated. Use .enableKeys instead.' );
- this.enableKeys = ! value;
-
- }
-
- },
-
- staticMoving: {
-
- get: function () {
-
- console.warn( 'THREE.MapControls: .staticMoving has been deprecated. Use .enableDamping instead.' );
- return ! this.enableDamping;
-
- },
-
- set: function ( value ) {
-
- console.warn( 'THREE.MapControls: .staticMoving has been deprecated. Use .enableDamping instead.' );
- this.enableDamping = ! value;
-
- }
-
- },
-
- dynamicDampingFactor: {
-
- get: function () {
-
- console.warn( 'THREE.MapControls: .dynamicDampingFactor has been renamed. Use .dampingFactor instead.' );
- return this.dampingFactor;
-
- },
-
- set: function ( value ) {
-
- console.warn( 'THREE.MapControls: .dynamicDampingFactor has been renamed. Use .dampingFactor instead.' );
- this.dampingFactor = value;
-
- }
-
- }
-
-} );
-
-export { MapControls };
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/core/Shape.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/extras/core/Shape.d.ts
deleted file mode 100644
index aa08894e41a205e8e7c4c3dc5609d86395b6849b..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/extras/core/Shape.d.ts
+++ /dev/null
@@ -1,35 +0,0 @@
-import { Vector2 } from './../../math/Vector2';
-import { Path } from './Path';
-import { ExtrudeGeometry } from './../../geometries/ExtrudeGeometry';
-import { ShapeGeometry } from './../../geometries/ShapeGeometry';
-
-/**
- * Defines a 2d shape plane using paths.
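- *
- * A sketch of typical use (the points are illustrative):
- *
- *   const pts = [ new Vector2( 0, 0 ), new Vector2( 1, 0 ), new Vector2( 1, 1 ) ];
- *   const shape = new Shape( pts );
- *   const geometry = new ShapeGeometry( shape );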
- */
-export class Shape extends Path {
- constructor(points?: Vector2[]);
-
- holes: Path[];
-
- /**
- * @deprecated Use {@link ExtrudeGeometry ExtrudeGeometry()} instead.
- */
- extrude(options?: any): ExtrudeGeometry;
-
- /**
- * @deprecated Use {@link ShapeGeometry ShapeGeometry()} instead.
- */
- makeGeometry(options?: any): ShapeGeometry;
- getPointsHoles(divisions: number): Vector2[][];
-
- /**
- * @deprecated Use {@link Shape#extractPoints .extractPoints()} instead.
- */
- extractAllPoints(
- divisions: number
- ): {
- shape: Vector2[];
- holes: Vector2[][];
- };
- extractPoints(divisions: number): Vector2[];
-}
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/default_vertex.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/default_vertex.glsl.js
deleted file mode 100644
index 0ce0b03b6d8b3410af0cc3b64ff5aacea2530791..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/default_vertex.glsl.js
+++ /dev/null
@@ -1,5 +0,0 @@
-export default /* glsl */`
-void main() {
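-	// project the vertex: model space -> view (camera) space -> clip space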
- gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
-}
-`;
diff --git a/spaces/better57/CHATGPT/run_macOS.command b/spaces/better57/CHATGPT/run_macOS.command
deleted file mode 100644
index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000
--- a/spaces/better57/CHATGPT/run_macOS.command
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash
-
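-# Keeps ChuanhuChatbot.py up to date and running: pulls new commits when the remote has
-# changed, reinstalls the requirements, and (re)starts the server in the background.
-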
-# Get the directory the script lives in
-script_dir=$(dirname "$(readlink -f "$0")")
-
-# Change the working directory to the script's directory
-cd "$script_dir" || exit
-
-# Check whether the Git repository has updates
-git remote update
-pwd
-
-if ! git status -uno | grep 'up to date' > /dev/null; then
-    # If there are updates, stop the currently running server
- pkill -f ChuanhuChatbot.py
-
-    # Pull the latest changes
- git pull
-
-    # Install dependencies
- pip3 install -r requirements.txt
-
-    # Restart the server
- nohup python3 ChuanhuChatbot.py &
-fi
-
-# Check whether ChuanhuChatbot.py is running
-if ! pgrep -f ChuanhuChatbot.py > /dev/null; then
-    # If it is not running, start the server
- nohup python3 ChuanhuChatbot.py &
-fi
diff --git a/spaces/bigscience/promptsource/promptsource/session.py b/spaces/bigscience/promptsource/promptsource/session.py
deleted file mode 100644
index 35ea5505447ef2e3e6bb33a56270c6d5f8665faa..0000000000000000000000000000000000000000
--- a/spaces/bigscience/promptsource/promptsource/session.py
+++ /dev/null
@@ -1,90 +0,0 @@
-#
-# Code for managing session state, which is needed for multi-input forms
-# See https://github.com/streamlit/streamlit/issues/1557
-#
-# This code is taken from
-# https://gist.github.com/okld/0aba4869ba6fdc8d49132e6974e2e662
-#
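-# A sketch of typical use inside a Streamlit script (the `counter` key is illustrative):
-#
-#   state = _get_state()
-#   state(counter=0)       # initialize a value only once
-#   state.counter += 1     # the value persists across reruns
-#   state.sync()           # call at the end of the script to avoid rollbacks
-#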
-
-from streamlit.hashing import _CodeHasher
-from streamlit.report_thread import get_report_ctx
-from streamlit.server.server import Server
-
-
-class _SessionState:
- def __init__(self, session, hash_funcs):
- """Initialize SessionState instance."""
- self.__dict__["_state"] = {
- "data": {},
- "hash": None,
- "hasher": _CodeHasher(hash_funcs),
- "is_rerun": False,
- "session": session,
- }
-
- def __call__(self, **kwargs):
- """Initialize state data once."""
- for item, value in kwargs.items():
- if item not in self._state["data"]:
- self._state["data"][item] = value
-
- def __getitem__(self, item):
- """Return a saved state value, None if item is undefined."""
- return self._state["data"].get(item, None)
-
- def __getattr__(self, item):
- """Return a saved state value, None if item is undefined."""
- return self._state["data"].get(item, None)
-
- def __setitem__(self, item, value):
- """Set state value."""
- self._state["data"][item] = value
-
- def __setattr__(self, item, value):
- """Set state value."""
- self._state["data"][item] = value
-
- def clear(self):
- """Clear session state and request a rerun."""
- self._state["data"].clear()
- self._state["session"].request_rerun(None)
-
- def sync(self):
- """
- Rerun the app with all state values up to date from the beginning to
- fix rollbacks.
- """
- data_to_bytes = self._state["hasher"].to_bytes(self._state["data"], None)
-
- # Ensure to rerun only once to avoid infinite loops
- # caused by a constantly changing state value at each run.
- #
- # Example: state.value += 1
- if self._state["is_rerun"]:
- self._state["is_rerun"] = False
-
- elif self._state["hash"] is not None:
- if self._state["hash"] != data_to_bytes:
- self._state["is_rerun"] = True
- self._state["session"].request_rerun(None)
-
- self._state["hash"] = data_to_bytes
-
-
-def _get_session():
- session_id = get_report_ctx().session_id
- session_info = Server.get_current()._get_session_info(session_id)
-
- if session_info is None:
- raise RuntimeError("Couldn't get your Streamlit Session object.")
-
- return session_info.session
-
-
-def _get_state(hash_funcs=None):
- session = _get_session()
-
- if not hasattr(session, "_custom_session_state"):
- session._custom_session_state = _SessionState(session, hash_funcs)
-
- return session._custom_session_state
diff --git a/spaces/bioriAsaeru/text-to-voice/Alien Shooter Game Free Download For Windows Xp fotokalender versich Blast Your Way Through Hordes of Extraterrestrials.md b/spaces/bioriAsaeru/text-to-voice/Alien Shooter Game Free Download For Windows Xp fotokalender versich Blast Your Way Through Hordes of Extraterrestrials.md
deleted file mode 100644
index 13b82d22a989336ddf2610575f30727675c98a26..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Alien Shooter Game Free Download For Windows Xp fotokalender versich Blast Your Way Through Hordes of Extraterrestrials.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
Navigation Code Unlock 8.4 Uconnect Onyx ProductionHouse X 10.2 crack Anya10 Masha8LsmEZCOM Definition Of Fear English Full Movie Torrent solidworks power surfacing crack empresas familiares imanol belausteguigoitia pdf download bazaraa jarvis programacion lineal flujo 20 Cherish Preteen full Baba movies download utorrent marketing management kotler keller brady goodman hansen pdf download
el diario de los escritores de la libertad pdf download download film transformer 4 3gp full movie igo for android 480x800 free download schiavello palmisano fondamenti di chimica edises pdf download dong yi tagalog version full movie gma 7 Coat Hello! Ryo Sharp Wireless LAN adapter Wn8522b Driver sonic foundry sound forge 6.0 keygen download Sahara movie in tamil dubbed download download film indonesia 3 hari untuk selamanya 23
-
Eklavya - The Royal Guard full movie in hindi hd 1080p Eklavya - The Royal Guard movies hd 720p in hindi a Hate Story 2 full movie in hindi download Mentes Extraordinarias Howard Gardner Pdf Download santhosh subramaniam br rip 1080p movie torrents Pthc Delicious Cake Rar hands on algebra if8568 factoring answer key.rar Krrish 3 tamil hd movie download Adobe acrobat xi pro crack amtlib.dll money purse telugu book free download
- aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Amibcp 4.53.md b/spaces/bioriAsaeru/text-to-voice/Amibcp 4.53.md
deleted file mode 100644
index da4ccf3471219a3f7f402776296c387339a43a7d..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Amibcp 4.53.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-
AMIBCP 4.53: A Guide to AMI BIOS Editing
-
-
If you are looking for a way to modify your AMI Aptio BIOS settings, you may have come across a tool called AMIBCP 4.53. This is a powerful program that allows you to access and change various BIOS options, such as boot order, fan speed, overclocking, and more. But what is AMIBCP 4.53 and how do you use it? In this article, we will explain everything you need to know about AMIBCP 4.53 and how to use it safely and effectively.
-
-
What is AMIBCP 4.53?
-
-
AMIBCP stands for AMI BIOS Configuration Program. It is a software utility that can be used to edit the settings of AMI Aptio BIOS, which is a type of firmware that controls the basic functions of your computer. AMI Aptio BIOS is used by many manufacturers, such as Asus, MSI, Gigabyte, and more.
AMIBCP 4.53 is one of the latest versions of the program, released in 2020. It supports AMI Aptio 4.x and 5.x BIOSes, which are based on the UEFI (Unified Extensible Firmware Interface) standard. UEFI is a modern replacement for the legacy BIOS (Basic Input/Output System) that offers more features and security.
-
-
With AMIBCP 4.53, you can view and modify various BIOS settings, such as:
-
-
-
Boot options: You can change the boot order, enable or disable boot devices, and set boot passwords.
-
Fan control: You can adjust the fan speed and temperature thresholds for your CPU and system fans.
-
Overclocking: You can tweak the CPU frequency, voltage, multiplier, and memory timings for better performance.
-
Power management: You can enable or disable power-saving features, such as sleep mode, hibernation, and wake-on-LAN.
-
Security: You can enable or disable secure boot, TPM (Trusted Platform Module), and other security features.
-
Advanced: You can access hidden or locked settings that are not normally available in the BIOS menu.
-
-
-
AMIBCP 4.53 can also be used to create custom BIOS images that can be flashed onto your motherboard using a USB flash drive or other methods. This can be useful if you want to update your BIOS with new features or fixes, or if you want to restore your BIOS to its original state after a failed update or modification.
-
-
How to use AMIBCP 4.53?
-
-
Before you use AMIBCP 4.53, you should be aware of the risks involved. Editing your BIOS settings can potentially damage your hardware or make your system unstable or unbootable if done incorrectly. Therefore, you should only use AMIBCP 4.53 if you know what you are doing and have a backup of your original BIOS image in case something goes wrong. You should also follow these steps:
-
-
-
Download AMIBCP 4.53 from a reliable source. You can find it on various websites or forums that offer BIOS modding tools and guides.
-
Extract the ZIP file to a folder on your computer. You should see an executable file called AMIBCP.exe and some other files.
-
Run AMIBCP.exe as administrator. You may need to disable your antivirus or firewall software temporarily if they block the program.
-
Open your current BIOS image file by clicking on the File menu and selecting Open Image File. You can find your BIOS image file on your motherboard manufacturer's website or by using a tool like CPU-Z or HWiNFO to identify your BIOS version and model.
-
Browse through the tabs and submenus to view and edit the BIOS settings. You can use the Search function to find specific settings by name or value.
-
Save your modified BIOS image file by clicking on the File menu and selecting Save Image File As. You can choose a different name or location for your new BIOS image file.
-
Flash your modified BIOS image file onto your motherboard using a USB flash drive or other methods. You can follow the instructions provided by your motherboard manufacturer or use a tool like AFUDOS or AFUWIN to flash your BIOS.
-
Reboot your system and enter the BIOS menu by pressing the appropriate key during startup (usually Del, F2, F10, or Esc). Check if your changes have been applied successfully and if your system is working properly.
-
-
-
If you encounter any problems or errors after flashing your modified BIOS image file, you should try to restore your original BIOS image file using the same method as above. If that does not work, you may need to use a recovery method such as USB flashback, dual BIOS switch, or SPI programmer to restore your BIOS.
-
-
Conclusion
-
-
AMIBCP 4.53 is a useful tool for advanced users who want to customize their AMI Aptio BIOS settings. However, it is also a risky tool that can cause serious problems if used incorrectly. Therefore, you should only use AMIBCP 4.53 if you are confident in your skills and have a backup of your original BIOS image file. You should also follow the steps above carefully and do some research before making any changes to your BIOS settings.
-
-
-
We hope this article has helped you understand what AMIBCP 4.53 is and how to use it safely and effectively. If you have any questions or feedback, please feel free to leave a comment below.
-
Why use AMIBCP 4.53?
-
-
There are many reasons why you may want to use AMIBCP 4.53 to edit your AMI Aptio BIOS settings. Some of the most common ones are:
-
-
-
You want to improve your system performance by overclocking your CPU or memory.
-
You want to optimize your system cooling by adjusting your fan speed and temperature thresholds.
-
You want to enable or disable certain features that are not available in the BIOS menu, such as secure boot, TPM, or hidden options.
-
You want to customize your BIOS appearance by changing the logo, colors, fonts, or layout.
-
You want to update your BIOS with new features or fixes that are not provided by your motherboard manufacturer.
-
-
-
Using AMIBCP 4.53 can give you more control and flexibility over your system settings and performance. However, you should also be careful not to change any settings that you are not familiar with or that may cause compatibility issues with your hardware or software. You should always do some research before making any changes and backup your original BIOS image file in case you need to restore it.
-
-
How to get AMIBCP 4.53?
-
-
If you want to use AMIBCP 4.53, you will need to download it from a reliable source. You can find it on various websites or forums that offer BIOS modding tools and guides. However, you should also be aware of the potential risks of downloading files from unknown or untrusted sources. You may encounter malware, viruses, or corrupted files that can harm your computer or compromise your security.
-
-
Therefore, you should always scan any files that you download with a reputable antivirus or malware removal software before opening them. You should also check the file size, extension, and checksum to make sure that they match the original source. You should also read the comments and reviews from other users who have downloaded and used the same file to see if they encountered any problems or errors.
-
-
Alternatively, you can also get AMIBCP 4.53 from your motherboard manufacturer's website or support center. They may provide the latest version of AMIBCP 4.53 along with the official BIOS updates for your specific model. This way, you can be sure that you are getting a safe and compatible file that has been tested and verified by the manufacturer.
-
How to update AMIBCP 4.53?
-
-
AMIBCP 4.53 is not a static program that remains the same forever. It is constantly updated and improved by the developers to fix bugs, add new features, and support new BIOS versions and models. Therefore, you may want to update your AMIBCP 4.53 to the latest version available to enjoy the best performance and compatibility.
-
-
There are two ways to update your AMIBCP 4.53:
-
-
-
Download the latest version of AMIBCP 4.53 from the official website or support center of AMI (American Megatrends Inc.), the company that develops and distributes AMIBCP 4.53. You can find the download link on their website or contact their customer service for assistance.
-
Download the latest version of AMIBCP 4.53 from a reliable source that offers BIOS modding tools and guides, such as websites or forums that specialize in this topic. You can search for AMIBCP 4.53 on Google or other search engines and check the reviews and comments from other users who have downloaded and used the same file.
-
-
-
Once you have downloaded the latest version of AMIBCP 4.53, you can simply extract the ZIP file to a folder on your computer and run the executable file as administrator. You can then open your current or new BIOS image file and edit it as usual.
-
-
However, you should also be careful when updating your AMIBCP 4.53. You should always backup your original BIOS image file before making any changes and check the compatibility of the new version of AMIBCP 4.53 with your BIOS version and model. You should also scan any files that you download with an antivirus or malware removal software before opening them.
-
-
How to get help with AMIBCP 4.53?
-
-
If you have any questions or problems with using AMIBCP 4.53, you can get help from various sources:
-
-
-
The official website or support center of AMI (American Megatrends Inc.), the company that develops and distributes AMIBCP 4.53. You can find their contact information on their website or send them an email or a message through their online form.
-
The official user manual or documentation of AMIBCP 4.53, which explains the features and functions of the program in detail. You can find it on their website or in the folder where you extracted the ZIP file of AMIBCP 4.53.
-
The online community of BIOS modding enthusiasts, such as websites or forums that offer BIOS modding tools and guides. You can join these platforms and ask for help from other users who have experience and knowledge with using AMIBCP 4.53.
-
-
-
However, you should also be respectful and polite when asking for help with AMIBCP 4.53. You should provide as much information as possible about your issue, such as your BIOS version and model, your system specifications, your AMIBCP 4.53 version, and what you have tried so far to solve it. You should also follow the rules and guidelines of the platform where you are asking for help and thank those who help you.
-
Conclusion
-
-
AMIBCP 4.53 is a powerful tool for editing your AMI Aptio BIOS settings. It can help you improve your system performance, optimize your system cooling, enable or disable certain features, customize your BIOS appearance, and update your BIOS with new features or fixes. However, it is also a risky tool that can cause serious problems if used incorrectly. Therefore, you should only use AMIBCP 4.53 if you are confident in your skills and have a backup of your original BIOS image file. You should also follow the steps above carefully and do some research before making any changes to your BIOS settings.
-
-
We hope this article has helped you understand what AMIBCP 4.53 is and how to use it safely and effectively. If you have any questions or feedback, please feel free to leave a comment below.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Ana Frank Dienorastis Knyga Pdf 49 BEST.md b/spaces/bioriAsaeru/text-to-voice/Ana Frank Dienorastis Knyga Pdf 49 BEST.md
deleted file mode 100644
index 639f5fa79ee104a9c41c6bf149f564454d1d9798..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Ana Frank Dienorastis Knyga Pdf 49 BEST.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Non-Standard English (2012); Anna-Brita Stenström's article From Slang to ... The books usually feature an airy, irreverent tone and frank ... most chick lit fiction usually presents work as a background (Well 2006: 49). ... 2010. www.flf.vu.lt/assets/files/istekliai/ lietuviu_zargono_baze.pdf. ... paplūdymije visą laiką skait knygą. 1fdad05405
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Banjaran Hindi Movie Mp3 Songs Free UPDATED Download.md b/spaces/bioriAsaeru/text-to-voice/Banjaran Hindi Movie Mp3 Songs Free UPDATED Download.md
deleted file mode 100644
index b70835b2ea128be48f167a49e9ef2604b7adc20e..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Banjaran Hindi Movie Mp3 Songs Free UPDATED Download.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
download All Old Jindi Movies Hd Qwalitey unlimited Movies and videos Download Here.All Old Jindi Movies Hd Qwalitey Hd,3gp. mp4 320p and More Videos You Can Download Easyly. tamilrockers and movierulz, tamilgun, filmywap, and pagalworld videos and Movies download.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Dead by Daylight v1.1.4 Apk The Best Mobile Game for Horror Fans.md b/spaces/bioriAsaeru/text-to-voice/Dead by Daylight v1.1.4 Apk The Best Mobile Game for Horror Fans.md
deleted file mode 100644
index ec34cd64e2f4d009af48703b40385d138f51dba4..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Dead by Daylight v1.1.4 Apk The Best Mobile Game for Horror Fans.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Jab Se Tumko Dekha Hai Mere Dil Ki Dhadkan Badti Jati Hai 14 The Hit Song by Nadeem-Shravan from Damini.md b/spaces/bioriAsaeru/text-to-voice/Jab Se Tumko Dekha Hai Mere Dil Ki Dhadkan Badti Jati Hai 14 The Hit Song by Nadeem-Shravan from Damini.md
deleted file mode 100644
index cfc1bbbf32281bbea11edda7de0fe7e5dcae6b9c..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Jab Se Tumko Dekha Hai Mere Dil Ki Dhadkan Badti Jati Hai 14 The Hit Song by Nadeem-Shravan from Damini.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Description:- Jab se tumko dekha hai mere dil ki dhadkan aur mix by dj manish | Jab se tumko dekha hai mere dil ki dhadkan aur mix by dj manish Mp3 Download | Remix Mp3 Song | New Dj Song | EDM Remix | Dance Remix Song | Gane | Dholki Remix Mp3 Song FreeMor Links: | Biharisong.in| DjGks| Dj4x| DjMp3maza| DjMaza| Alldjsmusic| Pagalworld| DjPrayagmusic| DjBhojpurisong| DjPrayagJab se tumko dekha hai mere dil ki dhadkan aur mix by dj manishJab se tumko dekha hai mere dil ki dhadkan aur mix by dj manishTitle: Jab se tumko dekha hai mere dil ki dhadkan aur mix by dj manish Dj Mp3 Download - DjKings.in
-
Jab Se Tumko Dekha Hai Mere Dil Ki Dhadkan Badti Jati Hai 14
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/breathingcyborg/reviews-actionable-insights/aspects_extraction.py b/spaces/breathingcyborg/reviews-actionable-insights/aspects_extraction.py
deleted file mode 100644
index fe31a305bb9d1c45b6a0ed7379a8c6f65ba8739c..0000000000000000000000000000000000000000
--- a/spaces/breathingcyborg/reviews-actionable-insights/aspects_extraction.py
+++ /dev/null
@@ -1,261 +0,0 @@
-import pandas as pd
-import numpy as np
-
-def has_vectors(doc):
- return np.all([token.has_vector for token in doc])
-
-def extract_doc_aspects(doc):
-
- prod_pronouns = ['it','this','they','these']
-
- rule1_pairs = []
- rule2_pairs = []
- rule3_pairs = []
- rule4_pairs = []
- rule5_pairs = []
- rule6_pairs = []
- rule7_pairs = []
-
- for token in doc:
- if token.text == 'product':
- continue
-
-        ## FIRST RULE OF DEPENDENCY PARSE -
- ## M - Sentiment modifier || A - Aspect
- ## RULE = M is child of A with a relationship of amod
- A = "999999"
- M = "999999"
- if token.dep_ == "amod" and not token.is_stop:
- M = token.text
- A = token.head.text
-
- # add adverbial modifier of adjective (e.g. 'most comfortable headphones')
- M_children = token.children
- for child_m in M_children:
- if(child_m.dep_ == "advmod"):
- M_hash = child_m.text
- M = M_hash + " " + M
- break
-
- # negation in adjective, the "no" keyword is a 'det' of the noun (e.g. no interesting characters)
- A_children = token.head.children
- for child_a in A_children:
- if(child_a.dep_ == "det" and child_a.text == 'no'):
- neg_prefix = 'not'
- M = neg_prefix + " " + M
- break
-
- if(A != "999999" and M != "999999"):
- if A in prod_pronouns :
- A = "product"
- dict1 = {"noun" : A, "adj" : M, "rule" : 1}
- rule1_pairs.append(dict1)
-
-
-        ## SECOND RULE OF DEPENDENCY PARSE -
- # # M - Sentiment modifier || A - Aspect
- # Direct Object - A is a child of something with relationship of nsubj, while
- # M is a child of the same something with relationship of dobj
- # Assumption - A verb will have only one NSUBJ and DOBJ
- children = token.children
- A = "999999"
- M = "999999"
- add_neg_pfx = False
- for child in children :
- if(child.dep_ == "nsubj" and not child.is_stop):
- A = child.text
-
- if((child.dep_ == "dobj" and child.pos_ == "ADJ") and not child.is_stop):
- M = child.text
-
- if(child.dep_ == "neg"):
- neg_prefix = child.text
- add_neg_pfx = True
-
- if (add_neg_pfx and M != "999999"):
- M = neg_prefix + " " + M
-
- if(A != "999999" and M != "999999"):
- if A in prod_pronouns :
- A = "product"
- dict2 = {"noun" : A, "adj" : M, "rule" : 2}
- rule2_pairs.append(dict2)
-
-
-        ## THIRD RULE OF DEPENDENCY PARSE -
- ## M - Sentiment modifier || A - Aspect
- ## Adjectival Complement - A is a child of something with relationship of nsubj, while
- ## M is a child of the same something with relationship of acomp
- ## Assumption - A verb will have only one NSUBJ and DOBJ
- ## "The sound of the speakers would be better. The sound of the speakers could be better" - handled using AUX dependency
-
- children = token.children
- A = "999999"
- M = "999999"
- add_neg_pfx = False
- for child in children :
- if(child.dep_ == "nsubj" and not child.is_stop):
- A = child.text
-
- if(child.dep_ == "acomp" and not child.is_stop):
- M = child.text
-
- # example - 'this could have been better' -> (this, not better)
- if(child.dep_ == "aux" and child.tag_ == "MD"):
- neg_prefix = "not"
- add_neg_pfx = True
-
- if(child.dep_ == "neg"):
- neg_prefix = child.text
- add_neg_pfx = True
-
- if (add_neg_pfx and M != "999999"):
- M = neg_prefix + " " + M
-
- if(A != "999999" and M != "999999"):
- if A in prod_pronouns :
- A = "product"
- dict3 = {"noun" : A, "adj" : M, "rule" : 3}
- rule3_pairs.append(dict3)
-
-
-        ## FOURTH RULE OF DEPENDENCY PARSE -
- ## M - Sentiment modifier || A - Aspect
-
- #Adverbial modifier to a passive verb - A is a child of something with relationship of nsubjpass, while
- # M is a child of the same something with relationship of advmod
-
- #Assumption - A verb will have only one NSUBJ and DOBJ
-
- children = token.children
- A = "999999"
- M = "999999"
- add_neg_pfx = False
- for child in children :
- if((child.dep_ == "nsubjpass" or child.dep_ == "nsubj") and not child.is_stop):
- A = child.text
-
- if(child.dep_ == "advmod" and not child.is_stop):
- M = child.text
- M_children = child.children
- for child_m in M_children:
- if(child_m.dep_ == "advmod"):
- M_hash = child_m.text
- M = M_hash + " " + child.text
- break
-
- if(child.dep_ == "neg"):
- neg_prefix = child.text
- add_neg_pfx = True
-
- if (add_neg_pfx and M != "999999"):
- M = neg_prefix + " " + M
-
- if(A != "999999" and M != "999999"):
- if A in prod_pronouns :
- A = "product"
- dict4 = {"noun" : A, "adj" : M, "rule" : 4}
- rule4_pairs.append(dict4)
-
-        ## FIFTH RULE OF DEPENDENCY PARSE -
- ## M - Sentiment modifier || A - Aspect
-
- #Complement of a copular verb - A is a child of M with relationship of nsubj, while
- # M has a child with relationship of cop
-
- #Assumption - A verb will have only one NSUBJ and DOBJ
-
- children = token.children
- A = "999999"
- buf_var = "999999"
- for child in children :
- if(child.dep_ == "nsubj" and not child.is_stop):
- A = child.text
-
- if(child.dep_ == "cop" and not child.is_stop):
- buf_var = child.text
-
- if(A != "999999" and buf_var != "999999"):
- if A in prod_pronouns :
- A = "product"
- dict5 = {"noun" : A, "adj" : token.text, "rule" : 5}
- rule5_pairs.append(dict5)
-
-
-        ## SIXTH RULE OF DEPENDENCY PARSE -
- ## M - Sentiment modifier || A - Aspect
- ## Example - "It ok", "ok" is INTJ (interjections like bravo, great etc)
-
- children = token.children
- A = "999999"
- M = "999999"
- if(token.pos_ == "INTJ" and not token.is_stop):
- for child in children :
- if(child.dep_ == "nsubj" and not child.is_stop):
- A = child.text
- M = token.text
-
- if(A != "999999" and M != "999999"):
- if A in prod_pronouns :
- A = "product"
- dict6 = {"noun" : A, "adj" : M, "rule" : 6}
- rule6_pairs.append(dict6)
-
-        ## SEVENTH RULE OF DEPENDENCY PARSE -
- ## M - Sentiment modifier || A - Aspect
- ## ATTR - link between a verb like 'be/seem/appear' and its complement
- ## Example: 'this is garbage' -> (this, garbage)
-
- children = token.children
- A = "999999"
- M = "999999"
- add_neg_pfx = False
- for child in children :
- if(child.dep_ == "nsubj" and not child.is_stop):
- A = child.text
-
- if((child.dep_ == "attr") and not child.is_stop):
- M = child.text
-
- if(child.dep_ == "neg"):
- neg_prefix = child.text
- add_neg_pfx = True
-
- if (add_neg_pfx and M != "999999"):
- M = neg_prefix + " " + M
-
- if(A != "999999" and M != "999999"):
- if A in prod_pronouns :
- A = "product"
- dict7 = {"noun" : A, "adj" : M, "rule" : 7}
- rule7_pairs.append(dict7)
-
- aspects = []
-
- aspects = rule1_pairs + rule2_pairs + rule3_pairs +rule4_pairs +rule5_pairs + rule6_pairs + rule7_pairs
-
- return aspects
-
-def extract_aspects(nlp, reviews):
- aspects = []
-
- data = ([
- (x[1], x[0]) for x in reviews['text_cleaned'].reset_index().to_numpy()
- ])
-
- for doc, review_id in nlp.pipe(data, as_tuples=True):
- doc_aspects = extract_doc_aspects(doc)
- doc_aspects = [
- [review_id, aspect['noun'], aspect['adj'], aspect['rule']]
- for aspect in doc_aspects if not aspect['noun'].lower().startswith('product')
- ]
-        # filter out aspects with out-of-vocabulary nouns
- doc_aspects = [
- doc_aspect for doc_aspect in doc_aspects
- if has_vectors(nlp(doc_aspect[1]))
- ]
- aspects.extend(doc_aspects)
-
- aspects = pd.DataFrame(aspects, columns=['review_id', 'aspect', 'opinion', 'rule'])
-
- return aspects
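-
-
-# A sketch of typical use (the model name is an assumption; any spaCy pipeline with word
-# vectors works, since extract_aspects() filters nouns through has_vectors()):
-#
-#   import spacy
-#   nlp = spacy.load("en_core_web_md")
-#   aspects_df = extract_aspects(nlp, reviews)   # `reviews` needs a 'text_cleaned' column
-#   print(aspects_df.head())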
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/evaluation/coco_evaluation.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/evaluation/coco_evaluation.py
deleted file mode 100644
index fe8142cda29613ce1cf78523e422bf598128f590..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/evaluation/coco_evaluation.py
+++ /dev/null
@@ -1,722 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import contextlib
-import copy
-import io
-import itertools
-import json
-import logging
-import numpy as np
-import os
-import pickle
-from collections import OrderedDict
-import pycocotools.mask as mask_util
-import torch
-from pycocotools.coco import COCO
-from pycocotools.cocoeval import COCOeval
-from tabulate import tabulate
-
-import detectron2.utils.comm as comm
-from detectron2.config import CfgNode
-from detectron2.data import MetadataCatalog
-from detectron2.data.datasets.coco import convert_to_coco_json
-from detectron2.structures import Boxes, BoxMode, pairwise_iou
-from detectron2.utils.file_io import PathManager
-from detectron2.utils.logger import create_small_table
-
-from .evaluator import DatasetEvaluator
-
-try:
- from detectron2.evaluation.fast_eval_api import COCOeval_opt
-except ImportError:
- COCOeval_opt = COCOeval
-
-
-class COCOEvaluator(DatasetEvaluator):
- """
- Evaluate AR for object proposals, AP for instance detection/segmentation, AP
- for keypoint detection outputs using COCO's metrics.
- See http://cocodataset.org/#detection-eval and
- http://cocodataset.org/#keypoints-eval to understand its metrics.
- The metrics range from 0 to 100 (instead of 0 to 1), where a -1 or NaN means
- the metric cannot be computed (e.g. due to no predictions made).
-
- In addition to COCO, this evaluator is able to support any bounding box detection,
- instance segmentation, or keypoint detection dataset.
- """
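-
-    # A sketch of typical use (dataset name and paths are illustrative):
-    #
-    #   evaluator = COCOEvaluator("coco_2017_val", output_dir="./output")
-    #   val_loader = build_detection_test_loader(cfg, "coco_2017_val")
-    #   print(inference_on_dataset(model, val_loader, evaluator))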
-
- def __init__(
- self,
- dataset_name,
- tasks=None,
- distributed=True,
- output_dir=None,
- *,
- max_dets_per_image=None,
- use_fast_impl=True,
- kpt_oks_sigmas=(),
- allow_cached_coco=True,
- ):
- """
- Args:
- dataset_name (str): name of the dataset to be evaluated.
- It must have either the following corresponding metadata:
-
- "json_file": the path to the COCO format annotation
-
- Or it must be in detectron2's standard dataset format
- so it can be converted to COCO format automatically.
- tasks (tuple[str]): tasks that can be evaluated under the given
- configuration. A task is one of "bbox", "segm", "keypoints".
- By default, will infer this automatically from predictions.
- distributed (True): if True, will collect results from all ranks and run evaluation
- in the main process.
- Otherwise, will only evaluate the results in the current process.
- output_dir (str): optional, an output directory to dump all
- results predicted on the dataset. The dump contains two files:
-
- 1. "instances_predictions.pth" a file that can be loaded with `torch.load` and
- contains all the results in the format they are produced by the model.
- 2. "coco_instances_results.json" a json file in COCO's result format.
- max_dets_per_image (int): limit on the maximum number of detections per image.
- By default in COCO, this limit is to 100, but this can be customized
- to be greater, as is needed in evaluation metrics AP fixed and AP pool
- (see https://arxiv.org/pdf/2102.01066.pdf)
- This doesn't affect keypoint evaluation.
- use_fast_impl (bool): use a fast but **unofficial** implementation to compute AP.
- Although the results should be very close to the official implementation in COCO
- API, it is still recommended to compute results with the official API for use in
- papers. The faster implementation also uses more RAM.
- kpt_oks_sigmas (list[float]): The sigmas used to calculate keypoint OKS.
- See http://cocodataset.org/#keypoints-eval
- When empty, it will use the defaults in COCO.
- Otherwise it should be the same length as ROI_KEYPOINT_HEAD.NUM_KEYPOINTS.
- allow_cached_coco (bool): Whether to use cached coco json from previous validation
- runs. You should set this to False if you need to use different validation data.
- Defaults to True.
- """
- self._logger = logging.getLogger(__name__)
- self._distributed = distributed
- self._output_dir = output_dir
-
- if use_fast_impl and (COCOeval_opt is COCOeval):
- self._logger.info("Fast COCO eval is not built. Falling back to official COCO eval.")
- use_fast_impl = False
- self._use_fast_impl = use_fast_impl
-
- # COCOeval requires the limit on the number of detections per image (maxDets) to be a list
- # with at least 3 elements. The default maxDets in COCOeval is [1, 10, 100], in which the
- # 3rd element (100) is used as the limit on the number of detections per image when
- # evaluating AP. COCOEvaluator expects an integer for max_dets_per_image, so for COCOeval,
- # we reformat max_dets_per_image into [1, 10, max_dets_per_image], based on the defaults.
- if max_dets_per_image is None:
- max_dets_per_image = [1, 10, 100]
- else:
- max_dets_per_image = [1, 10, max_dets_per_image]
- self._max_dets_per_image = max_dets_per_image
-
- if tasks is not None and isinstance(tasks, CfgNode):
- kpt_oks_sigmas = (
- tasks.TEST.KEYPOINT_OKS_SIGMAS if not kpt_oks_sigmas else kpt_oks_sigmas
- )
- self._logger.warn(
- "COCO Evaluator instantiated using config, this is deprecated behavior."
- " Please pass in explicit arguments instead."
- )
-            self._tasks = None  # Inferring it from predictions should be better
- else:
- self._tasks = tasks
-
- self._cpu_device = torch.device("cpu")
-
- self._metadata = MetadataCatalog.get(dataset_name)
- if not hasattr(self._metadata, "json_file"):
- if output_dir is None:
- raise ValueError(
- "output_dir must be provided to COCOEvaluator "
- "for datasets not in COCO format."
- )
- self._logger.info(f"Trying to convert '{dataset_name}' to COCO format ...")
-
- cache_path = os.path.join(output_dir, f"{dataset_name}_coco_format.json")
- self._metadata.json_file = cache_path
- convert_to_coco_json(dataset_name, cache_path, allow_cached=allow_cached_coco)
-
- json_file = PathManager.get_local_path(self._metadata.json_file)
- with contextlib.redirect_stdout(io.StringIO()):
- self._coco_api = COCO(json_file)
-
- # Test set json files do not contain annotations (evaluation must be
- # performed using the COCO evaluation server).
- self._do_evaluation = "annotations" in self._coco_api.dataset
- if self._do_evaluation:
- self._kpt_oks_sigmas = kpt_oks_sigmas
-
- def reset(self):
- self._predictions = []
-
- def process(self, inputs, outputs):
- """
- Args:
- inputs: the inputs to a COCO model (e.g., GeneralizedRCNN).
- It is a list of dict. Each dict corresponds to an image and
- contains keys like "height", "width", "file_name", "image_id".
- outputs: the outputs of a COCO model. It is a list of dicts with key
- "instances" that contains :class:`Instances`.
- """
- for input, output in zip(inputs, outputs):
- prediction = {"image_id": input["image_id"]}
-
- if "instances" in output:
- instances = output["instances"].to(self._cpu_device)
- prediction["instances"] = instances_to_coco_json(instances, input["image_id"])
- if "proposals" in output:
- prediction["proposals"] = output["proposals"].to(self._cpu_device)
- if len(prediction) > 1:
- self._predictions.append(prediction)
-
- def evaluate(self, img_ids=None):
- """
- Args:
- img_ids: a list of image IDs to evaluate on. Default to None for the whole dataset
- """
- if self._distributed:
- comm.synchronize()
- predictions = comm.gather(self._predictions, dst=0)
- predictions = list(itertools.chain(*predictions))
-
- if not comm.is_main_process():
- return {}
- else:
- predictions = self._predictions
-
- if len(predictions) == 0:
- self._logger.warning("[COCOEvaluator] Did not receive valid predictions.")
- return {}
-
- if self._output_dir:
- PathManager.mkdirs(self._output_dir)
- file_path = os.path.join(self._output_dir, "instances_predictions.pth")
- with PathManager.open(file_path, "wb") as f:
- torch.save(predictions, f)
-
- self._results = OrderedDict()
- if "proposals" in predictions[0]:
- self._eval_box_proposals(predictions)
- if "instances" in predictions[0]:
- self._eval_predictions(predictions, img_ids=img_ids)
- # Copy so the caller can do whatever with results
- return copy.deepcopy(self._results)
-
- def _tasks_from_predictions(self, predictions):
- """
- Get COCO API "tasks" (i.e. iou_type) from COCO-format predictions.
- """
- tasks = {"bbox"}
- for pred in predictions:
- if "segmentation" in pred:
- tasks.add("segm")
- if "keypoints" in pred:
- tasks.add("keypoints")
- return sorted(tasks)
-
- def _eval_predictions(self, predictions, img_ids=None):
- """
- Evaluate predictions. Fill self._results with the metrics of the tasks.
- """
- self._logger.info("Preparing results for COCO format ...")
- coco_results = list(itertools.chain(*[x["instances"] for x in predictions]))
- tasks = self._tasks or self._tasks_from_predictions(coco_results)
-
- # unmap the category ids for COCO
- if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"):
- dataset_id_to_contiguous_id = self._metadata.thing_dataset_id_to_contiguous_id
- all_contiguous_ids = list(dataset_id_to_contiguous_id.values())
- num_classes = len(all_contiguous_ids)
- assert min(all_contiguous_ids) == 0 and max(all_contiguous_ids) == num_classes - 1
-
- reverse_id_mapping = {v: k for k, v in dataset_id_to_contiguous_id.items()}
- for result in coco_results:
- category_id = result["category_id"]
- assert category_id < num_classes, (
- f"A prediction has class={category_id}, "
- f"but the dataset only has {num_classes} classes and "
- f"predicted class id should be in [0, {num_classes - 1}]."
- )
- result["category_id"] = reverse_id_mapping[category_id]
-
- if self._output_dir:
- file_path = os.path.join(self._output_dir, "coco_instances_results.json")
- self._logger.info("Saving results to {}".format(file_path))
- with PathManager.open(file_path, "w") as f:
- f.write(json.dumps(coco_results))
- f.flush()
-
- if not self._do_evaluation:
- self._logger.info("Annotations are not available for evaluation.")
- return
-
- self._logger.info(
- "Evaluating predictions with {} COCO API...".format(
- "unofficial" if self._use_fast_impl else "official"
- )
- )
- for task in sorted(tasks):
- assert task in {"bbox", "segm", "keypoints"}, f"Got unknown task: {task}!"
- coco_eval = (
- _evaluate_predictions_on_coco(
- self._coco_api,
- coco_results,
- task,
- kpt_oks_sigmas=self._kpt_oks_sigmas,
- cocoeval_fn=COCOeval_opt if self._use_fast_impl else COCOeval,
- img_ids=img_ids,
- max_dets_per_image=self._max_dets_per_image,
- )
- if len(coco_results) > 0
- else None # cocoapi does not handle empty results very well
- )
-
- res = self._derive_coco_results(
- coco_eval, task, class_names=self._metadata.get("thing_classes")
- )
- self._results[task] = res
-
- def _eval_box_proposals(self, predictions):
- """
- Evaluate the box proposals in predictions.
- Fill self._results with the metrics for "box_proposals" task.
- """
- if self._output_dir:
- # Saving generated box proposals to file.
- # Predicted box_proposals are in XYXY_ABS mode.
- bbox_mode = BoxMode.XYXY_ABS.value
- ids, boxes, objectness_logits = [], [], []
- for prediction in predictions:
- ids.append(prediction["image_id"])
- boxes.append(prediction["proposals"].proposal_boxes.tensor.numpy())
- objectness_logits.append(prediction["proposals"].objectness_logits.numpy())
-
- proposal_data = {
- "boxes": boxes,
- "objectness_logits": objectness_logits,
- "ids": ids,
- "bbox_mode": bbox_mode,
- }
- with PathManager.open(os.path.join(self._output_dir, "box_proposals.pkl"), "wb") as f:
- pickle.dump(proposal_data, f)
-
- if not self._do_evaluation:
- self._logger.info("Annotations are not available for evaluation.")
- return
-
- self._logger.info("Evaluating bbox proposals ...")
- res = {}
- areas = {"all": "", "small": "s", "medium": "m", "large": "l"}
- for limit in [100, 1000]:
- for area, suffix in areas.items():
- stats = _evaluate_box_proposals(predictions, self._coco_api, area=area, limit=limit)
- key = "AR{}@{:d}".format(suffix, limit)
- res[key] = float(stats["ar"].item() * 100)
- self._logger.info("Proposal metrics: \n" + create_small_table(res))
- self._results["box_proposals"] = res
-
- def _derive_coco_results(self, coco_eval, iou_type, class_names=None):
- """
- Derive the desired score numbers from summarized COCOeval.
-
- Args:
- coco_eval (None or COCOEval): None represents no predictions from model.
- iou_type (str):
-            class_names (None or list[str]): if provided, will use it to compute
- per-category AP.
-
- Returns:
- a dict of {metric name: score}
- """
-
- metrics = {
- "bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl"],
- "segm": ["AP", "AP50", "AP75", "APs", "APm", "APl"],
- "keypoints": ["AP", "AP50", "AP75", "APm", "APl"],
- }[iou_type]
-
- if coco_eval is None:
- self._logger.warn("No predictions from the model!")
- return {metric: float("nan") for metric in metrics}
-
- # the standard metrics
- results = {
- metric: float(coco_eval.stats[idx] * 100 if coco_eval.stats[idx] >= 0 else "nan")
- for idx, metric in enumerate(metrics)
- }
- self._logger.info(
- "Evaluation results for {}: \n".format(iou_type) + create_small_table(results)
- )
- if not np.isfinite(sum(results.values())):
- self._logger.info("Some metrics cannot be computed and is shown as NaN.")
-
- if class_names is None or len(class_names) <= 1:
- return results
- # Compute per-category AP
- # from https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L222-L252 # noqa
- precisions = coco_eval.eval["precision"]
- # precision has dims (iou, recall, cls, area range, max dets)
- assert len(class_names) == precisions.shape[2]
-
- results_per_category = []
- for idx, name in enumerate(class_names):
- # area range index 0: all area ranges
- # max dets index -1: typically 100 per image
- precision = precisions[:, :, idx, 0, -1]
- precision = precision[precision > -1]
- ap = np.mean(precision) if precision.size else float("nan")
- results_per_category.append(("{}".format(name), float(ap * 100)))
-
- # tabulate it
- N_COLS = min(6, len(results_per_category) * 2)
- results_flatten = list(itertools.chain(*results_per_category))
- results_2d = itertools.zip_longest(*[results_flatten[i::N_COLS] for i in range(N_COLS)])
- table = tabulate(
- results_2d,
- tablefmt="pipe",
- floatfmt=".3f",
- headers=["category", "AP"] * (N_COLS // 2),
- numalign="left",
- )
- self._logger.info("Per-category {} AP: \n".format(iou_type) + table)
-
- results.update({"AP-" + name: ap for name, ap in results_per_category})
- return results
-
-
-def instances_to_coco_json(instances, img_id):
- """
- Dump an "Instances" object to a COCO-format json that's used for evaluation.
-
- Args:
- instances (Instances):
- img_id (int): the image id
-
- Returns:
- list[dict]: list of json annotations in COCO format.
- """
- num_instance = len(instances)
- if num_instance == 0:
- return []
-
- boxes = instances.pred_boxes.tensor.numpy()
- boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
- boxes = boxes.tolist()
- scores = instances.scores.tolist()
- classes = instances.pred_classes.tolist()
-
- has_mask = instances.has("pred_masks")
- if has_mask:
-        # use RLE to encode the masks, because they are large and take too much memory
- # since this evaluator stores outputs of the entire dataset
- rles = [
- mask_util.encode(np.array(mask[:, :, None], order="F", dtype="uint8"))[0]
- for mask in instances.pred_masks
- ]
- for rle in rles:
- # "counts" is an array encoded by mask_util as a byte-stream. Python3's
- # json writer which always produces strings cannot serialize a bytestream
- # unless you decode it. Thankfully, utf-8 works out (which is also what
- # the pycocotools/_mask.pyx does).
- rle["counts"] = rle["counts"].decode("utf-8")
-
- has_keypoints = instances.has("pred_keypoints")
- if has_keypoints:
- keypoints = instances.pred_keypoints
-
- results = []
- for k in range(num_instance):
- result = {
- "image_id": img_id,
- "category_id": classes[k],
- "bbox": boxes[k],
- "score": scores[k],
- }
- if has_mask:
- result["segmentation"] = rles[k]
- if has_keypoints:
- # In COCO annotations,
- # keypoints coordinates are pixel indices.
- # However our predictions are floating point coordinates.
- # Therefore we subtract 0.5 to be consistent with the annotation format.
- # This is the inverse of data loading logic in `datasets/coco.py`.
- keypoints[k][:, :2] -= 0.5
- result["keypoints"] = keypoints[k].flatten().tolist()
- results.append(result)
- return results
-
-
-# inspired from Detectron:
-# https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L255 # noqa
-def _evaluate_box_proposals(dataset_predictions, coco_api, thresholds=None, area="all", limit=None):
- """
- Evaluate detection proposal recall metrics. This function is a much
- faster alternative to the official COCO API recall evaluation code. However,
- it produces slightly different results.
- """
- # Record max overlap value for each gt box
- # Return vector of overlap values
- areas = {
- "all": 0,
- "small": 1,
- "medium": 2,
- "large": 3,
- "96-128": 4,
- "128-256": 5,
- "256-512": 6,
- "512-inf": 7,
- }
- area_ranges = [
- [0**2, 1e5**2], # all
- [0**2, 32**2], # small
- [32**2, 96**2], # medium
- [96**2, 1e5**2], # large
- [96**2, 128**2], # 96-128
- [128**2, 256**2], # 128-256
- [256**2, 512**2], # 256-512
-        [512**2, 1e5**2],  # 512-inf
-    ]
- assert area in areas, "Unknown area range: {}".format(area)
- area_range = area_ranges[areas[area]]
- gt_overlaps = []
- num_pos = 0
-
- for prediction_dict in dataset_predictions:
- predictions = prediction_dict["proposals"]
-
- # sort predictions in descending order
- # TODO maybe remove this and make it explicit in the documentation
- inds = predictions.objectness_logits.sort(descending=True)[1]
- predictions = predictions[inds]
-
- ann_ids = coco_api.getAnnIds(imgIds=prediction_dict["image_id"])
- anno = coco_api.loadAnns(ann_ids)
- gt_boxes = [
- BoxMode.convert(obj["bbox"], BoxMode.XYWH_ABS, BoxMode.XYXY_ABS)
- for obj in anno
- if obj["iscrowd"] == 0
- ]
- gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4) # guard against no boxes
- gt_boxes = Boxes(gt_boxes)
- gt_areas = torch.as_tensor([obj["area"] for obj in anno if obj["iscrowd"] == 0])
-
- if len(gt_boxes) == 0 or len(predictions) == 0:
- continue
-
- valid_gt_inds = (gt_areas >= area_range[0]) & (gt_areas <= area_range[1])
- gt_boxes = gt_boxes[valid_gt_inds]
-
- num_pos += len(gt_boxes)
-
- if len(gt_boxes) == 0:
- continue
-
- if limit is not None and len(predictions) > limit:
- predictions = predictions[:limit]
-
- overlaps = pairwise_iou(predictions.proposal_boxes, gt_boxes)
-
- _gt_overlaps = torch.zeros(len(gt_boxes))
- for j in range(min(len(predictions), len(gt_boxes))):
- # find which proposal box maximally covers each gt box
- # and get the iou amount of coverage for each gt box
- max_overlaps, argmax_overlaps = overlaps.max(dim=0)
-
- # find which gt box is 'best' covered (i.e. 'best' = most iou)
- gt_ovr, gt_ind = max_overlaps.max(dim=0)
- assert gt_ovr >= 0
- # find the proposal box that covers the best covered gt box
- box_ind = argmax_overlaps[gt_ind]
- # record the iou coverage of this gt box
- _gt_overlaps[j] = overlaps[box_ind, gt_ind]
- assert _gt_overlaps[j] == gt_ovr
- # mark the proposal box and the gt box as used
- overlaps[box_ind, :] = -1
- overlaps[:, gt_ind] = -1
-
- # append recorded iou coverage level
- gt_overlaps.append(_gt_overlaps)
- gt_overlaps = (
- torch.cat(gt_overlaps, dim=0) if len(gt_overlaps) else torch.zeros(0, dtype=torch.float32)
- )
- gt_overlaps, _ = torch.sort(gt_overlaps)
-
- if thresholds is None:
- step = 0.05
- thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32)
- recalls = torch.zeros_like(thresholds)
- # compute recall for each iou threshold
- for i, t in enumerate(thresholds):
- recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos)
- # ar = 2 * np.trapz(recalls, thresholds)
- ar = recalls.mean()
- return {
- "ar": ar,
- "recalls": recalls,
- "thresholds": thresholds,
- "gt_overlaps": gt_overlaps,
- "num_pos": num_pos,
- }
-
-
-def _evaluate_predictions_on_coco(
- coco_gt,
- coco_results,
- iou_type,
- kpt_oks_sigmas=None,
- cocoeval_fn=COCOeval_opt,
- img_ids=None,
- max_dets_per_image=None,
-):
- """
- Evaluate the coco results using COCOEval API.
- """
- assert len(coco_results) > 0
-
- if iou_type == "segm":
- coco_results = copy.deepcopy(coco_results)
- # When evaluating mask AP, if the results contain bbox, cocoapi will
- # use the box area as the area of the instance, instead of the mask area.
- # This leads to a different definition of small/medium/large.
- # We remove the bbox field to let mask AP use mask area.
- for c in coco_results:
- c.pop("bbox", None)
-
- coco_dt = coco_gt.loadRes(coco_results)
- coco_eval = cocoeval_fn(coco_gt, coco_dt, iou_type)
- # For COCO, the default max_dets_per_image is [1, 10, 100].
- if max_dets_per_image is None:
- max_dets_per_image = [1, 10, 100] # Default from COCOEval
- else:
- assert (
- len(max_dets_per_image) >= 3
- ), "COCOeval requires maxDets (and max_dets_per_image) to have length at least 3"
-        # In the case that the user supplies a custom input for max_dets_per_image,
- # apply COCOevalMaxDets to evaluate AP with the custom input.
- if max_dets_per_image[2] != 100:
- coco_eval = COCOevalMaxDets(coco_gt, coco_dt, iou_type)
- if iou_type != "keypoints":
- coco_eval.params.maxDets = max_dets_per_image
-
- if img_ids is not None:
- coco_eval.params.imgIds = img_ids
-
- if iou_type == "keypoints":
- # Use the COCO default keypoint OKS sigmas unless overrides are specified
- if kpt_oks_sigmas:
- assert hasattr(coco_eval.params, "kpt_oks_sigmas"), "pycocotools is too old!"
- coco_eval.params.kpt_oks_sigmas = np.array(kpt_oks_sigmas)
- # COCOAPI requires every detection and every gt to have keypoints, so
- # we just take the first entry from both
- num_keypoints_dt = len(coco_results[0]["keypoints"]) // 3
- num_keypoints_gt = len(next(iter(coco_gt.anns.values()))["keypoints"]) // 3
- num_keypoints_oks = len(coco_eval.params.kpt_oks_sigmas)
- assert num_keypoints_oks == num_keypoints_dt == num_keypoints_gt, (
- f"[COCOEvaluator] Prediction contain {num_keypoints_dt} keypoints. "
- f"Ground truth contains {num_keypoints_gt} keypoints. "
- f"The length of cfg.TEST.KEYPOINT_OKS_SIGMAS is {num_keypoints_oks}. "
- "They have to agree with each other. For meaning of OKS, please refer to "
- "http://cocodataset.org/#keypoints-eval."
- )
-
- coco_eval.evaluate()
- coco_eval.accumulate()
- coco_eval.summarize()
-
- return coco_eval
-
-
-class COCOevalMaxDets(COCOeval):
- """
- Modified version of COCOeval for evaluating AP with a custom
- maxDets (by default for COCO, maxDets is 100)
- """
-
- def summarize(self):
- """
- Compute and display summary metrics for evaluation results given
- a custom value for max_dets_per_image
- """
-
- def _summarize(ap=1, iouThr=None, areaRng="all", maxDets=100):
- p = self.params
- iStr = " {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}"
- titleStr = "Average Precision" if ap == 1 else "Average Recall"
- typeStr = "(AP)" if ap == 1 else "(AR)"
- iouStr = (
- "{:0.2f}:{:0.2f}".format(p.iouThrs[0], p.iouThrs[-1])
- if iouThr is None
- else "{:0.2f}".format(iouThr)
- )
-
- aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng]
- mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets]
- if ap == 1:
- # dimension of precision: [TxRxKxAxM]
- s = self.eval["precision"]
- # IoU
- if iouThr is not None:
- t = np.where(iouThr == p.iouThrs)[0]
- s = s[t]
- s = s[:, :, :, aind, mind]
- else:
- # dimension of recall: [TxKxAxM]
- s = self.eval["recall"]
- if iouThr is not None:
- t = np.where(iouThr == p.iouThrs)[0]
- s = s[t]
- s = s[:, :, aind, mind]
- if len(s[s > -1]) == 0:
- mean_s = -1
- else:
- mean_s = np.mean(s[s > -1])
- print(iStr.format(titleStr, typeStr, iouStr, areaRng, maxDets, mean_s))
- return mean_s
-
- def _summarizeDets():
- stats = np.zeros((12,))
- # Evaluate AP using the custom limit on maximum detections per image
- stats[0] = _summarize(1, maxDets=self.params.maxDets[2])
- stats[1] = _summarize(1, iouThr=0.5, maxDets=self.params.maxDets[2])
- stats[2] = _summarize(1, iouThr=0.75, maxDets=self.params.maxDets[2])
- stats[3] = _summarize(1, areaRng="small", maxDets=self.params.maxDets[2])
- stats[4] = _summarize(1, areaRng="medium", maxDets=self.params.maxDets[2])
- stats[5] = _summarize(1, areaRng="large", maxDets=self.params.maxDets[2])
- stats[6] = _summarize(0, maxDets=self.params.maxDets[0])
- stats[7] = _summarize(0, maxDets=self.params.maxDets[1])
- stats[8] = _summarize(0, maxDets=self.params.maxDets[2])
- stats[9] = _summarize(0, areaRng="small", maxDets=self.params.maxDets[2])
- stats[10] = _summarize(0, areaRng="medium", maxDets=self.params.maxDets[2])
- stats[11] = _summarize(0, areaRng="large", maxDets=self.params.maxDets[2])
- return stats
-
- def _summarizeKps():
- stats = np.zeros((10,))
- stats[0] = _summarize(1, maxDets=20)
- stats[1] = _summarize(1, maxDets=20, iouThr=0.5)
- stats[2] = _summarize(1, maxDets=20, iouThr=0.75)
- stats[3] = _summarize(1, maxDets=20, areaRng="medium")
- stats[4] = _summarize(1, maxDets=20, areaRng="large")
- stats[5] = _summarize(0, maxDets=20)
- stats[6] = _summarize(0, maxDets=20, iouThr=0.5)
- stats[7] = _summarize(0, maxDets=20, iouThr=0.75)
- stats[8] = _summarize(0, maxDets=20, areaRng="medium")
- stats[9] = _summarize(0, maxDets=20, areaRng="large")
- return stats
-
- if not self.eval:
- raise Exception("Please run accumulate() first")
- iouType = self.params.iouType
- if iouType == "segm" or iouType == "bbox":
- summarize = _summarizeDets
- elif iouType == "keypoints":
- summarize = _summarizeKps
- self.stats = summarize()
-
- def __str__(self):
- self.summarize()
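For context, the coco_evaluation.py removed above is normally driven through detectron2's standard evaluation loop rather than instantiated by hand. A minimal sketch, assuming a dataset registered under the illustrative name "coco_2017_val" and an already-built detectron2 `cfg` and `model`:

    from detectron2.data import build_detection_test_loader
    from detectron2.evaluation import COCOEvaluator, inference_on_dataset

    # Tasks are inferred from the predictions; output_dir receives the two dump
    # files described in the constructor docstring above.
    evaluator = COCOEvaluator("coco_2017_val", output_dir="./output")
    val_loader = build_detection_test_loader(cfg, "coco_2017_val")
    results = inference_on_dataset(model, val_loader, evaluator)  # e.g. results["bbox"]["AP"]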
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/Panoptic-DeepLab/panoptic_deeplab/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/Panoptic-DeepLab/panoptic_deeplab/__init__.py
deleted file mode 100644
index 8d3c980643bbd385594850bfbffa84cd1412c162..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/Panoptic-DeepLab/panoptic_deeplab/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .config import add_panoptic_deeplab_config
-from .dataset_mapper import PanopticDeeplabDatasetMapper
-from .panoptic_seg import (
- PanopticDeepLab,
- INS_EMBED_BRANCHES_REGISTRY,
- build_ins_embed_branch,
- PanopticDeepLabSemSegHead,
- PanopticDeepLabInsEmbedHead,
-)
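The __init__.py removed above re-exports the Panoptic-DeepLab config hook and model components. A hedged sketch of how such a project module is typically wired into a detectron2 config, following the add_*_config convention (the YAML path is illustrative):

    from detectron2.config import get_cfg
    from panoptic_deeplab import add_panoptic_deeplab_config

    cfg = get_cfg()
    add_panoptic_deeplab_config(cfg)  # register the project-specific config keys
    cfg.merge_from_file("configs/panoptic_deeplab_example.yaml")  # illustrative path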
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tools/lazyconfig_train_net.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tools/lazyconfig_train_net.py
deleted file mode 100644
index bb62d36c0c171b0391453afafc2828ebab1b0da1..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/tools/lazyconfig_train_net.py
+++ /dev/null
@@ -1,131 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-"""
-Training script using the new "LazyConfig" python config files.
-
-This script reads a given python config file and runs the training or evaluation.
-It can be used to train any model or dataset as long as it can be
-instantiated by the recursive construction defined in the given config file.
-
-Besides lazy construction of models, dataloaders, etc., this script expects a
-few common configuration parameters currently defined in "configs/common/train.py".
-To add more complicated training logic, you can easily add other configs
-in the config file and implement a new train_net.py to handle them.
-"""
-import logging
-
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import LazyConfig, instantiate
-from detectron2.engine import (
- AMPTrainer,
- SimpleTrainer,
- default_argument_parser,
- default_setup,
- default_writers,
- hooks,
- launch,
-)
-from detectron2.engine.defaults import create_ddp_model
-from detectron2.evaluation import inference_on_dataset, print_csv_format
-from detectron2.utils import comm
-
-logger = logging.getLogger("detectron2")
-
-
-def do_test(cfg, model):
- if "evaluator" in cfg.dataloader:
- ret = inference_on_dataset(
- model, instantiate(cfg.dataloader.test), instantiate(cfg.dataloader.evaluator)
- )
- print_csv_format(ret)
- return ret
-
-
-def do_train(args, cfg):
- """
- Args:
- cfg: an object with the following attributes:
- model: instantiate to a module
- dataloader.{train,test}: instantiate to dataloaders
- dataloader.evaluator: instantiate to evaluator for test set
-            optimizer: instantiate to an optimizer
- lr_multiplier: instantiate to a fvcore scheduler
- train: other misc config defined in `configs/common/train.py`, including:
- output_dir (str)
- init_checkpoint (str)
- amp.enabled (bool)
- max_iter (int)
- eval_period, log_period (int)
- device (str)
- checkpointer (dict)
- ddp (dict)
- """
- model = instantiate(cfg.model)
- logger = logging.getLogger("detectron2")
- logger.info("Model:\n{}".format(model))
- model.to(cfg.train.device)
-
- cfg.optimizer.params.model = model
- optim = instantiate(cfg.optimizer)
-
- train_loader = instantiate(cfg.dataloader.train)
-
- model = create_ddp_model(model, **cfg.train.ddp)
- trainer = (AMPTrainer if cfg.train.amp.enabled else SimpleTrainer)(model, train_loader, optim)
- checkpointer = DetectionCheckpointer(
- model,
- cfg.train.output_dir,
- trainer=trainer,
- )
- trainer.register_hooks(
- [
- hooks.IterationTimer(),
- hooks.LRScheduler(scheduler=instantiate(cfg.lr_multiplier)),
- hooks.PeriodicCheckpointer(checkpointer, **cfg.train.checkpointer)
- if comm.is_main_process()
- else None,
- hooks.EvalHook(cfg.train.eval_period, lambda: do_test(cfg, model)),
- hooks.PeriodicWriter(
- default_writers(cfg.train.output_dir, cfg.train.max_iter),
- period=cfg.train.log_period,
- )
- if comm.is_main_process()
- else None,
- ]
- )
-
- checkpointer.resume_or_load(cfg.train.init_checkpoint, resume=args.resume)
- if args.resume and checkpointer.has_checkpoint():
- # The checkpoint stores the training iteration that just finished, thus we start
- # at the next iteration
- start_iter = trainer.iter + 1
- else:
- start_iter = 0
- trainer.train(start_iter, cfg.train.max_iter)
-
-
-def main(args):
- cfg = LazyConfig.load(args.config_file)
- cfg = LazyConfig.apply_overrides(cfg, args.opts)
- default_setup(cfg, args)
-
- if args.eval_only:
- model = instantiate(cfg.model)
- model.to(cfg.train.device)
- model = create_ddp_model(model)
- DetectionCheckpointer(model).load(cfg.train.init_checkpoint)
- print(do_test(cfg, model))
- else:
- do_train(args, cfg)
-
-
-if __name__ == "__main__":
- args = default_argument_parser().parse_args()
- launch(
- main,
- args.num_gpus,
- num_machines=args.num_machines,
- machine_rank=args.machine_rank,
- dist_url=args.dist_url,
- args=(args,),
- )
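The lazyconfig_train_net.py removed above is driven entirely from the command line via detectron2's default argument parser, with LazyConfig overrides passed as trailing key=value pairs. A hedged example invocation (config and checkpoint paths are illustrative):

    python lazyconfig_train_net.py --config-file path/to/lazy_config.py --num-gpus 1
    python lazyconfig_train_net.py --config-file path/to/lazy_config.py --eval-only \
        train.init_checkpoint=path/to/model_checkpoint.pth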
diff --git a/spaces/butterswords/nlc-explorer/.ipynb_checkpoints/backup-app-checkpoint.py b/spaces/butterswords/nlc-explorer/.ipynb_checkpoints/backup-app-checkpoint.py
deleted file mode 100644
index 644360ced2d42ffbaf29690e662c973d8be280d2..0000000000000000000000000000000000000000
--- a/spaces/butterswords/nlc-explorer/.ipynb_checkpoints/backup-app-checkpoint.py
+++ /dev/null
@@ -1,343 +0,0 @@
-#Import the libraries we know we'll need for the Generator.
-import pandas as pd, spacy, nltk, numpy as np
-from spacy.matcher import Matcher
-nlp = spacy.load("en_core_web_lg")
-from nltk.corpus import wordnet
-
-#Import the libraries to support the model and predictions.
-from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline
-import lime
-import torch
-import torch.nn.functional as F
-from lime.lime_text import LimeTextExplainer
-
-#Import the libraries for human interaction and visualization.
-import altair as alt
-import streamlit as st
-from annotated_text import annotated_text as ant
-
-#Import functions needed to build dataframes of keywords from WordNet
-from WNgen import *
-from NLselector import *
-
-@st.experimental_singleton
-def set_up_explainer():
- class_names = ['negative', 'positive']
- explainer = LimeTextExplainer(class_names=class_names)
- return explainer
-
-@st.experimental_singleton
-def prepare_model():
- tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
- model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
- pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
- return tokenizer, model, pipe
-
-@st.experimental_singleton
-def prepare_lists():
- nltk.download('omw-1.4')
- nltk.download('wordnet')
- countries = pd.read_csv("Assets/Countries/combined-countries.csv")
- professions = pd.read_csv("Assets/Professions/soc-professions-2018.csv")
- word_lists = [list(countries.Words),list(professions.Words)]
- return countries, professions, word_lists
-
-#Provide all the functions necessary to run the app
-#get definitions for control flow in Streamlit
-def get_def(word, POS=False):
- pos_options = ['NOUN','VERB','ADJ','ADV']
- m_word = word.replace(" ", "_")
- if POS in pos_options:
- seed_definitions = [syn.definition() for syn in wordnet.synsets(m_word, pos=getattr(wordnet, POS))]
- else:
- seed_definitions = [syn.definition() for syn in wordnet.synsets(m_word)]
- seed_definition = col1.selectbox("Which definition is most relevant?", seed_definitions, key= "WN_definition")
- if col1.button("Choose Definition"):
- col1.write("You've chosen a definition.")
- st.session_state.definition = seed_definition
- return seed_definition
- else:
- col1.write("Please choose a definition.")
-
-###Start coding the actual app###
-st.set_page_config(layout="wide", page_title="Natural Language Counterfactuals (NLC)")
-layouts = ['Natural Language Explanation', 'Lime Explanation', 'MultiNLC', 'MultiNLC + Lime', 'VizNLC']
-alternatives = ['Similarity', 'Sampling (Random)', 'Sampling (Fixed)', 'Probability']
-alt_choice = "Similarity"
-
-#Content in the Sidebar.
-st.sidebar.info('This is an interface for exploring how different presentations of natural language explanations (NLE) may appear to people. It is intended to allow individuals to provide feedback on specific versions, as well as to compare what one offers over others for the same inputs.')
-layout = st.sidebar.selectbox("Select a layout to explore.", layouts)
-alt_choice = st.sidebar.selectbox("Choose the way you want to display alternatives.", alternatives) #Commented out until we decide this is useful functionality.
-
-#Set up the Main Area Layout
-st.title('Natural Language Counterfactuals (NLC) Prototype')
-st.subheader(f'Current Layout: {layout}')
-text = st.text_input('Provide a sentence you want to evaluate.', placeholder = "I like you. I love you.", key="input")
-
-#Prepare the model, data, and Lime. Set starting variables.
-tokenizer, model, pipe = prepare_model()
-countries, professions, word_lists = prepare_lists()
-explainer = set_up_explainer()
-text2 = ""
-text3 = ""
-cf_df = pd.DataFrame()
-if 'definition' not in st.session_state:
- st.session_state.definition = "<(^_')>"
-
-#Outline the various user interfaces we have built.
-
-col1, col2, col3 = st.columns(3)
-if layout == 'Natural Language Explanation':
- with col1:
- if st.session_state.input != "":
- st.caption("This is the sentence you provided.")
- st.write(text)
- probability, sentiment = eval_pred(text, return_all=True)
- nat_lang_explanation = construct_nlexp(text,sentiment,probability)
-
-if layout == 'Lime Explanation':
- with col1:
- #Use spaCy to make the sentence into a doc so we can do NLP.
- doc = nlp(st.session_state.input)
- #Evaluate the provided sentence for sentiment and probability.
- if st.session_state.input != "":
- st.caption("This is the sentence you provided.")
- st.write(text)
- probability, sentiment = eval_pred(text, return_all=True)
- options, lime = critical_words(st.session_state.input,options=True)
- nat_lang_explanation = construct_nlexp(text,sentiment,probability)
- st.write(" ")
- st.altair_chart(lime_viz(lime))
-
-if layout == 'MultiNLC':
- with col1:
- #Use spaCy to make the sentence into a doc so we can do NLP.
- doc = nlp(st.session_state.input)
- #Evaluate the provided sentence for sentiment and probability.
- if st.session_state.input != "":
- st.caption("This is the sentence you provided.")
- st.write(text)
- probability, sentiment = eval_pred(text, return_all=True)
- options, lime = critical_words(st.session_state.input,options=True)
- nat_lang_explanation = construct_nlexp(text,sentiment,probability)
-
- #Allow the user to pick an option to generate counterfactuals from.
- option = st.radio('Which word would you like to use to generate alternatives?', options, key = "option")
- if (any(option in sublist for sublist in word_lists)):
- st.write(f'You selected {option}. It matches a list.')
- elif option:
- st.write(f'You selected {option}. It does not match a list.')
- definition = get_def(option)
- else:
- st.write('Awaiting your selection.')
-
- if st.button('Generate Alternatives'):
- if option in list(countries.Words):
- cf_df = gen_cf_country(countries, doc, option)
- st.success('Alternatives created.')
- elif option in list(professions.Words):
- cf_df = gen_cf_profession(professions, doc, option)
- st.success('Alternatives created.')
- else:
- with st.sidebar:
- ant("Generating alternatives for",(option,"opt","#E0FBFB"), "with a definition of: ",(st.session_state.definition,"def","#E0FBFB"),".")
- cf_df = cf_from_wordnet_df(option,text,seed_definition=st.session_state.definition)
- st.success('Alternatives created.')
-
- if len(cf_df) != 0:
- if alt_choice == "Similarity":
- text2, text3 = get_min_max(cf_df, option)
- col2.caption(f"This sentence is 'similar' to {option}.")
- col3.caption(f"This sentence is 'not similar' to {option}.")
- elif alt_choice == "Sampling (Random)":
- text2, text3 = sampled_alts(cf_df, option)
- col2.caption(f"This sentence is a random sample from the alternatives.")
- col3.caption(f"This sentence is a random sample from the alternatives.")
- elif alt_choice == "Sampling (Fixed)":
- text2, text3 = sampled_alts(cf_df, option, fixed=True)
- col2.caption(f"This sentence is a fixed sample of the alternatives.")
- col3.caption(f"This sentence is a fixed sample of the alternatives.")
- elif alt_choice == "Probability":
- text2, text3 = abs_dif(cf_df, option)
- col2.caption(f"This sentence is the closest prediction in the model.")
- col3.caption(f"This sentence is the farthest prediction in the model.")
- with st.sidebar:
- st.info(f"Alternatives generated: {len(cf_df)}")
-
- with col2:
- if text2 != "":
- sim2 = cf_df.loc[cf_df['text'] == text2, 'similarity'].iloc[0]
- st.write(text2)
- probability2, sentiment2 = eval_pred(text2, return_all=True)
- nat_lang_explanation = construct_nlexp(text2,sentiment2,probability2)
- #st.info(f" Similarity Score: {np.round(sim2, 2)}, Num Checked: {len(cf_df)}") #for QA purposes
-
- with col3:
- if text3 != "":
- sim3 = cf_df.loc[cf_df['text'] == text3, 'similarity'].iloc[0]
- st.write(text3)
- probability3, sentiment3 = eval_pred(text3, return_all=True)
- nat_lang_explanation = construct_nlexp(text3,sentiment3,probability3)
- #st.info(f"Similarity Score: {np.round(sim3, 2)}, Num Checked: {len(cf_df)}") #for QA purposes
-
-if layout == 'MultiNLC + Lime':
- with col1:
-
- #Use spaCy to make the sentence into a doc so we can do NLP.
- doc = nlp(st.session_state.input)
- #Evaluate the provided sentence for sentiment and probability.
- if st.session_state.input != "":
- st.caption("This is the sentence you provided.")
- st.write(text)
- probability, sentiment = eval_pred(text, return_all=True)
- options, lime = critical_words(st.session_state.input,options=True)
- nat_lang_explanation = construct_nlexp(text,sentiment,probability)
- st.write(" ")
- st.altair_chart(lime_viz(lime))
-
- #Allow the user to pick an option to generate counterfactuals from.
- option = st.radio('Which word would you like to use to generate alternatives?', options, key = "option")
- if (any(option in sublist for sublist in word_lists)):
- st.write(f'You selected {option}. It matches a list.')
- elif option:
- st.write(f'You selected {option}. It does not match a list.')
- definition = get_def(option)
- else:
- st.write('Awaiting your selection.')
-
- if st.button('Generate Alternatives'):
- if option in list(countries.Words):
- cf_df = gen_cf_country(countries, doc, option)
- st.success('Alternatives created.')
- elif option in list(professions.Words):
- cf_df = gen_cf_profession(professions, doc, option)
- st.success('Alternatives created.')
- else:
- with st.sidebar:
- ant("Generating alternatives for",(option,"opt","#E0FBFB"), "with a definition of: ",(st.session_state.definition,"def","#E0FBFB"),".")
- cf_df = cf_from_wordnet_df(option,text,seed_definition=st.session_state.definition)
- st.success('Alternatives created.')
-
- if len(cf_df) != 0:
- if alt_choice == "Similarity":
- text2, text3 = get_min_max(cf_df, option)
- col2.caption(f"This sentence is 'similar' to {option}.")
- col3.caption(f"This sentence is 'not similar' to {option}.")
- elif alt_choice == "Sampling (Random)":
- text2, text3 = sampled_alts(cf_df, option)
- col2.caption(f"This sentence is a random sample from the alternatives.")
- col3.caption(f"This sentence is a random sample from the alternatives.")
- elif alt_choice == "Sampling (Fixed)":
- text2, text3 = sampled_alts(cf_df, option, fixed=True)
- col2.caption(f"This sentence is a fixed sample of the alternatives.")
- col3.caption(f"This sentence is a fixed sample of the alternatives.")
- elif alt_choice == "Probability":
- text2, text3 = abs_dif(cf_df, option)
- col2.caption(f"This sentence is the closest prediction in the model.")
- col3.caption(f"This sentence is the farthest prediction in the model.")
- with st.sidebar:
- st.info(f"Alternatives generated: {len(cf_df)}")
-
- with col2:
- if text2 != "":
- sim2 = cf_df.loc[cf_df['text'] == text2, 'similarity'].iloc[0]
- st.write(text2)
- probability2, sentiment2 = eval_pred(text2, return_all=True)
- nat_lang_explanation = construct_nlexp(text2,sentiment2,probability2)
- exp2 = explainer.explain_instance(text2, predictor, num_features=15, num_samples=2000)
- lime_results2 = exp2.as_list()
- st.write(" ")
- st.altair_chart(lime_viz(lime_results2))
-
- with col3:
- if text3 != "":
- sim3 = cf_df.loc[cf_df['text'] == text3, 'similarity'].iloc[0]
- st.write(text3)
- probability3, sentiment3 = eval_pred(text3, return_all=True)
- nat_lang_explanation = construct_nlexp(text3,sentiment3,probability3)
- exp3 = explainer.explain_instance(text3, predictor, num_features=15, num_samples=2000)
- lime_results3 = exp3.as_list()
- st.write(" ")
- st.altair_chart(lime_viz(lime_results3))
-
-if layout == 'VizNLC':
- with col1:
-
- #Use spaCy to make the sentence into a doc so we can do NLP.
- doc = nlp(st.session_state.input)
- #Evaluate the provided sentence for sentiment and probability.
- if st.session_state.input != "":
- st.caption("This is the sentence you provided.")
- st.write(text)
- probability, sentiment = eval_pred(text, return_all=True)
- options, lime = critical_words(st.session_state.input,options=True)
- nat_lang_explanation = construct_nlexp(text,sentiment,probability)
- st.write(" ")
- st.altair_chart(lime_viz(lime))
-
- #Allow the user to pick an option to generate counterfactuals from.
- option = st.radio('Which word would you like to use to generate alternatives?', options, key = "option")
- if (any(option in sublist for sublist in word_lists)):
- st.write(f'You selected {option}. It matches a list.')
- elif option:
- st.write(f'You selected {option}. It does not match a list.')
- definition = get_def(option)
- else:
- st.write('Awaiting your selection.')
-
- if st.button('Generate Alternatives'):
- if option in list(countries.Words):
- cf_df = gen_cf_country(countries, doc, option)
- st.success('Alternatives created.')
- elif option in list(professions.Words):
- cf_df = gen_cf_profession(professions, doc, option)
- st.success('Alternatives created.')
- else:
- with st.sidebar:
- ant("Generating alternatives for",(option,"opt","#E0FBFB"), "with a definition of: ",(st.session_state.definition,"def","#E0FBFB"),".")
- cf_df = cf_from_wordnet_df(option,text,seed_definition=st.session_state.definition)
- st.success('Alternatives created.')
-
- if len(cf_df) != 0:
- if alt_choice == "Similarity":
- text2, text3 = get_min_max(cf_df, option)
- col2.caption(f"This sentence is 'similar' to {option}.")
- col3.caption(f"This graph represents the {len(cf_df)} alternatives to {option}.")
- elif alt_choice == "Sampling (Random)":
- text2, text3 = sampled_alts(cf_df, option)
- col2.caption(f"This sentence is a random sample from the alternatives.")
- col3.caption(f"This graph represents the {len(cf_df)} alternatives to {option}.")
- elif alt_choice == "Sampling (Fixed)":
- text2, text3 = sampled_alts(cf_df, option, fixed=True)
- col2.caption(f"This sentence is a fixed sample of the alternatives.")
- col3.caption(f"This graph represents the {len(cf_df)} alternatives to {option}.")
- elif alt_choice == "Probability":
- text2, text3 = abs_dif(cf_df, option)
- col2.caption(f"This sentence is the closest prediction in the model.")
- col3.caption(f"This graph represents the {len(cf_df)} alternatives to {option}.")
- with st.sidebar:
- st.info(f"Alternatives generated: {len(cf_df)}")
-
- with col2:
- if text2 != "":
- sim2 = cf_df.loc[cf_df['text'] == text2, 'similarity'].iloc[0]
- st.write(text2)
- probability2, sentiment2 = eval_pred(text2, return_all=True)
- nat_lang_explanation = construct_nlexp(text2,sentiment2,probability2)
- exp2 = explainer.explain_instance(text2, predictor, num_features=15, num_samples=2000)
- lime_results2 = exp2.as_list()
- st.write(" ")
- st.altair_chart(lime_viz(lime_results2))
-
- with col3:
- if not cf_df.empty:
- single_nearest = alt.selection_single(on='mouseover', nearest=True)
- full = alt.Chart(cf_df).encode(
- alt.X('similarity:Q', scale=alt.Scale(zero=False)),
- alt.Y('pred:Q'),
- color=alt.Color('Categories:N', legend=alt.Legend(title="Color of Categories")),
- size=alt.Size('seed:O'),
- tooltip=('Categories','text','pred')
- ).mark_circle(opacity=.5).properties(width=450, height=450).add_selection(single_nearest)
- st.altair_chart(full)
\ No newline at end of file
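The checkpointed Streamlit app removed above loads its tokenizer, model, and word lists once per process through st.experimental_singleton. A minimal hedged sketch of that caching pattern in isolation (same model name as the app uses; newer Streamlit releases supersede the decorator with st.cache_resource):

    import streamlit as st
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
        TextClassificationPipeline,
    )

    @st.experimental_singleton
    def prepare_model():
        # Loaded once and reused across reruns of the script in the same process.
        tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
        model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
        return TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)

    pipe = prepare_model()
    text = st.text_input("Provide a sentence you want to evaluate.", "I like you. I love you.")
    if text:
        st.write(pipe(text))  # list of label/score dicts for the sentence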
diff --git a/spaces/caffeinum/VToonify/vtoonify/model/raft/demo.py b/spaces/caffeinum/VToonify/vtoonify/model/raft/demo.py
deleted file mode 100644
index 5abc1da863f1231af1247209739402b05fa8bf85..0000000000000000000000000000000000000000
--- a/spaces/caffeinum/VToonify/vtoonify/model/raft/demo.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import sys
-sys.path.append('core')
-
-import argparse
-import os
-import cv2
-import glob
-import numpy as np
-import torch
-from PIL import Image
-
-from raft import RAFT
-from utils import flow_viz
-from utils.utils import InputPadder
-
-
-
-DEVICE = 'cuda'
-
-def load_image(imfile):
- img = np.array(Image.open(imfile)).astype(np.uint8)
- img = torch.from_numpy(img).permute(2, 0, 1).float()
- return img[None].to(DEVICE)
-
-
-def viz(img, flo):
- img = img[0].permute(1,2,0).cpu().numpy()
- flo = flo[0].permute(1,2,0).cpu().numpy()
-
- # map flow to rgb image
- flo = flow_viz.flow_to_image(flo)
- img_flo = np.concatenate([img, flo], axis=0)
-
- # import matplotlib.pyplot as plt
- # plt.imshow(img_flo / 255.0)
- # plt.show()
-
- cv2.imshow('image', img_flo[:, :, [2,1,0]]/255.0)
- cv2.waitKey()
-
-
-def demo(args):
- model = torch.nn.DataParallel(RAFT(args))
- model.load_state_dict(torch.load(args.model))
-
- model = model.module
- model.to(DEVICE)
- model.eval()
-
- with torch.no_grad():
- images = glob.glob(os.path.join(args.path, '*.png')) + \
- glob.glob(os.path.join(args.path, '*.jpg'))
-
- images = sorted(images)
- for imfile1, imfile2 in zip(images[:-1], images[1:]):
- image1 = load_image(imfile1)
- image2 = load_image(imfile2)
-
- padder = InputPadder(image1.shape)
- image1, image2 = padder.pad(image1, image2)
-
- flow_low, flow_up = model(image1, image2, iters=20, test_mode=True)
- viz(image1, flow_up)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--model', help="restore checkpoint")
- parser.add_argument('--path', help="dataset for evaluation")
- parser.add_argument('--small', action='store_true', help='use small model')
- parser.add_argument('--mixed_precision', action='store_true', help='use mixed precision')
-    parser.add_argument('--alternate_corr', action='store_true', help='use efficient correlation implementation')
- args = parser.parse_args()
-
- demo(args)
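The RAFT demo removed above pairs consecutive frames from a directory, estimates optical flow with a restored checkpoint, and displays the result. A hedged example invocation using the flags defined in its argument parser (checkpoint and frame paths are illustrative):

    python demo.py --model models/raft-things.pth --path demo-frames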
diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/training/lp_main.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/training/lp_main.py
deleted file mode 100644
index c2d4e8c85aaa3c8e4221963ef56a815cc14f354f..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/training/lp_main.py
+++ /dev/null
@@ -1,670 +0,0 @@
-from cmath import cos
-from inspect import getargs
-import logging
-import os
-import random
-from datetime import datetime
-import bisect
-import copy
-from sched import scheduler
-import numpy as np
-import torch
-import torch.backends.cudnn as cudnn
-from torch import optim
-from torch.cuda.amp import GradScaler
-import faulthandler
-import pathlib
-import argparse
-import time
-
-try:
- import wandb
-except ImportError:
- wandb = None
-
-try:
- import torch.utils.tensorboard as tensorboard
-except ImportError:
- tensorboard = None
-
-try:
- import horovod.torch as hvd
-except ImportError:
- hvd = None
-
-from open_clip import create_model_and_transforms, trace_model, create_model
-from training.data import get_data
-from training.params import parse_args
-from training.distributed import is_master, init_distributed_device, world_info_from_env
-from training.logger import setup_logging
-from training.scheduler import cosine_lr
-from training.lp_train import train_one_epoch, evaluate
-from open_clip.utils import get_tar_path_from_dataset_name, dataset_split, get_optimizer
-from open_clip.utils import load_p, load_class_label
-from open_clip.linear_probe import LinearProbe
-
-
-def maintain_ckpts(args, startidx, all_idx_len):
- for i in reversed(range(startidx, all_idx_len)):
- if os.path.exists(os.path.join(args.checkpoint_path, f"epoch_top_{i}.pt")):
- os.rename(
- os.path.join(args.checkpoint_path, f"epoch_top_{i}.pt"),
- os.path.join(args.checkpoint_path, f"epoch_top_{i+1}.pt"),
- )
- if os.path.exists(
- os.path.join(args.checkpoint_path, f"epoch_top_{all_idx_len}.pt")
- ):
- os.remove(os.path.join(args.checkpoint_path, f"epoch_top_{all_idx_len}.pt"))
- return
-
-
-def update_top_k_performance(
- new_metrics_inputs, current_top_k_ckpt_metrics, args, ckpt, bignumbetter=True
-):
- """
- Record the top-k performance of the current epoch.
-    current_top_k_ckpt_metrics is a dictionary of the form: {1: top_1_ckpt_measure, 2: top_2_ckpt_measure, ...}
- """
- if isinstance(new_metrics_inputs, (list, tuple)):
- new_metrics_inputs = np.mean(new_metrics_inputs)
- return update_top_k_performance(
- new_metrics_inputs,
- current_top_k_ckpt_metrics,
- args=args,
- ckpt=ckpt,
- bignumbetter=bignumbetter,
- )
- elif isinstance(new_metrics_inputs, dict):
- new_metrics_inputs = np.mean(list(new_metrics_inputs.values()))
- return update_top_k_performance(
- new_metrics_inputs,
- current_top_k_ckpt_metrics,
- args=args,
- ckpt=ckpt,
- bignumbetter=bignumbetter,
- )
- elif isinstance(new_metrics_inputs, (float, int)):
- update_flag = {k: False for k in current_top_k_ckpt_metrics.keys()}
- sorted_keys = sorted(current_top_k_ckpt_metrics.keys())
- sorted_values = sorted(
- current_top_k_ckpt_metrics.values(), reverse=bignumbetter
- )
- sorted_values_ = copy.deepcopy(sorted_values)
- sorted_values.append(new_metrics_inputs)
- sorted_values = sorted(sorted_values, reverse=bignumbetter)
- sorted_values = sorted_values[:-1]
-
- if sorted_values == sorted_values_:
- return current_top_k_ckpt_metrics, new_metrics_inputs
- else:
- for i in range(len(sorted_keys)):
- if current_top_k_ckpt_metrics[sorted_keys[i]] != sorted_values[i]:
- current_top_k_ckpt_metrics[sorted_keys[i]] = sorted_values[i]
- update_flag[sorted_keys[i]] = True
- for i in range(len(update_flag)):
- if update_flag[i]:
- maintain_ckpts(args, i, len(sorted_keys))
- torch.save(
- ckpt,
- os.path.join(args.checkpoint_path, f"epoch_top_{i}.pt"),
- )
- break
- return current_top_k_ckpt_metrics, new_metrics_inputs
-
-
-# def updateifNone(a, b):
-# a = b if None else a
-# return a
-
-
-def is_pretrained_params(n):
- return (
- n.startswith("clap_model.transformer")
- or n in ["clap_model.positional_embedding", "clap_model.text_projection"]
- or n.startswith("clap_model.token_embedding")
- or n.startswith("clap_model.ln_final")
- or n.startswith("clap_model.logit_scale_t")
- )
-
-
-def random_seed(seed=42, rank=0):
- torch.manual_seed(seed + rank)
- np.random.seed(seed + rank)
- random.seed(seed + rank)
-
-
-def config_lp_optimizer(model, data, args):
- # set wd-related params to 0 if use adam optimizer
- if args.optimizer == "adam":
- args.wd = 0
- args.wd_pretrained = 0
- args.wd_new = 0
-
- in_clap = lambda n, p: n.startswith("clap_model")
-
- named_parameters = list(model.named_parameters())
-
- optimizer = {}
- scheduler = {}
-
- # freeze text encoder
- text_freeze_parameters = [
- p
- for n, p in named_parameters
- if n.startswith("clap_model.transformer")
- or n in ["clap_model.positional_embedding", "clap_model.text_projection"]
- or n.startswith("clap_model.token_embedding")
- or n.startswith("clap_model.ln_final")
- ]
-
- if args.freeze_text:
- logging.info("Freeze Text!!!!")
- for k in text_freeze_parameters:
- k.requires_grad = False
-
- if not args.lp_freeze:
- exclude = (
- lambda n, p: p.ndim < 2
- or "bn" in n
- or "ln" in n
- or "bias" in n
- or "logit_scale" in n
- )
- include = lambda n, p: not exclude(n, p)
-
- # (yusong): we do not split the learning rate anymore
- # p for n, p in named_parameters if in_clap(n,p) and exclude(n, p) and p.requires_grad
- gain_or_bias_params = [
- p for n, p in named_parameters if exclude(n, p) and p.requires_grad
- ]
- # rest_params = [p for n, p in named_parameters if in_clap(n,p) and include(n, p) and p.requires_grad]
- rest_params = [
- p for n, p in named_parameters if include(n, p) and p.requires_grad
- ]
-
- if args.train_data is None:
- optimizer = None
- scheduler = None
- else:
- total_steps = data["train"].dataloader.num_batches * args.epochs
-
- if args.split_opt:
- for x in ["lr", "beta1", "beta2", "eps", "wd"]:
- for y in ["_new", "_pretrained"]:
- if getattr(args, x + y) is None:
- setattr(args, x + y, getattr(args, x))
-
- gain_or_bias_pretrained_params = [
- p
- for n, p in named_parameters
- if (exclude(n, p) and p.requires_grad) and is_pretrained_params(n)
- ]
- rest_pretrained_params = [
- p
- for n, p in named_parameters
- if (include(n, p) and p.requires_grad) and is_pretrained_params(n)
- ]
- gain_or_bias_new_params = [
- p
- for n, p in named_parameters
- if (exclude(n, p) and p.requires_grad)
- and (not is_pretrained_params(n))
- ]
- rest_new_params = [
- p
- for n, p in named_parameters
- if (include(n, p) and p.requires_grad)
- and (not is_pretrained_params(n))
- ]
-
- pretrained_params_optimizer = get_optimizer(
- [
- {"params": gain_or_bias_pretrained_params, "weight_decay": 0.0},
- {
- "params": rest_pretrained_params,
- "weight_decay": args.wd_pretrained,
- },
- ],
- lr=args.lr_pretrained,
- betas=(args.beta1_pretrained, args.beta2_pretrained),
- eps=args.eps_pretrained,
- momentum=args.momentum_pretrained,
- optimizer_name=args.optimizer,
- )
- pretrained_params_scheduler = cosine_lr(
- pretrained_params_optimizer,
- args.lr_pretrained,
- args.warmup,
- total_steps,
- )
-
- new_params_optimizer = get_optimizer(
- [
- {"params": gain_or_bias_new_params, "weight_decay": 0.0},
- {"params": rest_new_params, "weight_decay": args.wd_new},
- ],
- lr=args.lr_new,
- betas=(args.beta1_new, args.beta2_new),
- eps=args.eps_new,
- momentum=args.momentum_new,
- optimizer_name=args.optimizer,
- )
- new_params_scheduler = cosine_lr(
- new_params_optimizer, args.lr_new, args.warmup, total_steps
- )
-
- optimizer["text"] = pretrained_params_optimizer
- optimizer["audio"] = new_params_optimizer
- scheduler["text"] = pretrained_params_scheduler
- scheduler["audio"] = new_params_scheduler
-
- if args.horovod:
- pretrained_params_optimizer = hvd.DistributedOptimizer(
- pretrained_params_optimizer,
- named_parameters=model.named_parameters(),
- )
- new_params_optimizer = hvd.DistributedOptimizer(
- new_params_optimizer, named_parameters=model.named_parameters()
- )
- hvd.broadcast_parameters(model.state_dict(), root_rank=0)
- hvd.broadcast_optimizer_state(
- pretrained_params_optimizer, root_rank=0
- )
- hvd.broadcast_optimizer_state(new_params_optimizer, root_rank=0)
- else:
-
- optimizer["clap"] = get_optimizer(
- [
- {"params": gain_or_bias_params, "weight_decay": 0.0},
- {"params": rest_params, "weight_decay": args.wd},
- ],
- lr=args.lr,
- betas=(args.beta1, args.beta2),
- eps=args.eps,
- momentum=args.momentum,
- optimizer_name=args.optimizer,
- )
- scheduler["clap"] = cosine_lr(
- optimizer["clap"], args.lr, args.warmup, total_steps
- )
-
- if args.horovod:
- optimizer["clap"] = hvd.DistributedOptimizer(
- optimizer["clap"], named_parameters=model.named_parameters()
- )
- hvd.broadcast_parameters(model.state_dict(), root_rank=0)
- hvd.broadcast_optimizer_state(optimizer["clap"], root_rank=0)
-
- # linear probe optimizer
- else:
- lp_params = [
- p for n, p in named_parameters if (not in_clap(n, p)) and p.requires_grad
- ]
- lp_optim = get_optimizer(
- lp_params,
- lr=args.lp_lr,
- betas=(args.beta1, args.beta2),
- eps=args.eps,
- momentum=0.9,
- optimizer_name=args.optimizer,
- )
- optimizer["lp"] = lp_optim
-
- return optimizer, scheduler, text_freeze_parameters
-
-
-def main():
- args = parse_args()
-
- time.sleep(args.sleep)
-
- # sanitize model name for filesystem / uri use, easier if we don't use / in name as a rule?
- args.amodel = args.amodel.replace("/", "-")
- # download sizes.json file
-
- # (yusong): the below two lines are for debug
- # print("setting up faulthandler")
- # faulthandler.register(10)
-
- random.seed(args.seed)
- torch.manual_seed(args.seed)
- torch.cuda.manual_seed(args.seed)
- torch.cuda.manual_seed_all(args.seed)
- np.random.seed(args.seed)
- args.class_index_dict = load_class_label(args.class_label_path)
-
- # get the name of the experiments
- if args.name is None:
- args.name = "-".join(
- [
- datetime.now().strftime("%Y_%m_%d-%H_%M_%S"),
- f"linear_probe" f"model_{args.amodel}",
- f"lr_{args.lr}",
- f"b_{args.batch_size}",
- f"j_{args.workers}",
- f"p_{args.precision}",
- ]
- )
-
- # discover initial world args early so we can log properly
- args.distributed = False
- args.local_rank, args.rank, args.world_size = world_info_from_env()
-
- if args.remotedata and is_master(args):
- for dataset_name in args.datasetnames:
- for split in dataset_split[dataset_name]:
- if not os.path.exists(f"./json_files/{dataset_name}/{split}"):
- os.makedirs(f"./json_files/{dataset_name}/{split}")
- os.system(
- f"aws s3 cp s3://s-laion-audio/webdataset_tar/{dataset_name}/{split}/sizes.json ./json_files/{dataset_name}/{split}/sizes.json"
- )
-
- args.log_path = None
- if is_master(args, local=args.log_local):
- log_base_path = os.path.join(args.logs, args.name)
- os.makedirs(log_base_path, exist_ok=True)
- log_filename = f"out-{args.rank}" if args.log_local else "out.log"
- args.log_path = os.path.join(log_base_path, log_filename)
-
- # avoid log dir in same name:
- postfix = 0
- while os.path.exists(args.log_path):
- postfix += 1
- log_base_path_new = log_base_path + "-" + str(postfix)
- os.makedirs(log_base_path_new, exist_ok=True)
- log_filename = f"out-{args.rank}" if args.log_local else "out.log"
- args.log_path = os.path.join(log_base_path_new, log_filename)
- # print(
- # "Error. Experiment already exists. Use --name {} to specify a new experiment."
- # )
- # return -1
-
- # Set logger
- args.log_level = logging.DEBUG if args.debug else logging.INFO
- setup_logging(args.log_path, args.log_level)
-
- # fully initialize distributed device environment
- device = init_distributed_device(args)
-
- args.wandb = "wandb" in args.report_to or "all" in args.report_to
- args.tensorboard = "tensorboard" in args.report_to or "all" in args.report_to
- if is_master(args):
- args.tensorboard_path = (
- os.path.join(args.logs, args.name, "tensorboard")
- if args.tensorboard
- else ""
- )
- args.checkpoint_path = os.path.join(args.logs, args.name, "checkpoints")
- for dirname in [args.tensorboard_path, args.checkpoint_path]:
- if dirname:
- os.makedirs(dirname, exist_ok=True)
- else:
- args.tensorboard_path = ""
- args.checkpoint_path = ""
-
- if args.copy_codebase:
- copy_codebase(args)
-
- assert args.precision in ["amp", "fp16", "fp32"]
- if args.precision == "fp16":
- logging.warning(
- "It is recommended to use AMP mixed-precision instead of FP16. "
- "FP16 support needs further verification and tuning, especially for train."
- )
-
- if args.horovod:
- logging.info(
- f"Running in horovod mode with multiple processes / nodes. Device: {args.device}."
- f"Process (global: {args.rank}, local {args.local_rank}), total {args.world_size}."
- )
- elif args.distributed:
- logging.info(
- f"Running in distributed mode with multiple processes. Device: {args.device}."
- f"Process (global: {args.rank}, local {args.local_rank}), total {args.world_size}."
- )
- else:
- logging.info(f"Running with a single process. Device {args.device}.")
-
- logging.info(f"openai cache dir: {os.path.expanduser(args.openai_model_cache_dir)}")
-
- # Create CLAP model
- clap_model, clap_model_cfg = create_model(
- args.amodel,
- args.tmodel,
- args.pretrained,
- precision=args.precision,
- device=device,
- jit=args.torchscript,
- force_quick_gelu=args.force_quick_gelu,
- openai_model_cache_dir=os.path.expanduser(args.openai_model_cache_dir),
- skip_params=False,
- pretrained_audio=args.pretrained_audio,
- pretrained_text=args.pretrained_text,
- enable_fusion=args.enable_fusion,
- fusion_type=args.fusion_type,
- )
-
- args.lp_out_ch = len(list(args.class_index_dict.keys()))
- # Linear Probe
- logging.info(f"linear probe using mlp: {args.lp_mlp}")
- logging.info(f"linear probe using freeze: {args.lp_freeze}")
- logging.info(f"linear probe act layer: {args.lp_act}")
- logging.info(f"linear probe out ch: {args.lp_out_ch}")
- logging.info(f"linear probe learning rate (if applicable): {args.lp_lr}")
- logging.info(f"linear probe loss func: {args.lp_loss}")
- logging.info(f"linear probe lp_metrics: {args.lp_metrics}")
-
- model = LinearProbe(
- clap_model,
- mlp=args.lp_mlp,
- freeze=args.lp_freeze,
- in_ch=512,
- out_ch=args.lp_out_ch,
- act=args.lp_act,
- ) # in_ch is fixed (i.e., 512)
- model = model.to(device)
-
- if args.horovod:
- with torch.no_grad():
- for param in model.parameters():
- param.set_(param.contiguous())
-
- if args.trace:
- model = trace_model(model, batch_size=args.batch_size, device=device)
-
- if is_master(args):
- logging.info("Linear Probe CLAP Model:")
- logging.info(f"{str(clap_model)}")
- logging.info("Params:")
- params_file = os.path.join(args.logs, args.name, "params.txt")
- with open(params_file, "w") as f:
- for name in sorted(vars(args)):
- val = getattr(args, name)
- logging.info(f" {name}: {val}")
- f.write(f"{name}: {val}\n")
-
- if args.distributed and not args.horovod:
- if args.use_bn_sync:
- model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
- ddp_args = {}
- if args.ddp_static_graph:
- # this doesn't exist in older PyTorch, arg only added if enabled
- ddp_args["static_graph"] = True
- model = torch.nn.parallel.DistributedDataParallel(
- model, device_ids=[device], find_unused_parameters=True, **ddp_args
- )
-
- data = get_data(args, clap_model_cfg)
- assert len(data), "At least one train or eval dataset must be specified."
- if args.trace:
- assert "train" not in data, "Cannot train with traced model"
-
- optimizer, scheduler, text_freeze_parameters = config_lp_optimizer(
- model, data, args
- )
-
- scaler = GradScaler() if args.precision == "amp" else None
-
- # optionally resume from a checkpoint
- start_epoch = 0
- if args.resume is not None:
- if os.path.isfile(args.resume):
- checkpoint = torch.load(args.resume, map_location=device)
- if "epoch" in checkpoint:
- # resuming a train checkpoint w/ epoch and optimizer state
- start_epoch = checkpoint["epoch"]
- sd = checkpoint["state_dict"]
- if not args.distributed and next(iter(sd.items()))[0].startswith(
- "module"
- ):
- sd = {k[len("module.") :]: v for k, v in sd.items()}
- model.load_state_dict(sd)
- if args.split_opt:
- if optimizer is not None:
- for k, o_ in optimizer.items():
- o_.load_state_dict(checkpoint[k + "_" + "optimizer"])
- if optimizer is not None:
- optimizer.load_state_dict(checkpoint["optimizer"])
- if scaler is not None and "scaler" in checkpoint:
- scaler.load_state_dict(checkpoint["scaler"])
- logging.info(
- f"=> resuming checkpoint '{args.resume}' (epoch {start_epoch})"
- )
- else:
- # loading a bare (model only) checkpoint for fine-tune or evaluation
- model.load_state_dict(checkpoint)
- logging.info(
- f"=> loaded checkpoint '{args.resume}' (epoch {start_epoch})"
- )
- if args.freeze_text:
- print("Freeze Text!!!!")
- for k in text_freeze_parameters:
- k.requires_grad = False
- else:
- logging.info("=> no checkpoint found at '{}'".format(args.resume))
-
- cudnn.benchmark = True
- cudnn.deterministic = False
-
- # determine if this worker should save logs and checkpoints. only do so if it is rank == 0
- args.save_logs = args.logs and args.logs.lower() != "none" and is_master(args)
- writer = None
- if args.save_logs and args.tensorboard:
- assert tensorboard is not None, "Please install tensorboard."
- writer = tensorboard.SummaryWriter(args.tensorboard_path)
-
- if args.wandb and is_master(args):
- assert wandb is not None, "Please install wandb."
- logging.debug("Starting wandb.")
- args.train_sz = data["train"].dataloader.num_samples
- if args.val_data is not None:
- args.val_sz = data["val"].dataloader.num_samples
- # you will have to configure this for your project!
- wandb.init(
- project="clap",
- notes=args.wandb_notes,
- name=args.wandb_notes,
- tags=[],
- config=vars(args),
- )
- if args.debug:
- wandb.watch(model, log="all")
- wandb.save(params_file)
- logging.debug("Finished loading wandb.")
-
- if "train" not in data:
- evaluate(model, data, start_epoch, args, writer)
- return
- elif start_epoch == 0 and "val" in data and not args.no_eval:
- evaluate(model, data, 0, args, writer)
- if args.save_top_performance:
- current_top_k_ckpt_metrics = {
- i: 0 for i in range(args.save_top_performance)
- } # initialize the top-k metric for ckpts to 0
-
- for epoch in range(start_epoch, args.epochs):
-        # freeze the text parameters after (and including) epoch args.freeze_text_after; this is -1 by default
-        if epoch == args.freeze_text_after:
-            print("Text pretrained parameters are frozen from this epoch on.")
- for k in text_freeze_parameters:
- k.requires_grad = False
- if is_master(args):
- logging.info(f"Start epoch {epoch}")
-
- train_one_epoch(model, data, epoch, optimizer, scaler, scheduler, args, writer)
- completed_epoch = epoch + 1
-
- if (
- any(v in data for v in ("val", "imagenet-val", "imagenet-v2"))
- and not args.no_eval
- ):
- metrics = evaluate(model, data, completed_epoch, args, writer)
- if args.save_top_performance:
- top_k_dataset = args.top_k_checkpoint_select_dataset
- top_k_metric = args.top_k_checkpoint_select_metric
- filtered_metrics = [
- v
- for k, v in metrics.items()
- if top_k_metric in k and top_k_dataset in k
- ] # check all R@10 metrics (all dataset) and use it to update the ckpt
- # Saving checkpoints.
- if args.save_logs:
- opt_dict = {
- k + "_" + "optimizer": v.state_dict() for k, v in optimizer.items()
- }
- checkpoint_dict = {
- "epoch": completed_epoch,
- "name": args.name,
- "state_dict": model.state_dict(),
- }
- checkpoint_dict.update(opt_dict)
- if scaler is not None:
- checkpoint_dict["scaler"] = scaler.state_dict()
-
- if completed_epoch == args.epochs or (
- args.save_frequency > 0 and (completed_epoch % args.save_frequency) == 0
- ):
- torch.save(
- checkpoint_dict,
- os.path.join(args.checkpoint_path, f"epoch_{completed_epoch}.pt"),
- )
- if args.save_most_recent:
- torch.save(
- checkpoint_dict,
- os.path.join(args.checkpoint_path, f"epoch_latest.pt"),
- )
- if args.save_top_performance and not args.no_eval:
- update_top_k_performance(
- filtered_metrics,
- current_top_k_ckpt_metrics,
- args,
- checkpoint_dict,
- bignumbetter=True,
- )
-
- if args.wandb and is_master(args):
- wandb.finish()
-
-
-def copy_codebase(args):
- from shutil import copytree, ignore_patterns
-
- new_code_path = os.path.join(args.logs, args.name, "code")
- if os.path.exists(new_code_path):
- print(
- f"Error. Experiment already exists at {new_code_path}. Use --name to specify a new experiment."
- )
- return -1
- print(f"Copying codebase to {new_code_path}")
- current_code_path = os.path.realpath(__file__)
- for _ in range(3):
- current_code_path = os.path.dirname(current_code_path)
- copytree(
- current_code_path, new_code_path, ignore=ignore_patterns("log", "logs", "wandb")
- )
- print("Done copying code.")
- return 1
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/cap99/ocr/README.md b/spaces/cap99/ocr/README.md
deleted file mode 100644
index 2cb234befe22ab9c38d077ba391f4578d8f30bc2..0000000000000000000000000000000000000000
--- a/spaces/cap99/ocr/README.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Ocr
-emoji: 👓
-colorFrom: indigo
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-# Optical-Character-Recognition
-This app does OCR leveraging Hugging Face's Spaces.
- 😎
-
- ---
- I used GitHub Actions so that when I commit to this repo on GitHub, the same changes are made to the corresponding repo on HF.
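A minimal sketch of the sync described above, assuming a GitHub Actions workflow with a write-scoped Hugging Face token stored as a repository secret named `HF_TOKEN`; the username, Space name, and workflow path are placeholders, not values taken from this repo:

```yaml
# Hypothetical .github/workflows/sync-to-hf.yml; HF_USERNAME and the Space name are placeholders.
name: Sync to Hugging Face Space
on:
  push:
    branches: [main]

jobs:
  sync-to-hub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0   # push the full history so the Space remote accepts it
          lfs: true        # Spaces frequently track model/data files with Git LFS
      - name: Push to the Space
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}  # write-scoped token saved as a repo secret
        run: git push https://HF_USERNAME:$HF_TOKEN@huggingface.co/spaces/HF_USERNAME/ocr main
```

With a workflow like this, every push to `main` on GitHub re-deploys the same commit to the Space.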
diff --git a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/mel_processing.py b/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/mel_processing.py
deleted file mode 100644
index 3614150259809983e776d3fed83021decca06a9c..0000000000000000000000000000000000000000
--- a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/mel_processing.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import math
-import os
-import random
-import torch
-from torch import nn
-import torch.nn.functional as F
-import torch.utils.data
-import numpy as np
-import librosa
-import librosa.util as librosa_util
-from librosa.util import normalize, pad_center, tiny
-from scipy.signal import get_window
-from scipy.io.wavfile import read
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y.float(), n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
diff --git a/spaces/chendl/compositional_test/transformers/examples/README.md b/spaces/chendl/compositional_test/transformers/examples/README.md
deleted file mode 100644
index c1cddd2e4734573746e60f24295ec2c717cece58..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/README.md
+++ /dev/null
@@ -1,134 +0,0 @@
-
-
-# Examples
-
-We host a wide range of example scripts for multiple learning frameworks. Simply choose your favorite: [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch) or [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax).
-
-We also have some [research projects](https://github.com/huggingface/transformers/tree/main/examples/research_projects), as well as some [legacy examples](https://github.com/huggingface/transformers/tree/main/examples/legacy). Note that unlike the main examples these are not actively maintained, and may require specific older versions of dependencies in order to run.
-
-While we strive to present as many use cases as possible, the example scripts are just that - examples. It is expected that they won't work out of the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. To help you with that, most of the examples fully expose the preprocessing of the data, allowing you to tweak and edit them as required.
-
-Please discuss on the [forum](https://discuss.huggingface.co/) or in an [issue](https://github.com/huggingface/transformers/issues) a feature you would like to implement in an example before submitting a PR; we welcome bug fixes, but since we want to keep the examples as simple as possible it's unlikely that we will merge a pull request adding more functionality at the cost of readability.
-
-## Important note
-
-**Important**
-
-To make sure you can successfully run the latest versions of the example scripts, you have to **install the library from source** and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
-```bash
-git clone https://github.com/huggingface/transformers
-cd transformers
-pip install .
-```
-Then cd into the example folder of your choice and run
-```bash
-pip install -r requirements.txt
-```
-
-To browse the examples corresponding to released versions of 🤗 Transformers, click on the line below and then on your desired version of the library:
-
-
- Examples for older versions of 🤗 Transformers
-
-
-
-Alternatively, you can switch your cloned 🤗 Transformers to a specific version (for instance with v3.5.1) with
-```bash
-git checkout tags/v3.5.1
-```
-and run the example command as usual afterward.
-
-## Running the Examples on Remote Hardware with Auto-Setup
-
-[run_on_remote.py](./run_on_remote.py) is a script that launches any example on remote self-hosted hardware,
-with automatic hardware and environment setup. It uses [Runhouse](https://github.com/run-house/runhouse) to launch
-on self-hosted hardware (e.g. in your own cloud account or on-premise cluster) but there are other options
-for running remotely as well. You can easily customize the example used, command line arguments, dependencies,
-and type of compute hardware, and then run the script to automatically launch the example.
-
-You can refer to
-[hardware setup](https://runhouse-docs.readthedocs-hosted.com/en/main/rh_primitives/cluster.html#hardware-setup)
-for more information about hardware and dependency setup with Runhouse, or this
-[Colab tutorial](https://colab.research.google.com/drive/1sh_aNQzJX5BKAdNeXthTNGxKz7sM9VPc) for a more in-depth
-walkthrough.
-
-You can run the script with the following commands:
-
-```bash
-# First install runhouse:
-pip install runhouse
-
-# For an on-demand V100 with whichever cloud provider you have configured:
-python run_on_remote.py \
- --example pytorch/text-generation/run_generation.py \
- --model_type=gpt2 \
- --model_name_or_path=gpt2 \
- --prompt "I am a language model and"
-
-# For byo (bring your own) cluster:
-python run_on_remote.py --host <host> --user <user> --key_path <key_path> \
- --example <example>
-
-# For on-demand instances
-python run_on_remote.py --instance <instance> --provider <provider> \
- --example <example>
-```
-
-You can also adapt the script to your own needs.
\ No newline at end of file
diff --git a/spaces/chendl/compositional_test/transformers/examples/tensorflow/benchmarking/run_benchmark_tf.py b/spaces/chendl/compositional_test/transformers/examples/tensorflow/benchmarking/run_benchmark_tf.py
deleted file mode 100644
index 25aabc5f51c669b59b5843f98fe89dc9c7122204..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/tensorflow/benchmarking/run_benchmark_tf.py
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-# Copyright 2018 The HuggingFace Inc. team.
-# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Benchmarking the library on inference and training in TensorFlow"""
-
-from transformers import HfArgumentParser, TensorFlowBenchmark, TensorFlowBenchmarkArguments
-
-
-def main():
- parser = HfArgumentParser(TensorFlowBenchmarkArguments)
- benchmark_args = parser.parse_args_into_dataclasses()[0]
- benchmark = TensorFlowBenchmark(args=benchmark_args)
- try:
- benchmark_args = parser.parse_args_into_dataclasses()[0]
- except ValueError as e:
- arg_error_msg = "Arg --no_{0} is no longer used, please use --no-{0} instead."
- begin_error_msg = " ".join(str(e).split(" ")[:-1])
- full_error_msg = ""
- depreciated_args = eval(str(e).split(" ")[-1])
- wrong_args = []
- for arg in depreciated_args:
- # arg[2:] removes '--'
- if arg[2:] in TensorFlowBenchmark.deprecated_args:
- # arg[5:] removes '--no_'
- full_error_msg += arg_error_msg.format(arg[5:])
- else:
- wrong_args.append(arg)
- if len(wrong_args) > 0:
- full_error_msg = full_error_msg + begin_error_msg + str(wrong_args)
- raise ValueError(full_error_msg)
- benchmark.run()
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/voltLib/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/voltLib/__init__.py
deleted file mode 100644
index 886aa3a7864523656e609dd602683d73f985f467..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/voltLib/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-"""fontTools.voltLib -- a package for dealing with Visual OpenType Layout Tool
-(VOLT) files."""
-
-# See
-# http://www.microsoft.com/typography/VOLT.mspx
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-b92380ed.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-b92380ed.js
deleted file mode 100644
index d997847795d3fbddf54eaeb865438ae1b6d4bc4d..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-b92380ed.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as h,e as p,s as v,h as T,j as k,k as C,o as S,t as j,z as m,v as d,x as q,B as w,a9 as z,ab as B,ac as D,ad as E,F as u}from"./index-f877dfd5.js";import{T as F}from"./TabItem.svelte_svelte_type_style_lang-e019e79b.js";/* empty css */function A(s){let e;const i=s[4].default,t=z(i,s,s[8],null);return{c(){t&&t.c()},m(n,o){t&&t.m(n,o),e=!0},p(n,o){t&&t.p&&(!e||o&256)&&B(t,i,n,n[8],e?E(i,n[8],o,null):D(n[8]),null)},i(n){e||(m(t,n),e=!0)},o(n){d(t,n),e=!1},d(n){t&&t.d(n)}}}function G(s){let e,i,t;function n(l){s[5](l)}let o={visible:s[1],elem_id:s[2],elem_classes:s[3],$$slots:{default:[A]},$$scope:{ctx:s}};return s[0]!==void 0&&(o.selected=s[0]),e=new F({props:o}),T.push(()=>k(e,"selected",n)),e.$on("change",s[6]),e.$on("select",s[7]),{c(){C(e.$$.fragment)},m(l,c){S(e,l,c),t=!0},p(l,[c]){const _={};c&2&&(_.visible=l[1]),c&4&&(_.elem_id=l[2]),c&8&&(_.elem_classes=l[3]),c&256&&(_.$$scope={dirty:c,ctx:l}),!i&&c&1&&(i=!0,_.selected=l[0],j(()=>i=!1)),e.$set(_)},i(l){t||(m(e.$$.fragment,l),t=!0)},o(l){d(e.$$.fragment,l),t=!1},d(l){q(e,l)}}}function H(s,e,i){let{$$slots:t={},$$scope:n}=e;const o=w();let{visible:l=!0}=e,{elem_id:c=""}=e,{elem_classes:_=[]}=e,{selected:f}=e;function r(a){f=a,i(0,f)}function b(a){u.call(this,s,a)}function g(a){u.call(this,s,a)}return s.$$set=a=>{"visible"in a&&i(1,l=a.visible),"elem_id"in a&&i(2,c=a.elem_id),"elem_classes"in a&&i(3,_=a.elem_classes),"selected"in a&&i(0,f=a.selected),"$$scope"in a&&i(8,n=a.$$scope)},s.$$.update=()=>{s.$$.dirty&1&&o("prop_change",{selected:f})},[f,l,c,_,t,r,b,g,n]}class I extends h{constructor(e){super(),p(this,e,H,G,v,{visible:1,elem_id:2,elem_classes:3,selected:0})}}const M=I,N=["static"];export{M as Component,N as modes};
-//# sourceMappingURL=index-b92380ed.js.map
diff --git a/spaces/cihyFjudo/fairness-paper-search/Fikus Visualcam 161 Multilanguage A Comparison with Other CADCAM Software in the Market.md b/spaces/cihyFjudo/fairness-paper-search/Fikus Visualcam 161 Multilanguage A Comparison with Other CADCAM Software in the Market.md
deleted file mode 100644
index eeffdc3726e92ee100dca080dcea41c41c9f2cf6..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Fikus Visualcam 161 Multilanguage A Comparison with Other CADCAM Software in the Market.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Winning Eleven 4 Iso Psx Tips and Tricks for Mastering the Game.md b/spaces/cihyFjudo/fairness-paper-search/Winning Eleven 4 Iso Psx Tips and Tricks for Mastering the Game.md
deleted file mode 100644
index 82499b059602758c8122072635ca0dc246f7635e..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Winning Eleven 4 Iso Psx Tips and Tricks for Mastering the Game.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/utils/theme.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/utils/theme.py
deleted file mode 100644
index 10dc6fa8a81646ed7e9fa8d6be4e1634ec14e7d8..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/utils/theme.py
+++ /dev/null
@@ -1,10 +0,0 @@
-"""Utilities for registering and working with themes"""
-
-from .plugin_registry import PluginRegistry
-from typing import Callable
-
-ThemeType = Callable[..., dict]
-
-
-class ThemeRegistry(PluginRegistry[ThemeType]):
- pass
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/charset_normalizer/models.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/charset_normalizer/models.py
deleted file mode 100644
index 7f8ca389050cd4bac7fd23d84e399a242d35d309..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/charset_normalizer/models.py
+++ /dev/null
@@ -1,337 +0,0 @@
-from encodings.aliases import aliases
-from hashlib import sha256
-from json import dumps
-from typing import Any, Dict, Iterator, List, Optional, Tuple, Union
-
-from .constant import TOO_BIG_SEQUENCE
-from .utils import iana_name, is_multi_byte_encoding, unicode_range
-
-
-class CharsetMatch:
- def __init__(
- self,
- payload: bytes,
- guessed_encoding: str,
- mean_mess_ratio: float,
- has_sig_or_bom: bool,
- languages: "CoherenceMatches",
- decoded_payload: Optional[str] = None,
- ):
- self._payload: bytes = payload
-
- self._encoding: str = guessed_encoding
- self._mean_mess_ratio: float = mean_mess_ratio
- self._languages: CoherenceMatches = languages
- self._has_sig_or_bom: bool = has_sig_or_bom
- self._unicode_ranges: Optional[List[str]] = None
-
- self._leaves: List[CharsetMatch] = []
- self._mean_coherence_ratio: float = 0.0
-
- self._output_payload: Optional[bytes] = None
- self._output_encoding: Optional[str] = None
-
- self._string: Optional[str] = decoded_payload
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, CharsetMatch):
- raise TypeError(
- "__eq__ cannot be invoked on {} and {}.".format(
- str(other.__class__), str(self.__class__)
- )
- )
- return self.encoding == other.encoding and self.fingerprint == other.fingerprint
-
- def __lt__(self, other: object) -> bool:
- """
- Implemented to make sorted available upon CharsetMatches items.
- """
- if not isinstance(other, CharsetMatch):
- raise ValueError
-
- chaos_difference: float = abs(self.chaos - other.chaos)
- coherence_difference: float = abs(self.coherence - other.coherence)
-
- # Below 1% difference --> Use Coherence
- if chaos_difference < 0.01 and coherence_difference > 0.02:
- # When having a tough decision, use the result that decoded as many multi-byte as possible.
- if chaos_difference == 0.0 and self.coherence == other.coherence:
- return self.multi_byte_usage > other.multi_byte_usage
- return self.coherence > other.coherence
-
- return self.chaos < other.chaos
-
- @property
- def multi_byte_usage(self) -> float:
- return 1.0 - len(str(self)) / len(self.raw)
-
- def __str__(self) -> str:
- # Lazy Str Loading
- if self._string is None:
- self._string = str(self._payload, self._encoding, "strict")
- return self._string
-
- def __repr__(self) -> str:
- return "".format(self.encoding, self.fingerprint)
-
- def add_submatch(self, other: "CharsetMatch") -> None:
- if not isinstance(other, CharsetMatch) or other == self:
- raise ValueError(
- "Unable to add instance <{}> as a submatch of a CharsetMatch".format(
- other.__class__
- )
- )
-
- other._string = None # Unload RAM usage; dirty trick.
- self._leaves.append(other)
-
- @property
- def encoding(self) -> str:
- return self._encoding
-
- @property
- def encoding_aliases(self) -> List[str]:
- """
- Encoding name are known by many name, using this could help when searching for IBM855 when it's listed as CP855.
- """
- also_known_as: List[str] = []
- for u, p in aliases.items():
- if self.encoding == u:
- also_known_as.append(p)
- elif self.encoding == p:
- also_known_as.append(u)
- return also_known_as
-
- @property
- def bom(self) -> bool:
- return self._has_sig_or_bom
-
- @property
- def byte_order_mark(self) -> bool:
- return self._has_sig_or_bom
-
- @property
- def languages(self) -> List[str]:
- """
- Return the complete list of possible languages found in decoded sequence.
- Usually not really useful. Returned list may be empty even if 'language' property return something != 'Unknown'.
- """
- return [e[0] for e in self._languages]
-
- @property
- def language(self) -> str:
- """
- Most probable language found in decoded sequence. If none were detected or inferred, the property will return
- "Unknown".
- """
- if not self._languages:
- # Trying to infer the language based on the given encoding
- # Its either English or we should not pronounce ourselves in certain cases.
- if "ascii" in self.could_be_from_charset:
- return "English"
-
- # doing it there to avoid circular import
- from charset_normalizer.cd import encoding_languages, mb_encoding_languages
-
- languages = (
- mb_encoding_languages(self.encoding)
- if is_multi_byte_encoding(self.encoding)
- else encoding_languages(self.encoding)
- )
-
- if len(languages) == 0 or "Latin Based" in languages:
- return "Unknown"
-
- return languages[0]
-
- return self._languages[0][0]
-
- @property
- def chaos(self) -> float:
- return self._mean_mess_ratio
-
- @property
- def coherence(self) -> float:
- if not self._languages:
- return 0.0
- return self._languages[0][1]
-
- @property
- def percent_chaos(self) -> float:
- return round(self.chaos * 100, ndigits=3)
-
- @property
- def percent_coherence(self) -> float:
- return round(self.coherence * 100, ndigits=3)
-
- @property
- def raw(self) -> bytes:
- """
- Original untouched bytes.
- """
- return self._payload
-
- @property
- def submatch(self) -> List["CharsetMatch"]:
- return self._leaves
-
- @property
- def has_submatch(self) -> bool:
- return len(self._leaves) > 0
-
- @property
- def alphabets(self) -> List[str]:
- if self._unicode_ranges is not None:
- return self._unicode_ranges
- # list detected ranges
- detected_ranges: List[Optional[str]] = [
- unicode_range(char) for char in str(self)
- ]
- # filter and sort
- self._unicode_ranges = sorted(list({r for r in detected_ranges if r}))
- return self._unicode_ranges
-
- @property
- def could_be_from_charset(self) -> List[str]:
- """
- The complete list of encoding that output the exact SAME str result and therefore could be the originating
- encoding.
- This list does include the encoding available in property 'encoding'.
- """
- return [self._encoding] + [m.encoding for m in self._leaves]
-
- def output(self, encoding: str = "utf_8") -> bytes:
- """
- Method to get re-encoded bytes payload using given target encoding. Default to UTF-8.
- Any errors will be simply ignored by the encoder NOT replaced.
- """
- if self._output_encoding is None or self._output_encoding != encoding:
- self._output_encoding = encoding
- self._output_payload = str(self).encode(encoding, "replace")
-
- return self._output_payload # type: ignore
-
- @property
- def fingerprint(self) -> str:
- """
- Retrieve the unique SHA256 computed using the transformed (re-encoded) payload. Not the original one.
- """
- return sha256(self.output()).hexdigest()
-
-
-class CharsetMatches:
- """
- Container with every CharsetMatch items ordered by default from most probable to the less one.
- Act like a list(iterable) but does not implements all related methods.
- """
-
- def __init__(self, results: Optional[List[CharsetMatch]] = None):
- self._results: List[CharsetMatch] = sorted(results) if results else []
-
- def __iter__(self) -> Iterator[CharsetMatch]:
- yield from self._results
-
- def __getitem__(self, item: Union[int, str]) -> CharsetMatch:
- """
- Retrieve a single item either by its position or encoding name (alias may be used here).
- Raise KeyError upon invalid index or encoding not present in results.
- """
- if isinstance(item, int):
- return self._results[item]
- if isinstance(item, str):
- item = iana_name(item, False)
- for result in self._results:
- if item in result.could_be_from_charset:
- return result
- raise KeyError
-
- def __len__(self) -> int:
- return len(self._results)
-
- def __bool__(self) -> bool:
- return len(self._results) > 0
-
- def append(self, item: CharsetMatch) -> None:
- """
- Insert a single match. Will be inserted accordingly to preserve sort.
- Can be inserted as a submatch.
- """
- if not isinstance(item, CharsetMatch):
- raise ValueError(
- "Cannot append instance '{}' to CharsetMatches".format(
- str(item.__class__)
- )
- )
- # We should disable the submatch factoring when the input file is too heavy (conserve RAM usage)
- if len(item.raw) <= TOO_BIG_SEQUENCE:
- for match in self._results:
- if match.fingerprint == item.fingerprint and match.chaos == item.chaos:
- match.add_submatch(item)
- return
- self._results.append(item)
- self._results = sorted(self._results)
-
- def best(self) -> Optional["CharsetMatch"]:
- """
- Simply return the first match. Strict equivalent to matches[0].
- """
- if not self._results:
- return None
- return self._results[0]
-
- def first(self) -> Optional["CharsetMatch"]:
- """
- Redundant method, call the method best(). Kept for BC reasons.
- """
- return self.best()
-
-
-CoherenceMatch = Tuple[str, float]
-CoherenceMatches = List[CoherenceMatch]
-
-
-class CliDetectionResult:
- def __init__(
- self,
- path: str,
- encoding: Optional[str],
- encoding_aliases: List[str],
- alternative_encodings: List[str],
- language: str,
- alphabets: List[str],
- has_sig_or_bom: bool,
- chaos: float,
- coherence: float,
- unicode_path: Optional[str],
- is_preferred: bool,
- ):
- self.path: str = path
- self.unicode_path: Optional[str] = unicode_path
- self.encoding: Optional[str] = encoding
- self.encoding_aliases: List[str] = encoding_aliases
- self.alternative_encodings: List[str] = alternative_encodings
- self.language: str = language
- self.alphabets: List[str] = alphabets
- self.has_sig_or_bom: bool = has_sig_or_bom
- self.chaos: float = chaos
- self.coherence: float = coherence
- self.is_preferred: bool = is_preferred
-
- @property
- def __dict__(self) -> Dict[str, Any]: # type: ignore
- return {
- "path": self.path,
- "encoding": self.encoding,
- "encoding_aliases": self.encoding_aliases,
- "alternative_encodings": self.alternative_encodings,
- "language": self.language,
- "alphabets": self.alphabets,
- "has_sig_or_bom": self.has_sig_or_bom,
- "chaos": self.chaos,
- "coherence": self.coherence,
- "unicode_path": self.unicode_path,
- "is_preferred": self.is_preferred,
- }
-
- def to_json(self) -> str:
- return dumps(self.__dict__, ensure_ascii=True, indent=4)
diff --git a/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h b/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h
deleted file mode 100644
index b2b88e8c46f19b6db0933163e57ccdb51180f517..0000000000000000000000000000000000000000
--- a/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h
+++ /dev/null
@@ -1,35 +0,0 @@
-/*!
-**************************************************************************************************
-* Deformable DETR
-* Copyright (c) 2020 SenseTime. All Rights Reserved.
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-#pragma once
-#include <torch/extension.h>
-
-namespace groundingdino {
-
-at::Tensor
-ms_deform_attn_cpu_forward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const int im2col_step);
-
-std::vector<at::Tensor>
-ms_deform_attn_cpu_backward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const at::Tensor &grad_output,
- const int im2col_step);
-
-} // namespace groundingdino
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/aacpsdsp_init_arm.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/aacpsdsp_init_arm.c
deleted file mode 100644
index 6eb979ed1d9c473a4837c7d2ab1a8121fef030dd..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/aacpsdsp_init_arm.c
+++ /dev/null
@@ -1,57 +0,0 @@
-/*
- * Copyright (c) 2012 Mans Rullgard
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "config.h"
-
-#include "libavutil/arm/cpu.h"
-#include "libavutil/attributes.h"
-#include "libavcodec/aacpsdsp.h"
-
-void ff_ps_add_squares_neon(float *dst, const float (*src)[2], int n);
-void ff_ps_mul_pair_single_neon(float (*dst)[2], float (*src0)[2],
- float *src1, int n);
-void ff_ps_hybrid_analysis_neon(float (*out)[2], float (*in)[2],
- const float (*filter)[8][2],
- ptrdiff_t stride, int n);
-void ff_ps_hybrid_analysis_ileave_neon(float (*out)[32][2], float L[2][38][64],
- int i, int len);
-void ff_ps_hybrid_synthesis_deint_neon(float out[2][38][64], float (*in)[32][2],
- int i, int len);
-void ff_ps_decorrelate_neon(float (*out)[2], float (*delay)[2],
- float (*ap_delay)[PS_QMF_TIME_SLOTS+PS_MAX_AP_DELAY][2],
- const float phi_fract[2], float (*Q_fract)[2],
- const float *transient_gain, float g_decay_slope,
- int len);
-void ff_ps_stereo_interpolate_neon(float (*l)[2], float (*r)[2],
- float h[2][4], float h_step[2][4],
- int len);
-
-av_cold void ff_psdsp_init_arm(PSDSPContext *s)
-{
- int cpu_flags = av_get_cpu_flags();
-
- if (have_neon(cpu_flags)) {
- s->add_squares = ff_ps_add_squares_neon;
- s->mul_pair_single = ff_ps_mul_pair_single_neon;
- s->hybrid_synthesis_deint = ff_ps_hybrid_synthesis_deint_neon;
- s->hybrid_analysis = ff_ps_hybrid_analysis_neon;
- s->stereo_interpolate[0] = ff_ps_stereo_interpolate_neon;
- }
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/audiotoolboxenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/audiotoolboxenc.c
deleted file mode 100644
index 1ccfda4d207343e262e0b7885ec5c0a15ebb004a..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/audiotoolboxenc.c
+++ /dev/null
@@ -1,679 +0,0 @@
-/*
- * Audio Toolbox system codecs
- *
- * copyright (c) 2016 rcombs
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <AudioToolbox/AudioToolbox.h>
-
-#define FF_BUFQUEUE_SIZE 256
-#include "libavfilter/bufferqueue.h"
-
-#include "config.h"
-#include "audio_frame_queue.h"
-#include "avcodec.h"
-#include "bytestream.h"
-#include "codec_internal.h"
-#include "encode.h"
-#include "internal.h"
-#include "libavformat/isom.h"
-#include "libavutil/avassert.h"
-#include "libavutil/channel_layout.h"
-#include "libavutil/opt.h"
-#include "libavutil/log.h"
-
-typedef struct ATDecodeContext {
- AVClass *av_class;
- int mode;
- int quality;
-
- AudioConverterRef converter;
- struct FFBufQueue frame_queue;
- struct FFBufQueue used_frame_queue;
-
- unsigned pkt_size;
- AudioFrameQueue afq;
- int eof;
- int frame_size;
-
- AVFrame* encoding_frame;
-} ATDecodeContext;
-
-static UInt32 ffat_get_format_id(enum AVCodecID codec, int profile)
-{
- switch (codec) {
- case AV_CODEC_ID_AAC:
- switch (profile) {
- case FF_PROFILE_AAC_LOW:
- default:
- return kAudioFormatMPEG4AAC;
- case FF_PROFILE_AAC_HE:
- return kAudioFormatMPEG4AAC_HE;
- case FF_PROFILE_AAC_HE_V2:
- return kAudioFormatMPEG4AAC_HE_V2;
- case FF_PROFILE_AAC_LD:
- return kAudioFormatMPEG4AAC_LD;
- case FF_PROFILE_AAC_ELD:
- return kAudioFormatMPEG4AAC_ELD;
- }
- case AV_CODEC_ID_ADPCM_IMA_QT:
- return kAudioFormatAppleIMA4;
- case AV_CODEC_ID_ALAC:
- return kAudioFormatAppleLossless;
- case AV_CODEC_ID_ILBC:
- return kAudioFormatiLBC;
- case AV_CODEC_ID_PCM_ALAW:
- return kAudioFormatALaw;
- case AV_CODEC_ID_PCM_MULAW:
- return kAudioFormatULaw;
- default:
- av_assert0(!"Invalid codec ID!");
- return 0;
- }
-}
-
-static void ffat_update_ctx(AVCodecContext *avctx)
-{
- ATDecodeContext *at = avctx->priv_data;
- UInt32 size = sizeof(unsigned);
- AudioConverterPrimeInfo prime_info;
- AudioStreamBasicDescription out_format;
-
- AudioConverterGetProperty(at->converter,
- kAudioConverterPropertyMaximumOutputPacketSize,
- &size, &at->pkt_size);
-
- if (at->pkt_size <= 0)
- at->pkt_size = 1024 * 50;
-
- size = sizeof(prime_info);
-
- if (!AudioConverterGetProperty(at->converter,
- kAudioConverterPrimeInfo,
- &size, &prime_info)) {
- avctx->initial_padding = prime_info.leadingFrames;
- }
-
- size = sizeof(out_format);
- if (!AudioConverterGetProperty(at->converter,
- kAudioConverterCurrentOutputStreamDescription,
- &size, &out_format)) {
- if (out_format.mFramesPerPacket)
- avctx->frame_size = out_format.mFramesPerPacket;
- if (out_format.mBytesPerPacket && avctx->codec_id == AV_CODEC_ID_ILBC)
- avctx->block_align = out_format.mBytesPerPacket;
- }
-
- at->frame_size = avctx->frame_size;
- if (avctx->codec_id == AV_CODEC_ID_PCM_MULAW ||
- avctx->codec_id == AV_CODEC_ID_PCM_ALAW) {
- at->pkt_size *= 1024;
- avctx->frame_size *= 1024;
- }
-}
-
-static int read_descr(GetByteContext *gb, int *tag)
-{
- int len = 0;
- int count = 4;
- *tag = bytestream2_get_byte(gb);
- while (count--) {
- int c = bytestream2_get_byte(gb);
- len = (len << 7) | (c & 0x7f);
- if (!(c & 0x80))
- break;
- }
- return len;
-}
-
-static int get_ilbc_mode(AVCodecContext *avctx)
-{
- if (avctx->block_align == 38)
- return 20;
- else if (avctx->block_align == 50)
- return 30;
- else if (avctx->bit_rate > 0)
- return avctx->bit_rate <= 14000 ? 30 : 20;
- else
- return 30;
-}
-
-static av_cold int get_channel_label(int channel)
-{
- uint64_t map = 1 << channel;
- if (map <= AV_CH_LOW_FREQUENCY)
- return channel + 1;
- else if (map <= AV_CH_BACK_RIGHT)
- return channel + 29;
- else if (map <= AV_CH_BACK_CENTER)
- return channel - 1;
- else if (map <= AV_CH_SIDE_RIGHT)
- return channel - 4;
- else if (map <= AV_CH_TOP_BACK_RIGHT)
- return channel + 1;
- else if (map <= AV_CH_STEREO_RIGHT)
- return -1;
- else if (map <= AV_CH_WIDE_RIGHT)
- return channel + 4;
- else if (map <= AV_CH_SURROUND_DIRECT_RIGHT)
- return channel - 23;
- else if (map == AV_CH_LOW_FREQUENCY_2)
- return kAudioChannelLabel_LFE2;
- else
- return -1;
-}
-
-static int remap_layout(AudioChannelLayout *layout, const AVChannelLayout *in_layout)
-{
- int i;
- layout->mChannelLayoutTag = kAudioChannelLayoutTag_UseChannelDescriptions;
- layout->mNumberChannelDescriptions = in_layout->nb_channels;
- for (i = 0; i < in_layout->nb_channels; i++) {
- int c, label;
-
- c = av_channel_layout_channel_from_index(in_layout, i);
- if (c < 0 || c >= 64)
- return AVERROR(EINVAL);
- label = get_channel_label(c);
- layout->mChannelDescriptions[i].mChannelLabel = label;
- if (label < 0)
- return AVERROR(EINVAL);
- c++;
- }
- return 0;
-}
-
-static int get_aac_tag(const AVChannelLayout *in_layout)
-{
- static const struct {
- AVChannelLayout chl;
- int tag;
- } map[] = {
- { AV_CHANNEL_LAYOUT_MONO, kAudioChannelLayoutTag_Mono },
- { AV_CHANNEL_LAYOUT_STEREO, kAudioChannelLayoutTag_Stereo },
- { AV_CHANNEL_LAYOUT_QUAD, kAudioChannelLayoutTag_AAC_Quadraphonic },
- { AV_CHANNEL_LAYOUT_OCTAGONAL, kAudioChannelLayoutTag_AAC_Octagonal },
- { AV_CHANNEL_LAYOUT_SURROUND, kAudioChannelLayoutTag_AAC_3_0 },
- { AV_CHANNEL_LAYOUT_4POINT0, kAudioChannelLayoutTag_AAC_4_0 },
- { AV_CHANNEL_LAYOUT_5POINT0, kAudioChannelLayoutTag_AAC_5_0 },
- { AV_CHANNEL_LAYOUT_5POINT1, kAudioChannelLayoutTag_AAC_5_1 },
- { AV_CHANNEL_LAYOUT_6POINT0, kAudioChannelLayoutTag_AAC_6_0 },
- { AV_CHANNEL_LAYOUT_6POINT1, kAudioChannelLayoutTag_AAC_6_1 },
- { AV_CHANNEL_LAYOUT_7POINT0, kAudioChannelLayoutTag_AAC_7_0 },
- { AV_CHANNEL_LAYOUT_7POINT1_WIDE_BACK, kAudioChannelLayoutTag_AAC_7_1 },
- { AV_CHANNEL_LAYOUT_7POINT1, kAudioChannelLayoutTag_MPEG_7_1_C },
- };
- int i;
-
- for (i = 0; i < FF_ARRAY_ELEMS(map); i++)
- if (!av_channel_layout_compare(in_layout, &map[i].chl))
- return map[i].tag;
-
- return 0;
-}
-
-static av_cold int ffat_init_encoder(AVCodecContext *avctx)
-{
- ATDecodeContext *at = avctx->priv_data;
- OSStatus status;
-
- AudioStreamBasicDescription in_format = {
- .mSampleRate = avctx->sample_rate,
- .mFormatID = kAudioFormatLinearPCM,
- .mFormatFlags = ((avctx->sample_fmt == AV_SAMPLE_FMT_FLT ||
- avctx->sample_fmt == AV_SAMPLE_FMT_DBL) ? kAudioFormatFlagIsFloat
- : avctx->sample_fmt == AV_SAMPLE_FMT_U8 ? 0
- : kAudioFormatFlagIsSignedInteger)
- | kAudioFormatFlagIsPacked,
- .mBytesPerPacket = av_get_bytes_per_sample(avctx->sample_fmt) * avctx->ch_layout.nb_channels,
- .mFramesPerPacket = 1,
- .mBytesPerFrame = av_get_bytes_per_sample(avctx->sample_fmt) * avctx->ch_layout.nb_channels,
- .mChannelsPerFrame = avctx->ch_layout.nb_channels,
- .mBitsPerChannel = av_get_bytes_per_sample(avctx->sample_fmt) * 8,
- };
- AudioStreamBasicDescription out_format = {
- .mSampleRate = avctx->sample_rate,
- .mFormatID = ffat_get_format_id(avctx->codec_id, avctx->profile),
- .mChannelsPerFrame = in_format.mChannelsPerFrame,
- };
- UInt32 layout_size = sizeof(AudioChannelLayout) +
- sizeof(AudioChannelDescription) * avctx->ch_layout.nb_channels;
- AudioChannelLayout *channel_layout = av_malloc(layout_size);
-
- if (!channel_layout)
- return AVERROR(ENOMEM);
-
- if (avctx->codec_id == AV_CODEC_ID_ILBC) {
- int mode = get_ilbc_mode(avctx);
- out_format.mFramesPerPacket = 8000 * mode / 1000;
- out_format.mBytesPerPacket = (mode == 20 ? 38 : 50);
- }
-
- status = AudioConverterNew(&in_format, &out_format, &at->converter);
-
- if (status != 0) {
- av_log(avctx, AV_LOG_ERROR, "AudioToolbox init error: %i\n", (int)status);
- av_free(channel_layout);
- return AVERROR_UNKNOWN;
- }
-
- if (avctx->ch_layout.order == AV_CHANNEL_ORDER_UNSPEC)
- av_channel_layout_default(&avctx->ch_layout, avctx->ch_layout.nb_channels);
-
- if ((status = remap_layout(channel_layout, &avctx->ch_layout)) < 0) {
- av_log(avctx, AV_LOG_ERROR, "Invalid channel layout\n");
- av_free(channel_layout);
- return status;
- }
-
- if (AudioConverterSetProperty(at->converter, kAudioConverterInputChannelLayout,
- layout_size, channel_layout)) {
- av_log(avctx, AV_LOG_ERROR, "Unsupported input channel layout\n");
- av_free(channel_layout);
- return AVERROR(EINVAL);
- }
- if (avctx->codec_id == AV_CODEC_ID_AAC) {
- int tag = get_aac_tag(&avctx->ch_layout);
- if (tag) {
- channel_layout->mChannelLayoutTag = tag;
- channel_layout->mNumberChannelDescriptions = 0;
- }
- }
- if (AudioConverterSetProperty(at->converter, kAudioConverterOutputChannelLayout,
- layout_size, channel_layout)) {
- av_log(avctx, AV_LOG_ERROR, "Unsupported output channel layout\n");
- av_free(channel_layout);
- return AVERROR(EINVAL);
- }
- av_free(channel_layout);
-
- if (avctx->bits_per_raw_sample)
- AudioConverterSetProperty(at->converter,
- kAudioConverterPropertyBitDepthHint,
- sizeof(avctx->bits_per_raw_sample),
- &avctx->bits_per_raw_sample);
-
-#if !TARGET_OS_IPHONE
- if (at->mode == -1)
- at->mode = (avctx->flags & AV_CODEC_FLAG_QSCALE) ?
- kAudioCodecBitRateControlMode_Variable :
- kAudioCodecBitRateControlMode_Constant;
-
- AudioConverterSetProperty(at->converter, kAudioCodecPropertyBitRateControlMode,
- sizeof(at->mode), &at->mode);
-
- if (at->mode == kAudioCodecBitRateControlMode_Variable) {
- int q = avctx->global_quality / FF_QP2LAMBDA;
- if (q < 0 || q > 14) {
- av_log(avctx, AV_LOG_WARNING,
- "VBR quality %d out of range, should be 0-14\n", q);
- q = av_clip(q, 0, 14);
- }
- q = 127 - q * 9;
- AudioConverterSetProperty(at->converter, kAudioCodecPropertySoundQualityForVBR,
- sizeof(q), &q);
- } else
-#endif
- if (avctx->bit_rate > 0) {
- UInt32 rate = avctx->bit_rate;
- UInt32 size;
- status = AudioConverterGetPropertyInfo(at->converter,
- kAudioConverterApplicableEncodeBitRates,
- &size, NULL);
- if (!status && size) {
- UInt32 new_rate = rate;
- int count;
- int i;
- AudioValueRange *ranges = av_malloc(size);
- if (!ranges)
- return AVERROR(ENOMEM);
- AudioConverterGetProperty(at->converter,
- kAudioConverterApplicableEncodeBitRates,
- &size, ranges);
- count = size / sizeof(AudioValueRange);
- for (i = 0; i < count; i++) {
- AudioValueRange *range = &ranges[i];
- if (rate >= range->mMinimum && rate <= range->mMaximum) {
- new_rate = rate;
- break;
- } else if (rate > range->mMaximum) {
- new_rate = range->mMaximum;
- } else {
- new_rate = range->mMinimum;
- break;
- }
- }
- if (new_rate != rate) {
- av_log(avctx, AV_LOG_WARNING,
- "Bitrate %u not allowed; changing to %u\n", rate, new_rate);
- rate = new_rate;
- }
- av_free(ranges);
- }
- AudioConverterSetProperty(at->converter, kAudioConverterEncodeBitRate,
- sizeof(rate), &rate);
- }
-
- at->quality = 96 - at->quality * 32;
- AudioConverterSetProperty(at->converter, kAudioConverterCodecQuality,
- sizeof(at->quality), &at->quality);
-
- if (!AudioConverterGetPropertyInfo(at->converter, kAudioConverterCompressionMagicCookie,
- &avctx->extradata_size, NULL) &&
- avctx->extradata_size) {
- int extradata_size = avctx->extradata_size;
- uint8_t *extradata;
- if (!(avctx->extradata = av_mallocz(avctx->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE)))
- return AVERROR(ENOMEM);
- if (avctx->codec_id == AV_CODEC_ID_ALAC) {
- avctx->extradata_size = 0x24;
- AV_WB32(avctx->extradata, 0x24);
- AV_WB32(avctx->extradata + 4, MKBETAG('a','l','a','c'));
- extradata = avctx->extradata + 12;
- avctx->extradata_size = 0x24;
- } else {
- extradata = avctx->extradata;
- }
- status = AudioConverterGetProperty(at->converter,
- kAudioConverterCompressionMagicCookie,
- &extradata_size, extradata);
- if (status != 0) {
- av_log(avctx, AV_LOG_ERROR, "AudioToolbox cookie error: %i\n", (int)status);
- return AVERROR_UNKNOWN;
- } else if (avctx->codec_id == AV_CODEC_ID_AAC) {
- GetByteContext gb;
- int tag, len;
- bytestream2_init(&gb, extradata, extradata_size);
- do {
- len = read_descr(&gb, &tag);
- if (tag == MP4DecConfigDescrTag) {
- bytestream2_skip(&gb, 13);
- len = read_descr(&gb, &tag);
- if (tag == MP4DecSpecificDescrTag) {
- len = FFMIN(gb.buffer_end - gb.buffer, len);
- memmove(extradata, gb.buffer, len);
- avctx->extradata_size = len;
- break;
- }
- } else if (tag == MP4ESDescrTag) {
- int flags;
- bytestream2_skip(&gb, 2);
- flags = bytestream2_get_byte(&gb);
- if (flags & 0x80) //streamDependenceFlag
- bytestream2_skip(&gb, 2);
- if (flags & 0x40) //URL_Flag
- bytestream2_skip(&gb, bytestream2_get_byte(&gb));
- if (flags & 0x20) //OCRstreamFlag
- bytestream2_skip(&gb, 2);
- }
- } while (bytestream2_get_bytes_left(&gb));
- } else if (avctx->codec_id != AV_CODEC_ID_ALAC) {
- avctx->extradata_size = extradata_size;
- }
- }
-
- ffat_update_ctx(avctx);
-
-#if !TARGET_OS_IPHONE && defined(__MAC_10_9)
- if (at->mode == kAudioCodecBitRateControlMode_Variable && avctx->rc_max_rate) {
- UInt32 max_size = avctx->rc_max_rate * avctx->frame_size / avctx->sample_rate;
- if (max_size)
- AudioConverterSetProperty(at->converter, kAudioCodecPropertyPacketSizeLimitForVBR,
- sizeof(max_size), &max_size);
- }
-#endif
-
- ff_af_queue_init(avctx, &at->afq);
-
- at->encoding_frame = av_frame_alloc();
- if (!at->encoding_frame)
- return AVERROR(ENOMEM);
-
- return 0;
-}
-
-static OSStatus ffat_encode_callback(AudioConverterRef converter, UInt32 *nb_packets,
- AudioBufferList *data,
- AudioStreamPacketDescription **packets,
- void *inctx)
-{
- AVCodecContext *avctx = inctx;
- ATDecodeContext *at = avctx->priv_data;
- AVFrame *frame;
- int ret;
-
- if (!at->frame_queue.available) {
- if (at->eof) {
- *nb_packets = 0;
- return 0;
- } else {
- *nb_packets = 0;
- return 1;
- }
- }
-
- frame = ff_bufqueue_get(&at->frame_queue);
-
- data->mNumberBuffers = 1;
- data->mBuffers[0].mNumberChannels = avctx->ch_layout.nb_channels;
- data->mBuffers[0].mDataByteSize = frame->nb_samples *
- av_get_bytes_per_sample(avctx->sample_fmt) *
- avctx->ch_layout.nb_channels;
- data->mBuffers[0].mData = frame->data[0];
- if (*nb_packets > frame->nb_samples)
- *nb_packets = frame->nb_samples;
-
- av_frame_unref(at->encoding_frame);
- ret = av_frame_ref(at->encoding_frame, frame);
- if (ret < 0) {
- *nb_packets = 0;
- return ret;
- }
-
- ff_bufqueue_add(avctx, &at->used_frame_queue, frame);
-
- return 0;
-}
-
-static int ffat_encode(AVCodecContext *avctx, AVPacket *avpkt,
- const AVFrame *frame, int *got_packet_ptr)
-{
- ATDecodeContext *at = avctx->priv_data;
- OSStatus ret;
-
- AudioBufferList out_buffers = {
- .mNumberBuffers = 1,
- .mBuffers = {
- {
- .mNumberChannels = avctx->ch_layout.nb_channels,
- .mDataByteSize = at->pkt_size,
- }
- }
- };
- AudioStreamPacketDescription out_pkt_desc = {0};
-
- if (frame) {
- AVFrame *in_frame;
-
- if (ff_bufqueue_is_full(&at->frame_queue)) {
- /*
- * The frame queue is significantly larger than needed in practice,
- * but no clear way to determine the minimum number of samples to
- * get output from AudioConverterFillComplexBuffer().
- */
- av_log(avctx, AV_LOG_ERROR, "Bug: frame queue is too small.\n");
- return AVERROR_BUG;
- }
-
- if ((ret = ff_af_queue_add(&at->afq, frame)) < 0)
- return ret;
-
- in_frame = av_frame_clone(frame);
- if (!in_frame)
- return AVERROR(ENOMEM);
-
- ff_bufqueue_add(avctx, &at->frame_queue, in_frame);
- } else {
- at->eof = 1;
- }
-
- if ((ret = ff_alloc_packet(avctx, avpkt, at->pkt_size)) < 0)
- return ret;
-
-
- out_buffers.mBuffers[0].mData = avpkt->data;
-
- *got_packet_ptr = avctx->frame_size / at->frame_size;
-
- ret = AudioConverterFillComplexBuffer(at->converter, ffat_encode_callback, avctx,
- got_packet_ptr, &out_buffers,
- (avctx->frame_size > at->frame_size) ? NULL : &out_pkt_desc);
-
- ff_bufqueue_discard_all(&at->used_frame_queue);
-
- if ((!ret || ret == 1) && *got_packet_ptr) {
- avpkt->size = out_buffers.mBuffers[0].mDataByteSize;
- ff_af_queue_remove(&at->afq, out_pkt_desc.mVariableFramesInPacket ?
- out_pkt_desc.mVariableFramesInPacket :
- avctx->frame_size,
- &avpkt->pts,
- &avpkt->duration);
- } else if (ret && ret != 1) {
- av_log(avctx, AV_LOG_ERROR, "Encode error: %i\n", ret);
- return AVERROR_EXTERNAL;
- }
-
- return 0;
-}
-
-static av_cold void ffat_encode_flush(AVCodecContext *avctx)
-{
- ATDecodeContext *at = avctx->priv_data;
- AudioConverterReset(at->converter);
- ff_bufqueue_discard_all(&at->frame_queue);
- ff_bufqueue_discard_all(&at->used_frame_queue);
-}
-
-static av_cold int ffat_close_encoder(AVCodecContext *avctx)
-{
- ATDecodeContext *at = avctx->priv_data;
- AudioConverterDispose(at->converter);
- ff_bufqueue_discard_all(&at->frame_queue);
- ff_bufqueue_discard_all(&at->used_frame_queue);
- ff_af_queue_close(&at->afq);
- av_frame_free(&at->encoding_frame);
- return 0;
-}
-
-static const AVProfile aac_profiles[] = {
- { FF_PROFILE_AAC_LOW, "LC" },
- { FF_PROFILE_AAC_HE, "HE-AAC" },
- { FF_PROFILE_AAC_HE_V2, "HE-AACv2" },
- { FF_PROFILE_AAC_LD, "LD" },
- { FF_PROFILE_AAC_ELD, "ELD" },
- { FF_PROFILE_UNKNOWN },
-};
-
-#define AE AV_OPT_FLAG_AUDIO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
-static const AVOption options[] = {
-#if !TARGET_OS_IPHONE
- {"aac_at_mode", "ratecontrol mode", offsetof(ATDecodeContext, mode), AV_OPT_TYPE_INT, {.i64 = -1}, -1, kAudioCodecBitRateControlMode_Variable, AE, "mode"},
- {"auto", "VBR if global quality is given; CBR otherwise", 0, AV_OPT_TYPE_CONST, {.i64 = -1}, INT_MIN, INT_MAX, AE, "mode"},
- {"cbr", "constant bitrate", 0, AV_OPT_TYPE_CONST, {.i64 = kAudioCodecBitRateControlMode_Constant}, INT_MIN, INT_MAX, AE, "mode"},
- {"abr", "long-term average bitrate", 0, AV_OPT_TYPE_CONST, {.i64 = kAudioCodecBitRateControlMode_LongTermAverage}, INT_MIN, INT_MAX, AE, "mode"},
- {"cvbr", "constrained variable bitrate", 0, AV_OPT_TYPE_CONST, {.i64 = kAudioCodecBitRateControlMode_VariableConstrained}, INT_MIN, INT_MAX, AE, "mode"},
- {"vbr" , "variable bitrate", 0, AV_OPT_TYPE_CONST, {.i64 = kAudioCodecBitRateControlMode_Variable}, INT_MIN, INT_MAX, AE, "mode"},
-#endif
- {"aac_at_quality", "quality vs speed control", offsetof(ATDecodeContext, quality), AV_OPT_TYPE_INT, {.i64 = 0}, 0, 2, AE},
- { NULL },
-};
-
-#define FFAT_ENC_CLASS(NAME) \
- static const AVClass ffat_##NAME##_enc_class = { \
- .class_name = "at_" #NAME "_enc", \
- .item_name = av_default_item_name, \
- .option = options, \
- .version = LIBAVUTIL_VERSION_INT, \
- };
-
-#define FFAT_ENC(NAME, ID, PROFILES, CAPS, CHANNEL_LAYOUTS, CH_LAYOUTS) \
- FFAT_ENC_CLASS(NAME) \
- const FFCodec ff_##NAME##_at_encoder = { \
- .p.name = #NAME "_at", \
- CODEC_LONG_NAME(#NAME " (AudioToolbox)"), \
- .p.type = AVMEDIA_TYPE_AUDIO, \
- .p.id = ID, \
- .priv_data_size = sizeof(ATDecodeContext), \
- .init = ffat_init_encoder, \
- .close = ffat_close_encoder, \
- FF_CODEC_ENCODE_CB(ffat_encode), \
- .flush = ffat_encode_flush, \
- .p.priv_class = &ffat_##NAME##_enc_class, \
- .p.capabilities = AV_CODEC_CAP_DELAY | \
- AV_CODEC_CAP_ENCODER_FLUSH CAPS, \
- CODEC_OLD_CHANNEL_LAYOUTS_ARRAY(CHANNEL_LAYOUTS) \
- .p.ch_layouts = CH_LAYOUTS, \
- .p.sample_fmts = (const enum AVSampleFormat[]) { \
- AV_SAMPLE_FMT_S16, \
- AV_SAMPLE_FMT_U8, AV_SAMPLE_FMT_NONE \
- }, \
- .p.profiles = PROFILES, \
- .p.wrapper_name = "at", \
- };
-
-static const AVChannelLayout aac_at_ch_layouts[] = {
- AV_CHANNEL_LAYOUT_MONO,
- AV_CHANNEL_LAYOUT_STEREO,
- AV_CHANNEL_LAYOUT_SURROUND,
- AV_CHANNEL_LAYOUT_4POINT0,
- AV_CHANNEL_LAYOUT_5POINT0,
- AV_CHANNEL_LAYOUT_5POINT1,
- AV_CHANNEL_LAYOUT_6POINT0,
- AV_CHANNEL_LAYOUT_6POINT1,
- AV_CHANNEL_LAYOUT_7POINT0,
- AV_CHANNEL_LAYOUT_7POINT1_WIDE_BACK,
- AV_CHANNEL_LAYOUT_QUAD,
- AV_CHANNEL_LAYOUT_OCTAGONAL,
- { 0 },
-};
-
-#if FF_API_OLD_CHANNEL_LAYOUT
-static const uint64_t aac_at_channel_layouts[] = {
- AV_CH_LAYOUT_MONO,
- AV_CH_LAYOUT_STEREO,
- AV_CH_LAYOUT_SURROUND,
- AV_CH_LAYOUT_4POINT0,
- AV_CH_LAYOUT_5POINT0,
- AV_CH_LAYOUT_5POINT1,
- AV_CH_LAYOUT_6POINT0,
- AV_CH_LAYOUT_6POINT1,
- AV_CH_LAYOUT_7POINT0,
- AV_CH_LAYOUT_7POINT1_WIDE_BACK,
- AV_CH_LAYOUT_QUAD,
- AV_CH_LAYOUT_OCTAGONAL,
- 0,
-};
-#endif
-
-FFAT_ENC(aac, AV_CODEC_ID_AAC, aac_profiles, , aac_at_channel_layouts, aac_at_ch_layouts)
-//FFAT_ENC(adpcm_ima_qt, AV_CODEC_ID_ADPCM_IMA_QT, NULL)
-FFAT_ENC(alac, AV_CODEC_ID_ALAC, NULL, | AV_CODEC_CAP_VARIABLE_FRAME_SIZE, NULL, NULL)
-FFAT_ENC(ilbc, AV_CODEC_ID_ILBC, NULL, , NULL, NULL)
-FFAT_ENC(pcm_alaw, AV_CODEC_ID_PCM_ALAW, NULL, , NULL, NULL)
-FFAT_ENC(pcm_mulaw, AV_CODEC_ID_PCM_MULAW, NULL, , NULL, NULL)
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cfhd.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cfhd.c
deleted file mode 100644
index c23eb069c656738792e3ac8e474329d522683a9f..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cfhd.c
+++ /dev/null
@@ -1,1467 +0,0 @@
-/*
- * Copyright (c) 2015-2016 Kieran Kunhya
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * Cineform HD video decoder
- */
-
-#include "libavutil/attributes.h"
-#include "libavutil/buffer.h"
-#include "libavutil/common.h"
-#include "libavutil/intreadwrite.h"
-#include "libavutil/pixdesc.h"
-
-#include "avcodec.h"
-#include "bytestream.h"
-#include "codec_internal.h"
-#include "decode.h"
-#include "get_bits.h"
-#include "internal.h"
-#include "thread.h"
-#include "cfhd.h"
-
-#define ALPHA_COMPAND_DC_OFFSET 256
-#define ALPHA_COMPAND_GAIN 9400
-
-static av_cold int cfhd_init(AVCodecContext *avctx)
-{
- CFHDContext *s = avctx->priv_data;
-
- s->avctx = avctx;
-
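-    /* Build the codebook-0 decompanding LUT: identity up to 39, then piecewise
-     * linear with slope 4 from 40 and slope 16 from 54. */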
- for (int i = 0; i < 64; i++) {
- int val = i;
-
- if (val >= 40) {
- if (val >= 54) {
- val -= 54;
- val <<= 2;
- val += 54;
- }
-
- val -= 40;
- val <<= 2;
- val += 40;
- }
-
- s->lut[0][i] = val;
- }
-
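-    /* Codebook-1 LUT: i + 768 * (i/256)^3, a mild cubic expansion curve. */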
- for (int i = 0; i < 256; i++)
- s->lut[1][i] = i + ((768LL * i * i * i) / (256 * 256 * 256));
-
- return ff_cfhd_init_vlcs(s);
-}
-
-static void init_plane_defaults(CFHDContext *s)
-{
- s->subband_num = 0;
- s->level = 0;
- s->subband_num_actual = 0;
-}
-
-static void init_peak_table_defaults(CFHDContext *s)
-{
- s->peak.level = 0;
- s->peak.offset = 0;
- memset(&s->peak.base, 0, sizeof(s->peak.base));
-}
-
-static void init_frame_defaults(CFHDContext *s)
-{
- s->coded_width = 0;
- s->coded_height = 0;
- s->coded_format = AV_PIX_FMT_YUV422P10;
- s->cropped_height = 0;
- s->bpc = 10;
- s->channel_cnt = 3;
- s->subband_cnt = SUBBAND_COUNT;
- s->channel_num = 0;
- s->lowpass_precision = 16;
- s->quantisation = 1;
- s->codebook = 0;
- s->difference_coding = 0;
- s->frame_type = 0;
- s->sample_type = 0;
- if (s->transform_type != 2)
- s->transform_type = -1;
- init_plane_defaults(s);
- init_peak_table_defaults(s);
-}
-
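-/* Inverse quantisation; codebooks 0 and 1 additionally expand the magnitude
- * through the matching LUT while preserving the sign. */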
-static inline int dequant_and_decompand(CFHDContext *s, int level, int quantisation, int codebook)
-{
- if (codebook == 0 || codebook == 1) {
- return s->lut[codebook][abs(level)] * FFSIGN(level) * quantisation;
- } else
- return level * quantisation;
-}
-
-static inline void difference_coding(int16_t *band, int width, int height)
-{
-
- int i,j;
- for (i = 0; i < height; i++) {
- for (j = 1; j < width; j++) {
- band[j] += band[j-1];
- }
- band += width;
- }
-}
-
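-/* Replace coefficients whose magnitude exceeds the signalled peak level with
- * exact 16-bit values read sequentially from the peak table. */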
-static inline void peak_table(int16_t *band, Peak *peak, int length)
-{
- int i;
- for (i = 0; i < length; i++)
- if (abs(band[i]) > peak->level)
- band[i] = bytestream2_get_le16(&peak->base);
-}
-
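-/* Undo the alpha-channel companding: remove the DC offset and rescale the
- * values back to a 12-bit range. */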
-static inline void process_alpha(int16_t *alpha, int width)
-{
- int i, channel;
- for (i = 0; i < width; i++) {
- channel = alpha[i];
- channel -= ALPHA_COMPAND_DC_OFFSET;
- channel <<= 3;
- channel *= ALPHA_COMPAND_GAIN;
- channel >>= 16;
- channel = av_clip_uintp2(channel, 12);
- alpha[i] = channel;
- }
-}
-
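-/* The Bayer path stores a 2x2 cell as G plus R-G, B-G and G1-G2 differences;
- * reconstruct R, G1, G2, B and scale the result to 16 bits. */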
-static inline void process_bayer(AVFrame *frame, int bpc)
-{
- const int linesize = frame->linesize[0];
- uint16_t *r = (uint16_t *)frame->data[0];
- uint16_t *g1 = (uint16_t *)(frame->data[0] + 2);
- uint16_t *g2 = (uint16_t *)(frame->data[0] + frame->linesize[0]);
- uint16_t *b = (uint16_t *)(frame->data[0] + frame->linesize[0] + 2);
- const int mid = 1 << (bpc - 1);
- const int factor = 1 << (16 - bpc);
-
- for (int y = 0; y < frame->height >> 1; y++) {
- for (int x = 0; x < frame->width; x += 2) {
- int R, G1, G2, B;
- int g, rg, bg, gd;
-
- g = r[x];
- rg = g1[x];
- bg = g2[x];
- gd = b[x];
- gd -= mid;
-
- R = (rg - mid) * 2 + g;
- G1 = g + gd;
- G2 = g - gd;
- B = (bg - mid) * 2 + g;
-
- R = av_clip_uintp2(R * factor, 16);
- G1 = av_clip_uintp2(G1 * factor, 16);
- G2 = av_clip_uintp2(G2 * factor, 16);
- B = av_clip_uintp2(B * factor, 16);
-
- r[x] = R;
- g1[x] = G1;
- g2[x] = G2;
- b[x] = B;
- }
-
- r += linesize;
- g1 += linesize;
- g2 += linesize;
- b += linesize;
- }
-}
-
-static inline void interlaced_vertical_filter(int16_t *output, int16_t *low, int16_t *high,
- int width, int linesize, int plane)
-{
- int i;
- int16_t even, odd;
- for (i = 0; i < width; i++) {
- even = (low[i] - high[i])/2;
- odd = (low[i] + high[i])/2;
- output[i] = av_clip_uintp2(even, 10);
- output[i + linesize] = av_clip_uintp2(odd, 10);
- }
-}
-
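-/* Undo the temporal sum/difference transform between two frames in place. */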
-static inline void inverse_temporal_filter(int16_t *low, int16_t *high, int width)
-{
- for (int i = 0; i < width; i++) {
- int even = (low[i] - high[i]) / 2;
- int odd = (low[i] + high[i]) / 2;
-
- low[i] = even;
- high[i] = odd;
- }
-}
-
-static void free_buffers(CFHDContext *s)
-{
- int i, j;
-
- for (i = 0; i < FF_ARRAY_ELEMS(s->plane); i++) {
- Plane *p = &s->plane[i];
- av_freep(&s->plane[i].idwt_buf);
- av_freep(&s->plane[i].idwt_tmp);
- s->plane[i].idwt_size = 0;
-
- for (j = 0; j < SUBBAND_COUNT_3D; j++)
- s->plane[i].subband[j] = NULL;
-
- for (j = 0; j < 10; j++)
- s->plane[i].l_h[j] = NULL;
-
- for (j = 0; j < DWT_LEVELS_3D; j++)
- p->band[j][0].read_ok =
- p->band[j][1].read_ok =
- p->band[j][2].read_ok =
- p->band[j][3].read_ok = 0;
- }
- s->a_height = 0;
- s->a_width = 0;
- s->a_transform_type = INT_MIN;
-}
-
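-/* Allocate the per-plane inverse DWT buffers and lay out the subband and
- * intermediate (l_h) pointers for the 2D or 3D transform. */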
-static int alloc_buffers(AVCodecContext *avctx)
-{
- CFHDContext *s = avctx->priv_data;
- int i, j, ret, planes, bayer = 0;
- int chroma_x_shift, chroma_y_shift;
- unsigned k;
-
- if ((ret = ff_set_dimensions(avctx, s->coded_width, s->coded_height)) < 0)
- return ret;
- avctx->pix_fmt = s->coded_format;
-
- ff_cfhddsp_init(&s->dsp, s->bpc, avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16);
-
- if ((ret = av_pix_fmt_get_chroma_sub_sample(s->coded_format,
- &chroma_x_shift,
- &chroma_y_shift)) < 0)
- return ret;
- planes = av_pix_fmt_count_planes(s->coded_format);
- if (s->coded_format == AV_PIX_FMT_BAYER_RGGB16) {
- planes = 4;
- chroma_x_shift = 1;
- chroma_y_shift = 1;
- bayer = 1;
- }
-
- for (i = 0; i < planes; i++) {
- int w8, h8, w4, h4, w2, h2;
- int width = (i || bayer) ? s->coded_width >> chroma_x_shift : s->coded_width;
- int height = (i || bayer) ? s->coded_height >> chroma_y_shift : s->coded_height;
- ptrdiff_t stride = (FFALIGN(width / 8, 8) + 64) * 8;
-
- if (chroma_y_shift && !bayer)
- height = FFALIGN(height / 8, 2) * 8;
- s->plane[i].width = width;
- s->plane[i].height = height;
- s->plane[i].stride = stride;
-
- w8 = FFALIGN(s->plane[i].width / 8, 8) + 64;
- h8 = FFALIGN(height, 8) / 8;
- w4 = w8 * 2;
- h4 = h8 * 2;
- w2 = w4 * 2;
- h2 = h4 * 2;
-
- if (s->transform_type == 0) {
- s->plane[i].idwt_size = FFALIGN(height, 8) * stride;
- s->plane[i].idwt_buf =
- av_calloc(s->plane[i].idwt_size, sizeof(*s->plane[i].idwt_buf));
- s->plane[i].idwt_tmp =
- av_malloc_array(s->plane[i].idwt_size, sizeof(*s->plane[i].idwt_tmp));
- } else {
- s->plane[i].idwt_size = FFALIGN(height, 8) * stride * 2;
- s->plane[i].idwt_buf =
- av_calloc(s->plane[i].idwt_size, sizeof(*s->plane[i].idwt_buf));
- s->plane[i].idwt_tmp =
- av_malloc_array(s->plane[i].idwt_size, sizeof(*s->plane[i].idwt_tmp));
- }
-
- if (!s->plane[i].idwt_buf || !s->plane[i].idwt_tmp)
- return AVERROR(ENOMEM);
-
- s->plane[i].subband[0] = s->plane[i].idwt_buf;
- s->plane[i].subband[1] = s->plane[i].idwt_buf + 2 * w8 * h8;
- s->plane[i].subband[2] = s->plane[i].idwt_buf + 1 * w8 * h8;
- s->plane[i].subband[3] = s->plane[i].idwt_buf + 3 * w8 * h8;
- s->plane[i].subband[4] = s->plane[i].idwt_buf + 2 * w4 * h4;
- s->plane[i].subband[5] = s->plane[i].idwt_buf + 1 * w4 * h4;
- s->plane[i].subband[6] = s->plane[i].idwt_buf + 3 * w4 * h4;
- if (s->transform_type == 0) {
- s->plane[i].subband[7] = s->plane[i].idwt_buf + 2 * w2 * h2;
- s->plane[i].subband[8] = s->plane[i].idwt_buf + 1 * w2 * h2;
- s->plane[i].subband[9] = s->plane[i].idwt_buf + 3 * w2 * h2;
- } else {
- int16_t *frame2 =
- s->plane[i].subband[7] = s->plane[i].idwt_buf + 4 * w2 * h2;
- s->plane[i].subband[8] = frame2 + 2 * w4 * h4;
- s->plane[i].subband[9] = frame2 + 1 * w4 * h4;
- s->plane[i].subband[10] = frame2 + 3 * w4 * h4;
- s->plane[i].subband[11] = frame2 + 2 * w2 * h2;
- s->plane[i].subband[12] = frame2 + 1 * w2 * h2;
- s->plane[i].subband[13] = frame2 + 3 * w2 * h2;
- s->plane[i].subband[14] = s->plane[i].idwt_buf + 2 * w2 * h2;
- s->plane[i].subband[15] = s->plane[i].idwt_buf + 1 * w2 * h2;
- s->plane[i].subband[16] = s->plane[i].idwt_buf + 3 * w2 * h2;
- }
-
- if (s->transform_type == 0) {
- for (j = 0; j < DWT_LEVELS; j++) {
- for (k = 0; k < FF_ARRAY_ELEMS(s->plane[i].band[j]); k++) {
- s->plane[i].band[j][k].a_width = w8 << j;
- s->plane[i].band[j][k].a_height = h8 << j;
- }
- }
- } else {
- for (j = 0; j < DWT_LEVELS_3D; j++) {
- int t = j < 1 ? 0 : (j < 3 ? 1 : 2);
-
- for (k = 0; k < FF_ARRAY_ELEMS(s->plane[i].band[j]); k++) {
- s->plane[i].band[j][k].a_width = w8 << t;
- s->plane[i].band[j][k].a_height = h8 << t;
- }
- }
- }
-
- /* ll2 and ll1 commented out because they are done in-place */
- s->plane[i].l_h[0] = s->plane[i].idwt_tmp;
- s->plane[i].l_h[1] = s->plane[i].idwt_tmp + 2 * w8 * h8;
- // s->plane[i].l_h[2] = ll2;
- s->plane[i].l_h[3] = s->plane[i].idwt_tmp;
- s->plane[i].l_h[4] = s->plane[i].idwt_tmp + 2 * w4 * h4;
- // s->plane[i].l_h[5] = ll1;
- s->plane[i].l_h[6] = s->plane[i].idwt_tmp;
- s->plane[i].l_h[7] = s->plane[i].idwt_tmp + 2 * w2 * h2;
- if (s->transform_type != 0) {
- int16_t *frame2 = s->plane[i].idwt_tmp + 4 * w2 * h2;
-
- s->plane[i].l_h[8] = frame2;
- s->plane[i].l_h[9] = frame2 + 2 * w2 * h2;
- }
- }
-
- s->a_transform_type = s->transform_type;
- s->a_height = s->coded_height;
- s->a_width = s->coded_width;
- s->a_format = s->coded_format;
-
- return 0;
-}
-
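-/* Parse the tag/value header stream, read lowpass and highpass subband
- * coefficients, then run the inverse wavelet (and, for 3D transforms,
- * temporal) synthesis for each plane. */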
-static int cfhd_decode(AVCodecContext *avctx, AVFrame *pic,
- int *got_frame, AVPacket *avpkt)
-{
- CFHDContext *s = avctx->priv_data;
- CFHDDSPContext *dsp = &s->dsp;
- GetByteContext gb;
- int ret = 0, i, j, plane, got_buffer = 0;
- int16_t *coeff_data;
-
- init_frame_defaults(s);
- s->planes = av_pix_fmt_count_planes(s->coded_format);
-
- bytestream2_init(&gb, avpkt->data, avpkt->size);
-
- while (bytestream2_get_bytes_left(&gb) >= 4) {
- /* Bit weird but implement the tag parsing as the spec says */
- uint16_t tagu = bytestream2_get_be16(&gb);
- int16_t tag = (int16_t)tagu;
- int8_t tag8 = (int8_t)(tagu >> 8);
- uint16_t abstag = abs(tag);
- int8_t abs_tag8 = abs(tag8);
- uint16_t data = bytestream2_get_be16(&gb);
- if (abs_tag8 >= 0x60 && abs_tag8 <= 0x6f) {
- av_log(avctx, AV_LOG_DEBUG, "large len %x\n", ((tagu & 0xff) << 16) | data);
- } else if (tag == SampleFlags) {
- av_log(avctx, AV_LOG_DEBUG, "Progressive? %"PRIu16"\n", data);
- s->progressive = data & 0x0001;
- } else if (tag == FrameType) {
- s->frame_type = data;
- av_log(avctx, AV_LOG_DEBUG, "Frame type %"PRIu16"\n", data);
- } else if (abstag == VersionMajor) {
- av_log(avctx, AV_LOG_DEBUG, "Version major %"PRIu16"\n", data);
- } else if (abstag == VersionMinor) {
- av_log(avctx, AV_LOG_DEBUG, "Version minor %"PRIu16"\n", data);
- } else if (abstag == VersionRevision) {
- av_log(avctx, AV_LOG_DEBUG, "Version revision %"PRIu16"\n", data);
- } else if (abstag == VersionEdit) {
- av_log(avctx, AV_LOG_DEBUG, "Version edit %"PRIu16"\n", data);
- } else if (abstag == Version) {
- av_log(avctx, AV_LOG_DEBUG, "Version %"PRIu16"\n", data);
- } else if (tag == ImageWidth) {
- av_log(avctx, AV_LOG_DEBUG, "Width %"PRIu16"\n", data);
- s->coded_width = data;
- } else if (tag == ImageHeight) {
- av_log(avctx, AV_LOG_DEBUG, "Height %"PRIu16"\n", data);
- s->coded_height = data;
- } else if (tag == ChannelCount) {
- av_log(avctx, AV_LOG_DEBUG, "Channel Count: %"PRIu16"\n", data);
- s->channel_cnt = data;
- if (data > 4) {
- av_log(avctx, AV_LOG_ERROR, "Channel Count of %"PRIu16" is unsupported\n", data);
- ret = AVERROR_PATCHWELCOME;
- goto end;
- }
- } else if (tag == SubbandCount) {
- av_log(avctx, AV_LOG_DEBUG, "Subband Count: %"PRIu16"\n", data);
- if (data != SUBBAND_COUNT && data != SUBBAND_COUNT_3D) {
- av_log(avctx, AV_LOG_ERROR, "Subband Count of %"PRIu16" is unsupported\n", data);
- ret = AVERROR_PATCHWELCOME;
- goto end;
- }
- } else if (tag == ChannelNumber) {
- s->channel_num = data;
- av_log(avctx, AV_LOG_DEBUG, "Channel number %"PRIu16"\n", data);
- if (s->channel_num >= s->planes) {
- av_log(avctx, AV_LOG_ERROR, "Invalid channel number\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
- init_plane_defaults(s);
- } else if (tag == SubbandNumber) {
- if (s->subband_num != 0 && data == 1 && (s->transform_type == 0 || s->transform_type == 2)) // hack
- s->level++;
- av_log(avctx, AV_LOG_DEBUG, "Subband number %"PRIu16"\n", data);
- s->subband_num = data;
- if ((s->transform_type == 0 && s->level >= DWT_LEVELS) ||
- (s->transform_type == 2 && s->level >= DWT_LEVELS_3D)) {
- av_log(avctx, AV_LOG_ERROR, "Invalid level\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
- if (s->subband_num > 3) {
- av_log(avctx, AV_LOG_ERROR, "Invalid subband number\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
- } else if (tag == SubbandBand) {
- av_log(avctx, AV_LOG_DEBUG, "Subband number actual %"PRIu16"\n", data);
- if ((s->transform_type == 0 && data >= SUBBAND_COUNT) ||
- (s->transform_type == 2 && data >= SUBBAND_COUNT_3D && data != 255)) {
- av_log(avctx, AV_LOG_ERROR, "Invalid subband number actual\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
- if (s->transform_type == 0 || s->transform_type == 2)
- s->subband_num_actual = data;
- else
- av_log(avctx, AV_LOG_WARNING, "Ignoring subband num actual %"PRIu16"\n", data);
- } else if (tag == LowpassPrecision)
- av_log(avctx, AV_LOG_DEBUG, "Lowpass precision bits: %"PRIu16"\n", data);
- else if (tag == Quantization) {
- s->quantisation = data;
- av_log(avctx, AV_LOG_DEBUG, "Quantisation: %"PRIu16"\n", data);
- } else if (tag == PrescaleTable) {
- for (i = 0; i < 8; i++)
- s->prescale_table[i] = (data >> (14 - i * 2)) & 0x3;
- av_log(avctx, AV_LOG_DEBUG, "Prescale table: %x\n", data);
- } else if (tag == BandEncoding) {
- if (!data || data > 5) {
- av_log(avctx, AV_LOG_ERROR, "Invalid band encoding\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
- s->band_encoding = data;
- av_log(avctx, AV_LOG_DEBUG, "Encode Method for Subband %d : %x\n", s->subband_num_actual, data);
- } else if (tag == LowpassWidth) {
- av_log(avctx, AV_LOG_DEBUG, "Lowpass width %"PRIu16"\n", data);
- s->plane[s->channel_num].band[0][0].width = data;
- s->plane[s->channel_num].band[0][0].stride = data;
- } else if (tag == LowpassHeight) {
- av_log(avctx, AV_LOG_DEBUG, "Lowpass height %"PRIu16"\n", data);
- s->plane[s->channel_num].band[0][0].height = data;
- } else if (tag == SampleType) {
- s->sample_type = data;
- av_log(avctx, AV_LOG_DEBUG, "Sample type? %"PRIu16"\n", data);
- } else if (tag == TransformType) {
- if (data > 2) {
- av_log(avctx, AV_LOG_ERROR, "Invalid transform type\n");
- ret = AVERROR(EINVAL);
- goto end;
- } else if (data == 1) {
- av_log(avctx, AV_LOG_ERROR, "unsupported transform type\n");
- ret = AVERROR_PATCHWELCOME;
- goto end;
- }
- if (s->transform_type == -1) {
- s->transform_type = data;
- av_log(avctx, AV_LOG_DEBUG, "Transform type %"PRIu16"\n", data);
- } else {
- av_log(avctx, AV_LOG_DEBUG, "Ignoring additional transform type %"PRIu16"\n", data);
- }
- } else if (abstag >= 0x4000 && abstag <= 0x40ff) {
- if (abstag == 0x4001)
- s->peak.level = 0;
- av_log(avctx, AV_LOG_DEBUG, "Small chunk length %d %s\n", data * 4, tag < 0 ? "optional" : "required");
- bytestream2_skipu(&gb, data * 4);
- } else if (tag == FrameIndex) {
- av_log(avctx, AV_LOG_DEBUG, "Frame index %"PRIu16"\n", data);
- s->frame_index = data;
- } else if (tag == SampleIndexTable) {
- av_log(avctx, AV_LOG_DEBUG, "Sample index table - skipping %i values\n", data);
- if (data > bytestream2_get_bytes_left(&gb) / 4) {
- av_log(avctx, AV_LOG_ERROR, "too many values (%d)\n", data);
- ret = AVERROR_INVALIDDATA;
- goto end;
- }
- for (i = 0; i < data; i++) {
- uint32_t offset = bytestream2_get_be32(&gb);
- av_log(avctx, AV_LOG_DEBUG, "Offset = %"PRIu32"\n", offset);
- }
- } else if (tag == HighpassWidth) {
- av_log(avctx, AV_LOG_DEBUG, "Highpass width %i channel %i level %i subband %i\n", data, s->channel_num, s->level, s->subband_num);
- if (data < 3) {
- av_log(avctx, AV_LOG_ERROR, "Invalid highpass width\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
- s->plane[s->channel_num].band[s->level][s->subband_num].width = data;
- s->plane[s->channel_num].band[s->level][s->subband_num].stride = FFALIGN(data, 8);
- } else if (tag == HighpassHeight) {
- av_log(avctx, AV_LOG_DEBUG, "Highpass height %i\n", data);
- if (data < 3) {
- av_log(avctx, AV_LOG_ERROR, "Invalid highpass height\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
- s->plane[s->channel_num].band[s->level][s->subband_num].height = data;
- } else if (tag == BandWidth) {
- av_log(avctx, AV_LOG_DEBUG, "Highpass width2 %i\n", data);
- if (data < 3) {
- av_log(avctx, AV_LOG_ERROR, "Invalid highpass width2\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
- s->plane[s->channel_num].band[s->level][s->subband_num].width = data;
- s->plane[s->channel_num].band[s->level][s->subband_num].stride = FFALIGN(data, 8);
- } else if (tag == BandHeight) {
- av_log(avctx, AV_LOG_DEBUG, "Highpass height2 %i\n", data);
- if (data < 3) {
- av_log(avctx, AV_LOG_ERROR, "Invalid highpass height2\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
- s->plane[s->channel_num].band[s->level][s->subband_num].height = data;
- } else if (tag == InputFormat) {
- av_log(avctx, AV_LOG_DEBUG, "Input format %i\n", data);
- if (s->coded_format == AV_PIX_FMT_NONE ||
- s->coded_format == AV_PIX_FMT_YUV422P10) {
- if (data >= 100 && data <= 105) {
- s->coded_format = AV_PIX_FMT_BAYER_RGGB16;
- } else if (data >= 122 && data <= 128) {
- s->coded_format = AV_PIX_FMT_GBRP12;
- } else if (data == 30) {
- s->coded_format = AV_PIX_FMT_GBRAP12;
- } else {
- s->coded_format = AV_PIX_FMT_YUV422P10;
- }
- s->planes = s->coded_format == AV_PIX_FMT_BAYER_RGGB16 ? 4 : av_pix_fmt_count_planes(s->coded_format);
- }
- } else if (tag == BandCodingFlags) {
- s->codebook = data & 0xf;
- s->difference_coding = (data >> 4) & 1;
- av_log(avctx, AV_LOG_DEBUG, "Other codebook? %i\n", s->codebook);
- } else if (tag == Precision) {
- av_log(avctx, AV_LOG_DEBUG, "Precision %i\n", data);
- if (!(data == 10 || data == 12)) {
- av_log(avctx, AV_LOG_ERROR, "Invalid bits per channel\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
- avctx->bits_per_raw_sample = s->bpc = data;
- } else if (tag == EncodedFormat) {
- av_log(avctx, AV_LOG_DEBUG, "Sample format? %i\n", data);
- if (data == 1) {
- s->coded_format = AV_PIX_FMT_YUV422P10;
- } else if (data == 2) {
- s->coded_format = AV_PIX_FMT_BAYER_RGGB16;
- } else if (data == 3) {
- s->coded_format = AV_PIX_FMT_GBRP12;
- } else if (data == 4) {
- s->coded_format = AV_PIX_FMT_GBRAP12;
- } else {
- avpriv_report_missing_feature(avctx, "Sample format of %"PRIu16, data);
- ret = AVERROR_PATCHWELCOME;
- goto end;
- }
- s->planes = data == 2 ? 4 : av_pix_fmt_count_planes(s->coded_format);
- } else if (tag == -DisplayHeight) {
- av_log(avctx, AV_LOG_DEBUG, "Cropped height %"PRIu16"\n", data);
- s->cropped_height = data;
- } else if (tag == -PeakOffsetLow) {
- s->peak.offset &= ~0xffff;
- s->peak.offset |= (data & 0xffff);
- s->peak.base = gb;
- s->peak.level = 0;
- } else if (tag == -PeakOffsetHigh) {
- s->peak.offset &= 0xffff;
- s->peak.offset |= (data & 0xffffU)<<16;
- s->peak.base = gb;
- s->peak.level = 0;
- } else if (tag == -PeakLevel && s->peak.offset) {
- s->peak.level = data;
- if (s->peak.offset < 4 - bytestream2_tell(&s->peak.base) ||
- s->peak.offset > 4 + bytestream2_get_bytes_left(&s->peak.base)
- ) {
- ret = AVERROR_INVALIDDATA;
- goto end;
- }
- bytestream2_seek(&s->peak.base, s->peak.offset - 4, SEEK_CUR);
- } else
- av_log(avctx, AV_LOG_DEBUG, "Unknown tag %i data %x\n", tag, data);
-
- if (tag == BitstreamMarker && data == 0xf0f &&
- s->coded_format != AV_PIX_FMT_NONE) {
- int lowpass_height = s->plane[s->channel_num].band[0][0].height;
- int lowpass_width = s->plane[s->channel_num].band[0][0].width;
- int factor = s->coded_format == AV_PIX_FMT_BAYER_RGGB16 ? 2 : 1;
-
- if (s->coded_width) {
- s->coded_width *= factor;
- }
-
- if (s->coded_height) {
- s->coded_height *= factor;
- }
-
- if (!s->a_width && !s->coded_width) {
- s->coded_width = lowpass_width * factor * 8;
- }
-
- if (!s->a_height && !s->coded_height) {
- s->coded_height = lowpass_height * factor * 8;
- }
-
- if (s->a_width && !s->coded_width)
- s->coded_width = s->a_width;
- if (s->a_height && !s->coded_height)
- s->coded_height = s->a_height;
-
- if (s->a_width != s->coded_width || s->a_height != s->coded_height ||
- s->a_format != s->coded_format ||
- s->transform_type != s->a_transform_type) {
- free_buffers(s);
- if ((ret = alloc_buffers(avctx)) < 0) {
- free_buffers(s);
- return ret;
- }
- }
- ret = ff_set_dimensions(avctx, s->coded_width, s->coded_height);
- if (ret < 0)
- return ret;
- if (s->cropped_height) {
- unsigned height = s->cropped_height << (avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16);
- if (avctx->height < height)
- return AVERROR_INVALIDDATA;
- avctx->height = height;
- }
- pic->width = pic->height = 0;
-
- if ((ret = ff_thread_get_buffer(avctx, pic, 0)) < 0)
- return ret;
-
- s->coded_width = 0;
- s->coded_height = 0;
- s->coded_format = AV_PIX_FMT_NONE;
- got_buffer = 1;
- } else if (tag == FrameIndex && data == 1 && s->sample_type == 1 && s->frame_type == 2) {
- pic->width = pic->height = 0;
-
- if ((ret = ff_thread_get_buffer(avctx, pic, 0)) < 0)
- return ret;
- s->coded_width = 0;
- s->coded_height = 0;
- s->coded_format = AV_PIX_FMT_NONE;
- got_buffer = 1;
- }
-
- if (s->subband_num_actual == 255)
- goto finish;
- coeff_data = s->plane[s->channel_num].subband[s->subband_num_actual];
-
- /* Lowpass coefficients */
- if (tag == BitstreamMarker && data == 0xf0f) {
- int lowpass_height, lowpass_width, lowpass_a_height, lowpass_a_width;
-
- if (!s->a_width || !s->a_height) {
- ret = AVERROR_INVALIDDATA;
- goto end;
- }
-
- lowpass_height = s->plane[s->channel_num].band[0][0].height;
- lowpass_width = s->plane[s->channel_num].band[0][0].width;
- lowpass_a_height = s->plane[s->channel_num].band[0][0].a_height;
- lowpass_a_width = s->plane[s->channel_num].band[0][0].a_width;
-
- if (lowpass_width < 3 ||
- lowpass_width > lowpass_a_width) {
- av_log(avctx, AV_LOG_ERROR, "Invalid lowpass width\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- if (lowpass_height < 3 ||
- lowpass_height > lowpass_a_height) {
- av_log(avctx, AV_LOG_ERROR, "Invalid lowpass height\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- if (!got_buffer) {
- av_log(avctx, AV_LOG_ERROR, "No end of header tag found\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- if (lowpass_height > lowpass_a_height || lowpass_width > lowpass_a_width ||
- lowpass_width * lowpass_height * sizeof(int16_t) > bytestream2_get_bytes_left(&gb)) {
- av_log(avctx, AV_LOG_ERROR, "Too many lowpass coefficients\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- av_log(avctx, AV_LOG_DEBUG, "Start of lowpass coeffs component %d height:%d, width:%d\n", s->channel_num, lowpass_height, lowpass_width);
- for (i = 0; i < lowpass_height; i++) {
- for (j = 0; j < lowpass_width; j++)
- coeff_data[j] = bytestream2_get_be16u(&gb);
-
- coeff_data += lowpass_width;
- }
-
- /* Align to mod-4 position to continue reading tags */
- bytestream2_seek(&gb, bytestream2_tell(&gb) & 3, SEEK_CUR);
-
- /* Copy last line of coefficients if odd height */
- if (lowpass_height & 1) {
- memcpy(&coeff_data[lowpass_height * lowpass_width],
- &coeff_data[(lowpass_height - 1) * lowpass_width],
- lowpass_width * sizeof(*coeff_data));
- }
-
- s->plane[s->channel_num].band[0][0].read_ok = 1;
-
- av_log(avctx, AV_LOG_DEBUG, "Lowpass coefficients %d\n", lowpass_width * lowpass_height);
- }
-
- av_assert0(s->subband_num_actual != 255);
- if (tag == BandHeader || tag == BandSecondPass) {
- int highpass_height, highpass_width, highpass_a_width, highpass_a_height, highpass_stride, a_expected;
- int expected;
- int level, run, coeff;
- int count = 0, bytes;
-
- if (!s->a_width || !s->a_height) {
- ret = AVERROR_INVALIDDATA;
- goto end;
- }
-
- highpass_height = s->plane[s->channel_num].band[s->level][s->subband_num].height;
- highpass_width = s->plane[s->channel_num].band[s->level][s->subband_num].width;
- highpass_a_width = s->plane[s->channel_num].band[s->level][s->subband_num].a_width;
- highpass_a_height = s->plane[s->channel_num].band[s->level][s->subband_num].a_height;
- highpass_stride = s->plane[s->channel_num].band[s->level][s->subband_num].stride;
- a_expected = highpass_a_height * highpass_a_width;
-
- if (!got_buffer) {
- av_log(avctx, AV_LOG_ERROR, "No end of header tag found\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- if (highpass_height > highpass_a_height || highpass_width > highpass_a_width || a_expected < highpass_height * (uint64_t)highpass_stride) {
- av_log(avctx, AV_LOG_ERROR, "Too many highpass coefficients\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
- expected = highpass_height * highpass_stride;
-
- av_log(avctx, AV_LOG_DEBUG, "Start subband coeffs plane %i level %i codebook %i expected %i\n", s->channel_num, s->level, s->codebook, expected);
-
- ret = init_get_bits8(&s->gb, gb.buffer, bytestream2_get_bytes_left(&gb));
- if (ret < 0)
- goto end;
- {
- OPEN_READER(re, &s->gb);
-
- const int lossless = s->band_encoding == 5;
-
- if (s->codebook == 0 && s->transform_type == 2 && s->subband_num_actual == 7)
- s->codebook = 1;
- if (!s->codebook) {
- while (1) {
- UPDATE_CACHE(re, &s->gb);
- GET_RL_VLC(level, run, re, &s->gb, s->table_9_rl_vlc,
- VLC_BITS, 3, 1);
-
- /* escape */
- if (!run)
- break;
-
- count += run;
-
- if (count > expected)
- break;
-
- if (!lossless)
- coeff = dequant_and_decompand(s, level, s->quantisation, 0);
- else
- coeff = level;
- if (tag == BandSecondPass) {
- const uint16_t q = s->quantisation;
-
- for (i = 0; i < run; i++) {
- *coeff_data |= coeff * 256U;
- *coeff_data++ *= q;
- }
- } else {
- for (i = 0; i < run; i++)
- *coeff_data++ = coeff;
- }
- }
- } else {
- while (1) {
- UPDATE_CACHE(re, &s->gb);
- GET_RL_VLC(level, run, re, &s->gb, s->table_18_rl_vlc,
- VLC_BITS, 3, 1);
-
- /* escape */
- if (!run)
- break;
-
- count += run;
-
- if (count > expected)
- break;
-
- if (!lossless)
- coeff = dequant_and_decompand(s, level, s->quantisation, s->codebook);
- else
- coeff = level;
- if (tag == BandSecondPass) {
- const uint16_t q = s->quantisation;
-
- for (i = 0; i < run; i++) {
- *coeff_data |= coeff * 256U;
- *coeff_data++ *= q;
- }
- } else {
- for (i = 0; i < run; i++)
- *coeff_data++ = coeff;
- }
- }
- }
- CLOSE_READER(re, &s->gb);
- }
-
- if (count > expected) {
- av_log(avctx, AV_LOG_ERROR, "Escape codeword not found, probably corrupt data\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
- if (s->peak.level)
- peak_table(coeff_data - count, &s->peak, count);
- if (s->difference_coding)
- difference_coding(s->plane[s->channel_num].subband[s->subband_num_actual], highpass_width, highpass_height);
-
- bytes = FFALIGN(AV_CEIL_RSHIFT(get_bits_count(&s->gb), 3), 4);
- if (bytes > bytestream2_get_bytes_left(&gb)) {
- av_log(avctx, AV_LOG_ERROR, "Bitstream overread error\n");
- ret = AVERROR(EINVAL);
- goto end;
- } else
- bytestream2_seek(&gb, bytes, SEEK_CUR);
-
- av_log(avctx, AV_LOG_DEBUG, "End subband coeffs %i extra %i\n", count, count - expected);
- s->plane[s->channel_num].band[s->level][s->subband_num].read_ok = 1;
-finish:
- if (s->subband_num_actual != 255)
- s->codebook = 0;
- }
- }
-
- s->planes = av_pix_fmt_count_planes(avctx->pix_fmt);
- if (avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16) {
- s->progressive = 1;
- s->planes = 4;
- }
-
- ff_thread_finish_setup(avctx);
-
- if (!s->a_width || !s->a_height || s->a_format == AV_PIX_FMT_NONE ||
- s->a_transform_type == INT_MIN ||
- s->coded_width || s->coded_height || s->coded_format != AV_PIX_FMT_NONE) {
- av_log(avctx, AV_LOG_ERROR, "Invalid dimensions\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- if (!got_buffer) {
- av_log(avctx, AV_LOG_ERROR, "No end of header tag found\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- for (plane = 0; plane < s->planes; plane++) {
- int o, level;
-
- for (level = 0; level < (s->transform_type == 0 ? DWT_LEVELS : DWT_LEVELS_3D) ; level++) {
- if (s->transform_type == 2)
- if (level == 2 || level == 5)
- continue;
- for (o = !!level; o < 4 ; o++) {
- if (!s->plane[plane].band[level][o].read_ok) {
- ret = AVERROR_INVALIDDATA;
- goto end;
- }
- }
- }
- }
-
- if (s->transform_type == 0 && s->sample_type != 1) {
- for (plane = 0; plane < s->planes && !ret; plane++) {
- /* level 1 */
- int lowpass_height = s->plane[plane].band[0][0].height;
- int output_stride = s->plane[plane].band[0][0].a_width;
- int lowpass_width = s->plane[plane].band[0][0].width;
- int highpass_stride = s->plane[plane].band[0][1].stride;
- int act_plane = plane == 1 ? 2 : plane == 2 ? 1 : plane;
- ptrdiff_t dst_linesize;
- int16_t *low, *high, *output, *dst;
-
- if (avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16) {
- act_plane = 0;
- dst_linesize = pic->linesize[act_plane];
- } else {
- dst_linesize = pic->linesize[act_plane] / 2;
- }
-
- if (lowpass_height > s->plane[plane].band[0][0].a_height || lowpass_width > s->plane[plane].band[0][0].a_width ||
- !highpass_stride || s->plane[plane].band[0][1].width > s->plane[plane].band[0][1].a_width ||
- lowpass_width < 3 || lowpass_height < 3) {
- av_log(avctx, AV_LOG_ERROR, "Invalid plane dimensions\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- av_log(avctx, AV_LOG_DEBUG, "Decoding level 1 plane %i %i %i %i\n", plane, lowpass_height, lowpass_width, highpass_stride);
-
- low = s->plane[plane].subband[0];
- high = s->plane[plane].subband[2];
- output = s->plane[plane].l_h[0];
- dsp->vert_filter(output, output_stride, low, lowpass_width, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].subband[1];
- high = s->plane[plane].subband[3];
- output = s->plane[plane].l_h[1];
-
- dsp->vert_filter(output, output_stride, low, highpass_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].l_h[0];
- high = s->plane[plane].l_h[1];
- output = s->plane[plane].subband[0];
- dsp->horiz_filter(output, output_stride, low, output_stride, high, output_stride, lowpass_width, lowpass_height * 2);
- if (s->bpc == 12) {
- output = s->plane[plane].subband[0];
- for (i = 0; i < lowpass_height * 2; i++) {
- for (j = 0; j < lowpass_width * 2; j++)
- output[j] *= 4;
-
- output += output_stride * 2;
- }
- }
-
- /* level 2 */
- lowpass_height = s->plane[plane].band[1][1].height;
- output_stride = s->plane[plane].band[1][1].a_width;
- lowpass_width = s->plane[plane].band[1][1].width;
- highpass_stride = s->plane[plane].band[1][1].stride;
-
- if (lowpass_height > s->plane[plane].band[1][1].a_height || lowpass_width > s->plane[plane].band[1][1].a_width ||
- !highpass_stride || s->plane[plane].band[1][1].width > s->plane[plane].band[1][1].a_width ||
- lowpass_width < 3 || lowpass_height < 3) {
- av_log(avctx, AV_LOG_ERROR, "Invalid plane dimensions\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- av_log(avctx, AV_LOG_DEBUG, "Level 2 plane %i %i %i %i\n", plane, lowpass_height, lowpass_width, highpass_stride);
-
- low = s->plane[plane].subband[0];
- high = s->plane[plane].subband[5];
- output = s->plane[plane].l_h[3];
- dsp->vert_filter(output, output_stride, low, output_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].subband[4];
- high = s->plane[plane].subband[6];
- output = s->plane[plane].l_h[4];
- dsp->vert_filter(output, output_stride, low, highpass_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].l_h[3];
- high = s->plane[plane].l_h[4];
- output = s->plane[plane].subband[0];
- dsp->horiz_filter(output, output_stride, low, output_stride, high, output_stride, lowpass_width, lowpass_height * 2);
-
- output = s->plane[plane].subband[0];
- for (i = 0; i < lowpass_height * 2; i++) {
- for (j = 0; j < lowpass_width * 2; j++)
- output[j] *= 4;
-
- output += output_stride * 2;
- }
-
- /* level 3 */
- lowpass_height = s->plane[plane].band[2][1].height;
- output_stride = s->plane[plane].band[2][1].a_width;
- lowpass_width = s->plane[plane].band[2][1].width;
- highpass_stride = s->plane[plane].band[2][1].stride;
-
- if (lowpass_height > s->plane[plane].band[2][1].a_height || lowpass_width > s->plane[plane].band[2][1].a_width ||
- !highpass_stride || s->plane[plane].band[2][1].width > s->plane[plane].band[2][1].a_width ||
- lowpass_height < 3 || lowpass_width < 3 || lowpass_width * 2 > s->plane[plane].width) {
- av_log(avctx, AV_LOG_ERROR, "Invalid plane dimensions\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- av_log(avctx, AV_LOG_DEBUG, "Level 3 plane %i %i %i %i\n", plane, lowpass_height, lowpass_width, highpass_stride);
- if (s->progressive) {
- low = s->plane[plane].subband[0];
- high = s->plane[plane].subband[8];
- output = s->plane[plane].l_h[6];
- dsp->vert_filter(output, output_stride, low, output_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].subband[7];
- high = s->plane[plane].subband[9];
- output = s->plane[plane].l_h[7];
- dsp->vert_filter(output, output_stride, low, highpass_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- dst = (int16_t *)pic->data[act_plane];
- if (avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16) {
- if (plane & 1)
- dst++;
- if (plane > 1)
- dst += pic->linesize[act_plane] >> 1;
- }
- low = s->plane[plane].l_h[6];
- high = s->plane[plane].l_h[7];
-
- if (avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16 &&
- (lowpass_height * 2 > avctx->coded_height / 2 ||
- lowpass_width * 2 > avctx->coded_width / 2 )
- ) {
- ret = AVERROR_INVALIDDATA;
- goto end;
- }
-
- for (i = 0; i < s->plane[act_plane].height; i++) {
- dsp->horiz_filter_clip(dst, low, high, lowpass_width, s->bpc);
- if (avctx->pix_fmt == AV_PIX_FMT_GBRAP12 && act_plane == 3)
- process_alpha(dst, lowpass_width * 2);
- low += output_stride;
- high += output_stride;
- dst += dst_linesize;
- }
- } else {
-                av_log(avctx, AV_LOG_DEBUG, "interlaced frame ? %d\n", pic->interlaced_frame);
- pic->interlaced_frame = 1;
- low = s->plane[plane].subband[0];
- high = s->plane[plane].subband[7];
- output = s->plane[plane].l_h[6];
- dsp->horiz_filter(output, output_stride, low, output_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].subband[8];
- high = s->plane[plane].subband[9];
- output = s->plane[plane].l_h[7];
- dsp->horiz_filter(output, output_stride, low, highpass_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- dst = (int16_t *)pic->data[act_plane];
- low = s->plane[plane].l_h[6];
- high = s->plane[plane].l_h[7];
- for (i = 0; i < s->plane[act_plane].height / 2; i++) {
- interlaced_vertical_filter(dst, low, high, lowpass_width * 2, pic->linesize[act_plane]/2, act_plane);
- low += output_stride * 2;
- high += output_stride * 2;
- dst += pic->linesize[act_plane];
- }
- }
- }
- } else if (s->transform_type == 2 && (avctx->internal->is_copy || s->frame_index == 1 || s->sample_type != 1)) {
- for (plane = 0; plane < s->planes && !ret; plane++) {
- int lowpass_height = s->plane[plane].band[0][0].height;
- int output_stride = s->plane[plane].band[0][0].a_width;
- int lowpass_width = s->plane[plane].band[0][0].width;
- int highpass_stride = s->plane[plane].band[0][1].stride;
- int act_plane = plane == 1 ? 2 : plane == 2 ? 1 : plane;
- int16_t *low, *high, *output, *dst;
- ptrdiff_t dst_linesize;
-
- if (avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16) {
- act_plane = 0;
- dst_linesize = pic->linesize[act_plane];
- } else {
- dst_linesize = pic->linesize[act_plane] / 2;
- }
-
- if (lowpass_height > s->plane[plane].band[0][0].a_height || lowpass_width > s->plane[plane].band[0][0].a_width ||
- !highpass_stride || s->plane[plane].band[0][1].width > s->plane[plane].band[0][1].a_width ||
- lowpass_width < 3 || lowpass_height < 3) {
- av_log(avctx, AV_LOG_ERROR, "Invalid plane dimensions\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- av_log(avctx, AV_LOG_DEBUG, "Decoding level 1 plane %i %i %i %i\n", plane, lowpass_height, lowpass_width, highpass_stride);
-
- low = s->plane[plane].subband[0];
- high = s->plane[plane].subband[2];
- output = s->plane[plane].l_h[0];
- dsp->vert_filter(output, output_stride, low, lowpass_width, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].subband[1];
- high = s->plane[plane].subband[3];
- output = s->plane[plane].l_h[1];
- dsp->vert_filter(output, output_stride, low, highpass_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].l_h[0];
- high = s->plane[plane].l_h[1];
- output = s->plane[plane].l_h[7];
- dsp->horiz_filter(output, output_stride, low, output_stride, high, output_stride, lowpass_width, lowpass_height * 2);
- if (s->bpc == 12) {
- output = s->plane[plane].l_h[7];
- for (i = 0; i < lowpass_height * 2; i++) {
- for (j = 0; j < lowpass_width * 2; j++)
- output[j] *= 4;
-
- output += output_stride * 2;
- }
- }
-
- lowpass_height = s->plane[plane].band[1][1].height;
- output_stride = s->plane[plane].band[1][1].a_width;
- lowpass_width = s->plane[plane].band[1][1].width;
- highpass_stride = s->plane[plane].band[1][1].stride;
-
- if (lowpass_height > s->plane[plane].band[1][1].a_height || lowpass_width > s->plane[plane].band[1][1].a_width ||
- !highpass_stride || s->plane[plane].band[1][1].width > s->plane[plane].band[1][1].a_width ||
- lowpass_width < 3 || lowpass_height < 3) {
- av_log(avctx, AV_LOG_ERROR, "Invalid plane dimensions\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- av_log(avctx, AV_LOG_DEBUG, "Level 2 lowpass plane %i %i %i %i\n", plane, lowpass_height, lowpass_width, highpass_stride);
-
- low = s->plane[plane].l_h[7];
- high = s->plane[plane].subband[5];
- output = s->plane[plane].l_h[3];
- dsp->vert_filter(output, output_stride, low, output_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].subband[4];
- high = s->plane[plane].subband[6];
- output = s->plane[plane].l_h[4];
- dsp->vert_filter(output, output_stride, low, highpass_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].l_h[3];
- high = s->plane[plane].l_h[4];
- output = s->plane[plane].l_h[7];
- dsp->horiz_filter(output, output_stride, low, output_stride, high, output_stride, lowpass_width, lowpass_height * 2);
-
- output = s->plane[plane].l_h[7];
- for (i = 0; i < lowpass_height * 2; i++) {
- for (j = 0; j < lowpass_width * 2; j++)
- output[j] *= 4;
- output += output_stride * 2;
- }
-
- low = s->plane[plane].subband[7];
- high = s->plane[plane].subband[9];
- output = s->plane[plane].l_h[3];
- dsp->vert_filter(output, output_stride, low, highpass_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].subband[8];
- high = s->plane[plane].subband[10];
- output = s->plane[plane].l_h[4];
- dsp->vert_filter(output, output_stride, low, highpass_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].l_h[3];
- high = s->plane[plane].l_h[4];
- output = s->plane[plane].l_h[9];
- dsp->horiz_filter(output, output_stride, low, output_stride, high, output_stride, lowpass_width, lowpass_height * 2);
-
- lowpass_height = s->plane[plane].band[4][1].height;
- output_stride = s->plane[plane].band[4][1].a_width;
- lowpass_width = s->plane[plane].band[4][1].width;
- highpass_stride = s->plane[plane].band[4][1].stride;
- av_log(avctx, AV_LOG_DEBUG, "temporal level %i %i %i %i\n", plane, lowpass_height, lowpass_width, highpass_stride);
-
- if (lowpass_height > s->plane[plane].band[4][1].a_height || lowpass_width > s->plane[plane].band[4][1].a_width ||
- !highpass_stride || s->plane[plane].band[4][1].width > s->plane[plane].band[4][1].a_width ||
- lowpass_width < 3 || lowpass_height < 3) {
- av_log(avctx, AV_LOG_ERROR, "Invalid plane dimensions\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- low = s->plane[plane].l_h[7];
- high = s->plane[plane].l_h[9];
- output = s->plane[plane].l_h[7];
- for (i = 0; i < lowpass_height; i++) {
- inverse_temporal_filter(low, high, lowpass_width);
- low += output_stride;
- high += output_stride;
- }
- if (s->progressive) {
- low = s->plane[plane].l_h[7];
- high = s->plane[plane].subband[15];
- output = s->plane[plane].l_h[6];
- dsp->vert_filter(output, output_stride, low, output_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].subband[14];
- high = s->plane[plane].subband[16];
- output = s->plane[plane].l_h[7];
- dsp->vert_filter(output, output_stride, low, highpass_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].l_h[9];
- high = s->plane[plane].subband[12];
- output = s->plane[plane].l_h[8];
- dsp->vert_filter(output, output_stride, low, output_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].subband[11];
- high = s->plane[plane].subband[13];
- output = s->plane[plane].l_h[9];
- dsp->vert_filter(output, output_stride, low, highpass_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- if (s->sample_type == 1)
- continue;
-
- dst = (int16_t *)pic->data[act_plane];
- if (avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16) {
- if (plane & 1)
- dst++;
- if (plane > 1)
- dst += pic->linesize[act_plane] >> 1;
- }
-
- if (avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16 &&
- (lowpass_height * 2 > avctx->coded_height / 2 ||
- lowpass_width * 2 > avctx->coded_width / 2 )
- ) {
- ret = AVERROR_INVALIDDATA;
- goto end;
- }
-
- low = s->plane[plane].l_h[6];
- high = s->plane[plane].l_h[7];
- for (i = 0; i < s->plane[act_plane].height; i++) {
- dsp->horiz_filter_clip(dst, low, high, lowpass_width, s->bpc);
- low += output_stride;
- high += output_stride;
- dst += dst_linesize;
- }
- } else {
- pic->interlaced_frame = 1;
- low = s->plane[plane].l_h[7];
- high = s->plane[plane].subband[14];
- output = s->plane[plane].l_h[6];
- dsp->horiz_filter(output, output_stride, low, output_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].subband[15];
- high = s->plane[plane].subband[16];
- output = s->plane[plane].l_h[7];
- dsp->horiz_filter(output, output_stride, low, highpass_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].l_h[9];
- high = s->plane[plane].subband[11];
- output = s->plane[plane].l_h[8];
- dsp->horiz_filter(output, output_stride, low, output_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- low = s->plane[plane].subband[12];
- high = s->plane[plane].subband[13];
- output = s->plane[plane].l_h[9];
- dsp->horiz_filter(output, output_stride, low, highpass_stride, high, highpass_stride, lowpass_width, lowpass_height);
-
- if (s->sample_type == 1)
- continue;
-
- dst = (int16_t *)pic->data[act_plane];
- low = s->plane[plane].l_h[6];
- high = s->plane[plane].l_h[7];
- for (i = 0; i < s->plane[act_plane].height / 2; i++) {
- interlaced_vertical_filter(dst, low, high, lowpass_width * 2, pic->linesize[act_plane]/2, act_plane);
- low += output_stride * 2;
- high += output_stride * 2;
- dst += pic->linesize[act_plane];
- }
- }
- }
- }
-
- if (s->transform_type == 2 && s->sample_type == 1) {
- int16_t *low, *high, *dst;
- int output_stride, lowpass_height, lowpass_width;
- ptrdiff_t dst_linesize;
-
- for (plane = 0; plane < s->planes; plane++) {
- int act_plane = plane == 1 ? 2 : plane == 2 ? 1 : plane;
-
- if (avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16) {
- act_plane = 0;
- dst_linesize = pic->linesize[act_plane];
- } else {
- dst_linesize = pic->linesize[act_plane] / 2;
- }
-
- lowpass_height = s->plane[plane].band[4][1].height;
- output_stride = s->plane[plane].band[4][1].a_width;
- lowpass_width = s->plane[plane].band[4][1].width;
-
- if (lowpass_height > s->plane[plane].band[4][1].a_height || lowpass_width > s->plane[plane].band[4][1].a_width ||
- s->plane[plane].band[4][1].width > s->plane[plane].band[4][1].a_width ||
- lowpass_width < 3 || lowpass_height < 3) {
- av_log(avctx, AV_LOG_ERROR, "Invalid plane dimensions\n");
- ret = AVERROR(EINVAL);
- goto end;
- }
-
- if (s->progressive) {
- dst = (int16_t *)pic->data[act_plane];
- low = s->plane[plane].l_h[8];
- high = s->plane[plane].l_h[9];
-
- if (avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16) {
- if (plane & 1)
- dst++;
- if (plane > 1)
- dst += pic->linesize[act_plane] >> 1;
- }
-
- if (avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16 &&
- (lowpass_height * 2 > avctx->coded_height / 2 ||
- lowpass_width * 2 > avctx->coded_width / 2 )
- ) {
- ret = AVERROR_INVALIDDATA;
- goto end;
- }
-
- for (i = 0; i < s->plane[act_plane].height; i++) {
- dsp->horiz_filter_clip(dst, low, high, lowpass_width, s->bpc);
- low += output_stride;
- high += output_stride;
- dst += dst_linesize;
- }
- } else {
- dst = (int16_t *)pic->data[act_plane];
- low = s->plane[plane].l_h[8];
- high = s->plane[plane].l_h[9];
- for (i = 0; i < s->plane[act_plane].height / 2; i++) {
- interlaced_vertical_filter(dst, low, high, lowpass_width * 2, pic->linesize[act_plane]/2, act_plane);
- low += output_stride * 2;
- high += output_stride * 2;
- dst += pic->linesize[act_plane];
- }
- }
- }
- }
-
- if (avctx->pix_fmt == AV_PIX_FMT_BAYER_RGGB16)
- process_bayer(pic, s->bpc);
-end:
- if (ret < 0)
- return ret;
-
- *got_frame = 1;
- return avpkt->size;
-}
-
-static av_cold int cfhd_close(AVCodecContext *avctx)
-{
- CFHDContext *s = avctx->priv_data;
-
- free_buffers(s);
-
- return 0;
-}
-
-#if HAVE_THREADS
-static int update_thread_context(AVCodecContext *dst, const AVCodecContext *src)
-{
- CFHDContext *psrc = src->priv_data;
- CFHDContext *pdst = dst->priv_data;
- int ret;
-
- if (dst == src || psrc->transform_type == 0)
- return 0;
-
- if (pdst->plane[0].idwt_size != psrc->plane[0].idwt_size ||
- pdst->a_format != psrc->a_format ||
- pdst->a_width != psrc->a_width ||
- pdst->a_height != psrc->a_height ||
- pdst->a_transform_type != psrc->a_transform_type)
- free_buffers(pdst);
-
- pdst->a_format = psrc->a_format;
- pdst->a_width = psrc->a_width;
- pdst->a_height = psrc->a_height;
- pdst->a_transform_type = psrc->a_transform_type;
- pdst->transform_type = psrc->transform_type;
- pdst->progressive = psrc->progressive;
- pdst->planes = psrc->planes;
-
- if (!pdst->plane[0].idwt_buf) {
- pdst->coded_width = pdst->a_width;
- pdst->coded_height = pdst->a_height;
- pdst->coded_format = pdst->a_format;
- pdst->transform_type = pdst->a_transform_type;
- ret = alloc_buffers(dst);
- if (ret < 0)
- return ret;
- }
-
- for (int plane = 0; plane < pdst->planes; plane++) {
- memcpy(pdst->plane[plane].band, psrc->plane[plane].band, sizeof(pdst->plane[plane].band));
- memcpy(pdst->plane[plane].idwt_buf, psrc->plane[plane].idwt_buf,
- pdst->plane[plane].idwt_size * sizeof(int16_t));
- }
-
- return 0;
-}
-#endif
-
-const FFCodec ff_cfhd_decoder = {
- .p.name = "cfhd",
- CODEC_LONG_NAME("GoPro CineForm HD"),
- .p.type = AVMEDIA_TYPE_VIDEO,
- .p.id = AV_CODEC_ID_CFHD,
- .priv_data_size = sizeof(CFHDContext),
- .init = cfhd_init,
- .close = cfhd_close,
- FF_CODEC_DECODE_CB(cfhd_decode),
- UPDATE_THREAD_CONTEXT(update_thread_context),
- .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_FRAME_THREADS,
- .caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
-};
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Minecraft Trial on iPad A Step-by-Step Guide.md b/spaces/congsaPfin/Manga-OCR/logs/Download Minecraft Trial on iPad A Step-by-Step Guide.md
deleted file mode 100644
index 481a6ba0d2acbf561a1059b343bb8730e11dfc98..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Minecraft Trial on iPad A Step-by-Step Guide.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
How to Download Minecraft Trial on iPad
-
Minecraft is one of the most popular and creative games in the world, with over 200 million copies sold and more than 130 million monthly active players. It is a sandbox game that lets you create your own virtual world using blocks, tools, and resources. You can play solo or with friends, online or offline, in survival or creative mode. You can also explore infinite worlds, discover new biomes, creatures, and structures, and even learn coding, math, science, and art with Minecraft Education Edition.
If you have an iPad, you can try Minecraft for free for a limited time and see for yourself why millions of people love this game. In this article, we will show you how to download the Minecraft trial on iPad, how to play it, and how to upgrade to the full version if you want more.
-
What is Minecraft and why you should try it
-
Minecraft features and gameplay
-
Minecraft is a game that gives you complete freedom to create and explore. You can build anything you can imagine with blocks, from simple houses to complex machines. You can also mine resources, craft items, farm crops, breed animals, fight enemies, trade with villagers, and more. You can play in different modes, such as survival, where you have to gather resources and fend off dangers; creative, where you have unlimited resources and can build anything; adventure, where you can explore custom maps and quests; or spectator, where you can watch other players.
-
Minecraft also supports multiplayer, where you can join online servers and play with other people around the world. You can also create your own server or join a realm, which is a private online world that you can invite your friends to. You can also play locally with other devices on the same Wi-Fi network.
-
Minecraft benefits and educational value
-
Minecraft is not only fun but also educational. It can help you develop your creativity, problem-solving, collaboration, and communication skills. It can also teach you about various subjects, such as math, science, history, art, and coding. For example, you can learn about geometry by building shapes and structures; physics by creating machines and circuits; biology by studying animals and plants; history by recreating historical landmarks; art by making pixel art and sculptures; and coding by using commands and redstone.
-
Minecraft also has a special version called Minecraft Education Edition, which is designed for teachers and students. It has additional features and content that support learning outcomes across subjects and age levels. It also has a classroom mode that allows teachers to manage settings, chat with students, and monitor their progress.
-
-
How to get the Minecraft free trial for iPad
-
Step 1: Go to the App Store
-
To download the Minecraft trial on iPad, you need to go to the App Store first. You can do this by tapping on the App Store icon on your home screen or by searching for it in Spotlight.
-
Step 2: Search for Minecraft
-
Once you are in the App Store, you need to search for Minecraft. You can do this by tapping on the magnifying glass icon at the bottom right corner of the screen and typing "Minecraft" in the search bar. You should see the Minecraft app as the first result, with a green grass block icon and the word "Mojang" below it.
-
Step 3: Tap on the "Get" button
-
To download the Minecraft trial on iPad, you need to tap on the "Get" button next to the app name. This will start the download process. You may need to enter your Apple ID and password if you are not signed in to the App Store.
-
Step 4: Confirm your purchase with Touch ID or Face ID
-
Before you can download the Minecraft trial on iPad, you need to confirm your purchase with Touch ID or Face ID. This is a security feature that prevents unauthorized purchases on your device. To do this, you need to place your finger on the home button or look at the camera, depending on your device model. You should see a message saying "Done" when the confirmation is successful.
-
Step 5: Wait for the app to download and install
-
The last step to download the Minecraft trial on iPad is to wait for the app to download and install. You can see the progress of the download by looking at the circle around the app icon. When the circle is full, it means that the app is ready to use. You can also check the status of the download by tapping on your profile picture at the top right corner of the screen and scrolling down to see your purchased apps.
-
How to play the Minecraft trial on iPad
-
Step 1: Launch the app and sign in with your Microsoft account
-
To play the Minecraft trial on iPad, you need to launch the app first. You can do this by tapping on the app icon on your home screen or by searching for it in Spotlight. When you open the app, you will see a splash screen with the Minecraft logo and some tips. After a few seconds, you will be taken to the main menu, where you can choose to play, sign in, or access settings.
-
To play online multiplayer or access some features, you need to sign in with your Microsoft account. You can do this by tapping on the "Sign in for free" button at the bottom of the screen and following the instructions. If you don't have a Microsoft account, you can create one for free by tapping on "Create one!" and filling out your details. You will also get a free trial of Minecraft Realms Plus, which is a subscription service that lets you create and join private online worlds.
-
Step 2: Choose a game mode and a world
-
To play the Minecraft trial on iPad, you need to choose a game mode and a world. You can do this by tapping on "Play" at the main menu and selecting one of the options: Worlds, Servers, Friends, or Featured Servers. Each option has different types of worlds that you can join or create.
-
Worlds are single-player or local multiplayer worlds that are stored on your device. You can create your own world by tapping on "Create New" and choosing a template, a seed, or a custom setting. You can also join an existing world by tapping on it and choosing "Play".
-
Servers are online multiplayer worlds that are hosted by other players or organizations. You can join a server by tapping on it and choosing "Join Server". You can also add a server by tapping on "Add Server" and entering its name, address, and port.
-
Friends are online multiplayer worlds that are hosted by your friends or other players that you have added as friends. You can join a friend's world by tapping on their name and choosing "Join World". You can also invite a friend to your world by tapping on "Invite to Game" and selecting them from your friends list.
-
Featured Servers are online multiplayer worlds that are curated by Mojang and have special features, such as mini-games, maps, and events. You can join a featured server by tapping on it and choosing "Join Server". You can also browse different categories of featured servers by swiping left or right.
-
Step 3: Explore, build, and survive
-
To play the Minecraft trial on iPad, you need to explore, build, and survive in your chosen world. You can move around by using the virtual joystick on the left side of the screen and look around by dragging your finger on the right side of the screen. You can also jump by tapping on the button at the bottom center of the screen and, in Creative mode, fly by double-tapping that button and holding it. You can switch between first-person and third-person view by tapping on the button at the top right corner of the screen.
-
You can build anything you want by using blocks, which are the basic units of the game. You can access your inventory by tapping on the button at the bottom right corner of the screen and select a block by tapping on it. You can place a block by tapping on the screen where you want it to go and break a block by holding your finger on it. You can also craft items by using a crafting table, which you can make by placing four wooden planks in a square.
-
You can survive by managing your health and hunger, which are shown by the bars at the top of the screen. You can restore your health by eating food, which you can get by farming, hunting, or fishing. You can also heal yourself with potions, which you brew at a brewing stand made from a blaze rod and three cobblestones. You can lose health by falling, drowning, burning, or getting attacked by enemies, which are called mobs. You can defend yourself with weapons such as swords, axes, and bows, which you can craft from various materials; tridents also exist, but they can only be obtained from drowned, not crafted.
-
Step 4: Save your progress and exit the game
-
To play the Minecraft trial on iPad, you need to save your progress and exit the game when you are done. You can do this by tapping on the pause button at the top of the screen and choosing "Save and Quit". This will take you back to the main menu, where you can choose to play another world or exit the app.
-
How to upgrade to the full version of Minecraft on iPad
-
Step 1: Go to the App Store and open the Minecraft app page
-
To upgrade to the full version of Minecraft on iPad, you need to go to the App Store first. You can do this by tapping on the App Store icon on your home screen or by searching for it in Spotlight. Once you are in the App Store, you need to open the Minecraft app page. You can do this by tapping on the magnifying glass icon at the bottom right corner of the screen and typing "Minecraft" in the search bar. You should see the Minecraft app as the first result, with a green grass block icon and the word "Mojang" below it.
-
Step 2: Tap on the "Buy" button and confirm your purchase
-
To upgrade to the full version of Minecraft on iPad, you need to tap on the "Buy" button next to the app name. This will start the purchase process. You may need to enter your Apple ID and password if you are not signed in to the App Store. You may also need to confirm your purchase with Touch ID or Face ID, depending on your device model. You should see a message saying "Thank you" when the purchase is successful.
-
Step 3: Enjoy unlimited access to all the features and content of Minecraft
-
The last step to upgrade to the full version of Minecraft on iPad is to enjoy unlimited access to all the features and content of the game. You can do this by launching the app and signing in with your Microsoft account. You will see that the "Get" button has changed to a "Play" button, and that you can access all the game modes, worlds, servers, friends, and featured servers. You will also be able to download and use skins, textures, maps, and add-ons from the Minecraft Marketplace, which is a store where you can buy or get free content created by other players and developers.
-
Conclusion and FAQs
-
Minecraft is a game that lets you create your own virtual world using blocks, tools, and resources. You can play solo or with friends, online or offline, in survival or creative mode. You can also explore infinite worlds, discover new biomes, creatures, and structures, and even learn coding, math, science, and art with Minecraft Education Edition.
-
If you have an iPad, you can try Minecraft for free for a limited time and see for yourself why millions of people love this game. You just need to download the Minecraft trial on iPad from the App Store, sign in with your Microsoft account, choose a game mode and a world, and start playing. You can also upgrade to the full version of Minecraft on iPad for a one-time payment of $6.99 and enjoy unlimited access to all the features and content of the game.
-
Here are some frequently asked questions about Minecraft on iPad:
-
-
Q: How long does the Minecraft trial on iPad last?
A: The Minecraft trial on iPad lasts for 90 minutes of gameplay. After that, you will need to upgrade to the full version of the game to continue playing.
-
Q: Can I play with other players on the Minecraft trial on iPad?
A: Yes, you can play with other players on the Minecraft trial on iPad if they are on the same Wi-Fi network as you or if they have invited you to their realm. However, you cannot join online servers or featured servers on the trial version.
-
Q: Can I use skins, textures, maps, and add-ons on the Minecraft trial on iPad?
A: No, you cannot use skins, textures, maps, and add-ons on the Minecraft trial on iPad. You will need to upgrade to the full version of the game to access the Minecraft Marketplace and download or buy content from there.
-
Q: Can I transfer my progress from the Minecraft trial on iPad to the full version?
A: Yes, you can transfer your progress from the Minecraft trial on iPad to the full version. Your worlds will be saved on your device and will be available when you upgrade to the full version. However, if you delete the app or change your device, you may lose your progress unless you back it up using iCloud or iTunes.
-
Q: Can I play Minecraft Education Edition on iPad?
A: Yes, you can play Minecraft Education Edition on iPad if you have a valid Office 365 Education account. You can download the app from the App Store and sign in with your account. You will be able to access all the features and content of Minecraft Education Edition, such as lessons, worlds, code builder, classroom mode, and more.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Ten Blast for Free and Enjoy a Fun and Original Puzzle Game.md b/spaces/congsaPfin/Manga-OCR/logs/Download Ten Blast for Free and Enjoy a Fun and Original Puzzle Game.md
deleted file mode 100644
index 019e0b3a565cfb1792248658f3126c366db6d357..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Ten Blast for Free and Enjoy a Fun and Original Puzzle Game.md
+++ /dev/null
@@ -1,150 +0,0 @@
-
-
Ten Blast APK: A Fun and Unique Number Puzzle Game
-
If you are looking for a new and exciting puzzle game to challenge your brain and relax your mind, you should try Ten Blast APK. This is a brand new number puzzle game that is designed by Kiwi Fun Games, a developer that has created many popular puzzle games such as Mahjong Solitaire, Ten Crush, and Ten Pair. In this article, we will tell you everything you need to know about Ten Blast APK, including what it is, how to download and install it on your Android device, and how to play it on your PC or Mac using an emulator.
Ten Blast APK is a puzzle game that is based on the simple concept of matching numbers. The goal of the game is to clear the board by blasting the same numbers (such as 4-4, 9-9, etc.) or pairs that add up to 10 (such as 4-6, 3-7, etc.). You can blast the pairs vertically, horizontally, or diagonally as long as there is no barrier between them. The game has many levels with different targets and challenges that will test your logic and strategy skills. You can also use various props to help you pass the levels faster and easier.
-
The gameplay of Ten Blast APK
-
The gameplay of Ten Blast APK is very simple and intuitive. You just need to tap on the numbers or pairs that you want to blast and they will disappear from the board. You need to complete the target for each level before you run out of moves or time. For example, some levels require you to blast a certain number of numbers or pairs, while others require you to clear a certain area of the board. You can see the target and the remaining moves or time at the top of the screen. You can also see your score and coins at the bottom of the screen.
-
The features of Ten Blast APK
-
Ten Blast APK has many features that make it a fun and unique number puzzle game. Some of these features are:
-
-
It has a super fun and original design that is different from other number puzzle games.
-
It has hundreds of levels with various difficulties and challenges that will keep you entertained for hours.
-
It has colorful and cute graphics and animations that will brighten up your mood.
-
It has relaxing and soothing music and sound effects that will calm your nerves.
-
It has various props that you can use to blast more numbers or pairs, such as bombs, hammers, magnets, etc.
-
It has daily rewards and bonuses that you can claim to get more coins and props.
-
It has leaderboards and achievements that you can compete with other players around the world.
-
It has a user-friendly interface and easy controls that make it suitable for all ages.
-
-
How to download and install Ten Blast APK on your Android device?
-
If you want to play Ten Blast APK on your Android device, you need to download and install it first. Here are the steps that you need to follow:
-
-
The steps to download and install Ten Blast APK
-
-
Go to a trusted source that provides the latest version of Ten Blast APK. For example, you can go to AppBrain or PlayMods.
-
Click on the download button or link to start downloading the APK file.
-
Once the download is complete, locate the APK file in your device's file manager or downloads folder.
-
Tap on the APK file to start installing it. You may need to enable the unknown sources option in your device's settings to allow the installation of apps from outside the Google Play Store.
-
Follow the instructions on the screen to complete the installation process.
-
Once the installation is done, you can find the Ten Blast APK icon on your device's home screen or app drawer.
-
Tap on the icon to launch the game and enjoy!
-
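If you manage your Android device from a computer, sideloading the same APK over USB is an alternative to tapping the file in step 4 above. The snippet below is only a sketch: it assumes the Android platform tools (adb) are installed, USB debugging is enabled on the device, and the file name matches whatever you actually downloaded.

```python
# Sketch: sideload a downloaded APK with adb instead of installing on-device.
# Assumes adb is on PATH and USB debugging is enabled on the phone or tablet.
import subprocess

apk_path = "ten_blast.apk"  # hypothetical file name; use your actual download

# "-r" reinstalls/updates the app if it is already present on the device.
subprocess.run(["adb", "install", "-r", apk_path], check=True)
```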
-
The benefits of downloading and installing Ten Blast APK from a trusted source
-
There are many benefits of downloading and installing Ten Blast APK from a trusted source, such as:
-
-
You can get the latest version of the game with all the new features and updates.
-
You can avoid any malware or viruses that may harm your device or compromise your privacy.
-
You can save your data and storage space by downloading a smaller APK file than the one from the Google Play Store.
-
You can bypass any regional or device restrictions that may prevent you from accessing the game from the Google Play Store.
-
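One practical way to build that confidence is to compare the file's checksum against one published by the download site, when such a checksum is available. A minimal sketch, with a hypothetical file name:

```python
# Sketch: compute a SHA-256 checksum of the downloaded APK so you can compare
# it with a checksum published by the download site (if one is provided).
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("ten_blast.apk"))  # hypothetical file name
```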
-
How to play Ten Blast APK on your PC or Mac using an emulator?
-
If you want to play Ten Blast APK on your PC or Mac, you need to use an emulator. An emulator is a software that allows you to run Android apps on your computer. Here are some reasons why you may want to play Ten Blast APK on your PC or Mac and how to do it:
-
The advantages of playing Ten Blast APK on your PC or Mac
-
Playing Ten Blast APK on your PC or Mac has some advantages over playing it on your Android device, such as:
-
-
You can enjoy a bigger and better screen that enhances the graphics and animations of the game.
-
You can use a keyboard and mouse that offer more precise and comfortable controls than a touchscreen.
-
You can avoid any battery or performance issues that may affect your Android device while playing the game.
-
You can access more features and options that may not be available on your Android device, such as recording, streaming, or editing your gameplay.
-
-
The best emulator to play Ten Blast APK on your PC or Mac
-
There are many emulators that you can use to play Ten Blast APK on your PC or Mac, but we recommend using BlueStacks. BlueStacks is one of the most popular and reliable emulators that has millions of users worldwide. It is compatible with Windows and Mac OS and supports a wide range of Android apps and games. It also has many features and benefits that make it the best emulator to play Ten Blast APK on your PC or Mac, such as:
-
-
It has a fast and smooth performance that ensures a lag-free and glitch-free gaming experience.
-
It has a user-friendly and customizable interface that allows you to adjust the settings and preferences according to your needs.
-
It has a multi-instance mode that allows you to run multiple apps or games at the same time on different windows.
-
It has a keymapping tool that allows you to assign keyboard and mouse shortcuts to any action in the game.
-
It has a game center that allows you to discover and download new and popular games from various genres and categories.
-
-
To play Ten Blast APK on your PC or Mac using BlueStacks, you need to follow these steps:
-
-
Download and install BlueStacks from its official website.
-
Launch BlueStacks and sign in with your Google account or create a new one if you don't have one.
-
Go to the game center and search for Ten Blast APK in the search bar.
-
Select Ten Blast APK from the results and click on install. You can also drag and drop the APK file that you downloaded earlier into BlueStacks if you prefer.
-
Wait for the installation to finish and then click on open to start playing Ten Blast APK on your PC or Mac.
-
-
Conclusion
-
Ten Blast APK is a fun and unique number puzzle game that will challenge your brain and relax your mind. You can download and install it on your Android device from a trusted source or play it on your PC or Mac using an emulator. Either way, you will enjoy a super fun and original design, hundreds of levels with various difficulties and challenges, colorful and cute graphics and animations, relaxing and soothing music and sound effects, various props to help you pass the levels faster and easier, daily rewards and bonuses, leaderboards and achievements, and a user-friendly interface and easy controls. If you are looking for a new and exciting puzzle game, you should try Ten Blast APK today!
-
FAQs
-
Here are some frequently asked questions about Ten Blast APK that you may find helpful:
-
Q: Is Ten Blast APK free to play?
A: Yes, Ten Blast APK is free to play and download. However, it may contain some in-app purchases and ads that you can disable or remove if you wish.
-
Q: Is Ten Blast APK safe to play?
A: Yes, Ten Blast APK is safe to play as long as you download and install it from a trusted source or use an emulator. It does not contain any malware or viruses that may harm your device or compromise your privacy.
-
Q: Can I play Ten Blast APK offline?
A: Yes, you can play Ten Blast APK offline without an internet connection. However, you may not be able to access some features that require an internet connection, such as leaderboards, achievements, and daily rewards.
-
Q: How can I get more coins and props in Ten Blast APK?
A: You can get more coins and props in Ten Blast APK by completing the levels, claiming the daily rewards and bonuses, watching ads, or making in-app purchases.
-
Q: How can I contact the developer of Ten Blast APK?
A: You can contact the developer of Ten Blast APK by sending an email to kiwifungames@gmail.com or visiting their Facebook page.
-
I hope you enjoyed reading this article and found it useful. If you have any questions or feedback, please feel free to leave a comment below. Thank you for your time and attention.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Dummynation The Ultimate Strategy Game for World Domination.md b/spaces/congsaPfin/Manga-OCR/logs/Dummynation The Ultimate Strategy Game for World Domination.md
deleted file mode 100644
index 861127ad2359f34bb4762019786aa4a60b30172c..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Dummynation The Ultimate Strategy Game for World Domination.md
+++ /dev/null
@@ -1,154 +0,0 @@
-
-
How to Download Dummy Nation: A Guide for Strategy Game Lovers
-
If you are a fan of strategy games and geopolitics, you might have heard of Dummy Nation, a game that lets you take control of a country and lead it to world domination. But how can you download this game and start playing it on your device? In this article, we will show you how to download Dummy Nation on different platforms, how to install and run it, and why you should give it a try.
-
What is Dummy Nation and why should you play it?
-
Dummy Nation is a strategy game developed by Alejandro Hernández Ferrero, released in April 2022. It is available for PC and mobile devices, and it has received very positive reviews from players and critics alike.
In Dummy Nation, you are given unlimited power over a country, with a single promise to fulfill: world domination. How you manage to achieve it is up to you. You can expand your territory by military occupation, manipulate diplomatic relations, use your country's resources for research and development, or pursue economic growth. You can also choose from different scenarios, such as historical, modern, or fictional ones, or create your own custom map.
-
Dummy Nation has a unique feature that makes it stand out from other strategy games: real-time shifting borders. This means that when you conquer a place, it does not immediately become part of your country, but rather it is gradually annexed over time. However, you also have to deal with resistance forces that can arise and slow down your conquest. This adds a layer of realism and challenge to the game, as you have to balance your expansion and consolidation strategies.
-
The benefits of playing Dummy Nation
-
Dummy Nation is not only fun and engaging, but also educational and informative. By playing this game, you can learn more about geography, history, politics, economics, and culture. You can also develop your critical thinking, problem-solving, decision-making, and leadership skills. Moreover, you can enjoy the game's beautiful graphics, smooth gameplay, and immersive soundtrack.
-
How to download Dummy Nation on different platforms
-
Dummy Nation is available for both PC and mobile devices. Depending on your preferred platform, there are different ways to download the game.
-
How to download Dummy Nation on PC
-
If you want to play Dummy Nation on your PC, you have three options:
-
-
How to download Dummy Nation from Steam
-
Steam is one of the most popular platforms for downloading PC games. To download Dummy Nation from Steam, you need to follow these steps:
-
-
Create a Steam account or log in if you already have one.
-
Download and install the Steam client on your PC.
-
Open the Steam client and search for "Dummy Nation" in the store.
-
Click on the game's page and purchase it for $11.99 or add it to your wishlist.
-
Click on the "Install" button and wait for the download to finish.
-
Launch the game from your Steam library and enjoy!
-How to download Dummy Nation from BlueStacks
-
BlueStacks is an emulator that allows you to play mobile games on your PC. To download Dummy Nation from BlueStacks, you need to follow these steps:
-
-
Download and install BlueStacks on your PC.
-
Open BlueStacks and sign in with your Google account or create a new one.
-
Go to the Google Play Store app and search for "Dummy Nation".
-
Click on the game's page and install it for free.
-
Go to the home screen of BlueStacks and click on the Dummy Nation icon.
-
Launch the game and enjoy!
-
-
How to download Dummy Nation from GameLoop
-
GameLoop is another emulator that allows you to play mobile games on your PC. To download Dummy Nation from GameLoop, you need to follow these steps:
-
-
Download and install GameLoop on your PC.
-
Open GameLoop and go to the "Game Center" tab.
-
Search for "Dummy Nation" in the search bar.
-
Click on the game's page and install it for free.
-
Go to the "My Games" tab and click on the Dummy Nation icon.
-
Launch the game and enjoy!
-
-
How to download Dummy Nation on mobile devices
-
If you want to play Dummy Nation on your mobile device, you have two options:
-
How to download Dummy Nation from Google Play Store
-
If you have an Android device, you can download Dummy Nation from the Google Play Store. To do so, you need to follow these steps:
-
-
Open the Google Play Store app on your device and sign in with your Google account or create a new one.
-
Search for "Dummy Nation" in the search bar.
-
Click on the game's page and install it for free.
-
Go to your app drawer and tap on the Dummy Nation icon.
-
Launch the game and enjoy!
-
-
How to download Dummy Nation from App Store
-
If you have an iOS device, you can download Dummy Nation from the App Store. To do so, you need to follow these steps:
-
-
Open the App Store app on your device and sign in with your Apple ID or create a new one.
-
Search for "Dummy Nation" in the search bar.
-
Click on the game's page and install it for free.
-
Go to your home screen and tap on the Dummy Nation icon.
-
Launch the game and enjoy!
-
-
How to install and run Dummy Nation on your device
-
After downloading Dummy Nation, you need to install and run it on your device. Here are some tips on how to do that:
-
How to install Dummy Nation on PC
-
If you downloaded Dummy Nation from Steam, BlueStacks, or GameLoop, you don't need to do anything else, as the installation process is automatic. However, if you downloaded Dummy Nation from another source, such as a website or a torrent, you need to follow these steps:
-
-
Locate the downloaded file, which should be a .exe or a .zip file.
-
If it is a .zip file, extract it using a program like WinRAR or 7-Zip.
-
If it is a .exe file, double-click on it and follow the instructions on the screen.
-
Select a destination folder for the game files and click on "Install".
-
Wait for the installation to finish and click on "Finish".
-
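If the download arrived as a .zip archive, Python's standard library can unpack it just as well as WinRAR or 7-Zip. A minimal sketch, using hypothetical file and folder names:

```python
# Sketch: extract a downloaded .zip archive with the standard library,
# as an alternative to WinRAR or 7-Zip. File names here are hypothetical.
import zipfile
from pathlib import Path

archive = Path("dummy_nation.zip")
destination = Path("DummyNation")
destination.mkdir(exist_ok=True)

with zipfile.ZipFile(archive) as zf:
    zf.extractall(destination)
    print(f"Extracted {len(zf.namelist())} files to {destination}")
```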
-
How to install Dummy Nation on mobile devices
-
If you downloaded Dummy Nation from Google Play Store or App Store, you don't need to do anything else, as the installation process is automatic. However, if you downloaded Dummy Nation from another source, such as a website or an APK file, you need to follow these steps:
-
Go to your device's settings and enable the option to install apps from unknown sources. This may vary depending on your device model and OS version, but it is usually found under "Security" or "Privacy".
-
Locate the downloaded file, which should be an .apk file.
-
Tap on the file and confirm the installation.
-
Wait for the installation to finish and tap on "Open".
-
-
How to run Dummy Nation and start playing
-
Once you have installed Dummy Nation on your device, you can run it and start playing. Here are some tips on how to do that:
-
-
If you are playing on PC, you can run the game by clicking on its shortcut on your desktop or in your start menu. Alternatively, you can run it from the platform you downloaded it from, such as Steam, BlueStacks, or GameLoop.
-
If you are playing on mobile devices, you can run the game by tapping on its icon on your home screen or app drawer.
-
When you run the game for the first time, you will be asked to choose a language and agree to the terms of service and privacy policy. You can also adjust the game settings, such as sound, graphics, and controls.
-
After that, you can choose a scenario to play or create your own custom map. You can also access the tutorial, which will guide you through the basics of the game.
-
Enjoy the game and have fun!
-
-
Conclusion and FAQs
-
Dummy Nation is a strategy game that lets you take control of a country and lead it to world domination. It has real-time shifting borders, different scenarios, and a streamlined gameplay. It is available for PC and mobile devices, and you can download it from various platforms or sources. In this article, we showed you how to download Dummy Nation on different platforms, how to install and run it, and why you should play it. We hope you found this article helpful and informative. If you have any questions or feedback, please let us know in the comments section below. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions about Dummy Nation:
-
-
Is Dummy Nation free to play?
-
Dummy Nation is free to play on mobile devices, but it costs $11.99 on PC. However, there are no in-app purchases or ads in the game.
-
Is Dummy Nation multiplayer?
-
Dummy Nation is currently a single-player game, but the developer has plans to add multiplayer features in the future updates.
-
Is Dummy Nation realistic?
-
Dummy Nation is not meant to be a realistic simulation of geopolitics, but rather a simplified and stylized version of it. The game does not reflect the actual political views or opinions of the developer or anyone else.
-
How can I contact the developer of Dummy Nation?
-
You can contact the developer of Dummy Nation by sending an email to dummy.nation.game@gmail.com or by following him on Twitter @DummyNationGame.
-
How can I support the development of Dummy Nation?
-
You can support the development of Dummy Nation by buying the game on PC, leaving a positive review or rating on the platform you downloaded it from, sharing the game with your friends and family, or donating to the developer's Patreon page.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/ExpressVPN The Fastest and Most Secure VPN APK for Android.md b/spaces/congsaPfin/Manga-OCR/logs/ExpressVPN The Fastest and Most Secure VPN APK for Android.md
deleted file mode 100644
index 029469339717dd4294e62f96564803aa8f2a403c..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/ExpressVPN The Fastest and Most Secure VPN APK for Android.md
+++ /dev/null
@@ -1,141 +0,0 @@
-
-
How to Download APK VPN Express and Why You Should Use It
-
If you are looking for a fast, secure, and easy-to-use VPN app for your Android device, you should consider APK VPN Express. This app is one of the best VPNs for Android, as it offers you unlimited bandwidth, access to servers in 94 countries, and powerful encryption. In this article, we will show you how to download and install APK VPN Express on your Android device, and why you should use it to protect your online privacy and freedom.
-
What is APK VPN Express?
-
APK VPN Express is a free and unlimited VPN (Virtual Private Network) proxy client for all Android devices. It provides you with the fastest VPN connection with over 3,000 servers located in 100 different countries. With APK VPN Express, you can:
Hide your IP address and location from prying eyes
-
Encrypt your data and prevent hackers from stealing your information
-
Bypass censorship and access any website or app you want
-
Stream, download, and play online games without buffering or throttling
-
Enjoy a smooth and user-friendly interface with no registration or login required
-
-
APK VPN Express differs from other VPN apps in several ways. First, it does not collect any logs or track your online activity, unlike some free VPNs that may sell your data to third parties. Second, it does not limit your bandwidth or speed, unlike some premium VPNs that may impose caps or restrictions. Third, it does not require any root access or special permissions, unlike some VPNs that may compromise your device's security.
-
How to Download and Install APK VPN Express on Your Android Device
-
Downloading and installing APK VPN Express on your Android device is very easy. Just follow these simple steps:
-
-
Go to this page on your Android browser and sign in with your credentials.
-
You will see "Set up your devices" on your dashboard. Tap on "Android" and then tap on "Download APK".
-
Your device may not allow apps from unknown sources by default. To enable this option, go to Settings > Security > Unknown Sources and toggle it on.
-
Once the APK file is downloaded on your device, tap on it to open it and then tap on "Install".
-
Once the installation is complete, tap on "Open" to launch the app.
-
-
Congratulations! You have successfully installed APK VPN Express on your Android device. Now you can enjoy a secure and private internet connection with just one tap.
-
Tips and Tricks to Optimize Your VPN Experience
-
To make the most out of APK VPN Express, here are some tips and tricks you can use:
-
-
To connect to the best server for your location, tap on the "Smart Location" button on the app's home screen.
-
To change your server location manually, tap on the "Choose Location" button and select a country or city from the list.
-
To access a specific website or app that is blocked in your region, tap on the "Search" icon on the top right corner of the app and type in the name of the website or app.
-
To check your IP address and location, tap on the "i" icon on the bottom right corner of the app.
-
To customize your VPN settings, tap on the "Menu" icon on the top left corner of the app and then tap on "Settings". You can change your protocol, encryption, kill switch, split tunneling, and more.
-
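A quick way to confirm the VPN is actually working is to compare your public IP address before and after you connect. The sketch below queries a public echo service for your address; the service URL is just one common choice, not something built into the app.

```python
# Sketch: print your current public IP address. Run it once before and once
# after connecting the VPN -- the two addresses should differ.
# api.ipify.org is one public echo service; any similar service works.
from urllib.request import urlopen

with urlopen("https://api.ipify.org") as response:
    print("Public IP:", response.read().decode("utf-8"))
```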
-
Benefits of Using APK VPN Express
-
Using APK VPN Express has many benefits for your online security and freedom. Here are some of them:
-
Enhanced Online Security and Privacy
-
APK VPN Express protects your online security and privacy by encrypting your data and hiding your IP address and location. This way, you can prevent hackers, ISPs, governments, and other third parties from spying on your online activity, stealing your personal information, or tracking your location. APK VPN Express uses AES-256 encryption, which is the same level of encryption used by the military and banks. It also supports multiple protocols, such as OpenVPN, IKEv2, and L2TP/IPsec, to ensure the best performance and compatibility for your device.
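-
To give a concrete sense of what AES-256 means in practice, the sketch below encrypts and decrypts a short message with AES-256-GCM using the third-party cryptography package. It only illustrates the cipher itself and is not a reproduction of how the app's tunnel works.

```python
# Sketch: illustrate AES-256 encryption with the "cryptography" package
# (pip install cryptography). This demonstrates the cipher only; it does not
# reproduce the VPN's actual tunnelling protocol.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # a 256-bit key, hence "AES-256"
nonce = os.urandom(12)                      # GCM uses a 96-bit nonce
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"hello, private internet", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
print(plaintext.decode())
```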
-
-
Access to Geo-Restricted Content and Services
-
APK VPN Express allows you to access any website or app you want, no matter where you are in the world. You can bypass censorship and geo-restrictions imposed by governments, ISPs, or content providers. For example, you can watch Netflix US from anywhere in the world, access social media platforms like Facebook and Twitter in countries where they are blocked, or play online games with players from different regions. APK VPN Express has servers in 94 countries, including the US, UK, Canada, Australia, Japan, Germany, France, and more. You can switch between servers as many times as you want with no extra cost.
-
Faster and More Reliable Internet Connection
-
APK VPN Express improves your internet connection by optimizing your speed and reducing your latency. You can enjoy a faster and more reliable internet connection with no buffering or throttling. APK VPN Express has over 3,000 servers located in 100 different countries, which means you can always find a server that is close to your location and has a high-speed connection. APK VPN Express also has a smart algorithm that automatically connects you to the best server for your needs.
-
Comparison of APK VPN Express with Other VPN Apps
-
There are many VPN apps available for Android devices, but not all of them are equal. Some may offer more features than others, but they may also have some drawbacks or limitations. To help you make an informed decision, we have compared APK VPN Express with three other popular VPN apps: NordVPN, Turbo VPN, and Hotspot Shield. Here is a table showing the pros and cons of each app:
-
| VPN App | Pros | Cons |
| --- | --- | --- |
| APK VPN Express | Free and unlimited; fast and secure; easy to use; no logs or tracking; no bandwidth or speed limits; no root access or permissions required; servers in 94 countries; supports multiple protocols; customizable settings | None |
| NordVPN | Secure and reliable; no logs or tracking; servers in 60 countries; supports multiple protocols; customizable settings; kill switch feature | Expensive ($11.95/month); requires registration and login; may slow down your connection; may not work with some streaming services |
| Turbo VPN | Free; fast; easy to use | Contains ads; collects logs and tracks activity; limited servers (8 countries); no encryption or protocol options; requires root access and permissions |
| Hotspot Shield | Free (with premium option); fast; easy to use; kill switch feature | Contains ads (free version); collects logs and tracks activity; limited servers (15 countries); no encryption or protocol options; may not work with some streaming services |
-
As you can see from the table above, APK VPN Express is the best choice for Android users who want a free, fast, secure, and easy-to-use VPN app. It offers more features and benefits than other VPN apps without any drawbacks or limitations.
-
Conclusion
-
In conclusion, APK VPN Express is a free and unlimited VPN proxy client for all Android devices that provides you with the fastest VPN connection with over 3,000 servers located in 100 different countries. It also protects your online security and privacy by encrypting your data and hiding your IP address and location. Moreover, it allows you to access any website or app you want, no matter where you are in the world.
-
FAQs
-
Here are some frequently asked questions and answers related to APK VPN Express:
-
Q: Is APK VPN Express safe to use?
-
A: Yes, APK VPN Express is safe to use. It does not collect any logs or track your online activity, unlike some free VPNs that may sell your data to third parties. It also uses AES-256 encryption, which is the same level of encryption used by the military and banks. It also supports multiple protocols, such as OpenVPN, IKEv2, and L2TP/IPsec, to ensure the best performance and compatibility for your device.
-
Q: Does APK VPN Express work with Netflix?
-
A: Yes, APK VPN Express works with Netflix and other streaming services. You can watch Netflix US from anywhere in the world with APK VPN Express. You can also access other streaming services that are blocked or restricted in your region, such as Hulu, BBC iPlayer, Disney+, Amazon Prime Video, and more.
-
Q: How can I contact APK VPN Express support?
-
A: If you have any questions or issues with APK VPN Express, you can contact their support team via email at support@apkvpnexpress.com. They will respond to you within 24 hours.
-
Q: How can I update APK VPN Express?
-
A: APK VPN Express updates automatically whenever there is a new version available. You don't need to do anything to update the app. However, if you want to check for updates manually, you can go to Settings > About > Check for Updates on the app.
-
Q: How can I uninstall APK VPN Express?
-
A: If you want to uninstall APK VPN Express from your Android device, you can follow these steps:
-
-
Go to Settings > Apps on your device.
-
Find and tap on APK VPN Express from the list of apps.
-
Tap on Uninstall and confirm your action.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Save LiveJournal Video to Your Computer or Mobile Device.md b/spaces/congsaPfin/Manga-OCR/logs/How to Save LiveJournal Video to Your Computer or Mobile Device.md
deleted file mode 100644
index 25a8b6b09eaa1a125c133fbe849fb28bf8688810..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Save LiveJournal Video to Your Computer or Mobile Device.md
+++ /dev/null
@@ -1,151 +0,0 @@
-
-
How to Download Video from Livejournal
-
Livejournal is a popular social media platform that allows users to create blogs, communities, and share multimedia content. If you are a fan of Livejournal and want to download some videos from it, you might be wondering how to do it. In this article, we will show you three different methods to download video from Livejournal using online tools, desktop tools, and browser extensions.
-
What is Livejournal and why download videos from it?
-
Livejournal is a social media platform that allows users to create blogs, communities, and share multimedia content.
-
Livejournal was launched in 1999 as one of the first blogging platforms on the web. It has since evolved into a social media network that allows users to create personal journals, join interest-based communities, and share photos, videos, music, and other media. Livejournal has millions of active users from around the world who post and comment on various topics ranging from politics and fandoms to personal stories and hobbies.
-
Downloading videos from Livejournal can help you save them for offline viewing, backup, or editing.
-
Livejournal hosts a lot of interesting and unique videos that you might want to watch again or keep for future reference. However, Livejournal does not provide an option to download videos directly from its website. Therefore, you need to use third-party tools to save videos from Livejournal to your device. By downloading videos from Livejournal, you can enjoy them offline without internet connection, backup them in case they get deleted or removed, or edit them for your own purposes.
-
How to download video from Livejournal using online tools?
-
Online tools are websites that can download videos from various sources by entering the video URL.
-
One of the easiest ways to download video from Livejournal is to use online tools that can fetch and save videos from any website. These tools are free and easy to use, and do not require any installation or registration. All you need is the URL of the video you want to download and a web browser. Here are some of the best online tools to download video from Livejournal:
-
SaveTheVideo.com
-
SaveTheVideo.com is a versatile online tool that can download videos from over 800 websites, including Livejournal. It also supports converting videos to various formats and qualities, as well as cutting and merging videos. To use SaveTheVideo.com to download video from Livejournal, follow these steps:
Copy the URL of the video you want to download from Livejournal and paste it into the text field on the website.
-
Click DOWNLOAD VIDEO and wait for the tool to process the video.
-
Select the format and quality you want for your downloaded video and click DOWNLOAD next to it.
-
Save the video file to your device.
-
-
AceThinker Online Video Downloader
-
AceThinker Online Video Downloader is another online tool that can download videos from various sources, including Livejournal. It has a simple and user-friendly interface that allows you to download videos in one click. To use AceThinker Online Video Downloader to download video from Livejournal, follow these steps:
Copy the URL of the video you want to download from Livejournal and paste it into the text field on the website.
-
Click the blue DOWNLOAD button and wait for the tool to analyze the video.
-
Select the format and quality you want for your downloaded video and click the download icon next to it.
-
Save the video file to your device.
-
-
SmallSEOTools Online Video Downloader
-
SmallSEOTools Online Video Downloader is a free and fast online tool that can download videos from hundreds of websites, including Livejournal. It also offers other useful features such as video editing, video compression, and video to GIF conversion. To use SmallSEOTools Online Video Downloader to download video from Livejournal, follow these steps:
-
Copy the URL of the video you want to download from Livejournal and paste it into the text field on the website.
-
Click DOWNLOAD VIDEO and wait for the tool to fetch the video.
-
Select the format and quality you want for your downloaded video and click DOWNLOAD NOW next to it.
-
Save the video file to your device.
-
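If you are comfortable with a little scripting, the open-source yt-dlp project offers an alternative to the web tools above. Whether it can extract a given Livejournal-embedded clip depends on where the video is actually hosted, so treat this as a sketch rather than a guarantee; the post URL is hypothetical.

```python
# Sketch: download a video by URL with the open-source yt-dlp package
# (pip install yt-dlp). Support for a particular Livejournal post depends on
# where the embedded video is hosted, so this may not work for every clip.
from yt_dlp import YoutubeDL

video_url = "https://example.livejournal.com/12345.html"  # hypothetical post URL

options = {
    "outtmpl": "%(title)s.%(ext)s",  # save using the video title as the file name
}
with YoutubeDL(options) as ydl:
    ydl.download([video_url])
```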
-
How to download video from Livejournal using desktop tools?
-
Desktop tools are software applications that can download videos from various sources by installing them on your computer.
-
If you prefer to use desktop tools to download video from Livejournal, you will need to install them on your computer first. Desktop tools usually offer more features and options than online tools, such as batch downloading, playlist downloading, subtitle downloading, and more. However, they also take up more space and may require updates from time to time. Here are some of the best desktop tools to download video from Livejournal:
-
4K Video Downloader
-
4K Video Downloader is a powerful and easy-to-use desktop tool that can download videos from over 50 websites, including Livejournal. It can also download 4K and 8K videos, 3D and 360-degree videos, playlists and channels, subtitles and annotations, and more. To use 4K Video Downloader to download video from Livejournal, follow these steps:
Launch the program and copy the URL of the video you want to download from Livejournal.
-
Click PASTE LINK in the program and wait for it to parse the video.
-
Select the format and quality you want for your downloaded video and click DOWNLOAD.
-
Find the downloaded video file in your destination folder.
-
-
Download Accelerator Manager
-
Download Accelerator Manager is a fast and reliable desktop tool that can download videos from over 1000 websites, including Livejournal. It can also accelerate your downloads by up to 10 times, resume broken downloads, schedule downloads, and more. To use Download Accelerator Manager to download video from Livejournal, follow these steps:
Launch the program and copy the URL of the video you want to download from Livejournal.
-
Click ADD in the program and paste the URL into the URL field.
-
Select the format and quality you want for your downloaded video and click OK.
-
The program will start downloading the video automatically. You can monitor the progress in the program window.
-
Find the downloaded video file in your destination folder.
-
-
VideoProc
-
VideoProc is a comprehensive desktop tool that can not only download videos from over 1000 websites, including Livejournal, but also edit, convert, compress, record, and stream videos. It can also handle 4K videos, HEVC videos, large videos, VR videos, and more. To use VideoProc to download video from Livejournal, follow these steps:
Launch the program and click Downloader on the main interface.
-
Click Add Video and paste the URL of the video you want to download from Livejournal.
-
Select the format and quality you want for your downloaded video and click Download Selected Videos.
-
The program will start downloading the video automatically. You can monitor the progress in the program window.
-
Find the downloaded video file in your destination folder.
-
-
How to download video from Livejournal using browser extensions?
-
Browser extensions are add-ons that can download videos from various sources by integrating them with your web browser.
-
Another way to download video from Livejournal is to use browser extensions that can add a download button or menu to your web browser. Browser extensions are convenient and easy to use, as they do not require any external tools or websites. However, they may not work with all websites or videos, and they may slow down your browser performance. Here are some of the best browser extensions to download video from Livejournal:
-
Video DownloadHelper (Firefox)
-
Video DownloadHelper is a popular and powerful browser extension that can download videos from over 3000 websites, including Livejournal. It can also convert videos to various formats, capture streaming videos, and more. To use Video DownloadHelper to download video from Livejournal, follow these steps:
Go to the Livejournal page that contains the video you want to download.
-
Click the Video DownloadHelper icon on the toolbar and select the video from the list.
-
Select the format and quality you want for your downloaded video and click Download.
-
Save the video file to your device.
-
-
Video Downloader Professional (Chrome)
-
Video Downloader Professional is a simple and effective browser extension that can download videos from most websites, including Livejournal. It can also create playlists, play videos in a pop-up window, and more. To use Video Downloader Professional to download video from Livejournal, follow these steps:
Go to the Livejournal page that contains the video you want to download.
-
Click the Video Downloader Professional icon on the toolbar and select the video from the list.
-
Select the format and quality you want for your downloaded video and click Download.
-
Save the video file to your device.
-
-
Conclusion and FAQs
-
In this article, we have shown you how to download video from Livejournal using three different methods: online tools, desktop tools, and browser extensions. Each method has its own advantages and disadvantages, so you can choose the one that suits your needs and preferences. We hope this article has been helpful and informative for you. If you have any questions or comments, feel free to leave them below. Here are some FAQs that might answer some of your queries:
-
Q: Can I download videos from Livejournal without any tools?
-
A: No, you cannot download videos from Livejournal without any tools. Livejournal does not provide a direct download option for its videos, so you need to use third-party tools to save them to your device.
-
Q: Which method is the best for downloading videos from Livejournal?
-
A: There is no definitive answer to this question, as different methods have different pros and cons. Online tools are free and easy to use, but they may not support all videos or formats. Desktop tools are more powerful and versatile, but they require installation and updates. Browser extensions are convenient and fast, but they may not work with all websites or browsers. You should choose the method that works best for you based on your needs and preferences.
-
Q: Is it legal to download videos from Livejournal?
-
A: It depends on the source and content of the videos. Generally speaking, downloading videos from Livejournal for personal use is not illegal, as long as you do not violate any copyrights or terms of service of Livejournal or the original creators. However, downloading videos from Livejournal for commercial use or distribution is illegal, as it infringes on the rights of the owners. You should always respect the intellectual property rights of others and use downloaded videos responsibly.
-
Q: How can I edit downloaded videos from Livejournal?
-
A: You can edit downloaded videos from Livejournal using various tools such as VideoProc, which we mentioned earlier in this article. As noted above, VideoProc can download videos from over 1000 websites, including Livejournal, and can also edit, convert, compress, record, and stream them, handling 4K, HEVC, large, and VR videos. With VideoProc, you can trim, crop, rotate, merge, or split your downloaded videos and add effects, subtitles, or watermarks. You can also convert them to different formats and devices, compress them to reduce file size with minimal quality loss, record your screen or webcam, and stream to various platforms.
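If all you need is a quick cut rather than a full editor, the free ffmpeg command-line tool can trim a clip without re-encoding it. Below is a minimal sketch that wraps it from Python; it assumes ffmpeg is installed and on your PATH, and the filenames and timestamps are placeholders.

```python
# Minimal sketch: trim a downloaded clip with ffmpeg using stream copy (no re-encode).
# Assumes ffmpeg is installed and on PATH; filenames and timestamps are placeholders.
import subprocess

def trim(src: str, dst: str, start: str, end: str) -> None:
    # -ss/-to select the section to keep; -c copy avoids re-encoding,
    # so the cut lands on the nearest keyframes.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-ss", start, "-to", end, "-c", "copy", dst],
        check=True,
    )

if __name__ == "__main__":
    trim("downloaded_clip.mp4", "trimmed_clip.mp4", "00:00:05", "00:00:35")
```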
-
Q: How can I share downloaded videos from Livejournal?
-
A: You can share downloaded videos from Livejournal using various methods such as email, social media, cloud storage, or online video platforms. However, before you share downloaded videos from Livejournal, you should make sure that you have the permission of the original creators or owners of the videos. You should also respect their privacy and preferences and give them proper credit and attribution. You should not share downloaded videos from Livejournal that contain illegal, offensive, or inappropriate content.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Pink Lie Sub Indo Kisah Cinta Penuh Kebohongan di Dating Show Korea.md b/spaces/congsaPfin/Manga-OCR/logs/Pink Lie Sub Indo Kisah Cinta Penuh Kebohongan di Dating Show Korea.md
deleted file mode 100644
index 85474bd10fc57bdd47b78cbae783ae27aa059210..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Pink Lie Sub Indo Kisah Cinta Penuh Kebohongan di Dating Show Korea.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
Download Pink Lie Subtitle Indonesia: A Guide to Watch the Korean Dating Show on Disney+ Hotstar
-
If you are looking for a new and exciting Korean dating show to watch, you might want to check out Pink Lie. This show is a part of Star, a new content brand on Disney+ Hotstar that offers more diverse and mature stories for adult audiences. In this article, we will tell you everything you need to know about Pink Lie, why you should watch it, and how to download Pink Lie subtitle Indonesia.
-
What is Pink Lie?
-
Pink Lie is a Korean dating show that premiered on October 5, 2022, on Disney+ Hotstar. It is produced by Studio Lululala, the same company behind popular variety shows like Running Man, Busted!, and How Do You Play?. The show is hosted by Kim Hee-chul, a member of the boy group Super Junior and a veteran MC of many shows.
The show follows the lives of eight young men and women who move into a pink house, each with a lie about themselves. The lies can be about their jobs, ages, educational backgrounds, or anything else. They have to keep their lies hidden from each other while trying to find love. However, as they get closer, their lies start to unravel and their true identities are revealed. Will they be able to accept each other for who they really are?
-
The cast and crew of the show
-
The cast of Pink Lie consists of four men and four women who have different personalities and backgrounds. They are:
-
-
Lee Sun-bin, an actress and singer who is known for her roles in dramas like Squad 38, Criminal Minds, and Voice 3. She plays herself in the show.
-
Song Won-seok, an actor who has appeared in dramas like My Only One, Liver or Die, and Love (ft. Marriage and Divorce). He plays himself in the show.
-
RALRAL, a rapper and producer who is a member of the hip-hop duo XXX. He plays himself in the show.
-
Kim Ji-hyeon, a model and influencer who has over 1.5 million followers on Instagram. She plays herself in the show.
-
Lee Seung-hyub, a singer and actor who is the leader of the rock band N.Flying. He plays himself in the show.
-
Kim Min-ji, a former member of the girl group Jewelry who is now an actress and host. She plays herself in the show.
-
Kim Dong-hyun, a former professional baseball player who played for the Doosan Bears and the LG Twins. He plays himself in the show.
-
Lee Ji-eun, a former rhythmic gymnast who won a bronze medal at the 2014 Asian Games. She plays herself in the show.
-
-
The crew of Pink Lie includes:
-
-
Kim Hee-chul, the host and narrator of the show who also interacts with the cast members through video calls.
-
Park Jin-kyung, the director of the show who has also directed shows like Running Man, Busted!, and How Do You Play?.
-
Lee Hyun-jin, the writer of the show who has also written shows like Running Man, Busted!, and How Do You Play?.
-
Kim Tae-ho, the chief producer of Studio Lululala who oversees the production of Pink Lie.
The steps to download Pink Lie subtitle Indonesia from DramaSubIndo
-
-
Go to DramaSubIndo, another website that provides Korean drama subtitles in various languages.
-
Search for Pink Lie in the search bar or browse through the categories.
-
Select the episode that you want to watch and click on the download button.
-
Choose the subtitle format that suits your media player (such as SRT or ASS) and click on the download button again.
-
Save the subtitle file to your device.
-
Open your media player and load the video file of Pink Lie that you have downloaded or streamed from Disney+ Hotstar.
-
Add the subtitle file to your media player by dragging and dropping it or by selecting it from the menu.
-
Enjoy watching Pink Lie with Indonesian subtitles.
-
-
The tips and tricks to download Pink Lie subtitle Indonesia
-
Here are some tips and tricks that can help you download Pink Lie subtitle Indonesia more easily and efficiently:
-
-
Make sure that the subtitle file and the video file have the same name and are in the same folder, so that your media player can automatically detect and load them.
-
Make sure that the subtitle file and the video file are in sync, so that the subtitles match the dialogue and scenes. If they are not, you can adjust the timing of the subtitles using your media player or a subtitle editor (a small script for shifting every timestamp at once is sketched after this list).
-
Make sure that the subtitle file is of good quality, so that the subtitles are clear, accurate, and complete. If they are not, you can look for other sources or edit them yourself with a subtitle editor.
-
Make sure that you have a stable internet connection, so that you can download the subtitle file and the video file without interruption or delay.
-
Make sure that you respect the rights and efforts of the subtitle providers, so that you do not share, distribute, or sell their subtitles without their permission.
-
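If your downloaded subtitles turn out to be consistently a second or two early or late, you do not have to fix each line by hand. Here is a minimal sketch for shifting every timestamp in a standard .srt file by a fixed offset; the filenames and the 1.5-second value are just examples.

```python
# Minimal sketch: shift every timestamp in an .srt subtitle file by a fixed
# offset in milliseconds (positive = later, negative = earlier, as long as
# no timestamp would drop below 00:00:00). Filenames are examples only.
import re
from datetime import datetime, timedelta

TS = re.compile(r"(\d{2}:\d{2}:\d{2}),(\d{3})")

def shift_timestamp(match: re.Match, offset_ms: int) -> str:
    t = datetime.strptime(match.group(1), "%H:%M:%S") + timedelta(
        milliseconds=int(match.group(2)) + offset_ms
    )
    return f"{t:%H:%M:%S},{t.microsecond // 1000:03d}"

def shift_srt(src: str, dst: str, offset_ms: int) -> None:
    with open(src, encoding="utf-8") as f:
        text = f.read()
    text = TS.sub(lambda m: shift_timestamp(m, offset_ms), text)
    with open(dst, "w", encoding="utf-8") as f:
        f.write(text)

if __name__ == "__main__":
    # Delay all subtitles by 1.5 seconds.
    shift_srt("Pink.Lie.E01.srt", "Pink.Lie.E01.synced.srt", 1500)
```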
-
Conclusion
-
Pink Lie is a Korean dating show that is worth watching for its original and captivating concept, its realistic and relatable characters, its witty and humorous narration, and its unpredictable and thrilling plot twists. It is also a show that teaches you valuable lessons about life, love, and yourself. If you want to watch Pink Lie with Indonesian subtitles, you can either watch it on Disney+ Hotstar with the official subtitles provided by the platform, or you can download the subtitles from other sources like OPPADRAMA or DramaSubIndo. We hope that this article has helped you learn more about Pink Lie and how to download Pink Lie subtitle Indonesia. Happy watching!
-
FAQs
-
Here are some frequently asked questions about Pink Lie and how to download Pink Lie subtitle Indonesia:
-
-
How many episodes are there in Pink Lie?
-
Pink Lie has 12 episodes, each lasting about 60 minutes. The show airs every Tuesday at 10 p.m. KST on Disney+ Hotstar.
-
Is Pink Lie based on a true story?
-
No, Pink Lie is not based on a true story. It is a fictional show that is created by Studio Lululala. However, some of the lies and situations in the show may be inspired by real-life experiences or events.
-
-
Is Pink Lie suitable for all ages?
-
No, Pink Lie is not suitable for all ages. It is rated 18+ by Disney+ Hotstar, as it contains mature themes, language, and scenes. It is recommended for adult audiences only.
-
Where can I watch Pink Lie with English subtitles?
-
You can watch Pink Lie with English subtitles on Disney+ Hotstar, as it provides official subtitles in English and other languages. You can also download English subtitles from other sources like OPPADRAMA or DramaSubIndo.
-
Will there be a second season of Pink Lie?
-
There is no official confirmation yet about a second season of Pink Lie. However, given the popularity and success of the show, there is a possibility that Studio Lululala may produce a second season with a new cast and new lies. We will update you if there is any news about a second season of Pink Lie.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Star Stable Online Explore the Magical World of Jorvik with Your Horse.md b/spaces/congsaPfin/Manga-OCR/logs/Star Stable Online Explore the Magical World of Jorvik with Your Horse.md
deleted file mode 100644
index a1c6c313c440383c89ca0d40dd004d6889ff0c02..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Star Stable Online Explore the Magical World of Jorvik with Your Horse.md
+++ /dev/null
@@ -1,142 +0,0 @@
-
-
Star Stable Horses: A Fun and Engaging Horse Game for All Ages
-
If you love horses and games, then you will love Star Stable Horses. This is a game where you can care for your very own adorable foal and watch as they grow into beautiful horses you can ride, train, and take on amazing adventures within the land of Jorvik. In this article, we will tell you everything you need to know about Star Stable Horses, including how to download and play it, how to transfer your horse to Star Stable Online, and some tips and tricks for having the best experience possible.
-
What is Star Stable Horses?
-
Star Stable Horses is a mobile game developed by Star Stable Entertainment AB, a Swedish company that also created Star Stable Online, the world's biggest online horse game. Star Stable Horses is a spin-off of Star Stable Online, but it can also be played as a standalone game. Here are some of the features that make Star Stable Horses so fun and engaging:
A game where you can raise, train, and ride your own horses
-
In Star Stable Horses, you can choose from a variety of horse breeds and variations, each with their own unique personality and appearance. You can name your foal, feed them, groom them, play with them, and teach them new skills. As you complete daily tasks, your foal will grow up and become ready to join you on exciting adventures in Star Stable Online.
-
A game where you can explore the magical world of Jorvik
-
Jorvik is a fictional island inspired by Scandinavian mythology and culture. It is a place where humans and horses live in harmony, but also face many challenges and mysteries. In Star Stable Horses, you can explore different locations in Jorvik, such as the paddock, the barn, the beauty salon, the garden, and more. You can also meet other characters, such as Lisa Peterson, who will guide you through the game.
-
A game where you can customize your horses and stable
-
One of the best things about Star Stable Horses is that you can make your horses and stable truly yours. You can dress up your horses with bows, ribbons, blankets, saddles, bridles, and more. You can also decorate your stable with different items, such as plants, posters, rugs, lamps, etc. You can even grow treats for your horses in your own garden.
-
How to download and play Star Stable Horses?
-
Star Stable Horses is available on both Android and iOS devices. You can download it for free from Google Play or App Store. Here are the steps to follow:
-
Download the app from Google Play or App Store
-
Go to Google Play or App Store on your device and search for "Star Stable Horses". Tap on the app icon and then tap on "Install". Wait for the app to download and install on your device.
-
Create your account and choose your first foal
-
When you open the app for the first time, you will be asked to create an account. You can use your email address or connect with Facebook or Google. You will also need to choose a username and a password. After that
After that, you will be able to choose your first foal from a selection of different breeds and colors. You can also name your foal and change their gender. Once you have chosen your foal, you will be taken to the paddock, where you can start taking care of them.
-
Complete daily tasks and watch your foal grow up
-
In the paddock, you will see a list of daily tasks that you need to complete to keep your foal happy and healthy. These tasks include feeding, grooming, playing, training, and more. You can also tap on your foal to interact with them and see their mood and stats. As you complete the tasks, you will earn star coins, experience points, and hearts. Star coins are the currency of the game, which you can use to buy items and horses. Experience points help you level up and unlock new features. Hearts show how much your foal loves you and trusts you. When you reach a certain amount of hearts, your foal will grow up and become ready to ride.
-
How to transfer your horse to Star Stable Online?
-
If you want to take your horse to the next level, you can transfer them to Star Stable Online, the online multiplayer game where you can join millions of other players and explore the vast world of Jorvik. Here is how:
-
-
What is Star Stable Online and why should you play it?
-
Star Stable Online is the online version of Star Stable Horses, where you can continue your adventure with your horse and meet other players from around the world. In Star Stable Online, you can:
-
-
Ride your horse across different regions of Jorvik, such as forests, mountains, beaches, cities, and more.
-
Join clubs and make friends with other players who share your passion for horses.
-
Participate in races, quests, events, competitions, and more.
-
Discover the secrets and mysteries of Jorvik and its history.
-
Collect more horses and items for your stable.
-
Express yourself with different outfits, hairstyles, accessories, and more.
-
-
To play Star Stable Online, you need to download the game client from the official website and create an account. You can play for free up to level 5, but after that you need to become a Star Rider by paying a monthly or annual subscription fee. As a Star Rider, you get access to all the features and content of the game.
-
How to transfer your horse from the app to the online game
-
To transfer your horse from Star Stable Horses to Star Stable Online, you need to have both games installed on your device and use the same account for both. You also need to have at least one free stall in your stable in Star Stable Online. Here are the steps to follow:
-
-
In Star Stable Horses, go to the barn and tap on the horse you want to transfer.
-
Tap on the button that says "Transfer to Star Stable Online".
-
A confirmation message will appear. Tap on "Yes" to proceed.
-
The app will close and the game client will open automatically.
-
In Star Stable Online, go to your stable and find your transferred horse in one of the stalls.
-
Tap on your horse and enjoy riding them in Jorvik.
-
-
Note that once you transfer your horse to Star Stable Online, you cannot transfer them back to Star Stable Horses. You can still visit them in the app, but you cannot interact with them or complete tasks with them.
-
What to do with your horse in Star Stable Online
-
Once you have transferred your horse to Star Stable Online, there are many things you can do with them. Here are some suggestions:
-
-
Take them for a ride around Jorvik and explore new places.
-
Enter races and competitions with them and win prizes.
-
Complete quests and missions with them and earn star coins and experience points.
-
Join clubs and events with them and meet other players.
-
Customize them with different tack, gear, accessories, and more.
-
Care for them by feeding them, grooming them, petting them, etc.
-
-
Tips and tricks for playing Star Stable Horses
-
To make the most out of Star Stable Horses, here are some tips and tricks that might help you:
-
How to get more horses and star coins
-
If you want to expand your collection of horses in Star Stable Horses, there are several ways to do so:
-
-
You can buy new horses with star coins, which you can earn by completing tasks or watching ads, or purchase with real money.
-
You can breed new horses with the breeding feature, which allows you to combine two horses of the same breed and get a new foal with a random color and pattern.
-
You can get special horses by participating in events, such as the Halloween event, the Christmas event, the Easter event, etc.
-
-
How to use the beauty salon and the garden
-
The beauty salon and the garden are two places where you can pamper your horses and make them look fabulous. Here is how to use them:
-
-
The beauty salon is where you can change your horse's mane, tail, coat, and eye color. You can also add accessories, such as bows, ribbons, flowers, etc. To use the beauty salon, tap on the scissors icon in the barn and then select your horse. You can then choose from different options and see how they look on your horse. You can also use star coins to unlock more options.
-
The garden is where you can grow treats for your horses, such as carrots, apples, sugar cubes, etc. To use the garden, tap on the watering can icon in the barn and then select a plot of land. You can then choose a seed to plant and water it. After some time, your treat will be ready to harvest. You can then feed it to your horse or store it in your inventory.
-
-
How to keep your horses happy and healthy
-
Keeping your horses happy and healthy is essential for their growth and development. Here are some tips to do so:
-
-
Feed your horses regularly with hay, water, and treats. You can see their hunger and thirst levels by tapping on them in the paddock or the barn.
-
Groom your horses daily with the brush, the hoof pick, and the sponge. You can see their cleanliness level by tapping on them in the paddock or the barn.
-
Play with your horses by tapping on them in the paddock or the barn. You can also use toys, such as balls, ropes, etc., to make them more entertained.
-
Train your horses by teaching them new skills, such as jumping, trotting, galloping, etc. You can see their skill level by tapping on them in the paddock or the barn.
-
Pet your horses by tapping on them in the paddock or the barn. This will increase their affection and trust towards you.
-
-
Conclusion
-
Star Stable Horses is a game that will appeal to anyone who loves horses and games. It is a game where you can raise, train, and ride your own horses in a magical world of Jorvik. It is also a game where you can customize your horses and stable to your liking. And it is a game where you can transfer your horse to Star Stable Online and join millions of other players in an online adventure. If you are looking for a fun and engaging horse game for all ages, Star Stable Horses is the game for you.
-
FAQs
-
Q: Is Star Stable Horses free to play?
-
A: Yes, Star Stable Horses is free to download and play on both Android and iOS devices. However, some features and items may require star coins, which can be earned or bought with real money.
-
Q: How many horses can I have in Star Stable Horses?
-
A: You can have up to 10 horses in Star Stable Horses. However, you need to buy more stalls in your stable with star coins to accommodate more horses.
-
Q: Can I play Star Stable Horses offline?
-
A: Yes, you can play Star Stable Horses offline without an internet connection. However, some features may not be available offline, such as watching ads or transferring your horse to Star Stable Online.
-
Q: Can I play Star Stable Horses on PC or Mac?
-
A: No, Star Stable Horses is only available on mobile devices. However, you can play Star Stable Online on PC or Mac by downloading the game client from the official website.
-
Q: Can I chat with other players in Star Stable Horses?
-
A: No, Star Stable Horses does not have a chat feature. However, you can chat with other players in Star Stable Online by using the chat box or joining clubs.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/TikTok for Android The easiest way to download the APK and start watching videos.md b/spaces/congsaPfin/Manga-OCR/logs/TikTok for Android The easiest way to download the APK and start watching videos.md
deleted file mode 100644
index 8b2da7854b35f332389daa9da5693e81974d706a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/TikTok for Android The easiest way to download the APK and start watching videos.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
TikTok APK Android: How to Download and Install the App
-
TikTok is one of the most popular social media apps in the world, with over 500 million downloads on Google Play Store. On TikTok, you can create and share short-form videos with music, effects, filters, and more. You can also watch millions of videos from other users, covering various topics like comedy, gaming, food, sports, memes, pets, and more. TikTok is a fun and creative way to express yourself and connect with others.
-
What is TikTok?
-
TikTok is a video-sharing social network that was launched in 2016 by ByteDance, a Chinese company. The app was originally called Douyin in China, but it was rebranded as TikTok for the international market. TikTok allows users to create videos of up to 60 seconds, using various editing tools and features. Users can choose from a huge library of songs and sounds, or use their own audio. They can also apply filters, stickers, transitions, effects, and more to make their videos more engaging. Users can also interact with other users by liking, commenting, following, dueting, or sending messages.
Why download TikTok APK Android?
-
If you want to enjoy TikTok on your Android device, you might be wondering why you should download the APK file instead of getting it from the Google Play Store. Well, there are several reasons why downloading TikTok APK Android can be a better option for you. Here are some of them:
-
Access to the latest version and updates
-
One of the advantages of downloading TikTok APK Android is that you can get access to the latest version of the app before it is available on the official store. This way, you can enjoy new features and improvements as soon as they are released. You can also avoid bugs and glitches that might affect older versions of the app.
-
Avoid geo-restrictions and censorship
-
Another benefit of downloading TikTok APK Android is that you can bypass geo-restrictions and censorship that might prevent you from accessing the app in your region. For example, some countries like India have banned TikTok due to security and privacy concerns. By downloading the APK file from a third-party source, you can still use the app without any limitations.
-
Save storage space and data usage
-
A third advantage of downloading TikTok APK Android is that you can save storage space and data usage on your device. The APK file is usually smaller than the app file on the Google Play Store, which means it will take up less space on your device memory. Moreover, by downloading the APK file once, you can install it offline without using your internet connection.
-
How to download TikTok APK Android?
-
Now that you know why downloading TikTok APK Android is a good idea, you might be wondering how to do it. Don't worry, it's not complicated at all. Just follow these simple steps:
-
Find a reliable source for the APK file
-
The first thing you need to do is find a trustworthy source for the TikTok APK file. There are many websites that offer APK files for various apps, but not all of them are safe and reliable. Some of them might contain malware or viruses that could harm your device or steal your personal information. Therefore, you need to be careful when choosing where to download the APK file from.
-
One of the best sources for TikTok APK Android is Uptodown, a website that offers free and safe downloads for thousands of apps. Uptodown has a team of experts that verify and test every APK file before uploading it to their platform. You can also read user reviews and ratings to see what other people think about the app. To download TikTok APK Android from Uptodown, you can visit this link: https://tiktok.en.uptodown.com/android
-
Enable unknown sources on your device settings
-
The next thing you need to do is enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, you need to go to your device settings, then tap on security or privacy, then look for the option that says unknown sources or install unknown apps. You need to toggle it on and confirm your choice. This might vary depending on your device model and Android version, but you can always search for it in your settings.
-
-
Download and install the APK file
-
The third thing you need to do is download and install the APK file. To do this, you need to open the link that you got from Uptodown or another source, then tap on the download button. You might see a warning message that says this type of file can harm your device, but you can ignore it and tap on OK. The download will start and you will see a notification on your status bar. Once the download is complete, you need to tap on the notification or go to your downloads folder and find the APK file. Then, you need to tap on it and follow the instructions on the screen to install it.
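Because the file comes from outside the Play Store, it is also worth confirming that the download was not corrupted or tampered with before you install it. Here is a minimal sketch, assuming the download site publishes an official SHA-256 checksum for the file — the filename and the expected value below are placeholders.

```python
# Minimal sketch: compare a downloaded APK's SHA-256 hash against a published
# checksum before installing. The filename and expected hash are placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    expected = "paste-the-published-checksum-here"
    actual = sha256_of("tiktok.apk")
    print("Checksum OK" if actual == expected else f"MISMATCH: {actual}")
```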
-
Launch the app and enjoy
-
The last thing you need to do is launch the app and enjoy. To do this, you need to find the app icon on your home screen or app drawer and tap on it. You might see a message that says this app was installed from an unknown source and asks for your permission to access certain features on your device. You need to grant the permission and accept the terms and conditions of the app. Then, you can start using TikTok as usual, creating and watching videos, exploring different categories, following other users, and more.
-
Conclusion
-
TikTok is a great app for anyone who loves making and watching short-form videos with music, effects, filters, and more. However, if you want to get the most out of it, you might want to download TikTok APK Android instead of getting it from the Google Play Store. This way, you can access the latest version and updates, avoid geo-restrictions and censorship, save storage space and data usage, and more. All you need to do is find a reliable source for the APK file, enable unknown sources on your device settings, download and install the APK file, launch the app and enjoy.
-
If you found this article helpful, please share it with your friends and family who might be interested in TikTok APK Android. Also, feel free to leave a comment below if you have any questions or feedback about the app or the process of downloading it.
-
FAQs
-
Here are some frequently asked questions about TikTok APK Android:
-
-
-
Question
-
Answer
-
-
-
Is TikTok APK Android safe?
-
TikTok APK Android is safe as long as you download it from a trustworthy source like Uptodown. However, you should always be careful when installing apps from unknown sources and scan them with an antivirus before opening them.
-
-
-
Is TikTok APK Android legal?
-
TikTok APK Android is legal as long as you use it for personal and non-commercial purposes. However, some countries like India have banned TikTok due to security and privacy concerns. Therefore, you should check the laws of your country before using TikTok APK Android.
-
-
-
What are the differences between TikTok APK Android and TikTok from Google Play Store?
-
TikTok APK Android and TikTok from Google Play Store are essentially the same app with the same features and functions. However, TikTok APK Android might have some advantages over TikTok from Google Play Store, such as access to the latest version and updates, bypassing geo-restrictions and censorship, saving storage space and data usage, and more.
-
-
-
How can I update TikTok APK Android?
-
You can update TikTok APK Android by downloading the latest version of the app from Uptodown or another source. Then, you need to uninstall the previous version of the app from your device and install the new version. Alternatively, you can use an app like APKUpdater that can automatically check for updates and install them for you.
-
-
-
How can I delete TikTok APK Android?
-
You can delete TikTok APK Android by going to your device settings, then tapping on apps or applications, then finding TikTok and tapping on it. Then, you need to tap on uninstall and confirm your choice. You can also delete the APK file from your downloads folder or wherever you saved it.
-
-
-
I hope this article has answered all your questions about TikTok APK Android. If you have any more questions, please let me know in the comments section below. Thank you for reading and have a great day!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/WhatsApp Messenger The Best Way to Download and Enjoy It.md b/spaces/congsaPfin/Manga-OCR/logs/WhatsApp Messenger The Best Way to Download and Enjoy It.md
deleted file mode 100644
index 16e27dda97769ff8a654dc93435991f620169f20..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/WhatsApp Messenger The Best Way to Download and Enjoy It.md
+++ /dev/null
@@ -1,246 +0,0 @@
-
-
How to Download WhatsApp Messenger
-
WhatsApp Messenger is one of the most popular and widely used messaging and calling apps in the world. It allows you to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content with your contacts. It also offers end-to-end encryption, group chats and calls, voice messaging, stickers, GIFs, and more. In this article, we will show you how to download WhatsApp Messenger for your Android, iOS, or desktop device, and how to set up and use its features. We will also give you some tips on how to secure your account, backup and restore your data, troubleshoot common issues, delete or deactivate your account, and find alternatives to WhatsApp Messenger.
WhatsApp Messenger has many advantages over other messaging apps. Here are some of them:
-
-
It is free of cost and does not require any subscription or fees.
-
It uses your phone number as your identity and syncs with your phone's contacts.
-
It works on any smartphone or tablet that runs Android or iOS.
-
It also has a desktop version that works on Windows or Mac computers.
-
It supports cross-platform communication between different devices.
-
It offers high-quality voice and video calls that are secure and reliable.
-
It has end-to-end encryption that protects your messages and calls from being intercepted or accessed by anyone else.
-
It has a simple and user-friendly interface that is easy to navigate and customize.
-
It has a variety of features that enhance your communication experience, such as group chats and calls, voice messaging, stickers, GIFs, status updates, live location sharing, media gallery, dark mode, etc.
-
It has a business version that allows you to connect with your customers and clients in a professional way.
-
-
How to Download WhatsApp Messenger for Android
-
If you have an Android device, you can download WhatsApp Messenger from the Google Play Store. Here are the steps:
-
-
Open the Google Play Store app on your Android device.
-
Search for "WhatsApp Messenger" in the search bar.
-
Tap on the app icon or the "Install" button.
-
Wait for the app to download and install on your device.
-
Open the app and agree to the terms and conditions.
-
Enter your phone number and verify it with a code sent to you via SMS or a phone call.
-
Create your profile by entering your name and choosing a profile picture.
-
Allow the app to access your contacts, photos, media, and files.
-
Start using WhatsApp Messenger by tapping on the "New chat" or "New call" button at the bottom right corner of the screen.
-
-
How to Download WhatsApp Messenger for iOS
-
If you have an iOS device, you can download WhatsApp Messenger from the App Store. Here are the steps:
-
-
Open the App Store app on your iOS device.
-
Search for "WhatsApp Messenger" in the search bar.
-
Tap on the app icon or the "Get" button.
-
Enter your Apple ID password or use Touch ID or Face ID to confirm.
-
Wait for the app to download and install on your device.
-
Open the app and agree to the terms and conditions.
-
Enter your phone number and verify it with a code sent to you via SMS or a phone call.
-
Create your profile by entering your name and choosing a profile picture.
-
Allow the app to access your contacts, photos, media, and files.
-
Start using WhatsApp Messenger by tapping on the "New chat" or "New call" button at the bottom right corner of the screen.
-
-
How to Download WhatsApp Messenger for Desktop
-
If you want to use WhatsApp Messenger on your desktop computer, you have two options: you can either download it from the official website or from the Microsoft Store or the Apple App Store, depending on your operating system. Here are the steps for both options:
-
Option 1: Download from the official website
-
-
Go to the official WhatsApp download page at https://www.whatsapp.com/download in your web browser.
-
Select your operating system (Windows or Mac) and click on the "Download" button.
-
Save the file to your computer and run it once it is downloaded.
-
Follow the instructions on the screen to install WhatsApp Messenger on your computer.
-
Open the app and scan the QR code with your phone's camera. To do this, open WhatsApp Messenger on your phone, tap on the menu icon (three dots) at the top right corner of the screen, and select "WhatsApp Web".
-
You will see a list of devices where you are logged in. Tap on the "+" icon at the top right corner of the screen and point your phone's camera at the QR code on your computer screen.
-
You will see a confirmation message that you are connected. You can now use WhatsApp Messenger on your computer as long as your phone is connected to the internet.
-
-
Option 2: Download from the Microsoft Store or the Apple App Store
-
-
Open the Microsoft Store or the Apple App Store on your computer, depending on your operating system.
-
Search for "WhatsApp Desktop" in the search bar.
-
Click on the app icon or the "Get" button.
-
Wait for the app to download and install on your computer.
-
Open the app and scan the QR code with your phone's camera. To do this, open WhatsApp Messenger on your phone, tap on the menu icon (three dots) at the top right corner of the screen, and select "WhatsApp Web".
-
You will see a list of devices where you are logged in. Tap on the "+" icon at the top right corner of the screen and point your phone's camera at the QR code on your computer screen.
-
You will see a confirmation message that you are connected. You can now use WhatsApp Messenger on your computer as long as your phone is connected to the internet.
-
-
How to Set Up and Use WhatsApp Messenger
-
Once you have downloaded and installed WhatsApp Messenger on your device, you can start using its features. Here are some of them:
-
Start a chat
-
To start a chat with someone, follow these steps:
-
How to download whatsapp messenger on android phone
-Whatsapp messenger download for pc windows 10
-Steps to install whatsapp messenger on iphone
-Whatsapp messenger apk download latest version
-How to download whatsapp desktop app for mac
-Whatsapp messenger free download for samsung galaxy
-How to update whatsapp messenger on your device
-Whatsapp messenger download for laptop without bluestacks
-How to use whatsapp web on your browser
-Whatsapp messenger for ipad download and install guide
-How to download whatsapp messenger on jio phone
-Whatsapp messenger download for nokia asha 200
-How to backup and restore whatsapp messages
-Whatsapp messenger for blackberry z10 free download
-How to download whatsapp status videos and photos
-Whatsapp messenger download for kindle fire hd
-How to transfer whatsapp messages from android to iphone
-Whatsapp messenger for windows phone 8.1 download
-How to enable dark mode on whatsapp messenger
-Whatsapp messenger download for huawei p40 pro
-How to create and join whatsapp groups
-Whatsapp messenger for chromebook free download
-How to make video calls on whatsapp messenger
-Whatsapp messenger download for oppo a5s
-How to delete whatsapp messages for everyone
-Whatsapp messenger for linux ubuntu download
-How to send stickers and gifs on whatsapp messenger
-Whatsapp messenger download for vivo y12
-How to change your whatsapp number and profile picture
-Whatsapp messenger for smartwatch free download
-How to mute and block contacts on whatsapp messenger
-Whatsapp messenger download for sony xperia z5
-How to use two whatsapp accounts on one phone
-Whatsapp messenger for tablet without sim card download
-How to hide your online status and last seen on whatsapp messenger
-Whatsapp messenger download for lg g6
-How to scan qr code on whatsapp messenger
-Whatsapp messenger for roku tv free download
-How to record and send voice messages on whatsapp messenger
-Whatsapp messenger download for xiaomi redmi note 9s
-How to pin chats and archive conversations on whatsapp messenger
-Whatsapp messenger for carplay free download
-How to share your live location on whatsapp messenger
-Whatsapp messenger download for oneplus 8t
-How to customize notifications and sounds on whatsapp messenger
-
-
Tap on the "New chat" button at the bottom right corner of the screen.
-
Select a contact from your list or tap on the "New contact" button to add a new one.
-
Type your message in the text box at the bottom of the screen and tap on the "Send" button.
-
You can also send voice messages by tapping and holding the microphone icon next to the text box.
-
You can also send media files, such as photos, videos, documents, contacts, or locations, by tapping on the attachment icon (paperclip) next to the text box and choosing the file you want to send.
-
-
Make a call
-
To make a voice or video call with someone, follow these steps:
-
-
Tap on the "New call" button at the bottom right corner of the screen.
-
Select a contact from your list or tap on the "New contact" button to add a new one.
-
Tap on the phone icon to make a voice call or tap on the video icon to make a video call.
-
You can also make a group call by tapping on the "New group call" button at the top right corner of the screen and selecting up to eight contacts.
-
To end the call, tap on the red button at the bottom of the screen.
-
-
Use other features
-
WhatsApp Messenger has many other features that you can use to enhance your communication experience. Here are some of them:
-
-
You can create a group chat by tapping on the menu icon (three dots) at the top right corner of the screen and selecting "New group". You can add up to 256 contacts to a group chat and assign a name and an icon to it. You can also mute, archive, or delete a group chat.
-
You can update your status by tapping on the "Status" tab at the top of the screen and selecting "My status". You can share text, photos, videos, or GIFs that will disappear after 24 hours. You can also view and reply to your contacts' status updates.
-
You can change your settings by tapping on the menu icon (three dots) at the top right corner of the screen and selecting "Settings". You can change your profile, account, chats, notifications, data and storage usage, and help options. You can also invite your friends to join WhatsApp Messenger by tapping on "Invite a friend".
-
-
How to Secure Your WhatsApp Messenger Account
-
WhatsApp Messenger is designed to protect your privacy and security with end-to-end encryption, which means that only you and the person you are communicating with can read or listen to your messages and calls. However, there are some additional steps you can take to secure your account even more. Here are some of them:
-
-
Enable two-step verification by going to Settings > Account > Two-step verification. This will require you to enter a six-digit PIN when you register your phone number with WhatsApp Messenger or when you change your phone. This will prevent anyone from accessing your account without your PIN.
-
Enable biometric authentication by going to Settings > Account > Privacy > Screen lock. This will require you to use your fingerprint, face ID, or iris scan to unlock WhatsApp Messenger on your device. This will prevent anyone from accessing your app without your biometric data.
-
Block unwanted contacts by going to Settings > Account > Privacy > Blocked contacts. This will prevent them from sending you messages or calls. You can also report them as spam or abusive by tapping on their name and selecting "Report contact".
-
-
How to Backup and Restore Your WhatsApp Messenger Data
-
If you want to keep your chat history, media files, and settings safe and secure, you can backup your data to Google Drive or iCloud, depending on your device. This will allow you to restore them if you change your phone or reinstall the app. Here are the steps:
-
Backup your data
-
-
Go to Settings > Chats > Chat backup.
-
Select how often you want to backup your data: daily, weekly, monthly, or manually.
-
Select which Google Drive or iCloud account you want to use for backup.
-
Select whether you want to include videos in your backup or not.
-
Tap on "Back up" to start backing up your data.
-
-
Restore your data
-
-
Download and install WhatsApp Messenger on your new phone or after reinstalling it on your old phone.
-
Verify your phone number with a code sent to you via SMS or a phone call.
-
You will see a prompt asking you to restore your data from Google Drive or iCloud. Tap on "Restore".
-
Wait for the restoration process to complete. You will see a confirmation message that your data has been restored.
-
Create your profile by entering your name and choosing a profile picture.
-
Start using WhatsApp Messenger as usual.
-
-
How to Troubleshoot Common WhatsApp Messenger Issues
-
Sometimes, you may encounter some issues or problems while using WhatsApp Messenger. Here are some of the common ones and how to fix them:
-
Connection issues
-
If you are unable to send or receive messages or calls on WhatsApp Messenger, you may have a connection issue. To fix it, try these steps:
-
-
Check your internet connection and make sure it is working properly. You can try switching between Wi-Fi and mobile data, or turning them off and on again (a quick connectivity check is sketched after these steps).
-
Check the WhatsApp Messenger server status and make sure it is not down or undergoing maintenance. You can do this by visiting https://www.whatsapp.com/status/.
-
Update your WhatsApp Messenger app to the latest version. You can do this by going to the Google Play Store or the App Store and checking for updates.
-
Restart your phone or device and try again.
-
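A quick way to tell a network problem apart from an app problem is to check whether your device or computer can reach the internet at all. Here is a minimal sketch that probes a couple of well-known hosts over HTTPS; the host names are only examples.

```python
# Minimal sketch: probe well-known hosts over port 443 to see whether the
# network itself is reachable. Hosts are examples, not an official check.
import socket

def can_connect(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in ("www.whatsapp.com", "www.google.com"):
        print(host, "reachable" if can_connect(host) else "unreachable")
```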
-
Notification issues
-
If you are not receiving notifications from WhatsApp Messenger, you may have a notification issue. To fix it, try these steps:
-
-
Check your phone or device settings and make sure notifications are enabled for WhatsApp Messenger. You can also customize your notification preferences, such as sound, vibration, pop-up, etc.
-
Check your WhatsApp Messenger settings and make sure notifications are enabled for each chat or group. You can also mute or unmute specific chats or groups.
-
Check your battery saver or power saving mode and make sure it is not interfering with WhatsApp Messenger notifications. You can also whitelist WhatsApp Messenger from any battery optimization settings.
-
Clear your WhatsApp Messenger cache and data. You can do this by going to Settings > Apps > WhatsApp Messenger > Storage > Clear cache and Clear data.
-
-
Storage issues
-
If you are running out of storage space on your phone or device due to WhatsApp Messenger, you may have a storage issue. To fix it, try these steps:
-
-
Delete any unwanted or unnecessary media files, such as photos, videos, documents, etc., from your WhatsApp Messenger app. You can do this by going to Settings > Data and storage usage > Storage usage and selecting the chats or groups that are taking up the most space. Then tap on "Free up space" and select the items you want to delete.
-
Backup your data to Google Drive or iCloud and delete it from your phone or device. You can do this by going to Settings > Chats > Chat backup and following the steps mentioned above.
-
Use an external storage device, such as a memory card, USB drive, or cloud service, to store your media files. You can also transfer them to your computer or another device (the short script after these steps can help you find the largest files first).
-
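Once the media is copied to your computer, a short script can show which files are actually taking up the space, so you know what to archive or delete first. This is a minimal sketch; the folder path is a placeholder for wherever you put the exported files.

```python
# Minimal sketch: list the largest files under a folder so you can decide
# what to archive or delete. The folder path below is a placeholder.
from pathlib import Path

def largest_files(root: str, top: int = 20) -> None:
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    files.sort(key=lambda p: p.stat().st_size, reverse=True)
    for p in files[:top]:
        print(f"{p.stat().st_size / 1_000_000:8.1f} MB  {p}")

if __name__ == "__main__":
    largest_files("./whatsapp-media-export")  # placeholder folder
```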
-
Update issues
-
If you are unable to update your WhatsApp Messenger app to the latest version, you may have an update issue. To fix it, try these steps:
-
-
Check your internet connection and make sure it is working properly. You can try switching between Wi-Fi and mobile data, or turning them off and on again.
-
Check your phone or device storage and make sure you have enough space to download and install the update. You can also delete some unwanted or unnecessary files or apps to free up some space.
-
Check your phone or device settings and make sure you have enabled automatic updates for WhatsApp Messenger. You can also manually check for updates by going to the Google Play Store or the App Store and tapping on "Update".
-
Restart your phone or device and try again.
-
-
How to Delete or Deactivate Your WhatsApp Messenger Account
-
If you no longer want to use WhatsApp Messenger or want to switch to another service, you can delete or deactivate your account. Here are the steps:
-
Delete your account
-
To permanently delete your account and all your data from WhatsApp Messenger, follow these steps:
-
-
Open WhatsApp Messenger and go to Settings > Account > Delete my account.
-
Enter your phone number and tap on "Delete my account".
-
You will see a confirmation message that your account has been deleted.
-
You will also be logged out of WhatsApp Messenger and removed from all your chats and groups.
-
Uninstall WhatsApp Messenger from your phone or device.
-
-
Note: Deleting your account will not delete your data from Google Drive or iCloud. You will have to delete it manually from those services.
-
Deactivate your account
-
To temporarily deactivate your account and stop receiving messages and calls from WhatsApp Messenger, follow these steps:
-
-
Remove your SIM card from your phone or device.
-
Insert your SIM card into another phone or device.
-
Download and install WhatsApp Messenger on that phone or device.
-
Verify your phone number with a code sent to you via SMS or a phone call.
-
You will see a prompt asking you to restore your data from Google Drive or iCloud. Tap on "Skip".
-
You will see a message that your account is being used on another phone or device. Tap on "Log out".
-
You will be logged out of WhatsApp Messenger and will stop receiving messages and calls.
-
-
Note: Deactivating your account will not delete your data from WhatsApp Messenger, Google Drive, or iCloud. You can reactivate your account by following the same steps on your original phone or device.
-
Alternatives to WhatsApp Messenger
-
If you are looking for some alternatives to WhatsApp Messenger that offer similar or better features, security, and privacy, here are some of them:
-
-
Signal: Signal is a messaging and calling app that is known for its high level of encryption and privacy. It is open-source, free, and does not collect any user data. It also has features such as disappearing messages, group chats and calls, stickers, GIFs, etc.
-
Telegram: Telegram is a messaging and calling app that is known for its speed and reliability. It is also free and does not have any ads. It also has features such as cloud storage, channels, bots, groups with up to 200,000 members, voice chats, etc.
-
iMessage: iMessage is Apple's messaging service, exclusive to iOS and other Apple devices. It is built into the Messages app and lets you send text and voice messages and share media files with other Apple users, while voice and video calls are handled through FaceTime. It also has features such as end-to-end encryption, stickers, Animoji, Memoji, etc.
-
-
Conclusion
-
In this article, we have shown you how to download WhatsApp Messenger for your Android, iOS, or desktop device, and how to set up and use its features. We have also given you some tips on how to secure your account, backup and restore your data, troubleshoot common issues, delete or deactivate your account, and find alternatives to WhatsApp Messenger. We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions and answers about WhatsApp Messenger:
-
-
Q: How can I change my phone number on WhatsApp Messenger?
-
A: You can change your phone number on WhatsApp Messenger by going to Settings > Account > Change number. You will have to enter your old and new phone numbers and verify them with codes sent to you via SMS or a phone call. You will also have the option to notify your contacts about your new number.
-
Q: How can I delete a message or a chat on WhatsApp Messenger?
-
A: You can delete a message or a chat on WhatsApp Messenger by tapping and holding on the message or chat you want to delete and selecting "Delete" from the menu. You will have the option to delete it for yourself or for everyone. If you delete it for everyone, it will disappear from both your and the recipient's chat history. However, you can only do this within an hour of sending the message.
-
Q: How can I use WhatsApp Messenger without internet?
-
A: You cannot use WhatsApp Messenger without internet. However, you can use some offline features such as composing messages or viewing media files that are already downloaded on your device. Once you are connected to the internet again, you can send or receive messages or calls as usual.
-
Q: How can I hide my online status or last seen on WhatsApp Messenger?
-
A: You can hide your online status or last seen on WhatsApp Messenger by going to Settings > Account > Privacy > Last seen. You can choose who can see your last seen: everyone, your contacts, or nobody. However, if you hide your last seen, you will not be able to see other people's last seen either.
-
Q: How can I block or unblock someone on WhatsApp Messenger?
-
A: You can block or unblock someone on WhatsApp Messenger by going to Settings > Account > Privacy > Blocked contacts. You can tap on the "Add" icon at the top right corner of the screen and select the contact you want to block. You can also tap and hold on the contact you want to unblock and select "Unblock" from the menu. When you block someone, they will not be able to send you messages or calls, see your profile picture, status, or last seen, or add you to groups. When you unblock someone, you will be able to communicate with them as usual.
-
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/A Journey Through Cancer Book Pdf How One Man Found Hope and Meaning in His Battle with the Disease.md b/spaces/contluForse/HuggingGPT/assets/A Journey Through Cancer Book Pdf How One Man Found Hope and Meaning in His Battle with the Disease.md
deleted file mode 100644
index a738aa8192053c176b73c823ba240f4d18a810ff..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/A Journey Through Cancer Book Pdf How One Man Found Hope and Meaning in His Battle with the Disease.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
Tumor necrosis factor (TNF) had been described as a highly active anti-cancer molecule in mouse models and appeared to compromise the vasculature of rapidly growing tumors. We first attempted to use autologous TIL as vehicles to deliver TNF at high concentrations to a tumor deposit. The TNF gene was transduced into lymphocytes and the secreted levels of TNF were 30-fold lower than comparably transduced tumor cell lines transduced with this gene (Hwu et al. 1993a). Although secretion of tumor necrosis factor was low, we attempted this treatment in several patients, and in 1992, we treated a 52-year old woman with metastatic melanoma who had progressed after treatment using non-transduced TIL followed by multiple doses of IL-2 to keep these cells alive in vivo (Rosenberg 1992). We grew TIL from one of many subcutaneous lesions that she had throughout her body, transduced the lymphocytes with the gene encoding TNF, and injected escalating doses of these TNF gene modified TIL twice a week giving increasing doses of these cells in the absence of IL-2 administration. Multiple melanoma nodules regressed and ultimately disappeared, and the patient survived disease-free for several decades. The hemodynamic effects of tumor necrosis factor were of concern although there was no toxicity in these patients likely due to the very low amounts of TNF that were being produced. It was not at all clear that the TNF had played a role in the tumor reduction since some patients could potentially respond to a second treatment with naturally occurring TIL. The ability to successfully treat patients with this kind of functional modification of the T cell was the stimulus to proceed with efforts to genetically modify lymphocytes to improve anti-tumor activity.
As we were reporting these dramatic responses to anti CD-19 CAR T cell therapy, I was contacted in 2011 by Arie Belldegrun, a former fellow, then a professor of Urology at UCLA, who had worked in my lab 25 years earlier. Arie had a vision of how to commercialize this approach for the treatment of hematologic cancers. In 2012 the Surgery Branch NCI signed a Cooperative Research and Development Agreement (CRADA) with Kite Pharma founded by Dr. Belldegrun. We transferred our technology to Kite and worked closely with them to develop a closed system for cell production, applicable to Good Manufacturing Practices. Clinical studies in the Surgery Branch confirmed by subsequent multi institutional studies by Kite Pharma resulted in objective responses in about 83% of patients with diffuse, large B cell lymphoma with 58% complete durable responses (Cappell et al. 2020; Neelapu et al. 2017). Both Kite Pharma and Novartis, who had been conducting trials in conjunction with the University of Pennsylvania group, received FDA approval to market CD19 CAR for the treatment of diffuse large B cell lymphoma and acute lymphocytic leukemia, respectively in 2017. In that same month, Kite was sold to Gilead Sciences for 11.9 billion dollars. Anti-CD19 CAR studies are now being widely used in patients throughout the United States, Europe, and Israel. Over 200 companies world-wide are now working to develop cell-based therapies.
-
by OPACC. In its pages, you will find stories and photos showing what hope looks like from the many vantage points of families who have been affected by childhood cancer. The families who have contributed to this volume have chosen to include themselves to honour their children and their inspiring stories; on every page, the families included here want to show others that, no matter what, they are smiling, hopeful, loving, and persevering. The book can be purchased through the OPACC website at: www.opacc.org
-
The newly published book, Love and Death: My Journey Through the Valley of the Shadow, begins with the announcement Church made to his congregation (All Souls in New York) on February 4 of this year: He is dying of esophogeal cancer, and his time remaining "is likely to be measured in months, not years." Similarly, Church did not dance around the subject of his mortality Friday: "This General Assembly is a very special occasion for me. Barring some sort of miraculous but nonetheless unexpected turn in my health, it will be my last opportunity to celebrate with you the gift of our chosen faith."
-
"Those who know my mantra sometimes test me with it: 'So, Forrest, do you really want cancer?' I reply: 'I want what I have.' ... We cannot selectively wish away what is wrong with us without including all that is right. ... In short, I back away from the darkened pane of my health to gain a prospect of the whole window I am blessed to look through."
-
Author Kenneth C. Haugk writes in a warm, caring style, with short, easy-to-read chapters. He walks alongside the reader through the grief journey, sharing helpful insights about grief, biblical truths, and stories that provide comfort and reassurance.
-
Congregations and other organizations use these resources to strengthen and expand ministry.Individuals use them to improve their ability to relate to and care for others, grow in faith,and journey through life crises. Our 30-person staff is based in St. Louis, Missouri.
-
-
Corinne Stanley's La tercera luz: A Poetic Journey Through Spain tells the story of the birth of the poetry collection, Silence from the Forest (Silencios del bosque) by Maria Esther Bendala Pavón. But it is also the story of a poet translator's awakening at the intersection of loneliness and desire. When Stanley decides to leave her Midwest cubicle for Spain, chance encounters spiral her into intimate proximity with a mysterious, beautiful Spanish poet suffering from brain cancer named Esther. We also meet fortune tellers, literary companions with varying degrees of allegiance, architecture vibrating with layers of history, and most haunting, an elusive beloved whose abandonment propels Stanley deep into the labyrinth of her own heart. Ultimately, La tercera luz is a rhapsody of arriving, death muse faced and the triumph of not one, but two books in hand, heart's path met with ceaseless passion and generosity despite setback: "I want / to be lost so I can say / I walked that path--I was there, / singing the dark days of the Córdoba moon."
-
Set against the luminous backdrop of the Spanish landscape, and the haunting vestiges of la Convivencia, La tercera luz is an incandescent tribute to the ever- refilling well of possibilities when we have the courage to make the journeys of the Heart. When Corinne Stanley, herself a cancer survivor, reaches out across oceans and cultures and generously gifts Esther Bendala Pavón her longed-for place in the literary sun, she ensures her own triumphant legacy.
-
A fun keepsake that provides a way for the child to document his or her cancer journey. While it is a helpful tool for the child undergoing treatment, this journal is also a cherished treasure for family and friends, who can read it and experience the cancer journey through the thoughts and words of a child .
-
Find out about organisations, support, books, leaflets and other resources to help you cope with breast cancer and it's treatment. There is also information about mastectomy wear and prosthesis suppliers.
-
Breast Cancer Now is a charity dedicated to funding breast cancer research. They also provide breast cancer information and support across the UK. Services are free and include a helpline, website, publications, and practical and emotional support. It was formed by the merger of The Breast Cancer Campaign and Breakthrough Breast Cancer in 2015.
-
Macmillan Cancer Support is a charity that gives practical, medical and financial support to people with cancer in the UK. Its helpline gives guidance on cancer and advice on benefits. It also publishes booklets about cancer and treatments. Information is available in other languages.
-
This charity aims to help with the physical and emotional wellbeing of people going through cancer treatment. They provide confidence boosting workshops, which gives a chance for people to meet others going through a similar experience and to learn skills to manage some of the side effects of treatment, such as skincare and make up techniques.
-
Coming to terms with a diagnosis of breast cancer isn't easy. This book gives reassurance and practical advice about getting on with life as normally as possible. It is helpful for friends and family as well as people with breast cancer.
-
A book for women who have just been diagnosed with breast cancer. Written by a woman who has been through treatment for breast cancer. Provides information about what breast cancer is, how it is diagnosed and the treatment options available. It also talks about the emotional effects after a cancer diagnosis.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cooelf/Multimodal-CoT/timm/data/tf_preprocessing.py b/spaces/cooelf/Multimodal-CoT/timm/data/tf_preprocessing.py
deleted file mode 100644
index 44b4a3af7372c6865b1cdddda0a8da0ccc6b93a0..0000000000000000000000000000000000000000
--- a/spaces/cooelf/Multimodal-CoT/timm/data/tf_preprocessing.py
+++ /dev/null
@@ -1,232 +0,0 @@
-""" Tensorflow Preprocessing Adapter
-
-Allows use of Tensorflow preprocessing pipeline in PyTorch Transform
-
-Copyright of original Tensorflow code below.
-
-Hacked together by / Copyright 2020 Ross Wightman
-"""
-
-# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""ImageNet preprocessing for MnasNet."""
-import tensorflow as tf
-import numpy as np
-
-IMAGE_SIZE = 224
-CROP_PADDING = 32
-
-
-def distorted_bounding_box_crop(image_bytes,
- bbox,
- min_object_covered=0.1,
- aspect_ratio_range=(0.75, 1.33),
- area_range=(0.05, 1.0),
- max_attempts=100,
- scope=None):
- """Generates cropped_image using one of the bboxes randomly distorted.
-
- See `tf.image.sample_distorted_bounding_box` for more documentation.
-
- Args:
- image_bytes: `Tensor` of binary image data.
- bbox: `Tensor` of bounding boxes arranged `[1, num_boxes, coords]`
- where each coordinate is [0, 1) and the coordinates are arranged
- as `[ymin, xmin, ymax, xmax]`. If num_boxes is 0 then use the whole
- image.
- min_object_covered: An optional `float`. Defaults to `0.1`. The cropped
- area of the image must contain at least this fraction of any bounding
- box supplied.
- aspect_ratio_range: An optional list of `float`s. The cropped area of the
- image must have an aspect ratio = width / height within this range.
- area_range: An optional list of `float`s. The cropped area of the image
- must contain a fraction of the supplied image within this range.
- max_attempts: An optional `int`. Number of attempts at generating a cropped
- region of the image of the specified constraints. After `max_attempts`
- failures, return the entire image.
- scope: Optional `str` for name scope.
- Returns:
- cropped image `Tensor`
- """
- with tf.name_scope(scope, 'distorted_bounding_box_crop', [image_bytes, bbox]):
- shape = tf.image.extract_jpeg_shape(image_bytes)
- sample_distorted_bounding_box = tf.image.sample_distorted_bounding_box(
- shape,
- bounding_boxes=bbox,
- min_object_covered=min_object_covered,
- aspect_ratio_range=aspect_ratio_range,
- area_range=area_range,
- max_attempts=max_attempts,
- use_image_if_no_bounding_boxes=True)
- bbox_begin, bbox_size, _ = sample_distorted_bounding_box
-
- # Crop the image to the specified bounding box.
- offset_y, offset_x, _ = tf.unstack(bbox_begin)
- target_height, target_width, _ = tf.unstack(bbox_size)
- crop_window = tf.stack([offset_y, offset_x, target_height, target_width])
- image = tf.image.decode_and_crop_jpeg(image_bytes, crop_window, channels=3)
-
- return image
-
-
-def _at_least_x_are_equal(a, b, x):
- """At least `x` of `a` and `b` `Tensors` are equal."""
- match = tf.equal(a, b)
- match = tf.cast(match, tf.int32)
- return tf.greater_equal(tf.reduce_sum(match), x)
-
-
-def _decode_and_random_crop(image_bytes, image_size, resize_method):
- """Make a random crop of image_size."""
- bbox = tf.constant([0.0, 0.0, 1.0, 1.0], dtype=tf.float32, shape=[1, 1, 4])
- image = distorted_bounding_box_crop(
- image_bytes,
- bbox,
- min_object_covered=0.1,
- aspect_ratio_range=(3. / 4, 4. / 3.),
- area_range=(0.08, 1.0),
- max_attempts=10,
- scope=None)
- original_shape = tf.image.extract_jpeg_shape(image_bytes)
- bad = _at_least_x_are_equal(original_shape, tf.shape(image), 3)
-
- image = tf.cond(
- bad,
- lambda: _decode_and_center_crop(image_bytes, image_size, resize_method),
- lambda: tf.image.resize([image], [image_size, image_size], resize_method)[0])
-
- return image
-
-
-def _decode_and_center_crop(image_bytes, image_size, resize_method):
- """Crops to center of image with padding then scales image_size."""
- shape = tf.image.extract_jpeg_shape(image_bytes)
- image_height = shape[0]
- image_width = shape[1]
-
- padded_center_crop_size = tf.cast(
- ((image_size / (image_size + CROP_PADDING)) *
- tf.cast(tf.minimum(image_height, image_width), tf.float32)),
- tf.int32)
-
- offset_height = ((image_height - padded_center_crop_size) + 1) // 2
- offset_width = ((image_width - padded_center_crop_size) + 1) // 2
- crop_window = tf.stack([offset_height, offset_width,
- padded_center_crop_size, padded_center_crop_size])
- image = tf.image.decode_and_crop_jpeg(image_bytes, crop_window, channels=3)
- image = tf.image.resize([image], [image_size, image_size], resize_method)[0]
-
- return image
-
-
-def _flip(image):
- """Random horizontal image flip."""
- image = tf.image.random_flip_left_right(image)
- return image
-
-
-def preprocess_for_train(image_bytes, use_bfloat16, image_size=IMAGE_SIZE, interpolation='bicubic'):
- """Preprocesses the given image for evaluation.
-
- Args:
- image_bytes: `Tensor` representing an image binary of arbitrary size.
- use_bfloat16: `bool` for whether to use bfloat16.
- image_size: image size.
- interpolation: image interpolation method
-
- Returns:
- A preprocessed image `Tensor`.
- """
- resize_method = tf.image.ResizeMethod.BICUBIC if interpolation == 'bicubic' else tf.image.ResizeMethod.BILINEAR
- image = _decode_and_random_crop(image_bytes, image_size, resize_method)
- image = _flip(image)
- image = tf.reshape(image, [image_size, image_size, 3])
- image = tf.image.convert_image_dtype(
- image, dtype=tf.bfloat16 if use_bfloat16 else tf.float32)
- return image
-
-
-def preprocess_for_eval(image_bytes, use_bfloat16, image_size=IMAGE_SIZE, interpolation='bicubic'):
- """Preprocesses the given image for evaluation.
-
- Args:
- image_bytes: `Tensor` representing an image binary of arbitrary size.
- use_bfloat16: `bool` for whether to use bfloat16.
- image_size: image size.
- interpolation: image interpolation method
-
- Returns:
- A preprocessed image `Tensor`.
- """
- resize_method = tf.image.ResizeMethod.BICUBIC if interpolation == 'bicubic' else tf.image.ResizeMethod.BILINEAR
- image = _decode_and_center_crop(image_bytes, image_size, resize_method)
- image = tf.reshape(image, [image_size, image_size, 3])
- image = tf.image.convert_image_dtype(
- image, dtype=tf.bfloat16 if use_bfloat16 else tf.float32)
- return image
-
-
-def preprocess_image(image_bytes,
- is_training=False,
- use_bfloat16=False,
- image_size=IMAGE_SIZE,
- interpolation='bicubic'):
- """Preprocesses the given image.
-
- Args:
- image_bytes: `Tensor` representing an image binary of arbitrary size.
- is_training: `bool` for whether the preprocessing is for training.
- use_bfloat16: `bool` for whether to use bfloat16.
- image_size: image size.
- interpolation: image interpolation method
-
- Returns:
- A preprocessed image `Tensor` with value range of [0, 255].
- """
- if is_training:
- return preprocess_for_train(image_bytes, use_bfloat16, image_size, interpolation)
- else:
- return preprocess_for_eval(image_bytes, use_bfloat16, image_size, interpolation)
-
-
-class TfPreprocessTransform:
-
- def __init__(self, is_training=False, size=224, interpolation='bicubic'):
- self.is_training = is_training
- self.size = size[0] if isinstance(size, tuple) else size
- self.interpolation = interpolation
- self._image_bytes = None
- self.process_image = self._build_tf_graph()
- self.sess = None
-
- def _build_tf_graph(self):
- with tf.device('/cpu:0'):
- self._image_bytes = tf.placeholder(
- shape=[],
- dtype=tf.string,
- )
- img = preprocess_image(
- self._image_bytes, self.is_training, False, self.size, self.interpolation)
- return img
-
- def __call__(self, image_bytes):
- if self.sess is None:
- self.sess = tf.Session()
- img = self.sess.run(self.process_image, feed_dict={self._image_bytes: image_bytes})
- img = img.round().clip(0, 255).astype(np.uint8)
- if img.ndim < 3:
- img = np.expand_dims(img, axis=-1)
- img = np.rollaxis(img, 2) # HWC to CHW
- return img
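For reference, the adapter deleted above wraps a TF1-style graph (tf.placeholder plus tf.Session) behind a plain callable that a PyTorch dataloader can use as a transform. A minimal usage sketch, assuming the timm package layout above, a TF1-compatible TensorFlow install, and a hypothetical `example.jpg` input:

```python
# Sketch only: requires the timm module above and TensorFlow 1.x (or tf.compat.v1 behaviour).
from timm.data.tf_preprocessing import TfPreprocessTransform

transform = TfPreprocessTransform(is_training=False, size=224, interpolation='bicubic')

with open('example.jpg', 'rb') as f:  # hypothetical input file
    image_bytes = f.read()

chw_uint8 = transform(image_bytes)  # center-cropped, resized, returned as uint8 in CHW order
print(chw_uint8.shape)              # (3, 224, 224)
```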
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/leres/pix2pix/util/visualizer.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/leres/pix2pix/util/visualizer.py
deleted file mode 100644
index 810a0513ab997103ace77b665c9a17f223b173c9..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/leres/pix2pix/util/visualizer.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import numpy as np
-import os
-import sys
-import ntpath
-import time
-from . import util, html
-from subprocess import Popen, PIPE
-import torch
-
-
-if sys.version_info[0] == 2:
- VisdomExceptionBase = Exception
-else:
- VisdomExceptionBase = ConnectionError
-
-
-def save_images(webpage, visuals, image_path, aspect_ratio=1.0, width=256):
- """Save images to the disk.
-
- Parameters:
- webpage (the HTML class) -- the HTML webpage class that stores these images (see html.py for more details)
- visuals (OrderedDict) -- an ordered dictionary that stores (name, images (either tensor or numpy) ) pairs
- image_path (str) -- the string is used to create image paths
- aspect_ratio (float) -- the aspect ratio of saved images
- width (int) -- the images will be resized to width x width
-
- This function will save images stored in 'visuals' to the HTML file specified by 'webpage'.
- """
- image_dir = webpage.get_image_dir()
- short_path = ntpath.basename(image_path[0])
- name = os.path.splitext(short_path)[0]
-
- webpage.add_header(name)
- ims, txts, links = [], [], []
-
- for label, im_data in visuals.items():
- im = util.tensor2im(im_data)
- image_name = '%s_%s.png' % (name, label)
- save_path = os.path.join(image_dir, image_name)
- util.save_image(im, save_path, aspect_ratio=aspect_ratio)
- ims.append(image_name)
- txts.append(label)
- links.append(image_name)
- webpage.add_images(ims, txts, links, width=width)
-
-
-class Visualizer():
- """This class includes several functions that can display/save images and print/save logging information.
-
- It uses a Python library 'visdom' for display, and a Python library 'dominate' (wrapped in 'HTML') for creating HTML files with images.
- """
-
- def __init__(self, opt):
- """Initialize the Visualizer class
-
- Parameters:
- opt -- stores all the experiment flags; needs to be a subclass of BaseOptions
- Step 1: Cache the training/test options
- Step 2: connect to a visdom server
- Step 3: create an HTML object for saving HTML files
- Step 4: create a logging file to store training losses
- """
- self.opt = opt # cache the option
- self.display_id = opt.display_id
- self.use_html = opt.isTrain and not opt.no_html
- self.win_size = opt.display_winsize
- self.name = opt.name
- self.port = opt.display_port
- self.saved = False
-
- if self.use_html: # create an HTML object at /web/; images will be saved under /web/images/
- self.web_dir = os.path.join(opt.checkpoints_dir, opt.name, 'web')
- self.img_dir = os.path.join(self.web_dir, 'images')
- print('create web directory %s...' % self.web_dir)
- util.mkdirs([self.web_dir, self.img_dir])
- # create a logging file to store training losses
- self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt')
- with open(self.log_name, "a") as log_file:
- now = time.strftime("%c")
- log_file.write('================ Training Loss (%s) ================\n' % now)
-
- def reset(self):
- """Reset the self.saved status"""
- self.saved = False
-
- def create_visdom_connections(self):
- """If the program could not connect to Visdom server, this function will start a new server at port < self.port > """
- cmd = sys.executable + ' -m visdom.server -p %d &>/dev/null &' % self.port
- print('\n\nCould not connect to Visdom server. \n Trying to start a server....')
- print('Command: %s' % cmd)
- Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
-
- def display_current_results(self, visuals, epoch, save_result):
- """Display current results on visdom; save current results to an HTML file.
-
- Parameters:
- visuals (OrderedDict) - - dictionary of images to display or save
- epoch (int) - - the current epoch
- save_result (bool) - - if save the current results to an HTML file
- """
- if self.use_html and (save_result or not self.saved): # save images to an HTML file if they haven't been saved.
- self.saved = True
- # save images to the disk
- for label, image in visuals.items():
- image_numpy = util.tensor2im(image)
- img_path = os.path.join(self.img_dir, 'epoch%.3d_%s.png' % (epoch, label))
- util.save_image(image_numpy, img_path)
-
- # update website
- webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=1)
- for n in range(epoch, 0, -1):
- webpage.add_header('epoch [%d]' % n)
- ims, txts, links = [], [], []
-
- for label, image_numpy in visuals.items():
- # image_numpy = util.tensor2im(image)
- img_path = 'epoch%.3d_%s.png' % (n, label)
- ims.append(img_path)
- txts.append(label)
- links.append(img_path)
- webpage.add_images(ims, txts, links, width=self.win_size)
- webpage.save()
-
- # def plot_current_losses(self, epoch, counter_ratio, losses):
- # """display the current losses on visdom display: dictionary of error labels and values
- #
- # Parameters:
- # epoch (int) -- current epoch
- # counter_ratio (float) -- progress (percentage) in the current epoch, between 0 to 1
- # losses (OrderedDict) -- training losses stored in the format of (name, float) pairs
- # """
- # if not hasattr(self, 'plot_data'):
- # self.plot_data = {'X': [], 'Y': [], 'legend': list(losses.keys())}
- # self.plot_data['X'].append(epoch + counter_ratio)
- # self.plot_data['Y'].append([losses[k] for k in self.plot_data['legend']])
- # try:
- # self.vis.line(
- # X=np.stack([np.array(self.plot_data['X'])] * len(self.plot_data['legend']), 1),
- # Y=np.array(self.plot_data['Y']),
- # opts={
- # 'title': self.name + ' loss over time',
- # 'legend': self.plot_data['legend'],
- # 'xlabel': 'epoch',
- # 'ylabel': 'loss'},
- # win=self.display_id)
- # except VisdomExceptionBase:
- # self.create_visdom_connections()
-
- # losses: same format as |losses| of plot_current_losses
- def print_current_losses(self, epoch, iters, losses, t_comp, t_data):
- """print current losses on console; also save the losses to the disk
-
- Parameters:
- epoch (int) -- current epoch
- iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch)
- losses (OrderedDict) -- training losses stored in the format of (name, float) pairs
- t_comp (float) -- computational time per data point (normalized by batch_size)
- t_data (float) -- data loading time per data point (normalized by batch_size)
- """
- message = '(epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (epoch, iters, t_comp, t_data)
- for k, v in losses.items():
- message += '%s: %.3f ' % (k, v)
-
- print(message) # print the message
- with open(self.log_name, "a") as log_file:
- log_file.write('%s\n' % message) # save the message
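As the docstrings above note, the deleted Visualizer only needs an options object with a handful of attributes; with HTML output disabled it simply formats the losses and appends them to `loss_log.txt`. A rough sketch, assuming the pix2pix package layout above and invented option values:

```python
# Sketch only: option values are made up; in pix2pix they normally come from TrainOptions/TestOptions.
import os
from argparse import Namespace
from annotator.leres.pix2pix.util.visualizer import Visualizer

opt = Namespace(display_id=0, isTrain=True, no_html=True, display_winsize=256,
                name='experiment_name', display_port=8097, checkpoints_dir='./checkpoints')
os.makedirs(os.path.join(opt.checkpoints_dir, opt.name), exist_ok=True)

vis = Visualizer(opt)
# Prints "(epoch: 1, iters: 100, time: 0.050, data: 0.010) G_GAN: 0.700 " and appends it to loss_log.txt.
vis.print_current_losses(epoch=1, iters=100, losses={'G_GAN': 0.7}, t_comp=0.05, t_data=0.01)
```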
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/video/processing.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/video/processing.py
deleted file mode 100644
index 2b93a59215d56b6e5ba05f48bca3527772f0c744..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/video/processing.py
+++ /dev/null
@@ -1,160 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os
-import os.path as osp
-import subprocess
-import tempfile
-
-from annotator.mmpkg.mmcv.utils import requires_executable
-
-
-@requires_executable('ffmpeg')
-def convert_video(in_file,
- out_file,
- print_cmd=False,
- pre_options='',
- **kwargs):
- """Convert a video with ffmpeg.
-
- This provides a general api to ffmpeg, the executed command is::
-
- `ffmpeg -y {pre_options} -i {in_file} {options} {out_file}`
-
- Options(kwargs) are mapped to ffmpeg commands with the following rules:
-
- - key=val: "-key val"
- - key=True: "-key"
- - key=False: ""
-
- Args:
- in_file (str): Input video filename.
- out_file (str): Output video filename.
- pre_options (str): Options that appear before "-i {in_file}".
- print_cmd (bool): Whether to print the final ffmpeg command.
- """
- options = []
- for k, v in kwargs.items():
- if isinstance(v, bool):
- if v:
- options.append(f'-{k}')
- elif k == 'log_level':
- assert v in [
- 'quiet', 'panic', 'fatal', 'error', 'warning', 'info',
- 'verbose', 'debug', 'trace'
- ]
- options.append(f'-loglevel {v}')
- else:
- options.append(f'-{k} {v}')
- cmd = f'ffmpeg -y {pre_options} -i {in_file} {" ".join(options)} ' \
- f'{out_file}'
- if print_cmd:
- print(cmd)
- subprocess.call(cmd, shell=True)
-
-
-@requires_executable('ffmpeg')
-def resize_video(in_file,
- out_file,
- size=None,
- ratio=None,
- keep_ar=False,
- log_level='info',
- print_cmd=False):
- """Resize a video.
-
- Args:
- in_file (str): Input video filename.
- out_file (str): Output video filename.
- size (tuple): Expected size (w, h), eg, (320, 240) or (320, -1).
- ratio (tuple or float): Expected resize ratio, (2, 0.5) means
- (w*2, h*0.5).
- keep_ar (bool): Whether to keep original aspect ratio.
- log_level (str): Logging level of ffmpeg.
- print_cmd (bool): Whether to print the final ffmpeg command.
- """
- if size is None and ratio is None:
- raise ValueError('expected size or ratio must be specified')
- if size is not None and ratio is not None:
- raise ValueError('size and ratio cannot be specified at the same time')
- options = {'log_level': log_level}
- if size:
- if not keep_ar:
- options['vf'] = f'scale={size[0]}:{size[1]}'
- else:
- options['vf'] = f'scale=w={size[0]}:h={size[1]}:' \
- 'force_original_aspect_ratio=decrease'
- else:
- if not isinstance(ratio, tuple):
- ratio = (ratio, ratio)
- options['vf'] = f'scale="trunc(iw*{ratio[0]}):trunc(ih*{ratio[1]})"'
- convert_video(in_file, out_file, print_cmd, **options)
-
-
-@requires_executable('ffmpeg')
-def cut_video(in_file,
- out_file,
- start=None,
- end=None,
- vcodec=None,
- acodec=None,
- log_level='info',
- print_cmd=False):
- """Cut a clip from a video.
-
- Args:
- in_file (str): Input video filename.
- out_file (str): Output video filename.
- start (None or float): Start time (in seconds).
- end (None or float): End time (in seconds).
- vcodec (None or str): Output video codec, None for unchanged.
- acodec (None or str): Output audio codec, None for unchanged.
- log_level (str): Logging level of ffmpeg.
- print_cmd (bool): Whether to print the final ffmpeg command.
- """
- options = {'log_level': log_level}
- if vcodec is None:
- options['vcodec'] = 'copy'
- if acodec is None:
- options['acodec'] = 'copy'
- if start:
- options['ss'] = start
- else:
- start = 0
- if end:
- options['t'] = end - start
- convert_video(in_file, out_file, print_cmd, **options)
-
-
-@requires_executable('ffmpeg')
-def concat_video(video_list,
- out_file,
- vcodec=None,
- acodec=None,
- log_level='info',
- print_cmd=False):
- """Concatenate multiple videos into a single one.
-
- Args:
- video_list (list): A list of video filenames
- out_file (str): Output video filename
- vcodec (None or str): Output video codec, None for unchanged
- acodec (None or str): Output audio codec, None for unchanged
- log_level (str): Logging level of ffmpeg.
- print_cmd (bool): Whether to print the final ffmpeg command.
- """
- tmp_filehandler, tmp_filename = tempfile.mkstemp(suffix='.txt', text=True)
- with open(tmp_filename, 'w') as f:
- for filename in video_list:
- f.write(f'file {osp.abspath(filename)}\n')
- options = {'log_level': log_level}
- if vcodec is None:
- options['vcodec'] = 'copy'
- if acodec is None:
- options['acodec'] = 'copy'
- convert_video(
- tmp_filename,
- out_file,
- print_cmd,
- pre_options='-f concat -safe 0',
- **options)
- os.close(tmp_filehandler)
- os.remove(tmp_filename)
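The helpers deleted above all funnel into convert_video, which maps keyword arguments onto ffmpeg flags (key=val becomes "-key val", key=True becomes "-key"). A rough usage sketch, assuming the repository layout above, ffmpeg on PATH, and placeholder filenames:

```python
# Sketch only: filenames are placeholders and ffmpeg must be installed.
from annotator.mmpkg.mmcv.video.processing import convert_video, resize_video, cut_video

# Roughly runs: ffmpeg -y -i in.mp4 -loglevel info -an out.mp4   (-an strips the audio track)
convert_video('in.mp4', 'out.mp4', print_cmd=True, log_level='info', an=True)

# Scale to 640 px wide, letting ffmpeg pick the height (-vf scale=640:-1).
resize_video('in.mp4', 'small.mp4', size=(640, -1), print_cmd=True)

# Copy-codec cut of seconds 5-15 (-ss 5 -t 10 -vcodec copy -acodec copy).
cut_video('in.mp4', 'clip.mp4', start=5, end=15, print_cmd=True)
```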
diff --git a/spaces/coutant/detect-signature/app.py b/spaces/coutant/detect-signature/app.py
deleted file mode 100644
index b4e0fba984afc99837c57aa5579c8501558a52d4..0000000000000000000000000000000000000000
--- a/spaces/coutant/detect-signature/app.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import PIL.Image
-import gradio as gr
-import torch
-import numpy as np
-
-def detect_with_craft_text_detector(image: np.ndarray):
- from craft_text_detector import Craft
- craft = Craft(output_dir='output', crop_type="box", cuda=torch.cuda.is_available(), export_extra=True)
- result = craft.detect_text( image)
- annotated = PIL.Image.open('output/image_text_detection.png') # image with boxes displayed
- return annotated, result['boxes'], is_signature(result['boxes_as_ratios'])
-
-def detect_with_craft_hw_ocr(image: np.ndarray):
- from craft_hw_ocr import OCR
- ocr = OCR.load_models()
- image, results = OCR.detection(image, ocr[2])
- bboxes, _ = OCR.recoginition(image, results, ocr[0], ocr[1])
- h,w,_=np.shape(image) # third dimension is color channel
- annotated = OCR.visualize(image, results)
- m=(np.asarray([w,h]))[np.newaxis,np.newaxis,:]
- return annotated, bboxes, is_signature(bboxes/m)
-
-def process(image:np.ndarray, lib:str='craft_text_detector'):
- if image is None:
- return None,'',''
- annotated, boxes, signed = detect_with_craft_text_detector(image) if lib=='craft_text_detector' else detect_with_craft_hw_ocr( image)
- return annotated, len(boxes), signed
-
-dw=0.3 # width ratio
dh=0.25 # height ratio
-def is_nw(box):
- """
- A box happen to be a 4-pixel list in order
- 1 -- 2
- 4 -- 3
- """
- return box[2][0]<=dw and box[2][1]<= dh
-
-def is_ne(box):
- return box[3][0]>=1-dw and box[3][1]<= dh
-
-def is_se(box):
- return box[0][0]>=1-dw and box[0][1]>= 1-dh
-
-def is_sw(box):
- return box[1][0]<=dw and box[1][1]>= 1-dh
-
-def is_corner(box)->bool:
- """ @:returns true if the box is located in any corner """
- return is_nw(box) or is_ne(box) or is_se(box) or is_sw(box)
-
-dhhf=0.2 # dh for header and footer
-def is_footer(box)->bool:
- """ true if for the 2 first points, y>0.8 """
- return box[0][1]>=1-dhhf and box[1][1]>=1-dhhf
-
-def is_header(box)->bool:
- """ true if for the 2 last points, y<0.2 """
- return box[2][1]<=dhhf and box[3][1]<=dhhf
-
-# def is_signature(prediction_result) -> bool:
-def is_signature(boxes) -> bool:
- """ true if any of the boxes is at any corner, or header or footer """
- for box in boxes:
- if box[1][0]-box[0][0]<0.05: # not large enough
- continue
- if is_corner(box) or is_header(box) or is_footer(box):
- return True
- return False
-
-gr.Interface(
- fn = process,
- # inputs = [ gr.Image(label="Input"), gr.inputs.Radio(label='Model', choices=["craft_text_detector", "craft_hw_ocr"], default='craft_text_detector') ],
- inputs = [ gr.Image(label="Input") ],
- outputs = [ gr.Image(label="Output"), gr.Label(label="nb of text detections"), gr.Label(label="Has signature") ],
- title="Detect signature in image",
- description="Is the photo or image watermarked by a signature?",
- examples=[['data/photologo-1-1.jpg'], ['data/times-square.jpg'], ['data/photologo-3.jpg']],
- allow_flagging="never"
-).launch(debug=True, enable_queue=True)
\ No newline at end of file
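The heuristic deleted above treats box coordinates as width/height ratios in [0, 1] and flags a signature when a box at least 0.05 wide falls in a corner, header, or footer band. A small illustration with invented boxes, assuming the helper functions above are defined in the current session (importing the Space's app.py directly would launch the Gradio interface):

```python
# Illustration only: boxes are made up. Each box lists [top-left, top-right, bottom-right, bottom-left]
# corners as (x, y) ratios, matching the "1 -- 2 / 4 -- 3" convention in the docstring above.
footer_box = [[0.35, 0.90], [0.65, 0.90], [0.65, 0.97], [0.35, 0.97]]  # wide box in the footer band
tiny_box = [[0.01, 0.01], [0.03, 0.01], [0.03, 0.05], [0.01, 0.05]]    # too narrow to count

print(is_signature([tiny_box]))              # False: width 0.02 < 0.05, so the box is skipped
print(is_signature([tiny_box, footer_box]))  # True: footer_box is wide enough and sits in the footer
```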
diff --git a/spaces/csuer/vits/utils.py b/spaces/csuer/vits/utils.py
deleted file mode 100644
index 63af17c87f522ca58e9301c882cdc643f212d78b..0000000000000000000000000000000000000000
--- a/spaces/csuer/vits/utils.py
+++ /dev/null
@@ -1,266 +0,0 @@
-import os
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None:
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- if k == 'emb_g.weight':
- v[:saved_state_dict[k].shape[0], :] = saved_state_dict[k]
- # v[999, :] = saved_state_dict[k][154, :]
- new_state_dict[k] = v
- else:
- new_state_dict[k] = saved_state_dict[k]
- except:
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logger.info("Loaded checkpoint '{}' (iteration {})".format(
- checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info("Saving model and optimizer state at iteration {} to {}".format(
- iteration, checkpoint_path))
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save({'model': state_dict,
- 'iteration': iteration,
- 'optimizer': optimizer.state_dict() if optimizer is not None else None,
- 'learning_rate': learning_rate}, checkpoint_path)
-
-
-def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats='HWC')
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/modified_finetune_speaker.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, default="pretrained_models",
- help='Model name')
- parser.add_argument('-n', '--max_epochs', type=int, default=50,
- help='finetune epochs')
-
- args = parser.parse_args()
- model_dir = os.path.join("./", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- hparams.max_epochs = args.max_epochs
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r", encoding="utf-8") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
- logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
- logger.warn("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
\ No newline at end of file
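The HParams class deleted above wraps a nested config dict so values can be read either as attributes or with dict-style access, which is how the VITS configs are consumed elsewhere in the repo. A tiny sketch with an invented config:

```python
# Sketch only: the config values below are made up.
from utils import HParams  # the module shown above, assuming it is on the import path

config = {"train": {"batch_size": 16, "learning_rate": 2e-4}, "model": {"hidden_channels": 192}}
hps = HParams(**config)

print(hps.train.batch_size)          # 16  (nested dicts become nested HParams)
print(hps["model"].hidden_channels)  # 192 (dict-style access works too)
print("train" in hps, len(hps))      # True 2
```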
diff --git a/spaces/cymic/Talking_Head_Anime_3/tha3/mocap/ifacialmocap_constants.py b/spaces/cymic/Talking_Head_Anime_3/tha3/mocap/ifacialmocap_constants.py
deleted file mode 100644
index 27031ac728a0a77d6e21ff50f6bfbcafc6a1131b..0000000000000000000000000000000000000000
--- a/spaces/cymic/Talking_Head_Anime_3/tha3/mocap/ifacialmocap_constants.py
+++ /dev/null
@@ -1,239 +0,0 @@
-EYE_LOOK_IN_LEFT = "eyeLookInLeft"
-EYE_LOOK_OUT_LEFT = "eyeLookOutLeft"
-EYE_LOOK_DOWN_LEFT = "eyeLookDownLeft"
-EYE_LOOK_UP_LEFT = "eyeLookUpLeft"
-EYE_BLINK_LEFT = "eyeBlinkLeft"
-EYE_SQUINT_LEFT = "eyeSquintLeft"
-EYE_WIDE_LEFT = "eyeWideLeft"
-EYE_LOOK_IN_RIGHT = "eyeLookInRight"
-EYE_LOOK_OUT_RIGHT = "eyeLookOutRight"
-EYE_LOOK_DOWN_RIGHT = "eyeLookDownRight"
-EYE_LOOK_UP_RIGHT = "eyeLookUpRight"
-EYE_BLINK_RIGHT = "eyeBlinkRight"
-EYE_SQUINT_RIGHT = "eyeSquintRight"
-EYE_WIDE_RIGHT = "eyeWideRight"
-BROW_DOWN_LEFT = "browDownLeft"
-BROW_OUTER_UP_LEFT = "browOuterUpLeft"
-BROW_DOWN_RIGHT = "browDownRight"
-BROW_OUTER_UP_RIGHT = "browOuterUpRight"
-BROW_INNER_UP = "browInnerUp"
-NOSE_SNEER_LEFT = "noseSneerLeft"
-NOSE_SNEER_RIGHT = "noseSneerRight"
-CHEEK_SQUINT_LEFT = "cheekSquintLeft"
-CHEEK_SQUINT_RIGHT = "cheekSquintRight"
-CHEEK_PUFF = "cheekPuff"
-MOUTH_LEFT = "mouthLeft"
-MOUTH_DIMPLE_LEFT = "mouthDimpleLeft"
-MOUTH_FROWN_LEFT = "mouthFrownLeft"
-MOUTH_LOWER_DOWN_LEFT = "mouthLowerDownLeft"
-MOUTH_PRESS_LEFT = "mouthPressLeft"
-MOUTH_SMILE_LEFT = "mouthSmileLeft"
-MOUTH_STRETCH_LEFT = "mouthStretchLeft"
-MOUTH_UPPER_UP_LEFT = "mouthUpperUpLeft"
-MOUTH_RIGHT = "mouthRight"
-MOUTH_DIMPLE_RIGHT = "mouthDimpleRight"
-MOUTH_FROWN_RIGHT = "mouthFrownRight"
-MOUTH_LOWER_DOWN_RIGHT = "mouthLowerDownRight"
-MOUTH_PRESS_RIGHT = "mouthPressRight"
-MOUTH_SMILE_RIGHT = "mouthSmileRight"
-MOUTH_STRETCH_RIGHT = "mouthStretchRight"
-MOUTH_UPPER_UP_RIGHT = "mouthUpperUpRight"
-MOUTH_CLOSE = "mouthClose"
-MOUTH_FUNNEL = "mouthFunnel"
-MOUTH_PUCKER = "mouthPucker"
-MOUTH_ROLL_LOWER = "mouthRollLower"
-MOUTH_ROLL_UPPER = "mouthRollUpper"
-MOUTH_SHRUG_LOWER = "mouthShrugLower"
-MOUTH_SHRUG_UPPER = "mouthShrugUpper"
-JAW_LEFT = "jawLeft"
-JAW_RIGHT = "jawRight"
-JAW_FORWARD = "jawForward"
-JAW_OPEN = "jawOpen"
-TONGUE_OUT = "tongueOut"
-
-BLENDSHAPE_NAMES = [
- EYE_LOOK_IN_LEFT, # 0
- EYE_LOOK_OUT_LEFT, # 1
- EYE_LOOK_DOWN_LEFT, # 2
- EYE_LOOK_UP_LEFT, # 3
- EYE_BLINK_LEFT, # 4
- EYE_SQUINT_LEFT, # 5
- EYE_WIDE_LEFT, # 6
- EYE_LOOK_IN_RIGHT, # 7
- EYE_LOOK_OUT_RIGHT, # 8
- EYE_LOOK_DOWN_RIGHT, # 9
- EYE_LOOK_UP_RIGHT, # 10
- EYE_BLINK_RIGHT, # 11
- EYE_SQUINT_RIGHT, # 12
- EYE_WIDE_RIGHT, # 13
- BROW_DOWN_LEFT, # 14
- BROW_OUTER_UP_LEFT, # 15
- BROW_DOWN_RIGHT, # 16
- BROW_OUTER_UP_RIGHT, # 17
- BROW_INNER_UP, # 18
- NOSE_SNEER_LEFT, # 19
- NOSE_SNEER_RIGHT, # 20
- CHEEK_SQUINT_LEFT, # 21
- CHEEK_SQUINT_RIGHT, # 22
- CHEEK_PUFF, # 23
- MOUTH_LEFT, # 24
- MOUTH_DIMPLE_LEFT, # 25
- MOUTH_FROWN_LEFT, # 26
- MOUTH_LOWER_DOWN_LEFT, # 27
- MOUTH_PRESS_LEFT, # 28
- MOUTH_SMILE_LEFT, # 29
- MOUTH_STRETCH_LEFT, # 30
- MOUTH_UPPER_UP_LEFT, # 31
- MOUTH_RIGHT, # 32
- MOUTH_DIMPLE_RIGHT, # 33
- MOUTH_FROWN_RIGHT, # 34
- MOUTH_LOWER_DOWN_RIGHT, # 35
- MOUTH_PRESS_RIGHT, # 36
- MOUTH_SMILE_RIGHT, # 37
- MOUTH_STRETCH_RIGHT, # 38
- MOUTH_UPPER_UP_RIGHT, # 39
- MOUTH_CLOSE, # 40
- MOUTH_FUNNEL, # 41
- MOUTH_PUCKER, # 42
- MOUTH_ROLL_LOWER, # 43
- MOUTH_ROLL_UPPER, # 44
- MOUTH_SHRUG_LOWER, # 45
- MOUTH_SHRUG_UPPER, # 46
- JAW_LEFT, # 47
- JAW_RIGHT, # 48
- JAW_FORWARD, # 49
- JAW_OPEN, # 50
- TONGUE_OUT, # 51
-]
-
-EYE_LEFT_BLENDSHAPES = [
- EYE_LOOK_IN_LEFT, # 0
- EYE_LOOK_OUT_LEFT, # 1
- EYE_LOOK_DOWN_LEFT, # 2
- EYE_LOOK_UP_LEFT, # 3
- EYE_BLINK_LEFT, # 4
- EYE_SQUINT_LEFT, # 5
- EYE_WIDE_LEFT, # 6
-]
-
-EYE_RIGHT_BLENDSHAPES = [
- EYE_LOOK_IN_RIGHT, # 7
- EYE_LOOK_OUT_RIGHT, # 8
- EYE_LOOK_DOWN_RIGHT, # 9
- EYE_LOOK_UP_RIGHT, # 10
- EYE_BLINK_RIGHT, # 11
- EYE_SQUINT_RIGHT, # 12
- EYE_WIDE_RIGHT, # 13
-]
-
-BROW_LEFT_BLENDSHAPES = [
- BROW_DOWN_LEFT, # 14
- BROW_OUTER_UP_LEFT, # 15
-
-]
-
-BROW_RIGHT_BLENDSHAPES = [
- BROW_DOWN_RIGHT, # 16
- BROW_OUTER_UP_RIGHT, # 17
-
-]
-
-BROW_BOTH_BLENDSHAPES = [
- BROW_INNER_UP, # 18
-]
-
-NOSE_BLENDSHAPES = [
- NOSE_SNEER_LEFT, # 19
- NOSE_SNEER_RIGHT, # 20
-]
-
-CHECK_BLENDSHAPES = [
- CHEEK_SQUINT_LEFT, # 21
- CHEEK_SQUINT_RIGHT, # 22
- CHEEK_PUFF, # 23
-]
-
-MOUTH_LEFT_BLENDSHAPES = [
- MOUTH_LEFT, # 24
- MOUTH_DIMPLE_LEFT, # 25
- MOUTH_FROWN_LEFT, # 26
- MOUTH_LOWER_DOWN_LEFT, # 27
- MOUTH_PRESS_LEFT, # 28
- MOUTH_SMILE_LEFT, # 29
- MOUTH_STRETCH_LEFT, # 30
- MOUTH_UPPER_UP_LEFT, # 31
-]
-
-MOUTH_RIGHT_BLENDSHAPES = [
- MOUTH_RIGHT, # 32
- MOUTH_DIMPLE_RIGHT, # 33
- MOUTH_FROWN_RIGHT, # 34
- MOUTH_LOWER_DOWN_RIGHT, # 35
- MOUTH_PRESS_RIGHT, # 36
- MOUTH_SMILE_RIGHT, # 37
- MOUTH_STRETCH_RIGHT, # 38
- MOUTH_UPPER_UP_RIGHT, # 39
-]
-
-MOUTH_BOTH_BLENDSHAPES = [
- MOUTH_CLOSE, # 40
- MOUTH_FUNNEL, # 41
- MOUTH_PUCKER, # 42
- MOUTH_ROLL_LOWER, # 43
- MOUTH_ROLL_UPPER, # 44
- MOUTH_SHRUG_LOWER, # 45
- MOUTH_SHRUG_UPPER, # 46
-]
-
-JAW_BLENDSHAPES = [
- JAW_LEFT, # 47
- JAW_RIGHT, # 48
- JAW_FORWARD, # 49
- JAW_OPEN, # 50
-]
-
-TONGUE_BLENDSHAPES = [
- TONGUE_OUT, # 51
-]
-
-COLUMN_0_BLENDSHAPES = EYE_RIGHT_BLENDSHAPES + BROW_RIGHT_BLENDSHAPES + [NOSE_SNEER_RIGHT, CHEEK_SQUINT_RIGHT]
-COLUMN_1_BLENDSHAPES = EYE_LEFT_BLENDSHAPES + BROW_LEFT_BLENDSHAPES + [NOSE_SNEER_LEFT, CHEEK_SQUINT_LEFT]
-COLUMN_2_BLENDSHAPES = MOUTH_RIGHT_BLENDSHAPES + [JAW_RIGHT]
-COLUMN_3_BLENDSHAPES = MOUTH_LEFT_BLENDSHAPES + [JAW_LEFT]
-COLUMN_4_BLENDSHAPES = [BROW_INNER_UP, CHEEK_PUFF] + MOUTH_BOTH_BLENDSHAPES + [JAW_FORWARD, JAW_OPEN, TONGUE_OUT]
-
-BLENDSHAPE_COLUMNS = [
- COLUMN_0_BLENDSHAPES,
- COLUMN_1_BLENDSHAPES,
- COLUMN_2_BLENDSHAPES,
- COLUMN_3_BLENDSHAPES,
- COLUMN_4_BLENDSHAPES,
-]
-
-RIGHT_EYE_BONE_X = "rightEyeBoneX"
-RIGHT_EYE_BONE_Y = "rightEyeBoneY"
-RIGHT_EYE_BONE_Z = "rightEyeBoneZ"
-RIGHT_EYE_BONE_ROTATIONS = [RIGHT_EYE_BONE_X, RIGHT_EYE_BONE_Y, RIGHT_EYE_BONE_Z]
-
-LEFT_EYE_BONE_X = "leftEyeBoneX"
-LEFT_EYE_BONE_Y = "leftEyeBoneY"
-LEFT_EYE_BONE_Z = "leftEyeBoneZ"
-LEFT_EYE_BONE_ROTATIONS = [LEFT_EYE_BONE_X, LEFT_EYE_BONE_Y, LEFT_EYE_BONE_Z]
-
-HEAD_BONE_X = "headBoneX"
-HEAD_BONE_Y = "headBoneY"
-HEAD_BONE_Z = "headBoneZ"
-HEAD_BONE_ROTATIONS = [HEAD_BONE_X, HEAD_BONE_Y, HEAD_BONE_Z]
-
-ROTATION_NAMES = RIGHT_EYE_BONE_ROTATIONS + LEFT_EYE_BONE_ROTATIONS + HEAD_BONE_ROTATIONS
-
-RIGHT_EYE_BONE_QUAT = "rightEyeBoneQuat"
-LEFT_EYE_BONE_QUAT = "leftEyeBoneQuat"
-HEAD_BONE_QUAT = "headBoneQuat"
-QUATERNION_NAMES = [
- RIGHT_EYE_BONE_QUAT,
- LEFT_EYE_BONE_QUAT,
- HEAD_BONE_QUAT
-]
-
-IFACIALMOCAP_DATETIME_FORMAT = "%Y/%m/%d-%H:%M:%S.%f"
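The module deleted above defines the blendshape and bone-rotation name ordering for iFacialMocap capture data, so positions in BLENDSHAPE_NAMES double as indices. A quick check, assuming the tha3 package layout above:

```python
# Sketch only: simple index lookups into the ordering defined above.
from tha3.mocap.ifacialmocap_constants import BLENDSHAPE_NAMES, JAW_OPEN, ROTATION_NAMES

print(BLENDSHAPE_NAMES.index(JAW_OPEN))            # 50, matching the inline comment above
print(len(BLENDSHAPE_NAMES), len(ROTATION_NAMES))  # 52 blendshapes, 9 bone rotation channels
```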
diff --git a/spaces/darragh/bloom_demo_long/README.md b/spaces/darragh/bloom_demo_long/README.md
deleted file mode 100644
index 8310be8ce4c5594f0b3afeae98d865818c4ba87f..0000000000000000000000000000000000000000
--- a/spaces/darragh/bloom_demo_long/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Bloom Demo Long
-emoji: 🌸
-colorFrom: pink
-colorTo: grey
-sdk: gradio
-sdk_version: 3.0.25
-app_file: app.py
-pinned: false
-models:
-- bigscience/bloom
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/anyio/_backends/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/anyio/_backends/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/processing_utils.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/processing_utils.py
deleted file mode 100644
index d2fd6292cffdb060017d39d572654f6a58f9ee4f..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/processing_utils.py
+++ /dev/null
@@ -1,546 +0,0 @@
-from __future__ import annotations
-
-import base64
-import json
-import logging
-import os
-import shutil
-import subprocess
-import tempfile
-import warnings
-from io import BytesIO
-from pathlib import Path
-
-import numpy as np
-from gradio_client import utils as client_utils
-from PIL import Image, ImageOps, PngImagePlugin
-
-from gradio import wasm_utils
-
-if not wasm_utils.IS_WASM:
- # TODO: Support ffmpeg on Wasm
- from ffmpy import FFmpeg, FFprobe, FFRuntimeError
-
-with warnings.catch_warnings():
- warnings.simplefilter("ignore") # Ignore pydub warning if ffmpeg is not installed
- from pydub import AudioSegment
-
-log = logging.getLogger(__name__)
-
-#########################
-# GENERAL
-#########################
-
-
-def to_binary(x: str | dict) -> bytes:
- """Converts a base64 string or dictionary to a binary string that can be sent in a POST."""
- if isinstance(x, dict):
- if x.get("data"):
- base64str = x["data"]
- else:
- base64str = client_utils.encode_url_or_file_to_base64(x["name"])
- else:
- base64str = x
- return base64.b64decode(extract_base64_data(base64str))
-
-
-def extract_base64_data(x: str) -> str:
- """Just extracts the base64 data from a general base64 string."""
- return x.rsplit(",", 1)[-1]
-
-
-#########################
-# IMAGE PRE-PROCESSING
-#########################
-
-
-def decode_base64_to_image(encoding: str) -> Image.Image:
- image_encoded = extract_base64_data(encoding)
- img = Image.open(BytesIO(base64.b64decode(image_encoded)))
- try:
- if hasattr(ImageOps, "exif_transpose"):
- img = ImageOps.exif_transpose(img)
- except Exception:
- log.warning(
- "Failed to transpose image %s based on EXIF data.",
- img,
- exc_info=True,
- )
- return img
-
-
-def encode_plot_to_base64(plt):
- with BytesIO() as output_bytes:
- plt.savefig(output_bytes, format="png")
- bytes_data = output_bytes.getvalue()
- base64_str = str(base64.b64encode(bytes_data), "utf-8")
- return "data:image/png;base64," + base64_str
-
-
-def get_pil_metadata(pil_image):
- # Copy any text-only metadata
- metadata = PngImagePlugin.PngInfo()
- for key, value in pil_image.info.items():
- if isinstance(key, str) and isinstance(value, str):
- metadata.add_text(key, value)
-
- return metadata
-
-
-def encode_pil_to_bytes(pil_image, format="png"):
- with BytesIO() as output_bytes:
- pil_image.save(output_bytes, format, pnginfo=get_pil_metadata(pil_image))
- return output_bytes.getvalue()
-
-
-def encode_pil_to_base64(pil_image):
- bytes_data = encode_pil_to_bytes(pil_image)
- base64_str = str(base64.b64encode(bytes_data), "utf-8")
- return "data:image/png;base64," + base64_str
-
-
-def encode_array_to_base64(image_array):
- with BytesIO() as output_bytes:
- pil_image = Image.fromarray(_convert(image_array, np.uint8, force_copy=False))
- pil_image.save(output_bytes, "PNG")
- bytes_data = output_bytes.getvalue()
- base64_str = str(base64.b64encode(bytes_data), "utf-8")
- return "data:image/png;base64," + base64_str
-
-
-def resize_and_crop(img, size, crop_type="center"):
- """
- Resize and crop an image to fit the specified size.
- args:
- size: `(width, height)` tuple. Pass `None` for either width or height
- to only crop and resize the other.
-        crop_type: can be 'top' or 'center'; depending on this value, the
-            image will be cropped keeping the top/left or the center of the
-            image to fit the size.
- raises:
- ValueError: if an invalid `crop_type` is provided.
- """
- if crop_type == "top":
- center = (0, 0)
- elif crop_type == "center":
- center = (0.5, 0.5)
- else:
- raise ValueError
-
- resize = list(size)
- if size[0] is None:
- resize[0] = img.size[0]
- if size[1] is None:
- resize[1] = img.size[1]
- return ImageOps.fit(img, resize, centering=center) # type: ignore
-
-
-##################
-# Audio
-##################
-
-
-def audio_from_file(filename, crop_min=0, crop_max=100):
- try:
- audio = AudioSegment.from_file(filename)
- except FileNotFoundError as e:
- isfile = Path(filename).is_file()
- msg = (
- f"Cannot load audio from file: `{'ffprobe' if isfile else filename}` not found."
- + " Please install `ffmpeg` in your system to use non-WAV audio file formats"
- " and make sure `ffprobe` is in your PATH."
- if isfile
- else ""
- )
- raise RuntimeError(msg) from e
- if crop_min != 0 or crop_max != 100:
- audio_start = len(audio) * crop_min / 100
- audio_end = len(audio) * crop_max / 100
- audio = audio[audio_start:audio_end]
- data = np.array(audio.get_array_of_samples())
- if audio.channels > 1:
- data = data.reshape(-1, audio.channels)
- return audio.frame_rate, data
-
-
-def audio_to_file(sample_rate, data, filename, format="wav"):
- if format == "wav":
- data = convert_to_16_bit_wav(data)
- audio = AudioSegment(
- data.tobytes(),
- frame_rate=sample_rate,
- sample_width=data.dtype.itemsize,
- channels=(1 if len(data.shape) == 1 else data.shape[1]),
- )
- file = audio.export(filename, format=format)
- file.close() # type: ignore
-
-
-def convert_to_16_bit_wav(data):
- # Based on: https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.write.html
- warning = "Trying to convert audio automatically from {} to 16-bit int format."
- if data.dtype in [np.float64, np.float32, np.float16]:
- warnings.warn(warning.format(data.dtype))
- data = data / np.abs(data).max()
- data = data * 32767
- data = data.astype(np.int16)
- elif data.dtype == np.int32:
- warnings.warn(warning.format(data.dtype))
- data = data / 65538
- data = data.astype(np.int16)
- elif data.dtype == np.int16:
- pass
- elif data.dtype == np.uint16:
- warnings.warn(warning.format(data.dtype))
- data = data - 32768
- data = data.astype(np.int16)
- elif data.dtype == np.uint8:
- warnings.warn(warning.format(data.dtype))
- data = data * 257 - 32768
- data = data.astype(np.int16)
- else:
- raise ValueError(
- "Audio data cannot be converted automatically from "
- f"{data.dtype} to 16-bit int format."
- )
- return data
-
-
-##################
-# OUTPUT
-##################
-
-
-def _convert(image, dtype, force_copy=False, uniform=False):
- """
- Adapted from: https://github.com/scikit-image/scikit-image/blob/main/skimage/util/dtype.py#L510-L531
-
- Convert an image to the requested data-type.
- Warnings are issued in case of precision loss, or when negative values
- are clipped during conversion to unsigned integer types (sign loss).
- Floating point values are expected to be normalized and will be clipped
- to the range [0.0, 1.0] or [-1.0, 1.0] when converting to unsigned or
- signed integers respectively.
- Numbers are not shifted to the negative side when converting from
- unsigned to signed integer types. Negative values will be clipped when
- converting to unsigned integers.
- Parameters
- ----------
- image : ndarray
- Input image.
- dtype : dtype
- Target data-type.
- force_copy : bool, optional
- Force a copy of the data, irrespective of its current dtype.
- uniform : bool, optional
- Uniformly quantize the floating point range to the integer range.
- By default (uniform=False) floating point values are scaled and
- rounded to the nearest integers, which minimizes back and forth
- conversion errors.
- .. versionchanged :: 0.15
- ``_convert`` no longer warns about possible precision or sign
- information loss. See discussions on these warnings at:
- https://github.com/scikit-image/scikit-image/issues/2602
- https://github.com/scikit-image/scikit-image/issues/543#issuecomment-208202228
- https://github.com/scikit-image/scikit-image/pull/3575
- References
- ----------
- .. [1] DirectX data conversion rules.
- https://msdn.microsoft.com/en-us/library/windows/desktop/dd607323%28v=vs.85%29.aspx
- .. [2] Data Conversions. In "OpenGL ES 2.0 Specification v2.0.25",
- pp 7-8. Khronos Group, 2010.
- .. [3] Proper treatment of pixels as integers. A.W. Paeth.
- In "Graphics Gems I", pp 249-256. Morgan Kaufmann, 1990.
- .. [4] Dirty Pixels. J. Blinn. In "Jim Blinn's corner: Dirty Pixels",
- pp 47-57. Morgan Kaufmann, 1998.
- """
- dtype_range = {
- bool: (False, True),
- np.bool_: (False, True),
- np.bool8: (False, True), # type: ignore
- float: (-1, 1),
- np.float_: (-1, 1),
- np.float16: (-1, 1),
- np.float32: (-1, 1),
- np.float64: (-1, 1),
- }
-
- def _dtype_itemsize(itemsize, *dtypes):
- """Return first of `dtypes` with itemsize greater than `itemsize`
- Parameters
- ----------
- itemsize: int
- The data type object element size.
- Other Parameters
- ----------------
- *dtypes:
- Any Object accepted by `np.dtype` to be converted to a data
- type object
- Returns
- -------
- dtype: data type object
- First of `dtypes` with itemsize greater than `itemsize`.
- """
- return next(dt for dt in dtypes if np.dtype(dt).itemsize >= itemsize)
-
- def _dtype_bits(kind, bits, itemsize=1):
- """Return dtype of `kind` that can store a `bits` wide unsigned int
- Parameters:
- kind: str
- Data type kind.
- bits: int
- Desired number of bits.
- itemsize: int
- The data type object element size.
- Returns
- -------
- dtype: data type object
- Data type of `kind` that can store a `bits` wide unsigned int
- """
-
- s = next(
- i
- for i in (itemsize,) + (2, 4, 8)
- if bits < (i * 8) or (bits == (i * 8) and kind == "u")
- )
-
- return np.dtype(kind + str(s))
-
- def _scale(a, n, m, copy=True):
- """Scale an array of unsigned/positive integers from `n` to `m` bits.
- Numbers can be represented exactly only if `m` is a multiple of `n`.
- Parameters
- ----------
- a : ndarray
- Input image array.
- n : int
- Number of bits currently used to encode the values in `a`.
- m : int
- Desired number of bits to encode the values in `out`.
- copy : bool, optional
- If True, allocates and returns new array. Otherwise, modifies
- `a` in place.
- Returns
- -------
- out : array
- Output image array. Has the same kind as `a`.
- """
- kind = a.dtype.kind
- if n > m and a.max() < 2**m:
- return a.astype(_dtype_bits(kind, m))
- elif n == m:
- return a.copy() if copy else a
- elif n > m:
- # downscale with precision loss
- if copy:
- b = np.empty(a.shape, _dtype_bits(kind, m))
- np.floor_divide(a, 2 ** (n - m), out=b, dtype=a.dtype, casting="unsafe")
- return b
- else:
- a //= 2 ** (n - m)
- return a
- elif m % n == 0:
- # exact upscale to a multiple of `n` bits
- if copy:
- b = np.empty(a.shape, _dtype_bits(kind, m))
- np.multiply(a, (2**m - 1) // (2**n - 1), out=b, dtype=b.dtype)
- return b
- else:
- a = a.astype(_dtype_bits(kind, m, a.dtype.itemsize), copy=False)
- a *= (2**m - 1) // (2**n - 1)
- return a
- else:
- # upscale to a multiple of `n` bits,
- # then downscale with precision loss
- o = (m // n + 1) * n
- if copy:
- b = np.empty(a.shape, _dtype_bits(kind, o))
- np.multiply(a, (2**o - 1) // (2**n - 1), out=b, dtype=b.dtype)
- b //= 2 ** (o - m)
- return b
- else:
- a = a.astype(_dtype_bits(kind, o, a.dtype.itemsize), copy=False)
- a *= (2**o - 1) // (2**n - 1)
- a //= 2 ** (o - m)
- return a
-
- image = np.asarray(image)
- dtypeobj_in = image.dtype
- dtypeobj_out = np.dtype("float64") if dtype is np.floating else np.dtype(dtype)
- dtype_in = dtypeobj_in.type
- dtype_out = dtypeobj_out.type
- kind_in = dtypeobj_in.kind
- kind_out = dtypeobj_out.kind
- itemsize_in = dtypeobj_in.itemsize
- itemsize_out = dtypeobj_out.itemsize
-
- # Below, we do an `issubdtype` check. Its purpose is to find out
- # whether we can get away without doing any image conversion. This happens
- # when:
- #
- # - the output and input dtypes are the same or
- # - when the output is specified as a type, and the input dtype
- # is a subclass of that type (e.g. `np.floating` will allow
- # `float32` and `float64` arrays through)
-
- if np.issubdtype(dtype_in, np.obj2sctype(dtype)):
- if force_copy:
- image = image.copy()
- return image
-
- if kind_in in "ui":
- imin_in = np.iinfo(dtype_in).min
- imax_in = np.iinfo(dtype_in).max
- if kind_out in "ui":
- imin_out = np.iinfo(dtype_out).min # type: ignore
- imax_out = np.iinfo(dtype_out).max # type: ignore
-
- # any -> binary
- if kind_out == "b":
- return image > dtype_in(dtype_range[dtype_in][1] / 2)
-
- # binary -> any
- if kind_in == "b":
- result = image.astype(dtype_out)
- if kind_out != "f":
- result *= dtype_out(dtype_range[dtype_out][1])
- return result
-
- # float -> any
- if kind_in == "f":
- if kind_out == "f":
- # float -> float
- return image.astype(dtype_out)
-
- if np.min(image) < -1.0 or np.max(image) > 1.0:
- raise ValueError("Images of type float must be between -1 and 1.")
- # floating point -> integer
- # use float type that can represent output integer type
- computation_type = _dtype_itemsize(
- itemsize_out, dtype_in, np.float32, np.float64
- )
-
- if not uniform:
- if kind_out == "u":
- image_out = np.multiply(image, imax_out, dtype=computation_type) # type: ignore
- else:
- image_out = np.multiply(
- image, (imax_out - imin_out) / 2, dtype=computation_type # type: ignore
- )
- image_out -= 1.0 / 2.0
- np.rint(image_out, out=image_out)
- np.clip(image_out, imin_out, imax_out, out=image_out) # type: ignore
- elif kind_out == "u":
- image_out = np.multiply(image, imax_out + 1, dtype=computation_type) # type: ignore
- np.clip(image_out, 0, imax_out, out=image_out) # type: ignore
- else:
- image_out = np.multiply(
- image, (imax_out - imin_out + 1.0) / 2.0, dtype=computation_type # type: ignore
- )
- np.floor(image_out, out=image_out)
- np.clip(image_out, imin_out, imax_out, out=image_out) # type: ignore
- return image_out.astype(dtype_out)
-
- # signed/unsigned int -> float
- if kind_out == "f":
- # use float type that can exactly represent input integers
- computation_type = _dtype_itemsize(
- itemsize_in, dtype_out, np.float32, np.float64
- )
-
- if kind_in == "u":
- # using np.divide or np.multiply doesn't copy the data
- # until the computation time
- image = np.multiply(image, 1.0 / imax_in, dtype=computation_type) # type: ignore
- # DirectX uses this conversion also for signed ints
- # if imin_in:
- # np.maximum(image, -1.0, out=image)
- else:
- image = np.add(image, 0.5, dtype=computation_type)
- image *= 2 / (imax_in - imin_in) # type: ignore
-
- return np.asarray(image, dtype_out)
-
- # unsigned int -> signed/unsigned int
- if kind_in == "u":
- if kind_out == "i":
- # unsigned int -> signed int
- image = _scale(image, 8 * itemsize_in, 8 * itemsize_out - 1)
- return image.view(dtype_out)
- else:
- # unsigned int -> unsigned int
- return _scale(image, 8 * itemsize_in, 8 * itemsize_out)
-
- # signed int -> unsigned int
- if kind_out == "u":
- image = _scale(image, 8 * itemsize_in - 1, 8 * itemsize_out)
- result = np.empty(image.shape, dtype_out)
- np.maximum(image, 0, out=result, dtype=image.dtype, casting="unsafe")
- return result
-
- # signed int -> signed int
- if itemsize_in > itemsize_out:
- return _scale(image, 8 * itemsize_in - 1, 8 * itemsize_out - 1)
-
- image = image.astype(_dtype_bits("i", itemsize_out * 8))
- image -= imin_in # type: ignore
- image = _scale(image, 8 * itemsize_in, 8 * itemsize_out, copy=False)
- image += imin_out # type: ignore
- return image.astype(dtype_out)
-
-
-def ffmpeg_installed() -> bool:
- if wasm_utils.IS_WASM:
- # TODO: Support ffmpeg in WASM
- return False
-
- return shutil.which("ffmpeg") is not None
-
-
-def video_is_playable(video_filepath: str) -> bool:
- """Determines if a video is playable in the browser.
-
- A video is playable if it has a playable container and codec.
- .mp4 -> h264
- .webm -> vp9
- .ogg -> theora
- """
- try:
- container = Path(video_filepath).suffix.lower()
- probe = FFprobe(
- global_options="-show_format -show_streams -select_streams v -print_format json",
- inputs={video_filepath: None},
- )
- output = probe.run(stderr=subprocess.PIPE, stdout=subprocess.PIPE)
- output = json.loads(output[0])
- video_codec = output["streams"][0]["codec_name"]
- return (container, video_codec) in [
- (".mp4", "h264"),
- (".ogg", "theora"),
- (".webm", "vp9"),
- ]
- # If anything goes wrong, assume the video can be played to not convert downstream
- except (FFRuntimeError, IndexError, KeyError):
- return True
-
-
-def convert_video_to_playable_mp4(video_path: str) -> str:
- """Convert the video to mp4. If something goes wrong return the original video."""
- try:
- with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
- output_path = Path(video_path).with_suffix(".mp4")
- shutil.copy2(video_path, tmp_file.name)
- # ffmpeg will automatically use h264 codec (playable in browser) when converting to mp4
- ff = FFmpeg(
- inputs={str(tmp_file.name): None},
- outputs={str(output_path): None},
- global_options="-y -loglevel quiet",
- )
- ff.run()
- except FFRuntimeError as e:
- print(f"Error converting video to browser-playable format {str(e)}")
- output_path = video_path
- finally:
- # Remove temp file
- os.remove(tmp_file.name) # type: ignore
- return str(output_path)
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-edf307d2.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-edf307d2.css
deleted file mode 100644
index 690ed736f2c29c32ba8499343659e9fde81f2098..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-edf307d2.css
+++ /dev/null
@@ -1 +0,0 @@
-div.svelte-1yrv54 .math.inline{fill:var(--body-text-color);display:inline-block;vertical-align:middle;padding:var(--size-1-5) -var(--size-1);color:var(--body-text-color)}div.svelte-1yrv54 .math.inline svg{display:inline;margin-bottom:.22em}div.svelte-1yrv54{max-width:100%}.min.svelte-1yrv54{min-height:var(--size-24)}.hide.svelte-1yrv54{display:none}div.svelte-1ed2p3z{transition:.15s}.pending.svelte-1ed2p3z{opacity:.2}
diff --git a/spaces/descript/vampnet/README.md b/spaces/descript/vampnet/README.md
deleted file mode 100644
index 9f63c43e04e5c6c4bf9d1ec12276636ee77a075d..0000000000000000000000000000000000000000
--- a/spaces/descript/vampnet/README.md
+++ /dev/null
@@ -1,106 +0,0 @@
----
-title: "VampNet: Music Generation with Masked Transformers"
-emoji: 🤖
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
-python_version: 3.9
----
-
-# VampNet
-
-This repository contains recipes for training generative music models on top of the Descript Audio Codec.
-
-## try `unloop`
-you can try vampnet in a co-creative looper called unloop. see this link: https://github.com/hugofloresgarcia/unloop
-
-# Setting up
-
-**Requires Python 3.9**.
-
-you'll need a Python 3.9 environment to run VampNet. This is due to a [known issue with madmom](https://github.com/hugofloresgarcia/vampnet/issues/15).
-
-(for example, using conda)
-```bash
-conda create -n vampnet python=3.9
-conda activate vampnet
-```
-
-
-install VampNet
-
-```bash
-git clone https://github.com/hugofloresgarcia/vampnet.git
-pip install -e ./vampnet
-```
-
-## A note on argbind
-This repository relies on [argbind](https://github.com/pseeth/argbind) to manage CLIs and config files.
-Config files are stored in the `conf/` folder.
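-
As a rough, illustrative sketch of the pattern argbind enables (the function name and arguments below are invented for illustration, not taken from this repository), a bound function exposes its keyword arguments both as CLI flags and as keys in a YAML config passed via `--args.load`:

```python
import argbind

@argbind.bind()
def train(lr: float = 1e-4, batch_size: int = 8):
    # Defaults can be overridden from the CLI (e.g. --train.lr 3e-4)
    # or from a YAML file loaded with --args.load conf/something.yml.
    print(f"training with lr={lr}, batch_size={batch_size}")

if __name__ == "__main__":
    args = argbind.parse_args()
    with argbind.scope(args):
        train()
```

This mirrors how the training and app commands in this README read their settings from the YAML files in `conf/`.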
-
-## Getting the Pretrained Models
-
-### Licensing for Pretrained Models:
-The weights for the models are licensed [`CC BY-NC-SA 4.0`](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.ml). Likewise, any VampNet models fine-tuned on the pretrained models are also licensed [`CC BY-NC-SA 4.0`](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.ml).
-
-Download the pretrained models from [this link](https://zenodo.org/record/8136629). Then, extract the models to the `models/` folder.
-
-
-# Usage
-
-## Launching the Gradio Interface
-You can launch a gradio UI to play with vampnet.
-
-```bash
-python app.py --args.load conf/interface.yml --Interface.device cuda
-```
-
-# Training / Fine-tuning
-
-## Training a model
-
-To train a model, run the following script:
-
-```bash
-python scripts/exp/train.py --args.load conf/vampnet.yml --save_path /path/to/checkpoints
-```
-
-You can edit `conf/vampnet.yml` to change the dataset paths or any training hyperparameters.
-
-For coarse2fine models, you can use `conf/c2f.yml` as a starting configuration.
-
-See `python scripts/exp/train.py -h` for a list of options.
-
-## Fine-tuning
-To fine-tune a model, use the script in `scripts/exp/fine_tune.py` to generate 3 configuration files: `c2f.yml`, `coarse.yml`, and `interface.yml`.
-The first two are used to fine-tune the coarse and fine models, respectively. The last one is used to launch the gradio interface.
-
-```bash
-python scripts/exp/fine_tune.py "/path/to/audio1.mp3 /path/to/audio2/ /path/to/audio3.wav"
-```
-
-This will create a folder under `conf//` with the 3 configuration files.
-
-The save_paths will be set to `runs//coarse` and `runs//c2f`.
-
-launch the coarse job:
-```bash
-python scripts/exp/train.py --args.load conf//coarse.yml
-```
-
-this will save the coarse model to `runs//coarse/ckpt/best/`.
-
-launch the c2f job:
-```bash
-python scripts/exp/train.py --args.load conf//c2f.yml
-```
-
-launch the interface:
-```bash
-python app.py --args.load conf/generated//interface.yml
-```
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Full Crack Asta Powerproject WORK.md b/spaces/diacanFperku/AutoGPT/Full Crack Asta Powerproject WORK.md
deleted file mode 100644
index 4a2e076f075a09807a62c1d0290293344baa07ed..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Full Crack Asta Powerproject WORK.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-",
-
- "eklentik": "eklentik"
-
- },
-
- "create" : {
-
- "title" : "Create"
-
- "name": {
-
- "title" : "Name"
-
- "determined_intro_title": {
-
- "title" : "Expected / Cață / expected changes to the settings (many packages required by the children):"
-
- "determined_intro_description": {
-
- "title" : "Before continuing, we must make sure the children can proceed with this selection. We do this by ensuring that the required packages belong to the children:
-
-- identity package - applies each child's identity to every package, using a guide. This option simply lets the children proceed with the greatest ease.
-
-- connection package - this package develops and predefines the configuration of the children's connection module through connection filters. This option lets the children enter other parts of the site as much as they want.
-
-- navigation package - this helper package is configured primarily for 4fefd39f24
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/IObit Malware Fighter Pro 7.5 Crack.md b/spaces/diacanFperku/AutoGPT/IObit Malware Fighter Pro 7.5 Crack.md
deleted file mode 100644
index 8f35ce360f4960b294f6d53cc73488a9965e5266..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/IObit Malware Fighter Pro 7.5 Crack.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
iobit malware fighter crack is an essential security program. it will offer protection against all type of malicious threats. it is very useful in protecting the information from hacking or other malicious threats. it is a great software for security device that gives full protection from all sorts of cyber threats. the user interface is user-friendly and easy to use. it will scan your device and analyze its threats with advance anti-malware engine. it gives ultimate security to your device and information stored on it.
-
the complete edition of iobit malware fighter crack includes the first five features for no cos t. since it was designed for iobit, this security software provides minimal defense. the second bitdef generator will not be able to be used. this makes your computer vulnerable to online attacks. the free version does not provide the same level of protection as the paid ones. bitdefenders engine is less secure because iobit malware fighter pro crack has a lower virus definition. unwanted pop-ups and virus-ridden websites can blocks. you will, however, be able to use dns and web-enabled windows protection. this may not prevent malware from reaching your data or stop unauthorized computer users from accessing it. you will need to purchase the extended edition if you desire these features.
another more useful security software is the bitdefender antivirus engine. iobit anti-malware engine and anti-ransomware engine. this malware fighter iobit tool is a powerful detection system for spyware and malware that is more complex very quickly. high utilization of digital electronic systems can trigger slow laptops, because hackers may need other cryptocurrency mining codes. for guaranteed security, your security on the network, removal of ads security and ads on iobit malware fighter 8 key can protect your inox and firefox so that they stay away from slow laptops and reproduce digital coins for miners without notice.
-
-Ikea Library Chief Architect. , Jen--it becomes a lot more of a struggle. ... Duration: SMALL ENTRYWAY MAKEOVER with Ikea hack for TONS of ... 1fdad05405
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/One Two Three Full Movie In Hindi Dubbed Free Download Mp4.md b/spaces/falterWliame/Face_Mask_Detection/One Two Three Full Movie In Hindi Dubbed Free Download Mp4.md
deleted file mode 100644
index 27647f495d909f078661849dfe5d0741eb929b85..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/One Two Three Full Movie In Hindi Dubbed Free Download Mp4.md
+++ /dev/null
@@ -1,47 +0,0 @@
-
-
One Two Three Full Movie in Hindi Dubbed Free Download MP4
-
-
If you are looking for a hilarious comedy movie to watch online or download for free, then you should not miss One Two Three. This is a 2008 Bollywood film that features three men with the same name - Laxminarayan - who get involved in a series of misunderstandings and chaos. The movie stars Sunil Shetty, Paresh Rawal, Tusshar Kapoor, Esha Deol, Neetu Chandra, Sameera Reddy, Upen Patel and Tanisha in the lead roles.
-
One Two Three full movie in hindi dubbed free download mp4
One Two Three is a laugh riot that will keep you entertained from start to finish. The movie has a lot of funny scenes and dialogues that will make you laugh out loud. The movie also has some action and romance elements that add to the fun. The movie is directed by Ashwani Dhir and produced by Kumar Mangat Pathak.
-
-
How to Watch or Download One Two Three Full Movie in Hindi Dubbed Free Download MP4
-
-
One Two Three is a popular movie that has been dubbed in Hindi for the Indian audience. You can watch or download One Two Three full movie in Hindi dubbed free download MP4 format from various websites on the internet. However, not all websites are safe and legal to use. Some websites may contain viruses, malware, pop-ups, ads or other harmful content that may harm your device or compromise your privacy.
-
-
Therefore, you should be careful and choose only trusted and reliable websites to watch or download One Two Three full movie in Hindi dubbed free download MP4. Here are some of the best websites that you can use to enjoy this comedy movie:
-
-
-
Archive.org: This is a website that offers free access to millions of movies, books, music, software and more. You can watch or download One Two Three full movie in Hindi dubbed free download MP4 from this website without any hassle. The video quality is also good and you can choose from different resolutions.
-
Onlinemovieshindi.com: This is a website that specializes in streaming and downloading Bollywood movies online for free. You can watch or download One Two Three full movie in Hindi dubbed free download MP4 from this website easily. The website also provides subtitles, ratings, reviews and other information about the movie.
-
YouTube: This is the most popular video-sharing platform in the world. You can watch or download One Two Three full movie in Hindi dubbed free download MP4 from YouTube as well. However, you may need to use a third-party software or app to download the video file from YouTube.
-
Katmoviehd.la: This is a website that offers Hollywood dubbed movies, TV series, Korean drama series and more in Hindi and other languages. You can watch or download One Two Three full movie in Hindi dubbed free download MP4 from this website as well. The website also provides high-quality video files and fast downloading speed.
-
-
-
Conclusion
-
-
One Two Three is a must-watch comedy movie that will make you laugh your heart out. The movie has a great star cast, a hilarious plot and a lot of entertainment value. You can watch or download One Two Three full movie in Hindi dubbed free download MP4 from any of the websites mentioned above and enjoy this movie with your friends and family.
-
What is the Plot of One Two Three Full Movie in Hindi Dubbed Free Download MP4
-
-
One Two Three is a comedy movie that revolves around three men who share the same name - Laxminarayan. They are all hired by different people for different purposes, but end up in the same hotel in Pondicherry. There, they encounter a lot of confusion, chaos and comedy as they try to complete their tasks and escape from trouble.
-
-
-
The first Laxminarayan (Tusshar Kapoor) is a car salesman who is sent by his boss to deliver a vintage car to a millionaire named Jijaji (Mukesh Tiwari). However, he accidentally delivers the wrong car, which belongs to a gangster named Papa (Murali Sharma). The second Laxminarayan (Sunil Shetty) is a hitman who is hired by a don named Batla Bhai (Mukesh Tiwari) to kill a rival gangster named D'Mello (Manoj Pahwa). However, he mistakes Jijaji for D'Mello and tries to kill him. The third Laxminarayan (Paresh Rawal) is a lingerie salesman who is sent by his boss to buy some diamonds from a dealer named Chandu (Sharat Saxena). However, he gets mixed up with Papa's diamonds and ends up with a huge debt.
-
-
The three Laxminarayans have to deal with various problems and obstacles as they cross paths with each other and with other characters such as Jiya (Esha Deol), Chandni (Sameera Reddy), Madhu (Neetu Chandra), Albert (Vrajesh Hirjee), Laila (Tanisha) and others. The movie is full of hilarious situations, witty dialogues and comic timing that will make you laugh till your stomach hurts.
-
-
Why You Should Watch or Download One Two Three Full Movie in Hindi Dubbed Free Download MP4
-
-
One Two Three is a movie that you should not miss if you love comedy movies. The movie has everything that you need for a perfect entertainment - a great star cast, a funny plot, a catchy music and a lot of fun. The movie will keep you engaged and entertained throughout its duration. The movie also has some messages about friendship, loyalty and love that will touch your heart.
-
-
One Two Three is a movie that you can watch or download for free from any of the websites mentioned above. You can enjoy this movie with your friends and family and have a good time. You can also watch or download this movie in Hindi dubbed version if you prefer that language. The movie is available in MP4 format that you can play on any device.
-
-
Conclusion
-
-
One Two Three is one of the best comedy movies that you can watch or download online for free. The movie has a superb star cast, a hilarious plot and a lot of entertainment value. You can watch or download One Two Three full movie in Hindi dubbed free download MP4 from any of the websites mentioned above and enjoy this movie with your loved ones.
-
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Descarga Dream League Soccer 2023 APK y consigue monedas infinitas para tu equipo.md b/spaces/fatiXbelha/sd/Descarga Dream League Soccer 2023 APK y consigue monedas infinitas para tu equipo.md
deleted file mode 100644
index 6a38d534186b6d3e1ff160fc9d504b5ec64857d3..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Descarga Dream League Soccer 2023 APK y consigue monedas infinitas para tu equipo.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Dream League Soccer 2023 Apk Mod: How to Get Unlimited Coins and Diamonds
-
Introduction
-
If you are a fan of soccer games, you might have heard of Dream League Soccer 2023, the latest installment of the popular series by First Touch Games. This game lets you create your own dream team, customize your players and kits, compete in various leagues and tournaments, and enjoy realistic graphics and gameplay.
But what if you want to have more fun and freedom in the game? What if you want to unlock all the players, stadiums, and modes without spending real money? What if you want to have unlimited coins and diamonds to buy anything you want in the game?
-
Well, there is a way to do that. You can use Dream League Soccer 2023 Apk Mod, a modified version of the game that gives you access to unlimited resources and features. In this article, we will show you what Dream League Soccer 2023 Apk Mod is, how to download and install it, and how to get unlimited coins and diamonds in the game.
-
What is Dream League Soccer 2023?
-
Dream League Soccer 2023 is a soccer simulation game that lets you build your own team from scratch, recruit players from over 4,000 licensed FIFPro™ players, customize your kits and logos, and compete in various leagues and tournaments. You can also play online with other players from around the world, or offline with friends using local multiplayer mode.
-
The game features realistic graphics and animations, dynamic gameplay and AI, immersive sound effects and commentary, and a variety of modes and challenges. You can also upgrade your stadium, train your players, develop your tactics, and track your progress using the in-game stats.
-
descargar dream league soccer 2023 apk monedas infinitas
-dream league soccer 2023 apk mod monedas infinitas
-como hackear dream league soccer 2023 apk monedas infinitas
-dream league soccer 2023 apk monedas infinitas android
-dream league soccer 2023 apk monedas infinitas mega
-dream league soccer 2023 apk monedas infinitas mediafire
-dream league soccer 2023 apk monedas infinitas sin root
-dream league soccer 2023 apk monedas infinitas ultima version
-dream league soccer 2023 apk monedas infinitas y diamantes
-dream league soccer 2023 apk monedas infinitas gratis
-dream league soccer 2023 apk monedas infinitas ios
-dream league soccer 2023 apk monedas infinitas online
-dream league soccer 2023 apk monedas infinitas sin internet
-dream league soccer 2023 apk monedas infinitas todo desbloqueado
-dream league soccer 2023 apk monedas infinitas actualizado
-dream league soccer 2023 apk monedas infinitas facil y rapido
-dream league soccer 2023 apk monedas infinitas sin descargar nada
-dream league soccer 2023 apk monedas infinitas para pc
-dream league soccer 2023 apk monedas infinitas trucos y consejos
-dream league soccer 2023 apk monedas infinitas con licencias
-dream league soccer 2023 apk monedas infinitas con jugadores reales
-dream league soccer 2023 apk monedas infinitas con kits y logos
-dream league soccer 2023 apk monedas infinitas con plantillas actualizadas
-dream league soccer 2023 apk monedas infinitas con modo carrera
-dream league soccer 2023 apk monedas infinitas con graficos hd
-dream league soccer 2023 apk monedas infinitas con musica personalizada
-dream league soccer 2023 apk monedas infinitas con narracion en español
-dream league soccer 2023 apk monedas infinitas con estadios nuevos
-dream league soccer 2023 apk monedas infinitas con balones exclusivos
-dream league soccer 2023 apk monedas infinitas con camisetas originales
-dream league soccer 2023 apk monedas infinitas con equipos legendarios
-dream league soccer 2023 apk monedas infinitas con fichajes de lujo
-dream league soccer 2023 apk monedas infinitas con entrenamiento mejorado
-dream league soccer 2023 apk monedas infinitas con habilidades especiales
-dream league soccer 2023 apk monedas infinitas con eventos y torneos
-dream league soccer 2023 apk monedas infinitas con ranking mundial
-dream league soccer 2023 apk monedas infinitas con amigos y familiares
-dream league soccer 2023 apk monedas infinitas con chat y emojis
-dream league soccer 2023 apk monedas infinitas con soporte tecnico
-dream league socc
-
What are the features of Dream League Soccer 2023 Apk Mod?
-
Dream League Soccer 2023 Apk Mod is a modified version of the game that gives you some extra features and advantages that are not available in the original version. Some of these features are:
-
-
Unlimited coins and diamonds: You can use these currencies to buy anything you want in the game, such as players, kits, stadiums, managers, boosts, etc.
-
All players unlocked: You can recruit any player you want from the transfer market without any restrictions or costs.
-
All stadiums unlocked: You can play in any stadium you want without having to upgrade or unlock them.
-
All modes unlocked: You can access all the modes in the game without having to complete any requirements or achievements.
-
No ads: You can enjoy the game without any annoying ads or pop-ups.
-
-
How to download and install Dream League Soccer 2023 Apk Mod?
-
If you want to download and install Dream League Soccer 2023 Apk Mod on your Android device, you need to follow these steps:
-
Step 1: Enable unknown sources on your device
-
Since Dream League Soccer 2023 Apk Mod is not available on the Google Play Store, you need to enable unknown sources on your device to allow it to install apps from other sources. To do this, go to Settings > Security > Unknown sources and toggle it on.
-
Step 2: Download the Apk file from a trusted source
-
Next, you need to download the Apk file of Dream League Soccer 2023 Apk Mod from a trusted source. You can search for it on the internet or use the link below to download it directly.
Make sure you have enough storage space on your device before downloading the file. The file size is about 400 MB.
-
Step 3: Install the Apk file and launch the game
-
After downloading the Apk file, locate it on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete.
-
Once the installation is done, you can launch the game from your app drawer or home screen. You will see a cheat menu on the top right corner of the screen. You can use this menu to activate or deactivate the mod features as you wish.
-
How to get unlimited coins and diamonds in Dream League Soccer 2023 Apk Mod?
-
There are two methods to get unlimited coins and diamonds in Dream League Soccer 2023 Apk Mod. You can use either one of them or both depending on your preference.
-
Method 1: Use the in-game cheat menu
-
The easiest and most convenient method to get unlimited coins and diamonds in Dream League Soccer 2023 Apk Mod is to use the in-game cheat menu. This menu allows you to add any amount of coins and diamonds to your account with just a few taps.
-
To use this method, follow these steps:
-
-
Launch the game and tap on the cheat menu icon on the top right corner of the screen.
-
Select Coins/Diamonds from the list of options.
-
Enter the amount of coins and diamonds you want to add to your account. You can enter any number up to 999,999,999.
-
Tap on Confirm and wait for a few seconds.
-
Enjoy your unlimited coins and diamonds!
-
-
Method 2: Use a game hacker app
-
The second method to get unlimited coins and diamonds in Dream League Soccer 2023 Apk Mod is to use a game hacker app. A game hacker app is a tool that allows you to modify the values of various parameters in a game, such as coins, diamonds, health, score, etc.
-
To use this method, you need to download and install a game hacker app on your device. There are many game hacker apps available on the internet, but we recommend using Game Guardian, as it is one of the most reliable and easy-to-use ones.
Launch Game Guardian and grant it root or virtual space access if needed.
-
Launch Dream League Soccer 2023 Apk Mod and start playing a match.
-
Pause the game and tap on the Game Guardian icon that floats on your screen.
-
Select Dream League Soccer 2023 from the list of processes.
-
Tap on the search icon and select Dword as the value type.
-
Enter your current amount of coins or diamonds in the search box and tap on Search.
-
You will see some results on the screen. Tap on them and change their values to any number you want, up to 999,999,999.
-
Tap on Yes when prompted to confirm the changes.
-
Resume the game and enjoy your unlimited coins and diamonds!
-
-
Conclusion
-
Dream League Soccer 2023 is a fun and addictive soccer game that lets you create your own dream team and compete in various leagues and tournaments. However, if you want to have more fun and freedom in the game, you can use Dream League Soccer 2023 Apk Mod, a modified version of the game that gives you unlimited coins and diamonds, as well as other features and advantages.
-
In this article, we have shown you what Dream League Soccer 2023 Apk Mod is, how to download and install it, and how to get unlimited coins and diamonds in the game using two methods: using the in-game cheat menu or using a game hacker app. You can use either one of them or both depending on your preference.
-
We hope you have found this article helpful and informative. If you have any questions or feedback or suggestions, please feel free to leave them in the comments section below. We would love to hear from you.
-
FAQs
-
Here are some frequently asked questions about Dream League Soccer 2023 Apk Mod and their answers:
-
Q: Is Dream League Soccer 2023 Apk Mod safe to use?
-
A: Yes, Dream League Soccer 2023 Apk Mod is safe to use as long as you download it from a trusted source and scan it with an antivirus app before installing it. However, you should be aware that using a modded version of the game may violate the terms and conditions of the game developer and may result in your account being banned or suspended. Therefore, use it at your own risk and discretion.
-
Q: Do I need to root my device to use Dream League Soccer 2023 Apk Mod?
-
A: No, you do not need to root your device to use Dream League Soccer 2023 Apk Mod. However, if you want to use a game hacker app to get unlimited coins and diamonds, you may need to root your device or use a virtual space app to run the game hacker app.
-
Q: Can I play online with other players using Dream League Soccer 2023 Apk Mod?
-
A: Yes, you can play online with other players using Dream League Soccer 2023 Apk Mod. However, you should be careful not to abuse the mod features or cheat in the game, as this may ruin the fun and fairness of the game for other players and may get you reported or banned by the game developer.
-
Q: Can I update Dream League Soccer 2023 Apk Mod to the latest version?
-
A: Yes, you can update Dream League Soccer 2023 Apk Mod to the latest version as long as the mod developer releases a new version of the mod that is compatible with the latest version of the game. However, you should always backup your game data before updating the mod, as updating may erase your progress or cause some errors in the game.
-
Q: Can I use Dream League Soccer 2023 Apk Mod on iOS devices?
-
A: No, Dream League Soccer 2023 Apk Mod is only available for Android devices. If you want to use a modded version of the game on iOS devices, you may need to jailbreak your device or use a third-party app store that offers modded apps for iOS devices.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download AFK Heroes Idle RPG Legends APK for Android - Free and Fast.md b/spaces/fatiXbelha/sd/Download AFK Heroes Idle RPG Legends APK for Android - Free and Fast.md
deleted file mode 100644
index 4432e2fd909a08bf959ccacac60c8204832dbf42..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download AFK Heroes Idle RPG Legends APK for Android - Free and Fast.md
+++ /dev/null
@@ -1,137 +0,0 @@
-
-
How to Download AFK Heroes and Enjoy the Idle RPG Adventure
-
Are you looking for a fun and relaxing game that lets you summon mighty heroes, fight epic battles, and explore outer space? If so, you should try AFK Heroes, an idle RPG game that offers a lot of features and surprises. In this article, we will show you what AFK Heroes is, why you should play it, how to download it for free, and how to play it like a pro. Let's get started!
AFK Heroes is a simulation game where you must train yourself and get yourself to work to earn money. Also, don't forget about traveling to stay motivated. Get a bonus by completing the achievements. Choose your development strategy to become successful!
-
AFK Heroes is an idle RPG game that combines the elements of adventure, strategy, and fantasy. You will be a chosen one who has the mysterious power to summon heroes from six factions: East, Revenger, Mech, Death, Leader, and Destroyer. You will then form a squad of heroes and lead them into battles against the evil Dark Legion that threatens the fate of the earth.
-
The game has a simple and intuitive gameplay that allows you to enjoy it without spending too much time or effort. You can set up your heroes for auto-battle before you go offline, and they will keep fighting and collecting rewards for you. You can also upgrade your heroes' skills, artifacts, and equipment to make them stronger. You can also join an alliance with other players and cooperate with them in various game modes.
-
Why Should You Play AFK Heroes?
-
Summon Mighty Heroes
-
One of the main features of AFK Heroes is that you can unlock and summon more than 100 superheroes from six different factions. Each hero has a unique talent and combination of skills that can help you in different situations. You can also evolve your heroes from ordinary, rare, epic, legend to myth by using special materials.
-
You can study the strategy and form a balanced squad based on the heroes' skills and attributes. For example, some heroes are good at dealing damage, some are good at healing or buffing allies, some are good at controlling or debuffing enemies, etc. You can also switch heroes from different factions to challenge different difficulty levels.
-
Download afk heroes idle rpg legends apk
-How to play afk heroes on pc with gameloop
-Afk heroes idle clash war free android game
-Best tips and tricks for afk heroes game
-Afk heroes review: a fun and casual idle rpg
-Where to find afk heroes redeem codes and coupons
-Afk heroes mod apk unlimited gems and coins
-Afk heroes tier list: the strongest heroes in 2023
-Afk heroes wiki: everything you need to know
-Afk heroes hack: how to get free resources
-Download afk heroes for ios devices
-Afk heroes vs idle heroes: which one is better
-Afk heroes online: play with friends and strangers
-Afk heroes guide: how to level up fast and easy
-Afk heroes update: what's new in the latest version
-Download afk heroes for windows 10 pc
-Afk heroes cheats: how to unlock all heroes and items
-Afk heroes gameplay: how to master the combat system
-Afk heroes support: how to contact the developer
-Afk heroes forum: join the community and share your thoughts
-Download afk heroes for mac os x
-Afk heroes events: how to participate and win rewards
-Afk heroes characters: how to customize and upgrade your heroes
-Afk heroes strategy: how to build the best team and formation
-Afk heroes download size: how much space do you need
-Download afk heroes for linux ubuntu
-Afk heroes skins: how to change the appearance of your heroes
-Afk heroes factions: how to choose and benefit from them
-Afk heroes ratings: how popular and well-received is the game
-Afk heroes memes: enjoy some funny and relatable jokes
-Download afk heroes for chromebook
-Afk heroes discord: join the official server and chat with other players
-Afk heroes classes: how to pick the right role for your hero
-Afk heroes arena: how to compete and rank up in pvp mode
-Afk heroes videos: watch some gameplay and tutorials on youtube
-Download afk heroes for amazon fire tablet
-Afk heroes facebook: like the official page and get updates and news
-Afk heroes skills: how to use and improve your hero's abilities
-Afk heroes guild: how to join and cooperate with other players
-Afk heroes screenshots: see some images of the game's graphics and design
-
Auto-Battle in Idle RPG
-
Another feature of AFK Heroes is that you can enjoy the game without spending too much time or energy every day. You can set up your heroes for auto-battle before you go offline, and they will keep fighting for you. You can then easily collect idle rewards and resources when you come back online.
-
Adventure in Outer Space
-
Besides fighting the Dark Legion, you can also explore the vast and mysterious outer space in AFK Heroes. You can challenge different monsters and bosses in different planets and galaxies. You can also collect rare and valuable resources and treasures from the space exploration.
-
Each planet and galaxy has a different difficulty level and reward. You can choose the one that suits your squad's strength and strategy. You can also use the space map to navigate and plan your route. You can also encounter random events and surprises that can help or hinder your progress.
-
Share Levels in Clone Center
-
One of the unique features of AFK Heroes is that you can share your heroes' levels with other heroes in the same faction. This is done through the Clone Center, where you can clone your heroes' levels to other heroes. This way, you can boost your heroes' power without spending too much time or resources.
-
The Clone Center has a limited capacity, so you need to choose wisely which heroes to clone and which heroes to receive the clone. You can also upgrade the Clone Center to increase its capacity and efficiency. You can also use the Clone Center to exchange heroes with other players.
-
How to Download AFK Heroes for Free?
-
Download from CrazyGames
-
If you want to play AFK Heroes on your browser, you can download it from CrazyGames, a website that offers free online games. CrazyGames uses WebGL technology to run games smoothly and without any installation or registration.
-
To download AFK Heroes from CrazyGames, you just need to follow these simple steps:
If you want to play AFK Heroes on your Android device, you can download it from Google Play Store, a platform that offers free and paid apps and games. Google Play Store allows you to install games with one tap and update them automatically.
-
To download AFK Heroes from Google Play Store, you just need to follow these simple steps:
Tap on the game icon and then tap on the install button.
-
Wait for the game to download and install on your device.
-
Tap on the open button and enjoy the game.
-
-
How to Play AFK Heroes Like a Pro?
-
Choose Your Heroes Wisely
-
The first step to play AFK Heroes like a pro is to choose your heroes wisely. You need to form a balanced squad of five heroes that can complement each other's skills and attributes. You also need to consider the faction advantages and disadvantages when choosing your heroes.
-
The game has six factions: East, Revenger, Mech, Death, Leader, and Destroyer. Each faction has a different color and symbol. Each faction also has a strength and weakness against another faction. For example, East is strong against Revenger but weak against Mech. You can see the faction relationship chart below:
-
-
-
+25%
-25%
-
-25%
+25%
-
+25%
-25%
-
+25%
-25%
-
-25%
+25%
-
+25%
-25%
-
-
You can use this chart to plan your squad and choose the heroes that have an advantage over the enemy's faction. You can also switch your heroes from different factions to challenge different difficulty levels.
-
Upgrade Your Heroes Regularly
-
The second step to play AFK Heroes like a pro is to upgrade your heroes regularly. You need to enhance your heroes' abilities with skills, artifacts, and equipment. You also need to evolve your heroes from ordinary, rare, epic, legend to myth by using special materials.
-
You can upgrade your heroes' skills by using skill points that you can earn from battles and events. You can also unlock new skills when your heroes reach certain levels. Each hero has four skills: one active skill and three passive skills. The active skill can be used manually or automatically in battle, while the passive skills are always effective.
-
You can upgrade your heroes' artifacts by using artifact fragments that you can collect from space exploration and events. You can also fuse artifact fragments to get higher quality artifacts. Each hero can equip one artifact that can boost their stats and provide special effects.
-
You can upgrade your heroes' equipment by using gold and equipment materials that you can get from battles and events. You can also enhance equipment by using enhancement stones or other equipment. Each hero can equip four pieces of equipment: weapon, armor, helmet, and accessory. The equipment can improve your heroes' stats and give them extra bonuses.
-
Join an Alliance and Make Friends
-
The third step to play AFK Heroes like a pro is to join an alliance and make friends. You can cooperate with other players and get benefits from the alliance features. You can also chat with other players and share tips and strategies.
-
You can join an alliance by applying to an existing one or creating your own one. You can also invite your friends to join your alliance or accept invitations from others. Each alliance has a leader, a deputy leader, and members. The leader and the deputy leader can manage the alliance settings, members, and activities.
-
By joining an alliance, you can access the following features:
-
-
Alliance Shop: You can buy items with alliance coins that you can earn from alliance activities.
-
Alliance War: You can fight against other alliances in a weekly competition and win rewards based on your rank.
-
Alliance Boss: You can challenge a powerful boss with your alliance members and get rewards based on your damage.
-
Alliance Aid: You can request or donate resources to your alliance members and get rewards for helping each other.
-
Alliance Chat: You can chat with your alliance members and send them emojis and gifts.
-
-
Explore Different Game Modes and Events
-
The fourth step to play AFK Heroes like a pro is to explore different game modes and events. You can enjoy various gameplays and rewards in the arena, exploration, battle, and more. You can also participate in limited-time events and get exclusive rewards.
-
Here are some of the game modes and events you can explore:
-
-
Arena: You can compete with other players in real-time or asynchronous battles and win rewards based on your rank.
-
Exploration: You can explore different planets and galaxies and collect resources and treasures.
-
Battle: You can fight against the Dark Legion in different difficulty levels and progress through the story.
-
Clone Center: You can clone your heroes' levels to other heroes or exchange heroes with other players.
-
Daily Quests: You can complete daily tasks and get rewards such as gold, diamonds, skill points, etc.
-
Weekly Events: You can join weekly events such as Hero Trial, Space Race, Treasure Hunt, etc. and get special rewards such as hero fragments, artifact fragments, equipment materials, etc.
-
Monthly Events: You can join monthly events such as Hero Festival, Artifact Festival, Equipment Festival, etc. and get rare rewards such as myth heroes, legend artifacts, legend equipment, etc.
-
-
Conclusion and FAQs
-
In conclusion, AFK Heroes is an idle RPG game that lets you summon mighty heroes, fight epic battles, and explore outer space. You can download it for free from CrazyGames or Google Play Store and play it like a pro by following our tips and tricks. You can also join an alliance and make friends with other players and enjoy different game modes and events. If you are looking for a fun and relaxing game that offers a lot of features and surprises, you should try AFK Heroes today!
-
Here are some FAQs that you might have about the game:
-
-
Q: How can I get more diamonds in the game?
-
A: Diamonds are the premium currency in the game that can be used to buy items, summon heroes, refresh the shop, etc. You can get more diamonds by completing daily quests, participating in events, ranking up in the arena, watching ads, etc. You can also buy diamonds with real money if you want to support the game.
-
Q: How can I get more heroes in the game?
-
A: Heroes are the main characters in the game that can help you in battles and exploration. You can get more heroes by summoning them with hero fragments or diamonds, cloning them with clone points or exchange codes, evolving them with evolution materials, etc. You can also get free heroes from events, quests, rewards, etc.
-
Q: How can I get more skills, artifacts, and equipment in the game?
-
A: Skills, artifacts, and equipment are the items that can enhance your heroes' abilities and stats. You can get more skills by using skill points that you can earn from battles and events. You can get more artifacts by using artifact fragments that you can collect from space exploration and events. You can get more equipment by using gold and equipment materials that you can get from battles and events.
-
Q: How can I join or create an alliance in the game?
-
A: Alliance is a feature that allows you to cooperate with other players and get benefits from the alliance features. You can join or create an alliance by tapping on the alliance button on the main screen. You can then apply to an existing alliance or create your own one. You can also invite your friends to join your alliance or accept invitations from others.
-
Q: How can I contact the customer service or report a bug in the game?
-
A: Customer service is a feature that allows you to communicate with the game developers and get help or feedback. You can contact the customer service or report a bug by tapping on the settings button on the main screen. You can then tap on the customer service button or the report bug button. You can also email them at afkheroes@gmail.com or visit their Facebook page at https://www.facebook.com/afkheroes/.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Blue Light Filter - Night Mode Pro APK for Android - Protect Your Eyes and Sleep Better.md b/spaces/fatiXbelha/sd/Download Blue Light Filter - Night Mode Pro APK for Android - Protect Your Eyes and Sleep Better.md
deleted file mode 100644
index a7f129af3c888d995b81fd66a5ebdf728b39d16f..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Blue Light Filter - Night Mode Pro APK for Android - Protect Your Eyes and Sleep Better.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
Blue Light Filter Night Mode Pro APK Free Download
-
If you are looking for a way to protect your eyes from the harmful effects of blue light emitted by your digital devices, you might want to try blue light filter night mode pro apk. This is a free app that can reduce blue light by adjusting the screen to natural color, making it easier for you to read, work, or play at night. In this article, we will explain what blue light is, how it affects your eyes, and why you need a blue light filter app. We will also review the features and functions of blue light filter night mode pro apk, and show you how to download and install it on your Android device.
-
What is Blue Light and How Does It Affect Your Eyes?
-
Sources and Effects of Blue Light
-
Blue light is a part of the visible light spectrum that has the shortest wavelength and the highest energy. It is also known as high-energy visible (HEV) light. The main source of blue light is the sun, but artificial sources include fluorescent lights, LED lights, computer monitors, tablet screens, smartphones, and other digital devices. Blue light can have both positive and negative effects on your eyes and health. On one hand, blue light can help regulate your circadian rhythm, which is your body's natural clock that tells you when to sleep and wake up. It can also boost your alertness, mood, and cognitive performance. On the other hand, too much exposure to blue light can cause eye strain, dry eyes, blurred vision, headaches, insomnia, and fatigue. It can also damage your retinal cells over time, leading to vision problems like age-related macular degeneration (AMD) and cataracts.
-
Benefits of Blue Light Filters
-
Blue light filters are devices or software that block or reduce the amount of blue light that reaches your eyes. They can be glasses, lenses, screen protectors, apps, or settings that change the color temperature or brightness of your screen. By using a blue light filter, you can enjoy the following benefits:
-
-
Ease digital eye strain by improving contrast and reducing glare.
-
Increase the clarity of your vision by filtering out the harsh blue light.
-
Reduce computer headaches by preventing eye fatigue and irritation.
-
Eliminate migraines by lowering the risk of triggering photophobia (sensitivity to light).
-
Eliminate dry eyes by encouraging more blinking and lubrication.
-
Improve sleep quality by preventing blue light from disrupting your melatonin production and circadian rhythm.
-
Alleviate eye fatigue by relaxing your eye muscles and nerves.
-
Conserve macular health by protecting your retinal cells from oxidative stress and inflammation.
-
-
What is Blue Light Filter Night Mode Pro APK?
-
Features and Functions of the App
-
Blue light filter night mode pro apk is a free app that can help you reduce blue light by adjusting the screen to natural color. It has the following features and functions:
-
-
Adjustable filter intensity: You can choose from five levels of filter intensity to suit your preference and need.
-
Customizable color temperature: You can select from six preset color temperatures to change the hue of your screen, ranging from warm to cool.
-
Automatic timer: You can set a schedule for the app to turn on and off the filter automatically according to your preferred time.
-
Notification bar: You can access the app easily from the notification bar and adjust the settings quickly.
-
Simple and user-friendly interface: You can use the app with ease and convenience, as it has a simple and user-friendly design.
-
No ads or in-app purchases: You can enjoy the app without any interruptions or costs, as it is completely free and ad-free.
-
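To make the filter intensity and automatic timer described above a little more concrete, here is a minimal Python sketch of the kind of logic a screen filter could use. The function names, the amber tint, and the alpha scaling are illustrative assumptions, not the app's actual implementation.

```python
from datetime import time

def warm_overlay_color(intensity: float) -> tuple:
    """Return an RGBA overlay color for a filter intensity between 0.0 and 1.0.

    A blue light filter typically draws a semi-transparent warm color over the
    screen; higher intensity means a more opaque (warmer-looking) overlay.
    """
    intensity = max(0.0, min(1.0, intensity))
    alpha = int(120 * intensity)        # cap opacity so the screen stays readable
    return (255, 147, 41, alpha)        # warm amber in RGBA

def filter_active(now: time, start: time, end: time) -> bool:
    """Decide whether a scheduled filter should be on at `now`.

    Handles overnight schedules such as 21:00-07:00.
    """
    if start <= end:
        return start <= now < end
    return now >= start or now < end

print(warm_overlay_color(0.6))                                 # (255, 147, 41, 72)
print(filter_active(time(23, 30), time(21, 0), time(7, 0)))    # True
```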
-
Comparison with Other Blue Light Filter Apps
-
There are many other blue light filter apps available on the Google Play Store, but blue light filter night mode pro apk stands out for several reasons. Here are some of the advantages of blue light filter night mode pro apk over other similar apps:
-
-
It has more options for filter intensity and color temperature than most other apps, giving you more control and flexibility over your screen settings.
-
It has a smaller file size than most other apps, saving you more storage space and battery life.
-
It has a higher rating and more positive reviews than most other apps, indicating that it has a better performance and quality.
-
It has a more frequent update than most other apps, ensuring that it has the latest features and bug fixes.
-
-
How to Download and Install Blue Light Filter Night Mode Pro APK?
-
Downloading APK Files from Google Play Store
-
If you want to download blue light filter night mode pro apk, you will need to download the APK file from the Google Play Store. APK stands for Android Package Kit, which is a file format that contains all the elements of an app. To download APK files from the Google Play Store, you will need to follow these steps:
-
-
Go to the Google Play Store and search for blue light filter night mode pro apk.
-
Select the app from the search results and tap on it.
-
Tap on the three dots icon at the top right corner of the app page.
-
Select "Share" from the menu that appears.
-
Select "Copy to clipboard" from the options that appear.
-
Paste the copied link into a browser or a downloader app of your choice.
-
Download the APK file from the link that you pasted.
-
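Once you have a direct download link (as in the last step above), you can also fetch the file from a computer. The snippet below is a generic sketch, not part of the app: it assumes the third-party `requests` package is installed and that the URL points straight at the APK file.

```python
import requests

def download_apk(url: str, dest: str = "app.apk") -> str:
    """Stream a file from `url` to `dest` and return the local path."""
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=8192):
                fh.write(chunk)
    return dest

# Example with a placeholder URL:
# download_apk("https://example.com/blue-light-filter-night-mode-pro.apk")
```

Remember to scan whatever you download for malware before installing it.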
-
Installing APK Files on Your Android Device
-
Once you have downloaded the APK file of blue light filter night mode pro apk, you will need to install it on your Android device. To install APK files on your Android device, you will need to follow these steps:
-
-
-
Go to your device settings and enable "Unknown sources" under "Security" or "Applications". This will allow you to install apps from sources other than the Google Play Store.
-
Locate the downloaded APK file on your device using a file manager app or your device's default file explorer.
-
Tap on the APK file and follow the instructions on the screen to install it.
-
Launch the app from your app drawer or home screen and enjoy its features.
-
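If you prefer to sideload from a computer instead of tapping through the steps above, Android's standard `adb` tool can push the APK over USB. This is only a hedged sketch: it assumes `adb` is installed, USB debugging is enabled on the phone, and the APK path is wherever you saved the file earlier.

```python
import subprocess

def adb_install(apk_path: str) -> None:
    """Install an APK onto a USB-connected Android device via adb.

    The -r flag reinstalls (updates) the app if it is already present.
    """
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

# adb_install("blue-light-filter-night-mode-pro.apk")
```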
-
Conclusion
-
In conclusion, blue light filter night mode pro apk is a free app that can help you reduce blue light by adjusting the screen to natural color. It can protect your eyes from eye strain, headaches, insomnia, and macular degeneration. It also has many features and functions that make it superior to other blue light filter apps. If you want to download and install blue light filter night mode pro apk, you can follow the steps we have provided in this article. We hope you found this article helpful and informative. Thank you for reading!
-
FAQs
-
What is the difference between blue light filter night mode pro apk and blue light filter night mode apk?
-
The main difference between blue light filter night mode pro apk and blue light filter night mode apk is that the former is an upgraded version of the latter. Blue light filter night mode pro apk has more options for filter intensity and color temperature, as well as an automatic timer feature. It also has no ads or in-app purchases, unlike blue light filter night mode apk.
-
Is blue light filter night mode pro apk safe to use?
-
Yes, blue light filter night mode pro apk is safe to use, as long as you download it from a trusted source like the Google Play Store. It does not contain any malware or viruses that can harm your device or data. It also does not require any special permissions or access to your personal information, unlike some other apps that may ask for your contacts, location, camera, or microphone.
-
How can I adjust the filter intensity and color temperature of the app?
-
You can adjust the filter intensity and color temperature of the app by using the sliders on the main screen of the app. You can also tap on the preset buttons to choose from the predefined levels of filter intensity and color temperature. You can see the changes in real time on your screen as you adjust the settings.
-
Does blue light filter night mode pro apk work with other apps and games?
-
Yes, blue light filter night mode pro apk works with other apps and games on your device. It applies a universal filter over your screen that affects all the apps and games that you use. However, some apps and games may have their own settings for brightness, contrast, or color mode that may override or conflict with the app's filter. In that case, you may need to adjust the settings of those apps or games to make them compatible with the app's filter.
-
How can I contact the developer of the app for feedback or support?
-
If you have any questions, suggestions, or issues regarding the app, you can contact the developer of the app by sending an email to bluelightfilternightmodeproapk@gmail.com. You can also visit their website at https://bluelightfilternightmodeproapk.com/ for more information and updates about the app.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Green Glass A Game that Combines Action Romance and Mystery.md b/spaces/fatiXbelha/sd/Download Green Glass A Game that Combines Action Romance and Mystery.md
deleted file mode 100644
index bc394bb873ceaf814efeed112993a4c213dab27b..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Green Glass A Game that Combines Action Romance and Mystery.md
+++ /dev/null
@@ -1,82 +0,0 @@
-
-
Download Green Glass: A Guide to the Best Sources and Tips
-
Green glass is a term that can refer to two different things: a type of eco-friendly glass that is used for building projects, or a 3D adventure game that features a warrior and a woman on a long journey. In this article, we will explain what green glass is in both contexts, and how you can download green glass for your own purposes. Whether you are looking for a sustainable building material or an immersive gaming experience, green glass has something to offer you.
-
What is Green Glass?
-
Green glass can mean different things depending on the context. Here are the two main meanings of green glass:
Green Glass as an Eco-Friendly Building Material
-
Green glass is a type of glass made from recycled materials that has a low environmental impact. It is used for various building projects, such as windows, doors, facades, partitions, and more. Green glass can help reduce energy consumption, improve natural lighting, and enhance indoor air quality. Some examples of green glass suppliers are Saint-Gobain Glass India and GreenGlass.
-
Green Glass as a 3D Adventure Game
-
Green Glass is also the name of a 3D adventure game that was developed by NetEase Games. It is a game that follows the story of a warrior who has to escort a woman across a vast and beautiful world. The game features stunning graphics, realistic physics, dynamic combat, and emotional storytelling. The game is available for Android devices and can be downloaded from various sources, such as APKCombo and YouTube.
-
How to Download Green Glass for Different Purposes
-
If you are interested in downloading green glass for your own use, here are some tips on how to do it for different purposes:
-
Download Green Glass for Building Projects
-
If you want to use green glass for your building projects, here are some steps you should follow:
-
Find Reliable Suppliers of Green Glass
-
The first step is to find reliable suppliers of green glass that can provide you with high-quality products at reasonable prices. You can search online for green glass suppliers in your area or country, or you can use websites like OCLC that can help you compare different options and find the best deals.
-
Compare Prices and Quality of Green Glass
-
The next step is to compare the prices and quality of green glass from different suppliers. You should look at factors such as the size, shape, color, thickness, transparency, durability, and warranty of the green glass products. You should also check the customer reviews and ratings of the suppliers to see their reputation and service quality.
-
Check the Environmental Benefits of Green Glass
-
The final step is to check the environmental benefits of using green glass for your building projects. You should look at how much energy, water, and resources you can save by using green glass instead of other materials. You should also look at how much carbon footprint and waste you can reduce by using green glass. You can use tools like GreenGlass that can help you calculate the environmental impact of your choices.
-
-
Download Green Glass for Gaming Experience
-
If you want to play Green Glass on your Android device, here are some steps you should follow:
-
Find the Official APK File of Green Glass
-
The first step is to find the official APK file of Green Glass that is compatible with your device and version. You can download the APK file from the official website of NetEase Games, or from other sources like APKCombo or YouTube. However, be careful to avoid fake or malicious files that may harm your device or steal your data.
-
Install and Run Green Glass on Your Device
-
The next step is to install and run Green Glass on your device. You may need to enable the installation of apps from unknown sources in your settings before you can install the APK file. After you install the APK file, you can open the app and grant the necessary permissions for it to run properly. You may also need to download some additional data for the game to work.
-
Enjoy the Stunning Graphics and Storyline of Green Glass
-
The final step is to enjoy the stunning graphics and storyline of Green Glass. You can explore the vast and beautiful world of Green Glass, interact with various characters and objects, fight against enemies and bosses, and experience the emotional journey of the warrior and the woman. You can also adjust the settings of the game to suit your preferences, such as the language, sound, graphics, and controls.
-
Conclusion
-
Green glass is a term that can refer to a type of eco-friendly glass that is used for building projects, or a 3D adventure game that features a warrior and a woman on a long journey. In this article, we have explained what green glass is in both contexts, and how you can download green glass for your own purposes. Whether you are looking for a sustainable building material or an immersive gaming experience, green glass has something to offer you. We hope you found this article helpful and informative.
-
FAQs
-
Here are some frequently asked questions about green glass:
-
-
Question
Answer
-
What are the benefits of using green glass for building projects?
Green glass can help reduce energy consumption, improve natural lighting, and enhance indoor air quality. It can also save resources, reduce waste, and lower your carbon footprint.
-
What are the requirements for playing Green Glass on Android devices?
Green Glass requires Android 5.0 or higher, 2 GB of RAM or more, and 1.5 GB of free storage space. It also requires a stable internet connection and permission to access your device's storage, location, camera, microphone, and phone.
-
Is Green Glass available for other platforms?
No, Green Glass is currently only available for Android devices. There is no official information about whether it will be released for other platforms in the future.
-
How much does green glass cost?
The price of green glass depends on various factors, such as the supplier, the quality, the size, and the quantity. You can use websites like OCLC to compare different options and find the best deals. The game Green Glass is free to download and play, but it may contain some in-app purchases or ads.
-
Where can I find more information about green glass?
You can find more information about green glass by visiting the websites of green glass suppliers or game developers, or by searching online for articles, videos, reviews, or forums about green glass.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Drive Ahead Mod Apk 3.2 0 A Car Game Like No Other with Unlimited Money and God Mode.md b/spaces/fatiXbelha/sd/Drive Ahead Mod Apk 3.2 0 A Car Game Like No Other with Unlimited Money and God Mode.md
deleted file mode 100644
index c7772b82df67f7b02b2022f11568237839aaea2b..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Drive Ahead Mod Apk 3.2 0 A Car Game Like No Other with Unlimited Money and God Mode.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-
Drive Ahead Mod APK 3.2 0: A Car Fighting Game Like No Other
-
If you are looking for a fun and addictive game that will keep you entertained for hours, then you should try Drive Ahead Mod APK. This is not your typical racing game, but a car fighting game where you have to smash your opponent's head with your vehicle. Sounds brutal, right? But don't worry, it's all in good fun, as the game has a cartoonish style and a humorous tone.
Drive Ahead Mod APK is a modified version of the original Drive Ahead game, which is available on Google Play Store. The mod apk gives you access to unlimited coins and tickets, as well as all the vehicles and arenas in the game. You can also customize the gameplay and graphics settings to suit your preferences. Whether you want to play solo or with your friends, online or offline, Drive Ahead Mod APK has something for everyone.
-
In this article, we will tell you everything you need to know about Drive Ahead Mod APK, including its features, how to download and install it, its pros and cons, and some tips and tricks for playing it. So, without further ado, let's get started!
-
Features of Drive Ahead Mod APK
-
Drive Ahead Mod APK has many features that make it stand out from other car fighting games. Here are some of them:
-
Unlimited Coins and Tickets
-
Coins and tickets are the main currencies in Drive Ahead. You can use them to buy new vehicles, arenas, helmets, and other items in the game. However, earning them can be quite slow and tedious, especially if you want to unlock everything in the game. That's why Drive Ahead Mod APK gives you unlimited coins and tickets, so you can enjoy the game without any limitations.
-
-
All Vehicles and Arenas Unlocked
-
Drive Ahead has a huge variety of vehicles and arenas to choose from. You can drive anything from garbage trucks, tanks, and monster trucks to sports cars, motorcycles, and even UFOs. You can also fight in different arenas, such as deserts, jungles, volcanoes, stadiums, rooftops, and more. However, not all of them are available from the start. You have to unlock them by playing the game or spending coins and tickets. But with Drive Ahead Mod APK, you don't have to worry about that. You can access all the vehicles and arenas from the beginning.
-
Customizable Gameplay and Graphics
-
Drive Ahead Mod APK allows you to customize the gameplay and graphics settings according to your liking. You can adjust the difficulty level, the number of rounds, the time limit, the gravity, the damage mode, and more. You can also change the graphics quality, the sound effects, the music volume, and other options. This way, you can have the best gaming experience possible.
-
Online and Offline Modes
-
Drive Ahead Mod APK lets you play online or offline depending on your mood. If you want to play solo, you can choose from three modes: Missions, King of the Hill, or Rift Riders. In Missions mode, you have to complete various challenges using different vehicles and arenas. In King of the Hill mode, you have to survive as long as possible against waves of enemies. In Rift Riders mode, you have to collect rift bolts and avoid obstacles in a futuristic setting.
-
If you want to play with your friends, you can choose from two modes: Local Multiplayer or Online Multiplayer. In Local Multiplayer mode, you can play with up to four players on the same device using split-screen or Bluetooth. In Online Multiplayer mode, you can play with up to six players online using Wi-Fi or mobile data. You can also join or create clans, chat with other players, and compete in leaderboards and tournaments.
-
How to Download and Install Drive Ahead Mod APK
-
Downloading and installing Drive Ahead Mod APK is very easy and fast. Just follow these simple steps:
-
Requirements and Permissions
-
Before you download and install Drive Ahead Mod APK, make sure that your device meets the following requirements and permissions:
-
-
Your device must have Android 4.4 or higher.
-
Your device must have at least 100 MB of free storage space.
-
You must enable the installation of apps from unknown sources in your device settings.
-
You must allow Drive Ahead Mod APK to access your photos, media, files, Wi-Fi connection information, and device ID.
-
-
Steps to Download and Install
-
Once you have checked the requirements and permissions, you can proceed with the following steps:
-
-
Click on this link to download the Drive Ahead Mod APK file.
-
Wait for the download to finish and then open the file.
-
Tap on the install button and wait for the installation to complete.
-
Launch the game and enjoy!
-
-
How to Update the Mod APK
-
To update the Drive Ahead Mod APK, you have to follow the same steps as above. However, before you install the new version, you have to uninstall the old one first. Don't worry, you won't lose your progress or data, as they are stored in your device memory. Just make sure that you back up your game data before uninstalling the old version.
-
Pros and Cons of Drive Ahead Mod APK
-
Drive Ahead Mod APK has many advantages, but it also has some drawbacks. Here are some of them:
-
Pros
-
-
You can enjoy unlimited coins and tickets, which means you can buy anything you want in the game.
-
You can access all the vehicles and arenas in the game, which means you can have more fun and variety.
-
You can customize the gameplay and graphics settings, which means you can have a better gaming experience.
-
You can play online or offline, which means you can play anytime and anywhere.
-
-
Cons
-
-
You may encounter some bugs or glitches, which may affect your game performance or stability.
-
You may face some compatibility issues with some devices or Android versions, which may prevent you from playing the game properly.
-
You may get banned from online multiplayer mode if you are detected using a mod apk, which may ruin your reputation or progress.
-
You may miss out on some updates or features from the original game, which may make you feel left out or outdated.
-
-
Tips and Tricks for Playing Drive Ahead Mod APK
-
If you want to master Drive Ahead Mod APK, here are some tips and tricks that will help you:
-
Choose the Right Vehicle for the Arena
-
Drive Ahead has many different vehicles and arenas, each with their own strengths and weaknesses. You have to choose the right vehicle for the arena that you are playing in. For example, if you are playing in a desert arena, you may want to use a vehicle that has good traction and speed, such as a buggy or a motorcycle. If you are playing in a volcano arena, you may want to use a vehicle that has good armor and durability, such as a tank or a bulldozer. Experiment with different combinations and see what works best for you.
-
Use the Environment to Your Advantage
-
Drive Ahead is not just about smashing your opponent's head with your vehicle. You can also use the environment to your advantage. For example, you can use ramps, bridges, loops, swings, explosives, magnets, saws, and other objects to launch yourself or your opponent into the air or into danger. You can also use gravity, wind, water, fire, ice, and other elements to affect your movement or your opponent's movement. Be creative and use everything around you to win.
-
Collect Coins and Tickets to Unlock More Content
-
As we mentioned before, coins and tickets are the main currencies in Drive Ahead. You can use them to buy new vehicles, arenas, helmets, and other items in the game. Although Drive Ahead Mod APK gives you unlimited coins and tickets, you still have to collect them in the game to unlock more content. You can collect coins and tickets by playing the game modes, completing missions, watching ads, or opening chests. The more coins and tickets you have, the more content you can enjoy.
-
Challenge Your Friends and Other Players Online
-
Drive Ahead is more fun when you play with your friends and other players online. You can challenge them to a car fight and see who is the best driver. You can also join or create clans, chat with other players, and compete in leaderboards and tournaments. You can also share your replays and screenshots with your friends and other players on social media. However, be careful not to use the mod apk in online multiplayer mode, as you may get banned if you are detected.
-
Conclusion
-
Drive Ahead Mod APK is a great game for anyone who loves car fighting games. It has many features that make it unique and enjoyable, such as unlimited coins and tickets, all vehicles and arenas unlocked, customizable gameplay and graphics, online and offline modes, and more. You can download and install it easily and quickly by following our guide. You can also follow our tips and tricks to master the game and have more fun.
-
If you are looking for a car fighting game like no other, then you should try Drive Ahead Mod APK. It will give you hours of entertainment and excitement. Download it now and enjoy!
-
FAQs
-
Here are some frequently asked questions about Drive Ahead Mod APK:
-
Q: Is Drive Ahead Mod APK safe to use?
-
A: Yes, Drive Ahead Mod APK is safe to use, as long as you download it from a trusted source. However, you should always scan the file with an antivirus before installing it, just to be sure.
-
Q: Can I play Drive Ahead Mod APK on PC?
-
A: Yes, you can play Drive Ahead Mod APK on PC using an Android emulator. An Android emulator is a software that allows you to run Android apps on your PC. Some of the popular Android emulators are BlueStacks, NoxPlayer, LDPlayer, etc.
-
Q: How can I backup my game data in Drive Ahead Mod APK?
-
A: You can backup your game data in Drive Ahead Mod APK by using a file manager app. A file manager app is a tool that allows you to access and manage the files on your device. Some of the popular file manager apps are ES File Explorer, File Manager+, Solid Explorer, etc.
-
To backup your game data in Drive Ahead Mod APK, follow these steps:
-
-
Open the file manager app on your device.
-
Navigate to the folder where Drive Ahead Mod APK is installed. It is usually located in /sdcard/Android/data/com.dodreams.driveahead/.
-
Copy the folder named "files" and paste it somewhere else on your device or on an external storage device.
-
To restore your game data, just copy the folder back to its original location.
-
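If you are comfortable scripting (for example with Python in a terminal app such as Termux), the same copy can be automated. The paths below come from the steps above; treat this as an illustrative sketch rather than an official backup tool, since newer Android versions may restrict access to another app's data folder.

```python
import shutil
from pathlib import Path

GAME_DATA = Path("/sdcard/Android/data/com.dodreams.driveahead/files")
BACKUP_DIR = Path("/sdcard/driveahead_backup")   # any writable location you choose

def backup() -> None:
    """Copy the game's data folder to the backup location."""
    shutil.copytree(GAME_DATA, BACKUP_DIR / "files", dirs_exist_ok=True)

def restore() -> None:
    """Copy the backed-up data folder back to its original place."""
    shutil.copytree(BACKUP_DIR / "files", GAME_DATA, dirs_exist_ok=True)
```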
-
Q: What are some alternatives to Drive Ahead Mod APK?
-
A: If you are looking for some alternatives to Drive Ahead Mod APK, here are some suggestions:
-
-
Catapult King Mod APK: A game where you have to launch projectiles at castles and enemies using a catapult.
-
Bowmasters Mod APK: A game where you have to shoot arrows at your opponents using different characters and weapons.
-
Tank Stars Mod APK: A game where you have to blast your enemies with tanks using different missiles and bombs.
-
-
Q: How can I contact the developers of Drive Ahead Mod APK?
-
A: If you have any questions or feedback about Drive Ahead Mod APK, you can contact the developers by sending an email to support@dodreams.com or by visiting their website at https://dodreams.com/.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chatglm-fitness-RLHF/Dockerfile b/spaces/fb700/chatglm-fitness-RLHF/Dockerfile
deleted file mode 100644
index 5ddc6e3d8b246534a58f9612a88b309fa7e10795..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/Dockerfile
+++ /dev/null
@@ -1,59 +0,0 @@
-FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04
-ENV DEBIAN_FRONTEND=noninteractive
-RUN apt-get update && \
- apt-get upgrade -y && \
- apt-get install -y --no-install-recommends \
- git \
- zip \
- unzip \
- git-lfs \
- wget \
- curl \
- # ffmpeg \
- ffmpeg \
- x264 \
- # python build dependencies \
- build-essential \
- libssl-dev \
- zlib1g-dev \
- libbz2-dev \
- libreadline-dev \
- libsqlite3-dev \
- libncursesw5-dev \
- xz-utils \
- tk-dev \
- libxml2-dev \
- libxmlsec1-dev \
- libffi-dev \
- liblzma-dev && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/*
-
-RUN useradd -m -u 1000 user
-USER user
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:${PATH}
-WORKDIR ${HOME}/app
-
-RUN curl https://pyenv.run | bash
-ENV PATH=${HOME}/.pyenv/shims:${HOME}/.pyenv/bin:${PATH}
-ENV PYTHON_VERSION=3.10.9
-RUN pyenv install ${PYTHON_VERSION} && \
- pyenv global ${PYTHON_VERSION} && \
- pyenv rehash && \
- pip install --no-cache-dir -U pip setuptools wheel
-
-RUN pip install --no-cache-dir -U torch==1.12.1 torchvision==0.13.1
-COPY --chown=1000 requirements.txt /tmp/requirements.txt
-RUN pip install --no-cache-dir -U -r /tmp/requirements.txt
-
-COPY --chown=1000 . ${HOME}/app
-RUN ls -a
-ENV PYTHONPATH=${HOME}/app \
- PYTHONUNBUFFERED=1 \
- GRADIO_ALLOW_FLAGGING=never \
- GRADIO_NUM_PORTS=1 \
- GRADIO_SERVER_NAME=0.0.0.0 \
- GRADIO_THEME=huggingface \
- SYSTEM=spaces
-CMD ["python", "app.py"]
\ No newline at end of file
diff --git a/spaces/fb700/chatglm-fitness-RLHF/tts_voice.py b/spaces/fb700/chatglm-fitness-RLHF/tts_voice.py
deleted file mode 100644
index 8ee194c252f82ada41ccc14f33adb592e1a00985..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/tts_voice.py
+++ /dev/null
@@ -1,26 +0,0 @@
-tts_order_voice = {'英语 (美国)-Jenny-女': 'en-US-JennyNeural',
- '英语 (美国)-Guy-男': 'en-US-GuyNeural',
- '英语 (美国)-Ana-女': 'en-US-AnaNeural',
- '英语 (美国)-Aria-女': 'en-US-AriaNeural',
- '英语 (美国)-Christopher-男': 'en-US-ChristopherNeural',
- '英语 (美国)-Eric-男': 'en-US-EricNeural',
- '英语 (美国)-Michelle-女': 'en-US-MichelleNeural',
- '英语 (美国)-Roger-男': 'en-US-RogerNeural',
- '韩语 (韩国)-Sun-Hi-女': 'ko-KR-SunHiNeural',
- '韩语 (韩国)-InJoon-男': 'ko-KR-InJoonNeural',
- '日语 (日本)-Nanami-女': 'ja-JP-NanamiNeural',
- '日语 (日本)-Keita-男': 'ja-JP-KeitaNeural',
- '普通话 (中国大陆)-Xiaoxiao-女': 'zh-CN-XiaoxiaoNeural',
- '普通话 (中国大陆)-Yunyang-男': 'zh-CN-YunyangNeural',
- '普通话 (中国大陆)-Yunxi-男': 'zh-CN-YunxiNeural',
- '普通话 (中国大陆)-Xiaoyi-女': 'zh-CN-XiaoyiNeural',
- '普通话 (中国大陆)-Yunjian-男': 'zh-CN-YunjianNeural',
- '普通话 (中国大陆)-Yunxia-男': 'zh-CN-YunxiaNeural',
- '东北话 (中国大陆)-Xiaobei-女': 'zh-CN-liaoning-XiaobeiNeural',
- '中原官话 (中国陕西)-Xiaoni-女': 'zh-CN-shaanxi-XiaoniNeural',
- '粤语 (中国香港)-HiuMaan-女': 'zh-HK-HiuMaanNeural',
- '粤语 (中国香港)-HiuGaai-女': 'zh-HK-HiuGaaiNeural',
- '粤语 (中国香港)-WanLung-男': 'zh-HK-WanLungNeural',
- '台湾普通话-HsiaoChen-女': 'zh-TW-HsiaoChenNeural',
- '台湾普通话-HsiaoYu-女': 'zh-TW-HsiaoYuNeural',
- '台湾普通话-YunJhe-男': 'zh-TW-YunJheNeural'}
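The mapping above pairs human-readable labels with what look like Microsoft Edge neural voice identifiers. As a hedged sketch of how such a dict might be consumed, assuming the `edge-tts` package (the Space's actual calling code is not shown in this hunk):

```python
import asyncio
import edge_tts
from tts_voice import tts_order_voice

async def speak(text: str, display_name: str, out_path: str = "out.mp3") -> None:
    voice = tts_order_voice[display_name]            # e.g. 'zh-CN-XiaoxiaoNeural'
    await edge_tts.Communicate(text, voice).save(out_path)

asyncio.run(speak("你好，世界", "普通话 (中国大陆)-Xiaoxiao-女"))
```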
diff --git a/spaces/fclong/summary/fengshen/models/albert/modeling_albert.py b/spaces/fclong/summary/fengshen/models/albert/modeling_albert.py
deleted file mode 100644
index 7c5298825fb471e0575dabaefb2b8514e5bedcd8..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/models/albert/modeling_albert.py
+++ /dev/null
@@ -1,1363 +0,0 @@
-# coding=utf-8
-# Copyright 2018 Google AI, Google Brain and the HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""PyTorch ALBERT model. """
-
-import math
-import os
-from dataclasses import dataclass
-from typing import Optional, Tuple
-
-import torch
-from packaging import version
-from torch import nn
-from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
-
-from transformers.activations import ACT2FN
-from transformers.file_utils import (
- ModelOutput,
- add_code_sample_docstrings,
- add_start_docstrings,
- add_start_docstrings_to_model_forward,
- replace_return_docstrings,
-)
-from transformers.modeling_outputs import (
- BaseModelOutput,
- BaseModelOutputWithPooling,
- MaskedLMOutput,
- MultipleChoiceModelOutput,
- QuestionAnsweringModelOutput,
- SequenceClassifierOutput,
- TokenClassifierOutput,
-)
-from transformers.modeling_utils import (
- PreTrainedModel,
- apply_chunking_to_forward,
- find_pruneable_heads_and_indices,
- prune_linear_layer,
-)
-from transformers.utils import logging
-from transformers import AlbertConfig
-
-
-
-logger = logging.get_logger(__name__)
-
-_CHECKPOINT_FOR_DOC = "albert-base-v2"
-_CONFIG_FOR_DOC = "AlbertConfig"
-_TOKENIZER_FOR_DOC = "AlbertTokenizer"
-
-
-ALBERT_PRETRAINED_MODEL_ARCHIVE_LIST = [
- "albert-base-v1",
- "albert-large-v1",
- "albert-xlarge-v1",
- "albert-xxlarge-v1",
- "albert-base-v2",
- "albert-large-v2",
- "albert-xlarge-v2",
- "albert-xxlarge-v2",
- # See all ALBERT models at https://huggingface.co/models?filter=albert
-]
-
-
-def load_tf_weights_in_albert(model, config, tf_checkpoint_path):
- """Load tf checkpoints in a pytorch model."""
- try:
- import re
-
- import numpy as np
- import tensorflow as tf
- except ImportError:
- logger.error(
- "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see "
- "https://www.tensorflow.org/install/ for installation instructions."
- )
- raise
- tf_path = os.path.abspath(tf_checkpoint_path)
- logger.info(f"Converting TensorFlow checkpoint from {tf_path}")
- # Load weights from TF model
- init_vars = tf.train.list_variables(tf_path)
- names = []
- arrays = []
- for name, shape in init_vars:
- logger.info(f"Loading TF weight {name} with shape {shape}")
- array = tf.train.load_variable(tf_path, name)
- names.append(name)
- arrays.append(array)
-
- for name, array in zip(names, arrays):
- print(name)
-
- for name, array in zip(names, arrays):
- original_name = name
-
- # If saved from the TF HUB module
- name = name.replace("module/", "")
-
- # Renaming and simplifying
- name = name.replace("ffn_1", "ffn")
- name = name.replace("bert/", "albert/")
- name = name.replace("attention_1", "attention")
- name = name.replace("transform/", "")
- name = name.replace("LayerNorm_1", "full_layer_layer_norm")
- name = name.replace("LayerNorm", "attention/LayerNorm")
- name = name.replace("transformer/", "")
-
- # The feed forward layer had an 'intermediate' step which has been abstracted away
- name = name.replace("intermediate/dense/", "")
- name = name.replace("ffn/intermediate/output/dense/", "ffn_output/")
-
- # ALBERT attention was split between self and output which have been abstracted away
- name = name.replace("/output/", "/")
- name = name.replace("/self/", "/")
-
- # The pooler is a linear layer
- name = name.replace("pooler/dense", "pooler")
-
- # The classifier was simplified to predictions from cls/predictions
- name = name.replace("cls/predictions", "predictions")
- name = name.replace("predictions/attention", "predictions")
-
- # Naming was changed to be more explicit
- name = name.replace("embeddings/attention", "embeddings")
- name = name.replace("inner_group_", "albert_layers/")
- name = name.replace("group_", "albert_layer_groups/")
-
- # Classifier
- if len(name.split("/")) == 1 and ("output_bias" in name or "output_weights" in name):
- name = "classifier/" + name
-
- # No ALBERT model currently handles the next sentence prediction task
- if "seq_relationship" in name:
- name = name.replace("seq_relationship/output_", "sop_classifier/classifier/")
- name = name.replace("weights", "weight")
-
- name = name.split("/")
-
- # Ignore the gradients applied by the LAMB/ADAM optimizers.
- if (
- "adam_m" in name
- or "adam_v" in name
- or "AdamWeightDecayOptimizer" in name
- or "AdamWeightDecayOptimizer_1" in name
- or "global_step" in name
- ):
- logger.info(f"Skipping {'/'.join(name)}")
- continue
-
- pointer = model
- for m_name in name:
- if re.fullmatch(r"[A-Za-z]+_\d+", m_name):
- scope_names = re.split(r"_(\d+)", m_name)
- else:
- scope_names = [m_name]
-
- if scope_names[0] == "kernel" or scope_names[0] == "gamma":
- pointer = getattr(pointer, "weight")
- elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
- pointer = getattr(pointer, "bias")
- elif scope_names[0] == "output_weights":
- pointer = getattr(pointer, "weight")
- elif scope_names[0] == "squad":
- pointer = getattr(pointer, "classifier")
- else:
- try:
- pointer = getattr(pointer, scope_names[0])
- except AttributeError:
- logger.info(f"Skipping {'/'.join(name)}")
- continue
- if len(scope_names) >= 2:
- num = int(scope_names[1])
- pointer = pointer[num]
-
- if m_name[-11:] == "_embeddings":
- pointer = getattr(pointer, "weight")
- elif m_name == "kernel":
- array = np.transpose(array)
- try:
- if pointer.shape != array.shape:
- raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched")
- except AssertionError as e:
- e.args += (pointer.shape, array.shape)
- raise
- print(f"Initialize PyTorch weight {name} from {original_name}")
- pointer.data = torch.from_numpy(array)
-
- return model
-
-
-class AlbertEmbeddings(nn.Module):
- """
- Construct the embeddings from word, position and token_type embeddings.
- """
-
- def __init__(self, config):
- super().__init__()
- self.word_embeddings = nn.Embedding(config.vocab_size, config.embedding_size, padding_idx=config.pad_token_id)
- self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.embedding_size)
- self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.embedding_size)
-
- # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
- # any TensorFlow checkpoint file
- self.LayerNorm = nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- # position_ids (1, len position emb) is contiguous in memory and exported when serialized
- self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)))
- self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
- if version.parse(torch.__version__) > version.parse("1.6.0"):
- self.register_buffer(
- "token_type_ids",
- torch.zeros(self.position_ids.size(), dtype=torch.long, device=self.position_ids.device),
- persistent=False,
- )
-
- # Copied from transformers.models.bert.modeling_bert.BertEmbeddings.forward
- def forward(
- self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0
- ):
- if input_ids is not None:
- input_shape = input_ids.size()
- else:
- input_shape = inputs_embeds.size()[:-1]
-
- seq_length = input_shape[1]
-
- if position_ids is None:
- position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length]
-
- # Setting the token_type_ids to the registered buffer in constructor where it is all zeros, which usually occurs
- # when its auto-generated, registered buffer helps users when tracing the model without passing token_type_ids, solves
- # issue #5664
- if token_type_ids is None:
- if hasattr(self, "token_type_ids"):
- buffered_token_type_ids = self.token_type_ids[:, :seq_length]
- buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length)
- token_type_ids = buffered_token_type_ids_expanded
- else:
- token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device)
-
- if inputs_embeds is None:
- inputs_embeds = self.word_embeddings(input_ids)
- token_type_embeddings = self.token_type_embeddings(token_type_ids)
-
- embeddings = inputs_embeds + token_type_embeddings
- if self.position_embedding_type == "absolute":
- position_embeddings = self.position_embeddings(position_ids)
- embeddings += position_embeddings
- embeddings = self.LayerNorm(embeddings)
- embeddings = self.dropout(embeddings)
- return embeddings
-
-
-class AlbertAttention(nn.Module):
- def __init__(self, config):
- super().__init__()
- if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
- raise ValueError(
- f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention "
- f"heads ({config.num_attention_heads}"
- )
-
- self.num_attention_heads = config.num_attention_heads
- self.hidden_size = config.hidden_size
- self.attention_head_size = config.hidden_size // config.num_attention_heads
- self.all_head_size = self.num_attention_heads * self.attention_head_size
-
- self.query = nn.Linear(config.hidden_size, self.all_head_size)
- self.key = nn.Linear(config.hidden_size, self.all_head_size)
- self.value = nn.Linear(config.hidden_size, self.all_head_size)
-
- self.attention_dropout = nn.Dropout(config.attention_probs_dropout_prob)
- self.output_dropout = nn.Dropout(config.hidden_dropout_prob)
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.pruned_heads = set()
-
- self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
- if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
- self.max_position_embeddings = config.max_position_embeddings
- self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size)
-
- # Copied from transformers.models.bert.modeling_bert.BertSelfAttention.transpose_for_scores
- def transpose_for_scores(self, x):
- new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
- x = x.view(*new_x_shape)
- return x.permute(0, 2, 1, 3)
-
- def prune_heads(self, heads):
- if len(heads) == 0:
- return
- heads, index = find_pruneable_heads_and_indices(
- heads, self.num_attention_heads, self.attention_head_size, self.pruned_heads
- )
-
- # Prune linear layers
- self.query = prune_linear_layer(self.query, index)
- self.key = prune_linear_layer(self.key, index)
- self.value = prune_linear_layer(self.value, index)
- self.dense = prune_linear_layer(self.dense, index, dim=1)
-
- # Update hyper params and store pruned heads
- self.num_attention_heads = self.num_attention_heads - len(heads)
- self.all_head_size = self.attention_head_size * self.num_attention_heads
- self.pruned_heads = self.pruned_heads.union(heads)
-
- def forward(self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False):
- mixed_query_layer = self.query(hidden_states)
- mixed_key_layer = self.key(hidden_states)
- mixed_value_layer = self.value(hidden_states)
-
- query_layer = self.transpose_for_scores(mixed_query_layer)
- key_layer = self.transpose_for_scores(mixed_key_layer)
- value_layer = self.transpose_for_scores(mixed_value_layer)
-
- # Take the dot product between "query" and "key" to get the raw attention scores.
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
- attention_scores = attention_scores / math.sqrt(self.attention_head_size)
-
- if attention_mask is not None:
- # Apply the attention mask is (precomputed for all layers in BertModel forward() function)
- attention_scores = attention_scores + attention_mask
-
- if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
- seq_length = hidden_states.size()[1]
- position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1)
- position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1)
- distance = position_ids_l - position_ids_r
- positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1)
- positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility
-
- if self.position_embedding_type == "relative_key":
- relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
- attention_scores = attention_scores + relative_position_scores
- elif self.position_embedding_type == "relative_key_query":
- relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
- relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding)
- attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key
-
- # Normalize the attention scores to probabilities.
- attention_probs = nn.Softmax(dim=-1)(attention_scores)
-
- # This is actually dropping out entire tokens to attend to, which might
- # seem a bit unusual, but is taken from the original Transformer paper.
- attention_probs = self.attention_dropout(attention_probs)
-
- # Mask heads if we want to
- if head_mask is not None:
- attention_probs = attention_probs * head_mask
-
- context_layer = torch.matmul(attention_probs, value_layer)
- context_layer = context_layer.transpose(2, 1).flatten(2)
-
- projected_context_layer = self.dense(context_layer)
- projected_context_layer_dropout = self.output_dropout(projected_context_layer)
- layernormed_context_layer = self.LayerNorm(hidden_states + projected_context_layer_dropout)
- return (layernormed_context_layer, attention_probs) if output_attentions else (layernormed_context_layer,)
-
-
-class AlbertLayer(nn.Module):
- def __init__(self, config):
- super().__init__()
-
- self.config = config
- self.chunk_size_feed_forward = config.chunk_size_feed_forward
- self.seq_len_dim = 1
- self.full_layer_layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.attention = AlbertAttention(config)
- self.ffn = nn.Linear(config.hidden_size, config.intermediate_size)
- self.ffn_output = nn.Linear(config.intermediate_size, config.hidden_size)
- self.activation = ACT2FN[config.hidden_act]
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- def forward(
- self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False, output_hidden_states=False
- ):
- attention_output = self.attention(hidden_states, attention_mask, head_mask, output_attentions)
-
- ffn_output = apply_chunking_to_forward(
- self.ff_chunk,
- self.chunk_size_feed_forward,
- self.seq_len_dim,
- attention_output[0],
- )
- hidden_states = self.full_layer_layer_norm(ffn_output + attention_output[0])
-
- return (hidden_states,) + attention_output[1:] # add attentions if we output them
-
- def ff_chunk(self, attention_output):
- ffn_output = self.ffn(attention_output)
- ffn_output = self.activation(ffn_output)
- ffn_output = self.ffn_output(ffn_output)
- return ffn_output
-
-
-class AlbertLayerGroup(nn.Module):
- def __init__(self, config):
- super().__init__()
-
- self.albert_layers = nn.ModuleList([AlbertLayer(config) for _ in range(config.inner_group_num)])
-
- def forward(
- self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False, output_hidden_states=False
- ):
- layer_hidden_states = ()
- layer_attentions = ()
-
- for layer_index, albert_layer in enumerate(self.albert_layers):
- layer_output = albert_layer(hidden_states, attention_mask, head_mask[layer_index], output_attentions)
- hidden_states = layer_output[0]
-
- if output_attentions:
- layer_attentions = layer_attentions + (layer_output[1],)
-
- if output_hidden_states:
- layer_hidden_states = layer_hidden_states + (hidden_states,)
-
- outputs = (hidden_states,)
- if output_hidden_states:
- outputs = outputs + (layer_hidden_states,)
- if output_attentions:
- outputs = outputs + (layer_attentions,)
- return outputs # last-layer hidden state, (layer hidden states), (layer attentions)
-
-
-class AlbertTransformer(nn.Module):
- def __init__(self, config):
- super().__init__()
-
- self.config = config
- self.embedding_hidden_mapping_in = nn.Linear(config.embedding_size, config.hidden_size)
- self.albert_layer_groups = nn.ModuleList([AlbertLayerGroup(config) for _ in range(config.num_hidden_groups)])
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- output_attentions=False,
- output_hidden_states=False,
- return_dict=True,
- ):
- hidden_states = self.embedding_hidden_mapping_in(hidden_states)
-
- all_hidden_states = (hidden_states,) if output_hidden_states else None
- all_attentions = () if output_attentions else None
-
- head_mask = [None] * self.config.num_hidden_layers if head_mask is None else head_mask
-
- for i in range(self.config.num_hidden_layers):
- # Number of layers in a hidden group
- layers_per_group = int(self.config.num_hidden_layers / self.config.num_hidden_groups)
-
- # Index of the hidden group
- group_idx = int(i / (self.config.num_hidden_layers / self.config.num_hidden_groups))
-
- layer_group_output = self.albert_layer_groups[group_idx](
- hidden_states,
- attention_mask,
- head_mask[group_idx * layers_per_group : (group_idx + 1) * layers_per_group],
- output_attentions,
- output_hidden_states,
- )
- hidden_states = layer_group_output[0]
-
- if output_attentions:
- all_attentions = all_attentions + layer_group_output[-1]
-
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- if not return_dict:
- return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None)
- return BaseModelOutput(
- last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions
- )
-
-
-class AlbertPreTrainedModel(PreTrainedModel):
- """
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
- models.
- """
-
- config_class = AlbertConfig
- load_tf_weights = load_tf_weights_in_albert
- base_model_prefix = "albert"
- _keys_to_ignore_on_load_missing = [r"position_ids"]
-
- def _init_weights(self, module):
- """Initialize the weights."""
- if isinstance(module, nn.Linear):
- # Slightly different from the TF version which uses truncated_normal for initialization
- # cf https://github.com/pytorch/pytorch/pull/5617
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
- if module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.Embedding):
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
- if module.padding_idx is not None:
- module.weight.data[module.padding_idx].zero_()
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
-
-
-@dataclass
-class AlbertForPreTrainingOutput(ModelOutput):
- """
- Output type of :class:`~transformers.AlbertForPreTraining`.
-
- Args:
- loss (`optional`, returned when ``labels`` is provided, ``torch.FloatTensor`` of shape :obj:`(1,)`):
- Total loss as the sum of the masked language modeling loss and the next sequence prediction
- (classification) loss.
- prediction_logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, config.vocab_size)`):
- Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- sop_logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, 2)`):
- Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
- before SoftMax).
- hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
- Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
- of shape :obj:`(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
- Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads,
- sequence_length, sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[torch.FloatTensor] = None
- prediction_logits: torch.FloatTensor = None
- sop_logits: torch.FloatTensor = None
- hidden_states: Optional[Tuple[torch.FloatTensor]] = None
- attentions: Optional[Tuple[torch.FloatTensor]] = None
-
-
-ALBERT_START_DOCSTRING = r"""
-
-    This model inherits from :class:`~transformers.PreTrainedModel`. Check the superclass documentation for the generic
-    methods the library implements for all its models (such as downloading or saving, resizing the input embeddings,
-    pruning heads etc.)
-
-    This model is also a PyTorch `torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`__
-    subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
-    general usage and behavior.
-
- Args:
- config (:class:`~transformers.AlbertConfig`): Model configuration class with all the parameters of the model.
- Initializing with a config file does not load the weights associated with the model, only the
- configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model
- weights.
-"""
-
-ALBERT_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`):
- Indices of input sequence tokens in the vocabulary.
-
- Indices can be obtained using :class:`~transformers.AlbertTokenizer`. See
- :meth:`transformers.PreTrainedTokenizer.__call__` and :meth:`transformers.PreTrainedTokenizer.encode` for
- details.
-
- `What are input IDs? <../glossary.html#input-ids>`__
- attention_mask (:obj:`torch.FloatTensor` of shape :obj:`({0})`, `optional`):
- Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- `What are attention masks? <../glossary.html#attention-mask>`__
- token_type_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`):
- Segment token indices to indicate first and second portions of the inputs. Indices are selected in ``[0,
- 1]``:
-
- - 0 corresponds to a `sentence A` token,
- - 1 corresponds to a `sentence B` token.
-
- `What are token type IDs? <../glossary.html#token-type-ids>`_
- position_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range ``[0,
- config.max_position_embeddings - 1]``.
-
- `What are position IDs? <../glossary.html#position-ids>`_
- head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`):
- Mask to nullify selected heads of the self-attention modules. Mask values selected in ``[0, 1]``:
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
-
- inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`({0}, hidden_size)`, `optional`):
- Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation.
- This is useful if you want more control over how to convert :obj:`input_ids` indices into associated
- vectors than the model's internal embedding lookup matrix.
- output_attentions (:obj:`bool`, `optional`):
- Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
- tensors for more detail.
- output_hidden_states (:obj:`bool`, `optional`):
- Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for
- more detail.
- return_dict (:obj:`bool`, `optional`):
- Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
-"""
-
-
-@add_start_docstrings(
- "The bare ALBERT Model transformer outputting raw hidden-states without any specific head on top.",
- ALBERT_START_DOCSTRING,
-)
-class AlbertModel(AlbertPreTrainedModel):
-
- config_class = AlbertConfig
- base_model_prefix = "albert"
-
- def __init__(self, config, add_pooling_layer=True):
- super().__init__(config)
-
- self.config = config
- self.embeddings = AlbertEmbeddings(config)
- self.encoder = AlbertTransformer(config)
- if add_pooling_layer:
- self.pooler = nn.Linear(config.hidden_size, config.hidden_size)
- self.pooler_activation = nn.Tanh()
- else:
- self.pooler = None
- self.pooler_activation = None
-
- self.init_weights()
-
- def get_input_embeddings(self):
- return self.embeddings.word_embeddings
-
- def set_input_embeddings(self, value):
- self.embeddings.word_embeddings = value
-
- def _prune_heads(self, heads_to_prune):
- """
- Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} ALBERT has
- a different architecture in that its layers are shared across groups, which then has inner groups. If an ALBERT
- model has 12 hidden layers and 2 hidden groups, with two inner groups, there is a total of 4 different layers.
-
- These layers are flattened: the indices [0,1] correspond to the two inner groups of the first hidden layer,
- while [2,3] correspond to the two inner groups of the second hidden layer.
-
-        Any layer with an index other than [0,1,2,3] will result in an error. See base class PreTrainedModel for more
- information about head pruning
- """
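-        # Example (assuming config.inner_group_num == 2): heads_to_prune = {3: [0, 1]} prunes heads 0 and 1
-        # of inner layer 1 inside hidden group 1, since 3 // 2 == 1 and 3 - 2 * 1 == 1.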
- for layer, heads in heads_to_prune.items():
- group_idx = int(layer / self.config.inner_group_num)
- inner_group_idx = int(layer - group_idx * self.config.inner_group_num)
- self.encoder.albert_layer_groups[group_idx].albert_layers[inner_group_idx].attention.prune_heads(heads)
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=BaseModelOutputWithPooling,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
- elif input_ids is not None:
- input_shape = input_ids.size()
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- else:
- raise ValueError("You have to specify either input_ids or inputs_embeds")
-
- batch_size, seq_length = input_shape
- device = input_ids.device if input_ids is not None else inputs_embeds.device
-
- if attention_mask is None:
- attention_mask = torch.ones(input_shape, device=device)
- if token_type_ids is None:
- if hasattr(self.embeddings, "token_type_ids"):
- buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
- buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length)
- token_type_ids = buffered_token_type_ids_expanded
- else:
- token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
-
- # extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2) #
-        extended_attention_mask = attention_mask[:, None, None, :]
- extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility
- extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
- head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
- embedding_output = self.embeddings(
- input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
- )
- encoder_outputs = self.encoder(
- embedding_output,
- extended_attention_mask,
- head_mask=head_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output = encoder_outputs[0]
-
- pooled_output = self.pooler_activation(self.pooler(sequence_output[:, 0])) if self.pooler is not None else None
-
- if not return_dict:
- return (sequence_output, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPooling(
- last_hidden_state=sequence_output,
- pooler_output=pooled_output,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- )
-
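To make the additive attention-mask trick in `AlbertModel.forward` above concrete, here is a small sketch (not part of the original file) of what the extended mask looks like for a single padded sequence:

```python
import torch

attention_mask = torch.tensor([[1, 1, 1, 0]])        # (batch, seq_len); 0 marks a padding token
extended = attention_mask[:, None, None, :].float()  # (batch, 1, 1, seq_len), broadcastable over heads and query positions
extended = (1.0 - extended) * -10000.0               # real tokens contribute 0 (printed as -0.), padding contributes -10000
print(extended.shape)  # torch.Size([1, 1, 1, 4])
```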
-
-@add_start_docstrings(
- """
- Albert Model with two heads on top as done during the pretraining: a `masked language modeling` head and a
- `sentence order prediction (classification)` head.
- """,
- ALBERT_START_DOCSTRING,
-)
-class AlbertForPreTraining(AlbertPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
-
- self.albert = AlbertModel(config)
- self.predictions = AlbertMLMHead(config)
- self.sop_classifier = AlbertSOPHead(config)
-
- self.init_weights()
-
- def get_output_embeddings(self):
- return self.predictions.decoder
-
- def set_output_embeddings(self, new_embeddings):
- self.predictions.decoder = new_embeddings
-
- def get_input_embeddings(self):
- return self.albert.embeddings.word_embeddings
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @replace_return_docstrings(output_type=AlbertForPreTrainingOutput, config_class=_CONFIG_FOR_DOC)
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- sentence_order_label=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- labels (``torch.LongTensor`` of shape ``(batch_size, sequence_length)``, `optional`):
- Labels for computing the masked language modeling loss. Indices should be in ``[-100, 0, ...,
- config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored
- (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]``
- sentence_order_label (``torch.LongTensor`` of shape ``(batch_size,)``, `optional`):
- Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
- (see :obj:`input_ids` docstring) Indices should be in ``[0, 1]``. ``0`` indicates original order (sequence
- A, then sequence B), ``1`` indicates switched order (sequence B, then sequence A).
-
- Returns:
-
- Example::
-
- >>> from transformers import AlbertTokenizer, AlbertForPreTraining
- >>> import torch
-
- >>> tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
- >>> model = AlbertForPreTraining.from_pretrained('albert-base-v2')
-
- >>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
- >>> outputs = model(input_ids)
-
- >>> prediction_logits = outputs.prediction_logits
- >>> sop_logits = outputs.sop_logits
-
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.albert(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output, pooled_output = outputs[:2]
-
- prediction_scores = self.predictions(sequence_output)
- sop_scores = self.sop_classifier(pooled_output)
-
- total_loss = None
- if labels is not None and sentence_order_label is not None:
- loss_fct = CrossEntropyLoss()
- masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
- sentence_order_loss = loss_fct(sop_scores.view(-1, 2), sentence_order_label.view(-1))
- total_loss = masked_lm_loss + sentence_order_loss
-
- if not return_dict:
- output = (prediction_scores, sop_scores) + outputs[2:]
- return ((total_loss,) + output) if total_loss is not None else output
-
- return AlbertForPreTrainingOutput(
- loss=total_loss,
- prediction_logits=prediction_scores,
- sop_logits=sop_scores,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-
-class AlbertMLMHead(nn.Module):
- def __init__(self, config):
- super().__init__()
-
- self.LayerNorm = nn.LayerNorm(config.embedding_size)
- self.bias = nn.Parameter(torch.zeros(config.vocab_size))
- self.dense = nn.Linear(config.hidden_size, config.embedding_size)
- self.decoder = nn.Linear(config.embedding_size, config.vocab_size)
- self.activation = ACT2FN[config.hidden_act]
- self.decoder.bias = self.bias
-
- def forward(self, hidden_states):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.activation(hidden_states)
- hidden_states = self.LayerNorm(hidden_states)
- hidden_states = self.decoder(hidden_states)
-
- prediction_scores = hidden_states
-
- return prediction_scores
-
- def _tie_weights(self):
- # To tie those two weights if they get disconnected (on TPU or when the bias is resized)
- self.bias = self.decoder.bias
-
-
-class AlbertSOPHead(nn.Module):
- def __init__(self, config):
- super().__init__()
-
- self.dropout = nn.Dropout(config.classifier_dropout_prob)
- self.classifier = nn.Linear(config.hidden_size, config.num_labels)
-
- def forward(self, pooled_output):
- dropout_pooled_output = self.dropout(pooled_output)
- logits = self.classifier(dropout_pooled_output)
- return logits
-
-
-@add_start_docstrings(
- "Albert Model with a `language modeling` head on top.",
- ALBERT_START_DOCSTRING,
-)
-class AlbertForMaskedLM(AlbertPreTrainedModel):
-
- _keys_to_ignore_on_load_unexpected = [r"pooler"]
-
- def __init__(self, config):
- super().__init__(config)
-
- self.albert = AlbertModel(config, add_pooling_layer=False)
- self.predictions = AlbertMLMHead(config)
-
- self.init_weights()
-
- def get_output_embeddings(self):
- return self.predictions.decoder
-
- def set_output_embeddings(self, new_embeddings):
- self.predictions.decoder = new_embeddings
-
- def get_input_embeddings(self):
- return self.albert.embeddings.word_embeddings
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=MaskedLMOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Labels for computing the masked language modeling loss. Indices should be in ``[-100, 0, ...,
- config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored
- (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]``
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.albert(
- input_ids=input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- sequence_outputs = outputs[0]
-
- prediction_scores = self.predictions(sequence_outputs)
-
- masked_lm_loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss()
- masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
-
- if not return_dict:
- output = (prediction_scores,) + outputs[2:]
- return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
-
- return MaskedLMOutput(
- loss=masked_lm_loss,
- logits=prediction_scores,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-
-@add_start_docstrings(
- """
- Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
- output) e.g. for GLUE tasks.
- """,
- ALBERT_START_DOCSTRING,
-)
-class AlbertForSequenceClassification(AlbertPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
- self.config = config
-
- self.albert = AlbertModel(config)
- self.dropout = nn.Dropout(config.classifier_dropout_prob)
- self.classifier = nn.Linear(config.hidden_size, self.config.num_labels)
-
- self.init_weights()
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=SequenceClassifierOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
- Labels for computing the sequence classification/regression loss. Indices should be in ``[0, ...,
- config.num_labels - 1]``. If ``config.num_labels == 1`` a regression loss is computed (Mean-Square loss),
- If ``config.num_labels > 1`` a classification loss is computed (Cross-Entropy).
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.albert(
- input_ids=input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- pooled_output = outputs[1]
-
- pooled_output = self.dropout(pooled_output)
- logits = self.classifier(pooled_output)
-
- loss = None
- if labels is not None:
- if self.config.problem_type is None:
- if self.num_labels == 1:
- self.config.problem_type = "regression"
- elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
- self.config.problem_type = "single_label_classification"
- else:
- self.config.problem_type = "multi_label_classification"
-
- if self.config.problem_type == "regression":
- loss_fct = MSELoss()
- if self.num_labels == 1:
- loss = loss_fct(logits.squeeze(), labels.squeeze())
- else:
- loss = loss_fct(logits, labels)
- elif self.config.problem_type == "single_label_classification":
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
- elif self.config.problem_type == "multi_label_classification":
- loss_fct = BCEWithLogitsLoss()
- loss = loss_fct(logits, labels)
-
- if not return_dict:
- output = (logits,) + outputs[2:]
- return ((loss,) + output) if loss is not None else output
-
- return SequenceClassifierOutput(
- loss=loss,
- logits=logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-
-@add_start_docstrings(
- """
- Albert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
- Named-Entity-Recognition (NER) tasks.
- """,
- ALBERT_START_DOCSTRING,
-)
-class AlbertForTokenClassification(AlbertPreTrainedModel):
-
- _keys_to_ignore_on_load_unexpected = [r"pooler"]
-
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
-
- self.albert = AlbertModel(config, add_pooling_layer=False)
- classifier_dropout_prob = (
- config.classifier_dropout_prob
- if config.classifier_dropout_prob is not None
- else config.hidden_dropout_prob
- )
- self.dropout = nn.Dropout(classifier_dropout_prob)
- self.classifier = nn.Linear(config.hidden_size, self.config.num_labels)
-
- self.init_weights()
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=TokenClassifierOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Labels for computing the token classification loss. Indices should be in ``[0, ..., config.num_labels -
- 1]``.
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.albert(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output = outputs[0]
-
- sequence_output = self.dropout(sequence_output)
- logits = self.classifier(sequence_output)
-
- loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss()
- # Only keep active parts of the loss
- if attention_mask is not None:
- active_loss = attention_mask.view(-1) == 1
- active_logits = logits.view(-1, self.num_labels)
- active_labels = torch.where(
- active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)
- )
- loss = loss_fct(active_logits, active_labels)
- else:
- loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
-
- if not return_dict:
- output = (logits,) + outputs[2:]
- return ((loss,) + output) if loss is not None else output
-
- return TokenClassifierOutput(
- loss=loss,
- logits=logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-
-@add_start_docstrings(
- """
- Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
- layers on top of the hidden-states output to compute `span start logits` and `span end logits`).
- """,
- ALBERT_START_DOCSTRING,
-)
-class AlbertForQuestionAnswering(AlbertPreTrainedModel):
-
- _keys_to_ignore_on_load_unexpected = [r"pooler"]
-
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
-
- self.albert = AlbertModel(config, add_pooling_layer=False)
- self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)
-
- self.init_weights()
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=QuestionAnsweringModelOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- start_positions=None,
- end_positions=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- start_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
- Labels for position (index) of the start of the labelled span for computing the token classification loss.
- Positions are clamped to the length of the sequence (:obj:`sequence_length`). Position outside of the
- sequence are not taken into account for computing the loss.
- end_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
- Labels for position (index) of the end of the labelled span for computing the token classification loss.
- Positions are clamped to the length of the sequence (:obj:`sequence_length`). Position outside of the
- sequence are not taken into account for computing the loss.
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.albert(
- input_ids=input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output = outputs[0]
-
- logits = self.qa_outputs(sequence_output)
- start_logits, end_logits = logits.split(1, dim=-1)
- start_logits = start_logits.squeeze(-1).contiguous()
- end_logits = end_logits.squeeze(-1).contiguous()
-
- total_loss = None
- if start_positions is not None and end_positions is not None:
- # If we are on multi-GPU, split add a dimension
- if len(start_positions.size()) > 1:
- start_positions = start_positions.squeeze(-1)
- if len(end_positions.size()) > 1:
- end_positions = end_positions.squeeze(-1)
- # sometimes the start/end positions are outside our model inputs, we ignore these terms
- ignored_index = start_logits.size(1)
- start_positions = start_positions.clamp(0, ignored_index)
- end_positions = end_positions.clamp(0, ignored_index)
-
- loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
- start_loss = loss_fct(start_logits, start_positions)
- end_loss = loss_fct(end_logits, end_positions)
- total_loss = (start_loss + end_loss) / 2
-
- if not return_dict:
- output = (start_logits, end_logits) + outputs[2:]
- return ((total_loss,) + output) if total_loss is not None else output
-
- return QuestionAnsweringModelOutput(
- loss=total_loss,
- start_logits=start_logits,
- end_logits=end_logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
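For readers following `AlbertForQuestionAnswering.forward` above, this small sketch (illustrative only, not from the original file) shows what the `split`/`squeeze` step does to the QA head output:

```python
import torch

logits = torch.randn(2, 5, 2)                         # (batch, seq_len, 2): a start score and an end score per token
start_logits, end_logits = logits.split(1, dim=-1)    # two tensors of shape (batch, seq_len, 1)
start_logits = start_logits.squeeze(-1).contiguous()  # (batch, seq_len)
end_logits = end_logits.squeeze(-1).contiguous()      # (batch, seq_len)
```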
-
-@add_start_docstrings(
- """
- Albert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
- softmax) e.g. for RocStories/SWAG tasks.
- """,
- ALBERT_START_DOCSTRING,
-)
-class AlbertForMultipleChoice(AlbertPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
-
- self.albert = AlbertModel(config)
- self.dropout = nn.Dropout(config.classifier_dropout_prob)
- self.classifier = nn.Linear(config.hidden_size, 1)
-
- self.init_weights()
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length"))
- @add_code_sample_docstrings(
- processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=MultipleChoiceModelOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
- Labels for computing the multiple choice classification loss. Indices should be in ``[0, ...,
- num_choices-1]`` where `num_choices` is the size of the second dimension of the input tensors. (see
- `input_ids` above)
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
- num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]
-
- input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None
- attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None
- token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None
- position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None
- inputs_embeds = (
- inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1))
- if inputs_embeds is not None
- else None
- )
- outputs = self.albert(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- pooled_output = outputs[1]
-
- pooled_output = self.dropout(pooled_output)
- logits = self.classifier(pooled_output)
- reshaped_logits = logits.view(-1, num_choices)
-
- loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(reshaped_logits, labels)
-
- if not return_dict:
- output = (reshaped_logits,) + outputs[2:]
- return ((loss,) + output) if loss is not None else output
-
- return MultipleChoiceModelOutput(
- loss=loss,
- logits=reshaped_logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
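Since `AlbertForMultipleChoice.forward` above flattens `(batch, num_choices, seq_len)` inputs before running the encoder and then reshapes the logits back, a short sketch may help. It is not part of the original file and assumes the public `albert-base-v2` checkpoint; the prompt and choices are made up.

```python
import torch
from transformers import AlbertTokenizer, AlbertForMultipleChoice

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMultipleChoice.from_pretrained("albert-base-v2")

prompt = "The cat sat on the"
choices = ["mat.", "refrigerator manual."]

# Encode each (prompt, choice) pair, then stack to (batch=1, num_choices=2, seq_len).
enc = tokenizer([prompt, prompt], choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

outputs = model(**inputs, labels=torch.tensor([0]))
print(outputs.logits.shape)  # torch.Size([1, 2]) -- one score per choice
```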
diff --git a/spaces/fclong/summary/fengshen/pipelines/text_classification.py b/spaces/fclong/summary/fengshen/pipelines/text_classification.py
deleted file mode 100644
index c236bd9a354896c0fada947ba12206cc78ebc99f..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/pipelines/text_classification.py
+++ /dev/null
@@ -1,234 +0,0 @@
-import torch
-from torch.utils.data._utils.collate import default_collate
-from dataclasses import dataclass
-from typing import Dict, List
-from .base import (
- _CONFIG_MODEL_TYPE,
- _CONFIG_TOKENIZER_TYPE)
-from fengshen.models.roformer import RoFormerForSequenceClassification
-from fengshen.models.longformer import LongformerForSequenceClassification
-from fengshen.models.zen1 import ZenForSequenceClassification
-from transformers import (
- BertConfig,
- AutoModelForSequenceClassification,
- AutoTokenizer,
-)
-from transformers.models.auto.tokenization_auto import get_tokenizer_config
-from transformers.pipelines.base import PipelineException, GenericTensor
-from transformers import TextClassificationPipeline as HuggingfacePipe
-import pytorch_lightning as pl
-from fengshen.data.universal_datamodule import UniversalDataModule
-from fengshen.utils.universal_checkpoint import UniversalCheckpoint
-from fengshen.models.model_utils import add_module_args
-import torchmetrics
-
-_model_dict = {
- 'fengshen-roformer': RoFormerForSequenceClassification,
-    # 'fengshen-megatron_t5': T5EncoderModel, TODO: implement T5EncoderForSequenceClassification
- 'fengshen-longformer': LongformerForSequenceClassification,
- 'fengshen-zen1': ZenForSequenceClassification,
- 'huggingface-auto': AutoModelForSequenceClassification,
-}
-
-_tokenizer_dict = {}
-
-_ATTR_PREPARE_INPUT = '_prepare_inputs_for_sequence_classification'
-
-
-class _taskModel(pl.LightningModule):
- @staticmethod
- def add_model_specific_args(parent_args):
- _ = parent_args.add_argument_group('text classification task model')
- return parent_args
-
- def __init__(self, args, model):
- super().__init__()
- self.model = model
- self.acc_metrics = torchmetrics.Accuracy()
- self.save_hyperparameters(args)
-
- def setup(self, stage) -> None:
- if stage == 'fit':
- train_loader = self.trainer._data_connector._train_dataloader_source.dataloader()
- # Calculate total steps
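-            # i.e. total_steps = (len(train_dataset) * max_epochs) // (train_batchsize * world_size) // accumulate_grad_batches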
- if self.trainer.max_epochs > 0:
- world_size = self.trainer.world_size
- tb_size = self.hparams.train_batchsize * max(1, world_size)
- ab_size = self.trainer.accumulate_grad_batches
- self.total_steps = (len(train_loader.dataset) *
- self.trainer.max_epochs // tb_size) // ab_size
- else:
- self.total_steps = self.trainer.max_steps // self.trainer.accumulate_grad_batches
-
- print('Total steps: {}' .format(self.total_steps))
-
- def training_step(self, batch, batch_idx):
- outputs = self.model(**batch)
- loss, _ = outputs[0], outputs[1]
- self.log('train_loss', loss)
- return loss
-
- def comput_metrix(self, logits, labels):
- y_pred = torch.argmax(logits, dim=-1)
- y_pred = y_pred.view(size=(-1,))
- y_true = labels.view(size=(-1,)).long()
- acc = self.acc_metrics(y_pred.long(), y_true.long())
- return acc
-
- def validation_step(self, batch, batch_idx):
- outputs = self.model(**batch)
- loss, logits = outputs[0], outputs[1]
- acc = self.comput_metrix(logits, batch['labels'])
- self.log('val_loss', loss)
- self.log('val_acc', acc)
-
- def predict_step(self, batch, batch_idx):
- output = self.model(**batch)
- return output.logits
-
- def configure_optimizers(self):
- from fengshen.models.model_utils import configure_optimizers
- return configure_optimizers(self)
-
-
-@dataclass
-class _Collator:
- tokenizer = None
- texta_name = 'sentence'
- textb_name = 'sentence2'
- label_name = 'label'
- max_length = 512
- model_type = 'huggingface-auto'
-
- def __call__(self, samples):
- sample_list = []
- for item in samples:
- if self.textb_name in item and item[self.textb_name] != '':
- if self.model_type != 'fengshen-roformer':
- encode_dict = self.tokenizer.encode_plus(
- [item[self.texta_name], item[self.textb_name]],
- max_length=self.max_length,
- padding='max_length',
- truncation='longest_first')
- else:
- encode_dict = self.tokenizer.encode_plus(
- [item[self.texta_name]+'[SEP]'+item[self.textb_name]],
- max_length=self.max_length,
- padding='max_length',
- truncation='longest_first')
- else:
- encode_dict = self.tokenizer.encode_plus(
- item[self.texta_name],
- max_length=self.max_length,
- padding='max_length',
- truncation='longest_first')
- sample = {}
- for k, v in encode_dict.items():
- sample[k] = torch.tensor(v)
- if self.label_name in item:
- sample['labels'] = torch.tensor(item[self.label_name]).long()
- sample_list.append(sample)
- return default_collate(sample_list)
-
-
-class TextClassificationPipeline(HuggingfacePipe):
- @staticmethod
- def add_pipeline_specific_args(parent_args):
- parser = parent_args.add_argument_group('SequenceClassificationPipeline')
- parser.add_argument('--texta_name', default='sentence', type=str)
- parser.add_argument('--textb_name', default='sentence2', type=str)
- parser.add_argument('--label_name', default='label', type=str)
- parser.add_argument('--max_length', default=512, type=int)
- parser.add_argument('--device', default=-1, type=int)
- parser = _taskModel.add_model_specific_args(parent_args)
- parser = UniversalDataModule.add_data_specific_args(parent_args)
- parser = UniversalCheckpoint.add_argparse_args(parent_args)
- parser = pl.Trainer.add_argparse_args(parent_args)
- parser = add_module_args(parent_args)
- return parent_args
-
- def __init__(self,
- model: str = None,
- args=None,
- **kwargs):
- self.args = args
- self.model_name = model
- self.model_type = 'huggingface-auto'
-        # BertConfig is used here only for compatibility; we just need to read fengshen_model_type from it, so any Config class would do
- config = BertConfig.from_pretrained(model)
- if hasattr(config, _CONFIG_MODEL_TYPE):
- self.model_type = config.fengshen_model_type
- if self.model_type not in _model_dict:
- raise PipelineException(self.model_name, ' not in model type dict')
-        # Load the model and use the model's own config
- self.model = _model_dict[self.model_type].from_pretrained(model)
- self.config = self.model.config
-        # Load the tokenizer
- tokenizer_config = get_tokenizer_config(model, **kwargs)
- self.tokenizer = None
-        if _CONFIG_TOKENIZER_TYPE in tokenizer_config:
-            if tokenizer_config[_CONFIG_TOKENIZER_TYPE] in _tokenizer_dict:
-                self.tokenizer = _tokenizer_dict[tokenizer_config[_CONFIG_TOKENIZER_TYPE]].from_pretrained(
-                    model)
- if self.tokenizer is None:
- self.tokenizer = AutoTokenizer.from_pretrained(model)
-        # Set up the data collator
- c = _Collator()
- c.tokenizer = self.tokenizer
- c.model_type = self.model_type
- if args is not None:
- c.texta_name = self.args.texta_name
- c.textb_name = self.args.textb_name
- c.label_name = self.args.label_name
- c.max_length = self.args.max_length
- self.collator = c
- device = -1 if args is None else args.device
- print(device)
- print(kwargs)
- super().__init__(model=self.model,
- tokenizer=self.tokenizer,
- framework='pt',
- device=device,
- **kwargs)
-
- def train(self,
- datasets: Dict):
- """
- Args:
- datasets is a dict like
- {
- test: Dataset()
- validation: Dataset()
- train: Dataset()
- }
- """
- checkpoint_callback = UniversalCheckpoint(self.args)
- trainer = pl.Trainer.from_argparse_args(self.args,
- callbacks=[checkpoint_callback]
- )
-
- data_model = UniversalDataModule(
- datasets=datasets,
- tokenizer=self.tokenizer,
- collate_fn=self.collator,
- args=self.args)
- model = _taskModel(self.args, self.model)
-
- trainer.fit(model, data_model)
- return
-
- def preprocess(self, inputs, **tokenizer_kwargs) -> Dict[str, GenericTensor]:
-        # If the model defines its own preprocessing hook, use the model's interface
- if hasattr(self.model, _ATTR_PREPARE_INPUT):
- return getattr(self.model, _ATTR_PREPARE_INPUT)(inputs, self.tokenizer, **tokenizer_kwargs)
- samples = []
- if isinstance(inputs, str):
- samples.append({self.collator.texta_name: inputs})
- else:
-            # __call__ has already guaranteed the input type, so a plain else branch is enough here
-            for i in inputs:
-                samples.append({self.collator.texta_name: i})
- return self.collator(samples)
-
-
-Pipeline = TextClassificationPipeline
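A rough usage sketch for the pipeline defined above. It is illustrative only: the checkpoint path, the command-line flags, and the dataset variables are placeholders rather than anything specified in the original file.

```python
import argparse

# Build the combined argument parser that the pipeline, data module, trainer and checkpointing expect.
parser = argparse.ArgumentParser()
parser = TextClassificationPipeline.add_pipeline_specific_args(parser)
args = parser.parse_args()  # supply the usual fengshen / Lightning flags on the command line

pipe = TextClassificationPipeline(model="path/to/a/sequence-classification-checkpoint", args=args)

# Training takes a dict of datasets keyed by split name, as documented in pipe.train().
# datasets = {"train": train_ds, "validation": val_ds, "test": test_ds}
# pipe.train(datasets)

# Plain inference goes through the inherited HuggingFace pipeline __call__.
print(pipe("这部电影太精彩了"))
```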
diff --git a/spaces/fclong/summary/fengshen/utils/convert_py_to_npy.py b/spaces/fclong/summary/fengshen/utils/convert_py_to_npy.py
deleted file mode 100644
index 0d652169b59ffdc7ca977318ee72187b2ce73c1f..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/utils/convert_py_to_npy.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import argparse
-import torch
-import glob
-import os
-import numpy as np
-
-
-class MMapIndexDataset():
- def __init__(self, datapath):
- self.idxfp = np.load(datapath + '.npy', mmap_mode='r')
- self.binfp = np.memmap(datapath + '.bin', dtype='long', mode='r')
-
- def __len__(self):
- return self.idxfp.shape[0]
-
- def __getitem__(self, idx):
- return self.binfp[self.idxfp[idx, 0]:self.idxfp[idx, 1]]
-
-
-def convert_py_to_npy(input_tensor, bin_out, idx_out):
- idx = torch.empty(len(input_tensor), 2, dtype=torch.long)
- start = 0
- for i, input in enumerate(input_tensor):
- idx[i] = torch.tensor([start, start + len(input)])
- start += len(input)
- np.save(idx_out, idx)
-    binfp = np.memmap(bin_out, dtype='long', mode='w+', shape=(start,))
- start = 0
- for i, input in enumerate(input_tensor):
- for j, idx in enumerate(input):
- binfp[start + j] = idx
- start += len(input)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser(description="Text infilling.")
- parser.add_argument('--data_path', type=str,
- default='/cognitive_comp/gaoxinyu/data/wudao')
- args = parser.parse_args()
- process_key = [
- 'incorrect_input_ids_list',
- 'label_ids_list',
- 'target_ids_list',
- ]
- if os.path.exists(args.data_path):
- print(f'''Loading data from {args.data_path}''')
- data_dict = torch.load(args.data_path)
- for k in process_key:
- bin_out = ('_' + k + '.bin').join(args.data_path.rsplit('.pt', 1))
- idx_out = ('_' + k).join(args.data_path.rsplit('.pt', 1))
- convert_py_to_npy(data_dict[k], bin_out, idx_out)
- else:
- print(
- f'Please create the synthetic datafile {args.data_path} with create_synthetic_data.py.')
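A minimal round-trip sketch for the two helpers above (illustrative; the file names and toy sequences are placeholders, not part of the original script):

```python
import torch

# Three variable-length token sequences, as they might come out of a tokenizer.
sequences = [torch.tensor([1, 2, 3]), torch.tensor([4, 5]), torch.tensor([6, 7, 8, 9])]

convert_py_to_npy(sequences, "toy.bin", "toy")  # writes toy.bin (flat tokens) and toy.npy (start/end offsets)

dataset = MMapIndexDataset("toy")  # memory-maps toy.npy and toy.bin
print(len(dataset))                # 3
print(dataset[2])                  # view over the flat buffer equal to [6, 7, 8, 9]
```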
diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/utils/misc.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/utils/misc.py
deleted file mode 100644
index e2f772285c79db97a41a662d40f7361aed806448..0000000000000000000000000000000000000000
--- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/utils/misc.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import os
-from typing import Iterable
-
-
-def optional_string(condition: bool, string: str):
- return string if condition else ""
-
-
-def parent_dir(path: str) -> str:
- return os.path.basename(os.path.dirname(path))
-
-
-def stem(path: str) -> str:
- return os.path.splitext(os.path.basename(path))[0]
-
-
-def iterable_to_str(iterable: Iterable) -> str:
- return ','.join([str(x) for x in iterable])
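A few illustrative calls for the helpers above, with the expected values shown as comments:

```python
print(optional_string(True, " --verbose"))     # " --verbose"
print(optional_string(False, " --verbose"))    # ""
print(parent_dir("/data/runs/exp1/model.pt"))  # "exp1"
print(stem("/data/runs/exp1/model.pt"))        # "model"
print(iterable_to_str([1, 2, 3]))              # "1,2,3"
```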
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Growtopia Mod APK 3.95 with Autofarm and Multibot Features.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Growtopia Mod APK 3.95 with Autofarm and Multibot Features.md
deleted file mode 100644
index e2e28b98cd07823a6455c3ad1fc5372c33c33243..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Growtopia Mod APK 3.95 with Autofarm and Multibot Features.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-
Growtopia Mod Apk 3.95: Everything You Need to Know
-
Growtopia is a popular sandbox platformer MMO game that lets you create, explore, and socialize with other players in a vast pixelated world. But what if you want to enjoy the game with unlimited gems, items, and features? That's where Growtopia Mod Apk 3.95 comes in.
In this article, we will tell you everything you need to know about Growtopia Mod Apk 3.95, including what it is, how to download and install it, how to use it, what its benefits and risks are, and answers to some frequently asked questions.
-
What is Growtopia?
-
Before we dive into Growtopia Mod Apk 3.95, let's first understand what Growtopia is. Growtopia is a sandbox platformer MMO game that was released in 2012 by Robinson Technologies and Hamumu Software. It was later acquired by Ubisoft in 2017.
-
A sandbox platformer MMO game
-
Growtopia is a game that combines the elements of sandbox, platformer, and MMO genres. You can create your own worlds using various blocks and items, or visit other players' worlds and interact with them. You can also play mini-games, complete quests, trade items, join guilds, and participate in events.
-
A creative and social experience
-
Growtopia is a game that encourages creativity and socialization. You can express yourself by customizing your character, building your own world, or making art and music. You can also chat with other players, make friends, or even enemies. You can collaborate or compete with others in various ways.
-
growtopia mod menu 3.95 download
-growtopia powerkuy 3.95 autofarm multibot
-growtopia 3.95 apk free download
-growtopia mod apk 3.95 unlimited gems
-growtopia hack 3.95 no ban
-growtopia mod apk 3.95 mediafire
-growtopia mod apk 3.95 latest version
-growtopia mod apk 3.95 android
-growtopia mod apk 3.95 anti cheat bypass
-growtopia mod apk 3.95 mega mod
-growtopia mod apk 3.95 offline
-growtopia mod apk 3.95 online
-growtopia mod apk 3.95 update
-growtopia mod apk 3.95 with cheat engine
-growtopia mod apk 3.95 xmodgames
-growtopia mod apk 3.95 youtube
-growtopia mod apk 3.95 zip file
-growtopia mod apk 3.95 zippyshare
-growtopia mod apk 3.95 zoom hack
-growtopia mod apk 3.95 zoom out
-how to install growtopia mod apk 3.95
-how to use growtopia mod apk 3.95
-is growtopia mod apk 3.95 safe
-where to download growtopia mod apk 3.95
-why growtopia mod apk 3.95 not working
-best growtopia mod apk 3.95 features
-best site to download growtopia mod apk 3.95
-best settings for growtopia mod apk 3.95
-best tips and tricks for growtopia mod apk 3.95
-best way to play growtopia mod apk 3.95
-compare growtopia mod apk 3.95 and original version
-pros and cons of growtopia mod apk 3.95
-reviews and ratings of growtopia mod apk 3.95
-testimonials and feedback of growtopia mod apk 3.95 users
-benefits and drawbacks of growtopia mod apk 3.95
-advantages and disadvantages of growtopia mod apk 3.95
-alternatives and substitutes for growtopia mod apk 3.95
-competitors and rivals of growtopia mod apk 3.95
-similarities and differences between growtopia mod apk 3.95 and other versions
-what is new in growtopia mod apk 3.95
-
A game with endless possibilities
-
Growtopia is a game that has endless possibilities. You can do anything you want in the game, as long as you follow the rules and respect others. You can explore different worlds, discover new items, learn new skills, or create your own content. You can also influence the game's development by giving feedback or suggestions.
-
What is Growtopia Mod Apk 3.95?
-
Now that you know what Growtopia is, let's talk about Growtopia Mod Apk 3.95. Growtopia Mod Apk 3.95 is a modified version of the original game that gives you access to unlimited gems, items, and features that are not available in the official version.
-
A modified version of the original game
-
Growtopia Mod Apk 3.95 is a version of the game that has been modified by some developers or hackers to give you more advantages and features than the original game. It is not an official version and it is not supported by the game developers or publishers.
-
A version with unlimited gems, items, and features
-
Growtopia Mod Apk 3.95 is a version that gives you unlimited gems, items, and features in the game. Gems are the main currency in Growtopia that you can use to buy items, worlds, or packs. Items are the things that you can use to build, decorate, or equip your character. Features are the modes, functions, or options that you can access in the game.
-
With Growtopia Mod Apk 3.95, you can get unlimited gems for free without spending real money or watching ads. You can also get any item you want in the game without farming, crafting, or trading. You can also unlock all the features and modes in the game without completing requirements or achievements.
-
A version with a mod menu and autofarm multibot
-
Growtopia Mod Apk 3.95 is a version that has a mod menu and an autofarm multibot. A mod menu is a tool that allows you to customize your settings and preferences in the game. An autofarm multibot is a tool that allows you to automate your actions and tasks in the game.
-
With Growtopia Mod Apk 3.95, you can access the mod menu and change your settings such as speed, gravity, zoom, fly, ghost, noclip, and more. You can also use the autofarm multibot to collect resources and items automatically without playing the game yourself.
-
How to Download and Install Growtopia Mod Apk 3.95?
-
If you want to try Growtopia Mod Apk 3.95, you need to download and install it on your device. Here are the steps to do so:
-
Download the apk file from a trusted source
-
The first step is to download the apk file of Growtopia Mod Apk 3.95 from a trusted source. You can search for it on Google or use a link from a reliable website. Make sure that the file is safe and virus-free before downloading it.
-
Enable unknown sources on your device
-
The second step is to enable unknown sources on your device. This is because Growtopia Mod Apk 3.95 is not from the official Google Play Store and your device may block its installation. To enable unknown sources, go to your device settings, security, and toggle on the option that allows installation from unknown sources.
-
Install the apk file and launch the game
-
The third step is to install the apk file and launch the game. To install the apk file, locate it in your device storage and tap on it. Follow the instructions on the screen and wait for the installation to finish. To launch the game, find its icon on your device home screen or app drawer and tap on it.
-
How to Use Growtopia Mod Apk 3.95?
-
Once you have downloaded and installed Growtopia Mod Apk 3.95, you can start using it and enjoying its benefits. Here are some tips on how to use it:
-
Access the mod menu and customize your settings
-
The first tip is to access the mod menu and customize your settings. To access the mod menu, tap on the button that says "Mod Menu" on the top left corner of your screen. You will see a list of options that you can toggle on or off according to your preference. For example, you can turn on speed hack to move faster in the game, or turn off gravity hack to float in the air.
-
Use the autofarm multibot to collect resources and items
-
The second tip is to use the autofarm multibot to collect resources and items. To use the autofarm multibot, tap on the button that says "Autofarm Multibot" on the top right corner of your screen. You will see a list of options that you can choose from such as farm gems, farm items, farm worlds, farm packs, etc. For example, you can choose farm gems to automatically collect gems from different worlds.
-
Enjoy the game with unlimited gems, items, and features
-
The third tip is to enjoy the game with unlimited gems, items, and features. With Growtopia Mod Apk 3.95, you can have fun with all the things that you can do in the game, such as buying items, building worlds, playing mini-games, or joining events. You can also unlock all the features and modes in the game, such as the developer mode, the hardcore mode, or the zombie mode. You can also use the mod menu and the autofarm multibot to enhance your gameplay and make it more fun and easy.
-
What are the Benefits of Growtopia Mod Apk 3.95?
-
Growtopia Mod Apk 3.95 has many benefits that can make your gaming experience more enjoyable and satisfying. Here are some of the benefits of Growtopia Mod Apk 3.95:
-
You can get unlimited gems for free
-
One of the main benefits of Growtopia Mod Apk 3.95 is that you can get unlimited gems for free. Gems are the main currency in Growtopia that you can use to buy items, worlds, or packs. Normally, you have to spend real money or watch ads to get gems in the game. But with Growtopia Mod Apk 3.95, you can get unlimited gems for free without any hassle. You can use the gems to buy anything you want in the game and enjoy it to the fullest.
-
You can get any item you want in the game
-
Another benefit of Growtopia Mod Apk 3.95 is that you can get any item you want in the game. Items are the things that you can use to build, decorate, or equip your character. There are thousands of items in Growtopia that have different functions and effects. Normally, you have to farm, craft, or trade to get items in the game. But with Growtopia Mod Apk 3.95, you can get any item you want in the game without any effort. You can use the items to create your own world, customize your character, or make art and music.
-
You can unlock all the features and modes in the game
-
A third benefit of Growtopia Mod Apk 3.95 is that you can unlock all the features and modes in the game. Features are the modes, functions, or options that you can access in the game. There are many features and modes in Growtopia that have different gameplay and challenges. Normally, you have to complete requirements or achievements to unlock features and modes in the game. But with Growtopia Mod Apk 3.95, you can unlock all the features and modes in the game without any restriction. You can access the developer mode, the hardcore mode, or the zombie mode and enjoy different aspects of the game.
-
What are the Risks of Growtopia Mod Apk 3.95?
-
Growtopia Mod Apk 3.95 may have many benefits, but it also has some risks that you should be aware of before using it. Here are some of the risks of Growtopia Mod Apk 3.95:
-
You may get banned from the game server
-
One of the main risks of Growtopia Mod Apk 3.95 is that you may get banned from the game server. Growtopia has a strict anti-cheat system that detects and bans players who use mods or hacks in the game. If you use Growtopia Mod Apk 3.95, you may get detected and banned from the game server permanently. This means that you will lose your account, your progress, and your data in the game.
-
You may get viruses or malware on your device
-
Another risk of Growtopia Mod Apk 3.95 is that you may get viruses or malware on your device. Growtopia Mod Apk 3.95 is not an official version and it is not verified by Google Play Protect or any other security software. This means that it may contain viruses or malware that can harm your device or steal your personal information.
-
You may lose your progress or data in the game
-
A third risk of Growtopia Mod Apk 3.95 is that you may lose your progress or data in the game. Growtopia Mod Apk 3.95 may not be compatible with the latest version of the game or the game server. This means that it may cause errors, glitches, or crashes in the game. If this happens, you may lose your progress or data in the game, such as your gems, items, worlds, or friends.
-
Conclusion
-
Growtopia Mod Apk 3.95 is a modified version of the original game that gives you unlimited gems, items, and features in the game. It also has a mod menu and an autofarm multibot that can enhance your gameplay and make it more fun and easy. However, it also has some risks that you should be aware of before using it, such as getting banned from the game server, getting viruses or malware on your device, or losing your progress or data in the game.
-
If you want to try Growtopia Mod Apk 3.95, you should download and install it from a trusted source, enable unknown sources on your device, and follow the instructions on how to use it. You should also be careful and responsible when using it and respect the rules and other players in the game.
-
We hope that this article has helped you understand everything you need to know about Growtopia Mod Apk 3.95. If you have any questions or comments, feel free to leave them below.
-
FAQs
-
Here are some of the frequently asked questions about Growtopia Mod Apk 3.95:
-
Is Growtopia Mod Apk 3.95 safe to use?
-
Growtopia Mod Apk 3.95 is not an official version and it is not verified by any security software. This means that it may not be safe to use and it may contain viruses or malware that can harm your device or steal your personal information. You should only download and install it from a trusted source and scan it with an antivirus before using it.
-
Is Growtopia Mod Apk 3.95 legal to use?
-
Growtopia Mod Apk 3.95 is not legal to use and it violates the terms of service and the end-user license agreement of the game. This means that you are breaking the rules and the law by using it and you may face legal consequences for doing so. You should only use the official version of the game and play it fair and square.
-
Can I use Growtopia Mod Apk 3.95 with my existing account?
-
You can use Growtopia Mod Apk 3.95 with your existing account, but you should not do so. This is because Growtopia Mod Apk 3.95 may get detected and banned by the game server and you will lose your account, your progress, and your data in the game. You should only use Growtopia Mod Apk 3.95 with a new account or a guest account.
-
Can I update Growtopia Mod Apk 3.95 to the latest version of the game?
-
You can update Growtopia Mod Apk 3.95 to the latest version of the game, but you should not do so. This is because Growtopia Mod Apk 3.95 may not be compatible with the latest version of the game or the game server and it may cause errors, glitches, or crashes in the game. You should only use Growtopia Mod Apk 3.95 with the version of the game that matches it.
-
Can I play online with other players using Growtopia Mod Apk 3.95?
-
You can play online with other players using Growtopia Mod Apk 3.95, but you should not do so. This is because Growtopia Mod Apk 3.95 may give you an unfair advantage over other players and ruin their gaming experience. You may also get reported or banned by other players or moderators for using mods or hacks in the game. You should only play online with other players using the official version of the game and respect their rights and feelings.
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/cjs/decodePacket.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/cjs/decodePacket.js
deleted file mode 100644
index 2dbe0f8f819cd16ff72c2a5ed4e192ff8d9659f9..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/cjs/decodePacket.js
+++ /dev/null
@@ -1,49 +0,0 @@
-"use strict";
-Object.defineProperty(exports, "__esModule", { value: true });
-const commons_js_1 = require("./commons.js");
-const decodePacket = (encodedPacket, binaryType) => {
- if (typeof encodedPacket !== "string") {
- return {
- type: "message",
- data: mapBinary(encodedPacket, binaryType)
- };
- }
- const type = encodedPacket.charAt(0);
- if (type === "b") {
- const buffer = Buffer.from(encodedPacket.substring(1), "base64");
- return {
- type: "message",
- data: mapBinary(buffer, binaryType)
- };
- }
- if (!commons_js_1.PACKET_TYPES_REVERSE[type]) {
- return commons_js_1.ERROR_PACKET;
- }
- return encodedPacket.length > 1
- ? {
- type: commons_js_1.PACKET_TYPES_REVERSE[type],
- data: encodedPacket.substring(1)
- }
- : {
- type: commons_js_1.PACKET_TYPES_REVERSE[type]
- };
-};
-const mapBinary = (data, binaryType) => {
- const isBuffer = Buffer.isBuffer(data);
- switch (binaryType) {
- case "arraybuffer":
- return isBuffer ? toArrayBuffer(data) : data;
- case "nodebuffer":
- default:
- return data; // assuming the data is already a Buffer
- }
-};
-const toArrayBuffer = (buffer) => {
- const arrayBuffer = new ArrayBuffer(buffer.length);
- const view = new Uint8Array(arrayBuffer);
- for (let i = 0; i < buffer.length; i++) {
- view[i] = buffer[i];
- }
- return arrayBuffer;
-};
-exports.default = decodePacket;
diff --git a/spaces/florim/MedGPT/autogpt/config/ai_config.py b/spaces/florim/MedGPT/autogpt/config/ai_config.py
deleted file mode 100644
index d50c30beee9dc8009f63415378ae1c6a399f0037..0000000000000000000000000000000000000000
--- a/spaces/florim/MedGPT/autogpt/config/ai_config.py
+++ /dev/null
@@ -1,121 +0,0 @@
-# sourcery skip: do-not-use-staticmethod
-"""
-A module that contains the AIConfig class object that contains the configuration
-"""
-from __future__ import annotations
-
-import os
-from typing import Type
-
-import yaml
-
-
-class AIConfig:
- """
- A class object that contains the configuration information for the AI
-
- Attributes:
- ai_name (str): The name of the AI.
- ai_role (str): The description of the AI's role.
- ai_goals (list): The list of objectives the AI is supposed to complete.
- """
-
- def __init__(
- self, ai_name: str = "", ai_role: str = "", ai_goals: list | None = None
- ) -> None:
- """
- Initialize a class instance
-
- Parameters:
- ai_name (str): The name of the AI.
- ai_role (str): The description of the AI's role.
- ai_goals (list): The list of objectives the AI is supposed to complete.
- Returns:
- None
- """
- if ai_goals is None:
- ai_goals = []
- self.ai_name = ai_name
- self.ai_role = ai_role
- self.ai_goals = ai_goals
-
- # Soon this will go in a folder where it remembers more stuff about the run(s)
- SAVE_FILE = os.path.join(os.path.dirname(__file__), "..", "ai_settings.yaml")
-
- @staticmethod
- def load(config_file: str = SAVE_FILE) -> "AIConfig":
- """
- Returns class object with parameters (ai_name, ai_role, ai_goals) loaded from
- yaml file if yaml file exists,
- else returns class with no parameters.
-
- Parameters:
-            config_file (str): The path to the config yaml file.
- DEFAULT: "../ai_settings.yaml"
-
- Returns:
- cls (object): An instance of given cls object
- """
-
- try:
- with open(config_file, encoding="utf-8") as file:
- config_params = yaml.load(file, Loader=yaml.FullLoader)
- except FileNotFoundError:
- config_params = {}
-
- ai_name = config_params.get("ai_name", "")
- ai_role = config_params.get("ai_role", "")
- ai_goals = config_params.get("ai_goals", [])
- # type: Type[AIConfig]
- return AIConfig(ai_name, ai_role, ai_goals)
-
- def save(self, config_file: str = SAVE_FILE) -> None:
- """
-        Saves the class parameters to the specified yaml file path.
-
- Parameters:
- config_file(str): The path to the config yaml file.
- DEFAULT: "../ai_settings.yaml"
-
- Returns:
- None
- """
-
- config = {
- "ai_name": self.ai_name,
- "ai_role": self.ai_role,
- "ai_goals": self.ai_goals,
- }
- with open(config_file, "w", encoding="utf-8") as file:
- yaml.dump(config, file, allow_unicode=True)
-
- def construct_full_prompt(self) -> str:
- """
- Returns a prompt to the user with the class information in an organized fashion.
-
- Parameters:
- None
-
- Returns:
- full_prompt (str): A string containing the initial prompt for the user
- including the ai_name, ai_role and ai_goals.
- """
-
- prompt_start = (
- "Your decisions must always be made independently without"
- " seeking user assistance. Play to your strengths as an LLM and pursue"
- " simple strategies with no legal complications."
- ""
- )
-
- from autogpt.prompt import get_prompt
-
- # Construct full prompt
- full_prompt = (
- f"You are {self.ai_name}, {self.ai_role}\n{prompt_start}\n\nGOALS:\n\n"
- )
- for i, goal in enumerate(self.ai_goals):
- full_prompt += f"{i+1}. {goal}\n"
-
- full_prompt += f"\n\n{get_prompt()}"
- return full_prompt
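For orientation, the class above is driven in a very simple way; the following is a minimal sketch (not part of the deleted file), assuming the autogpt package layout from the path above is importable and PyYAML is installed. The agent name, role, and goals are made-up placeholders.

```python
# Sketch only: assumes `autogpt.config.ai_config` is importable and PyYAML is installed.
from autogpt.config.ai_config import AIConfig

# Build a config in memory, persist it, then round-trip it from disk.
config = AIConfig(
    ai_name="ResearchGPT",                          # hypothetical name
    ai_role="an AI that summarizes recent papers",  # hypothetical role
    ai_goals=["Find three recent papers", "Summarize each in one paragraph"],
)
config.save("ai_settings.yaml")             # writes the three fields as YAML

loaded = AIConfig.load("ai_settings.yaml")  # returns an empty config if the file is missing
print(loaded.ai_name, loaded.ai_goals)
```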
diff --git a/spaces/furqankassa/Docker-FlanT5-TextGeneratorTranslator/static/script.js b/spaces/furqankassa/Docker-FlanT5-TextGeneratorTranslator/static/script.js
deleted file mode 100644
index efd05c5d1e76ecd3d0e41927b073c8d10f1e8e20..0000000000000000000000000000000000000000
--- a/spaces/furqankassa/Docker-FlanT5-TextGeneratorTranslator/static/script.js
+++ /dev/null
@@ -1,21 +0,0 @@
-const textGenForm = document.querySelector('.text-gen-form');
-
-const translateText = async (text) => {
- const inferResponse = await fetch(`infer_t5?input=${text}`);
- const inferJson = await inferResponse.json();
-
- return inferJson.output;
-};
-
-textGenForm.addEventListener('submit', async (event) => {
- event.preventDefault();
-
- const textGenInput = document.getElementById('text-gen-input');
- const textGenParagraph = document.querySelector('.text-gen-output');
-
- try {
- textGenParagraph.textContent = await translateText(textGenInput.value);
- } catch (err) {
- console.error(err);
- }
-});
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Buku Negosiasi Bisnis Pdf 26l LINK.md b/spaces/gotiQspiryo/whisper-ui/examples/Buku Negosiasi Bisnis Pdf 26l LINK.md
deleted file mode 100644
index 4a6cc351560d1521108791f3fbe530810be0e0b1..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Buku Negosiasi Bisnis Pdf 26l LINK.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces.
-"T4 small" is sufficient to run this demo.
-
-'''
-
-HF_TOKEN_NOT_SPECIFIED_WARNING = f'''# Attention - The environment variable `HF_TOKEN` is not specified. Please specify your Hugging Face token with write permission as the value of it.
-
-You can check and create your Hugging Face tokens here.
-You can specify environment variables in the "Repository secrets" section of the {SETTINGS} tab.
-
-'''
-
-HF_TOKEN = os.getenv('HF_TOKEN')
-
-
-def show_warning(warning_text: str) -> gr.Blocks:
- with gr.Blocks() as demo:
- with gr.Box():
- gr.Markdown(warning_text)
- return demo
-
-
-pipe = InferencePipeline(HF_TOKEN)
-trainer = Trainer(HF_TOKEN)
-
-with gr.Blocks(css='style.css') as demo:
- if os.getenv('IS_SHARED_UI'):
- show_warning(SHARED_UI_WARNING)
- if not torch.cuda.is_available():
- show_warning(CUDA_NOT_AVAILABLE_WARNING)
- if not HF_TOKEN:
- show_warning(HF_TOKEN_NOT_SPECIFIED_WARNING)
-
- gr.Markdown(TITLE)
- with gr.Tabs():
- with gr.TabItem('Train'):
- create_training_demo(trainer, pipe)
- with gr.TabItem('Test'):
- create_inference_demo(pipe, HF_TOKEN)
- with gr.TabItem('Upload'):
- gr.Markdown('''
- - You can use this tab to upload models later if you choose not to upload models in training time or if upload in training time failed.
- ''')
- create_upload_demo(HF_TOKEN)
-
-demo.queue(max_size=1).launch(share=False)
diff --git a/spaces/imjunaidafzal/LoRA-DreamBooth-Training-UI/constants.py b/spaces/imjunaidafzal/LoRA-DreamBooth-Training-UI/constants.py
deleted file mode 100644
index baaebbae71058fbb4faed35fd00e7559305dc409..0000000000000000000000000000000000000000
--- a/spaces/imjunaidafzal/LoRA-DreamBooth-Training-UI/constants.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import enum
-
-
-class UploadTarget(enum.Enum):
- PERSONAL_PROFILE = 'Personal Profile'
- LORA_LIBRARY = 'LoRA Library'
diff --git a/spaces/inamXcontru/PoeticTTS/Blade And Soul Soul Shield List.md b/spaces/inamXcontru/PoeticTTS/Blade And Soul Soul Shield List.md
deleted file mode 100644
index ef144ec3d4411db778e0974fab87e41c144fdd3d..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Blade And Soul Soul Shield List.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
giving 1 week of hoja de presentacion uasd pdf download, one hoja de presentacion uasd pdf download over a 2-week course in a hoja de presentacion uasd pdf download, with multimedia support so you can make your hoja de presentacion uasd pdf download.
hoja de presentacion uasd. download document..rar full version hoja de presentacion uasd pdf download. kingsoft office unfortunately free. that sheet design is for applying in pdf; we put that design in its download link. once you need to use it, just add it to hojas_design.css and paste it in. once you apply the presentation sheet, it automatically opens your presentation, so you have to modify a few lines of ortonotes code whenever the sheet blurs the sheet and the background. it is a presenter sheet design used to apply content design to sheets.
-
hoja de presentacion uasd pdf download. that sheet design is for applying in pdf; we put that design in its download link. once you need to use it, just add it to hojas_design.css and paste it in. once you apply the presentation sheet, it automatically opens your presentation, so you have to modify a few lines of ortonotes code whenever the sheet blurs the sheet and the background. it is a presenter sheet design used to apply content design to sheets.
-
hoja de presentacion uasd pdf download. download. that sheet design is for applying in pdf; we put that design in its download link. once you need to use it, just add it to hojas_design.css and paste it in. once you apply the presentation sheet, it automatically opens your presentation, so you have to modify a few lines of ortonotes code whenever the sheet blurs the sheet and the background. it is a presenter sheet design used to apply content design to sheets.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/50 Nuances De Grey Film Complet Download TOP.md b/spaces/inreVtussa/clothingai/Examples/50 Nuances De Grey Film Complet Download TOP.md
deleted file mode 100644
index 00e31804d84d342e205991351b937ca29b75fee0..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/50 Nuances De Grey Film Complet Download TOP.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- 3cee63e6c2
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Artcut V7 0 2009 English Kl !EXCLUSIVE! Full Version.md b/spaces/inreVtussa/clothingai/Examples/Artcut V7 0 2009 English Kl !EXCLUSIVE! Full Version.md
deleted file mode 100644
index c2ddaea2fb61693408bb2250fd309a2a93f550e7..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Artcut V7 0 2009 English Kl !EXCLUSIVE! Full Version.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
in the world of today one finds themselves wunderland w/ superbooster full version with crack vce u learning security suite 2020 april edition (all-in-one) [64-bit] full crack download denpa onna ki ae ram bari mp3 download zinc crack 1.3.0.9 zeka [chinese data emission]
isabella and marcos are both very talented and there are 3 versions and each one has a different view. the latter one is the most expensive and is not worth buying unless you are an enthusiast and want to read it with a traditional book. for those who are curious to read it, it is available for around 28 usd and it is well worth it. the former two are worth buying because they are cheaper. the former is a dutch translation and the other is an english translation. as expected, the latter one is way more expensive than the former one as the latter one is just a translation of the former one. the former is also available for 28 usd but it is the dutch version. it is also well worth buying and most people who read the dutch version have the english version by heart.
-
teresa cheyenne fox 720p for free full video download nourishing the spirit: a gift for you and your family, book 1: a mothers guide to meals, recipes and mindfulness for spiritual wellness (the healing holi..) babalon: pregnancy tvc
-
new kids' pastime knowledge the art of getting by 1080p yifyl free download kaspersky internet security 2019 alarm clock how to open downloads what to get in the cardinal function killer instinct portable application screen capture anthology series anthology series title sr 2k19 full game for pc how to find the start of the series series title
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/ivn888/Rome-in-transit/modules/rome_gtfs_rt.py b/spaces/ivn888/Rome-in-transit/modules/rome_gtfs_rt.py
deleted file mode 100644
index 360c71caaefecfac92b0f93a41b2d744e0e283e9..0000000000000000000000000000000000000000
--- a/spaces/ivn888/Rome-in-transit/modules/rome_gtfs_rt.py
+++ /dev/null
@@ -1,179 +0,0 @@
-import pandas as pd
-import requests
-from google.transit import gtfs_realtime_pb2
-from modules.colors import IN_TRANSIT_CL, LATE_CL, ON_TIME_CL, STOPPED_CL
-from modules.constants import CORS_GTFS_TRIP_UPDATES, CORS_GTFS_VEHICLE_POS
-from modules.time_utils import timestamp_to_hms
-from pyproj import Transformer
-
-# Vehicle Dataframe columns
-VEHICLE_DF_COLUMNS = [
- "x",
- "y",
- "vehicleID",
- "tripID",
- "startTime",
- "lastUpdate",
- "currentStatus",
- "currentStatusClass",
- "statusColor",
-]
-
-# Delay Dataframe columns
-DELAY_DF_COLUMNS = [
- "tripID",
- "delay",
- "delayClass",
- "delayColor",
-]
-
-VEHICLE_DF_SCHEMA = pd.DataFrame([], columns=VEHICLE_DF_COLUMNS)
-
-DELAY_DF_SCHEMA = pd.DataFrame([], columns=DELAY_DF_COLUMNS)
-
-FULL_DF_SCHEMA = VEHICLE_DF_SCHEMA.merge(DELAY_DF_SCHEMA, on="tripID")
-
-# A transformer that converts coordinates from EPSG:4326 to EPSG:3857
-transformer = Transformer.from_crs(4326, 3857, always_xy=True)
-
-
-def build_url(cache_bust):
- """
- Get the current local time and build the request url
-    Build the cache-busted request URLs for the vehicle position and trip update feeds
-
- vehicle_url = CORS_GTFS_VEHICLE_POS + f"?cacheBust={cache_bust}"
- trip_url = CORS_GTFS_TRIP_UPDATES + f"?cacheBust={cache_bust}"
-
- return (vehicle_url, trip_url)
-
-
-def get_vehicle_position(entity):
- """
- Returns the xy position of the processed entity.
- """
-
- coords = transformer.transform(
- entity.vehicle.position.longitude, entity.vehicle.position.latitude
- )
- return coords
-
-
-def get_current_status_color(current_status):
- """
- Returns the color of the entity according to the status
- of the vehicle (In transit/Stopped).
- """
-
- return STOPPED_CL if current_status == 1 else IN_TRANSIT_CL
-
-
-def get_current_status_class(current_status):
- """
- Returns the Vehicle current status (In transit/Stopped).
- """
-
- return "Stopped" if current_status == 1 else "In Transit"
-
-
-def get_delay_color(delay):
- """
- Returns the color of the entity according to the delay class.
- """
-
- if delay <= 0:
- return ON_TIME_CL
- else:
- return LATE_CL
-
-
-def get_delay_class(delay):
- """
- Returns the delay class (Late or On time).
- """
-
- if delay <= 0:
- return "On time"
- else:
- return "Late"
-
-
-def get_vehicle_data(url):
- """Reads the vehicle position feed and returns a pandas DataFrame"""
-
- vehicle_feed = gtfs_realtime_pb2.FeedMessage()
-
- # TODO: Retry at least 5 times if the response is empty
- response = requests.get(url).content
- vehicle_feed.ParseFromString(response)
-
- # Entities
- vehicle_entities = vehicle_feed.entity
-
- positions = []
- for entity in vehicle_entities:
- # Vehicle attributes
- x, y = get_vehicle_position(entity)
- vehicle_id = entity.vehicle.vehicle.id
- trip_id = entity.vehicle.trip.trip_id.strip()
- start_time = entity.vehicle.trip.start_time
- last_update = timestamp_to_hms(entity.vehicle.timestamp)
- current_status = entity.vehicle.current_status
- current_status_class = get_current_status_class(current_status)
- vehicle_color = get_current_status_color(current_status)
-
- positions.append(
- [
- x,
- y,
- vehicle_id,
- trip_id,
- start_time,
- last_update,
- current_status,
- current_status_class,
- vehicle_color,
- ]
- )
-
- data = pd.DataFrame(positions, columns=VEHICLE_DF_COLUMNS)
- return data
-
-
-def get_delay_data(url):
- """Reads the trip updates feed and returns a pandas DataFrame"""
-
- trip_update_feed = gtfs_realtime_pb2.FeedMessage()
-
- # TODO: Retry at least 5 times if the response is empty
- response = requests.get(url).content
- trip_update_feed.ParseFromString(response)
-
- # Entities
- trip_update_entities = trip_update_feed.entity
-
- delays = []
- for entity in trip_update_entities:
- trip_id = entity.trip_update.trip.trip_id.strip()
- current_stop_arrival = entity.trip_update.stop_time_update[0].arrival
- current_stop_delay = current_stop_arrival.delay / 60
- delay_class = get_delay_class(current_stop_delay)
- delay_color = get_delay_color(current_stop_delay)
- delays.append([trip_id, current_stop_delay, delay_class, delay_color])
- data = pd.DataFrame(delays, columns=DELAY_DF_COLUMNS)
- return data
-
-
-def get_data(cache_bust):
- """
- This function reads the Roma mobilità GTFS-RT feed
- and returns a pandas DataFrame.
- """
-
- vehicle_url, trip_url = build_url(cache_bust)
- vehicle_data = get_vehicle_data(vehicle_url)
- delay_data = get_delay_data(trip_url)
-
- # Merge vehicle and delay dataframe
- full_data = vehicle_data.merge(delay_data, on="tripID")
- return full_data
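As a usage note (not part of the deleted module): callers obtain one snapshot of the live feed by passing a cache-busting value to get_data, and the merged DataFrame carries the vehicle and delay columns declared at the top of the file. The timestamp-based cache_bust below is an assumption about how the value was produced.

```python
# Sketch only: assumes the local `modules` package (constants, colors, time_utils)
# is present and the Roma mobilità GTFS-RT endpoints are reachable.
import time
from modules.rome_gtfs_rt import get_data

snapshot = get_data(cache_bust=int(time.time()))   # one snapshot of the live feed
print(snapshot[["vehicleID", "tripID", "currentStatusClass", "delayClass"]].head())
```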
diff --git a/spaces/james-oldfield/PandA/networks/genforce/models/pggan_generator.py b/spaces/james-oldfield/PandA/networks/genforce/models/pggan_generator.py
deleted file mode 100644
index fc9c0cd45e18c768059ed1c745b6700b3e015df2..0000000000000000000000000000000000000000
--- a/spaces/james-oldfield/PandA/networks/genforce/models/pggan_generator.py
+++ /dev/null
@@ -1,331 +0,0 @@
-# python3.7
-"""Contains the implementation of generator described in PGGAN.
-
-Paper: https://arxiv.org/pdf/1710.10196.pdf
-
-Official TensorFlow implementation:
-https://github.com/tkarras/progressive_growing_of_gans
-"""
-
-import numpy as np
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-__all__ = ['PGGANGenerator']
-
-# Resolutions allowed.
-_RESOLUTIONS_ALLOWED = [8, 16, 32, 64, 128, 256, 512, 1024]
-
-# Initial resolution.
-_INIT_RES = 4
-
-# Default gain factor for weight scaling.
-_WSCALE_GAIN = np.sqrt(2.0)
-
-
-class PGGANGenerator(nn.Module):
- """Defines the generator network in PGGAN.
-
- NOTE: The synthesized images are with `RGB` channel order and pixel range
- [-1, 1].
-
- Settings for the network:
-
- (1) resolution: The resolution of the output image.
- (2) z_space_dim: The dimension of the latent space, Z. (default: 512)
- (3) image_channels: Number of channels of the output image. (default: 3)
- (4) final_tanh: Whether to use `tanh` to control the final pixel range.
- (default: False)
- (5) label_size: Size of the additional label for conditional generation.
- (default: 0)
- (6) fused_scale: Whether to fused `upsample` and `conv2d` together,
- resulting in `conv2d_transpose`. (default: False)
- (7) use_wscale: Whether to use weight scaling. (default: True)
- (8) fmaps_base: Factor to control number of feature maps for each layer.
- (default: 16 << 10)
- (9) fmaps_max: Maximum number of feature maps in each layer. (default: 512)
- """
-
- def __init__(self,
- resolution,
- z_space_dim=512,
- image_channels=3,
- final_tanh=False,
- label_size=0,
- fused_scale=False,
- use_wscale=True,
- fmaps_base=16 << 10,
- fmaps_max=512):
- """Initializes with basic settings.
-
- Raises:
- ValueError: If the `resolution` is not supported.
- """
- super().__init__()
-
- if resolution not in _RESOLUTIONS_ALLOWED:
- raise ValueError(f'Invalid resolution: `{resolution}`!\n'
- f'Resolutions allowed: {_RESOLUTIONS_ALLOWED}.')
-
- self.init_res = _INIT_RES
- self.init_res_log2 = int(np.log2(self.init_res))
- self.resolution = resolution
- self.final_res_log2 = int(np.log2(self.resolution))
- self.z_space_dim = z_space_dim
- self.image_channels = image_channels
- self.final_tanh = final_tanh
- self.label_size = label_size
- self.fused_scale = fused_scale
- self.use_wscale = use_wscale
- self.fmaps_base = fmaps_base
- self.fmaps_max = fmaps_max
-
- # Number of convolutional layers.
- self.num_layers = (self.final_res_log2 - self.init_res_log2 + 1) * 2
-
- # Level of detail (used for progressive training).
- self.register_buffer('lod', torch.zeros(()))
- self.pth_to_tf_var_mapping = {'lod': 'lod'}
-
- for res_log2 in range(self.init_res_log2, self.final_res_log2 + 1):
- res = 2 ** res_log2
- block_idx = res_log2 - self.init_res_log2
-
- # First convolution layer for each resolution.
- if res == self.init_res:
- self.add_module(
- f'layer{2 * block_idx}',
- ConvBlock(in_channels=self.z_space_dim + self.label_size,
- out_channels=self.get_nf(res),
- kernel_size=self.init_res,
- padding=self.init_res - 1,
- use_wscale=self.use_wscale))
- tf_layer_name = 'Dense'
- else:
- self.add_module(
- f'layer{2 * block_idx}',
- ConvBlock(in_channels=self.get_nf(res // 2),
- out_channels=self.get_nf(res),
- upsample=True,
- fused_scale=self.fused_scale,
- use_wscale=self.use_wscale))
- tf_layer_name = 'Conv0_up' if self.fused_scale else 'Conv0'
- self.pth_to_tf_var_mapping[f'layer{2 * block_idx}.weight'] = (
- f'{res}x{res}/{tf_layer_name}/weight')
- self.pth_to_tf_var_mapping[f'layer{2 * block_idx}.bias'] = (
- f'{res}x{res}/{tf_layer_name}/bias')
-
- # Second convolution layer for each resolution.
- self.add_module(
- f'layer{2 * block_idx + 1}',
- ConvBlock(in_channels=self.get_nf(res),
- out_channels=self.get_nf(res),
- use_wscale=self.use_wscale))
- tf_layer_name = 'Conv' if res == self.init_res else 'Conv1'
- self.pth_to_tf_var_mapping[f'layer{2 * block_idx + 1}.weight'] = (
- f'{res}x{res}/{tf_layer_name}/weight')
- self.pth_to_tf_var_mapping[f'layer{2 * block_idx + 1}.bias'] = (
- f'{res}x{res}/{tf_layer_name}/bias')
-
- # Output convolution layer for each resolution.
- self.add_module(
- f'output{block_idx}',
- ConvBlock(in_channels=self.get_nf(res),
- out_channels=self.image_channels,
- kernel_size=1,
- padding=0,
- use_wscale=self.use_wscale,
- wscale_gain=1.0,
- activation_type='linear'))
- self.pth_to_tf_var_mapping[f'output{block_idx}.weight'] = (
- f'ToRGB_lod{self.final_res_log2 - res_log2}/weight')
- self.pth_to_tf_var_mapping[f'output{block_idx}.bias'] = (
- f'ToRGB_lod{self.final_res_log2 - res_log2}/bias')
-
- self.upsample = UpsamplingLayer()
- self.final_activate = nn.Tanh() if self.final_tanh else nn.Identity()
-
- def get_nf(self, res):
- """Gets number of feature maps according to current resolution."""
- return min(self.fmaps_base // res, self.fmaps_max)
-
- def forward(self, z, label=None, lod=None, start=2, stop=None, init_norm=True, **_unused_kwargs):
- stop = self.final_res_log2 + 1 if stop is None else stop
-
- lod = self.lod.cpu().tolist() if lod is None else lod
- if lod + self.init_res_log2 > self.final_res_log2:
- raise ValueError(f'Maximum level-of-detail (lod) is '
- f'{self.final_res_log2 - self.init_res_log2}, '
- f'but `{lod}` is received!')
-
- # process latent code if we start at first layer of GAN
- if start == 2:
- z = self.layer0.pixel_norm(z) if init_norm else z
- x = z.view(z.shape[0], self.z_space_dim + self.label_size, 1, 1)
- else:
- x = z
-
- for res_log2 in range(start, stop):
- current_lod = self.final_res_log2 - res_log2
- if lod < current_lod + 1:
- block_idx = res_log2 - self.init_res_log2
- x = self.__getattr__(f'layer{2 * block_idx}')(x)
- x = self.__getattr__(f'layer{2 * block_idx + 1}')(x)
- if current_lod - 1 < lod <= current_lod:
- image = self.__getattr__(f'output{block_idx}')(x)
- elif current_lod < lod < current_lod + 1:
- alpha = np.ceil(lod) - lod
- image = (self.__getattr__(f'output{block_idx}')(x) * alpha +
- self.upsample(image) * (1 - alpha))
- elif lod >= current_lod + 1:
- image = self.upsample(image)
-
- if res_log2 == self.final_res_log2:
- image = self.final_activate(image)
- else:
- image = None
-
- results = {
- 'z': z,
- 'x': x,
- 'label': label,
- 'image': image,
- }
- return results
-
-
-class PixelNormLayer(nn.Module):
- """Implements pixel-wise feature vector normalization layer."""
-
- def __init__(self, epsilon=1e-8):
- super().__init__()
- self.eps = epsilon
-
- def forward(self, x):
- norm = torch.sqrt(torch.mean(x ** 2, dim=1, keepdim=True) + self.eps)
- return x / norm
-
-
-class UpsamplingLayer(nn.Module):
- """Implements the upsampling layer.
-
- Basically, this layer can be used to upsample feature maps with nearest
- neighbor interpolation.
- """
-
- def __init__(self, scale_factor=2):
- super().__init__()
- self.scale_factor = scale_factor
-
- def forward(self, x):
- if self.scale_factor <= 1:
- return x
- return F.interpolate(x, scale_factor=self.scale_factor, mode='nearest')
-
-
-class ConvBlock(nn.Module):
- """Implements the convolutional block.
-
- Basically, this block executes pixel-wise normalization layer, upsampling
- layer (if needed), convolutional layer, and activation layer in sequence.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- add_bias=True,
- upsample=False,
- fused_scale=False,
- use_wscale=True,
- wscale_gain=_WSCALE_GAIN,
- activation_type='lrelu'):
- """Initializes with block settings.
-
- Args:
- in_channels: Number of channels of the input tensor.
- out_channels: Number of channels of the output tensor.
- kernel_size: Size of the convolutional kernels. (default: 3)
- stride: Stride parameter for convolution operation. (default: 1)
- padding: Padding parameter for convolution operation. (default: 1)
- add_bias: Whether to add bias onto the convolutional result.
- (default: True)
- upsample: Whether to upsample the input tensor before convolution.
- (default: False)
- fused_scale: Whether to fused `upsample` and `conv2d` together,
- resulting in `conv2d_transpose`. (default: False)
- use_wscale: Whether to use weight scaling. (default: True)
- wscale_gain: Gain factor for weight scaling. (default: _WSCALE_GAIN)
- activation_type: Type of activation. Support `linear` and `lrelu`.
- (default: `lrelu`)
-
- Raises:
- NotImplementedError: If the `activation_type` is not supported.
- """
- super().__init__()
-
- self.pixel_norm = PixelNormLayer()
-
- if upsample and not fused_scale:
- self.upsample = UpsamplingLayer()
- else:
- self.upsample = nn.Identity()
-
- if upsample and fused_scale:
- self.use_conv2d_transpose = True
- weight_shape = (in_channels, out_channels, kernel_size, kernel_size)
- self.stride = 2
- self.padding = 1
- else:
- self.use_conv2d_transpose = False
- weight_shape = (out_channels, in_channels, kernel_size, kernel_size)
- self.stride = stride
- self.padding = padding
-
- fan_in = kernel_size * kernel_size * in_channels
- wscale = wscale_gain / np.sqrt(fan_in)
- if use_wscale:
- self.weight = nn.Parameter(torch.randn(*weight_shape))
- self.wscale = wscale
- else:
- self.weight = nn.Parameter(torch.randn(*weight_shape) * wscale)
- self.wscale = 1.0
-
- if add_bias:
- self.bias = nn.Parameter(torch.zeros(out_channels))
- else:
- self.bias = None
-
- if activation_type == 'linear':
- self.activate = nn.Identity()
- elif activation_type == 'lrelu':
- self.activate = nn.LeakyReLU(negative_slope=0.2, inplace=True)
- else:
- raise NotImplementedError(f'Not implemented activation function: '
- f'`{activation_type}`!')
-
- def forward(self, x):
- x = self.pixel_norm(x)
- x = self.upsample(x)
- weight = self.weight * self.wscale
- if self.use_conv2d_transpose:
- weight = F.pad(weight, (1, 1, 1, 1, 0, 0, 0, 0), 'constant', 0.0)
- weight = (weight[:, :, 1:, 1:] + weight[:, :, :-1, 1:] +
- weight[:, :, 1:, :-1] + weight[:, :, :-1, :-1])
- x = F.conv_transpose2d(x,
- weight=weight,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding)
- else:
- x = F.conv2d(x,
- weight=weight,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding)
- x = self.activate(x)
- return x
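A quick shape check for the generator above can be done with random weights; this is only an illustrative sketch (the import path is hypothetical, and real use would load a converted checkpoint via pth_to_tf_var_mapping).

```python
# Sketch only: random weights, no checkpoint; shapes follow the defaults defined above.
import torch
from pggan_generator import PGGANGenerator  # hypothetical import path

G = PGGANGenerator(resolution=256)          # z_space_dim defaults to 512, label_size to 0
z = torch.randn(4, 512)                     # a batch of 4 latent codes
with torch.no_grad():
    out = G(z)                              # returns a dict with 'z', 'x', 'label', 'image'
print(out["image"].shape)                   # expected: torch.Size([4, 3, 256, 256])
```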
diff --git a/spaces/james-oldfield/PandA/readme.md b/spaces/james-oldfield/PandA/readme.md
deleted file mode 100644
index bb9f821c8ba410541a077904162b5832270dcec6..0000000000000000000000000000000000000000
--- a/spaces/james-oldfield/PandA/readme.md
+++ /dev/null
@@ -1,82 +0,0 @@
-# PandA: Unsupervised Learning of Parts and Appearances in the Feature Maps of GANs
-
-## [ [paper](https://openreview.net/pdf?id=iUdSB2kK9GY) | [project page](http://eecs.qmul.ac.uk/~jo001/PandA/) | [video](https://www.youtube.com/watch?v=1KY055goKP0) | [edit zoo](https://colab.research.google.com/github/james-oldfield/PandA/blob/main/ffhq-edit-zoo.ipynb) | [demo](https://colab.research.google.com/github/james-oldfield/PandA/blob/main/demo.ipynb) ]
-
-
-
-> **PandA: Unsupervised Learning of Parts and Appearances in the Feature Maps of GANs**
-> James Oldfield, Christos Tzelepis, Yannis Panagakis, Mihalis A. Nicolaou, and Ioannis Patras
-> *International Conference on Learning Representations (ICLR)*, 2023
-> https://arxiv.org/abs/2206.00048
->
-> **Abstract**: Recent advances in the understanding of Generative Adversarial Networks (GANs) have led to remarkable progress in visual editing and synthesis tasks, capitalizing on the rich semantics that are embedded in the latent spaces of pre-trained GANs. However, existing methods are often tailored to specific GAN architectures and are limited to either discovering global semantic directions that do not facilitate localized control, or require some form of supervision through manually provided regions or segmentation masks. In this light, we present an architecture-agnostic approach that jointly discovers factors representing spatial parts and their appearances in an entirely unsupervised fashion. These factors are obtained by applying a semi-nonnegative tensor factorization on the feature maps, which in turn enables context-aware local image editing with pixel-level control. In addition, we show that the discovered appearance factors correspond to saliency maps that localize concepts of interest, without using any labels. Experiments on a wide range of GAN architectures and datasets show that, in comparison to the state of the art, our method is far more efficient in terms of training time and, most importantly, provides much more accurate localized control.
-
-
-> An example of using our learnt appearances and semantic parts for local image editing.
-
-## Experiments
-
-We provide a number of notebooks to reproduce the experiments in the paper and to explore the model. Please see the following notebooks:
-
-# [`./demo.ipynb`](./demo.ipynb)
-
-This notebook contains the code to learn the parts and appearance factors at a target layer in a target GAN. Contains code for local image editing using the learnt parts, and provides code for refining the parts factors.
-
-| Local image editing (at the learnt semantic parts) | |
-| :-- | :-- |
-|  | 
-
-# [`./localize-concepts.ipynb`](./localize-concepts.ipynb)
-
-Provides code to localize/visualize concepts of interest for a model/dataset of interest (setup for the "background" concept in `stylegan2_afhqdog512` as an example).
-
-| Localizing the learnt "background" concept vector |
-| :-- |
-|    |
-
-# [`./ffhq-edit-zoo.ipynb`](./ffhq-edit-zoo.ipynb)
-
-Quickly produce edits with annotated directions with pre-trained factors on FFHQ StyleGAN2.
-
-| Local image editing: "Big eyes" |
-| :-- |
-|  |
-
-## Setup
-
-Should you wish to run the notebooks, please consult this section below:
-
-### Install
-First, please install the dependencies with `pip install -r requirements.txt`, or alternatively with conda using `conda env create -f environment.yml`
-
-### Pre-trained models
-Should you wish to run the notebooks with the pre-trained models, please first download them with:
-
-```bash
-wget -r -np -nH --cut-dirs=2 -R *index* http://eecs.qmul.ac.uk/~jo001/PandA-pretrained-models/
-```
-
-# citation
-
-If you find our work useful, please consider citing our paper:
-
-```bibtex
-@inproceedings{oldfield2023panda,
- title={PandA: Unsupervised Learning of Parts and Appearances in the Feature Maps of GANs},
- author={James Oldfield and Christos Tzelepis and Yannis Panagakis and Mihalis A. Nicolaou and Ioannis Patras},
- booktitle={Int. Conf. Learn. Represent.},
- year={2023}
-}
-```
-
-# contact
-
-**Please feel free to get in touch at**: `j.a.oldfield@qmul.ac.uk`
-
----
-
-## credits
-
-- `./networks/genforce/` contains mostly code directly from [https://github.com/genforce/genforce](https://github.com/genforce/genforce).
-- `./networks/biggan/` contains mostly code directly from [https://github.com/huggingface/pytorch-pretrained-BigGAN](https://github.com/huggingface/pytorch-pretrained-BigGAN).
-- `./networks/stylegan3/` contains mostly code directly from [https://github.com/NVlabs/stylegan3](https://github.com/NVlabs/stylegan3).
diff --git a/spaces/jennysun/jwsun-multisubject-render-model/gligen/ldm/modules/attention.py b/spaces/jennysun/jwsun-multisubject-render-model/gligen/ldm/modules/attention.py
deleted file mode 100644
index c443da348bc1ce707487fb8962a13b1810a43454..0000000000000000000000000000000000000000
--- a/spaces/jennysun/jwsun-multisubject-render-model/gligen/ldm/modules/attention.py
+++ /dev/null
@@ -1,387 +0,0 @@
-from inspect import isfunction
-import math
-import torch
-import torch.nn.functional as F
-from torch import nn, einsum
-from einops import rearrange, repeat
-
-# from ldm.modules.diffusionmodules.util import checkpoint, FourierEmbedder
-from torch.utils import checkpoint
-
-try:
- import xformers
- import xformers.ops
- XFORMERS_IS_AVAILBLE = True
-except:
- XFORMERS_IS_AVAILBLE = False
-
-
-def exists(val):
- return val is not None
-
-
-def uniq(arr):
- return{el: True for el in arr}.keys()
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def max_neg_value(t):
- return -torch.finfo(t.dtype).max
-
-
-def init_(tensor):
- dim = tensor.shape[-1]
- std = 1 / math.sqrt(dim)
- tensor.uniform_(-std, std)
- return tensor
-
-
-# feedforward
-class GEGLU(nn.Module):
- def __init__(self, dim_in, dim_out):
- super().__init__()
- self.proj = nn.Linear(dim_in, dim_out * 2)
-
- def forward(self, x):
- x, gate = self.proj(x).chunk(2, dim=-1)
- return x * F.gelu(gate)
-
-
-class FeedForward(nn.Module):
- def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.):
- super().__init__()
- inner_dim = int(dim * mult)
- dim_out = default(dim_out, dim)
- project_in = nn.Sequential(
- nn.Linear(dim, inner_dim),
- nn.GELU()
- ) if not glu else GEGLU(dim, inner_dim)
-
- self.net = nn.Sequential(
- project_in,
- nn.Dropout(dropout),
- nn.Linear(inner_dim, dim_out)
- )
-
- def forward(self, x):
- return self.net(x)
-
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-
-def Normalize(in_channels):
- return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
-
-
-class LinearAttention(nn.Module):
- def __init__(self, dim, heads=4, dim_head=32):
- super().__init__()
- self.heads = heads
- hidden_dim = dim_head * heads
- self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias = False)
- self.to_out = nn.Conv2d(hidden_dim, dim, 1)
-
- def forward(self, x):
- b, c, h, w = x.shape
- qkv = self.to_qkv(x)
- q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads = self.heads, qkv=3)
- k = k.softmax(dim=-1)
- context = torch.einsum('bhdn,bhen->bhde', k, v)
- out = torch.einsum('bhde,bhdn->bhen', context, q)
- out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w)
- return self.to_out(out)
-
-
-
-
-class CrossAttention(nn.Module):
- def __init__(self, query_dim, key_dim, value_dim, heads=8, dim_head=64, dropout=0):
- super().__init__()
- inner_dim = dim_head * heads
- self.scale = dim_head ** -0.5
- self.heads = heads
- self.dim_head = dim_head
-
- self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
- self.to_k = nn.Linear(key_dim, inner_dim, bias=False)
- self.to_v = nn.Linear(value_dim, inner_dim, bias=False)
-
-
- self.to_out = nn.Sequential( nn.Linear(inner_dim, query_dim), nn.Dropout(dropout) )
-
-
- def fill_inf_from_mask(self, sim, mask):
- if mask is not None:
- B,M = mask.shape
- mask = mask.unsqueeze(1).repeat(1,self.heads,1).reshape(B*self.heads,1,-1)
- max_neg_value = -torch.finfo(sim.dtype).max
- sim.masked_fill_(~mask, max_neg_value)
- return sim
-
- def forward_plain(self, x, key, value, mask=None):
-
- q = self.to_q(x) # B*N*(H*C)
- k = self.to_k(key) # B*M*(H*C)
- v = self.to_v(value) # B*M*(H*C)
-
- B, N, HC = q.shape
- _, M, _ = key.shape
- H = self.heads
- C = HC // H
-
- q = q.view(B,N,H,C).permute(0,2,1,3).reshape(B*H,N,C) # (B*H)*N*C
- k = k.view(B,M,H,C).permute(0,2,1,3).reshape(B*H,M,C) # (B*H)*M*C
- v = v.view(B,M,H,C).permute(0,2,1,3).reshape(B*H,M,C) # (B*H)*M*C
-
- sim = torch.einsum('b i d, b j d -> b i j', q, k) * self.scale # (B*H)*N*M
- self.fill_inf_from_mask(sim, mask)
- attn = sim.softmax(dim=-1) # (B*H)*N*M
-
- out = torch.einsum('b i j, b j d -> b i d', attn, v) # (B*H)*N*C
- out = out.view(B,H,N,C).permute(0,2,1,3).reshape(B,N,(H*C)) # B*N*(H*C)
-
- return self.to_out(out)
-
- def forward(self, x, key, value, mask=None):
- if not XFORMERS_IS_AVAILBLE:
- return self.forward_plain(x, key, value, mask)
-
- q = self.to_q(x) # B*N*(H*C)
- k = self.to_k(key) # B*M*(H*C)
- v = self.to_v(value) # B*M*(H*C)
-
- b, _, _ = q.shape
- q, k, v = map(
- lambda t: t.unsqueeze(3)
- .reshape(b, t.shape[1], self.heads, self.dim_head)
- .permute(0, 2, 1, 3)
- .reshape(b * self.heads, t.shape[1], self.dim_head)
- .contiguous(),
- (q, k, v),
- )
-
- # actually compute the attention, what we cannot get enough of
- out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=None)
-
- if exists(mask):
- raise NotImplementedError
- out = (
- out.unsqueeze(0)
- .reshape(b, self.heads, out.shape[1], self.dim_head)
- .permute(0, 2, 1, 3)
- .reshape(b, out.shape[1], self.heads * self.dim_head)
- )
- return self.to_out(out)
-
-
-
-
-
-class SelfAttention(nn.Module):
- def __init__(self, query_dim, heads=8, dim_head=64, dropout=0.):
- super().__init__()
- inner_dim = dim_head * heads
- self.scale = dim_head ** -0.5
- self.heads = heads
- self.dim_head = dim_head
-
- self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
- self.to_k = nn.Linear(query_dim, inner_dim, bias=False)
- self.to_v = nn.Linear(query_dim, inner_dim, bias=False)
-
- self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout) )
-
- def forward_plain(self, x):
- q = self.to_q(x) # B*N*(H*C)
- k = self.to_k(x) # B*N*(H*C)
- v = self.to_v(x) # B*N*(H*C)
-
- B, N, HC = q.shape
- H = self.heads
- C = HC // H
-
- q = q.view(B,N,H,C).permute(0,2,1,3).reshape(B*H,N,C) # (B*H)*N*C
- k = k.view(B,N,H,C).permute(0,2,1,3).reshape(B*H,N,C) # (B*H)*N*C
- v = v.view(B,N,H,C).permute(0,2,1,3).reshape(B*H,N,C) # (B*H)*N*C
-
- sim = torch.einsum('b i c, b j c -> b i j', q, k) * self.scale # (B*H)*N*N
- attn = sim.softmax(dim=-1) # (B*H)*N*N
-
- out = torch.einsum('b i j, b j c -> b i c', attn, v) # (B*H)*N*C
- out = out.view(B,H,N,C).permute(0,2,1,3).reshape(B,N,(H*C)) # B*N*(H*C)
-
- return self.to_out(out)
-
- def forward(self, x, context=None, mask=None):
- if not XFORMERS_IS_AVAILBLE:
- return self.forward_plain(x)
-
- q = self.to_q(x)
- context = default(context, x)
- k = self.to_k(context)
- v = self.to_v(context)
-
- b, _, _ = q.shape
- q, k, v = map(
- lambda t: t.unsqueeze(3)
- .reshape(b, t.shape[1], self.heads, self.dim_head)
- .permute(0, 2, 1, 3)
- .reshape(b * self.heads, t.shape[1], self.dim_head)
- .contiguous(),
- (q, k, v),
- )
-
- # actually compute the attention, what we cannot get enough of
- out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=None)
-
- if exists(mask):
- raise NotImplementedError
- out = (
- out.unsqueeze(0)
- .reshape(b, self.heads, out.shape[1], self.dim_head)
- .permute(0, 2, 1, 3)
- .reshape(b, out.shape[1], self.heads * self.dim_head)
- )
- return self.to_out(out)
-
-
-class GatedCrossAttentionDense(nn.Module):
- def __init__(self, query_dim, key_dim, value_dim, n_heads, d_head):
- super().__init__()
-
- self.attn = CrossAttention(query_dim=query_dim, key_dim=key_dim, value_dim=value_dim, heads=n_heads, dim_head=d_head)
- self.ff = FeedForward(query_dim, glu=True)
-
- self.norm1 = nn.LayerNorm(query_dim)
- self.norm2 = nn.LayerNorm(query_dim)
-
- self.register_parameter('alpha_attn', nn.Parameter(torch.tensor(0.)) )
- self.register_parameter('alpha_dense', nn.Parameter(torch.tensor(0.)) )
-
- # this can be useful: we can externally change magnitude of tanh(alpha)
- # for example, when it is set to 0, then the entire model is same as original one
- self.scale = 1
-
- def forward(self, x, objs):
-
- x = x + self.scale*torch.tanh(self.alpha_attn) * self.attn( self.norm1(x), objs, objs)
- x = x + self.scale*torch.tanh(self.alpha_dense) * self.ff( self.norm2(x) )
-
- return x
-
-
-class GatedSelfAttentionDense(nn.Module):
- def __init__(self, query_dim, context_dim, n_heads, d_head):
- super().__init__()
-
- # we need a linear projection since we need cat visual feature and obj feature
- self.linear = nn.Linear(context_dim, query_dim)
-
- self.attn = SelfAttention(query_dim=query_dim, heads=n_heads, dim_head=d_head)
- self.ff = FeedForward(query_dim, glu=True)
-
- self.norm1 = nn.LayerNorm(query_dim)
- self.norm2 = nn.LayerNorm(query_dim)
-
- self.register_parameter('alpha_attn', nn.Parameter(torch.tensor(0.)) )
- self.register_parameter('alpha_dense', nn.Parameter(torch.tensor(0.)) )
-
- # this can be useful: we can externally change magnitude of tanh(alpha)
- # for example, when it is set to 0, then the entire model is same as original one
- self.scale = 1
-
-
- def forward(self, x, objs):
-
- N_visual = x.shape[1]
- objs = self.linear(objs)
-
- x = x + self.scale*torch.tanh(self.alpha_attn) * self.attn( self.norm1(torch.cat([x,objs],dim=1)) )[:,0:N_visual,:]
- x = x + self.scale*torch.tanh(self.alpha_dense) * self.ff( self.norm2(x) )
-
- return x
-
-
-class BasicTransformerBlock(nn.Module):
- def __init__(self, query_dim, key_dim, value_dim, n_heads, d_head, fuser_type, use_checkpoint=True):
- super().__init__()
- self.attn1 = SelfAttention(query_dim=query_dim, heads=n_heads, dim_head=d_head)
- self.ff = FeedForward(query_dim, glu=True)
- self.attn2 = CrossAttention(query_dim=query_dim, key_dim=key_dim, value_dim=value_dim, heads=n_heads, dim_head=d_head)
- self.norm1 = nn.LayerNorm(query_dim)
- self.norm2 = nn.LayerNorm(query_dim)
- self.norm3 = nn.LayerNorm(query_dim)
- self.use_checkpoint = use_checkpoint
-
- if fuser_type == "gatedSA":
- # note key_dim here actually is context_dim
- self.fuser = GatedSelfAttentionDense(query_dim, key_dim, n_heads, d_head)
- elif fuser_type == "gatedCA":
- self.fuser = GatedCrossAttentionDense(query_dim, key_dim, value_dim, n_heads, d_head)
- else:
- assert False
-
-
- def forward(self, x, context, objs):
-# return checkpoint(self._forward, (x, context, objs), self.parameters(), self.use_checkpoint)
- if self.use_checkpoint and x.requires_grad:
- return checkpoint.checkpoint(self._forward, x, context, objs)
- else:
- return self._forward(x, context, objs)
-
- def _forward(self, x, context, objs):
- x = self.attn1( self.norm1(x) ) + x
- x = self.fuser(x, objs) # identity mapping in the beginning
- x = self.attn2(self.norm2(x), context, context) + x
- x = self.ff(self.norm3(x)) + x
- return x
-
-
-class SpatialTransformer(nn.Module):
- def __init__(self, in_channels, key_dim, value_dim, n_heads, d_head, depth=1, fuser_type=None, use_checkpoint=True):
- super().__init__()
- self.in_channels = in_channels
- query_dim = n_heads * d_head
- self.norm = Normalize(in_channels)
-
-
- self.proj_in = nn.Conv2d(in_channels,
- query_dim,
- kernel_size=1,
- stride=1,
- padding=0)
-
- self.transformer_blocks = nn.ModuleList(
- [BasicTransformerBlock(query_dim, key_dim, value_dim, n_heads, d_head, fuser_type, use_checkpoint=use_checkpoint)
- for d in range(depth)]
- )
-
- self.proj_out = zero_module(nn.Conv2d(query_dim,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0))
-
- def forward(self, x, context, objs):
- b, c, h, w = x.shape
- x_in = x
- x = self.norm(x)
- x = self.proj_in(x)
- x = rearrange(x, 'b c h w -> b (h w) c')
- for block in self.transformer_blocks:
- x = block(x, context, objs)
- x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w)
- x = self.proj_out(x)
- return x + x_in
\ No newline at end of file
diff --git a/spaces/jeonchangbin49/De-limiter/eval_delimit/score_peaq.py b/spaces/jeonchangbin49/De-limiter/eval_delimit/score_peaq.py
deleted file mode 100644
index a308663081a6facbea237a1c3a19469de1f6ee4c..0000000000000000000000000000000000000000
--- a/spaces/jeonchangbin49/De-limiter/eval_delimit/score_peaq.py
+++ /dev/null
@@ -1,77 +0,0 @@
-# We are going to use PEAQ based on https://github.com/HSU-ANT/gstpeaq
-
-"""
-python3 score_peaq.py --exp_name=delimit_6_s | tee /path/to/results/delimit_6_s/score_peaq.txt
-"""
-
-
-
-import os
-import subprocess
-import glob
-import argparse
-
-
-def str2bool(v):
- if v.lower() in ("yes", "true", "t", "y", "1"):
- return True
- elif v.lower() in ("no", "false", "f", "n", "0"):
- return False
- else:
- raise argparse.ArgumentTypeError("Boolean value expected.")
-
-
-parser = argparse.ArgumentParser(description="model test.py")
-
-parser.add_argument(
- "--target",
- type=str,
- default="all",
- help="target source. all, vocals, drums, bass, other",
-)
-parser.add_argument(
- "--root",
- type=str,
- default="/path/to/musdb_XL_loudnorm",
-)
-parser.add_argument(
- "--output_directory",
- type=str,
- default="/path/to/results/",
-)
-parser.add_argument("--exp_name", type=str, default="delimit_6_s")
-parser.add_argument(
- "--calc_results",
- type=str2bool,
- default=True,
- help="Set this True when you want to calculate the results of the test set. Set this False when calculating musdb-hq vs musdb-XL. (top row in Table 1.)",
-)
-
-args, _ = parser.parse_known_args()
-
-if args.calc_results:
- args.test_output_dir = f"{args.output_directory}/test/{args.exp_name}"
-else:
- args.test_output_dir = f"{args.output_directory}/{args.exp_name}"
-
-if args.target == "all":
- song_list = sorted(glob.glob(f"{args.root}/*/mixture.wav"))
-
- for song in song_list:
- song_name = os.path.basename(os.path.dirname(song))
- est_path = f"{args.test_output_dir}/{song_name}/{args.target}.wav"
- subprocess.run(
- f'peaq --gst-plugin-load=/usr/local/lib/gstreamer-1.0/libgstpeaq.so "{song}" "{est_path}"',
- shell=True,
- )
-
-else:
- song_list = sorted(glob.glob(f"{args.root}/*/{args.target}.wav"))
-
- for song in song_list:
- song_name = os.path.basename(os.path.dirname(song))
- est_path = f"{args.test_output_dir}/{song_name}/{args.target}.wav"
- subprocess.run(
- f'peaq --gst-plugin-load=/usr/local/lib/gstreamer-1.0/libgstpeaq.so "{song}" "{est_path}"',
- shell=True,
- )
diff --git a/spaces/jeonchangbin49/De-limiter/utils/read_wave_utils.py b/spaces/jeonchangbin49/De-limiter/utils/read_wave_utils.py
deleted file mode 100644
index 9f5cf510c69547162c435b9c30fcca04f3218e57..0000000000000000000000000000000000000000
--- a/spaces/jeonchangbin49/De-limiter/utils/read_wave_utils.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import random
-import math
-
-import numpy as np
-import librosa
-import torchaudio
-
-
-def load_wav_arbitrary_position_mono(filename, sample_rate, seq_duration):
- # mono
- # seq_duration[second]
- length = torchaudio.info(filename).num_frames
-
- read_length = librosa.time_to_samples(seq_duration, sr=sample_rate)
- if length > read_length:
- random_start = random.randint(0, int(length - read_length - 1)) / sample_rate
- X, sr = librosa.load(
- filename, sr=None, offset=random_start, duration=seq_duration
- )
- else:
- random_start = 0
- total_pad_length = read_length - length
- X, sr = librosa.load(filename, sr=None, offset=0, duration=seq_duration)
- pad_left = random.randint(0, total_pad_length)
- X = np.pad(X, (pad_left, total_pad_length - pad_left))
-
- return X
-
-
-def load_wav_specific_position_mono(
- filename, sample_rate, seq_duration, start_position
-):
- # mono
- # seq_duration[second]
- # start_position[second]
- length = torchaudio.info(filename).num_frames
- read_length = librosa.time_to_samples(seq_duration, sr=sample_rate)
-
- start_pos_sec = max(
- start_position, 0
- ) # if start_position is minus, then start from 0.
- start_pos_sample = librosa.time_to_samples(start_pos_sec, sr=sample_rate)
-
- if (
- length <= start_pos_sample
- ): # if start position exceeds audio length, then start from 0.
- start_pos_sec = 0
- start_pos_sample = 0
- X, sr = librosa.load(filename, sr=None, offset=start_pos_sec, duration=seq_duration)
-
- if length < start_pos_sample + read_length:
- X = np.pad(X, (0, (start_pos_sample + read_length) - length))
-
- return X
-
-
-# load wav file from arbitrary positions of 16bit stereo wav file
-def load_wav_arbitrary_position_stereo(
- filename, sample_rate, seq_duration, return_pos=False
-):
- # stereo
- # seq_duration[second]
- length = torchaudio.info(filename).num_frames
- read_length = librosa.time_to_samples(seq_duration, sr=sample_rate)
-
- random_start_sample = random.randint(
- 0, int(length - math.ceil(seq_duration * sample_rate) - 1)
- )
- random_start_sec = librosa.samples_to_time(random_start_sample, sr=sample_rate)
- X, sr = librosa.load(
- filename, sr=None, mono=False, offset=random_start_sec, duration=seq_duration
- )
-
- if length < random_start_sample + read_length:
- X = np.pad(X, ((0, 0), (0, (random_start_sample + read_length) - length)))
-
- if return_pos:
- return X, random_start_sec
- else:
- return X
-
-
-def load_wav_specific_position_stereo(
- filename, sample_rate, seq_duration, start_position
-):
- # stereo
- # seq_duration[second]
- # start_position[second]
- length = torchaudio.info(filename).num_frames
- read_length = librosa.time_to_samples(seq_duration, sr=sample_rate)
-
- start_pos_sec = max(
- start_position, 0
- ) # if start_position is minus, then start from 0.
- start_pos_sample = librosa.time_to_samples(start_pos_sec, sr=sample_rate)
-
- if (
- length <= start_pos_sample
- ): # if start position exceeds audio length, then start from 0.
- start_pos_sec = 0
- start_pos_sample = 0
- X, sr = librosa.load(
- filename, sr=None, mono=False, offset=start_pos_sec, duration=seq_duration
- )
-
- if length < start_pos_sample + read_length:
- X = np.pad(X, ((0, 0), (0, (start_pos_sample + read_length) - length)))
-
- return X
diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/modules/__init__.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/training/modules/__init__.py
deleted file mode 100644
index 82e1a9096a5bd8f3fb00e899d0239b078246cad4..0000000000000000000000000000000000000000
--- a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/modules/__init__.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import logging
-
-from saicinpainting.training.modules.ffc import FFCResNetGenerator
-from saicinpainting.training.modules.pix2pixhd import GlobalGenerator, MultiDilatedGlobalGenerator, \
- NLayerDiscriminator, MultidilatedNLayerDiscriminator
-
-def make_generator(config, kind, **kwargs):
- logging.info(f'Make generator {kind}')
-
- if kind == 'pix2pixhd_multidilated':
- return MultiDilatedGlobalGenerator(**kwargs)
-
- if kind == 'pix2pixhd_global':
- return GlobalGenerator(**kwargs)
-
- if kind == 'ffc_resnet':
- return FFCResNetGenerator(**kwargs)
-
- raise ValueError(f'Unknown generator kind {kind}')
-
-
-def make_discriminator(kind, **kwargs):
- logging.info(f'Make discriminator {kind}')
-
- if kind == 'pix2pixhd_nlayer_multidilated':
- return MultidilatedNLayerDiscriminator(**kwargs)
-
- if kind == 'pix2pixhd_nlayer':
- return NLayerDiscriminator(**kwargs)
-
- raise ValueError(f'Unknown discriminator kind {kind}')
diff --git a/spaces/jinhybr/OCR-Invoice-LayoutLMv3/README.md b/spaces/jinhybr/OCR-Invoice-LayoutLMv3/README.md
deleted file mode 100644
index 5ba42f304c03e6ea98af0a253a1c943febbf66e7..0000000000000000000000000000000000000000
--- a/spaces/jinhybr/OCR-Invoice-LayoutLMv3/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: OCR Invoice LayoutLMv3
-emoji: 🏢
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.9
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/jirufengyu/face_recognition/videofast.py b/spaces/jirufengyu/face_recognition/videofast.py
deleted file mode 100644
index 1159c8217cd3a39b96293745d0eb165c9f713b0f..0000000000000000000000000000000000000000
--- a/spaces/jirufengyu/face_recognition/videofast.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import face_recognition
-import cv2
-import numpy as np
-import os
-import pickle
-
-# This is a demo of running face recognition on live video from your webcam. It's a little more complicated than the
-# other example, but it includes some basic performance tweaks to make things run a lot faster:
-# 1. Process each video frame at 1/4 resolution (though still display it at full resolution)
-# 2. Only detect faces in every other frame of video.
-
-# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
-# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
-# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.
-
-# Get a reference to webcam #0 (the default one)
-
-
-def get_emb(file_name):
-    # Load a cached embedding if one exists, otherwise compute it from the image and cache it.
-    emb_file = file_name.replace(".jpg", ".npy")
-    if os.path.exists(emb_file):
-        emb = np.load(emb_file)
-    else:
-        file_ = face_recognition.load_image_file(file_name)
-        emb = face_recognition.face_encodings(file_)[0]
-        np.save(emb_file, emb)
-    return emb
-def input_an_image(image_file, person_name, ori_img_dir='images/ori_images',img_emb_dir='images/img_emb'):
- image_file_dir=os.path.join(ori_img_dir,person_name)
- emb_file_dir=os.path.join(img_emb_dir,person_name)
- if not os.path.exists(image_file_dir):
- os.mkdir(image_file_dir)
- os.mkdir(emb_file_dir)
- file_ind=0
- else:
- file_ind=len(os.listdir(image_file_dir))
- file_ = face_recognition.load_image_file(image_file)
- emb = face_recognition.face_encodings(file_)[0]
- emb_file=image_file.split('.')[0]+f'_{file_ind}.npy'
- emb_file_out_path=os.path.join(emb_file_dir,emb_file)
- np.save(emb_file_out_path, emb)
- return emb
-def init_load_embs(img_emb_dir='images/img_emb'):
- persons=os.listdir(img_emb_dir)
- i=0
- ind2person=dict()
- for oneperson in persons:
- oneperson_dir=os.path.join(img_emb_dir,oneperson)
- oneperson_list=os.listdir(oneperson_dir)
- for oneperson_j in oneperson_list:
- emb_id=i
- i+=1
- emb=np.load(os.path.join(oneperson_dir,oneperson_j))
- ind2person[emb_id]=dict(person=oneperson,emb=emb)
- return ind2person
-
-
-
-if __name__=="__main__":
- ind2person=init_load_embs()
- video_capture = cv2.VideoCapture(0)
- emb=input_an_image('youpeng.jpg', "youpeng")
- ind2person[len(list(ind2person.values()))]=dict(person="youpeng",emb=emb)
-# img_emb_dir='images/img_emb'
-# ori_img_dir='images/ori_images'
-# if not os.path.exists(img_emb_dir):
-# os.mkdir(img_emb_dir)
-# if not os.path.exists(ori_img_dir):
-# os.mkdir(ori_img_dir)
-# # os.listdir()
-# Load a sample picture and learn how to recognize it.
-# file_list=["obama.jpg","biden.jpg","mengqi.jpg","xinyi.jpg","sixian.jpg","wang.jpg","chenmengqi.jpg",'yilin.jpg','youpeng.jpg','wangyibo.jpg']
-
-
-# Create arrays of known face encodings and their names
-# known_face_encodings = [
-# obama_face_encoding,
-# biden_face_encoding,
-# me_face_encoding,
-# wang_face_encoding
-# ]
-# known_face_names = [
-# "Barack Obama",
-# "Joe Biden",
-# "me",
-# "wang"
-# ]
- known_face_encodings=[v['emb'] for k,v in ind2person.items()]
- # known_face_encodings=[get_emb(f) for f in file_list]
- # known_face_names=[st.replace('.jpg','')for st in file_list]
- # Initialize some variables
- face_locations = []
- face_encodings = []
- face_names = []
- process_this_frame = True
-
- while True:
- # Grab a single frame of video
- ret, frame = video_capture.read()
-
- # Only process every other frame of video to save time
- if process_this_frame:
- # Resize frame of video to 1/4 size for faster face recognition processing
- small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
-
- # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
- rgb_small_frame = small_frame[:, :, ::-1]
-
- # Find all the faces and face encodings in the current frame of video
- face_locations = face_recognition.face_locations(rgb_small_frame, number_of_times_to_upsample=1)#, model="cnn")
- face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
-
- face_names = []
- for face_encoding in face_encodings:
- # See if the face is a match for the known face(s)
- matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
- name = "Unknown"
-
- # # If a match was found in known_face_encodings, just use the first one.
- # if True in matches:
- # first_match_index = matches.index(True)
- # name = known_face_names[first_match_index]
-
- # Or instead, use the known face with the smallest distance to the new face
- face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
- best_match_index = np.argmin(face_distances)
- if matches[best_match_index]:
- # name = known_face_names[best_match_index]
- name = ind2person[best_match_index]['person']
-
- face_names.append(name)
-
- process_this_frame = not process_this_frame
-
-
- # Display the results
- for (top, right, bottom, left), name in zip(face_locations, face_names):
- # Scale back up face locations since the frame we detected in was scaled to 1/4 size
- top *= 4
- right *= 4
- bottom *= 4
- left *= 4
-
- # Draw a box around the face
- cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
-
- # Draw a label with a name below the face
- cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
- font = cv2.FONT_HERSHEY_DUPLEX
- cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
-
- # Display the resulting image
- cv2.imshow('Video', frame)
-
- # Hit 'q' on the keyboard to quit!
- if cv2.waitKey(1) & 0xFF == ord('q'):
- break
-
- # Release handle to the webcam
- video_capture.release()
- cv2.destroyAllWindows()
\ No newline at end of file
diff --git a/spaces/joaogabriellima/Real-Time-Voice-Cloning/toolbox/__init__.py b/spaces/joaogabriellima/Real-Time-Voice-Cloning/toolbox/__init__.py
deleted file mode 100644
index 531d6adef076007afd6116eb6472485f540e80de..0000000000000000000000000000000000000000
--- a/spaces/joaogabriellima/Real-Time-Voice-Cloning/toolbox/__init__.py
+++ /dev/null
@@ -1,357 +0,0 @@
-from toolbox.ui import UI
-from encoder import inference as encoder
-from synthesizer.inference import Synthesizer
-from vocoder import inference as vocoder
-from pathlib import Path
-from time import perf_counter as timer
-from toolbox.utterance import Utterance
-import numpy as np
-import traceback
-import sys
-import torch
-import librosa
-from audioread.exceptions import NoBackendError
-
-# Use this directory structure for your datasets, or modify it to fit your needs
-recognized_datasets = [
- "LibriSpeech/dev-clean",
- "LibriSpeech/dev-other",
- "LibriSpeech/test-clean",
- "LibriSpeech/test-other",
- "LibriSpeech/train-clean-100",
- "LibriSpeech/train-clean-360",
- "LibriSpeech/train-other-500",
- "LibriTTS/dev-clean",
- "LibriTTS/dev-other",
- "LibriTTS/test-clean",
- "LibriTTS/test-other",
- "LibriTTS/train-clean-100",
- "LibriTTS/train-clean-360",
- "LibriTTS/train-other-500",
- "LJSpeech-1.1",
- "VoxCeleb1/wav",
- "VoxCeleb1/test_wav",
- "VoxCeleb2/dev/aac",
- "VoxCeleb2/test/aac",
- "VCTK-Corpus/wav48",
-]
-
-# Maximum number of generated wavs to keep in memory
-MAX_WAVES = 15
-
-class Toolbox:
- def __init__(self, datasets_root, enc_models_dir, syn_models_dir, voc_models_dir, seed, no_mp3_support):
- if not no_mp3_support:
- try:
- librosa.load("samples/6829_00000.mp3")
- except NoBackendError:
- print("Librosa will be unable to open mp3 files if additional software is not installed.\n"
- "Please install ffmpeg or add the '--no_mp3_support' option to proceed without support for mp3 files.")
- exit(-1)
- self.no_mp3_support = no_mp3_support
- sys.excepthook = self.excepthook
- self.datasets_root = datasets_root
- self.utterances = set()
- self.current_generated = (None, None, None, None) # speaker_name, spec, breaks, wav
-
- self.synthesizer = None # type: Synthesizer
- self.current_wav = None
- self.waves_list = []
- self.waves_count = 0
- self.waves_namelist = []
-
- # Check for webrtcvad (enables removal of silences in vocoder output)
- try:
- import webrtcvad
- self.trim_silences = True
- except:
- self.trim_silences = False
-
- # Initialize the events and the interface
- self.ui = UI()
- self.reset_ui(enc_models_dir, syn_models_dir, voc_models_dir, seed)
- self.setup_events()
- self.ui.start()
-
- def excepthook(self, exc_type, exc_value, exc_tb):
- traceback.print_exception(exc_type, exc_value, exc_tb)
- self.ui.log("Exception: %s" % exc_value)
-
- def setup_events(self):
- # Dataset, speaker and utterance selection
- self.ui.browser_load_button.clicked.connect(lambda: self.load_from_browser())
- random_func = lambda level: lambda: self.ui.populate_browser(self.datasets_root,
- recognized_datasets,
- level)
- self.ui.random_dataset_button.clicked.connect(random_func(0))
- self.ui.random_speaker_button.clicked.connect(random_func(1))
- self.ui.random_utterance_button.clicked.connect(random_func(2))
- self.ui.dataset_box.currentIndexChanged.connect(random_func(1))
- self.ui.speaker_box.currentIndexChanged.connect(random_func(2))
-
- # Model selection
- self.ui.encoder_box.currentIndexChanged.connect(self.init_encoder)
- def func():
- self.synthesizer = None
- self.ui.synthesizer_box.currentIndexChanged.connect(func)
- self.ui.vocoder_box.currentIndexChanged.connect(self.init_vocoder)
-
- # Utterance selection
- func = lambda: self.load_from_browser(self.ui.browse_file())
- self.ui.browser_browse_button.clicked.connect(func)
- func = lambda: self.ui.draw_utterance(self.ui.selected_utterance, "current")
- self.ui.utterance_history.currentIndexChanged.connect(func)
- func = lambda: self.ui.play(self.ui.selected_utterance.wav, Synthesizer.sample_rate)
- self.ui.play_button.clicked.connect(func)
- self.ui.stop_button.clicked.connect(self.ui.stop)
- self.ui.record_button.clicked.connect(self.record)
-
- #Audio
- self.ui.setup_audio_devices(Synthesizer.sample_rate)
-
- #Wav playback & save
- func = lambda: self.replay_last_wav()
- self.ui.replay_wav_button.clicked.connect(func)
- func = lambda: self.export_current_wave()
- self.ui.export_wav_button.clicked.connect(func)
- self.ui.waves_cb.currentIndexChanged.connect(self.set_current_wav)
-
- # Generation
- func = lambda: self.synthesize() or self.vocode()
- self.ui.generate_button.clicked.connect(func)
- self.ui.synthesize_button.clicked.connect(self.synthesize)
- self.ui.vocode_button.clicked.connect(self.vocode)
- self.ui.random_seed_checkbox.clicked.connect(self.update_seed_textbox)
-
- # UMAP legend
- self.ui.clear_button.clicked.connect(self.clear_utterances)
-
- def set_current_wav(self, index):
- self.current_wav = self.waves_list[index]
-
- def export_current_wave(self):
- self.ui.save_audio_file(self.current_wav, Synthesizer.sample_rate)
-
- def replay_last_wav(self):
- self.ui.play(self.current_wav, Synthesizer.sample_rate)
-
- def reset_ui(self, encoder_models_dir, synthesizer_models_dir, vocoder_models_dir, seed):
- self.ui.populate_browser(self.datasets_root, recognized_datasets, 0, True)
- self.ui.populate_models(encoder_models_dir, synthesizer_models_dir, vocoder_models_dir)
- self.ui.populate_gen_options(seed, self.trim_silences)
-
- def load_from_browser(self, fpath=None):
- if fpath is None:
- fpath = Path(self.datasets_root,
- self.ui.current_dataset_name,
- self.ui.current_speaker_name,
- self.ui.current_utterance_name)
- name = str(fpath.relative_to(self.datasets_root))
- speaker_name = self.ui.current_dataset_name + '_' + self.ui.current_speaker_name
-
- # Select the next utterance
- if self.ui.auto_next_checkbox.isChecked():
- self.ui.browser_select_next()
- elif fpath == "":
- return
- else:
- name = fpath.name
- speaker_name = fpath.parent.name
-
- if fpath.suffix.lower() == ".mp3" and self.no_mp3_support:
- self.ui.log("Error: No mp3 file argument was passed but an mp3 file was used")
- return
-
- # Get the wav from the disk. We take the wav with the vocoder/synthesizer format for
- # playback, so as to have a fair comparison with the generated audio
- wav = Synthesizer.load_preprocess_wav(fpath)
- self.ui.log("Loaded %s" % name)
-
- self.add_real_utterance(wav, name, speaker_name)
-
- def record(self):
- wav = self.ui.record_one(encoder.sampling_rate, 5)
- if wav is None:
- return
- self.ui.play(wav, encoder.sampling_rate)
-
- speaker_name = "user01"
- name = speaker_name + "_rec_%05d" % np.random.randint(100000)
- self.add_real_utterance(wav, name, speaker_name)
-
- def add_real_utterance(self, wav, name, speaker_name):
- # Compute the mel spectrogram
- spec = Synthesizer.make_spectrogram(wav)
- self.ui.draw_spec(spec, "current")
-
- # Compute the embedding
- if not encoder.is_loaded():
- self.init_encoder()
- encoder_wav = encoder.preprocess_wav(wav)
- embed, partial_embeds, _ = encoder.embed_utterance(encoder_wav, return_partials=True)
-
- # Add the utterance
- utterance = Utterance(name, speaker_name, wav, spec, embed, partial_embeds, False)
- self.utterances.add(utterance)
- self.ui.register_utterance(utterance)
-
- # Plot it
- self.ui.draw_embed(embed, name, "current")
- self.ui.draw_umap_projections(self.utterances)
-
- def clear_utterances(self):
- self.utterances.clear()
- self.ui.draw_umap_projections(self.utterances)
-
- def synthesize(self):
- self.ui.log("Generating the mel spectrogram...")
- self.ui.set_loading(1)
-
- # Update the synthesizer random seed
- if self.ui.random_seed_checkbox.isChecked():
- seed = int(self.ui.seed_textbox.text())
- self.ui.populate_gen_options(seed, self.trim_silences)
- else:
- seed = None
-
- if seed is not None:
- torch.manual_seed(seed)
-
- # Synthesize the spectrogram
- if self.synthesizer is None or seed is not None:
- self.init_synthesizer()
-
- texts = self.ui.text_prompt.toPlainText().split("\n")
- embed = self.ui.selected_utterance.embed
- embeds = [embed] * len(texts)
- specs = self.synthesizer.synthesize_spectrograms(texts, embeds)
- breaks = [spec.shape[1] for spec in specs]
- spec = np.concatenate(specs, axis=1)
-
- self.ui.draw_spec(spec, "generated")
- self.current_generated = (self.ui.selected_utterance.speaker_name, spec, breaks, None)
- self.ui.set_loading(0)
-
- def vocode(self):
- speaker_name, spec, breaks, _ = self.current_generated
- assert spec is not None
-
-        # Initialize the vocoder model and make it deterministic, if the user provides a seed
- if self.ui.random_seed_checkbox.isChecked():
- seed = int(self.ui.seed_textbox.text())
- self.ui.populate_gen_options(seed, self.trim_silences)
- else:
- seed = None
-
- if seed is not None:
- torch.manual_seed(seed)
-
- # Synthesize the waveform
- if not vocoder.is_loaded() or seed is not None:
- self.init_vocoder()
-
- def vocoder_progress(i, seq_len, b_size, gen_rate):
- real_time_factor = (gen_rate / Synthesizer.sample_rate) * 1000
- line = "Waveform generation: %d/%d (batch size: %d, rate: %.1fkHz - %.2fx real time)" \
- % (i * b_size, seq_len * b_size, b_size, gen_rate, real_time_factor)
- self.ui.log(line, "overwrite")
- self.ui.set_loading(i, seq_len)
- if self.ui.current_vocoder_fpath is not None:
- self.ui.log("")
- wav = vocoder.infer_waveform(spec, progress_callback=vocoder_progress)
- else:
- self.ui.log("Waveform generation with Griffin-Lim... ")
- wav = Synthesizer.griffin_lim(spec)
- self.ui.set_loading(0)
- self.ui.log(" Done!", "append")
-
- # Add breaks
- b_ends = np.cumsum(np.array(breaks) * Synthesizer.hparams.hop_size)
- b_starts = np.concatenate(([0], b_ends[:-1]))
-        wavs = [wav[start:end] for start, end in zip(b_starts, b_ends)]
- breaks = [np.zeros(int(0.15 * Synthesizer.sample_rate))] * len(breaks)
- wav = np.concatenate([i for w, b in zip(wavs, breaks) for i in (w, b)])
-
- # Trim excessive silences
- if self.ui.trim_silences_checkbox.isChecked():
- wav = encoder.preprocess_wav(wav)
-
- # Play it
- wav = wav / np.abs(wav).max() * 0.97
- self.ui.play(wav, Synthesizer.sample_rate)
-
- # Name it (history displayed in combobox)
- # TODO better naming for the combobox items?
- wav_name = str(self.waves_count + 1)
-
- #Update waves combobox
- self.waves_count += 1
- if self.waves_count > MAX_WAVES:
- self.waves_list.pop()
- self.waves_namelist.pop()
- self.waves_list.insert(0, wav)
- self.waves_namelist.insert(0, wav_name)
-
- self.ui.waves_cb.disconnect()
- self.ui.waves_cb_model.setStringList(self.waves_namelist)
- self.ui.waves_cb.setCurrentIndex(0)
- self.ui.waves_cb.currentIndexChanged.connect(self.set_current_wav)
-
- # Update current wav
- self.set_current_wav(0)
-
- #Enable replay and save buttons:
- self.ui.replay_wav_button.setDisabled(False)
- self.ui.export_wav_button.setDisabled(False)
-
- # Compute the embedding
- # TODO: this is problematic with different sampling rates, gotta fix it
- if not encoder.is_loaded():
- self.init_encoder()
- encoder_wav = encoder.preprocess_wav(wav)
- embed, partial_embeds, _ = encoder.embed_utterance(encoder_wav, return_partials=True)
-
- # Add the utterance
- name = speaker_name + "_gen_%05d" % np.random.randint(100000)
- utterance = Utterance(name, speaker_name, wav, spec, embed, partial_embeds, True)
- self.utterances.add(utterance)
-
- # Plot it
- self.ui.draw_embed(embed, name, "generated")
- self.ui.draw_umap_projections(self.utterances)
-
- def init_encoder(self):
- model_fpath = self.ui.current_encoder_fpath
-
- self.ui.log("Loading the encoder %s... " % model_fpath)
- self.ui.set_loading(1)
- start = timer()
- encoder.load_model(model_fpath)
- self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append")
- self.ui.set_loading(0)
-
- def init_synthesizer(self):
- model_fpath = self.ui.current_synthesizer_fpath
-
- self.ui.log("Loading the synthesizer %s... " % model_fpath)
- self.ui.set_loading(1)
- start = timer()
- self.synthesizer = Synthesizer(model_fpath)
- self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append")
- self.ui.set_loading(0)
-
- def init_vocoder(self):
- model_fpath = self.ui.current_vocoder_fpath
- # Case of Griffin-lim
- if model_fpath is None:
- return
-
- self.ui.log("Loading the vocoder %s... " % model_fpath)
- self.ui.set_loading(1)
- start = timer()
- vocoder.load_model(model_fpath)
- self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append")
- self.ui.set_loading(0)
-
- def update_seed_textbox(self):
- self.ui.update_seed_textbox()
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/PublicKey/test_ECC_25519.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/PublicKey/test_ECC_25519.py
deleted file mode 100644
index 9f14131ff114ba61cc26a39fa13c033c9d7bbaa1..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/PublicKey/test_ECC_25519.py
+++ /dev/null
@@ -1,333 +0,0 @@
-# ===================================================================
-#
-# Copyright (c) 2022, Legrandin
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# ===================================================================
-
-import unittest
-from binascii import unhexlify
-
-from Crypto.SelfTest.st_common import list_test_cases
-from Crypto.SelfTest.loader import load_test_vectors
-
-from Crypto.PublicKey import ECC
-from Crypto.PublicKey.ECC import EccPoint, _curves, EccKey
-
-from Crypto.Math.Numbers import Integer
-
-from Crypto.Hash import SHAKE128
-
-
-class TestEccPoint_Ed25519(unittest.TestCase):
-
- Gxy = {"x": 15112221349535400772501151409588531511454012693041857206046113283949847762202,
- "y": 46316835694926478169428394003475163141307993866256225615783033603165251855960}
-
- G2xy = {"x": 24727413235106541002554574571675588834622768167397638456726423682521233608206,
- "y": 15549675580280190176352668710449542251549572066445060580507079593062643049417}
-
- G3xy = {"x": 46896733464454938657123544595386787789046198280132665686241321779790909858396,
- "y": 8324843778533443976490377120369201138301417226297555316741202210403726505172}
-
- pointG = EccPoint(Gxy['x'], Gxy['y'], curve="Ed25519")
- pointG2 = EccPoint(G2xy['x'], G2xy['y'], curve="Ed25519")
- pointG3 = EccPoint(G3xy['x'], G3xy['y'], curve="Ed25519")
-
- def test_init_xy(self):
- EccPoint(self.Gxy['x'], self.Gxy['y'], curve="Ed25519")
-
- # Neutral point
- pai = EccPoint(0, 1, curve="Ed25519")
- self.assertEqual(pai.x, 0)
- self.assertEqual(pai.y, 1)
- self.assertEqual(pai.xy, (0, 1))
-
- # G
- bp = self.pointG.copy()
- self.assertEqual(bp.x, 15112221349535400772501151409588531511454012693041857206046113283949847762202)
- self.assertEqual(bp.y, 46316835694926478169428394003475163141307993866256225615783033603165251855960)
- self.assertEqual(bp.xy, (bp.x, bp.y))
-
- # 2G
- bp2 = self.pointG2.copy()
- self.assertEqual(bp2.x, 24727413235106541002554574571675588834622768167397638456726423682521233608206)
- self.assertEqual(bp2.y, 15549675580280190176352668710449542251549572066445060580507079593062643049417)
- self.assertEqual(bp2.xy, (bp2.x, bp2.y))
-
- # 5G
- EccPoint(x=33467004535436536005251147249499675200073690106659565782908757308821616914995,
- y=43097193783671926753355113395909008640284023746042808659097434958891230611693,
- curve="Ed25519")
-
- # Catch if point is not on the curve
- self.assertRaises(ValueError, EccPoint, 34, 35, curve="Ed25519")
-
- def test_set(self):
- pointW = EccPoint(0, 1, curve="Ed25519")
- pointW.set(self.pointG)
- self.assertEqual(pointW.x, self.pointG.x)
- self.assertEqual(pointW.y, self.pointG.y)
-
- def test_copy(self):
- pointW = self.pointG.copy()
- self.assertEqual(pointW.x, self.pointG.x)
- self.assertEqual(pointW.y, self.pointG.y)
-
- def test_equal(self):
- pointH = self.pointG.copy()
- pointI = self.pointG2.copy()
- self.assertEqual(self.pointG, pointH)
- self.assertNotEqual(self.pointG, pointI)
-
- def test_pai(self):
- pai = EccPoint(0, 1, curve="Ed25519")
- self.assertTrue(pai.is_point_at_infinity())
- self.assertEqual(pai, pai.point_at_infinity())
-
- def test_negate(self):
- negG = -self.pointG
- sum = self.pointG + negG
- self.assertTrue(sum.is_point_at_infinity())
-
- def test_addition(self):
- self.assertEqual(self.pointG + self.pointG2, self.pointG3)
- self.assertEqual(self.pointG2 + self.pointG, self.pointG3)
- self.assertEqual(self.pointG2 + self.pointG.point_at_infinity(), self.pointG2)
- self.assertEqual(self.pointG.point_at_infinity() + self.pointG2, self.pointG2)
-
- G5 = self.pointG2 + self.pointG3
- self.assertEqual(G5.x, 33467004535436536005251147249499675200073690106659565782908757308821616914995)
- self.assertEqual(G5.y, 43097193783671926753355113395909008640284023746042808659097434958891230611693)
-
- def test_inplace_addition(self):
- pointH = self.pointG.copy()
- pointH += self.pointG
- self.assertEqual(pointH, self.pointG2)
- pointH += self.pointG
- self.assertEqual(pointH, self.pointG3)
- pointH += self.pointG.point_at_infinity()
- self.assertEqual(pointH, self.pointG3)
-
- def test_doubling(self):
- pointH = self.pointG.copy()
- pointH.double()
- self.assertEqual(pointH.x, self.pointG2.x)
- self.assertEqual(pointH.y, self.pointG2.y)
-
- # 2*0
- pai = self.pointG.point_at_infinity()
- pointR = pai.copy()
- pointR.double()
- self.assertEqual(pointR, pai)
-
- def test_scalar_multiply(self):
- d = 0
- pointH = d * self.pointG
- self.assertEqual(pointH.x, 0)
- self.assertEqual(pointH.y, 1)
-
- d = 1
- pointH = d * self.pointG
- self.assertEqual(pointH.x, self.pointG.x)
- self.assertEqual(pointH.y, self.pointG.y)
-
- d = 2
- pointH = d * self.pointG
- self.assertEqual(pointH.x, self.pointG2.x)
- self.assertEqual(pointH.y, self.pointG2.y)
-
- d = 3
- pointH = d * self.pointG
- self.assertEqual(pointH.x, self.pointG3.x)
- self.assertEqual(pointH.y, self.pointG3.y)
-
- d = 4
- pointH = d * self.pointG
- self.assertEqual(pointH.x, 14582954232372986451776170844943001818709880559417862259286374126315108956272)
- self.assertEqual(pointH.y, 32483318716863467900234833297694612235682047836132991208333042722294373421359)
-
- d = 5
- pointH = d * self.pointG
- self.assertEqual(pointH.x, 33467004535436536005251147249499675200073690106659565782908757308821616914995)
- self.assertEqual(pointH.y, 43097193783671926753355113395909008640284023746042808659097434958891230611693)
-
- d = 10
- pointH = d * self.pointG
- self.assertEqual(pointH.x, 43500613248243327786121022071801015118933854441360174117148262713429272820047)
- self.assertEqual(pointH.y, 45005105423099817237495816771148012388779685712352441364231470781391834741548)
-
- d = 20
- pointH = d * self.pointG
- self.assertEqual(pointH.x, 46694936775300686710656303283485882876784402425210400817529601134760286812591)
- self.assertEqual(pointH.y, 8786390172762935853260670851718824721296437982862763585171334833968259029560)
-
- d = 255
- pointH = d * self.pointG
- self.assertEqual(pointH.x, 36843863416400016952258312492144504209624961884991522125275155377549541182230)
- self.assertEqual(pointH.y, 22327030283879720808995671630924669697661065034121040761798775626517750047180)
-
- d = 256
- pointH = d * self.pointG
- self.assertEqual(pointH.x, 42740085206947573681423002599456489563927820004573071834350074001818321593686)
- self.assertEqual(pointH.y, 6935684722522267618220753829624209639984359598320562595061366101608187623111)
-
- def test_sizes(self):
- self.assertEqual(self.pointG.size_in_bits(), 255)
- self.assertEqual(self.pointG.size_in_bytes(), 32)
-
-
-class TestEccKey_Ed25519(unittest.TestCase):
-
- def test_private_key(self):
- seed = unhexlify("9d61b19deffd5a60ba844af492ec2cc44449c5697b326919703bac031cae7f60")
- Px = 38815646466658113194383306759739515082307681141926459231621296960732224964046
- Py = 11903303657706407974989296177215005343713679411332034699907763981919547054807
-
- key = EccKey(curve="Ed25519", seed=seed)
- self.assertEqual(key.seed, seed)
- self.assertEqual(key.d, 36144925721603087658594284515452164870581325872720374094707712194495455132720)
- self.assertTrue(key.has_private())
- self.assertEqual(key.pointQ.x, Px)
- self.assertEqual(key.pointQ.y, Py)
-
- point = EccPoint(Px, Py, "ed25519")
- key = EccKey(curve="Ed25519", seed=seed, point=point)
- self.assertEqual(key.d, 36144925721603087658594284515452164870581325872720374094707712194495455132720)
- self.assertTrue(key.has_private())
- self.assertEqual(key.pointQ, point)
-
- # Other names
- key = EccKey(curve="ed25519", seed=seed)
-
- # Must not accept d parameter
- self.assertRaises(ValueError, EccKey, curve="ed25519", d=1)
-
- def test_public_key(self):
- point = EccPoint(_curves['ed25519'].Gx, _curves['ed25519'].Gy, curve='ed25519')
- key = EccKey(curve="ed25519", point=point)
- self.assertFalse(key.has_private())
- self.assertEqual(key.pointQ, point)
-
- def test_public_key_derived(self):
- priv_key = EccKey(curve="ed25519", seed=b'H'*32)
- pub_key = priv_key.public_key()
- self.assertFalse(pub_key.has_private())
- self.assertEqual(priv_key.pointQ, pub_key.pointQ)
-
- def test_invalid_seed(self):
- self.assertRaises(ValueError, lambda: EccKey(curve="ed25519", seed=b'H' * 31))
-
- def test_equality(self):
- private_key = ECC.construct(seed=b'H'*32, curve="Ed25519")
- private_key2 = ECC.construct(seed=b'H'*32, curve="ed25519")
- private_key3 = ECC.construct(seed=b'C'*32, curve="Ed25519")
-
- public_key = private_key.public_key()
- public_key2 = private_key2.public_key()
- public_key3 = private_key3.public_key()
-
- self.assertEqual(private_key, private_key2)
- self.assertNotEqual(private_key, private_key3)
-
- self.assertEqual(public_key, public_key2)
- self.assertNotEqual(public_key, public_key3)
-
- self.assertNotEqual(public_key, private_key)
-
- def test_name_consistency(self):
- key = ECC.generate(curve='ed25519')
- self.assertIn("curve='Ed25519'", repr(key))
- self.assertEqual(key.curve, 'Ed25519')
- self.assertEqual(key.public_key().curve, 'Ed25519')
-
-
-class TestEccModule_Ed25519(unittest.TestCase):
-
- def test_generate(self):
- key = ECC.generate(curve="Ed25519")
- self.assertTrue(key.has_private())
- point = EccPoint(_curves['Ed25519'].Gx, _curves['Ed25519'].Gy, curve="Ed25519") * key.d
- self.assertEqual(key.pointQ, point)
-
- # Always random
- key2 = ECC.generate(curve="Ed25519")
- self.assertNotEqual(key, key2)
-
- # Other names
- ECC.generate(curve="Ed25519")
-
- # Random source
- key1 = ECC.generate(curve="Ed25519", randfunc=SHAKE128.new().read)
- key2 = ECC.generate(curve="Ed25519", randfunc=SHAKE128.new().read)
- self.assertEqual(key1, key2)
-
- def test_construct(self):
- seed = unhexlify("9d61b19deffd5a60ba844af492ec2cc44449c5697b326919703bac031cae7f60")
- Px = 38815646466658113194383306759739515082307681141926459231621296960732224964046
- Py = 11903303657706407974989296177215005343713679411332034699907763981919547054807
- d = 36144925721603087658594284515452164870581325872720374094707712194495455132720
- point = EccPoint(Px, Py, curve="Ed25519")
-
- # Private key only
- key = ECC.construct(curve="Ed25519", seed=seed)
- self.assertEqual(key.pointQ, point)
- self.assertTrue(key.has_private())
-
- # Public key only
- key = ECC.construct(curve="Ed25519", point_x=Px, point_y=Py)
- self.assertEqual(key.pointQ, point)
- self.assertFalse(key.has_private())
-
- # Private and public key
- key = ECC.construct(curve="Ed25519", seed=seed, point_x=Px, point_y=Py)
- self.assertEqual(key.pointQ, point)
- self.assertTrue(key.has_private())
-
- # Other names
- key = ECC.construct(curve="ed25519", seed=seed)
-
- def test_negative_construct(self):
- coord = dict(point_x=10, point_y=4)
- coordG = dict(point_x=_curves['ed25519'].Gx, point_y=_curves['ed25519'].Gy)
-
- self.assertRaises(ValueError, ECC.construct, curve="Ed25519", **coord)
- self.assertRaises(ValueError, ECC.construct, curve="Ed25519", d=2, **coordG)
- self.assertRaises(ValueError, ECC.construct, curve="Ed25519", seed=b'H'*31)
-
-
-def get_tests(config={}):
- tests = []
- tests += list_test_cases(TestEccPoint_Ed25519)
- tests += list_test_cases(TestEccKey_Ed25519)
- tests += list_test_cases(TestEccModule_Ed25519)
- return tests
-
-
-if __name__ == '__main__':
- def suite():
- return unittest.TestSuite(get_tests())
- unittest.main(defaultTest='suite')
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_pageelement.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_pageelement.py
deleted file mode 100644
index 24f9385de243f853f5de8ea456976d6e2dbef80f..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_pageelement.py
+++ /dev/null
@@ -1,378 +0,0 @@
-"""Tests of the bs4.element.PageElement class"""
-import copy
-import pickle
-import pytest
-import sys
-
-from bs4 import BeautifulSoup
-from bs4.element import (
- Comment,
- ResultSet,
- SoupStrainer,
-)
-from . import (
- SoupTest,
-)
-
-class TestEncoding(SoupTest):
- """Test the ability to encode objects into strings."""
-
- def test_unicode_string_can_be_encoded(self):
- html = "\N{SNOWMAN}"
- soup = self.soup(html)
- assert soup.b.string.encode("utf-8") == "\N{SNOWMAN}".encode("utf-8")
-
- def test_tag_containing_unicode_string_can_be_encoded(self):
- html = "\N{SNOWMAN}"
- soup = self.soup(html)
- assert soup.b.encode("utf-8") == html.encode("utf-8")
-
- def test_encoding_substitutes_unrecognized_characters_by_default(self):
- html = "\N{SNOWMAN}"
- soup = self.soup(html)
- assert soup.b.encode("ascii") == b"☃"
-
- def test_encoding_can_be_made_strict(self):
- html = "\N{SNOWMAN}"
- soup = self.soup(html)
- with pytest.raises(UnicodeEncodeError):
- soup.encode("ascii", errors="strict")
-
- def test_decode_contents(self):
- html = "\N{SNOWMAN}"
- soup = self.soup(html)
- assert "\N{SNOWMAN}" == soup.b.decode_contents()
-
- def test_encode_contents(self):
- html = "\N{SNOWMAN}"
- soup = self.soup(html)
- assert "\N{SNOWMAN}".encode("utf8") == soup.b.encode_contents(
- encoding="utf8"
- )
-
- def test_encode_deeply_nested_document(self):
- # This test verifies that encoding a string doesn't involve
- # any recursive function calls. If it did, this test would
- # overflow the Python interpreter stack.
- limit = sys.getrecursionlimit() + 1
- markup = "" * limit
- soup = self.soup(markup)
- encoded = soup.encode()
-        assert limit == encoded.count(b"<span>")
-
- def test_deprecated_renderContents(self):
- html = "\N{SNOWMAN}"
- soup = self.soup(html)
- soup.renderContents()
- assert "\N{SNOWMAN}".encode("utf8") == soup.b.renderContents()
-
- def test_repr(self):
- html = "\N{SNOWMAN}"
- soup = self.soup(html)
- assert html == repr(soup)
-
-
-class TestFormatters(SoupTest):
- """Test the formatting feature, used by methods like decode() and
- prettify(), and the formatters themselves.
- """
-
- def test_default_formatter_is_minimal(self):
- markup = "<<Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!>>"
- soup = self.soup(markup)
- decoded = soup.decode(formatter="minimal")
- # The < is converted back into < but the e-with-acute is left alone.
- assert decoded == self.document_for(
- "<<Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!>>"
- )
-
- def test_formatter_html(self):
- markup = " <<Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!>>"
- soup = self.soup(markup)
- decoded = soup.decode(formatter="html")
- assert decoded == self.document_for(
- " <<Sacré bleu!>>"
- )
-
- def test_formatter_html5(self):
- markup = " <<Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!>>"
- soup = self.soup(markup)
- decoded = soup.decode(formatter="html5")
- assert decoded == self.document_for(
- " <<Sacré bleu!>>"
- )
-
- def test_formatter_minimal(self):
- markup = "<<Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!>>"
- soup = self.soup(markup)
- decoded = soup.decode(formatter="minimal")
- # The < is converted back into < but the e-with-acute is left alone.
- assert decoded == self.document_for(
- "<<Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!>>"
- )
-
- def test_formatter_null(self):
- markup = "<<Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!>>"
- soup = self.soup(markup)
- decoded = soup.decode(formatter=None)
- # Neither the angle brackets nor the e-with-acute are converted.
- # This is not valid HTML, but it's what the user wanted.
- assert decoded == self.document_for(
- "<>"
- )
-
- def test_formatter_custom(self):
- markup = "<foo>bar "
- soup = self.soup(markup)
- decoded = soup.decode(formatter = lambda x: x.upper())
- # Instead of normal entity conversion code, the custom
- # callable is called on every string.
- assert decoded == self.document_for("BAR ")
-
- def test_formatter_is_run_on_attribute_values(self):
- markup = 'e'
- soup = self.soup(markup)
- a = soup.a
-
- expect_minimal = 'e'
-
- assert expect_minimal == a.decode()
- assert expect_minimal == a.decode(formatter="minimal")
-
- expect_html = 'e'
- assert expect_html == a.decode(formatter="html")
-
- assert markup == a.decode(formatter=None)
- expect_upper = 'E'
- assert expect_upper == a.decode(formatter=lambda x: x.upper())
-
- def test_formatter_skips_script_tag_for_html_documents(self):
- doc = """
-
-"""
- encoded = BeautifulSoup(doc, 'html.parser').encode()
- assert b"< < hey > >" in encoded
-
- def test_formatter_skips_style_tag_for_html_documents(self):
- doc = """
-
-"""
- encoded = BeautifulSoup(doc, 'html.parser').encode()
- assert b"< < hey > >" in encoded
-
- def test_prettify_leaves_preformatted_text_alone(self):
- soup = self.soup("
foo
\tbar\n \n
baz
")
- # Everything outside the
tag is reformatted, but everything
- # inside is left alone.
- assert '
\n foo\n
\tbar\n \n
\n baz\n \n
\n' == soup.div.prettify()
-
- def test_prettify_handles_nested_string_literal_tags(self):
- # Most of this markup is inside a
tag, so prettify()
- # only does three things to it:
- # 1. Add a newline and a space between the
and the
- # 2. Add a newline after the
- # 3. Add a newline at the end.
- #
- # The contents of the
tag are left completely alone. In
- # particular, we don't start adding whitespace again once we
- # encounter the first
tag, because we know it's not
- # the one that put us into string literal mode.
- markup = """
some
- for you
-
"""
-
- expect = """
-
some
- for you
-
-
-"""
- soup = self.soup(markup)
- assert expect == soup.div.prettify()
-
- def test_prettify_accepts_formatter_function(self):
- soup = BeautifulSoup("foo", 'html.parser')
- pretty = soup.prettify(formatter = lambda x: x.upper())
- assert "FOO" in pretty
-
- def test_prettify_outputs_unicode_by_default(self):
- soup = self.soup("")
- assert str == type(soup.prettify())
-
- def test_prettify_can_encode_data(self):
- soup = self.soup("")
- assert bytes == type(soup.prettify("utf-8"))
-
- def test_html_entity_substitution_off_by_default(self):
- markup = "Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!"
- soup = self.soup(markup)
- encoded = soup.b.encode("utf-8")
- assert encoded == markup.encode('utf-8')
-
- def test_encoding_substitution(self):
- # Here's the tag saying that a document is
- # encoded in Shift-JIS.
- meta_tag = ('')
- soup = self.soup(meta_tag)
-
-        # Parse the document, and the charset appears unchanged.
- assert soup.meta['content'] == 'text/html; charset=x-sjis'
-
- # Encode the document into some encoding, and the encoding is
- # substituted into the meta tag.
- utf_8 = soup.encode("utf-8")
- assert b"charset=utf-8" in utf_8
-
- euc_jp = soup.encode("euc_jp")
- assert b"charset=euc_jp" in euc_jp
-
- shift_jis = soup.encode("shift-jis")
- assert b"charset=shift-jis" in shift_jis
-
- utf_16_u = soup.encode("utf-16").decode("utf-16")
- assert "charset=utf-16" in utf_16_u
-
- def test_encoding_substitution_doesnt_happen_if_tag_is_strained(self):
- markup = ('
foo
')
-
- # Beautiful Soup used to try to rewrite the meta tag even if the
- # meta tag got filtered out by the strainer. This test makes
- # sure that doesn't happen.
- strainer = SoupStrainer('pre')
- soup = self.soup(markup, parse_only=strainer)
- assert soup.contents[0].name == 'pre'
-
-
-class TestPersistence(SoupTest):
- "Testing features like pickle and deepcopy."
-
- def setup_method(self):
- self.page = """
-
-
-
-Beautiful Soup: We called him Tortoise because he taught us.
-
-
-
-
-
-
-foo
-bar
-
-"""
- self.tree = self.soup(self.page)
-
- def test_pickle_and_unpickle_identity(self):
- # Pickling a tree, then unpickling it, yields a tree identical
- # to the original.
- dumped = pickle.dumps(self.tree, 2)
- loaded = pickle.loads(dumped)
- assert loaded.__class__ == BeautifulSoup
- assert loaded.decode() == self.tree.decode()
-
- def test_deepcopy_identity(self):
- # Making a deepcopy of a tree yields an identical tree.
- copied = copy.deepcopy(self.tree)
- assert copied.decode() == self.tree.decode()
-
- def test_copy_deeply_nested_document(self):
- # This test verifies that copy and deepcopy don't involve any
- # recursive function calls. If they did, this test would
- # overflow the Python interpreter stack.
- limit = sys.getrecursionlimit() + 1
- markup = "" * limit
-
- soup = self.soup(markup)
-
- copied = copy.copy(soup)
- copied = copy.deepcopy(soup)
-
- def test_copy_preserves_encoding(self):
- soup = BeautifulSoup(b'
-
-Recognizing the quirk ways to get this ebook interactive anatomy study guide is ... plays, poetry, and non-fiction texts are all available for you to download at your leisure. ... Learn anatomy of the spine: Diagrams and interactive vertebrae quizzes. ... EasyAnatomy is an interactive 3D canine anatomy study & reference app. 1fdad05405
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/CHESS GAMES CHESSBASE MEGA DATABASE 2013.torrent.md b/spaces/lincquiQcaudo/Top-20-Diffusion/CHESS GAMES CHESSBASE MEGA DATABASE 2013.torrent.md
deleted file mode 100644
index 2b21eed2d604d189a4094ebd0e352241b8da07aa..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/CHESS GAMES CHESSBASE MEGA DATABASE 2013.torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Founded in 1986, it maintains and sells massive databases containing recorded chess moves. All World Championship games, Chess Olympiads, major tournaments, and many local competitions that your closest opponent may have competed in. ChessBase increased this number to ... one million. But in order for all this to work, you need a good support team. And ChessBase has it - the site has a forum where players can discuss questions about chess, as well as the opportunity to seek help from professionals. So if you suddenly find that your strength in chess is lower than you expected, or you need help understanding a complex game, then ChessBase is what you need. And if you want to try your hand at chess - just try it. 8a78ff9644
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Curtis 1314 Pc Programming Station Software 92.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Curtis 1314 Pc Programming Station Software 92.md
deleted file mode 100644
index 7cc89aab0fca4817a38755c451fdbd3ecf24b08e..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Curtis 1314 Pc Programming Station Software 92.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
- `)
- }
- initList(bySentence, d3.select('.list'))
-
-
-
- function setSentenceAsPair(s){
- s.e0 = d3.range(python_data.vocab.length).map(d => -Infinity)
- s.e1 = d3.range(python_data.vocab.length).map(d => -Infinity)
- s.forEach(d => {
- s.e0[d.tokenIndex] = d.e0
- s.e1[d.tokenIndex] = d.e1
- })
-
- s.label0 = s.s0
- s.label1 = s.s1
- s.vocab = python_data.vocab
- s.count = python_settings.count || 150
- s.isDifference = python_settings.isDifference
-
- var sel = d3.select('.pair').html('').st({width: 400})
-
- initPair(s, sel)
-
- d3.selectAll('.sentence').classed('active', d => d == s)
-
- d3.selectAll('div.sentence').filter(d => d == s)
- .each(function(){
- this.scrollIntoView({ block: 'nearest', inline: 'nearest'})
- })
- }
-
- setSentenceAsPair(bySentence[0])
-
-}
-
-
-window.init()
-
diff --git a/spaces/merve/dataset-worldviews/public/anonymization/init.js b/spaces/merve/dataset-worldviews/public/anonymization/init.js
deleted file mode 100644
index 5e181d580ff878e75ebbd508b052866e42c2ac1a..0000000000000000000000000000000000000000
--- a/spaces/merve/dataset-worldviews/public/anonymization/init.js
+++ /dev/null
@@ -1,77 +0,0 @@
-d3.select('body').selectAppend('div.tooltip.tooltip-hidden')
-
-window.ages = '18 19 20 21 22'.split(' ')
-window.states = 'RI NH NY CT VT'.split(' ')
-
-window.init = function(){
- // console.clear()
- var graphSel = d3.select('#graph').html('').append('div')
- window.c = d3.conventions({
- sel: graphSel,
- width: 460,
- height: 460,
- })
-
- function sizeGraphSel(){
- var clientWidth = d3.select('body').node().clientWidth
-
- window.scale = d3.clamp(1, (c.totalWidth + 35)/(clientWidth - 10), 2) // off by one, s is 35
-
- graphSel.st({
- transform: `scale(${1/scale})`,
- transformOrigin: `0px 0px`,
- })
-
- d3.select('#graph').st({height: scale == 1 ? 500 : 710})
- }
- sizeGraphSel()
- d3.select(window).on('resize', sizeGraphSel)
-
-
- c.svg = c.svg.append('g').translate([.5, .5])
-
- window.axii = makeAxii()
- window.sliders = makeSliders()
- window.students = makeStudents()
- window.sel = makeSel()
- window.slides = makeSlides()
- window.estimates = makeEstimates()
-
-
-
-
- var error = 0
- while (error < .02 || error > .05){
- estimates.flipCoin()
- error = Math.abs(estimates.active.val - .5)
- }
-
- makeGS()
-}
-
-init()
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/spaces/merve/fill-in-the-blank/source/fill-in-the-blank/init-gender-over-time.js b/spaces/merve/fill-in-the-blank/source/fill-in-the-blank/init-gender-over-time.js
deleted file mode 100644
index 4e678f28d4669d45b6957cd3e110b325875a41a1..0000000000000000000000000000000000000000
--- a/spaces/merve/fill-in-the-blank/source/fill-in-the-blank/init-gender-over-time.js
+++ /dev/null
@@ -1,181 +0,0 @@
-/* Copyright 2021 Google LLC. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-==============================================================================*/
-
-
-
-window.initGenderOverTime = async () => {
- if (!window.genderOverTimeData){
- window.genderOverTimeData = await (await fetch('data/gender-over-time.json')).json()
- }
-
- var isMobile = innerWidth <= 1100
-
- var sentences = window.genderOverTimeData
-
- var blocks = [
- {
- text: 'placeholder',
- sentences: sentences.slice(0, 3),
-      ariaLabel: 'Gendered differences in predicted occupations, studies and names are smaller with an "in 2000" prefix than with an "in 1860" prefix.'
- },
- {
- text: 'placeholder',
- sentences: [sentences[3], sentences[5], sentences[4]],
-      ariaLabel: 'Gendered differences in game play and bears do not decrease.'
-
- },
- ]
-
- var blockSel = d3.selectAll('.gender-over-time').html('').data(blocks)
- .st({marginBottom: 30, marginTop: 30})
- .at({role: 'graphics-document', 'aria-label': d => d.ariaLabel})
-
- var sentenceSel = blockSel.appendMany('div.sentence', d => d.sentences)
- .st({display: 'inline-block'})
- .each(drawSentence)
-
- blockSel.filter((d, i) => !i).append('div.g-caption').html(`
-    The top 150 “he” and “she” completions for each year from 1860 to 2018 are shown
- with the y position encoding he_logit - she_logit.
- Run in Colab →`)
-
-
-
- async function drawSentence({s0, s1, tidyCSV, minYear}, i){
- var tidy = d3.csvParse(tidyCSV)
- var {colors} = util
-
- tidy.forEach(d => {
- d.year = minYear + +d.year_index
- d.i = +d.token_index
- d.e0 = +d.e0
- d.e1 = +d.e1
- d.mean = d.e0 + d.e1
- d.dif = d.e0 - d.e1
- })
-
- var sel = d3.select(this)
-
- function fmtStr(d){
- return d.replace('[MASK]', '___').replace('YEAR', '$year')
- .replace(' he ', ' he ')
- .replace(' she ', ' she ')
- .replace(' his ', ' his ')
- .replace(' her ', ' her ')
- .replace(' they ', ' they ')
- }
- sel.classed('is-bear', d => s0.includes('bear'))
-
- var c0 = s0.includes('they') ? colors[2] : colors[0]
- var c1 = s1.includes('they') ? colors[2] : colors[1]
-
- sel.append('div.sentence-title').st({color: c0}).html(fmtStr(s0))
- sel.append('div.sentence-title').st({color: c1}).html(fmtStr(s1))
-
- var e0Extent = d3.extent(tidy, d => d.e0)
- var e1Extent = d3.extent(tidy, d => d.e1)
- var e0e1Exent = d3.extent(e0Extent.concat(e1Extent))
-
- var maxDif = d3.max(d3.extent(tidy, d => d.dif), Math.abs)
- var difExtent = [-maxDif, maxDif]
-
- drawDim(tidy, sel, {
- key: 'dif',
- yExtent: difExtent,
- rectColor: [c0, c1]
- })
- // drawDim(tidy, sel, {
- // key: 'e0',
- // yExtent: e0e1Exent,
- // rectColor: [colors[0], colors[0]]
- // })
- // drawDim(tidy, sel, {
- // key: 'e1',
- // yExtent: e0e1Exent,
- // rectColor: [colors[1], colors[1]]
- // })
- }
-
- function drawDim(tidy, sel, {key, rectColor, yExtent}){
- var c = d3.conventions({
- sel: sel.append('div'),
- height: 240,
- // width: 240,
- margin: {left: 20, bottom: 20, right: 80, top: 5}
- })
-
- c.svg.append('rect')
- .at({width: c.width, height: c.height/2, opacity: .1, fill: rectColor[0]})
-
- c.svg.append('rect')
- .at({width: c.width, height: c.height/2, opacity: .1, fill: rectColor[1], y: c.height/2})
-
- c.x.domain(d3.extent(tidy, d => d.year)).interpolate(d3.interpolateRound)
- c.y.domain(yExtent).interpolate(d3.interpolateRound)
-
- c.xAxis.tickFormat(d => d).ticks(5)
- c.yAxis.ticks(c.y.ticks(2).length > 2 ? 2 : 3).tickFormat(d3.format('+'))
- d3.drawAxis(c)
- // c.svg.select('.y .tick text').st({fill: d => !d ? '' : rectColor[d < 0 ? 0 : 1]})
-
- var byToken = d3.nestBy(tidy, d => d.i)
- byToken.forEach(d => {
- d.endY = c.y(_.last(d)[key])
- d.str = bertLargeVocab[+d.key].replace('▁', '')
- d.displayLabel = true
- d.mean = d3.sum(d, e => e.mean)
- d.keyMean = d3.sum(d, e => e[key])
- })
-
- d3.nestBy(_.sortBy(byToken, d => -d.mean), d => Math.round(d.endY/12))
- .forEach(d => d.forEach((e, i) => e.displayLabel = !i))
-
- var line = d3.line()
- .x(d => c.x(d.year))
- .y(d => c.y(d[key]))
-
- var tokenSel = c.svg.appendMany('g.time-token', byToken)
- // .call(d3.attachTooltip)
- .on('mouseover', function(d){
- d3.selectAll('g.time-token')
- .classed('active', 0)
- .filter(e => e.str == d.str)
- .classed('active', 1)
- .raise()
- })
-
- c.svg.on('mouseleave', function(){
- d3.selectAll('g.time-token').classed('active', 0)
- })
-
- tokenSel.append('text')
- .text(d => d.str)
- .translate(d => [c.width + 2, d.endY])
- .at({fontSize: 10, dy: '.33em', fill: (d, i) => d.displayLabel ? '#999' : 'rgba(0,0,0,0)'})
-
- tokenSel.append('path')
- .at({
- d: line,
- stroke: '#000',
- opacity: .2,
- fill: 'none',
- })
-
- }
-}
-
-
-if (window.init) window.init()
-
diff --git a/spaces/merve/measuring-fairness/source/uncertainty-calibration/draw_calibrationcurve.js b/spaces/merve/measuring-fairness/source/uncertainty-calibration/draw_calibrationcurve.js
deleted file mode 100644
index c7992a7c79b1a5187bc3f267350869904c836626..0000000000000000000000000000000000000000
--- a/spaces/merve/measuring-fairness/source/uncertainty-calibration/draw_calibrationcurve.js
+++ /dev/null
@@ -1,102 +0,0 @@
-
-window.drawCalibrationCurve = function (graphSel, fig_height, fig_width){
- var width = Math.min(fig_height, fig_width)
- var sel = graphSel
- .append('div').st({textAlign: 'center'})
- .append('div').st({display: 'inline-block'})
-
- var c = d3.conventions({
- sel,
- width,
- height: width,
- margin: {top: 40}
- });
-
- c.svg.parent()
-
- //TODO(nthain) Who owns the buckets? We have at least 2 instances, reduce to 1
- var buckets = d3.pairs(window.weatherGraph.thresholds)
- buckets.forEach(bucket => {
- bucket.val = d3.mean(bucket, d => d.origVal)
- })
-
- c.xAxis.tickValues(buckets.map(d => d.val)).tickFormat(d3.format('.2f'))
- c.yAxis.tickValues(buckets.map(d => d.val)).tickFormat(d3.format('.2f'))
- d3.drawAxis(c)
- window.util.ggPlotBg(c)
-
- window.util.addAxisLabel(c, 'Calibrated Model Score', 'Probability of Rain')
-
- var eceSel = c.svg.append('g.ece')
- var eceBox = eceSel.append('rect.val-box')
- .at({width: 55, height: 20, x: c.width/2 + 72.5, y: -35, rx: 3, ry: 3})
- var eceText = eceSel.append('text.big-text')
- .at({y: -20, x: c.width/2-30, textAnchor: 'middle'})
- var eceVal = eceSel.append('text.val-text')
- .at({y: -20, x: c.width/2+100, textAnchor: 'middle'})
-
- c.svg.append('path')
- .at({
- d: ['M', 0, c.height, 'L', c.width, 0].join(' '),
- stroke: '#555',
- strokeDasharray: '3 3',
- })
-
- var bucketSel = c.svg.appendMany('g.bucket', buckets)
-
- var circleSel = bucketSel.append('circle')
- .at({fillOpacity: .4, fill: 'steelblue'})
-
- var pathSel = bucketSel.append('path')
- .at({stroke: 'steelblue', strokeWidth: 3})
-
- var bucketText = bucketSel.append('text').text('8 / 10')
- .at({textAnchor: 'start', dy: '.33em', fontSize: 10, fill: '#000'})
-
-
- // function remap_score(s) {
- // // new_score = min_threshold_new + (old_score-min_threshold_old)(max_threshold_new-min_threshold_new)/(max_threshold_old-min_threshold_old)
- // //find index less than score
- // }
-
- function renderBuckets(){
- var filter_rain = window.slides.slide?.filter_rain
-
- buckets.forEach(bucket => {
- bucket.data = weatherdata
- .filter(d => bucket[0].val <= d.score && d.score <= bucket[1].val)
- .filter(d => !filter_rain || !d.is_filter)
-
- bucket.nPositive = d3.sum(bucket.data, d => d.label)
- bucket.percent = bucket.nPositive/bucket.data.length
-
- if (isNaN(bucket.percent)) bucket.percent = bucket[0].val
- })
-
- var ece = d3.sum(buckets, d => d.data.length*Math.abs(d.val - d.percent))
- ece = ece/d3.sum(buckets, d => d.data.length)
-
- eceText.text('Expected Calibration Error: ')
- eceVal.text(d3.format('.3f')(ece))
-
- var rScale = d3.scaleSqrt().domain([0, 50]).range([0, 20])
-
- bucketSel
- .st({opacity: d => d.data.length})
- .filter(d => d.data.length)
- .translate(d => [c.x(d.val), c.y(d.percent)])
-
- circleSel
- .at({r: d => rScale(d.data.length)})
-
- pathSel.at({d: d => 'M 0 0 V ' + (c.y(d.val) - c.y(d.percent))})
-
- bucketText
- .text(d => `${d.nPositive} / ${d.data.length}`)
- .at({x: d => rScale(d.data.length) + 2})
- }
-
- return {renderBuckets, c, buckets, calibrationDataFn: () => console.log('test')}
-}
-
-if (window.init) window.init()
diff --git a/spaces/merve/uncertainty-calibration/source/uncertainty-calibration/draw_slides.js b/spaces/merve/uncertainty-calibration/source/uncertainty-calibration/draw_slides.js
deleted file mode 100644
index 17ab651b01bc454c7168d55d28d5d8b42b26379b..0000000000000000000000000000000000000000
--- a/spaces/merve/uncertainty-calibration/source/uncertainty-calibration/draw_slides.js
+++ /dev/null
@@ -1,160 +0,0 @@
-window.drawSlides = function(){
- var slides = [
- {
- id: 'intro',
- visible_threshold: 0, //Also sets pointerEvents
- visible_tmessage: 0,
- visible_calibration: 0,
- constant_model_score: 0,
- },
- {
- id: 'thresholding',
- visible_threshold: 1,
- visible_tmessage: 0,
- visible_calibration: 0,
- constant_model_score: 0,
- // target_thresholds: [0, 0.25, 0.35, 0.6, 0.7, 1]
- target_threshold: .4
- },
- {
- id: 'adjustable_thresholding',
- visible_threshold: 1,
- visible_tmessage: 1,
- visible_calibration: 0,
- constant_model_score: 0,
- target_threshold: .47
- // target_thresholds: [0, 0.25, 0.35, 0.6, 0.7, 1]
- },
- {
- id: 'calibration',
- visible_threshold: 0,
- visible_tmessage: 0,
- visible_calibration: 1,
- constant_model_score: 0,
- target_thresholds: [0, 0.2, 0.4, 0.6, 0.8, 1]
- },
- {
- id: 'adjusting_calibration',
- visible_threshold: 0,
- visible_tmessage: 0,
- visible_calibration: 1,
- constant_model_score: 0,
- target_thresholds: [0, 0.15, 0.45, 0.55, 0.83, 1]
- },
- // {
- // id: 'improving_calibration',
- // visible_threshold: 0,
- // visible_calibration: 1,
- // constant_model_score: 1,
- // target_thresholds: [0, 0.305, 0.407, 0.503, 0.649, 1],
- // },
- {
- id: 'shifting_data',
- visible_threshold: 0,
- visible_tmessage: 0,
- visible_calibration: 1,
- constant_model_score: 1,
- filter_rain: true
- },
- {
- id: 'beyond_calibration',
- visible_threshold: 0,
- visible_tmessage: 0,
- visible_calibration: 1,
- constant_model_score: 1,
- target_thresholds: [0, .02, .04, .96, .98, 1],
- },
-
- ]
-
- var prevSlide = null;
-
- var gs = d3.graphScroll()
- .container(d3.select('#container'))
- .graph(d3.selectAll('#container #graph'))
- .eventId('uniqueId1') // namespace for scroll and resize events
- .sections(d3.selectAll('#container #sections > div'))
- .offset(window.isMobile ? 300 : 200)
- .on('active', function(i){
- try{
- var slide = slides.slide = slides[i]
-
- if (!slide) return console.log(`missing slide ${i}`)
-
- // if(slide.id != 'slide1'){
- // weatherGraph.prediction_sel.at({opacity:0});
- // }
-
- // if(slide.constant_model_score){
- // weatherGraph.icon_sel.transition().duration(500)
- // .at({y: constant_score})
- // }
- // else {
- // weatherGraph.icon_sel.transition().duration(500)
- // .at({y: d => c.y(d.h)})
- // }
-
- //weatherGraph.threshold_sel.classed('temp')
-
- var transition_duration = prevSlide ? 500 : 0;
-
- // Animate threshold and thresholds between slides
- var durationScale = 1
- if (prevSlide){
- durationScale = prevSlide.visible_calibration == slide.visible_calibration ? 1 : 3
- }
- if (slide.target_thresholds){
- weatherGraph.setThresholds(slide.target_thresholds, transition_duration*durationScale)
- }
- if (slide.target_threshold){
- weatherGraph.setThreshold(slide.target_threshold, transition_duration*durationScale)
- }
-
- calibrationCurve.renderBuckets()
-
-
- weatherGraph.thresholdSel
- .st({pointerEvents: slide.visible_threshold ? 'all' : 'none'})
- .transition().duration(transition_duration)
- .st({opacity: slide.visible_threshold});
-
- weatherGraph.messageSel
- .transition().duration(transition_duration)
- .st({opacity: slide.visible_tmessage});
-
- weatherGraph.predictionSel
- .transition().duration(transition_duration)
- .at({strokeOpacity: slide.visible_threshold ? 1: 0});
-
- weatherGraph.weatherGroupSel
- .transition().duration(transition_duration)
- .ease(d3.easeBounce).delay((d, i) => Math.random()*transition_duration)
- .st({opacity: d => slide.filter_rain && d.is_filter ? 0 : 1})
-
- weatherGraph.thresholdsGroupSel
- .st({pointerEvents: slide.visible_calibration ? 'all' : 'none'})
- .transition().duration(transition_duration)
- .st({opacity: slide.visible_calibration})
-
- calibrationCurve.c.svg
- .transition().duration(transition_duration)
- .st({opacity: slide.visible_calibration})
-
-
- prevSlide = slide;
- } catch (e){
- console.log(e)
- }
- })
-
- return slides
-}
-
-if (window.init) window.init()
-
-
-/*
-
-
-
-*/
\ No newline at end of file
diff --git a/spaces/meyabase/oshiwambo-speech-greetings/app.css b/spaces/meyabase/oshiwambo-speech-greetings/app.css
deleted file mode 100644
index 6fcc2b6d1ee451e2be66dcc423a99d7e4845ed62..0000000000000000000000000000000000000000
--- a/spaces/meyabase/oshiwambo-speech-greetings/app.css
+++ /dev/null
@@ -1,38 +0,0 @@
-
-.infoPoint h1 {
- font-size: 30px;
-  font-weight: bold;
-}
-
-a {
- text-decoration: underline;
-  color: #1f3b54;
-}
-
-.finished {
-  color: rgb(9, 102, 169);
-  font-size: 13px;
-}
-
-table {
-
- margin: 25px 0;
- font-size: 0.9em;
- font-family: sans-serif;
- min-width: 400px;
- max-width: 400px;
- box-shadow: 0 0 20px rgba(0, 0, 0, 0.15);
-}
-
-table th,
-table td {
- padding: 12px 15px;
-}
-
-tr {
-  text-align: left;
-}
-thead tr {
-  text-align: left;
-}
diff --git a/spaces/mfrashad/ClothingGAN/netdissect/__main__.py b/spaces/mfrashad/ClothingGAN/netdissect/__main__.py
deleted file mode 100644
index e2bd9f630eaa0f45a6a201adcf356a1e092050cb..0000000000000000000000000000000000000000
--- a/spaces/mfrashad/ClothingGAN/netdissect/__main__.py
+++ /dev/null
@@ -1,408 +0,0 @@
-import torch, sys, os, argparse, textwrap, numbers, numpy, json, PIL
-from torchvision import transforms
-from torch.utils.data import TensorDataset
-from netdissect.progress import verbose_progress, print_progress
-from netdissect import InstrumentedModel, BrodenDataset, dissect
-from netdissect import MultiSegmentDataset, GeneratorSegRunner
-from netdissect import ImageOnlySegRunner
-from netdissect.parallelfolder import ParallelImageFolders
-from netdissect.zdataset import z_dataset_for_model
-from netdissect.autoeval import autoimport_eval
-from netdissect.modelconfig import create_instrumented_model
-from netdissect.pidfile import exit_if_job_done, mark_job_done
-
-help_epilog = '''\
-Example: to dissect three layers of the pretrained alexnet in torchvision:
-
-python -m netdissect \\
- --model "torchvision.models.alexnet(pretrained=True)" \\
- --layers features.6:conv3 features.8:conv4 features.10:conv5 \\
- --imgsize 227 \\
- --outdir dissect/alexnet-imagenet
-
-To dissect a progressive GAN model:
-
-python -m netdissect \\
- --model "proggan.from_pth_file('model/churchoutdoor.pth')" \\
- --gan
-'''
-
-def main():
- # Training settings
- def strpair(arg):
- p = tuple(arg.split(':'))
- if len(p) == 1:
- p = p + p
- return p
- def intpair(arg):
- p = arg.split(',')
- if len(p) == 1:
- p = p + p
- return tuple(int(v) for v in p)
-
- parser = argparse.ArgumentParser(description='Net dissect utility',
- prog='python -m netdissect',
- epilog=textwrap.dedent(help_epilog),
- formatter_class=argparse.RawDescriptionHelpFormatter)
- parser.add_argument('--model', type=str, default=None,
- help='constructor for the model to test')
- parser.add_argument('--pthfile', type=str, default=None,
- help='filename of .pth file for the model')
- parser.add_argument('--unstrict', action='store_true', default=False,
- help='ignore unexpected pth parameters')
- parser.add_argument('--submodule', type=str, default=None,
- help='submodule to load from pthfile')
- parser.add_argument('--outdir', type=str, default='dissect',
- help='directory for dissection output')
- parser.add_argument('--layers', type=strpair, nargs='+',
- help='space-separated list of layer names to dissect' +
- ', in the form layername[:reportedname]')
- parser.add_argument('--segments', type=str, default='dataset/broden',
- help='directory containing segmentation dataset')
- parser.add_argument('--segmenter', type=str, default=None,
-                        help='constructor for a segmenter class')
- parser.add_argument('--download', action='store_true', default=False,
- help='downloads Broden dataset if needed')
- parser.add_argument('--imagedir', type=str, default=None,
- help='directory containing image-only dataset')
- parser.add_argument('--imgsize', type=intpair, default=(227, 227),
- help='input image size to use')
- parser.add_argument('--netname', type=str, default=None,
- help='name for network in generated reports')
- parser.add_argument('--meta', type=str, nargs='+',
- help='json files of metadata to add to report')
- parser.add_argument('--merge', type=str,
- help='json file of unit data to merge in report')
- parser.add_argument('--examples', type=int, default=20,
- help='number of image examples per unit')
- parser.add_argument('--size', type=int, default=10000,
- help='dataset subset size to use')
- parser.add_argument('--batch_size', type=int, default=100,
- help='batch size for forward pass')
- parser.add_argument('--num_workers', type=int, default=24,
- help='number of DataLoader workers')
- parser.add_argument('--quantile_threshold', type=strfloat, default=None,
- choices=[FloatRange(0.0, 1.0), 'iqr'],
- help='quantile to use for masks')
- parser.add_argument('--no-labels', action='store_true', default=False,
- help='disables labeling of units')
- parser.add_argument('--maxiou', action='store_true', default=False,
- help='enables maxiou calculation')
- parser.add_argument('--covariance', action='store_true', default=False,
- help='enables covariance calculation')
- parser.add_argument('--rank_all_labels', action='store_true', default=False,
- help='include low-information labels in rankings')
- parser.add_argument('--no-images', action='store_true', default=False,
- help='disables generation of unit images')
- parser.add_argument('--no-report', action='store_true', default=False,
-                        help='disables generation of the report summary')
- parser.add_argument('--no-cuda', action='store_true', default=False,
- help='disables CUDA usage')
- parser.add_argument('--gen', action='store_true', default=False,
- help='test a generator model (e.g., a GAN)')
- parser.add_argument('--gan', action='store_true', default=False,
- help='synonym for --gen')
- parser.add_argument('--perturbation', default=None,
- help='filename of perturbation attack to apply')
- parser.add_argument('--add_scale_offset', action='store_true', default=None,
- help='offsets masks according to stride and padding')
- parser.add_argument('--quiet', action='store_true', default=False,
- help='silences console output')
- if len(sys.argv) == 1:
- parser.print_usage(sys.stderr)
- sys.exit(1)
- args = parser.parse_args()
- args.images = not args.no_images
- args.report = not args.no_report
- args.labels = not args.no_labels
- if args.gan:
- args.gen = args.gan
-
- # Set up console output
- verbose_progress(not args.quiet)
-
- # Exit right away if job is already done or being done.
- if args.outdir is not None:
- exit_if_job_done(args.outdir)
-
- # Speed up pytorch
- torch.backends.cudnn.benchmark = True
-
- # Special case: download flag without model to test.
- if args.model is None and args.download:
- from netdissect.broden import ensure_broden_downloaded
- for resolution in [224, 227, 384]:
- ensure_broden_downloaded(args.segments, resolution, 1)
- from netdissect.segmenter import ensure_upp_segmenter_downloaded
- ensure_upp_segmenter_downloaded('dataset/segmodel')
- sys.exit(0)
-
- # Help if broden is not present
- if not args.gen and not args.imagedir and not os.path.isdir(args.segments):
- print_progress('Segmentation dataset not found at %s.' % args.segments)
- print_progress('Specify dataset directory using --segments [DIR]')
- print_progress('To download Broden, run: netdissect --download')
- sys.exit(1)
-
- # Default segmenter class
- if args.gen and args.segmenter is None:
- args.segmenter = ("netdissect.segmenter.UnifiedParsingSegmenter(" +
- "segsizes=[256], segdiv='quad')")
-
- # Default threshold
- if args.quantile_threshold is None:
- if args.gen:
- args.quantile_threshold = 'iqr'
- else:
- args.quantile_threshold = 0.005
-
- # Set up CUDA
- args.cuda = not args.no_cuda and torch.cuda.is_available()
- if args.cuda:
- torch.backends.cudnn.benchmark = True
-
- # Construct the network with specified layers instrumented
- if args.model is None:
- print_progress('No model specified')
- sys.exit(1)
- model = create_instrumented_model(args)
-
- # Update any metadata from files, if any
- meta = getattr(model, 'meta', {})
- if args.meta:
- for mfilename in args.meta:
- with open(mfilename) as f:
- meta.update(json.load(f))
-
- # Load any merge data from files
- mergedata = None
- if args.merge:
- with open(args.merge) as f:
- mergedata = json.load(f)
-
- # Set up the output directory, verify write access
- if args.outdir is None:
- args.outdir = os.path.join('dissect', type(model).__name__)
- exit_if_job_done(args.outdir)
- print_progress('Writing output into %s.' % args.outdir)
- os.makedirs(args.outdir, exist_ok=True)
- train_dataset = None
-
- if not args.gen:
- # Load dataset for classifier case.
- # Load perturbation
- perturbation = numpy.load(args.perturbation
- ) if args.perturbation else None
- segrunner = None
-
- # Load broden dataset
- if args.imagedir is not None:
- dataset = try_to_load_images(args.imagedir, args.imgsize,
- perturbation, args.size)
- segrunner = ImageOnlySegRunner(dataset)
- else:
- dataset = try_to_load_broden(args.segments, args.imgsize, 1,
- perturbation, args.download, args.size)
- if dataset is None:
- dataset = try_to_load_multiseg(args.segments, args.imgsize,
- perturbation, args.size)
- if dataset is None:
-                print_progress('No segmentation dataset found in %s' %
- args.segments)
- print_progress('use --download to download Broden.')
- sys.exit(1)
- else:
- # For segmenter case the dataset is just a random z
- dataset = z_dataset_for_model(model, args.size)
- train_dataset = z_dataset_for_model(model, args.size, seed=2)
- segrunner = GeneratorSegRunner(autoimport_eval(args.segmenter))
-
- # Run dissect
- dissect(args.outdir, model, dataset,
- train_dataset=train_dataset,
- segrunner=segrunner,
- examples_per_unit=args.examples,
- netname=args.netname,
- quantile_threshold=args.quantile_threshold,
- meta=meta,
- merge=mergedata,
- make_images=args.images,
- make_labels=args.labels,
- make_maxiou=args.maxiou,
- make_covariance=args.covariance,
- make_report=args.report,
- make_row_images=args.images,
- make_single_images=True,
- rank_all_labels=args.rank_all_labels,
- batch_size=args.batch_size,
- num_workers=args.num_workers,
- settings=vars(args))
-
- # Mark the directory so that it's not done again.
- mark_job_done(args.outdir)
-
-class AddPerturbation(object):
- def __init__(self, perturbation):
- self.perturbation = perturbation
-
- def __call__(self, pic):
- if self.perturbation is None:
- return pic
- # Convert to a numpy float32 array
- npyimg = numpy.array(pic, numpy.uint8, copy=False
- ).astype(numpy.float32)
- # Center the perturbation
- oy, ox = ((self.perturbation.shape[d] - npyimg.shape[d]) // 2
- for d in [0, 1])
- npyimg += self.perturbation[
- oy:oy+npyimg.shape[0], ox:ox+npyimg.shape[1]]
- # Pytorch conventions: as a float it should be [0..1]
- npyimg.clip(0, 255, npyimg)
- return npyimg / 255.0
-
-def test_dissection():
- verbose_progress(True)
- from torchvision.models import alexnet
- from torchvision import transforms
- model = InstrumentedModel(alexnet(pretrained=True))
- model.eval()
- # Load an alexnet
- model.retain_layers([
- ('features.0', 'conv1'),
- ('features.3', 'conv2'),
- ('features.6', 'conv3'),
- ('features.8', 'conv4'),
- ('features.10', 'conv5') ])
- # load broden dataset
- bds = BrodenDataset('dataset/broden',
- transform=transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(IMAGE_MEAN, IMAGE_STDEV)]),
- size=100)
- # run dissect
- dissect('dissect/test', model, bds,
- examples_per_unit=10)
-
-def try_to_load_images(directory, imgsize, perturbation, size):
- # Load plain image dataset
- # TODO: allow other normalizations.
- return ParallelImageFolders(
- [directory],
- transform=transforms.Compose([
- transforms.Resize(imgsize),
- AddPerturbation(perturbation),
- transforms.ToTensor(),
- transforms.Normalize(IMAGE_MEAN, IMAGE_STDEV)]),
- size=size)
-
-def try_to_load_broden(directory, imgsize, broden_version, perturbation,
- download, size):
- # Load broden dataset
- ds_resolution = (224 if max(imgsize) <= 224 else
- 227 if max(imgsize) <= 227 else 384)
- if not os.path.isfile(os.path.join(directory,
- 'broden%d_%d' % (broden_version, ds_resolution), 'index.csv')):
- return None
- return BrodenDataset(directory,
- resolution=ds_resolution,
- download=download,
- broden_version=broden_version,
- transform=transforms.Compose([
- transforms.Resize(imgsize),
- AddPerturbation(perturbation),
- transforms.ToTensor(),
- transforms.Normalize(IMAGE_MEAN, IMAGE_STDEV)]),
- size=size)
-
-def try_to_load_multiseg(directory, imgsize, perturbation, size):
- if not os.path.isfile(os.path.join(directory, 'labelnames.json')):
- return None
- minsize = min(imgsize) if hasattr(imgsize, '__iter__') else imgsize
- return MultiSegmentDataset(directory,
- transform=(transforms.Compose([
- transforms.Resize(minsize),
- transforms.CenterCrop(imgsize),
- AddPerturbation(perturbation),
- transforms.ToTensor(),
- transforms.Normalize(IMAGE_MEAN, IMAGE_STDEV)]),
- transforms.Compose([
- transforms.Resize(minsize, interpolation=PIL.Image.NEAREST),
- transforms.CenterCrop(imgsize)])),
- size=size)
-
-def add_scale_offset_info(model, layer_names):
- '''
- Creates a 'scale_offset' property on the model which guesses
- how to offset the featuremap, in cases where the convolutional
-    padding does not exactly correspond to keeping featuremap pixels
- centered on the downsampled regions of the input. This mainly
- shows up in AlexNet: ResNet and VGG pad convolutions to keep
- them centered and do not need this.
- '''
- model.scale_offset = {}
- seen = set()
- sequence = []
- aka_map = {}
- for name in layer_names:
- aka = name
- if not isinstance(aka, str):
- name, aka = name
- aka_map[name] = aka
- for name, layer in model.named_modules():
- sequence.append(layer)
- if name in aka_map:
- seen.add(name)
- aka = aka_map[name]
- model.scale_offset[aka] = sequence_scale_offset(sequence)
- for name in aka_map:
- assert name in seen, ('Layer %s not found' % name)
-
-def dilation_scale_offset(dilations):
- '''Composes a list of (k, s, p) into a single total scale and offset.'''
- if len(dilations) == 0:
- return (1, 0)
- scale, offset = dilation_scale_offset(dilations[1:])
- kernel, stride, padding = dilations[0]
- scale *= stride
- offset *= stride
- offset += (kernel - 1) / 2.0 - padding
- return scale, offset
-
-def dilations(modulelist):
- '''Converts a list of modules to (kernel_size, stride, padding)'''
- result = []
- for module in modulelist:
- settings = tuple(getattr(module, n, d)
- for n, d in (('kernel_size', 1), ('stride', 1), ('padding', 0)))
- settings = (((s, s) if not isinstance(s, tuple) else s)
- for s in settings)
- if settings != ((1, 1), (1, 1), (0, 0)):
- result.append(zip(*settings))
- return zip(*result)
-
-def sequence_scale_offset(modulelist):
- '''Returns (yscale, yoffset), (xscale, xoffset) given a list of modules'''
- return tuple(dilation_scale_offset(d) for d in dilations(modulelist))
-
-
-def strfloat(s):
- try:
- return float(s)
- except:
- return s
-
-class FloatRange(object):
- def __init__(self, start, end):
- self.start = start
- self.end = end
- def __eq__(self, other):
- return isinstance(other, float) and self.start <= other <= self.end
- def __repr__(self):
- return '[%g-%g]' % (self.start, self.end)
-
-# Many models use this normalization.
-IMAGE_MEAN = [0.485, 0.456, 0.406]
-IMAGE_STDEV = [0.229, 0.224, 0.225]
-
-if __name__ == '__main__':
- main()
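
The `dilation_scale_offset` helper above folds per-layer `(kernel, stride, padding)` triples into one `(scale, offset)` pair that maps a featuremap coordinate back to the input pixel it is centered on. A minimal standalone sketch of the same recursion, worked through for an assumed AlexNet-style stem (an 11/4/2 conv followed by a 3/2/0 max-pool; the layer values are illustrative, not taken from the file):

```python
def compose_scale_offset(layers):
    """Same recursion as dilation_scale_offset above; layers are (kernel, stride, padding),
    listed from the layer closest to the input outward."""
    if not layers:
        return 1, 0.0
    scale, offset = compose_scale_offset(layers[1:])
    kernel, stride, padding = layers[0]
    # input_position = scale * feature_position + offset
    return scale * stride, offset * stride + (kernel - 1) / 2.0 - padding

# Assumed AlexNet-style stem: conv(k=11, s=4, p=2) followed by maxpool(k=3, s=2, p=0).
scale, offset = compose_scale_offset([(11, 4, 2), (3, 2, 0)])
assert (scale, offset) == (8, 7.0)  # featuremap pixel i is centered on input pixel 8*i + 7
```
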
diff --git a/spaces/mithril-security/blind_chat/src/lib/types/Message.ts b/spaces/mithril-security/blind_chat/src/lib/types/Message.ts
deleted file mode 100644
index 61e517da14ff331f268b308c73293e3ef706dd5a..0000000000000000000000000000000000000000
--- a/spaces/mithril-security/blind_chat/src/lib/types/Message.ts
+++ /dev/null
@@ -1,10 +0,0 @@
-import type { Timestamps } from "./Timestamps";
-
-export type Message = Partial<Timestamps> & {
- from: "user" | "assistant";
-	id: ReturnType<typeof crypto.randomUUID>;
- content: string;
- webSearchId?: string;
- score?: -1 | 0 | 1;
- isCode: boolean;
-};
diff --git a/spaces/mlpc-lab/BLIVA/bliva/common/registry.py b/spaces/mlpc-lab/BLIVA/bliva/common/registry.py
deleted file mode 100644
index 9fad5f1ae6eeca008878667f37e24640153223fe..0000000000000000000000000000000000000000
--- a/spaces/mlpc-lab/BLIVA/bliva/common/registry.py
+++ /dev/null
@@ -1,268 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-
-class Registry:
- mapping = {
- "builder_name_mapping": {},
- "task_name_mapping": {},
- "processor_name_mapping": {},
- "model_name_mapping": {},
- "lr_scheduler_name_mapping": {},
- "runner_name_mapping": {},
- "state": {},
- "paths": {},
- }
-
- @classmethod
- def register_model(cls, name):
-        r"""Register a model to registry with key 'name'
-
- Args:
-            name: Key with which the model will be registered.
-
- Usage:
-
- from bliva.common.registry import registry
- """
-
- def wrap(model_cls):
- from bliva.models import BaseModel
-
- assert issubclass(
- model_cls, BaseModel
- ), "All models must inherit BaseModel class"
- if name in cls.mapping["model_name_mapping"]:
- raise KeyError(
- "Name '{}' already registered for {}.".format(
- name, cls.mapping["model_name_mapping"][name]
- )
- )
- cls.mapping["model_name_mapping"][name] = model_cls
- return model_cls
-
- return wrap
-
- @classmethod
- def register_processor(cls, name):
- r"""Register a processor to registry with key 'name'
-
- Args:
-            name: Key with which the processor will be registered.
-
- Usage:
-
- from bliva.common.registry import registry
- """
-
- def wrap(processor_cls):
- from bliva.processors import BaseProcessor
-
- assert issubclass(
- processor_cls, BaseProcessor
- ), "All processors must inherit BaseProcessor class"
- if name in cls.mapping["processor_name_mapping"]:
- raise KeyError(
- "Name '{}' already registered for {}.".format(
- name, cls.mapping["processor_name_mapping"][name]
- )
- )
- cls.mapping["processor_name_mapping"][name] = processor_cls
- return processor_cls
-
- return wrap
-
- @classmethod
- def register_lr_scheduler(cls, name):
-        r"""Register an lr scheduler to registry with key 'name'
-
- Args:
-            name: Key with which the lr scheduler will be registered.
-
- Usage:
-
- from bliva.common.registry import registry
- """
-
- def wrap(lr_sched_cls):
- if name in cls.mapping["lr_scheduler_name_mapping"]:
- raise KeyError(
- "Name '{}' already registered for {}.".format(
- name, cls.mapping["lr_scheduler_name_mapping"][name]
- )
- )
- cls.mapping["lr_scheduler_name_mapping"][name] = lr_sched_cls
- return lr_sched_cls
-
- return wrap
-
- @classmethod
- def register_runner(cls, name):
-        r"""Register a runner to registry with key 'name'
-
- Args:
-            name: Key with which the runner will be registered.
-
- Usage:
-
- from bliva.common.registry import registry
- """
-
- def wrap(runner_cls):
- if name in cls.mapping["runner_name_mapping"]:
- raise KeyError(
- "Name '{}' already registered for {}.".format(
- name, cls.mapping["runner_name_mapping"][name]
- )
- )
- cls.mapping["runner_name_mapping"][name] = runner_cls
- return runner_cls
-
- return wrap
-
- @classmethod
- def register_path(cls, name, path):
- r"""Register a path to registry with key 'name'
-
- Args:
- name: Key with which the path will be registered.
-
- Usage:
-
- from bliva.common.registry import registry
- """
- assert isinstance(path, str), "All path must be str."
- if name in cls.mapping["paths"]:
- raise KeyError("Name '{}' already registered.".format(name))
- cls.mapping["paths"][name] = path
-
- @classmethod
- def register(cls, name, obj):
- r"""Register an item to registry with key 'name'
-
- Args:
- name: Key with which the item will be registered.
-
- Usage::
-
- from bliva.common.registry import registry
-
- registry.register("config", {})
- """
- path = name.split(".")
- current = cls.mapping["state"]
-
- for part in path[:-1]:
- if part not in current:
- current[part] = {}
- current = current[part]
-
- current[path[-1]] = obj
-
- # @classmethod
- # def get_trainer_class(cls, name):
- # return cls.mapping["trainer_name_mapping"].get(name, None)
-
- @classmethod
- def get_builder_class(cls, name):
- return cls.mapping["builder_name_mapping"].get(name, None)
-
- @classmethod
- def get_model_class(cls, name):
- return cls.mapping["model_name_mapping"].get(name, None)
-
- @classmethod
- def get_task_class(cls, name):
- return cls.mapping["task_name_mapping"].get(name, None)
-
- @classmethod
- def get_processor_class(cls, name):
- return cls.mapping["processor_name_mapping"].get(name, None)
-
- @classmethod
- def get_lr_scheduler_class(cls, name):
- return cls.mapping["lr_scheduler_name_mapping"].get(name, None)
-
- @classmethod
- def get_runner_class(cls, name):
- return cls.mapping["runner_name_mapping"].get(name, None)
-
- @classmethod
- def list_runners(cls):
- return sorted(cls.mapping["runner_name_mapping"].keys())
-
- @classmethod
- def list_models(cls):
- return sorted(cls.mapping["model_name_mapping"].keys())
-
- @classmethod
- def list_tasks(cls):
- return sorted(cls.mapping["task_name_mapping"].keys())
-
- @classmethod
- def list_processors(cls):
- return sorted(cls.mapping["processor_name_mapping"].keys())
-
- @classmethod
- def list_lr_schedulers(cls):
- return sorted(cls.mapping["lr_scheduler_name_mapping"].keys())
-
- @classmethod
- def list_datasets(cls):
- return sorted(cls.mapping["builder_name_mapping"].keys())
-
- @classmethod
- def get_path(cls, name):
- return cls.mapping["paths"].get(name, None)
-
- @classmethod
- def get(cls, name, default=None, no_warning=False):
- r"""Get an item from registry with key 'name'
-
- Args:
- name (string): Key whose value needs to be retrieved.
- default: If passed and key is not in registry, default value will
- be returned with a warning. Default: None
- no_warning (bool): If passed as True, warning when key doesn't exist
- will not be generated. Useful for MMF's
- internal operations. Default: False
- """
- original_name = name
- name = name.split(".")
- value = cls.mapping["state"]
- for subname in name:
- value = value.get(subname, default)
- if value is default:
- break
-
- if (
- "writer" in cls.mapping["state"]
- and value == default
- and no_warning is False
- ):
- cls.mapping["state"]["writer"].warning(
- "Key {} is not present in registry, returning default value "
- "of {}".format(original_name, default)
- )
- return value
-
- @classmethod
- def unregister(cls, name):
- r"""Remove an item from registry with key 'name'
-
- Args:
- name: Key which needs to be removed.
- Usage::
-
-            from bliva.common.registry import registry
-
- config = registry.unregister("config")
- """
- return cls.mapping["state"].pop(name, None)
-
-
-registry = Registry()
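
The registration docstrings above stop at the import line, so here is a hedged usage sketch of the registry as defined in this file; the key names (`config.run.seed`, `cache_root`, `runner_demo`) and the `DemoRunner` class are made up for illustration:

```python
from bliva.common.registry import registry

# Store a value under a dotted key and read it back (key name is illustrative).
registry.register("config.run.seed", 42)
assert registry.get("config.run.seed") == 42

# Register a filesystem path and look it up later.
registry.register_path("cache_root", "/tmp/bliva_cache")
assert registry.get_path("cache_root") == "/tmp/bliva_cache"

# Decorator-style registration; runners have no base-class check, so a plain class works.
@registry.register_runner("runner_demo")
class DemoRunner:
    pass

assert registry.get_runner_class("runner_demo") is DemoRunner
```
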
diff --git a/spaces/mrm8488/PromptSource/seqio_tasks/preview_promptsource.py b/spaces/mrm8488/PromptSource/seqio_tasks/preview_promptsource.py
deleted file mode 100644
index 4dbbec7615aded5c895f41f2f66d6cd90589db3b..0000000000000000000000000000000000000000
--- a/spaces/mrm8488/PromptSource/seqio_tasks/preview_promptsource.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import csv
-from typing import List, Optional, Tuple
-
-import pkg_resources
-
-# from rich import inspect
-from rich.pretty import pprint
-
-from promptsource.templates import TemplateCollection
-
-
-def preview() -> None:
- experiment_path = pkg_resources.resource_filename(__name__, "experiment_D4.csv")
- gsheet = {}
- d4_train: List[Tuple[str, Optional[str]]] = []
- d4_eval: List[Tuple[str, Optional[str]]] = []
- d3_train_gpt: List[Tuple[str, Optional[str]]] = []
- d3_train_sglue: List[Tuple[str, Optional[str]]] = []
- experiment_path = pkg_resources.resource_filename(__name__, "experiment_D4.csv")
- with open(experiment_path) as exp_file:
- reader = csv.DictReader(exp_file)
- for row in reader:
- if row["skip"]:
- continue
- if row["subset"] == "":
- row["subset"] = None # to match promptsource.Template object
- dataset_subset = (row["HF_name"], row["subset"])
- if row["do_train"] == "TRUE":
- d4_train.append(dataset_subset)
- if row["do_eval"] == "TRUE":
- d4_eval.append(dataset_subset)
- if row["D3_do_train"] == "TRUE" and "GPT" in row["seed_paper"]:
- d3_train_gpt.append(dataset_subset)
- if row["D3_do_train"] == "TRUE" and row["HF_name"] == "super_glue":
- d3_train_sglue.append(dataset_subset)
- gsheet[dataset_subset] = row
- all_datasets = d4_train + d4_eval + d3_train_gpt + d3_train_sglue
- print(f"Number of non-desk-rejected datasets = {len(all_datasets)}")
- print(f"Number of training sets = {len(d4_train)}")
- print(f"Number of evaluation sets = {len(d4_eval)}")
-
- template_collection = TemplateCollection()
- output = []
- missing_og_flags = []
- missing_metrics = []
- for dataset_name, subset_name in template_collection.keys:
- ds_name = (dataset_name, subset_name)
- if ds_name not in d4_eval:
- template_collection.remove(dataset_name, subset_name)
- continue
- OG = 0
- non_OG = 0
- dataset = template_collection.get_dataset(dataset_name, subset_name)
- for template_name in dataset.all_template_names:
- template = dataset[template_name]
- # if dataset_name == 'ropes':
- # inspect(template.metadata)
- if not template.metadata.metrics:
- missing_metrics.append(f"{dataset_name}/{subset_name}/{template_name}")
-
- if template.metadata.original_task is True:
- OG += 1
- elif template.metadata.original_task is False:
- non_OG += 1
- elif template.metadata.original_task is None:
- missing_og_flags.append(dataset_name + "/" + template_name)
- continue
-
- train_size = gsheet[ds_name]["train_size"]
- if train_size == "":
- train_size = 0
- else:
- train_size = int(train_size)
-
- adjusted_train_size = train_size // len(dataset.all_template_names)
-
- output.append(
- (
- f"{dataset_name} {subset_name if subset_name else ''}",
- f"{OG}-{non_OG}",
- f"{train_size:,} {adjusted_train_size:,}",
- )
- )
-
- pprint(output)
- print(len(template_collection))
-
- print("Missing metrics:")
- pprint(missing_metrics)
-
- print("Missing original task flags:")
- pprint(missing_og_flags)
-
- # # print(d4_train_mixture)
- # print(f"Number of training templates = {len(d4_train_mixture)}")
- # # print(d4_eval_mixture)
- # print(f"Number of evaluation templates = {len(d4_eval_mixture)}")
- # # for i in seqio.TaskRegistry.names():
- # # print(i)
- # print(f"Number of SeqIO registered templates = {len(seqio.TaskRegistry.names())}")
- # print("^ includes non-original task templates which are excluded from the eval mixture")
-
-
-if __name__ == "__main__":
- preview()
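
For a quicker look at a single dataset's prompts than the full `preview()` report, the same `TemplateCollection` calls used above can be exercised interactively. A small sketch; the `("super_glue", "boolq")` pair is just an example (HF_name, subset) key:

```python
from promptsource.templates import TemplateCollection

collection = TemplateCollection()
# Example key only; any entry of collection.keys works the same way.
boolq = collection.get_dataset("super_glue", "boolq")
for template_name in boolq.all_template_names:
    template = boolq[template_name]
    print(template_name, template.metadata.original_task, template.metadata.metrics)
```
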
diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/noisychannel/rerank_options.py b/spaces/mshukor/UnIVAL/fairseq/examples/noisychannel/rerank_options.py
deleted file mode 100644
index de91939e6635bdf33c9dc330116be07d9e8be6a2..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/examples/noisychannel/rerank_options.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq import options
-
-
-def get_reranking_parser(default_task="translation"):
- parser = options.get_parser("Generation and reranking", default_task)
- add_reranking_args(parser)
- return parser
-
-
-def get_tuning_parser(default_task="translation"):
- parser = options.get_parser("Reranking tuning", default_task)
- add_reranking_args(parser)
- add_tuning_args(parser)
- return parser
-
-
-def add_reranking_args(parser):
- group = parser.add_argument_group("Reranking")
- # fmt: off
- group.add_argument('--score-model1', '-s1', type=str, metavar='FILE', required=True,
- help='path to first model or ensemble of models for rescoring')
- group.add_argument('--score-model2', '-s2', type=str, metavar='FILE', required=False,
- help='path to second model or ensemble of models for rescoring')
- group.add_argument('--num-rescore', '-n', type=int, metavar='N', default=10,
- help='the number of candidate hypothesis to rescore')
- group.add_argument('-bz', '--batch-size', type=int, metavar='N', default=128,
- help='batch size for generating the nbest list')
- group.add_argument('--gen-subset', default='test', metavar='SET', choices=['test', 'train', 'valid'],
- help='data subset to generate (train, valid, test)')
- group.add_argument('--gen-model', default=None, metavar='FILE',
- help='the model to generate translations')
- group.add_argument('-b1', '--backwards1', action='store_true',
- help='whether or not the first model group is backwards')
- group.add_argument('-b2', '--backwards2', action='store_true',
- help='whether or not the second model group is backwards')
- group.add_argument('-a', '--weight1', default=1, nargs='+', type=float,
- help='the weight(s) of the first model')
- group.add_argument('-b', '--weight2', default=1, nargs='+', type=float,
- help='the weight(s) of the second model, or the gen model if using nbest from interactive.py')
- group.add_argument('-c', '--weight3', default=1, nargs='+', type=float,
- help='the weight(s) of the third model')
-
- # lm arguments
- group.add_argument('-lm', '--language-model', default=None, metavar='FILE',
- help='language model for target language to rescore translations')
- group.add_argument('--lm-dict', default=None, metavar='FILE',
- help='the dict of the language model for the target language')
- group.add_argument('--lm-name', default=None,
- help='the name of the language model for the target language')
- group.add_argument('--lm-bpe-code', default=None, metavar='FILE',
- help='the bpe code for the language model for the target language')
- group.add_argument('--data-dir-name', default=None,
- help='name of data directory')
- group.add_argument('--lenpen', default=1, nargs='+', type=float,
- help='length penalty: <1.0 favors shorter, >1.0 favors longer sentences')
- group.add_argument('--score-dict-dir', default=None,
- help='the directory with dictionaries for the scoring models')
- group.add_argument('--right-to-left1', action='store_true',
- help='whether the first model group is a right to left model')
- group.add_argument('--right-to-left2', action='store_true',
- help='whether the second model group is a right to left model')
- group.add_argument('--post-process', '--remove-bpe', default='@@ ',
- help='the bpe symbol, used for the bitext and LM')
- group.add_argument('--prefix-len', default=None, type=int,
- help='the length of the target prefix to use in rescoring (in terms of words wo bpe)')
- group.add_argument('--sampling', action='store_true',
- help='use sampling instead of beam search for generating n best list')
- group.add_argument('--diff-bpe', action='store_true',
- help='bpe for rescoring and nbest list not the same')
- group.add_argument('--rescore-bpe-code', default=None,
- help='bpe code for rescoring models')
- group.add_argument('--nbest-list', default=None,
- help='use predefined nbest list in interactive.py format')
- group.add_argument('--write-hypos', default=None,
- help='filename prefix to write hypos to')
- group.add_argument('--ref-translation', default=None,
- help='reference translation to use with nbest list from interactive.py')
- group.add_argument('--backwards-score-dict-dir', default=None,
- help='the directory with dictionaries for the backwards model,'
- 'if None then it is assumed the fw and backwards models share dictionaries')
-
- # extra scaling args
- group.add_argument('--gen-model-name', default=None,
- help='the name of the models that generated the nbest list')
- group.add_argument('--model1-name', default=None,
- help='the name of the set for model1 group ')
- group.add_argument('--model2-name', default=None,
- help='the name of the set for model2 group')
- group.add_argument('--shard-id', default=0, type=int,
- help='the id of the shard to generate')
- group.add_argument('--num-shards', default=1, type=int,
- help='the number of shards to generate across')
- group.add_argument('--all-shards', action='store_true',
- help='use all shards')
- group.add_argument('--target-prefix-frac', default=None, type=float,
- help='the fraction of the target prefix to use in rescoring (in terms of words wo bpe)')
- group.add_argument('--source-prefix-frac', default=None, type=float,
- help='the fraction of the source prefix to use in rescoring (in terms of words wo bpe)')
- group.add_argument('--normalize', action='store_true',
- help='whether to normalize by src and target len')
- # fmt: on
- return group
-
-
-def add_tuning_args(parser):
- group = parser.add_argument_group("Tuning")
-
- group.add_argument(
- "--lower-bound",
- default=[-0.7],
- nargs="+",
- type=float,
- help="lower bound of search space",
- )
- group.add_argument(
- "--upper-bound",
- default=[3],
- nargs="+",
- type=float,
- help="upper bound of search space",
- )
- group.add_argument(
- "--tune-param",
- default=["lenpen"],
- nargs="+",
- choices=["lenpen", "weight1", "weight2", "weight3"],
- help="the parameter(s) to tune",
- )
- group.add_argument(
- "--tune-subset",
- default="valid",
- choices=["valid", "test", "train"],
- help="the subset to tune on ",
- )
- group.add_argument(
- "--num-trials",
- default=1000,
- type=int,
- help="number of trials to do for random search",
- )
- group.add_argument(
- "--share-weights", action="store_true", help="share weight2 and weight 3"
-        "--share-weights", action="store_true", help="share weight2 and weight3"
- return group
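
A rough sketch of how these parsers are typically consumed; the checkpoint path and flag values below are placeholders, and exact behaviour depends on the fairseq version (the tuning parser simply layers `add_tuning_args` on top of the reranking arguments):

```python
from examples.noisychannel import rerank_options

# Placeholder values for illustration; --score-model1 is the only flag marked required here.
parser = rerank_options.get_tuning_parser(default_task="translation")
args = parser.parse_args([
    "--score-model1", "checkpoints/fw_model.pt",
    "--num-rescore", "50",
    "--tune-param", "lenpen", "weight1",
    "--lower-bound", "0.5", "0.5",
    "--upper-bound", "3", "2",
])
print(args.tune_param, args.lower_bound, args.upper_bound)
```
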
diff --git a/spaces/multimodalart/mariogpt/Makefile b/spaces/multimodalart/mariogpt/Makefile
deleted file mode 100644
index 09092195768bbdd5b1fbdb682192190adb1ffa5f..0000000000000000000000000000000000000000
--- a/spaces/multimodalart/mariogpt/Makefile
+++ /dev/null
@@ -1,28 +0,0 @@
-clean: clean-build clean-pyc clean-test ## remove all build, test, coverage and Python artifacts
-
-clean-build: ## remove build artifacts
- rm -fr build/
- rm -fr dist/
- rm -fr .eggs/
- find . -name '*.egg-info' -exec rm -fr {} +
- find . -name '*.egg' -exec rm -f {} +
-
-clean-pyc: ## remove Python file artifacts
- find . -name '*.pyc' -exec rm -f {} +
- find . -name '*.pyo' -exec rm -f {} +
- find . -name '*~' -exec rm -f {} +
- find . -name '__pycache__' -exec rm -fr {} +
-
-clean-test: ## remove test and coverage artifacts
- rm -fr .tox/
- rm -f .coverage
- rm -fr coverage/
- rm -fr .pytest_cache
-
-lint: ## check style with flake8
- isort --profile black mario_gpt
- black mario_gpt
- flake8 mario_gpt
-
-install: clean lint
- python setup.py install
diff --git a/spaces/mynti/plainly/README.md b/spaces/mynti/plainly/README.md
deleted file mode 100644
index 69665fef1c76c2034639f9f9144fdce1967d5a85..0000000000000000000000000000000000000000
--- a/spaces/mynti/plainly/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Plainly
-emoji: ✈️
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-sdk_version: 3.0.5
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/data/distributed.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/data/distributed.py
deleted file mode 100644
index c3d890e28fd2b9e044bdd9494de4a43ad2471eed..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/data/distributed.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import math
-import torch
-from .sampler import Sampler
-from torch.distributed import get_world_size, get_rank
-
-
-class DistributedSampler(Sampler):
- """Sampler that restricts data loading to a subset of the dataset.
-
- It is especially useful in conjunction with
- :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each
- process can pass a DistributedSampler instance as a DataLoader sampler,
- and load a subset of the original dataset that is exclusive to it.
-
- .. note::
- Dataset is assumed to be of constant size.
-
- Arguments:
- dataset: Dataset used for sampling.
- num_replicas (optional): Number of processes participating in
- distributed training.
- rank (optional): Rank of the current process within num_replicas.
- """
-
- def __init__(self, dataset, num_replicas=None, rank=None):
- if num_replicas is None:
- num_replicas = get_world_size()
- if rank is None:
- rank = get_rank()
- self.dataset = dataset
- self.num_replicas = num_replicas
- self.rank = rank
- self.epoch = 0
- self.num_samples = int(math.ceil(len(self.dataset) * 1.0 / self.num_replicas))
- self.total_size = self.num_samples * self.num_replicas
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- g = torch.Generator()
- g.manual_seed(self.epoch)
- indices = list(torch.randperm(len(self.dataset), generator=g))
-
- # add extra samples to make it evenly divisible
- indices += indices[:(self.total_size - len(indices))]
- assert len(indices) == self.total_size
-
- # subsample
- offset = self.num_samples * self.rank
- indices = indices[offset:offset + self.num_samples]
- assert len(indices) == self.num_samples
-
- return iter(indices)
-
- def __len__(self):
- return self.num_samples
-
- def set_epoch(self, epoch):
- self.epoch = epoch
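
The class docstring above describes the intended wiring: one sampler per process, handed to a DataLoader. A minimal sketch using the `DistributedSampler` defined above; passing `num_replicas` and `rank` explicitly avoids needing an initialized process group, and the dataset is a random stand-in:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1000, 3, 32, 32))          # stand-in dataset
sampler = DistributedSampler(dataset, num_replicas=4, rank=0)  # this process sees 250 samples
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(3):
    sampler.set_epoch(epoch)   # reshuffles deterministically each epoch
    for (batch,) in loader:
        pass                   # training step would go here
```
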
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/models/ade20k/utils.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/models/ade20k/utils.py
deleted file mode 100644
index f337db7db54c82be041698d694e1403e8918c4c0..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/third_party/lama/models/ade20k/utils.py
+++ /dev/null
@@ -1,40 +0,0 @@
-"""Modified from https://github.com/CSAILVision/semantic-segmentation-pytorch"""
-
-import os
-import sys
-
-import numpy as np
-import torch
-
-try:
- from urllib import urlretrieve
-except ImportError:
- from urllib.request import urlretrieve
-
-
-def load_url(url, model_dir='./pretrained', map_location=None):
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- filename = url.split('/')[-1]
- cached_file = os.path.join(model_dir, filename)
- if not os.path.exists(cached_file):
- sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file))
- urlretrieve(url, cached_file)
- return torch.load(cached_file, map_location=map_location)
-
-
-def color_encode(labelmap, colors, mode='RGB'):
- labelmap = labelmap.astype('int')
- labelmap_rgb = np.zeros((labelmap.shape[0], labelmap.shape[1], 3),
- dtype=np.uint8)
- for label in np.unique(labelmap):
- if label < 0:
- continue
- labelmap_rgb += (labelmap == label)[:, :, np.newaxis] * \
- np.tile(colors[label],
- (labelmap.shape[0], labelmap.shape[1], 1))
-
- if mode == 'BGR':
- return labelmap_rgb[:, :, ::-1]
- else:
- return labelmap_rgb
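
`color_encode` above maps integer class labels to a color image by accumulating one tiled color per label. A tiny self-contained check with an assumed two-color palette (the palette values are illustrative; the real ADE20K colors come from the dataset's color table):

```python
import numpy as np

labelmap = np.array([[0, 1],
                     [1, 0]])
# Assumed palette: class 0 -> red, class 1 -> green (uint8 so the in-place add stays uint8).
colors = np.array([[255, 0, 0],
                   [0, 255, 0]], dtype=np.uint8)

rgb = color_encode(labelmap, colors)            # color_encode as defined above
assert rgb.shape == (2, 2, 3)
assert (rgb[0, 0] == [255, 0, 0]).all() and (rgb[0, 1] == [0, 255, 0]).all()
```
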
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/losses/perceptual.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/losses/perceptual.py
deleted file mode 100644
index 8c055c2b327ce7943682af5c5f9394b9fcbec506..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/losses/perceptual.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision
-
-from models.ade20k import ModelBuilder
-from saicinpainting.utils import check_and_warn_input_range
-
-
-IMAGENET_MEAN = torch.FloatTensor([0.485, 0.456, 0.406])[None, :, None, None]
-IMAGENET_STD = torch.FloatTensor([0.229, 0.224, 0.225])[None, :, None, None]
-
-
-class PerceptualLoss(nn.Module):
- def __init__(self, normalize_inputs=True):
- super(PerceptualLoss, self).__init__()
-
- self.normalize_inputs = normalize_inputs
- self.mean_ = IMAGENET_MEAN
- self.std_ = IMAGENET_STD
-
- vgg = torchvision.models.vgg19(pretrained=True).features
- vgg_avg_pooling = []
-
- for weights in vgg.parameters():
- weights.requires_grad = False
-
- for module in vgg.modules():
- if module.__class__.__name__ == 'Sequential':
- continue
- elif module.__class__.__name__ == 'MaxPool2d':
- vgg_avg_pooling.append(nn.AvgPool2d(kernel_size=2, stride=2, padding=0))
- else:
- vgg_avg_pooling.append(module)
-
- self.vgg = nn.Sequential(*vgg_avg_pooling)
-
- def do_normalize_inputs(self, x):
- return (x - self.mean_.to(x.device)) / self.std_.to(x.device)
-
- def partial_losses(self, input, target, mask=None):
- check_and_warn_input_range(target, 0, 1, 'PerceptualLoss target in partial_losses')
-
- # we expect input and target to be in [0, 1] range
- losses = []
-
- if self.normalize_inputs:
- features_input = self.do_normalize_inputs(input)
- features_target = self.do_normalize_inputs(target)
- else:
- features_input = input
- features_target = target
-
- for layer in self.vgg[:30]:
-
- features_input = layer(features_input)
- features_target = layer(features_target)
-
- if layer.__class__.__name__ == 'ReLU':
- loss = F.mse_loss(features_input, features_target, reduction='none')
-
- if mask is not None:
- cur_mask = F.interpolate(mask, size=features_input.shape[-2:],
- mode='bilinear', align_corners=False)
- loss = loss * (1 - cur_mask)
-
- loss = loss.mean(dim=tuple(range(1, len(loss.shape))))
- losses.append(loss)
-
- return losses
-
- def forward(self, input, target, mask=None):
- losses = self.partial_losses(input, target, mask=mask)
- return torch.stack(losses).sum(dim=0)
-
- def get_global_features(self, input):
- check_and_warn_input_range(input, 0, 1, 'PerceptualLoss input in get_global_features')
-
- if self.normalize_inputs:
- features_input = self.do_normalize_inputs(input)
- else:
- features_input = input
-
- features_input = self.vgg(features_input)
- return features_input
-
-
-class ResNetPL(nn.Module):
- def __init__(self, weight=1,
- weights_path=None, arch_encoder='resnet50dilated', segmentation=True):
- super().__init__()
- self.impl = ModelBuilder.get_encoder(weights_path=weights_path,
- arch_encoder=arch_encoder,
- arch_decoder='ppm_deepsup',
- fc_dim=2048,
- segmentation=segmentation)
- self.impl.eval()
- for w in self.impl.parameters():
- w.requires_grad_(False)
-
- self.weight = weight
-
- def forward(self, pred, target):
- pred = (pred - IMAGENET_MEAN.to(pred)) / IMAGENET_STD.to(pred)
- target = (target - IMAGENET_MEAN.to(target)) / IMAGENET_STD.to(target)
-
- pred_feats = self.impl(pred, return_feature_maps=True)
- target_feats = self.impl(target, return_feature_maps=True)
-
- result = torch.stack([F.mse_loss(cur_pred, cur_target)
- for cur_pred, cur_target
- in zip(pred_feats, target_feats)]).sum() * self.weight
- return result
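
`PerceptualLoss` above expects inputs already scaled to [0, 1] and returns one loss value per sample, summed over the VGG ReLU activations it compares. A hedged usage sketch with random tensors standing in for real images; instantiating the loss downloads the torchvision VGG19 weights on first use:

```python
import torch

loss_fn = PerceptualLoss(normalize_inputs=True)   # PerceptualLoss as defined above

pred   = torch.rand(2, 3, 256, 256)   # stand-in "inpainted" batch, values in [0, 1]
target = torch.rand(2, 3, 256, 256)   # stand-in ground-truth batch
mask   = torch.zeros(2, 1, 256, 256)  # 1 marks regions to ignore, 0 marks known pixels

per_sample = loss_fn(pred, target, mask=mask)     # shape: (2,)
print(per_sample.mean())
```
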
diff --git a/spaces/nakas/MusicGenDemucs/audiocraft/utils/export.py b/spaces/nakas/MusicGenDemucs/audiocraft/utils/export.py
deleted file mode 100644
index b513b52267f7bf5aae09282c15b0a2e20c8a8fee..0000000000000000000000000000000000000000
--- a/spaces/nakas/MusicGenDemucs/audiocraft/utils/export.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Utility to export a training checkpoint to a lightweight release checkpoint.
-"""
-
-from pathlib import Path
-import typing as tp
-
-from omegaconf import OmegaConf, DictConfig
-import torch
-
-
-def _clean_lm_cfg(cfg: DictConfig):
- OmegaConf.set_struct(cfg, False)
- # This used to be set automatically in the LM solver, need a more robust solution
- # for the future.
- cfg['transformer_lm']['card'] = 2048
- cfg['transformer_lm']['n_q'] = 4
- # Experimental params no longer supported.
- bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters',
- 'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop']
- for name in bad_params:
- del cfg['transformer_lm'][name]
- OmegaConf.set_struct(cfg, True)
- return cfg
-
-
-def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]):
- sig = Path(checkpoint_path).parent.name
- assert len(sig) == 8, "Not a valid Dora signature"
- pkg = torch.load(checkpoint_path, 'cpu')
- new_pkg = {
- 'best_state': pkg['ema']['state']['model'],
- 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']),
- }
- out_file = Path(out_folder) / f'{sig}.th'
- torch.save(new_pkg, out_file)
- return out_file
-
-
-def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]):
- sig = Path(checkpoint_path).parent.name
- assert len(sig) == 8, "Not a valid Dora signature"
- pkg = torch.load(checkpoint_path, 'cpu')
- new_pkg = {
- 'best_state': pkg['fsdp_best_state']['model'],
- 'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg']))
- }
- out_file = Path(out_folder) / f'{sig}.th'
- torch.save(new_pkg, out_file)
- return out_file
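
Both export helpers take a training checkpoint whose parent directory name is the 8-character Dora signature and write `<signature>.th` into the output folder. A hedged sketch with hypothetical paths (the checkpoint and folder names below are placeholders, not real artifacts):

```python
from audiocraft.utils import export

# Hypothetical paths for illustration only; the parent folder name ("e8f4a2b1")
# plays the role of the 8-character Dora signature that the assert checks for.
out_file = export.export_encodec(
    checkpoint_path="/checkpoints/xps/e8f4a2b1/checkpoint.th",
    out_folder="./release",
)
print(out_file)  # ./release/e8f4a2b1.th
```
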
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/AnthemScore 2.3.1 (x64) With Full Crack 2020.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/AnthemScore 2.3.1 (x64) With Full Crack 2020.md
deleted file mode 100644
index 5a3748e2d77de640a846ec537847db93a03dd932..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/AnthemScore 2.3.1 (x64) With Full Crack 2020.md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-```html
-
AnthemScore 2.3.1 (x64) With Full Crack 2020: A Powerful Tool for Music Transcription
-
AnthemScore is a software that can automatically create sheet music from audio files, such as MP3, WAV, WMA, M4A, MP2, FLAC, OGG, AIFF and AMR. It uses a convolutional neural network trained on 2 million data samples to achieve high accuracy and speed. The output is a MusicXML file that can be viewed and edited using any standard music notation software.
-
AnthemScore 2.3.1 (x64) With Full Crack 2020 is the latest version of this software that offers many features and improvements. Some of the key features are:
It can handle most of the songs recorded during the disturbance, and save them as XML or CSV.
-
It can view frequency/time graphs, slow down, play on the virtual keyboard, and save sheet music in other keys or only in treble clef or bass clef.
-
It has a simple and easy to use interface that does not require any specialized training.
-
It can process multiple files at once and batch convert them to sheet music.
-
It can transcribe polyphonic music with up to 4 voices per staff.
-
-
AnthemScore 2.3.1 (x64) With Full Crack 2020 is a great tool for musicians, composers, teachers, students, and anyone who wants to create sheet music from audio files. It can help you learn more about songs, practice your instrument, arrange music, and more. You can download AnthemScore 2.3.1 (x64) With Full Crack 2020 from the link below and enjoy this powerful software for free.
If you want to learn more about AnthemScore and how it works, you can visit the official website and read the user manual. You can also watch some video tutorials and demos on YouTube. You can also contact the support team if you have any questions or issues with the software.
-
AnthemScore is a revolutionary software that can make music transcription easier and faster than ever before. It can handle various types of music genres and instruments, and produce high-quality sheet music that you can edit and print. AnthemScore 2.3.1 (x64) With Full Crack 2020 is the best version of this software that you can get for free. Don't miss this opportunity and download AnthemScore 2.3.1 (x64) With Full Crack 2020 today.
-```
-
-```html
-
One of the advantages of AnthemScore is that it can transcribe music from any source, such as CDs, DVDs, online videos, radio, podcasts, etc. You just need to record the audio using your computer's microphone or line-in, and AnthemScore will do the rest. You can also import audio files from your hard drive or external devices.
-
Another advantage of AnthemScore is that it can handle complex music with multiple instruments and voices. It can detect the pitch, duration, and volume of each note, and assign them to different staves. You can also adjust the settings to change the number of voices per staff, the clef, the key signature, the time signature, and more.
-
A third advantage of AnthemScore is that it can export the sheet music to various formats, such as PDF, PNG, MIDI, MusicXML, and LilyPond. You can also share your sheet music with others via email or social media. You can also print your sheet music or save it to your cloud storage.
-
-``` e93f5a0c3f
-
-
\ No newline at end of file
diff --git a/spaces/nickmuchi/Earnings-Call-Analysis-Whisperer/README.md b/spaces/nickmuchi/Earnings-Call-Analysis-Whisperer/README.md
deleted file mode 100644
index 58b2d1d07316e4aac9eed5068058953dfef7bdda..0000000000000000000000000000000000000000
--- a/spaces/nickmuchi/Earnings-Call-Analysis-Whisperer/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Earnings Call Analysis Whisperer
-emoji: 📞
-colorFrom: blue
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: 01_🏠_Home.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/nihaldsouza1/clearlydefined_license_summarizer/src/textrank.py b/spaces/nihaldsouza1/clearlydefined_license_summarizer/src/textrank.py
deleted file mode 100644
index 80e45b2d2cf8823ba693b9916d428f51aab6ebc3..0000000000000000000000000000000000000000
--- a/spaces/nihaldsouza1/clearlydefined_license_summarizer/src/textrank.py
+++ /dev/null
@@ -1,474 +0,0 @@
-import pandas as pd
-import spacy
-import math
-from collections import Counter
-
-
-try:
- from src.clean import clean_license_text
- from src.parameters import color, vocab
-except:
- from clean import clean_license_text
- from parameters import color, vocab
-
-
-GOLD_STANDARD_PATH = "../UBC-SAP_gold-corpus/UBC-SAP_capstone_corpus_labels_removed.xlsx"
-LABELS_PATH = "data/choosealicense_appendix_labels.csv"
-MIN_SENT_LEN = 3
-SUMMARY_LEN = 0.3
-
-nlp = spacy.load("en_core_web_sm")
-
-
-def normalize_sentence_counter(counter):
- """
- Normalize sentence scores in the counter between 0 and 3
-
- Parameters
- ----------
- counter : dict
- A dictionary of scores with keys as sentence and values as raw scores.
-
- Returns
- -------
- counter : dict
- A dictionary of scores with keys as sentence and values as normalized
- scores.
-
- """
- vals = list(counter.values())
-
- if vals:
- min_val = min(vals)
- max_val = max(vals)
- else:
- return counter
-
- for sent in counter:
- try:
- counter[sent] = round(3 * (counter[sent] - min_val) / (max_val - min_val), 3)
- except:
- counter[sent] = 0
- return counter
-
-
-def sent_tokenize_text(text, debug=False):
- """
- Tokenize a license text into sentences
-
- Parameters
- ----------
- text : str
- License text to be tokenized into sentences.
- debug : bool, optional
- Toggles debug mode. The default is False.
-
- Returns
- -------
- tokenized_sents : list
- A list of tokenized sentences.
-
- """
- tokenized_sents = list()
- paras = text.split("\n\n")
- for para in paras:
- for sent in nlp(para).sents:
- sent = sent.text.replace("\n", "").strip()
- if tokenized_sents and len(tokenized_sents[-1]) <= 30:
- tokenized_sents[-1] += f" {sent.strip()}"
- else:
- tokenized_sents.append(sent.strip())
- try:
- tokenized_sents[-1] += "\n\n"
- except:
- pass
- if debug:
- print("Segmented Sentences:")
- print("="*20)
- for i, sent in enumerate(tokenized_sents):
- print(f"Sent {i+1}")
- print("-"*20)
- print(sent)
- print("-"*50)
- print()
- return tokenized_sents
-
-
-def lemmatize_tokens(sent):
- """
- Lemmatize tokens into the given sentence
-
- Parameters
- ----------
- sent : str
- A sentences whose tokens are to be lemmatized.
-
- Returns
- -------
- list
- A list of lemmatized tokens.
-
- """
- lemmas = list()
-
- nlp_sent = [token.lemma_.lower().strip() for token in nlp(sent)]
-
- for tok_i, token in enumerate(nlp_sent):
- if (token
- and token not in vocab.license_stopwords
- and token not in vocab.negation_words):
- if tok_i > 0 and nlp_sent[tok_i-1] in vocab.negation_words:
- lemmas.append(f"{nlp_sent[tok_i-1]}-{token}")
- elif (tok_i > 1
- and nlp_sent[tok_i-1] in " -"
- and nlp_sent[tok_i-2] in vocab.negation_words):
- lemmas.append(f"{nlp_sent[tok_i-2]}-{token}")
- else:
- lemmas.append(token)
-
- return [lemma for lemma in lemmas if len(lemma) > 2]
-
-
-def get_license_summary_scores(license_text,
- min_sent_len=MIN_SENT_LEN,
- summary_len=SUMMARY_LEN,
- summary_in_text_order=True,
- return_summary_only=True,
- debug=False,
- cleaned_license_sentences=None):
- """
- Get sentence scores for all the cleaned sentences in a given license_text
- along with other extracted details such as definitions, exceptions, etc.
- and the cleaned license text itself.
-
- Parameters
- ----------
- license_text : str
- License text.
- min_sent_len : int, optional
- The minimum number of tokens in a sentence for it to be considered.
- The default is 3.
- summary_len : float, optional
- The proportion of length of the expected summary to the length of
- license text. The default is 0.3.
- summary_in_text_order : bool, optional
- Toggle to switch between summary in text order or in descending order
- by scores. The default is True.
- return_summary_only : bool, optional
- Toggle to return just the summary or entire license text with
- important sentences highlighted. The default is True.
- debug : bool, optional
- Toggles debug mode. The default is False.
- cleaned_license_sentences : list, optional
- A list of cleaned sentences. The default is None.
-
- Returns
- -------
- sent_scores : dict
- A dictionary of sentence scores with keys as tuples of sentence and
- sentence id and values as their normalized scores.
- cleaned_license_sentences : list
- A list of cleaned sentences.
- definitions : str
- Definitions extracted from license text.
- exceptions : str
- Exceptions extracted from license text.
- summary_len : float
- The proportion of length of the expected summary to the length of
- license text.
-
- """
-
- if not cleaned_license_sentences:
- cleaned_license_text, definitions, exceptions = clean_license_text(license_text)
- cleaned_license_sentences = sent_tokenize_text(cleaned_license_text, debug)
- else:
- definitions, exceptions = "", ""
-
- sent_scores = Counter()
-
- summary_len = math.ceil(summary_len * len(cleaned_license_sentences))
-
- if debug:
- print(f"summary length:{summary_len}")
-
- for sent_i, sent in enumerate(cleaned_license_sentences):
-
- if len(sent.split()) < min_sent_len:
- continue
-
- score = 0
-
- lemmatized_tokens = lemmatize_tokens(sent)
-
- if debug:
- print("-"*50)
- print(f"\nOriginal Sentence = {sent}")
- print(f"\n{sent_i}. Lemmatized_tokens = {lemmatized_tokens}")
-
- word_count = Counter([tok for tok in lemmatized_tokens])
-
- for prop, prop_words in vocab.properties_dict.items():
- prop_score = 0
-
- imp_words = list()
-
- for prop_word in prop_words:
- if prop_word in word_count.keys():
- prop_score += vocab.properties_scores[prop]
- imp_words.append(prop_word)
-
- if debug:
- print(prop, "=", imp_words, "=", prop_score)
-
- score += prop_score
-
- # With normalization
- # sent_scores[(sent, sent_i)] = score / len(lemmatized_tokens)
-
- # Without normalization
- sent_scores[(sent, sent_i)] = score
-
- if debug:
- print(f"Sentence score: {sent_scores[(sent, sent_i)]}")
- print()
-
- sent_scores = normalize_sentence_counter(sent_scores)
-
- if debug:
- print(sent_scores)
-
- return sent_scores, cleaned_license_sentences, definitions, exceptions, summary_len
-
-
-def get_sent_scores(license_text,
- min_sent_len=MIN_SENT_LEN,
- summary_len=SUMMARY_LEN,
- summary_in_text_order=True,
- return_summary_only=True,
- debug=False,
- cleaned_license_sentences=None):
- """
- Get sentence scores for all the sentences in a given license_text along
- with their sentence ids.
-
- Parameters
- ----------
- license_text : str
- License text.
- min_sent_len : int, optional
- The minimum number of tokens in a sentence for it to be considered.
- The default is 3.
- summary_len : float, optional
- The proportion of length of the expected summary to the length of
- license text. The default is 0.3.
- summary_in_text_order : bool, optional
- Toggle to switch between summary in text order or in descending order
- by scores. The default is True.
- return_summary_only : bool, optional
- Toggle to return just the summary or entire license text with
- important sentences highlighted. The default is True.
- debug : bool, optional
- Toggles debug mode. The default is False.
- cleaned_license_sentences : list, optional
- A list of cleaned sentences. The default is None.
-
- Returns
- -------
- sent_id_scores : list(tuple)
- A list of tuples of sentence id and sentence score.
-
- """
- sent_scores, cleaned_license_sentences, definitions, exceptions, summary_len = get_license_summary_scores(
- license_text,
- min_sent_len=min_sent_len,
- summary_len=summary_len,
- summary_in_text_order=summary_in_text_order,
- return_summary_only=return_summary_only,
- debug=debug,
- cleaned_license_sentences=cleaned_license_sentences
- )
-
- sent_id_scores = [
- (sent_i, score) for (sent_id, sent_i), score in sent_scores.items()
- ]
-
- return sent_id_scores
-
-
-def custom_textrank_summarizer(license_text,
- min_sent_len=MIN_SENT_LEN,
- summary_len=SUMMARY_LEN,
- summary_in_text_order=True,
- return_summary_only=True,
- debug=False):
- """
- Returns summary / highlighted summary, definitions and exceptions for a
- given license_text.
-
- Parameters
- ----------
- license_text : str
- License text.
- min_sent_len : int, optional
- The minimum number of tokens in a sentence for it to be considered.
- The default is 3.
- summary_len : float, optional
- The proportion of length of the expected summary to the length of
- license text. The default is 0.3.
- summary_in_text_order : bool, optional
- Toggle to switch between summary in text order or in descending order
- by scores. The default is True.
- return_summary_only : bool, optional
- Toggle to return just the summary or entire license text with
- important sentences highlighted. The default is True.
- debug : bool, optional
- Toggles debug mode. The default is False.
-
- Returns
- -------
- str
- Summary or the highlighted license text.
- definitions : str
- Definitions extracted from license text.
- exceptions : str
- Exceptions extracted from license text.
-
- """
-
- sent_scores, cleaned_license_sentences, definitions, exceptions, summary_len = get_license_summary_scores(
- license_text,
- min_sent_len=min_sent_len,
- summary_len=summary_len,
- summary_in_text_order=summary_in_text_order,
- return_summary_only=return_summary_only,
- debug=debug
- )
-
- sorted_sent_scores = sent_scores.most_common()[:summary_len]
-
- if summary_in_text_order:
- sentences_in_text_order = sorted(sorted_sent_scores, key=lambda x: x[0][1])
- summary = "".join(sent.strip(". ") for (sent, sent_i), score in sentences_in_text_order)
- selected_sent_ids = set(sent_i for (_, sent_i), score in sentences_in_text_order)
- else:
- summary = "".join(sent.strip(". ") for (sent, sent_i), score in sorted_sent_scores)
- selected_sent_ids = set(sent_i for (_, sent_i), score in sorted_sent_scores)
-
- highlighted_license_text = " ".join(
- f"""{sent}"""
- if sent_i in selected_sent_ids
- else sent
- for sent_i, sent in enumerate(cleaned_license_sentences)
- )
-
- if debug:
- print("="*50)
- print("License Text:")
- print("-"*30)
- print(highlighted_license_text)
- print("="*50)
-
- definitions = definitions.strip("\n.") + "."
-
- if return_summary_only:
- return summary, definitions, exceptions
- else:
- return highlighted_license_text, definitions, exceptions
-
-
-def get_system_scores(attachment_id=None):
- """
- Get system sentence scores for all the sentences in all licenses in gold
- standard.
-
- Parameters
- ----------
- attachment_id : str, optional
- The attachment id of the document for which the sentence scores are to
- be calculated. If None, the sentence scores for all the documents will
- be returned. The default is None.
-
- Returns
- -------
- scores_dict : dict
- A dictionary of all the scores with keys as the attachment id of a
- document and values as a list of tuples of sentence id and scores for
- that attachment id.
-
- """
- gold_data = pd.read_excel(GOLD_STANDARD_PATH)
- gold_data = gold_data[["attachment_id", "sentence"]]
- sent_lists = gold_data.groupby("attachment_id")["sentence"].apply(list)
-
- scores_dict = dict()
-
- if attachment_id:
- scores_dict[attachment_id] = get_sent_scores(
- "",
- summary_len=SUMMARY_LEN,
- cleaned_license_sentences=sent_lists[attachment_id]
- )
- return scores_dict
-
- for attachment_id, cleaned_license_sentences in dict(sent_lists).items():
-
- scores_dict[attachment_id] = get_sent_scores(
- "",
- summary_len=SUMMARY_LEN,
- cleaned_license_sentences=cleaned_license_sentences
- )
-
- return scores_dict
-
-
-def preprocess_properties(cell):
- """
-    Converts license properties to title case and removes hyphens and
- underscores.
-
- Parameters
- ----------
- cell : str
- A cell string in properties dataframe of a license.
-
- Returns
- -------
- cell : TYPE
-    cell : str
-        The cell string converted to title case, with hyphens and underscores
-        replaced by spaces.
- """
- try:
- cell = cell.replace("--", "$")
- cell = cell.replace("-", " ")
- cell = cell.replace("_", " ")
- cell = cell.replace("$", " - ").title()
- except AttributeError: # non-string cells (e.g. NaN) pass through unchanged
- pass
- return cell
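-# Example (hypothetical input): preprocess_properties("patent-clause--notice_required")
-# returns "Patent Clause - Notice Required": "--" becomes " - ", "-" and "_" become spaces,
-# and the result is title-cased.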
-
-def get_labels_for_license(license_id, by_license_id=True):
- """
- Gets license properties for a given license_id.
-
- Parameters
- ----------
- license_id : str
- License id of the license for which properties are to be returned.
- by_license_id : bool, optional
- A flag to decide whether we fetch the license properties by license id
- or license name. The default is True.
-
- Returns
- -------
- properties : pandas.DataFrame
- Dataframe with properties of the license with id license_id.
-
- """
- index_col = 0 if by_license_id else 1
- columns = ["Property", "Label"]
- labels_data = pd.read_csv(LABELS_PATH, index_col=index_col)
- properties = pd.DataFrame(labels_data.loc[license_id]).reset_index()
- properties.columns = columns
- properties = properties.applymap(preprocess_properties)
- return properties
\ No newline at end of file
diff --git a/spaces/nikhil5678/turkey-syria-earthquake-tweets/helper.py b/spaces/nikhil5678/turkey-syria-earthquake-tweets/helper.py
deleted file mode 100644
index 7e8b6cb6f6ed12184b7e7b25d1892e7e1be3c680..0000000000000000000000000000000000000000
--- a/spaces/nikhil5678/turkey-syria-earthquake-tweets/helper.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import pandas as pd
-import streamlit as st
-import altair as alt
-import matplotlib.pyplot as plt
-from wordcloud import WordCloud, STOPWORDS
-import seaborn as sns
-import pickle
-import numpy as np
-import cv2
-
-def plot_bar_chart(tweet_df):
- x_name = tweet_df.columns[0]
- y_name = tweet_df.columns[1]
- st.write(alt.Chart(tweet_df).mark_bar().encode(
- x=alt.X(x_name, sort=None),
- y=y_name,
- ))
-
-def plot_line_chart(tweet_df):
- x_name = tweet_df.columns[0]
- y_name = tweet_df.columns[1]
- st.write(alt.Chart(tweet_df).mark_line().encode(
- x=alt.X(x_name, sort=None),
- y=y_name,
- ))
-
-def plot_pie(tweet_df, labels):
- explode = (0, 0.1)
- fig1, ax1 = plt.subplots()
- colors = ("orange", "brown")
- ax1.pie(tweet_df, explode=explode, colors=colors, labels=labels, autopct='%1.1f%%',
- shadow=True, startangle=90)
- ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
-
- st.pyplot(fig1)
-
-def word_cloud(hashtags, col):
- mask = np.array(cv2.imread("twitter.png"))
- stopwords = STOPWORDS
- wc = WordCloud(width=500, height=500, min_font_size=10, background_color='black', stopwords=stopwords, mask=mask)
- if col == 'hashtags':
- df_wc = wc.generate(hashtags[col].str.cat(sep=","))
- else:
- text = str(hashtags[col].values)
- df_wc = wc.generate(text)
- return df_wc
-
-def plot_heatmap():
- table = pickle.load(open('table.pkl', 'rb'))
- fig, ax = plt.subplots(figsize=(9, 6), ncols=1)
-
- sns.heatmap(table, cmap="Greens",
- linewidths=0.5, ax=ax)
- st.pyplot(fig)
-
- # day_df = pd.DataFrame(list(df.groupby('day')['hash_tags']))
- # day_df.columns = ['date', 'hashtags']
-
- # top_hashtags = pd.DataFrame()
- # day_hash_freq = pd.DataFrame()
- # for i in range(len(day_df)):
- # hold = pd.DataFrame(np.hstack(day_df['hashtags'][i])).value_counts().head(15)
- # v1 = hold.index
- # v2 = hold.values
- # v1 = [i[0] for i in v1]
- # v1 = np.array(v1)
- # day_hash_freq = day_hash_freq.append(pd.DataFrame({'date': day_df['date'][i], 'hashtag': v1, 'Frequency': v2}),
- # ignore_index=True)
- # top_hashtags = top_hashtags.append(pd.DataFrame({'hashtag': v1, 'Frequency': v2}), ignore_index=True)
-
- # top_hashtags = top_hashtags.sort_values(by='Frequency', ascending=False, ignore_index=True).head(30)
- # top_hashtags = pd.DataFrame(top_hashtags['hashtag'].unique())
- # top_hashtags.columns = ['hashtag']
-
- # day_hash_freq = day_hash_freq.merge(top_hashtags, on='hashtag').sort_values(by='date', ascending=True)
- # table = day_hash_freq.pivot_table(index='date', columns='hashtag', values='Frequency', aggfunc='sum').fillna(
- # 0).astype('int')
\ No newline at end of file
diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/modeling/__init__.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/modeling/__init__.py
deleted file mode 100644
index 4d949e222b5e94bef7deac65dadf21dd0e466c5d..0000000000000000000000000000000000000000
--- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/modeling/__init__.py
+++ /dev/null
@@ -1,64 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from detectron2.layers import ShapeSpec
-
-from .anchor_generator import build_anchor_generator, ANCHOR_GENERATOR_REGISTRY
-from .backbone import (
- BACKBONE_REGISTRY,
- FPN,
- Backbone,
- ResNet,
- ResNetBlockBase,
- build_backbone,
- build_resnet_backbone,
- make_stage,
- ViT,
- SimpleFeaturePyramid,
- get_vit_lr_decay_rate,
- MViT,
- SwinTransformer,
-)
-from .meta_arch import (
- META_ARCH_REGISTRY,
- SEM_SEG_HEADS_REGISTRY,
- GeneralizedRCNN,
- PanopticFPN,
- ProposalNetwork,
- RetinaNet,
- SemanticSegmentor,
- build_model,
- build_sem_seg_head,
- FCOS,
-)
-from .postprocessing import detector_postprocess
-from .proposal_generator import (
- PROPOSAL_GENERATOR_REGISTRY,
- build_proposal_generator,
- RPN_HEAD_REGISTRY,
- build_rpn_head,
-)
-from .roi_heads import (
- ROI_BOX_HEAD_REGISTRY,
- ROI_HEADS_REGISTRY,
- ROI_KEYPOINT_HEAD_REGISTRY,
- ROI_MASK_HEAD_REGISTRY,
- ROIHeads,
- StandardROIHeads,
- BaseMaskRCNNHead,
- BaseKeypointRCNNHead,
- FastRCNNOutputLayers,
- build_box_head,
- build_keypoint_head,
- build_mask_head,
- build_roi_heads,
-)
-from .test_time_augmentation import DatasetMapperTTA, GeneralizedRCNNWithTTA
-from .mmdet_wrapper import MMDetBackbone, MMDetDetector
-
-_EXCLUDE = {"ShapeSpec"}
-__all__ = [k for k in globals().keys() if k not in _EXCLUDE and not k.startswith("_")]
-
-
-from detectron2.utils.env import fixup_module_metadata
-
-fixup_module_metadata(__name__, globals(), __all__)
-del fixup_module_metadata
diff --git a/spaces/oguzakif/video-object-remover/README.md b/spaces/oguzakif/video-object-remover/README.md
deleted file mode 100644
index 121737496008344b5bd82530b7cce23d7f24d6f8..0000000000000000000000000000000000000000
--- a/spaces/oguzakif/video-object-remover/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Video Object Remover
-emoji: 🌖
-colorFrom: pink
-colorTo: gray
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: apache-2.0
-python_version: 3.9.16
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/oliver2023/chatgpt-on-wechat/plugins/bdunit/__init__.py b/spaces/oliver2023/chatgpt-on-wechat/plugins/bdunit/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/osanseviero/streamlit_1.15/app.py b/spaces/osanseviero/streamlit_1.15/app.py
deleted file mode 100644
index 8979cddb74cd113aad4c6c98523b3a2d1dc4297e..0000000000000000000000000000000000000000
--- a/spaces/osanseviero/streamlit_1.15/app.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import numpy as np
-import pandas as pd
-import streamlit as st
-from datasets import load_dataset
-
-dataset = load_dataset("inria-soda/tabular-benchmark", data_files="reg_cat/house_sales.csv")
-
-st.header("Streamlit 1.15 is now supported in Spaces!")
-st.markdown("""
-Tabs are supported!
-
-You can use tabs with `st.tabs` to have app containers.
-""")
-st.balloons()
-
-with st.sidebar:
- st.text("Sidebars can be resized")
- st.text("with drag and drop!")
-
-tab1, tab2, tab3 = st.tabs(["Fancy charts", "Info components", "Nice dataframes"])
-
-with tab1:
- chart_data = pd.DataFrame(np.random.randn(20, 3), columns=["a", "b", "c"])
- st.line_chart(chart_data)
-
- chart_data = pd.DataFrame(np.random.randn(20, 3), columns=["a", "b", "c"])
- st.area_chart(chart_data)
-
- chart_data = pd.DataFrame(np.random.randn(50, 3), columns=["a", "b", "c"])
- st.bar_chart(chart_data)
-with tab2:
- st.info("Info is redesigned!")
- st.success("Which we love!")
- st.warning("Check the other tabs!")
-with tab3:
- st.info("Dataframes are also supported, look nicer and can be easily expanded!")
- st.dataframe(dataset["train"].to_pandas())
diff --git a/spaces/parseny/youtube_comment_generation/README.md b/spaces/parseny/youtube_comment_generation/README.md
deleted file mode 100644
index cd835efab600c4848c8adf1de95de21a9622c285..0000000000000000000000000000000000000000
--- a/spaces/parseny/youtube_comment_generation/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Youtube comment generation by video link
-emoji: 🌖
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/paulbricman/decontextualizer/Dockerfile b/spaces/paulbricman/decontextualizer/Dockerfile
deleted file mode 100644
index 4c60303e915603e160ebf2f561ddec74920fafa1..0000000000000000000000000000000000000000
--- a/spaces/paulbricman/decontextualizer/Dockerfile
+++ /dev/null
@@ -1,8 +0,0 @@
-FROM python:3.8
-EXPOSE 8501
-WORKDIR /app
-COPY requirements.txt ./requirements.txt
-RUN pip3 install -r requirements.txt
-RUN python -m nltk.downloader punkt
-COPY . .
-CMD streamlit run main.py
\ No newline at end of file
diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/backbones/unet3d.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/backbones/unet3d.py
deleted file mode 100644
index 8ab23b67bead4dc106dfe80d3d6feee3a4844bbd..0000000000000000000000000000000000000000
--- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/backbones/unet3d.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# Used for Models Genesis
-import math
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from backbones.classifier import FracClassifier
-
-
-class ContBatchNorm3d(nn.modules.batchnorm._BatchNorm):
- def _check_input_dim(self, input):
-
- if input.dim() != 5:
- raise ValueError('expected 5D input (got {}D input)'.format(input.dim()))
-
- def forward(self, input):
- self._check_input_dim(input)
- return F.batch_norm(
- input, self.running_mean, self.running_var, self.weight, self.bias,
- True, self.momentum, self.eps)
-
-
-class LUConv(nn.Module):
- def __init__(self, in_chan, out_chan, act):
- super(LUConv, self).__init__()
- self.conv1 = nn.Conv3d(in_chan, out_chan, kernel_size=3, padding=1)
- self.bn1 = ContBatchNorm3d(out_chan)
-
- if act == 'relu':
- self.activation = nn.ReLU(inplace=True) # nn.ReLU takes an inplace flag, not a channel count
- elif act == 'prelu':
- self.activation = nn.PReLU(out_chan)
- elif act == 'elu':
- self.activation = nn.ELU(inplace=True)
- else:
- raise ValueError(f"unsupported activation: {act}")
-
- def forward(self, x):
- out = self.activation(self.bn1(self.conv1(x)))
- return out
-
-
-def _make_nConv(in_channel, depth, act, double_chnnel=False):
- if double_chnnel:
- layer1 = LUConv(in_channel, 32 * (2 ** (depth+1)),act)
- layer2 = LUConv(32 * (2 ** (depth+1)), 32 * (2 ** (depth+1)),act)
- else:
- layer1 = LUConv(in_channel, 32*(2**depth),act)
- layer2 = LUConv(32*(2**depth), 32*(2**depth)*2,act)
-
- return nn.Sequential(layer1,layer2)
-
-class DownTransition(nn.Module):
- def __init__(self, in_channel,depth, act):
- super(DownTransition, self).__init__()
- self.ops = _make_nConv(in_channel, depth,act)
- self.maxpool = nn.MaxPool3d(2)
- self.current_depth = depth
-
- def forward(self, x):
- if self.current_depth == 3:
- out = self.ops(x)
- out_before_pool = out
- else:
- out_before_pool = self.ops(x)
- out = self.maxpool(out_before_pool)
- return out, out_before_pool
-
-class UpTransition(nn.Module):
- def __init__(self, inChans, outChans, depth,act):
- super(UpTransition, self).__init__()
- self.depth = depth
- self.up_conv = nn.ConvTranspose3d(inChans, outChans, kernel_size=2, stride=2)
- self.ops = _make_nConv(inChans+ outChans//2,depth, act, double_chnnel=True)
-
- def forward(self, x, skip_x):
- out_up_conv = self.up_conv(x)
- concat = torch.cat((out_up_conv,skip_x),1)
- out = self.ops(concat)
- return out
-
-
-class OutputTransition(nn.Module):
- def __init__(self, inChans, n_labels):
-
- super(OutputTransition, self).__init__()
- self.final_conv = nn.Conv3d(inChans, n_labels, kernel_size=1)
- #self.sigmoid = nn.Sigmoid()
-
- def forward(self, x):
- out = torch.sigmoid(self.final_conv(x))
- return out
-
-class UNet3D(nn.Module):
- # the number of convolutions in each layer corresponds
- # to what is in the actual prototxt, not the intent
- def __init__(self, input_size, n_class=1, act='relu', in_channels=1):
- super(UNet3D, self).__init__()
-
- self.down_tr64 = DownTransition(in_channels,0,act)
- self.down_tr128 = DownTransition(64,1,act)
- self.down_tr256 = DownTransition(128,2,act)
- self.down_tr512 = DownTransition(256,3,act)
-
- # Classification
- self.classifier = FracClassifier(encoder_channels=512, final_channels=n_class, linear_kernel=int(math.pow(input_size / 32, 3) * 512))
-
- def forward(self, x):
- self.out64, _ = self.down_tr64(x)
- self.out128, _ = self.down_tr128(self.out64)
- self.out256, _ = self.down_tr256(self.out128)
- self.out512, _ = self.down_tr512(self.out256)
-
- self.out = self.classifier(self.out512)
-
- return self.out
\ No newline at end of file
diff --git a/spaces/perezcatriel/data_world_jobs/ML/prediccion_job.py b/spaces/perezcatriel/data_world_jobs/ML/prediccion_job.py
deleted file mode 100644
index fb11ed57badedbd3df7d2dfea472de9e72e698c8..0000000000000000000000000000000000000000
--- a/spaces/perezcatriel/data_world_jobs/ML/prediccion_job.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import datetime
-import time
-
-import altair as alt
-import pandas as pd
-import streamlit as st
-from sklearn.linear_model import LinearRegression
-
-st.set_page_config(page_title="Predicción de nuevos puestos de trabajo",
- page_icon=":bar_chart:", layout="wide")
-
-st.title('Predicción de nuevos puestos de trabajo')
-
-# Load the data
-df = pd.read_csv('/home/catriel/Documents/data_world_jobs/data/ds_salaries.csv')
-
-# Select the relevant columns
-df_relevant = df[['job_title', 'work_year']]
-
-# Convert the work_year column into a date type in the date column
-df_relevant['date'] = pd.to_datetime(df_relevant['work_year'], format='%Y')
-
-# Add a column with the creation year
-df_relevant['year'] = pd.DatetimeIndex(df_relevant['date']).year
-
-# Count the number of job_title created per year
-job_title_count = df_relevant.groupby('year').count()['job_title']
-
-# Create a dataframe with the number of job_title created per year
-df_job_title_count = pd.DataFrame(
- {'year': job_title_count.index, 'job_title_count': job_title_count.values})
-
-# Create a linear regression model
-model = LinearRegression()
-
-# Train the model on the historical data
-X = df_job_title_count[['year']]
-y = df_job_title_count['job_title_count']
-model.fit(X, y)
-
-# Get the current year
-current_year = datetime.datetime.now().year
-
-# Predict the number of new job_title that will be created this year
-current_year_input = st.number_input('Ingresa un año:', value=current_year,
- min_value=current_year,
- max_value=2050, step=1)
-if current_year_input < current_year:
- st.warning('Solo se pueden hacer predicciones para años futuros.')
- current_year_input = current_year
- st.write('Se usará el año actual:', current_year_input)
-
-with st.spinner('Prediciendo...'):
- time.sleep(1)
- job_title_count_pred = model.predict([[current_year_input]])
-
-# Get the last year in the dataset
-last_year = df_job_title_count['year'].max()
-last_year_count = \
- df_job_title_count.loc[df_job_title_count['year'] == last_year][
- 'job_title_count'].values[0]
-
-# Show the results
-st.write(
- "Se crearán aproximadamente **{}** nuevos puestos de trabajo este año **{}**.".format(
- int(job_title_count_pred), current_year_input))
-percentage_change = (
- job_title_count_pred - last_year_count) / last_year_count * 100
-percentage_change = float(percentage_change)
-if percentage_change >= 0:
- st.write(
- "Esto representa un aumento del {:.2f}% con respecto al año {}.".format(
- percentage_change, last_year))
-else:
- st.write(
- "Esto representa una disminución del {:.2f}% con respecto al año {}".format(
- abs(percentage_change), last_year))
-
-# Create a line chart
-line_chart = alt.Chart(df_job_title_count).mark_line().encode(
- x='year',
- y='job_title_count'
-).properties(
- title='Cantidad de nuevos puestos de trabajo por año',
- width=700,
- height=400
-).configure_axis(
- labelFontSize=14,
- titleFontSize=16
-)
-
-# Create a point to show the predicted value
-point = alt.Chart(df_job_title_count.iloc[-1:]).mark_point(color='#5c62ac').encode(
- x='year',
- y='job_title_count'
-)
-
-# Show the updated chart with the predicted value for the entered year
-# st.altair_chart(line_chart, use_container_width=True)
-
-
-# Create a button to plot the prediction
-if st.button('Mostrar gráfico de predicción'):
- # Create a dataframe with the years and the predictions
- years = list(range(last_year, current_year + current_year_input - 2000))
- predictions = model.predict([[year] for year in years])
- df_predictions = pd.DataFrame(
- {'year': years, 'job_title_count_pred': predictions})
-
- # Create a line chart
- line_chart = alt.Chart(df_predictions).mark_line().encode(
- x='year',
- y='job_title_count_pred'
- ).properties(
- width=1200,
- height=600
- )
-
- # Add a layer with a red point at the predicted value for the current year
- current_year_pred = int(model.predict([[current_year_input]])[0])
- point_chart = alt.Chart(pd.DataFrame(
- {'x': [current_year_input], 'y': [current_year_pred]})).mark_point(
- color='#5c62ac',
- size=300,
- stroke='#5c62ac',
- strokeWidth=5).encode(
- x='x',
- y='y'
- )
-
- # Show the chart with the extra red-point layer
- st.altair_chart(line_chart + point_chart)
\ No newline at end of file
diff --git a/spaces/pknez/face-swap-docker/chain_img_processor/__init__.py b/spaces/pknez/face-swap-docker/chain_img_processor/__init__.py
deleted file mode 100644
index f8841b3954c11071f2596b9851fa3edfac4413d0..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/chain_img_processor/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .image import ChainImgProcessor, ChainImgPlugin, get_single_image_processor, version
-from .video import ChainVideoProcessor, get_single_video_processor
-from .batchimage import ChainBatchImageProcessor
-from .ffmpeg_writer import FFMPEG_VideoWriter
\ No newline at end of file
diff --git a/spaces/prerna9811/Chord/portaudio/qa/loopback/src/write_wav.c b/spaces/prerna9811/Chord/portaudio/qa/loopback/src/write_wav.c
deleted file mode 100644
index aa5ee2146af1a83467b6dd69fcced05014832f76..0000000000000000000000000000000000000000
--- a/spaces/prerna9811/Chord/portaudio/qa/loopback/src/write_wav.c
+++ /dev/null
@@ -1,242 +0,0 @@
-/*
- * PortAudio Portable Real-Time Audio Library
- * Latest Version at: http://www.portaudio.com
- *
- * Copyright (c) 1999-2010 Phil Burk and Ross Bencina
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-/**
- * Very simple WAV file writer for saving captured audio.
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include "write_wav.h"
-
-
-/* Write long word data to a little endian format byte array. */
-static void WriteLongLE( unsigned char **addrPtr, unsigned long data )
-{
- unsigned char *addr = *addrPtr;
- *addr++ = (unsigned char) data;
- *addr++ = (unsigned char) (data>>8);
- *addr++ = (unsigned char) (data>>16);
- *addr++ = (unsigned char) (data>>24);
- *addrPtr = addr;
-}
-
-/* Write short word data to a little endian format byte array. */
-static void WriteShortLE( unsigned char **addrPtr, unsigned short data )
-{
- unsigned char *addr = *addrPtr;
- *addr++ = (unsigned char) data;
- *addr++ = (unsigned char) (data>>8);
- *addrPtr = addr;
-}
-
-/* Write IFF ChunkType data to a byte array. */
-static void WriteChunkType( unsigned char **addrPtr, unsigned long cktyp )
-{
- unsigned char *addr = *addrPtr;
- *addr++ = (unsigned char) (cktyp>>24);
- *addr++ = (unsigned char) (cktyp>>16);
- *addr++ = (unsigned char) (cktyp>>8);
- *addr++ = (unsigned char) cktyp;
- *addrPtr = addr;
-}
-
-#define WAV_HEADER_SIZE (4 + 4 + 4 + /* RIFF+size+WAVE */ \
- 4 + 4 + 16 + /* fmt chunk */ \
- 4 + 4 ) /* data chunk */
-
-
-/*********************************************************************************
- * Open named file and write WAV header to the file.
- * The header includes the DATA chunk type and size.
- * Returns number of bytes written to file or negative error code.
- */
-long Audio_WAV_OpenWriter( WAV_Writer *writer, const char *fileName, int frameRate, int samplesPerFrame )
-{
- unsigned int bytesPerSecond;
- unsigned char header[ WAV_HEADER_SIZE ];
- unsigned char *addr = header;
- int numWritten;
-
- writer->dataSize = 0;
- writer->dataSizeOffset = 0;
-
- writer->fid = fopen( fileName, "wb" );
- if( writer->fid == NULL )
- {
- return -1;
- }
-
-/* Write RIFF header. */
- WriteChunkType( &addr, RIFF_ID );
-
-/* Write RIFF size as zero for now. Will patch later. */
- WriteLongLE( &addr, 0 );
-
-/* Write WAVE form ID. */
- WriteChunkType( &addr, WAVE_ID );
-
-/* Write format chunk based on AudioSample structure. */
- WriteChunkType( &addr, FMT_ID );
- WriteLongLE( &addr, 16 );
- WriteShortLE( &addr, WAVE_FORMAT_PCM );
- bytesPerSecond = frameRate * samplesPerFrame * sizeof( short);
- WriteShortLE( &addr, (short) samplesPerFrame );
- WriteLongLE( &addr, frameRate );
- WriteLongLE( &addr, bytesPerSecond );
- WriteShortLE( &addr, (short) (samplesPerFrame * sizeof( short)) ); /* bytesPerBlock */
- WriteShortLE( &addr, (short) 16 ); /* bits per sample */
-
-/* Write ID and size for 'data' chunk. */
- WriteChunkType( &addr, DATA_ID );
-/* Save offset so we can patch it later. */
- writer->dataSizeOffset = (int) (addr - header);
- WriteLongLE( &addr, 0 );
-
- numWritten = fwrite( header, 1, sizeof(header), writer->fid );
- if( numWritten != sizeof(header) ) return -1;
-
- return (int) numWritten;
-}
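-/* Usage sketch (illustrative values and caller-side variables, not part of the original file):
- * WAV_Writer writer;
- * if( Audio_WAV_OpenWriter( &writer, "capture.wav", 44100, 1 ) < 0 ) return -1;
- * Audio_WAV_WriteShorts( &writer, samples, numSamples );
- * Audio_WAV_CloseWriter( &writer );
- */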
-
-/*********************************************************************************
- * Write to the data chunk portion of a WAV file.
- * Returns bytes written or negative error code.
- */
-long Audio_WAV_WriteShorts( WAV_Writer *writer,
- short *samples,
- int numSamples
- )
-{
- unsigned char buffer[2];
- unsigned char *bufferPtr;
- int i;
- short *p = samples;
- int numWritten;
- int bytesWritten;
- if( numSamples <= 0 )
- {
- return -1;
- }
-
- for( i=0; i<numSamples; i++ )
- {
- bufferPtr = buffer;
- WriteShortLE( &bufferPtr, *p++ );
- numWritten = fwrite( buffer, 1, sizeof(buffer), writer->fid );
- if( numWritten != sizeof(buffer) ) return -1;
- }
- bytesWritten = numSamples * sizeof(short);
- writer->dataSize += bytesWritten;
- return (int) bytesWritten;
-}
-
-/*********************************************************************************
- * Close WAV file.
- * Update chunk sizes so it can be read by audio applications.
- */
-long Audio_WAV_CloseWriter( WAV_Writer *writer )
-{
- unsigned char buffer[4];
- unsigned char *bufferPtr;
- int numWritten;
- int riffSize;
-
- /* Go back to beginning of file and update DATA size */
- int result = fseek( writer->fid, writer->dataSizeOffset, SEEK_SET );
- if( result < 0 ) return result;
-
- bufferPtr = buffer;
- WriteLongLE( &bufferPtr, writer->dataSize );
- numWritten = fwrite( buffer, 1, sizeof( buffer), writer->fid );
- if( numWritten != sizeof(buffer) ) return -1;
-
- /* Update RIFF size */
- result = fseek( writer->fid, 4, SEEK_SET );
- if( result < 0 ) return result;
-
- riffSize = writer->dataSize + (WAV_HEADER_SIZE - 8);
- bufferPtr = buffer;
- WriteLongLE( &bufferPtr, riffSize );
- numWritten = fwrite( buffer, 1, sizeof( buffer), writer->fid );
- if( numWritten != sizeof(buffer) ) return -1;
-
- fclose( writer->fid );
- writer->fid = NULL;
- return writer->dataSize;
-}
-
-/*********************************************************************************
- * Simple test that write a sawtooth waveform to a file.
- */
-#if 0
-int main( void )
-{
- int i;
- WAV_Writer writer;
- int result;
-#define NUM_SAMPLES (200)
- short data[NUM_SAMPLES];
- short saw = 0;
-
- for( i=0; i<NUM_SAMPLES; i++ )
- async def receive(self, max_bytes: int = 65536) -> bytes:
- return await self.receive_stream.receive(max_bytes)
-
- async def send(self, item: bytes) -> None:
- await self.send_stream.send(item)
-
- async def send_eof(self) -> None:
- await self.send_stream.aclose()
-
- async def aclose(self) -> None:
- await self.send_stream.aclose()
- await self.receive_stream.aclose()
-
- @property
- def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
- return {
- **self.send_stream.extra_attributes,
- **self.receive_stream.extra_attributes,
- }
-
-
-@dataclass(eq=False)
-class StapledObjectStream(Generic[T_Item], ObjectStream[T_Item]):
- """
- Combines two object streams into a single, bidirectional object stream.
-
- Extra attributes will be provided from both streams, with the receive stream providing the
- values in case of a conflict.
-
- :param ObjectSendStream send_stream: the sending object stream
- :param ObjectReceiveStream receive_stream: the receiving object stream
- """
-
- send_stream: ObjectSendStream[T_Item]
- receive_stream: ObjectReceiveStream[T_Item]
-
- async def receive(self) -> T_Item:
- return await self.receive_stream.receive()
-
- async def send(self, item: T_Item) -> None:
- await self.send_stream.send(item)
-
- async def send_eof(self) -> None:
- await self.send_stream.aclose()
-
- async def aclose(self) -> None:
- await self.send_stream.aclose()
- await self.receive_stream.aclose()
-
- @property
- def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
- return {
- **self.send_stream.extra_attributes,
- **self.receive_stream.extra_attributes,
- }
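-# Usage sketch (assumes anyio's memory object streams, defined elsewhere): stapling both ends of
-# one buffered channel yields a stream that echoes back whatever is sent into it, e.g.
-#   send, receive = anyio.create_memory_object_stream(1)
-#   echo = StapledObjectStream(send, receive)
-#   await echo.send("ping"); assert await echo.receive() == "ping"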
-
-
-@dataclass(eq=False)
-class MultiListener(Generic[T_Stream], Listener[T_Stream]):
- """
- Combines multiple listeners into one, serving connections from all of them at once.
-
- Any MultiListeners in the given collection of listeners will have their listeners moved into
- this one.
-
- Extra attributes are provided from each listener, with each successive listener overriding any
- conflicting attributes from the previous one.
-
- :param listeners: listeners to serve
- :type listeners: Sequence[Listener[T_Stream]]
- """
-
- listeners: Sequence[Listener[T_Stream]]
-
- def __post_init__(self) -> None:
- listeners: list[Listener[T_Stream]] = []
- for listener in self.listeners:
- if isinstance(listener, MultiListener):
- listeners.extend(listener.listeners)
- del listener.listeners[:] # type: ignore[attr-defined]
- else:
- listeners.append(listener)
-
- self.listeners = listeners
-
- async def serve(
- self, handler: Callable[[T_Stream], Any], task_group: TaskGroup | None = None
- ) -> None:
- from .. import create_task_group
-
- async with create_task_group() as tg:
- for listener in self.listeners:
- tg.start_soon(listener.serve, handler, task_group)
-
- async def aclose(self) -> None:
- for listener in self.listeners:
- await listener.aclose()
-
- @property
- def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
- attributes: dict = {}
- for listener in self.listeners:
- attributes.update(listener.extra_attributes)
-
- return attributes
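-# Usage sketch (assumes anyio's TCP listener API and a caller-defined `handle` callback):
-#   listeners = [await anyio.create_tcp_listener(local_port=8080),
-#                await anyio.create_tcp_listener(local_port=8443)]
-#   await MultiListener(listeners).serve(handle)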
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/_space_api.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/_space_api.py
deleted file mode 100644
index ce07fca09891c436d977804596bade3f22e398a4..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/_space_api.py
+++ /dev/null
@@ -1,151 +0,0 @@
-# coding=utf-8
-# Copyright 2019-present, the HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from dataclasses import dataclass
-from datetime import datetime
-from enum import Enum
-from typing import Dict, Optional
-
-from huggingface_hub.utils import parse_datetime
-
-
-class SpaceStage(str, Enum):
- """
- Enumeration of possible stage of a Space on the Hub.
-
- Value can be compared to a string:
- ```py
- assert SpaceStage.BUILDING == "BUILDING"
- ```
-
- Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceInfo.ts#L61 (private url).
- """
-
- # Copied from moon-landing > server > repo_types > SpaceInfo.ts (private repo)
- NO_APP_FILE = "NO_APP_FILE"
- CONFIG_ERROR = "CONFIG_ERROR"
- BUILDING = "BUILDING"
- BUILD_ERROR = "BUILD_ERROR"
- RUNNING = "RUNNING"
- RUNNING_BUILDING = "RUNNING_BUILDING"
- RUNTIME_ERROR = "RUNTIME_ERROR"
- DELETING = "DELETING"
- STOPPED = "STOPPED"
- PAUSED = "PAUSED"
-
-
-class SpaceHardware(str, Enum):
- """
- Enumeration of hardwares available to run your Space on the Hub.
-
- Value can be compared to a string:
- ```py
- assert SpaceHardware.CPU_BASIC == "cpu-basic"
- ```
-
- Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceInfo.ts#L73 (private url).
- """
-
- CPU_BASIC = "cpu-basic"
- CPU_UPGRADE = "cpu-upgrade"
- T4_SMALL = "t4-small"
- T4_MEDIUM = "t4-medium"
- A10G_SMALL = "a10g-small"
- A10G_LARGE = "a10g-large"
- A100_LARGE = "a100-large"
-
-
-class SpaceStorage(str, Enum):
- """
- Enumeration of persistent storage available for your Space on the Hub.
-
- Value can be compared to a string:
- ```py
- assert SpaceStorage.SMALL == "small"
- ```
-
- Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceHardwareFlavor.ts#L24 (private url).
- """
-
- SMALL = "small"
- MEDIUM = "medium"
- LARGE = "large"
-
-
-@dataclass
-class SpaceRuntime:
- """
- Contains information about the current runtime of a Space.
-
- Args:
- stage (`str`):
- Current stage of the space. Example: RUNNING.
- hardware (`str` or `None`):
- Current hardware of the space. Example: "cpu-basic". Can be `None` if Space
- is `BUILDING` for the first time.
- requested_hardware (`str` or `None`):
- Requested hardware. Can be different than `hardware` especially if the request
- has just been made. Example: "t4-medium". Can be `None` if no hardware has
- been requested yet.
- sleep_time (`int` or `None`):
- Number of seconds the Space will be kept alive after the last request. By default (if value is `None`), the
- Space will never go to sleep if it's running on an upgraded hardware, while it will go to sleep after 48
- hours on a free 'cpu-basic' hardware. For more details, see https://huggingface.co/docs/hub/spaces-gpus#sleep-time.
- raw (`dict`):
- Raw response from the server. Contains more information about the Space
- runtime like number of replicas, number of cpu, memory size,...
- """
-
- stage: SpaceStage
- hardware: Optional[SpaceHardware]
- requested_hardware: Optional[SpaceHardware]
- sleep_time: Optional[int]
- storage: Optional[SpaceStorage]
- raw: Dict
-
- def __init__(self, data: Dict) -> None:
- self.stage = data["stage"]
- self.hardware = data["hardware"]["current"]
- self.requested_hardware = data["hardware"]["requested"]
- self.sleep_time = data["gcTimeout"]
- self.storage = data["storage"]
- self.raw = data
-
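- # Illustrative payload only (values are made up): SpaceRuntime expects a mapping shaped like
- # {"stage": "RUNNING", "hardware": {"current": "cpu-basic", "requested": None},
- # "gcTimeout": None, "storage": None}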
-
-@dataclass
-class SpaceVariable:
- """
- Contains information about the current variables of a Space.
-
- Args:
- key (`str`):
- Variable key. Example: `"MODEL_REPO_ID"`
- value (`str`):
- Variable value. Example: `"the_model_repo_id"`.
- description (`str` or None):
- Description of the variable. Example: `"Model Repo ID of the implemented model"`.
- updatedAt (`datetime`):
- datetime of the last update of the variable.
- """
-
- key: str
- value: str
- description: Optional[str]
- updated_at: datetime
-
- def __init__(self, key: str, values: Dict) -> None:
- self.key = key
- self.value = values["value"]
- self.description = values.get("description")
- self.updated_at = parse_datetime(values["updatedAt"])
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/jsonschema/tests/test_utils.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/jsonschema/tests/test_utils.py
deleted file mode 100644
index 4e542b9628d2572c3f43da40c46b2a0b13ac7421..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/jsonschema/tests/test_utils.py
+++ /dev/null
@@ -1,124 +0,0 @@
-from unittest import TestCase
-
-from jsonschema._utils import equal
-
-
-class TestEqual(TestCase):
- def test_none(self):
- self.assertTrue(equal(None, None))
-
-
-class TestDictEqual(TestCase):
- def test_equal_dictionaries(self):
- dict_1 = {"a": "b", "c": "d"}
- dict_2 = {"c": "d", "a": "b"}
- self.assertTrue(equal(dict_1, dict_2))
-
- def test_missing_key(self):
- dict_1 = {"a": "b", "c": "d"}
- dict_2 = {"c": "d", "x": "b"}
- self.assertFalse(equal(dict_1, dict_2))
-
- def test_additional_key(self):
- dict_1 = {"a": "b", "c": "d"}
- dict_2 = {"c": "d", "a": "b", "x": "x"}
- self.assertFalse(equal(dict_1, dict_2))
-
- def test_missing_value(self):
- dict_1 = {"a": "b", "c": "d"}
- dict_2 = {"c": "d", "a": "x"}
- self.assertFalse(equal(dict_1, dict_2))
-
- def test_empty_dictionaries(self):
- dict_1 = {}
- dict_2 = {}
- self.assertTrue(equal(dict_1, dict_2))
-
- def test_one_none(self):
- dict_1 = None
- dict_2 = {"a": "b", "c": "d"}
- self.assertFalse(equal(dict_1, dict_2))
-
- def test_same_item(self):
- dict_1 = {"a": "b", "c": "d"}
- self.assertTrue(equal(dict_1, dict_1))
-
- def test_nested_equal(self):
- dict_1 = {"a": {"a": "b", "c": "d"}, "c": "d"}
- dict_2 = {"c": "d", "a": {"a": "b", "c": "d"}}
- self.assertTrue(equal(dict_1, dict_2))
-
- def test_nested_dict_unequal(self):
- dict_1 = {"a": {"a": "b", "c": "d"}, "c": "d"}
- dict_2 = {"c": "d", "a": {"a": "b", "c": "x"}}
- self.assertFalse(equal(dict_1, dict_2))
-
- def test_mixed_nested_equal(self):
- dict_1 = {"a": ["a", "b", "c", "d"], "c": "d"}
- dict_2 = {"c": "d", "a": ["a", "b", "c", "d"]}
- self.assertTrue(equal(dict_1, dict_2))
-
- def test_nested_list_unequal(self):
- dict_1 = {"a": ["a", "b", "c", "d"], "c": "d"}
- dict_2 = {"c": "d", "a": ["b", "c", "d", "a"]}
- self.assertFalse(equal(dict_1, dict_2))
-
-
-class TestListEqual(TestCase):
- def test_equal_lists(self):
- list_1 = ["a", "b", "c"]
- list_2 = ["a", "b", "c"]
- self.assertTrue(equal(list_1, list_2))
-
- def test_unsorted_lists(self):
- list_1 = ["a", "b", "c"]
- list_2 = ["b", "b", "a"]
- self.assertFalse(equal(list_1, list_2))
-
- def test_first_list_larger(self):
- list_1 = ["a", "b", "c"]
- list_2 = ["a", "b"]
- self.assertFalse(equal(list_1, list_2))
-
- def test_second_list_larger(self):
- list_1 = ["a", "b"]
- list_2 = ["a", "b", "c"]
- self.assertFalse(equal(list_1, list_2))
-
- def test_list_with_none_unequal(self):
- list_1 = ["a", "b", None]
- list_2 = ["a", "b", "c"]
- self.assertFalse(equal(list_1, list_2))
-
- list_1 = ["a", "b", None]
- list_2 = [None, "b", "c"]
- self.assertFalse(equal(list_1, list_2))
-
- def test_list_with_none_equal(self):
- list_1 = ["a", None, "c"]
- list_2 = ["a", None, "c"]
- self.assertTrue(equal(list_1, list_2))
-
- def test_empty_list(self):
- list_1 = []
- list_2 = []
- self.assertTrue(equal(list_1, list_2))
-
- def test_one_none(self):
- list_1 = None
- list_2 = []
- self.assertFalse(equal(list_1, list_2))
-
- def test_same_list(self):
- list_1 = ["a", "b", "c"]
- self.assertTrue(equal(list_1, list_1))
-
- def test_equal_nested_lists(self):
- list_1 = ["a", ["b", "c"], "d"]
- list_2 = ["a", ["b", "c"], "d"]
- self.assertTrue(equal(list_1, list_2))
-
- def test_unequal_nested_lists(self):
- list_1 = ["a", ["b", "c"], "d"]
- list_2 = ["a", [], "c"]
- self.assertFalse(equal(list_1, list_2))
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/font_manager.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/font_manager.py
deleted file mode 100644
index a91ca4ba45df883387e2319154c2a31bceeb37e0..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/font_manager.py
+++ /dev/null
@@ -1,1584 +0,0 @@
-"""
-A module for finding, managing, and using fonts across platforms.
-
-This module provides a single `FontManager` instance, ``fontManager``, that can
-be shared across backends and platforms. The `findfont`
-function returns the best TrueType (TTF) font file in the local or
-system font path that matches the specified `FontProperties`
-instance. The `FontManager` also handles Adobe Font Metrics
-(AFM) font files for use by the PostScript backend.
-The `FontManager.addfont` function adds a custom font from a file without
-installing it into your operating system.
-
-The design is based on the `W3C Cascading Style Sheet, Level 1 (CSS1)
-font specification `_.
-Future versions may implement the Level 2 or 2.1 specifications.
-"""
-
-# KNOWN ISSUES
-#
-# - documentation
-# - font variant is untested
-# - font stretch is incomplete
-# - font size is incomplete
-# - default font algorithm needs improvement and testing
-# - setWeights function needs improvement
-# - 'light' is an invalid weight value, remove it.
-
-from base64 import b64encode
-from collections import namedtuple
-import copy
-import dataclasses
-from functools import lru_cache
-from io import BytesIO
-import json
-import logging
-from numbers import Number
-import os
-from pathlib import Path
-import re
-import subprocess
-import sys
-import threading
-from typing import Union
-
-import matplotlib as mpl
-from matplotlib import _api, _afm, cbook, ft2font
-from matplotlib._fontconfig_pattern import (
- parse_fontconfig_pattern, generate_fontconfig_pattern)
-from matplotlib.rcsetup import _validators
-
-_log = logging.getLogger(__name__)
-
-font_scalings = {
- 'xx-small': 0.579,
- 'x-small': 0.694,
- 'small': 0.833,
- 'medium': 1.0,
- 'large': 1.200,
- 'x-large': 1.440,
- 'xx-large': 1.728,
- 'larger': 1.2,
- 'smaller': 0.833,
- None: 1.0,
-}
-stretch_dict = {
- 'ultra-condensed': 100,
- 'extra-condensed': 200,
- 'condensed': 300,
- 'semi-condensed': 400,
- 'normal': 500,
- 'semi-expanded': 600,
- 'semi-extended': 600,
- 'expanded': 700,
- 'extended': 700,
- 'extra-expanded': 800,
- 'extra-extended': 800,
- 'ultra-expanded': 900,
- 'ultra-extended': 900,
-}
-weight_dict = {
- 'ultralight': 100,
- 'light': 200,
- 'normal': 400,
- 'regular': 400,
- 'book': 400,
- 'medium': 500,
- 'roman': 500,
- 'semibold': 600,
- 'demibold': 600,
- 'demi': 600,
- 'bold': 700,
- 'heavy': 800,
- 'extra bold': 800,
- 'black': 900,
-}
-_weight_regexes = [
- # From fontconfig's FcFreeTypeQueryFaceInternal; not the same as
- # weight_dict!
- ("thin", 100),
- ("extralight", 200),
- ("ultralight", 200),
- ("demilight", 350),
- ("semilight", 350),
- ("light", 300), # Needs to come *after* demi/semilight!
- ("book", 380),
- ("regular", 400),
- ("normal", 400),
- ("medium", 500),
- ("demibold", 600),
- ("demi", 600),
- ("semibold", 600),
- ("extrabold", 800),
- ("superbold", 800),
- ("ultrabold", 800),
- ("bold", 700), # Needs to come *after* extra/super/ultrabold!
- ("ultrablack", 1000),
- ("superblack", 1000),
- ("extrablack", 1000),
- (r"\bultra", 1000),
- ("black", 900), # Needs to come *after* ultra/super/extrablack!
- ("heavy", 900),
-]
-font_family_aliases = {
- 'serif',
- 'sans-serif',
- 'sans serif',
- 'cursive',
- 'fantasy',
- 'monospace',
- 'sans',
-}
-
-_ExceptionProxy = namedtuple('_ExceptionProxy', ['klass', 'message'])
-
-# OS Font paths
-try:
- _HOME = Path.home()
-except Exception: # Exceptions thrown by home() are not specified...
- _HOME = Path(os.devnull) # Just an arbitrary path with no children.
-MSFolders = \
- r'Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders'
-MSFontDirectories = [
- r'SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts',
- r'SOFTWARE\Microsoft\Windows\CurrentVersion\Fonts']
-MSUserFontDirectories = [
- str(_HOME / 'AppData/Local/Microsoft/Windows/Fonts'),
- str(_HOME / 'AppData/Roaming/Microsoft/Windows/Fonts'),
-]
-X11FontDirectories = [
- # an old standard installation point
- "/usr/X11R6/lib/X11/fonts/TTF/",
- "/usr/X11/lib/X11/fonts",
- # here is the new standard location for fonts
- "/usr/share/fonts/",
- # documented as a good place to install new fonts
- "/usr/local/share/fonts/",
- # common application, not really useful
- "/usr/lib/openoffice/share/fonts/truetype/",
- # user fonts
- str((Path(os.environ.get('XDG_DATA_HOME') or _HOME / ".local/share"))
- / "fonts"),
- str(_HOME / ".fonts"),
-]
-OSXFontDirectories = [
- "/Library/Fonts/",
- "/Network/Library/Fonts/",
- "/System/Library/Fonts/",
- # fonts installed via MacPorts
- "/opt/local/share/fonts",
- # user fonts
- str(_HOME / "Library/Fonts"),
-]
-
-
-def get_fontext_synonyms(fontext):
- """
- Return a list of file extensions that are synonyms for
- the given file extension *fileext*.
- the given file extension *fontext*.
- return {
- 'afm': ['afm'],
- 'otf': ['otf', 'ttc', 'ttf'],
- 'ttc': ['otf', 'ttc', 'ttf'],
- 'ttf': ['otf', 'ttc', 'ttf'],
- }[fontext]
-
-
-def list_fonts(directory, extensions):
- """
- Return a list of all fonts matching any of the extensions, found
- recursively under the directory.
- """
- extensions = ["." + ext for ext in extensions]
- return [os.path.join(dirpath, filename)
- # os.walk ignores access errors, unlike Path.glob.
- for dirpath, _, filenames in os.walk(directory)
- for filename in filenames
- if Path(filename).suffix.lower() in extensions]
-
-
-def win32FontDirectory():
- r"""
- Return the user-specified font directory for Win32. This is
- looked up from the registry key ::
-
- \\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders\Fonts
-
- If the key is not found, ``%WINDIR%\Fonts`` will be returned.
- """
- import winreg
- try:
- with winreg.OpenKey(winreg.HKEY_CURRENT_USER, MSFolders) as user:
- return winreg.QueryValueEx(user, 'Fonts')[0]
- except OSError:
- return os.path.join(os.environ['WINDIR'], 'Fonts')
-
-
-def _get_win32_installed_fonts():
- """List the font paths known to the Windows registry."""
- import winreg
- items = set()
- # Search and resolve fonts listed in the registry.
- for domain, base_dirs in [
- (winreg.HKEY_LOCAL_MACHINE, [win32FontDirectory()]), # System.
- (winreg.HKEY_CURRENT_USER, MSUserFontDirectories), # User.
- ]:
- for base_dir in base_dirs:
- for reg_path in MSFontDirectories:
- try:
- with winreg.OpenKey(domain, reg_path) as local:
- for j in range(winreg.QueryInfoKey(local)[1]):
- # value may contain the filename of the font or its
- # absolute path.
- key, value, tp = winreg.EnumValue(local, j)
- if not isinstance(value, str):
- continue
- try:
- # If value contains already an absolute path,
- # then it is not changed further.
- path = Path(base_dir, value).resolve()
- except RuntimeError:
- # Don't fail with invalid entries.
- continue
- items.add(path)
- except (OSError, MemoryError):
- continue
- return items
-
-
-@lru_cache
-def _get_fontconfig_fonts():
- """Cache and list the font paths known to ``fc-list``."""
- try:
- if b'--format' not in subprocess.check_output(['fc-list', '--help']):
- _log.warning( # fontconfig 2.7 implemented --format.
- 'Matplotlib needs fontconfig>=2.7 to query system fonts.')
- return []
- out = subprocess.check_output(['fc-list', '--format=%{file}\\n'])
- except (OSError, subprocess.CalledProcessError):
- return []
- return [Path(os.fsdecode(fname)) for fname in out.split(b'\n')]
-
-
-def findSystemFonts(fontpaths=None, fontext='ttf'):
- """
- Search for fonts in the specified font paths. If no paths are
- given, will use a standard set of system paths, as well as the
- list of fonts tracked by fontconfig if fontconfig is installed and
- available. A list of TrueType fonts are returned by default with
- AFM fonts as an option.
- """
- fontfiles = set()
- fontexts = get_fontext_synonyms(fontext)
-
- if fontpaths is None:
- if sys.platform == 'win32':
- installed_fonts = _get_win32_installed_fonts()
- fontpaths = []
- else:
- installed_fonts = _get_fontconfig_fonts()
- if sys.platform == 'darwin':
- fontpaths = [*X11FontDirectories, *OSXFontDirectories]
- else:
- fontpaths = X11FontDirectories
- fontfiles.update(str(path) for path in installed_fonts
- if path.suffix.lower()[1:] in fontexts)
-
- elif isinstance(fontpaths, str):
- fontpaths = [fontpaths]
-
- for path in fontpaths:
- fontfiles.update(map(os.path.abspath, list_fonts(path, fontexts)))
-
- return [fname for fname in fontfiles if os.path.exists(fname)]
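-# Example (illustrative): findSystemFonts(fontext="afm") lists AFM files found in the standard
-# system font directories, while findSystemFonts(["/custom/fonts"]) searches only that directory.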
-
-
-def _fontentry_helper_repr_png(fontent):
- from matplotlib.figure import Figure # Circular import.
- fig = Figure()
- font_path = Path(fontent.fname) if fontent.fname != '' else None
- fig.text(0, 0, fontent.name, font=font_path)
- with BytesIO() as buf:
- fig.savefig(buf, bbox_inches='tight', transparent=True)
- return buf.getvalue()
-
-
-def _fontentry_helper_repr_html(fontent):
- png_stream = _fontentry_helper_repr_png(fontent)
- png_b64 = b64encode(png_stream).decode()
- return f'<img src="data:image/png;base64, {png_b64}" />'
-
-
-FontEntry = dataclasses.make_dataclass(
- 'FontEntry', [
- ('fname', str, dataclasses.field(default='')),
- ('name', str, dataclasses.field(default='')),
- ('style', str, dataclasses.field(default='normal')),
- ('variant', str, dataclasses.field(default='normal')),
- ('weight', Union[str, int], dataclasses.field(default='normal')),
- ('stretch', str, dataclasses.field(default='normal')),
- ('size', str, dataclasses.field(default='medium')),
- ],
- namespace={
- '__doc__': """
- A class for storing Font properties.
-
- It is used when populating the font lookup dictionary.
- """,
- '_repr_html_': lambda self: _fontentry_helper_repr_html(self),
- '_repr_png_': lambda self: _fontentry_helper_repr_png(self),
- }
-)
-
-
-def ttfFontProperty(font):
- """
- Extract information from a TrueType font file.
-
- Parameters
- ----------
- font : `.FT2Font`
- The TrueType font file from which information will be extracted.
-
- Returns
- -------
- `FontEntry`
- The extracted font properties.
-
- """
- name = font.family_name
-
- # Styles are: italic, oblique, and normal (default)
-
- sfnt = font.get_sfnt()
- mac_key = (1, # platform: macintosh
- 0, # id: roman
- 0) # langid: english
- ms_key = (3, # platform: microsoft
- 1, # id: unicode_cs
- 0x0409) # langid: english_united_states
-
- # These tables are actually mac_roman-encoded, but mac_roman support may be
- # missing in some alternative Python implementations and we are only going
- # to look for ASCII substrings, where any ASCII-compatible encoding works
- # - or big-endian UTF-16, since important Microsoft fonts use that.
- sfnt2 = (sfnt.get((*mac_key, 2), b'').decode('latin-1').lower() or
- sfnt.get((*ms_key, 2), b'').decode('utf_16_be').lower())
- sfnt4 = (sfnt.get((*mac_key, 4), b'').decode('latin-1').lower() or
- sfnt.get((*ms_key, 4), b'').decode('utf_16_be').lower())
-
- if sfnt4.find('oblique') >= 0:
- style = 'oblique'
- elif sfnt4.find('italic') >= 0:
- style = 'italic'
- elif sfnt2.find('regular') >= 0:
- style = 'normal'
- elif font.style_flags & ft2font.ITALIC:
- style = 'italic'
- else:
- style = 'normal'
-
- # Variants are: small-caps and normal (default)
-
- # !!!! Untested
- if name.lower() in ['capitals', 'small-caps']:
- variant = 'small-caps'
- else:
- variant = 'normal'
-
- # The weight-guessing algorithm is directly translated from fontconfig
- # 2.13.1's FcFreeTypeQueryFaceInternal (fcfreetype.c).
- wws_subfamily = 22
- typographic_subfamily = 16
- font_subfamily = 2
- styles = [
- sfnt.get((*mac_key, wws_subfamily), b'').decode('latin-1'),
- sfnt.get((*mac_key, typographic_subfamily), b'').decode('latin-1'),
- sfnt.get((*mac_key, font_subfamily), b'').decode('latin-1'),
- sfnt.get((*ms_key, wws_subfamily), b'').decode('utf-16-be'),
- sfnt.get((*ms_key, typographic_subfamily), b'').decode('utf-16-be'),
- sfnt.get((*ms_key, font_subfamily), b'').decode('utf-16-be'),
- ]
- styles = [*filter(None, styles)] or [font.style_name]
-
- def get_weight(): # From fontconfig's FcFreeTypeQueryFaceInternal.
- # OS/2 table weight.
- os2 = font.get_sfnt_table("OS/2")
- if os2 and os2["version"] != 0xffff:
- return os2["usWeightClass"]
- # PostScript font info weight.
- try:
- ps_font_info_weight = (
- font.get_ps_font_info()["weight"].replace(" ", "") or "")
- except ValueError:
- pass
- else:
- for regex, weight in _weight_regexes:
- if re.fullmatch(regex, ps_font_info_weight, re.I):
- return weight
- # Style name weight.
- for style in styles:
- style = style.replace(" ", "")
- for regex, weight in _weight_regexes:
- if re.search(regex, style, re.I):
- return weight
- if font.style_flags & ft2font.BOLD:
- return 700 # "bold"
- return 500 # "medium", not "regular"!
-
- weight = int(get_weight())
-
- # Stretch can be absolute and relative
- # Absolute stretches are: ultra-condensed, extra-condensed, condensed,
- # semi-condensed, normal, semi-expanded, expanded, extra-expanded,
- # and ultra-expanded.
- # Relative stretches are: wider, narrower
- # Child value is: inherit
-
- if any(word in sfnt4 for word in ['narrow', 'condensed', 'cond']):
- stretch = 'condensed'
- elif 'demi cond' in sfnt4:
- stretch = 'semi-condensed'
- elif any(word in sfnt4 for word in ['wide', 'expanded', 'extended']):
- stretch = 'expanded'
- else:
- stretch = 'normal'
-
- # Sizes can be absolute and relative.
- # Absolute sizes are: xx-small, x-small, small, medium, large, x-large,
- # and xx-large.
- # Relative sizes are: larger, smaller
- # Length value is an absolute font size, e.g., 12pt
- # Percentage values are in 'em's. Most robust specification.
-
- if not font.scalable:
- raise NotImplementedError("Non-scalable fonts are not supported")
- size = 'scalable'
-
- return FontEntry(font.fname, name, style, variant, weight, stretch, size)
-
-
-def afmFontProperty(fontpath, font):
- """
- Extract information from an AFM font file.
-
- Parameters
- ----------
- fontpath : str
- The filename corresponding to *font*.
- font : AFM
- The AFM font file from which information will be extracted.
-
- Returns
- -------
- `FontEntry`
- The extracted font properties.
- """
-
- name = font.get_familyname()
- fontname = font.get_fontname().lower()
-
- # Styles are: italic, oblique, and normal (default)
-
- if font.get_angle() != 0 or 'italic' in name.lower():
- style = 'italic'
- elif 'oblique' in name.lower():
- style = 'oblique'
- else:
- style = 'normal'
-
- # Variants are: small-caps and normal (default)
-
- # !!!! Untested
- if name.lower() in ['capitals', 'small-caps']:
- variant = 'small-caps'
- else:
- variant = 'normal'
-
- weight = font.get_weight().lower()
- if weight not in weight_dict:
- weight = 'normal'
-
- # Stretch can be absolute and relative
- # Absolute stretches are: ultra-condensed, extra-condensed, condensed,
- # semi-condensed, normal, semi-expanded, expanded, extra-expanded,
- # and ultra-expanded.
- # Relative stretches are: wider, narrower
- # Child value is: inherit
- if 'demi cond' in fontname:
- stretch = 'semi-condensed'
- elif any(word in fontname for word in ['narrow', 'cond']):
- stretch = 'condensed'
- elif any(word in fontname for word in ['wide', 'expanded', 'extended']):
- stretch = 'expanded'
- else:
- stretch = 'normal'
-
- # Sizes can be absolute and relative.
- # Absolute sizes are: xx-small, x-small, small, medium, large, x-large,
- # and xx-large.
- # Relative sizes are: larger, smaller
- # Length value is an absolute font size, e.g., 12pt
- # Percentage values are in 'em's. Most robust specification.
-
- # All AFM fonts are apparently scalable.
-
- size = 'scalable'
-
- return FontEntry(fontpath, name, style, variant, weight, stretch, size)
-
-
-class FontProperties:
- """
- A class for storing and manipulating font properties.
-
- The font properties are the six properties described in the
- `W3C Cascading Style Sheet, Level 1
- `_ font
- specification and *math_fontfamily* for math fonts:
-
- - family: A list of font names in decreasing order of priority.
- The items may include a generic font family name, either 'sans-serif',
- 'serif', 'cursive', 'fantasy', or 'monospace'. In that case, the actual
- font to be used will be looked up from the associated rcParam during the
- search process in `.findfont`. Default: :rc:`font.family`
-
- - style: Either 'normal', 'italic' or 'oblique'.
- Default: :rc:`font.style`
-
- - variant: Either 'normal' or 'small-caps'.
- Default: :rc:`font.variant`
-
- - stretch: A numeric value in the range 0-1000 or one of
- 'ultra-condensed', 'extra-condensed', 'condensed',
- 'semi-condensed', 'normal', 'semi-expanded', 'expanded',
- 'extra-expanded' or 'ultra-expanded'. Default: :rc:`font.stretch`
-
- - weight: A numeric value in the range 0-1000 or one of
- 'ultralight', 'light', 'normal', 'regular', 'book', 'medium',
- 'roman', 'semibold', 'demibold', 'demi', 'bold', 'heavy',
- 'extra bold', 'black'. Default: :rc:`font.weight`
-
- - size: Either a relative value of 'xx-small', 'x-small',
- 'small', 'medium', 'large', 'x-large', 'xx-large' or an
- absolute font size, e.g., 10. Default: :rc:`font.size`
-
- - math_fontfamily: The family of fonts used to render math text.
- Supported values are: 'dejavusans', 'dejavuserif', 'cm',
- 'stix', 'stixsans' and 'custom'. Default: :rc:`mathtext.fontset`
-
- Alternatively, a font may be specified using the absolute path to a font
- file, by using the *fname* kwarg. However, in this case, it is typically
- simpler to just pass the path (as a `pathlib.Path`, not a `str`) to the
- *font* kwarg of the `.Text` object.
-
- The preferred usage of font sizes is to use the relative values,
- e.g., 'large', instead of absolute font sizes, e.g., 12. This
- approach allows all text sizes to be made larger or smaller based
- on the font manager's default font size.
-
- This class will also accept a fontconfig_ pattern_, if it is the only
- argument provided. This support does not depend on fontconfig; we are
- merely borrowing its pattern syntax for use here.
-
- .. _fontconfig: https://www.freedesktop.org/wiki/Software/fontconfig/
- .. _pattern:
- https://www.freedesktop.org/software/fontconfig/fontconfig-user.html
-
- Note that Matplotlib's internal font manager and fontconfig use a
- different algorithm to lookup fonts, so the results of the same pattern
- may be different in Matplotlib than in other applications that use
- fontconfig.
- """
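- # Example (illustrative): FontProperties("serif:bold:italic") is parsed as a fontconfig
- # pattern (generic family "serif", bold weight, italic slant), while
- # FontProperties(family="serif", weight="bold") sets the same properties explicitly.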
-
- def __init__(self, family=None, style=None, variant=None, weight=None,
- stretch=None, size=None,
- fname=None, # if set, it's a hardcoded filename to use
- math_fontfamily=None):
- self.set_family(family)
- self.set_style(style)
- self.set_variant(variant)
- self.set_weight(weight)
- self.set_stretch(stretch)
- self.set_file(fname)
- self.set_size(size)
- self.set_math_fontfamily(math_fontfamily)
- # Treat family as a fontconfig pattern if it is the only parameter
- # provided. Even in that case, call the other setters first to set
- # attributes not specified by the pattern to the rcParams defaults.
- if (isinstance(family, str)
- and style is None and variant is None and weight is None
- and stretch is None and size is None and fname is None):
- self.set_fontconfig_pattern(family)
-
- @classmethod
- def _from_any(cls, arg):
- """
- Generic constructor which can build a `.FontProperties` from any of the
- following:
-
- - a `.FontProperties`: it is passed through as is;
- - `None`: a `.FontProperties` using rc values is used;
- - an `os.PathLike`: it is used as path to the font file;
- - a `str`: it is parsed as a fontconfig pattern;
- - a `dict`: it is passed as ``**kwargs`` to `.FontProperties`.
- """
- if arg is None:
- return cls()
- elif isinstance(arg, cls):
- return arg
- elif isinstance(arg, os.PathLike):
- return cls(fname=arg)
- elif isinstance(arg, str):
- return cls(arg)
- else:
- return cls(**arg)
-
- def __hash__(self):
- l = (tuple(self.get_family()),
- self.get_slant(),
- self.get_variant(),
- self.get_weight(),
- self.get_stretch(),
- self.get_size(),
- self.get_file(),
- self.get_math_fontfamily())
- return hash(l)
-
- def __eq__(self, other):
- return hash(self) == hash(other)
-
- def __str__(self):
- return self.get_fontconfig_pattern()
-
- def get_family(self):
- """
- Return a list of individual font family names or generic family names.
-
- The font families or generic font families (which will be resolved
- from their respective rcParams when searching for a matching font) are
- listed in order of preference.
- """
- return self._family
-
- def get_name(self):
- """
- Return the name of the font that best matches the font properties.
- """
- return get_font(findfont(self)).family_name
-
- def get_style(self):
- """
- Return the font style. Values are: 'normal', 'italic' or 'oblique'.
- """
- return self._slant
-
- def get_variant(self):
- """
- Return the font variant. Values are: 'normal' or 'small-caps'.
- """
- return self._variant
-
- def get_weight(self):
- """
- Return the font weight. Options are: A numeric value in the
- range 0-1000 or one of 'light', 'normal', 'regular', 'book',
- 'medium', 'roman', 'semibold', 'demibold', 'demi', 'bold',
- 'heavy', 'extra bold', 'black'
- """
- return self._weight
-
- def get_stretch(self):
- """
- Return the font stretch or width. Options are: 'ultra-condensed',
- 'extra-condensed', 'condensed', 'semi-condensed', 'normal',
- 'semi-expanded', 'expanded', 'extra-expanded', 'ultra-expanded'.
- """
- return self._stretch
-
- def get_size(self):
- """
- Return the font size.
- """
- return self._size
-
- def get_file(self):
- """
- Return the filename of the associated font.
- """
- return self._file
-
- def get_fontconfig_pattern(self):
- """
- Get a fontconfig_ pattern_ suitable for looking up the font as
- specified with fontconfig's ``fc-match`` utility.
-
- This support does not depend on fontconfig; we are merely borrowing its
- pattern syntax for use here.
- """
- return generate_fontconfig_pattern(self)
-
- def set_family(self, family):
- """
- Change the font family. Can be either an alias (generic name
- in CSS parlance), such as: 'serif', 'sans-serif', 'cursive',
- 'fantasy', or 'monospace', a real font name or a list of real
- font names. Real font names are not supported when
- :rc:`text.usetex` is `True`. Default: :rc:`font.family`
- """
- if family is None:
- family = mpl.rcParams['font.family']
- if isinstance(family, str):
- family = [family]
- self._family = family
-
- def set_style(self, style):
- """
- Set the font style.
-
- Parameters
- ----------
- style : {'normal', 'italic', 'oblique'}, default: :rc:`font.style`
- """
- if style is None:
- style = mpl.rcParams['font.style']
- _api.check_in_list(['normal', 'italic', 'oblique'], style=style)
- self._slant = style
-
- def set_variant(self, variant):
- """
- Set the font variant.
-
- Parameters
- ----------
- variant : {'normal', 'small-caps'}, default: :rc:`font.variant`
- """
- if variant is None:
- variant = mpl.rcParams['font.variant']
- _api.check_in_list(['normal', 'small-caps'], variant=variant)
- self._variant = variant
-
- def set_weight(self, weight):
- """
- Set the font weight.
-
- Parameters
- ----------
- weight : int or {'ultralight', 'light', 'normal', 'regular', 'book', \
-'medium', 'roman', 'semibold', 'demibold', 'demi', 'bold', 'heavy', \
-'extra bold', 'black'}, default: :rc:`font.weight`
- If int, must be in the range 0-1000.
- """
- if weight is None:
- weight = mpl.rcParams['font.weight']
- if weight in weight_dict:
- self._weight = weight
- return
- try:
- weight = int(weight)
- except ValueError:
- pass
- else:
- if 0 <= weight <= 1000:
- self._weight = weight
- return
- raise ValueError(f"{weight=} is invalid")
-
- def set_stretch(self, stretch):
- """
- Set the font stretch or width.
-
- Parameters
- ----------
- stretch : int or {'ultra-condensed', 'extra-condensed', 'condensed', \
-'semi-condensed', 'normal', 'semi-expanded', 'expanded', 'extra-expanded', \
-'ultra-expanded'}, default: :rc:`font.stretch`
- If int, must be in the range 0-1000.
- """
- if stretch is None:
- stretch = mpl.rcParams['font.stretch']
- if stretch in stretch_dict:
- self._stretch = stretch
- return
- try:
- stretch = int(stretch)
- except ValueError:
- pass
- else:
- if 0 <= stretch <= 1000:
- self._stretch = stretch
- return
- raise ValueError(f"{stretch=} is invalid")
-
- def set_size(self, size):
- """
- Set the font size.
-
- Parameters
- ----------
- size : float or {'xx-small', 'x-small', 'small', 'medium', \
-'large', 'x-large', 'xx-large'}, default: :rc:`font.size`
- If a float, the font size in points. The string values denote
- sizes relative to the default font size.
- """
- if size is None:
- size = mpl.rcParams['font.size']
- try:
- size = float(size)
- except ValueError:
- try:
- scale = font_scalings[size]
- except KeyError as err:
- raise ValueError(
- "Size is invalid. Valid font size are "
- + ", ".join(map(str, font_scalings))) from err
- else:
- size = scale * FontManager.get_default_size()
- if size < 1.0:
- _log.info('Fontsize %1.2f < 1.0 pt not allowed by FreeType. '
- 'Setting fontsize = 1 pt', size)
- size = 1.0
- self._size = size
-
- def set_file(self, file):
- """
- Set the filename of the fontfile to use. In this case, all
- other properties will be ignored.
- """
- self._file = os.fspath(file) if file is not None else None
-
- def set_fontconfig_pattern(self, pattern):
- """
- Set the properties by parsing a fontconfig_ *pattern*.
-
- This support does not depend on fontconfig; we are merely borrowing its
- pattern syntax for use here.
- """
- for key, val in parse_fontconfig_pattern(pattern).items():
- if type(val) is list:
- getattr(self, "set_" + key)(val[0])
- else:
- getattr(self, "set_" + key)(val)
-
- def get_math_fontfamily(self):
- """
- Return the name of the font family used for math text.
-
- The default font is :rc:`mathtext.fontset`.
- """
- return self._math_fontfamily
-
- def set_math_fontfamily(self, fontfamily):
- """
- Set the font family for text in math mode.
-
- If not set explicitly, :rc:`mathtext.fontset` will be used.
-
- Parameters
- ----------
- fontfamily : str
- The name of the font family.
-
- Available font families are defined in the
- :ref:`default matplotlibrc file <customizing-with-matplotlibrc-files>`.
-
- See Also
- --------
- .text.Text.get_math_fontfamily
- """
- if fontfamily is None:
- fontfamily = mpl.rcParams['mathtext.fontset']
- else:
- valid_fonts = _validators['mathtext.fontset'].valid.values()
- # _check_in_list() Validates the parameter math_fontfamily as
- # if it were passed to rcParams['mathtext.fontset']
- _api.check_in_list(valid_fonts, math_fontfamily=fontfamily)
- self._math_fontfamily = fontfamily
-
- def copy(self):
- """Return a copy of self."""
- return copy.copy(self)
-
- # Aliases
- set_name = set_family
- get_slant = get_style
- set_slant = set_style
- get_size_in_points = get_size
-
-
-class _JSONEncoder(json.JSONEncoder):
- def default(self, o):
- if isinstance(o, FontManager):
- return dict(o.__dict__, __class__='FontManager')
- elif isinstance(o, FontEntry):
- d = dict(o.__dict__, __class__='FontEntry')
- try:
- # Cache paths of fonts shipped with Matplotlib relative to the
- # Matplotlib data path, which helps in the presence of venvs.
- d["fname"] = str(
- Path(d["fname"]).relative_to(mpl.get_data_path()))
- except ValueError:
- pass
- return d
- else:
- return super().default(o)
-
-
-def _json_decode(o):
- cls = o.pop('__class__', None)
- if cls is None:
- return o
- elif cls == 'FontManager':
- r = FontManager.__new__(FontManager)
- r.__dict__.update(o)
- return r
- elif cls == 'FontEntry':
- r = FontEntry.__new__(FontEntry)
- r.__dict__.update(o)
- if not os.path.isabs(r.fname):
- r.fname = os.path.join(mpl.get_data_path(), r.fname)
- return r
- else:
- raise ValueError("Don't know how to deserialize __class__=%s" % cls)
-
-
-def json_dump(data, filename):
- """
- Dump `FontManager` *data* as JSON to the file named *filename*.
-
- See Also
- --------
- json_load
-
- Notes
- -----
- File paths that are children of the Matplotlib data path (typically, fonts
- shipped with Matplotlib) are stored relative to that data path (to remain
- valid across virtualenvs).
-
- This function temporarily locks the output file to prevent multiple
- processes from overwriting one another's output.
- """
- with cbook._lock_path(filename), open(filename, 'w') as fh:
- try:
- json.dump(data, fh, cls=_JSONEncoder, indent=2)
- except OSError as e:
- _log.warning('Could not save font_manager cache %s', e)
-
-
-def json_load(filename):
- """
- Load a `FontManager` from the JSON file named *filename*.
-
- See Also
- --------
- json_dump
- """
- with open(filename) as fh:
- return json.load(fh, object_hook=_json_decode)
-
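A small sketch of round-tripping the cache through these two helpers; the filename here is made up (Matplotlib itself writes ``fontlist-v<version>.json`` into its cache directory, see ``_load_fontmanager`` further down):

```python
from pathlib import Path

import matplotlib as mpl
from matplotlib import font_manager

cache_file = Path(mpl.get_cachedir()) / "fontlist-example.json"  # hypothetical file name
font_manager.json_dump(font_manager.fontManager, cache_file)     # serialize to JSON
restored = font_manager.json_load(cache_file)                    # rebuild a FontManager
print(len(restored.ttflist), "TrueType fonts restored")
```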
-
-class FontManager:
- """
- On import, the `FontManager` singleton instance creates a list of ttf and
- afm fonts and caches their `FontProperties`. The `FontManager.findfont`
- method does a nearest neighbor search to find the font that most closely
- matches the specification. If no good enough match is found, the default
- font is returned.
-
- Fonts added with the `FontManager.addfont` method will not persist in the
- cache; therefore, `addfont` will need to be called every time Matplotlib is
- imported. This method should only be used if and when a font cannot be
- installed on your operating system by other means.
-
- Notes
- -----
- The `FontManager.addfont` method must be called on the global `FontManager`
- instance.
-
- Example usage::
-
- import matplotlib.pyplot as plt
- from matplotlib import font_manager
-
- font_dirs = ["/resources/fonts"] # The directory containing the custom font files.
- font_files = font_manager.findSystemFonts(fontpaths=font_dirs)
-
- for font_file in font_files:
- font_manager.fontManager.addfont(font_file)
- """
- # Increment this version number whenever the font cache data
- # format or behavior has changed and requires existing font
- # cache files to be rebuilt.
- __version__ = 330
-
- def __init__(self, size=None, weight='normal'):
- self._version = self.__version__
-
- self.__default_weight = weight
- self.default_size = size
-
- # Create list of font paths.
- paths = [cbook._get_data_path('fonts', subdir)
- for subdir in ['ttf', 'afm', 'pdfcorefonts']]
- _log.debug('font search path %s', paths)
-
- self.defaultFamily = {
- 'ttf': 'DejaVu Sans',
- 'afm': 'Helvetica'}
-
- self.afmlist = []
- self.ttflist = []
-
- # Delay the warning by 5s.
- timer = threading.Timer(5, lambda: _log.warning(
- 'Matplotlib is building the font cache; this may take a moment.'))
- timer.start()
- try:
- for fontext in ["afm", "ttf"]:
- for path in [*findSystemFonts(paths, fontext=fontext),
- *findSystemFonts(fontext=fontext)]:
- try:
- self.addfont(path)
- except OSError as exc:
- _log.info("Failed to open font file %s: %s", path, exc)
- except Exception as exc:
- _log.info("Failed to extract font properties from %s: "
- "%s", path, exc)
- finally:
- timer.cancel()
-
- def addfont(self, path):
- """
- Cache the properties of the font at *path* to make it available to the
- `FontManager`. The type of font is inferred from the path suffix.
-
- Parameters
- ----------
- path : str or path-like
-
- Notes
- -----
- This method is useful for adding a custom font without installing it in
- your operating system. See the `FontManager` singleton instance for
- usage and caveats about this function.
- """
- # Convert to a string in case a Path was passed, as
- # afmFontProperty and FT2Font expect a string.
- path = os.fsdecode(path)
- if Path(path).suffix.lower() == ".afm":
- with open(path, "rb") as fh:
- font = _afm.AFM(fh)
- prop = afmFontProperty(path, font)
- self.afmlist.append(prop)
- else:
- font = ft2font.FT2Font(path)
- prop = ttfFontProperty(font)
- self.ttflist.append(prop)
- self._findfont_cached.cache_clear()
-
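As a sketch of the single-file workflow this enables (the font path and family name below are hypothetical, not real files):

```python
import matplotlib.pyplot as plt
from matplotlib import font_manager

font_path = "/path/to/MyCustomFont.ttf"  # hypothetical path to a .ttf file
font_manager.fontManager.addfont(font_path)

# Afterwards, refer to the font by the family name recorded inside the file.
plt.rcParams["font.family"] = "My Custom Font"  # hypothetical family name
```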
- @property
- def defaultFont(self):
- # Lazily evaluated (findfont then caches the result) to avoid including
- # the venv path in the json serialization.
- return {ext: self.findfont(family, fontext=ext)
- for ext, family in self.defaultFamily.items()}
-
- def get_default_weight(self):
- """
- Return the default font weight.
- """
- return self.__default_weight
-
- @staticmethod
- def get_default_size():
- """
- Return the default font size.
- """
- return mpl.rcParams['font.size']
-
- def set_default_weight(self, weight):
- """
- Set the default font weight. The initial value is 'normal'.
- """
- self.__default_weight = weight
-
- @staticmethod
- def _expand_aliases(family):
- if family in ('sans', 'sans serif'):
- family = 'sans-serif'
- return mpl.rcParams['font.' + family]
-
- # Each of the scoring functions below should return a value between
- # 0.0 (perfect match) and 1.0 (terrible match)
- def score_family(self, families, family2):
- """
- Return a match score between the list of font families in
- *families* and the font family name *family2*.
-
- An exact match at the head of the list returns 0.0.
-
- A match further down the list will return between 0 and 1.
-
- No match will return 1.0.
- """
- if not isinstance(families, (list, tuple)):
- families = [families]
- elif len(families) == 0:
- return 1.0
- family2 = family2.lower()
- step = 1 / len(families)
- for i, family1 in enumerate(families):
- family1 = family1.lower()
- if family1 in font_family_aliases:
- options = [*map(str.lower, self._expand_aliases(family1))]
- if family2 in options:
- idx = options.index(family2)
- return (i + (idx / len(options))) * step
- elif family1 == family2:
- # The score should be weighted by where in the
- # list the font was found.
- return i * step
- return 1.0
-
- def score_style(self, style1, style2):
- """
- Return a match score between *style1* and *style2*.
-
- An exact match returns 0.0.
-
- A match between 'italic' and 'oblique' returns 0.1.
-
- No match returns 1.0.
- """
- if style1 == style2:
- return 0.0
- elif (style1 in ('italic', 'oblique')
- and style2 in ('italic', 'oblique')):
- return 0.1
- return 1.0
-
- def score_variant(self, variant1, variant2):
- """
- Return a match score between *variant1* and *variant2*.
-
- An exact match returns 0.0, otherwise 1.0.
- """
- if variant1 == variant2:
- return 0.0
- else:
- return 1.0
-
- def score_stretch(self, stretch1, stretch2):
- """
- Return a match score between *stretch1* and *stretch2*.
-
- The result is the absolute value of the difference between the
- CSS numeric values of *stretch1* and *stretch2*, normalized
- between 0.0 and 1.0.
- """
- try:
- stretchval1 = int(stretch1)
- except ValueError:
- stretchval1 = stretch_dict.get(stretch1, 500)
- try:
- stretchval2 = int(stretch2)
- except ValueError:
- stretchval2 = stretch_dict.get(stretch2, 500)
- return abs(stretchval1 - stretchval2) / 1000.0
-
- def score_weight(self, weight1, weight2):
- """
- Return a match score between *weight1* and *weight2*.
-
- The result is 0.0 if both *weight1* and *weight2* are given as strings
- and have the same value.
-
- Otherwise, the result is the absolute value of the difference between
- the CSS numeric values of *weight1* and *weight2*, normalized between
- 0.05 and 1.0.
- """
- # exact match of the weight names, e.g. weight1 == weight2 == "regular"
- if cbook._str_equal(weight1, weight2):
- return 0.0
- w1 = weight1 if isinstance(weight1, Number) else weight_dict[weight1]
- w2 = weight2 if isinstance(weight2, Number) else weight_dict[weight2]
- return 0.95 * (abs(w1 - w2) / 1000) + 0.05
-
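A worked instance of the formula above, using the usual CSS numeric weights ('regular' = 400, 'bold' = 700 in ``weight_dict``):

```python
# score_weight('regular', 'bold') when the names differ:
w1, w2 = 400, 700
score = 0.95 * (abs(w1 - w2) / 1000) + 0.05
print(score)  # 0.335 -- identical weight names would have scored 0.0
```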
- def score_size(self, size1, size2):
- """
- Return a match score between *size1* and *size2*.
-
- If *size2* (the size specified in the font file) is 'scalable', this
- function always returns 0.0, since any font size can be generated.
-
- Otherwise, the result is the absolute distance between *size1* and
- *size2*, normalized so that the usual range of font sizes (6pt -
- 72pt) will lie between 0.0 and 1.0.
- """
- if size2 == 'scalable':
- return 0.0
- # Size value should already be a float or a named relative size.
- try:
- sizeval1 = float(size1)
- except ValueError:
- sizeval1 = self.default_size * font_scalings[size1]
- try:
- sizeval2 = float(size2)
- except ValueError:
- return 1.0
- return abs(sizeval1 - sizeval2) / 72
-
- def findfont(self, prop, fontext='ttf', directory=None,
- fallback_to_default=True, rebuild_if_missing=True):
- """
- Find a font that most closely matches the given font properties.
-
- Parameters
- ----------
- prop : str or `~matplotlib.font_manager.FontProperties`
- The font properties to search for. This can be either a
- `.FontProperties` object or a string defining one of the
- `fontconfig patterns`_.
-
- fontext : {'ttf', 'afm'}, default: 'ttf'
- The extension of the font file:
-
- - 'ttf': TrueType and OpenType fonts (.ttf, .ttc, .otf)
- - 'afm': Adobe Font Metrics (.afm)
-
- directory : str, optional
- If given, only search this directory and its subdirectories.
-
- fallback_to_default : bool
- If True, will fall back to the default font family (usually
- "DejaVu Sans" or "Helvetica") if the first lookup hard-fails.
-
- rebuild_if_missing : bool
- Whether to rebuild the font cache and search again if the first
- match appears to point to a nonexisting font (i.e., the font cache
- contains outdated entries).
-
- Returns
- -------
- str
- The filename of the best matching font.
-
- Notes
- -----
- This performs a nearest neighbor search. Each font is given a
- similarity score to the target font properties. The first font with
- the highest score is returned. If no matches below a certain
- threshold are found, the default font (usually DejaVu Sans) is
- returned.
-
- The result is cached, so subsequent lookups don't have to
- perform the O(n) nearest neighbor search.
-
- See the `W3C Cascading Style Sheet, Level 1
- <http://www.w3.org/TR/1998/REC-CSS2-19980512/>`_ documentation
- for a description of the font finding algorithm.
-
- .. _fontconfig patterns:
- https://www.freedesktop.org/software/fontconfig/fontconfig-user.html
- """
- # Pass the relevant rcParams (and the font manager, as `self`) to
- # _findfont_cached so to prevent using a stale cache entry after an
- # rcParam was changed.
- rc_params = tuple(tuple(mpl.rcParams[key]) for key in [
- "font.serif", "font.sans-serif", "font.cursive", "font.fantasy",
- "font.monospace"])
- ret = self._findfont_cached(
- prop, fontext, directory, fallback_to_default, rebuild_if_missing,
- rc_params)
- if isinstance(ret, _ExceptionProxy):
- raise ret.klass(ret.message)
- return ret
-
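For example, a minimal lookup through the module-level ``findfont`` alias defined at the bottom of this file (the printed path varies by system):

```python
from matplotlib.font_manager import FontProperties, findfont

path = findfont(FontProperties(family="sans-serif", weight="bold"))
print(path)  # e.g. .../mpl-data/fonts/ttf/DejaVuSans-Bold.ttf on a default install
```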
- def get_font_names(self):
- """Return the list of available fonts."""
- return list({font.name for font in self.ttflist})
-
- def _find_fonts_by_props(self, prop, fontext='ttf', directory=None,
- fallback_to_default=True, rebuild_if_missing=True):
- """
- Find font families that most closely match the given properties.
-
- Parameters
- ----------
- prop : str or `~matplotlib.font_manager.FontProperties`
- The font properties to search for. This can be either a
- `.FontProperties` object or a string defining one of the
- `fontconfig patterns`_.
-
- fontext : {'ttf', 'afm'}, default: 'ttf'
- The extension of the font file:
-
- - 'ttf': TrueType and OpenType fonts (.ttf, .ttc, .otf)
- - 'afm': Adobe Font Metrics (.afm)
-
- directory : str, optional
- If given, only search this directory and its subdirectories.
-
- fallback_to_default : bool
- If True, will fall back to the default font family (usually
- "DejaVu Sans" or "Helvetica") if none of the families were found.
-
- rebuild_if_missing : bool
- Whether to rebuild the font cache and search again if the first
- match appears to point to a nonexisting font (i.e., the font cache
- contains outdated entries).
-
- Returns
- -------
- list[str]
- The paths of the fonts found
-
- Notes
- -----
- This is an extension/wrapper of the original findfont API, which only
- returns a single font for the given font properties. Instead, this API
- returns a list of filepaths of multiple fonts that closely
- match the given font properties. Since this internally
- uses the original API, there's no change to the logic of performing the
- nearest neighbor search. See `findfont` for more details.
- """
-
- prop = FontProperties._from_any(prop)
-
- fpaths = []
- for family in prop.get_family():
- cprop = prop.copy()
- cprop.set_family(family) # set current prop's family
-
- try:
- fpaths.append(
- self.findfont(
- cprop, fontext, directory,
- fallback_to_default=False, # don't fallback to default
- rebuild_if_missing=rebuild_if_missing,
- )
- )
- except ValueError:
- if family in font_family_aliases:
- _log.warning(
- "findfont: Generic family %r not found because "
- "none of the following families were found: %s",
- family, ", ".join(self._expand_aliases(family))
- )
- else:
- _log.warning("findfont: Font family %r not found.", family)
-
- # only add default family if no other font was found and
- # fallback_to_default is enabled
- if not fpaths:
- if fallback_to_default:
- dfamily = self.defaultFamily[fontext]
- cprop = prop.copy()
- cprop.set_family(dfamily)
- fpaths.append(
- self.findfont(
- cprop, fontext, directory,
- fallback_to_default=True,
- rebuild_if_missing=rebuild_if_missing,
- )
- )
- else:
- raise ValueError("Failed to find any font, and fallback "
- "to the default font was disabled")
-
- return fpaths
-
- @lru_cache(1024)
- def _findfont_cached(self, prop, fontext, directory, fallback_to_default,
- rebuild_if_missing, rc_params):
-
- prop = FontProperties._from_any(prop)
-
- fname = prop.get_file()
- if fname is not None:
- return fname
-
- if fontext == 'afm':
- fontlist = self.afmlist
- else:
- fontlist = self.ttflist
-
- best_score = 1e64
- best_font = None
-
- _log.debug('findfont: Matching %s.', prop)
- for font in fontlist:
- if (directory is not None and
- Path(directory) not in Path(font.fname).parents):
- continue
- # Matching family should have top priority, so multiply it by 10.
- score = (self.score_family(prop.get_family(), font.name) * 10
- + self.score_style(prop.get_style(), font.style)
- + self.score_variant(prop.get_variant(), font.variant)
- + self.score_weight(prop.get_weight(), font.weight)
- + self.score_stretch(prop.get_stretch(), font.stretch)
- + self.score_size(prop.get_size(), font.size))
- _log.debug('findfont: score(%s) = %s', font, score)
- if score < best_score:
- best_score = score
- best_font = font
- if score == 0:
- break
-
- if best_font is None or best_score >= 10.0:
- if fallback_to_default:
- _log.warning(
- 'findfont: Font family %s not found. Falling back to %s.',
- prop.get_family(), self.defaultFamily[fontext])
- for family in map(str.lower, prop.get_family()):
- if family in font_family_aliases:
- _log.warning(
- "findfont: Generic family %r not found because "
- "none of the following families were found: %s",
- family, ", ".join(self._expand_aliases(family)))
- default_prop = prop.copy()
- default_prop.set_family(self.defaultFamily[fontext])
- return self.findfont(default_prop, fontext, directory,
- fallback_to_default=False)
- else:
- # This return instead of raise is intentional, as we wish to
- # cache that it was not found, which will not occur if it was
- # actually raised.
- return _ExceptionProxy(
- ValueError,
- f"Failed to find font {prop}, and fallback to the default font was disabled"
- )
- else:
- _log.debug('findfont: Matching %s to %s (%r) with score of %f.',
- prop, best_font.name, best_font.fname, best_score)
- result = best_font.fname
-
- if not os.path.isfile(result):
- if rebuild_if_missing:
- _log.info(
- 'findfont: Found a missing font file. Rebuilding cache.')
- new_fm = _load_fontmanager(try_read_cache=False)
- # Replace self by the new fontmanager, because users may have
- # a reference to this specific instance.
- # TODO: _load_fontmanager should really be (used by) a method
- # modifying the instance in place.
- vars(self).update(vars(new_fm))
- return self.findfont(
- prop, fontext, directory, rebuild_if_missing=False)
- else:
- # This return instead of raise is intentional, as we wish to
- # cache that it was not found, which will not occur if it was
- # actually raised.
- return _ExceptionProxy(ValueError, "No valid font could be found")
-
- return _cached_realpath(result)
-
-
-@lru_cache
-def is_opentype_cff_font(filename):
- """
- Return whether the given font is a Postscript Compact Font Format Font
- embedded in an OpenType wrapper. Used by the PostScript and PDF backends
- that cannot subset these fonts.
- """
- if os.path.splitext(filename)[1].lower() == '.otf':
- with open(filename, 'rb') as fd:
- return fd.read(4) == b"OTTO"
- else:
- return False
-
-
-@lru_cache(64)
-def _get_font(font_filepaths, hinting_factor, *, _kerning_factor, thread_id):
- first_fontpath, *rest = font_filepaths
- return ft2font.FT2Font(
- first_fontpath, hinting_factor,
- _fallback_list=[
- ft2font.FT2Font(
- fpath, hinting_factor,
- _kerning_factor=_kerning_factor
- )
- for fpath in rest
- ],
- _kerning_factor=_kerning_factor
- )
-
-
-# FT2Font objects cannot be used across fork()s because they reference the same
-# FT_Library object. While invalidating *all* existing FT2Fonts after a fork
-# would be too complicated to be worth it, the main way FT2Fonts get reused is
-# via the cache of _get_font, which we can empty upon forking (not on Windows,
-# which has no fork() or register_at_fork()).
-if hasattr(os, "register_at_fork"):
- os.register_at_fork(after_in_child=_get_font.cache_clear)
-
-
-@lru_cache(64)
-def _cached_realpath(path):
- # Resolving the path avoids embedding the font twice in pdf/ps output if a
- # single font is selected using two different relative paths.
- return os.path.realpath(path)
-
-
-def get_font(font_filepaths, hinting_factor=None):
- """
- Get an `.ft2font.FT2Font` object given a list of file paths.
-
- Parameters
- ----------
- font_filepaths : Iterable[str, Path, bytes], str, Path, bytes
- Relative or absolute paths to the font files to be used.
-
- If a single string, bytes, or `pathlib.Path`, then it will be treated
- as a list with that entry only.
-
- If more than one filepath is passed, then the returned FT2Font object
- will fall back through the fonts, in the order given, to find a needed
- glyph.
-
- Returns
- -------
- `.ft2font.FT2Font`
-
- """
- if isinstance(font_filepaths, (str, Path, bytes)):
- paths = (_cached_realpath(font_filepaths),)
- else:
- paths = tuple(_cached_realpath(fname) for fname in font_filepaths)
-
- if hinting_factor is None:
- hinting_factor = mpl.rcParams['text.hinting_factor']
-
- return _get_font(
- # must be a tuple to be cached
- paths,
- hinting_factor,
- _kerning_factor=mpl.rcParams['text.kerning_factor'],
- # also key on the thread ID to prevent segfaults with multi-threading
- thread_id=threading.get_ident()
- )
-
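A short sketch combining ``findfont`` and ``get_font`` to obtain an ``FT2Font`` for a font that ships with Matplotlib:

```python
from matplotlib import font_manager

path = font_manager.findfont("DejaVu Sans")  # DejaVu Sans is bundled with Matplotlib
font = font_manager.get_font(path)
print(font.family_name, font.num_glyphs)
```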
-
-def _load_fontmanager(*, try_read_cache=True):
- fm_path = Path(
- mpl.get_cachedir(), f"fontlist-v{FontManager.__version__}.json")
- if try_read_cache:
- try:
- fm = json_load(fm_path)
- except Exception:
- pass
- else:
- if getattr(fm, "_version", object()) == FontManager.__version__:
- _log.debug("Using fontManager instance from %s", fm_path)
- return fm
- fm = FontManager()
- json_dump(fm, fm_path)
- _log.info("generated new fontManager")
- return fm
-
-
-fontManager = _load_fontmanager()
-findfont = fontManager.findfont
-get_font_names = fontManager.get_font_names
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/interval/test_constructors.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/interval/test_constructors.py
deleted file mode 100644
index 9524288b33eef1fa8560560a820b47b7bc04829c..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/interval/test_constructors.py
+++ /dev/null
@@ -1,478 +0,0 @@
-from functools import partial
-
-import numpy as np
-import pytest
-
-from pandas.core.dtypes.dtypes import IntervalDtype
-
-from pandas import (
- Categorical,
- CategoricalDtype,
- CategoricalIndex,
- Index,
- Interval,
- IntervalIndex,
- date_range,
- notna,
- period_range,
- timedelta_range,
-)
-import pandas._testing as tm
-from pandas.core.arrays import IntervalArray
-import pandas.core.common as com
-
-
-@pytest.fixture(params=[None, "foo"])
-def name(request):
- return request.param
-
-
-class ConstructorTests:
- """
- Common tests for all variations of IntervalIndex construction. Input data
- to be supplied in breaks format, then converted by the subclass method
- get_kwargs_from_breaks to the expected format.
- """
-
- @pytest.fixture(
- params=[
- ([3, 14, 15, 92, 653], np.int64),
- (np.arange(10, dtype="int64"), np.int64),
- (Index(np.arange(-10, 11, dtype=np.int64)), np.int64),
- (Index(np.arange(10, 31, dtype=np.uint64)), np.uint64),
- (Index(np.arange(20, 30, 0.5), dtype=np.float64), np.float64),
- (date_range("20180101", periods=10), "<M8[ns]"),
- """
- Check whether the given Python version is compatible with a distribution's
- "Requires-Python" value.
-
- :param version_info: A 3-tuple of ints representing the Python
- major-minor-micro version to check.
- :param ignore_requires_python: Whether to ignore the "Requires-Python"
- value if the given Python version isn't compatible.
-
- :raises UnsupportedPythonVersion: When the given Python version isn't
- compatible.
- """
- # This idiosyncratically converts the SpecifierSet to str and let
- # check_requires_python then parse it again into SpecifierSet. But this
- # is the legacy resolver so I'm just not going to bother refactoring.
- try:
- requires_python = str(dist.requires_python)
- except FileNotFoundError as e:
- raise NoneMetadataError(dist, str(e))
- try:
- is_compatible = check_requires_python(
- requires_python,
- version_info=version_info,
- )
- except specifiers.InvalidSpecifier as exc:
- logger.warning(
- "Package %r has an invalid Requires-Python: %s", dist.raw_name, exc
- )
- return
-
- if is_compatible:
- return
-
- version = ".".join(map(str, version_info))
- if ignore_requires_python:
- logger.debug(
- "Ignoring failed Requires-Python check for package %r: %s not in %r",
- dist.raw_name,
- version,
- requires_python,
- )
- return
-
- raise UnsupportedPythonVersion(
- "Package {!r} requires a different Python: {} not in {!r}".format(
- dist.raw_name, version, requires_python
- )
- )
-
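The heart of that check reduces to a ``packaging`` specifier test; a standalone sketch follows (the Requires-Python string is a made-up example, not taken from any real distribution):

```python
from packaging.specifiers import SpecifierSet

requires_python = SpecifierSet(">=3.8,<3.13")  # hypothetical Requires-Python metadata
version = ".".join(map(str, (3, 9, 16)))       # mirrors the version formatting above
print(version in requires_python)              # True: 3.9.16 satisfies the specifier
```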
-
-class Resolver(BaseResolver):
- """Resolves which packages need to be installed/uninstalled to perform \
- the requested operation without breaking the requirements of any package.
- """
-
- _allowed_strategies = {"eager", "only-if-needed", "to-satisfy-only"}
-
- def __init__(
- self,
- preparer: RequirementPreparer,
- finder: PackageFinder,
- wheel_cache: Optional[WheelCache],
- make_install_req: InstallRequirementProvider,
- use_user_site: bool,
- ignore_dependencies: bool,
- ignore_installed: bool,
- ignore_requires_python: bool,
- force_reinstall: bool,
- upgrade_strategy: str,
- py_version_info: Optional[Tuple[int, ...]] = None,
- ) -> None:
- super().__init__()
- assert upgrade_strategy in self._allowed_strategies
-
- if py_version_info is None:
- py_version_info = sys.version_info[:3]
- else:
- py_version_info = normalize_version_info(py_version_info)
-
- self._py_version_info = py_version_info
-
- self.preparer = preparer
- self.finder = finder
- self.wheel_cache = wheel_cache
-
- self.upgrade_strategy = upgrade_strategy
- self.force_reinstall = force_reinstall
- self.ignore_dependencies = ignore_dependencies
- self.ignore_installed = ignore_installed
- self.ignore_requires_python = ignore_requires_python
- self.use_user_site = use_user_site
- self._make_install_req = make_install_req
-
- self._discovered_dependencies: DiscoveredDependencies = defaultdict(list)
-
- def resolve(
- self, root_reqs: List[InstallRequirement], check_supported_wheels: bool
- ) -> RequirementSet:
- """Resolve what operations need to be done
-
- As a side-effect of this method, the packages (and their dependencies)
- are downloaded, unpacked and prepared for installation. This
- preparation is done by ``pip.operations.prepare``.
-
- Once PyPI has static dependency metadata available, it would be
- possible to move the preparation to become a step separated from
- dependency resolution.
- """
- requirement_set = RequirementSet(check_supported_wheels=check_supported_wheels)
- for req in root_reqs:
- if req.constraint:
- check_invalid_constraint_type(req)
- requirement_set.add_requirement(req)
-
- # Actually prepare the files, and collect any exceptions. Most hash
- # exceptions cannot be checked ahead of time, because
- # _populate_link() needs to be called before we can make decisions
- # based on link type.
- discovered_reqs: List[InstallRequirement] = []
- hash_errors = HashErrors()
- for req in chain(requirement_set.all_requirements, discovered_reqs):
- try:
- discovered_reqs.extend(self._resolve_one(requirement_set, req))
- except HashError as exc:
- exc.req = req
- hash_errors.append(exc)
-
- if hash_errors:
- raise hash_errors
-
- return requirement_set
-
- def _is_upgrade_allowed(self, req: InstallRequirement) -> bool:
- if self.upgrade_strategy == "to-satisfy-only":
- return False
- elif self.upgrade_strategy == "eager":
- return True
- else:
- assert self.upgrade_strategy == "only-if-needed"
- return req.user_supplied or req.constraint
-
- def _set_req_to_reinstall(self, req: InstallRequirement) -> None:
- """
- Set a requirement to be installed.
- """
- # Don't uninstall the conflict if doing a user install and the
- # conflict is not a user install.
- if not self.use_user_site or req.satisfied_by.in_usersite:
- req.should_reinstall = True
- req.satisfied_by = None
-
- def _check_skip_installed(
- self, req_to_install: InstallRequirement
- ) -> Optional[str]:
- """Check if req_to_install should be skipped.
-
- This will check if the req is installed, and whether we should upgrade
- or reinstall it, taking into account all the relevant user options.
-
- After calling this req_to_install will only have satisfied_by set to
- None if the req_to_install is to be upgraded/reinstalled etc. Any
- other value will be a dist recording the current thing installed that
- satisfies the requirement.
-
- Note that for vcs urls and the like we can't assess skipping in this
- routine - we simply identify that we need to pull the thing down,
- then later on it is pulled down and introspected to assess upgrade/
- reinstalls etc.
-
- :return: A text reason for why it was skipped, or None.
- """
- if self.ignore_installed:
- return None
-
- req_to_install.check_if_exists(self.use_user_site)
- if not req_to_install.satisfied_by:
- return None
-
- if self.force_reinstall:
- self._set_req_to_reinstall(req_to_install)
- return None
-
- if not self._is_upgrade_allowed(req_to_install):
- if self.upgrade_strategy == "only-if-needed":
- return "already satisfied, skipping upgrade"
- return "already satisfied"
-
- # Check for the possibility of an upgrade. For link-based
- # requirements we have to pull the tree down and inspect to assess
- # the version #, so it's handled way down.
- if not req_to_install.link:
- try:
- self.finder.find_requirement(req_to_install, upgrade=True)
- except BestVersionAlreadyInstalled:
- # Then the best version is installed.
- return "already up-to-date"
- except DistributionNotFound:
- # No distribution found, so we squash the error. It will
- # be raised later when we re-try later to do the install.
- # Why don't we just raise here?
- pass
-
- self._set_req_to_reinstall(req_to_install)
- return None
-
- def _find_requirement_link(self, req: InstallRequirement) -> Optional[Link]:
- upgrade = self._is_upgrade_allowed(req)
- best_candidate = self.finder.find_requirement(req, upgrade)
- if not best_candidate:
- return None
-
- # Log a warning per PEP 592 if necessary before returning.
- link = best_candidate.link
- if link.is_yanked:
- reason = link.yanked_reason or ""
- msg = (
- # Mark this as a unicode string to prevent
- # "UnicodeEncodeError: 'ascii' codec can't encode character"
- # in Python 2 when the reason contains non-ascii characters.
- "The candidate selected for download or install is a "
- "yanked version: {candidate}\n"
- "Reason for being yanked: {reason}"
- ).format(candidate=best_candidate, reason=reason)
- logger.warning(msg)
-
- return link
-
- def _populate_link(self, req: InstallRequirement) -> None:
- """Ensure that if a link can be found for this, that it is found.
-
- Note that req.link may still be None - if the requirement is already
- installed and not needed to be upgraded based on the return value of
- _is_upgrade_allowed().
-
- If preparer.require_hashes is True, don't use the wheel cache, because
- cached wheels, always built locally, have different hashes than the
- files downloaded from the index server and thus throw false hash
- mismatches. Furthermore, cached wheels at present have nondeterministic
- contents due to file modification times.
- """
- if req.link is None:
- req.link = self._find_requirement_link(req)
-
- if self.wheel_cache is None or self.preparer.require_hashes:
- return
- cache_entry = self.wheel_cache.get_cache_entry(
- link=req.link,
- package_name=req.name,
- supported_tags=get_supported(),
- )
- if cache_entry is not None:
- logger.debug("Using cached wheel link: %s", cache_entry.link)
- if req.link is req.original_link and cache_entry.persistent:
- req.original_link_is_in_wheel_cache = True
- req.link = cache_entry.link
-
- def _get_dist_for(self, req: InstallRequirement) -> BaseDistribution:
- """Takes a InstallRequirement and returns a single AbstractDist \
- representing a prepared variant of the same.
- """
- if req.editable:
- return self.preparer.prepare_editable_requirement(req)
-
- # satisfied_by is only evaluated by calling _check_skip_installed,
- # so it must be None here.
- assert req.satisfied_by is None
- skip_reason = self._check_skip_installed(req)
-
- if req.satisfied_by:
- return self.preparer.prepare_installed_requirement(req, skip_reason)
-
- # We eagerly populate the link, since that's our "legacy" behavior.
- self._populate_link(req)
- dist = self.preparer.prepare_linked_requirement(req)
-
- # NOTE
- # The following portion is for determining if a certain package is
- # going to be re-installed/upgraded or not and reporting to the user.
- # This should probably get cleaned up in a future refactor.
-
- # req.req is only avail after unpack for URL
- # pkgs repeat check_if_exists to uninstall-on-upgrade
- # (#14)
- if not self.ignore_installed:
- req.check_if_exists(self.use_user_site)
-
- if req.satisfied_by:
- should_modify = (
- self.upgrade_strategy != "to-satisfy-only"
- or self.force_reinstall
- or self.ignore_installed
- or req.link.scheme == "file"
- )
- if should_modify:
- self._set_req_to_reinstall(req)
- else:
- logger.info(
- "Requirement already satisfied (use --upgrade to upgrade): %s",
- req,
- )
- return dist
-
- def _resolve_one(
- self,
- requirement_set: RequirementSet,
- req_to_install: InstallRequirement,
- ) -> List[InstallRequirement]:
- """Prepare a single requirements file.
-
- :return: A list of additional InstallRequirements to also install.
- """
- # Tell user what we are doing for this requirement:
- # obtain (editable), skipping, processing (local url), collecting
- # (remote url or package name)
- if req_to_install.constraint or req_to_install.prepared:
- return []
-
- req_to_install.prepared = True
-
- # Parse and return dependencies
- dist = self._get_dist_for(req_to_install)
- # This will raise UnsupportedPythonVersion if the given Python
- # version isn't compatible with the distribution's Requires-Python.
- _check_dist_requires_python(
- dist,
- version_info=self._py_version_info,
- ignore_requires_python=self.ignore_requires_python,
- )
-
- more_reqs: List[InstallRequirement] = []
-
- def add_req(subreq: Requirement, extras_requested: Iterable[str]) -> None:
- # This idiosyncratically converts the Requirement to str and let
- # make_install_req then parse it again into Requirement. But this is
- # the legacy resolver so I'm just not going to bother refactoring.
- sub_install_req = self._make_install_req(str(subreq), req_to_install)
- parent_req_name = req_to_install.name
- to_scan_again, add_to_parent = requirement_set.add_requirement(
- sub_install_req,
- parent_req_name=parent_req_name,
- extras_requested=extras_requested,
- )
- if parent_req_name and add_to_parent:
- self._discovered_dependencies[parent_req_name].append(add_to_parent)
- more_reqs.extend(to_scan_again)
-
- with indent_log():
- # We add req_to_install before its dependencies, so that we
- # can refer to it when adding dependencies.
- if not requirement_set.has_requirement(req_to_install.name):
- # 'unnamed' requirements will get added here
- # 'unnamed' requirements can only come from being directly
- # provided by the user.
- assert req_to_install.user_supplied
- requirement_set.add_requirement(req_to_install, parent_req_name=None)
-
- if not self.ignore_dependencies:
- if req_to_install.extras:
- logger.debug(
- "Installing extra requirements: %r",
- ",".join(req_to_install.extras),
- )
- missing_requested = sorted(
- set(req_to_install.extras) - set(dist.iter_provided_extras())
- )
- for missing in missing_requested:
- logger.warning(
- "%s %s does not provide the extra '%s'",
- dist.raw_name,
- dist.version,
- missing,
- )
-
- available_requested = sorted(
- set(dist.iter_provided_extras()) & set(req_to_install.extras)
- )
- for subreq in dist.iter_dependencies(available_requested):
- add_req(subreq, extras_requested=available_requested)
-
- return more_reqs
-
- def get_installation_order(
- self, req_set: RequirementSet
- ) -> List[InstallRequirement]:
- """Create the installation order.
-
- The installation order is topological - requirements are installed
- before the requiring thing. We break cycles at an arbitrary point,
- and make no other guarantees.
- """
- # The current implementation, which we may change at any point
- # installs the user specified things in the order given, except when
- # dependencies must come earlier to achieve topological order.
- order = []
- ordered_reqs: Set[InstallRequirement] = set()
-
- def schedule(req: InstallRequirement) -> None:
- if req.satisfied_by or req in ordered_reqs:
- return
- if req.constraint:
- return
- ordered_reqs.add(req)
- for dep in self._discovered_dependencies[req.name]:
- schedule(dep)
- order.append(req)
-
- for install_req in req_set.requirements.values():
- schedule(install_req)
- return order
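A toy illustration of the scheduling above, detached from pip's internals: dependencies are appended before the packages that require them, and already-seen requirements are skipped (all names below are made up):

```python
# Stand-in for self._discovered_dependencies, keyed by requirement name.
discovered = {"app": ["requests"], "requests": ["urllib3", "idna"]}

order, seen = [], set()

def schedule(name):
    if name in seen:
        return
    seen.add(name)
    for dep in discovered.get(name, []):  # dependencies first
        schedule(dep)
    order.append(name)                    # then the requiring package

schedule("app")
print(order)  # ['urllib3', 'idna', 'requests', 'app']
```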
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/config.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/config.py
deleted file mode 100644
index ccdcd7fdb9f947265f861c6ab63166e7febc0edd..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/config.py
+++ /dev/null
@@ -1,791 +0,0 @@
-"""Configuration for Pydantic models."""
-from __future__ import annotations as _annotations
-
-from typing import TYPE_CHECKING, Any, Callable, Dict, Type, Union
-
-from typing_extensions import Literal, TypeAlias, TypedDict
-
-from ._migration import getattr_migration
-
-if TYPE_CHECKING:
- from ._internal._generate_schema import GenerateSchema as _GenerateSchema
-
-__all__ = ('ConfigDict',)
-
-
-JsonEncoder = Callable[[Any], Any]
-
-JsonSchemaExtraCallable: TypeAlias = Union[
- Callable[[Dict[str, Any]], None],
- Callable[[Dict[str, Any], Type[Any]], None],
-]
-
-ExtraValues = Literal['allow', 'ignore', 'forbid']
-
-
-class ConfigDict(TypedDict, total=False):
- """A TypedDict for configuring Pydantic behaviour."""
-
- title: str | None
- """The title for the generated JSON schema, defaults to the model's name"""
-
- str_to_lower: bool
- """Whether to convert all characters to lowercase for str types. Defaults to `False`."""
-
- str_to_upper: bool
- """Whether to convert all characters to uppercase for str types. Defaults to `False`."""
- str_strip_whitespace: bool
- """Whether to strip leading and trailing whitespace for str types."""
-
- str_min_length: int
- """The minimum length for str types. Defaults to `None`."""
-
- str_max_length: int | None
- """The maximum length for str types. Defaults to `None`."""
-
- extra: ExtraValues | None
- """
- Whether to ignore, allow, or forbid extra attributes during model initialization. Defaults to `'ignore'`.
-
- You can configure how pydantic handles the attributes that are not defined in the model:
-
- * `allow` - Allow any extra attributes.
- * `forbid` - Forbid any extra attributes.
- * `ignore` - Ignore any extra attributes.
-
- ```py
- from pydantic import BaseModel, ConfigDict
-
-
- class User(BaseModel):
- model_config = ConfigDict(extra='ignore') # (1)!
-
- name: str
-
-
- user = User(name='John Doe', age=20) # (2)!
- print(user)
- #> name='John Doe'
- ```
-
- 1. This is the default behaviour.
- 2. The `age` argument is ignored.
-
- Instead, with `extra='allow'`, the `age` argument is included:
-
- ```py
- from pydantic import BaseModel, ConfigDict
-
-
- class User(BaseModel):
- model_config = ConfigDict(extra='allow')
-
- name: str
-
-
- user = User(name='John Doe', age=20) # (1)!
- print(user)
- #> name='John Doe' age=20
- ```
-
- 1. The `age` argument is included.
-
- With `extra='forbid'`, an error is raised:
-
- ```py
- from pydantic import BaseModel, ConfigDict, ValidationError
-
-
- class User(BaseModel):
- model_config = ConfigDict(extra='forbid')
-
- name: str
-
-
- try:
- User(name='John Doe', age=20)
- except ValidationError as e:
- print(e)
- '''
- 1 validation error for User
- age
- Extra inputs are not permitted [type=extra_forbidden, input_value=20, input_type=int]
- '''
- ```
- """
-
- frozen: bool
- """
- Whether or not models are faux-immutable, i.e. whether `__setattr__` is allowed, and also generates
- a `__hash__()` method for the model. This makes instances of the model potentially hashable if all the
- attributes are hashable. Defaults to `False`.
-
- Note:
- On V1, this setting was called `allow_mutation`, and was `True` by default.
- """
-
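A short sketch of the behaviour described above: with `frozen=True`, assignment raises a `ValidationError` and instances become usable as set members or dict keys (assuming all field values are hashable):

```python
from pydantic import BaseModel, ConfigDict, ValidationError

class Point(BaseModel):
    model_config = ConfigDict(frozen=True)
    x: int
    y: int

p = Point(x=1, y=2)
seen = {p}  # hashable, so it can live in a set

try:
    p.x = 3
except ValidationError as e:
    print(e.errors()[0]["type"])  # 'frozen_instance'
```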
- populate_by_name: bool
- """
- Whether an aliased field may be populated by its name as given by the model
- attribute, as well as the alias. Defaults to `False`.
-
- Note:
- The name of this configuration setting was changed in **v2.0** from
- `allow_population_by_alias` to `populate_by_name`.
-
- ```py
- from pydantic import BaseModel, ConfigDict, Field
-
-
- class User(BaseModel):
- model_config = ConfigDict(populate_by_name=True)
-
- name: str = Field(alias='full_name') # (1)!
- age: int
-
-
- user = User(full_name='John Doe', age=20) # (2)!
- print(user)
- #> name='John Doe' age=20
- user = User(name='John Doe', age=20) # (3)!
- print(user)
- #> name='John Doe' age=20
- ```
-
- 1. The field `'name'` has an alias `'full_name'`.
- 2. The model is populated by the alias `'full_name'`.
- 3. The model is populated by the field name `'name'`.
- """
-
- use_enum_values: bool
- """
- Whether to populate models with the `value` property of enums, rather than the raw enum.
- This may be useful if you want to serialize `model.model_dump()` later. Defaults to `False`.
- """
-
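A brief sketch of the effect on serialization:

```python
from enum import Enum

from pydantic import BaseModel, ConfigDict

class Color(str, Enum):
    RED = 'red'

class Item(BaseModel):
    model_config = ConfigDict(use_enum_values=True)
    color: Color

print(Item(color=Color.RED).model_dump())  # {'color': 'red'} rather than the enum member
```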
- validate_assignment: bool
- """
- Whether to validate the data when the model is changed. Defaults to `False`.
-
- The default behavior of Pydantic is to validate the data when the model is created.
-
- In case the user changes the data after the model is created, the model is _not_ revalidated.
-
- ```py
- from pydantic import BaseModel
-
- class User(BaseModel):
- name: str
-
- user = User(name='John Doe') # (1)!
- print(user)
- #> name='John Doe'
- user.name = 123 # (2)!
- print(user)
- #> name=123
- ```
-
- 1. The validation happens only when the model is created.
- 2. The validation does not happen when the data is changed.
-
- In case you want to revalidate the model when the data is changed, you can use `validate_assignment=True`:
-
- ```py
- from pydantic import BaseModel, ValidationError
-
- class User(BaseModel, validate_assignment=True): # (1)!
- name: str
-
- user = User(name='John Doe') # (2)!
- print(user)
- #> name='John Doe'
- try:
- user.name = 123 # (3)!
- except ValidationError as e:
- print(e)
- '''
- 1 validation error for User
- name
- Input should be a valid string [type=string_type, input_value=123, input_type=int]
- '''
- ```
-
- 1. You can either use class keyword arguments, or `model_config` to set `validate_assignment=True`.
- 2. The validation happens when the model is created.
- 3. The validation _also_ happens when the data is changed.
- """
-
- arbitrary_types_allowed: bool
- """
- Whether arbitrary types are allowed for field types. Defaults to `False`.
-
- ```py
- from pydantic import BaseModel, ConfigDict, ValidationError
-
- # This is not a pydantic model, it's an arbitrary class
- class Pet:
- def __init__(self, name: str):
- self.name = name
-
- class Model(BaseModel):
- model_config = ConfigDict(arbitrary_types_allowed=True)
-
- pet: Pet
- owner: str
-
- pet = Pet(name='Hedwig')
- # A simple check of instance type is used to validate the data
- model = Model(owner='Harry', pet=pet)
- print(model)
- #> pet=<__main__.Pet object at 0x0123456789ab> owner='Harry'
- print(model.pet)
- #> <__main__.Pet object at 0x0123456789ab>
- print(model.pet.name)
- #> Hedwig
- print(type(model.pet))
- #> <class '__main__.Pet'>
- try:
- # If the value is not an instance of the type, it's invalid
- Model(owner='Harry', pet='Hedwig')
- except ValidationError as e:
- print(e)
- '''
- 1 validation error for Model
- pet
- Input should be an instance of Pet [type=is_instance_of, input_value='Hedwig', input_type=str]
- '''
-
- # Nothing in the instance of the arbitrary type is checked
- # Here name probably should have been a str, but it's not validated
- pet2 = Pet(name=42)
- model2 = Model(owner='Harry', pet=pet2)
- print(model2)
- #> pet=<__main__.Pet object at 0x0123456789ab> owner='Harry'
- print(model2.pet)
- #> <__main__.Pet object at 0x0123456789ab>
- print(model2.pet.name)
- #> 42
- print(type(model2.pet))
- #> <class '__main__.Pet'>
- ```
- """
-
- from_attributes: bool
- """
- Whether to build models and look up discriminators of tagged unions using python object attributes.
- """
-
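A minimal sketch with a plain object standing in for, e.g., an ORM row (`UserRow` is an arbitrary class made up for the example):

```python
from pydantic import BaseModel, ConfigDict

class UserRow:  # any object exposing matching attributes
    def __init__(self):
        self.id = 1
        self.name = 'John Doe'

class UserModel(BaseModel):
    model_config = ConfigDict(from_attributes=True)
    id: int
    name: str

print(UserModel.model_validate(UserRow()))  # id=1 name='John Doe'
```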
- loc_by_alias: bool
- """Whether to use the actual key provided in the data (e.g. alias) for error `loc`s rather than the field's name. Defaults to `True`."""
-
- alias_generator: Callable[[str], str] | None
- """
- A callable that takes a field name and returns an alias for it.
-
- If data source field names do not match your code style (e.g. CamelCase fields),
- you can automatically generate aliases using `alias_generator`:
-
- ```py
- from pydantic import BaseModel, ConfigDict
- from pydantic.alias_generators import to_pascal
-
- class Voice(BaseModel):
- model_config = ConfigDict(alias_generator=to_pascal)
-
- name: str
- language_code: str
-
- voice = Voice(Name='Filiz', LanguageCode='tr-TR')
- print(voice.language_code)
- #> tr-TR
- print(voice.model_dump(by_alias=True))
- #> {'Name': 'Filiz', 'LanguageCode': 'tr-TR'}
- ```
-
- Note:
- Pydantic offers three built-in alias generators: [`to_pascal`][pydantic.alias_generators.to_pascal],
- [`to_camel`][pydantic.alias_generators.to_camel], and [`to_snake`][pydantic.alias_generators.to_snake].
- """
-
- ignored_types: tuple[type, ...]
- """A tuple of types that may occur as values of class attributes without annotations. This is
- typically used for custom descriptors (classes that behave like `property`). If an attribute is set on a
- class without an annotation and has a type that is not in this tuple (or otherwise recognized by
- _pydantic_), an error will be raised. Defaults to `()`.
- """
-
- allow_inf_nan: bool
- """Whether to allow infinity (`+inf` an `-inf`) and NaN values to float fields. Defaults to `True`."""
-
- json_schema_extra: dict[str, object] | JsonSchemaExtraCallable | None
- """A dict or callable to provide extra JSON schema properties. Defaults to `None`."""
-
- json_encoders: dict[type[object], JsonEncoder] | None
- """
- A `dict` of custom JSON encoders for specific types. Defaults to `None`.
-
- !!! warning "Deprecated"
- This config option is a carryover from v1.
- We originally planned to remove it in v2 but didn't have a 1:1 replacement so we are keeping it for now.
- It is still deprecated and will likely be removed in the future.
- """
-
- # new in V2
- strict: bool
- """
- _(new in V2)_ If `True`, strict validation is applied to all fields on the model.
-
- By default, Pydantic attempts to coerce values to the correct type, when possible.
-
- There are situations in which you may want to disable this behavior, and instead raise an error if a value's type
- does not match the field's type annotation.
-
- To configure strict mode for all fields on a model, you can set `strict=True` on the model.
-
- ```py
- from pydantic import BaseModel, ConfigDict
-
- class Model(BaseModel):
- model_config = ConfigDict(strict=True)
-
- name: str
- age: int
- ```
-
- See [Strict Mode](../concepts/strict_mode.md) for more details.
-
- See the [Conversion Table](../concepts/conversion_table.md) for more details on how Pydantic converts data in both
- strict and lax modes.
- """
- # whether instances of models and dataclasses (including subclass instances) should re-validate, default 'never'
- revalidate_instances: Literal['always', 'never', 'subclass-instances']
- """
- When and how to revalidate models and dataclasses during validation. Accepts the string
- values of `'never'`, `'always'` and `'subclass-instances'`. Defaults to `'never'`.
-
- - `'never'` will not revalidate models and dataclasses during validation
- - `'always'` will revalidate models and dataclasses during validation
- - `'subclass-instances'` will revalidate models and dataclasses during validation if the instance is a
- subclass of the model or dataclass
-
- By default, model and dataclass instances are not revalidated during validation.
-
- ```py
- from typing import List
-
- from pydantic import BaseModel
-
- class User(BaseModel, revalidate_instances='never'): # (1)!
- hobbies: List[str]
-
- class SubUser(User):
- sins: List[str]
-
- class Transaction(BaseModel):
- user: User
-
- my_user = User(hobbies=['reading'])
- t = Transaction(user=my_user)
- print(t)
- #> user=User(hobbies=['reading'])
-
- my_user.hobbies = [1] # (2)!
- t = Transaction(user=my_user) # (3)!
- print(t)
- #> user=User(hobbies=[1])
-
- my_sub_user = SubUser(hobbies=['scuba diving'], sins=['lying'])
- t = Transaction(user=my_sub_user)
- print(t)
- #> user=SubUser(hobbies=['scuba diving'], sins=['lying'])
- ```
-
- 1. `revalidate_instances` is set to `'never'` by **default**.
- 2. The assignment is not validated, unless you set `validate_assignment` to `True` in the model's config.
- 3. Since `revalidate_instances` is set to `never`, this is not revalidated.
-
- If you want to revalidate instances during validation, you can set `revalidate_instances` to `'always'`
- in the model's config.
-
- ```py
- from typing import List
-
- from pydantic import BaseModel, ValidationError
-
- class User(BaseModel, revalidate_instances='always'): # (1)!
- hobbies: List[str]
-
- class SubUser(User):
- sins: List[str]
-
- class Transaction(BaseModel):
- user: User
-
- my_user = User(hobbies=['reading'])
- t = Transaction(user=my_user)
- print(t)
- #> user=User(hobbies=['reading'])
-
- my_user.hobbies = [1]
- try:
- t = Transaction(user=my_user) # (2)!
- except ValidationError as e:
- print(e)
- '''
- 1 validation error for Transaction
- user.hobbies.0
- Input should be a valid string [type=string_type, input_value=1, input_type=int]
- '''
-
- my_sub_user = SubUser(hobbies=['scuba diving'], sins=['lying'])
- t = Transaction(user=my_sub_user)
- print(t) # (3)!
- #> user=User(hobbies=['scuba diving'])
- ```
-
- 1. `revalidate_instances` is set to `'always'`.
- 2. The model is revalidated, since `revalidate_instances` is set to `'always'`.
- 3. Using `'never'` we would have gotten `user=SubUser(hobbies=['scuba diving'], sins=['lying'])`.
-
- It's also possible to set `revalidate_instances` to `'subclass-instances'` to only revalidate instances
- of subclasses of the model.
-
- ```py
- from typing import List
-
- from pydantic import BaseModel
-
- class User(BaseModel, revalidate_instances='subclass-instances'): # (1)!
- hobbies: List[str]
-
- class SubUser(User):
- sins: List[str]
-
- class Transaction(BaseModel):
- user: User
-
- my_user = User(hobbies=['reading'])
- t = Transaction(user=my_user)
- print(t)
- #> user=User(hobbies=['reading'])
-
- my_user.hobbies = [1]
- t = Transaction(user=my_user) # (2)!
- print(t)
- #> user=User(hobbies=[1])
-
- my_sub_user = SubUser(hobbies=['scuba diving'], sins=['lying'])
- t = Transaction(user=my_sub_user)
- print(t) # (3)!
- #> user=User(hobbies=['scuba diving'])
- ```
-
- 1. `revalidate_instances` is set to `'subclass-instances'`.
- 2. This is not revalidated, since `my_user` is not a subclass of `User`.
- 3. Using `'never'` we would have gotten `user=SubUser(hobbies=['scuba diving'], sins=['lying'])`.
- """
-
- ser_json_timedelta: Literal['iso8601', 'float']
- """
- The format of JSON serialized timedeltas. Accepts the string values of `'iso8601'` and
- `'float'`. Defaults to `'iso8601'`.
-
- - `'iso8601'` will serialize timedeltas to ISO 8601 durations.
- - `'float'` will serialize timedeltas to the total number of seconds.
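-
-    A minimal sketch of the `'float'` option (not from the original docstring; the exact JSON text may differ slightly):
-
-    ```py
-    from datetime import timedelta
-
-    from pydantic import BaseModel, ConfigDict
-
-    class Model(BaseModel):
-        model_config = ConfigDict(ser_json_timedelta='float')
-
-        delta: timedelta
-
-    # 1 hour 30 minutes is dumped as the total number of seconds
-    print(Model(delta=timedelta(hours=1, minutes=30)).model_dump_json())
-    #> {"delta":5400.0}
-    ```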
- """
-
- ser_json_bytes: Literal['utf8', 'base64']
- """
- The encoding of JSON serialized bytes. Accepts the string values of `'utf8'` and `'base64'`.
- Defaults to `'utf8'`.
-
- - `'utf8'` will serialize bytes to UTF-8 strings.
- - `'base64'` will serialize bytes to URL safe base64 strings.
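-
-    A minimal sketch of the `'base64'` option (not from the original docstring; the exact JSON text may differ slightly):
-
-    ```py
-    from pydantic import BaseModel, ConfigDict
-
-    class Model(BaseModel):
-        model_config = ConfigDict(ser_json_bytes='base64')
-
-        data: bytes
-
-    # b'hello' is dumped as URL-safe base64 rather than a UTF-8 string
-    print(Model(data=b'hello').model_dump_json())
-    #> {"data":"aGVsbG8="}
-    ```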
- """
-
- # whether to validate default values during validation, default False
- validate_default: bool
-    """
-    Whether to validate default values during validation. Defaults to `False`.
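-
-    A minimal sketch of the effect (not from the original docstring; the exact error text may differ):
-
-    ```py
-    from pydantic import BaseModel, ConfigDict, ValidationError
-
-    class Model(BaseModel):
-        model_config = ConfigDict(validate_default=True)
-
-        x: int = 'not an int'  # invalid default, checked at instantiation time
-
-    try:
-        Model()
-    except ValidationError as e:
-        print(e)  # 1 validation error for Model -- the bad default is rejected
-    ```
-    """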
-
- validate_return: bool
-    """Whether to validate the return value from call validators. Defaults to `False`."""
-
- protected_namespaces: tuple[str, ...]
- """
-    A `tuple` of strings that prevent models from having fields whose names conflict with them.
-    Defaults to `('model_',)`.
-
- Pydantic prevents collisions between model attributes and `BaseModel`'s own methods by
- namespacing them with the prefix `model_`.
-
- ```py
- import warnings
-
- from pydantic import BaseModel
-
- warnings.filterwarnings('error') # Raise warnings as errors
-
- try:
-
- class Model(BaseModel):
- model_prefixed_field: str
-
- except UserWarning as e:
- print(e)
- '''
- Field "model_prefixed_field" has conflict with protected namespace "model_".
-
- You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
- '''
- ```
-
- You can customize this behavior using the `protected_namespaces` setting:
-
- ```py
- import warnings
-
- from pydantic import BaseModel, ConfigDict
-
- warnings.filterwarnings('error') # Raise warnings as errors
-
- try:
-
- class Model(BaseModel):
- model_prefixed_field: str
- also_protect_field: str
-
- model_config = ConfigDict(
- protected_namespaces=('protect_me_', 'also_protect_')
- )
-
- except UserWarning as e:
- print(e)
- '''
- Field "also_protect_field" has conflict with protected namespace "also_protect_".
-
- You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ('protect_me_',)`.
- '''
- ```
-
- While Pydantic will only emit a warning when an item is in a protected namespace but does not actually have a collision,
- an error _is_ raised if there is an actual collision with an existing attribute:
-
- ```py
- from pydantic import BaseModel
-
- try:
-
- class Model(BaseModel):
- model_validate: str
-
- except NameError as e:
- print(e)
- '''
-        Field "model_validate" conflicts with member <bound method BaseModel.model_validate of <class 'pydantic.main.BaseModel'>> of protected namespace "model_".
- '''
- ```
- """
-
- hide_input_in_errors: bool
- """
- Whether to hide inputs when printing errors. Defaults to `False`.
-
- Pydantic shows the input value and type when it raises `ValidationError` during the validation.
-
- ```py
- from pydantic import BaseModel, ValidationError
-
- class Model(BaseModel):
- a: str
-
- try:
- Model(a=123)
- except ValidationError as e:
- print(e)
- '''
- 1 validation error for Model
- a
- Input should be a valid string [type=string_type, input_value=123, input_type=int]
- '''
- ```
-
- You can hide the input value and type by setting the `hide_input_in_errors` config to `True`.
-
- ```py
- from pydantic import BaseModel, ConfigDict, ValidationError
-
- class Model(BaseModel):
- a: str
- model_config = ConfigDict(hide_input_in_errors=True)
-
- try:
- Model(a=123)
- except ValidationError as e:
- print(e)
- '''
- 1 validation error for Model
- a
- Input should be a valid string [type=string_type]
- '''
- ```
- """
-
- defer_build: bool
- """
- Whether to defer model validator and serializer construction until the first model validation.
-
- This can be useful to avoid the overhead of building models which are only
- used nested within other models, or when you want to manually define type namespace via
- [`Model.model_rebuild(_types_namespace=...)`][pydantic.BaseModel.model_rebuild]. Defaults to False.
- """
-
- plugin_settings: dict[str, object] | None
- """A `dict` of settings for plugins. Defaults to `None`.
-
- See [Pydantic Plugins](../concepts/plugins.md) for details.
- """
-
- schema_generator: type[_GenerateSchema] | None
- """
- A custom core schema generator class to use when generating JSON schemas.
- Useful if you want to change the way types are validated across an entire model/schema. Defaults to `None`.
-
- The `GenerateSchema` interface is subject to change, currently only the `string_schema` method is public.
-
- See [#6737](https://github.com/pydantic/pydantic/pull/6737) for details.
- """
-
- json_schema_serialization_defaults_required: bool
- """
- Whether fields with default values should be marked as required in the serialization schema. Defaults to `False`.
-
- This ensures that the serialization schema will reflect the fact a field with a default will always be present
- when serializing the model, even though it is not required for validation.
-
- However, there are scenarios where this may be undesirable — in particular, if you want to share the schema
- between validation and serialization, and don't mind fields with defaults being marked as not required during
- serialization. See [#7209](https://github.com/pydantic/pydantic/issues/7209) for more details.
-
- ```py
- from pydantic import BaseModel, ConfigDict
-
- class Model(BaseModel):
- a: str = 'a'
-
- model_config = ConfigDict(json_schema_serialization_defaults_required=True)
-
- print(Model.model_json_schema(mode='validation'))
- '''
- {
- 'properties': {'a': {'default': 'a', 'title': 'A', 'type': 'string'}},
- 'title': 'Model',
- 'type': 'object',
- }
- '''
- print(Model.model_json_schema(mode='serialization'))
- '''
- {
- 'properties': {'a': {'default': 'a', 'title': 'A', 'type': 'string'}},
- 'required': ['a'],
- 'title': 'Model',
- 'type': 'object',
- }
- '''
- ```
- """
-
- json_schema_mode_override: Literal['validation', 'serialization', None]
- """
- If not `None`, the specified mode will be used to generate the JSON schema regardless of what `mode` was passed to
- the function call. Defaults to `None`.
-
- This provides a way to force the JSON schema generation to reflect a specific mode, e.g., to always use the
- validation schema.
-
- It can be useful when using frameworks (such as FastAPI) that may generate different schemas for validation
- and serialization that must both be referenced from the same schema; when this happens, we automatically append
- `-Input` to the definition reference for the validation schema and `-Output` to the definition reference for the
- serialization schema. By specifying a `json_schema_mode_override` though, this prevents the conflict between
- the validation and serialization schemas (since both will use the specified schema), and so prevents the suffixes
- from being added to the definition references.
-
- ```py
- from pydantic import BaseModel, ConfigDict, Json
-
- class Model(BaseModel):
- a: Json[int] # requires a string to validate, but will dump an int
-
- print(Model.model_json_schema(mode='serialization'))
- '''
- {
- 'properties': {'a': {'title': 'A', 'type': 'integer'}},
- 'required': ['a'],
- 'title': 'Model',
- 'type': 'object',
- }
- '''
-
- class ForceInputModel(Model):
- # the following ensures that even with mode='serialization', we
- # will get the schema that would be generated for validation.
- model_config = ConfigDict(json_schema_mode_override='validation')
-
- print(ForceInputModel.model_json_schema(mode='serialization'))
- '''
- {
- 'properties': {
- 'a': {
- 'contentMediaType': 'application/json',
- 'contentSchema': {'type': 'integer'},
- 'title': 'A',
- 'type': 'string',
- }
- },
- 'required': ['a'],
- 'title': 'ForceInputModel',
- 'type': 'object',
- }
- '''
- ```
- """
-
- coerce_numbers_to_str: bool
- """
- If `True`, enables automatic coercion of any `Number` type to `str` in "lax" (non-strict) mode. Defaults to `False`.
-
- Pydantic doesn't allow number types (`int`, `float`, `Decimal`) to be coerced as type `str` by default.
-
- ```py
- from decimal import Decimal
-
- from pydantic import BaseModel, ConfigDict, ValidationError
-
- class Model(BaseModel):
- value: str
-
- try:
- print(Model(value=42))
- except ValidationError as e:
- print(e)
- '''
- 1 validation error for Model
- value
- Input should be a valid string [type=string_type, input_value=42, input_type=int]
- '''
-
- class Model(BaseModel):
- model_config = ConfigDict(coerce_numbers_to_str=True)
-
- value: str
-
- repr(Model(value=42).value)
- #> "42"
- repr(Model(value=42.13).value)
- #> "42.13"
- repr(Model(value=Decimal('42.13')).value)
- #> "42.13"
- ```
- """
-
-
-__getattr__ = getattr_migration(__name__)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/cddl.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/cddl.py
deleted file mode 100644
index bd7f54aefd0fa1ee1043bfcf8bd4ec23f2ede5fc..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/cddl.py
+++ /dev/null
@@ -1,173 +0,0 @@
-"""
- pygments.lexers.cddl
- ~~~~~~~~~~~~~~~~~~~~
-
- Lexer for the Concise data definition language (CDDL), a notational
- convention to express CBOR and JSON data structures.
-
- More information:
- https://datatracker.ietf.org/doc/rfc8610/
-
- :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pygments.lexer import RegexLexer, bygroups, include, words
-from pygments.token import Comment, Error, Keyword, Name, Number, Operator, \
- Punctuation, String, Whitespace
-
-__all__ = ['CddlLexer']
-
-
-class CddlLexer(RegexLexer):
- """
- Lexer for CDDL definitions.
-
- .. versionadded:: 2.8
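-
-    Example usage (an illustrative sketch, not part of the original
-    docstring)::
-
-        from pygments import highlight
-        from pygments.formatters import TerminalFormatter
-        from pygments.lexers import CddlLexer
-
-        code = "person = {name: tstr, age: uint}"
-        print(highlight(code, CddlLexer(), TerminalFormatter()))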
- """
- name = "CDDL"
- url = 'https://datatracker.ietf.org/doc/rfc8610/'
- aliases = ["cddl"]
- filenames = ["*.cddl"]
- mimetypes = ["text/x-cddl"]
-
- _prelude_types = [
- "any",
- "b64legacy",
- "b64url",
- "bigfloat",
- "bigint",
- "bignint",
- "biguint",
- "bool",
- "bstr",
- "bytes",
- "cbor-any",
- "decfrac",
- "eb16",
- "eb64legacy",
- "eb64url",
- "encoded-cbor",
- "false",
- "float",
- "float16",
- "float16-32",
- "float32",
- "float32-64",
- "float64",
- "int",
- "integer",
- "mime-message",
- "nil",
- "nint",
- "null",
- "number",
- "regexp",
- "tdate",
- "text",
- "time",
- "true",
- "tstr",
- "uint",
- "undefined",
- "unsigned",
- "uri",
- ]
-
- _controls = [
- ".and",
- ".bits",
- ".cbor",
- ".cborseq",
- ".default",
- ".eq",
- ".ge",
- ".gt",
- ".le",
- ".lt",
- ".ne",
- ".regexp",
- ".size",
- ".within",
- ]
-
- _re_id = (
- r"[$@A-Z_a-z]"
- r"(?:[\-\.]+(?=[$@0-9A-Z_a-z])|[$@0-9A-Z_a-z])*"
-
- )
-
- # While the spec reads more like "an int must not start with 0" we use a
- # lookahead here that says "after a 0 there must be no digit". This makes the
- # '0' the invalid character in '01', which looks nicer when highlighted.
- _re_uint = r"(?:0b[01]+|0x[0-9a-fA-F]+|[1-9]\d*|0(?!\d))"
- _re_int = r"-?" + _re_uint
-
- tokens = {
- "commentsandwhitespace": [(r"\s+", Whitespace), (r";.+$", Comment.Single)],
- "root": [
- include("commentsandwhitespace"),
- # tag types
- (r"#(\d\.{uint})?".format(uint=_re_uint), Keyword.Type), # type or any
- # occurrence
- (
- r"({uint})?(\*)({uint})?".format(uint=_re_uint),
- bygroups(Number, Operator, Number),
- ),
- (r"\?|\+", Operator), # occurrence
- (r"\^", Operator), # cuts
- (r"(\.\.\.|\.\.)", Operator), # rangeop
- (words(_controls, suffix=r"\b"), Operator.Word), # ctlops
- # into choice op
- (r"&(?=\s*({groupname}|\())".format(groupname=_re_id), Operator),
- (r"~(?=\s*{})".format(_re_id), Operator), # unwrap op
-            (r"//|/(?!/)", Operator),  # double and single slash
- (r"=>|/==|/=|=", Operator),
- (r"[\[\]{}\(\),<>:]", Punctuation),
- # Bytestrings
- (r"(b64)(')", bygroups(String.Affix, String.Single), "bstrb64url"),
- (r"(h)(')", bygroups(String.Affix, String.Single), "bstrh"),
- (r"'", String.Single, "bstr"),
- # Barewords as member keys (must be matched before values, types, typenames,
- # groupnames).
- # Token type is String as barewords are always interpreted as such.
- (r"({bareword})(\s*)(:)".format(bareword=_re_id),
- bygroups(String, Whitespace, Punctuation)),
- # predefined types
- (words(_prelude_types, prefix=r"(?![\-_$@])\b", suffix=r"\b(?![\-_$@])"),
- Name.Builtin),
- # user-defined groupnames, typenames
- (_re_id, Name.Class),
- # values
- (r"0b[01]+", Number.Bin),
- (r"0o[0-7]+", Number.Oct),
- (r"0x[0-9a-fA-F]+(\.[0-9a-fA-F]+)?p[+-]?\d+", Number.Hex), # hexfloat
- (r"0x[0-9a-fA-F]+", Number.Hex), # hex
- # Float
- (r"{int}(?=(\.\d|e[+-]?\d))(?:\.\d+)?(?:e[+-]?\d+)?".format(int=_re_int),
- Number.Float),
- # Int
- (_re_int, Number.Integer),
- (r'"(\\\\|\\"|[^"])*"', String.Double),
- ],
- "bstrb64url": [
- (r"'", String.Single, "#pop"),
- include("commentsandwhitespace"),
- (r"\\.", String.Escape),
- (r"[0-9a-zA-Z\-_=]+", String.Single),
- (r".", Error),
- # (r";.+$", Token.Other),
- ],
- "bstrh": [
- (r"'", String.Single, "#pop"),
- include("commentsandwhitespace"),
- (r"\\.", String.Escape),
- (r"[0-9a-fA-F]+", String.Single),
- (r".", Error),
- ],
- "bstr": [
- (r"'", String.Single, "#pop"),
- (r"\\.", String.Escape),
- (r"[^'\\]+", String.Single),
- ],
- }
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/requests/hooks.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/requests/hooks.py
deleted file mode 100644
index d181ba2ec2e55d274897315887b78fbdca757da8..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/requests/hooks.py
+++ /dev/null
@@ -1,33 +0,0 @@
-"""
-requests.hooks
-~~~~~~~~~~~~~~
-
-This module provides the capabilities for the Requests hooks system.
-
-Available hooks:
-
-``response``:
- The response generated from a Request.
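-
-Example of attaching a ``response`` hook (an illustrative sketch, not part of
-the original docstring)::
-
-    import requests
-
-    def print_url(response, *args, **kwargs):
-        print(response.url)
-
-    requests.get("https://example.org", hooks={"response": print_url})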
-"""
-HOOKS = ["response"]
-
-
-def default_hooks():
- return {event: [] for event in HOOKS}
-
-
-# TODO: response is the only one
-
-
-def dispatch_hook(key, hooks, hook_data, **kwargs):
- """Dispatches a hook dictionary on a given piece of data."""
- hooks = hooks or {}
- hooks = hooks.get(key)
- if hooks:
- if hasattr(hooks, "__call__"):
- hooks = [hooks]
- for hook in hooks:
- _hook_data = hook(hook_data, **kwargs)
- if _hook_data is not None:
- hook_data = _hook_data
- return hook_data
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_vendor/packaging/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_vendor/packaging/__init__.py
deleted file mode 100644
index a0cf67df5245be16a020ca048832e180f7ce8661..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_vendor/packaging/__init__.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-from __future__ import absolute_import, division, print_function
-
-from .__about__ import (
- __author__,
- __copyright__,
- __email__,
- __license__,
- __summary__,
- __title__,
- __uri__,
- __version__,
-)
-
-__all__ = [
- "__title__",
- "__summary__",
- "__uri__",
- "__version__",
- "__author__",
- "__email__",
- "__license__",
- "__copyright__",
-]
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/build_clib.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/build_clib.py
deleted file mode 100644
index 67ce2444ea69a0bbdfab0bda8c2aa14951187096..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/build_clib.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import distutils.command.build_clib as orig
-from distutils.errors import DistutilsSetupError
-from distutils import log
-from setuptools.dep_util import newer_pairwise_group
-
-
-class build_clib(orig.build_clib):
- """
- Override the default build_clib behaviour to do the following:
-
- 1. Implement a rudimentary timestamp-based dependency system
- so 'compile()' doesn't run every time.
- 2. Add more keys to the 'build_info' dictionary:
- * obj_deps - specify dependencies for each object compiled.
- this should be a dictionary mapping a key
- with the source filename to a list of
- dependencies. Use an empty string for global
- dependencies.
- * cflags - specify a list of additional flags to pass to
- the compiler.
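-
-    Example ``libraries`` entry using these extra keys (an illustrative
-    sketch only; the file names are hypothetical)::
-
-        from setuptools import setup
-
-        setup(
-            name='example',
-            libraries=[
-                ('mylib', {
-                    'sources': ['src/a.c', 'src/b.c'],
-                    'obj_deps': {'': ['src/common.h'], 'src/a.c': ['src/a.h']},
-                    'cflags': ['-O2'],
-                }),
-            ],
-        )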
- """
-
- def build_libraries(self, libraries):
- for (lib_name, build_info) in libraries:
- sources = build_info.get('sources')
- if sources is None or not isinstance(sources, (list, tuple)):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'sources' must be present and must be "
- "a list of source filenames" % lib_name)
- sources = list(sources)
-
- log.info("building '%s' library", lib_name)
-
- # Make sure everything is the correct type.
- # obj_deps should be a dictionary of keys as sources
- # and a list/tuple of files that are its dependencies.
- obj_deps = build_info.get('obj_deps', dict())
- if not isinstance(obj_deps, dict):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'obj_deps' must be a dictionary of "
- "type 'source: list'" % lib_name)
- dependencies = []
-
- # Get the global dependencies that are specified by the '' key.
- # These will go into every source's dependency list.
- global_deps = obj_deps.get('', list())
- if not isinstance(global_deps, (list, tuple)):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'obj_deps' must be a dictionary of "
- "type 'source: list'" % lib_name)
-
- # Build the list to be used by newer_pairwise_group
- # each source will be auto-added to its dependencies.
- for source in sources:
- src_deps = [source]
- src_deps.extend(global_deps)
- extra_deps = obj_deps.get(source, list())
- if not isinstance(extra_deps, (list, tuple)):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'obj_deps' must be a dictionary of "
- "type 'source: list'" % lib_name)
- src_deps.extend(extra_deps)
- dependencies.append(src_deps)
-
- expected_objects = self.compiler.object_filenames(
- sources,
- output_dir=self.build_temp,
- )
-
- if (
- newer_pairwise_group(dependencies, expected_objects)
- != ([], [])
- ):
- # First, compile the source code to object files in the library
- # directory. (This should probably change to putting object
- # files in a temporary build directory.)
- macros = build_info.get('macros')
- include_dirs = build_info.get('include_dirs')
- cflags = build_info.get('cflags')
- self.compiler.compile(
- sources,
- output_dir=self.build_temp,
- macros=macros,
- include_dirs=include_dirs,
- extra_postargs=cflags,
- debug=self.debug
- )
-
- # Now "link" the object files together into a static library.
- # (On Unix at least, this isn't really linking -- it just
- # builds an archive. Whatever.)
- self.compiler.create_static_lib(
- expected_objects,
- lib_name,
- output_dir=self.build_clib,
- debug=self.debug
- )
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/sdist.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/sdist.py
deleted file mode 100644
index 4a014283c8650112323007992fe702702707ad66..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/sdist.py
+++ /dev/null
@@ -1,189 +0,0 @@
-from distutils import log
-import distutils.command.sdist as orig
-import os
-import sys
-import io
-import contextlib
-
-from .py36compat import sdist_add_defaults
-
-import pkg_resources
-
-_default_revctrl = list
-
-
-def walk_revctrl(dirname=''):
- """Find all files under revision control"""
- for ep in pkg_resources.iter_entry_points('setuptools.file_finders'):
- for item in ep.load()(dirname):
- yield item
-
-
-class sdist(sdist_add_defaults, orig.sdist):
- """Smart sdist that finds anything supported by revision control"""
-
- user_options = [
- ('formats=', None,
- "formats for source distribution (comma-separated list)"),
- ('keep-temp', 'k',
- "keep the distribution tree around after creating " +
- "archive file(s)"),
- ('dist-dir=', 'd',
- "directory to put the source distribution archive(s) in "
- "[default: dist]"),
- ]
-
- negative_opt = {}
-
- README_EXTENSIONS = ['', '.rst', '.txt', '.md']
- READMES = tuple('README{0}'.format(ext) for ext in README_EXTENSIONS)
-
- def run(self):
- self.run_command('egg_info')
- ei_cmd = self.get_finalized_command('egg_info')
- self.filelist = ei_cmd.filelist
- self.filelist.append(os.path.join(ei_cmd.egg_info, 'SOURCES.txt'))
- self.check_readme()
-
- # Run sub commands
- for cmd_name in self.get_sub_commands():
- self.run_command(cmd_name)
-
- self.make_distribution()
-
- dist_files = getattr(self.distribution, 'dist_files', [])
- for file in self.archive_files:
- data = ('sdist', '', file)
- if data not in dist_files:
- dist_files.append(data)
-
- def initialize_options(self):
- orig.sdist.initialize_options(self)
-
- self._default_to_gztar()
-
- def _default_to_gztar(self):
- # only needed on Python prior to 3.6.
- if sys.version_info >= (3, 6, 0, 'beta', 1):
- return
- self.formats = ['gztar']
-
- def make_distribution(self):
- """
- Workaround for #516
- """
- with self._remove_os_link():
- orig.sdist.make_distribution(self)
-
- @staticmethod
- @contextlib.contextmanager
- def _remove_os_link():
- """
- In a context, remove and restore os.link if it exists
- """
-
- class NoValue:
- pass
-
- orig_val = getattr(os, 'link', NoValue)
- try:
- del os.link
- except Exception:
- pass
- try:
- yield
- finally:
- if orig_val is not NoValue:
- setattr(os, 'link', orig_val)
-
- def _add_defaults_optional(self):
- super()._add_defaults_optional()
- if os.path.isfile('pyproject.toml'):
- self.filelist.append('pyproject.toml')
-
- def _add_defaults_python(self):
- """getting python files"""
- if self.distribution.has_pure_modules():
- build_py = self.get_finalized_command('build_py')
- self.filelist.extend(build_py.get_source_files())
- self._add_data_files(self._safe_data_files(build_py))
-
- def _safe_data_files(self, build_py):
- """
- Extracting data_files from build_py is known to cause
- infinite recursion errors when `include_package_data`
- is enabled, so suppress it in that case.
- """
- if self.distribution.include_package_data:
- return ()
- return build_py.data_files
-
- def _add_data_files(self, data_files):
- """
- Add data files as found in build_py.data_files.
- """
- self.filelist.extend(
- os.path.join(src_dir, name)
- for _, src_dir, _, filenames in data_files
- for name in filenames
- )
-
- def _add_defaults_data_files(self):
- try:
- super()._add_defaults_data_files()
- except TypeError:
- log.warn("data_files contains unexpected objects")
-
- def check_readme(self):
- for f in self.READMES:
- if os.path.exists(f):
- return
- else:
- self.warn(
- "standard file not found: should have one of " +
- ', '.join(self.READMES)
- )
-
- def make_release_tree(self, base_dir, files):
- orig.sdist.make_release_tree(self, base_dir, files)
-
- # Save any egg_info command line options used to create this sdist
- dest = os.path.join(base_dir, 'setup.cfg')
- if hasattr(os, 'link') and os.path.exists(dest):
- # unlink and re-copy, since it might be hard-linked, and
- # we don't want to change the source version
- os.unlink(dest)
- self.copy_file('setup.cfg', dest)
-
- self.get_finalized_command('egg_info').save_version_info(dest)
-
- def _manifest_is_not_generated(self):
- # check for special comment used in 2.7.1 and higher
- if not os.path.isfile(self.manifest):
- return False
-
- with io.open(self.manifest, 'rb') as fp:
- first_line = fp.readline()
- return (first_line !=
- '# file GENERATED by distutils, do NOT edit\n'.encode())
-
- def read_manifest(self):
- """Read the manifest file (named by 'self.manifest') and use it to
- fill in 'self.filelist', the list of files to include in the source
- distribution.
- """
- log.info("reading manifest file '%s'", self.manifest)
- manifest = open(self.manifest, 'rb')
- for line in manifest:
- # The manifest must contain UTF-8. See #303.
- try:
- line = line.decode('UTF-8')
- except UnicodeDecodeError:
- log.warn("%r not UTF-8 decodable -- skipping" % line)
- continue
- # ignore comments and blank lines
- line = line.strip()
- if line.startswith('#') or not line:
- continue
- self.filelist.append(line)
- manifest.close()
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/dask.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/dask.py
deleted file mode 100644
index af9926a2797b9a49220fc5b2228e1ae18c447f37..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/dask.py
+++ /dev/null
@@ -1,44 +0,0 @@
-from functools import partial
-
-from dask.callbacks import Callback
-
-from .auto import tqdm as tqdm_auto
-
-__author__ = {"github.com/": ["casperdcl"]}
-__all__ = ['TqdmCallback']
-
-
-class TqdmCallback(Callback):
- """Dask callback for task progress."""
- def __init__(self, start=None, pretask=None, tqdm_class=tqdm_auto,
- **tqdm_kwargs):
- """
- Parameters
- ----------
- tqdm_class : optional
- `tqdm` class to use for bars [default: `tqdm.auto.tqdm`].
- tqdm_kwargs : optional
- Any other arguments used for all bars.
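-
-        Example (an illustrative sketch, not from the original docstring)::
-
-            import dask.array as da
-            from tqdm.dask import TqdmCallback
-
-            # wraps the computation in a progress bar for all dask tasks
-            with TqdmCallback(desc="compute"):
-                da.ones((1000, 1000)).sum().compute()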
- """
- super(TqdmCallback, self).__init__(start=start, pretask=pretask)
- if tqdm_kwargs:
- tqdm_class = partial(tqdm_class, **tqdm_kwargs)
- self.tqdm_class = tqdm_class
-
- def _start_state(self, _, state):
- self.pbar = self.tqdm_class(total=sum(
- len(state[k]) for k in ['ready', 'waiting', 'running', 'finished']))
-
- def _posttask(self, *_, **__):
- self.pbar.update()
-
- def _finish(self, *_, **__):
- self.pbar.close()
-
- def display(self):
- """Displays in the current cell in Notebooks."""
-        container = getattr(self.pbar, 'container', None)
- if container is None:
- return
- from .notebook import display
- display(container)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/typer/_completion_click7.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/typer/_completion_click7.py
deleted file mode 100644
index 9f4ad73f3055594c36b0b13ceaf22bd4f8e4d3df..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/typer/_completion_click7.py
+++ /dev/null
@@ -1,157 +0,0 @@
-import os
-import re
-import sys
-
-import click
-import click._bashcomplete
-
-from ._completion_shared import get_completion_script
-
-try:
- import shellingham
-except ImportError: # pragma: nocover
- shellingham = None
-
-
-_click_patched = False
-
-
-def do_bash_complete(cli: click.Command, prog_name: str) -> bool:
- cwords = click.parser.split_arg_string(os.getenv("COMP_WORDS", ""))
- cword = int(os.getenv("COMP_CWORD", 0))
- args = cwords[1:cword]
- try:
- incomplete = cwords[cword]
- except IndexError:
- incomplete = ""
-
- for item in click._bashcomplete.get_choices(cli, prog_name, args, incomplete):
- click.echo(item[0])
- return True
-
-
-def do_zsh_complete(cli: click.Command, prog_name: str) -> bool:
- completion_args = os.getenv("_TYPER_COMPLETE_ARGS", "")
- cwords = click.parser.split_arg_string(completion_args)
- args = cwords[1:]
- if args and not completion_args.endswith(" "):
- incomplete = args[-1]
- args = args[:-1]
- else:
- incomplete = ""
-
- def escape(s: str) -> str:
- return (
- s.replace('"', '""')
- .replace("'", "''")
- .replace("$", "\\$")
- .replace("`", "\\`")
- )
-
- res = []
- for item, help in click._bashcomplete.get_choices(cli, prog_name, args, incomplete):
- if help:
- res.append(f'"{escape(item)}":"{escape(help)}"')
- else:
- res.append(f'"{escape(item)}"')
- if res:
- args_str = "\n".join(res)
- click.echo(f"_arguments '*: :(({args_str}))'")
- else:
- click.echo("_files")
- return True
-
-
-def do_fish_complete(cli: click.Command, prog_name: str) -> bool:
- completion_args = os.getenv("_TYPER_COMPLETE_ARGS", "")
- complete_action = os.getenv("_TYPER_COMPLETE_FISH_ACTION", "")
- cwords = click.parser.split_arg_string(completion_args)
- args = cwords[1:]
- if args and not completion_args.endswith(" "):
- incomplete = args[-1]
- args = args[:-1]
- else:
- incomplete = ""
- show_args = []
- for item, help in click._bashcomplete.get_choices(cli, prog_name, args, incomplete):
- if help:
- formatted_help = re.sub(r"\s", " ", help)
- show_args.append(f"{item}\t{formatted_help}")
- else:
- show_args.append(item)
- if complete_action == "get-args":
- if show_args:
- for arg in show_args:
- click.echo(arg)
- elif complete_action == "is-args":
- if show_args:
- # Activate complete args (no files)
- sys.exit(0)
- else:
- # Deactivate complete args (allow files)
- sys.exit(1)
- return True
-
-
-def do_powershell_complete(cli: click.Command, prog_name: str) -> bool:
- completion_args = os.getenv("_TYPER_COMPLETE_ARGS", "")
- incomplete = os.getenv("_TYPER_COMPLETE_WORD_TO_COMPLETE", "")
- cwords = click.parser.split_arg_string(completion_args)
- args = cwords[1:]
- for item, help in click._bashcomplete.get_choices(cli, prog_name, args, incomplete):
- click.echo(f"{item}:::{help or ' '}")
-
- return True
-
-
-def do_shell_complete(*, cli: click.Command, prog_name: str, shell: str) -> bool:
- if shell == "bash":
- return do_bash_complete(cli, prog_name)
- elif shell == "zsh":
- return do_zsh_complete(cli, prog_name)
- elif shell == "fish":
- return do_fish_complete(cli, prog_name)
- elif shell in {"powershell", "pwsh"}:
- return do_powershell_complete(cli, prog_name)
- return False
-
-
-def handle_shell_complete(
- cli: click.Command, prog_name: str, complete_var: str, complete_instr: str
-) -> bool:
- if "_" not in complete_instr:
- click.echo("Invalid completion instruction.", err=True)
- sys.exit(1)
- command, shell = complete_instr.split("_", 1)
- if command == "source":
- click.echo(
- get_completion_script(
- prog_name=prog_name, complete_var=complete_var, shell=shell
- )
- )
- return True
- elif command == "complete":
- return do_shell_complete(cli=cli, prog_name=prog_name, shell=shell)
- click.echo(f'Completion instruction "{command}" not supported.', err=True)
- return False
-
-
-def completion_init() -> None:
- global _click_patched
- if not _click_patched:
- testing = os.getenv("_TYPER_COMPLETE_TESTING")
-
- def testing_handle_shell_complete(
- cli: click.Command, prog_name: str, complete_var: str, complete_instr: str
- ) -> bool:
- result = handle_shell_complete(cli, prog_name, complete_var, complete_instr)
- if result:
- # Avoid fast_exit(1) in Click so Coverage can finish
- sys.exit(1)
- return result
-
- if testing:
- click._bashcomplete.bashcomplete = testing_handle_shell_complete
- else:
- click._bashcomplete.bashcomplete = handle_shell_complete
- _click_patched = True
diff --git a/spaces/profoz/index_demo/app.py b/spaces/profoz/index_demo/app.py
deleted file mode 100644
index d50a0360581d5796388129752e82469559066ebc..0000000000000000000000000000000000000000
--- a/spaces/profoz/index_demo/app.py
+++ /dev/null
@@ -1,119 +0,0 @@
-import openai
-import requests
-import streamlit as st
-from bs4 import BeautifulSoup
-from sentence_transformers import CrossEncoder
-from transformers import pipeline
-
-all_documents = {}
-
-
-def qa_gpt3(question, context):
- print(question, context)
- openai.api_key = st.secrets["openai_key"]
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=f"Answer given the following context: {context}\n\nQuestion: {question}",
- temperature=0.7,
- max_tokens=256,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0
- )
- print(response)
- return {'answer': response['choices'][0]['text'].strip()}
-
-
-st.title('Document Question Answering System')
-
-qa_model = None
-
-crawl_urls = st.checkbox('Crawl?', value=False)
-
-document_text = st.text_area(
- label="Links (Comma separated)", height=100,
- value='https://www.databricks.com/blog/2022/11/15/values-define-databricks-culture.html, https://databricks.com/product/databricks-runtime-for-machine-learning/faq'
-)
-query = st.text_input("Query")
-
-qa_option = st.selectbox('Q/A Answerer', ('gpt3', 'a-ware/bart-squadv2'))
-tokenizing = st.selectbox('How to Tokenize',
- ("Don't (use entire body as document)", 'Newline (split by newline character)', 'Combo'))
-
-if qa_option == 'gpt3':
- qa_model = qa_gpt3
-else:
- qa_model = pipeline("question-answering", qa_option)
-st.write(f'Using {qa_option} as the Q/A model')
-
-encoder = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')
-
-
-def get_relevent_passage(question, documents):
- query_paragraph_list = [(question, para) for para in list(documents.keys()) if len(para.strip()) > 0]
-
- scores = encoder.predict(query_paragraph_list)
- top_5_indices = scores.argsort()[-5:]
- top_5_query_paragraph_list = [query_paragraph_list[i] for i in top_5_indices]
- top_5_query_paragraph_list.reverse()
- return top_5_query_paragraph_list[0][1]
-
-
-def answer_question(query, context):
- answer = qa_model(question=query, context=context)['answer']
- return answer
-
-
-def get_documents(document_text, crawl=crawl_urls):
- urls = document_text.split(',')
- for url in urls:
- st.write(f'Crawling {url}')
- if url in set(all_documents.values()):
- continue
- html = requests.get(url).text
- soup = BeautifulSoup(html, 'html.parser')
-
- if crawl:
- st.write('Give me a sec, crawling..')
- import re
-
- more_urls = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+',
- html)
- more_urls = list(
- set([m for m in more_urls if m[-4] != '.' and m[-3] != '.' and m.split('/')[:3] == url.split('/')[:3]]))
- for more_url in more_urls:
- all_documents.update(get_documents(more_url, crawl=False))
-
- body = "\n".join([x for x in soup.body.get_text().split('\n') if len(x) > 10])
- print(body)
-
- if tokenizing == "Don't (use entire body as document)":
- document_paragraphs = [body]
- elif tokenizing == 'Newline (split by newline character)':
- document_paragraphs = [n for n in body.split('\n') if len(n) > 250]
- elif tokenizing == 'Combo':
- document_paragraphs = [body] + [n for n in body.split('\n') if len(n) > 250]
-
- for document_paragraph in document_paragraphs:
- all_documents[document_paragraph] = url
-
- return all_documents
-
-
-if len(document_text.strip()) > 0 and len(query.strip()) > 0 and qa_model and encoder:
- st.write('Hmmm let me think about that..')
- document_text = document_text.strip()
- documents = get_documents(document_text)
- st.write(f'I am looking through {len(set(documents.values()))} sites')
-
- query = query.strip()
- context = get_relevent_passage(query, documents)
- answer = answer_question(query, context)
-
- relevant_url = documents[context]
-
- st.write('Check the answer below...with reference text')
- st.header("ANSWER: " + answer)
- st.subheader("REFERENCE: " + context)
- st.subheader("REFERENCE URL: " + relevant_url)
diff --git a/spaces/protoxx91/stable-diffusion-webui-controlnet-docker/header_patch.py b/spaces/protoxx91/stable-diffusion-webui-controlnet-docker/header_patch.py
deleted file mode 100644
index 464447c8cfb431f96098a1cbd95835596a5457bb..0000000000000000000000000000000000000000
--- a/spaces/protoxx91/stable-diffusion-webui-controlnet-docker/header_patch.py
+++ /dev/null
@@ -1,37 +0,0 @@
- with gr.Box(visible=os.environ.get("SPACE_ID")):
- if os.environ.get("SPACE_ID") and str(os.environ.get("IS_SHARED_UI", "") or "") not in ("", "0"):
- import torch
- if not torch.cuda.is_available():
- gr.HTML(f"""
-
-
▲ Automatic1111's Stable Diffusion WebUI + Mikubill's ControlNet WebUI extension | Running on Hugging Face | Loaded checkpoint: AtoZovyaRPGArtistTools15_sd15V1
-
-Codename: Panzers, Phase Two Free Download PC Game Cracked in ... Codename: Panzers, Phase Two – Fight alongside your troops in the sw.. ... OS: Windows XP / Vista / 7 / 8 /10 32 or 64 bit; Processor: Intel or AMD 2,0 ... 1fdad05405
-
-
-
diff --git a/spaces/qwieug123467/Linaqruf-anything-v3.0/app.py b/spaces/qwieug123467/Linaqruf-anything-v3.0/app.py
deleted file mode 100644
index 16e8131a0bbf7b06956e69e2b7758fa01e4eb51f..0000000000000000000000000000000000000000
--- a/spaces/qwieug123467/Linaqruf-anything-v3.0/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/Linaqruf/anything-v3.0").launch()
\ No newline at end of file
diff --git a/spaces/r3gm/RVC_HF/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/r3gm/RVC_HF/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
deleted file mode 100644
index ee3171bcb7c4a5066560723108b56e055f18be45..0000000000000000000000000000000000000000
--- a/spaces/r3gm/RVC_HF/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class DioF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate the F0 contour (fill in unvoiced frames).
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/criteria/lpips/utils.py b/spaces/radames/UserControllableLT-Latent-Transformer/criteria/lpips/utils.py
deleted file mode 100644
index 3d15a0983775810ef6239c561c67939b2b9ee3b5..0000000000000000000000000000000000000000
--- a/spaces/radames/UserControllableLT-Latent-Transformer/criteria/lpips/utils.py
+++ /dev/null
@@ -1,30 +0,0 @@
-from collections import OrderedDict
-
-import torch
-
-
-def normalize_activation(x, eps=1e-10):
- norm_factor = torch.sqrt(torch.sum(x ** 2, dim=1, keepdim=True))
- return x / (norm_factor + eps)
-
-
-def get_state_dict(net_type: str = 'alex', version: str = '0.1'):
- # build url
- url = 'https://raw.githubusercontent.com/richzhang/PerceptualSimilarity/' \
- + f'master/lpips/weights/v{version}/{net_type}.pth'
-
- # download
- old_state_dict = torch.hub.load_state_dict_from_url(
- url, progress=True,
- map_location=None if torch.cuda.is_available() else torch.device('cpu')
- )
-
- # rename keys
- new_state_dict = OrderedDict()
- for key, val in old_state_dict.items():
- new_key = key
- new_key = new_key.replace('lin', '')
- new_key = new_key.replace('model.', '')
- new_state_dict[new_key] = val
-
- return new_state_dict
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Ac4 Kenway Fleet Crack ((FULL)).md b/spaces/raedeXanto/academic-chatgpt-beta/Ac4 Kenway Fleet Crack ((FULL)).md
deleted file mode 100644
index a58b74b9cedf81bdd0c7d479c8a73eec03a347d7..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Ac4 Kenway Fleet Crack ((FULL)).md
+++ /dev/null
@@ -1,139 +0,0 @@
-
-
What is Ac4 Kenway Fleet Crack?
-
If you are a fan of Assassin's Creed 4: Black Flag, you might have heard of or played Kenway's Fleet, a naval combat minigame that lets you manage your own fleet of ships and send them on missions across the Caribbean. But did you know that there is a way to play this minigame without an online connection or a Uplay code? That's right, there is a crack that allows you to enjoy Kenway's Fleet offline and without any restrictions. In this article, we will tell you everything you need to know about Ac4 Kenway Fleet Crack, including what it is, how it works, how to install it, how to play it, and what are its pros and cons.
Kenway's Fleet is a feature that was introduced in Assassin's Creed 4: Black Flag, the sixth main installment in the popular action-adventure stealth game series developed by Ubisoft. It is a minigame that lets you control your own fleet of ships that you can capture from enemy vessels during naval battles. You can then send your ships on various missions across different regions of the Caribbean Sea, such as trading goods, fighting pirates, exploring islands, or collecting treasures. By completing these missions, you can earn money, resources, outfits, weapons, upgrades, and other rewards that can help you in your main adventure as Edward Kenway, a pirate-turned-assassin in the Golden Age of Piracy.
-
How to access Kenway's Fleet?
-
To access Kenway's Fleet, you need to meet two requirements. First, you need to have an online connection and a Uplay code that ships with each copy of the game (though you don't need a Uplay account). Second, you need to progress through the main story until Sequence 3 Memory 4, where you will acquire your first ship, the Jackdaw. After that, you can access Kenway's Fleet from any tavern or captain's cabin in the game. You can also access it from a companion app on your smartphone or tablet.
-
What are the benefits of Kenway's Fleet?
-
Kenway's Fleet offers several benefits that can enhance your gameplay experience. Some of them are:
-
-
You can earn extra money by trading goods with other ports or selling captured ships.
-
You can unlock new outfits, weapons, upgrades, and cheats for Edward and the Jackdaw by completing certain missions or collecting treasures.
-
You can explore new locations and discover secrets that are not available in the main game.
-
You can compete with other players online by comparing your fleet's performance or attacking their ships.
-
You can customize your fleet by renaming your ships, changing their colors, or adding flags.
-
-
What are the challenges of Kenway's Fleet?
-
Kenway's Fleet is not without its challenges. Some of them are:
-
-
You need to maintain your fleet by repairing your ships or hiring new crew members.
-
You need to protect your fleet from enemy attacks or storms that can damage or sink your ships.
-
You need to balance your fleet's speed, firepower, cargo capacity, and defense when choosing which ships to send on which missions.
-
You need to plan ahead and manage your time wisely as some missions can take hours or days to complete.
-
You need to deal with random events that can affect your fleet positively or negatively.
-
-
What is a crack?
-
A crack is a modification or patch that alters or bypasses some aspects of a software program or game. Usually, cracks are used to remove copy protection or digital rights management (DRM) systems that prevent unauthorized copying or distribution of software products. Cracks can also enable features or functions that are otherwise disabled or restricted by the original developers or publishers.
-
Why do people use cracks?
-
People use cracks for various reasons and motivations. Some of them are:
-
-
They want to save money by not buying expensive software products or subscriptions.
-
They want to test or try out software products before buying them.
-
They want to avoid annoying or intrusive DRM systems that can affect their performance or privacy.
-
They want to access features or functions that are not available in their region or platform.
-
They want to modify or customize software products according to their preferences or needs.
-
-
What are the risks of using cracks?
-
Using cracks is not without its risks. Some of them are:
-
-
They can expose your device or system to malware or viruses that can harm your data or hardware.
-
They can cause compatibility or stability issues that can affect your software performance or functionality.
-
They can violate intellectual property rights or laws that can result in legal consequences or penalties.
-
They can damage your reputation or credibility as a software user or consumer.
-
They can deprive software developers or publishers of their deserved income or recognition.
-
-
How to find and use a crack safely?
-
If you decide to use a crack for any reason, you should follow some tips and precautions to minimize your risks. Some of them are:
-
-
You should only download cracks from reputable sources or websites that have positive reviews or feedback from other users.
-
You should scan cracks with antivirus software before installing them on your device or system.
-
You should backup your data before applying cracks in case something goes wrong.
-
You should read instructions carefully before using cracks and follow them step by step.
-
You should disable your internet connection before using cracks if possible.
-
-
What is Ac4 Kenway Fleet Crack?
-
Ac4 Kenway Fleet Crack is a specific crack for Assassin's Creed 4: Black Flag that allows you to play Kenway's Fleet minigame offline and without any restrictions. It was created by an anonymous hacker who claimed to have found a way to emulate Ubisoft servers on his own computer. He then shared his crack online for other players who wanted to enjoy Kenway's Fleet without an online connection or a Uplay code. The crack has been downloaded by thousands of players who have given positive feedback on its performance and functionality.
-
How does it work?
-
The crack works by modifying some files in the game folder that are responsible for connecting to Ubisoft servers. It then redirects these files to a local server that mimics Ubisoft servers but does not require any authentication or verification. This way, the game thinks that it is connected online but actually it is not. The crack also removes some checks and limits that normally apply to Kenway's Fleet minigame such as ship capacity, mission duration, enemy difficulty, etc. This way, you can Q: How do I play Ac4 Kenway Fleet Crack?
-
A: To play Ac4 Kenway Fleet Crack, you need to follow these steps:
-
-
Access Kenway's Fleet from any tavern or captain's cabin in the game. You don't need an online connection or a Uplay code.
-
Select a ship from your fleet and assign it to a mission. You can choose from different types of missions such as trade, combat, exploration, or treasure hunting.
-
Wait for your ship to complete the mission and collect your rewards. You can speed up the process by using gems that you can earn or buy with real money.
-
Upgrade your ships with better weapons, armor, sails, or crew members. You can also capture new ships from enemy vessels during naval battles.
-
Compete with other players online by comparing your fleet's performance or attacking their ships. You can also join a guild and cooperate with other players.
-
-
-
Q: What are the advantages and disadvantages of Ac4 Kenway Fleet Crack?
-
A: Ac4 Kenway Fleet Crack has some advantages and disadvantages that you should consider before using it. Some of them are:
-
-
Advantages
Disadvantages
-
You can play Kenway's Fleet offline and without any restrictions.
You can expose your device or system to malware or viruses that can harm your data or hardware.
-
You can access more ships, more missions, more rewards, and more fun.
You can cause compatibility or stability issues that can affect your game performance or functionality.
-
You can save money by not buying a Uplay code or a subscription.
You can violate intellectual property rights or laws that can result in legal consequences or penalties.
-
You can customize your fleet by renaming your ships, changing their colors, or adding flags.
You can damage your reputation or credibility as a game user or consumer.
-
You can compete with other players online by comparing your fleet's performance or attacking their ships.
You can deprive game developers or publishers of their deserved income or recognition.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Acrobat Pro DC 2018.011.20058 Activation [crack Extra QualitysMind] Download.md b/spaces/raedeXanto/academic-chatgpt-beta/Adobe Acrobat Pro DC 2018.011.20058 Activation [crack Extra QualitysMind] Download.md
deleted file mode 100644
index da561dd175c321b948f5353af8b481d178b8eee4..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Acrobat Pro DC 2018.011.20058 Activation [crack Extra QualitysMind] Download.md
+++ /dev/null
@@ -1,143 +0,0 @@
-
-
Adobe Acrobat Pro DC 2018.011.20058 Activation [CracksMind] download
-
If you are looking for a powerful and versatile PDF editor, you might want to check out Adobe Acrobat Pro DC 2018.011.20058. This is the latest version of the popular software that allows you to create, edit, convert, sign, and share PDF documents with ease. But how can you get this software for free? And is it safe to use a crack from [CracksMind] to activate it? In this article, we will answer these questions and more.
-
What is Adobe Acrobat Pro DC and why do you need it?
-
Adobe Acrobat Pro DC is the leading PDF software that lets you do more with your PDF files than ever before. With Acrobat Pro DC, you can:
-
-
Create professional-looking PDFs from any application or device.
-
Edit text, images, links, and other elements in your PDFs.
-
Convert PDFs to other formats such as Word, Excel, PowerPoint, HTML, and more.
-
Sign and fill out forms electronically with digital signatures and e-signatures.
-
Share and collaborate on PDFs with others using cloud services or email.
-
Protect your PDFs with passwords, encryption, redaction, and digital rights management.
-
Optimize your PDFs for web, print, or mobile devices.
-
Access your PDFs from anywhere using the Acrobat Reader app or a web browser.
-
-
As you can see, Adobe Acrobat Pro DC is a must-have tool for anyone who works with PDFs on a regular basis. Whether you are a student, a teacher, a business owner, a lawyer, or a creative professional, you can benefit from the features and functions of Acrobat Pro DC.
-
Features and benefits of Adobe Acrobat Pro DC
-
Adobe Acrobat Pro DC 2018.011.20058 is the latest version of the software that was released in August 2018. It comes with some new and improved features that make it even more powerful and user-friendly than before. Some of these features are:
-
-
A new and modern user interface that adapts to your device and preferences.
-
A new Home view that gives you quick access to your recent files, shared files, tools, and cloud services.
-
A new Document Cloud service that lets you store, sync, and access your PDFs online.
-
A new Compare Files tool that lets you compare two versions of a PDF and highlight the differences.
-
A new Scan tool that lets you scan paper documents to PDF using your smartphone camera.
-
A new Edit PDF tool that lets you edit text and images in your PDFs with more accuracy and ease.
-
A new Export PDF tool that lets you export your PDFs to other formats with more options and quality.
-
A new Fill & Sign tool that lets you fill out and sign forms electronically with your finger or stylus.
-
A new Send for Signature tool that lets you request signatures from others using Adobe Sign.
-
A new Certify with Certificate tool that lets you certify your PDFs with digital certificates.
-
A new Protect tool that lets you protect your PDFs with passwords, encryption, redaction, and digital rights management.
A new Optimize PDF tool that lets you optimize your PDFs for web, print, or mobile devices.
-
A new Accessibility tool that lets you check and fix the accessibility of your PDFs for people with disabilities.
-
A new Print Production tool that lets you prepare your PDFs for high-quality printing.
-
-
With these features and more, Adobe Acrobat Pro DC 2018.011.20058 offers you a complete solution for working with PDFs in any situation and environment.
-
-
How to download and install Adobe Acrobat Pro DC 2018.011.20058
-
If you want to try out Adobe Acrobat Pro DC 2018.011.20058, you have several options to download and install it on your computer. Here are some of the most common methods:
-
Download from official website
-
The easiest and safest way to download Adobe Acrobat Pro DC 2018.011.20058 is to get it from the official website of Adobe. Here are the steps to follow:
Sign in with your Adobe ID or create a new one if you don't have one.
-
Select your operating system (Windows or Mac) and language, and click on Download Now.
-
Wait for the download to finish and run the installer file.
-
Follow the on-screen instructions to complete the installation process.
-
Launch Adobe Acrobat Pro DC and sign in with your Adobe ID.
-
Enjoy your free trial for 7 days.
-
-
Note that after the trial period ends, you will need to purchase a subscription plan to continue using Adobe Acrobat Pro DC. You can choose from different plans depending on your needs and budget. You can also cancel your subscription at any time.
-
Download from torrent sites
-
Another way to download Adobe Acrobat Pro DC 2018.011.20058 is to use torrent sites that offer cracked versions of the software. However, this method is not recommended for several reasons:
-
-
It is illegal and violates the terms of use of Adobe.
-
It is risky and may expose your computer to viruses, malware, or spyware.
-
It is unreliable and may not work properly or at all.
-
It is unethical and harms the developers of Adobe who work hard to create and improve the software.
-
-
If you still want to use this method, you will need a torrent client such as uTorrent or BitTorrent, and a torrent file that contains the cracked version of Adobe Acrobat Pro DC 2018.011.20058. You can search for such files on various torrent sites such as The Pirate Bay, Kickass Torrents, or 1337x. However, be careful of fake or malicious files that may harm your computer or steal your personal information. Here are the steps to follow:
-
-
Download and install a torrent client on your computer.
-
Go to a torrent site and search for Adobe Acrobat Pro DC 2018.011.20058 crack or similar keywords.
-
Select a torrent file that has a high number of seeders and leechers, and a positive feedback from other users.
-
Download the torrent file and open it with your torrent client.
-
Wait for the download to finish and extract the files from the archive.
-
Run the setup file and follow the instructions to install Adobe Acrobat Pro DC 2018.011.20058.
-
Copy the crack file from the [CracksMind] folder and paste it into the installation directory of Adobe Acrobat Pro DC 2018.011.20058.
-
Launch Adobe Acrobat Pro DC 2018.011.20058 and enjoy using it without activation.
-
-
Download from direct links
-
A third way to download Adobe Acrobat Pro DC 2018.011.20058 is to use direct links that provide the software without any installation or activation required. This method is also not recommended for the same reasons as the previous one, but it is simpler and faster than using torrent sites. You will need a web browser such as Chrome or Firefox, and a link that contains the software in a compressed format such as ZIP or RAR. You can find such links on various websites or blogs that offer free software downloads, but be careful of fake or malicious links that may harm your computer or steal your personal information. Here are the steps to follow:
-
-
Go to a website or blog that offers direct links for Adobe Acrobat Pro DC 2018.011.20058 and copy the link.
-
Paste the link into your web browser and wait for the download to start.
-
Save the file to your computer and extract the files from the archive.
-
Open the folder and run the Adobe Acrobat Pro DC 2018.011.20058.exe file.
-
Enjoy using Adobe Acrobat Pro DC 2018.011.20058 without installation or activation.
-
-
How to activate Adobe Acrobat Pro DC 2018.011.20058 with [CracksMind] crack
-
If you have downloaded Adobe Acrobat Pro DC 2018.011.20058 from the official website or any other source that requires activation, you can use a crack from [CracksMind] to bypass the activation process and use the software for free. But what is [CracksMind] and how does it work?
-
What is [CracksMind] and how does it work?
-
[CracksMind] is a group of hackers and crackers who create and distribute cracks, patches, keygens, and serial keys for various software products. A crack is a modified version of a software file that removes or bypasses the protection mechanisms that prevent unauthorized use of the software. A patch is a small program that modifies or replaces a software file to fix bugs or add features. A keygen is a program that generates valid serial keys or activation codes for a software product. A serial key or activation code is a unique combination of numbers and letters that verifies the authenticity of a software product.
-
[CracksMind] works by analyzing the software files and finding the vulnerabilities or loopholes that allow them to modify or replace them with their own versions. They then test their cracks, patches, keygens, or serial keys to ensure that they work properly and do not cause any harm to the users' computers. They then upload their files to various websites or torrent sites where they can be downloaded by anyone who wants to use them.
-
Steps to activate Adobe Acrobat Pro DC 2018.011.20058 with [CracksMind] crack
-
If you want to activate Adobe Acrobat Pro DC 2018.011.20058 with [CracksMind] crack, you will need to download the crack file from a reliable source and follow these steps:
-
-
Make sure that Adobe Acrobat Pro DC 2018.011.20058 is installed on your computer and close it if it is running.
-
Disable your antivirus software or firewall temporarily as they may interfere with the crack file.
-
Download the [CracksMind] crack file for Adobe Acrobat Pro DC 2018.011.20058 from a website or torrent site that you trust.
-
Extract the files from the archive and open the folder.
-
Copy the crack file (usually named as amtlib.dll) and paste it into the installation directory of Adobe Acrobat Pro DC 2018.011.20058 (usually located at C:\Program Files (x86)\Adobe\Acrobat DC\Acrobat).
-
Replace the original file when prompted.
-
Launch Adobe Acrobat Pro DC 2018.011.20058 and enjoy using it without activation.
-
-
Pros and cons of using Adobe Acrobat Pro DC 2018.011.20058 with [CracksMind] crack
-
Using Adobe Acrobat Pro DC 2018.011.20058 with [CracksMind] crack may seem like a good idea if you want to save money and use the software for free, but it also has some drawbacks that you should be aware of before deciding to use it. Here are some of the pros and cons of using Adobe Acrobat Pro DC 2018.011.20058 with [CracksMind] crack:
-
Pros
-
-
You can use all the features and functions of Adobe Acrobat Pro DC 2018.011.20058 without paying anything.
-
You can use the software for as long as you want without worrying about expiration dates or subscription plans.
-
You can update the software to the latest version without losing the activation status.
-
-
Cons
-
-
You are violating the terms of use of Adobe and may face legal consequences if you are caught using a cracked version of their software.
-
You are risking your computer's security and performance by downloading and installing files from unknown sources that may contain viruses, malware, or spyware.
-
You are depriving the developers of Adobe of their rightful income and discouraging them from creating and improving their software products.
You are missing out on the official support and updates from Adobe that may fix bugs, improve performance, or add new features to the software.
-
You are compromising your ethical standards and integrity by using a cracked version of a software product that someone else has created and owns.
-
-
As you can see, using Adobe Acrobat Pro DC 2018.011.20058 with [CracksMind] crack has more cons than pros, and it is not worth the risk or the hassle. If you really want to use Adobe Acrobat Pro DC 2018.011.20058, you should consider buying a legitimate copy from the official website of Adobe or from an authorized reseller. This way, you can enjoy the software without any worries or regrets.
-
Conclusion
-
Adobe Acrobat Pro DC 2018.011.20058 is a powerful and versatile PDF software that lets you create, edit, convert, sign, and share PDF documents with ease. It comes with many new and improved features that make it even more user-friendly and functional than before. However, if you want to use this software for free, you may be tempted to download and install a cracked version of it from [CracksMind] or other sources. This is not a good idea, as it may cause you legal, security, reliability, ethical, and other problems. Therefore, we recommend that you avoid using Adobe Acrobat Pro DC 2018.011.20058 with [CracksMind] crack and instead purchase a genuine copy of the software from the official website of Adobe or from an authorized reseller. This way, you can support the developers of Adobe and use their software with peace of mind.
-
FAQs
-
Here are some of the frequently asked questions about Adobe Acrobat Pro DC 2018.011.20058 and [CracksMind] crack:
-
-
Q: Is Adobe Acrobat Pro DC 2018.011.20058 compatible with Windows 10?
-
A: Yes, Adobe Acrobat Pro DC 2018.011.20058 is compatible with Windows 10 as well as Windows 8.1 and Windows 7.
-
Q: How can I update Adobe Acrobat Pro DC 2018.011.20058 to the latest version?
-
A: If you have a legitimate copy of Adobe Acrobat Pro DC 2018.011.20058, you can update it to the latest version by going to Help > Check for Updates in the software menu or by downloading the update from https://helpx.adobe.com/acrobat/release-note/release-notes-acrobat-reader.html. If you have a cracked version of Adobe Acrobat Pro DC 2018.011.20058, you may not be able to update it or you may lose the activation status after updating it.
-
Q: How can I uninstall Adobe Acrobat Pro DC 2018.011.20058 from my computer?
-
A: If you want to uninstall Adobe Acrobat Pro DC 2018.011.20058 from your computer, you can follow these steps:
-
-
Go to Control Panel > Programs > Programs and Features.
-
Select Adobe Acrobat Pro DC from the list of programs and click on Uninstall.
-
Follow the on-screen instructions to complete the uninstallation process.
-
Delete any remaining files or folders related to Adobe Acrobat Pro DC from your computer.
-
-
Q: What are some alternatives to Adobe Acrobat Pro DC 2018.011.20058?
-
A: If you are looking for some alternatives to Adobe Acrobat Pro DC 2018.011.20058, you can try these software products:
-
-
Nitro Pro: A PDF software that lets you create, edit, convert, sign, and share PDF documents with similar features as Adobe Acrobat Pro DC.
-
PDFelement: A PDF software that lets you create, edit, convert, sign, and share PDF documents with a simple and intuitive user interface.
-
Foxit PhantomPDF: A PDF software that lets you create, edit, convert, sign, and share PDF documents with advanced security and collaboration features.
-
-
Q: Where can I get more information about Adobe Acrobat Pro DC 2018.011.20058 and [CracksMind] crack?
-
A: If you want to get more information about Adobe Acrobat Pro DC 2018.011.20058 and [CracksMind] crack, you can visit these websites:
https://cracksmind.com/: The official website of [CracksMind] where you can get more details about their cracks, patches, keygens, and serial keys.
-
https://www.reddit.com/r/Piracy/: A subreddit where you can get more information and opinions about piracy, cracking, and related topics.
-
-
I hope this article has helped you understand more about Adobe Acrobat Pro DC 2018.011.20058 and [CracksMind] crack. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
b2dd77e56b
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Free Download Xforce Keygen AutoCAD 2019 64bit and Enjoy Unlimited Design Possibilities.md b/spaces/raedeXanto/academic-chatgpt-beta/Free Download Xforce Keygen AutoCAD 2019 64bit and Enjoy Unlimited Design Possibilities.md
deleted file mode 100644
index 825f0d874a6f2f914ba835bfbe80cbdfe4c94eed..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Free Download Xforce Keygen AutoCAD 2019 64bit and Enjoy Unlimited Design Possibilities.md
+++ /dev/null
@@ -1,137 +0,0 @@
-
-
Xforce Keygen AutoCAD 2019 64bit Free Download: A Complete Guide
-
If you are looking for a way to activate AutoCAD 2019 without paying a dime, you might have heard of Xforce Keygen. Xforce Keygen is a software that can generate activation codes for various Autodesk products, including AutoCAD 2019. But what is Xforce Keygen exactly and how can you use it to get AutoCAD 2019 for free? In this article, we will answer these questions and provide you with a step-by-step guide on how to download and install Xforce Keygen for AutoCAD 2019 64bit. We will also share some tips and tricks on how to use Xforce Keygen effectively and avoid common errors. So, let's get started!
-
What is Xforce Keygen and Why Do You Need It?
-
Xforce Keygen is a software that can generate activation codes for various Autodesk products, such as AutoCAD, Maya, Revit, Inventor, etc. These activation codes can be used to bypass the license verification process and unlock the full features of the software. In other words, Xforce Keygen can help you get any Autodesk product for free.
Xforce Keygen: A Powerful Tool for Activating AutoCAD 2019
-
AutoCAD is one of the most popular and widely used software for designing and drafting in various fields, such as architecture, engineering, construction, manufacturing, etc. AutoCAD 2019 is the latest version of this software that offers many new features and improvements, such as:
-
-
Improved performance and stability
-
New drawing tools and commands
-
Enhanced user interface and collaboration
-
Support for cloud services and mobile devices
-
Integration with other Autodesk products
-
-
However, AutoCAD 2019 is not cheap. The official price of a single-user license is $1,610 per year or $210 per month. That's why many users look for alternative ways to get AutoCAD 2019 without spending a fortune. And that's where Xforce Keygen comes in handy.
-
Benefits of Using Xforce Keygen for AutoCAD 2019
-
By using Xforce Keygen for AutoCAD 2019, you can enjoy the following benefits:
-
-
You can save a lot of money by getting AutoCAD 2019 for free
-
You can access all the features and functions of AutoCAD 2019 without any limitations
-
You can use AutoCAD 2019 offline without any internet connection
-
You can update AutoCAD 2019 without any problems
-
You can use AutoCAD 2019 on any computer or device that supports it
-
-
Of course, using Xforce Keygen for AutoCAD 2019 also has some risks and drawbacks, such as:
-
-
You might violate the terms and conditions of Autodesk and face legal consequences
-
You might expose your computer or device to viruses or malware
-
You might encounter some errors or glitches while using AutoCAD 2019
-
You might not get any technical support or customer service from Autodesk
-
You might not be able to use some online features or services of Autodesk
-
-
Therefore, you should use Xforce Keygen for AutoCAD 2019 at your own risk and discretion.
-
How to Download and Install Xforce Keygen for AutoCAD 2019 64bit
-
If you have decided to use Xforce Keygen for AutoCAD 2019, you need to follow these steps to download and install it:
-
Step 1: Download Xforce Keygen from a Reliable Source
-
The first thing you need to do is to find a reliable source where you can download Xforce Keygen for AutoCAD 2019. There are many websites that claim to offer Xforce Keygen for free, but not all of them are trustworthy. Some of them might contain fake or corrupted files that can harm your computer or device. Therefore, you should be careful when choosing where to download Xforce Keygen from.
One of the most reputable sources where you can download Xforce Keygen for AutoCAD 2019 is X-Force Cracks. This website has been providing working cracks and keygens for various Autodesk products since 2006. You can trust this website because it has positive reviews from many users who have successfully activated their Autodesk products using its cracks and keygens.
-
To download Xforce Keygen from X-Force Cracks, you need to follow these steps:
Scroll down until you see the download links for different versions of Autodesk products
-
Click on the link that says "AutoCad x64 (2020)" (Note: This link works for both AutoCAD 2020 and AutoCAD 2019)
-
You will be redirected to another page where you need to complete a captcha verification to prove that you are not a robot
-
After completing the captcha verification, click on the button that says "Download"
-
You will be redirected to another page where you need to wait for a few seconds until the download link appears
-
Click on the button that says "Download Now"
-
A pop-up window will appear asking you to choose where to save the file. Choose a location where you can easily find it later (such as your desktop) and click on "Save"
-
The file will start downloading automatically. The file name is "xf-adsk2020_x64.exe" and its size is about 1 MB.
-
-
Step 2: Disable Your Antivirus and Internet Connection
-
The next thing you need to do is to disable your antivirus software and your internet connection before running Xforce Keygen. This is because your antivirus software might detect Xforce Keygen as a threat and block it from running. And your internet connection might interfere with the activation process or alert Autodesk about your illegal activity.
-
To disable your antivirus software, you need to follow these steps:
-
-
Right-click on the icon of your antivirus software in the system tray (usually located at the bottom-right corner of your screen)
-
Select "Disable" or "Turn off" or something similar (depending on your antivirus software)
-
A pop-up window will appear asking you how long you want to disable your antivirus software. Choose "Until I restart my computer" or something similar (depending on your antivirus software)
-
Click on "OK" or "Yes" or something similar (depending on your antivirus software)
-
-
To disable your internet connection, you need to follow these steps:
-
-
Right-click on the icon of your network connection in the system tray (usually located at the bottom-right corner of your screen)
-
Select "Open Network & Internet settings" or something similar (depending on your operating system)
-
A window will appear showing your network status and settings. Click on "Change adapter options" or something similar (depending on your operating system)
-
-
-
Error
-
Solution
-
-
-
Xforce Keygen does not open or run
-
Make sure you have downloaded Xforce Keygen from a reliable source and saved it in a safe location. Make sure you have disabled your antivirus software and internet connection before running Xforce Keygen. Make sure you have run Xforce Keygen as administrator.
-
-
-
Xforce Keygen does not generate or copy the activation code
-
Make sure you have selected the correct Autodesk product from the list in Xforce Keygen. Make sure you have clicked on the "Patch" button before clicking on the "Generate" button. Make sure you have pasted the request code correctly in Xforce Keygen. Make sure you have copied the activation code correctly to your clipboard.
-
-
-
AutoCAD 2019 does not accept or verify the activation code
-
Make sure you have entered any random numbers for the serial number and product key in AutoCAD 2019. Make sure you have selected "I have an activation code from Autodesk" as your activation method in AutoCAD 2019. Make sure you have pasted the activation code correctly in AutoCAD 2019.
-
-
-
AutoCAD 2019 shows an error message or crashes after activation
-
Make sure you have updated your drivers and system requirements for AutoCAD 2019. Make sure you have installed AutoCAD 2019 correctly and completely. Make sure you have not updated AutoCAD 2019 unless you are sure that it will not affect your activation status.
-
-
-
Conclusion
-
In this article, we have shown you how to download and install Xforce Keygen for AutoCAD 2019 64bit and how to use it to activate AutoCAD 2019 for free. We have also shared some tips and tricks on how to use Xforce Keygen effectively and avoid common errors. We hope that this article has been helpful and informative for you.
-
However, we would like to remind you that using Xforce Keygen for AutoCAD 2019 is illegal and unethical. You might face legal consequences or security risks by doing so. Therefore, we do not recommend or endorse using Xforce Keygen for AutoCAD 2019 or any other Autodesk product. If you want to use AutoCAD 2019 legally and safely, you should buy a license or subscription from Autodesk or its authorized dealers.
-
Thank you for reading this article. If you have any questions or feedback, please feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions and answers about Xforce Keygen for AutoCAD 2019:
-
Q: Is Xforce Keygen safe to use?
-
A: Xforce Keygen is not safe to use because it is a crack software that can violate the terms and conditions of Autodesk and expose your computer or device to viruses or malware. You should always use a reliable antivirus software and internet connection when using Xforce Keygen.
-
Q: Is Xforce Keygen legal to use?
-
A: Xforce Keygen is not legal to use because it is a software that can generate activation codes for various Autodesk products without paying for them. This can infringe the intellectual property rights of Autodesk and cause legal problems for you. You should always respect the laws and regulations of your country and region when using Xforce Keygen.
-
Q: Is Xforce Keygen compatible with other versions of AutoCAD?
-
A: Xforce Keygen is compatible with other versions of AutoCAD, such as AutoCAD 2020, AutoCAD 2018, AutoCAD 2017, etc. However, you need to download and install the specific version of Xforce Keygen that matches your version of AutoCAD. You can find different versions of Xforce Keygen on X-Force Cracks.
-
Q: Can I use Xforce Keygen for other Autodesk products?
-
A: Yes, you can use Xforce Keygen for other Autodesk products, such as Maya, Revit, Inventor, etc. However, you need to download and install the specific version of Xforce Keygen that matches your product of Autodesk. You can find different versions of Xforce Keygen on X-Force Cracks.
-
Q: Can I update AutoCAD 2019 after activating it with Xforce Keygen?
-
A: You can update AutoCAD 2019 after activating it with Xforce Keygen, but only if you are sure that the update will not affect your activation status or require a new activation code. Some updates might disable your existing activation code or require a new one. In that case, you need to repeat the activation process with Xforce Keygen again.
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Gaussian 09 Torrent 1357 Learn How to Use the Software for Various Scientific and Engineering Problems.md b/spaces/raedeXanto/academic-chatgpt-beta/Gaussian 09 Torrent 1357 Learn How to Use the Software for Various Scientific and Engineering Problems.md
deleted file mode 100644
index 7bee4a34d4fe022ed37720fe4609be6353b04b90..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Gaussian 09 Torrent 1357 Learn How to Use the Software for Various Scientific and Engineering Problems.md
+++ /dev/null
@@ -1,148 +0,0 @@
-
-
Gaussian 09 Torrent 1357: What You Need to Know
-
If you are interested in computational chemistry, you may have heard of Gaussian, a series of electronic structure programs that can predict the energies, molecular structures, vibrational frequencies, and other properties of molecular systems. Gaussian is widely used by chemists, chemical engineers, biochemists, physicists, and others for research in various fields of chemistry.
One of the latest versions of Gaussian is Gaussian 09, which was released in 2013. Gaussian 09 introduces several new features and improvements over previous versions, such as enhanced performance, parallelization, accuracy, functionality, and compatibility. However, Gaussian is not a free software, and it requires a license to use it legally.
-
That's why some people may resort to downloading Gaussian 09 torrent, a file that contains the data of Gaussian 09 that can be shared and downloaded through a peer-to-peer network. One of the most popular sources of Gaussian 09 torrent is 1357, which is a code name for The Pirate Bay, a notorious website that hosts millions of torrents for various types of content.
-
In this article, we will tell you everything you need to know about Gaussian 09 torrent 1357, including how to download it, how to install it, how to run it, what are its features and benefits, what are its limitations and drawbacks, and what are some alternatives and competitors. By the end of this article, you will have a better understanding of whether Gaussian 09 torrent 1357 is worth your time and effort or not.
-
-
How to Download Gaussian 09 Torrent 1357
-
The Pirate Bay as a source of Gaussian 09 Torrent 1357
-
The Pirate Bay (TPB) is one of the oldest and most popular torrent websites in the world. It was founded in 2003 by a group of Swedish activists who wanted to provide a platform for free sharing of information and culture. TPB hosts millions of torrents for various types of content, such as movies, music, games, software, books, etc.
-
One of the torrents that you can find on TPB is Gaussian 09, which was uploaded by a user named MikeRalph in 2014. The file size is about 1.11 GB and it contains three files: gau-916-Linux_x86_64.tar.gz, gau-916-Linux_x86_64.tgz.md5, and README.txt. According to the description, this is the latest version of Gaussian for UNIX systems.
-
To access TPB, you need to use a web browser that supports magnet links, which are a type of URI (Uniform Resource Identifier) that can identify a torrent file without requiring a central server. Some examples of web browsers that support magnet links are Chrome, Firefox, Opera, and Safari.
-
How to use a magnet link to download Gaussian 09 Torrent 1357
-
A magnet link is a simple way to download a torrent file without having to download the actual .torrent file first. A magnet link contains information such as the name, size, hash value, and trackers of the torrent file. When you click on a magnet link, your web browser will launch your default torrent client, which is a software that can manage your torrent downloads.
-
Some examples of torrent clients are uTorrent, BitTorrent, qBittorrent, and Transmission. You need to have a torrent client installed on your computer before you can use magnet links. Once your torrent client is launched, it will start downloading the data from other peers who have the same torrent file.
-
To download Gaussian 09 Torrent 1357 from TPB, you need to follow these steps:
-
-
Go to https://www1.thepiratebay3.to/torrent/11102069/Gaussian_09 using your web browser.
-
Click on the magnet icon next to "Get this torrent". A pop-up window will appear asking you to confirm your action.
-
Click on "Open uTorrent" or whatever your default torrent client is. Your torrent client will open and start downloading the data.
-
Wait until the download is complete. You can check the progress, speed, and status of your download on your torrent client.
-
Once the download is complete, you will find the files in your designated folder.
-
-
Risks and precautions of downloading Gaussian 09 Torrent 1357
-
While downloading Gaussian 09 Torrent 1357 may seem like an easy and convenient way to get access to one of the most advanced computational chemistry software, there are also some risks and precautions that you need to be aware of before you proceed.
-
-
Risk: Downloading torrents from TPB or any other unverified source may expose you to malware, viruses, spyware, or other harmful programs that can damage your computer or steal your personal information.
-
Precaution: Before you download any torrent file, you should scan it with an antivirus or anti-malware software to make sure it is safe and clean. You should also read the comments and reviews from other users who have downloaded the same file to see if they encountered any problems or issues.
-
Risk: Downloading torrents from TPB or any other unverified source may violate the intellectual property rights of the original creators or owners of the content. This may result in legal actions or penalties against you for piracy or infringement.
-
Precaution: Before you download any torrent file, you should check if it is legal or authorized in your country or region. You should also respect the terms and conditions of the original creators or owners of the content and not distribute or share it without their permission.
-
Risk: Downloading torrents from TPB or any other unverified source may expose you to cyberattacks or surveillance from hackers, governments, or other entities who may monitor your online activity or intercept your data.
-
Precaution: Before you download any torrent file, you should use a VPN (Virtual Private Network) service that can encrypt your internet connection and hide your IP address from prying eyes. You should also use a proxy server or Tor browser that can anonymize your web traffic and bypass any censorship or geo-restrictions.
-
-
How to Install and Run Gaussian 09 Torrent 1357
-
System requirements and compatibility issues for Gaussian 09 Torrent 1357
-
Gaussian 09 Torrent 1357 is designed for UNIX systems, which are a family of operating systems that are based on Linux or BSD (Berkeley Software Distribution). UNIX systems are widely used for scientific computing, networking, and server applications. Some examples of UNIX systems are Ubuntu, Fedora, Debian, CentOS, FreeBSD, and macOS.
-
Electrostatic potential: This calculation computes the molecular electrostatic potential of a given molecular system at a fixed geometry. It can be used to visualize the regions of positive and negative potential around a molecule or complex.
-
Population analysis: This calculation computes the atomic charges and spin densities for a given molecular system at a fixed geometry using various schemes such as Mulliken, Löwdin, Hirshfeld, or AIM. It can be used to analyze the charge distribution and polarization of a molecule or complex.
-
Thermochemistry analysis: This calculation computes the thermodynamic properties of a given molecular system at a fixed geometry and temperature, such as enthalpy, entropy, free energy, heat capacity, and thermal corrections. It can be used to evaluate the stability and feasibility of a molecule or reaction.
-
Transition state optimization: This calculation finds the saddle point geometry of a given molecular system that corresponds to the highest energy point along a reaction path. It can be used to determine the activation energy and mechanism of a reaction.
-
Reaction path analysis: This calculation computes the energy profile of a given molecular system along a reaction coordinate that connects the reactants and products. It can be used to study the kinetics and dynamics of a reaction.
-
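The thermochemistry analysis mentioned above is often used to compare conformers or reaction species by their Gibbs free energies. As a small illustration, the sketch below converts the relative free energies of two conformers into Boltzmann populations at 298.15 K; it is a generic Python sketch, and the conformer names and energy values are made-up placeholders rather than output from any particular job.

```python
import math

# Hypothetical relative Gibbs free energies in kcal/mol, e.g. taken from the
# "Sum of electronic and thermal Free Energies" lines of two frequency jobs.
# The values below are placeholders for illustration only.
delta_g = {"conformer_A": 0.0, "conformer_B": 0.8}

R = 0.0019872041  # gas constant in kcal/(mol*K)
T = 298.15        # temperature in K

# Boltzmann weight of each conformer relative to the most stable one
weights = {name: math.exp(-g / (R * T)) for name, g in delta_g.items()}
total = sum(weights.values())

for name, w in weights.items():
    print(f"{name}: {100 * w / total:.1f}% population")
```

With these placeholder numbers the lower-energy conformer ends up at roughly 79% of the equilibrium population, which is the kind of quick estimate such free-energy differences are usually turned into.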
-
Examples of molecular properties and phenomena that can be predicted by Gaussian 09
-
Gaussian 09 can help you predict various molecular properties and phenomena that can be useful for your research or application. Some examples are:
-
-
Molecular structure: You can use Gaussian 09 to optimize the geometry of any molecule or complex and obtain its bond lengths, bond angles, dihedral angles, and symmetry. You can also use Gaussian 09 to perform conformational analysis and find the most stable or lowest energy conformer of a molecule.
-
Vibrational spectra: You can use Gaussian 09 to compute the infrared (IR) and Raman spectra of any molecule or complex and compare them with experimental data. You can also use Gaussian 09 to perform normal mode analysis and identify the characteristic vibrations of functional groups or bonds.
-
Electronic spectra: You can use Gaussian 09 to compute the ultraviolet-visible (UV-Vis) and fluorescence spectra of any molecule or complex and compare them with experimental data. You can also use Gaussian 09 to perform excited state calculations and identify the electronic transitions and configurations involved.
-
Nuclear magnetic resonance (NMR) spectra: You can use Gaussian 09 to compute the proton (^1H) and carbon (^13C) NMR spectra of any molecule or complex and compare them with experimental data. You can also use Gaussian 09 to perform spin-spin coupling calculations and identify the coupling constants and multiplicity patterns involved.
-
Molecular orbitals (MOs): You can use Gaussian 09 to visualize the shape and distribution of MOs for any molecule or complex and analyze their contribution to bonding or antibonding interactions. You can also use Gaussian 09 to perform frontier orbital analysis and identify the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) involved in chemical reactivity.
-
Molecular electrostatic potential (MEP): You can use Gaussian 09 to visualize the regions of positive and negative potential around any molecule or complex and analyze their interaction with charged species. You can also use Gaussian 09 to perform electrostatic potential fitting (ESPF) and obtain partial atomic charges that best reproduce the MEP.
-
Solvation effects: You can use Gaussian 09 to account for solvation effects on any molecule or reaction by using various models such as implicit solvation models (e.g. PCM, CPCM, SMD) or explicit solvation models (e.g. ONIOM, QM/MM). You can also use Gaussian 09 to compute solvation free energies and solubility parameters for any molecule or complex.
-
Reaction mechanisms: You can use Gaussian 09 to explore possible reaction mechanisms for any reaction by using various methods such as transition state theory (TST), intrinsic reaction coordinate (IRC), nudged elastic band (NEB), or growing string method (GSM). You can also use Gaussian 09 to compute activation energies, rate constants, and kinetic isotope effects for any reaction.
-
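To make the molecular structure and vibrational spectra items above more concrete, here is a minimal sketch of a Gaussian-style input file for a combined geometry optimization and frequency job on water, written out from Python. The route section, level of theory, resource directives, filename, and starting coordinates are illustrative assumptions, not recommendations taken from this article.

```python
# Minimal sketch: write a Gaussian-style input file for a geometry
# optimization followed by a harmonic frequency calculation on water.
# Route section, resources, and geometry are illustrative placeholders.
water_input = """%nprocshared=4
%mem=4GB
# opt freq b3lyp/6-31g(d)

Water: geometry optimization and harmonic frequencies

0 1
O    0.000000    0.000000    0.117300
H    0.000000    0.757200   -0.469200
H    0.000000   -0.757200   -0.469200

"""

with open("water_opt_freq.gjf", "w") as f:
    f.write(water_input)
```

A job of this kind yields the optimized bond lengths and angles together with the harmonic IR frequencies and intensities discussed above; the same pattern extends to other properties once the route section is adjusted (for example with the nmr or td keywords).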
-
Limitations and Drawbacks of Gaussian 09 Torrent 1357
-
Accuracy and reliability issues of Gaussian 09 Torrent 1357
-
Gaussian 09 Torrent 1357 is not perfect software, and it has some limitations and drawbacks that may affect its accuracy and reliability for certain types of calculations or systems. Some examples are:
-
-
Basis set incompleteness: Gaussian 09 Torrent 1357 uses finite basis sets to approximate the infinite number of atomic orbitals in a molecule. However, no finite basis set can fully represent all the atomic orbitals, and this leads to errors in the calculated energies, properties, and spectra. To reduce these errors, one needs to use larger and more complete basis sets, but this also increases the computational cost and time.
-
Density functional theory (DFT) approximations: Gaussian 09 Torrent 1357 uses DFT as one of its main methods for electronic structure calculations. However, DFT relies on approximations for the exchange-correlation functional, which is unknown for any given system. Different functionals may give different results for the same system, and none of them may be accurate enough for certain properties or phenomena. To improve the accuracy of DFT calculations, one needs to use hybrid functionals or post-DFT methods, but this also increases the computational cost and time.
-
Numerical errors: Gaussian 09 Torrent 1357 uses numerical methods to solve various equations and integrals in quantum mechanics. However, no numerical method is exact, and this leads to errors in the calculated values due to round-off, truncation, or convergence issues. To reduce these errors, one needs to use higher precision, finer grids, or stricter criteria, but this also increases the computational cost and time.
-
Bugs and glitches: Gaussian 09 Torrent 1357 is a complex software that consists of millions of lines of code written by different developers over decades. However, no software is bug-free, and this leads to errors in the execution or output of some calculations due to programming mistakes, logic flaws, or compatibility issues. To avoid these errors, one needs to update the software regularly, check the output carefully, and report any problems to the developers.
-
-
Legal and ethical implications of using Gaussian 09 Torrent 1357
-
Gaussian 09 Torrent 1357 is not legal software, and it has some legal and ethical implications that may affect your reputation or career as a researcher or user. Some examples are:
-
-
Piracy: Downloading Gaussian 09 Torrent 1357 from TPB or any other unverified source is an act of piracy that violates the intellectual property rights of Gaussian Inc., the original creator and owner of Gaussian software. Piracy is illegal in most countries and regions, and it may result in legal actions or penalties against you for infringement.
-
Fraud: Using Gaussian 09 Torrent 1357 for your research or application without disclosing its source or license is an act of fraud that deceives your peers, funders, publishers, or clients about the validity or quality of your work. Fraud is unethical in most academic or professional settings, and it may result in disciplinary actions or sanctions against you for misconduct.
-
Risk: Using Gaussian 09 Torrent 1357 without verifying its integrity or functionality is an act of risk that exposes you to potential harm or loss due to malware, viruses, spyware, or other harmful programs that may damage your computer or steal your personal information. Risk is irresponsible in most personal or organizational settings, and it may result in security breaches or data losses that may affect you or others negatively.
-
-
Alternatives and competitors of Gaussian 09 Torrent 1357
-
Gaussian 09 Torrent 1357 is not the only software of its kind, and it has some alternatives and competitors that may offer similar or better features or benefits for certain types of calculations or systems. Some examples are:
-
-
Gaussian 16: This is the latest version of Gaussian software that was released in 2016. It introduces several new features and improvements over Gaussian 09, such as enhanced performance, accuracy, functionality, compatibility, usability, documentation, support, etc. However, it also requires a license to use it legally, which may be expensive or unavailable for some users.
-
Gamess-US: This is another popular electronic structure program that has been developed by Mark Gordon's group at Iowa State University since 1981. It offers similar features to Gaussian software, such as DFT, CC, MM, and QMC methods, but it also has some unique features such as the fragment molecular orbital (FMO) method, which can reduce the computational cost and time for large molecular systems by dividing them into smaller fragments and treating the interfragment interactions perturbatively.
Orca: This is another popular electronic structure program that was developed by Frank Neese's group at Max Planck Institute for Chemical Energy Conversion since 1999. It offers similar features as Gaussian software, such as DFT, CC, MM, QMC methods, etc., but it also has some unique features such as relativistic effects, spin-orbit coupling, magnetic properties, and spectroscopic parameters.
-
Q-Chem: This is another popular electronic structure program that was developed by a team of scientists and engineers at Q-Chem Inc., Carnegie Mellon University, and Princeton University since 1993. It offers similar features as Gaussian software, such as DFT, CC, MM, QMC methods, etc., but it also has some unique features such as density matrix renormalization group (DMRG) method, density-fitted coupled cluster (DF-CC) method, and quantum mechanics/molecular mechanics (QM/MM) method.
-
-
Conclusion
-
In this article, we have told you everything you need to know about Gaussian 09 torrent 1357, including how to download it, how to install it, how to run it, what are its features and benefits, what are its limitations and drawbacks, and what are some alternatives and competitors. We hope that this article has been informative and helpful for you.
-
However, we also want to remind you that Gaussian 09 torrent 1357 is not a legal or ethical software, and it may expose you to various risks and consequences that may outweigh its advantages. Therefore, we strongly advise you to use Gaussian 09 torrent 1357 with caution and discretion, or better yet, to use a legal and authorized version of Gaussian software or any other electronic structure program that suits your needs and preferences.
-
FAQs
-
Here are some frequently asked questions about Gaussian 09 torrent 1357:
-
-
Q: What is the difference between Gaussian 09 torrent 1357 and Gaussian 16?
-
A: Gaussian 09 torrent 1357 is an illegal file that contains the data of Gaussian 09 software that was released in 2013. Gaussian 16 is the latest version of Gaussian software that was released in 2016. Gaussian 16 introduces several new features and improvements over Gaussian 09, such as enhanced performance, accuracy, functionality, compatibility, usability, documentation, support, etc.
-
Q: How can I get a license for Gaussian software?
-
A: You can get a license for Gaussian software by contacting Gaussian Inc., the original creator and owner of Gaussian software. You can visit their website at www.gaussian.com for more information about their products and services. You can also check if your institution or organization has a site license for Gaussian software that you can use.
-
Q: How can I learn more about Gaussian software?
-
A: You can learn more about Gaussian software by visiting their website at www.gaussian.com or by reading their manuals and publications. You can also find many tutorials and examples online that can help you learn how to use Gaussian software for various types of calculations and systems.
-
Q: How can I cite Gaussian software in my research?
-
A: You can cite Gaussian software in your research by using the following format: M. J. Frisch et al., "Gaussian XX", Wallingford CT: Gaussian Inc., YYYY (where XX is the version number and YYYY is the year of release). You can also include the specific citation for the methods or models that you used in your calculation from the output file or from the website www.gaussian.com/citation.
-
Q: How can I get help or support for Gaussian software?
-
A: You can get help or support for Gaussian software by contacting their technical support team at support@gaussian.com or by visiting their website at www.gaussian.com/support. You can also find many resources online that can help you solve your problems or answer your questions about Gaussian software.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/raskell/livebook/README.md b/spaces/raskell/livebook/README.md
deleted file mode 100644
index 7b12495942e63525fa13b91ef4673911e7b3cb26..0000000000000000000000000000000000000000
--- a/spaces/raskell/livebook/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Livebook
-emoji: 📓
-colorFrom: pink
-colorTo: purple
-sdk: docker
-fullWidth: true
-duplicated_from: livebook-dev/livebook
----
-
-You can install and run [Livebook](https://livebook.dev/) inside a Hugging Face Space. Here's [a tutorial](https://huggingface.co/docs/hub/spaces-sdks-docker-livebook) on how to do that.
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/A Bugs Life Pc Game Crack.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/A Bugs Life Pc Game Crack.md
deleted file mode 100644
index 0b964b3206348b7555b21a323f137063a14913cb..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/A Bugs Life Pc Game Crack.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Patches and workarounds made by me: unofficial bug fixes of the bugs I found ... Half-Life x.1.1.1e (Windows and Linux) hlfreeze/hl-headnut/ ... 1fdad05405
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (video Ngentot Sama Ibu Kandung 3gp).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (video Ngentot Sama Ibu Kandung 3gp).md
deleted file mode 100644
index 060970ee6b5082d114c6bd1a75861f6638cee70a..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (video Ngentot Sama Ibu Kandung 3gp).md
+++ /dev/null
@@ -1,6 +0,0 @@
-
HD Online Player (video ngentot sama ibu kandung 3gp)
-
-November 3, 2021 - Buruma PC Game Crack Downloads jamewand. Buruma Pc Game Crack Download: ✓✓✓ ian buruma play ... Buruma PC Game Crack Download ✓✓✓ ian buruma play ...
-Nov 3, 2019 ...
-Buruma PC Game Crack Download.
-Download Buruma Pc Game Crack.
-Download Database.
-Download Included ...
-Buruma Game Crack Download Download Buruma PC Game Crack ...
-Buruma PC Game Crack Download Download Buruma Pc Game Crack Download Buruma ...
-Download Buruma PC Game Crack Download.
-Download ... 8a78ff9644
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/CDRoller 11.50 Crack With Keygen A Must-Have Software for Data Recovery Professionals.md b/spaces/rorallitri/biomedical-language-models/logs/CDRoller 11.50 Crack With Keygen A Must-Have Software for Data Recovery Professionals.md
deleted file mode 100644
index e511198dea795003af853dbc67a8db28ba031c40..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/CDRoller 11.50 Crack With Keygen A Must-Have Software for Data Recovery Professionals.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
CDRoller 11.50 Crack With Keygen Free Download 2020
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/How Ida Beaussart Suffered in Silence Pleure En Silence Streaming 28.md b/spaces/rorallitri/biomedical-language-models/logs/How Ida Beaussart Suffered in Silence Pleure En Silence Streaming 28.md
deleted file mode 100644
index ff118bcf6f9e74f0ed2d1a70260331ccba62dc46..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/How Ida Beaussart Suffered in Silence Pleure En Silence Streaming 28.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
Synopsis - In 1989, in Salomé, 17-year-old Ida Beaussart kills her father, an active member of a neo-Nazi group. In 1992, she is acquitted. The film "Pleure en silence" takes us behind the scenes of this tragedy, during the eight days leading up to the desperate act of an abused child.
-
It was the Old Woman who explained to Nadia that, to know when it was likely to happen, you had to count the days. Nadia did not want to have a baby with the Old Man. She started to cry and began refusing to go with him.
-
I found this, which seems to work, but you have to create an account: t/checkout.html?wm=150&sub=7&filename=Pleure%20en%20silence Sorry I couldn't be more helpful
-
-
C.B.: I suffered from overwork that affected my vocal cords. I had to keep silent for three months, and it took me a year to get my voice back. But those problems are behind me. My new album has been delayed for other reasons. It will come out when I am happy with the result. In any case, it will be positive, like me!
-
C.B.: I'm having a blast! I share everything I have learned so far with the talents. I am overprotective of them, since I was in their shoes on Popstars. The battles are hard, I cry... But I am well placed to tell them that even if you don't win, you can still build a career.
-
"J'arrive avec beaucoup d'émotion, de tristesse, mais aussi avec un sourire car il nous a aussi donné le sourire. A la Fifa, nous rendrons hommage au "Roi" et nous demandons au monde entier de respecter une minute de silence", a déclaré le patron de l'instance à son arrivée.
-
In Italy, a minute of silence will be observed in the stadiums during the next round of Serie A matches on 4 January, the Italian football federation (FIGC) announced on Friday in a statement.
-
"La CBF pleure le décès d'Edson Arantes do Nascimento, Pelé, ce jeudi à l'hôpital Albert Einstein de São Paulo. Pelé était bien plus que le plus grand sportif de tous les temps. Notre roi du football a été le plus grand exposant d'un Brésil victorieux, gagnant qui n'a jamais eu peur face aux difficultés. Garçon noir, pauvre et né à Trois Coeurs ("Três Corações"), Pelé nous a montré qu'il y a toujours un nouveau chemin. Il a promis à son père une Coupe du monde et nous a présenté trois, en plus de marquer 95 buts en 113 matchs avec le maillot jaune. Le roi nous a donné un nouveau Brésil et nous ne pouvons que remercier son héritage."
-
When the verdict was announced, impeccable in his shirt and tie, the Chilean did not bat an eyelid. No reaction, no tears. Absolute impassiveness, in a cathedral-like silence. Zepeda did not spare a glance for his parents, seated to his right.
-
One question remains, nagging, eternal and futile: what happened in room 106 of the Théodore-Rousseau university residence on the night of 4 to 5 December 2016? Narumi Kurosaki, a death without images, but not without sound. The "screams of horror", "of terror", the dull thuds of a body being struck against the wall, then the "rattle" of agony that the student's neighbours heard... A sound, then a nightmarish silence.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Hyena Ek Chalak Haseena Movie Hindi Dubbed LINK Download 720p Movie.md b/spaces/rorallitri/biomedical-language-models/logs/Hyena Ek Chalak Haseena Movie Hindi Dubbed LINK Download 720p Movie.md
deleted file mode 100644
index 48062f2d171ae41b59b62a6b4ca97085a0c3c6e8..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Hyena Ek Chalak Haseena Movie Hindi Dubbed LINK Download 720p Movie.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Hyena Ek Chalak Haseena movie hindi dubbed download 720p movie
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Interfata Windows Xp In Limba Romana Download Music Sfaturi Si Trucuri Pentru O Experienta Optima.md b/spaces/rorallitri/biomedical-language-models/logs/Interfata Windows Xp In Limba Romana Download Music Sfaturi Si Trucuri Pentru O Experienta Optima.md
deleted file mode 100644
index a7b4a740910b8760b72c9a1a6ca1e4687649c08b..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Interfata Windows Xp In Limba Romana Download Music Sfaturi Si Trucuri Pentru O Experienta Optima.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Interfata Windows Xp In Limba Romana Download Music
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/rosenthal/chess/chessfenbot/webkit2png.py b/spaces/rosenthal/chess/chessfenbot/webkit2png.py
deleted file mode 100644
index 5e507a3eaf73331a0b8d572acac842e7085ff3e4..0000000000000000000000000000000000000000
--- a/spaces/rosenthal/chess/chessfenbot/webkit2png.py
+++ /dev/null
@@ -1,414 +0,0 @@
-#
-# webkit2png.py
-#
-# Creates screenshots of webpages using QtWebkit.
-#
-# Copyright (c) 2014 Roland Tapken
-#
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2
-# of the License, or (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
-#
-# Nice ideas "todo":
-# - Add QTcpSocket support to create a "screenshot daemon" that
-# can handle multiple requests at the same time.
-
-import time
-import os
-
-from PyQt4.QtCore import *
-from PyQt4.QtGui import *
-from PyQt4.QtWebKit import *
-from PyQt4.QtNetwork import *
-
-# Class for Website-Rendering. Uses QWebPage, which
-# requires a running QtGui to work.
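-#
-# Hypothetical usage sketch (assumes a running QApplication; the exact
-# resource object accepted by render()/render_to_file() is defined further
-# down in this module):
-#
-#   app = QApplication(sys.argv)
-#   renderer = WebkitRenderer(width=1024, height=768, format='png')
-#   with open('page.png', 'wb') as f:
-#       renderer.render_to_file(QUrl('http://example.com/'), f)
-#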
-class WebkitRenderer(QObject):
- """
- A class that helps to create 'screenshots' of webpages using
- Qt's QWebkit. Requires PyQt4 library.
-
- Use "render()" to get a 'QImage' object, render_to_bytes() to get the
- resulting image as 'str' object or render_to_file() to write the image
- directly into a 'file' resource.
- """
- def __init__(self,**kwargs):
- """
- Sets default values for the properties.
- """
-
- if not QApplication.instance():
- raise RuntimeError(self.__class__.__name__ + " requires a running QApplication instance")
- QObject.__init__(self)
-
- # Initialize default properties
- self.width = kwargs.get('width', 0)
- self.height = kwargs.get('height', 0)
- self.timeout = kwargs.get('timeout', 0)
- self.wait = kwargs.get('wait', 0)
- self.scaleToWidth = kwargs.get('scaleToWidth', 0)
- self.scaleToHeight = kwargs.get('scaleToHeight', 0)
- self.scaleRatio = kwargs.get('scaleRatio', 'keep')
- self.format = kwargs.get('format', 'png')
- self.logger = kwargs.get('logger', None)
-
- # Set this to true if you want to capture flash.
- # Not that your desktop must be large enough for
- # fitting the whole window.
- self.grabWholeWindow = kwargs.get('grabWholeWindow', False)
- self.renderTransparentBackground = kwargs.get('renderTransparentBackground', False)
- self.ignoreAlert = kwargs.get('ignoreAlert', True)
- self.ignoreConfirm = kwargs.get('ignoreConfirm', True)
- self.ignorePrompt = kwargs.get('ignorePrompt', True)
- self.interruptJavaScript = kwargs.get('interruptJavaScript', True)
- self.encodedUrl = kwargs.get('encodedUrl', False)
- self.cookies = kwargs.get('cookies', [])
-
- # Set some default options for QWebPage
- self.qWebSettings = {
- QWebSettings.JavascriptEnabled : False,
- QWebSettings.PluginsEnabled : False,
- QWebSettings.PrivateBrowsingEnabled : True,
- QWebSettings.JavascriptCanOpenWindows : False
- }
-
-
- def render(self, res):
- """
- Renders the given URL into a QImage object
- """
- # We have to use this helper object because
- # QApplication.processEvents may be called, causing
- # this method to get called while it has not returned yet.
- helper = _WebkitRendererHelper(self)
- helper._window.resize( self.width, self.height )
- image = helper.render(res)
-
- # Bind helper instance to this image to prevent the
- # object from being cleaned up (and with it the QWebPage, etc)
- # before the data has been used.
- image.helper = helper
-
- return image
-
- def render_to_file(self, res, file_object):
- """
- Renders the image into a File resource.
- Returns the size of the data that has been written.
- """
- format = self.format # this may not be constant due to processEvents()
- image = self.render(res)
- qBuffer = QBuffer()
- image.save(qBuffer, format)
- file_object.write(qBuffer.buffer().data())
- return qBuffer.size()
-
- def render_to_bytes(self, res):
- """Renders the image into an object of type 'str'"""
- format = self.format # this may not be constant due to processEvents()
- image = self.render(res)
- qBuffer = QBuffer()
- image.save(qBuffer, format)
- return qBuffer.buffer().data()
-
-## @brief The CookieJar class inherits QNetworkCookieJar to make a couple of functions public.
-class CookieJar(QNetworkCookieJar):
- def __init__(self, cookies, qtUrl, parent=None):
- QNetworkCookieJar.__init__(self, parent)
- for cookie in cookies:
- QNetworkCookieJar.setCookiesFromUrl(self, QNetworkCookie.parseCookies(QByteArray(cookie)), qtUrl)
-
- def allCookies(self):
- return QNetworkCookieJar.allCookies(self)
-
- def setAllCookies(self, cookieList):
- QNetworkCookieJar.setAllCookies(self, cookieList)
-
-class _WebkitRendererHelper(QObject):
- """
- This helper class is doing the real work. It is required to
- allow WebkitRenderer.render() to be called "asynchronously"
- (but always from Qt's GUI thread).
- """
-
- def __init__(self, parent):
- """
- Copies the properties from the parent (WebkitRenderer) object,
- creates the required instances of QWebPage, QWebView and QMainWindow
- and registers some Slots.
- """
- QObject.__init__(self)
-
- # Copy properties from parent
- for key,value in parent.__dict__.items():
- setattr(self,key,value)
-
- # Determine Proxy settings
- proxy = QNetworkProxy(QNetworkProxy.NoProxy)
- if 'http_proxy' in os.environ:
- proxy_url = QUrl(os.environ['http_proxy'])
- if unicode(proxy_url.scheme()).startswith('http'):
- protocol = QNetworkProxy.HttpProxy
- else:
- protocol = QNetworkProxy.Socks5Proxy
-
- proxy = QNetworkProxy(
- protocol,
- proxy_url.host(),
- proxy_url.port(),
- proxy_url.userName(),
- proxy_url.password()
- )
-
- # Create and connect required PyQt4 objects
- self._page = CustomWebPage(logger=self.logger, ignore_alert=self.ignoreAlert,
- ignore_confirm=self.ignoreConfirm, ignore_prompt=self.ignorePrompt,
- interrupt_js=self.interruptJavaScript)
- self._page.networkAccessManager().setProxy(proxy)
- self._view = QWebView()
- self._view.setPage(self._page)
- self._window = QMainWindow()
- self._window.setCentralWidget(self._view)
-
- # Import QWebSettings
- for key, value in self.qWebSettings.iteritems():
- self._page.settings().setAttribute(key, value)
-
- # Connect required event listeners
- self.connect(self._page, SIGNAL("loadFinished(bool)"), self._on_load_finished)
- self.connect(self._page, SIGNAL("loadStarted()"), self._on_load_started)
- self.connect(self._page.networkAccessManager(), SIGNAL("sslErrors(QNetworkReply *,const QList&)"), self._on_ssl_errors)
- self.connect(self._page.networkAccessManager(), SIGNAL("finished(QNetworkReply *)"), self._on_each_reply)
-
-        # The way we will use this, it seems to be unnecessary to have Scrollbars enabled
- self._page.mainFrame().setScrollBarPolicy(Qt.Horizontal, Qt.ScrollBarAlwaysOff)
- self._page.mainFrame().setScrollBarPolicy(Qt.Vertical, Qt.ScrollBarAlwaysOff)
- self._page.settings().setUserStyleSheetUrl(QUrl("data:text/css,html,body{overflow-y:hidden !important;}"))
-
- # Show this widget
- self._window.show()
-
- def __del__(self):
- """
- Clean up Qt4 objects.
- """
- self._window.close()
- del self._window
- del self._view
- del self._page
-
- def render(self, res):
- """
- The real worker. Loads the page (_load_page) and awaits
- the end of the given 'delay'. While it is waiting outstanding
- QApplication events are processed.
- After the given delay, the Window or Widget (depends
-        on the value of 'grabWholeWindow') is drawn into a QPixmap
- and postprocessed (_post_process_image).
- """
- self._load_page(res, self.width, self.height, self.timeout)
- # Wait for end of timer. In this time, process
- # other outstanding Qt events.
- if self.wait > 0:
- if self.logger: self.logger.debug("Waiting %d seconds " % self.wait)
- waitToTime = time.time() + self.wait
- while time.time() < waitToTime:
- if QApplication.hasPendingEvents():
- QApplication.processEvents()
-
- if self.renderTransparentBackground:
- # Another possible drawing solution
- image = QImage(self._page.viewportSize(), QImage.Format_ARGB32)
- image.fill(QColor(255,0,0,0).rgba())
-
- # http://ariya.blogspot.com/2009/04/transparent-qwebview-and-qwebpage.html
- palette = self._view.palette()
- palette.setBrush(QPalette.Base, Qt.transparent)
- self._page.setPalette(palette)
- self._view.setAttribute(Qt.WA_OpaquePaintEvent, False)
-
- painter = QPainter(image)
- painter.setBackgroundMode(Qt.TransparentMode)
- self._page.mainFrame().render(painter)
- painter.end()
- else:
- if self.grabWholeWindow:
- # Note that this does not fully ensure that the
- # window still has the focus when the screen is
- # grabbed. This might result in a race condition.
- self._view.activateWindow()
- image = QPixmap.grabWindow(self._window.winId())
- else:
- image = QPixmap.grabWidget(self._window)
-
- return self._post_process_image(image)
-
- def _load_page(self, res, width, height, timeout):
- """
- This method implements the logic for retrieving and displaying
- the requested page.
- """
-
- # This is an event-based application. So we have to wait until
-        # "loadFinished(bool)" is raised.
- cancelAt = time.time() + timeout
- self.__loading = True
-        self.__loading_result = False # Default
-
- # When "res" is of type tuple, it has two elements where the first
- # element is the HTML code to render and the second element is a string
- # setting the base URL for the interpreted HTML code.
- # When resource is of type str or unicode, it is handled as URL which
-        # shall be loaded.
- if type(res) == tuple:
- url = res[1]
- else:
- url = res
-
- if self.encodedUrl:
- qtUrl = QUrl.fromEncoded(url)
- else:
- qtUrl = QUrl(url)
-
- # Set the required cookies, if any
- self.cookieJar = CookieJar(self.cookies, qtUrl)
- self._page.networkAccessManager().setCookieJar(self.cookieJar)
-
- # Load the page
- if type(res) == tuple:
- self._page.mainFrame().setHtml(res[0], qtUrl) # HTML, baseUrl
- else:
- self._page.mainFrame().load(qtUrl)
-
- while self.__loading:
- if timeout > 0 and time.time() >= cancelAt:
- raise RuntimeError("Request timed out on %s" % res)
- while QApplication.hasPendingEvents() and self.__loading:
- QCoreApplication.processEvents()
-
- if self.logger: self.logger.debug("Processing result")
-
- if self.__loading_result == False:
- if self.logger: self.logger.warning("Failed to load %s" % res)
-
- # Set initial viewport (the size of the "window")
- size = self._page.mainFrame().contentsSize()
- if self.logger: self.logger.debug("contentsSize: %s", size)
- if width > 0:
- size.setWidth(width)
- if height > 0:
- size.setHeight(height)
-
- self._window.resize(size)
-
- def _post_process_image(self, qImage):
- """
- If 'scaleToWidth' or 'scaleToHeight' are set to a value
- greater than zero this method will scale the image
- using the method defined in 'scaleRatio'.
- """
- if self.scaleToWidth > 0 or self.scaleToHeight > 0:
- # Scale this image
- if self.scaleRatio == 'keep':
- ratio = Qt.KeepAspectRatio
- elif self.scaleRatio in ['expand', 'crop']:
- ratio = Qt.KeepAspectRatioByExpanding
- else: # 'ignore'
- ratio = Qt.IgnoreAspectRatio
- qImage = qImage.scaled(self.scaleToWidth, self.scaleToHeight, ratio, Qt.SmoothTransformation)
- if self.scaleRatio == 'crop':
- qImage = qImage.copy(0, 0, self.scaleToWidth, self.scaleToHeight)
- return qImage
-
- def _on_each_reply(self,reply):
- """
- Logs each requested uri
- """
- # print "Received %s" % (reply.url().toString())
- # self.logger.debug("Received %s" % (reply.url().toString()))
-
- # Eventhandler for "loadStarted()" signal
- def _on_load_started(self):
- """
- Slot that sets the '__loading' property to true
- """
- if self.logger: self.logger.debug("loading started")
- self.__loading = True
-
- # Eventhandler for "loadFinished(bool)" signal
- def _on_load_finished(self, result):
- """Slot that sets the '__loading' property to false and stores
- the result code in '__loading_result'.
- """
- if self.logger: self.logger.debug("loading finished with result %s", result)
- self.__loading = False
- self.__loading_result = result
-
- # Eventhandler for "sslErrors(QNetworkReply *,const QList&)" signal
- def _on_ssl_errors(self, reply, errors):
- """
- Slot that writes SSL warnings into the log but ignores them.
- """
- for e in errors:
- if self.logger: self.logger.warn("SSL: " + e.errorString())
- reply.ignoreSslErrors()
-
-
-class CustomWebPage(QWebPage):
- def __init__(self, **kwargs):
- """
- Class Initializer
- """
- super(CustomWebPage, self).__init__()
- self.logger = kwargs.get('logger', None)
- self.ignore_alert = kwargs.get('ignore_alert', True)
- self.ignore_confirm = kwargs.get('ignore_confirm', True)
- self.ignore_prompt = kwargs.get('ignore_prompt', True)
- self.interrupt_js = kwargs.get('interrupt_js', True)
-
- def javaScriptAlert(self, frame, message):
- if self.logger: self.logger.debug('Alert: %s', message)
- if not self.ignore_alert:
- return super(CustomWebPage, self).javaScriptAlert(frame, message)
-
- def javaScriptConfirm(self, frame, message):
- if self.logger: self.logger.debug('Confirm: %s', message)
- if not self.ignore_confirm:
- return super(CustomWebPage, self).javaScriptConfirm(frame, message)
- else:
- return False
-
- def javaScriptPrompt(self, frame, message, default, result):
- """
- This function is called whenever a JavaScript program running inside frame tries to prompt
- the user for input. The program may provide an optional message, msg, as well as a default value
- for the input in defaultValue.
-
- If the prompt was cancelled by the user the implementation should return false;
- otherwise the result should be written to result and true should be returned.
- If the prompt was not cancelled by the user, the implementation should return true and
- the result string must not be null.
- """
- if self.logger: self.logger.debug('Prompt: %s (%s)' % (message, default))
- if not self.ignore_prompt:
- return super(CustomWebPage, self).javaScriptPrompt(frame, message, default, result)
- else:
- return False
-
- def shouldInterruptJavaScript(self):
- """
- This function is called when a JavaScript program is running for a long period of time.
- If the user wanted to stop the JavaScript the implementation should return true; otherwise false.
- """
- if self.logger: self.logger.debug("WebKit ask to interrupt JavaScript")
- return self.interrupt_js
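
The `WebkitRenderer` removed above documents a three-method API (`render`, `render_to_bytes`, `render_to_file`) but ships no driver script. Below is a minimal usage sketch; it assumes PyQt4 is available and that the file is importable as a module named `webkit2png` — both the module name and the example URL are illustrative, not taken from the original file.

```
# Minimal sketch: drive the WebkitRenderer defined in the deleted webkit2png.py.
# Assumes PyQt4 is installed and the file is importable as `webkit2png`.
import sys

from PyQt4.QtGui import QApplication
from webkit2png import WebkitRenderer

app = QApplication(sys.argv)  # QtWebKit needs a running QApplication in the GUI thread

renderer = WebkitRenderer(width=1024, height=768, wait=1, format='png')

with open('screenshot.png', 'wb') as fh:
    # render_to_file() returns the number of bytes written to the file object
    nbytes = renderer.render_to_file('http://example.com/', fh)
    print('wrote %d bytes' % nbytes)
```
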
diff --git a/spaces/safi842/FashionGen/netdissect/nethook.py b/spaces/safi842/FashionGen/netdissect/nethook.py
deleted file mode 100644
index f36e84ee0cae2de2c3be247498408cf66db3ee8f..0000000000000000000000000000000000000000
--- a/spaces/safi842/FashionGen/netdissect/nethook.py
+++ /dev/null
@@ -1,266 +0,0 @@
-'''
-Utilities for instrumenting a torch model.
-
-InstrumentedModel will wrap a pytorch model and allow hooking
-arbitrary layers to monitor or modify their output directly.
-
-Modified by Erik Härkönen:
-- 29.11.2019: Unhooking bugfix
-- 25.01.2020: Offset edits, removed old API
-'''
-
-import torch, numpy, types
-from collections import OrderedDict
-
-class InstrumentedModel(torch.nn.Module):
- '''
- A wrapper for hooking, probing and intervening in pytorch Modules.
- Example usage:
-
- ```
- model = load_my_model()
-    with InstrumentedModel(model) as inst:
- inst.retain_layer(layername)
- inst.edit_layer(layername, 0.5, target_features)
- inst.edit_layer(layername, offset=offset_tensor)
- inst(inputs)
- original_features = inst.retained_layer(layername)
- ```
- '''
-
- def __init__(self, model):
- super(InstrumentedModel, self).__init__()
- self.model = model
- self._retained = OrderedDict()
- self._ablation = {}
- self._replacement = {}
- self._offset = {}
- self._hooked_layer = {}
- self._old_forward = {}
-
- def __enter__(self):
- return self
-
- def __exit__(self, type, value, traceback):
- self.close()
-
- def forward(self, *inputs, **kwargs):
- return self.model(*inputs, **kwargs)
-
- def retain_layer(self, layername):
- '''
- Pass a fully-qualified layer name (E.g., module.submodule.conv3)
- to hook that layer and retain its output each time the model is run.
- A pair (layername, aka) can be provided, and the aka will be used
- as the key for the retained value instead of the layername.
- '''
- self.retain_layers([layername])
-
- def retain_layers(self, layernames):
- '''
-        Retains a list of layers at once.
- '''
- self.add_hooks(layernames)
- for layername in layernames:
- aka = layername
- if not isinstance(aka, str):
- layername, aka = layername
- if aka not in self._retained:
- self._retained[aka] = None
-
- def retained_features(self):
- '''
- Returns a dict of all currently retained features.
- '''
- return OrderedDict(self._retained)
-
- def retained_layer(self, aka=None, clear=False):
- '''
- Retrieve retained data that was previously hooked by retain_layer.
-        Call this after the model is run. If clear is set, the
-        retained value is returned and then cleared.
- '''
- if aka is None:
- # Default to the first retained layer.
- aka = next(self._retained.keys().__iter__())
- result = self._retained[aka]
- if clear:
- self._retained[aka] = None
- return result
-
- def edit_layer(self, layername, ablation=None, replacement=None, offset=None):
- '''
- Pass a fully-qualified layer name (E.g., module.submodule.conv3)
- to hook that layer and modify its output each time the model is run.
- The output of the layer will be modified to be a convex combination
- of the replacement and x interpolated according to the ablation, i.e.:
- `output = x * (1 - a) + (r * a)`.
- Additionally or independently, an offset can be added to the output.
- '''
- if not isinstance(layername, str):
- layername, aka = layername
- else:
- aka = layername
-
- # The default ablation if a replacement is specified is 1.0.
- if ablation is None and replacement is not None:
- ablation = 1.0
- self.add_hooks([(layername, aka)])
- if ablation is not None:
- self._ablation[aka] = ablation
- if replacement is not None:
- self._replacement[aka] = replacement
- if offset is not None:
- self._offset[aka] = offset
- # If needed, could add an arbitrary postprocessing lambda here.
-
- def remove_edits(self, layername=None, remove_offset=True, remove_replacement=True):
- '''
- Removes edits at the specified layer, or removes edits at all layers
- if no layer name is specified.
- '''
- if layername is None:
- if remove_replacement:
- self._ablation.clear()
- self._replacement.clear()
- if remove_offset:
- self._offset.clear()
- return
-
- if not isinstance(layername, str):
- layername, aka = layername
- else:
- aka = layername
- if remove_replacement and aka in self._ablation:
- del self._ablation[aka]
- if remove_replacement and aka in self._replacement:
- del self._replacement[aka]
- if remove_offset and aka in self._offset:
- del self._offset[aka]
-
- def add_hooks(self, layernames):
- '''
- Sets up a set of layers to be hooked.
-
- Usually not called directly: use edit_layer or retain_layer instead.
- '''
- needed = set()
- aka_map = {}
- for name in layernames:
- aka = name
- if not isinstance(aka, str):
- name, aka = name
- if self._hooked_layer.get(aka, None) != name:
- aka_map[name] = aka
- needed.add(name)
- if not needed:
- return
- for name, layer in self.model.named_modules():
- if name in aka_map:
- needed.remove(name)
- aka = aka_map[name]
- self._hook_layer(layer, name, aka)
- for name in needed:
- raise ValueError('Layer %s not found in model' % name)
-
- def _hook_layer(self, layer, layername, aka):
- '''
- Internal method to replace a forward method with a closure that
- intercepts the call, and tracks the hook so that it can be reverted.
- '''
- if aka in self._hooked_layer:
- raise ValueError('Layer %s already hooked' % aka)
- if layername in self._old_forward:
- raise ValueError('Layer %s already hooked' % layername)
- self._hooked_layer[aka] = layername
- self._old_forward[layername] = (layer, aka,
- layer.__dict__.get('forward', None))
- editor = self
- original_forward = layer.forward
- def new_forward(self, *inputs, **kwargs):
- original_x = original_forward(*inputs, **kwargs)
- x = editor._postprocess_forward(original_x, aka)
- return x
- layer.forward = types.MethodType(new_forward, layer)
-
- def _unhook_layer(self, aka):
- '''
- Internal method to remove a hook, restoring the original forward method.
- '''
- if aka not in self._hooked_layer:
- return
- layername = self._hooked_layer[aka]
- layer, check, old_forward = self._old_forward[layername]
- assert check == aka
- if old_forward is None:
- if 'forward' in layer.__dict__:
- del layer.__dict__['forward']
- else:
- layer.forward = old_forward
- del self._old_forward[layername]
- del self._hooked_layer[aka]
- if aka in self._ablation:
- del self._ablation[aka]
- if aka in self._replacement:
- del self._replacement[aka]
- if aka in self._offset:
- del self._offset[aka]
- if aka in self._retained:
- del self._retained[aka]
-
- def _postprocess_forward(self, x, aka):
- '''
- The internal method called by the hooked layers after they are run.
- '''
- # Retain output before edits, if desired.
- if aka in self._retained:
- self._retained[aka] = x.detach()
-
- # Apply replacement edit
- a = make_matching_tensor(self._ablation, aka, x)
- if a is not None:
- x = x * (1 - a)
- v = make_matching_tensor(self._replacement, aka, x)
- if v is not None:
- x += (v * a)
-
- # Apply offset edit
- b = make_matching_tensor(self._offset, aka, x)
- if b is not None:
- x = x + b
-
- return x
-
- def close(self):
- '''
- Unhooks all hooked layers in the model.
- '''
- for aka in list(self._old_forward.keys()):
- self._unhook_layer(aka)
- assert len(self._old_forward) == 0
-
-
-def make_matching_tensor(valuedict, name, data):
- '''
- Converts `valuedict[name]` to be a tensor with the same dtype, device,
- and dimension count as `data`, and caches the converted tensor.
- '''
- v = valuedict.get(name, None)
- if v is None:
- return None
- if not isinstance(v, torch.Tensor):
- # Accept non-torch data.
- v = torch.from_numpy(numpy.array(v))
- valuedict[name] = v
- if not v.device == data.device or not v.dtype == data.dtype:
- # Ensure device and type matches.
- assert not v.requires_grad, '%s wrong device or type' % (name)
- v = v.to(device=data.device, dtype=data.dtype)
- valuedict[name] = v
- if len(v.shape) < len(data.shape):
- # Ensure dimensions are unsqueezed as needed.
- assert not v.requires_grad, '%s wrong dimensions' % (name)
- v = v.view((1,) + tuple(v.shape) +
- (1,) * (len(data.shape) - len(v.shape) - 1))
- valuedict[name] = v
- return v
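
The `InstrumentedModel` docstring above sketches the intended workflow; the snippet below expands it into a runnable form. The torchvision model and the layer name `layer4.1.conv2` are placeholders chosen for illustration, and the import path assumes the file is reachable as `netdissect.nethook`.

```
# Hedged usage sketch for InstrumentedModel; the model and layer name are placeholders.
import torch
import torchvision

from netdissect.nethook import InstrumentedModel  # assumed import path

model = torchvision.models.resnet18().eval()
x = torch.randn(1, 3, 224, 224)

with InstrumentedModel(model) as inst:
    inst.retain_layer('layer4.1.conv2')                          # record this layer's output
    inst.edit_layer('layer4.1.conv2', offset=torch.zeros(512))   # add a (zero) per-channel offset
    _ = inst(x)                                                  # ordinary forward pass
    feats = inst.retained_layer('layer4.1.conv2')
    print(feats.shape)  # e.g. torch.Size([1, 512, 7, 7])
```
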
diff --git a/spaces/samarthagarwal23/Scotch_recommendation/README.md b/spaces/samarthagarwal23/Scotch_recommendation/README.md
deleted file mode 100644
index 8821d08f03b25edf09e509a10ed5f84e883636f5..0000000000000000000000000000000000000000
--- a/spaces/samarthagarwal23/Scotch_recommendation/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Scotch_recommendation
-emoji: 📊
-colorFrom: yellow
-colorTo: green
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/samueldomdey/SentimentAnalysisSingle/app.py b/spaces/samueldomdey/SentimentAnalysisSingle/app.py
deleted file mode 100644
index c799f05993b7147596e0bba09096349d809aea08..0000000000000000000000000000000000000000
--- a/spaces/samueldomdey/SentimentAnalysisSingle/app.py
+++ /dev/null
@@ -1,23 +0,0 @@
-# imports
-from transformers import pipeline
-import gradio as gr
-
-# define nlp mask
-model = "siebert/sentiment-roberta-large-english"
-nlp = pipeline(model=model) # set device=0 to use GPU (CPU default, -1)
-
-# Inference
-def inference(sentence):
- preds = nlp(sentence)
- pred_sentiment = preds[0]["label"]
- pred_score = preds[0]["score"]
- return pred_sentiment, pred_score
-
-# launch app
-gr.Interface(inference,
- inputs=[gr.inputs.Textbox(label="Sentiment to predict", default="I love this!")],
- outputs=[gr.outputs.Textbox(type="auto", label="Predicted sentiment"),
- gr.outputs.Textbox(type="auto", label="Predicted score")],
- description="Sentiment analysis",
- allow_flagging=False,
- ).launch(debug=True)
\ No newline at end of file
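
The Gradio app above is a thin wrapper around a single `pipeline` call; stripped of the UI, the same prediction can be reproduced in a few lines (model weights download on first use).

```
# Same sentiment prediction as the deleted app.py, without the Gradio interface.
from transformers import pipeline

nlp = pipeline(model="siebert/sentiment-roberta-large-english")  # CPU by default
preds = nlp("I love this!")
print(preds[0]["label"], preds[0]["score"])  # e.g. POSITIVE 0.99...
```
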
diff --git a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/losses.py b/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/losses.py
deleted file mode 100644
index bf1b6ba8b7581b139ccf4246f9ce7d67d6d89b07..0000000000000000000000000000000000000000
--- a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/losses.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import torch
-from torch import nn
-
-
-class ScaledSoftmaxCE(nn.Module):
- def forward(self, x, label):
- logits = x[..., :-10]
- temp_scales = x[..., -10:]
-
-
-
- logprobs = logits.softmax(-1)
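
`ScaledSoftmaxCE` above is left unfinished: it splits the last 10 outputs off as `temp_scales` and computes `logprobs`, but never forms or returns a loss. The sketch below shows one plausible completion, treating the extra outputs as a learned log-temperature that rescales the logits before cross-entropy; this interpretation is an assumption, not the authors' confirmed design.

```
# Hedged completion sketch; the temperature interpretation is an assumption.
import torch
from torch import nn


class ScaledSoftmaxCESketch(nn.Module):
    def forward(self, x, label):
        logits = x[..., :-10]                              # class logits
        temp_scales = x[..., -10:]                         # treated here as log-temperatures
        temperature = temp_scales.mean(-1, keepdim=True).exp()
        logprobs = (logits / temperature).log_softmax(-1)  # temperature-scaled log-probs
        return nn.functional.nll_loss(
            logprobs.view(-1, logprobs.shape[-1]), label.view(-1)
        )
```
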
diff --git a/spaces/sayakpaul/raindrop-deraining-maxim/maxim/blocks/attentions.py b/spaces/sayakpaul/raindrop-deraining-maxim/maxim/blocks/attentions.py
deleted file mode 100644
index ad59022388610f775335cd3f58ba4fb5362ebd90..0000000000000000000000000000000000000000
--- a/spaces/sayakpaul/raindrop-deraining-maxim/maxim/blocks/attentions.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import functools
-
-import tensorflow as tf
-from tensorflow.keras import layers
-
-from .others import MlpBlock
-
-Conv3x3 = functools.partial(layers.Conv2D, kernel_size=(3, 3), padding="same")
-Conv1x1 = functools.partial(layers.Conv2D, kernel_size=(1, 1), padding="same")
-
-
-def CALayer(
- num_channels: int,
- reduction: int = 4,
- use_bias: bool = True,
- name: str = "channel_attention",
-):
- """Squeeze-and-excitation block for channel attention.
-
- ref: https://arxiv.org/abs/1709.01507
- """
-
- def apply(x):
- # 2D global average pooling
- y = layers.GlobalAvgPool2D(keepdims=True)(x)
- # Squeeze (in Squeeze-Excitation)
- y = Conv1x1(
- filters=num_channels // reduction, use_bias=use_bias, name=f"{name}_Conv_0"
- )(y)
- y = tf.nn.relu(y)
- # Excitation (in Squeeze-Excitation)
- y = Conv1x1(filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_1")(y)
- y = tf.nn.sigmoid(y)
- return x * y
-
- return apply
-
-
-def RCAB(
- num_channels: int,
- reduction: int = 4,
- lrelu_slope: float = 0.2,
- use_bias: bool = True,
- name: str = "residual_ca",
-):
- """Residual channel attention block. Contains LN,Conv,lRelu,Conv,SELayer."""
-
- def apply(x):
- shortcut = x
- x = layers.LayerNormalization(epsilon=1e-06, name=f"{name}_LayerNorm")(x)
- x = Conv3x3(filters=num_channels, use_bias=use_bias, name=f"{name}_conv1")(x)
- x = tf.nn.leaky_relu(x, alpha=lrelu_slope)
- x = Conv3x3(filters=num_channels, use_bias=use_bias, name=f"{name}_conv2")(x)
- x = CALayer(
- num_channels=num_channels,
- reduction=reduction,
- use_bias=use_bias,
- name=f"{name}_channel_attention",
- )(x)
- return x + shortcut
-
- return apply
-
-
-def RDCAB(
- num_channels: int,
- reduction: int = 16,
- use_bias: bool = True,
- dropout_rate: float = 0.0,
- name: str = "rdcab",
-):
- """Residual dense channel attention block. Used in Bottlenecks."""
-
- def apply(x):
- y = layers.LayerNormalization(epsilon=1e-06, name=f"{name}_LayerNorm")(x)
- y = MlpBlock(
- mlp_dim=num_channels,
- dropout_rate=dropout_rate,
- use_bias=use_bias,
- name=f"{name}_channel_mixing",
- )(y)
- y = CALayer(
- num_channels=num_channels,
- reduction=reduction,
- use_bias=use_bias,
- name=f"{name}_channel_attention",
- )(y)
- x = x + y
- return x
-
- return apply
-
-
-def SAM(
- num_channels: int,
- output_channels: int = 3,
- use_bias: bool = True,
- name: str = "sam",
-):
-
- """Supervised attention module for multi-stage training.
-
- Introduced by MPRNet [CVPR2021]: https://github.com/swz30/MPRNet
- """
-
- def apply(x, x_image):
-        """Apply the SAM module to the input features and image.
-        Args:
-          x: the output features from the UNet decoder with shape (h, w, c)
-          x_image: the input image with shape (h, w, 3)
-        Returns:
-          A tuple of tensors (x1, image) where x1 is the SAM feature map passed to the
-          next stage, and image is the restored output image at the current stage.
-        """
-        # Get features
- x1 = Conv3x3(filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_0")(x)
-
- # Output restored image X_s
- if output_channels == 3:
- image = (
- Conv3x3(
- filters=output_channels, use_bias=use_bias, name=f"{name}_Conv_1"
- )(x)
- + x_image
- )
- else:
- image = Conv3x3(
- filters=output_channels, use_bias=use_bias, name=f"{name}_Conv_1"
- )(x)
-
-        # Get attention maps for features
- x2 = tf.nn.sigmoid(
- Conv3x3(filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_2")(image)
- )
-
- # Get attended feature maps
- x1 = x1 * x2
-
- # Residual connection
- x1 = x1 + x
- return x1, image
-
- return apply
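
The blocks above follow the Keras functional-closure pattern: each factory returns an `apply(x)` function rather than a `Layer` instance. A small smoke test makes the calling convention concrete; the tensor shape is illustrative and the import path assumes the file is reachable as `maxim.blocks.attentions`.

```
# Smoke-test sketch for the attention blocks; shapes and import path are assumptions.
import tensorflow as tf

from maxim.blocks.attentions import CALayer, RCAB  # assumed import path

x = tf.random.normal((1, 64, 64, 32))  # (batch, height, width, channels)

y = CALayer(num_channels=32, reduction=4, name="ca_demo")(x)   # channel attention only
z = RCAB(num_channels=32, reduction=4, name="rcab_demo")(x)    # LN + convs + channel attention

print(y.shape, z.shape)  # both preserve the input shape: (1, 64, 64, 32)
```
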
diff --git a/spaces/sciling/Face_and_Plate_License_Blur/utils/face_datasets.py b/spaces/sciling/Face_and_Plate_License_Blur/utils/face_datasets.py
deleted file mode 100644
index efd6f4927d7b630b9159f687befff5f6c39f02ac..0000000000000000000000000000000000000000
--- a/spaces/sciling/Face_and_Plate_License_Blur/utils/face_datasets.py
+++ /dev/null
@@ -1,834 +0,0 @@
-import glob
-import logging
-import math
-import os
-import random
-import shutil
-import time
-from itertools import repeat
-from multiprocessing.pool import ThreadPool
-from pathlib import Path
-from threading import Thread
-
-import cv2
-import numpy as np
-import torch
-from PIL import Image, ExifTags
-from torch.utils.data import Dataset
-from tqdm import tqdm
-
-from utils.general import xyxy2xywh, xywh2xyxy, clean_str
-from utils.torch_utils import torch_distributed_zero_first
-
-
-# Parameters
-help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data'
-img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng'] # acceptable image suffixes
-vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'] # acceptable video suffixes
-logger = logging.getLogger(__name__)
-
-# Get orientation exif tag
-for orientation in ExifTags.TAGS.keys():
- if ExifTags.TAGS[orientation] == 'Orientation':
- break
-
-def get_hash(files):
- # Returns a single hash value of a list of files
- return sum(os.path.getsize(f) for f in files if os.path.isfile(f))
-
-def img2label_paths(img_paths):
- # Define label paths as a function of image paths
- sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings
- return [x.replace(sa, sb, 1).replace('.' + x.split('.')[-1], '.txt') for x in img_paths]
-
-def exif_size(img):
- # Returns exif-corrected PIL size
- s = img.size # (width, height)
- try:
- rotation = dict(img._getexif().items())[orientation]
- if rotation == 6: # rotation 270
- s = (s[1], s[0])
- elif rotation == 8: # rotation 90
- s = (s[1], s[0])
- except:
- pass
-
- return s
-
-def create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False,
- rank=-1, world_size=1, workers=8, image_weights=False, quad=False, prefix=''):
-    # Make sure only the first process in DDP processes the dataset first, so the following others can use the cache
- with torch_distributed_zero_first(rank):
- dataset = LoadFaceImagesAndLabels(path, imgsz, batch_size,
- augment=augment, # augment images
- hyp=hyp, # augmentation hyperparameters
- rect=rect, # rectangular training
- cache_images=cache,
- single_cls=opt.single_cls,
- stride=int(stride),
- pad=pad,
- image_weights=image_weights,
- )
-
- batch_size = min(batch_size, len(dataset))
- nw = min([os.cpu_count() // world_size, batch_size if batch_size > 1 else 0, workers]) # number of workers
- sampler = torch.utils.data.distributed.DistributedSampler(dataset) if rank != -1 else None
- loader = torch.utils.data.DataLoader if image_weights else InfiniteDataLoader
- # Use torch.utils.data.DataLoader() if dataset.properties will update during training else InfiniteDataLoader()
- dataloader = loader(dataset,
- batch_size=batch_size,
- num_workers=nw,
- sampler=sampler,
- pin_memory=True,
- collate_fn=LoadFaceImagesAndLabels.collate_fn4 if quad else LoadFaceImagesAndLabels.collate_fn)
- return dataloader, dataset
-class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader):
- """ Dataloader that reuses workers
-
- Uses same syntax as vanilla DataLoader
- """
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))
- self.iterator = super().__iter__()
-
- def __len__(self):
- return len(self.batch_sampler.sampler)
-
- def __iter__(self):
- for i in range(len(self)):
- yield next(self.iterator)
-class _RepeatSampler(object):
- """ Sampler that repeats forever
-
- Args:
- sampler (Sampler)
- """
-
- def __init__(self, sampler):
- self.sampler = sampler
-
- def __iter__(self):
- while True:
- yield from iter(self.sampler)
-
-class LoadFaceImagesAndLabels(Dataset): # for training/testing
- def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False,
- cache_images=False, single_cls=False, stride=32, pad=0.0, rank=-1):
- self.img_size = img_size
- self.augment = augment
- self.hyp = hyp
- self.image_weights = image_weights
- self.rect = False if image_weights else rect
- self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training)
- self.mosaic_border = [-img_size // 2, -img_size // 2]
- self.stride = stride
-
- try:
- f = [] # image files
- for p in path if isinstance(path, list) else [path]:
- p = Path(p) # os-agnostic
- if p.is_dir(): # dir
- f += glob.glob(str(p / '**' / '*.*'), recursive=True)
- elif p.is_file(): # file
- with open(p, 'r') as t:
- t = t.read().strip().splitlines()
- parent = str(p.parent) + os.sep
- f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path
- else:
- raise Exception('%s does not exist' % p)
- self.img_files = sorted([x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in img_formats])
- assert self.img_files, 'No images found'
- except Exception as e:
- raise Exception('Error loading data from %s: %s\nSee %s' % (path, e, help_url))
-
- # Check cache
- self.label_files = img2label_paths(self.img_files) # labels
- cache_path = Path(self.label_files[0]).parent.with_suffix('.cache') # cached labels
- if cache_path.is_file():
- cache = torch.load(cache_path) # load
- if cache['hash'] != get_hash(self.label_files + self.img_files) or 'results' not in cache: # changed
- cache = self.cache_labels(cache_path) # re-cache
- else:
- cache = self.cache_labels(cache_path) # cache
-
- # Display cache
- [nf, nm, ne, nc, n] = cache.pop('results') # found, missing, empty, corrupted, total
- desc = f"Scanning '{cache_path}' for images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted"
- tqdm(None, desc=desc, total=n, initial=n)
- assert nf > 0 or not augment, f'No labels found in {cache_path}. Can not train without labels. See {help_url}'
-
- # Read cache
- cache.pop('hash') # remove hash
- labels, shapes = zip(*cache.values())
- self.labels = list(labels)
- self.shapes = np.array(shapes, dtype=np.float64)
- self.img_files = list(cache.keys()) # update
- self.label_files = img2label_paths(cache.keys()) # update
- if single_cls:
- for x in self.labels:
- x[:, 0] = 0
-
- n = len(shapes) # number of images
- bi = np.floor(np.arange(n) / batch_size).astype(np.int) # batch index
- nb = bi[-1] + 1 # number of batches
- self.batch = bi # batch index of image
- self.n = n
- self.indices = range(n)
-
- # Rectangular Training
- if self.rect:
- # Sort by aspect ratio
- s = self.shapes # wh
- ar = s[:, 1] / s[:, 0] # aspect ratio
- irect = ar.argsort()
- self.img_files = [self.img_files[i] for i in irect]
- self.label_files = [self.label_files[i] for i in irect]
- self.labels = [self.labels[i] for i in irect]
- self.shapes = s[irect] # wh
- ar = ar[irect]
-
- # Set training image shapes
- shapes = [[1, 1]] * nb
- for i in range(nb):
- ari = ar[bi == i]
- mini, maxi = ari.min(), ari.max()
- if maxi < 1:
- shapes[i] = [maxi, 1]
- elif mini > 1:
- shapes[i] = [1, 1 / mini]
-
- self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(np.int) * stride
-
- # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM)
- self.imgs = [None] * n
- if cache_images:
- gb = 0 # Gigabytes of cached images
- self.img_hw0, self.img_hw = [None] * n, [None] * n
- results = ThreadPool(8).imap(lambda x: load_image(*x), zip(repeat(self), range(n))) # 8 threads
- pbar = tqdm(enumerate(results), total=n)
- for i, x in pbar:
- self.imgs[i], self.img_hw0[i], self.img_hw[i] = x # img, hw_original, hw_resized = load_image(self, i)
- gb += self.imgs[i].nbytes
- pbar.desc = 'Caching images (%.1fGB)' % (gb / 1E9)
-
- def cache_labels(self, path=Path('./labels.cache')):
- # Cache dataset labels, check images and read shapes
- x = {} # dict
-        nm, nf, ne, nc = 0, 0, 0, 0  # number missing, found, empty, corrupted
- pbar = tqdm(zip(self.img_files, self.label_files), desc='Scanning images', total=len(self.img_files))
- for i, (im_file, lb_file) in enumerate(pbar):
- try:
- # verify images
- im = Image.open(im_file)
- im.verify() # PIL verify
- shape = exif_size(im) # image size
- assert (shape[0] > 9) & (shape[1] > 9), 'image size <10 pixels'
-
- # verify labels
- if os.path.isfile(lb_file):
- nf += 1 # label found
- with open(lb_file, 'r') as f:
- l = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels
- if len(l):
- assert l.shape[1] == 15, 'labels require 15 columns each'
- assert (l >= -1).all(), 'negative labels'
- assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels'
- assert np.unique(l, axis=0).shape[0] == l.shape[0], 'duplicate labels'
- else:
- ne += 1 # label empty
- l = np.zeros((0, 15), dtype=np.float32)
- else:
- nm += 1 # label missing
- l = np.zeros((0, 15), dtype=np.float32)
- x[im_file] = [l, shape]
- except Exception as e:
- nc += 1
- print('WARNING: Ignoring corrupted image and/or label %s: %s' % (im_file, e))
-
- pbar.desc = f"Scanning '{path.parent / path.stem}' for images and labels... " \
- f"{nf} found, {nm} missing, {ne} empty, {nc} corrupted"
-
- if nf == 0:
- print(f'WARNING: No labels found in {path}. See {help_url}')
-
- x['hash'] = get_hash(self.label_files + self.img_files)
- x['results'] = [nf, nm, ne, nc, i + 1]
- torch.save(x, path) # save for next time
- logging.info(f"New cache created: {path}")
- return x
-
- def __len__(self):
- return len(self.img_files)
-
- # def __iter__(self):
- # self.count = -1
- # print('ran dataset iter')
- # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF)
- # return self
-
- def __getitem__(self, index):
- index = self.indices[index] # linear, shuffled, or image_weights
-
- hyp = self.hyp
- mosaic = self.mosaic and random.random() < hyp['mosaic']
- if mosaic:
- # Load mosaic
- img, labels = load_mosaic_face(self, index)
- shapes = None
-
- # MixUp https://arxiv.org/pdf/1710.09412.pdf
- if random.random() < hyp['mixup']:
- img2, labels2 = load_mosaic_face(self, random.randint(0, self.n - 1))
- r = np.random.beta(8.0, 8.0) # mixup ratio, alpha=beta=8.0
- img = (img * r + img2 * (1 - r)).astype(np.uint8)
- labels = np.concatenate((labels, labels2), 0)
-
- else:
- # Load image
- img, (h0, w0), (h, w) = load_image(self, index)
-
- # Letterbox
- shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape
- img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment)
- shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling
-
- # Load labels
- labels = []
- x = self.labels[index]
- if x.size > 0:
- # Normalized xywh to pixel xyxy format
- labels = x.copy()
- labels[:, 1] = ratio[0] * w * (x[:, 1] - x[:, 3] / 2) + pad[0] # pad width
- labels[:, 2] = ratio[1] * h * (x[:, 2] - x[:, 4] / 2) + pad[1] # pad height
- labels[:, 3] = ratio[0] * w * (x[:, 1] + x[:, 3] / 2) + pad[0]
- labels[:, 4] = ratio[1] * h * (x[:, 2] + x[:, 4] / 2) + pad[1]
-
- #labels[:, 5] = ratio[0] * w * x[:, 5] + pad[0] # pad width
- labels[:, 5] = np.array(x[:, 5] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 5] + pad[0]) + (
- np.array(x[:, 5] > 0, dtype=np.int32) - 1)
- labels[:, 6] = np.array(x[:, 6] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 6] + pad[1]) + (
- np.array(x[:, 6] > 0, dtype=np.int32) - 1)
- labels[:, 7] = np.array(x[:, 7] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 7] + pad[0]) + (
- np.array(x[:, 7] > 0, dtype=np.int32) - 1)
- labels[:, 8] = np.array(x[:, 8] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 8] + pad[1]) + (
- np.array(x[:, 8] > 0, dtype=np.int32) - 1)
-            labels[:, 9] = np.array(x[:, 9] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 9] + pad[0]) + (
-                np.array(x[:, 9] > 0, dtype=np.int32) - 1)
-            labels[:, 10] = np.array(x[:, 10] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 10] + pad[1]) + (
-                np.array(x[:, 10] > 0, dtype=np.int32) - 1)
- labels[:, 11] = np.array(x[:, 11] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 11] + pad[0]) + (
- np.array(x[:, 11] > 0, dtype=np.int32) - 1)
- labels[:, 12] = np.array(x[:, 12] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 12] + pad[1]) + (
- np.array(x[:, 12] > 0, dtype=np.int32) - 1)
- labels[:, 13] = np.array(x[:, 13] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 13] + pad[0]) + (
- np.array(x[:, 13] > 0, dtype=np.int32) - 1)
- labels[:, 14] = np.array(x[:, 14] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 14] + pad[1]) + (
- np.array(x[:, 14] > 0, dtype=np.int32) - 1)
-
- if self.augment:
- # Augment imagespace
- if not mosaic:
- img, labels = random_perspective(img, labels,
- degrees=hyp['degrees'],
- translate=hyp['translate'],
- scale=hyp['scale'],
- shear=hyp['shear'],
- perspective=hyp['perspective'])
-
- # Augment colorspace
- augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v'])
-
- # Apply cutouts
- # if random.random() < 0.9:
- # labels = cutout(img, labels)
-
- nL = len(labels) # number of labels
- if nL:
- labels[:, 1:5] = xyxy2xywh(labels[:, 1:5]) # convert xyxy to xywh
- labels[:, [2, 4]] /= img.shape[0] # normalized height 0-1
- labels[:, [1, 3]] /= img.shape[1] # normalized width 0-1
-
- labels[:, [5, 7, 9, 11, 13]] /= img.shape[1] # normalized landmark x 0-1
- labels[:, [5, 7, 9, 11, 13]] = np.where(labels[:, [5, 7, 9, 11, 13]] < 0, -1, labels[:, [5, 7, 9, 11, 13]])
- labels[:, [6, 8, 10, 12, 14]] /= img.shape[0] # normalized landmark y 0-1
- labels[:, [6, 8, 10, 12, 14]] = np.where(labels[:, [6, 8, 10, 12, 14]] < 0, -1, labels[:, [6, 8, 10, 12, 14]])
-
- if self.augment:
- # flip up-down
- if random.random() < hyp['flipud']:
- img = np.flipud(img)
- if nL:
- labels[:, 2] = 1 - labels[:, 2]
-
- labels[:, 6] = np.where(labels[:,6] < 0, -1, 1 - labels[:, 6])
- labels[:, 8] = np.where(labels[:, 8] < 0, -1, 1 - labels[:, 8])
- labels[:, 10] = np.where(labels[:, 10] < 0, -1, 1 - labels[:, 10])
- labels[:, 12] = np.where(labels[:, 12] < 0, -1, 1 - labels[:, 12])
- labels[:, 14] = np.where(labels[:, 14] < 0, -1, 1 - labels[:, 14])
-
- # flip left-right
- if random.random() < hyp['fliplr']:
- img = np.fliplr(img)
- if nL:
- labels[:, 1] = 1 - labels[:, 1]
-
- labels[:, 5] = np.where(labels[:, 5] < 0, -1, 1 - labels[:, 5])
- labels[:, 7] = np.where(labels[:, 7] < 0, -1, 1 - labels[:, 7])
- labels[:, 9] = np.where(labels[:, 9] < 0, -1, 1 - labels[:, 9])
- labels[:, 11] = np.where(labels[:, 11] < 0, -1, 1 - labels[:, 11])
- labels[:, 13] = np.where(labels[:, 13] < 0, -1, 1 - labels[:, 13])
-
-                    # After a left-right flip, the left/right eyes and left/right mouth corners swap sides, so exchange their label positions to make learning easier for the network
- eye_left = np.copy(labels[:, [5, 6]])
- mouth_left = np.copy(labels[:, [11, 12]])
- labels[:, [5, 6]] = labels[:, [7, 8]]
- labels[:, [7, 8]] = eye_left
- labels[:, [11, 12]] = labels[:, [13, 14]]
- labels[:, [13, 14]] = mouth_left
-
- labels_out = torch.zeros((nL, 16))
- if nL:
- labels_out[:, 1:] = torch.from_numpy(labels)
- #showlabels(img, labels[:, 1:5], labels[:, 5:15])
-
- # Convert
- img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
- img = np.ascontiguousarray(img)
- #print(index, ' --- labels_out: ', labels_out)
- #if nL:
- #print( ' : landmarks : ', torch.max(labels_out[:, 5:15]), ' --- ', torch.min(labels_out[:, 5:15]))
- return torch.from_numpy(img), labels_out, self.img_files[index], shapes
-
- @staticmethod
- def collate_fn(batch):
- img, label, path, shapes = zip(*batch) # transposed
- for i, l in enumerate(label):
- l[:, 0] = i # add target image index for build_targets()
- return torch.stack(img, 0), torch.cat(label, 0), path, shapes
-
-
-def showlabels(img, boxs, landmarks):
- for box in boxs:
- x,y,w,h = box[0] * img.shape[1], box[1] * img.shape[0], box[2] * img.shape[1], box[3] * img.shape[0]
- #cv2.rectangle(image, (x,y), (x+w,y+h), (0,255,0), 2)
- cv2.rectangle(img, (int(x - w/2), int(y - h/2)), (int(x + w/2), int(y + h/2)), (0, 255, 0), 2)
-
- for landmark in landmarks:
- #cv2.circle(img,(60,60),30,(0,0,255))
- for i in range(5):
- cv2.circle(img, (int(landmark[2*i] * img.shape[1]), int(landmark[2*i+1]*img.shape[0])), 3 ,(0,0,255), -1)
- cv2.imshow('test', img)
- cv2.waitKey(0)
-
-
-def load_mosaic_face(self, index):
- # loads images in a mosaic
- labels4 = []
- s = self.img_size
- yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y
- indices = [index] + [self.indices[random.randint(0, self.n - 1)] for _ in range(3)] # 3 additional image indices
- for i, index in enumerate(indices):
- # Load image
- img, _, (h, w) = load_image(self, index)
-
- # place img in img4
- if i == 0: # top left
- img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles
- x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image)
- elif i == 1: # top right
- x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc
- x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
- elif i == 2: # bottom left
- x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)
- elif i == 3: # bottom right
- x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)
- x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)
-
- img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
- padw = x1a - x1b
- padh = y1a - y1b
-
- # Labels
- x = self.labels[index]
- labels = x.copy()
- if x.size > 0: # Normalized xywh to pixel xyxy format
- #box, x1,y1,x2,y2
- labels[:, 1] = w * (x[:, 1] - x[:, 3] / 2) + padw
- labels[:, 2] = h * (x[:, 2] - x[:, 4] / 2) + padh
- labels[:, 3] = w * (x[:, 1] + x[:, 3] / 2) + padw
- labels[:, 4] = h * (x[:, 2] + x[:, 4] / 2) + padh
- #10 landmarks
-
- labels[:, 5] = np.array(x[:, 5] > 0, dtype=np.int32) * (w * x[:, 5] + padw) + (np.array(x[:, 5] > 0, dtype=np.int32) - 1)
- labels[:, 6] = np.array(x[:, 6] > 0, dtype=np.int32) * (h * x[:, 6] + padh) + (np.array(x[:, 6] > 0, dtype=np.int32) - 1)
- labels[:, 7] = np.array(x[:, 7] > 0, dtype=np.int32) * (w * x[:, 7] + padw) + (np.array(x[:, 7] > 0, dtype=np.int32) - 1)
- labels[:, 8] = np.array(x[:, 8] > 0, dtype=np.int32) * (h * x[:, 8] + padh) + (np.array(x[:, 8] > 0, dtype=np.int32) - 1)
- labels[:, 9] = np.array(x[:, 9] > 0, dtype=np.int32) * (w * x[:, 9] + padw) + (np.array(x[:, 9] > 0, dtype=np.int32) - 1)
- labels[:, 10] = np.array(x[:, 10] > 0, dtype=np.int32) * (h * x[:, 10] + padh) + (np.array(x[:, 10] > 0, dtype=np.int32) - 1)
- labels[:, 11] = np.array(x[:, 11] > 0, dtype=np.int32) * (w * x[:, 11] + padw) + (np.array(x[:, 11] > 0, dtype=np.int32) - 1)
- labels[:, 12] = np.array(x[:, 12] > 0, dtype=np.int32) * (h * x[:, 12] + padh) + (np.array(x[:, 12] > 0, dtype=np.int32) - 1)
- labels[:, 13] = np.array(x[:, 13] > 0, dtype=np.int32) * (w * x[:, 13] + padw) + (np.array(x[:, 13] > 0, dtype=np.int32) - 1)
- labels[:, 14] = np.array(x[:, 14] > 0, dtype=np.int32) * (h * x[:, 14] + padh) + (np.array(x[:, 14] > 0, dtype=np.int32) - 1)
- labels4.append(labels)
-
- # Concat/clip labels
- if len(labels4):
- labels4 = np.concatenate(labels4, 0)
- np.clip(labels4[:, 1:5], 0, 2 * s, out=labels4[:, 1:5]) # use with random_perspective
- # img4, labels4 = replicate(img4, labels4) # replicate
-
- #landmarks
- labels4[:, 5:] = np.where(labels4[:, 5:] < 0, -1, labels4[:, 5:])
- labels4[:, 5:] = np.where(labels4[:, 5:] > 2 * s, -1, labels4[:, 5:])
-
- labels4[:, 5] = np.where(labels4[:, 6] == -1, -1, labels4[:, 5])
- labels4[:, 6] = np.where(labels4[:, 5] == -1, -1, labels4[:, 6])
-
- labels4[:, 7] = np.where(labels4[:, 8] == -1, -1, labels4[:, 7])
- labels4[:, 8] = np.where(labels4[:, 7] == -1, -1, labels4[:, 8])
-
- labels4[:, 9] = np.where(labels4[:, 10] == -1, -1, labels4[:, 9])
- labels4[:, 10] = np.where(labels4[:, 9] == -1, -1, labels4[:, 10])
-
- labels4[:, 11] = np.where(labels4[:, 12] == -1, -1, labels4[:, 11])
- labels4[:, 12] = np.where(labels4[:, 11] == -1, -1, labels4[:, 12])
-
- labels4[:, 13] = np.where(labels4[:, 14] == -1, -1, labels4[:, 13])
- labels4[:, 14] = np.where(labels4[:, 13] == -1, -1, labels4[:, 14])
-
- # Augment
- img4, labels4 = random_perspective(img4, labels4,
- degrees=self.hyp['degrees'],
- translate=self.hyp['translate'],
- scale=self.hyp['scale'],
- shear=self.hyp['shear'],
- perspective=self.hyp['perspective'],
- border=self.mosaic_border) # border to remove
- return img4, labels4
-
-
-# Ancillary functions --------------------------------------------------------------------------------------------------
-def load_image(self, index):
- # loads 1 image from dataset, returns img, original hw, resized hw
- img = self.imgs[index]
- if img is None: # not cached
- path = self.img_files[index]
- img = cv2.imread(path) # BGR
- assert img is not None, 'Image Not Found ' + path
- h0, w0 = img.shape[:2] # orig hw
- r = self.img_size / max(h0, w0) # resize image to img_size
- if r != 1: # always resize down, only resize up if training with augmentation
- interp = cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR
- img = cv2.resize(img, (int(w0 * r), int(h0 * r)), interpolation=interp)
- return img, (h0, w0), img.shape[:2] # img, hw_original, hw_resized
- else:
- return self.imgs[index], self.img_hw0[index], self.img_hw[index] # img, hw_original, hw_resized
-
-
-def augment_hsv(img, hgain=0.5, sgain=0.5, vgain=0.5):
- r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains
- hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))
- dtype = img.dtype # uint8
-
- x = np.arange(0, 256, dtype=np.int16)
- lut_hue = ((x * r[0]) % 180).astype(dtype)
- lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)
- lut_val = np.clip(x * r[2], 0, 255).astype(dtype)
-
- img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))).astype(dtype)
- cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img) # no return needed
-
- # Histogram equalization
- # if random.random() < 0.2:
- # for i in range(3):
- # img[:, :, i] = cv2.equalizeHist(img[:, :, i])
-
-def replicate(img, labels):
- # Replicate labels
- h, w = img.shape[:2]
- boxes = labels[:, 1:].astype(int)
- x1, y1, x2, y2 = boxes.T
- s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels)
- for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices
- x1b, y1b, x2b, y2b = boxes[i]
- bh, bw = y2b - y1b, x2b - x1b
- yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y
- x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh]
- img[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
- labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0)
-
- return img, labels
-
-
-def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True):
- # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232
- shape = img.shape[:2] # current shape [height, width]
- if isinstance(new_shape, int):
- new_shape = (new_shape, new_shape)
-
- # Scale ratio (new / old)
- r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
- if not scaleup: # only scale down, do not scale up (for better test mAP)
- r = min(r, 1.0)
-
- # Compute padding
- ratio = r, r # width, height ratios
- new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
- dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding
- if auto: # minimum rectangle
- dw, dh = np.mod(dw, 64), np.mod(dh, 64) # wh padding
- elif scaleFill: # stretch
- dw, dh = 0.0, 0.0
- new_unpad = (new_shape[1], new_shape[0])
- ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios
-
- dw /= 2 # divide padding into 2 sides
- dh /= 2
-
- if shape[::-1] != new_unpad: # resize
- img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
- top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
- left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
- img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border
- return img, ratio, (dw, dh)
-
-
-def random_perspective(img, targets=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0, border=(0, 0)):
- # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10))
- # targets = [cls, xyxy]
-
- height = img.shape[0] + border[0] * 2 # shape(h,w,c)
- width = img.shape[1] + border[1] * 2
-
- # Center
- C = np.eye(3)
- C[0, 2] = -img.shape[1] / 2 # x translation (pixels)
- C[1, 2] = -img.shape[0] / 2 # y translation (pixels)
-
- # Perspective
- P = np.eye(3)
- P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y)
- P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x)
-
- # Rotation and Scale
- R = np.eye(3)
- a = random.uniform(-degrees, degrees)
- # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations
- s = random.uniform(1 - scale, 1 + scale)
- # s = 2 ** random.uniform(-scale, scale)
- R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)
-
- # Shear
- S = np.eye(3)
- S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg)
- S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg)
-
- # Translation
- T = np.eye(3)
- T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels)
- T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels)
-
- # Combined rotation matrix
- M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT
- if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed
- if perspective:
- img = cv2.warpPerspective(img, M, dsize=(width, height), borderValue=(114, 114, 114))
- else: # affine
- img = cv2.warpAffine(img, M[:2], dsize=(width, height), borderValue=(114, 114, 114))
-
- # Visualize
- # import matplotlib.pyplot as plt
- # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
- # ax[0].imshow(img[:, :, ::-1]) # base
- # ax[1].imshow(img2[:, :, ::-1]) # warped
-
- # Transform label coordinates
- n = len(targets)
- if n:
- # warp points
- #xy = np.ones((n * 4, 3))
- xy = np.ones((n * 9, 3))
- xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]].reshape(n * 9, 2) # x1y1, x2y2, x1y2, x2y1
- xy = xy @ M.T # transform
- if perspective:
- xy = (xy[:, :2] / xy[:, 2:3]).reshape(n, 18) # rescale
- else: # affine
- xy = xy[:, :2].reshape(n, 18)
-
- # create new boxes
- x = xy[:, [0, 2, 4, 6]]
- y = xy[:, [1, 3, 5, 7]]
-
- landmarks = xy[:, [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]]
- mask = np.array(targets[:, 5:] > 0, dtype=np.int32)
- landmarks = landmarks * mask
- landmarks = landmarks + mask - 1
-
- landmarks = np.where(landmarks < 0, -1, landmarks)
- landmarks[:, [0, 2, 4, 6, 8]] = np.where(landmarks[:, [0, 2, 4, 6, 8]] > width, -1, landmarks[:, [0, 2, 4, 6, 8]])
- landmarks[:, [1, 3, 5, 7, 9]] = np.where(landmarks[:, [1, 3, 5, 7, 9]] > height, -1,landmarks[:, [1, 3, 5, 7, 9]])
-
- landmarks[:, 0] = np.where(landmarks[:, 1] == -1, -1, landmarks[:, 0])
- landmarks[:, 1] = np.where(landmarks[:, 0] == -1, -1, landmarks[:, 1])
-
- landmarks[:, 2] = np.where(landmarks[:, 3] == -1, -1, landmarks[:, 2])
- landmarks[:, 3] = np.where(landmarks[:, 2] == -1, -1, landmarks[:, 3])
-
- landmarks[:, 4] = np.where(landmarks[:, 5] == -1, -1, landmarks[:, 4])
- landmarks[:, 5] = np.where(landmarks[:, 4] == -1, -1, landmarks[:, 5])
-
- landmarks[:, 6] = np.where(landmarks[:, 7] == -1, -1, landmarks[:, 6])
- landmarks[:, 7] = np.where(landmarks[:, 6] == -1, -1, landmarks[:, 7])
-
- landmarks[:, 8] = np.where(landmarks[:, 9] == -1, -1, landmarks[:, 8])
- landmarks[:, 9] = np.where(landmarks[:, 8] == -1, -1, landmarks[:, 9])
-
- targets[:,5:] = landmarks
-
- xy = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
-
- # # apply angle-based reduction of bounding boxes
- # radians = a * math.pi / 180
- # reduction = max(abs(math.sin(radians)), abs(math.cos(radians))) ** 0.5
- # x = (xy[:, 2] + xy[:, 0]) / 2
- # y = (xy[:, 3] + xy[:, 1]) / 2
- # w = (xy[:, 2] - xy[:, 0]) * reduction
- # h = (xy[:, 3] - xy[:, 1]) * reduction
- # xy = np.concatenate((x - w / 2, y - h / 2, x + w / 2, y + h / 2)).reshape(4, n).T
-
- # clip boxes
- xy[:, [0, 2]] = xy[:, [0, 2]].clip(0, width)
- xy[:, [1, 3]] = xy[:, [1, 3]].clip(0, height)
-
- # filter candidates
- i = box_candidates(box1=targets[:, 1:5].T * s, box2=xy.T)
- targets = targets[i]
- targets[:, 1:5] = xy[i]
-
- return img, targets
-
-
-def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1): # box1(4,n), box2(4,n)
- # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio
- w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
- w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
- ar = np.maximum(w2 / (h2 + 1e-16), h2 / (w2 + 1e-16)) # aspect ratio
- return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + 1e-16) > area_thr) & (ar < ar_thr) # candidates
-
-
-def cutout(image, labels):
- # Applies image cutout augmentation https://arxiv.org/abs/1708.04552
- h, w = image.shape[:2]
-
- def bbox_ioa(box1, box2):
- # Returns the intersection over box2 area given box1, box2. box1 is 4, box2 is nx4. boxes are x1y1x2y2
- box2 = box2.transpose()
-
- # Get the coordinates of bounding boxes
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
-
- # Intersection area
- inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \
- (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0)
-
- # box2 area
- box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + 1e-16
-
- # Intersection over box2 area
- return inter_area / box2_area
-
- # create random masks
- scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction
- for s in scales:
- mask_h = random.randint(1, int(h * s))
- mask_w = random.randint(1, int(w * s))
-
- # box
- xmin = max(0, random.randint(0, w) - mask_w // 2)
- ymin = max(0, random.randint(0, h) - mask_h // 2)
- xmax = min(w, xmin + mask_w)
- ymax = min(h, ymin + mask_h)
-
- # apply random color mask
- image[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)]
-
- # return unobscured labels
- if len(labels) and s > 0.03:
- box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)
- ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
- labels = labels[ioa < 0.60] # remove >60% obscured labels
-
- return labels
-
-
-def create_folder(path='./new'):
- # Create folder
- if os.path.exists(path):
- shutil.rmtree(path) # delete output folder
- os.makedirs(path) # make new output folder
-
-
-def flatten_recursive(path='../coco128'):
- # Flatten a recursive directory by bringing all files to top level
- new_path = Path(path + '_flat')
- create_folder(new_path)
- for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)):
- shutil.copyfile(file, new_path / Path(file).name)
-
-
-def extract_boxes(path='../coco128/'): # from utils.datasets import *; extract_boxes('../coco128')
- # Convert detection dataset into classification dataset, with one directory per class
-
- path = Path(path) # images dir
- shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None # remove existing
- files = list(path.rglob('*.*'))
- n = len(files) # number of files
- for im_file in tqdm(files, total=n):
- if im_file.suffix[1:] in img_formats:
- # image
- im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB
- h, w = im.shape[:2]
-
- # labels
- lb_file = Path(img2label_paths([str(im_file)])[0])
- if Path(lb_file).exists():
- with open(lb_file, 'r') as f:
- lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels
-
- for j, x in enumerate(lb):
- c = int(x[0]) # class
- f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename
- if not f.parent.is_dir():
- f.parent.mkdir(parents=True)
-
- b = x[1:] * [w, h, w, h] # box
- # b[2:] = b[2:].max() # rectangle to square
- b[2:] = b[2:] * 1.2 + 3 # pad
- b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int)
-
- b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image
- b[[1, 3]] = np.clip(b[[1, 3]], 0, h)
- assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}'
-
-
-def autosplit(path='../coco128', weights=(0.9, 0.1, 0.0)): # from utils.datasets import *; autosplit('../coco128')
- """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files
- # Arguments
- path: Path to images directory
- weights: Train, val, test weights (list)
- """
- path = Path(path) # images dir
- files = list(path.rglob('*.*'))
- n = len(files) # number of files
- indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split
- txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files
- [(path / x).unlink() for x in txt if (path / x).exists()] # remove existing
- for i, img in tqdm(zip(indices, files), total=n):
- if img.suffix[1:] in img_formats:
- with open(path / txt[i], 'a') as f:
- f.write(str(img) + '\n') # add image to txt file
diff --git a/spaces/sczhou/CodeFormer/app.py b/spaces/sczhou/CodeFormer/app.py
deleted file mode 100644
index 485c29cf06adc43e1dcefe561497f53e85f73116..0000000000000000000000000000000000000000
--- a/spaces/sczhou/CodeFormer/app.py
+++ /dev/null
@@ -1,305 +0,0 @@
-"""
-This file is used for deploying hugging face demo:
-https://huggingface.co/spaces/sczhou/CodeFormer
-"""
-
-import sys
-sys.path.append('CodeFormer')
-import os
-import cv2
-import torch
-import torch.nn.functional as F
-import gradio as gr
-
-from torchvision.transforms.functional import normalize
-
-from basicsr.utils import imwrite, img2tensor, tensor2img
-from basicsr.utils.download_util import load_file_from_url
-from facelib.utils.face_restoration_helper import FaceRestoreHelper
-from facelib.utils.misc import is_gray
-from basicsr.archs.rrdbnet_arch import RRDBNet
-from basicsr.utils.realesrgan_utils import RealESRGANer
-
-from basicsr.utils.registry import ARCH_REGISTRY
-
-
-os.system("pip freeze")
-
-pretrain_model_url = {
- 'codeformer': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth',
- 'detection': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/detection_Resnet50_Final.pth',
- 'parsing': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth',
- 'realesrgan': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/RealESRGAN_x2plus.pth'
-}
-# download weights
-if not os.path.exists('CodeFormer/weights/CodeFormer/codeformer.pth'):
- load_file_from_url(url=pretrain_model_url['codeformer'], model_dir='CodeFormer/weights/CodeFormer', progress=True, file_name=None)
-if not os.path.exists('CodeFormer/weights/facelib/detection_Resnet50_Final.pth'):
- load_file_from_url(url=pretrain_model_url['detection'], model_dir='CodeFormer/weights/facelib', progress=True, file_name=None)
-if not os.path.exists('CodeFormer/weights/facelib/parsing_parsenet.pth'):
- load_file_from_url(url=pretrain_model_url['parsing'], model_dir='CodeFormer/weights/facelib', progress=True, file_name=None)
-if not os.path.exists('CodeFormer/weights/realesrgan/RealESRGAN_x2plus.pth'):
- load_file_from_url(url=pretrain_model_url['realesrgan'], model_dir='CodeFormer/weights/realesrgan', progress=True, file_name=None)
-
-# download images
-torch.hub.download_url_to_file(
- 'https://replicate.com/api/models/sczhou/codeformer/files/fa3fe3d1-76b0-4ca8-ac0d-0a925cb0ff54/06.png',
- '01.png')
-torch.hub.download_url_to_file(
- 'https://replicate.com/api/models/sczhou/codeformer/files/a1daba8e-af14-4b00-86a4-69cec9619b53/04.jpg',
- '02.jpg')
-torch.hub.download_url_to_file(
- 'https://replicate.com/api/models/sczhou/codeformer/files/542d64f9-1712-4de7-85f7-3863009a7c3d/03.jpg',
- '03.jpg')
-torch.hub.download_url_to_file(
- 'https://replicate.com/api/models/sczhou/codeformer/files/a11098b0-a18a-4c02-a19a-9a7045d68426/010.jpg',
- '04.jpg')
-torch.hub.download_url_to_file(
- 'https://replicate.com/api/models/sczhou/codeformer/files/7cf19c2c-e0cf-4712-9af8-cf5bdbb8d0ee/012.jpg',
- '05.jpg')
-torch.hub.download_url_to_file(
- 'https://raw.githubusercontent.com/sczhou/CodeFormer/master/inputs/cropped_faces/0729.png',
- '06.png')
-
-def imread(img_path):
- img = cv2.imread(img_path)
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
- return img
-
-# set enhancer with RealESRGAN
-def set_realesrgan():
- half = True if torch.cuda.is_available() else False
- model = RRDBNet(
- num_in_ch=3,
- num_out_ch=3,
- num_feat=64,
- num_block=23,
- num_grow_ch=32,
- scale=2,
- )
- upsampler = RealESRGANer(
- scale=2,
- model_path="CodeFormer/weights/realesrgan/RealESRGAN_x2plus.pth",
- model=model,
- tile=400,
- tile_pad=40,
- pre_pad=0,
- half=half,
- )
- return upsampler
-
-upsampler = set_realesrgan()
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-codeformer_net = ARCH_REGISTRY.get("CodeFormer")(
- dim_embd=512,
- codebook_size=1024,
- n_head=8,
- n_layers=9,
- connect_list=["32", "64", "128", "256"],
-).to(device)
-ckpt_path = "CodeFormer/weights/CodeFormer/codeformer.pth"
-checkpoint = torch.load(ckpt_path)["params_ema"]
-codeformer_net.load_state_dict(checkpoint)
-codeformer_net.eval()
-
-os.makedirs('output', exist_ok=True)
-
-def inference(image, face_align, background_enhance, face_upsample, upscale, codeformer_fidelity):
- """Run a single prediction on the model"""
- try: # global try
- # take the default setting for the demo
- only_center_face = False
- draw_box = False
- detection_model = "retinaface_resnet50"
-
- print('Inp:', image, background_enhance, face_upsample, upscale, codeformer_fidelity)
- face_align = face_align if face_align is not None else True
- background_enhance = background_enhance if background_enhance is not None else True
- face_upsample = face_upsample if face_upsample is not None else True
- upscale = upscale if (upscale is not None and upscale > 0) else 2
-
- has_aligned = not face_align
- upscale = 1 if has_aligned else upscale
-
- img = cv2.imread(str(image), cv2.IMREAD_COLOR)
- print('\timage size:', img.shape)
-
- upscale = int(upscale) # convert type to int
- if upscale > 4: # avoid memory exceeded due to too large upscale
- upscale = 4
- if upscale > 2 and max(img.shape[:2])>1000: # avoid memory exceeded due to too large img resolution
- upscale = 2
- if max(img.shape[:2]) > 1500: # avoid memory exceeded due to too large img resolution
- upscale = 1
- background_enhance = False
- face_upsample = False
-
- face_helper = FaceRestoreHelper(
- upscale,
- face_size=512,
- crop_ratio=(1, 1),
- det_model=detection_model,
- save_ext="png",
- use_parse=True,
- device=device,
- )
- bg_upsampler = upsampler if background_enhance else None
- face_upsampler = upsampler if face_upsample else None
-
- if has_aligned:
- # the input faces are already cropped and aligned
- img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR)
- face_helper.is_gray = is_gray(img, threshold=5)
- if face_helper.is_gray:
- print('\tgrayscale input: True')
- face_helper.cropped_faces = [img]
- else:
- face_helper.read_image(img)
- # get face landmarks for each face
- num_det_faces = face_helper.get_face_landmarks_5(
- only_center_face=only_center_face, resize=640, eye_dist_threshold=5
- )
- print(f'\tdetect {num_det_faces} faces')
- # align and warp each face
- face_helper.align_warp_face()
-
- # face restoration for each cropped face
- for idx, cropped_face in enumerate(face_helper.cropped_faces):
- # prepare data
- cropped_face_t = img2tensor(
- cropped_face / 255.0, bgr2rgb=True, float32=True
- )
- normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
- cropped_face_t = cropped_face_t.unsqueeze(0).to(device)
-
- try:
- with torch.no_grad():
- output = codeformer_net(
- cropped_face_t, w=codeformer_fidelity, adain=True
- )[0]
- restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1))
- del output
- torch.cuda.empty_cache()
- except RuntimeError as error:
- print(f"Failed inference for CodeFormer: {error}")
- restored_face = tensor2img(
- cropped_face_t, rgb2bgr=True, min_max=(-1, 1)
- )
-
- restored_face = restored_face.astype("uint8")
- face_helper.add_restored_face(restored_face)
-
- # paste_back
- if not has_aligned:
- # upsample the background
- if bg_upsampler is not None:
- # Now only support RealESRGAN for upsampling background
- bg_img = bg_upsampler.enhance(img, outscale=upscale)[0]
- else:
- bg_img = None
- face_helper.get_inverse_affine(None)
- # paste each restored face to the input image
- if face_upsample and face_upsampler is not None:
- restored_img = face_helper.paste_faces_to_input_image(
- upsample_img=bg_img,
- draw_box=draw_box,
- face_upsampler=face_upsampler,
- )
- else:
- restored_img = face_helper.paste_faces_to_input_image(
- upsample_img=bg_img, draw_box=draw_box
- )
- else:
- restored_img = restored_face
-
- # save restored img
- save_path = f'output/out.png'
- imwrite(restored_img, str(save_path))
-
- restored_img = cv2.cvtColor(restored_img, cv2.COLOR_BGR2RGB)
- return restored_img
- except Exception as error:
- print('Global exception', error)
-        return None
-
-
-title = "CodeFormer: Robust Face Restoration and Enhancement Network"
-
-description = r"""
-
-Official Gradio demo for Towards Robust Blind Face Restoration with Codebook Lookup Transformer (NeurIPS 2022)
-🔥 CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.
-🤗 Try CodeFormer for improved stable-diffusion generation!
-"""
-
-article = r"""
-If CodeFormer is helpful, please help to ⭐ the GitHub repo. Thanks!
-[GitHub: sczhou/CodeFormer](https://github.com/sczhou/CodeFormer)
-
----
-
-📝 **Citation**
-
-If our work is useful for your research, please consider citing:
-```bibtex
-@inproceedings{zhou2022codeformer,
- author = {Zhou, Shangchen and Chan, Kelvin C.K. and Li, Chongyi and Loy, Chen Change},
- title = {Towards Robust Blind Face Restoration with Codebook Lookup TransFormer},
- booktitle = {NeurIPS},
- year = {2022}
-}
-```
-
-📋 **License**
-
-This project is licensed under S-Lab License 1.0.
-Redistribution and use for non-commercial purposes should follow this license.
-
-📧 **Contact**
-
-If you have any questions, please feel free to reach out to me at shangchenzhou@gmail.com.
-
-------
-[**Changelog**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md)
-
-[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md))
-
-> Check out the [demo video](https://www.bilibili.com/video/BV1pm4y1z7Gm/)!
-
-> Real-time voice conversion with RVC: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
-
-> The base model is trained on roughly 50 hours of the high-quality, open-source VCTK dataset, so there are no copyright concerns; feel free to use it.
-
-> We plan to keep training on high-quality, copyright-free songs in the future.
-
-## Introduction
-This repo has the following features:
-+ Replaces the input timbre features with training-set timbre features via top-1 retrieval to prevent timbre leakage (see the sketch after this list);
-+ Fast training, even on relatively low-end GPUs;
-+ Good results even with small amounts of training data (at least 10 minutes of low-noise speech is recommended);
-+ Timbre blending through model fusion (ckpt processing tab -> select ckpt merge);
-+ An easy-to-use WebUI (web interface);
-+ Fast separation of vocals and background music using the UVR5 model;
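A minimal sketch (not the repo's actual code) of the top-1 retrieval idea from the first bullet above, assuming `faiss` is installed and the features are plain NumPy arrays; the function and array names are illustrative only:

```python
# Hedged sketch: replace each input frame's feature with its nearest neighbour
# from the training-set features, so the output keeps the training-set timbre.
import numpy as np
import faiss  # assumed available; RVC-style retrieval is typically built on faiss


def replace_with_top1(input_feats: np.ndarray, train_feats: np.ndarray) -> np.ndarray:
    """input_feats: (T, D) frames to convert; train_feats: (N, D) training-set features."""
    index = faiss.IndexFlatL2(train_feats.shape[1])            # exact L2 search over D-dim vectors
    index.add(train_feats.astype(np.float32))
    _, idx = index.search(input_feats.astype(np.float32), 1)   # top-1 neighbour per frame
    return train_feats[idx[:, 0]]                              # substitute each frame's feature
```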
-
-## Preparing the Environment
-We recommend installing dependencies with poetry.
-
-The following commands must be run in an environment with Python 3.8 or higher:
-```bash
-# Install the main PyTorch-related dependencies; skip this step if they are already installed
-# Reference: https://pytorch.org/get-started/locally/
-pip install torch torchvision torchaudio
-
-# On Windows with an Nvidia Ampere GPU (RTX 30xx), you must specify a CUDA version matching your PyTorch build, as noted in https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/issues/21.
-#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
-
-# Install Poetry; skip this step if it is already installed
-# Reference: https://python-poetry.org/docs/#installation
-curl -sSL https://install.python-poetry.org | python3 -
-
-# Install dependencies
-poetry install
-```
-You may also install the dependencies with pip instead.
-
-**Notice**: `faiss 1.7.2` can cause a "Segmentation Fault: 11" error on `MacOS`. If you install the dependencies manually with pip, use `pip install faiss-cpu==1.7.0` instead.
-
-```bash
-pip install -r requirements.txt
-```
-
-## Preparing Other Pretrained Models
-RVC requires additional pretrained models for inference and training.
-
-They can be downloaded from this [Huggingface space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/).
-
-Below is a list of the pretrained models and other files that RVC needs:
-```bash
-hubert_base.pt
-
-./pretrained
-
-./uvr5_weights
-
-# On Windows you may also need this file; you can skip it if FFmpeg is already installed.
-ffmpeg.exe
-```
-You can then start the WebUI with the following command:
-```bash
-python infer-web.py
-```
-On Windows, you can download and extract `RVC-beta.7z` to use RVC directly, or start the WebUI with `go-web.bat`.
-
-## References
-+ [ContentVec](https://github.com/auspicious3000/contentvec/)
-+ [VITS](https://github.com/jaywalnut310/vits)
-+ [HIFIGAN](https://github.com/jik876/hifi-gan)
-+ [Gradio](https://github.com/gradio-app/gradio)
-+ [FFmpeg](https://github.com/FFmpeg/FFmpeg)
-+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
-+ [audio-slicer](https://github.com/openvpi/audio-slicer)
-## Thanks to all contributors for their efforts.
-
-
-
-
-
diff --git a/spaces/shi-labs/FcF-Inpainting/training/losses/ade20k/segm_lib/utils/__init__.py b/spaces/shi-labs/FcF-Inpainting/training/losses/ade20k/segm_lib/utils/__init__.py
deleted file mode 100644
index abe3cbe49477fe37d4fc16249de8a10f4fb4a013..0000000000000000000000000000000000000000
--- a/spaces/shi-labs/FcF-Inpainting/training/losses/ade20k/segm_lib/utils/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .th import *
diff --git a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/seecoder.py b/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/seecoder.py
deleted file mode 100644
index 2ad331180113db5ee33186a5abce81e871e0c7c9..0000000000000000000000000000000000000000
--- a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/seecoder.py
+++ /dev/null
@@ -1,576 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import copy
-
-from .seecoder_utils import with_pos_embed
-from lib.model_zoo.common.get_model import get_model, register
-
-symbol = 'seecoder'
-
-###########
-# helpers #
-###########
-
-def _get_clones(module, N):
- return nn.ModuleList([copy.deepcopy(module) for i in range(N)])
-
-def _get_activation_fn(activation):
- """Return an activation function given a string"""
- if activation == "relu":
- return F.relu
- if activation == "gelu":
- return F.gelu
- if activation == "glu":
- return F.glu
-    raise RuntimeError(f"activation should be relu/gelu/glu, not {activation}.")
-
-def c2_xavier_fill(module):
- # Caffe2 implementation of XavierFill in fact
- nn.init.kaiming_uniform_(module.weight, a=1)
- if module.bias is not None:
- nn.init.constant_(module.bias, 0)
-
-def with_pos_embed(x, pos):
- return x if pos is None else x + pos
-
-###########
-# Modules #
-###########
-
-class Conv2d_Convenience(nn.Conv2d):
- def __init__(self, *args, **kwargs):
- norm = kwargs.pop("norm", None)
- activation = kwargs.pop("activation", None)
- super().__init__(*args, **kwargs)
- self.norm = norm
- self.activation = activation
-
- def forward(self, x):
- x = F.conv2d(
- x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)
- if self.norm is not None:
- x = self.norm(x)
- if self.activation is not None:
- x = self.activation(x)
- return x
-
-class DecoderLayer(nn.Module):
- def __init__(self,
- dim=256,
- feedforward_dim=1024,
- dropout=0.1,
- activation="relu",
- n_heads=8,):
-
- super().__init__()
-
- self.self_attn = nn.MultiheadAttention(dim, n_heads, dropout=dropout)
- self.dropout1 = nn.Dropout(dropout)
- self.norm1 = nn.LayerNorm(dim)
-
- self.linear1 = nn.Linear(dim, feedforward_dim)
- self.activation = _get_activation_fn(activation)
- self.dropout2 = nn.Dropout(dropout)
- self.linear2 = nn.Linear(feedforward_dim, dim)
- self.dropout3 = nn.Dropout(dropout)
- self.norm2 = nn.LayerNorm(dim)
-
- def forward(self, x):
- h = x
- h1 = self.self_attn(x, x, x, attn_mask=None)[0]
- h = h + self.dropout1(h1)
- h = self.norm1(h)
-
- h2 = self.linear2(self.dropout2(self.activation(self.linear1(h))))
- h = h + self.dropout3(h2)
- h = self.norm2(h)
- return h
-
-class DecoderLayerStacked(nn.Module):
- def __init__(self, layer, num_layers, norm=None):
- super().__init__()
- self.layers = _get_clones(layer, num_layers)
- self.num_layers = num_layers
- self.norm = norm
-
- def forward(self, x):
- h = x
- for _, layer in enumerate(self.layers):
- h = layer(h)
- if self.norm is not None:
- h = self.norm(h)
- return h
-
-class SelfAttentionLayer(nn.Module):
- def __init__(self, channels, nhead, dropout=0.0,
- activation="relu", normalize_before=False):
- super().__init__()
- self.self_attn = nn.MultiheadAttention(channels, nhead, dropout=dropout)
-
- self.norm = nn.LayerNorm(channels)
- self.dropout = nn.Dropout(dropout)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def forward_post(self,
- qkv,
- qk_pos = None,
- mask = None,):
- h = qkv
- qk = with_pos_embed(qkv, qk_pos).transpose(0, 1)
- v = qkv.transpose(0, 1)
- h1 = self.self_attn(qk, qk, v, attn_mask=mask)[0]
- h1 = h1.transpose(0, 1)
- h = h + self.dropout(h1)
- h = self.norm(h)
- return h
-
- def forward_pre(self, tgt,
- tgt_mask = None,
- tgt_key_padding_mask = None,
- query_pos = None):
- # deprecated
- assert False
- tgt2 = self.norm(tgt)
- q = k = self.with_pos_embed(tgt2, query_pos)
- tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask,
- key_padding_mask=tgt_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2)
- return tgt
-
- def forward(self, *args, **kwargs):
- if self.normalize_before:
- return self.forward_pre(*args, **kwargs)
- return self.forward_post(*args, **kwargs)
-
-class CrossAttentionLayer(nn.Module):
- def __init__(self, channels, nhead, dropout=0.0,
- activation="relu", normalize_before=False):
- super().__init__()
- self.multihead_attn = nn.MultiheadAttention(channels, nhead, dropout=dropout)
-
- self.norm = nn.LayerNorm(channels)
- self.dropout = nn.Dropout(dropout)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def forward_post(self,
- q,
- kv,
- q_pos = None,
- k_pos = None,
- mask = None,):
- h = q
- q = with_pos_embed(q, q_pos).transpose(0, 1)
- k = with_pos_embed(kv, k_pos).transpose(0, 1)
- v = kv.transpose(0, 1)
- h1 = self.multihead_attn(q, k, v, attn_mask=mask)[0]
- h1 = h1.transpose(0, 1)
- h = h + self.dropout(h1)
- h = self.norm(h)
- return h
-
- def forward_pre(self, tgt, memory,
- memory_mask = None,
- memory_key_padding_mask = None,
- pos = None,
- query_pos = None):
- # Deprecated
- assert False
- tgt2 = self.norm(tgt)
- tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos),
- key=self.with_pos_embed(memory, pos),
- value=memory, attn_mask=memory_mask,
- key_padding_mask=memory_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2)
- return tgt
-
- def forward(self, *args, **kwargs):
- if self.normalize_before:
- return self.forward_pre(*args, **kwargs)
- return self.forward_post(*args, **kwargs)
-
-class FeedForwardLayer(nn.Module):
- def __init__(self, channels, hidden_channels=2048, dropout=0.0,
- activation="relu", normalize_before=False):
- super().__init__()
- self.linear1 = nn.Linear(channels, hidden_channels)
- self.dropout = nn.Dropout(dropout)
- self.linear2 = nn.Linear(hidden_channels, channels)
- self.norm = nn.LayerNorm(channels)
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def forward_post(self, x):
- h = x
- h1 = self.linear2(self.dropout(self.activation(self.linear1(h))))
- h = h + self.dropout(h1)
- h = self.norm(h)
- return h
-
- def forward_pre(self, x):
- xn = self.norm(x)
- h = x
- h1 = self.linear2(self.dropout(self.activation(self.linear1(xn))))
- h = h + self.dropout(h1)
- return h
-
- def forward(self, *args, **kwargs):
- if self.normalize_before:
- return self.forward_pre(*args, **kwargs)
- return self.forward_post(*args, **kwargs)
-
-class MLP(nn.Module):
- def __init__(self, in_channels, channels, out_channels, num_layers):
- super().__init__()
- self.num_layers = num_layers
- h = [channels] * (num_layers - 1)
- self.layers = nn.ModuleList(
- nn.Linear(n, k)
- for n, k in zip([in_channels]+h, h+[out_channels]))
-
- def forward(self, x):
- for i, layer in enumerate(self.layers):
- x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
- return x
-
-class PPE_MLP(nn.Module):
- def __init__(self, freq_num=20, freq_max=None, out_channel=768, mlp_layer=3):
- import math
- super().__init__()
- self.freq_num = freq_num
- self.freq_max = freq_max
- self.out_channel = out_channel
- self.mlp_layer = mlp_layer
- self.twopi = 2 * math.pi
-
- mlp = []
- in_channel = freq_num*4
- for idx in range(mlp_layer):
- linear = nn.Linear(in_channel, out_channel, bias=True)
- nn.init.xavier_normal_(linear.weight)
- nn.init.constant_(linear.bias, 0)
- mlp.append(linear)
- if idx != mlp_layer-1:
- mlp.append(nn.SiLU())
- in_channel = out_channel
- self.mlp = nn.Sequential(*mlp)
- nn.init.constant_(self.mlp[-1].weight, 0)
-
- def forward(self, x, mask=None):
- assert mask is None, "Mask not implemented"
- h, w = x.shape[-2:]
- minlen = min(h, w)
-
- h_embed, w_embed = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
- if self.training:
- import numpy.random as npr
- pertube_h, pertube_w = npr.uniform(-0.5, 0.5), npr.uniform(-0.5, 0.5)
- else:
- pertube_h, pertube_w = 0, 0
-
- h_embed = (h_embed+0.5 - h/2 + pertube_h) / (minlen) * self.twopi
- w_embed = (w_embed+0.5 - w/2 + pertube_w) / (minlen) * self.twopi
- h_embed, w_embed = h_embed.to(x.device).to(x.dtype), w_embed.to(x.device).to(x.dtype)
-
- dim_t = torch.linspace(0, 1, self.freq_num, dtype=torch.float32, device=x.device)
- freq_max = self.freq_max if self.freq_max is not None else minlen/2
- dim_t = freq_max ** dim_t.to(x.dtype)
-
- pos_h = h_embed[:, :, None] * dim_t
- pos_w = w_embed[:, :, None] * dim_t
- pos = torch.cat((pos_h.sin(), pos_h.cos(), pos_w.sin(), pos_w.cos()), dim=-1)
- pos = self.mlp(pos)
- pos = pos.permute(2, 0, 1)[None]
- return pos
-
- def __repr__(self, _repr_indent=4):
- head = "Positional encoding " + self.__class__.__name__
- body = [
- "num_pos_feats: {}".format(self.num_pos_feats),
- "temperature: {}".format(self.temperature),
- "normalize: {}".format(self.normalize),
- "scale: {}".format(self.scale),
- ]
- # _repr_indent = 4
- lines = [head] + [" " * _repr_indent + line for line in body]
- return "\n".join(lines)
-
-###########
-# Decoder #
-###########
-
-@register('seecoder_decoder')
-class Decoder(nn.Module):
- def __init__(
- self,
- inchannels,
- trans_input_tags,
- trans_num_layers,
- trans_dim,
- trans_nheads,
- trans_dropout,
- trans_feedforward_dim,):
-
- super().__init__()
- trans_inchannels = {
- k: v for k, v in inchannels.items() if k in trans_input_tags}
- fpn_inchannels = {
- k: v for k, v in inchannels.items() if k not in trans_input_tags}
-
- self.trans_tags = sorted(list(trans_inchannels.keys()))
- self.fpn_tags = sorted(list(fpn_inchannels.keys()))
- self.all_tags = sorted(list(inchannels.keys()))
-
- if len(self.trans_tags)==0:
- assert False # Not allowed
-
- self.num_trans_lvls = len(self.trans_tags)
-
- self.inproj_layers = nn.ModuleDict()
- for tagi in self.trans_tags:
- layeri = nn.Sequential(
- nn.Conv2d(trans_inchannels[tagi], trans_dim, kernel_size=1),
- nn.GroupNorm(32, trans_dim),)
- nn.init.xavier_uniform_(layeri[0].weight, gain=1)
- nn.init.constant_(layeri[0].bias, 0)
- self.inproj_layers[tagi] = layeri
-
- tlayer = DecoderLayer(
- dim = trans_dim,
- n_heads = trans_nheads,
- dropout = trans_dropout,
- feedforward_dim = trans_feedforward_dim,
- activation = 'relu',)
-
- self.transformer = DecoderLayerStacked(tlayer, trans_num_layers)
- for p in self.transformer.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
- self.level_embed = nn.Parameter(torch.Tensor(len(self.trans_tags), trans_dim))
- nn.init.normal_(self.level_embed)
-
- self.lateral_layers = nn.ModuleDict()
- self.output_layers = nn.ModuleDict()
- for tagi in self.all_tags:
- lateral_conv = Conv2d_Convenience(
- inchannels[tagi], trans_dim, kernel_size=1,
- bias=False, norm=nn.GroupNorm(32, trans_dim))
- c2_xavier_fill(lateral_conv)
- self.lateral_layers[tagi] = lateral_conv
-
- for tagi in self.fpn_tags:
- output_conv = Conv2d_Convenience(
- trans_dim, trans_dim, kernel_size=3, stride=1, padding=1,
- bias=False, norm=nn.GroupNorm(32, trans_dim), activation=F.relu,)
- c2_xavier_fill(output_conv)
- self.output_layers[tagi] = output_conv
-
- def forward(self, features):
- x = []
- spatial_shapes = {}
- for idx, tagi in enumerate(self.trans_tags[::-1]):
- xi = features[tagi]
- xi = self.inproj_layers[tagi](xi)
- bs, _, h, w = xi.shape
- spatial_shapes[tagi] = (h, w)
- xi = xi.flatten(2).transpose(1, 2) + self.level_embed[idx].view(1, 1, -1)
- x.append(xi)
-
- x_length = [xi.shape[1] for xi in x]
- x_concat = torch.cat(x, 1)
- y_concat = self.transformer(x_concat)
- y = torch.split(y_concat, x_length, dim=1)
-
- out = {}
- for idx, tagi in enumerate(self.trans_tags[::-1]):
- h, w = spatial_shapes[tagi]
- yi = y[idx].transpose(1, 2).view(bs, -1, h, w)
- out[tagi] = yi
-
- for idx, tagi in enumerate(self.all_tags[::-1]):
- lconv = self.lateral_layers[tagi]
- if tagi in self.trans_tags:
- out[tagi] = out[tagi] + lconv(features[tagi])
- tag_save = tagi
- else:
- oconv = self.output_layers[tagi]
- h = lconv(features[tagi])
- oprev = out[tag_save]
- h = h + F.interpolate(oconv(oprev), size=h.shape[-2:], mode="bilinear", align_corners=False)
- out[tagi] = h
-
- return out
-
-#####################
-# Query Transformer #
-#####################
-
-@register('seecoder_query_transformer')
-class QueryTransformer(nn.Module):
- def __init__(self,
- in_channels,
- hidden_dim,
- num_queries = [8, 144],
- nheads = 8,
- num_layers = 9,
- feedforward_dim = 2048,
- mask_dim = 256,
- pre_norm = False,
- num_feature_levels = 3,
- enforce_input_project = False,
- with_fea2d_pos = True):
-
- super().__init__()
-
- if with_fea2d_pos:
- self.pe_layer = PPE_MLP(freq_num=20, freq_max=None, out_channel=hidden_dim, mlp_layer=3)
- else:
- self.pe_layer = None
-
- if in_channels!=hidden_dim or enforce_input_project:
- self.input_proj = nn.ModuleList()
- for _ in range(num_feature_levels):
- self.input_proj.append(nn.Conv2d(in_channels, hidden_dim, kernel_size=1))
- c2_xavier_fill(self.input_proj[-1])
- else:
- self.input_proj = None
-
- self.num_heads = nheads
- self.num_layers = num_layers
- self.transformer_selfatt_layers = nn.ModuleList()
- self.transformer_crossatt_layers = nn.ModuleList()
- self.transformer_feedforward_layers = nn.ModuleList()
-
- for _ in range(self.num_layers):
- self.transformer_selfatt_layers.append(
- SelfAttentionLayer(
- channels=hidden_dim,
- nhead=nheads,
- dropout=0.0,
- normalize_before=pre_norm, ))
-
- self.transformer_crossatt_layers.append(
- CrossAttentionLayer(
- channels=hidden_dim,
- nhead=nheads,
- dropout=0.0,
- normalize_before=pre_norm, ))
-
- self.transformer_feedforward_layers.append(
- FeedForwardLayer(
- channels=hidden_dim,
- hidden_channels=feedforward_dim,
- dropout=0.0,
- normalize_before=pre_norm, ))
-
- self.num_queries = num_queries
- num_gq, num_lq = self.num_queries
- self.init_query = nn.Embedding(num_gq+num_lq, hidden_dim)
- self.query_pos_embedding = nn.Embedding(num_gq+num_lq, hidden_dim)
-
- self.num_feature_levels = num_feature_levels
- self.level_embed = nn.Embedding(num_feature_levels, hidden_dim)
-
- def forward(self, x):
- # x is a list of multi-scale feature
- assert len(x) == self.num_feature_levels
- fea2d = []
- fea2d_pos = []
- size_list = []
-
- for i in range(self.num_feature_levels):
- size_list.append(x[i].shape[-2:])
- if self.pe_layer is not None:
- pi = self.pe_layer(x[i], None).flatten(2)
- pi = pi.transpose(1, 2)
- else:
- pi = None
- xi = self.input_proj[i](x[i]) if self.input_proj is not None else x[i]
- xi = xi.flatten(2) + self.level_embed.weight[i][None, :, None]
- xi = xi.transpose(1, 2)
- fea2d.append(xi)
- fea2d_pos.append(pi)
-
- bs, _, _ = fea2d[0].shape
- num_gq, num_lq = self.num_queries
- gquery = self.init_query.weight[:num_gq].unsqueeze(0).repeat(bs, 1, 1)
- lquery = self.init_query.weight[num_gq:].unsqueeze(0).repeat(bs, 1, 1)
-
- gquery_pos = self.query_pos_embedding.weight[:num_gq].unsqueeze(0).repeat(bs, 1, 1)
- lquery_pos = self.query_pos_embedding.weight[num_gq:].unsqueeze(0).repeat(bs, 1, 1)
-
- for i in range(self.num_layers):
- level_index = i % self.num_feature_levels
-
- qout = self.transformer_crossatt_layers[i](
- q = lquery,
- kv = fea2d[level_index],
- q_pos = lquery_pos,
- k_pos = fea2d_pos[level_index],
- mask = None,)
- lquery = qout
-
- qout = self.transformer_selfatt_layers[i](
- qkv = torch.cat([gquery, lquery], dim=1),
- qk_pos = torch.cat([gquery_pos, lquery_pos], dim=1),)
-
- qout = self.transformer_feedforward_layers[i](qout)
-
- gquery = qout[:, :num_gq]
- lquery = qout[:, num_gq:]
-
- output = torch.cat([gquery, lquery], dim=1)
-
- return output
-
-##################
-# Main structure #
-##################
-
-@register('seecoder')
-class SemanticExtractionEncoder(nn.Module):
- def __init__(self,
- imencoder_cfg,
- imdecoder_cfg,
- qtransformer_cfg):
- super().__init__()
- self.imencoder = get_model()(imencoder_cfg)
- self.imdecoder = get_model()(imdecoder_cfg)
- self.qtransformer = get_model()(qtransformer_cfg)
-
- def forward(self, x):
- fea = self.imencoder(x)
- hs = {'res3' : fea['res3'],
- 'res4' : fea['res4'],
- 'res5' : fea['res5'], }
- hs = self.imdecoder(hs)
- hs = [hs['res3'], hs['res4'], hs['res5']]
- q = self.qtransformer(hs)
- return q
-
- def encode(self, x):
- return self(x)
diff --git a/spaces/shreydan/youtube-QandA/app.py b/spaces/shreydan/youtube-QandA/app.py
deleted file mode 100644
index 920cc2c2ef908095c8a616a98b7c6ebad9e54141..0000000000000000000000000000000000000000
--- a/spaces/shreydan/youtube-QandA/app.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import streamlit as st
-from streamlit_player import st_player
-
-from model import Engine
-from fetch_transcript import fetch_transcript
-from preprocessing import create_similarity_text, create_result_url
-
-with st.container():
- st.title('YouTube Q&A Search')
- st.write('Ask YouTube videos questions and get your answers :)')
-
-with st.container():
-
- url_input = st.text_input(label='Video',placeholder='enter YouTube video url')
-
- question_input = st.text_input(label='Question',placeholder='enter your question')
-
- get_ans = st.button(label='Answer!')
-
-    if url_input != '' and question_input != '' and get_ans:
-
- with st.spinner('loading your video...'):
- transcript = fetch_transcript(url_input)
- model = Engine(transcript)
- prev_url = url_input
-
- with st.spinner('finding an answer...'):
- answer = model.ask(question_input)
- similarity_text = create_similarity_text(question_input,answer)
- groups,timestamps = model.find_similar(similarity_text)
- url = create_result_url(url_input,timestamps[0])
-
- with st.container():
-
- st.caption('Extracted Answer:')
- st.write(answer)
- st.caption('In Video:')
- st_player(url)
-
diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_net.py b/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_net.py
deleted file mode 100644
index ab6aa82d3e9055a838f1f9076b12f05fdfc154d0..0000000000000000000000000000000000000000
--- a/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_net.py
+++ /dev/null
@@ -1,196 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-def conv_bn(inp, oup, stride=1, leaky=0):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 3, stride, 1, bias=False), nn.BatchNorm2d(oup),
- nn.LeakyReLU(negative_slope=leaky, inplace=True))
-
-
-def conv_bn_no_relu(inp, oup, stride):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
- nn.BatchNorm2d(oup),
- )
-
-
-def conv_bn1X1(inp, oup, stride, leaky=0):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 1, stride, padding=0, bias=False), nn.BatchNorm2d(oup),
- nn.LeakyReLU(negative_slope=leaky, inplace=True))
-
-
-def conv_dw(inp, oup, stride, leaky=0.1):
- return nn.Sequential(
- nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
- nn.BatchNorm2d(inp),
- nn.LeakyReLU(negative_slope=leaky, inplace=True),
- nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
- nn.BatchNorm2d(oup),
- nn.LeakyReLU(negative_slope=leaky, inplace=True),
- )
-
-
-class SSH(nn.Module):
-
- def __init__(self, in_channel, out_channel):
- super(SSH, self).__init__()
- assert out_channel % 4 == 0
- leaky = 0
- if (out_channel <= 64):
- leaky = 0.1
- self.conv3X3 = conv_bn_no_relu(in_channel, out_channel // 2, stride=1)
-
- self.conv5X5_1 = conv_bn(in_channel, out_channel // 4, stride=1, leaky=leaky)
- self.conv5X5_2 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1)
-
- self.conv7X7_2 = conv_bn(out_channel // 4, out_channel // 4, stride=1, leaky=leaky)
- self.conv7x7_3 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1)
-
- def forward(self, input):
- conv3X3 = self.conv3X3(input)
-
- conv5X5_1 = self.conv5X5_1(input)
- conv5X5 = self.conv5X5_2(conv5X5_1)
-
- conv7X7_2 = self.conv7X7_2(conv5X5_1)
- conv7X7 = self.conv7x7_3(conv7X7_2)
-
- out = torch.cat([conv3X3, conv5X5, conv7X7], dim=1)
- out = F.relu(out)
- return out
-
-
-class FPN(nn.Module):
-
- def __init__(self, in_channels_list, out_channels):
- super(FPN, self).__init__()
- leaky = 0
- if (out_channels <= 64):
- leaky = 0.1
- self.output1 = conv_bn1X1(in_channels_list[0], out_channels, stride=1, leaky=leaky)
- self.output2 = conv_bn1X1(in_channels_list[1], out_channels, stride=1, leaky=leaky)
- self.output3 = conv_bn1X1(in_channels_list[2], out_channels, stride=1, leaky=leaky)
-
- self.merge1 = conv_bn(out_channels, out_channels, leaky=leaky)
- self.merge2 = conv_bn(out_channels, out_channels, leaky=leaky)
-
- def forward(self, input):
- # names = list(input.keys())
- # input = list(input.values())
-
- output1 = self.output1(input[0])
- output2 = self.output2(input[1])
- output3 = self.output3(input[2])
-
- up3 = F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode='nearest')
- output2 = output2 + up3
- output2 = self.merge2(output2)
-
- up2 = F.interpolate(output2, size=[output1.size(2), output1.size(3)], mode='nearest')
- output1 = output1 + up2
- output1 = self.merge1(output1)
-
- out = [output1, output2, output3]
- return out
-
-
-class MobileNetV1(nn.Module):
-
- def __init__(self):
- super(MobileNetV1, self).__init__()
- self.stage1 = nn.Sequential(
- conv_bn(3, 8, 2, leaky=0.1), # 3
- conv_dw(8, 16, 1), # 7
- conv_dw(16, 32, 2), # 11
- conv_dw(32, 32, 1), # 19
- conv_dw(32, 64, 2), # 27
- conv_dw(64, 64, 1), # 43
- )
- self.stage2 = nn.Sequential(
- conv_dw(64, 128, 2), # 43 + 16 = 59
- conv_dw(128, 128, 1), # 59 + 32 = 91
- conv_dw(128, 128, 1), # 91 + 32 = 123
- conv_dw(128, 128, 1), # 123 + 32 = 155
- conv_dw(128, 128, 1), # 155 + 32 = 187
- conv_dw(128, 128, 1), # 187 + 32 = 219
- )
- self.stage3 = nn.Sequential(
- conv_dw(128, 256, 2), # 219 +3 2 = 241
- conv_dw(256, 256, 1), # 241 + 64 = 301
- )
- self.avg = nn.AdaptiveAvgPool2d((1, 1))
- self.fc = nn.Linear(256, 1000)
-
- def forward(self, x):
- x = self.stage1(x)
- x = self.stage2(x)
- x = self.stage3(x)
- x = self.avg(x)
- # x = self.model(x)
- x = x.view(-1, 256)
- x = self.fc(x)
- return x
-
-
-class ClassHead(nn.Module):
-
- def __init__(self, inchannels=512, num_anchors=3):
- super(ClassHead, self).__init__()
- self.num_anchors = num_anchors
- self.conv1x1 = nn.Conv2d(inchannels, self.num_anchors * 2, kernel_size=(1, 1), stride=1, padding=0)
-
- def forward(self, x):
- out = self.conv1x1(x)
- out = out.permute(0, 2, 3, 1).contiguous()
-
- return out.view(out.shape[0], -1, 2)
-
-
-class BboxHead(nn.Module):
-
- def __init__(self, inchannels=512, num_anchors=3):
- super(BboxHead, self).__init__()
- self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 4, kernel_size=(1, 1), stride=1, padding=0)
-
- def forward(self, x):
- out = self.conv1x1(x)
- out = out.permute(0, 2, 3, 1).contiguous()
-
- return out.view(out.shape[0], -1, 4)
-
-
-class LandmarkHead(nn.Module):
-
- def __init__(self, inchannels=512, num_anchors=3):
- super(LandmarkHead, self).__init__()
- self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 10, kernel_size=(1, 1), stride=1, padding=0)
-
- def forward(self, x):
- out = self.conv1x1(x)
- out = out.permute(0, 2, 3, 1).contiguous()
-
- return out.view(out.shape[0], -1, 10)
-
-
-def make_class_head(fpn_num=3, inchannels=64, anchor_num=2):
- classhead = nn.ModuleList()
- for i in range(fpn_num):
- classhead.append(ClassHead(inchannels, anchor_num))
- return classhead
-
-
-def make_bbox_head(fpn_num=3, inchannels=64, anchor_num=2):
- bboxhead = nn.ModuleList()
- for i in range(fpn_num):
- bboxhead.append(BboxHead(inchannels, anchor_num))
- return bboxhead
-
-
-def make_landmark_head(fpn_num=3, inchannels=64, anchor_num=2):
- landmarkhead = nn.ModuleList()
- for i in range(fpn_num):
- landmarkhead.append(LandmarkHead(inchannels, anchor_num))
- return landmarkhead
diff --git a/spaces/sneedium/captcha_pixelplanet/modules/model_language.py b/spaces/sneedium/captcha_pixelplanet/modules/model_language.py
deleted file mode 100644
index a643cd5946240548746b22fc9294db63c2dfe7a1..0000000000000000000000000000000000000000
--- a/spaces/sneedium/captcha_pixelplanet/modules/model_language.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import logging
-import torch.nn as nn
-from fastai.vision import *
-
-from modules.model import _default_tfmer_cfg
-from modules.model import Model
-from modules.transformer import (PositionalEncoding,
- TransformerDecoder,
- TransformerDecoderLayer)
-
-
-class BCNLanguage(Model):
- def __init__(self, config):
- super().__init__(config)
- d_model = ifnone(config.model_language_d_model, _default_tfmer_cfg['d_model'])
- nhead = ifnone(config.model_language_nhead, _default_tfmer_cfg['nhead'])
- d_inner = ifnone(config.model_language_d_inner, _default_tfmer_cfg['d_inner'])
- dropout = ifnone(config.model_language_dropout, _default_tfmer_cfg['dropout'])
- activation = ifnone(config.model_language_activation, _default_tfmer_cfg['activation'])
- num_layers = ifnone(config.model_language_num_layers, 4)
- self.d_model = d_model
- self.detach = ifnone(config.model_language_detach, True)
- self.use_self_attn = ifnone(config.model_language_use_self_attn, False)
- self.loss_weight = ifnone(config.model_language_loss_weight, 1.0)
- self.max_length = config.dataset_max_length + 1 # additional stop token
- self.debug = ifnone(config.global_debug, False)
-
- self.proj = nn.Linear(self.charset.num_classes, d_model, False)
- self.token_encoder = PositionalEncoding(d_model, max_len=self.max_length)
- self.pos_encoder = PositionalEncoding(d_model, dropout=0, max_len=self.max_length)
- decoder_layer = TransformerDecoderLayer(d_model, nhead, d_inner, dropout,
- activation, self_attn=self.use_self_attn, debug=self.debug)
- self.model = TransformerDecoder(decoder_layer, num_layers)
-
- self.cls = nn.Linear(d_model, self.charset.num_classes)
-
- if config.model_language_checkpoint is not None:
- logging.info(f'Read language model from {config.model_language_checkpoint}.')
- self.load(config.model_language_checkpoint)
-
- def forward(self, tokens, lengths):
- """
- Args:
- tokens: (N, T, C) where T is length, N is batch size and C is classes number
- lengths: (N,)
- """
- if self.detach: tokens = tokens.detach()
- embed = self.proj(tokens) # (N, T, E)
- embed = embed.permute(1, 0, 2) # (T, N, E)
- embed = self.token_encoder(embed) # (T, N, E)
- padding_mask = self._get_padding_mask(lengths, self.max_length)
-
- zeros = embed.new_zeros(*embed.shape)
-        query = self.pos_encoder(zeros)
-        location_mask = self._get_location_mask(self.max_length, tokens.device)
-        output = self.model(query, embed,
- tgt_key_padding_mask=padding_mask,
- memory_mask=location_mask,
- memory_key_padding_mask=padding_mask) # (T, N, E)
- output = output.permute(1, 0, 2) # (N, T, E)
-
- logits = self.cls(output) # (N, T, C)
- pt_lengths = self._get_length(logits)
-
- res = {'feature': output, 'logits': logits, 'pt_lengths': pt_lengths,
- 'loss_weight':self.loss_weight, 'name': 'language'}
- return res
diff --git a/spaces/sriramelango/Social_Classification_Public/criterions/label_smoothed_cross_entropy.py b/spaces/sriramelango/Social_Classification_Public/criterions/label_smoothed_cross_entropy.py
deleted file mode 100644
index 73b36e750a0037cad8403e383d790f868b509d24..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/criterions/label_smoothed_cross_entropy.py
+++ /dev/null
@@ -1,343 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass, field
-from typing import Optional
-
-import torch
-import torch.nn.functional as F
-import numpy as np
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-from omegaconf import II
-
-
-@dataclass
-class AjustLabelSmoothedCrossEntropyCriterionConfig(FairseqDataclass):
- label_smoothing: float = field(
- default=0.0,
- metadata={"help": "epsilon for label smoothing, 0 means no label smoothing"},
- )
- report_accuracy: bool = field(
- default=False,
- metadata={"help": "report accuracy metric"},
- )
- ignore_prefix_size: int = field(
- default=0,
- metadata={"help": "Ignore first N tokens"},
- )
- ignore_eos: bool = field(
- default=False,
- metadata={"help": "Ignore eos token"},
- )
- sentence_avg: bool = II("optimization.sentence_avg")
- drop_worst_ratio: float = field(
- default=0.0,
- metadata={"help": "ratio for discarding bad samples"},
- )
- drop_worst_after: int = field(
- default=0,
- metadata={"help": "steps for discarding bad samples"},
- )
- use_rdrop: bool = field(
- default=False, metadata={"help": "use R-Drop"}
- )
- reg_alpha: float = field(
- default=1.0, metadata={"help": "weight for R-Drop"}
- )
- sample_patch_num: int = field(
- default=196, metadata={"help": "sample patchs for v1"}
- )
- constraint_range: Optional[str] = field(
- default=None,
- metadata={"help": "constraint range"}
- )
-
-
-def construct_rdrop_sample(x):
- if isinstance(x, dict):
- for key in x:
- x[key] = construct_rdrop_sample(x[key])
- return x
- elif isinstance(x, torch.Tensor):
- return x.repeat(2, *([1] * (x.dim()-1)))
- elif isinstance(x, int):
- return x * 2
- elif isinstance(x, np.ndarray):
- return x.repeat(2)
- else:
- raise NotImplementedError
-
-
-def kl_loss(p, q):
- p_loss = F.kl_div(p, torch.exp(q), reduction='sum')
- q_loss = F.kl_div(q, torch.exp(p), reduction='sum')
- loss = (p_loss + q_loss) / 2
- return loss
-
-
-def label_smoothed_nll_loss(
- lprobs, target, epsilon, update_num, reduce=True,
- drop_worst_ratio=0.0, drop_worst_after=0, use_rdrop=False, reg_alpha=1.0,
- constraint_masks=None, constraint_start=None, constraint_end=None
-):
- if target.dim() == lprobs.dim() - 1:
- target = target.unsqueeze(-1)
- nll_loss = -lprobs.gather(dim=-1, index=target).squeeze(-1)
- if constraint_masks is not None:
- smooth_loss = -lprobs.masked_fill(~constraint_masks, 0).sum(dim=-1, keepdim=True).squeeze(-1)
- eps_i = epsilon / (constraint_masks.sum(1) - 1 + 1e-6)
- elif constraint_start is not None and constraint_end is not None:
- constraint_range = [0, 1, 2, 3] + list(range(constraint_start, constraint_end))
- smooth_loss = -lprobs[:, constraint_range].sum(dim=-1, keepdim=True).squeeze(-1)
- eps_i = epsilon / (len(constraint_range) - 1 + 1e-6)
- else:
- smooth_loss = -lprobs.sum(dim=-1, keepdim=True).squeeze(-1)
- eps_i = epsilon / (lprobs.size(-1) - 1)
- loss = (1.0 - epsilon - eps_i) * nll_loss + eps_i * smooth_loss
- if drop_worst_ratio > 0 and update_num > drop_worst_after:
- if use_rdrop:
- true_batch_size = loss.size(0) // 2
- _, indices = torch.topk(loss[:true_batch_size], k=int(true_batch_size * (1 - drop_worst_ratio)), largest=False)
- loss = torch.cat([loss[indices], loss[indices+true_batch_size]])
- nll_loss = torch.cat([nll_loss[indices], nll_loss[indices+true_batch_size]])
- lprobs = torch.cat([lprobs[indices], lprobs[indices+true_batch_size]])
- else:
- loss, indices = torch.topk(loss, k=int(loss.shape[0] * (1 - drop_worst_ratio)), largest=False)
- nll_loss = nll_loss[indices]
- lprobs = lprobs[indices]
-
- ntokens = loss.numel()
- nll_loss = nll_loss.sum()
- loss = loss.sum()
- if use_rdrop:
- true_batch_size = lprobs.size(0) // 2
- p = lprobs[:true_batch_size]
- q = lprobs[true_batch_size:]
- if constraint_start is not None and constraint_end is not None:
- constraint_range = [0, 1, 2, 3] + list(range(constraint_start, constraint_end))
- p = p[:, constraint_range]
- q = q[:, constraint_range]
- loss += kl_loss(p, q) * reg_alpha
-
- return loss, nll_loss, ntokens
-
-
-@register_criterion(
- "ajust_label_smoothed_cross_entropy", dataclass=AjustLabelSmoothedCrossEntropyCriterionConfig
-)
-class AjustLabelSmoothedCrossEntropyCriterion(FairseqCriterion):
- def __init__(
- self,
- task,
- sentence_avg,
- label_smoothing,
- ignore_prefix_size=0,
- ignore_eos=False,
- report_accuracy=False,
- drop_worst_ratio=0,
- drop_worst_after=0,
- use_rdrop=False,
- reg_alpha=1.0,
- sample_patch_num=196,
- constraint_range=None
- ):
- super().__init__(task)
- self.sentence_avg = sentence_avg
- self.eps = label_smoothing
- self.ignore_prefix_size = ignore_prefix_size
- self.ignore_eos = ignore_eos
- self.report_accuracy = report_accuracy
- self.drop_worst_ratio = drop_worst_ratio
- self.drop_worst_after = drop_worst_after
- self.use_rdrop = use_rdrop
- self.reg_alpha = reg_alpha
- self.sample_patch_num = sample_patch_num
-
- self.constraint_start = None
- self.constraint_end = None
- if constraint_range is not None:
- constraint_start, constraint_end = constraint_range.split(',')
- self.constraint_start = int(constraint_start)
- self.constraint_end = int(constraint_end)
-
- def forward(self, model, sample, update_num=0, reduce=True):
- """Compute the loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
- if isinstance(sample, list):
- if self.sample_patch_num > 0:
- sample[0]['net_input']['sample_patch_num'] = self.sample_patch_num
- loss_v1, sample_size_v1, logging_output_v1 = self.forward(model, sample[0], update_num, reduce)
- loss_v2, sample_size_v2, logging_output_v2 = self.forward(model, sample[1], update_num, reduce)
- loss = loss_v1 / sample_size_v1 + loss_v2 / sample_size_v2
- sample_size = 1
- logging_output = {
- "loss": loss.data,
- "loss_v1": loss_v1.data,
- "loss_v2": loss_v2.data,
- "nll_loss": logging_output_v1["nll_loss"].data / sample_size_v1 + logging_output_v2["nll_loss"].data / sample_size_v2,
- "ntokens": logging_output_v1["ntokens"] + logging_output_v2["ntokens"],
- "nsentences": logging_output_v1["nsentences"] + logging_output_v2["nsentences"],
- "sample_size": 1,
- "sample_size_v1": sample_size_v1,
- "sample_size_v2": sample_size_v2,
- }
- return loss, sample_size, logging_output
-
- if self.use_rdrop:
- construct_rdrop_sample(sample)
-
- net_output = model(**sample["net_input"])
- loss, nll_loss, ntokens = self.compute_loss(model, net_output, sample, update_num, reduce=reduce)
- sample_size = (
- sample["target"].size(0) if self.sentence_avg else ntokens
- )
- logging_output = {
- "loss": loss.data,
- "nll_loss": nll_loss.data,
- "ntokens": sample["ntokens"],
- "nsentences": sample["nsentences"],
- "sample_size": sample_size,
- }
- if self.report_accuracy:
- n_correct, total = self.compute_accuracy(model, net_output, sample)
- logging_output["n_correct"] = utils.item(n_correct.data)
- logging_output["total"] = utils.item(total.data)
- return loss, sample_size, logging_output
-
- def get_lprobs_and_target(self, model, net_output, sample):
- conf = sample['conf'][:, None, None] if 'conf' in sample and sample['conf'] is not None else 1
- constraint_masks = None
- if "constraint_masks" in sample and sample["constraint_masks"] is not None:
- constraint_masks = sample["constraint_masks"]
- net_output[0].masked_fill_(~constraint_masks, -math.inf)
- if self.constraint_start is not None and self.constraint_end is not None:
- net_output[0][:, :, 4:self.constraint_start] = -math.inf
- net_output[0][:, :, self.constraint_end:] = -math.inf
- lprobs = model.get_normalized_probs(net_output, log_probs=True) * conf
- target = model.get_targets(sample, net_output)
- if self.ignore_prefix_size > 0:
- lprobs = lprobs[:, self.ignore_prefix_size :, :].contiguous()
- target = target[:, self.ignore_prefix_size :].contiguous()
- if constraint_masks is not None:
- constraint_masks = constraint_masks[:, self.ignore_prefix_size :, :].contiguous()
- if self.ignore_eos:
- bsz, seq_len, embed_dim = lprobs.size()
- eos_indices = target.eq(self.task.tgt_dict.eos())
- lprobs = lprobs[~eos_indices].reshape(bsz, seq_len-1, embed_dim)
- target = target[~eos_indices].reshape(bsz, seq_len-1)
- if constraint_masks is not None:
- constraint_masks = constraint_masks[~eos_indices].reshape(bsz, seq_len-1, embed_dim)
- if constraint_masks is not None:
- constraint_masks = constraint_masks.view(-1, constraint_masks.size(-1))
- return lprobs.view(-1, lprobs.size(-1)), target.view(-1), constraint_masks
-
- def compute_loss(self, model, net_output, sample, update_num, reduce=True):
- lprobs, target, constraint_masks = self.get_lprobs_and_target(model, net_output, sample)
- if constraint_masks is not None:
- constraint_masks = constraint_masks[target != self.padding_idx]
- lprobs = lprobs[target != self.padding_idx]
- target = target[target != self.padding_idx]
- loss, nll_loss, ntokens = label_smoothed_nll_loss(
- lprobs,
- target,
- self.eps,
- update_num,
- reduce=reduce,
- drop_worst_ratio=self.drop_worst_ratio,
- drop_worst_after=self.drop_worst_after,
- use_rdrop=self.use_rdrop,
- reg_alpha=self.reg_alpha,
- constraint_masks=constraint_masks,
- constraint_start=self.constraint_start,
- constraint_end=self.constraint_end
- )
- return loss, nll_loss, ntokens
-
- def compute_accuracy(self, model, net_output, sample):
-        lprobs, target, _ = self.get_lprobs_and_target(model, net_output, sample)
- mask = target.ne(self.padding_idx)
- n_correct = torch.sum(
- lprobs.argmax(1).masked_select(mask).eq(target.masked_select(mask))
- )
- total = torch.sum(mask)
- return n_correct, total
-
- @classmethod
- def reduce_metrics(cls, logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = sum(log.get("loss", 0) for log in logging_outputs)
- loss_sum_v1 = sum(log.get("loss_v1", 0) for log in logging_outputs)
- loss_sum_v2 = sum(log.get("loss_v2", 0) for log in logging_outputs)
- nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs)
- ntokens = sum(log.get("ntokens", 0) for log in logging_outputs)
- nsentences = sum(log.get("nsentences", 0) for log in logging_outputs)
- sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
- sample_size_v1 = sum(log.get("sample_size_v1", 0) for log in logging_outputs)
- sample_size_v2 = sum(log.get("sample_size_v2", 0) for log in logging_outputs)
-
- metrics.log_scalar(
- "loss", loss_sum / sample_size, sample_size, round=3
- )
- metrics.log_scalar(
- "loss_v1", loss_sum_v1 / max(sample_size_v1, 1), max(sample_size_v1, 1), round=3
- )
- metrics.log_scalar(
- "loss_v2", loss_sum_v2 / max(sample_size_v2, 1), max(sample_size_v2, 1), round=3
- )
- metrics.log_scalar(
- "nll_loss", nll_loss_sum / sample_size, ntokens, round=3
- )
- metrics.log_derived(
- "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg)
- )
-
- metrics.log_scalar(
- "ntokens", ntokens, 1, round=3
- )
- metrics.log_scalar(
- "nsentences", nsentences, 1, round=3
- )
- metrics.log_scalar(
- "sample_size", sample_size, 1, round=3
- )
- metrics.log_scalar(
- "sample_size_v1", sample_size_v1, 1, round=3
- )
- metrics.log_scalar(
- "sample_size_v2", sample_size_v2, 1, round=3
- )
-
- total = utils.item(sum(log.get("total", 0) for log in logging_outputs))
- if total > 0:
- metrics.log_scalar("total", total)
- n_correct = utils.item(
- sum(log.get("n_correct", 0) for log in logging_outputs)
- )
- metrics.log_scalar("n_correct", n_correct)
- metrics.log_derived(
- "accuracy",
- lambda meters: round(
- meters["n_correct"].sum * 100.0 / meters["total"].sum, 3
- )
- if meters["total"].sum > 0
- else float("nan"),
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
-        to True will improve distributed training speed.
- """
- return True
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/multilingual/data_scripts/utils/strip_sgm.sh b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/multilingual/data_scripts/utils/strip_sgm.sh
deleted file mode 100644
index 7f4f61d7b1a46f51a1221de6b336cb70b5a0b8b3..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/multilingual/data_scripts/utils/strip_sgm.sh
+++ /dev/null
@@ -1 +0,0 @@
-grep "seg id" | sed 's///g' | sed 's/<\/seg>//g'
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/pay_less_attention_paper/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/pay_less_attention_paper/README.md
deleted file mode 100644
index 5adab11f4dc3461f9e7126ac391b04e703616e6b..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/pay_less_attention_paper/README.md
+++ /dev/null
@@ -1,176 +0,0 @@
-# Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)
-
-This page contains pointers to pre-trained models as well as instructions on how to train new models for [our paper](https://arxiv.org/abs/1901.10430).
-
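The released checkpoints below can typically be loaded through fairseq's torch.hub interface. A minimal sketch, assuming the identifiers in the pre-trained models table are registered hub names and that `fairseq`, `sacremoses`, and `subword_nmt` are installed:

```python
# Hedged sketch: load a pre-trained LightConv translation model via torch.hub
# and translate one sentence. The model name and sample sentence are illustrative.
import torch

en2de = torch.hub.load('pytorch/fairseq', 'lightconv.glu.wmt16.en-de',
                       tokenizer='moses', bpe='subword_nmt')
en2de.eval()
print(en2de.translate('Machine learning is great!'))
```

Alternatively, download the tarballs linked below and pass the extracted checkpoint to `fairseq-generate` on the corresponding binarized test set.
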
-## Citation:
-```bibtex
-@inproceedings{wu2018pay,
- title = {Pay Less Attention with Lightweight and Dynamic Convolutions},
- author = {Felix Wu and Angela Fan and Alexei Baevski and Yann Dauphin and Michael Auli},
- booktitle = {International Conference on Learning Representations},
- year = {2019},
- url = {https://arxiv.org/abs/1901.10430},
-}
-```
-
-## Translation
-
-### Pre-trained models
-For some datasets we release models without GLUs, which are faster at inference.
-
-Model | Description | Dataset | Download
----|---|---|---
-`lightconv.no_glu.iwslt14.de-en` | LightConv (without GLUs) | [IWSLT14 German-English](https://wit3.fbk.eu/archive/2014-01/texts/de/en/de-en.tgz) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz) IWSLT14 test: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/iwslt14.de-en.test.tar.bz2)
-`dynamicconv.no_glu.iwslt14.de-en` | DynamicConv (without GLUs) | [IWSLT14 German-English](https://wit3.fbk.eu/archive/2014-01/texts/de/en/de-en.tgz) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz) IWSLT14 test: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/iwslt14.de-en.test.tar.bz2)
-`lightconv.no_glu.wmt16.en-de` | LightConv (without GLUs) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz) newstest2014 (shared vocab): [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`dynamicconv.no_glu.wmt16.en-de` | DynamicConv (without GLUs) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz) newstest2014 (shared vocab): [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`lightconv.glu.wmt16.en-de` | LightConv | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz) newstest2014 (shared vocab): [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`dynamicconv.glu.wmt16.en-de` | DynamicConv | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz) newstest2014 (shared vocab): [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`lightconv.glu.wmt14.en-fr` | LightConv | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz) newstest2014: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2)
-`dynamicconv.glu.wmt14.en-fr` | DynamicConv | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz) newstest2014: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2)
-`lightconv.glu.wmt17.zh-en` | LightConv | [WMT17 Chinese-English](http://statmt.org/wmt17/translation-task.html#Download) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz) newstest2017: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.zh-en.newstest2017.tar.bz2)
-`dynamicconv.glu.wmt17.zh-en` | DynamicConv | [WMT17 Chinese-English](http://statmt.org/wmt17/translation-task.html#Download) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz) newstest2017: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.zh-en.newstest2017.tar.bz2)
-
-### Memory-Efficient CUDA Kernels
-
-Since the PyTorch implementations of Light/Dynamic conv are quite memory intensive, we have developed CUDA kernels that implement the light and dynamic convolution operator in a memory-efficient and performant manner. For large sequence lengths, these kernels save about 50% memory compared to the PyTorch equivalent.
-
-To install the kernels, use the commands below. Once installed, they will automatically be used in place of the PyTorch implementations whenever a light or dynamic convolution is used.
-
-```sh
-# to install lightconv
-cd fairseq/modules/lightconv_layer
-python cuda_function_gen.py
-python setup.py install
-
-# to install dynamicconv
-cd fairseq/modules/dynamicconv_layer
-python cuda_function_gen.py
-python setup.py install
-```
-
-### Example usage (torch.hub)
-
-We require a few additional Python dependencies for preprocessing:
-```bash
-pip install sacremoses subword_nmt
-```
-
-Interactive translation via PyTorch Hub:
-```python
-import torch
-import fairseq  # needed for the isinstance check below
-
-# List available models
-torch.hub.list('pytorch/fairseq') # [..., 'lightconv.glu.wmt17.zh-en', ... ]
-
-# Load the LightConv model trained on WMT'17 Zh-En
-zh2en = torch.hub.load('pytorch/fairseq', 'lightconv.glu.wmt17.zh-en', tokenizer='moses', bpe='subword_nmt')
-
-# The underlying model is available under the *models* attribute
-assert isinstance(zh2en.models[0], fairseq.models.lightconv.LightConvModel)
-
-# Translate a sentence
-zh2en.translate('你好 世界')
-# 'Hello World'
-```
-
-Loading custom models:
-```python
-from fairseq.models.lightconv import LightConvModel
-en2fr = LightConvModel.from_pretrained(
- '/path/to/checkpoints',
- checkpoint_file='checkpoint_best.pt',
- data_name_or_path='data-bin/wmt14_en_fr',
- bpe='subword_nmt',
- bpe_codes='data-bin/wmt14_en_fr/en.code'
-)
-en2fr.translate('Hello world!')
-# 'Bonjour le monde'
-```
-
-### Preprocessing the training datasets
-
-Please follow the instructions in [`examples/translation/README.md`](../translation/README.md) to preprocess the data.
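-
-For reference, the IWSLT14 De-En preprocessing from that guide boils down to roughly the following commands (paths and the worker count are illustrative; check the translation README for the authoritative steps):
-
-```sh
-# download and tokenize IWSLT14 De-En, then binarize it for fairseq
-cd examples/translation/
-bash prepare-iwslt14.sh
-cd ../..
-TEXT=examples/translation/iwslt14.tokenized.de-en
-fairseq-preprocess --source-lang de --target-lang en \
-    --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
-    --destdir data-bin/iwslt14.tokenized.de-en --workers 8
-```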
-
-### Training and evaluation options:
-To use the model without GLU, please set `--encoder-glu 0 --decoder-glu 0`.
-For LightConv, please use `--encoder-conv-type lightweight --decoder-conv-type lightweight`, otherwise the default is DynamicConv.
-For best BLEU results, lenpen may need to be manually tuned.
-
-To use the CUDA kernels, first install the PyTorch modules using the commands
-above. Once the CUDA modules are installed, they will automatically be used
-instead of the PyTorch modules.
-
-### IWSLT14 De-En
-Training and evaluating DynamicConv (without GLU) on a GPU:
-```sh
-# Training
-SAVE="save/dynamic_conv_iwslt"
-mkdir -p $SAVE
-CUDA_VISIBLE_DEVICES=0 $(which fairseq-train) data-bin/iwslt14.tokenized.de-en \
- --clip-norm 0 --optimizer adam --lr 0.0005 \
- --source-lang de --target-lang en --max-tokens 4000 --no-progress-bar \
- --log-interval 100 --stop-min-lr '1e-09' --weight-decay 0.0001 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --lr-scheduler inverse_sqrt \
- --ddp-backend=legacy_ddp \
- --max-update 50000 --warmup-updates 4000 --warmup-init-lr '1e-07' \
- --adam-betas '(0.9, 0.98)' --keep-last-epochs 10 \
- -a lightconv_iwslt_de_en --save-dir $SAVE \
- --dropout 0.3 --attention-dropout 0.1 --weight-dropout 0.1 \
- --encoder-glu 0 --decoder-glu 0
-python scripts/average_checkpoints.py --inputs $SAVE \
- --num-epoch-checkpoints 10 --output "${SAVE}/checkpoint_last10_avg.pt"
-
-# Evaluation
-CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/iwslt14.tokenized.de-en --path "${SAVE}/checkpoint_last10_avg.pt" --batch-size 128 --beam 4 --remove-bpe --lenpen 1 --gen-subset test --quiet
-```
-
-### WMT16 En-De
-Training and evaluating DynamicConv (with GLU) on WMT16 En-De using cosine scheduler on one machine with 8 V100 GPUs:
-```sh
-# Training
-SAVE="save/dynamic_conv_wmt16en2de"
-mkdir -p $SAVE
-python -m torch.distributed.launch --nproc_per_node 8 $(which fairseq-train) \
- data-bin/wmt16_en_de_bpe32k --fp16 --log-interval 100 --no-progress-bar \
- --max-update 30000 --share-all-embeddings --optimizer adam \
- --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --weight-decay 0.0 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --stop-min-lr 1e-09 --update-freq 16 --attention-dropout 0.1 --keep-last-epochs 10 \
- --ddp-backend=legacy_ddp --max-tokens 3584 \
- --lr-scheduler cosine --warmup-init-lr 1e-7 --warmup-updates 10000 \
- --lr-shrink 1 --lr 0.001 --min-lr 1e-7 --warmup-init-lr 1e-07 \
- --t-mult 1 --lr-period-updates 20000 \
- --arch lightconv_wmt_en_de_big --save-dir $SAVE \
- --dropout 0.3 --attention-dropout 0.1 --weight-dropout 0.1 \
- --encoder-glu 1 --decoder-glu 1
-
-# Evaluation
-CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/wmt16.en-de.joined-dict.newstest2014 --path "${SAVE}/checkpoint_best.pt" --batch-size 128 --beam 5 --remove-bpe --lenpen 0.5 --gen-subset test > wmt16_gen.txt
-bash scripts/compound_split_bleu.sh wmt16_gen.txt
-```
-
-### WMT14 En-Fr
-Training DynamicConv (with GLU) on WMT14 En-Fr using cosine scheduler on one machine with 8 V100 GPUs:
-```sh
-# Training
-SAVE="save/dynamic_conv_wmt14en2fr"
-mkdir -p $SAVE
-python -m torch.distributed.launch --nproc_per_node 8 $(which fairseq-train) \
- data-bin/wmt14_en_fr --fp16 --log-interval 100 --no-progress-bar \
- --max-update 30000 --share-all-embeddings --optimizer adam \
- --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --weight-decay 0.0 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --stop-min-lr 1e-09 --update-freq 16 --attention-dropout 0.1 --keep-last-epochs 10 \
- --ddp-backend=legacy_ddp --max-tokens 3584 \
- --lr-scheduler cosine --warmup-init-lr 1e-7 --warmup-updates 10000 \
- --lr-shrink 1 --lr 0.001 --min-lr 1e-7 --warmup-init-lr 1e-07 \
- --t-mult 1 --lr-period-updates 70000 \
- --arch lightconv_wmt_en_fr_big --save-dir $SAVE \
- --dropout 0.1 --attention-dropout 0.1 --weight-dropout 0.1 \
- --encoder-glu 1 --decoder-glu 1
-
-# Evaluation
-CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/wmt14.en-fr.joined-dict.newstest2014 --path "${SAVE}/checkpoint_best.pt" --batch-size 128 --beam 5 --remove-bpe --lenpen 0.9 --gen-subset test
-```
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/stories/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/stories/README.md
deleted file mode 100644
index 588941eddc5f0280f5254affd40ef49de874c885..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/stories/README.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# Hierarchical Neural Story Generation (Fan et al., 2018)
-
-The following commands provide an example of pre-processing data, training a model, and generating text for story generation with the WritingPrompts dataset.
-
-## Pre-trained models
-
-Description | Dataset | Model | Test set(s)
----|---|---|---
-Stories with Convolutional Model ([Fan et al., 2018](https://arxiv.org/abs/1805.04833)) | [WritingPrompts](https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/stories_checkpoint.tar.bz2) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/stories_test.tar.bz2)
-
-We provide sample stories generated by the [convolutional seq2seq model](https://dl.fbaipublicfiles.com/fairseq/data/seq2seq_stories.txt) and [fusion model](https://dl.fbaipublicfiles.com/fairseq/data/fusion_stories.txt) from [Fan et al., 2018](https://arxiv.org/abs/1805.04833). The corresponding prompts for the fusion model can be found [here](https://dl.fbaipublicfiles.com/fairseq/data/fusion_prompts.txt). Note that there are `<unk>` tokens in the file, as we modeled a small full vocabulary (no BPE or pre-training). We did not use these `<unk>` prompts for human evaluation.
-
-## Dataset
-
-The dataset can be downloaded like this:
-
-```bash
-cd examples/stories
-curl https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz | tar xvzf -
-```
-
-and contains a train, test, and valid split. The dataset is described here: https://arxiv.org/abs/1805.04833. We model only the first 1000 words of each story, including one newLine token.
-
-## Example usage
-
-First we will preprocess the dataset. Note that the dataset release is the full data, but the paper models the first 1000 words of each story. Here is example code that trims the dataset to the first 1000 words of each story:
-```python
-data = ["train", "test", "valid"]
-for name in data:
- with open(name + ".wp_target") as f:
- stories = f.readlines()
- stories = [" ".join(i.split()[0:1000]) for i in stories]
- with open(name + ".wp_target", "w") as o:
- for line in stories:
- o.write(line.strip() + "\n")
-```
-
-Once we've trimmed the data we can binarize it and train our model:
-```bash
-# Binarize the dataset:
-export TEXT=examples/stories/writingPrompts
-fairseq-preprocess --source-lang wp_source --target-lang wp_target \
- --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
- --destdir data-bin/writingPrompts --padding-factor 1 --thresholdtgt 10 --thresholdsrc 10
-
-# Train the model:
-fairseq-train data-bin/writingPrompts -a fconv_self_att_wp --lr 0.25 --optimizer nag --clip-norm 0.1 --max-tokens 1500 --lr-scheduler reduce_lr_on_plateau --decoder-attention True --encoder-attention False --criterion label_smoothed_cross_entropy --weight-decay .0000001 --label-smoothing 0 --source-lang wp_source --target-lang wp_target --gated-attention True --self-attention True --project-input True --pretrained False
-
-# Train a fusion model:
-# add the arguments: --pretrained True --pretrained-checkpoint path/to/checkpoint
-
-# Generate:
-# Note: to load the pretrained model at generation time, you need to pass in a model-override argument to communicate to the fusion model at generation time where you have placed the pretrained checkpoint. By default, it will load the exact path of the fusion model's pretrained model from training time. You should use model-override if you have moved the pretrained model (or are using our provided models). If you are generating from a non-fusion model, the model-override argument is not necessary.
-
-fairseq-generate data-bin/writingPrompts --path /path/to/trained/model/checkpoint_best.pt --batch-size 32 --beam 1 --sampling --sampling-topk 10 --temperature 0.8 --nbest 1 --model-overrides "{'pretrained_checkpoint':'/path/to/pretrained/model/checkpoint'}"
-```
-
-## Citation
-```bibtex
-@inproceedings{fan2018hierarchical,
- title = {Hierarchical Neural Story Generation},
- author = {Fan, Angela and Lewis, Mike and Dauphin, Yann},
- booktitle = {Conference of the Association for Computational Linguistics (ACL)},
- year = 2018,
-}
-```
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/sentence_ranking.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/sentence_ranking.py
deleted file mode 100644
index d4c76341d4d87e6d0da21ac89e833ce0bda13a0c..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/sentence_ranking.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-import torch.nn.functional as F
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-
-
-@register_criterion("sentence_ranking")
-class SentenceRankingCriterion(FairseqCriterion):
- def __init__(self, task, ranking_head_name, save_predictions, num_classes):
- super().__init__(task)
- self.ranking_head_name = ranking_head_name
- if save_predictions is not None:
- self.prediction_h = open(save_predictions, "w")
- else:
- self.prediction_h = None
- self.num_classes = num_classes
-
- def __del__(self):
- if self.prediction_h is not None:
- self.prediction_h.close()
-
- @staticmethod
- def add_args(parser):
- # fmt: off
- parser.add_argument('--save-predictions', metavar='FILE',
- help='file to save predictions to')
- parser.add_argument('--ranking-head-name',
- default='sentence_classification_head',
- help='name of the ranking head to use')
- # fmt: on
-
- def forward(self, model, sample, reduce=True):
- """Compute ranking loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
- assert (
- hasattr(model, "classification_heads")
- and self.ranking_head_name in model.classification_heads
- ), "model must provide sentence ranking head for --criterion=sentence_ranking"
-
- scores = []
- for idx in range(self.num_classes):
- score, _ = model(
- **sample["net_input{idx}".format(idx=idx + 1)],
- classification_head_name=self.ranking_head_name,
- )
- scores.append(score)
-
- logits = torch.cat(scores, dim=1)
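-        # logits has shape (batch, num_classes): one score column per candidate input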
- sample_size = logits.size(0)
-
- if "target" in sample:
- targets = model.get_targets(sample, [logits]).view(-1)
- lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)
- loss = F.nll_loss(lprobs, targets, reduction="sum")
- else:
- targets = None
- loss = torch.tensor(0.0, requires_grad=True)
-
- if self.prediction_h is not None:
- preds = logits.argmax(dim=1)
- for i, (id, pred) in enumerate(zip(sample["id"].tolist(), preds.tolist())):
- if targets is not None:
- label = targets[i].item()
- print("{}\t{}\t{}".format(id, pred, label), file=self.prediction_h)
- else:
- print("{}\t{}".format(id, pred), file=self.prediction_h)
-
- logging_output = {
- "loss": loss.data,
- "ntokens": sample["ntokens"],
- "nsentences": sample_size,
- "sample_size": sample_size,
- }
- if targets is not None:
- logging_output["ncorrect"] = (logits.argmax(dim=1) == targets).sum()
-
- return loss, sample_size, logging_output
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = sum(log.get("loss", 0) for log in logging_outputs)
- ntokens = sum(log.get("ntokens", 0) for log in logging_outputs)
- nsentences = sum(log.get("nsentences", 0) for log in logging_outputs)
- sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
-
- metrics.log_scalar(
- "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
- )
- if sample_size != ntokens:
- metrics.log_scalar(
- "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3
- )
-
- if len(logging_outputs) > 0 and "ncorrect" in logging_outputs[0]:
- ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs)
- metrics.log_scalar(
- "accuracy", 100.0 * ncorrect / nsentences, nsentences, round=1
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
-        to True will improve distributed training speed.
- """
- return True
diff --git a/spaces/sub314xxl/MetaGPT/metagpt/tools/ut_writer.py b/spaces/sub314xxl/MetaGPT/metagpt/tools/ut_writer.py
deleted file mode 100644
index 2f4e1ec217a3077d480a917627c835ac6a31a420..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MetaGPT/metagpt/tools/ut_writer.py
+++ /dev/null
@@ -1,290 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-import json
-from pathlib import Path
-
-from metagpt.provider.openai_api import OpenAIGPTAPI as GPTAPI
-
-ICL_SAMPLE = '''接口定义:
-```text
-接口名称:元素打标签
-接口路径:/projects/{project_key}/node-tags
-Method:POST
-
-请求参数:
-路径参数:
-project_key
-
-Body参数:
-名称 类型 是否必须 默认值 备注
-nodes array 是 节点
- node_key string 否 节点key
- tags array 否 节点原标签列表
- node_type string 否 节点类型 DATASET / RECIPE
-operations array 是
- tags array 否 操作标签列表
- mode string 否 操作类型 ADD / DELETE
-
-返回数据:
-名称 类型 是否必须 默认值 备注
-code integer 是 状态码
-msg string 是 提示信息
-data object 是 返回数据
-list array 否 node列表 true / false
-node_type string 否 节点类型 DATASET / RECIPE
-node_key string 否 节点key
-```
-
-单元测试:
-```python
-@pytest.mark.parametrize(
-"project_key, nodes, operations, expected_msg",
-[
-("project_key", [{"node_key": "dataset_001", "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["new_tag1"], "mode": "ADD"}], "success"),
-("project_key", [{"node_key": "dataset_002", "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["tag1"], "mode": "DELETE"}], "success"),
-("", [{"node_key": "dataset_001", "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["new_tag1"], "mode": "ADD"}], "缺少必要的参数 project_key"),
-(123, [{"node_key": "dataset_001", "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["new_tag1"], "mode": "ADD"}], "参数类型不正确"),
-("project_key", [{"node_key": "a"*201, "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["new_tag1"], "mode": "ADD"}], "请求参数超出字段边界")
-]
-)
-def test_node_tags(project_key, nodes, operations, expected_msg):
- pass
-```
-以上是一个 接口定义 与 单元测试 样例。
-接下来,请你扮演一个Google 20年经验的专家测试经理,在我给出 接口定义 后,回复我单元测试。有几个要求
-1. 只输出一个 `@pytest.mark.parametrize` 与对应的test_<接口名>函数(内部pass,不实现)
--- 函数参数中包含expected_msg,用于结果校验
-2. 生成的测试用例使用较短的文本或数字,并且尽量紧凑
-3. 如果需要注释,使用中文
-
-如果你明白了,请等待我给出接口定义,并只回答"明白",以节省token
-'''
-
-ACT_PROMPT_PREFIX = '''参考测试类型:如缺少请求参数,字段边界校验,字段类型不正确
-请在一个 `@pytest.mark.parametrize` 作用域内输出10个测试用例
-```text
-'''
-
-YFT_PROMPT_PREFIX = '''参考测试类型:如SQL注入,跨站点脚本(XSS),非法访问和越权访问,认证和授权,参数验证,异常处理,文件上传和下载
-请在一个 `@pytest.mark.parametrize` 作用域内输出10个测试用例
-```text
-'''
-
-OCR_API_DOC = '''```text
-接口名称:OCR识别
-接口路径:/api/v1/contract/treaty/task/ocr
-Method:POST
-
-请求参数:
-路径参数:
-
-Body参数:
-名称 类型 是否必须 默认值 备注
-file_id string 是
-box array 是
-contract_id number 是 合同id
-start_time string 否 yyyy-mm-dd
-end_time string 否 yyyy-mm-dd
-extract_type number 否 识别类型 1-导入中 2-导入后 默认1
-
-返回数据:
-名称 类型 是否必须 默认值 备注
-code integer 是
-message string 是
-data object 是
-```
-'''
-
-
-class UTGenerator:
-    """UT generator: builds unit tests from an API document."""
-
- def __init__(self, swagger_file: str, ut_py_path: str, questions_path: str,
- chatgpt_method: str = "API", template_prefix=YFT_PROMPT_PREFIX) -> None:
-        """Initialize the UT generator.
-
-        Args:
-            swagger_file: path to the swagger file
-            ut_py_path: directory where the generated test cases are stored
-            questions_path: directory where the prompt templates are stored, for later troubleshooting
-            chatgpt_method: API
-            template_prefix: prompt template to use, defaults to YFT_PROMPT_PREFIX
-        """
- self.swagger_file = swagger_file
- self.ut_py_path = ut_py_path
- self.questions_path = questions_path
- assert chatgpt_method in ["API"], "非法chatgpt_method"
- self.chatgpt_method = chatgpt_method
-
-        # ICL: In-Context Learning; an example is given here and GPT is asked to imitate it
- self.icl_sample = ICL_SAMPLE
- self.template_prefix = template_prefix
-
- def get_swagger_json(self) -> dict:
-        """Load the Swagger JSON from a local file."""
- with open(self.swagger_file, "r", encoding="utf-8") as file:
- swagger_json = json.load(file)
- return swagger_json
-
- def __para_to_str(self, prop, required, name=""):
- name = name or prop["name"]
- ptype = prop["type"]
- title = prop.get("title", "")
- desc = prop.get("description", "")
- return f'{name}\t{ptype}\t{"是" if required else "否"}\t{title}\t{desc}'
-
- def _para_to_str(self, prop):
- required = prop.get("required", False)
- return self.__para_to_str(prop, required)
-
- def para_to_str(self, name, prop, prop_object_required):
- required = name in prop_object_required
- return self.__para_to_str(prop, required, name)
-
- def build_object_properties(self, node, prop_object_required, level: int = 0) -> str:
-        """Recursively render the sub-properties of object and array[object] types.
-
-        Args:
-            node (_type_): value of the child item
-            prop_object_required (_type_): required property names
-            level: current recursion depth
-        """
-
- doc = ""
-
- def dive_into_object(node):
-            """If the node is an object, recursively render its sub-properties."""
- if node.get("type") == "object":
- sub_properties = node.get("properties", {})
- return self.build_object_properties(sub_properties, prop_object_required, level=level + 1)
- return ""
-
- if node.get("in", "") in ["query", "header", "formData"]:
- doc += f'{" " * level}{self._para_to_str(node)}\n'
- doc += dive_into_object(node)
- return doc
-
- for name, prop in node.items():
- doc += f'{" " * level}{self.para_to_str(name, prop, prop_object_required)}\n'
- doc += dive_into_object(prop)
- if prop["type"] == "array":
- items = prop.get("items", {})
- doc += dive_into_object(items)
- return doc
-
- def get_tags_mapping(self) -> dict:
-        """Map tags to their paths.
-
-        Returns:
-            Dict: tag -> path mapping
-        """
- swagger_data = self.get_swagger_json()
- paths = swagger_data["paths"]
- tags = {}
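-        # resulting structure: {tag: {path: {method: method_obj}}}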
-
- for path, path_obj in paths.items():
- for method, method_obj in path_obj.items():
- for tag in method_obj["tags"]:
- if tag not in tags:
- tags[tag] = {}
- if path not in tags[tag]:
- tags[tag][path] = {}
- tags[tag][path][method] = method_obj
-
- return tags
-
- def generate_ut(self, include_tags) -> bool:
-        """Generate the test case files."""
- tags = self.get_tags_mapping()
- for tag, paths in tags.items():
- if include_tags is None or tag in include_tags:
- self._generate_ut(tag, paths)
- return True
-
- def build_api_doc(self, node: dict, path: str, method: str) -> str:
- summary = node["summary"]
-
- doc = f"接口名称:{summary}\n接口路径:{path}\nMethod:{method.upper()}\n"
- doc += "\n请求参数:\n"
- if "parameters" in node:
- parameters = node["parameters"]
- doc += "路径参数:\n"
-
- # param["in"]: path / formData / body / query / header
- for param in parameters:
- if param["in"] == "path":
- doc += f'{param["name"]} \n'
-
- doc += "\nBody参数:\n"
- doc += "名称\t类型\t是否必须\t默认值\t备注\n"
- for param in parameters:
- if param["in"] == "body":
- schema = param.get("schema", {})
- prop_properties = schema.get("properties", {})
- prop_required = schema.get("required", [])
- doc += self.build_object_properties(prop_properties, prop_required)
- else:
- doc += self.build_object_properties(param, [])
-
-        # render the response data section
- doc += "\n返回数据:\n"
- doc += "名称\t类型\t是否必须\t默认值\t备注\n"
- responses = node["responses"]
- response = responses.get("200", {})
- schema = response.get("schema", {})
- properties = schema.get("properties", {})
- required = schema.get("required", {})
-
- doc += self.build_object_properties(properties, required)
- doc += "\n"
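-        # closes the ```text fence opened by the prompt prefix (ACT_PROMPT_PREFIX / YFT_PROMPT_PREFIX)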
- doc += "```"
-
- return doc
-
- def _store(self, data, base, folder, fname):
- file_path = self.get_file_path(Path(base) / folder, fname)
- with open(file_path, "w", encoding="utf-8") as file:
- file.write(data)
-
- def ask_gpt_and_save(self, question: str, tag: str, fname: str):
-        """Build the question, then store both the question and the answer."""
- messages = [self.icl_sample, question]
- result = self.gpt_msgs_to_code(messages=messages)
-
- self._store(question, self.questions_path, tag, f"{fname}.txt")
- self._store(result, self.ut_py_path, tag, f"{fname}.py")
-
- def _generate_ut(self, tag, paths):
-        """Process the structure under the given paths.
-
-        Args:
-            tag (_type_): module name
-            paths (_type_): path objects
-        """
- for path, path_obj in paths.items():
- for method, node in path_obj.items():
- summary = node["summary"]
- question = self.template_prefix
- question += self.build_api_doc(node, path, method)
- self.ask_gpt_and_save(question, tag, summary)
-
- def gpt_msgs_to_code(self, messages: list) -> str:
-        """Dispatch according to the configured call method."""
- result = ''
- if self.chatgpt_method == "API":
- result = GPTAPI().ask_code(msgs=messages)
-
- return result
-
- def get_file_path(self, base: Path, fname: str):
-        """Build the output file path, creating the directory if needed.
-
-        Args:
-            base (str): base directory
-            fname (str): file name
-        """
- path = Path(base)
- path.mkdir(parents=True, exist_ok=True)
- file_path = path / fname
- return str(file_path)
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/ATube Catcher 1.0.236 Serial Key [BEST].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/ATube Catcher 1.0.236 Serial Key [BEST].md
deleted file mode 100644
index 88118a39728df54bec29cdb2db40becc9c8b930c..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/ATube Catcher 1.0.236 Serial Key [BEST].md
+++ /dev/null
@@ -1,72 +0,0 @@
-
-
How to Download and Install aTube Catcher 1.0.236 with Serial Key
-
aTube Catcher is a powerful and easy-to-use software that allows you to download videos from various online platforms, such as YouTube, Vimeo, Dailymotion, etc. You can also convert the downloaded videos to different formats, such as MP4, AVI, WMV, MOV, etc., and burn them to DVDs or CDs. Moreover, you can also record your screen, audio, or webcam with aTube Catcher.
-
If you want to enjoy the full features of aTube Catcher without any limitations, you need to activate it with a serial key. In this article, we will show you how to download and install aTube Catcher 1.0.236 with serial key in a few simple steps.
The first step is to download the latest version of aTube Catcher from its official website[^1^]. You can also use the following link to download it directly:
Once you click on the link, you will see a download button on the webpage. Click on it and save the file on your computer.
-
Step 2: Install aTube Catcher 1.0.236
-
The next step is to install aTube Catcher on your computer. To do that, follow these instructions:
-
-
Locate the downloaded file and double-click on it to run it.
-
Follow the on-screen instructions and accept the terms and conditions.
-
Choose the destination folder where you want to install aTube Catcher.
-
Click on the Install button and wait for the installation process to complete.
-
Click on the Finish button when done.
-
-
Step 3: Activate aTube Catcher 1.0.236 with Serial Key
-
The final step is to activate aTube Catcher with a serial key. To do that, follow these steps:
-
-
Launch aTube Catcher from your desktop or start menu.
-
Click on the Help menu and select Enter Registration Code.
-
Enter the following serial key in the text box:
-9M7KNP-CATNCA-LKBT78
-
Click on the OK button and enjoy your activated aTube Catcher.
-
-
Congratulations! You have successfully downloaded and installed aTube Catcher 1.0.236 with serial key. Now you can use it to download, convert, record, and burn videos as you wish.
-
-
How to Use aTube Catcher 1.0.236
-
Now that you have activated aTube Catcher, you can start using it to download and manage your videos. Here are some of the main features and functions of aTube Catcher:
-
Download Videos
-
To download videos from online platforms, follow these steps:
-
-
-
Copy the URL of the video that you want to download from your browser.
-
Paste it in the URL box of aTube Catcher.
-
Select the output format and quality that you prefer from the drop-down menus.
-
Click on the Download button and wait for the download to finish.
-
You can find the downloaded video in the destination folder that you chose during the installation.
-
-
Convert Videos
-
To convert videos to different formats, follow these steps:
-
-
Click on the Video Converter button on the main interface of aTube Catcher.
-
Add the video files that you want to convert by clicking on the Add button or dragging and dropping them.
-
Select the output format and quality that you want from the drop-down menus.
-
Click on the Convert button and wait for the conversion to finish.
-
You can find the converted video files in the destination folder that you chose during the installation.
-
-
Record Screen, Audio, or Webcam
-
To record your screen, audio, or webcam with aTube Catcher, follow these steps:
-
-
Click on the Screen Record button on the main interface of aTube Catcher.
-
Select the source that you want to record from the drop-down menu (Screen, Audio, or Webcam).
-
Adjust the settings and options according to your preferences (such as resolution, frame rate, audio quality, etc.).
-
Click on the Record button and start recording your activity.
-
Click on the Stop button when you are done.
-
You can find the recorded file in the destination folder that you chose during the installation.
-
-
Burn Videos to DVD or CD
-
To burn videos to DVD or CD with aTube Catcher, follow these steps:
-
-
Click on the DVD/CD Creator button on the main interface of aTube Catcher.
-
Add the video files that you want to burn by clicking on the Add button or dragging and dropping them.
-
Select the output format and quality that you want from the drop-down menus (DVD or CD).
-
Insert a blank DVD or CD into your drive and select it from the drop-down menu.
-
Click on the Burn button and wait for the burning process to finish.
-
You can eject your DVD or CD when it is done.
-
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Caterpillar Et 2010 Factory Password _HOT_ Keygen.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Caterpillar Et 2010 Factory Password _HOT_ Keygen.md
deleted file mode 100644
index 982f308e3b2a9bf3ad140a10abc39e1ee2452eb0..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Caterpillar Et 2010 Factory Password _HOT_ Keygen.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
it is possible to mount ios device using imgburn. the role of imgburn is to burn cd/dvd for windows. imgburn is a free data imaging tool that can be used to create and mount forensic images. the main features of imgburn are that it can mount images and create a cd/dvd image for windows. the final cd/dvd image is created by imgburn and is then mounted on the windows machine. on the other hand it is possible to mount ios device using imgburn. imgburn will search for the boot partition of the ios device to mount it.
-
the file system of the ios device is mounted using a fat filesystem which is a simple file system that is very easy to use. most of the time forensic investigators prefer to mount the ios device using a fat file system. fat is a file system which is very easy to use and has no complex algorithms. the main reason for using fat is the simplicity and ease of use. another reason for using a fat is that the ios device stores data in a fat file system. this means that the fat file system can be used to extract data from the device.
the way my forensic process works is i review the data on the device and then select the type of analysis to perform. i would then attempt to extract data from the device. if that didnt work i would use a recovery program. once i have the data on a machine i would then extract it and do a side-by-side comparison to determine what data is hidden and what data is not hidden. hope that helps.
-
i have found a hidden bios on the samsung galaxy s4 gt-i9505. the link to the bios dump is: there are also links to the other phones that i have found hidden bios’s. i have a sample list of the phones and the hidden bios’s that i have found: > encase forensic v7 crack.iso
the problem with the bios is that its hidden. the bios chip has a secret key that can open the bios chip. if you have this key then you can extract the data from the bios. if you have the key then its a matter of locating the bios chip and extracting it. if you have the bios dump then you can compare it to the bios of a phone that you have no hidden bios. if you have the bios dump then you can extract the data from the bios and do a side-by-side comparison.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Robot Studio 5.15.02 25 _HOT_.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Robot Studio 5.15.02 25 _HOT_.md
deleted file mode 100644
index 1e400ecef162a5e95268ba1a677870fa2ea8a4fc..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Robot Studio 5.15.02 25 _HOT_.md
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
How to use RobotStudio 5.15.02 25 for offline programming and simulation of ABB robots
-
RobotStudio is a software tool developed by ABB Robotics that allows users to create, simulate and test a complete robot installation in a virtual 3D environment without having to visit or disturb their actual production line. RobotStudio 5.15.02 25 is the latest version of RobotStudio that was released in April 2023 and includes several new features and improvements.
-
In this article, we will show you how to use RobotStudio 5.15.02 25 for offline programming and simulation of ABB robots, such as the IRB140 model. We will cover the following topics:
How to download and install RobotStudio 5.15.02 25 and RobotWare
-
How to create a new station and add a robot, a tool and a work object
-
How to program the robot using RAPID language and graphical editors
-
How to simulate the robot motion and check for collisions and errors
-
How to export the program to a real robot controller
-
-
How to download and install RobotStudio 5.15.02 25 and RobotWare
-
To use RobotStudio 5.15.02 25, you need to have a valid subscription and activation key from ABB Robotics. You can request a free trial or purchase a subscription from the ABB Robotics website[^1^]. You also need to have RobotWare installed on your computer, which is the software that runs on the real robot controller. RobotWare can be installed from RobotApps within RobotStudio.
-
To download and install RobotStudio 5.15.02 25, follow these steps:
Select the RobotStudio 2022.3.2 image file and click on Download.
-
Save the file on your computer and run it as an administrator.
-
Follow the instructions on the screen and choose the installation type (Minimal, Complete or Custom).
-
When prompted, enter your activation key and click on Activate.
-
Wait for the installation to finish and launch RobotStudio from the Start menu or desktop shortcut.
-
-
How to create a new station and add a robot, a tool and a work object
-
A station is a virtual representation of your robot installation that contains all the components and settings that you need to program and simulate your robot. To create a new station and add a robot, a tool and a work object, follow these steps:
-
-
In RobotStudio, click on File > New > Station.
-
In the Station Explorer panel on the left, right-click on Controllers and select Add Controller.
-
In the Add Controller dialog box, select the type of controller that matches your real robot controller (e.g., IRC5) and click on OK.
-
In the Station Explorer panel, right-click on Robots under your controller and select Add Robot.
-
In the Add Robot dialog box, select the type of robot that matches your real robot (e.g., IRB140) and click on OK.
-
In the Station Explorer panel, right-click on Tools under your robot and select Add Tool.
-
In the Add Tool dialog box, select a tool from the library or browse for a custom tool file and click on OK.
-
In the Station Explorer panel, right-click on Work Objects under your controller and select Add Work Object.
-
In the Add Work Object dialog box, select a work object from the library or browse for a custom work object file and click on OK.
-
In the Graphics Window panel on the right, you can see your station with all the components that you added. You can use the mouse buttons and scroll wheel to zoom, pan and rotate the view.
-
-
-
How to program the
-
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/demo/analyze/track/__init__.py b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/demo/analyze/track/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/tappyness1/error_analysis_obj_det/README.md b/spaces/tappyness1/error_analysis_obj_det/README.md
deleted file mode 100644
index c74a5a7bbad759755d9a6b511de672135a8666b0..0000000000000000000000000000000000000000
--- a/spaces/tappyness1/error_analysis_obj_det/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Error Analysis Obj Det
-emoji: 🚀
-colorFrom: blue
-colorTo: yellow
-sdk: streamlit
-python_version: 3.8.9
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Autodesk AutoCAD 2018.0.2 Final (x86 X64) Keygen - [SH] Keygen LINK.md b/spaces/terfces0erbo/CollegeProjectV2/Autodesk AutoCAD 2018.0.2 Final (x86 X64) Keygen - [SH] Keygen LINK.md
deleted file mode 100644
index e06a9e7bda860f1c2ec94093d38015c9d540dab2..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Autodesk AutoCAD 2018.0.2 Final (x86 X64) Keygen - [SH] Keygen LINK.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Autodesk AutoCAD 2018.0.2 Final (x86 X64) Keygen - [SH] Keygen
-
- d5da3c52bf
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/CLIP STUDIO PAINT EX 1.9 Setup License Key Full [Latest].md b/spaces/terfces0erbo/CollegeProjectV2/CLIP STUDIO PAINT EX 1.9 Setup License Key Full [Latest].md
deleted file mode 100644
index da07b9afa039294ed4c0c08751f7df040eaab755..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/CLIP STUDIO PAINT EX 1.9 Setup License Key Full [Latest].md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
How to Download and Install CLIP STUDIO PAINT EX 1.9 with License Key
-
CLIP STUDIO PAINT EX is a powerful and versatile software for creating digital art, comics, animation, and more. It offers a wide range of tools and features to suit any style and workflow. Whether you are a beginner or a professional, you can enjoy the benefits of CLIP STUDIO PAINT EX with its easy-to-use interface and customizable settings.
-
CLIP STUDIO PAINT EX 1.9 Setup License Key Full [Latest]
In this article, we will show you how to download and install CLIP STUDIO PAINT EX 1.9 with a license key, which is the latest version of the software as of April 2023. This version includes some new and improved features, such as:
-
-
A new vector eraser tool that can erase any part of a vector layer without affecting the rest.
-
A new colorize feature that can automatically color your line art based on your settings.
-
A new animation timeline that can display multiple layers and frames at once.
-
A new export option that can export your animation as a GIF file.
-
And more!
-
-
To download and install CLIP STUDIO PAINT EX 1.9 with a license key, follow these steps:
-
-
Go to the official website of CLIP STUDIO PAINT and click on the "Download" button.
-
Select your operating system (Windows or Mac) and your language.
-
Enter your email address and click on the "Send" button. You will receive an email with a download link and a license key.
-
Click on the download link and save the file to your computer.
-
Run the installer and follow the instructions on the screen.
-
When prompted, enter your license key and click on the "Activate" button.
-
Enjoy using CLIP STUDIO PAINT EX 1.9!
-
-
If you have any questions or issues with the installation process, you can contact the customer support team of CLIP STUDIO PAINT through their website or social media channels. They will be happy to assist you with any problem you may encounter.
-
CLIP STUDIO PAINT EX 1.9 is a great software for creating stunning digital art, comics, animation, and more. It has everything you need to unleash your creativity and express your vision. Download it today and see for yourself what it can do for you!
-
-
-
One of the best features of CLIP STUDIO PAINT EX 1.9 is its compatibility with various devices and formats. You can use it on your PC, tablet, or smartphone, and you can import and export files in various formats, such as PSD, PNG, JPG, BMP, TIFF, PDF, EPUB, and more. You can also sync your files across different devices using the CLIP STUDIO cloud service. This way, you can access your work anytime and anywhere.
-
Another great feature of CLIP STUDIO PAINT EX 1.9 is its extensive library of resources and materials. You can browse and download thousands of brushes, textures, patterns, fonts, 3D models, and more from the CLIP STUDIO ASSETS store. You can also create your own materials and share them with other users. You can also access tutorials and tips from professional artists and experts on the CLIP STUDIO TIPS website. You can learn new skills and techniques to improve your art and workflow.
-
CLIP STUDIO PAINT EX 1.9 is not only a software for creating digital art, comics, animation, and more. It is also a software for connecting with other artists and enthusiasts. You can join the CLIP STUDIO community and interact with millions of users from around the world. You can share your work, get feedback, join contests, participate in events, and more. You can also follow your favorite artists and discover new ones. You can also get inspired by the amazing works of others and find new ideas for your own projects.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Club International Magazine Online BEST.md b/spaces/terfces0erbo/CollegeProjectV2/Club International Magazine Online BEST.md
deleted file mode 100644
index d8b6e629b857694a4c62d52d021f904792516821..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Club International Magazine Online BEST.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
-Collectible magazine delivered to your door. A private club for inquisitive minds built around. magazine, application and . Subscribe on the site CITY. Magazine ''City''.
-Download city magazine, city magazine app, city app, city magazine app, city magazine app, city magazine app, city magazine, app.
-City magazine.
-In the City No 02(22), 2012. Magazine City. 8a78ff9644
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Free Sainik Full Movie Download Hindi Mp4 EXCLUSIVE.md b/spaces/terfces0erbo/CollegeProjectV2/Free Sainik Full Movie Download Hindi Mp4 EXCLUSIVE.md
deleted file mode 100644
index 9663d35679ccbc04f0170e1a1fd18badfe66bcd3..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Free Sainik Full Movie Download Hindi Mp4 EXCLUSIVE.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-10-Dec-2019 - Sainik (1993) Full Hindi Movie | Akshay Kumar, Ashwini ... dubbed movies to watch online and download in HD.. tamil new movie free . ... download, Sainik 1993 HD Mobile movie, Sainik 1993 HD Mp4 movie, ... 1fdad05405
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/HOT! Apostilas De Ingles Kumon .pdf.md b/spaces/terfces0erbo/CollegeProjectV2/HOT! Apostilas De Ingles Kumon .pdf.md
deleted file mode 100644
index 1111f957931a3e949d170d6bf8294f7c7cd6e259..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/HOT! Apostilas De Ingles Kumon .pdf.md
+++ /dev/null
@@ -1,52 +0,0 @@
-
-
-links to PDF. EOF.
-
-2.
-
-A:
-
-You are asking for details about the commands, but you did not specify what language or environment.
-
-For this reason, I give a general answer.
-
-You can always get the source of a Bash script by
-
-$ man -k "command"
-
-or
-
-$ info "#command"
-
-$ grep command /usr/share/doc/packagename/README.Debian.gz
-
-To get a list of all the commands used, either type
-
-$ man --list
-
-$ man --section
-
-and read from the bottom up.
-
-MONROE, LA (KTRK) -- A 26-year-old man is dead after he was shot while working as a bartender at a Monroe establishment.
-
-The incident happened at the New Orleans Lounge on Piety Street.
-
-The Monroe Police Department says there was a dispute inside the lounge over a parking spot.
-
-After a dispute, one of the suspects pulled out a gun and shot the victim twice.
-
-The victim was transported to a local hospital where he later died.
-
-The other suspect fled the scene after the shooting.
-
-The Monroe Police Department says the victim's identity has not been released at this time.
-
-An investigation into the incident is ongoing.
-
-[Cardiac arrest in a patient with post-traumatic stress disorder].
-
-A 40-year-old male was found in cardiac arrest by his roommate. He was admitted to the emergency department of the Tokushukai Medico-Psychiatric Hospital, and resuscitation was attempted. He was diagnosed with cardiogenic shock on admission and was transferred to the intensive care unit. The patient was found to have a tracheostomy tube, a nasogastric tube and a central venous line. The cause of cardiac arrest was discussed by a team including a psychiatrist and cardiologist and the patient was diagnosed with post-traumatic stress disorder (PTSD). The patient was treated with diazepam. He was weaned from the ventilator but he was discharged from the intensive care unit on the sixth hospital day. The patient was readmitted to the hospital on the 23rd hospital day with a staphylococcus infection in the stoma site of the tracheostomy tube. He was diagnosed with cardiogenic shock again and was treated with 4fefd39f24
-
-
-
diff --git a/spaces/theekshana/boardpac_chat_app_test/README.md b/spaces/theekshana/boardpac_chat_app_test/README.md
deleted file mode 100644
index 0fbd7d650584a138bdbe91bfdbf624e83f90a726..0000000000000000000000000000000000000000
--- a/spaces/theekshana/boardpac_chat_app_test/README.md
+++ /dev/null
@@ -1,157 +0,0 @@
----
-title: Boardpac Chat App Test
-emoji: 😻
-colorFrom: gray
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-# privateGPT
-Ask questions to your documents without an internet connection, using the power of LLMs. 100% private, no data leaves your execution environment at any point. You can ingest documents and ask questions without an internet connection!
-
-Built with [LangChain](https://github.com/hwchase17/langchain), [GPT4All](https://github.com/nomic-ai/gpt4all), [LlamaCpp](https://github.com/ggerganov/llama.cpp), [Chroma](https://www.trychroma.com/) and [SentenceTransformers](https://www.sbert.net/).
-
-
-
-### how to run
-python -m streamlit run app.py
-
-# Environment Setup
-In order to set your environment up to run the code here, first install all requirements:
-
-```shell
-pip3 install -r requirements.txt
-```
-
-Then, download the LLM model and place it in a directory of your choice:
-- LLM: default to [ggml-gpt4all-j-v1.3-groovy.bin](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin). If you prefer a different GPT4All-J compatible model, just download it and reference it in your `.env` file.
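-
-For example, one way to fetch the default model into a local `models/` folder from a shell (the folder name is only a convention here; point `MODEL_PATH` at wherever you put the file):
-
-```shell
-mkdir -p models
-wget https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin -P models/
-```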
-
-Copy the `example.env` template into `.env`
-```shell
-cp example.env .env
-```
-
-and edit the variables appropriately in the `.env` file.
-```
-MODEL_TYPE: supports LlamaCpp or GPT4All
-PERSIST_DIRECTORY: is the folder you want your vectorstore in
-MODEL_PATH: Path to your GPT4All or LlamaCpp supported LLM
-MODEL_N_CTX: Maximum token limit for the LLM model
-MODEL_N_BATCH: Number of tokens in the prompt that are fed into the model at a time. Optimal value differs a lot depending on the model (8 works well for GPT4All, and 1024 is better for LlamaCpp)
-EMBEDDINGS_MODEL_NAME: SentenceTransformers embeddings model name (see https://www.sbert.net/docs/pretrained_models.html)
-TARGET_SOURCE_CHUNKS: The amount of chunks (sources) that will be used to answer a question
-```
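-
-As a concrete, purely illustrative example, a GPT4All-based setup might use values like these (adjust the paths and model names to your environment):
-
-```
-MODEL_TYPE=GPT4All
-PERSIST_DIRECTORY=db
-MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
-MODEL_N_CTX=1000
-MODEL_N_BATCH=8
-EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
-TARGET_SOURCE_CHUNKS=4
-```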
-
-Note: because of the way `langchain` loads the `SentenceTransformers` embeddings, the first time you run the script it will require an internet connection to download the embeddings model itself.
-
-## Test dataset
-This repo uses a [state of the union transcript](https://github.com/imartinez/privateGPT/blob/main/source_documents/state_of_the_union.txt) as an example.
-
-## Instructions for ingesting your own dataset
-
-Put any and all your files into the `source_documents` directory
-
-The supported extensions are:
-
- - `.csv`: CSV,
- - `.docx`: Word Document,
- - `.doc`: Word Document,
- - `.enex`: EverNote,
- - `.eml`: Email,
- - `.epub`: EPub,
- - `.html`: HTML File,
- - `.md`: Markdown,
- - `.msg`: Outlook Message,
- - `.odt`: Open Document Text,
- - `.pdf`: Portable Document Format (PDF),
- - `.pptx` : PowerPoint Document,
- - `.ppt` : PowerPoint Document,
- - `.txt`: Text file (UTF-8),
-
-Run the following command to ingest all the data.
-
-```shell
-python ingest.py
-```
-
-Output should look like this:
-
-```shell
-Creating new vectorstore
-Loading documents from source_documents
-Loading new documents: 100%|██████████████████████| 1/1 [00:01<00:00, 1.73s/it]
-Loaded 1 new documents from source_documents
-Split into 90 chunks of text (max. 500 tokens each)
-Creating embeddings. May take some minutes...
-Using embedded DuckDB with persistence: data will be stored in: db
-Ingestion complete! You can now run privateGPT.py to query your documents
-```
-
-It will create a `db` folder containing the local vectorstore. Will take 20-30 seconds per document, depending on the size of the document.
-You can ingest as many documents as you want, and all will be accumulated in the local embeddings database.
-If you want to start from an empty database, delete the `db` folder.
-
-Note: during the ingest process no data leaves your local environment. You could ingest without an internet connection, except for the first time you run the ingest script, when the embeddings model is downloaded.
-
-## Ask questions to your documents, locally!
-In order to ask a question, run a command like:
-
-```shell
-python privateGPT.py
-```
-
-And wait for the script to require your input.
-
-```plaintext
-> Enter a query:
-```
-
-Hit enter. You'll need to wait 20-30 seconds (depending on your machine) while the LLM model consumes the prompt and prepares the answer. Once done, it will print the answer and the 4 sources it used as context from your documents; you can then ask another question without re-running the script, just wait for the prompt again.
-
-Note: you could turn off your internet connection, and the script inference would still work. No data gets out of your local environment.
-
-Type `exit` to finish the script.
-
-
-### CLI
-The script also supports optional command-line arguments to modify its behavior. You can see a full list of these arguments by running the command ```python privateGPT.py --help``` in your terminal.
-
-
-# How does it work?
-By selecting the right local models and leveraging the power of `LangChain`, you can run the entire pipeline locally, without any data leaving your environment, and with reasonable performance.
-
-- `ingest.py` uses `LangChain` tools to parse the document and create embeddings locally using `HuggingFaceEmbeddings` (`SentenceTransformers`). It then stores the result in a local vector database using the `Chroma` vector store. A minimal sketch of this flow is shown after this list.
-- `privateGPT.py` uses a local LLM based on `GPT4All-J` or `LlamaCpp` to understand questions and create answers. The context for the answers is extracted from the local vector store using a similarity search to locate the right piece of context from the docs.
-- `GPT4All-J` wrapper was introduced in LangChain 0.0.162.
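-
-A minimal sketch of the ingest flow described in the first bullet, assuming the `langchain` 0.0.x-era APIs this project depends on (not the repo's exact code; the file path is illustrative):
-
-```python
-from langchain.document_loaders import TextLoader
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain.embeddings import HuggingFaceEmbeddings
-from langchain.vectorstores import Chroma
-
-# load one document, split it into small overlapping chunks,
-# embed the chunks locally and persist them to a Chroma store on disk
-docs = TextLoader("source_documents/state_of_the_union.txt").load()
-chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)
-embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
-db = Chroma.from_documents(chunks, embeddings, persist_directory="db")
-db.persist()  # privateGPT.py later answers questions by running a similarity search over this store
-```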
-
-# System Requirements
-
-## Python Version
-To use this software, you must have Python 3.10 or later installed. With earlier versions of Python, the dependencies will not compile.
-
-## C++ Compiler
-If you encounter an error while building a wheel during the `pip install` process, you may need to install a C++ compiler on your computer.
-
-### For Windows 10/11
-To install a C++ compiler on Windows 10/11, follow these steps:
-
-1. Install Visual Studio 2022.
-2. Make sure the following components are selected:
- * Universal Windows Platform development
- * C++ CMake tools for Windows
-3. Download the MinGW installer from the [MinGW website](https://sourceforge.net/projects/mingw/).
-4. Run the installer and select the `gcc` component.
-
-## Mac Running Intel
-When running a Mac with Intel hardware (not M1), you may run into _clang: error: the clang compiler does not support '-march=native'_ during pip install.
-
-If so, set your archflags during pip install, e.g. _ARCHFLAGS="-arch x86_64" pip3 install -r requirements.txt_
-
-# Disclaimer
-This is a test project to validate the feasibility of a fully private solution for question answering using LLMs and Vector embeddings. It is not production ready, and it is not meant to be used in production. The models selection is not optimized for performance, but for privacy; but it is possible to use different models and vectorstores to improve performance.
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Avunu Subtitles Download [VERIFIED].md b/spaces/tialenAdioni/chat-gpt-api/logs/Avunu Subtitles Download [VERIFIED].md
deleted file mode 100644
index bd163026935490b5af57b60d621c238df78ab945..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Avunu Subtitles Download [VERIFIED].md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-
How to Download Avunu Subtitles in Different Languages
-
Avunu is a popular Telugu horror thriller film series that has two parts: Avunu (2012) and Avunu Part 2 (2015). The films follow the story of a young couple who move into a new apartment and experience paranormal activities. The films are directed by Ravi Babu and star Poorna and Harshvardhan Rane in the lead roles.
-
If you are a fan of Avunu and want to watch it with subtitles in your preferred language, you might be wondering how to download them. There are many websites that offer subtitles for Avunu, but not all of them are reliable or safe. Some might contain malware, viruses, or inaccurate translations. To avoid these risks, you need to find a trustworthy source for Avunu subtitles download.
One of the best websites for Avunu subtitles download is SUBDL. SUBDL is a fast and easy subtitle website that offers subtitles in various languages for movies and TV shows. You can find subtitles for Avunu in English, French, Spanish, and more. SUBDL also provides subtitles for Avunu Part 2, the sequel to the first film.
-
To download Avunu subtitles from SUBDL, you just need to follow these simple steps:
1. Select your desired language from the filter menu on the left side of the page.
2. Click on the download button next to the subtitle file that matches your video quality and format.
3. Save the subtitle file to your device and extract it if it is compressed.
4. Rename the subtitle file to match the name of your video file.
5. Play your video with your preferred media player and enjoy Avunu with subtitles.
-
-
That's it! You have successfully downloaded Avunu subtitles from SUBDL. Now you can watch this thrilling horror film series with subtitles in your preferred language. SUBDL is a reliable and safe website for Avunu subtitles download, as well as other movies and TV shows. You can also request subtitles for any content that is not available on SUBDL. SUBDL is your ultimate destination for subtitle downloads.
-
-
Why Watch Avunu with Subtitles?
-
Avunu is a film series that has received critical acclaim and commercial success for its innovative and realistic portrayal of horror. The films use minimal special effects and rely on sound design, camera angles, and acting to create a sense of dread and suspense. The films also explore themes such as marital issues, sexual harassment, and superstition.
-
Watching Avunu with subtitles can enhance your viewing experience in many ways. First of all, subtitles can help you understand the dialogues better, especially if you are not familiar with the Telugu language or the regional accents. Subtitles can also help you catch the subtle details and nuances that might be missed otherwise. Subtitles can also make you more immersed in the story and the atmosphere of the film.
-
Moreover, watching Avunu with subtitles can also help you learn a new language or improve your existing language skills. You can compare the original audio with the translated subtitles and learn new words, phrases, and expressions. You can also improve your listening comprehension and pronunciation by following along with the subtitles. Watching Avunu with subtitles can be a fun and effective way to learn Telugu or any other language.
-
-
Where to Watch Avunu Online?
-
If you are looking for a way to watch Avunu online, you have several options to choose from. You can either rent or buy the films from various streaming platforms such as Amazon Prime Video, YouTube, Google Play Movies, iTunes, or Netflix. You can also watch the films for free on some websites that host pirated content, but this is not recommended as it is illegal and unethical.
-
The best way to watch Avunu online is to use a legal and safe streaming service that offers high-quality video and audio, as well as subtitles in different languages. One of the best streaming services for Avunu is Aha. Aha is a Telugu-exclusive OTT platform that offers a wide range of movies and shows in various genres. You can watch Avunu and Avunu Part 2 on Aha with subtitles in English or Hindi.
-
-
To watch Avunu on Aha, you just need to follow these simple steps:
1. Select your preferred subscription plan from monthly or yearly options.
2. Search for Avunu or Avunu Part 2 in the search bar or browse through the horror category.
3. Click on the play button and enjoy the film with subtitles.
-
-
Aha is a great streaming service for Avunu fans as it offers high-quality video and audio, as well as subtitles in different languages. You can also watch other Telugu movies and shows on Aha with subtitles. Aha is your ultimate destination for Telugu entertainment online.
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Crez des btiments en 3D avec Archicad 13 francais gratuit avec crack le logiciel BIM incontournable.md b/spaces/tialenAdioni/chat-gpt-api/logs/Crez des btiments en 3D avec Archicad 13 francais gratuit avec crack le logiciel BIM incontournable.md
deleted file mode 100644
index fb1fff7980c9a981fbf9078e1eabf8a0de2bd12b..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Crez des btiments en 3D avec Archicad 13 francais gratuit avec crack le logiciel BIM incontournable.md
+++ /dev/null
@@ -1,197 +0,0 @@
-
-
Archicad 13 francais gratuit avec crack: comment télécharger et installer le logiciel BIM pour la modélisation 3D
-
Vous êtes à la recherche d'un logiciel BIM (Building Information Modeling) pour la modélisation 3D de bâtiments et d'architecture? Vous voulez profiter des fonctionnalités avancées de Archicad 13 sans payer le prix fort? Vous êtes au bon endroit! Dans cet article, nous allons vous expliquer comment télécharger et installer Archicad 13 francais gratuit avec crack, un logiciel qui vous permettra de concevoir des projets architecturaux en 3D avec une documentation complète et un travail collaboratif. Nous allons également vous présenter les principales caractéristiques, les avantages et les risques de ce logiciel, ainsi que quelques conseils pour l'utiliser efficacement.
ArchiCAD est un logiciel d'architecture spécialisé dans le BIM, c'est-à-dire la modélisation des données du bâtiment. Il vous permettra de modéliser un bâtiment en trois dimensions et de concevoir une documentation complète qui sera utile pendant toute la durée d'un projet architectural. ArchiCAD est développé par la société Graphisoft, qui fait partie du groupe Nemetschek, leader mondial des solutions logicielles pour l'architecture, l'ingénierie et la construction.
-
Les principales caractéristiques de ArchiCAD 13
-
ArchiCAD 13 est la version sortie en 2009 du logiciel. Elle apporte plusieurs nouveautés et améliorations par rapport aux versions précédentes, notamment:
-
-
L'outil MORPH, qui permet de modéliser et d'éditer des formes libres en 3D.
-
L'outil SHELL, qui permet de créer des toits complexes et des formes incurvées.
-
L'outil CURTAIN WALL, qui permet de créer des façades vitrées personnalisables.
-
L'intégration du moteur de rendu CineRender, qui offre des possibilités de visualisation photoréalistes.
-
L'amélioration du travail collaboratif grâce au système Delta Server, qui réduit le temps de synchronisation entre les différents intervenants du projet.
-
L'amélioration de l'interface utilisateur, qui facilite l'accès aux commandes et aux paramètres.
-
L'amélioration de la compatibilité avec les formats DWG et DXF, qui facilitent l'échange de données avec les autres logiciels CAD.
-
-
Les avantages de ArchiCAD 13 par rapport aux autres logiciels BIM
-
ArchiCAD 13 présente plusieurs avantages par rapport aux autres logiciels BIM du marché, tels que Revit ou SketchUp. Parmi ces avantages, on peut citer:
-
-
La facilité d'utilisation, qui permet aux utilisateurs débutants ou expérimentés de prendre en main le logiciel rapidement.
-
La flexibilité, qui permet aux utilisateurs de personnaliser le logiciel selon leurs besoins et leurs préférences.
-
La performance, qui permet aux utilisateurs de travailler sur des projets complexes sans ralentir le logiciel ou le système.
-
La fiabilité, qui garantit aux utilisateurs la sécurité et la stabilité du logiciel et des données.
-
Le support, qui offre aux utilisateurs un service client réactif et compétent.
-
-
Comment télécharger ArchiCAD 13 francais gratuit avec crack?
-
Si vous souhaitez télécharger ArchiCAD 13 francais gratuit avec crack, vous devez savoir que vous vous exposez à des risques juridiques et techniques. En effet, il s'agit d'une version illégale du logiciel, qui n'a pas été autorisée par son éditeur. Vous pouvez donc être poursuivi en justice pour violation du droit d'auteur ou pour contrefaçon. De plus, vous pouvez être victime de virus ou de malwares qui peuvent endommager votre ordinateur ou voler vos données personnelles. Nous vous déconseillons donc fortement de recourir à cette méthode. Si vous voulez utiliser ArchiCAD 13 en toute légalité et en toute sécurité, vous pouvez opter pour une version d'essai gratuite pendant 30 jours ou pour une version étudiante gratuite pendant un an. Vous pouvez également acheter une licence officielle sur le site web de Graphisoft ou auprès d'un revendeur agréé.
-
Les prérequis pour installer ArchiCAD 13
-
Si vous décidez malgré tout de télécharger ArchiCAD 13 francais gratuit avec crack, vous devez vérifier que votre ordinateur respecte les prérequis suivants:
-
-
Système d'exploitation: Windows XP SP3 ou supérieur
-
Processeur: Intel Pentium IV ou supérieur
-
Mémoire vive: 1 Go minimum
-
Espace disque: 5 Go minimum
-
Résolution d'écran: 1024 x 768 minimum
-
Carte graphique: compatible OpenGL
-
-
Les étapes pour télécharger ArchiCAD 13 francais gratuit avec crack
-
Si vous avez vérifié que votre ordinateur respecte les prérequis ci-dessus, vous pouvez suivre les étapes suivantes pour télécharger ArchiCAD 13 francais gratuit avec crack:
-
-
Rendez-vous sur un site web qui propose le téléchargement du fichier archivé contenant le logiciel et le crack. Par exemple, vous pouvez utiliser le lien suivant:
-
Cliquez sur le bouton "Download" ou "Télécharger" et attendez que le téléchargement se termine.
-
Ouvrez le fichier archivé avec un logiciel comme WinRAR ou WinZip et extrayez son contenu dans un dossier de votre choix.
-
Ouvrez le dossier extrait et lancez le fichier "Setup.exe" pour lancer l'installation du logiciel.
-
Suivez les instructions à l'écran et choisissez les options d'installation selon vos préférences.
-
A la fin de l'installation, ne lancez pas le logiciel et fermez toutes les fenêtres.
-
Ouvrez le dossier "Crack" et copiez le fichier "Archicad.exe" dans le dossier d'installation du logiciel (par défaut C:\Program Files\Graphisoft\Archicad).
-
Collez le fichier "Archicad.exe" dans le dossier d'installation du logiciel en écrasant le fichier existant.
-
Lancez le fichier "Archicad.exe" depuis le dossier d'installation du logiciel pour démarrer le logiciel.
-
Les risques et les précautions à prendre avant d'utiliser ArchiCAD 13 francais gratuit avec crack
-
Comme nous l'avons mentionné précédemment, utiliser ArchiCAD 13 francais gratuit avec crack comporte des risques juridiques et techniques. Vous devez donc être conscient des conséquences possibles et prendre des précautions avant d'utiliser le logiciel. Voici quelques conseils à suivre:
-
-
-
Vérifiez la fiabilité du site web qui propose le téléchargement du fichier archivé. Lisez les commentaires des autres utilisateurs et évitez les sites qui ont une mauvaise réputation ou qui demandent des informations personnelles.
-
Scannez le fichier archivé avec un antivirus avant de l'ouvrir. Supprimez le fichier si votre antivirus détecte une menace ou un fichier malveillant.
-
Désactivez votre connexion internet et votre antivirus pendant l'installation du logiciel et le lancement du crack. Cela évitera que le logiciel soit détecté comme illégal ou que le crack soit supprimé par votre antivirus.
-
Ne mettez pas à jour le logiciel ni le crack. Cela risquerait de rendre le logiciel inutilisable ou de révéler votre utilisation illégale.
-
Ne partagez pas le logiciel ni le crack avec d'autres personnes. Cela augmenterait les chances que le logiciel soit repéré par son éditeur ou par les autorités.
-
Ne stockez pas vos projets sur le cloud ni sur des supports externes. Cela pourrait compromettre la sécurité de vos données ou la confidentialité de vos projets.
-
-
Comment utiliser ArchiCAD 13 pour la modélisation 3D?
-
Une fois que vous avez installé et lancé ArchiCAD 13 francais gratuit avec crack, vous pouvez commencer à utiliser le logiciel pour la modélisation 3D de vos projets architecturaux. Voici quelques étapes à suivre pour créer un projet en 3D avec ArchiCAD 13:
-
-
Cliquez sur le menu "Fichier" puis sur "Nouveau" pour créer un nouveau projet.
-
Choisissez les paramètres de base de votre projet, tels que le nom, l'emplacement, l'échelle, l'unité de mesure, etc.
-
Cliquez sur le menu "Affichage" puis sur "Plan" pour passer en mode plan.
-
Utilisez les outils de dessin et de construction pour tracer les murs, les dalles, les poteaux, les poutres, etc. de votre bâtiment.
-
Cliquez sur le menu "Affichage" puis sur "3D" pour passer en mode 3D.
-
Utilisez les outils de modélisation pour ajouter des éléments 3D à votre bâtiment, tels que des fenêtres, des portes, des escaliers, des toits, etc.
-
Utilisez les outils de modification pour ajuster la forme, la taille, la position, l'orientation, etc. de vos éléments 3D.
-
Utilisez les outils de rendu pour appliquer des matériaux, des textures, des couleurs, des ombres, des lumières, etc. à vos éléments 3D.
-
Utilisez les outils de navigation pour changer le point de vue et la perspective de votre scène 3D.
-
Cliquez sur le menu "Fichier" puis sur "Enregistrer" pour sauvegarder votre projet.
-
-
Les outils de modélisation de ArchiCAD 13
-
ArchiCAD 13 vous offre une large gamme d'outils de modélisation qui vous permettront de créer des formes simples ou complexes en 3D. Parmi ces outils, on peut citer:
-
-
L'outil MORPH, qui vous permet de créer et d'éditer des formes libres en 3D. Vous pouvez déformer, sculpter, fusionner ou couper des formes selon vos envies.
-
L'outil SHELL, qui vous permet de créer des toits complexes et des formes incurvées. Vous pouvez définir la forme générale, l'épaisseur, la courbure et les extrusions de vos coques.
-
L'outil CURTAIN WALL, qui vous permet de créer des façades vitrées personnalisables. Vous pouvez définir la structure, les panneaux, les accessoires et les ouvertures de vos murs-rideaux.
-
L'outil OBJECT LIBRARY, qui vous permet d'accéder à une bibliothèque d'objets prédéfinis que vous pouvez insérer dans votre scène 3D. Vous pouvez choisir parmi des catégories telles que mobilier, équipement, végétation, symboles, etc.
-
L'outil GDL EDITOR, qui vous permet de créer et de modifier vos propres objets en utilisant le langage GDL (Geometric Description Language). Vous pouvez définir les propriétés géométriques, graphiques et fonctionnelles de vos objets.
-
-
Les outils de documentation de ArchiCAD 13
-
ArchiCAD 13 vous permet également de concevoir une documentation complète et précise de votre projet en 3D. Vous pouvez générer des plans, des coupes, des élévations, des nomenclatures, des détails et des vues en 2D et en 3D. Vous pouvez également exporter vos documents aux formats PDF, DWF ou DWG et DXF. Parmi les outils de documentation de ArchiCAD 13, on peut citer:
-
-
L'outil LAYOUT BOOK, qui vous permet d'organiser vos documents dans un livre de mise en page. Vous pouvez créer des chapitres, des sous-chapitres et des pages selon la structure de votre projet.
-
L'outil VIEW MAP, qui vous permet de gérer vos vues en 2D et en 3D. Vous pouvez créer des vues personnalisées à partir de votre scène 3D et les modifier selon vos besoins.
-
L'outil PUBLISHER SETS, qui vous permet de publier vos documents dans différents formats et supports. Vous pouvez choisir les documents à publier, le format de sortie, le mode d'impression ou d'envoi.
-
L'outil ANNOTATION TOOLS, qui vous permet d'ajouter des annotations à vos documents. Vous pouvez insérer du texte, des cotes, des étiquettes, des hachures, des lignes directrices, etc.
-
L'outil SCHEDULES AND INDEXES, qui vous permet de créer des listes et des tableaux à partir des données de votre projet. Vous pouvez générer des nomenclatures d'éléments, des quantitatifs de matériaux, des index de plans, etc.
-
-
Les outils de collaboration de ArchiCAD 13
-
ArchiCAD 13 vous offre aussi la possibilité de travailler en équipe sur un même projet en 3D. Vous pouvez partager et synchroniser vos données avec les autres intervenants du projet, tels que les architectes, les ingénieurs, les clients ou les consultants. Vous pouvez également communiquer et échanger des informations avec les autres utilisateurs du logiciel. Parmi les outils de collaboration de ArchiCAD 13, on peut citer:
-
-
L'outil TEAMWORK, qui vous permet de travailler sur un projet commun avec plusieurs utilisateurs. Vous pouvez accéder au projet depuis un serveur centralisé et modifier les parties qui vous sont attribuées.
-
L'outil DELTA SERVER, qui vous permet de réduire le temps de synchronisation entre les utilisateurs. Il détecte et transmet uniquement les modifications effectuées sur le projet.
-
L'outil BIM SERVER MANAGER, qui vous permet de gérer le serveur centralisé du projet. Vous pouvez contrôler les accès au projet, les sauvegardes, les versions et les révisions.
-
L'outil BIM EXPLORER, qui vous permet de visualiser et de présenter votre projet en 3D. Vous pouvez naviguer dans votre scène 3D et créer des animations ou des visites virtuelles.
-
L'outil BIMX, qui vous permet de partager votre projet en 3D avec vos clients ou vos partenaires. Vous pouvez exporter votre scène 3D dans un format interactif qui peut être consulté sur un ordinateur ou un appareil mobile.
-
-
Conclusion
-
ArchiCAD 13 est un logiciel BIM puissant et polyvalent qui vous permet de modéliser des bâtiments en 3D avec une documentation complète et un travail collaboratif. Il offre une multitude d'outils et de fonctionnalités pour la modélisation, le rendu, la documentation et la communication de vos projets architecturaux. Cependant, il s'agit d'un logiciel payant qui nécessite une licence officielle pour être utilisé légalement et en toute sécurité. Si vous souhaitez télécharger ArchiCAD 13 francais gratuit avec crack, vous devez être conscient des risques juridiques et techniques que cela implique. Nous vous conseillons donc de choisir une alternative légale et sûre, comme une version d'essai gratuite ou une version étudiante gratuite. Vous pouvez également acheter une licence officielle sur le site web de Graphisoft ou auprès d'un revendeur agréé.
-
FAQ
-
Voici quelques questions fréquemment posées sur ArchiCAD 13 francais gratuit avec crack:
-
-
Quelle est la différence entre ArchiCAD 13 et ArchiCAD 24?
-
ArchiCAD 24 est la dernière version du logiciel, sortie en 2020. Elle apporte plusieurs améliorations et nouveautés par rapport à ArchiCAD 13, notamment:
-
-
Une meilleure intégration du BIMcloud, la plateforme cloud de Graphisoft qui facilite le travail collaboratif.
-
Une meilleure prise en charge du format IFC (Industry Foundation Classes), qui permet l'échange de données entre les différents logiciels BIM.
-
Une meilleure performance du logiciel grâce à l'utilisation du multi-threading et du multi-processing.
-
Une meilleure qualité du rendu grâce à l'utilisation du moteur Twinmotion, qui offre des effets visuels réalistes.
-
Une meilleure conception structurelle grâce à l'intégration du logiciel Archicad Structural Analysis (ASA), qui permet le calcul des contraintes et des déformations.
-
-
Où puis-je trouver des tutoriels pour apprendre à utiliser ArchiCAD 13?
-
Vous pouvez trouver des tutoriels pour apprendre à utiliser ArchiCAD 13 sur le site web de Graphisoft ou sur des plateformes en ligne comme YouTube ou Udemy. Vous pouvez également consulter des livres ou des magazines spécialisés sur le sujet.
-
Comment puis-je obtenir une version d'essai gratuite ou une version étudiante gratuite de ArchiCAD 13?
-
Pour obtenir une version d'essai gratuite ou une version étudiante gratuite de ArchiCAD 13, vous devez vous rendre sur le site web de Graphisoft et remplir un formulaire d'inscription. Vous recevrez ensuite un lien par e-mail pour télécharger le logiciel. La version d'essai gratuite est valable pendant 30 jours et la version étudiante gratuite est valable pendant un an.
-
Comment puis-je acheter une licence officielle de ArchiCAD 13?
-
Pour acheter une licence officielle de ArchiCAD 13, vous devez vous rendre sur le site web de Graphisoft ou auprès d'un revendeur agréé. Vous devrez choisir entre une licence perpétuelle ou une licence annuelle, selon vos besoins et votre budget. Vous devrez également choisir entre une licence individuelle ou une licence réseau, selon le nombre d'utilisateurs que vous souhaitez autoriser.
-
Comment puis-je contacter le service client de Graphisoft?
-
Pour contacter le service client de Graphisoft, vous pouvez utiliser le formulaire de contact disponible sur le site web de Graphisoft ou envoyer un e-mail à l'adresse support@graphisoft.com. Vous pouvez également appeler le numéro +36-1-437-3000 ou consulter la FAQ disponible sur le site web de Graphisoft.
-
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/IBM ViaVoice Gold Arabic 4.3.rarl A Users Guide and FAQ.md b/spaces/tialenAdioni/chat-gpt-api/logs/IBM ViaVoice Gold Arabic 4.3.rarl A Users Guide and FAQ.md
deleted file mode 100644
index 7bb00d13f6e0ada13a6e28897203691638941472..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/IBM ViaVoice Gold Arabic 4.3.rarl A Users Guide and FAQ.md
+++ /dev/null
@@ -1,173 +0,0 @@
-
-
IBM ViaVoice Gold Arabic 4.3.rarl: A Review
-
If you are looking for a voice recognition software that can help you create documents, emails, and other texts in Arabic, you might want to check out IBM ViaVoice Gold Arabic 4.3.rarl. This software is designed to provide you with a fast, accurate, and easy way to dictate and edit your texts using your voice.
-
In this article, we will review IBM ViaVoice Gold Arabic 4.3.rarl and tell you everything you need to know about it. We will cover its features, benefits, installation process, usage tips, pros, cons, and more. By the end of this article, you will be able to decide if this software is suitable for your needs and preferences.
A brief introduction to IBM ViaVoice Gold Arabic 4.3.rarl
-
IBM ViaVoice Gold Arabic 4.3.rarl is a voice recognition software that allows you to dictate and edit texts in Arabic using your voice. It is developed by IBM Corporation, a leading company in the field of artificial intelligence and natural language processing.
-
IBM ViaVoice Gold Arabic 4.3.rarl is based on the technology of IBM ViaVoice, which was first launched in 1997 as one of the first voice recognition software in the market. Since then, IBM ViaVoice has been improved and updated with various versions and languages, including Arabic.
-
IBM ViaVoice Gold Arabic 4.3.rarl is the latest version of IBM ViaVoice for Arabic speakers. It was released in 2022 and it is compatible with Windows XP, Vista, 7, 8, and 10 operating systems.
-
The features and benefits of IBM ViaVoice Gold Arabic 4.3.rarl
-
IBM ViaVoice Gold Arabic 4.3.rarl has many features and benefits that make it a powerful and convenient voice recognition software for Arabic speakers. Some of these features and benefits are:
-
-
It supports both Modern Standard Arabic (MSA) and Egyptian Colloquial Arabic (ECA), which are the most widely used varieties of Arabic in the world.
-
It has a high accuracy rate of over 95%, which means that it can recognize your voice and words correctly most of the time.
-
It has a fast response time of less than a second, which means that it can process your voice input quickly and display the text output on your screen without delay.
-
It has a user-friendly interface that is easy to navigate and customize according to your preferences.
-
It has a built-in text editor that allows you to edit your texts using your voice or keyboard.
-
It has a speech feedback feature that reads back your texts aloud so that you can check them for errors or corrections.
-
It has a vocabulary builder feature that allows you to add new words or phrases to its dictionary so that it can recognize them better in the future.
-
It has a voice training feature that allows you to improve its recognition accuracy by adapting it to your voice characteristics and pronunciation.
-
It has a voice command feature that allows you to control your computer applications using your voice.
-
It has a voice macro feature that allows you to create shortcuts for frequently used commands or texts using your voice.
-
It has a compatibility feature that allows you to use it with other applications such as Microsoft Word, Excel, PowerPoint, Outlook, Internet Explorer, Firefox, Chrome, Skype, WhatsApp, Facebook Messenger, etc.
-
-
How to download and install IBM ViaVoice Gold Arabic 4.3.rarl?
-
The system requirements for IBM ViaVoice Gold Arabic 4.3.rarl
-
To download and install IBM ViaVoice Gold Arabic 4.3.rarl on your computer, you need to make sure that your computer meets the following system requirements:
-
-
| Component | Requirement |
| --- | --- |
| Operating system | Windows XP/Vista/7/8/10 |
| CPU | Pentium IV or higher |
| RAM | 512 MB or higher |
| Disk space | 1 GB or higher |
| Sound card | 16-bit or higher |
| Microphone | Analog or USB headset microphone |
| Internet connection | Required for activation and updates |
-
-
The steps to download and install IBM ViaVoice Gold Arabic 4.3.rarl
-
To download and install IBM ViaVoice Gold Arabic 4.3.rarl on your computer, you need to follow these steps:
-
-
Go to one of the websites that offer the download link for IBM ViaVoice Gold Arabic 4.3.rarl. Make sure that the website is trustworthy and secure before downloading anything from it.
-
Click on the download button or link and save the file on your computer.
-
Extract the file using a program such as WinRAR or WinZip.
-
Run the setup.exe file as an administrator.
-
Follow the instructions on the screen to complete the installation process.
-
Activate the software using the serial number provided by the website or by contacting IBM customer service.
-
Restart your computer if prompted.
-
Launch the software from your desktop or start menu.
-
Select your preferred language (MSA or ECA) and complete the voice training session.
-
Enjoy using IBM ViaVoice Gold Arabic 4.3.rarl!
-
-
How to use IBM ViaVoice Gold Arabic 4.3.rarl?
-
The main functions and commands of IBM ViaVoice Gold Arabic 4.3.rarl
-
To use IBM ViaVoice Gold Arabic 4.3.rarl effectively, you need to know its main functions and commands:
The dictation function allows you to create texts using your voice instead of typing them on your keyboard. To start dictating, say "بدء التحدث" (start speaking) or click on the microphone icon on the toolbar. To stop dictating, say "إيقاف التحدث" (stop speaking) or click on the microphone icon again.
-
The editing function allows you to edit your texts using your voice or keyboard after dictating them. To edit a word or phrase, say "تحرير" (edit) followed by the word or phrase you want to edit. To delete a word or phrase, say "حذف" (delete) followed by the word or phrase you want to delete. To insert a word or phrase, say "إدراج" (insert) followed by the word or phrase you want to insert.
-
The speech feedback function allows you to hear your texts read back aloud by a synthetic voice after dictating or editing them. To activate this function, say "تشغيل الصوت" (turn on sound) or click on the speaker icon on the toolbar. To deactivate this function, say "إيقاف الصوت" (turn off sound) or click on the speaker icon again.
-
The vocabulary builder function allows you to add new words or phrases to the software's dictionary so that it can recognize them better in the future. To activate this function, say "إضافة كلمة" (add word) or click on the plus icon on the toolbar. To deactivate this function, say "إنهاء الإضافة" (end adding) or click on the plus icon again.
-
The voice training function allows you to improve the software's recognition accuracy by adapting it to your voice characteristics and pronunciation. To activate this function, say "تدريب الصوت" (train voice) or click on the star icon on the toolbar. To deactivate this function, say "إنهاء التدريب" (end training) or click on the star icon again.
-
The voice command function allows you to control your computer applications using your voice. To activate this function, say "أوامر الصوت" (voice commands) or click on the gear icon on the toolbar. To deactivate this function, say "إيقاف الأوامر" (stop commands) or click on the gear icon again.
-
The voice macro function allows you to create shortcuts for frequently used commands or texts using your voice. To activate this function, say "ماكرو الصوت" (voice macro) or click on the lightning icon on the toolbar. To deactivate this function, say "إيقاف الماكرو" (stop macro) or click on the lightning icon again.
-
The compatibility function allows you to use the software with other applications such as Microsoft Word, Excel, PowerPoint, Outlook, Internet Explorer, Firefox, Chrome, Skype, WhatsApp, Facebook Messenger, etc. To activate this function, say "التوافق" (compatibility) or click on the globe icon on the toolbar. To deactivate this function, say "إيقاف التوافق" (stop compatibility) or click on the globe icon again.
-
-
The tips and tricks to improve your voice recognition and dictation with IBM ViaVoice Gold Arabic 4.3.rarl
-
To improve your voice recognition and dictation with IBM ViaVoice Gold Arabic 4.3.rarl, you can follow these tips and tricks:
-
-
Use a good quality microphone that is compatible with your sound card and operating system.
-
Adjust the microphone volume and position so that it can capture your voice clearly and avoid background noise.
-
Speak clearly and naturally in a normal tone and speed.
-
Pronounce each word and syllable correctly and distinctly.
-
Use proper punctuation and capitalization when dictating.
-
Use short pauses between words and phrases to separate them.
-
Use longer pauses between sentences and paragraphs to indicate them.
-
Use specific commands to format, correct, or delete your texts.
-
Review your texts for errors or corrections using the speech feedback or text editor functions.
-
Add new words or phrases to the vocabulary builder function if they are not recognized by the software.
-
Train your voice regularly using the voice training function to adapt the software to your voice characteristics and pronunciation.
-
Create shortcuts for frequently used commands or texts using the voice macro function.
-
Control your computer applications using the voice command function.
-
Use the compatibility function to use the software with other applications.
-
-
What are the pros and cons of IBM ViaVoice Gold Arabic 4.3.rarl?
-
The advantages of IBM ViaVoice Gold Arabic 4.3.rarl
-
IBM ViaVoice Gold Arabic 4.3.rarl has many advantages that make it a useful and efficient voice recognition software for Arabic speakers. Some of these advantages are:
-
-
It saves you time and effort by allowing you to create texts using your voice instead of typing them on your keyboard.
-
It improves your productivity and creativity by allowing you to focus on your ideas and thoughts instead of typing errors or corrections.
-
It enhances your accessibility and mobility by allowing you to use your voice as an input device instead of a mouse or keyboard.
-
It supports both MSA and ECA, which are the most widely used varieties of Arabic in the world.
-
It has a high accuracy rate of over 95%, which means that it can recognize your voice and words correctly most of the time.
-
It has a fast response time of less than a second, which means that it can process your voice input quickly and display the text output on your screen without delay.
-
It has a user-friendly interface that is easy to navigate and customize according to your preferences.
-
It has a built-in text editor that allows you to edit your texts using your voice or keyboard.
-
It has a speech feedback feature that reads back your texts aloud so that you can check them for errors or corrections.
-
It has a vocabulary builder feature that allows you to add new words or phrases to its dictionary so that it can recognize them better in the future.
-
Q: How can I get the serial number for IBM ViaVoice Gold Arabic 4.3.rarl?
-
A: The serial number for IBM ViaVoice Gold Arabic 4.3.rarl is usually provided by the website that offers the download link. If you don't receive the serial number or if it doesn't work, you can contact IBM customer service by phone or email and provide them with your purchase details and proof of payment.
-
Q: How can I update IBM ViaVoice Gold Arabic 4.3.rarl?
-
A: To update IBM ViaVoice Gold Arabic 4.3.rarl, you need to have an internet connection and follow these steps:
-
-
Open the software and click on the help icon on the toolbar.
-
Select "Check for updates" from the menu.
-
Follow the instructions on the screen to download and install the latest updates.
-
Restart your computer if prompted.
-
-
Q: How can I uninstall IBM ViaVoice Gold Arabic 4.3.rarl?
-
A: To uninstall IBM ViaVoice Gold Arabic 4.3.rarl, you need to follow these steps:
-
-
Close the software and any other applications that are using it.
-
Go to the control panel and select "Add or remove programs".
-
Find and select "IBM ViaVoice Gold Arabic 4.3.rarl" from the list of programs.
-
Click on the "Remove" button and follow the instructions on the screen to complete the uninstallation process.
-
Delete any remaining files or folders related to the software from your computer.
-
-
Q: How can I contact IBM customer service?
-
A: You can contact IBM customer service by phone or email using the following information:
-
-
| Contact method | Details |
| --- | --- |
| Phone | +1-800-426-4968 (USA) |
| Email | support@us.ibm.com |
-
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/DJ Studio 5 Mod APK How to Download and Install This Fantastic App on Your Android Device.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/DJ Studio 5 Mod APK How to Download and Install This Fantastic App on Your Android Device.md
deleted file mode 100644
index f6c777c3357a61dcf6c717f7cf1b7960633a68b2..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/DJ Studio 5 Mod APK How to Download and Install This Fantastic App on Your Android Device.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
DJ Studio 5 Mod APK Download: A Complete Guide
-
Do you love mixing music and creating your own beats? Do you want to turn your Android device into a mobile DJ station? If yes, then you should check out DJ Studio 5, a free music mixer app that allows you to manipulate music in various ways. However, if you want to enjoy all the features and functions of this app without any limitations, you will need to download the mod apk version. In this article, we will tell you everything you need to know about DJ Studio 5 Mod APK, including its features, how to download and install it, its pros and cons, and some frequently asked questions.
DJ Studio 5 is a powerful app that lets you spin, mix, and scratch music on your Android device. It has a lot of features that make it a comprehensive and fun app for both beginners and experts. However, some of these features are not available in the original version of the app, which requires an in-app purchase to unlock unlimited playback. That's why you need the mod apk version, which gives you access to all the features and functions for free. Here are some of the features of DJ Studio 5 Mod APK:
-
-
Unlimited playback and access to all functions: With the mod apk version, you can play as many songs as you want without any interruptions or ads. You can also use all the functions of the app without any restrictions or limitations.
-
Customizable interface and skins: You can choose between a single deck or twin decks mode, depending on your preference and skill level. You can also customize the interface and skins of the app according to your taste and style.
-
Support for various audio formats and external devices: You can load music from your device's library or from external sources like USB drives or SD cards. You can also use external devices like headphones, speakers, or MIDI controllers to enhance your mixing experience.
-
Live recording and sharing of mixes: You can record your mixes live and save them on your device or share them on Soundcloud or other social media platforms. You can also listen to other users' mixes and rate them.
-
Equalizer, loop, BPM, and other tools for music manipulation: You can adjust the sound levels, create loops, change the tempo, and apply various effects to your music using the tools provided by the app. You can also sync the tracks automatically or manually using the BPM feature.
-
-
How to Download and Install DJ Studio 5 Mod APK
-
If you are interested in downloading and installing DJ Studio 5 Mod APK on your Android device, you will need to follow these simple steps:
-
-
Enable unknown sources on your device: To install apps from sources other than the Google Play Store, you will need to enable the unknown sources option on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Download the mod apk file from a trusted source: You can find the mod apk file of DJ Studio 5 on various websites and blogs, but not all of them are safe and reliable. Therefore, you should always download the file from a trusted source that has positive reviews and ratings. You can use this link to download the file safely and quickly.
-
Locate and install the file on your device: After downloading the file, you will need to locate it on your device using a file manager app. You can usually find it in the Downloads folder or the folder where you saved it. Once you find it, tap on it and follow the instructions to install it on your device.
-
Launch the app and enjoy mixing music: After installing the app, you can launch it from your app drawer or home screen. You will see a welcome screen that will guide you through the basic features and functions of the app. You can then start loading music and mixing it as you wish.
-
-
Pros and Cons of DJ Studio 5 Mod APK
-
DJ Studio 5 Mod APK is a great app for anyone who loves music and wants to create their own mixes. However, like any other app, it also has some pros and cons that you should be aware of before using it. Here are some of them:
-
-
-
-
| Pros | Cons |
| --- | --- |
| Free, comprehensive, fun, and easy to use | May have compatibility issues with some devices and Android versions |
| Customizable interface and skins | Steep learning curve for beginners and advanced users |
| Support for various audio formats and external devices | No effects like reverb, flanger, or delay |
| Live recording and sharing of mixes | May consume a lot of battery and storage space |
| Equalizer, loop, BPM, and other tools for music manipulation | May not be legal or ethical to use the mod apk version |
-
-
-
Conclusion and FAQs
-
DJ Studio 5 Mod APK is a great app for aspiring and professional DJs who want to mix music on their Android devices. It offers a lot of features and functions that make it one of the best free DJing apps available. However, it also has some drawbacks that may affect its performance and user experience. Therefore, users should be careful when downloading and installing the mod apk version and always use it at their own risk.
-
If you have any questions or doubts about DJ Studio 5 Mod APK, you may find the answers in the following FAQs:
-
Q: Is DJ Studio 5 Mod APK safe to use?
-
A: DJ Studio 5 Mod APK is generally safe to use as long as you download it from a trusted source and scan it for viruses or malware before installing it. However, you should also be aware that using mod apk versions of apps may violate their terms of service and may result in legal or ethical issues.
-
Q: Is DJ Studio 5 Mod APK compatible with my device?
-
A: DJ Studio 5 Mod APK is compatible with most Android devices that run on Android 4.0 or higher. However, some devices may have compatibility issues due to different hardware or software specifications. Therefore, you should always check the compatibility of your device before downloading and installing the app.
-
Q: How can I update DJ Studio 5 Mod APK?
-
A: DJ Studio 5 Mod APK is not available on the Google Play Store, so you cannot update it automatically or manually through the store. Instead, you will need to check for updates from the source where you downloaded the app or from other websites or blogs that offer the latest version of the app.
-
Q: How can I uninstall DJ Studio 5 Mod APK?
-
A: If you want to uninstall DJ Studio 5 Mod APK from your device, you can do so by following these steps:
-
-
Go to Settings > Apps > DJ Studio 5 Mod APK and tap on it.
-
Tap on Uninstall and confirm your action.
-
Wait for the app to be uninstalled from your device.
-
Delete the mod apk file from your device if you still have it.
-
Q: What are the alternatives to DJ Studio 5 Mod APK?
-
A: If you are looking for other apps that can help you mix music on your Android device, you may want to try some of these alternatives:
-
-
edjing Mix: This is another popular and free app that lets you create amazing mixes with your music library or from various online sources. It has a lot of features and effects that make it a professional and fun app for DJs of all levels.
-
Cross DJ: This is a powerful and intuitive app that allows you to mix tracks with accuracy and creativity. It has a sleek and user-friendly interface that makes it easy to use. It also supports various audio formats and external devices.
-
DJ Mixer Studio: This is a simple and elegant app that enables you to mix music with ease and style. It has a minimalist and colorful design that makes it attractive and enjoyable. It also has a variety of tools and functions that make it a versatile and reliable app for mixing music.
-
-
I hope you enjoyed reading this article and learned something new about DJ Studio 5 Mod APK. If you have any feedback or suggestions, please feel free to leave a comment below. Thank you for your time and attention.
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Grade 12 Mathematics P1 Mock Exams and Answers.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Grade 12 Mathematics P1 Mock Exams and Answers.md
deleted file mode 100644
index 62a8b589caef0ca8a142c361f66d8c6d36fda656..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Grade 12 Mathematics P1 Mock Exams and Answers.md
+++ /dev/null
@@ -1,149 +0,0 @@
-
-
How to Download Grade 12 Mathematics P1
-
Mathematics P1 is one of the papers that you have to write in Grade 12 if you are taking Mathematics as a subject. It covers topics such as algebra, calculus, trigonometry, geometry, and statistics. Mathematics P1 is a challenging paper that requires a lot of practice and preparation. One of the best ways to prepare for Mathematics P1 is to download past exam papers and memos from reliable sources. By doing so, you can:
-
-
Get familiar with the exam format and structure.
-
Test your knowledge and skills on various topics and questions.
-
Improve your speed and accuracy in solving problems.
-
Learn from your mistakes and gaps in understanding.
-
Boost your confidence and reduce your anxiety.
-
-
In this article, we will show you how to download Grade 12 Mathematics P1 from different sources, and how to use them effectively for your exam preparation. Let's get started!
There are many sources where you can find and download Grade 12 Mathematics P1, but not all of them are reliable and updated. Some of them may have errors, missing pages, or outdated content. Therefore, you need to be careful and selective when choosing your sources. Here are some of the sources that we recommend:
-
SA Exam Papers
-
SA Exam Papers is a website that provides a comprehensive range of past year exam papers and memos for various subjects and grades in South Africa. You can find Mathematics P1 papers from 2023 to as far back as 2009, from national, provincial, and common tests. You can also find papers in English and Afrikaans languages, as well as question papers, answer books, addendums, and memorandums.
-
To download Grade 12 Mathematics P1 from SA Exam Papers, you need to:
-
-
-
Go to the SA Exam Papers website.
-
Scroll down to find the table with the headings "Year" and "Exam Semester".
-
Select the year and exam semester that you want to download.
-
You will see another table with the headings "Paper", "Language", "Type", "Download".
-
Select the paper that you want to download (Mathematics P1).
-
Select the language that you want to download (English or Afrikaans).
-
Select the type that you want to download (Question Paper or Memorandum).
-
Click on the "Download" button.
-
The paper will open in a new tab or window as a PDF file.
-
You can save it on your device or print it out.
-
-
Here is a screenshot of what SA Exam Papers looks like:
-
-
Edwardsmaths
-
Edwardsmaths is another website that provides past exam papers and memos for various subjects and grades in South Africa. You can find Mathematics P1 papers from 2023 to 2018, from national, provincial, and common tests. You can also find papers in English and Afrikaans languages, as well as question papers and memorandums.
-
To download Grade 12 Mathematics P1 from Edwardsmaths, you need to:
-
-
Go to the Edwardsmaths website.
-
Scroll down to find the section with the title "Grade 12 Mathematics Exam Papers and Memos".
-
Select the year that you want to download.
-
You will see a table with the headings "Paper", "Question Paper", "Memo".
-
Select the paper that you want to download (Mathematics P1).
-
Select the question paper or memo that you want to download.
-
The paper will open in a new tab or window as a PDF file.
-
You can save it on your device or print it out.
-
-
Here is a screenshot of what Edwardsmaths looks like:
-
-
National Department of Basic Education
-
National Department of Basic Education is the official website of the government department that oversees primary and secondary education in South Africa. You can find Mathematics P1 papers from 2023 to 2014, from national and supplementary exams. You can also find papers in English and Afrikaans languages, as well as question papers and memorandums.
-
To download Grade 12 Mathematics P1 from National Department of Basic Education, you need to:
-
-
Go to the National Department of Basic Education website.
-
Scroll down to find the section with the title "Past Exam Papers".
-
Select the grade that you want to download (Grade 12).
-
Select the subject that you want to download (Mathematics).
-
Select the year that you want to download.
-
You will see a list of papers with the titles "Paper 1", "Paper 2", etc.
-
Select the paper that you want to download (Mathematics P1).
-
Select the language that you want to download (English or Afrikaans).
-
Select the question paper or memo that you want to download.
-
The paper will open in a new tab or window as a PDF file.
-
You can save it on your device or print it out.
-
-
Here is a screenshot of what the National Department of Basic Education website looks like:
-
-
Steps to Download Grade 12 Mathematics P1
-
Now that you know some of the sources where you can find and download Grade 12 Mathematics P1, let's go through the steps to download them from each source. As you can see, each source has a slightly different process, but they are all easy and straightforward. Here are the steps for each source:
-
SA Exam Papers
-
-
| Step | Description | Example |
| --- | --- | --- |
| 1 | Go to the SA Exam Papers website. | |
| 2 | Scroll down to find the table with the headings "Year" and "Exam Semester". | |
| 3 | Select the year and exam semester that you want to download. | |
| 4 | You will see another table with the headings "Paper", "Language", "Type", "Download". | |
| 5 | Select the paper that you want to download (Mathematics P1). | |
| 6 | Select the language that you want to download (English or Afrikaans). | |
| 7 | Select the type that you want to download (Question Paper or Memorandum). | |
| 8 | Click on the "Download" button. | |
| 9 | The paper will open in a new tab or window as a PDF file. | |
| 10 | You can save it on your device or print it out. | |
-
-
Edwardsmaths
-
-
| Step | Description | Example |
| --- | --- | --- |
| 1 | Go to the Edwardsmaths website. | |
| 2 | Scroll down to find the section with the title "Grade 12 Mathematics Exam Papers and Memos". | |
| 3 | Select the year that you want to download. | |
| 4 | You will see a table with the headings "Paper", "Question Paper", "Memo". | |
| 5 | Select the paper that you want to download (Mathematics P1). | |
| 6 | Select the question paper or memo that you want to download. | |
| 7 | The paper will open in a new tab or window as a PDF file. | |
| 8 | You can save it on your device or print it out. | |
-
-
National Department of Basic Education
-
-
| Step | Description | Example |
| --- | --- | --- |
| 1 | Go to the National Department of Basic Education website. | |
| 2 | Scroll down to find the section with the title "Past Exam Papers". | |
| 3 | Select the grade that you want to download (Grade 12). | |
| 4 | Select the subject that you want to download (Mathematics). | |
Q: Are there other sources of Grade 12 Mathematics P1 besides these three?
-
A: There are many other sources of Grade 12 Mathematics P1 that you can find online, such as Study Master, Maths At Sharp, and Past Matric. However, you should always check the quality and reliability of these sources before downloading them. You can also ask your teachers, tutors, or peers for recommendations or suggestions.
-
Q: How often should I download and practice Grade 12 Mathematics P1?
-
A: There is no definitive answer to this question, as it depends on your personal goals, preferences, and schedule. However, a general rule of thumb is to download and practice Grade 12 Mathematics P1 at least once a week, or more frequently if you have more time and motivation. You should also vary the papers that you download, so that you can expose yourself to different types and levels of questions.
-
Q: How can I download Grade 12 Mathematics P1 on my phone or tablet?
-
A: You can download Grade 12 Mathematics P1 on your phone or tablet by following the same steps as on your computer. However, you may need to install a PDF reader app on your device, such as Adobe Acrobat Reader or Google PDF Viewer, to open and view the files. You may also need to adjust the zoom and orientation of the files to fit your screen size and resolution.
-
Q: How can I share Grade 12 Mathematics P1 with my friends or classmates?
-
A: You can share Grade 12 Mathematics P1 with your friends or classmates by sending them the links or files of the papers that you downloaded. You can also create a study group or chat group where you can discuss and compare your solutions and strategies. Sharing Grade 12 Mathematics P1 with others can help you learn from each other and motivate each other.
-
Q: How can I get feedback or help on Grade 12 Mathematics P1?
-
A: You can get feedback or help on Grade 12 Mathematics P1 by asking your teachers, tutors, or mentors for guidance and clarification. You can also use online platforms or forums where you can post your questions or doubts and get answers from experts or peers. Some examples of these platforms are Quora, Reddit, Stack Exchange, and Math Help Forum.
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download King Ludo and Relive Your Childhood Memories with this Fun Board Game.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download King Ludo and Relive Your Childhood Memories with this Fun Board Game.md
deleted file mode 100644
index 2dc8fac53d7cea144be53045e18314e15cd42e5b..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download King Ludo and Relive Your Childhood Memories with this Fun Board Game.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
How to Download King Ludo - The Most Popular Game of the Year
-
Do you love playing board games with your friends and family? Do you want to enjoy a classic game with a modern twist? Do you want to experience the thrill of rolling the dice and moving your tokens to the center of the board? If you answered yes to any of these questions, then you should download King Ludo, the most popular game of the year!
-
What is King Ludo?
-
King Ludo is a game that has taken the world by storm. It is based on the ancient game of Pachisi, which was played by Indian kings and queens in ancient times. King Ludo is a game that combines luck, strategy, and skill. Here are some of the features that make King Ludo so amazing:
King Ludo follows the traditional rules and keeps the old school look of the board game. You have four tokens of your color that you need to move from your base to your home. You roll the dice and move your tokens accordingly. You can also capture or block your opponents' tokens. The first player to bring all their tokens to their home wins the game.
-
A cross-platform multiplayer game with voice chat
-
King Ludo is not just a game that you can play by yourself. You can also play it with your friends and family online or offline. King Ludo supports up to six players in online multiplayer mode. You can also invite and challenge your Facebook friends or make new buddies from around the world. You can also chat with your opponents using voice chat and send them emojis.
-
A game with various modes, themes, and features
-
King Ludo is a game that never gets boring. You can choose from different modes, such as quick mode, tournament mode, team up mode, and snake and ladders mode. You can also customize your game with different themes, such as disco, nature, Egypt, candy, Christmas, pirate, and more. You can also access an exciting inventory where you can get new dice, funny emojis, voice notes, rewards, and more.
-
Why should you download King Ludo?
-
King Ludo is a game that has many benefits for you. Here are some of the reasons why you should download King Ludo:
-
It is fun, easy, and addictive
-
King Ludo is a game that will keep you entertained for hours. It is easy to learn and play, but also challenging and competitive. You will enjoy rolling the dice and moving your tokens while trying to beat your opponents. You will also feel a sense of accomplishment when you win the game.
-
It is suitable for all ages and occasions
-
King Ludo is a game that everyone can enjoy. It is suitable for all ages, from kids to adults. It is also suitable for all occasions, from casual to formal. You can play King Ludo with your family at home, with your friends at a party, with your colleagues at work, or with strangers online.
-
It is free to play and has millions of downloads
-
King Ludo is a game that does not cost you anything to play. It is free to download and install on your device. It also does not require an internet connection to play in offline mode. King Ludo has over 900 million downloads worldwide and has won many awards and accolades. It is one of the top-rated games on the Google Play Store and the App Store.
-
How to download King Ludo on different devices?
-
King Ludo is a game that you can play on any device, whether it is a smartphone, a tablet, or a computer. Here are the steps to download King Ludo on different devices:
-
download king ludo game for android
-download king ludo app for ios
-download king ludo online board game
-download king ludo voice chat mode
-download king ludo quick mode
-download king ludo mask mode
-download king ludo 6 player mode
-download king ludo tournaments
-download king ludo live themes
-download king ludo classic board game
-download king ludo apk file
-download king ludo for pc
-download king ludo offline mode
-download king ludo with friends and family
-download king ludo dice game of kings
-download king ludo cross platform multiplayer game
-download king ludo free game
-download king ludo latest version
-download king ludo mod apk
-download king ludo hack version
-download king ludo unlimited coins and gems
-download king ludo snake and ladder game
-download king ludo parchisi game
-download king ludo parcheesi game
-download king ludo pachisi game
-download king ludo best casual game in board games
-download king ludo most popular Ludo game in India
-download king ludo 900+ million downloads game
-download king ludo no internet connection required game
-download king ludo play with computer or local multiplayer game
-download king ludo invite and challenge your Facebook friends game
-download king ludo play with world players and make them your buddies game
-download king ludo private chat with your Facebook friends and buddies game
-download king ludo express yourself by sending emojis to your opponents game
-download king ludo recall your childhood game
-download king ludo modern version of the royal game of Pachisi game
-download king ludo traditional rules and the old school look of the Ludo game game
-download king ludo beat other players and become the Ludo King game
-download king ludo fun for the whole family game
-download king ludo perfect time pass game of Ludo board game
-
Download King Ludo on Android
-
If you have an Android device, you can download King Ludo from the Google Play Store. Here is how:
-
-
Open the Google Play Store app on your device.
-
Search for "King Ludo" in the search bar.
-
Select the game from the list of results and tap on "Install".
-
Wait for the game to download and install on your device.
-
Open the game and enjoy playing King Ludo.
-
-
Download King Ludo on iOS
-
If you have an iOS device, you can download King Ludo from the App Store. Here is how:
-
-
Open the App Store app on your device.
-
Search for "King Ludo" in the search bar.
-
Select the game from the list of results and tap on "Get".
-
Enter your Apple ID and password if prompted.
-
Wait for the game to download and install on your device.
-
Open the game and enjoy playing King Ludo.
-
-
Download King Ludo on PC
-
If you want to play King Ludo on your PC, you will need to use an emulator. An emulator is a software that allows you to run Android apps on your PC. There are many emulators available online, such as BlueStacks, NoxPlayer, MEmu, etc. Here is how to download King Ludo on PC using BlueStacks:
-
-
Download and install BlueStacks from its official website: https://www.bluestacks.com/
-
Launch BlueStacks and sign in with your Google account.
-
Open the Google Play Store app within BlueStacks.
-
Search for "King Ludo" in the search bar.
-
Select the game from the list of results and click on "Install".
-
Wait for the game to download and install on your PC.
-
Open the game and enjoy playing King Ludo.
-
-
Conclusion
-
King Ludo is a game that you should not miss. It is a game that will bring you joy, excitement, and nostalgia. It is a game that will connect you with your friends and family. It is a game that will challenge your mind and test your luck. It is a game that will make you feel like a king or a queen. So what are you waiting for? Download King Ludo today and have fun!
-
FAQs
-
Here are some of the frequently asked questions about King Ludo:
-
Q: How can I play King Ludo offline?
-
A: You can play King Ludo offline by choosing the offline mode in the main menu. You can play with up to six players using one device or with computer players.
-
Q: How can I earn coins in King Ludo?
-
A: You can earn coins in King Ludo by winning games, completing daily tasks, spinning the wheel, watching videos, or buying them with real money.
-
Q: How can I use coins in King Ludo?
-
A: You can use coins in King Ludo to buy new dice, themes, emojis, voice notes, rewards, and more from the inventory.
-
Q: How can I change my profile picture in King Ludo?
-
A: You can change your profile picture in King Ludo by tapping on your avatar in the main menu and choosing from the gallery or taking a photo.
-
Q: How can I report a bug or a problem in King Ludo?
-
A: You can report a bug or a problem in King Ludo by tapping on the settings icon in the main menu and choosing "Contact Us". You can also email them at support@kingludogame.com or visit their website at https://www.kingludogame.com/
Advance Steel 2017 64bit Activation Code Zip File: What You Need to Know
-
If you are a structural engineer, detailer, or fabricator who works with steel structures, you may have heard of Advance Steel 2017, a powerful software for structural design and detailing. Advance Steel 2017 is a comprehensive solution that supports a Building Information Modeling (BIM) process to help you more accurately detail structural elements and miscellaneous steel. It also enables you to generate detail drawings, bills of materials, and NC files for fabrication and erection.
But how do you install and activate Advance Steel 2017 64bit on your computer? And what are the benefits of using this software for your projects? In this article, we will answer these questions and more. We will show you how to download, install, and activate Advance Steel 2017 64bit using an activation code zip file. We will also give you some tips and best practices on how to use Advance Steel 2017 64bit effectively.
-
How to download Advance Steel 2017 64bit
-
Before you can install and activate Advance Steel 2017 64bit, you need to download the software from the Autodesk website. Here are the steps to follow:
Select your operating system (Windows 64-bit) and your preferred language. Then, enter your email address and click Next.
-
Choose whether you want to download the software directly or use a download manager. Then, click Download Now.
-
Save the file to your computer and wait for the download to complete.
-
-
Note: You can also download Advance Steel 2017 64bit from your Autodesk Account if you have a valid subscription or license. Just sign in to your account, go to Products & Services, find Advance Steel 2017, and click on the Download button.
-
How to check your system requirements and compatibility
-
Before you install Advance Steel 2017 64bit, you should check if your computer meets the minimum system requirements for the software. You can find the system requirements on the Autodesk Advance Steel product page or on the Autodesk Knowledge Network. Here are some of the main requirements:
-
-
Operating system: Microsoft Windows 10 (64-bit only), Microsoft Windows 8.1 with Update KB2919355 (64-bit only), or Microsoft Windows 7 SP1 (64-bit only)
-
CPU: Intel Core i5 or equivalent AMD processor with SSE2 technology
-
Memory: 8 GB RAM (16 GB recommended)
-
Disk space: 9 GB free disk space for installation
-
Display: 1920 x 1080 or greater True Color video display adapter; DirectX®11 capable graphics card with Shader Model 3 as recommended by Autodesk
-
Browser: Internet Explorer® version 11 or later
-
.NET Framework: .NET Framework Version 4.6
-
-
You should also check if your computer is compatible with Advance Steel 2017 64bit. You can do this by running the Autodesk Prerequisite Checker tool, which is included in the installation package. This tool will scan your computer and detect any potential issues or conflicts that may prevent a successful installation or activation of Advance Steel 2017 64bit.
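If you want a quick self-check before running the installer, the rough sketch below reads your free disk space and Windows version using Python's standard library. It only illustrates two of the requirements listed above, is not a substitute for the Autodesk Prerequisite Checker, and the thresholds are simply taken from that list.

```python
# Rough pre-install self-check: free disk space and Windows version.
# This is not the Autodesk Prerequisite Checker; thresholds follow the list above.
import platform
import shutil

REQUIRED_FREE_GB = 9  # Advance Steel 2017 asks for 9 GB free for installation

def check_disk(path: str = "C:\\") -> bool:
    free_gb = shutil.disk_usage(path).free / (1024 ** 3)
    print(f"Free space on {path}: {free_gb:.1f} GB")
    return free_gb >= REQUIRED_FREE_GB

def check_windows() -> bool:
    system, release = platform.system(), platform.release()
    print(f"Operating system: {system} {release}")
    # The supported releases per the list above; this check is only approximate.
    return system == "Windows" and release in {"7", "8.1", "10"}

if __name__ == "__main__":
    ok = check_disk() and check_windows()
    print("Basic requirements met" if ok else "Check the full requirements list")
```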
-
-
How to prepare your computer for installation
-
Before you install Advance Steel 2017 64bit, you should prepare your computer by doing the following:
-
-
Disable any antivirus or firewall software that may interfere with the installation process.
-
Close any other applications that are running on your computer.
-
Make sure you have administrator rights on your computer.
-
Make sure you have a stable internet connection.
-
Make sure you have enough disk space for the installation.
-
Make sure you have your product key and activation code zip file ready.
-
-
Note: Your product key is a 25-character alphanumeric code that identifies your product and license type. Your activation code zip file is a compressed file that contains an XML file with your activation code. You can obtain these codes from your Autodesk Account, from an email confirmation, or from a reseller.
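Since the activation code arrives as a zip file containing an XML file, you may want to look inside it before you start the activation wizard. Here is a minimal Python sketch that lists and extracts the archive; the file name is only an example, and the actual contents of your zip may differ.

```python
# Minimal sketch: inspect and extract an activation code zip before activation.
# The file name is an example; your zip and the XML inside it may be named differently.
import zipfile

def extract_activation_files(zip_path: str, dest: str = "activation") -> None:
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            print("Found in archive:", name)
        zf.extractall(dest)  # the XML with the activation code lands in this folder

if __name__ == "__main__":
    extract_activation_files("activation_code.zip")  # example file name
```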
How to install Advance Steel 2017 64bit
-
After you have downloaded and prepared your computer for installation, you can proceed to install Advance Steel 2017 64bit. Here are the steps to follow:
-
-
Locate the setup file that you downloaded and double-click on it to run it.
-
On the Autodesk Advance Steel 2017 Setup dialog box, click on Install.
-
On the Autodesk Advance Steel 2017 Installation dialog box, read and accept the license agreement and click Next.
-
On the Product Information dialog box, enter your product key and serial number and click Next.
-
On the Configure Installation dialog box, select the components and features that you want to install and click Next.
-
On the Installation Location dialog box, choose the folder where you want to install Advance Steel 2017 64bit and click Next.
-
On the Ready to Install dialog box, review your installation settings and click Install.
-
Wait for the installation process to complete. You can monitor the progress on the Installation Progress dialog box.
-
When the installation is finished, click Finish.
-
-
Note: You can also customize your installation by clicking on the Customize button on the Configure Installation dialog box. This will allow you to change your installation language, select your content packs, and configure your network license settings.
-
How to activate Advance Steel 2017 64bit
-
After you have installed Advance Steel 2017 64bit, you need to activate it using your activation code zip file. Here are the steps to follow:
-
-
Launch Advance Steel 2017 64bit from your desktop or start menu.
-
On the Let's Get Started screen, select Enter a Serial Number and click Next.
-
On the Product License Activation screen, enter your product key and serial number and click Next.
-
On the License Method screen, select Stand-Alone License and click Next.
-
On the Activate screen, click on Activate Online Now.
-
On the Activation Code screen, click on Browse and locate your activation code zip file on your computer. Then, click Open.
-
The activation code will be automatically entered in the text box. Click Next.
-
Your product will be activated and registered. Click Finish.
-
-
Note: If you encounter any problems or errors during the activation process, you can refer to the Autodesk Knowledge Network for troubleshooting tips and solutions. You can also contact Autodesk support or your reseller for assistance.
By using Advance Steel 2017 64bit, you can improve your productivity, efficiency, and the quality of your structural engineering projects. You can also collaborate better with other stakeholders and disciplines using the BIM process.
-
We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to contact us or leave a comment below. Thank you for reading!
-
FAQs
-
Here are some of the frequently asked questions about Advance Steel 2017 64bit and their answers:
-
Q1: What are the system requirements for Advance Steel 2017 64bit?
-
A1: The minimum system requirements for Advance Steel 2017 64bit are:
-
-
Operating system: Microsoft Windows 10 (64-bit only), Microsoft Windows 8.1 with Update KB2919355 (64-bit only), or Microsoft Windows 7 SP1 (64-bit only)
-
CPU: Intel Core i5 or equivalent AMD processor with SSE2 technology
-
Memory: 8 GB RAM (16 GB recommended)
-
Disk space: 9 GB free disk space for installation
-
Display: 1920 x 1080 or greater True Color video display adapter; DirectX®11 capable graphics card with Shader Model 3 as recommended by Autodesk
-
Browser: Internet Explorer® version 11 or later
-
.NET Framework: .NET Framework Version 4.6
-
-
Q2: What are the differences between Advance Steel 2017 and previous versions?
-
A2: Some of the main differences between Advance Steel 2017 and previous versions are:
-
-
Advance Steel 2017 supports Windows 10 operating system.
-
Advance Steel 2017 has improved performance and stability.
-
Advance Steel 2017 has new and enhanced features and tools, such as the new ribbon interface, the new connection vault, the new model browser, the new drawing style manager, the new BOM editor, and more.
-
Advance Steel 2017 has better interoperability and integration with other Autodesk products, such as Revit, AutoCAD, Navisworks, and BIM 360.
-
-
Q3: How can I update or upgrade my Advance Steel 2017 license?
-
A3: You can update or upgrade your Advance Steel 2017 license by doing the following:
-
-
If you have a subscription or maintenance plan for Advance Steel 2017, you can download and install the latest updates and service packs from your Autodesk Account or from the Autodesk Advance Steel Downloads page.
-
If you want to upgrade to a newer version of Advance Steel, you can purchase a new license or renew your subscription or maintenance plan from your Autodesk Account or from an Autodesk reseller.
-
-
Q4: How can I get support or training for Advance Steel 2017?
-
A4: You can get support or training for Advance Steel 2017 by accessing the following resources:
The Autodesk Advance Steel Community Forum, which allows you to ask questions, share tips, and interact with other users and experts of Advance Steel 2017.
-
The Autodesk Support Center, which provides technical support, troubleshooting, and customer service for Advance Steel 2017.
-
The Autodesk Services Marketplace, which connects you with qualified professionals who can provide training, consulting, and implementation services for Advance Steel 2017.
-
-
Q5: How can I integrate Advance Steel 2017 with other Autodesk products?
-
A5: You can integrate Advance Steel 2017 with other Autodesk products by using the following methods:
-
-
You can import and export data between Advance Steel 2017 and Revit using the Advance Steel Extension for Revit. This allows you to synchronize structural models and data between the two software.
-
You can import and export data between Advance Steel 2017 and AutoCAD using the Advance Steel Extension for AutoCAD. This allows you to create and modify structural elements and connections in AutoCAD and transfer them to Advance Steel 2017.
-
You can import and export data between Advance Steel 2017 and Navisworks using the Advance Steel Extension for Navisworks. This allows you to review and coordinate structural models and data in Navisworks.
-
You can import and export data between Advance Steel 2017 and BIM 360 using the Advance Steel Extension for BIM 360. This allows you to collaborate and share structural models and data in the cloud using BIM 360.
-
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Annabelle Movie Download 720p 16 !!HOT!!.md b/spaces/tioseFevbu/cartoon-converter/scripts/Annabelle Movie Download 720p 16 !!HOT!!.md
deleted file mode 100644
index 214af605fbabe2d8d41e30bd7fceeba87fe0b36d..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Annabelle Movie Download 720p 16 !!HOT!!.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
How to Download Annabelle Movie in 720p HD Quality
-
If you are a fan of horror movies, you might have heard of the Annabelle series, which is a spin-off of the popular Conjuring franchise. Annabelle is a haunted doll that causes terror and mayhem wherever it goes. The series consists of three movies: Annabelle (2014), Annabelle: Creation (2017), and Annabelle Comes Home (2019).
-
In this article, we will show you how to download Annabelle movie in 720p HD quality using torrent sites. Torrent sites are online platforms that allow users to share and download files, such as movies, music, games, etc. However, torrenting is illegal in many countries and can expose you to malware, viruses, and legal issues. Therefore, we advise you to use a VPN (Virtual Private Network) service to protect your online privacy and security when downloading torrents.
Steps to Download Annabelle Movie in 720p HD Quality
-
-
Choose a reliable torrent site that has the Annabelle movie you want to download. Some of the popular torrent sites are YTS.mx[^1^], The Pirate Bay, 1337x, etc. You can also use a torrent search engine like Torrentz2 or Zooqle to find the best torrent for your movie.
-
Search for the keyword "Annabelle Movie Download 720p 16" on the torrent site. This will show you a list of torrents that match your query. You can sort them by seeders, leechers, size, date, etc. to find the best one for your needs. Seeders are users who have the complete file and are sharing it with others. Leechers are users who are downloading the file but have not completed it yet. The more seeders and fewer leechers a torrent has, the faster and more reliable it will be.
-
Download the torrent file or magnet link of the Annabelle movie you want to download. A torrent file is a small file that contains information about the file you want to download, such as its name, size, hash, trackers, etc. A magnet link is a URL that contains the same information as a torrent file but does not require downloading. You can open either of them with a torrent client, which is a software that enables you to download and upload files using the BitTorrent protocol.
-
Choose a reputable torrent client that supports your device and operating system. Some of the popular torrent clients are uTorrent, BitTorrent, qBittorrent, Vuze, etc. You can download them from their official websites or app stores.
-
Open the torrent file or magnet link with your torrent client. This will start downloading the Annabelle movie in 720p HD quality to your device. You can monitor the progress of your download on your torrent client interface. You can also pause, resume, or cancel your download at any time.
-
Enjoy watching the Annabelle movie in 720p HD quality on your device. You can use any media player that supports the video format of your downloaded file. Some of the common video formats are MP4, MKV, AVI, etc. You can also use subtitles or dubbing if available.
-
-
Tips and Warnings
-
-
Always use a VPN service when downloading torrents to hide your IP address and encrypt your traffic. This will prevent your ISP (Internet Service Provider) from tracking your online activity and throttling your speed or blocking your access to torrent sites. It will also protect you from hackers, malware, viruses, and legal issues that may arise from torrenting.
-
Choose a VPN service that has fast speed, unlimited bandwidth, no logs policy, and multiple servers in different countries. Some of the best VPN services for torrenting are ExpressVPN, NordVPN, Surfshark, etc.
-
Check the comments and ratings of the torrents before downloading them to avoid fake or malicious files. You can also use antivirus software or malware scanners to scan your downloaded files for any threats.
-
Delete or seed your downloaded files after watching them to save space on your device and help other users who want to download them.
-
Do not download or share copyrighted content.
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Atomix VirtualDJ Pro Infinity 8.3.4787 Crack WORK.md b/spaces/tioseFevbu/cartoon-converter/scripts/Atomix VirtualDJ Pro Infinity 8.3.4787 Crack WORK.md
deleted file mode 100644
index 06bf6d745aa5c64de6be9026caa105d79d32b1f0..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Atomix VirtualDJ Pro Infinity 8.3.4787 Crack WORK.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
How to Crack Atomix VirtualDJ Pro Infinity 8.3.4787 and Enjoy Its Amazing Features
-
Atomix VirtualDJ Pro Infinity is a professional DJ software that allows you to mix music, videos, and karaoke with ease. It has a powerful engine that lets you manipulate and combine different components of your tracks, such as vocals, instruments, kicks, hi-hats, etc. You can also use performance pads to unleash your creativity and create stunning remixes on the fly.
If you want to enjoy the full potential of VirtualDJ Pro Infinity, you need to crack it and activate it with a license key. Here are the steps to do that:
-
-
Turn off your anti-virus software and download the crack file from this link [^1^].
-
Install VirtualDJ Pro Infinity 8.3.4787 trial setup.exe from the downloaded file.
-
Do not run the application after installation.
-
Block VirtualDJ via firewall or run virtualdj_hosts_patch.cmd as an administrator to prevent it from connecting to the internet.
-
Copy virtualdj_pro file from Crack folder and paste it into the installation directory (C:\Program Files\VirtualDJ).
-
Run VirtualDJ Pro Infinity and enter any name and email address when prompted for registration.
-
Enjoy your cracked VirtualDJ Pro Infinity with unlimited features!
-
-
Note: This crack is only for educational purposes and we do not support piracy. Please buy the original software from the official website [^3^] if you like it and can afford it.
Some Tips and Tricks to Master VirtualDJ Pro Infinity
-
Now that you have cracked VirtualDJ Pro Infinity, you might be wondering how to use it like a pro. Here are some tips and tricks that will help you improve your DJ skills and impress your audience.
-
-
-
Use the sync button to match the tempo and phase of two tracks automatically. This will save you time and make your transitions smoother. You can also use the pitch slider to adjust the tempo manually if you prefer.
-
Use the cue points to mark specific parts of a track that you want to play or loop. You can set up to 8 cue points per track and trigger them with the performance pads or the keyboard. You can also use cue points to jump to different parts of a track or create mashups.
-
Use the effects to add some spice to your mix. VirtualDJ Pro Infinity comes with a wide range of effects, such as echo, flanger, reverb, filter, etc. You can apply them to one or both decks, or to the master output. You can also chain multiple effects together and adjust their parameters with the knobs or sliders.
-
Use the sampler to play short samples, such as vocals, drums, horns, etc. You can load your own samples or use the ones provided by VirtualDJ Pro Infinity. You can trigger them with the performance pads or the keyboard, and sync them with the tempo of the tracks. You can also record your own samples from any source and save them for later use.
-
Use the video mixer to mix videos along with your music. VirtualDJ Pro Infinity supports various video formats, such as MP4, AVI, WMV, etc. You can load videos on each deck and mix them with crossfader and effects. You can also add text, images, logos, etc. to your video mix with the video editor.
-
-
These are just some of the tips and tricks that you can use with VirtualDJ Pro Infinity. There are many more features and functions that you can explore and customize according to your preferences. For more tutorials and guides, you can check out this video [^1^] or visit the official forum .
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Copy My Data For Mac.md b/spaces/tioseFevbu/cartoon-converter/scripts/Copy My Data For Mac.md
deleted file mode 100644
index 6a4caf5191df6a1fc5d02126964e7e0dede6b60b..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Copy My Data For Mac.md
+++ /dev/null
@@ -1,221 +0,0 @@
-
-
-
-
Copy My Data for Mac: How to Transfer Your Data from One Mac to Another
-
-
-
If you have a new Mac and want to transfer your data from your old one, you might be wondering how to do it easily and quickly. Fortunately, there is a handy app called Copy My Data that can help you with this task.
-
Copy My Data is a free app that allows you to copy your contacts, calendars, photos, videos, messages, notes, and more from one device to another over Wi-Fi. You can use it to transfer your data between two Macs, or between a Mac and an iPhone, iPad, iPod touch, or Android device.
In this article, we will show you how to use Copy My Data for Mac to transfer your data from one Mac to another. We will also show you some other methods to copy your data, such as using Migration Assistant, Time Machine, or keyboard shortcuts.
-
-
-
What You Need to Copy Your Data
-
-
-
Before you start copying your data, you need to make sure you have everything you need. Here are some of the things you need to copy your data:
-
-
A Wi-Fi network that both Macs can connect to.
-
The Copy My Data app installed on both Macs. You can download it from the Mac App Store for free.
-
The devices you want to transfer data from and to. Make sure they have enough battery power or are plugged in.
-
Optionally, you can also use a Time Machine backup or a USB storage device to copy your data. You will need an appropriate adapter if your Mac does not have a USB port.
-
-
Once you have everything ready, you can start copying your data with Copy My Data for Mac.
-
-
-
How to Copy Your Data Wirelessly with Migration Assistant
-
-
-
If you want to transfer all or most of your data from one Mac to another, you can use Migration Assistant, a built-in app that lets you move your user accounts, apps, files, folders, and settings over Wi-Fi. This method is recommended if you are setting up a new Mac or replacing an old one.
-
To use Migration Assistant, you need to follow these steps on both your new and old Macs:
-
-
-
-
How to Use Migration Assistant on Your New Mac
-
-
-
-
Turn on your new Mac and follow the onscreen instructions until you see the Migration Assistant screen.
-
Select the option to transfer from a Mac, Time Machine backup, or startup disk, and click Continue.
-
If prompted, enter your administrator password and click OK.
-
Choose the other Mac from the list of available devices, and click Continue.
-
A security code will appear on both Macs. Make sure they match, and click Continue on your new Mac.
-
-
-
How to Use Migration Assistant on Your Old Mac
-
-
-
-
Open Migration Assistant from the Utilities folder in the Applications folder.
-
Select the option to transfer to another Mac, and click Continue.
-
If prompted, enter your administrator password and click OK.
-
Wait for the other Mac to appear on the screen, and click Continue.
-
A security code will appear on both Macs. Make sure they match, and click Continue on your old Mac.
-
-
-
-
How to Select and Transfer the Information You Want
-
-
-
-
On your new Mac, you will see a list of information that you can transfer from your old Mac. You can select or deselect the items you want by checking or unchecking the boxes next to them.
-
You can also click the disclosure triangle next to each item to see more details and options. For example, you can choose which user accounts, apps, or folders you want to transfer.
-
If you have more than one user account on your old Mac, you will need to enter the password for each account that you want to transfer.
-
After you have selected everything you want to transfer, click Continue.
-
The transfer will begin and may take some time depending on the amount of data and the speed of your Wi-Fi network. You can see the progress and estimated time on both Macs.
-
When the transfer is complete, click Quit on both Macs. Your new Mac will restart and you will be able to log in with your transferred user accounts and access your transferred data.
-
-
-
How to Copy Your Data from a Time Machine Backup or a USB Storage Device
-
-
-
If you have a Time Machine backup or a USB storage device that contains your data, you can also use them to copy your data to your new Mac. This method is useful if you don't have a Wi-Fi network or if you only want to transfer some of your data.
-
To use this method, you need to follow these steps:
-
-
-
How to Connect the Backup or Storage Device to Your New Mac
-
-
-
-
Connect the backup or storage device to your new Mac using an appropriate adapter if necessary. For example, if your device has a USB-A connector and your Mac has a USB-C port, you will need a USB-A to USB-C adapter.
-
Wait for the device to appear on your desktop or in the Finder sidebar. If it does not appear, you may need to format it for Mac using Disk Utility.
-
Open the device and locate the files or folders that you want to copy. You can also use Spotlight or Finder search to find them.
-
Select the files or folders and drag them to your new Mac. You can drag them to the desktop, the Documents folder, or any other location you prefer.
-
Wait for the copying process to finish. You can see the progress and estimated time in a window that pops up.
-
Eject the device by dragging it to the Trash icon or by right-clicking or control-clicking it and choosing Eject.
-
Disconnect the device from your new Mac.
-
-
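If you are moving a large folder from the storage device, dragging it in the Finder works, but the same copy can also be scripted. The sketch below is a hypothetical example using Python's standard library; the paths are placeholders, and external drives normally mount under /Volumes on a Mac.

```python
# Minimal sketch: copy a folder from a mounted USB drive to the new Mac.
# Paths are examples; external drives usually mount under /Volumes on macOS.
# Requires Python 3.8+ for the dirs_exist_ok option.
import shutil
from pathlib import Path

def copy_from_usb(source: str, destination: str) -> None:
    src, dst = Path(source), Path(destination)
    if not src.exists():
        raise FileNotFoundError(f"Source not found: {src}")
    shutil.copytree(src, dst, dirs_exist_ok=True)  # keeps the folder structure
    print(f"Copied {src} -> {dst}")

if __name__ == "__main__":
    copy_from_usb(
        "/Volumes/MyUSB/Documents",                      # placeholder source path
        str(Path.home() / "Documents" / "FromUSB"),      # placeholder destination
    )
```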
-
-
How to Restore Your Content from a Backup
-
-
-
-
If you have a Time Machine backup, you can use Time Machine to restore your content. To do this, open Time Machine from the Applications folder or from the menu bar icon.
-
Select the backup that contains your data. You can use the timeline on the right side of the screen or the arrows at the bottom to navigate through different backup dates and times.
-
Find the files or folders that you want to restore. You can also use Spotlight or Finder search to find them.
-
Select the files or folders and click Restore. You can also right-click or control-click them and choose Restore.
-
Choose where you want to restore them on your new Mac. You can overwrite existing files or keep both versions.
-
Wait for the restoring process to finish. You can see the progress and estimated time in a window that pops up.
-
If you have another backup software, such as Carbon Copy Cloner or SuperDuper, you can use it to restore your content as well. Follow the instructions provided by the software developer.
-
-
-
How to Copy and Paste on Mac with Keyboard Shortcuts
-
-
-
If you want to copy and paste a small amount of data, such as a file, a text, or an image, you can use keyboard shortcuts to do it quickly and easily. Keyboard shortcuts are combinations of keys that you press to perform certain actions. They can save you time and effort when working on your Mac.
-
To use keyboard shortcuts to copy and paste on Mac, you need to follow these steps:
-
-
-
How to Copy on Mac
-
-
-
-
Select the file or text that you want to copy. You can use your mouse or trackpad to drag over the item, or use the Shift and arrow keys to highlight it.
-
Press Command + C on your keyboard. This will copy the item to your clipboard, which is a temporary storage area for copied data.
-
You will see a brief animation or a sound indicating that the item has been copied. You can also check the Edit menu in the menu bar and see that the Copy option is highlighted.
-
-
-
-
How to Paste on Mac
-
-
-
-
Move the cursor to where you want to paste the item. You can use your mouse or trackpad to click on the location, or use the arrow keys to navigate.
-
Press Command + V on your keyboard. This will paste the item from your clipboard to the location.
-
You will see a brief animation or a sound indicating that the item has been pasted. You can also check the Edit menu in the menu bar and see that the Paste option is highlighted.
-
-
-
-
How to Cut on Mac
-
-
-
-
Select the file or text that you want to cut. You can use your mouse or trackpad to drag over the item, or use the Shift and arrow keys to highlight it.
-
Press Command + X on your keyboard. This will cut the item from its original location and copy it to your clipboard.
-
You will see a brief animation or a sound indicating that the item has been cut. You can also check the Edit menu in the menu bar and see that the Cut option is highlighted.
-
Move the cursor to where you want to paste the item. You can use your mouse or trackpad to click on the location, or use the arrow keys to navigate.
-
Press Command + V on your keyboard. This will paste the item from your clipboard to the location.
-
You will see a brief animation or a sound indicating that the item has been pasted. You can also check the Edit menu in the menu bar and see that the Paste option is highlighted.
-
-
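The same clipboard that Command + C and Command + V use can also be reached from scripts through the built-in pbcopy and pbpaste command-line tools on macOS. The following is a small illustrative Python wrapper around them, offered as a sketch rather than a feature of Copy My Data.

```python
# Minimal sketch: read and write the macOS clipboard from Python
# via the built-in pbcopy/pbpaste command-line tools.
import subprocess

def copy_to_clipboard(text: str) -> None:
    subprocess.run(["pbcopy"], input=text.encode("utf-8"), check=True)

def paste_from_clipboard() -> str:
    result = subprocess.run(["pbpaste"], capture_output=True, check=True)
    return result.stdout.decode("utf-8")

if __name__ == "__main__":
    copy_to_clipboard("Hello from the clipboard")
    print(paste_from_clipboard())  # prints the text we just copied
```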
-
How to Copy and Paste on Mac with Mouse or Trackpad
-
-
-
If you prefer to use your mouse or trackpad to copy and paste on Mac, you can also do that with a few clicks. You can use the right-click or the control-click to access the contextual menu that contains the copy, paste, and cut options. This method is convenient if you don't want to use the keyboard or if you want to have more control over the copying and pasting process.
-
To use your mouse or trackpad to copy and paste on Mac, you need to follow these steps:
-
-
-
How to Copy on Mac
-
-
-
-
Select the file or text that you want to copy. You can use your mouse or trackpad to drag over the item, or use the Shift and arrow keys to highlight it.
-
Right-click or control-click the item. This will open a contextual menu that contains various options.
-
Choose Copy from the menu. This will copy the item to your clipboard.
-
You will see a brief animation or a sound indicating that the item has been copied. You can also check the Edit menu in the menu bar and see that the Copy option is highlighted.
-
-
-
-
How to Paste on Mac
-
-
-
-
Move the cursor to where you want to paste the item. You can use your mouse or trackpad to click on the location, or use the arrow keys to navigate.
-
Right-click or control-click where you want to paste. This will open a contextual menu that contains various options.
-
Choose Paste from the menu. This will paste the item from your clipboard to the location.
-
You will see a brief animation or a sound indicating that the item has been pasted. You can also check the Edit menu in the menu bar and see that the Paste option is highlighted.
-
-
-
-
How to Cut on Mac
-
-
-
-
Select the file or text that you want to cut. You can use your mouse or trackpad to drag over the item, or use the Shift and arrow keys to highlight it.
-
Right-click or control-click the item. This will open a contextual menu that contains various options.
-
Choose Cut from the menu. This will cut the item from its original location and copy it to your clipboard.
-
You will see a brief animation or a sound indicating that the item has been cut. You can also check the Edit menu in the menu bar and see that the Cut option is highlighted.
-
Move the cursor to where you want to paste the item. You can use your mouse or trackpad to click on the location, or use the arrow keys to navigate.
-
Right-click or control-click where you want to paste. This will open a contextual menu that contains various options.
-
Choose Paste from the menu. This will paste the item from your clipboard to the location.
-
You will see a brief animation or a sound indicating that the item has been pasted. You can also check the Edit menu in the menu bar and see that the Paste option is highlighted.
-
-
-
Conclusion
-
-
-
In this article, we have shown you how to use Copy My Data for Mac to transfer your data from one Mac to another. We have also shown you some other methods to copy your data, such as using Migration Assistant, Time Machine, or keyboard shortcuts.
-
Copying your data on Mac is easy and fast with these methods. You can choose the one that suits your needs and preferences. Whether you want to transfer all or some of your data, you can do it without losing any quality or information.
-
Here are some tips and recommendations for copying data on Mac:
-
-
Make sure you have a reliable Wi-Fi network or a compatible backup or storage device before you start copying your data.
-
Back up your data regularly to avoid losing it in case of any accidents or errors.
-
Use Copy My Data for Mac to transfer your data between different devices, such as Macs, iPhones, iPads, iPods, or Androids.
-
Use Migration Assistant to transfer your data between two Macs over Wi-Fi.
-
Use Time Machine or another backup software to restore your data from a backup.
-
Use keyboard shortcuts or mouse clicks to copy and paste small amounts of data.
-
-
We hope this article has helped you learn how to copy your data on Mac. If you have any questions or feedback, please let us know in the comments below.
-
-
-
FAQs:
-
-
How do I copy my data from a Mac to an iPhone?
-
You can use Copy My Data for Mac to transfer your data from a Mac to an iPhone over Wi-Fi. You need to install the app on both devices and follow the instructions on the screen. You can also use iTunes or Finder to sync your data from a Mac to an iPhone using a USB cable.
-
How do I copy my data from a Mac to an Android?
-
You can use Copy My Data for Mac to transfer your data from a Mac to an Android over Wi-Fi. You need to install the app on both devices and follow the instructions on the screen. You can also use Android File Transfer or another software to drag and drop your files from a Mac to an Android using a USB cable.
-
How do I copy my data from one user account to another on the same Mac?
-
You can use Migration Assistant to transfer your data from one user account to another on the same Mac. You need to log out of the current user account and log in as an administrator. Then, open Migration Assistant and select the option to transfer information to this Mac. Choose the user account that you want to transfer from and select the information that you want to transfer. Click Continue and wait for the transfer to finish.
-
How do I copy my data from a Windows PC to a Mac?
-
You can use Migration Assistant to transfer your data from a Windows PC to a Mac over Wi-Fi or Ethernet. You need to download and install Windows Migration Assistant on your PC and open Migration Assistant on your Mac. Select the option to transfer from a Windows PC and follow the instructions on the screen. You can also use an external hard drive or another storage device to copy your files from a PC to a Mac.
-
How do I copy my photos from a Mac to iCloud?
-
You can use iCloud Photos to sync your photos from a Mac to iCloud. You need to turn on iCloud Photos on your Mac and sign in with the same Apple ID that you use on your other devices. Your photos will be uploaded and stored in iCloud automatically. You can also use Photos for Mac or another app to import your photos from a camera or a memory card to your Mac and then upload them to iCloud.
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Cpa Network Script Nulled Theme.md b/spaces/tioseFevbu/cartoon-converter/scripts/Cpa Network Script Nulled Theme.md
deleted file mode 100644
index fc38ef286f8e51afe8776c91515603e01c57e430..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Cpa Network Script Nulled Theme.md
+++ /dev/null
@@ -1,140 +0,0 @@
-
-
CPA Network Script Nulled Theme: What You Need to Know
-
If you are a webmaster who wants to run your own CPA/Affiliate network, you might have heard about CPA network script and nulled theme. But what are they exactly and why are they popular among some webmasters? In this article, we will explain what CPA network script is, what nulled theme is, and what you need to know about them before using them for your website.
-
Benefits of CPA Network Script
-
CPA network script is a software that allows you to create your own CPA/Affiliate network easily and efficiently. It provides you with all the tools and features you need to manage your offers, track your conversions, pay your affiliates, and more. With CPA network script, you can have full control over your network and customize it according to your preferences and needs.
Dashboard: A user-friendly and intuitive dashboard that shows you the overview of your network's performance, such as revenue, clicks, conversions, EPC, etc.
-
Offer Management: A comprehensive and flexible offer management system that allows you to add, edit, delete, approve, reject, pause, resume, and categorize your offers. You can also set different payout rates, commission types, caps, geo-targeting, tracking parameters, landing pages, etc. for your offers.
-
Tracking System: A robust and accurate tracking system that tracks every click, impression, conversion, and event on your network. You can also integrate with third-party tracking platforms, such as Voluum, Binom, BeMob, etc.
-
Payment System: A secure and reliable payment system that allows you to pay your affiliates on time and in various methods, such as PayPal, Payoneer, Wire Transfer, etc. You can also set different payment terms, thresholds, currencies, fees, etc. for your affiliates.
-
Affiliate Management: A powerful and easy-to-use affiliate management system that allows you to manage your affiliates effectively. You can add, edit, delete, approve, reject, ban, suspend, and assign different roles and permissions to your affiliates. You can also communicate with your affiliates via email or chat.
-
Reporting and Analytics: A detailed and insightful reporting and analytics system that allows you to monitor and optimize your network's performance. You can generate various reports and charts based on different metrics, filters, time periods, etc. You can also export or download your data in various formats.
-
And more: There are many more features of CPA network script that make it a complete solution for your CPA/Affiliate network. Some of them are: API integration, fraud detection, smart link, postback, offer wall, landing page builder, etc. A minimal sketch of how a conversion postback could be received follows this list.
-
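To make the tracking and postback features listed above more concrete, here is a minimal, hypothetical sketch of a conversion postback receiver built with Flask. The endpoint path, the parameter names (click_id, payout), and the in-memory store are illustrative assumptions, not the API of any particular CPA network script.

```python
# Minimal, hypothetical postback receiver: an advertiser or tracker calls this URL
# when a conversion happens, e.g. /postback?click_id=abc123&payout=1.50
from flask import Flask, request

app = Flask(__name__)
conversions = {}  # in-memory store for the sketch; a real script would use a database

@app.route("/postback")
def postback():
    click_id = request.args.get("click_id")
    payout = float(request.args.get("payout", 0))
    if not click_id:
        return "missing click_id", 400
    conversions[click_id] = payout  # credit the affiliate who sent this click
    return "OK", 200

if __name__ == "__main__":
    app.run(port=8080)
```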
-
Examples of CPA Network Script
-
There are many CPA network scripts available in the market that you can choose from. Some of the examples are:
-
-
-
| Name | Description | Price |
| --- | --- | --- |
| DreamAff | A premium CPA network script that offers a fully responsive design, advanced features, lifetime updates, and 24/7 support. | $499 |
| AdFlex | A popular CPA network script that offers a multi-language interface, multiple payment gateways, custom domains, and more. | $69 |
| OfferWall | A simple and affordable CPA network script that offers a ready-made offer wall template, easy installation, and basic features. | $29 |
-
These are just some of the examples of CPA network script. You can find more options online or create your own custom script if you have the skills and resources.
-
Risks of Nulled Theme
-
A nulled theme is a theme that has been modified or cracked to remove the license verification or activation code from the original theme. It is usually distributed for free or at a very low price on various websites or forums. Some webmasters use nulled themes to save money or to access premium features without paying for them. However, using a nulled theme is very risky and can cause serious problems for your website.
-
Legal Issues of Nulled Theme
-
A nulled theme can violate the intellectual property rights and terms of service of the original theme developers. By using a nulled theme, you are infringing on their rights and breaking their rules. This can result in legal action, such as lawsuits, fines, or penalties. You can also lose your access to the original theme and its updates, support, and features, and damage your reputation and credibility as a webmaster.
-
Security Issues of Nulled Theme
-
A nulled theme can contain malicious code, malware, backdoors, etc. that can compromise the security and performance of your website. These can allow hackers to access your website, steal your data, inject ads, redirect your traffic, or even take over your website. They can also harm your visitors, infect their devices, or expose their personal information. You can also face legal consequences if your website is involved in any illegal or unethical activities because of the nulled theme.
-
-
Quality Issues of Nulled Theme
-
A nulled theme can have bugs, errors, compatibility issues, outdated features, etc. that can affect the quality and functionality of your website. These can cause your website to crash, slow down, display incorrectly, or lose some features. They can also make your website vulnerable to attacks or exploits. You can also miss out on the latest updates, improvements, and innovations from the original theme developers, and have difficulty finding support or solutions for your problems with the nulled theme.
-
Alternatives to Nulled Theme
-
Using a nulled theme is not worth the risk and hassle for your website. It is better to use the original theme or another legitimate alternative instead. Here are some of the alternatives you can consider:
-
Original Theme
-
The best alternative to a nulled theme is the original theme. By using the original theme, you can enjoy more benefits than with a nulled theme, such as:
-
-
Support: You can get professional and timely support from the original theme developers or their team. You can also access their documentation, tutorials, forums, etc.
-
Updates: You can get regular and automatic updates from the original theme developers that fix bugs, improve performance, add features, etc.
-
Customization Options: You can get more customization options from the original theme that allow you to change the appearance, layout, functionality, etc. of your website according to your needs and preferences.
-
And more: There are many more benefits of using the original theme that make it a worthwhile investment for your website. Some of them are: security, quality, compatibility, reputation, etc.
-
-
The price of the original theme may vary depending on the features, quality, popularity, etc. of the theme. However, you can find some affordable options online or look for discounts or coupons that can lower the cost.
-
Free Theme
-
If you have a limited budget but still want a quality theme for your website, you can opt for a free theme. A free theme is a theme that is available for free or at no cost on various websites or platforms. Some webmasters use free themes to test their websites or to start their online presence.
-
However, not all free themes are created equal. Some free themes may have some drawbacks or limitations compared to premium themes, such as:
-
Support: You may not get any support or assistance from the free theme developers or their team. You may have to rely on yourself or other users to solve your problems.
-
Updates: You may not get any updates or improvements from the free theme developers. You may have to use the same version of the theme for a long time or look for other alternatives.
-
Customization Options: You may not get many customization options from the free theme. You may have to stick with the default settings or make some changes manually.
-
And more: there are further drawbacks and limitations of free themes to be aware of, including security, quality, compatibility, and reputation.
-
-
Therefore, be careful and selective when choosing a free theme for your website. Check its reviews, ratings, and user feedback before using it, and scan it for malicious code, malware, or backdoors that could harm your website.
-
Some examples of free themes that are compatible with CPA network script are:
-
Astra: A fast, lightweight, and customizable free theme that works well with CPA network script and other plugins.
-
OceanWP: A versatile, responsive, and SEO-friendly free theme that offers many features and options for CPA network script and other plugins.
-
GeneratePress: A simple, secure, and stable free theme that provides a solid foundation for CPA network script and other plugins.
-
These are just some of the examples of free themes that are compatible with CPA network script. You can find more options online or create your own custom theme if you have the skills and resources.
-
Premium Theme
-
If you want a professional and unique theme for your website, you can opt for a premium theme. A premium theme is a theme that is available for a certain price or fee on various websites or platforms. Some webmasters use premium themes to enhance their websites or to stand out from their competitors.
-
Premium themes usually offer more benefits than free themes, such as:
-
-
Support: You can get professional and timely support from the premium theme developers or their team. You can also access their documentation, tutorials, forums, etc.
-
Updates: You can get regular and automatic updates from the premium theme developers that fix bugs, improve performance, add features, etc.
-
Customization Options: You can get more customization options from the premium theme that allow you to change the appearance, layout, functionality, etc. of your website according to your needs and preferences.
-
And more: there are many further benefits of using a premium theme that make it a worthwhile investment for your website, including security, quality, compatibility, and reputation.
-
-
The price of the premium theme may vary depending on the features, quality, popularity, etc. of the theme. However, you can find some reasonable options online or look for discounts or coupons that can lower the cost.
-
Some examples of premium themes that are compatible with CPA network script are:
-
Doo: A modern and stylish premium theme that is designed for CPA network script and other affiliate marketing plugins. Price: $59.
-
Couponis: A sleek and elegant premium theme that is optimized for CPA network script and other coupon/deal plugins. Price: $49.
-
CouponXL: A powerful and flexible premium theme that is suitable for CPA network script and other offer/cashback plugins. Price: $49.
-
These are just some of the examples of premium themes that are compatible with CPA network script. You can find more options online or create your own custom theme if you have the skills and resources.
-
Conclusion
-
In conclusion, CPA network script is software that allows you to create your own CPA/Affiliate network easily and efficiently. It provides all the tools and features you need to manage your offers, track your conversions, pay your affiliates, and more. However, using a nulled theme with CPA network script is very risky and can cause serious problems for your website: a nulled theme can violate the intellectual property rights and terms of service of the original theme developers, it can contain malicious code, malware, or backdoors that compromise the security and performance of your site, and it can have bugs, errors, compatibility issues, and outdated features that affect its quality and functionality. It is therefore better to use the original theme or another legitimate alternative. The original theme provides more benefits, such as support, updates, and customization options; a free theme can be a good option for webmasters on a limited budget who still want a quality theme; and a premium theme can be a worthwhile investment for webmasters who want a professional and unique theme for their website.
-
We hope this article has helped you understand what CPA network script nulled theme is and what you need to know about it before using it for your website. If you have any questions or comments, please feel free to leave them below. Thank you for reading!
-
FAQs
-
Here are some of the frequently asked questions related to the topic of this article and their answers:
-
-
What is CPA network script?
-
CPA network script is software that allows you to create your own CPA/Affiliate network easily and efficiently.
-
What is a nulled theme?
-
A nulled theme is a theme that has been modified or cracked to remove the license verification or activation code from the original theme.
-
Why is a nulled theme risky?
-
A nulled theme is risky because it can violate the intellectual property rights and terms of service of the original theme developers, it can contain malicious code, malware, or backdoors that compromise the security and performance of your website, and it can have bugs, errors, compatibility issues, or outdated features that affect the quality and functionality of your website.
-
What are the alternatives to a nulled theme?
-
The alternatives to a nulled theme are the original theme, a free theme, and a premium theme.
-
Where can I find CPA network script and themes?
-
You can find CPA network script and themes on various websites or platforms online. Some of them are: Codecanyon, Themeforest, WordPress, etc.
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Day Trading Options Jeff Augen Free Pdf 32 TOP.md b/spaces/tioseFevbu/cartoon-converter/scripts/Day Trading Options Jeff Augen Free Pdf 32 TOP.md
deleted file mode 100644
index d55e2489c345943531265011f1ded7b975e83141..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Day Trading Options Jeff Augen Free Pdf 32 TOP.md
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
How to Profit from Day Trading Options with Jeff Augen's Strategies
-
Day trading options can be a lucrative way to take advantage of price distortions and anomalies in very brief time frames. However, it also requires a high level of skill, discipline and risk management. In this article, we will explore some of the strategies and techniques that Jeff Augen, a veteran option trader and author of Day Trading Options: Profiting from Price Distortions in Very Brief Time Frames,[^1^] has developed and shared in his book.
Day trading options is the practice of buying and selling options contracts within the same trading day, usually with the intention of closing the position before the market closes. Day traders aim to exploit short-term price movements and volatility fluctuations that occur during the day, often triggered by news events, earnings announcements, technical signals or market sentiment.
-
Options are contracts that give the buyer the right, but not the obligation, to buy or sell an underlying asset at a specified price (strike) before or on a certain date (expiration). Options can be classified into two types: calls and puts. A call option gives the buyer the right to buy the underlying asset, while a put option gives the buyer the right to sell the underlying asset. The seller (or writer) of an option receives a premium from the buyer in exchange for taking on the risk of being assigned (or exercised) if the option is in-the-money at expiration.
-
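For a concrete sense of how a call works, here is a small, self-contained sketch (every number below is invented for illustration and is not taken from Augen's book) that computes the profit or loss of a long call held to expiration and compares it with buying the same 100 shares outright:

```python
# Hypothetical example: long call held to expiration vs. owning the shares.
# Every figure below is made up purely for illustration.

def long_call_pnl(spot_at_expiry, strike, premium, contracts=1, multiplier=100):
    """Profit/loss of a long call at expiration (standard 100-share contract)."""
    intrinsic = max(spot_at_expiry - strike, 0.0)   # value of the right to buy at the strike
    return (intrinsic - premium) * multiplier * contracts

strike, premium = 50.0, 2.0        # buy one 50-strike call for $2.00 per share
capital_option = premium * 100     # $200 controls 100 shares
capital_shares = 50.0 * 100        # $5,000 buys the same 100 shares outright

for spot in (48.0, 52.0, 56.0):
    option_pnl = long_call_pnl(spot, strike, premium)
    shares_pnl = (spot - 50.0) * 100
    print(f"spot={spot:5.1f}  option P/L={option_pnl:8.2f} ({option_pnl / capital_option:7.1%})"
          f"  shares P/L={shares_pnl:8.2f} ({shares_pnl / capital_shares:7.1%})")
```

A move of a few dollars against the position wipes out the entire premium, while a modest rally produces a triple-digit percentage return on a far smaller capital outlay; that asymmetry is exactly the leverage discussed in the next section.
-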
Why Day Trade Options?
-
Day trading options has several advantages over other forms of trading, such as:
-
-
-
Leverage: Options allow traders to control a large number of shares with a relatively small amount of capital. This means that traders can potentially magnify their profits (and losses) with a small price movement in the underlying asset.
-
Flexibility: Options offer a variety of strategies and combinations that can suit different market conditions and risk preferences. Traders can use options to speculate on the direction, magnitude or volatility of the underlying asset's price movement, or to hedge their existing positions.
-
Liquidity: Options are traded on exchanges and have standardized specifications, which make them easy to buy and sell. Some options have high trading volume and narrow bid-ask spreads, which reduce transaction costs and slippage.
-
-
What are Jeff Augen's Strategies?
-
Jeff Augen is an experienced option trader who has written several books on options trading, including Day Trading Options. In his book, he reveals insights and techniques that he has developed and tested over many years of trading. Some of his strategies include:
-
-
Trading volatility distortions: Augen introduces a concept called the implied volatility surface, which is a three-dimensional representation of how implied volatility varies across different strike prices and expiration dates for a given underlying asset. He shows how to use this tool to identify and trade situations where implied volatility is either too high or too low compared to historical volatility or fair volatility (a toy version of this comparison is sketched after this list).[^3^]
-
Working with intraday price spike charts: Augen presents a new charting technique that uses ultra-short-term price spikes to measure volatility and identify trends at the single-minute level. He demonstrates how to use this technique to trade options based on intraday price patterns and technical indicators.
-
Special events trading: Augen explains how to trade options around special events that cause significant volatility distortions in the market, such as earnings announcements, dividends, mergers and acquisitions, economic reports and political events. He provides guidelines on how to select the best option strategy, strike price and expiration date for each event.
-
-
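To make the comparison of implied and historical volatility more tangible, here is a minimal sketch; the closing prices and the implied-volatility figure are invented placeholders, and the simple thresholds are not Augen's actual method, only an illustration of the comparison he describes:

```python
import math

# Hypothetical daily closing prices; in practice these would come from market data.
closes = [100.0, 101.2, 100.5, 102.3, 101.8, 103.1, 102.6, 104.0, 103.2, 105.1]

# Daily log returns.
rets = [math.log(b / a) for a, b in zip(closes, closes[1:])]

mean = sum(rets) / len(rets)
var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)   # sample variance
hist_vol = math.sqrt(var) * math.sqrt(252)                   # annualize with ~252 trading days

implied_vol = 0.35   # hypothetical quoted implied volatility (35%)

print(f"historical vol ~ {hist_vol:.1%}, implied vol = {implied_vol:.1%}")
if implied_vol > 1.25 * hist_vol:
    print("options look expensive relative to recent realized volatility")
elif implied_vol < 0.80 * hist_vol:
    print("options look cheap relative to recent realized volatility")
else:
    print("implied and realized volatility are roughly in line")
```

The 1.25x and 0.80x cutoffs are arbitrary placeholders; the point is only that a trader first measures realized volatility and then asks whether the option market's implied volatility is out of line with it.
-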
Conclusion
-
Day trading options can be a profitable way to take advantage of short-term price movements and volatility fluctuations in the market. However, it also requires a high level of skill, discipline and risk management. Jeff Augen's book Day Trading Options provides valuable insights and techniques that can help traders improve their performance and profitability. The book is available for download as a PDF file[^2^] or as an ebook[^1^]. Traders who want to learn more can also explore Augen's other books on options trading.
-
-
\ No newline at end of file
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/__init__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/tomg-group-umd/pez-dispenser/README.md b/spaces/tomg-group-umd/pez-dispenser/README.md
deleted file mode 100644
index 08391472ca5472f9df284911c58fb49c5f8b4bfd..0000000000000000000000000000000000000000
--- a/spaces/tomg-group-umd/pez-dispenser/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Pez Dispenser
-emoji: ⚡
-colorFrom: purple
-colorTo: blue
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/tomofi/MMOCR/configs/ner/bert_softmax/bert_softmax_cluener_18e.py b/spaces/tomofi/MMOCR/configs/ner/bert_softmax/bert_softmax_cluener_18e.py
deleted file mode 100644
index 5fd85d9a858236f4feb8903e3f4bf95f9eccaf94..0000000000000000000000000000000000000000
--- a/spaces/tomofi/MMOCR/configs/ner/bert_softmax/bert_softmax_cluener_18e.py
+++ /dev/null
@@ -1,70 +0,0 @@
-_base_ = [
- '../../_base_/schedules/schedule_adadelta_18e.py',
- '../../_base_/default_runtime.py'
-]
-
-categories = [
- 'address', 'book', 'company', 'game', 'government', 'movie', 'name',
- 'organization', 'position', 'scene'
-]
-
-test_ann_file = 'data/cluener2020/dev.json'
-train_ann_file = 'data/cluener2020/train.json'
-vocab_file = 'data/cluener2020/vocab.txt'
-
-max_len = 128
-loader = dict(
- type='HardDiskLoader',
- repeat=1,
- parser=dict(type='LineJsonParser', keys=['text', 'label']))
-
-ner_convertor = dict(
- type='NerConvertor',
- annotation_type='bio',
- vocab_file=vocab_file,
- categories=categories,
- max_len=max_len)
-
-test_pipeline = [
- dict(type='NerTransform', label_convertor=ner_convertor, max_len=max_len),
- dict(type='ToTensorNER')
-]
-
-train_pipeline = [
- dict(type='NerTransform', label_convertor=ner_convertor, max_len=max_len),
- dict(type='ToTensorNER')
-]
-dataset_type = 'NerDataset'
-
-train = dict(
- type=dataset_type,
- ann_file=train_ann_file,
- loader=loader,
- pipeline=train_pipeline,
- test_mode=False)
-
-test = dict(
- type=dataset_type,
- ann_file=test_ann_file,
- loader=loader,
- pipeline=test_pipeline,
- test_mode=True)
-data = dict(
- samples_per_gpu=8, workers_per_gpu=2, train=train, val=test, test=test)
-
-evaluation = dict(interval=1, metric='f1-score')
-
-model = dict(
- type='NerClassifier',
- encoder=dict(
- type='BertEncoder',
- max_position_embeddings=512,
- init_cfg=dict(
- type='Pretrained',
- checkpoint='https://download.openmmlab.com/mmocr/ner/'
- 'bert_softmax/bert_pretrain.pth')),
- decoder=dict(type='FCDecoder'),
- loss=dict(type='MaskedCrossEntropyLoss'),
- label_convertor=ner_convertor)
-
-test_cfg = None
diff --git a/spaces/tomofi/MMOCR/docs/zh_cn/datasets/kie.md b/spaces/tomofi/MMOCR/docs/zh_cn/datasets/kie.md
deleted file mode 100644
index 6d189bc7daffde42e6815f8f10725c6065f89240..0000000000000000000000000000000000000000
--- a/spaces/tomofi/MMOCR/docs/zh_cn/datasets/kie.md
+++ /dev/null
@@ -1,34 +0,0 @@
-# Key Information Extraction
-
-## Overview
-
-Datasets for the key information extraction task should be organized with the following directory layout:
-
-```text
-└── wildreceipt
-    ├── class_list.txt
-    ├── dict.txt
-    ├── image_files
-    ├── test.txt
-    └── train.txt
-```
-
-## Preparation Steps
-
-### WildReceipt
-
-- Download and extract [wildreceipt.tar](https://download.openmmlab.com/mmocr/data/wildreceipt.tar)
-
-### WildReceiptOpenset
-
-- Prepare [WildReceipt](#WildReceipt) first.
-- Convert WildReceipt to the OpenSet format:
-```bash
-# Run the following command to see more available arguments:
-# python tools/data/kie/closeset_to_openset.py -h
-python tools/data/kie/closeset_to_openset.py data/wildreceipt/train.txt data/wildreceipt/openset_train.txt
-python tools/data/kie/closeset_to_openset.py data/wildreceipt/test.txt data/wildreceipt/openset_test.txt
-```
-:::{note}
-[This tutorial](../tutorials/kie_closeset_openset.md) explains the differences between the CloseSet and OpenSet data formats in more detail.
-:::
diff --git a/spaces/tomofi/MMOCR/tools/data/textdet/ctw1500_converter.py b/spaces/tomofi/MMOCR/tools/data/textdet/ctw1500_converter.py
deleted file mode 100644
index 40dfbc1db6ee04d8599d25cd01a43ee07361def6..0000000000000000000000000000000000000000
--- a/spaces/tomofi/MMOCR/tools/data/textdet/ctw1500_converter.py
+++ /dev/null
@@ -1,231 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import argparse
-import glob
-import os.path as osp
-import xml.etree.ElementTree as ET
-from functools import partial
-
-import mmcv
-import numpy as np
-from shapely.geometry import Polygon
-
-from mmocr.utils import convert_annotations, list_from_file
-
-
-def collect_files(img_dir, gt_dir, split):
- """Collect all images and their corresponding groundtruth files.
-
- Args:
- img_dir(str): The image directory
- gt_dir(str): The groundtruth directory
- split(str): The split of dataset. Namely: training or test
-
- Returns:
- files(list): The list of tuples (img_file, groundtruth_file)
- """
- assert isinstance(img_dir, str)
- assert img_dir
- assert isinstance(gt_dir, str)
- assert gt_dir
-
- # note that we handle png and jpg only. Pls convert others such as gif to
- # jpg or png offline
- suffixes = ['.png', '.PNG', '.jpg', '.JPG', '.jpeg', '.JPEG']
-
- imgs_list = []
- for suffix in suffixes:
- imgs_list.extend(glob.glob(osp.join(img_dir, '*' + suffix)))
-
- files = []
- if split == 'training':
- for img_file in imgs_list:
- gt_file = gt_dir + '/' + osp.splitext(
- osp.basename(img_file))[0] + '.xml'
- files.append((img_file, gt_file))
- assert len(files), f'No images found in {img_dir}'
- print(f'Loaded {len(files)} images from {img_dir}')
- elif split == 'test':
- for img_file in imgs_list:
- gt_file = gt_dir + '/000' + osp.splitext(
- osp.basename(img_file))[0] + '.txt'
- files.append((img_file, gt_file))
- assert len(files), f'No images found in {img_dir}'
- print(f'Loaded {len(files)} images from {img_dir}')
-
- return files
-
-
-def collect_annotations(files, split, nproc=1):
- """Collect the annotation information.
-
- Args:
- files(list): The list of tuples (image_file, groundtruth_file)
- split(str): The split of dataset. Namely: training or test
- nproc(int): The number of process to collect annotations
-
- Returns:
- images(list): The list of image information dicts
- """
- assert isinstance(files, list)
- assert isinstance(split, str)
- assert isinstance(nproc, int)
-
- load_img_info_with_split = partial(load_img_info, split=split)
- if nproc > 1:
- images = mmcv.track_parallel_progress(
- load_img_info_with_split, files, nproc=nproc)
- else:
- images = mmcv.track_progress(load_img_info_with_split, files)
-
- return images
-
-
-def load_txt_info(gt_file, img_info):
- anno_info = []
- for line in list_from_file(gt_file):
-        # each line has one polygon (n vertices) and one text.
- # e.g., 695,885,866,888,867,1146,696,1143,####Latin 9
- line = line.strip()
- strs = line.split(',')
- category_id = 1
- assert strs[28][0] == '#'
- xy = [int(x) for x in strs[0:28]]
- assert len(xy) == 28
- coordinates = np.array(xy).reshape(-1, 2)
- polygon = Polygon(coordinates)
- iscrowd = 0
- area = polygon.area
- # convert to COCO style XYWH format
- min_x, min_y, max_x, max_y = polygon.bounds
- bbox = [min_x, min_y, max_x - min_x, max_y - min_y]
- text = strs[28][4:]
-
- anno = dict(
- iscrowd=iscrowd,
- category_id=category_id,
- bbox=bbox,
- area=area,
- text=text,
- segmentation=[xy])
- anno_info.append(anno)
- img_info.update(anno_info=anno_info)
- return img_info
-
-
-def load_xml_info(gt_file, img_info):
-
- obj = ET.parse(gt_file)
- anno_info = []
- for image in obj.getroot(): # image
-        for box in image:  # box
- h = box.attrib['height']
- w = box.attrib['width']
- x = box.attrib['left']
- y = box.attrib['top']
- text = box[0].text
- segs = box[1].text
- pts = segs.strip().split(',')
- pts = [int(x) for x in pts]
- assert len(pts) == 28
- # pts = []
- # for iter in range(2,len(box)):
- # pts.extend([int(box[iter].attrib['x']),
- # int(box[iter].attrib['y'])])
- iscrowd = 0
- category_id = 1
- bbox = [int(x), int(y), int(w), int(h)]
-
- coordinates = np.array(pts).reshape(-1, 2)
- polygon = Polygon(coordinates)
- area = polygon.area
- anno = dict(
- iscrowd=iscrowd,
- category_id=category_id,
- bbox=bbox,
- area=area,
- text=text,
- segmentation=[pts])
- anno_info.append(anno)
-
- img_info.update(anno_info=anno_info)
-
- return img_info
-
-
-def load_img_info(files, split):
- """Load the information of one image.
-
- Args:
- files(tuple): The tuple of (img_file, groundtruth_file)
- split(str): The split of dataset: training or test
-
- Returns:
- img_info(dict): The dict of the img and annotation information
- """
- assert isinstance(files, tuple)
- assert isinstance(split, str)
-
- img_file, gt_file = files
- # read imgs with ignoring orientations
- img = mmcv.imread(img_file, 'unchanged')
-
- split_name = osp.basename(osp.dirname(img_file))
- img_info = dict(
- # remove img_prefix for filename
- file_name=osp.join(split_name, osp.basename(img_file)),
- height=img.shape[0],
- width=img.shape[1],
- # anno_info=anno_info,
- segm_file=osp.join(split_name, osp.basename(gt_file)))
-
- if split == 'training':
- img_info = load_xml_info(gt_file, img_info)
- elif split == 'test':
- img_info = load_txt_info(gt_file, img_info)
- else:
- raise NotImplementedError
-
- return img_info
-
-
-def parse_args():
- parser = argparse.ArgumentParser(
- description='Convert ctw1500 annotations to COCO format')
- parser.add_argument('root_path', help='ctw1500 root path')
- parser.add_argument('-o', '--out-dir', help='output path')
- parser.add_argument(
- '--split-list',
- nargs='+',
- help='a list of splits. e.g., "--split-list training test"')
-
- parser.add_argument(
- '--nproc', default=1, type=int, help='number of process')
- args = parser.parse_args()
- return args
-
-
-def main():
- args = parse_args()
- root_path = args.root_path
- out_dir = args.out_dir if args.out_dir else root_path
- mmcv.mkdir_or_exist(out_dir)
-
- img_dir = osp.join(root_path, 'imgs')
- gt_dir = osp.join(root_path, 'annotations')
-
- set_name = {}
- for split in args.split_list:
- set_name.update({split: 'instances_' + split + '.json'})
- assert osp.exists(osp.join(img_dir, split))
-
- for split, json_name in set_name.items():
- print(f'Converting {split} into {json_name}')
- with mmcv.Timer(print_tmpl='It takes {}s to convert icdar annotation'):
- files = collect_files(
- osp.join(img_dir, split), osp.join(gt_dir, split), split)
- image_infos = collect_annotations(files, split, nproc=args.nproc)
- convert_annotations(image_infos, osp.join(out_dir, json_name))
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/tomofi/NDLOCR/cli/procs/line_ocr.py b/spaces/tomofi/NDLOCR/cli/procs/line_ocr.py
deleted file mode 100644
index 1797e68a516915698769606a774316dcf5436b3c..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/cli/procs/line_ocr.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# Copyright (c) 2022, National Diet Library, Japan
-#
-# This software is released under the CC BY 4.0.
-# https://creativecommons.org/licenses/by/4.0/
-
-
-import copy
-import numpy
-import subprocess
-import xml.etree.ElementTree as ET
-
-from .base_proc import BaseInferenceProcess
-
-
-class LineOcrProcess(BaseInferenceProcess):
- """
-    Class for the process that runs text-line recognition (OCR) inference.
-    Inherits from BaseInferenceProcess.
- """
- def __init__(self, cfg, pid):
- """
- Parameters
- ----------
- cfg : dict
-            Configuration for this inference process.
- pid : int
-            Number indicating the order in which this process is executed.
- """
- super().__init__(cfg, pid, '_line_ocr')
- process1 = subprocess.Popen(['cat', self.cfg['line_ocr']['char_list']], stdout=subprocess.PIPE)
- process2 = subprocess.Popen(['tr', '-d', '\\n'], stdin=process1.stdout, stdout=subprocess.PIPE)
- self.character = '〓' + process2.stdout.read().decode()
-
- from src.text_recognition.text_recognition import InferencerWithCLI
- self._inferencer = InferencerWithCLI(self.cfg['line_ocr'], self.character)
- self._run_src_inference = self._inferencer.inference_wich_cli
-
- def _is_valid_input(self, input_data):
- """
-        Validates the input data for this class's inference process.
-
- Parameters
- ----------
- input_data : dict
-            Input data on which inference is performed.
-
- Returns
- -------
-        [unnamed] : bool
-            Returns True if the input data is valid, False otherwise.
- """
- if type(input_data['img']) is not numpy.ndarray:
- print('LineOcrProcess: input img is not numpy.ndarray')
- return False
- if type(input_data['xml']) is not ET.ElementTree:
- print('LineOcrProcess: input xml is not ElementTree')
- return False
- return True
-
- def _run_process(self, input_data):
- """
-        Main body of the inference process.
-
- Parameters
- ----------
- input_data : dict
-            Input data on which inference is performed.
-
- Returns
- -------
- result : dict
-            Dictionary holding the inference results.
-            It has essentially the same structure as input_data.
- """
- result = []
- print('### Line OCR Process ###')
- result_xml = self._run_src_inference(input_data['img'], input_data['xml'],
- accept_empty=self.cfg['line_ocr']['accept_empty'],
- yield_block_page_num=self.cfg['line_ocr']['yield_block_page_num'],
- yield_block_pillar=self.cfg['line_ocr']['yield_block_pillar'])
-
- output_data = copy.deepcopy(input_data)
- output_data['xml'] = result_xml
- result.append(output_data)
-
- return result
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py
deleted file mode 100644
index dedac3f46b4710d16a8bc66f00663e379b2ebdc7..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py
+++ /dev/null
@@ -1,50 +0,0 @@
-_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- neck=dict(
- type='FPN_CARAFE',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- num_outs=5,
- start_level=0,
- end_level=-1,
- norm_cfg=None,
- act_cfg=None,
- order=('conv', 'norm', 'act'),
- upsample_cfg=dict(
- type='carafe',
- up_kernel=5,
- up_group=1,
- encoder_kernel=3,
- encoder_dilation=1,
- compressed_channels=64)))
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=64),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=64),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_x101_32x8d_fpn_mstrain-poly_3x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_x101_32x8d_fpn_mstrain-poly_3x_coco.py
deleted file mode 100644
index 93b7d51912abaaab55ceac5263737d02cd4e99fa..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_x101_32x8d_fpn_mstrain-poly_3x_coco.py
+++ /dev/null
@@ -1,61 +0,0 @@
-_base_ = './mask_rcnn_r101_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://detectron2/resnext101_32x8d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=8,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=False),
- style='pytorch'))
-
-dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675],
- std=[57.375, 57.120, 58.395],
- to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='LoadAnnotations',
- with_bbox=True,
- with_mask=True,
- poly2mask=False),
- dict(
- type='Resize',
- img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
- (1333, 768), (1333, 800)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
-
-lr_config = dict(step=[28, 34])
-runner = dict(type='EpochBasedRunner', max_epochs=36)
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tools/model_converters/upgrade_model_version.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tools/model_converters/upgrade_model_version.py
deleted file mode 100644
index 232c8bc4cf010084b817c545ab4e2ef34fdd4549..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tools/model_converters/upgrade_model_version.py
+++ /dev/null
@@ -1,209 +0,0 @@
-import argparse
-import re
-import tempfile
-from collections import OrderedDict
-
-import torch
-from mmcv import Config
-
-
-def is_head(key):
- valid_head_list = [
- 'bbox_head', 'mask_head', 'semantic_head', 'grid_head', 'mask_iou_head'
- ]
-
- return any(key.startswith(h) for h in valid_head_list)
-
-
-def parse_config(config_strings):
- temp_file = tempfile.NamedTemporaryFile()
- config_path = f'{temp_file.name}.py'
- with open(config_path, 'w') as f:
- f.write(config_strings)
-
- config = Config.fromfile(config_path)
- is_two_stage = True
- is_ssd = False
- is_retina = False
- reg_cls_agnostic = False
- if 'rpn_head' not in config.model:
- is_two_stage = False
- # check whether it is SSD
- if config.model.bbox_head.type == 'SSDHead':
- is_ssd = True
- elif config.model.bbox_head.type == 'RetinaHead':
- is_retina = True
- elif isinstance(config.model['bbox_head'], list):
- reg_cls_agnostic = True
- elif 'reg_class_agnostic' in config.model.bbox_head:
- reg_cls_agnostic = config.model.bbox_head \
- .reg_class_agnostic
- temp_file.close()
- return is_two_stage, is_ssd, is_retina, reg_cls_agnostic
-
-
-def reorder_cls_channel(val, num_classes=81):
- # bias
- if val.dim() == 1:
- new_val = torch.cat((val[1:], val[:1]), dim=0)
- # weight
- else:
- out_channels, in_channels = val.shape[:2]
- # conv_cls for softmax output
- if out_channels != num_classes and out_channels % num_classes == 0:
- new_val = val.reshape(-1, num_classes, in_channels, *val.shape[2:])
- new_val = torch.cat((new_val[:, 1:], new_val[:, :1]), dim=1)
- new_val = new_val.reshape(val.size())
- # fc_cls
- elif out_channels == num_classes:
- new_val = torch.cat((val[1:], val[:1]), dim=0)
- # agnostic | retina_cls | rpn_cls
- else:
- new_val = val
-
- return new_val
-
-
-def truncate_cls_channel(val, num_classes=81):
-
- # bias
- if val.dim() == 1:
- if val.size(0) % num_classes == 0:
- new_val = val[:num_classes - 1]
- else:
- new_val = val
- # weight
- else:
- out_channels, in_channels = val.shape[:2]
- # conv_logits
- if out_channels % num_classes == 0:
- new_val = val.reshape(num_classes, in_channels, *val.shape[2:])[1:]
- new_val = new_val.reshape(-1, *val.shape[1:])
- # agnostic
- else:
- new_val = val
-
- return new_val
-
-
-def truncate_reg_channel(val, num_classes=81):
- # bias
- if val.dim() == 1:
- # fc_reg | rpn_reg
- if val.size(0) % num_classes == 0:
- new_val = val.reshape(num_classes, -1)[:num_classes - 1]
- new_val = new_val.reshape(-1)
- # agnostic
- else:
- new_val = val
- # weight
- else:
- out_channels, in_channels = val.shape[:2]
- # fc_reg | rpn_reg
- if out_channels % num_classes == 0:
- new_val = val.reshape(num_classes, -1, in_channels,
- *val.shape[2:])[1:]
- new_val = new_val.reshape(-1, *val.shape[1:])
- # agnostic
- else:
- new_val = val
-
- return new_val
-
-
-def convert(in_file, out_file, num_classes):
- """Convert keys in checkpoints.
-
- There can be some breaking changes during the development of mmdetection,
- and this tool is used for upgrading checkpoints trained with old versions
- to the latest one.
- """
- checkpoint = torch.load(in_file)
- in_state_dict = checkpoint.pop('state_dict')
- out_state_dict = OrderedDict()
- meta_info = checkpoint['meta']
- is_two_stage, is_ssd, is_retina, reg_cls_agnostic = parse_config(
- '#' + meta_info['config'])
- if meta_info['mmdet_version'] <= '0.5.3' and is_retina:
- upgrade_retina = True
- else:
- upgrade_retina = False
-
- # MMDetection v2.5.0 unifies the class order in RPN
-    # so models trained with a version earlier than 2.5.0 need the upgrade
- if meta_info['mmdet_version'] < '2.5.0':
- upgrade_rpn = True
- else:
- upgrade_rpn = False
-
- for key, val in in_state_dict.items():
- new_key = key
- new_val = val
- if is_two_stage and is_head(key):
- new_key = 'roi_head.{}'.format(key)
-
- # classification
- if upgrade_rpn:
- m = re.search(
- r'(conv_cls|retina_cls|rpn_cls|fc_cls|fcos_cls|'
- r'fovea_cls).(weight|bias)', new_key)
- else:
- m = re.search(
- r'(conv_cls|retina_cls|fc_cls|fcos_cls|'
- r'fovea_cls).(weight|bias)', new_key)
- if m is not None:
- print(f'reorder cls channels of {new_key}')
- new_val = reorder_cls_channel(val, num_classes)
-
- # regression
- if upgrade_rpn:
- m = re.search(r'(fc_reg).(weight|bias)', new_key)
- else:
- m = re.search(r'(fc_reg|rpn_reg).(weight|bias)', new_key)
- if m is not None and not reg_cls_agnostic:
- print(f'truncate regression channels of {new_key}')
- new_val = truncate_reg_channel(val, num_classes)
-
- # mask head
- m = re.search(r'(conv_logits).(weight|bias)', new_key)
- if m is not None:
- print(f'truncate mask prediction channels of {new_key}')
- new_val = truncate_cls_channel(val, num_classes)
-
- m = re.search(r'(cls_convs|reg_convs).\d.(weight|bias)', key)
- # Legacy issues in RetinaNet since V1.x
- # Use ConvModule instead of nn.Conv2d in RetinaNet
- # cls_convs.0.weight -> cls_convs.0.conv.weight
- if m is not None and upgrade_retina:
- param = m.groups()[1]
- new_key = key.replace(param, f'conv.{param}')
- out_state_dict[new_key] = val
- print(f'rename the name of {key} to {new_key}')
- continue
-
- m = re.search(r'(cls_convs).\d.(weight|bias)', key)
- if m is not None and is_ssd:
- print(f'reorder cls channels of {new_key}')
- new_val = reorder_cls_channel(val, num_classes)
-
- out_state_dict[new_key] = new_val
- checkpoint['state_dict'] = out_state_dict
- torch.save(checkpoint, out_file)
-
-
-def main():
- parser = argparse.ArgumentParser(description='Upgrade model version')
- parser.add_argument('in_file', help='input checkpoint file')
- parser.add_argument('out_file', help='output checkpoint file')
- parser.add_argument(
- '--num-classes',
- type=int,
- default=81,
- help='number of classes of the original model')
- args = parser.parse_args()
- convert(args.in_file, args.out_file, args.num_classes)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Alcatech BPM Studio Pro 4.91 Serial Key Keygen.md b/spaces/usbethFlerru/sovits-modelsV2/example/Alcatech BPM Studio Pro 4.91 Serial Key Keygen.md
deleted file mode 100644
index 4bab00c40215efd1e7ebe736a7d4bea869534d32..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Alcatech BPM Studio Pro 4.91 Serial Key Keygen.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
How to Download Alcatech BPM Studio Pro 4.91 Serial Key Keygen for Free
-
Alcatech BPM Studio Pro 4.91 is professional software for mixing and editing audio files. It allows you to create your own music tracks, remixes, podcasts, radio shows, and more. With Alcatech BPM Studio Pro 4.91, you can also manage your music library, record your voice, apply effects, and burn CDs or DVDs.
However, Alcatech BPM Studio Pro 4.91 is not a free software. You need to purchase a license key to activate it and enjoy its full features. But what if you don't want to spend money on it? Is there a way to get Alcatech BPM Studio Pro 4.91 serial key keygen for free?
-
The answer is yes. In this article, we will show you how to download Alcatech BPM Studio Pro 4.91 serial key keygen for free from the internet. We will also provide you with some tips and warnings to avoid scams and viruses.
-
What is Alcatech BPM Studio Pro 4.91 Serial Key Keygen?
-
A serial key is a unique code that identifies a software product and verifies its authenticity. A keygen is a program that generates serial keys for various software products. A serial key keygen is a combination of both: a program that generates serial keys for a specific software product.
-
Alcatech BPM Studio Pro 4.91 serial key keygen is a program that generates serial keys for Alcatech BPM Studio Pro 4.91 software. By using this program, you can get a valid serial key for Alcatech BPM Studio Pro 4.91 without paying anything.
-
How to Download Alcatech BPM Studio Pro 4.91 Serial Key Keygen for Free?
-
There are many websites that claim to offer Alcatech BPM Studio Pro 4.91 serial key keygen for free download. However, not all of them are trustworthy or safe. Some of them may contain malware, spyware, adware, or other harmful programs that can damage your computer or steal your personal information.
-
To avoid such risks, you need to be careful and selective when choosing a website to download Alcatech BPM Studio Pro 4.91 serial key keygen for free. Here are some tips and warnings to help you:
-
-
-
Do not download Alcatech BPM Studio Pro 4.91 serial key keygen from unknown or suspicious websites. They may contain viruses or other malicious programs that can harm your computer or compromise your security.
-
Do not download Alcatech BPM Studio Pro 4.91 serial key keygen from websites that require you to complete surveys, offers, or tasks before downloading. They may be scams that try to trick you into giving away your personal information or money.
-
Do not download Alcatech BPM Studio Pro 4.91 serial key keygen from websites that ask you to enter your email address or phone number before downloading. They may spam you with unwanted messages or calls.
-
Do not download Alcatech BPM Studio Pro 4.91 serial key keygen from websites that have too many pop-ups, ads, or redirects. They may be annoying or misleading.
-
Do not download Alcatech BPM Studio Pro 4.91 serial key keygen from websites that have poor ratings, reviews, or feedback from other users. They may be unreliable or fraudulent.
-
Do not download Alcatech BPM Studio Pro 4.91 serial key keygen from websites that do not provide any information about the program, such as its size, version, source, or compatibility. They may be fake or outdated.
-
-
Instead, you should download Alcatech BPM Studio Pro 4.91 serial key keygen from reputable and trusted websites that have the following features:
-
-
They provide clear and accurate information about the program, such as its size, version, source, and compatibility.
-
They have good ratings, reviews, and feedback from other users who have downloaded the program successfully.
-
They have secure and fast download links.
-
-
\ No newline at end of file
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Antares Mic Mod EFX (MAC PC) -CORE.md b/spaces/usbethFlerru/sovits-modelsV2/example/Antares Mic Mod EFX (MAC PC) -CORE.md
deleted file mode 100644
index a2b56e93cd15ab98f7122434150f241d3fb38944..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Antares Mic Mod EFX (MAC PC) -CORE.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
Antares Mic Mod EFX (MAC PC) -CORE: how to transform the sound of your microphone with a plugin
-
-
Would you like to have a collection of more than 100 legendary microphones to use in your recordings, mixes or live performances? Would you like to be able to change the sound of your current microphone for another more expensive or exclusive one? Would you like to be able to control the specific options of each microphone, such as the low cut filter, the pickup pattern or the tube saturation? If the answer is yes, then you are interested in knowing Antares Mic Mod EFX (MAC PC) -CORE, a plugin that allows you to do all that and more with just a few clicks.
Antares Mic Mod EFX (MAC PC) -CORE is a plugin that allows you to model the sound of your microphone with that of another different one. It is a microphone modeling tool that uses Antares' patented Spectral Shaping Tool technology to reproduce the sonic characteristics of each microphone. With this plugin, you can expand your microphone collection with models of vintage microphones from Neumann, AKG and others, as well as a wide selection of modern and boutique microphones.
-
-
Antares Mic Mod EFX (MAC PC) -CORE is very easy to use. You just have to select the microphone you are using (or that you used during your original recording) and the microphone you want it to sound like. The plugin takes care of making the sound transformation and allows you to adjust the specific options of each microphone. For example, you can turn on or off the low cut filter, change the pickup pattern or add tube saturation. Each option has the same sonic effect that it would have with the real microphone.
-
-
Antares Mic Mod EFX (MAC PC) -CORE is a plugin that you can use both in studio and live, to get the sound of the microphones you always wanted to have. It is also a very useful tool for broadcast and podcasting applications. Antares Mic Mod EFX (MAC PC) -CORE is available as a plugin for RTAS (Mac and PC), VST (Mac and PC) and Audio Units.
-
-
What are the advantages of using Antares Mic Mod EFX (MAC PC) -CORE?
-
-
Using Antares Mic Mod EFX (MAC PC) -CORE has several advantages, among which are:
-
-
-
You don't have to spend money on buying expensive or hard-to-get microphones, as the plugin offers you a great variety of precise digital models.
-
You can improve the sound of your recordings or mixes, using the most suitable microphone model for each source or style.
-
You can experiment with different sounds and options, creating unique and interesting combinations.
-
You can use the plugin live, bringing the sound of your favorite microphones to the stage without risking damaging or losing them.
-
-
-
Using Antares Mic Mod EFX (MAC PC) -CORE is a very practical and creative way to make the most of your microphones and achieve professional results.
-
-
What models of microphones does Antares Mic Mod EFX (MAC PC) -CORE include?
-
-
Antares Mic Mod EFX (MAC PC) -CORE includes precise digital models of more than 100 legendary microphones. These are some examples:
-
-
-
-
Akg C12A
-
Akg C414
-
Akg C414B/ULS Limited Edition Gold
-
Akg C414B/ULS Modified by Audio Upgrades
-
Akg The Tube
-
Audix D4
-
B&K 4007
-
Beyerdynamic M500
-
Groove Tubes MD1b-FET
-
Groove Tubes VELO-8
-
Mojave Audio MA-200
-
Neumann KM84
-
Neumann U47
-
Neumann U67
-
Neumann U87
-
Rode Classic II
-
Rode NT1-A
-
Rode NT2-A
-
Rode NT1000
-
Royer R-121
-
Sennheiser MD421-II
-
Sennheiser MKH40
-
Sony C-800G
-
Townsend Labs Sphere L22 Precision Microphone Modeling System
-
-
-
These are just some examples, you can check the complete list on Antares' official website: https://www.antarestech.com/product/mic-mod-efx/.
-
-
How to use Antares Mic Mod EFX (MAC PC) -CORE?
-
-
Using Antares Mic Mod EFX (MAC PC) -CORE is very simple and intuitive. You just have to follow these steps:
-
-
-
Install the plugin on your computer and activate it with your license code.
-
Open your DAW and insert the plugin on the track where you have your microphone signal or your recorded audio.
-
Select the source microphone from the list of available models. If your microphone is not on the list, you can select a similar one or use the generic model.
-
Select the modeled microphone from the list of available models. You can browse by categories or use the search function.
-
Adjust the proximity effect, the low cut filter, the pickup pattern and the tube saturation according to your preferences and needs.
-
Compare the original and modeled sound by using the bypass button or the output level control.
-
-
-
You can also use presets to quickly access different combinations of source and modeled microphones. You can save your own presets or use the ones included with the plugin.
-
-
Who can benefit from Antares Mic Mod EFX (MAC PC) -CORE?
-
-
Antares Mic Mod EFX (MAC PC) -CORE is a plugin that can benefit anyone who works with microphones, whether in studio or live situations. Some examples are:
-
-
-
Singers and vocalists who want to get the sound of their favorite microphones without having to buy them or rent them.
-
Musicians and producers who want to enhance their recordings or mixes with different microphone sounds and options.
-
Engineers and sound technicians who want to have a versatile and flexible tool for microphone modeling and processing.
-
Podcasters and broadcasters who want to improve the quality and clarity of their voice with different microphone models.
-
Live performers who want to bring the sound of their studio microphones to the stage without risking damaging or losing them.
-
-
-
Antares Mic Mod EFX (MAC PC) -CORE is a plugin that can help you achieve professional results with any microphone you have or use.
-
How to optimize the article for SEO?
-
-
SEO stands for Search Engine Optimization, which is the process of improving the visibility and ranking of a website or a web page in the search engines. SEO is important for attracting more traffic and potential customers to your website or blog. There are many factors that affect SEO, such as keywords, content, links, structure, speed, and more.
-
-
One of the most important factors for SEO is the content of your article. You want to write content that is relevant, engaging, informative, and original. You also want to use keywords that match the query of your target audience and that are related to your topic. Keywords are the words or phrases that people type in the search engines to find what they are looking for. You can use tools like Google Keyword Planner or Moz Keyword Explorer to find out what keywords are popular and relevant for your topic.
-
-
When you write your article, you want to use your keywords strategically and naturally throughout your content. You don't want to overuse or spam your keywords, as this can have a negative effect on your SEO and your readers. You want to use your keywords in the following places:
-
-
-
The title of your article: The title is the first thing that people see when they search for your topic. It should be catchy, clear, and include your main keyword.
-
The headers and subheaders of your article: The headers and subheaders help to organize your content and make it easier to read and scan. They should also include your keywords or variations of them.
-
The introduction and conclusion of your article: The introduction and conclusion are the parts that summarize your main points and capture the attention and interest of your readers. They should also include your keywords or synonyms of them.
-
The body of your article: The body is the main part of your article where you provide the information, facts, examples, and arguments that support your topic. You should use your keywords or related terms throughout your paragraphs, but not too often or too close together.
-
The meta description of your article: The meta description is a short summary of your article that appears below the title in the search results. It should be concise, compelling, and include your main keyword.
-
The URL of your article: The URL is the address of your web page that appears in the browser bar. It should be descriptive, easy to read, and include your main keyword.
-
-
-
By using keywords in these places, you can optimize your article for SEO and make it more likely to rank higher in the search engines.
-
-
What are some common mistakes to avoid when writing an article?
-
-
Writing an article is not an easy task. It requires research, planning, writing, editing, and proofreading. Along the way, you may encounter some common mistakes that can affect the quality and effectiveness of your article. Here are some of them and how to avoid them:
-
-
-
Not knowing your audience: Before you write your article, you should know who you are writing for and what they are looking for. You should tailor your tone, style, language, and content to suit their needs and preferences.
-
Not having a clear purpose: Before you write your article, you should know what you want to achieve with it and what message you want to convey. You should have a clear thesis statement that summarizes your main point and guides your writing.
-
Not doing enough research: Before you write your article, you should do enough research on your topic and gather reliable sources of information. You should cite your sources properly and avoid plagiarism.
-
Not having a clear structure: Before you write your article, you should have a clear outline that organizes your ideas and arguments into a logical flow. You should have an introduction, a body, and a conclusion that follow a coherent structure.
-
Not using transitions: When you write your article, you should use transitions to connect your sentences and paragraphs and create a smooth flow of ideas. Transitions are words or phrases that show the relationship between different parts of your text.
-
Not using headings: When you write your article, you should use headings to divide your content into sections and sub-sections that make it easier to read and scan. Headings also help to highlight the main points of each section.
-
Not proofreading and editing: After you write your article, you should proofread and edit it carefully to check for spelling, grammar, punctuation, style, clarity, accuracy, and consistency errors. You can use tools like Grammarly or Hemingway Editor to help you with this task.
-
-
-
By avoiding these common mistakes, you can improve the quality and effectiveness of your article.
-
Conclusion
-
-
Antares Mic Mod EFX (MAC PC) -CORE is a plugin that allows you to model the sound of your microphone with that of another different one. With this plugin, you can access a great variety of precise digital models of more than 100 legendary microphones. You just have to select the microphone you are using and the one you want it to sound like, and adjust the specific options of each one. The plugin takes care of making the sound transformation and offers you a professional result. Antares Mic Mod EFX (MAC PC) -CORE is a plugin that you can use both in studio and live, to get the sound you always wanted to have. If you are interested in this plugin, you can download it from Antares' official website: https://www.antarestech.com/product/mic-mod-efx/.
-
-
In this article, we have explained what Antares Mic Mod EFX (MAC PC) -CORE is, what are its advantages, what models of microphones it includes, how to use it, how to optimize it for SEO, and what are some common mistakes to avoid when writing an article. We hope that this article has been useful and informative for you and that you have learned something new about this amazing plugin. If you have any questions or comments, please feel free to leave them below. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Deep Freeze Standard Edition 7.71.020.4499 Final Full Version How to Freeze Your System Settings and Data.md b/spaces/usbethFlerru/sovits-modelsV2/example/Deep Freeze Standard Edition 7.71.020.4499 Final Full Version How to Freeze Your System Settings and Data.md
deleted file mode 100644
index ac38849003bfde4cafb6d661a20bc31c0f2eab2d..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Deep Freeze Standard Edition 7.71.020.4499 Final Full Version How to Freeze Your System Settings and Data.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Deep Freeze Standard Edition 7.71.020.4499 Final Full Version