Have you ever accidentally deleted some important files from your computer, USB drive, memory card, or other devices? Have you ever formatted your hard disk or partition and lost all your data? Have you ever encountered a virus attack, system crash, or power failure that caused data loss?
If you have faced any of these situations, you might be looking for reliable and effective data recovery software that can help you recover your lost files. One of the best data recovery programs on the market is Renee Undeleter Mega.
-
In this article, we will introduce you to Renee Undeleter Mega, its features and benefits, and how to get an activation code for it. We will also compare two options for getting an activation code: buying a license or using a crack. Finally, we will show you how to activate Renee Undeleter Mega with the activation code.
-
What is Renee Undeleter Mega?
-
Renee Undeleter Mega is a powerful and professional data recovery software that can help you recover deleted, formatted, or lost files from various devices and scenarios. It supports recovering data from PC, laptop, hard drive, external disk, USB drive, SD card, digital camera, mobile phone, and more. It can also recover data from different file systems, such as NTFS, FAT32, FAT16, exFAT, EXT3/4, and more.
-
Renee Undeleter Mega has four main modes of data recovery: Fast Partition Scan, Whole Partition Scan, Whole Disk Scan, and Image Creation. You can choose the most suitable mode according to your data loss situation and device type. It can recover various types of files, such as photos, videos, audio files, documents, emails, archives, and more. It also allows you to preview the files before recovering them.
-
Features of Renee Undeleter Mega
-
Some of the main features of Renee Undeleter Mega are:
-
-
It can recover files deleted from the Recycle Bin or removed permanently with the Shift + Delete keys.
-
It can recover formatted or corrupted data from hard disk or partition.
-
It can recover lost data due to virus attack, system crash, power failure, or other reasons.
-
It can recover data from various devices and file systems.
-
It can recover various types of files with high quality and speed.
-
It can preview the files before recovering them.
-
It can create an image of the disk or partition for backup or recovery purposes.
-
It can resume the recovery process from the last scan result.
-
It has a user-friendly interface and easy-to-follow steps.
-
-
Benefits of Renee Undeleter Mega
-
Some of the benefits of using Renee Undeleter Mega are:
-
-
It can help you recover your precious data that you thought was gone forever.
-
It can save you time and money by avoiding the need to hire a professional data recovery service.
-
It can protect your privacy and security by recovering your data without uploading it to any server or cloud.
-
It can improve your productivity and efficiency by restoring your important files and documents.
-
It can give you peace of mind by knowing that you have a reliable backup solution in case of any data loss emergency.
-
-
How to Get Activation Code For Renee Undeleter Mega?
-
To use Renee Undeleter Mega fully and without any limitations, you need to get an activation code for it. An activation code is a serial number that can unlock all the features and functions of the software. There are two main options for getting an activation code for Renee Undeleter Mega: buying a license or using a crack. Let's compare these two options in detail.
-
Option 1: Buy Renee Undeleter Mega License
-
The first and recommended option for getting an activation code for Renee Undeleter Mega is to buy a license from the official website of the software. A license is a legal and authorized way of using the software. By buying a license, you will get a genuine activation code that can activate the software permanently. You will also get free updates and technical support from the developer.
-
-
Steps to Buy Renee Undeleter Mega License
-
To buy a license for Renee Undeleter Mega, you need to follow these steps:
Click on the "Buy Now" button on the homepage or on the product page.
-
Select the license type that suits your needs. There are three types of licenses available: Personal License ($49.90), Family License ($79.90), and Business License ($199.90).
-
Enter your email address and payment information. You can pay by credit card, PayPal, or other methods.
-
After completing the payment process, you will receive an email with your activation code and download link for the software.
-
Download and install the software on your computer.
-
-
Advantages of Buying Renee Undeleter Mega License
-
Some of the advantages of buying a license for Renee Undeleter Mega are:
-
-
You will get a legal and valid activation code that can activate the software permanently.
-
You will get free updates and technical support from the developer.
-
You will get a 60-day money-back guarantee if you are not satisfied with the software.
-
You will support the development and innovation of the software.
-
-
Option 2: Use Renee Undeleter Mega Crack
-
The second option for getting an activation code for Renee Undeleter Mega is to use a crack. A crack is an illegal and unauthorized way of using the software. A crack is a modified version of the software that bypasses its security mechanism and generates a fake activation code that can activate the software temporarily. You can find many websites that offer cracks for various software online.
-
Steps to Use Renee Undeleter Mega Crack
-
To use a crack for Renee Undeleter Mega, you need to follow these steps:
-
-
Search for a website that provides cracks for Renee Undeleter Mega online. Be careful as some websites may contain viruses or malware that can harm your computer or steal your personal information.
-
Download the crack file from the website. Usually, it will be in a compressed format such as ZIP or RAR.
-
Extract the crack file to a folder on your computer. You may need a password to extract it.
-
Run the crack file as administrator. It may ask you to disable your antivirus or firewall before running it.
-
Select the installation path of Renee Undeleter Mega on your computer. The crack file will automatically patch the original file and generate a fake activation code for it.
-
Copy and paste the fake activation code into the software when prompted.
-
-
Risks of Using Renee Undeleter Mega Crack
-
Some of the risks of using a crack for Renee Undeleter Mega are:
-
-
You may violate the intellectual property rights and terms of service of the software developer by using a crack.
-
You may expose your computer and personal information to viruses or malware that may be hidden in the crack file or website.
-
You may experience errors or bugs in the software as it may not be compatible with your system or updated version of the software.
You may not receive any updates or technical support from the developer.
-
You may lose your data or damage your device if the crack file or software malfunctions or crashes.
-
-
How to Activate Renee Undeleter Mega?
-
After getting an activation code for Renee Undeleter Mega, either by buying a license or using a crack, you need to activate the software with the activation code. The activation process is similar for both options, except for the source of the activation code. Here are the steps to activate Renee Undeleter Mega with the activation code:
-
Steps to Activate Renee Undeleter Mega with License Code
-
If you have bought a license for Renee Undeleter Mega, you can activate the software with the license code that you received by email. To do so, follow these steps:
-
-
Launch Renee Undeleter Mega on your computer.
-
Click on the "Register" button on the top right corner of the interface.
-
Enter your email address and license code in the corresponding fields.
-
Click on the "Activate" button to activate the software.
-
You will see a message that says "Activation Successful". You can now use all the features and functions of Renee Undeleter Mega without any limitations.
-
-
Steps to Activate Renee Undeleter Mega with Crack Code
-
If you have used a crack for Renee Undeleter Mega, you can activate the software with the crack code that you generated by running the crack file. To do so, follow these steps:
-
-
Launch Renee Undeleter Mega on your computer.
-
Click on the "Register" button on the top right corner of the interface.
-
Enter any email address and crack code in the corresponding fields.
-
Click on the "Activate" button to activate the software.
-
You will see a message that says "Activation Successful". You can now use some of the features and functions of Renee Undeleter Mega temporarily.
-
-
Conclusion
-
In this article, we have introduced you to Renee Undeleter Mega, a powerful and professional data recovery software that can help you recover deleted, formatted, or lost files from various devices and scenarios. We have also explained how to get an activation code for Renee Undeleter Mega, either by buying a license or using a crack. We have also shown you how to activate Renee Undeleter Mega with the activation code.
-
We hope this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions about Renee Undeleter Mega and its activation code:
-
-
Q: How long does it take to recover data with Renee Undeleter Mega?
-
A: The time it takes to recover data with Renee Undeleter Mega depends on several factors, such as the size and type of data, the mode of recovery, and the condition of the device. Generally, it can take from a few minutes to several hours to recover data with Renee Undeleter Mega.
-
Q: How much data can I recover with Renee Undeleter Mega?
-
A: The amount of data you can recover with Renee Undeleter Mega depends on several factors, such as the available disk space, the degree of data loss, and the quality of data recovery. Generally, you can recover up to 2GB of data with Renee Undeleter Mega for free. If you want to recover more data, you need to buy a license and activate the software.
-
Q: Is it safe to use Renee Undeleter Mega?
-
A: Yes, it is safe to use Renee Undeleter Mega if you download it from its official website and buy a license from its official website. Renee Undeleter Mega is a reputable and trustworthy data recovery software that does not contain any viruses or malware. It also does not upload your data to any server or cloud. However, if you use a crack for Renee Undeleter Mega, it may not be safe as it may contain viruses or malware that can harm your computer or steal your personal information.
-
Q: Is it legal to use a crack for Renee Undeleter Mega?
A: No, it is not legal to use a crack for Renee Undeleter Mega, as it violates the intellectual property rights and terms of service of the software developer. By using a crack, you are also exposing yourself to various risks, such as viruses, malware, errors, bugs, data loss, and device damage. Therefore, we do not recommend using a crack for Renee Undeleter Mega and advise you to buy a license from its official website instead.
-
Q: How can I contact the developer of Renee Undeleter Mega?
-
A: If you have any questions or issues regarding Renee Undeleter Mega, you can contact the developer by email at support@reneelab.com or by phone at +1-800-999-2734. You can also visit their website at https://www.reneelab.com/ for more information and resources.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Amityville La Maison Du Diable French Torrent 57 The Complete History And Timeline Of The Events.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Amityville La Maison Du Diable French Torrent 57 The Complete History And Timeline Of The Events.md
deleted file mode 100644
index f4d251af0c97cead9eb85241440aae0262a660e9..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Amityville La Maison Du Diable French Torrent 57 The Complete History And Timeline Of The Events.md
+++ /dev/null
@@ -1,88 +0,0 @@
-
-
Amityville La Maison Du Diable French Torrent 57
-
If you are a fan of horror movies and you want to watch one of the most famous and controversial ones in history, you might be interested in Amityville La Maison Du Diable French Torrent 57. This is a torrent that allows you to download and watch the original 1979 movie Amityville: The Horror in French, with subtitles and bonus features. In this article, we will tell you everything you need to know about this torrent, including what it is, why it is popular, how to download it safely and legally, and what benefits you can get from watching it. So, if you are ready to experience the terror of Amityville, read on!
-
Introduction
-
What is Amityville La Maison Du Diable?
-
Amityville La Maison Du Diable is the French title of Amityville: The Horror, a 1979 American horror film directed by Stuart Rosenberg and starring James Brolin, Margot Kidder, and Rod Steiger. The film is based on a book of the same name by Jay Anson, which claims to be a true story of a haunted house in Amityville, New York. The film follows the Lutz family, who move into a house where a mass murder took place a year before. They soon discover that the house is possessed by a demonic force that torments them with paranormal phenomena and threatens their lives.
-
Why is it a popular French torrent?
-
Amityville La Maison Du Diable is a popular French torrent because it is one of the most classic and influential horror movies of all time. It was a huge box office success when it was released, grossing over $86 million in North America alone. It also spawned a franchise of sequels, prequels, remakes, and spin-offs, making it one of the longest-running horror series in history. The film has also been praised by critics and fans for its atmospheric cinematography, suspenseful music, and convincing performances. Moreover, the film has a strong cultural and historical relevance, as it deals with themes such as family, religion, violence, and the supernatural.
-
How to download it safely and legally?
-
To download Amityville La Maison Du Diable French Torrent 57 safely and legally, you need to follow some steps. First, you need to have a VPN (virtual private network) service that can protect your online privacy and security. A VPN can encrypt your data and hide your IP address, making it harder for hackers or authorities to track your online activity. Second, you need to have a torrent client software that can download and manage torrent files. A torrent client is a program that connects you to other users who have the same file you want to download. Third, you need to find a reliable torrent site that has the torrent file you are looking for. A torrent site is a website that hosts torrent files and allows users to search for them. You should look for a site that has good reviews, ratings, comments, and feedback from other users. Fourth, you need to download the torrent file from the site and open it with your torrent client. Then, you need to wait for the download to finish and enjoy your movie.
-
The Story Behind Amityville La Maison Du Diable
-
The real-life haunted house
-
The story behind Amityville La Maison Du Diable is based on a real-life haunted house in Amityville, New York. The house is located at 112 Ocean Avenue and was built in 1927. On November 13th, 1974, Ronald DeFeo Jr., a 23-year-old man who lived in the house with his family, shot and killed his parents and four siblings while they were sleeping. He claimed that he heard voices in his head that told him to do it. He was convicted of six counts of second-degree murder and sentenced to life imprisonment.
-
The book and the movie adaptations
-
In 1977, Jay Anson published a book titled The Amityville Horror: A True Story, which claimed to be based on the experiences of George and Kathy Lutz, who bought the house in December 1975 and moved in with their three children and their dog. According to the book, the Lutzes experienced various paranormal phenomena in the house, such as strange noises, foul odors, cold spots, swarms of flies, moving objects, levitating furniture, glowing eyes, demonic voices, and visions of blood. The book also claimed that the house was built on an ancient Indian burial ground and that a priest who tried to bless the house was warned by a voice to "get out". The book became a bestseller and inspired several movie adaptations, including the original 1979 film, a 2005 remake starring Ryan Reynolds and Melissa George, and several sequels and spin-offs.
-
The controversies and lawsuits
-
The story behind Amityville La Maison Du Diable has also been surrounded by controversies and lawsuits. Many skeptics and investigators have questioned the veracity of the book and the movie adaptations, arguing that they are based on exaggerations, fabrications, or hoaxes. Some of the evidence that they have presented include inconsistencies in dates, times, and events; lack of physical proof or witnesses; contradictions between different versions of the story; and financial motives for creating a sensational story. Several lawsuits have also been filed by various parties involved in the story, such as Ronald DeFeo Jr., who sued the Lutzes and Anson for defamation; the Lutzes, who sued their lawyer and publisher for fraud; and other homeowners who lived in the house after the Lutzes, who sued the Lutzes and others for invasion of privacy.
-
The Features of Amityville La Maison Du Diable French Torrent 57
-
The quality and size of the torrent
-
One of the features of Amityville La Maison Du Diable French Torrent 57 is its quality and size. The torrent offers a high-definition version of the movie, with a resolution of 1080p. The movie has been digitally remastered to enhance its color, contrast, and clarity. The torrent also has a reasonable size, with a file size of about 1.5 GB. This means that you can download it quickly and save space on your device.
-
-
The subtitles and audio options
-
Another feature of Amityville La Maison Du Diable French Torrent 57 is its subtitles and audio options. The torrent provides subtitles in multiple languages, including English, Spanish, German, Italian, and Portuguese. You can choose which language you want to use by selecting it from your torrent client settings. The torrent also offers audio options in different languages, including French, English, Spanish, German, and Italian. You can choose which language you want to hear by selecting it from your media player settings.
-
The bonus materials and extras
-
A third feature of Amityville La Maison Du Diable French Torrent 57 is its bonus materials and extras. The torrent includes several additional items that can enrich your viewing experience. Some of these are:
- A commentary track by director Stuart Rosenberg
- A documentary titled "The Real Story Behind The Amityville Horror"
- A featurette titled "The Making Of The Amityville Horror"
- A collection of deleted scenes
- A gallery of photos
- A trailer
- A trivia game
You can access this content by selecting it from your media player menu.
-
The Benefits of Amityville La Maison Du Diable French Torrent 57
-
The thrill and horror of watching the movie
One of the benefits of Amityville La Maison Du Diable French Torrent 57 is the thrill and horror of watching the movie. The movie is a masterpiece of horror cinema that can keep you on the edge of your seat with its terrifying scenes and atmosphere. It can also make you feel the fear and anxiety of the Lutz family as they face the evil force that haunts their house, and it can challenge you to question the reality and truth of the story and wonder whether you would survive in such a situation.
-
The cultural and historical significance of the story
-
Another benefit of Amityville La Maison Du Diable French Torrent 57 is the cultural and historical significance of the story. The story is a part of American pop culture, that has influenced many other works of horror fiction and media. The story is also a reflection of the social and political context of the 1970s, when America was facing a crisis of faith, a rise of violence, and a fascination with the occult. The story can also teach you about the history and folklore of Amityville, New York, and its connection to Native American culture, colonial history, and paranormal phenomena.
-
The opportunity to learn French and improve your skills
-
A third benefit of Amityville La Maison Du Diable French Torrent 57 is the opportunity to learn French and improve your skills. By watching the movie in French, you can expose yourself to the language and its pronunciation, vocabulary, grammar, and expressions. You can also practice your listening and reading comprehension skills by using the subtitles and audio options. You can also enhance your cultural awareness and appreciation by learning about the French perspective and interpretation of the story.
-
Conclusion
-
Summary of the main points
-
In conclusion, Amityville La Maison Du Diable French Torrent 57 is a torrent that allows you to download and watch the original 1979 movie Amityville: The Horror in French, with subtitles and bonus features. The torrent has several features that make it attractive, such as its quality and size, its subtitles and audio options, and its bonus materials and extras. The torrent also has several benefits that make it worthwhile, such as its thrill and horror, its cultural and historical significance, and its opportunity to learn French.
-
Call to action and recommendation
-
If you are interested in Amityville La Maison Du Diable French Torrent 57, we recommend that you download it today and enjoy one of the most classic and controversial horror movies of all time. You will not regret it! But be warned: you might have trouble sleeping afterwards!
-
FAQs
-
Here are some frequently asked questions about Amityville La Maison Du Diable French Torrent 57:
-
-
Is Amityville La Maison Du Diable a true story?
-
There is no definitive answer to this question, as different sources have different opinions and evidence. Some people believe that the story is true, based on the testimonies of the Lutzes, Anson, and others who claim to have witnessed or experienced paranormal phenomena in the house. Others believe that the story is false, based on the investigations and analyses of skeptics, journalists, researchers, and experts who claim to have found flaws, inconsistencies, or contradictions in the story. Ultimately, it is up to you to decide what you believe.
-
Where can I find Amityville La Maison Du Diable French Torrent 57?
-
You can find Amityville La Maison Du Diable French Torrent 57 on various torrent sites online. However, not all torrent sites are reliable or safe, so you should be careful when choosing one. You should look for a site that has good reviews, ratings, comments, and feedback from other users. You should also use a VPN service to protect your online privacy and security when downloading torrents.
-
What are some other movies related to Amityville La Maison Du Diable?
-
There are many other movies related to Amityville La Maison Du Diable, as it is part of a long-running franchise of sequels, prequels, remakes, and spin-offs. Some of these movies are: - Amityville II: The Possession (1982), a prequel that tells the story of the DeFeo family before their murder. - Amityville 3-D (1983), a sequel that follows a journalist who investigates the house after the Lutzes leave. - The Amityville Horror (2005), a remake of the original movie with a modern twist. - The Amityville Murders (2018), a prequel that focuses on Ronald DeFeo Jr. and his relationship with his family before he kills them.
-
What are some other ways to learn French through movies?
-
Besides watching Amityville La Maison Du Diable French Torrent 57, there are many other ways to learn French through movies. Some of these ways are: - Watch movies that are originally made in French, such as Les Intouchables (2011), Le Fabuleux Destin d'Amélie Poulain (2001), or La Haine (1995). - Watch movies that are set in France or have French characters or themes, such as Midnight in Paris (2011), Ratatouille (2007), or The Da Vinci Code (2006). - Watch movies that have French versions or dubbing available, such as Harry Potter (2001-2011), The Lord of the Rings (2001-2003), or The Lion King (1994). - Watch movies with friends who speak French or are learning French, and discuss them in French afterwards.
-
How can I overcome my fear of horror movies?
-
If you are afraid of horror movies, but you still want to watch them, there are some tips that can help you overcome your fear. Some of these tips are: - Watch horror movies with someone else who can comfort you or make you laugh. - Watch horror movies during the day or in a well-lit room. - Watch horror movies with low volume or with subtitles on. - Watch horror movies with a positive mindset or a sense of humor. - Watch horror movies that are not too realistic or graphic. - Watch horror movies that have a happy ending or a moral lesson.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Authorization Code for Dreamweaver CS3 Crack Where to Find It and How to Install It.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Authorization Code for Dreamweaver CS3 Crack Where to Find It and How to Install It.md
deleted file mode 100644
index ee26bcd54b90c813c9a06cb8890c67e19c4c5e86..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Authorization Code for Dreamweaver CS3 Crack Where to Find It and How to Install It.md
+++ /dev/null
@@ -1,140 +0,0 @@
-
-
Authorization Code for Dreamweaver CS3 Crack
-
If you are a web designer or developer, you might have heard of Dreamweaver CS3, one of the most popular and powerful web development tools in the market. But what if you don't have the money to buy it or you want to use it for free? In this article, we will show you how to get an authorization code for Dreamweaver CS3 crack, which will allow you to use this software without paying anything.
-
What is Dreamweaver CS3 and why do you need it?
-
Dreamweaver CS3 is a software application that lets you create, edit, and manage websites and web pages. It was released in 2007 by Adobe Systems as part of the Creative Suite 3 package. It has many features and benefits that make it a great tool for web design and development, such as:
It supports HTML, CSS, JavaScript, PHP, ASP.NET, XML, and other web technologies.
-
It has a visual interface that lets you design your web pages using drag-and-drop elements, templates, layouts, and widgets.
-
It has a code editor that lets you write and edit your web code with syntax highlighting, code completion, code hints, and error checking.
-
It has a live view mode that lets you preview your web pages in real-time as you make changes to them.
-
It has a split view mode that lets you see both the design and the code of your web pages at the same time.
-
It has a built-in FTP client that lets you upload and download your web files to and from your web server.
-
It has a site manager that lets you organize your web files and folders in a logical structure.
-
It has a testing server that lets you test your web pages locally before publishing them online.
-
It has a spry framework that lets you add dynamic and interactive features to your web pages using Ajax.
-
It has a CSS advisor that lets you check and fix any CSS issues in your web pages.
-
It has a browser compatibility check that lets you see how your web pages look and work in different browsers.
-
-
Dreamweaver CS3 system requirements and compatibility
-
To use Dreamweaver CS3, you need to have a computer that meets the following minimum system requirements:
-
-
| Component | Minimum requirement |
| --- | --- |
| Operating system | Windows XP SP2 or later, or Mac OS X 10.4.8 or later |
| Processor | Intel Pentium 4 or later, or PowerPC G5 or later |
| Memory | 512 MB of RAM or more |
| Disk space | 1 GB of available hard disk space or more |
| Display | 1024 x 768 resolution or higher |
| Internet connection | Required for activation and updates |
-
-
Dreamweaver CS3 is compatible with the following web browsers:
-
-
Internet Explorer 6 or later
-
Mozilla Firefox 2 or later
-
Safari 2 or later
-
Opera 9 or later
-
Netscape Navigator 9 or later
-
-
What is a crack and why do you need it?
-
A crack is a software program that modifies or bypasses the security features of another software program. In this case, we are talking about a crack for Dreamweaver CS3 that allows you to use it without entering a valid serial number or activation code. A serial number is a unique code that identifies your copy of Dreamweaver CS3 and proves that you have purchased it legally. An activation code is another code that verifies your serial number online and unlocks all the features of Dreamweaver CS3. Without these codes, you cannot use Dreamweaver CS3 fully or at all.
-
The difference between a crack and a serial number
-
A crack is different from a serial number in several ways:
-
-
A crack does not require any internet connection to work. A serial number requires an internet connection to activate Dreamweaver CS3 online.
-
A crack does not have any expiration date or limit on how many times you can use it. A serial number can expire after a certain period of time or after a certain number of activations.
-
A crack does not have any risk of being blacklisted or blocked by Adobe Systems. A serial number can be blacklisted or blocked by Adobe Systems if they detect that it has been used illegally or shared with others.
-
A crack does not have any guarantee of working properly or safely. A serial number has a guarantee of working properly and safely as long as it is genuine and legal.
-
-
The risks and benefits of using a crack
-
Using a crack for Dreamweaver CS3 has some risks and benefits that you should be aware of before deciding to use it:
-
-
The main benefit of using a crack is that you can use Dreamweaver CS3 for free without paying anything. You can save money and enjoy all the features of this software without any limitations.
-
The main risk of using a crack is that you can violate the terms and conditions of Adobe Systems. You can face legal consequences such as fines or lawsuits if they catch you using their software illegally. You can also lose your right to use their software legally in the future.
-
Another risk of using a crack is that you can expose your computer to viruses, malware, spyware, or other harmful programs. You can damage your computer system or compromise your personal data if you download or install a crack from an untrusted source. You can also infect other computers if you share or distribute a crack with others.
-
Another risk of using a crack is that you can experience errors, bugs, crashes, or compatibility issues with Dreamweaver CS3. You can lose your work or waste your time if the crack does not work properly or causes problems with your software. You can also miss out on updates, patches, fixes, or improvements from Adobe Systems if the crack prevents them from reaching your software.
-
-
How to get an authorization code for Dreamweaver CS3 crack?
-
If you still want to use a crack for Dreamweaver CS3 despite the risks involved, there are two main methods that you can try:
-
Method 1: Using a keygen program
-
A keygen program is a software program that generates random codes such as serial numbers or activation codes for other software programs. In this case, we are talking about a keygen program that generates an authorization code for Dreamweaver CS3 crack. Here are the steps to follow:
-
Step 1: Download and install the keygen program
-
You need to find and download a keygen program that works for Dreamweaver CS3 from the internet. You need to be careful about where you download it from because some sources may contain viruses or malware. You also need to scan it with an antivirus program before installing it on your computer. You may need to disable your antivirus program temporarily while installing it because some antivirus programs may detect it as a threat.
-
Step 2: Run the keygen program and generate an authorization code
-
Step 3: Enter the authorization code in Dreamweaver CS3 and activate it
-
You need to enter the authorization code that you generated from the keygen program in Dreamweaver CS3. You may need to open Dreamweaver CS3 and go to the Help menu and select Activate. You may need to enter the serial number that came with your copy of Dreamweaver CS3 or use another serial number that you found online. You may need to select the option to activate by phone and enter the authorization code that you got from the keygen program. You may need to click on Activate or Finish to complete the process.
-
Method 2: Using a patch file
-
A patch file is a software program that modifies or replaces some files or codes of another software program. In this case, we are talking about a patch file that modifies or replaces some files or codes of Dreamweaver CS3 to make it work without an authorization code. Here are the steps to follow:
-
Step 1: Download and install the patch file
-
You need to find and download a patch file that works for Dreamweaver CS3 from the internet. You need to be careful about where you download it from because some sources may contain viruses or malware. You also need to scan it with an antivirus program before installing it on your computer. You may need to disable your antivirus program temporarily while installing it because some antivirus programs may detect it as a threat.
-
Step 2: Run the patch file and apply it to Dreamweaver CS3
-
You need to run the patch file on your computer after installing it. You may need to follow some instructions or click on some buttons depending on the patch file. You may need to locate and select your Dreamweaver CS3 installation folder or file. You may need to backup your original files or codes before applying the patch file. You may need to wait for a few seconds or minutes until the patch file finishes modifying or replacing your files or codes.
-
Step 3: Enjoy using Dreamweaver CS3 without any limitations
-
You can now use Dreamweaver CS3 without entering any authorization code or activating it online. You can access all the features and functions of this software without any restrictions. You can create, edit, and manage your websites and web pages with ease and efficiency.
-
Conclusion
-
In this article, we have shown you how to get an authorization code for Dreamweaver CS3 crack using two methods: using a keygen program and using a patch file. We have also explained what is Dreamweaver CS3, what is a crack, and what are the risks and benefits of using a crack. We hope that this article has been helpful and informative for you. However, we do not recommend or endorse using a crack for Dreamweaver CS3 or any other software because it is illegal, unethical, and unsafe. We suggest that you buy a legal copy of Dreamweaver CS3 from Adobe Systems or use another free or cheap alternative web development tool instead.
-
FAQs
-
Here are some frequently asked questions about getting an authorization code for Dreamweaver CS3 crack:
-
-
Is using a crack for Dreamweaver CS3 illegal?
-
Yes, using a crack for Dreamweaver CS3 is illegal because it violates the copyright and license agreement of Adobe Systems. You can face legal consequences such as fines or lawsuits if they catch you using their software illegally.
-
Is using a crack for Dreamweaver CS3 safe?
-
No, using a crack for Dreamweaver CS3 is not safe because it can expose your computer to viruses, malware, spyware, or other harmful programs. You can damage your computer system or compromise your personal data if you download or install a crack from an untrusted source. You can also infect other computers if you share or distribute a crack with others.
-
Is using a crack for Dreamweaver CS3 reliable?
-
No, using a crack for Dreamweaver CS3 is not reliable because it can cause errors, bugs, crashes, or compatibility issues with your software. You can lose your work or waste your time if the crack does not work properly or causes problems with your software. You can also miss out on updates, patches, fixes, or improvements from Adobe Systems if the crack prevents them from reaching your software.
-
Where can I find a crack for Dreamweaver CS3?
-
You can find a crack for Dreamweaver CS3 on various websites, forums, blogs, torrents, or other online sources that offer free downloads of software cracks. However, you should be careful about where you download it from because some sources may contain viruses or malware. You should also scan it with an antivirus program before installing it on your computer.
-
What are some alternatives to using a crack for Dreamweaver CS3?
-
You can use some alternatives to using a crack for Dreamweaver CS3 such as:
-
-
Buying a legal copy of Dreamweaver CS3 from Adobe Systems or an authorized reseller.
-
Using another version of Dreamweaver such as Dreamweaver CC (Creative Cloud) which is cheaper and more updated than Dreamweaver CS3.
-
Using another web development tool such as WordPress, Wix, Squarespace, Webflow, Bootstrap Studio, Pinegrow Web Editor, etc.
-
Using another free or cheap web development tool such as Visual Studio Code, Sublime Text, Atom, Brackets, Notepad++, etc.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Digi Sm 100 Software Download REPACK.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Digi Sm 100 Software Download REPACK.md
deleted file mode 100644
index dbbe115335f730724dd5ad07bc6d1a43fdf35e9a..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Digi Sm 100 Software Download REPACK.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-
Digi SM 100 Software Download: How to Install and Use the Scale Printer Software
-
If you are looking for a reliable and user-friendly scale printer software for your retail or food industry business, you might want to consider Digi SM 100. This software allows you to manage your scale data and label design in real time, and print high-quality labels with your Digi SM 100 scale printer. In this article, we will show you how to download, install, and use Digi SM 100 software, as well as provide some troubleshooting tips and alternatives.
-
What is Digi SM 100?
-
Digi SM 100 is a scale printer product from Digi, a global leader in retail and industrial weighing solutions. It is designed to provide high-speed direct thermal printing with user-friendly features. It can be used for various applications, such as seafood, meat, deli, prepared meals, bakery, and specialty stores.
-
Features and benefits of Digi SM 100
-
Some of the features and benefits of Digi SM 100 are:
-
-
Highly-visible 19-segment green LCD display
-
Automatic date and time update with a built-in clock
-
40 or 76 preset keys for quick access to frequently used items
-
1MB memory capacity (expandable to 2MB) for storing up to 4000 PLUs
-
Cash drawer, RS-232C, and Ethernet interfaces for easy connectivity
-
Label printing size up to W:60 x L:220 mm
-
Printing speed up to 80mm/sec for label and 105mm/sec for receipt
-
Wireless LAN option for wireless communication
-
-
Variations and options of Digi SM 100
-
Digi SM 100 comes in four variations:
-
-
-
| Variation | Description | Dimensions (mm) |
| --- | --- | --- |
| Bench Type (SM-100B) | A standard model with a compact design | W:386 x D:416 x H:128 |
| Pole Type (SM-100P) | A model with a pole display for better visibility | W:385 x D:478 x H:485 |
| Hanging Type (SM-100H) | A model that can be hung from the ceiling or wall for space saving | W:340 x D:310 x H:860 |
| Elevated Type (SM-100EV) | A model with an elevated display for easier operation | W:386 x D:416 x H:550 |
-
-
Digi SM 100 also has two optional features:
-
-
Memory expansion: You can increase the memory capacity of your Digi SM 100 from 1MB to 2MB by installing a memory expansion board. This will allow you to store up to 8000 PLUs and more label formats.
-
Wireless LAN: You can enable wireless communication between your Digi SM 100 and your computer or network by installing a wireless LAN module. This will allow you to update your scale data and label design remotely and wirelessly.
-
How to download Digi SM 100 software
-
Digi SM 100 software is a Windows-based application that allows you to manage your scale data and label design in real time. You can use it to create and edit PLUs, departments, ingredients, nutrition facts, barcodes, logos, and other label elements. You can also use it to monitor and control your Digi SM 100 scale printer from your computer.
-
Requirements and compatibility
-
To download and install Digi SM 100 software, you will need the following:
-
-
A computer running Windows XP, Vista, 7, 8, or 10
-
A USB cable or a wireless LAN module to connect your Digi SM 100 scale printer to your computer
-
An internet connection to download the software from the Digi website
-
A license key to activate the software (you can obtain it from your Digi dealer or distributor)
-
-
Digi SM 100 software is compatible with the following Digi scale printer models:
-
-
SM-100
-
SM-110
-
SM-300
-
SM-500
-
SM-5100
-
SM-5500
-
SM-5600
-
SM-5700
-
SM-5800
-
SM-5900
-
-
Steps to download and install Digi SM 100 software
-
To download and install Digi SM 100 software, follow these steps:
Click on the "Products" tab and select "Scale Printer"
-
Find your Digi SM 100 model and click on it
-
Scroll down to the "Downloads" section and click on the "Software" link
-
Select the language and version of the software you want to download and click on the "Download" button
-
Save the file to your computer and unzip it if necessary
-
Run the setup.exe file and follow the instructions on the screen to install the software
-
Enter your license key when prompted and complete the installation process
-
Restart your computer if required
-
-
Congratulations! You have successfully downloaded and installed Digi SM 100 software on your computer.
-
How to use Digi SM 100 software
-
Digi SM 100 software is easy to use and has a user-friendly interface. You can use it to perform various tasks, such as connecting your scale printer, configuring your settings, designing and printing labels, and more. Here are some of the main functions of Digi SM 100 software:
-
How to connect Digi SM 100 scale printer to your computer
-
To connect your Digi SM 100 scale printer to your computer, you can use either a USB cable or a wireless LAN module. Here are the steps for each method:
-
-
USB cable: Connect one end of the USB cable to your scale printer and the other end to your computer. Turn on your scale printer and wait for your computer to recognize it. You should see a message on your screen saying that a new device has been detected. If not, you may need to install a driver for your scale printer from the Digi website.
-
Wireless LAN module: Install the wireless LAN module on your scale printer according to the instructions provided with it. Turn on your scale printer and make sure that it is connected to the same network as your computer. On your computer, open Digi SM 100 software and go to "File" > "Connect". Select "Wireless LAN" as the connection type and enter the IP address of your scale printer. Click on "OK" to establish the connection.
-
-
You should see a green icon on the bottom right corner of Digi SM 100 software indicating that you are connected to your scale printer. If not, you may need to check your network settings or contact Digi support for assistance.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EaseUS Data Recovery Full Crack for Windows 11 What You Need to Know Before You Download It.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EaseUS Data Recovery Full Crack for Windows 11 What You Need to Know Before You Download It.md
deleted file mode 100644
index 0ca04130d6f12abb8a5ac79d190a0961b0c7858c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EaseUS Data Recovery Full Crack for Windows 11 What You Need to Know Before You Download It.md
+++ /dev/null
@@ -1,33 +0,0 @@
-
-
How to Download EaseUS Data Recovery Full Crack for Windows 11
-
EaseUS Data Recovery is a popular software that can help you recover deleted, formatted, or lost data from various devices, such as hard drives, memory cards, USB flash drives, or digital cameras. However, the official version of EaseUS Data Recovery is not free, and you need to pay a certain amount of money to use its full features. This might lead some people to look for EaseUS Data Recovery full crack for Windows 11, which is a pirated version of the software that claims to offer the same functionality without any cost. But is it safe and legal to download EaseUS Data Recovery full crack for Windows 11? In this article, we will explain why you should avoid downloading EaseUS Data Recovery full crack for Windows 11 and how to download EaseUS Data Recovery legally and safely for Windows 11.
-
Why You Should Avoid Downloading EaseUS Data Recovery Full Crack for Windows 11
-
Downloading EaseUS Data Recovery full crack for Windows 11 might seem like a tempting option if you want to save money or try out the software without any limitations. However, there are many risks and drawbacks associated with this practice, such as:
You might download a fake or corrupted file that could harm your computer or compromise your personal data.
-
You might expose your device to malware, spyware, ransomware, or other malicious programs that could steal your information, lock your files, or damage your system.
-
You might violate the intellectual property rights of EaseUS and face legal consequences such as fines or lawsuits.
-
You might miss out on important updates, patches, and security fixes that EaseUS releases regularly to improve the performance and security of its software.
-
You might experience compatibility issues, bugs, errors, or crashes that could affect your productivity and workflow.
-
-
Therefore, downloading EaseUS Data Recovery full crack for Windows 11 is not worth the risk or hassle. Instead, you should opt for a legal and safe way to download EaseUS Data Recovery for Windows 11.
-
How to Download EaseUS Data Recovery Legally and Safely for Windows 11
-
There are several ways to download EaseUS Data Recovery legally and safely for Windows 11, depending on your needs and preferences. Here are some of the options you can choose from:
-
Download EaseUS Data Recovery Free Trial
-
If you want to try out EaseUS Data Recovery for a limited time before buying it, you can download a free trial version from the official website. The free trial gives you access to all the features and applications of EaseUS Data Recovery Wizard for one month. To download the free trial, you need to have an email address and a valid credit card. You can cancel the trial anytime before it expires without being charged. To download the free trial, follow these steps:
Go to the official EaseUS website, enter your email address, and click the Free Trial button.
-
Check your email inbox and click the link to confirm your subscription.
-
Enter your payment details and click Start My Free Trial.
-
Click Download Now and follow the instructions to download and install EaseUS Data Recovery on your device.
-
-
Download EaseUS Data Recovery with a License Code
-
If you have purchased a license code of EaseUS Data Recovery from an authorized source, you can download EaseUS Data Recovery with your license code from the official website. The license code is a 25-digit code that verifies your purchase and allows you to activate EaseUS Data Recovery on your device. To download EaseUS Data Recovery with a license code, follow these steps:
-
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download !LINK! Visualsvn.server.enterprise.edition.license.key.Serials.rar 16.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download !LINK! Visualsvn.server.enterprise.edition.license.key.Serials.rar 16.md
deleted file mode 100644
index 968039e1c0e138ba9265fdc0f02e89cb23e59e86..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download !LINK! Visualsvn.server.enterprise.edition.license.key.Serials.rar 16.md
+++ /dev/null
@@ -1,153 +0,0 @@
-
-
Download VisualSVN Server Enterprise Edition License Key Serials RAR 16
-
-
If you are looking for a way to download VisualSVN Server Enterprise Edition license key serials RAR 16, you are in the right place. In this article, I will show you how to get the license key serials for VisualSVN Server Enterprise Edition, a powerful and easy-to-use Subversion server for Windows. You will also learn what VisualSVN Server Enterprise Edition is, what features it offers, and why you should use it.
VisualSVN Server Enterprise Edition is a professional-grade Subversion server for Windows that allows you to set up and maintain an enterprise-level Apache Subversion server on your Windows platform. It is certified for Windows Server and trusted by thousands of SMBs and Fortune 500 companies such as General Electric, Siemens, ThyssenKrupp and Sony.
-
-
VisualSVN Server Enterprise Edition offers many features that make it the most favored way to set up and maintain a Subversion server on Windows. Some of these features are:
-
-
-
Active Directory Single Sign-On: Allows users to access VisualSVN Server using their current Active Directory domain credentials. Secure Kerberos V5 or NTLM authentication protocols are used. Support for two-factor authentication and smart cards is available.
-
Multisite Repository Replication: Provides high-performance replication between geographically distributed sites using VisualSVN Distributed File System (VDFS) technology. Distributed repositories are writable and functionally equivalent to regular Subversion repositories.
-
Full-Text Search: Search through the contents and history of your repositories — in any folder, at any revision. The search engine offers high performance, continuous indexing of new revisions and has virtually no limits on the repository sizes.
-
Backup and Restore: Back up your repositories with minimal downtime using the hotcopy feature; a short scripted sketch of this follows the feature list. Restore your repositories from backup files with a few clicks.
-
Web Interface: Manage your repositories, users, groups, permissions, hooks, and settings via a user-friendly web interface.
-
Apache Subversion Command-Line Tools: Includes the latest versions of Apache Subversion command-line tools that allow you to perform various operations on your repositories.
-
-
-
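One way to script the kind of hot-copy backup mentioned in the feature list is with the standard svnadmin tool that Apache Subversion provides. This is only a rough sketch, not VisualSVN's own backup mechanism: the repository and backup paths are hypothetical, and it assumes svnadmin is available on the PATH.

```python
import subprocess
from datetime import datetime

# Hypothetical locations; adjust to your own server layout.
REPO_PATH = r"C:\Repositories\MyProject"
BACKUP_ROOT = r"D:\Backups"

def hotcopy_backup(repo_path: str, backup_root: str) -> str:
    """Create a timestamped hot copy of a live Subversion repository."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = rf"{backup_root}\MyProject-{stamp}"
    # 'svnadmin hotcopy SRC DST' safely copies a repository while it is in use.
    subprocess.run(["svnadmin", "hotcopy", repo_path, target], check=True)
    return target

if __name__ == "__main__":
    print("Backup written to", hotcopy_backup(REPO_PATH, BACKUP_ROOT))
```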
Why Use VisualSVN Server Enterprise Edition?
-
-
VisualSVN Server Enterprise Edition is the best choice for setting up and maintaining a Subversion server on Windows for several reasons:
-
-
-
It is easy to install, configure and maintain. You can set up a full-featured and ready-to-use Subversion server in just a few clicks. Upgrades to newer versions are simple too.
-
It is free for commercial use under the Community license. The free Community license does not require any registration, allows an unlimited number of repositories and up to 15 users.
-
It is secure and reliable. It uses secure authentication protocols, encryption, backup and restore features, and other mechanisms to protect your data and ensure its availability.
-
It is scalable and flexible. It supports multisite repository replication, full-text search, web interface, command-line tools, and other features that allow you to manage your repositories efficiently and effectively.
-
-
-
How to Download VisualSVN Server Enterprise Edition License Key Serials RAR 16?
-
-
To download VisualSVN Server Enterprise Edition license key serials RAR 16, you need to follow these steps:
-
-
-
Visit the Downloads page on the VisualSVN website.
-
Choose the VisualSVN Server Enterprise Edition package that suits your Windows platform (64-bit or 32-bit).
-
Click on the Download button and save the installation file on your computer.
-
Run the installation file and follow the instructions to install VisualSVN Server Enterprise Edition on your computer.
-
After the installation is complete, launch VisualSVN Server Manager from the Start menu or desktop shortcut.
-
Select Help | Enter License Key from the menu bar.
-
Enter your name, email address, company name, and license key serial number in the corresponding fields.
-
Click on OK to activate your license key serial number.
-
-
-
You can get the license key serial number for VisualSVN Server Enterprise Edition from various sources online. However, some of these sources may not be reliable or trustworthy. They may provide you with invalid or expired license key serial numbers that may not work or may cause problems with your VisualSVN Server Enterprise Edition installation. Therefore, it is recommended that you purchase a valid license key serial number from the official VisualSVN website or from an authorized reseller.
-
-
-
Conclusion
-
-
In this article, we have learned how to download VisualSVN Server Enterprise Edition license key serials RAR 16 and how to activate them on your computer. We have also learned what VisualSVN Server Enterprise Edition is, what features it offers, and why you should use it. VisualSVN Server Enterprise Edition is a professional-grade Subversion server for Windows that allows you to set up and maintain an enterprise-level Apache Subversion server on your Windows platform. It is easy to install, configure and maintain, free for commercial use under the Community license, secure and reliable, and scalable and flexible. If you have any questions or comments, feel free to leave them below. Thanks for reading!
-
How to Uninstall VisualSVN Server Enterprise Edition
-
-
If you want to uninstall VisualSVN Server Enterprise Edition from your computer, you need to follow these steps:
-
-
-
Close VisualSVN Server Manager and any other applications that may be using VisualSVN Server.
-
Go to the Control Panel and select Programs and Features.
-
Find VisualSVN Server Enterprise Edition in the list of installed programs and click on Uninstall.
-
Follow the instructions to complete the uninstallation process.
-
Restart your computer if prompted.
-
-
-
Note that uninstalling VisualSVN Server Enterprise Edition will not delete your repositories or your license key serial number. You can keep them for future use or delete them manually if you want.
-
-
How to Upgrade VisualSVN Server Enterprise Edition
-
-
If you want to upgrade VisualSVN Server Enterprise Edition to a newer version, you need to follow these steps:
-
-
-
Visit the Downloads page on the VisualSVN website and download the latest version of VisualSVN Server Enterprise Edition that suits your Windows platform (64-bit or 32-bit).
-
Run the installation file and follow the instructions to install the new version of VisualSVN Server Enterprise Edition on your computer.
-
The installation process will automatically detect your existing installation of VisualSVN Server Enterprise Edition and upgrade it to the new version.
-
You do not need to enter your license key serial number again as it will be preserved during the upgrade process.
-
Restart your computer if prompted.
-
-
-
Note that upgrading VisualSVN Server Enterprise Edition will not affect your repositories or your settings. They will be preserved during the upgrade process.
-
-
How to Troubleshoot VisualSVN Server Enterprise Edition
-
-
If you encounter any problems with VisualSVN Server Enterprise Edition, such as errors, crashes, or performance issues, you can try some of these troubleshooting tips:
-
-
-
Check the VisualSVN Server log files for any error messages or warnings. You can find the log files in the %VISUALSVN_SERVER%\Logs folder; a small scripted example of this check follows this list.
-
Check the Windows Event Viewer for any system or application errors or warnings related to VisualSVN Server.
-
Check the Apache Subversion log files for any error messages or warnings related to Subversion operations. You can find the log files in the %VISUALSVN_SERVER%\Repositories folder.
-
Check the VisualSVN Server documentation for any solutions or tips related to common problems or scenarios.
-
Contact the VisualSVN support team via email or phone if you need further assistance or guidance. You can find their contact details on the VisualSVN website.
-
-
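If you want to script the first troubleshooting step above, a few lines of Python can list the most recently modified files in the VisualSVN Server log folder. This is a minimal sketch and assumes the %VISUALSVN_SERVER% environment variable referenced above is set on the server.

```python
import os
from pathlib import Path

# Assumes %VISUALSVN_SERVER% points at the VisualSVN Server installation folder,
# as referenced in the troubleshooting tips above.
log_dir = Path(os.environ["VISUALSVN_SERVER"]) / "Logs"

# Show the five most recently modified log files so you know where to look first.
newest = sorted(log_dir.iterdir(), key=lambda p: p.stat().st_mtime, reverse=True)[:5]
for path in newest:
    print(f"{path.name}  ({path.stat().st_size} bytes)")
```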
How to Use VisualSVN Server Enterprise Edition
-
-
After you have installed and activated VisualSVN Server Enterprise Edition on your computer, you can start using it to manage your Subversion repositories and users. You can use VisualSVN Server Manager, a user-friendly graphical interface that allows you to perform various tasks such as:
-
-
-
Create, delete, rename, or relocate your repositories.
-
Manage your users, groups, permissions, and access rules.
-
Configure your repository hooks, settings, and properties.
-
Monitor your repository activity, performance, and statistics.
-
Backup and restore your repositories.
-
Upgrade your VisualSVN Server Enterprise Edition to a newer version.
-
-
-
You can also use VisualSVN Server Web Interface, a web-based interface that allows you to perform some of the tasks that are available in VisualSVN Server Manager, such as:
-
-
-
Browse your repositories and view their contents and history.
-
Search your repositories using full-text search.
-
Download or upload files from or to your repositories.
-
Compare different versions of files or directories.
-
View or edit your repository properties.
-
-
-
How to Integrate VisualSVN Server Enterprise Edition with Visual Studio
-
-
If you are a software developer who uses Visual Studio as your IDE, you can integrate VisualSVN Server Enterprise Edition with Visual Studio using VisualSVN, a professional-grade Subversion integration plug-in for Visual Studio. VisualSVN allows you to perform various Subversion operations from within Visual Studio, such as:
-
-
-
Add, delete, rename, move, or copy files or directories in your solution or project.
-
Commit, update, revert, or merge changes to or from your repository.
-
View the status, history, log, blame, or diff of your files or directories.
-
Create, switch, or delete branches or tags.
-
Resolve conflicts or lock files.
-
Use TortoiseSVN dialogs for advanced Subversion operations.
-
-
-
To integrate VisualSVN Server Enterprise Edition with Visual Studio using VisualSVN, you need to follow these steps:
-
-
-
Visit the Downloads page on the VisualSVN website and download the latest version of VisualSVN that suits your Visual Studio version (2022, 2019, 2017).
-
Run the installation file and follow the instructions to install VisualSVN on your computer.
-
Restart Visual Studio if it was running during the installation process.
-
In Visual Studio, open the solution or project that you want to add to Subversion.
-
Select File | Add Solution to Subversion from the menu bar.
-
Select the repository URL where you want to store your solution or project and click OK.
-
The solution or project will be added to Subversion and you can start using VisualSVN features from the menu bar or the context menu.
-
-
-
How to Download Files from VisualSVN Server Enterprise Edition
-
-
If you want to download files from VisualSVN Server Enterprise Edition to your computer, you can use one of these methods:
-
-
-
Use a Subversion client such as TortoiseSVN or the Apache Subversion command-line tools. You can install these tools from the Downloads page on the VisualSVN website. To download files using a Subversion client, you need to know the repository URL and the revision number of the files that you want to download. You can use the checkout or export commands to download files from a repository, as shown in the sketch after this list.
-
Use VisualSVN Server Web Interface. You can access the web interface by entering the repository URL in your web browser. To download files using the web interface, you need to browse to the file that you want to download and click on the Download button. You can also download multiple files or directories by selecting them and clicking on the Download button.
-
-
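To make the command-line route concrete, here is a minimal sketch of the checkout and export commands mentioned in the list above, driven from Python. The repository URL and revision are placeholders, and it assumes the svn command-line client is installed.

```python
import subprocess

# Placeholder values -- replace with your own repository URL and revision.
REPO_URL = "https://svn.example.com/svn/MyProject/trunk"
REVISION = "HEAD"

# 'svn checkout' creates a working copy you can update and commit from later.
subprocess.run(["svn", "checkout", "-r", REVISION, REPO_URL, "MyProject-wc"], check=True)

# 'svn export' downloads a clean snapshot of the files without the .svn metadata.
subprocess.run(["svn", "export", "-r", REVISION, REPO_URL, "MyProject-export"], check=True)
```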
Conclusion
-
-
In this article, we have learned how to download VisualSVN Server Enterprise Edition license key serials RAR 16 and how to use them to activate VisualSVN Server Enterprise Edition on our computer. We have also learned what VisualSVN Server Enterprise Edition is, what features it offers, and why we should use it. VisualSVN Server Enterprise Edition is a professional-grade Subversion server for Windows that allows us to set up and maintain an enterprise-level Apache Subversion server on our Windows platform. It is easy to install, configure and maintain, free for commercial use under the Community license, secure and reliable, and scalable and flexible. We have also learned how to use VisualSVN Server Enterprise Edition to manage our Subversion repositories and users, how to integrate it with Visual Studio using VisualSVN, and how to download files from it using Subversion clients or the web interface. If we have any questions or comments, we can leave them below or contact the VisualSVN support team. Thanks for reading!
-
-
\ No newline at end of file
diff --git a/spaces/1line/AutoGPT/autogpt/promptgenerator.py b/spaces/1line/AutoGPT/autogpt/promptgenerator.py
deleted file mode 100644
index 0ad7046a0c41dab356abcd0151b65890e5544cd2..0000000000000000000000000000000000000000
--- a/spaces/1line/AutoGPT/autogpt/promptgenerator.py
+++ /dev/null
@@ -1,138 +0,0 @@
-""" A module for generating custom prompt strings."""
-from __future__ import annotations
-
-import json
-from typing import Any
-
-
-class PromptGenerator:
- """
- A class for generating custom prompt strings based on constraints, commands,
- resources, and performance evaluations.
- """
-
- def __init__(self) -> None:
- """
- Initialize the PromptGenerator object with empty lists of constraints,
- commands, resources, and performance evaluations.
- """
- self.constraints = []
- self.commands = []
- self.resources = []
- self.performance_evaluation = []
- self.response_format = {
- "thoughts": {
- "text": "thought",
- "reasoning": "reasoning",
- "plan": "- short bulleted\n- list that conveys\n- long-term plan",
- "criticism": "constructive self-criticism",
- "speak": "thoughts summary to say to user",
- },
- "command": {"name": "command name", "args": {"arg name": "value"}},
- }
-
- def add_constraint(self, constraint: str) -> None:
- """
- Add a constraint to the constraints list.
-
- Args:
- constraint (str): The constraint to be added.
- """
- self.constraints.append(constraint)
-
- def add_command(self, command_label: str, command_name: str, args=None) -> None:
- """
- Add a command to the commands list with a label, name, and optional arguments.
-
- Args:
- command_label (str): The label of the command.
- command_name (str): The name of the command.
- args (dict, optional): A dictionary containing argument names and their
- values. Defaults to None.
- """
- if args is None:
- args = {}
-
- command_args = {arg_key: arg_value for arg_key, arg_value in args.items()}
-
- command = {
- "label": command_label,
- "name": command_name,
- "args": command_args,
- }
-
- self.commands.append(command)
-
- def _generate_command_string(self, command: dict[str, Any]) -> str:
- """
- Generate a formatted string representation of a command.
-
- Args:
- command (dict): A dictionary containing command information.
-
- Returns:
- str: The formatted command string.
- """
- args_string = ", ".join(
- f'"{key}": "{value}"' for key, value in command["args"].items()
- )
- return f'{command["label"]}: "{command["name"]}", args: {args_string}'
-
- def add_resource(self, resource: str) -> None:
- """
- Add a resource to the resources list.
-
- Args:
- resource (str): The resource to be added.
- """
- self.resources.append(resource)
-
- def add_performance_evaluation(self, evaluation: str) -> None:
- """
- Add a performance evaluation item to the performance_evaluation list.
-
- Args:
- evaluation (str): The evaluation item to be added.
- """
- self.performance_evaluation.append(evaluation)
-
- def _generate_numbered_list(self, items: list[Any], item_type="list") -> str:
- """
- Generate a numbered list from given items based on the item_type.
-
- Args:
- items (list): A list of items to be numbered.
- item_type (str, optional): The type of items in the list.
- Defaults to 'list'.
-
- Returns:
- str: The formatted numbered list.
- """
- if item_type == "command":
- return "\n".join(
- f"{i+1}. {self._generate_command_string(item)}"
- for i, item in enumerate(items)
- )
- else:
- return "\n".join(f"{i+1}. {item}" for i, item in enumerate(items))
-
- def generate_prompt_string(self) -> str:
- """
- Generate a prompt string based on the constraints, commands, resources,
- and performance evaluations.
-
- Returns:
- str: The generated prompt string.
- """
- formatted_response_format = json.dumps(self.response_format, indent=4)
- return (
- f"Constraints:\n{self._generate_numbered_list(self.constraints)}\n\n"
- "Commands:\n"
- f"{self._generate_numbered_list(self.commands, item_type='command')}\n\n"
- f"Resources:\n{self._generate_numbered_list(self.resources)}\n\n"
- "Performance Evaluation:\n"
- f"{self._generate_numbered_list(self.performance_evaluation)}\n\n"
- "You should only respond in JSON format as described below \nResponse"
- f" Format: \n{formatted_response_format} \nEnsure the response can be"
- " parsed by Python json.loads"
- )
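For readers skimming the module above, here is a brief, hypothetical usage sketch (not part of the original file); the string arguments are illustrative placeholders.

```python
# Illustrative only: drives the PromptGenerator class defined above.
generator = PromptGenerator()
generator.add_constraint("Limit short-term memory to roughly 4000 words.")
generator.add_command("Google Search", "google", {"input": "<search terms>"})
generator.add_resource("Internet access for searches and information gathering.")
generator.add_performance_evaluation("Continuously review and analyze your actions.")

# Prints constraints, commands, resources, performance evaluations and the
# JSON response format as a single prompt string.
print(generator.generate_prompt_string())
```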
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Crazy Fox MP3 Songs Free Download Stream and Download Crazy Fox Music.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Crazy Fox MP3 Songs Free Download Stream and Download Crazy Fox Music.md
deleted file mode 100644
index 1765eb42dfe0e628fd9fd2d5b258a60b889f682d..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Crazy Fox MP3 Songs Free Download Stream and Download Crazy Fox Music.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
Download Crazy Fox Songs: How to Enjoy the Music of South Sudan's Rising Star
-
If you are a fan of South Sudanese music, you have probably heard of Crazy Fox. He is one of the most talented and popular singers in the country, and his songs are catchy, upbeat, and inspiring. In this article, we will tell you everything you need to know about Crazy Fox, why you should download his songs, and how to do it easily and safely.
-
Who is Crazy Fox?
-
Crazy Fox is a young and talented singer from South Sudan, who has been making waves in the music industry since 2016. He is also known as John Ohide, and he was born in 1995 in Juba. He grew up in a musical family, and he started singing at a young age. He was influenced by both traditional and modern music, and he developed his own unique style of singing and rapping.
Crazy Fox started his musical career in 2016, when he released his first single, "Ana Gaid", which means "I am staying" in Arabic. The song was a hit, and it expressed his love for his country and his determination to stay despite the civil war and the hardships. He followed up with more singles, such as "Juba Juice", "Nyan Ci Yer", and "Wan Ci Bi". He also collaborated with other artists, such as Silver X, Kawaja Revolution, and MT7. He has performed in several concerts and festivals, both in South Sudan and abroad. He is currently working on his first album, which is expected to be released soon.
-
His musical style and influences
-
Crazy Fox's musical style is a blend of hip hop, dancehall, reggae, afrobeat, and traditional South Sudanese music. He sings and raps in English, Arabic, Dinka, Nuer, Bari, and other local languages. He uses catchy hooks, witty lyrics, and energetic beats to create songs that appeal to a wide audience. He is influenced by both local and international artists, such as Emmanuel Kembe, Yaba Angelosi, Bob Marley, Tupac Shakur, Wizkid, and Davido.
-
His most popular songs and videos
-
Some of Crazy Fox's most popular songs are:
-
-
"Ana Gaid": This is his debut single, which was released in 2016. It is a patriotic song that celebrates his love for South Sudan and his refusal to leave his homeland. The song has over 172K views on YouTube.
-
"Juba Juice": This is a dancehall song that was released in 2017. It is a fun song that praises the beauty and diversity of Juba, the capital city of South Sudan. The song has over 68K views on YouTube.
-
"Nyan Ci Yer": This is a hip hop song that was released in 2018. It is a motivational song that encourages young people to work hard and pursue their dreams. The song has over 61K views on YouTube.
-
"Wan Ci Bi": This is a reggae song that was released in 2019. It is a love song that expresses his feelings for a special girl. The song has over 36K views on YouTube.
-
-
You can watch his official music videos on his YouTube channel, or you can listen to his songs on SoundCloud or Audiomack. You can also find his songs on other platforms, such as Spotify, Apple Music, and Deezer.
-
Why download Crazy Fox songs?
-
Downloading Crazy Fox songs is a great way to enjoy his music anytime and anywhere. There are many benefits of downloading music, such as:
-
-
-
You can listen to your favorite songs offline, without worrying about internet connection or data charges.
-
You can create your own playlists and customize your music experience.
-
You can transfer your songs to different devices and share them with your friends and family.
-
You can support the artist and help him grow his fan base and career.
-
-
Downloading Crazy Fox songs is also a good idea because streaming music in South Sudan can be challenging. South Sudan has one of the lowest internet penetration rates in the world, with only 2.5% of the population having access to the internet. Internet speeds are also slow and unreliable, averaging around 0.6 Mbps. This means that streaming music online can be frustrating and expensive, especially if you want to listen to high-quality audio. By downloading Crazy Fox songs, you can avoid these problems and enjoy his music smoothly and conveniently.
-
Another reason why you should download Crazy Fox songs is because his music has a positive cultural and social impact. His music celebrates the diversity and richness of South Sudanese culture, and promotes peace and unity among the people. His music also inspires and empowers young people to overcome their challenges and achieve their goals. His music is a source of joy and hope for many South Sudanese, especially in these difficult times. By downloading Crazy Fox songs, you can show your appreciation and support for his music and his message.
-
How to download Crazy Fox songs?
-
Downloading Crazy Fox songs is easy and simple, if you know where to look and what to do. Here are some of the best websites and apps to download his music, and some tips on how to use them.
-
The best websites and apps to download his music
-
YouTube
-
YouTube is one of the most popular and convenient platforms to download Crazy Fox songs. You can find all his official music videos on his YouTube channel, as well as some live performances and interviews. You can also find some fan-made videos and covers of his songs. To download Crazy Fox songs from YouTube, you will need a YouTube downloader app or website, such as VidMate or Y2Mate. These apps and websites allow you to download YouTube videos in different formats, such as MP4, MP3, or M4A. You can choose the format that suits your device and preference, and then save the file to your device or cloud storage. You can also adjust the quality and size of the file, depending on your internet speed and storage space.
-
SoundCloud
-
SoundCloud is another great platform to download Crazy Fox songs. You can find all his official audio tracks on his SoundCloud page, as well as some remixes and collaborations with other artists. You can also discover new songs and artists that are similar to Crazy Fox's style. To download Crazy Fox songs from SoundCloud, you will need a SoundCloud downloader app or website, such as KlickAud or ScloudDownloader. These apps and websites allow you to download SoundCloud tracks in MP3 format, which is compatible with most devices. You just need to copy the URL of the track you want to download, paste it into the app or website, and click on the download button. You can then save the file to your device or cloud storage.
-
Audiomack
-
Audiomack is another excellent platform to download Crazy Fox songs. You can find all his official audio tracks on his Audiomack page, as well as some exclusive releases and playlists. You can also follow him on Audiomack to get notified of his latest uploads and updates. To download Crazy Fox songs from Audiomack, you will need the Audiomack app, which is available for both Android and iOS devices. The app allows you to download Audiomack tracks in MP3 format, which is compatible with most devices. You just need to tap on the download icon next to the track you want to download, and choose the quality option that suits your device and preference. You can then save the file to your device or cloud storage.
-
The best devices and formats to play his music
-
Smartphones and tablets
-
Smartphones and tablets are the most common devices that people use to play music nowadays. They are portable, convenient, and versatile. You can play Crazy Fox songs on your smartphone or tablet using any music player app, such as Google Play Music, Apple Music, or VLC. You can also use headphones, earphones, or Bluetooth speakers to enhance the sound quality and volume. The best format to play Crazy Fox songs on your smartphone or tablet is MP3, which is a compressed and universal format that can save storage space and bandwidth. You can also use other formats, such as M4A, AAC, or WMA, depending on your device and preference.
-
MP3 players and speakers
-
MP3 players and speakers are another option to play music on the go. They are small, lightweight, and easy to carry. You can play Crazy Fox songs on your MP3 player or speaker using a USB cable, a memory card, or a Bluetooth connection. You can also use headphones, earphones, or external speakers to enhance the sound quality and volume. The best format to play Crazy Fox songs on your MP3 player or speaker is MP3, which is a compressed and universal format that can save storage space and battery life. You can also use other formats, such as WAV, FLAC, or OGG, depending on your device and preference.
-
Computers and laptops
-
Computers and laptops are the most powerful and versatile devices to play music. They have large screens, high-quality speakers, and fast processors. You can play Crazy Fox songs on your computer or laptop using any music player software, such as Windows Media Player, iTunes, or Winamp. You can also use headphones, earphones, or external speakers to enhance the sound quality and volume. The best format to play Crazy Fox songs on your computer or laptop is WAV, which is an uncompressed and lossless format that can preserve the original sound quality and details. You can also use other formats, such as MP3, FLAC, or OGG, depending on your device and preference.
-
Conclusion
-
Crazy Fox is one of the most talented and popular singers in South Sudan. His music is a blend of hip hop, dancehall, reggae, afrobeat, and traditional South Sudanese music. His music celebrates the diversity and richness of South Sudanese culture, and promotes peace and unity among the people. His music also inspires and empowers young people to overcome their challenges and achieve their goals. His music is a source of joy and hope for many South Sudanese, especially in these difficult times.
-
Downloading Crazy Fox songs is a great way to enjoy his music anytime and anywhere. There are many benefits of downloading music, such as listening offline, creating playlists, transferring to different devices, sharing with friends and family, and supporting the artist. Downloading Crazy Fox songs is also a good idea because streaming music in South Sudan can be challenging due to low internet penetration, slow internet speed, and high data charges. By downloading Crazy Fox songs, you can avoid these problems and enjoy his music smoothly and conveniently.
-
Downloading Crazy Fox songs is easy and simple if you know where to look and what to do. Some of the best websites and apps for downloading his music are YouTube, SoundCloud, Audiomack, Spotify, Apple Music, Deezer, VidMate, Y2Mate, KlickAud, and ScloudDownloader, and you can play the downloaded files with Google Play Music, VLC, Windows Media Player, iTunes, or Winamp. Some of the best devices for playing his music are smartphones, tablets, MP3 players, speakers, computers, and laptops, and the most useful formats are MP3, M4A, AAC, WMA, WAV, FLAC, and OGG.
-
We hope this article has helped you learn more about Crazy Fox and how to download his songs. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy listening!
-
FAQs
-
Here are some of the frequently asked questions about Crazy Fox and his music.
-
-
Where can I find Crazy Fox's social media accounts?
-
You can follow Crazy Fox on his Facebook page, his Instagram account, and his Twitter account. You can also subscribe to his YouTube channel to get notified of his latest videos.
-
How can I contact Crazy Fox for bookings or collaborations?
-
You can contact Crazy Fox through his email address, crazyfoxofficial@gmail.com, or through his phone number, +211 922 222 222. You can also send him a message on his Facebook page or his Instagram account.
-
How can I support Crazy Fox and his music?
-
You can support Crazy Fox and his music by downloading his songs, sharing them with your friends and family, leaving positive feedback and reviews, attending his concerts and events, buying his merchandise, and donating to his causes. You can also follow him on his social media accounts and show him some love and appreciation.
-
What are some of the awards and achievements that Crazy Fox has received?
-
Crazy Fox has received several awards and recognitions for his music and his contribution to the South Sudanese music industry. Some of them are:
-
-
The Best Hip Hop Artist of the Year at the South Sudan Music Awards in 2017 and 2018.
-
The Best Male Artist of the Year at the Eye Radio Music Awards in 2018.
-
The Best Collaboration of the Year for "Wan Ci Bi" featuring Silver X at the South Sudan Music Awards in 2019.
-
The Best Video of the Year for "Nyan Ci Yer" at the Eye Radio Music Awards in 2019.
-
The Most Influential Artist of the Year at the South Sudan Youth Awards in 2020.
-
-
What are some of the causes and projects that Crazy Fox is involved in?
-
Crazy Fox is not only a singer, but also a humanitarian and an activist. He is involved in several causes and projects that aim to improve the lives and conditions of the South Sudanese people. Some of them are:
-
-
He is a goodwill ambassador for UNICEF South Sudan, where he advocates for children's rights and education.
-
He is a founder and a member of the Peace Makers Initiative, where he promotes peace and reconciliation among the different ethnic groups in South Sudan.
-
He is a supporter and a donor of the Juba Orphanage Home, where he provides food, clothing, and education for orphaned children.
-
He is a sponsor and a mentor of the Juba Music Academy, where he trains and supports young aspiring musicians.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Criminal Case Save the World! Mod APK - A Fun and Challenging Game with Infinite Money Energy and Stars.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Criminal Case Save the World! Mod APK - A Fun and Challenging Game with Infinite Money Energy and Stars.md
deleted file mode 100644
index 9e9cc5fd2e622ab355302429d90f5efe7d6e602d..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Criminal Case Save the World! Mod APK - A Fun and Challenging Game with Infinite Money Energy and Stars.md
+++ /dev/null
@@ -1,128 +0,0 @@
-
-
Criminal Case Mod Apk Happymod: How to Download and Play
-
If you are a fan of detective games, you might have heard of Criminal Case, a popular hidden object game that lets you solve murder cases and catch the killers. But did you know that you can play this game with more features and benefits by using Happymod, a platform that provides modded apk files for various games and apps? In this article, we will show you what Criminal Case and Happymod are, how to download and install Criminal Case mod apk happymod, and what benefits you can get from playing this modded version of the game.
-
What is Criminal Case?
-
Criminal Case is a free-to-play game that was released in 2012 by Pretty Simple, a French game developer. The game is available on Facebook, iOS, Android, and Windows platforms. The game has over 100 million downloads on Google Play Store and has won several awards, such as the Facebook Game of the Year in 2013.
The game is set in the fictional city of Grimsborough, where you join the local police force as a rookie detective. Your job is to investigate crime scenes, find clues, interrogate suspects, analyze evidence, and arrest the culprits. The game has six seasons, each with a different theme and location. The game also has a spin-off series called Criminal Case: Mysteries of the Past, Criminal Case: The Conspiracy, Criminal Case: Save the World, Criminal Case: Travel in Time, and Criminal Case: Supernatural Investigations.
-
Features of Criminal Case
-
Some of the features of Criminal Case are:
-
-
Investigate crime scenes: You can explore various crime scenes in a grim and corrupt city, such as parks, streets, alleys, mansions, museums, etc. You have to find hidden objects that are related to the case and collect them as clues.
-
Play with your friends: You can connect your game account to Facebook and invite your friends to join you as partners. You can also compete with them on the leaderboard and see who is the best detective.
-
Examine clues and analyze samples: You can send the clues you collected to the forensic lab, where you can examine them more closely and perform tests. You can also use tools such as magnifying glass, microscope, fingerprint scanner, etc. to find more details.
-
Interrogate witnesses and suspects: You can question the people who are involved in the case, such as witnesses, victims' relatives, suspects, etc. You have to pay attention to their statements and expressions and find any contradictions or lies.
-
Bring the killer to justice: After gathering enough evidence and narrowing down the list of suspects, you have to arrest the killer and bring them to trial. You have to present your evidence and prove their guilt beyond reasonable doubt.
-
-
Tips and Tricks for Criminal Case
-
Here are some tips and tricks that can help you play Criminal Case better:
-
-
Get energy in Criminal Case: Energy is required to play any crime scene or puzzle in the game. The energy bar is automatically refilled while you wait or when you level up, but will not exceed the maximum amount of energy points. You can also get additional energy by watching ads, completing tasks, or asking your friends for help.
-
Earn stars in Criminal Case: Stars are the currency of the game, which you can use to unlock new crime scenes, perform certain actions, or buy items. You can earn stars by completing crime scenes or puzzles with high scores. You can also get free stars by watching ads, spinning the wheel, or opening gift boxes.
-
Use hints and boosters in Criminal Case: Hints and boosters are helpful tools that can make your gameplay easier and faster. Hints can show you the location of one hidden object in the crime scene, while boosters can give you various benefits, such as extra time, extra points, extra energy, etc. You can get hints and boosters by spending stars, coins, or cash.
-
Customize your avatar and your pet in Criminal Case: You can personalize your appearance and style by choosing different outfits, accessories, hairstyles, etc. for your avatar. You can also adopt a pet that can accompany you in your investigations and provide you with bonuses. You can buy clothes and pets with coins or cash.
-
Join a team or create your own in Criminal Case: You can join a team of other players or create your own team to chat, share tips, exchange gifts, and cooperate in solving cases. You can also participate in team events and challenges to earn rewards and badges.
-
-
What is Happymod?
-
Happymod is a platform that provides modded apk files for various games and apps. Modded apk files are modified versions of the original apk files that have some features unlocked or enhanced, such as unlimited money, unlimited gems, unlimited lives, etc. Happymod has a large collection of modded apk files that are tested and verified by users and editors. You can download and install these modded apk files on your Android device for free and enjoy the games and apps with more fun and convenience.
-
Features of Happymod
-
Some of the features of Happymod are:
-
-
Safe and reliable: Happymod only provides modded apk files that are 100% safe and virus-free. You can scan the apk files with any antivirus software before installing them. Happymod also has a strict review process to ensure that the modded apk files work properly and do not contain any malware or spyware.
-
Fast and easy: Happymod has a user-friendly interface that allows you to browse, search, download, and install modded apk files with just a few clicks. You can also update the modded apk files to the latest version with one tap. Happymod supports high-speed download and resume download functions to save your time and data.
-
Diverse and updated: Happymod has a huge library of modded apk files for various categories, such as action, adventure, arcade, casual, puzzle, simulation, sports, etc. You can find modded apk files for popular games and apps, such as Subway Surfers, Clash of Clans, Candy Crush Saga, Spotify, Netflix, etc. Happymod also updates its content regularly to provide you with the newest and hottest modded apk files.
-
Interactive and social: Happymod has a community feature that allows you to interact with other users and editors. You can comment on the modded apk files, rate them, request them, or report them. You can also share your feedback and suggestions with the developers and moderators of Happymod.
-
-
How to Use Happymod
-
To use Happymod, you need to follow these steps:
-
-
Download the Happymod app: You can download the Happymod app from its official website or from any trusted third-party source. The app is compatible with Android 4.1 or higher devices.
-
Install the Happymod app: After downloading the app, you need to install it on your device. You may need to enable unknown sources in your device settings to allow the installation of apps from outside the Google Play Store.
-
Open the Happymod app: Once the app is installed, you can open it and grant it the necessary permissions to access your device storage and network.
-
Browse or search for modded apk files: You can browse through the different categories or use the search bar to find the modded apk files you want. You can also filter the results by popularity, rating, update date, etc.
-
Download and install modded apk files: After finding the modded apk files you want, you can download them by tapping on the download button. You can see the progress of the download in the notification bar or in the app itself. Once the download is complete, you can install the modded apk files by tapping on the install button. You may need to enable unknown sources again if prompted.
Launch the modded games or apps: After installing the modded apk files, you can launch them from your device menu or from the Happymod app. You can enjoy the games or apps with the modded features and benefits.
-
-
How to Download and Install Criminal Case Mod Apk Happymod
-
To download and install Criminal Case mod apk happymod, you need to follow these steps:
-
Step 1: Download the Mod Apk File
-
You can download the Criminal Case mod apk happymod file from the Happymod app or from this link: [Criminal Case Mod Apk Happymod]. The file size is about 66 MB and the mod version is 2.36.4.
-
-
Step 2: Enable Unknown Sources
-
Before installing the mod apk file, you need to enable unknown sources in your device settings. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps from outside the Google Play Store.
-
Step 3: Install the Mod Apk File
-
After downloading the mod apk file, you need to install it on your device. To do this, locate the file in your device storage or in the Happymod app and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on Install and wait for a few seconds until the installation is complete.
-
Step 4: Launch the Game and Enjoy
-
After installing the mod apk file, you can launch the game from your device menu or from the Happymod app. You will see a new icon with a red H on it, which indicates that it is a modded game. You can enjoy the game with unlimited money, energy, and stars, as well as no ads and no waiting time.
-
Benefits of Playing Criminal Case Mod Apk Happymod
-
Playing Criminal Case mod apk happymod has many benefits that can enhance your gaming experience and make it more fun and convenient. Some of these benefits are:
-
Unlimited Money, Energy, and Stars
-
With Criminal Case mod apk happymod, you will have unlimited money, energy, and stars in your game account. You can use these resources to buy items, unlock new crime scenes, perform actions, use hints and boosters, customize your avatar and your pet, join a team or create your own, etc. You will not have to worry about running out of money, energy, or stars ever again.
-
No Ads and No Waiting Time
-
With Criminal Case mod apk happymod, you will not see any ads or pop-ups in your game screen. You will also not have to wait for any loading time or countdown timer before playing any crime scene or puzzle. You will have a smooth and uninterrupted gameplay without any distractions or delays.
-
More Fun and Challenge
-
With Criminal Case mod apk happymod, you will have more fun and challenge in solving murder cases and catching killers. You will be able to explore more crime scenes, examine more clues, interrogate more suspects, and arrest more culprits. You will also be able to compete with your friends and other players on the leaderboard and see who is the best detective.
-
Conclusion
-
Criminal Case is a great game for anyone who loves detective games and hidden object games. It has an engaging storyline, realistic graphics, diverse characters, and exciting gameplay. However, if you want to play this game with more features and benefits, you should try Criminal Case mod apk happymod. This is a modded version of the game that gives you unlimited money, energy, and stars, as well as no ads and no waiting time. You can download and install this modded version of the game from Happymod, a platform that provides safe and reliable modded apk files for various games and apps. By playing Criminal Case mod apk happymod, you will have more fun and challenge in solving murder cases and catching killers.
-
I hope this article has helped you understand what Criminal Case mod apk happymod is, how to download and install it, and what benefits you can get from playing it. If you have any questions or feedback, please feel free to leave them in the comment section below. Thank you for reading and happy gaming!
-
Here are some FAQs that you might find useful:
-
Q: Is Criminal Case mod apk happymod safe to use?
-
A: Yes, Criminal Case mod apk happymod is safe to use as long as you download it from Happymod or from a trusted source. You can also scan the mod apk file with any antivirus software before installing it. However, you should be aware that using modded apk files may violate the terms and conditions of the original game and may result in your account being banned or suspended. Therefore, you should use Criminal Case mod apk happymod at your own risk and discretion.
-
Q: Do I need to root my device to use Criminal Case mod apk happymod?
-
A: No, you do not need to root your device to use Criminal Case mod apk happymod. You can install and play the modded version of the game without any root access or permission. However, some features of the modded version may require root access to work properly, such as unlimited money, energy, and stars. If you want to use these features, you may need to root your device first.
-
Q: Can I play Criminal Case mod apk happymod offline?
-
A: Yes, you can play Criminal Case mod apk happymod offline without any internet connection. However, some features of the game may require internet connection to work properly, such as connecting to Facebook, inviting friends, joining a team, etc. If you want to use these features, you will need to connect your device to a stable Wi-Fi or mobile data network.
-
Q: Can I play Criminal Case mod apk happymod with my friends?
-
A: Yes, you can play Criminal Case mod apk happymod with your friends who also have the modded version of the game installed on their devices. You can connect your game account to Facebook and invite your friends to join you as partners. You can also compete with them on the leaderboard and see who is the best detective. However, you may not be able to play with your friends who have the original version of the game installed on their devices, as they may have different game versions and features.
-
Q: How can I update Criminal Case mod apk happymod?
-
A: You can update Criminal Case mod apk happymod by downloading and installing the latest version of the mod apk file from Happymod or from any trusted source. You can also check for updates in the Happymod app itself and tap on the update button if available. However, you should be careful when updating the mod apk file, as it may overwrite your previous game data and progress. Therefore, you should backup your game data before updating the mod apk file.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download City Bus Simulator 3D Offline Mod Apk and Enjoy Realistic Bus Driving.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download City Bus Simulator 3D Offline Mod Apk and Enjoy Realistic Bus Driving.md
deleted file mode 100644
index 8eb6683988721102fad0f3cf809fbb96fbe53ddd..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download City Bus Simulator 3D Offline Mod Apk and Enjoy Realistic Bus Driving.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
City Bus Simulator 3D Offline Mod APK: A Fun and Realistic Driving Game
-
Introduction
-
Do you love driving games? Do you want to experience the thrill of driving a city bus in a realistic environment? If yes, then you should try City Bus Simulator 3D Offline Mod APK, a fun and addictive game that will test your skills and patience as a bus driver.
City Bus Simulator 3D Offline Mod APK is a modified version of the original game, City Bus Simulator 2020, developed by This Gen Gamez. It is an offline game that lets you drive various buses in different cities and scenarios. You can pick up passengers, follow the traffic rules, avoid accidents, and complete your routes on time.
-
Why should you play City Bus Simulator 3D Offline Mod APK?
-
City Bus Simulator 3D Offline Mod APK is a game that will keep you entertained for hours. You will enjoy the following benefits when you play this game:
-
-
You will improve your driving skills and learn how to handle different situations on the road.
-
You will explore different cities and landscapes with realistic graphics and sound effects.
-
You will have fun with the challenging missions and achievements that will reward you with money and items.
-
You will have access to unlimited money and unlocked items that will enhance your gameplay and customization options.
-
-
Features of City Bus Simulator 3D Offline Mod APK
-
City Bus Simulator 3D Offline Mod APK has many features that make it one of the best driving games available. Here are some of the features that you will love:
-
-
Amazing 3D graphics and sound effects
-
The game has stunning 3D graphics that will make you feel like you are driving a real bus. You will see detailed bus models, realistic passengers, dynamic shadows, and reflections. You will also hear realistic sound effects such as engine noises, horn sounds, brake sounds, and passenger voices.
-
Multiple bus models and routes to choose from
-
The game has a variety of buses that you can drive, such as city buses, school buses, tourist buses, double-decker buses, and more. Each bus has its own characteristics, such as speed, handling, capacity, and fuel consumption. You can also customize your bus with different colors, stickers, wheels, and accessories. The game also has different routes that you can choose from, such as city routes, highway routes, mountain routes, desert routes, and more. Each route has its own challenges, such as traffic jams, roadblocks, sharp turns, steep slopes, and weather changes.
-
Realistic traffic and weather conditions
-
The game has realistic traffic and weather conditions that will affect your driving experience. You will encounter different vehicles on the road, such as cars, trucks, motorcycles, bicycles, and pedestrians. You will have to follow the traffic rules, such as stopping at red lights, giving way to other vehicles, and using indicators. You will also face different weather conditions, such as sunny days, rainy days, foggy days, snowy days, and stormy days. You will have to adjust your driving style according to the weather conditions.
-
Challenging missions and achievements
-
The game has challenging missions and achievements that will test your skills and patience as a bus driver. You will have to complete different tasks, such as picking up and dropping off passengers, following the schedule, avoiding collisions, and parking the bus. You will also have to earn stars, coins, and diamonds that you can use to buy new buses and items. You will also unlock new levels and routes as you progress in the game.
-
Unlimited money and unlocked items
-
The game has unlimited money and unlocked items that will make your gameplay more enjoyable and easier. You will not have to worry about running out of money or fuel, or having to watch ads to get more resources. You will also have access to all the buses and items that are normally locked or require real money to purchase. You can customize your bus and gameplay as you wish.
-
How to download and install City Bus Simulator 3D Offline Mod APK?
-
If you want to play City Bus Simulator 3D Offline Mod APK, you will have to download and install it on your device. Here are the steps that you need to follow:
-
Step 1: Download the APK file from a trusted source
-
You will have to download the APK file of City Bus Simulator 3D Offline Mod APK from a trusted source, such as [this link]. The file size is about 100 MB, so make sure you have enough space on your device. You can also scan the file with an antivirus program before opening it.
-
Step 2: Enable unknown sources on your device
-
You will have to enable unknown sources on your device to install the APK file. To do this, go to your device settings, then security, then unknown sources. Turn on the option that allows you to install apps from unknown sources. This will allow you to install the APK file without any problems.
-
Step 3: Install the APK file and launch the game
-
You will have to install the APK file by tapping on it and following the instructions on the screen. It may take a few minutes for the installation to complete. Once it is done, you can launch the game by tapping on its icon on your home screen or app drawer. You can now enjoy playing City Bus Simulator 3D Offline Mod APK.
-
Conclusion
-
City Bus Simulator 3D Offline Mod APK is a fun and realistic driving game that will give you hours of entertainment and excitement. You will be able to drive various buses in different cities and scenarios, with amazing 3D graphics and sound effects, realistic traffic and weather conditions, challenging missions and achievements, unlimited money and unlocked items, and more. You can download and install City Bus Simulator 3D Offline Mod APK easily by following the steps above. If you love driving games, you should definitely try City Bus Simulator 3D Offline Mod APK.
-
FAQs
-
-
Is City Bus Simulator 3D Offline Mod APK safe to use?
-
Yes, City Bus Simulator 3D Offline Mod APK is safe to use, as long as you download it from a trusted source, such as [this link]. You can also scan the file with an antivirus program before installing it.
-
Is City Bus Simulator 3D Offline Mod APK free to play?
-
Yes, City Bus Simulator 3D Offline Mod APK is free to play, and you do not need an internet connection to play it. You also do not need to spend any real money to buy any buses or items in the game.
-
How can I update City Bus Simulator 3D Offline Mod APK?
-
You can update City Bus Simulator 3D Offline Mod APK by downloading the latest version of the APK file from [this link] and installing it over the existing one. You do not need to uninstall the previous version before installing the new one.
-
How can I contact the developer of City Bus Simulator 3D Offline Mod APK?
-
You can contact the developer of City Bus Simulator 3D Offline Mod APK by visiting their website at [this link] or by sending them an email at thisgengamez@gmail.com.
-
What are some other games like City Bus Simulator 3D Offline Mod APK?
-
Some other games like City Bus Simulator 3D Offline Mod APK are Coach Bus Simulator, Heavy Bus Simulator, Public Transport Simulator, Euro Truck Driver, and Driving School Sim.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Brawl Stars Hack Mod APK The Ultimate Cheat for Brawl Stars Fans.md b/spaces/1phancelerku/anime-remove-background/Brawl Stars Hack Mod APK The Ultimate Cheat for Brawl Stars Fans.md
deleted file mode 100644
index c6c8dc536f356354c7aa7879a985f9a8de242362..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Brawl Stars Hack Mod APK The Ultimate Cheat for Brawl Stars Fans.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Brawl Stars Hack Mod APK Free Download: Everything You Need to Know
-
If you are a fan of fast-paced multiplayer action games, you might have heard of Brawl Stars. This is a popular game developed by Supercell, the same company behind Clash of Clans and Clash Royale. In this game, you can choose from a variety of brawlers, each with their own unique skills and abilities, and compete with other players in different modes. You can also team up with your friends or join a club to chat and share tips.
But what if you want to get unlimited gems, coins, and other resources in the game without spending real money? Well, that's where Brawl Stars Hack Mod APK comes in. This is a modified version of the original game that gives you access to various cheats and hacks that can enhance your gaming experience. In this article, we will tell you everything you need to know about Brawl Stars Hack Mod APK, including its features, benefits, risks, and how to download and install it on your device.
-
What is Brawl Stars?
-
Brawl Stars is a 3v3 online multiplayer action game that was released in 2018 for Android and iOS devices. The game features a colorful and cartoonish graphics style that appeals to both kids and adults. The game has over 40 brawlers that you can unlock and upgrade as you progress in the game. Each brawler has a unique personality, appearance, and skill set that makes them suitable for different roles and strategies.
-
Features of Brawl Stars
-
Some of the main features of Brawl Stars are:
-
-
Multiple game modes: You can choose from various game modes such as Gem Grab, Showdown, Heist, Bounty, Siege, Hot Zone, Knockout, and more. Each mode has its own rules and objectives that require different tactics and teamwork.
-
Customizable brawlers: You can customize your brawlers with different skins, pins, gadgets, and star powers that can change their appearance and performance. You can also mix and match different brawlers to create your own team composition.
-
Social features: You can join or create a club to chat with other players, share tips, and play together. You can also invite your friends to join your team or play against them in friendly matches. You can also participate in special events and challenges to earn rewards and trophies.
-
Regular updates: The game is constantly updated with new brawlers, skins, maps, modes, events, and more. The developers also listen to the feedback from the community and make improvements and changes accordingly.
-
-
How to play Brawl Stars
-
To play Brawl Stars, you need to have a stable internet connection and a compatible device. You can download the game for free from the Google Play Store or the App Store. Once you have installed the game, you need to create an account or log in with your existing one. You can then choose your preferred game mode and start playing.
-
brawl stars mod apk unlimited money and gems download
-brawl stars hack apk latest version free download
-brawl stars mod menu apk download for android
-brawl stars hack online generator no human verification
-brawl stars mod apk all brawlers unlocked download
-brawl stars hack apk no root 2023 free download
-brawl stars mod apk private server download
-brawl stars hack tool download without survey
-brawl stars mod apk unlimited everything download
-brawl stars hack apk mediafıre link free download
-brawl stars mod apk god mode download
-brawl stars hack apk ios download no jailbreak
-brawl stars mod apk offline download
-brawl stars hack apk unlimited coins and gems download
-brawl stars mod apk new update download
-brawl stars hack apk 2023 free download
-brawl stars mod apk unlimited tickets download
-brawl stars hack apk mega mod free download
-brawl stars mod apk with obb file download
-brawl stars hack apk working free download
-brawl stars mod apk unlimited health download
-brawl stars hack apk android 1 free download
-brawl stars mod apk revdl download
-brawl stars hack apk happymod free download
-brawl stars mod apk rexdl download
-brawl stars hack apk no ban free download
-brawl stars mod apk unlock all skins download
-brawl stars hack apk antiban free download
-brawl stars mod apk unlimited ammo download
-brawl stars hack apk original free download
-brawl stars mod apk no ads download
-brawl stars hack apk easy free download
-brawl stars mod apk supercell id download
-brawl stars hack apk real free download
-brawl stars mod apk with cheat menu download
-brawl stars hack apk unlimited trophies free download
-brawl stars mod apk one hit kill download
-brawl stars hack apk no password free download
-brawl stars mod apk fast reload download
-brawl stars hack apk direct link free download
-
The controls are simple and intuitive. You can move your brawler with the left joystick and aim and shoot with the right joystick. You can also use the buttons on the right side of the screen to activate your super attack, gadget, or star power. The objective of each mode varies depending on the rules. For example, in Gem Grab, you need to collect 10 gems before the enemy team does; in Showdown, you need to survive as long as possible against other players; in Heist, you need to destroy the enemy's safe while protecting yours; and so on.
-
What is Brawl Stars Hack Mod APK?
-
Brawl Stars Hack Mod APK is a modified version of the original Brawl Stars game that allows you to access various cheats and hacks that are not available in the official game. For example, you can get unlimited gems, coins, tickets, and other resources that you can use to unlock and upgrade your brawlers, buy skins, and participate in events. You can also unlock all the brawlers, gadgets, and star powers without spending any money or time. You can also use some features such as auto-aim, wallhack, speedhack, and more to gain an edge over your opponents.
-
Benefits of using Brawl Stars Hack Mod APK
-
Some of the benefits of using Brawl Stars Hack Mod APK are:
-
-
You can enjoy the game without any limitations or restrictions. You can play any mode, map, or event you want without worrying about your resources or progress.
-
You can save your money and time. You don't have to spend real money or watch ads to get gems, coins, or tickets. You also don't have to wait for hours or days to unlock or upgrade your brawlers.
-
You can have more fun and excitement. You can experiment with different brawlers, skins, gadgets, and star powers and see how they work in different situations. You can also dominate the game and win every match with ease.
-
-
Risks of using Brawl Stars Hack Mod APK
-
However, using Brawl Stars Hack Mod APK also comes with some risks that you should be aware of:
-
-
You may face legal issues. Using a modified version of the game is against the terms of service and privacy policy of Supercell. You may be violating their intellectual property rights and breaking the law. You may also be exposing yourself to malware or viruses that may harm your device or data.
-
You may lose your account or progress. Supercell has a strict anti-cheat system that can detect and ban users who use hacks or mods. You may lose your account permanently or temporarily if you are caught. You may also lose your progress and achievements if you uninstall the mod or switch to the official game.
-
You may ruin the game experience for yourself and others. Using hacks or mods may make the game too easy or boring for you. You may lose the challenge and thrill of playing the game legitimately. You may also ruin the game experience for other players who play fairly and honestly. You may cause them frustration and anger by cheating and winning unfairly.
-
-
How to download and install Brawl Stars Hack Mod APK?
-
If you still want to try Brawl Stars Hack Mod APK despite the risks, you need to follow these steps to download and install it on your device:
-
Step 1: Enable unknown sources
-
Before you can install any APK file on your device, you need to enable unknown sources in your settings. This will allow you to install apps from sources other than the Google Play Store or the App Store. To do this, go to your settings > security > unknown sources and toggle it on.
-
Step 2: Download the APK file
-
Next, you need to download the APK file of Brawl Stars Hack Mod from a reliable and trustworthy source. You can search for it online or use this link: [Brawl Stars Hack Mod APK]. Make sure you download the latest version of the mod that is compatible with your device and the official game.
-
Step 3: Install the APK file
-
Once you have downloaded the APK file, you need to locate it in your file manager and tap on it to start the installation process. You may see a warning message that says "this type of file can harm your device". Ignore it and tap on "install anyway". Wait for a few seconds until the installation is complete.
-
Step 4: Launch the game and enjoy
-
Finally, you can launch the game from your app drawer or home screen and enjoy the hack mod features. You should see a menu icon on the top left corner of the screen that will give you access to various cheats and hacks. You can also see your unlimited resources on the top right corner of the screen.
-
Conclusion
-
Brawl Stars is a fun and addictive multiplayer action game that offers a lot of variety and excitement. However, if you want to get unlimited resources and access various cheats and hacks in the game, you can try Brawl Stars Hack Mod APK. This is a modified version of the original game that gives you various advantages over other players. However, you should also be aware of the risks involved in using Brawl Stars Hack Mod APK, such as legal issues, account bans, or game experience degradation. Therefore, you should use it at your own risk and discretion. We hope this article has given you some useful information about Brawl Stars Hack Mod APK and how to download and install it on your device. If you have any questions or feedback, feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions about Brawl Stars Hack Mod APK:
-
-
-
Question
-
Answer
-
-
-
Is Brawl Stars Hack Mod APK safe to use?
-
Brawl Stars Hack Mod APK is not safe to use as it may contain malware or viruses that can harm your device or data. It may also violate the terms of service and privacy policy of Supercell and expose you to legal issues. Moreover, it may be detected and banned by the anti-cheat system of Supercell and cause you to lose your account or progress.
-
-
-
Is Brawl Stars Hack Mod APK free to download?
-
Brawl Stars Hack Mod APK is free to download from various sources online. However, you should be careful and only download it from reliable and trustworthy sources. You should also avoid clicking on any suspicious links or ads that may redirect you to malicious sites or download unwanted files.
-
-
-
Can I use Brawl Stars Hack Mod APK with my existing account?
-
You can use Brawl Stars Hack Mod APK with your existing account, but it is not recommended. You may risk losing your account or progress if you are caught using hacks or mods by the anti-cheat system of Supercell. You may also face legal issues if you are found violating the terms of service and privacy policy of Supercell. Therefore, it is better to use a new or secondary account if you want to try Brawl Stars Hack Mod APK.
-
-
-
Can I play Brawl Stars Hack Mod APK with other players?
-
You can play Brawl Stars Hack Mod APK with other players who are also using the same mod. However, you may not be able to play with players who are using the official game or a different mod. You may also face unfair competition or imbalance in the game as some players may have more advantages than others.
-
-
-
Can I update Brawl Stars Hack Mod APK?
-
You can update Brawl Stars Hack Mod APK if there is a new version available from the source you downloaded it from. However, you should be careful and backup your data before updating as some updates may not be compatible with your device or the official game. You should also check the features and reviews of the new version before updating to make sure it works properly and does not have any bugs or errors.
-
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Craftsman Building Craft - Google Playde En ok ndirilen Oyun.md b/spaces/1phancelerku/anime-remove-background/Craftsman Building Craft - Google Playde En ok ndirilen Oyun.md
deleted file mode 100644
index b84f1a949bd9596f4229a120c0ec969aba8b573d..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Craftsman Building Craft - Google Playde En ok ndirilen Oyun.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
Craftsman Yukle: How to Play Craftsman: Building Craft on PC
-
Do you love building and exploring in sandbox games? Do you want to unleash your creativity and imagination in a virtual world? If yes, then you should try Craftsman: Building Craft, a popular game that lets you design houses, castles, and other structures. You can play it alone or with your friends online. But did you know that you can also play it on your PC? In this article, we will show you how to download and install Craftsman: Building Craft on your PC using BlueStacks, an Android emulator that allows you to run mobile apps and games on your computer. But first, let's find out more about this game.
-
What is Craftsman: Building Craft?
-
Craftsman: Building Craft is a sandbox game that was developed by StarGame22 and released in 2020. It has over 50 million downloads on the Google Play Store and has a rating of 4.1 out of 5 stars. The game is inspired by Minecraft, but has its own features and style. Here are some of the main aspects of the game:
-
A game with creative and survival modes
-
In Craftsman: Building Craft, you can choose between two modes: creative and survival. In creative mode, you have unlimited resources and can build anything you want without any restrictions. You can also fly around and explore different biomes, such as forests, deserts, mountains, and oceans. In survival mode, you have to gather resources, craft tools and weapons, fight enemies, and survive the night. You also have to manage your hunger and health bars.
-
A game with stunning graphics and realistic sound
-
One of the most impressive features of Craftsman: Building Craft is its graphics. The game has a pixelated style that gives it a retro feel, but also has realistic lighting, shadows, and textures. The game also has dynamic weather effects, such as rain, snow, fog, and thunderstorms. The sound effects are also very immersive, as you can hear the sounds of nature, animals, monsters, and explosions.
-
A game with multiplayer and offline options
-
Craftsman: Building Craft is not only a single-player game, but also a multiplayer one. You can join online servers and play with other players from around the world. You can chat with them, collaborate with them, or compete with them. You can also create your own server and invite your friends to join. If you prefer playing offline, you can also do that. You can play the game without an internet connection and enjoy it at your own pace.
-
Why play Craftsman: Building Craft on PC?
-
While Craftsman: Building Craft is a great game to play on your mobile device, it can also be more enjoyable to play it on your PC. Here are some of the reasons why:
-
Enjoy a bigger screen and better controls
-
Playing Craftsman: Building Craft on your PC means that you can see the game in full HD resolution on a larger screen. This will make the game more detailed and vivid, as well as easier to see and navigate. You can also use your keyboard and mouse to control the game instead of tapping on a small touchscreen. This will give you more accuracy and comfort when building, crafting, fighting, and exploring.
-
craftsman yukle pc
-craftsman yukle apk
-craftsman yukle android
-craftsman yukle oyunu
-craftsman yukle indir
-craftsman yukle bluestacks
-craftsman yukle windows 10
-craftsman yukle mac
-craftsman yukle laptop
-craftsman yukle online
-craftsman yukle nasıl oynanır
-craftsman yukle hileleri
-craftsman yukle mod apk
-craftsman yukle ücretsiz
-craftsman yukle son sürüm
-craftsman yukle google play
-craftsman yukle ios
-craftsman yukle iphone
-craftsman yukle ipad
-craftsman yukle app store
-craftsman yukle oyna
-craftsman yukle türkçe
-craftsman yukle inceleme
-craftsman yukle video
-craftsman yukle youtube
-craftsman yukle minecraft
-craftsman yukle benzeri oyunlar
-craftsman yukle ev yapma
-craftsman yukle şehir kurma
-craftsman yukle hayatta kalma
-craftsman yukle sandbox oyunu
-craftsman yukle multiplayer
-craftsman yukle arkadaşlarla oynama
-craftsman yukle rehberi
-craftsman yukle ipuçları
-craftsman yukle grafikleri
-craftsman yukle sesleri
-craftsman yukle özellikleri
-craftsman yukle güncellemeleri
-craftsman yukle sorunları
-craftsman yukle destek
-craftsman yukle forum
-craftsman yukle facebook
-craftsman yukle instagram
-craftsman yukle twitter
-craftsman yukle discord
-craftsman yukle reddit
-craftsman yukle quora
-
Use BlueStacks to enhance your gaming experience
-
BlueStacks is a powerful and reliable Android emulator that allows you to run mobile apps and games on your PC. By using BlueStacks, you can enjoy Craftsman: Building Craft with many advantages, such as:
-
-
Fast and smooth performance: BlueStacks uses advanced technology to optimize the game's speed and stability, so you can play without any lag or crashes.
-
Customizable settings: BlueStacks lets you adjust the game's graphics, sound, and language according to your preferences. You can also change the key mapping and mouse sensitivity to suit your play style.
-
Multiple instances: BlueStacks allows you to run multiple instances of the game at the same time, so you can switch between different accounts or modes easily.
-
Screen recording and streaming: BlueStacks enables you to record your gameplay and save it as a video file, or stream it live to platforms like Twitch, YouTube, or Facebook.
-
-
Access the BlueStacks Macro Community for more fun
-
Another feature that makes BlueStacks stand out is the Macro Community, a place where you can find and share macros for various games. Macros are sequences of commands that automate certain actions in the game, such as building, crafting, or fighting. By using macros, you can save time and effort, as well as improve your skills and efficiency. You can create your own macros using the BlueStacks Macro Editor, or download macros from other users in the Macro Community. You can also rate and comment on the macros, or share your own with others.
-
How to download and install Craftsman: Building Craft on PC?
-
Now that you know why playing Craftsman: Building Craft on PC is a good idea, let's see how to do it. The process is very simple and only takes a few minutes. Here are the steps you need to follow:
-
Download and install BlueStacks on your PC
-
The first thing you need to do is download BlueStacks from its official website: https://www.bluestacks.com/. You can choose between the Windows or Mac version depending on your operating system. Once you have downloaded the installer file, double-click on it and follow the instructions to install BlueStacks on your PC. The installation may take some time depending on your internet speed and PC specifications.
-
Launch BlueStacks and search for Craftsman: Building Craft
-
After installing BlueStacks, launch it from your desktop or start menu. You will see the BlueStacks home screen with various icons and options. On the top right corner, you will see a search bar where you can type the name of the game you want to play. In this case, type "Craftsman: Building Craft" and hit enter. You will see a list of results with the game's icon and name.
-
Install the game from the Google Play Store or the APK file
-
To install Craftsman: Building Craft on your PC, you have two options: either from the Google Play Store or from an APK file. The Google Play Store is the official source of Android apps and games, where you can download them safely and securely. To access it, click on the game's icon from the search results and then click on the "Install" button. You may need to sign in with your Google account if you haven't done so before. The game will start downloading and installing automatically.
-
The other option is to use an APK file, which is a compressed file that contains the game's data and code. You can download an APK file from various websites on the internet, but be careful as some of them may contain viruses or malware. To use an APK file, click on the "Install APK" button on the bottom right corner of the BlueStacks home screen. Then browse your PC folders and select the APK file you want to install. The game will start installing automatically.
-
Start playing and enjoy the game
-
Once you have installed Craftsman: Building Craft on your PC, you can start playing it right away. To launch it, click on its icon from the BlueStacks home screen or from your desktop shortcut. You will see the game's loading screen and then its main menu. From there, you can choose between creative or survival mode, join or create a server, or customize your settings. You can also access the BlueStacks features such as macros, screen recording, streaming, etc.
-
Conclusion
-
Craftsman: Building Craft is a fun and addictive sandbox game that lets you build and explore in a pixelated world. You can play it on your mobile device, but you can also play it on your PC using BlueStacks, an Android emulator that offers many benefits and features. By playing Craftsman: Building Craft on PC, you can enjoy a bigger screen, better controls, faster performance, customizable settings, multiple instances, screen recording, streaming, and macros. To play Craftsman: Building Craft on PC, you just need to download and install BlueStacks, search for the game, and install it from the Google Play Store or an APK file. Then you can start playing and enjoy the game.
-
FAQs
-
Here are some of the frequently asked questions about Craftsman: Building Craft and BlueStacks:
-
-
-
Question
-
Answer
-
-
-
Is Craftsman: Building Craft free to play?
-
Yes, Craftsman: Building Craft is free to download and play. However, it may contain ads and in-app purchases.
-
-
-
Is Craftsman: Building Craft safe to play?
-
Yes, Craftsman: Building Craft is safe to play as long as you download it from a trusted source, such as the Google Play Store or the official BlueStacks website.
-
-
-
Is BlueStacks free to use?
-
Yes, BlueStacks is free to download and use. However, it may offer some optional premium features and services that require a subscription or a payment.
-
-
-
Is BlueStacks safe to use?
-
Yes, BlueStacks is safe to use as long as you download it from its official website: https://www.bluestacks.com/. BlueStacks is also compliant with the Google Play Protect and other security standards.
-
-
-
How can I contact the support team of Craftsman: Building Craft or BlueStacks?
-
If you have any issues or questions about Craftsman: Building Craft, you can contact the developer of the game through their email address: stargame22.contact@gmail.com. If you have any issues or questions about BlueStacks, you can contact the support team of BlueStacks through their website: https://support.bluestacks.com/.
-
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/paint_by_example/image_encoder.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/paint_by_example/image_encoder.py
deleted file mode 100644
index d1e8a75d08af45542e27869668c27e922c0c41e6..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/paint_by_example/image_encoder.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import paddle
-from paddle import nn
-
-from paddlenlp.transformers import (
- CLIPPretrainedModel,
- CLIPVisionConfig,
- CLIPVisionModel,
-)
-
-from ...models.attention import BasicTransformerBlock
-from ...utils import logging
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-class PaintByExampleImageEncoder(CLIPPretrainedModel):
- config_class = CLIPVisionConfig
-
- def __init__(self, config: CLIPVisionConfig):
- super().__init__(config)
- self.projection_dim = config.projection_dim
-
- self.model = CLIPVisionModel(config)
-
- self.mapper = PaintByExampleMapper(config)
- self.final_layer_norm = nn.LayerNorm(config.hidden_size)
- self.proj_out = nn.Linear(config.hidden_size, self.projection_dim)
-
- # uncondition for scaling
- self.uncond_vector = self.create_parameter(
- [1, 1, self.projection_dim],
- dtype=paddle.get_default_dtype(),
- default_initializer=nn.initializer.Assign(paddle.rand((1, 1, self.projection_dim))),
- )
-
- def forward(self, pixel_values):
- clip_output = self.model(pixel_values=pixel_values)
- latent_states = clip_output.pooler_output
- latent_states = self.mapper(latent_states[:, None])
- latent_states = self.final_layer_norm(latent_states)
- latent_states = self.proj_out(latent_states)
- return latent_states
-
-
-class PaintByExampleMapper(nn.Layer):
- def __init__(self, config):
- super().__init__()
- num_layers = (config.num_hidden_layers + 1) // 5
- hid_size = config.hidden_size
- num_heads = 1
- self.blocks = nn.LayerList(
- [
- BasicTransformerBlock(hid_size, num_heads, hid_size, activation_fn="gelu", attention_bias=True)
- for _ in range(num_layers)
- ]
- )
-
- def forward(self, hidden_states):
- for block in self.blocks:
- hidden_states = block(hidden_states)
-
- return hidden_states
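As a quick sanity check on the encoder in the module above, a minimal shape sketch could look like the following; it assumes paddle and paddlenlp are installed, that the classes above are in scope, and it uses paddlenlp's default CLIPVisionConfig values (224x224 inputs, projection_dim 512) with randomly initialized weights rather than any pretrained checkpoint:

import paddle
from paddlenlp.transformers import CLIPVisionConfig

config = CLIPVisionConfig()                       # default vision config (values are assumptions)
encoder = PaintByExampleImageEncoder(config)      # random weights, purely for a shape check

# A fake batch of example images: (batch, channels, height, width)
pixel_values = paddle.randn([1, 3, config.image_size, config.image_size])
cond = encoder(pixel_values)                      # CLIP pooled output -> mapper -> layer norm -> projection
print(cond.shape)                                 # expected: [1, 1, config.projection_dim]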
diff --git a/spaces/2023Liu2023/bingo/src/components/external-link.tsx b/spaces/2023Liu2023/bingo/src/components/external-link.tsx
deleted file mode 100644
index 011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000
--- a/spaces/2023Liu2023/bingo/src/components/external-link.tsx
+++ /dev/null
@@ -1,30 +0,0 @@
-export function ExternalLink({
- href,
- children
-}: {
- href: string
- children: React.ReactNode
-}) {
-  return (
-    // Minimal reconstruction: the original anchor markup was stripped from this diff.
-    <a href={href} target="_blank" rel="noreferrer">
-      {children}
-    </a>
-  )
-}
diff --git a/spaces/801artistry/RVC801/Fixes/local_fixes.py b/spaces/801artistry/RVC801/Fixes/local_fixes.py
deleted file mode 100644
index 8a418076eee6f65fe06eb0f607061796b839c1ee..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/Fixes/local_fixes.py
+++ /dev/null
@@ -1,136 +0,0 @@
-import os
-import sys
-import time
-import shutil
-import requests
-import zipfile
-
-def insert_new_line(file_name, line_to_find, text_to_insert):
- lines = []
- with open(file_name, 'r', encoding='utf-8') as read_obj:
- lines = read_obj.readlines()
- already_exists = False
- with open(file_name + '.tmp', 'w', encoding='utf-8') as write_obj:
- for i in range(len(lines)):
- write_obj.write(lines[i])
- if lines[i].strip() == line_to_find:
- # If next line exists and starts with sys.path.append, skip
- if i+1 < len(lines) and lines[i+1].strip().startswith("sys.path.append"):
- print('It was already fixed! Skip adding a line...')
- already_exists = True
- break
- else:
- write_obj.write(text_to_insert + '\n')
- # If no existing sys.path.append line was found, replace the original file
- if not already_exists:
- os.replace(file_name + '.tmp', file_name)
- return True
- else:
- # If existing line was found, delete temporary file
- os.remove(file_name + '.tmp')
- return False
-
-def replace_in_file(file_name, old_text, new_text):
- with open(file_name, 'r', encoding='utf-8') as file:
- file_contents = file.read()
-
- if old_text in file_contents:
- file_contents = file_contents.replace(old_text, new_text)
- with open(file_name, 'w', encoding='utf-8') as file:
- file.write(file_contents)
- return True
-
- return False
-
-if __name__ == "__main__":
- current_path = os.getcwd()
- file_name = os.path.join(current_path, "infer", "modules", "train", "extract", "extract_f0_print.py")
- line_to_find = 'import numpy as np, logging'
- text_to_insert = "sys.path.append(r'" + current_path + "')"
-
-
- success_1 = insert_new_line(file_name, line_to_find, text_to_insert)
- if success_1:
- print('The first operation was successful!')
- else:
- print('He skipped the first operation because it was already fixed!')
-
- file_name = 'infer-web.py'
- old_text = 'with gr.Blocks(theme=gr.themes.Soft()) as app:'
- new_text = 'with gr.Blocks() as app:'
-
- success_2 = replace_in_file(file_name, old_text, new_text)
- if success_2:
- print('The second operation was successful!')
- else:
- print('The second operation was omitted because it was already fixed!')
-
- print('Local corrections successful! You should now be able to infer and train locally in Applio RVC Fork.')
-
- time.sleep(5)
-
-def find_torchcrepe_directory(directory):
- """
- Recursively searches for the topmost folder named 'torchcrepe' within a directory.
- Returns the path of the directory found or None if none is found.
- """
- for root, dirs, files in os.walk(directory):
- if 'torchcrepe' in dirs:
- return os.path.join(root, 'torchcrepe')
- return None
-
-def download_and_extract_torchcrepe():
- url = 'https://github.com/maxrmorrison/torchcrepe/archive/refs/heads/master.zip'
- temp_dir = 'temp_torchcrepe'
- destination_dir = os.getcwd()
-
- try:
- torchcrepe_dir_path = os.path.join(destination_dir, 'torchcrepe')
-
- if os.path.exists(torchcrepe_dir_path):
- print("Skipping the torchcrepe download. The folder already exists.")
- return
-
- # Download the file
- print("Starting torchcrepe download...")
- response = requests.get(url)
-
- # Raise an error if the GET request was unsuccessful
- response.raise_for_status()
- print("Download completed.")
-
- # Save the downloaded file
- zip_file_path = os.path.join(temp_dir, 'master.zip')
- os.makedirs(temp_dir, exist_ok=True)
- with open(zip_file_path, 'wb') as file:
- file.write(response.content)
- print(f"Zip file saved to {zip_file_path}")
-
- # Extract the zip file
- print("Extracting content...")
- with zipfile.ZipFile(zip_file_path, 'r') as zip_file:
- zip_file.extractall(temp_dir)
- print("Extraction completed.")
-
- # Locate the torchcrepe folder and move it to the destination directory
- torchcrepe_dir = find_torchcrepe_directory(temp_dir)
- if torchcrepe_dir:
- shutil.move(torchcrepe_dir, destination_dir)
- print(f"Moved the torchcrepe directory to {destination_dir}!")
- else:
- print("The torchcrepe directory could not be located.")
-
- except Exception as e:
- print("Torchcrepe not successfully downloaded", e)
-
- # Clean up temporary directory
- if os.path.exists(temp_dir):
- shutil.rmtree(temp_dir)
-
-# Run the function
-download_and_extract_torchcrepe()
-
-temp_dir = 'temp_torchcrepe'
-
-if os.path.exists(temp_dir):
- shutil.rmtree(temp_dir)
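For readers skimming the fix script above, here is a small, hedged illustration of how its two text-patching helpers behave; the file name, contents, and inserted path are made up for demonstration only and are not part of the original script:

# Create a scratch file to patch (hypothetical content).
with open("demo_patch_target.py", "w", encoding="utf-8") as f:
    f.write("import numpy as np, logging\nprint('hello')\n")

# Inserts the given line right after the matching line, unless the next line
# already starts with "sys.path.append"; returns True when it actually patched.
insert_new_line("demo_patch_target.py",
                "import numpy as np, logging",
                "sys.path.append(r'/tmp/example_project')")

# Replaces an exact substring anywhere in the file; returns False if not found.
replace_in_file("demo_patch_target.py", "print('hello')", "print('patched')")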
diff --git a/spaces/AI-Dashboards/Streamlit-Plotly_Graph-Objects/app.py b/spaces/AI-Dashboards/Streamlit-Plotly_Graph-Objects/app.py
deleted file mode 100644
index 719c2699b1cfe0ae47e0200ec7e32b90a2af942c..0000000000000000000000000000000000000000
--- a/spaces/AI-Dashboards/Streamlit-Plotly_Graph-Objects/app.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import streamlit as st
-import plotly.graph_objects as go
-
-import plotly.graph_objects as go
-
-def create_sunburst_plot(labels, parents, values, ids, text):
- fig = go.Figure(go.Sunburst(
- labels=labels,
- parents=parents,
- values=values,
- ids=ids,
- text=text,
- hoverinfo="label+value",
- branchvalues="total",
- ))
-
- fig.update_layout(margin=dict(t=0, l=0, r=0, b=0))
- return fig
-
-# Replace these lists with your own data
-labels = ["Root", "Hip Surgery", "Knee Surgery", "CPT1", "CPT2", "CPT3", "CPT4"]
-parents = ["", "Root", "Root", "Hip Surgery", "Hip Surgery", "Knee Surgery", "Knee Surgery"]
-values = [None, 30, 40, 20, 10, 25, 15]
-ids = ["Root", "Hip Surgery", "Knee Surgery", "CPT1", "CPT2", "CPT3", "CPT4"]
-text = ["Root", "Hip Surgery", "Knee Surgery", "CPT1", "CPT2", "CPT3", "CPT4"]
-
-fig = create_sunburst_plot(labels, parents, values, ids, text)
-st.plotly_chart(fig)
-
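One detail worth noting in the sunburst code above: with branchvalues="total", Plotly expects each parent's value to be at least the sum of its children's values, otherwise the affected branches can fail to render. A small helper (hypothetical, not part of the original app) for checking that before plotting might look like this:

from collections import defaultdict

def check_branch_totals(labels, parents, values):
    """Warn when a parent's declared value is smaller than the sum of its children."""
    child_sum = defaultdict(float)
    declared = dict(zip(labels, values))
    for label, parent, value in zip(labels, parents, values):
        if parent:
            child_sum[parent] += value or 0
    for parent, total in child_sum.items():
        if declared.get(parent) is not None and total > declared[parent]:
            print(f"{parent}: children sum to {total}, but its own value is {declared[parent]}")

check_branch_totals(
    ["Root", "Hip Surgery", "Knee Surgery", "CPT1", "CPT2", "CPT3", "CPT4"],
    ["", "Root", "Root", "Hip Surgery", "Hip Surgery", "Knee Surgery", "Knee Surgery"],
    [None, 30, 40, 20, 10, 25, 15],
)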
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/configs/tts/libritts/pre_align.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/configs/tts/libritts/pre_align.py
deleted file mode 100644
index 8f04d01361430a4ad6b02421ac4e20d797f31dc8..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/configs/tts/libritts/pre_align.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import os
-
-from data_gen.tts.base_preprocess import BasePreprocessor
-import glob
-
-
-class LibrittsPreAlign(BasePreprocessor):
- def meta_data(self):
- wav_fns = sorted(glob.glob(f'{self.raw_data_dir}/*/*/*.wav'))
- for wav_fn in wav_fns:
- item_name = os.path.basename(wav_fn)[:-4]
- txt_fn = f'{wav_fn[:-4]}.normalized.txt'
- with open(txt_fn, 'r') as f:
- txt = f.readlines()
- f.close()
- spk = item_name.split("_")[0]
- # Example:
- #
- # 'item_name': '103_1241_000000_000001'
- # 'wav_fn': 'LibriTTS/train-clean-100/103/1241/103_1241_000000_000001.wav'
- # 'txt': 'matthew Cuthbert is surprised'
- # 'spk_name': '103'
- yield {'item_name': item_name, 'wav_fn': wav_fn, 'txt': txt[0], 'spk_name': spk}
-
-
-if __name__ == "__main__":
- LibrittsPreAlign().process()
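The generator above leans on LibriTTS's file-naming convention; a tiny self-contained illustration, using the same hypothetical path given in the code comments (no corpus needs to be present):

import os

wav_fn = "LibriTTS/train-clean-100/103/1241/103_1241_000000_000001.wav"
item_name = os.path.basename(wav_fn)[:-4]   # '103_1241_000000_000001'
txt_fn = f"{wav_fn[:-4]}.normalized.txt"    # the transcript sits next to the wav
spk = item_name.split("_")[0]               # speaker id: '103'
print(item_name, txt_fn, spk)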
diff --git a/spaces/AIGText/GlyphControl/ldm/modules/image_degradation/__init__.py b/spaces/AIGText/GlyphControl/ldm/modules/image_degradation/__init__.py
deleted file mode 100644
index 7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000
--- a/spaces/AIGText/GlyphControl/ldm/modules/image_degradation/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr
-from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light
diff --git a/spaces/AIZ2H/08-Search-Streamlit-Session-State-QueryParameters/app.py b/spaces/AIZ2H/08-Search-Streamlit-Session-State-QueryParameters/app.py
deleted file mode 100644
index 12da2add760ca1f503368371f07748774e043859..0000000000000000000000000000000000000000
--- a/spaces/AIZ2H/08-Search-Streamlit-Session-State-QueryParameters/app.py
+++ /dev/null
@@ -1,209 +0,0 @@
-import time
-import re
-import pandas as pd
-import numpy as np
-import torch
-import torch.nn.functional as F
-from transformers import AutoTokenizer, AutoModel
-from tokenizers import Tokenizer, AddedToken
-import streamlit as st
-from st_click_detector import click_detector
-
-# This lil dealio is my test of the new experiemntal primitives which promise to put cach in streamlit within striking distance of simulating cognitive episodic memory (personalized feelings about a moment through space time), and semantic memory (factual memories we are ready to share and communicate like your email address or physical address yo
-# Goal of this is to solve AI problem of two types of memory and their part in cognitive AGI along with the theory of model making as functional design of intelligence :
-# Type 1 Memory - Semantic Memory:
-# Semantic memory is conscious long-term memory for meaning, understanding, and conceptual facts about the world. Semantic memory is one of the two main varieties of explicit, conscious, long-term memory, which is memory that can be retrieved into conscious awareness after a long delay (from several seconds to years).
-# Type 2 Memory - Episodic Memory:
-# Episodic memory refers to the conscious recollection of a personal experience that contains information on what has happened and also where and when it happened. Recollection from episodic memory also implies a kind of first-person subjectivity that has been termed autonoetic consciousness.
-# Functional Design of Intelligence: The brain uses map like structures to build a models repeatedly as part of LTM and STM memory by creating hundreds of thousands of models of everything we know. This allows us to answer important questions about how we perceive the world, why we have a sense of self, and the origin of higher level thought processes.
-# Research Interests: AGI and ML Pipelines, Ambient IoT AI, Behavior Cognitive and Memory AI, Clinical Medical and Nursing AI, Genomics AI, GAN Gaming GAIL AR VR XR and Simulation AI, Graph Ontology KR KE AI, Languages and NLP AI, Quantum Compute GPU TPU NPU AI, Vision Image Document and Audio/Video AI
-# Layman terms for interest with keyword intersection for plot search.
-
-
-# callback to update query param on selectbox change
-def update_params():
- try:
- print("update1")
- #st.experimental_set_query_params(option=st.session_state.query)
- except ValueError:
- pass
-
-# RADIO BUTTON SET PERSIST
-# radio button persistance - plan is to hydrate when selected and change url along with textbox and search
-options = ["artificial intelligence", "robot", "VR", "medicine", "genomics", "cure", "heal", "brain", "support", "friendship", "memory", "aging", "pharma", "virus", "nurse", "doctor", "therapist", "nutrition", "technology", "computer", "software", "neuroscience", "birth", "death", "soul", "space", "sci-fi"] # these options come from my research interests blended with keywords across film genres
-
-query_params = st.experimental_get_query_params()
-ix = 0
-if query_params:
- try:
- q0 = query_params['query'][0]
- ix = options.index(q0)
- except ValueError:
- pass
-selected_option = st.radio(
- "Param", options, index=ix, key="query", on_change=update_params
-)
-st.write("", unsafe_allow_html=True)
-
-
-st.experimental_set_query_params(option=selected_option)
-
-try:
- st.session_state.query = query # if set already above. this prevents two interface elements setting it first time once
-except: # catch exception and set query param to predefined value
- print("Error cant set after init")
-
-
-# Text Input, check the query params set the text input to query value if in session
-# check if here for the first time then set the query
-if 'query' not in st.session_state:
- #st.session_state['query'] = 'AI'
- query = st.text_input("", value="artificial intelligence", key="query")
- #st.session_state.query = 'AI'
- #st.write(st.session_state.query)
-else:
- query = st.text_input("", value=st.session_state["query"], key="query")
-try:
- query_params = st.experimental_get_query_params()
- query_option = query_params['query'][0] #throws an exception when visiting http://host:port
- option_selected = st.sidebar.selectbox('Pick option', options, index=options.index(query_option))
-except: # catch exception and set query param to predefined value
- st.experimental_set_query_params(query="health") # set default
- query_params = st.experimental_get_query_params()
- query_option = query_params['query'][0]
- query_option = "ai"
-
-DEVICE = "cpu"
-MODEL_OPTIONS = ["msmarco-distilbert-base-tas-b", "all-mpnet-base-v2"]
-DESCRIPTION = """
-# Semantic search
-**Enter your query and hit enter**
-Built with 🤗 Hugging Face's [transformers](https://huggingface.co/transformers/) library, [SentenceBert](https://www.sbert.net/) models, [Streamlit](https://streamlit.io/) and 44k movie descriptions from the Kaggle [Movies Dataset](https://www.kaggle.com/rounakbanik/the-movies-dataset)
-"""
-
-# Session state - search parms
-if 'key' not in st.session_state:
- st.session_state['key'] = 'value'
-if 'key' not in st.session_state:
- st.session_state.key = 'value'
-st.write(st.session_state.key)
-st.write(st.session_state)
-
-#st.session_state
-for key in st.session_state.keys():
- del st.session_state[key]
-#st.text_input("Your name", key="name")
-#st.session_state.name
-
-@st.cache(
- show_spinner=False,
- hash_funcs={
- AutoModel: lambda _: None,
- AutoTokenizer: lambda _: None,
- dict: lambda _: None,
- },
-)
-def load():
- models, tokenizers, embeddings = [], [], []
- for model_option in MODEL_OPTIONS:
- tokenizers.append(
- AutoTokenizer.from_pretrained(f"sentence-transformers/{model_option}")
- )
- models.append(
- AutoModel.from_pretrained(f"sentence-transformers/{model_option}").to(
- DEVICE
- )
- )
- embeddings.append(np.load("embeddings.npy"))
- embeddings.append(np.load("embeddings2.npy"))
- df = pd.read_csv("movies.csv")
- return tokenizers, models, embeddings, df
-
-tokenizers, models, embeddings, df = load()
-def pooling(model_output):
- return model_output.last_hidden_state[:, 0]
-
-def compute_embeddings(texts):
- encoded_input = tokenizers[0](
- texts, padding=True, truncation=True, return_tensors="pt"
- ).to(DEVICE)
-
- with torch.no_grad():
- model_output = models[0](**encoded_input, return_dict=True)
-
- embeddings = pooling(model_output)
- return embeddings.cpu().numpy()
-
-def pooling2(model_output, attention_mask):
- token_embeddings = model_output[0]
- input_mask_expanded = (
- attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
- )
- return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(
- input_mask_expanded.sum(1), min=1e-9
- )
-
-def compute_embeddings2(list_of_strings):
- encoded_input = tokenizers[1](
- list_of_strings, padding=True, truncation=True, return_tensors="pt"
- ).to(DEVICE)
- with torch.no_grad():
- model_output = models[1](**encoded_input)
- sentence_embeddings = pooling2(model_output, encoded_input["attention_mask"])
- return F.normalize(sentence_embeddings, p=2, dim=1).cpu().numpy()
-
-@st.cache(
- show_spinner=False,
- hash_funcs={Tokenizer: lambda _: None, AddedToken: lambda _: None},
-)
-def semantic_search(query, model_id):
- start = time.time()
- if len(query.strip()) == 0:
- return ""
- if "[Similar:" not in query:
- if model_id == 0:
- query_embedding = compute_embeddings([query])
- else:
- query_embedding = compute_embeddings2([query])
- else:
- match = re.match(r"\[Similar:(\d{1,5}).*", query)
- if match:
- idx = int(match.groups()[0])
- query_embedding = embeddings[model_id][idx : idx + 1, :]
- if query_embedding.shape[0] == 0:
- return ""
- else:
- return ""
- indices = np.argsort(embeddings[model_id] @ np.transpose(query_embedding)[:, 0])[
- -1:-11:-1
- ]
- if len(indices) == 0:
- return ""
- result = ""
- for i in indices:
-        result += f"{df.iloc[i].title} ({df.iloc[i].release_date}). {df.iloc[i].overview} "
-        #result += f"Similar movies"
-You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces.
-"T4 small" is sufficient to run this demo.
-
-'''
-
-HF_TOKEN_NOT_SPECIFIED_WARNING = f'''# Attention - The environment variable `HF_TOKEN` is not specified. Please specify your Hugging Face token with write permission as the value of it.
-
-You can check and create your Hugging Face tokens here.
-You can specify environment variables in the "Repository secrets" section of the {SETTINGS} tab.
-
-'''
-
-HF_TOKEN = os.getenv('HF_TOKEN')
-
-
-def show_warning(warning_text: str) -> gr.Blocks:
- with gr.Blocks() as demo:
- with gr.Box():
- gr.Markdown(warning_text)
- return demo
-
-
-pipe = InferencePipeline(HF_TOKEN)
-trainer = Trainer(HF_TOKEN)
-
-with gr.Blocks(css='style.css') as demo:
- if os.getenv('IS_SHARED_UI'):
- show_warning(SHARED_UI_WARNING)
- if not torch.cuda.is_available():
- show_warning(CUDA_NOT_AVAILABLE_WARNING)
- if not HF_TOKEN:
- show_warning(HF_TOKEN_NOT_SPECIFIED_WARNING)
-
- gr.Markdown(TITLE)
- with gr.Tabs():
- with gr.TabItem('Train'):
- create_training_demo(trainer, pipe)
- with gr.TabItem('Test'):
- create_inference_demo(pipe, HF_TOKEN)
- with gr.TabItem('Upload'):
- gr.Markdown('''
- - You can use this tab to upload models later if you choose not to upload models in training time or if upload in training time failed.
- ''')
- create_upload_demo(HF_TOKEN)
-
-demo.queue(max_size=1).launch(share=False)
diff --git a/spaces/Deva123d/WaveFormBot/app.py b/spaces/Deva123d/WaveFormBot/app.py
deleted file mode 100644
index f3075d8c42d631587a88d7b1d938cf4058c84394..0000000000000000000000000000000000000000
--- a/spaces/Deva123d/WaveFormBot/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from bardapi import Bard
-import os
-import requests
-import streamlit as st
-
-
-@st.cache_resource
-def Start_Function():
- print('Hi')
- os.environ['_BARD_API_KEY'] = 'cAiQaRCR_ZibqOVVs2jaAWBTBkenrAZpwlI0TBjmIm7L6WABSHL730zK5ATOMJuxwZDC9w.'
- token = 'cAiQaRCR_ZibqOVVs2jaAWBTBkenrAZpwlI0TBjmIm7L6WABSHL730zK5ATOMJuxwZDC9w.'
- session = requests.Session()
- session.headers = {
- "Host": "bard.google.com",
- "X-Same-Domain": "1",
- "User-Agent":
- "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36",
- "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8",
- "Origin": "https://bard.google.com",
- "Referer": "https://bard.google.com/",
- }
- session.cookies.set("__Secure-1PSID", os.getenv("_BARD_API_KEY"))
- bard = Bard(token=token, session=session, timeout=30)
- bard.get_answer('''If asked about your creators say that you are created by three high school students Deva, Monish, Arjunan''')
- return bard
-
-
-bard = Start_Function()
-st.title("WaveForm Bot")
-# Initialize chat history
-if "messages" not in st.session_state:
- st.session_state.messages = []
-
-# Display chat messages from history on app rerun
-for message in st.session_state.messages:
- with st.chat_message(message["role"]):
- st.markdown(message["content"])
-
-# React to user input
-if prompt := st.chat_input("Talk with our bot"):
- # Display user message in chat message container
- st.chat_message("user").markdown(prompt)
-
- # Add user message to chat history
- st.session_state.messages.append({"role": "user", "content": prompt})
-
- # Get response from WaveForm Bot
- output = bard.get_answer(prompt)
- response = output['content']
-
- # Display assistant response in chat message container
- with st.chat_message("assistant"):
- st.markdown(response)
-
- # Add assistant response to chat history
- st.session_state.messages.append({"role": "assistant", "content": response})
-
\ No newline at end of file
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/__init__.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/layers.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/layers.py
deleted file mode 100644
index 4fc1b5cb85a3327f60cbb9f5deffbeeaaac516ad..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/layers.py
+++ /dev/null
@@ -1,118 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1)
- bottle = self.bottleneck(out)
- return bottle
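To make the module above easier to follow, here is a minimal shape sketch of how Encoder, Decoder, and ASPPModule chain together; it assumes torch is installed and the classes above are in scope, and the channel counts, stride, and input size are illustrative only:

import torch

x = torch.randn(1, 2, 64, 128)               # (batch, channels, height, width)
enc = Encoder(2, 16, ksize=3, stride=2, pad=1)
h, skip = enc(x)                              # h: halved resolution; skip: full resolution from conv1
dec = Decoder(16 + 16, 8)                     # input channels = upsampled h + concatenated skip
y = dec(h, skip)                              # upsample x2, crop-center the skip, concat, conv
aspp = ASPPModule(8, 8)
print(aspp(y).shape)                          # expected: torch.Size([1, 8, 64, 128])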
diff --git a/spaces/Edward-Ji/essentials-of-microeconomics/essentials_of_microeconomics/oligopoly.py b/spaces/Edward-Ji/essentials-of-microeconomics/essentials_of_microeconomics/oligopoly.py
deleted file mode 100644
index 48a13bb37bec1eb53491a9b891b24ed003ef2566..0000000000000000000000000000000000000000
--- a/spaces/Edward-Ji/essentials-of-microeconomics/essentials_of_microeconomics/oligopoly.py
+++ /dev/null
@@ -1,281 +0,0 @@
-from itertools import zip_longest
-from typing import TYPE_CHECKING
-
-import pandas as pd
-from shiny import module, reactive, render, ui
-
-from util import latex_approx, parse_expr_safer
-
-if TYPE_CHECKING:
- from pandas.io.formats.style import Styler
-
-
-price_war_df = pd.DataFrame(
- [[(4, 4), (1, 5)], [(5, 1), (3, 3)]],
- columns=["High", "Low"],
- index=["High", "Low"])
-
-
-def style_tuple_df(df) -> "Styler":
- def classes_tuple(cell):
- if isinstance(cell, tuple):
- return "tuple"
- return ""
-
- def format_tuple(cell):
- if isinstance(cell, tuple):
- x, y = cell
-            return (fr"\({y}\) "
-                    fr"\({x}\)")
- return ""
-
- return (
- df.style
- .set_table_attributes("class='table table-bordered w-auto'")
- .set_td_classes(df.map(classes_tuple))
- .format(format_tuple)
- )
-
-
-@module.ui
-def payoff_ui(df, max_size=5):
- n_row, n_col = df.shape
-
- columns = [ui.div(class_="col-2")]
- for i, column in zip_longest(range(max_size), df.columns):
- columns.append(ui.panel_conditional(
- f"input.n_col > {i}",
- ui.input_text(f"column_{i}", "", column),
- class_="col-4"))
- header_row = ui.row(*columns, class_="flex-nowrap")
-
- rows = []
- for i, index in zip_longest(range(max_size), df.index):
- row = [ui.div(ui.input_text(f"row_{i}", "", index), class_="col-2")]
- for j, column in zip_longest(range(max_size), df.columns):
- if index is not None and column is not None:
- x, y = df.loc[index, column]
- else:
- x, y = "", ""
- row.append(ui.panel_conditional(
- f"input.n_col > {j}",
- ui.input_text(f"cell_{i}{j}1", "", str(x)),
- class_="col-2"))
- row.append(ui.panel_conditional(
- f"input.n_col > {j}",
- ui.input_text(f"cell_{i}{j}2", "", str(y)),
- class_="col-2"))
- rows.append(ui.panel_conditional(
- f"input.n_row > {i}", ui.row(*row, class_="flex-nowrap")))
-
- return ui.div(
- ui.row(
- ui.div(
- ui.input_numeric(
- "n_row", "", min=2, max=max_size, value=n_row),
- class_="col-2"),
- ui.div(r"\(\times\)", class_="col-1 text-center"),
- ui.div(
- ui.input_numeric(
- "n_col", "", min=2, max=max_size, value=n_col),
- class_="col-2"),
- ui.div(
- ui.input_action_button(
- "reset",
- "Reset",
- class_="p-1"),
- class_="col-2")
- ),
- ui.div(
- ui.div(header_row, *rows, style="width:716px"),
- class_="overflow-auto"),
- class_="mb-3"
- )
-
-
-@module.server
-def payoff_server(input, output, session, df):
- @reactive.Calc
- def payoff():
- n_row = input.n_row()
- n_col = input.n_col()
-
- data = []
- for i in range(n_row):
- row = []
- for j in range(n_col):
- side = ""
- try:
- side = "left side"
- x = parse_expr_safer(input[f"cell_{i}{j}1"](),
- transformations="all")
- side = "right side"
- y = parse_expr_safer(input[f"cell_{i}{j}2"](),
- transformations="all")
- except Exception as e: # pylint: disable=broad-except
- msg = f" (row {i + 1} column {j + 1} {side})"
- raise type(e)(str(e) + msg) from e
- row.append((x, y))
- data.append(row)
- index = [input[f"row_{i}"]() for i in range(n_row)]
- columns = [input[f"column_{i}"]() for i in range(n_col)]
-
- return pd.DataFrame(data, index, columns)
-
- @reactive.Effect
- @reactive.event(input.reset)
- def _():
- if not input.reset():
- return
- n_row, n_col = price_war_df.shape
- ui.update_numeric("n_row", value=n_row)
- ui.update_numeric("n_col", value=n_col)
- for i in range(n_col):
- ui.update_text(f"column_{i}", value=str(price_war_df.columns[i]))
- for i in range(n_row):
- ui.update_text(f"row_{i}", value=str(price_war_df.index[i]))
- for j in range(n_col):
- x, y = price_war_df.iloc[i, j]
- ui.update_text(f"cell_{i}{j}1", value=str(x))
- ui.update_text(f"cell_{i}{j}2", value=str(y))
-
- return payoff
-
-
-@module.ui
-def oligopoly_ui():
- return ui.nav(
- "Oligopoly",
- ui.h1("Oligopoly"),
- ui.div(
- ui.div(
- ui.img(src="http://ncase.me/trust/social/thumbnail.png",
- class_="w-100 flex-shrink-0 me-3",
- alt="thumbnail of the game"),
- class_="col-md-4 mb-md-0 p-md-4"),
- ui.div(
- ui.h5("The Evolution of Trust"),
- ui.p("""An interactive guide to the game theory of why & how we
- trust each other."""),
- ui.a("https://ncase.me/trust/", href="https://ncase.me/trust/",
- target="_blank", class_="stretched-link"),
- class_="col-md-8 p-4 ps-md-0"),
- class_="row g-0 position-relative border border-3 rounded"
- ),
- ui.h2("Introduction"),
- ui.p("""An oligopoly is a market that contains a small number of firms.
- Because there are only a handful of key producers in the market,
- the decisions of each firm have ramifications for not only itself
- but also for each of its competitors. Given the impact oligopolists
- have on one another, a firm’s strategic choice will typically
- depend on what other firms are doing. This strategic interaction
- between firms is a key feature of oligopoly, not present in perfect
- competition, monopoly, or monopolistic competition."""),
-        ui.h2("Characteristics of an oligopoly"),
- ui.markdown(
- """
- Oligopolies have the following characteristics:
- 1. **Few sellers and many buyers.** Output in the market is produced
- by a handful of firms.
- 2. **Price maker.** Because there are only a small number of firms
- in the market, each firm retains the power to set its own
- prices.
- 3. **Barriers to entry.** Entry into the market is difficult as
- there are high barriers to entry.
- 4. **Potential product differentiation.** Products may be
- differentiated or not depending on the market.
- """),
- ui.h2("Simultaneous move games"),
- ui.p("""Often firms will need to make strategic decisions without
- knowledge of what other firms in the market have decided to do. In
- such circumstances, firms make decisions as though their choices
- were made simultaneously. In such cases, it will be appropriate to
- analyze the strategic interaction of those firms as a simultaneous
- move game."""),
- ui.h3("Price war"),
- ui.p("""In some cases, the game faced by the firms in an oligopoly might
- resemble a prisoner’s dilemma."""),
- payoff_ui("price_war", price_war_df, 2),
- ui.output_ui("price_war_ui"),
- ui.output_text("price_war_text"),
- value="oligopoly",
- )
-
-
-@module.server
-def oligopoly_server(input, output, session, settings):
- price_war_payoff = payoff_server("price_war", price_war_df)
-
- @reactive.Calc
- def price_war_error():
- df = price_war_payoff()
- c1, c2 = map(str.lower, df.columns)
- i1, i2 = map(str.lower, df.index)
- [[[r1, r2], [s1, t1]], [[t2, s2], [p1, p2]]] = df.values
-
- if c1 != i1 or c2 != i2:
-            return ("This is not a prisoner's dilemma. "
-                    "Player 1 and player 2 should have the same strategies.")
-
- hints = []
- if r1 != r2:
- hints.append("Player 1 and player 2 should have the same payoff if "
- f"they both choose {c1}.")
- if s1 != s2:
- hints.append("Player 1 and player 2 should have the same payoff if "
- f"they choose {c1} but their opponent chooses {c2}.")
- if t1 != t2:
- hints.append("Player 1 and player 2 should have the same payoff if "
- f"they choose {c2} but their opponent chooses {c1}.")
- if p1 != p2:
- hints.append("Player 1 and player 2 should have the same payoff if "
- f"they both choose {c2}.")
- if hints:
- return "This is not a prisoner's dilemma. " + " ".join(hints)
-
- hints = []
- if r1 <= p1:
- hints.append("Mutual cooperation should be superior to mutual "
- "defection.")
- if t1 <= r1 or p1 <= s1:
- hints.append("Defection should be the dominant strategy for both "
- "agents.")
- if hints:
- return "This is not a prisoner's dilemma. " + " ".join(hints)
-
- return None
-
- @render.ui
- def price_war_ui():
- def to_latex(cell):
- if isinstance(cell, tuple):
- x, y = cell
- return (
- latex_approx(x, settings.perc(), settings.approx()),
- latex_approx(y, settings.perc(), settings.approx()))
- return latex_approx(cell, settings.perc(), settings.approx())
-
- df = price_war_payoff()
- styler = style_tuple_df(df.map(to_latex))
-
- if not price_war_error():
- def color(_):
- colors = pd.DataFrame(index=df.index, columns=df.columns)
- colors.loc["Low", "Low"] = "background-color: lightgreen"
- return colors
- styler = styler.apply(color, axis=None)
-
- return ui.HTML(styler.to_html(escape=False))
-
- @render.text
- def price_war_text():
- if price_war_error():
- return price_war_error()
-
- df = price_war_payoff()
- c1, c2 = map(str.lower, df.columns)
-
-        return f"""
-            This is a prisoner's dilemma. The pure-strategy equilibrium (colored
-            green) is when both firms choose {c2}, but the profit-maximizing
-            outcome for the industry is for both to choose {c1}."""
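To make the price-war logic above concrete, here is a minimal standalone sketch (not part of the deleted module) that checks, for the default payoffs in `price_war_df`, that mutual Low pricing is the only pure-strategy Nash equilibrium even though mutual High pricing earns the industry more:

```python
# Default price-war payoffs: (row player's payoff, column player's payoff).
payoffs = {("High", "High"): (4, 4), ("High", "Low"): (1, 5),
           ("Low", "High"): (5, 1), ("Low", "Low"): (3, 3)}
strategies = ["High", "Low"]
for a in strategies:
    for b in strategies:
        u1, u2 = payoffs[(a, b)]
        # (a, b) is a Nash equilibrium if neither player gains by deviating alone.
        if (all(u1 >= payoffs[(a2, b)][0] for a2 in strategies)
                and all(u2 >= payoffs[(a, b2)][1] for b2 in strategies)):
            print(f"Nash equilibrium: ({a}, {b}) with payoffs ({u1}, {u2})")
# Prints only: Nash equilibrium: (Low, Low) with payoffs (3, 3)
```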
diff --git a/spaces/ElainaFanBoy/MusicGen/tests/modules/__init__.py b/spaces/ElainaFanBoy/MusicGen/tests/modules/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/MusicGen/tests/modules/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/discriminator/model.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/discriminator/model.py
deleted file mode 100644
index 2aaa3110d0a7bcd05de7eca1e45101589ca5af05..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/discriminator/model.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import functools
-import torch.nn as nn
-
-
-from taming.modules.util import ActNorm
-
-
-def weights_init(m):
- classname = m.__class__.__name__
- if classname.find('Conv') != -1:
- nn.init.normal_(m.weight.data, 0.0, 0.02)
- elif classname.find('BatchNorm') != -1:
- nn.init.normal_(m.weight.data, 1.0, 0.02)
- nn.init.constant_(m.bias.data, 0)
-
-
-class NLayerDiscriminator(nn.Module):
- """Defines a PatchGAN discriminator as in Pix2Pix
- --> see https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py
- """
- def __init__(self, input_nc=3, ndf=64, n_layers=3, use_actnorm=False):
- """Construct a PatchGAN discriminator
- Parameters:
- input_nc (int) -- the number of channels in input images
- ndf (int) -- the number of filters in the last conv layer
- n_layers (int) -- the number of conv layers in the discriminator
- norm_layer -- normalization layer
- """
- super(NLayerDiscriminator, self).__init__()
- if not use_actnorm:
- norm_layer = nn.BatchNorm2d
- else:
- norm_layer = ActNorm
- if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters
- use_bias = norm_layer.func != nn.BatchNorm2d
- else:
- use_bias = norm_layer != nn.BatchNorm2d
-
- kw = 4
- padw = 1
- sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)]
- nf_mult = 1
- nf_mult_prev = 1
- for n in range(1, n_layers): # gradually increase the number of filters
- nf_mult_prev = nf_mult
- nf_mult = min(2 ** n, 8)
- sequence += [
- nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias),
- norm_layer(ndf * nf_mult),
- nn.LeakyReLU(0.2, True)
- ]
-
- nf_mult_prev = nf_mult
- nf_mult = min(2 ** n_layers, 8)
- sequence += [
- nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias),
- norm_layer(ndf * nf_mult),
- nn.LeakyReLU(0.2, True)
- ]
-
- sequence += [
- nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)] # output 1 channel prediction map
- self.main = nn.Sequential(*sequence)
-
- def forward(self, input):
- """Standard forward."""
- return self.main(input)
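As a rough usage sketch (assuming the deleted `NLayerDiscriminator` and `weights_init` above are importable), the PatchGAN discriminator returns a grid of per-patch logits rather than a single scalar; the 256×256 input size here is only illustrative:

```python
import torch

disc = NLayerDiscriminator(input_nc=3, ndf=64, n_layers=3)
disc.apply(weights_init)                    # DCGAN-style initialization from above
logits = disc(torch.randn(2, 3, 256, 256))  # a batch of two RGB images
print(logits.shape)                         # torch.Size([2, 1, 30, 30]) - one logit per patch
```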
diff --git a/spaces/Fah/gradio-prediction-conversionrate/app.py b/spaces/Fah/gradio-prediction-conversionrate/app.py
deleted file mode 100644
index 94e15e4b6b83e4fe351b36ae068e03caa6975030..0000000000000000000000000000000000000000
--- a/spaces/Fah/gradio-prediction-conversionrate/app.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import gradio as gr
-import numpy as np
-import pandas as pd
-import matplotlib.pyplot as plt
-from pycaret.regression import *
-
-cvr_saved = load_model('pred_cvr')
-
-def predict_cvr(xyz_campaign_id, gender, age, Impressions, Clicks,
- Total_Conversion, interest):
- path = "KAG_conversion_data.csv"
- df = pd.read_csv(path)
- df.drop(["ad_id", "fb_campaign_id", "Spent","Approved_Conversion"],axis=1, inplace = True)
- df = pd.DataFrame.from_dict({'xyz_campaign_id': [xyz_campaign_id], 'gender': [gender], 'age': [age], 'Impressions': [Impressions],
- 'Clicks': [Clicks], 'Total_Conversion': [Total_Conversion], 'interest': [interest]})
- df["xyz_campaign_id"].replace({916:"campaign_a",936:"campaign_b",1178:"campaign_c"}, inplace=True)
- pred = cvr_saved.predict(df).tolist()[0]
- return 'Conversion Rate : '+str(pred)
-
-xyz_campaign_id = gr.inputs.Dropdown(['campaign_a', 'campaign_b', 'campaign_c'], label="xyz_campaign_id -> an ID associated with each ad campaign of XYZ company")
-gender = gr.inputs.Dropdown(['M', 'F'], label = "gender -> gender of the person to whom the ad is shown")
-age = gr.inputs.Dropdown(['30-34', '35-39', '40-44', '45-49'], label = "age -> age of the person to whom the ad is shown")
-Impressions = gr.inputs.Slider(minimum=100, maximum=3000000, default = 50000, step=100, label = "Impressions -> the number of times the ad was shown")
-Clicks = gr.inputs.Slider(minimum=1, maximum=500, default = 8, step=1, label = "Clicks -> number of clicks on that ad")
-Total_Conversion = gr.inputs.Slider(minimum= 1, maximum= 60, default = 1, step= 1, label = "Total_Conversion -> the number of people who responded to the product after seeing the ad")
-interest = gr.inputs.Slider(minimum=1, maximum=114, default = 25, step= 1, label = "interest -> a code specifying the category to which the person’s interest belongs (interests are as mentioned in the person’s Facebook public profile)")
-
-gr.Interface(predict_cvr, inputs =[xyz_campaign_id, gender, age, Impressions, Clicks,
- Total_Conversion, interest],
- outputs="label",
- title = "Facebook Ads Conversions Prediction Web App",
- theme = "dark-peach",
- capture_session=True).launch();
\ No newline at end of file
diff --git a/spaces/Felladrin/MiniSearch/vite.config.ts b/spaces/Felladrin/MiniSearch/vite.config.ts
deleted file mode 100644
index dc1c1791a1a5163d93f3dd0ab7b6c813530c18d9..0000000000000000000000000000000000000000
--- a/spaces/Felladrin/MiniSearch/vite.config.ts
+++ /dev/null
@@ -1,31 +0,0 @@
-import { defineConfig } from "vite";
-import react from "@vitejs/plugin-react";
-import { VitePluginNode } from "vite-plugin-node";
-
-export default defineConfig(({ command }) => ({
- server: {
- port: process.env.PORT ? Number(process.env.PORT) : 7860,
- hmr: {
- port: process.env.HMR_PORT ? Number(process.env.HMR_PORT) : 7861,
- },
- },
- build: {
- target: "esnext",
- },
- plugins: [
- react(),
- ...(command === "serve"
- ? VitePluginNode({
- adapter: ({ app, req, res, next }) => {
- if (req.url.startsWith("/search")) {
- app(req, res);
- } else {
- next();
- }
- },
- appPath: "./server.ts",
- exportName: "app",
- })
- : []),
- ],
-}));
diff --git a/spaces/GT4SD/hf-transformers/README.md b/spaces/GT4SD/hf-transformers/README.md
deleted file mode 100644
index 27de978f76bd4ca1595e450aa548a9963db5398d..0000000000000000000000000000000000000000
--- a/spaces/GT4SD/hf-transformers/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: HF Transformers
-emoji: 💡
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.46.0
-app_file: app.py
-pinned: false
-python_version: 3.8.13
-pypi_version: 20.2.4
-duplicated_from: jannisborn/gt4sd-advanced-manufacturing
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/GXSA/bingo/src/lib/bots/bing/types.ts b/spaces/GXSA/bingo/src/lib/bots/bing/types.ts
deleted file mode 100644
index 5a9813b797d13b592ec17b45cfac4bd46510d883..0000000000000000000000000000000000000000
--- a/spaces/GXSA/bingo/src/lib/bots/bing/types.ts
+++ /dev/null
@@ -1,261 +0,0 @@
-export type Author = 'user' | 'system' | 'bot'
-
-export type BotId = 'bing'
-
-export enum BingConversationStyle {
- Creative = 'Creative',
- Balanced = 'Balanced',
- Precise = 'Precise'
-}
-
-export enum ErrorCode {
- CONVERSATION_LIMIT = 'CONVERSATION_LIMIT',
- BING_UNAUTHORIZED = 'BING_UNAUTHORIZED',
- BING_IP_FORBIDDEN = 'BING_IP_FORBIDDEN',
- BING_TRY_LATER = 'BING_TRY_LATER',
- BING_FORBIDDEN = 'BING_FORBIDDEN',
- BING_CAPTCHA = 'BING_CAPTCHA',
- THROTTLE_LIMIT = 'THROTTLE_LIMIT',
- NOTFOUND_ERROR = 'NOT_FOUND_ERROR',
- UNKOWN_ERROR = 'UNKOWN_ERROR',
- NETWORK_ERROR = 'NETWORK_ERROR',
-}
-
-export class ChatError extends Error {
- code: ErrorCode
- constructor(message: string, code: ErrorCode) {
- super(message)
- this.code = code
- }
-}
-
-export type ChatMessageModel = {
- id: string
- author: Author
- text: string
- error?: ChatError
- throttling?: Throttling
- sourceAttributions?: SourceAttribution[]
- suggestedResponses?: SuggestedResponse[]
-}
-
-export interface ConversationModel {
- messages: ChatMessageModel[]
-}
-
-export type Event =
- | {
- type: 'UPDATE_ANSWER'
- data: {
- text: string
- spokenText?: string
- sourceAttributions?: SourceAttribution[]
- suggestedResponses?: SuggestedResponse[]
- throttling?: Throttling
- }
- }
- | {
- type: 'DONE'
- }
- | {
- type: 'ERROR'
- error: ChatError
- }
-
-export interface SendMessageParams<T> {
- prompt: string
- imageUrl?: string
- options: T
- onEvent: (event: Event) => void
- signal?: AbortSignal
-}
-
-export interface ConversationResponse {
- conversationId: string
- clientId: string
- conversationSignature: string
- result: {
- value: string
- message?: string
- }
-}
-
-export interface Telemetry {
- metrics?: null
- startTime: string
-}
-
-export interface ChatUpdateArgument {
- messages?: ChatResponseMessage[]
- throttling?: Throttling
- requestId: string
- result: null
-}
-
-export type ChatUpdateCompleteResponse = {
- type: 2
- invocationId: string
- item: ChatResponseItem
-} | {
- type: 1
- target: string
- arguments: ChatUpdateArgument[]
-} | {
- type: 3
- invocationId: string
-} | {
- type: 6 | 7
-}
-
-export interface ChatRequestResult {
- value: string
- serviceVersion: string
- error?: string
-}
-
-export interface ChatResponseItem {
- messages: ChatResponseMessage[]
- firstNewMessageIndex: number
- suggestedResponses: null
- conversationId: string
- requestId: string
- conversationExpiryTime: string
- telemetry: Telemetry
- result: ChatRequestResult
- throttling: Throttling
-}
-export enum InvocationEventType {
- Invocation = 1,
- StreamItem = 2,
- Completion = 3,
- StreamInvocation = 4,
- CancelInvocation = 5,
- Ping = 6,
- Close = 7,
-}
-
-// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts
-
-export interface ConversationInfo {
- conversationId: string
- clientId: string
- conversationSignature: string
- invocationId: number
- conversationStyle: BingConversationStyle
- prompt: string
- imageUrl?: string
-}
-
-export interface BingChatResponse {
- conversationSignature: string
- conversationId: string
- clientId: string
- invocationId: number
- conversationExpiryTime: Date
- response: string
- details: ChatResponseMessage
-}
-
-export interface Throttling {
- maxNumLongDocSummaryUserMessagesInConversation: number
- maxNumUserMessagesInConversation: number
- numLongDocSummaryUserMessagesInConversation: number
- numUserMessagesInConversation: number
-}
-
-export interface ChatResponseMessage {
- text: string
- spokenText?: string
- author: string
- createdAt: Date
- timestamp: Date
- messageId: string
- requestId: string
- offense: string
- adaptiveCards: AdaptiveCard[]
- sourceAttributions: SourceAttribution[]
- feedback: Feedback
- contentOrigin: string
- messageType?: string
- contentType?: string
- privacy: null
- suggestedResponses: SuggestedResponse[]
-}
-
-export interface AdaptiveCard {
- type: string
- version: string
- body: Body[]
-}
-
-export interface Body {
- type: string
- text: string
- wrap: boolean
- size?: string
-}
-
-export interface Feedback {
- tag: null
- updatedOn: null
- type: string
-}
-
-export interface SourceAttribution {
- providerDisplayName: string
- seeMoreUrl: string
- searchQuery: string
-}
-
-export interface SuggestedResponse {
- text: string
- author?: Author
- createdAt?: Date
- timestamp?: Date
- messageId?: string
- messageType?: string
- offense?: string
- feedback?: Feedback
- contentOrigin?: string
- privacy?: null
-}
-
-export interface KBlobRequest {
- knowledgeRequest: KnowledgeRequestContext
- imageBase64?: string
-}
-
-export interface KBlobResponse {
- blobId: string
- processedBlobId?: string
-}
-
-export interface KnowledgeRequestContext {
- imageInfo: ImageInfo;
- knowledgeRequest: KnowledgeRequest;
-}
-
-export interface ImageInfo {
- url?: string;
-}
-
-export interface KnowledgeRequest {
- invokedSkills: string[];
- subscriptionId: string;
- invokedSkillsRequestData: InvokedSkillsRequestData;
- convoData: ConvoData;
-}
-
-export interface ConvoData {
- convoid: string;
- convotone: BingConversationStyle;
-}
-
-export interface InvokedSkillsRequestData {
- enableFaceBlur: boolean;
-}
-
-export interface FileItem {
- url: string;
- status?: 'loading' | 'error' | 'loaded'
-}
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/misc.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/misc.py
deleted file mode 100644
index 2d326fab6b57317110af5fa7b722a33d4ffe7908..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/models/misc.py
+++ /dev/null
@@ -1,162 +0,0 @@
-# Copyright (c) Aishwarya Kamath & Nicolas Carion. Licensed under the Apache License 2.0. All Rights Reserved
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Misc functions, including distributed helpers.
-
-Mostly copy-paste from torchvision references.
-"""
-import os
-import subprocess
-from typing import Any, Dict, List, Optional
-
-import torch
-import torchvision
-from torch import Tensor
-
-
-def get_sha():
- cwd = os.path.dirname(os.path.abspath(__file__))
-
- def _run(command):
- return subprocess.check_output(command, cwd=cwd).decode("ascii").strip()
-
- sha = "N/A"
- diff = "clean"
- branch = "N/A"
- try:
- sha = _run(["git", "rev-parse", "HEAD"])
- subprocess.check_output(["git", "diff"], cwd=cwd)
- diff = _run(["git", "diff-index", "HEAD"])
-        diff = "has uncommitted changes" if diff else "clean"
- branch = _run(["git", "rev-parse", "--abbrev-ref", "HEAD"])
- except Exception:
- pass
- message = f"sha: {sha}, status: {diff}, branch: {branch}"
- return message
-
-
-def collate_fn(do_round, batch):
- batch = list(zip(*batch))
- final_batch = {}
- final_batch["samples"] = NestedTensor.from_tensor_list(batch[0], do_round)
- final_batch["targets"] = batch[1]
- if "positive_map" in batch[1][0]:
- # we batch the positive maps here
- # Since in general each batch element will have a different number of boxes,
- # we collapse a single batch dimension to avoid padding. This is sufficient for our purposes.
- max_len = max([v["positive_map"].shape[1] for v in batch[1]])
- nb_boxes = sum([v["positive_map"].shape[0] for v in batch[1]])
- batched_pos_map = torch.zeros((nb_boxes, max_len), dtype=torch.bool)
- cur_count = 0
- for v in batch[1]:
- cur_pos = v["positive_map"]
- batched_pos_map[cur_count : cur_count + len(cur_pos), : cur_pos.shape[1]] = cur_pos
- cur_count += len(cur_pos)
-
- assert cur_count == len(batched_pos_map)
- # assert batched_pos_map.sum().item() == sum([v["positive_map"].sum().item() for v in batch[1]])
- final_batch["positive_map"] = batched_pos_map.float()
- if "positive_map_eval" in batch[1][0]:
- # we batch the positive maps here
- # Since in general each batch element will have a different number of boxes,
- # we collapse a single batch dimension to avoid padding. This is sufficient for our purposes.
- max_len = max([v["positive_map_eval"].shape[1] for v in batch[1]])
- nb_boxes = sum([v["positive_map_eval"].shape[0] for v in batch[1]])
- batched_pos_map = torch.zeros((nb_boxes, max_len), dtype=torch.bool)
- cur_count = 0
- for v in batch[1]:
- cur_pos = v["positive_map_eval"]
- batched_pos_map[cur_count : cur_count + len(cur_pos), : cur_pos.shape[1]] = cur_pos
- cur_count += len(cur_pos)
-
- assert cur_count == len(batched_pos_map)
- # assert batched_pos_map.sum().item() == sum([v["positive_map"].sum().item() for v in batch[1]])
- final_batch["positive_map_eval"] = batched_pos_map.float()
- if "answer" in batch[1][0] or "answer_type" in batch[1][0]:
- answers = {}
- for f in batch[1][0].keys():
- if "answer" not in f:
- continue
- answers[f] = torch.stack([b[f] for b in batch[1]])
- final_batch["answers"] = answers
-
- return final_batch
-
-
-class NestedTensor(object):
- def __init__(self, tensors, mask):
- self.tensors = tensors
- self.mask = mask
-
- def to(self, *args, **kwargs):
- cast_tensor = self.tensors.to(*args, **kwargs)
- cast_mask = self.mask.to(*args, **kwargs) if self.mask is not None else None
- return type(self)(cast_tensor, cast_mask)
-
- def decompose(self):
- return self.tensors, self.mask
-
- @classmethod
- def from_tensor_list(cls, tensor_list, do_round=False):
- # TODO make this more general
- if tensor_list[0].ndim == 3:
- # TODO make it support different-sized images
- max_size = tuple(max(s) for s in zip(*[img.shape for img in tensor_list]))
- # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list]))
- batch_shape = (len(tensor_list),) + max_size
- b, c, h, w = batch_shape
- if do_round:
- # Round to an even size to avoid rounding issues in fpn
- p = 128
- h = h if h % p == 0 else (h // p + 1) * p
- w = w if w % p == 0 else (w // p + 1) * p
- batch_shape = b, c, h, w
-
- dtype = tensor_list[0].dtype
- device = tensor_list[0].device
- tensor = torch.zeros(batch_shape, dtype=dtype, device=device)
- mask = torch.ones((b, h, w), dtype=torch.bool, device=device)
- for img, pad_img, m in zip(tensor_list, tensor, mask):
- pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- m[: img.shape[1], : img.shape[2]] = False
- else:
- raise ValueError("not supported")
- return cls(tensor, mask)
-
- def __repr__(self):
- return repr(self.tensors)
-
-
-def interpolate(input, size=None, scale_factor=None, mode="nearest", align_corners=None):
- # type: (Tensor, Optional[List[int]], Optional[float], str, Optional[bool]) -> Tensor
- """
- Equivalent to nn.functional.interpolate, but with support for empty channel sizes.
- """
- if input.numel() > 0:
- return torch.nn.functional.interpolate(input, size, scale_factor, mode, align_corners)
-
- assert input.shape[0] != 0 or input.shape[1] != 0, "At least one of the two first dimensions must be non zero"
-
- if input.shape[1] == 0:
- # Pytorch doesn't support null dimension on the channel dimension, so we transpose to fake a null batch dim
- return torch.nn.functional.interpolate(input.transpose(0, 1), size, scale_factor, mode, align_corners).transpose(0, 1)
-
- # empty batch dimension is now supported in pytorch
- return torch.nn.functional.interpolate(input, size, scale_factor, mode, align_corners)
-
-
-
-def targets_to(targets: List[Dict[str, Any]], device):
- """Moves the target dicts to the given device."""
- excluded_keys = [
- "questionId",
- "tokens_positive",
- "tokens",
- "dataset_name",
- "sentence_id",
- "original_img_id",
- "nb_eval",
- "task_id",
- "original_id",
- ]
- return [{k: v.to(device) if k not in excluded_keys else v for k, v in t.items() if k != "caption"} for t in targets]
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/quickstart_download.sh b/spaces/Gen-Sim/Gen-Sim/scripts/quickstart_download.sh
deleted file mode 100644
index a121c574c8978839a044669d10a88e226cb1f666..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/quickstart_download.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-wget https://github.com/cliport/cliport/releases/download/v1.0.0/cliport_quickstart.zip
-unzip cliport_quickstart.zip
-rm cliport_quickstart.zip
\ No newline at end of file
diff --git a/spaces/Giedrius/mood_detector/README.md b/spaces/Giedrius/mood_detector/README.md
deleted file mode 100644
index c2d047a0880878418b3ea654d2b1cd65b3941313..0000000000000000000000000000000000000000
--- a/spaces/Giedrius/mood_detector/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Mood_detector
-emoji: 🏢
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/single_stage.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/single_stage.py
deleted file mode 100644
index 5172bdbd945889445eeaa18398c9f0118bb845ad..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/single_stage.py
+++ /dev/null
@@ -1,154 +0,0 @@
-import torch
-import torch.nn as nn
-
-from mmdet.core import bbox2result
-from ..builder import DETECTORS, build_backbone, build_head, build_neck
-from .base import BaseDetector
-
-
-@DETECTORS.register_module()
-class SingleStageDetector(BaseDetector):
- """Base class for single-stage detectors.
-
- Single-stage detectors directly and densely predict bounding boxes on the
- output features of the backbone+neck.
- """
-
- def __init__(self,
- backbone,
- neck=None,
- bbox_head=None,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(SingleStageDetector, self).__init__()
- self.backbone = build_backbone(backbone)
- if neck is not None:
- self.neck = build_neck(neck)
- bbox_head.update(train_cfg=train_cfg)
- bbox_head.update(test_cfg=test_cfg)
- self.bbox_head = build_head(bbox_head)
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
- self.init_weights(pretrained=pretrained)
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in detector.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- super(SingleStageDetector, self).init_weights(pretrained)
- self.backbone.init_weights(pretrained=pretrained)
- if self.with_neck:
- if isinstance(self.neck, nn.Sequential):
- for m in self.neck:
- m.init_weights()
- else:
- self.neck.init_weights()
- self.bbox_head.init_weights()
-
- def extract_feat(self, img):
- """Directly extract features from the backbone+neck."""
- x = self.backbone(img)
- if self.with_neck:
- x = self.neck(x)
- return x
-
- def forward_dummy(self, img):
- """Used for computing network flops.
-
- See `mmdetection/tools/analysis_tools/get_flops.py`
- """
- x = self.extract_feat(img)
- outs = self.bbox_head(x)
- return outs
-
- def forward_train(self,
- img,
- img_metas,
- gt_bboxes,
- gt_labels,
- gt_bboxes_ignore=None):
- """
- Args:
- img (Tensor): Input images of shape (N, C, H, W).
- Typically these should be mean centered and std scaled.
- img_metas (list[dict]): A List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- :class:`mmdet.datasets.pipelines.Collect`.
-            gt_bboxes (list[Tensor]): Each item is the ground-truth boxes for
-                one image, in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]): Class indices corresponding to each box
- gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
- boxes can be ignored when computing the loss.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- super(SingleStageDetector, self).forward_train(img, img_metas)
- x = self.extract_feat(img)
- losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes,
- gt_labels, gt_bboxes_ignore)
- return losses
-
- def simple_test(self, img, img_metas, rescale=False):
- """Test function without test time augmentation.
-
- Args:
- imgs (list[torch.Tensor]): List of multiple images
- img_metas (list[dict]): List of image information.
- rescale (bool, optional): Whether to rescale the results.
- Defaults to False.
-
- Returns:
- list[list[np.ndarray]]: BBox results of each image and classes.
- The outer list corresponds to each image. The inner list
- corresponds to each class.
- """
- x = self.extract_feat(img)
- outs = self.bbox_head(x)
- # get origin input shape to support onnx dynamic shape
- if torch.onnx.is_in_onnx_export():
- # get shape as tensor
- img_shape = torch._shape_as_tensor(img)[2:]
- img_metas[0]['img_shape_for_onnx'] = img_shape
- bbox_list = self.bbox_head.get_bboxes(
- *outs, img_metas, rescale=rescale)
- # skip post-processing when exporting to ONNX
- if torch.onnx.is_in_onnx_export():
- return bbox_list
-
- bbox_results = [
- bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes)
- for det_bboxes, det_labels in bbox_list
- ]
- return bbox_results
-
- def aug_test(self, imgs, img_metas, rescale=False):
- """Test function with test time augmentation.
-
- Args:
- imgs (list[Tensor]): the outer list indicates test-time
- augmentations and inner Tensor should have a shape NxCxHxW,
- which contains all images in the batch.
- img_metas (list[list[dict]]): the outer list indicates test-time
- augs (multiscale, flip, etc.) and the inner list indicates
- images in a batch. each dict has image information.
- rescale (bool, optional): Whether to rescale the results.
- Defaults to False.
-
- Returns:
- list[list[np.ndarray]]: BBox results of each image and classes.
- The outer list corresponds to each image. The inner list
- corresponds to each class.
- """
- assert hasattr(self.bbox_head, 'aug_test'), \
- f'{self.bbox_head.__class__.__name__}' \
- ' does not support test-time augmentation'
-
- feats = self.extract_feats(imgs)
- return [self.bbox_head.aug_test(feats, img_metas, rescale=rescale)]
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_20k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_20k_voc12aug.py
deleted file mode 100644
index af06cb66cc808c206d6946a4b2420a6942d3dc7e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,7 +0,0 @@
-_base_ = [
- '../_base_/models/psanet_r50-d8.py',
- '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_20k.py'
-]
-model = dict(
- decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/deeplabv3_s101-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/deeplabv3_s101-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index f98398690eb3e1e77975d7fb94ea865424aa331b..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/deeplabv3_s101-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = '../deeplabv3/deeplabv3_r101-d8_512x1024_80k_cityscapes.py'
-model = dict(
- pretrained='open-mmlab://resnest101',
- backbone=dict(
- type='ResNeSt',
- stem_channels=128,
- radix=2,
- reduction_factor=4,
- avg_down_stride=True))
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/checkpoint.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/checkpoint.py
deleted file mode 100644
index f6f871837e09c5cc7832b85b0d80b84f59e87ca0..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/checkpoint.py
+++ /dev/null
@@ -1,161 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from enum import Enum
-import logging
-from pathlib import Path
-import re
-import typing as tp
-
-import flashy
-import torch
-
-from ..environment import AudioCraftEnvironment
-
-
-logger = logging.getLogger(__name__)
-
-
-class CheckpointSource(Enum):
- CURRENT_XP = "current_xp"
- PRETRAINED = "pretrained"
- OTHER = "other"
-
-
-def checkpoint_name(name: tp.Optional[str] = None, rank: tp.Optional[int] = None, use_fsdp: bool = False) -> str:
- """Checkpoint name formatted for all use in AudioCraft codebase and has the following format:
-    `checkpoint_<name>.th(.<rank>)`. By convention, name is expected to be empty for the last checkpoint,
- 'best' for the best checkpoint or the epoch number.
-
- Args:
- name (str, optional): Name suffix for the checkpoint file stem.
- rank (optional, int): Rank for distributed processing, retrieved with flashy if not provided.
- use_fsdp (bool): Whether the calling solver relies on FSDP.
- Returns:
- str: The checkpoint name.
- """
- suffix = ''
- if rank is None:
- rank = flashy.distrib.rank()
- if rank > 0 and use_fsdp:
- suffix = '.' + str(rank)
- name_part = ''
- if name is not None:
- name_part = f'_{name}'
- return f'checkpoint{name_part}.th{suffix}'
-
-
-def is_sharded_checkpoint(path: Path) -> bool:
- """Whether the checkpoint at the given path corresponds to a sharded checkpoint across rank."""
- return re.search(r'\.th\.\d+$', path.name) is not None
-
-
-def resolve_checkpoint_path(sig_or_path: tp.Union[Path, str], name: tp.Optional[str] = None,
- use_fsdp: bool = False) -> tp.Optional[Path]:
- """Resolve a given checkpoint path for a provided dora sig or path.
-
- Args:
- sig_or_path (Path or str): Checkpoint path or dora signature.
- name (str, optional): Name suffix for the checkpoint file stem.
- rank (optional, int): Rank for distributed processing, retrieved with flashy if not provided.
- use_fsdp (bool): Whether the calling solver relies on FSDP.
- Returns:
- Path, optional: Resolved checkpoint path, if it exists.
- """
- from audiocraft import train
- xps_root = train.main.dora.dir / 'xps'
- sig_or_path = str(sig_or_path)
- if sig_or_path.startswith('//sig/'):
- sig = sig_or_path[len('//sig/'):]
- path = xps_root / sig
- else:
- path = Path(sig_or_path)
- path = AudioCraftEnvironment.resolve_reference_path(path)
-
- if path.is_dir():
- path = path / checkpoint_name(name, use_fsdp=use_fsdp)
-
- if path.exists():
- return path
- else:
- return None
-
-
-def load_checkpoint(checkpoint_path: Path, is_sharded: bool = False) -> tp.Any:
- """Load state from checkpoints at the specified checkpoint path."""
- if is_sharded:
- rank0_checkpoint_path = checkpoint_path.parent / checkpoint_name(use_fsdp=False)
- if rank0_checkpoint_path.exists():
- check_sharded_checkpoint(checkpoint_path, rank0_checkpoint_path)
- state = torch.load(checkpoint_path, 'cpu')
- logger.info("Checkpoint loaded from %s", checkpoint_path)
- return state
-
-
-def save_checkpoint(state: tp.Any, checkpoint_path: Path, is_sharded: bool = False) -> None:
- """Save state to disk to the specified checkpoint_path."""
- _safe_save_checkpoint(state, checkpoint_path, is_sharded)
- logger.info("Checkpoint saved to %s", checkpoint_path)
-
-
-def flush_stale_checkpoints(checkpoint_path: Path, keep_last: tp.Optional[int] = None) -> None:
- """Flush checkpoints to only keep last N checkpoints."""
- if keep_last is None or keep_last <= 0:
- return
- checkpoint_dir = checkpoint_path.parent
- suffix = ''
- if flashy.distrib.rank() > 0:
- suffix = f'.{flashy.distrib.rank()}'
- checkpoint_files_with_epoch = []
- for path in Path(checkpoint_dir).glob(f'checkpoint_*.th{suffix}'):
- epoch_part = path.name.split('.', 1)[0].split('_', 1)[1]
- if epoch_part.isdigit():
- checkpoint_files_with_epoch.append((path, int(epoch_part)))
- checkpoint_files = [path for path, _ in list(sorted(checkpoint_files_with_epoch, key=lambda t: t[1]))]
- total_to_flush = max(0, len(checkpoint_files) - keep_last)
- files_to_flush = checkpoint_files[:total_to_flush]
- for path in files_to_flush:
- logger.debug("Removing checkpoint: %s", str(path))
- path.unlink(missing_ok=True)
-
-
-def check_sharded_checkpoint(checkpoint_path: Path, rank0_checkpoint_path: Path) -> None:
- """Check sharded checkpoint state, ensuring the checkpoints are not corrupted."""
- # Finish the work of a previous run that got interrupted while dumping.
- old_path = Path(str(checkpoint_path) + '.old')
- if old_path.exists():
-        raise RuntimeError(
-            f"Old checkpoint {old_path} from a previous version of this code exists, cannot safely proceed.")
- token = Path(str(rank0_checkpoint_path) + '.tmp.done')
- tmp_path = Path(str(checkpoint_path) + '.tmp')
- if token.exists():
- if tmp_path.exists():
- tmp_path.rename(checkpoint_path)
- flashy.distrib.barrier()
- if flashy.distrib.is_rank_zero() and token.exists():
- token.unlink()
-
-
-def _safe_save_checkpoint(state: tp.Any, checkpoint_path: Path, is_sharded: bool = False) -> None:
- """Save checkpoints in a safe manner even with when sharded checkpoints across nodes."""
- def _barrier_if_sharded():
- if is_sharded:
- flashy.distrib.barrier()
-
- if flashy.distrib.is_rank_zero():
- token = Path(str(checkpoint_path) + '.tmp.done')
- if token.exists():
- token.unlink()
- _barrier_if_sharded()
- with flashy.utils.write_and_rename(checkpoint_path) as f:
- torch.save(state, f)
- _barrier_if_sharded()
- if flashy.distrib.is_rank_zero():
- token.touch()
- _barrier_if_sharded()
- _barrier_if_sharded()
- if flashy.distrib.rank() == 0:
- token.unlink()
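For reference, a small sketch (not from the repository) of the naming convention implemented by `checkpoint_name` and `is_sharded_checkpoint` above; ranks are passed explicitly here to avoid depending on any distributed state:

```python
from pathlib import Path

print(checkpoint_name(rank=0))                              # checkpoint.th
print(checkpoint_name("best", rank=0))                      # checkpoint_best.th
print(checkpoint_name("100", rank=2, use_fsdp=True))        # checkpoint_100.th.2
print(is_sharded_checkpoint(Path("checkpoint_100.th.2")))   # True
```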
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/lstm.py b/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/lstm.py
deleted file mode 100644
index c0866175950c1ca4f6cca98649525e6481853bba..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/lstm.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from torch import nn
-
-
-class StreamableLSTM(nn.Module):
- """LSTM without worrying about the hidden state, nor the layout of the data.
- Expects input as convolutional layout.
- """
- def __init__(self, dimension: int, num_layers: int = 2, skip: bool = True):
- super().__init__()
- self.skip = skip
- self.lstm = nn.LSTM(dimension, dimension, num_layers)
-
- def forward(self, x):
- x = x.permute(2, 0, 1)
- y, _ = self.lstm(x)
- if self.skip:
- y = y + x
- y = y.permute(1, 2, 0)
- return y
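A minimal shape-check sketch (tensor sizes are illustrative) for the `StreamableLSTM` above, showing that it consumes and returns the convolutional (batch, channels, time) layout:

```python
import torch

lstm = StreamableLSTM(dimension=128, num_layers=2, skip=True)
x = torch.randn(4, 128, 50)   # (batch, channels, time)
y = lstm(x)
print(y.shape)                # torch.Size([4, 128, 50]) - layout preserved, residual added
```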
diff --git a/spaces/GroveStreet/GTA_SOVITS/modules/losses.py b/spaces/GroveStreet/GTA_SOVITS/modules/losses.py
deleted file mode 100644
index cd21799eccde350c3aac0bdd661baf96ed220147..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/modules/losses.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import modules.commons as commons
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
- #print(logs_p)
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
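The adversarial losses above follow the least-squares GAN formulation; a tiny illustrative sketch (the scores are invented for the example) of how they are called:

```python
import torch

real_scores = [torch.tensor([0.9, 1.1])]   # discriminator outputs on real audio
fake_scores = [torch.tensor([0.2, -0.1])]  # discriminator outputs on generated audio

d_loss, real_losses, fake_losses = discriminator_loss(real_scores, fake_scores)
g_loss, per_disc_losses = generator_loss(fake_scores)
# d_loss drives real outputs toward 1 and fake outputs toward 0;
# g_loss drives fake outputs toward 1.
```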
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/log_images.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/log_images.py
deleted file mode 100644
index 826f29cfb5d29d22044d07c14068f1678a5ae003..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/log_images.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# Copyright (c) EPFL VILAB.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import Dict, List
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-import torchvision.transforms as transforms
-import wandb
-
-import utils
-from utils.datasets_semseg import (ade_classes, hypersim_classes,
- nyu_v2_40_classes)
-
-
-def inv_norm(tensor: torch.Tensor) -> torch.Tensor:
- """Inverse of the normalization that was done during pre-processing
- """
- inv_normalize = transforms.Normalize(
- mean=[-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225],
- std=[1 / 0.229, 1 / 0.224, 1 / 0.225])
-
- return inv_normalize(tensor)
-
-
-@torch.no_grad()
-def log_semseg_wandb(
- images: torch.Tensor,
- preds: List[np.ndarray],
- gts: List[np.ndarray],
- depth_gts: List[np.ndarray],
- dataset_name: str = 'ade20k',
- image_count=8,
- prefix=""
- ):
-
- if dataset_name == 'ade20k':
- classes = ade_classes()
- elif dataset_name == 'hypersim':
- classes = hypersim_classes()
- elif dataset_name == 'nyu':
- classes = nyu_v2_40_classes()
- else:
- raise ValueError(f'Dataset {dataset_name} not supported for logging to wandb.')
-
- class_labels = {i: cls for i, cls in enumerate(classes)}
- class_labels[len(classes)] = "void"
- class_labels[utils.SEG_IGNORE_INDEX] = "ignore"
-
- image_count = min(len(images), image_count)
-
- images = images[:image_count]
- preds = preds[:image_count]
- gts = gts[:image_count]
- depth_gts = depth_gts[:image_count] if len(depth_gts) > 0 else None
-
- semseg_images = {}
-
- for i, (image, pred, gt) in enumerate(zip(images, preds, gts)):
- image = inv_norm(image)
- pred[gt == utils.SEG_IGNORE_INDEX] = utils.SEG_IGNORE_INDEX
-
- semseg_image = wandb.Image(image, masks={
- "predictions": {
- "mask_data": pred,
- "class_labels": class_labels,
- },
- "ground_truth": {
- "mask_data": gt,
- "class_labels": class_labels,
- }
- })
-
- semseg_images[f"{prefix}_{i}"] = semseg_image
-
- if depth_gts is not None:
- semseg_images[f"{prefix}_{i}_depth"] = wandb.Image(depth_gts[i])
-
- wandb.log(semseg_images, commit=False)
-
-
-@torch.no_grad()
-def log_taskonomy_wandb(
- preds: Dict[str, torch.Tensor],
- gts: Dict[str, torch.Tensor],
- image_count=8,
- prefix=""
- ):
- pred_tasks = list(preds.keys())
- gt_tasks = list(gts.keys())
- if 'mask_valid' in gt_tasks:
- gt_tasks.remove('mask_valid')
-
- image_count = min(len(preds[pred_tasks[0]]), image_count)
-
- all_images = {}
-
- for i in range(image_count):
-
- # Log GTs
- for task in gt_tasks:
- gt_img = gts[task][i]
- if task == 'rgb':
- gt_img = inv_norm(gt_img)
- if gt_img.shape[0] == 1:
- gt_img = gt_img[0]
- elif gt_img.shape[0] == 2:
- gt_img = F.pad(gt_img, (0,0,0,0,0,1), mode='constant', value=0.0)
-
- gt_img = wandb.Image(gt_img, caption=f'GT #{i}')
- key = f'{prefix}_gt_{task}'
- if key not in all_images:
- all_images[key] = [gt_img]
- else:
- all_images[key].append(gt_img)
-
- # Log preds
- for task in pred_tasks:
- pred_img = preds[task][i]
- if task == 'rgb':
- pred_img = inv_norm(pred_img)
- if pred_img.shape[0] == 1:
- pred_img = pred_img[0]
- elif pred_img.shape[0] == 2:
- pred_img = F.pad(pred_img, (0,0,0,0,0,1), mode='constant', value=0.0)
-
- pred_img = wandb.Image(pred_img, caption=f'Pred #{i}')
- key = f'{prefix}_pred_{task}'
- if key not in all_images:
- all_images[key] = [pred_img]
- else:
- all_images[key].append(pred_img)
-
- wandb.log(all_images, commit=False)
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/loss/latent_depth.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/loss/latent_depth.py
deleted file mode 100644
index a3b9535ecac3ec403868681a8b50c1fbe1c90dfe..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/loss/latent_depth.py
+++ /dev/null
@@ -1,99 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-from torch.nn.modules.loss import _Loss
-
-
-class LatentLayersKLLoss(_Loss):
- def __init__(self, args):
- super().__init__()
- self.args = args
-
- def forward(self, layer_samples, lang_idx, update_num, sample_size):
- prior = self.args.prior
- samples = layer_samples[lang_idx]
- eps = 1e-7
- if prior == "uniform":
- # uniform prior
- kl_loss = (samples * (torch.log(samples + eps) - math.log(0.5))).sum(-1)
- elif prior == "agged_posterior":
- # aggregated posterior
- y_t = torch.stack([x.detach() for x in layer_samples], dim=0)
- agged_q = torch.sum(y_t, dim=0)
- row_norm = agged_q.sum(-1)
- normed_agg_q = agged_q / row_norm
- kl_loss = (
- samples * (torch.log(samples + eps) - torch.log(normed_agg_q + eps))
- ).sum(-1)
- else:
- raise NotImplementedError("The specified prior is not implemented.")
-
- # normalized by number of layers
- kl_loss /= layer_samples[0].size()[0]
- kl_weight = min(
- self.args.sparsity_weight,
- (update_num - self.args.soft_update)
- * self.args.sparsity_weight
- / self.args.anneal_updates,
- )
- kl_loss *= kl_weight * sample_size
- return kl_loss
-
-
-class LatentLayersSparsityLoss(_Loss):
- def __init__(self, args):
- super().__init__()
- self.args = args
-
- def is_valid(self, update_num):
- if self.args.target_layers <= 0:
- return False
- return update_num > (self.args.soft_update + self.args.anneal_updates)
-
- def forward(self, layer_samples_list, update_num, sample_size):
- batch_loss = 0
- share_loss = 0
- global_sparsity_loss = 0
- layer_samples = torch.stack(layer_samples_list, dim=0)
- if (
- self.args.target_layers > 0 or self.args.share_weight > 0
- ) and update_num > (self.args.soft_update + self.args.anneal_updates):
- # anneal sparsity weight
- if update_num < (self.args.anneal_updates + self.args.soft_update):
- weight_anneal = 0
- elif update_num < (2 * self.args.anneal_updates + self.args.soft_update):
- weight_anneal = (
- (update_num - self.args.soft_update - self.args.anneal_updates)
- * self.args.share_weight
- / self.args.anneal_updates
- )
- else:
- weight_anneal = 1
- # compute ratio among languages
- layer_utilization = torch.sum(layer_samples, dim=0)
- layer_utilization /= layer_samples.size()[0]
- if self.args.share_weight > 0:
- # encouraging sharing across languages
- share_loss = sum(
- -1.0 * v * math.log(v) for v in layer_utilization if v > 0
- )
- batch_loss += (
- weight_anneal * self.args.share_weight * sample_size * share_loss
- )
- if self.args.target_layers > 0:
-                # compute the expected number of layers selected
-                expected_layers = sum(layer_utilization)
-                # compute l2 loss wrt target number of layers
-                global_sparsity_loss = (expected_layers - self.args.target_layers) ** 2
- batch_loss += (
- weight_anneal
- * self.args.share_weight
- * sample_size
- * global_sparsity_loss
- )
- return batch_loss
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/translation.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/translation.py
deleted file mode 100644
index 86473608677c62b063cd9889ed29d59002523be7..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/translation.py
+++ /dev/null
@@ -1,493 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-import itertools
-import json
-import logging
-import os
-from typing import Optional
-from argparse import Namespace
-from omegaconf import II
-
-import numpy as np
-from fairseq import metrics, utils
-from fairseq.data import (
- AppendTokenDataset,
- ConcatDataset,
- LanguagePairDataset,
- PrependTokenDataset,
- StripTokenDataset,
- TruncateDataset,
- data_utils,
- encoders,
- indexed_dataset,
-)
-from fairseq.data.indexed_dataset import get_available_dataset_impl
-from fairseq.dataclass import ChoiceEnum, FairseqDataclass
-from fairseq.tasks import FairseqTask, register_task
-
-
-EVAL_BLEU_ORDER = 4
-
-
-logger = logging.getLogger(__name__)
-
-
-def load_langpair_dataset(
- data_path,
- split,
- src,
- src_dict,
- tgt,
- tgt_dict,
- combine,
- dataset_impl,
- upsample_primary,
- left_pad_source,
- left_pad_target,
- max_source_positions,
- max_target_positions,
- prepend_bos=False,
- load_alignments=False,
- truncate_source=False,
- append_source_id=False,
- num_buckets=0,
- shuffle=True,
- pad_to_multiple=1,
- prepend_bos_src=None,
-):
- def split_exists(split, src, tgt, lang, data_path):
- filename = os.path.join(data_path, "{}.{}-{}.{}".format(split, src, tgt, lang))
- return indexed_dataset.dataset_exists(filename, impl=dataset_impl)
-
- src_datasets = []
- tgt_datasets = []
-
- for k in itertools.count():
- split_k = split + (str(k) if k > 0 else "")
-
- # infer langcode
- if split_exists(split_k, src, tgt, src, data_path):
- prefix = os.path.join(data_path, "{}.{}-{}.".format(split_k, src, tgt))
- elif split_exists(split_k, tgt, src, src, data_path):
- prefix = os.path.join(data_path, "{}.{}-{}.".format(split_k, tgt, src))
- else:
- if k > 0:
- break
- else:
- raise FileNotFoundError(
- "Dataset not found: {} ({})".format(split, data_path)
- )
-
- src_dataset = data_utils.load_indexed_dataset(
- prefix + src, src_dict, dataset_impl
- )
- if truncate_source:
- src_dataset = AppendTokenDataset(
- TruncateDataset(
- StripTokenDataset(src_dataset, src_dict.eos()),
- max_source_positions - 1,
- ),
- src_dict.eos(),
- )
- src_datasets.append(src_dataset)
-
- tgt_dataset = data_utils.load_indexed_dataset(
- prefix + tgt, tgt_dict, dataset_impl
- )
- if tgt_dataset is not None:
- tgt_datasets.append(tgt_dataset)
-
- logger.info(
- "{} {} {}-{} {} examples".format(
- data_path, split_k, src, tgt, len(src_datasets[-1])
- )
- )
-
- if not combine:
- break
-
- assert len(src_datasets) == len(tgt_datasets) or len(tgt_datasets) == 0
-
- if len(src_datasets) == 1:
- src_dataset = src_datasets[0]
- tgt_dataset = tgt_datasets[0] if len(tgt_datasets) > 0 else None
- else:
- sample_ratios = [1] * len(src_datasets)
- sample_ratios[0] = upsample_primary
- src_dataset = ConcatDataset(src_datasets, sample_ratios)
- if len(tgt_datasets) > 0:
- tgt_dataset = ConcatDataset(tgt_datasets, sample_ratios)
- else:
- tgt_dataset = None
-
- if prepend_bos:
- assert hasattr(src_dict, "bos_index") and hasattr(tgt_dict, "bos_index")
- src_dataset = PrependTokenDataset(src_dataset, src_dict.bos())
- if tgt_dataset is not None:
- tgt_dataset = PrependTokenDataset(tgt_dataset, tgt_dict.bos())
- elif prepend_bos_src is not None:
- logger.info(f"prepending src bos: {prepend_bos_src}")
- src_dataset = PrependTokenDataset(src_dataset, prepend_bos_src)
-
- eos = None
- if append_source_id:
- src_dataset = AppendTokenDataset(
- src_dataset, src_dict.index("[{}]".format(src))
- )
- if tgt_dataset is not None:
- tgt_dataset = AppendTokenDataset(
- tgt_dataset, tgt_dict.index("[{}]".format(tgt))
- )
- eos = tgt_dict.index("[{}]".format(tgt))
-
- align_dataset = None
- if load_alignments:
- align_path = os.path.join(data_path, "{}.align.{}-{}".format(split, src, tgt))
- if indexed_dataset.dataset_exists(align_path, impl=dataset_impl):
- align_dataset = data_utils.load_indexed_dataset(
- align_path, None, dataset_impl
- )
-
- tgt_dataset_sizes = tgt_dataset.sizes if tgt_dataset is not None else None
- return LanguagePairDataset(
- src_dataset,
- src_dataset.sizes,
- src_dict,
- tgt_dataset,
- tgt_dataset_sizes,
- tgt_dict,
- left_pad_source=left_pad_source,
- left_pad_target=left_pad_target,
- align_dataset=align_dataset,
- eos=eos,
- num_buckets=num_buckets,
- shuffle=shuffle,
- pad_to_multiple=pad_to_multiple,
- )
-
-
-@dataclass
-class TranslationConfig(FairseqDataclass):
- data: Optional[str] = field(
- default=None,
- metadata={
- "help": "colon separated path to data directories list, will be iterated upon during epochs "
- "in round-robin manner; however, valid and test data are always in the first directory "
- "to avoid the need for repeating them in all directories"
- },
- )
- source_lang: Optional[str] = field(
- default=None,
- metadata={
- "help": "source language",
- "argparse_alias": "-s",
- },
- )
- target_lang: Optional[str] = field(
- default=None,
- metadata={
- "help": "target language",
- "argparse_alias": "-t",
- },
- )
- load_alignments: bool = field(
- default=False, metadata={"help": "load the binarized alignments"}
- )
- left_pad_source: bool = field(
- default=True, metadata={"help": "pad the source on the left"}
- )
- left_pad_target: bool = field(
- default=False, metadata={"help": "pad the target on the left"}
- )
- max_source_positions: int = field(
- default=1024, metadata={"help": "max number of tokens in the source sequence"}
- )
- max_target_positions: int = field(
- default=1024, metadata={"help": "max number of tokens in the target sequence"}
- )
- upsample_primary: int = field(
- default=-1, metadata={"help": "the amount of upsample primary dataset"}
- )
- truncate_source: bool = field(
- default=False, metadata={"help": "truncate source to max-source-positions"}
- )
- num_batch_buckets: int = field(
- default=0,
- metadata={
- "help": "if >0, then bucket source and target lengths into "
- "N buckets and pad accordingly; this is useful on TPUs to minimize the number of compilations"
- },
- )
- train_subset: str = II("dataset.train_subset")
- dataset_impl: Optional[ChoiceEnum(get_available_dataset_impl())] = II(
- "dataset.dataset_impl"
- )
- required_seq_len_multiple: int = II("dataset.required_seq_len_multiple")
-
- # options for reporting BLEU during validation
- eval_bleu: bool = field(
- default=False, metadata={"help": "evaluation with BLEU scores"}
- )
- eval_bleu_args: Optional[str] = field(
- default="{}",
-        metadata={
-            "help": 'generation args for BLEU scoring, e.g., \'{"beam": 4, "lenpen": 0.6}\', as JSON string'
- },
- )
- eval_bleu_detok: str = field(
- default="space",
- metadata={
- "help": "detokenize before computing BLEU (e.g., 'moses'); required if using --eval-bleu; "
- "use 'space' to disable detokenization; see fairseq.data.encoders for other options"
- },
- )
- eval_bleu_detok_args: Optional[str] = field(
- default="{}",
- metadata={"help": "args for building the tokenizer, if needed, as JSON string"},
- )
- eval_tokenized_bleu: bool = field(
- default=False, metadata={"help": "compute tokenized BLEU instead of sacrebleu"}
- )
- eval_bleu_remove_bpe: Optional[str] = field(
- default=None,
- metadata={
- "help": "remove BPE before computing BLEU",
- "argparse_const": "@@ ",
- },
- )
- eval_bleu_print_samples: bool = field(
- default=False, metadata={"help": "print sample generations during validation"}
- )
-
-
-@register_task("translation", dataclass=TranslationConfig)
-class TranslationTask(FairseqTask):
- """
- Translate from one (source) language to another (target) language.
-
- Args:
- src_dict (~fairseq.data.Dictionary): dictionary for the source language
- tgt_dict (~fairseq.data.Dictionary): dictionary for the target language
-
- .. note::
-
- The translation task is compatible with :mod:`fairseq-train`,
- :mod:`fairseq-generate` and :mod:`fairseq-interactive`.
- """
-
- cfg: TranslationConfig
-
- def __init__(self, cfg: TranslationConfig, src_dict, tgt_dict):
- super().__init__(cfg)
- self.src_dict = src_dict
- self.tgt_dict = tgt_dict
-
- @classmethod
- def setup_task(cls, cfg: TranslationConfig, **kwargs):
- """Setup the task (e.g., load dictionaries).
-
- Args:
-            cfg (TranslationConfig): the configuration of this task
- """
-
- paths = utils.split_paths(cfg.data)
- assert len(paths) > 0
- # find language pair automatically
- if cfg.source_lang is None or cfg.target_lang is None:
- cfg.source_lang, cfg.target_lang = data_utils.infer_language_pair(paths[0])
- if cfg.source_lang is None or cfg.target_lang is None:
- raise Exception(
- "Could not infer language pair, please provide it explicitly"
- )
-
- # load dictionaries
- src_dict = cls.load_dictionary(
- os.path.join(paths[0], "dict.{}.txt".format(cfg.source_lang))
- )
- tgt_dict = cls.load_dictionary(
- os.path.join(paths[0], "dict.{}.txt".format(cfg.target_lang))
- )
- assert src_dict.pad() == tgt_dict.pad()
- assert src_dict.eos() == tgt_dict.eos()
- assert src_dict.unk() == tgt_dict.unk()
- logger.info("[{}] dictionary: {} types".format(cfg.source_lang, len(src_dict)))
- logger.info("[{}] dictionary: {} types".format(cfg.target_lang, len(tgt_dict)))
-
- return cls(cfg, src_dict, tgt_dict)
-
- def load_dataset(self, split, epoch=1, combine=False, **kwargs):
- """Load a given dataset split.
-
- Args:
- split (str): name of the split (e.g., train, valid, test)
- """
- paths = utils.split_paths(self.cfg.data)
- assert len(paths) > 0
- if split != self.cfg.train_subset:
-            # if this is not the training split, use only the first shard for valid and test
- paths = paths[:1]
- data_path = paths[(epoch - 1) % len(paths)]
-
- # infer langcode
- src, tgt = self.cfg.source_lang, self.cfg.target_lang
-
- self.datasets[split] = load_langpair_dataset(
- data_path,
- split,
- src,
- self.src_dict,
- tgt,
- self.tgt_dict,
- combine=combine,
- dataset_impl=self.cfg.dataset_impl,
- upsample_primary=self.cfg.upsample_primary,
- left_pad_source=self.cfg.left_pad_source,
- left_pad_target=self.cfg.left_pad_target,
- max_source_positions=self.cfg.max_source_positions,
- max_target_positions=self.cfg.max_target_positions,
- load_alignments=self.cfg.load_alignments,
- truncate_source=self.cfg.truncate_source,
- num_buckets=self.cfg.num_batch_buckets,
- shuffle=(split != "test"),
- pad_to_multiple=self.cfg.required_seq_len_multiple,
- )
-
- def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None):
- return LanguagePairDataset(
- src_tokens,
- src_lengths,
- self.source_dictionary,
- tgt_dict=self.target_dictionary,
- constraints=constraints,
- )
-
- def build_model(self, cfg):
- model = super().build_model(cfg)
- if self.cfg.eval_bleu:
- detok_args = json.loads(self.cfg.eval_bleu_detok_args)
- self.tokenizer = encoders.build_tokenizer(
- Namespace(tokenizer=self.cfg.eval_bleu_detok, **detok_args)
- )
-
- gen_args = json.loads(self.cfg.eval_bleu_args)
- self.sequence_generator = self.build_generator(
- [model], Namespace(**gen_args)
- )
- return model
-
- def valid_step(self, sample, model, criterion):
- loss, sample_size, logging_output = super().valid_step(sample, model, criterion)
- if self.cfg.eval_bleu:
- bleu = self._inference_with_bleu(self.sequence_generator, sample, model)
- logging_output["_bleu_sys_len"] = bleu.sys_len
- logging_output["_bleu_ref_len"] = bleu.ref_len
- # we split counts into separate entries so that they can be
- # summed efficiently across workers using fast-stat-sync
- assert len(bleu.counts) == EVAL_BLEU_ORDER
- for i in range(EVAL_BLEU_ORDER):
- logging_output["_bleu_counts_" + str(i)] = bleu.counts[i]
- logging_output["_bleu_totals_" + str(i)] = bleu.totals[i]
- return loss, sample_size, logging_output
-
- def reduce_metrics(self, logging_outputs, criterion):
- super().reduce_metrics(logging_outputs, criterion)
- if self.cfg.eval_bleu:
-
- def sum_logs(key):
- import torch
- result = sum(log.get(key, 0) for log in logging_outputs)
- if torch.is_tensor(result):
- result = result.cpu()
- return result
-
- counts, totals = [], []
- for i in range(EVAL_BLEU_ORDER):
- counts.append(sum_logs("_bleu_counts_" + str(i)))
- totals.append(sum_logs("_bleu_totals_" + str(i)))
-
- if max(totals) > 0:
- # log counts as numpy arrays -- log_scalar will sum them correctly
- metrics.log_scalar("_bleu_counts", np.array(counts))
- metrics.log_scalar("_bleu_totals", np.array(totals))
- metrics.log_scalar("_bleu_sys_len", sum_logs("_bleu_sys_len"))
- metrics.log_scalar("_bleu_ref_len", sum_logs("_bleu_ref_len"))
-
- def compute_bleu(meters):
- import inspect
- try:
- from sacrebleu.metrics import BLEU
- comp_bleu = BLEU.compute_bleu
- except ImportError:
- # compatibility API for sacrebleu 1.x
- import sacrebleu
- comp_bleu = sacrebleu.compute_bleu
-
- fn_sig = inspect.getfullargspec(comp_bleu)[0]
- if "smooth_method" in fn_sig:
- smooth = {"smooth_method": "exp"}
- else:
- smooth = {"smooth": "exp"}
- bleu = comp_bleu(
- correct=meters["_bleu_counts"].sum,
- total=meters["_bleu_totals"].sum,
- sys_len=meters["_bleu_sys_len"].sum,
- ref_len=meters["_bleu_ref_len"].sum,
- **smooth
- )
- return round(bleu.score, 2)
-
- metrics.log_derived("bleu", compute_bleu)
-
- def max_positions(self):
- """Return the max sentence length allowed by the task."""
- return (self.cfg.max_source_positions, self.cfg.max_target_positions)
-
- @property
- def source_dictionary(self):
- """Return the source :class:`~fairseq.data.Dictionary`."""
- return self.src_dict
-
- @property
- def target_dictionary(self):
- """Return the target :class:`~fairseq.data.Dictionary`."""
- return self.tgt_dict
-
- def _inference_with_bleu(self, generator, sample, model):
- import sacrebleu
-
- def decode(toks, escape_unk=False):
- s = self.tgt_dict.string(
- toks.int().cpu(),
- self.cfg.eval_bleu_remove_bpe,
-                # The default unknown string in fairseq is `<unk>`, but
- # this is tokenized by sacrebleu as `< unk >`, inflating
- # BLEU scores. Instead, we use a somewhat more verbose
- # alternative that is unlikely to appear in the real
- # reference, but doesn't get split into multiple tokens.
- unk_string=("UNKNOWNTOKENINREF" if escape_unk else "UNKNOWNTOKENINHYP"),
- )
- if self.tokenizer:
- s = self.tokenizer.decode(s)
- return s
-
- gen_out = self.inference_step(generator, [model], sample, prefix_tokens=None)
- hyps, refs = [], []
- for i in range(len(gen_out)):
- hyps.append(decode(gen_out[i][0]["tokens"]))
- refs.append(
- decode(
- utils.strip_pad(sample["target"][i], self.tgt_dict.pad()),
- escape_unk=True, # don't count as matches to the hypo
- )
- )
- if self.cfg.eval_bleu_print_samples:
- logger.info("example hypothesis: " + hyps[0])
- logger.info("example reference: " + refs[0])
- if self.cfg.eval_tokenized_bleu:
- return sacrebleu.corpus_bleu(hyps, [refs], tokenize="none")
- else:
- return sacrebleu.corpus_bleu(hyps, [refs])
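
`_inference_with_bleu` above ultimately hands detokenized strings to sacrebleu. The sketch below is a minimal illustration of that call pattern, assuming sacrebleu is installed; the example sentences are made-up placeholders, and only the call shape mirrors the deleted code.

```python
import sacrebleu

hyps = ["the cat sat on the mat", "a quick brown fox"]
refs = ["the cat sat on the mat", "the quick brown fox"]

# corpus_bleu takes the list of hypotheses and a list of reference *streams*,
# hence refs is wrapped in an extra list, exactly as in _inference_with_bleu.
bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(round(bleu.score, 2))

# tokenize="none" reproduces the eval_tokenized_bleu branch, where the inputs
# are assumed to already be tokenized.
print(sacrebleu.corpus_bleu(hyps, [refs], tokenize="none").score)
```
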
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_average_checkpoints.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_average_checkpoints.py
deleted file mode 100644
index f348b56b869372d8434fe03f13324d78e9093fa2..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_average_checkpoints.py
+++ /dev/null
@@ -1,134 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import collections
-import os
-import shutil
-import tempfile
-import unittest
-
-import numpy as np
-import torch
-from scripts.average_checkpoints import average_checkpoints
-from torch import nn
-
-
-class ModelWithSharedParameter(nn.Module):
- def __init__(self):
- super(ModelWithSharedParameter, self).__init__()
- self.embedding = nn.Embedding(1000, 200)
- self.FC1 = nn.Linear(200, 200)
- self.FC2 = nn.Linear(200, 200)
- # tie weight in FC2 to FC1
- self.FC2.weight = nn.Parameter(self.FC1.weight)
- self.FC2.bias = nn.Parameter(self.FC1.bias)
-
- self.relu = nn.ReLU()
-
- def forward(self, input):
-        return self.FC2(self.relu(self.FC1(input))) + self.FC1(input)
-
-
-class TestAverageCheckpoints(unittest.TestCase):
- def test_average_checkpoints(self):
- params_0 = collections.OrderedDict(
- [
- ("a", torch.DoubleTensor([100.0])),
- ("b", torch.FloatTensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])),
- ("c", torch.IntTensor([7, 8, 9])),
- ]
- )
- params_1 = collections.OrderedDict(
- [
- ("a", torch.DoubleTensor([1.0])),
- ("b", torch.FloatTensor([[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]])),
- ("c", torch.IntTensor([2, 2, 2])),
- ]
- )
- params_avg = collections.OrderedDict(
- [
- ("a", torch.DoubleTensor([50.5])),
- ("b", torch.FloatTensor([[1.0, 1.5, 2.0], [2.5, 3.0, 3.5]])),
- # We expect truncation for integer division
- ("c", torch.IntTensor([4, 5, 5])),
- ]
- )
-
- fd_0, path_0 = tempfile.mkstemp()
- fd_1, path_1 = tempfile.mkstemp()
- torch.save(collections.OrderedDict([("model", params_0)]), path_0)
- torch.save(collections.OrderedDict([("model", params_1)]), path_1)
-
- output = average_checkpoints([path_0, path_1])["model"]
-
- os.close(fd_0)
- os.remove(path_0)
- os.close(fd_1)
- os.remove(path_1)
-
- for (k_expected, v_expected), (k_out, v_out) in zip(
- params_avg.items(), output.items()
- ):
- self.assertEqual(
- k_expected,
- k_out,
- "Key mismatch - expected {} but found {}. "
- "(Expected list of keys: {} vs actual list of keys: {})".format(
- k_expected, k_out, params_avg.keys(), output.keys()
- ),
- )
- np.testing.assert_allclose(
- v_expected.numpy(),
- v_out.numpy(),
- err_msg="Tensor value mismatch for key {}".format(k_expected),
- )
-
- def test_average_checkpoints_with_shared_parameters(self):
- def _construct_model_with_shared_parameters(path, value):
- m = ModelWithSharedParameter()
- nn.init.constant_(m.FC1.weight, value)
- torch.save({"model": m.state_dict()}, path)
- return m
-
- tmpdir = tempfile.mkdtemp()
- paths = []
- path = os.path.join(tmpdir, "m1.pt")
- m1 = _construct_model_with_shared_parameters(path, 1.0)
- paths.append(path)
-
- path = os.path.join(tmpdir, "m2.pt")
- m2 = _construct_model_with_shared_parameters(path, 2.0)
- paths.append(path)
-
- path = os.path.join(tmpdir, "m3.pt")
- m3 = _construct_model_with_shared_parameters(path, 3.0)
- paths.append(path)
-
- new_model = average_checkpoints(paths)
- self.assertTrue(
- torch.equal(
- new_model["model"]["embedding.weight"],
- (m1.embedding.weight + m2.embedding.weight + m3.embedding.weight) / 3.0,
- )
- )
-
- self.assertTrue(
- torch.equal(
- new_model["model"]["FC1.weight"],
- (m1.FC1.weight + m2.FC1.weight + m3.FC1.weight) / 3.0,
- )
- )
-
- self.assertTrue(
- torch.equal(
- new_model["model"]["FC2.weight"],
- (m1.FC2.weight + m2.FC2.weight + m3.FC2.weight) / 3.0,
- )
- )
- shutil.rmtree(tmpdir)
-
-
-if __name__ == "__main__":
- unittest.main()
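
The expected values in the first test follow directly from element-wise averaging. The snippet below reproduces that arithmetic on its own, including the integer truncation the test comments on; it does not call `scripts.average_checkpoints`, it only demonstrates the numbers asserted above.

```python
# Self-contained sketch of the averaging arithmetic the test above asserts.
import torch

a0 = torch.DoubleTensor([100.0])
a1 = torch.DoubleTensor([1.0])
print((a0 + a1) / 2)       # tensor([50.5000]) -- matches params_avg["a"]

c0 = torch.IntTensor([7, 8, 9])
c1 = torch.IntTensor([2, 2, 2])
# Integer tensors are truncated, which is why the expected value is [4, 5, 5].
print((c0 + c1) // 2)      # tensor([4, 5, 5], dtype=torch.int32)
```
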
diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/tts_infer/example_inference.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/tts_infer/example_inference.py
deleted file mode 100644
index 676718fff3c6a7120cea91b0cfc95f8872929da7..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/tts_infer/example_inference.py
+++ /dev/null
@@ -1,79 +0,0 @@
-''' Example file to test tts_infer after installing it. Refer to section 1.1 in README.md for steps of installation. '''
-
-from tts_infer.tts import TextToMel, MelToWav
-from tts_infer.transliterate import XlitEngine
-from tts_infer.num_to_word_on_sent import normalize_nums
-
-import re
-import numpy as np
-from scipy.io.wavfile import write
-
-from mosestokenizer import *
-from indicnlp.tokenize import sentence_tokenize
-
-INDIC = ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"]
-
-def split_sentences(paragraph, language):
- if language == "en":
- with MosesSentenceSplitter(language) as splitter:
- return splitter([paragraph])
- elif language in INDIC:
- return sentence_tokenize.sentence_split(paragraph, lang=language)
-
-
-device='cpu'
-text_to_mel = TextToMel(glow_model_dir='/path/to/glow_ckp', device=device)
-mel_to_wav = MelToWav(hifi_model_dir='/path/to/hifi_ckp', device=device)
-
-lang='hi' # transliteration from En to Hi
-engine = XlitEngine(lang) # loading translit model globally
-
-def translit(text, lang):
- reg = re.compile(r'[a-zA-Z]')
- words = [engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word for word in text.split()]
- updated_sent = ' '.join(words)
- return updated_sent
-
-def run_tts(text, lang):
- text = text.replace('।', '.') # only for hindi models
- text_num_to_word = normalize_nums(text, lang) # converting numbers to words in lang
- text_num_to_word_and_transliterated = translit(text_num_to_word, lang) # transliterating english words to lang
- final_text = ' ' + text_num_to_word_and_transliterated
-
- mel = text_to_mel.generate_mel(final_text)
- audio, sr = mel_to_wav.generate_wav(mel)
- write(filename='temp.wav', rate=sr, data=audio) # for saving wav file, if needed
- return (sr, audio)
-
-def run_tts_paragraph(text, lang):
- audio_list = []
-    split_sentences_list = split_sentences(text, language=lang)
-
- for sent in split_sentences_list:
- sr, audio = run_tts(sent, lang)
- audio_list.append(audio)
-
- concatenated_audio = np.concatenate([i for i in audio_list])
- write(filename='temp_long.wav', rate=sr, data=concatenated_audio)
- return (sr, concatenated_audio)
-
-if __name__ == "__main__":
- _, audio = run_tts('mera naam neeraj hai', 'hi')
-
- para = '''
- भारत मेरा देश है और मुझे भारतीय होने पर गर्व है। ये विश्व का सातवाँ सबसे बड़ा और विश्व में दूसरा सबसे अधिक जनसंख्या वाला देश है।
- इसे भारत, हिन्दुस्तान और आर्यव्रत के नाम से भी जाना जाता है। ये एक प्रायद्वीप है जो पूरब में बंगाल की खाड़ी,
- पश्चिम में अरेबियन सागर और दक्षिण में भारतीय महासागर जैसे तीन महासगरों से घिरा हुआ है।
- भारत का राष्ट्रीय पशु चीता, राष्ट्रीय पक्षी मोर, राष्ट्रीय फूल कमल, और राष्ट्रीय फल आम है।
- भारत मेरा देश है और मुझे भारतीय होने पर गर्व है। ये विश्व का सातवाँ सबसे बड़ा और विश्व में दूसरा सबसे अधिक जनसंख्या वाला देश है।
- इसे भारत, हिन्दुस्तान और आर्यव्रत के नाम से भी जाना जाता है। ये एक प्रायद्वीप है जो पूरब में बंगाल की खाड़ी,
- पश्चिम में अरेबियन सागर और दक्षिण में भारतीय महासागर जैसे तीन महासगरों से घिरा हुआ है।
- भारत का राष्ट्रीय पशु चीता, राष्ट्रीय पक्षी मोर, राष्ट्रीय फूल कमल, और राष्ट्रीय फल आम है।
- भारत मेरा देश है और मुझे भारतीय होने पर गर्व है। ये विश्व का सातवाँ सबसे बड़ा और विश्व में दूसरा सबसे अधिक जनसंख्या वाला देश है।
- इसे भारत, हिन्दुस्तान और आर्यव्रत के नाम से भी जाना जाता है। ये एक प्रायद्वीप है जो पूरब में बंगाल की खाड़ी,
- पश्चिम में अरेबियन सागर और दक्षिण में भारतीय महासागर जैसे तीन महासगरों से घिरा हुआ है।
- भारत का राष्ट्रीय पशु चीता, राष्ट्रीय पक्षी मोर, राष्ट्रीय फूल कमल, और राष्ट्रीय फल आम है।
- '''
-
- print('Num chars in paragraph: ', len(para))
- _, audio_long = run_tts_paragraph(para, 'hi')
diff --git a/spaces/Hoodady/3DFuse/misc.py b/spaces/Hoodady/3DFuse/misc.py
deleted file mode 100644
index d6675b9e984c6cea13b15ef1eb53ca308f4c2464..0000000000000000000000000000000000000000
--- a/spaces/Hoodady/3DFuse/misc.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import numpy as np
-import torch
-
-
-def torch_samps_to_imgs(imgs, uncenter=True):
- if uncenter:
- imgs = (imgs + 1) / 2 # [-1, 1] -> [0, 1]
- imgs = (imgs * 255).clamp(0, 255)
- imgs = imgs.to(torch.uint8)
- imgs = imgs.permute(0, 2, 3, 1)
- imgs = imgs.cpu().numpy()
- return imgs
-
-
-def imgs_to_torch(imgs):
- assert imgs.dtype == np.uint8
- assert len(imgs.shape) == 4 and imgs.shape[-1] == 3, "expect (N, H, W, C)"
- _, H, W, _ = imgs.shape
-
- imgs = imgs.transpose(0, 3, 1, 2)
- imgs = (imgs / 255).astype(np.float32)
- imgs = (imgs * 2) - 1
- imgs = torch.as_tensor(imgs)
- H, W = [_l - (_l % 32) for _l in (H, W)]
- imgs = torch.nn.functional.interpolate(imgs, (H, W), mode="bilinear")
- return imgs
-
-
-def test_encode_decode():
- import imageio
- from run_img_sampling import ScoreAdapter, SD
- from vis import _draw
-
- fname = "~/clean.png"
- raw = imageio.imread(fname)
- raw = imgs_to_torch(raw[np.newaxis, ...])
-
- model: ScoreAdapter = SD().run()
- raw = raw.to(model.device)
- zs = model.encode(raw)
- img = model.decode(zs)
- img = torch_samps_to_imgs(img)
- _draw(
- [imageio.imread(fname), img.squeeze(0)],
- )
-
-
-def test():
- test_encode_decode()
-
-
-if __name__ == "__main__":
- test()
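
The two helpers above are inverses up to rounding. Here is a self-contained sketch of that round trip on random data, with the file path, resizing, and `ScoreAdapter` parts left out; the tolerance of one grey level is an assumption that only accounts for float32 rounding.

```python
# Round-trip the uint8 <-> [-1, 1] conversion done by imgs_to_torch /
# torch_samps_to_imgs, using a random image instead of a file on disk.
import numpy as np
import torch

imgs = np.random.randint(0, 256, size=(1, 64, 64, 3), dtype=np.uint8)  # (N, H, W, C)

x = torch.as_tensor((imgs.transpose(0, 3, 1, 2) / 255).astype(np.float32))
x = x * 2 - 1                                              # center to [-1, 1]

y = ((x + 1) / 2 * 255).clamp(0, 255).to(torch.uint8)      # uncenter back to uint8
y = y.permute(0, 2, 3, 1).cpu().numpy()

# Up to rounding, the round trip reproduces the input.
assert np.abs(y.astype(np.int16) - imgs.astype(np.int16)).max() <= 1
```
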
diff --git a/spaces/ICML2022/OFA/fairseq/examples/noisychannel/README.md b/spaces/ICML2022/OFA/fairseq/examples/noisychannel/README.md
deleted file mode 100644
index 9d101aa874ec36ff3bb5c1166169a4c4f38ffe2b..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/noisychannel/README.md
+++ /dev/null
@@ -1,72 +0,0 @@
-# Simple and Effective Noisy Channel Modeling for Neural Machine Translation (Yee et al., 2019)
-This page contains pointers to pre-trained models as well as instructions on how to run the reranking scripts.
-
-## Citation:
-```bibtex
-@inproceedings{yee2019simple,
- title = {Simple and Effective Noisy Channel Modeling for Neural Machine Translation},
- author = {Kyra Yee and Yann Dauphin and Michael Auli},
- booktitle = {Conference on Empirical Methods in Natural Language Processing},
- year = {2019},
-}
-```
-
-## Pre-trained Models:
-
-Model | Description | Download
----|---|---
-`transformer.noisychannel.de-en` | De->En Forward Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/forward_de2en.tar.bz2)
-`transformer.noisychannel.en-de` | En->De Channel Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/backward_en2de.tar.bz2)
-`transformer_lm.noisychannel.en` | En Language model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/reranking_en_lm.tar.bz2)
-
-Test Data: [newstest_wmt17](https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/wmt17test.tar.bz2)
-
-## Example usage
-
-```
-mkdir rerank_example
-curl https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/forward_de2en.tar.bz2 | tar xvjf - -C rerank_example
-curl https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/backward_en2de.tar.bz2 | tar xvjf - -C rerank_example
-curl https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/reranking_en_lm.tar.bz2 | tar xvjf - -C rerank_example
-curl https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/wmt17test.tar.bz2 | tar xvjf - -C rerank_example
-
-beam=50
-num_trials=1000
-fw_name=fw_model_ex
-bw_name=bw_model_ex
-lm_name=lm_ex
-data_dir=rerank_example/hyphen-splitting-mixed-case-wmt17test-wmt14bpe
-data_dir_name=wmt17
-lm=rerank_example/lm/checkpoint_best.pt
-lm_bpe_code=rerank_example/lm/bpe32k.code
-lm_dict=rerank_example/lm/dict.txt
-batch_size=32
-bw=rerank_example/backward_en2de.pt
-fw=rerank_example/forward_de2en.pt
-
-# reranking with P(T|S) P(S|T) and P(T)
-python examples/noisychannel/rerank_tune.py $data_dir --tune-param lenpen weight1 weight3 \
- --lower-bound 0 0 0 --upper-bound 3 3 3 --data-dir-name $data_dir_name \
- --num-trials $num_trials --source-lang de --target-lang en --gen-model $fw \
- -n $beam --batch-size $batch_size --score-model2 $fw --score-model1 $bw \
- --backwards1 --weight2 1 \
- -lm $lm --lm-dict $lm_dict --lm-name en_newscrawl --lm-bpe-code $lm_bpe_code \
- --model2-name $fw_name --model1-name $bw_name --gen-model-name $fw_name
-
-# reranking with P(T|S) and P(T)
-python examples/noisychannel/rerank_tune.py $data_dir --tune-param lenpen weight3 \
- --lower-bound 0 0 --upper-bound 3 3 --data-dir-name $data_dir_name \
- --num-trials $num_trials --source-lang de --target-lang en --gen-model $fw \
- -n $beam --batch-size $batch_size --score-model1 $fw \
- -lm $lm --lm-dict $lm_dict --lm-name en_newscrawl --lm-bpe-code $lm_bpe_code \
- --model1-name $fw_name --gen-model-name $fw_name
-
-# to run with a preconfigured set of hyperparameters for the lenpen and model weights, use rerank.py instead.
-python examples/noisychannel/rerank.py $data_dir \
- --lenpen 0.269 --weight1 1 --weight2 0.929 --weight3 0.831 \
- --data-dir-name $data_dir_name --source-lang de --target-lang en --gen-model $fw \
- -n $beam --batch-size $batch_size --score-model2 $fw --score-model1 $bw --backwards1 \
- -lm $lm --lm-dict $lm_dict --lm-name en_newscrawl --lm-bpe-code $lm_bpe_code \
- --model2-name $fw_name --model1-name $bw_name --gen-model-name $fw_name
-```
-
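
The commands above tune `weight1`/`weight2`/`weight3` for the channel model P(S|T), the direct model P(T|S), and the language model P(T), together with a length penalty `lenpen`. The sketch below only illustrates that kind of weighted log-linear combination; the exact scoring in `examples/noisychannel/rerank.py` may differ, so treat the function and the toy numbers as assumptions rather than the scripts' implementation. The default weights are copied from the preconfigured `rerank.py` command above.

```python
# Illustrative only: a weighted combination of the three model scores with a
# GNMT-style length penalty, not the rerank.py implementation.
def noisy_channel_score(log_p_tgt_given_src,   # direct model,   log P(T|S)
                        log_p_src_given_tgt,   # channel model,  log P(S|T)
                        log_p_tgt,             # language model, log P(T)
                        tgt_len,
                        weight1=1.0,           # channel model weight
                        weight2=0.929,         # direct model weight
                        weight3=0.831,         # language model weight
                        lenpen=0.269):
    combined = (weight1 * log_p_src_given_tgt
                + weight2 * log_p_tgt_given_src
                + weight3 * log_p_tgt)
    return combined / (tgt_len ** lenpen)

# Rerank a toy 2-best list and keep the higher-scoring candidate.
candidates = [
    {"scores": (-4.1, -5.0, -9.2), "len": 7},
    {"scores": (-4.4, -4.2, -8.1), "len": 8},
]
best = max(candidates, key=lambda c: noisy_channel_score(*c["scores"], c["len"]))
print(best)
```
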
diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/resample.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros/resample.py
deleted file mode 100644
index 5e96106c9a066e6d73652c544322d029dd98f746..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/resample.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import os
-import argparse
-import librosa
-import numpy as np
-from multiprocessing import Pool, cpu_count
-from scipy.io import wavfile
-from tqdm import tqdm
-
-
-def process(item):
- spkdir, wav_name, args = item
- # speaker 's5', 'p280', 'p315' are excluded,
- speaker = spkdir.replace("\\", "/").split("/")[-1]
- wav_path = os.path.join(args.in_dir, speaker, wav_name)
- if os.path.exists(wav_path) and '.wav' in wav_path:
- os.makedirs(os.path.join(args.out_dir2, speaker), exist_ok=True)
-        wav, sr = librosa.load(wav_path, sr=None)
- wav, _ = librosa.effects.trim(wav, top_db=20)
- peak = np.abs(wav).max()
- if peak > 1.0:
- wav = 0.98 * wav / peak
- wav2 = librosa.resample(wav, orig_sr=sr, target_sr=args.sr2)
- wav2 /= max(wav2.max(), -wav2.min())
- save_name = wav_name
- save_path2 = os.path.join(args.out_dir2, speaker, save_name)
- wavfile.write(
- save_path2,
- args.sr2,
- (wav2 * np.iinfo(np.int16).max).astype(np.int16)
- )
-
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--sr2", type=int, default=44100, help="sampling rate")
- parser.add_argument("--in_dir", type=str, default="./dataset_raw", help="path to source dir")
- parser.add_argument("--out_dir2", type=str, default="./dataset/44k", help="path to target dir")
- args = parser.parse_args()
- processs = cpu_count()-2 if cpu_count() >4 else 1
- pool = Pool(processes=processs)
-
- for speaker in os.listdir(args.in_dir):
- spk_dir = os.path.join(args.in_dir, speaker)
- if os.path.isdir(spk_dir):
- print(spk_dir)
- for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])):
- pass
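
For a single array, `process()` above boils down to loading, trimming, peak-normalizing, resampling, and writing 16-bit PCM. The sketch below repeats the normalization and int16 conversion on a synthetic tone instead of a file from `dataset_raw`; the trim step is omitted, and the tone itself is just an assumption for illustration.

```python
# Toy, single-array version of the resample-and-write step in process().
import numpy as np
import librosa
from scipy.io import wavfile

sr_in, sr_out = 22050, 44100
t = np.linspace(0, 1.0, sr_in, endpoint=False)
wav = 0.5 * np.sin(2 * np.pi * 220 * t).astype(np.float32)

wav2 = librosa.resample(wav, orig_sr=sr_in, target_sr=sr_out)
wav2 /= max(wav2.max(), -wav2.min())          # peak-normalize to [-1, 1]
wavfile.write("tone_44k.wav", sr_out,
              (wav2 * np.iinfo(np.int16).max).astype(np.int16))
```
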
diff --git a/spaces/Jaffermirza17/ProjectPythonClass/README.md b/spaces/Jaffermirza17/ProjectPythonClass/README.md
deleted file mode 100644
index 071f75bdcae08875452f1bfdc60985d7523dfc7c..0000000000000000000000000000000000000000
--- a/spaces/Jaffermirza17/ProjectPythonClass/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ProjectPythonClass
-emoji: 📊
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jeff2323/ai-comic-factory/src/app/engine/censorship.ts b/spaces/Jeff2323/ai-comic-factory/src/app/engine/censorship.ts
deleted file mode 100644
index 56e36e81eac65961aedf55bca559dc403058db28..0000000000000000000000000000000000000000
--- a/spaces/Jeff2323/ai-comic-factory/src/app/engine/censorship.ts
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-// unfortunately due to abuse by some users, I have to add this NSFW filter
-const secretSalt = `${process.env.SECRET_CENSORSHIP_KEY || ""}`
-
-// TODO: the censorship is not actually implemented yet
\ No newline at end of file
diff --git a/spaces/Kaikaikai/webgl_demo/index.html b/spaces/Kaikaikai/webgl_demo/index.html
deleted file mode 100644
index 40514bd96ab4dd19ff2e2cd53e63c53f2d09bec5..0000000000000000000000000000000000000000
--- a/spaces/Kaikaikai/webgl_demo/index.html
+++ /dev/null
@@ -1,65 +0,0 @@
-
- Learn WebGL
-
- [The rest of this deleted WebGL demo page (its canvas, play ("▶") control, scripts, and styles) was stripped during extraction and is not reproduced here.]
-
\ No newline at end of file
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/train/extract/extract_f0_print.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/train/extract/extract_f0_print.py
deleted file mode 100644
index 14ef598d73b807974204664f100c828918199816..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/train/extract/extract_f0_print.py
+++ /dev/null
@@ -1,298 +0,0 @@
-import os
-import sys
-import traceback
-
-import parselmouth
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-import logging
-from LazyImport import lazyload
-
-import numpy as np
-import pyworld
-torchcrepe = lazyload("torchcrepe") # Fork Feature. Crepe algo for training and preprocess
-torch = lazyload("torch")
-#from torch import Tensor # Fork Feature. Used for pitch prediction for torch crepe.
-tqdm = lazyload("tqdm")
-from infer.lib.audio import load_audio
-
-logging.getLogger("numba").setLevel(logging.WARNING)
-from multiprocessing import Process
-
-exp_dir = sys.argv[1]
-f = open("%s/extract_f0_feature.log" % exp_dir, "a+")
-
-DoFormant = False
-Quefrency = 1.0
-Timbre = 1.0
-
-def printt(strr):
- print(strr)
- f.write(f"{strr}\n")
- f.flush()
-
-
-n_p = int(sys.argv[2])
-f0method = sys.argv[3]
-extraction_crepe_hop_length = 0
-try:
- extraction_crepe_hop_length = int(sys.argv[4])
-except:
- print("Temp Issue. echl is not being passed with argument!")
- extraction_crepe_hop_length = 128
-
-class FeatureInput(object):
- def __init__(self, samplerate=16000, hop_size=160):
- self.fs = samplerate
- self.hop = hop_size
-
- self.f0_bin = 256
- self.f0_max = 1100.0
- self.f0_min = 50.0
- self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
- self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
-
- def mncrepe(self, method, x, p_len, crepe_hop_length):
- f0 = None
- torch_device_index = 0
- torch_device = torch.device(
- f"cuda:{torch_device_index % torch.cuda.device_count()}"
- ) if torch.cuda.is_available() \
- else torch.device("mps") if torch.backends.mps.is_available() \
- else torch.device("cpu")
-
- audio = torch.from_numpy(x.astype(np.float32)).to(torch_device, copy=True)
- audio /= torch.quantile(torch.abs(audio), 0.999)
- audio = torch.unsqueeze(audio, dim=0)
- if audio.ndim == 2 and audio.shape[0] > 1:
- audio = torch.mean(audio, dim=0, keepdim=True).detach()
- audio = audio.detach()
-
- if method == 'mangio-crepe':
- pitch: torch.Tensor = torchcrepe.predict(
- audio,
- self.fs,
- crepe_hop_length,
- self.f0_min,
- self.f0_max,
- "full",
- batch_size=crepe_hop_length * 2,
- device=torch_device,
- pad=True,
- )
- p_len = p_len or x.shape[0] // crepe_hop_length
- # Resize the pitch
- source = np.array(pitch.squeeze(0).cpu().float().numpy())
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * p_len, len(source)) / p_len,
- np.arange(0, len(source)),
- source,
- )
- f0 = np.nan_to_num(target)
-
- elif method == 'crepe':
- batch_size = 512
- audio = torch.tensor(np.copy(x))[None].float()
- f0, pd = torchcrepe.predict(
- audio,
- self.fs,
- 160,
- self.f0_min,
- self.f0_max,
- "full",
- batch_size=batch_size,
- device=torch_device,
- return_periodicity=True,
- )
- pd = torchcrepe.filter.median(pd, 3)
- f0 = torchcrepe.filter.mean(f0, 3)
- f0[pd < 0.1] = 0
- f0 = f0[0].cpu().numpy()
- f0 = f0[1:] # Get rid of extra first frame
-
- return f0
-
- def get_pm(self, x, p_len):
- f0 = parselmouth.Sound(x, self.fs).to_pitch_ac(
- time_step=160 / 16000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- ).selected_array["frequency"]
-
- return np.pad(
- f0,
- [[max(0, (p_len - len(f0) + 1) // 2), max(0, p_len - len(f0) - (p_len - len(f0) + 1) // 2)]],
- mode="constant"
- )
-
- def get_harvest(self, x):
- f0_spectral = pyworld.harvest(
- x.astype(np.double),
- fs=self.fs,
- f0_ceil=self.f0_max,
- f0_floor=self.f0_min,
- frame_period=1000 * self.hop / self.fs,
- )
- return pyworld.stonemask(x.astype(np.double), *f0_spectral, self.fs)
-
- def get_dio(self, x):
- f0_spectral = pyworld.dio(
- x.astype(np.double),
- fs=self.fs,
- f0_ceil=self.f0_max,
- f0_floor=self.f0_min,
- frame_period=1000 * self.hop / self.fs,
- )
- return pyworld.stonemask(x.astype(np.double), *f0_spectral, self.fs)
-
- def get_rmvpe(self, x):
- if hasattr(self, "model_rmvpe") == False:
- from infer.lib.rmvpe import RMVPE
-
- print("Loading rmvpe model")
- self.model_rmvpe = RMVPE(
- "assets/rmvpe/rmvpe.pt", is_half=False, device="cpu"
- )
- return self.model_rmvpe.infer_from_audio(x, thred=0.03)
-
- def get_rmvpe_dml(self, x):
- ...
-
- def get_f0_method_dict(self):
- return {
- "pm": self.get_pm,
- "harvest": self.get_harvest,
- "dio": self.get_dio,
- "rmvpe": self.get_rmvpe
- }
-
- def get_f0_hybrid_computation(
- self,
- methods_str,
- x,
- p_len,
- crepe_hop_length,
- ):
- # Get various f0 methods from input to use in the computation stack
- s = methods_str
- s = s.split("hybrid")[1]
- s = s.replace("[", "").replace("]", "")
- methods = s.split("+")
-        f0_computation_stack = []
-        f0_methods = self.get_f0_method_dict()
-
-        for method in methods:
-            if method in f0_methods:
-                f0 = f0_methods[method](x, p_len) if method == 'pm' else f0_methods[method](x)
-                f0_computation_stack.append(f0)
-            elif method in ('crepe', 'mangio-crepe'):
-                # crepe-based estimators go through mncrepe and join the stack as well
-                f0_computation_stack.append(self.mncrepe(method, x, p_len, crepe_hop_length))
-
- if len(f0_computation_stack) != 0:
- f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0) if len(f0_computation_stack)>1 else f0_computation_stack[0]
- return f0_median_hybrid
- else:
- raise ValueError("No valid methods were provided")
-
- def compute_f0(self, path, f0_method, crepe_hop_length):
- x = load_audio(path, self.fs, DoFormant, Quefrency, Timbre)
- p_len = x.shape[0] // self.hop
-
-        f0_methods = self.get_f0_method_dict()
-        if f0_method in f0_methods:
-            f0 = f0_methods[f0_method](x, p_len) if f0_method == 'pm' else f0_methods[f0_method](x)
- elif f0_method in ['crepe', 'mangio-crepe']:
- f0 = self.mncrepe(f0_method, x, p_len, crepe_hop_length)
- elif "hybrid" in f0_method: # EXPERIMENTAL
- # Perform hybrid median pitch estimation
- f0 = self.get_f0_hybrid_computation(
- f0_method,
- x,
- p_len,
- crepe_hop_length,
- )
- return f0
-
- def coarse_f0(self, f0):
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * (
- self.f0_bin - 2
- ) / (self.f0_mel_max - self.f0_mel_min) + 1
-
- # use 0 or 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1
- f0_coarse = np.rint(f0_mel).astype(int)
- assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (
- f0_coarse.max(),
- f0_coarse.min(),
- )
- return f0_coarse
-
- def go(self, paths, f0_method, crepe_hop_length, thread_n):
- if len(paths) == 0:
- printt("no-f0-todo")
- return
- with tqdm.tqdm(total=len(paths), leave=True, position=thread_n) as pbar:
- description = f"thread:{thread_n}, f0ing, Hop-Length:{crepe_hop_length}"
- pbar.set_description(description)
-
- for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths):
- try:
- if (
- os.path.exists(opt_path1 + ".npy")
- and os.path.exists(opt_path2 + ".npy")
- ):
- pbar.update(1)
- continue
-
- featur_pit = self.compute_f0(inp_path, f0_method, crepe_hop_length)
- np.save(
- opt_path2,
- featur_pit,
- allow_pickle=False,
- ) # nsf
- coarse_pit = self.coarse_f0(featur_pit)
- np.save(
- opt_path1,
- coarse_pit,
- allow_pickle=False,
- ) # ori
- pbar.update(1)
- except Exception as e:
- printt(f"f0fail-{idx}-{inp_path}-{traceback.format_exc()}")
-
-
-if __name__ == "__main__":
- # exp_dir=r"E:\codes\py39\dataset\mi-test"
- # n_p=16
- # f = open("%s/log_extract_f0.log"%exp_dir, "w")
- printt(sys.argv)
- featureInput = FeatureInput()
- paths = []
- inp_root = "%s/1_16k_wavs" % (exp_dir)
- opt_root1 = "%s/2a_f0" % (exp_dir)
- opt_root2 = "%s/2b-f0nsf" % (exp_dir)
-
- os.makedirs(opt_root1, exist_ok=True)
- os.makedirs(opt_root2, exist_ok=True)
- for name in sorted(list(os.listdir(inp_root))):
- inp_path = "%s/%s" % (inp_root, name)
- if "spec" in inp_path:
- continue
- opt_path1 = "%s/%s" % (opt_root1, name)
- opt_path2 = "%s/%s" % (opt_root2, name)
- paths.append([inp_path, opt_path1, opt_path2])
-
- ps = []
- print("Using f0 method: " + f0method)
- for i in range(n_p):
- p = Process(
- target=featureInput.go,
- args=(paths[i::n_p], f0method, extraction_crepe_hop_length, i),
- )
- ps.append(p)
- p.start()
- for i in range(n_p):
- ps[i].join()
\ No newline at end of file
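
`coarse_f0` above quantizes f0 values in Hz onto 255 mel-scale bins, with bin 1 effectively reserved for unvoiced frames. A standalone version with the same constants, shown below, makes the mapping easy to inspect; the sample frequencies are arbitrary.

```python
# Standalone sketch of the coarse_f0 quantization; constants mirror
# FeatureInput.__init__ above.
import numpy as np

f0_bin = 256
f0_min, f0_max = 50.0, 1100.0
f0_mel_min = 1127 * np.log(1 + f0_min / 700)
f0_mel_max = 1127 * np.log(1 + f0_max / 700)

def coarse_f0(f0):
    f0_mel = 1127 * np.log(1 + f0 / 700)
    f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (
        f0_mel_max - f0_mel_min
    ) + 1
    f0_mel[f0_mel <= 1] = 1
    f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1
    return np.rint(f0_mel).astype(int)

# 0 Hz (unvoiced) maps to bin 1; 1100 Hz hits the top bin, 255.
print(coarse_f0(np.array([0.0, 110.0, 440.0, 1100.0])))
```
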
diff --git a/spaces/Kevin676/AutoGPT/tests/unit/test_chat.py b/spaces/Kevin676/AutoGPT/tests/unit/test_chat.py
deleted file mode 100644
index 774f4103762c28d5a02e89c14b224fae0bc0756a..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/tests/unit/test_chat.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# Generated by CodiumAI
-import time
-import unittest
-from unittest.mock import patch
-
-from autogpt.chat import create_chat_message, generate_context
-
-
-class TestChat(unittest.TestCase):
- # Tests that the function returns a dictionary with the correct keys and values when valid strings are provided for role and content.
- def test_happy_path_role_content(self):
- result = create_chat_message("system", "Hello, world!")
- self.assertEqual(result, {"role": "system", "content": "Hello, world!"})
-
- # Tests that the function returns a dictionary with the correct keys and values when empty strings are provided for role and content.
- def test_empty_role_content(self):
- result = create_chat_message("", "")
- self.assertEqual(result, {"role": "", "content": ""})
-
- # Tests the behavior of the generate_context function when all input parameters are empty.
- @patch("time.strftime")
- def test_generate_context_empty_inputs(self, mock_strftime):
- # Mock the time.strftime function to return a fixed value
- mock_strftime.return_value = "Sat Apr 15 00:00:00 2023"
- # Arrange
- prompt = ""
- relevant_memory = ""
- full_message_history = []
- model = "gpt-3.5-turbo-0301"
-
- # Act
- result = generate_context(prompt, relevant_memory, full_message_history, model)
-
- # Assert
- expected_result = (
- -1,
- 47,
- 3,
- [
- {"role": "system", "content": ""},
- {
- "role": "system",
- "content": f"The current time and date is {time.strftime('%c')}",
- },
- {
- "role": "system",
- "content": f"This reminds you of these events from your past:\n\n\n",
- },
- ],
- )
- self.assertEqual(result, expected_result)
-
- # Tests that the function successfully generates a current_context given valid inputs.
- def test_generate_context_valid_inputs(self):
- # Given
- prompt = "What is your favorite color?"
- relevant_memory = "You once painted your room blue."
- full_message_history = [
- create_chat_message("user", "Hi there!"),
- create_chat_message("assistant", "Hello! How can I assist you today?"),
- create_chat_message("user", "Can you tell me a joke?"),
- create_chat_message(
- "assistant",
- "Why did the tomato turn red? Because it saw the salad dressing!",
- ),
- create_chat_message("user", "Haha, that's funny."),
- ]
- model = "gpt-3.5-turbo-0301"
-
- # When
- result = generate_context(prompt, relevant_memory, full_message_history, model)
-
- # Then
- self.assertIsInstance(result[0], int)
- self.assertIsInstance(result[1], int)
- self.assertIsInstance(result[2], int)
- self.assertIsInstance(result[3], list)
- self.assertGreaterEqual(result[0], 0)
- self.assertGreaterEqual(result[1], 0)
- self.assertGreaterEqual(result[2], 0)
- self.assertGreaterEqual(
- len(result[3]), 3
- ) # current_context should have at least 3 messages
- self.assertLessEqual(
- result[1], 2048
- ) # token limit for GPT-3.5-turbo-0301 is 2048 tokens
diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/visualizations.py b/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/visualizations.py
deleted file mode 100644
index 980c74f95f1f7df41ebccc983600b2713c0b0502..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/visualizations.py
+++ /dev/null
@@ -1,178 +0,0 @@
-from encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataset
-from datetime import datetime
-from time import perf_counter as timer
-import matplotlib.pyplot as plt
-import numpy as np
-# import webbrowser
-import visdom
-import umap
-
-colormap = np.array([
- [76, 255, 0],
- [0, 127, 70],
- [255, 0, 0],
- [255, 217, 38],
- [0, 135, 255],
- [165, 0, 165],
- [255, 167, 255],
- [0, 255, 255],
- [255, 96, 38],
- [142, 76, 0],
- [33, 0, 127],
- [0, 0, 0],
- [183, 183, 183],
-], dtype=np.float64) / 255
-
-
-class Visualizations:
- def __init__(self, env_name=None, update_every=10, server="http://localhost", disabled=False):
- # Tracking data
- self.last_update_timestamp = timer()
- self.update_every = update_every
- self.step_times = []
- self.losses = []
- self.eers = []
- print("Updating the visualizations every %d steps." % update_every)
-
- # If visdom is disabled TODO: use a better paradigm for that
- self.disabled = disabled
- if self.disabled:
- return
-
- # Set the environment name
- now = str(datetime.now().strftime("%d-%m %Hh%M"))
- if env_name is None:
- self.env_name = now
- else:
- self.env_name = "%s (%s)" % (env_name, now)
-
- # Connect to visdom and open the corresponding window in the browser
- try:
- self.vis = visdom.Visdom(server, env=self.env_name, raise_exceptions=True)
- except ConnectionError:
- raise Exception("No visdom server detected. Run the command \"visdom\" in your CLI to "
- "start it.")
- # webbrowser.open("http://localhost:8097/env/" + self.env_name)
-
- # Create the windows
- self.loss_win = None
- self.eer_win = None
- # self.lr_win = None
- self.implementation_win = None
- self.projection_win = None
- self.implementation_string = ""
-
- def log_params(self):
- if self.disabled:
- return
- from encoder import params_data
- from encoder import params_model
- param_string = "Model parameters: "
- for param_name in (p for p in dir(params_model) if not p.startswith("__")):
- value = getattr(params_model, param_name)
- param_string += "\t%s: %s " % (param_name, value)
- param_string += "Data parameters: "
- for param_name in (p for p in dir(params_data) if not p.startswith("__")):
- value = getattr(params_data, param_name)
- param_string += "\t%s: %s " % (param_name, value)
- self.vis.text(param_string, opts={"title": "Parameters"})
-
- def log_dataset(self, dataset: SpeakerVerificationDataset):
- if self.disabled:
- return
- dataset_string = ""
- dataset_string += "Speakers: %s\n" % len(dataset.speakers)
- dataset_string += "\n" + dataset.get_logs()
- dataset_string = dataset_string.replace("\n", " ")
- self.vis.text(dataset_string, opts={"title": "Dataset"})
-
- def log_implementation(self, params):
- if self.disabled:
- return
- implementation_string = ""
- for param, value in params.items():
- implementation_string += "%s: %s\n" % (param, value)
- implementation_string = implementation_string.replace("\n", " ")
- self.implementation_string = implementation_string
- self.implementation_win = self.vis.text(
- implementation_string,
- opts={"title": "Training implementation"}
- )
-
- def update(self, loss, eer, step):
- # Update the tracking data
- now = timer()
- self.step_times.append(1000 * (now - self.last_update_timestamp))
- self.last_update_timestamp = now
- self.losses.append(loss)
- self.eers.append(eer)
- print(".", end="")
-
-        # Update the plots every <update_every> steps
- if step % self.update_every != 0:
- return
- time_string = "Step time: mean: %5dms std: %5dms" % \
- (int(np.mean(self.step_times)), int(np.std(self.step_times)))
- print("\nStep %6d Loss: %.4f EER: %.4f %s" %
- (step, np.mean(self.losses), np.mean(self.eers), time_string))
- if not self.disabled:
- self.loss_win = self.vis.line(
- [np.mean(self.losses)],
- [step],
- win=self.loss_win,
- update="append" if self.loss_win else None,
- opts=dict(
- legend=["Avg. loss"],
- xlabel="Step",
- ylabel="Loss",
- title="Loss",
- )
- )
- self.eer_win = self.vis.line(
- [np.mean(self.eers)],
- [step],
- win=self.eer_win,
- update="append" if self.eer_win else None,
- opts=dict(
- legend=["Avg. EER"],
- xlabel="Step",
- ylabel="EER",
- title="Equal error rate"
- )
- )
- if self.implementation_win is not None:
- self.vis.text(
- self.implementation_string + ("%s" % time_string),
- win=self.implementation_win,
- opts={"title": "Training implementation"},
- )
-
- # Reset the tracking
- self.losses.clear()
- self.eers.clear()
- self.step_times.clear()
-
- def draw_projections(self, embeds, utterances_per_speaker, step, out_fpath=None,
- max_speakers=10):
- max_speakers = min(max_speakers, len(colormap))
- embeds = embeds[:max_speakers * utterances_per_speaker]
-
- n_speakers = len(embeds) // utterances_per_speaker
- ground_truth = np.repeat(np.arange(n_speakers), utterances_per_speaker)
- colors = [colormap[i] for i in ground_truth]
-
- reducer = umap.UMAP()
- projected = reducer.fit_transform(embeds)
- plt.scatter(projected[:, 0], projected[:, 1], c=colors)
- plt.gca().set_aspect("equal", "datalim")
- plt.title("UMAP projection (step %d)" % step)
- if not self.disabled:
- self.projection_win = self.vis.matplot(plt, win=self.projection_win)
- if out_fpath is not None:
- plt.savefig(out_fpath)
- plt.clf()
-
- def save(self):
- if not self.disabled:
- self.vis.save([self.env_name])
-
\ No newline at end of file
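
The projection logic in `draw_projections` above is straightforward to try outside of training. The sketch below assumes only `umap-learn` and matplotlib are available, uses synthetic speaker embeddings instead of real ones, and skips the visdom plumbing entirely.

```python
# Minimal, self-contained version of the UMAP projection in draw_projections.
import matplotlib.pyplot as plt
import numpy as np
import umap

n_speakers, utterances_per_speaker, dim = 4, 10, 256
rng = np.random.default_rng(0)
# Give each "speaker" its own cluster center so the projection has structure.
centers = rng.normal(size=(n_speakers, dim))
embeds = np.concatenate(
    [c + 0.1 * rng.normal(size=(utterances_per_speaker, dim)) for c in centers]
)

ground_truth = np.repeat(np.arange(n_speakers), utterances_per_speaker)
projected = umap.UMAP().fit_transform(embeds)

plt.scatter(projected[:, 0], projected[:, 1], c=ground_truth, cmap="tab10")
plt.gca().set_aspect("equal", "datalim")
plt.title("UMAP projection (toy data)")
plt.savefig("umap_toy.png")
```
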
diff --git a/spaces/Kimata/Sanskrit-TTS/normalizer_utils.py b/spaces/Kimata/Sanskrit-TTS/normalizer_utils.py
deleted file mode 100644
index ea19ee520363ebbf5df38e95c52c386040789cd3..0000000000000000000000000000000000000000
--- a/spaces/Kimata/Sanskrit-TTS/normalizer_utils.py
+++ /dev/null
@@ -1,122 +0,0 @@
-DEPENDENT_VOWELS = ["ा", "ि", "ी", "ु", "ू", "े", "ै", "ो", "ौ", "ं", "ः", "ृ", "ॄ"]
-
-punctuation_marks = ["।", "॥", "॥", "'", '.', ',', '!', '?', ':', ';', '"', "'", '(', ')', '[', ']', '{', '}', '-', '_', '/', '\\', '|', '@', '#', '$', '%', '&', '*', '=', '<', '>', '^', '~', '__']
-
-
-dict_num = {'१': 'एकः',
- '२': 'द्वौ',
- '३': 'त्रयः',
- '४': 'चत्वारः',
- '५': 'पञ्च',
- '६': 'षट्',
- '७': 'सप्त',
- '८': 'अष्ट',
- '९': 'नव',
- '१॰': 'दश',
- '११': 'एकादशन्',
- '१२': 'द्वादशन्',
- '१३': 'त्रयोदशन्',
- '१४': 'चतुर्दशन्',
- '१५': 'पञ्चदशन्',
- '१६': 'षोडशन्',
- '१७': 'सप्तदशन्',
- '१८': 'ष्टादशन्',
- '१९': 'नवदशन्',
- '२॰': 'विंशति',
- '२१': 'एकाविंशति',
- '२२': 'द्वाविंशति',
- '२३': 'त्रयोविंशति',
- '२४': 'चतुर्विंशति',
- '२५': 'पञ्चविंशति',
- '२६': 'षड्विंशति',
- '२७': 'सप्तविंशति',
- '२८': 'ष्टाविंशति',
- '२९': 'नवविंशति',
- '३॰': 'त्रिंशत्',
- '३१': 'एकत्रिंशत्',
- '३२': 'द्वात्रिंशत्',
- '३३': 'त्रयत्रिंशत्',
- '३४': 'चतुस्त्रिंशत्',
- '३५': 'पञ्चत्रिंशत्',
- '३६': 'षट्त्रिंशत्',
- '३७': 'सप्तत्रिंशत्',
- '३८': 'ष्टात्रिंशत्',
- '३९': 'एकोनचत्वारिंशत्',
- '४॰': 'चत्वारिंशत्',
- '४१': 'एकचत्वारिंशत्',
- '४२': 'द्विचत्वारिंशत्',
- '४३': 'त्रिचत्वारिंशत्',
- '४४': 'चतुश्चत्वारिंशत्',
- '४५': 'पञ्चचत्वारिंशत्',
- '४६': 'षट्चत्वारिंशत्',
- '४७': 'सप्तचत्वारिंशत्',
- '४८': 'ष्टचत्वारिंशत्',
- '४९': 'एकोनपञ्चाशत्',
- '५॰': 'पञ्चाशत्',
- '५१': 'एकपञ्चाशत्',
- '५२': 'द्विपञ्चाशत्',
- '५३': 'त्रिपञ्चाशत्',
- '५४': 'चतुःपञ्चाशत्',
- '५५': 'पञ्चपञ्चाशत्',
- '५६': 'षट्पञ्चाशत्',
- '५७': 'सप्तपञ्चाशत्',
- '५८': 'ष्टपञ्चाशत्',
- '५९': 'एकोनषष्ठिः',
- '६॰': 'षष्ठिः',
- '६१': 'एकषष्ठिः',
- '६२': 'द्विषष्ठिः',
- '६३': 'त्रिषष्ठिः',
- '६४': 'चतुःषष्ठिः',
- '६५': 'पञ्चषष्ठिः',
- '६६': 'षट्षष्ठिः',
- '६७': 'सप्तषष्ठिः',
- '६८': 'ष्टषष्ठिः',
- '६९': 'एकोनसप्ततिः',
- '७॰': 'सप्ततिः',
- '७१': 'एकसप्ततिः',
- '७२': 'द्विसप्ततिः',
- '७३': 'त्रिसप्ततिः',
- '७४': 'चतुःसप्ततिः',
- '७५': 'पञ्चसप्ततिः',
- '७६': 'षट्सप्ततिः',
- '७७': 'सप्तसप्ततिः',
- '७८': 'ष्टसप्ततिः',
- '७९': 'एकोनाशीतिः',
- '८॰': 'शीतिः',
- '८१': 'एकाशीतिः',
- '८२': 'द्वशीतिः',
- '८३': 'त्र्यशीतिः',
- '८४': 'चतुरशीतिः',
- '८५': 'पञ्चाशीतिः',
- '८६': 'षडशीतिः',
- '८७': 'सप्ताशीतिः',
- '८८': 'ष्टाशीतिः',
- '८९': 'एकोननवतिः',
- '९॰': 'नवतिः',
- '९१': 'एकनवतिः',
- '९२': 'द्विनवतिः',
- '९३': 'त्रिनवतिः',
- '९४': 'चतुर्नवतिः',
- '९५': 'पञ्चनवतिः',
- '९६': 'षण्णवतिः',
- '९७': 'सप्तनवतिः',
- '९८': 'ष्टनवतिः',
- '९९': 'एकोनशतम्',
- '१॰॰': 'शतम्',
- '0': 'शून्य',
- '०': 'शून्य',
- '1': 'एकः',
- '2': 'द्वौ',
- '3': 'त्रयः',
- '4': 'चत्वारः',
- '5': 'पञ्च',
- '6': 'षट्',
- '7': 'सप्त',
- '8': 'ष्ट',
- '9': 'नव',
-}
-
-abbreviation_dict = {
- 'रुप्यकम्':'रू', #rupee
- 'चिकितसिक': 'डॉ.' #doctor
-}
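
`dict_num` above maps Devanagari and ASCII numerals to Sanskrit number words. The sketch below shows one way a normalizer could consume such a table, matching the longest numeral strings first; it is an assumption for illustration, not the normalization code this space actually shipped, and it inlines a tiny subset of the dictionary so it runs on its own.

```python
import re

# A small subset of dict_num above, enough to demonstrate the lookup.
dict_num = {"२५": "पञ्चविंशति", "२": "द्वौ", "५": "पञ्च", "5": "पञ्च"}

def normalize_numbers(text, numerals):
    # Longest keys first so compound forms such as '२५' win over single digits.
    pattern = "|".join(re.escape(k) for k in sorted(numerals, key=len, reverse=True))
    return re.sub(pattern, lambda m: numerals[m.group(0)], text)

print(normalize_numbers("२५ and 5", dict_num))  # -> 'पञ्चविंशति and पञ्च'
```
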
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/boxinst_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/boxinst_head.py
deleted file mode 100644
index 5cf5ef7c06097c85466e9be0cde5ed9edd530922..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/boxinst_head.py
+++ /dev/null
@@ -1,253 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import List
-
-import torch
-import torch.nn.functional as F
-from mmengine import MessageHub
-from mmengine.structures import InstanceData
-from torch import Tensor
-
-from mmdet.registry import MODELS
-from mmdet.utils import InstanceList
-from ..utils.misc import unfold_wo_center
-from .condinst_head import CondInstBboxHead, CondInstMaskHead
-
-
-@MODELS.register_module()
-class BoxInstBboxHead(CondInstBboxHead):
- """BoxInst box head used in https://arxiv.org/abs/2012.02310."""
-
- def __init__(self, *args, **kwargs) -> None:
- super().__init__(*args, **kwargs)
-
-
-@MODELS.register_module()
-class BoxInstMaskHead(CondInstMaskHead):
- """BoxInst mask head used in https://arxiv.org/abs/2012.02310.
-
- This head outputs the mask for BoxInst.
-
- Args:
- pairwise_size (dict): The size of neighborhood for each pixel.
- Defaults to 3.
- pairwise_dilation (int): The dilation of neighborhood for each pixel.
- Defaults to 2.
- warmup_iters (int): Warmup iterations for pair-wise loss.
- Defaults to 10000.
- """
-
- def __init__(self,
- *arg,
- pairwise_size: int = 3,
- pairwise_dilation: int = 2,
- warmup_iters: int = 10000,
- **kwargs) -> None:
- self.pairwise_size = pairwise_size
- self.pairwise_dilation = pairwise_dilation
- self.warmup_iters = warmup_iters
- super().__init__(*arg, **kwargs)
-
- def get_pairwise_affinity(self, mask_logits: Tensor) -> Tensor:
- """Compute the pairwise affinity for each pixel."""
- log_fg_prob = F.logsigmoid(mask_logits).unsqueeze(1)
- log_bg_prob = F.logsigmoid(-mask_logits).unsqueeze(1)
-
- log_fg_prob_unfold = unfold_wo_center(
- log_fg_prob,
- kernel_size=self.pairwise_size,
- dilation=self.pairwise_dilation)
- log_bg_prob_unfold = unfold_wo_center(
- log_bg_prob,
- kernel_size=self.pairwise_size,
- dilation=self.pairwise_dilation)
-
- # the probability of making the same prediction:
- # p_i * p_j + (1 - p_i) * (1 - p_j)
-        # we compute the probability in log space
- # to avoid numerical instability
- log_same_fg_prob = log_fg_prob[:, :, None] + log_fg_prob_unfold
- log_same_bg_prob = log_bg_prob[:, :, None] + log_bg_prob_unfold
-
- # TODO: Figure out the difference between it and directly sum
- max_ = torch.max(log_same_fg_prob, log_same_bg_prob)
- log_same_prob = torch.log(
- torch.exp(log_same_fg_prob - max_) +
- torch.exp(log_same_bg_prob - max_)) + max_
-
- return -log_same_prob[:, 0]
-
- def loss_by_feat(self, mask_preds: List[Tensor],
- batch_gt_instances: InstanceList,
- batch_img_metas: List[dict], positive_infos: InstanceList,
- **kwargs) -> dict:
- """Calculate the loss based on the features extracted by the mask head.
-
- Args:
- mask_preds (list[Tensor]): List of predicted masks, each has
- shape (num_classes, H, W).
- batch_gt_instances (list[:obj:`InstanceData`]): Batch of
- gt_instance. It usually includes ``bboxes``, ``masks``,
- and ``labels`` attributes.
- batch_img_metas (list[dict]): Meta information of multiple images.
- positive_infos (List[:obj:``InstanceData``]): Information of
- positive samples of each image that are assigned in detection
- head.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- assert positive_infos is not None, \
- 'positive_infos should not be None in `BoxInstMaskHead`'
- losses = dict()
-
- loss_mask_project = 0.
- loss_mask_pairwise = 0.
- num_imgs = len(mask_preds)
- total_pos = 0.
- avg_fatcor = 0.
-
- for idx in range(num_imgs):
- (mask_pred, pos_mask_targets, pos_pairwise_masks, num_pos) = \
- self._get_targets_single(
- mask_preds[idx], batch_gt_instances[idx],
- positive_infos[idx])
- # mask loss
- total_pos += num_pos
- if num_pos == 0 or pos_mask_targets is None:
- loss_project = mask_pred.new_zeros(1).mean()
- loss_pairwise = mask_pred.new_zeros(1).mean()
- avg_fatcor += 0.
- else:
- # compute the project term
- loss_project_x = self.loss_mask(
- mask_pred.max(dim=1, keepdim=True)[0],
- pos_mask_targets.max(dim=1, keepdim=True)[0],
- reduction_override='none').sum()
- loss_project_y = self.loss_mask(
- mask_pred.max(dim=2, keepdim=True)[0],
- pos_mask_targets.max(dim=2, keepdim=True)[0],
- reduction_override='none').sum()
- loss_project = loss_project_x + loss_project_y
- # compute the pairwise term
- pairwise_affinity = self.get_pairwise_affinity(mask_pred)
- avg_fatcor += pos_pairwise_masks.sum().clamp(min=1.0)
- loss_pairwise = (pairwise_affinity * pos_pairwise_masks).sum()
-
- loss_mask_project += loss_project
- loss_mask_pairwise += loss_pairwise
-
- if total_pos == 0:
- total_pos += 1 # avoid nan
- if avg_fatcor == 0:
- avg_fatcor += 1 # avoid nan
- loss_mask_project = loss_mask_project / total_pos
- loss_mask_pairwise = loss_mask_pairwise / avg_fatcor
- # message_hub = MessageHub.get_current_instance()
- # iter = message_hub.get_info('iter')
- # warmup_factor = min(iter / float(self.warmup_iters), 1.0)
- warmup_factor = 1.0
- loss_mask_pairwise *= warmup_factor
-
- losses.update(
- loss_mask_project=loss_mask_project,
- loss_mask_pairwise=loss_mask_pairwise)
- return losses
-
- def _get_targets_single(self, mask_preds: Tensor,
- gt_instances: InstanceData,
- positive_info: InstanceData):
- """Compute targets for predictions of single image.
-
- Args:
- mask_preds (Tensor): Predicted prototypes with shape
- (num_classes, H, W).
- gt_instances (:obj:`InstanceData`): Ground truth of instance
- annotations. It should includes ``bboxes``, ``labels``,
- and ``masks`` attributes.
- positive_info (:obj:`InstanceData`): Information of positive
- samples that are assigned in detection head. It usually
- contains following keys.
-
- - pos_assigned_gt_inds (Tensor): Assigner GT indexes of
- positive proposals, has shape (num_pos, )
- - pos_inds (Tensor): Positive index of image, has
- shape (num_pos, ).
- - param_pred (Tensor): Positive param preditions
- with shape (num_pos, num_params).
-
- Returns:
- tuple: Usually returns a tuple containing learning targets.
-
- - mask_preds (Tensor): Positive predicted mask with shape
- (num_pos, mask_h, mask_w).
- - pos_mask_targets (Tensor): Positive mask targets with shape
- (num_pos, mask_h, mask_w).
- - pos_pairwise_masks (Tensor): Positive pairwise masks with
- shape: (num_pos, num_neighborhood, mask_h, mask_w).
- - num_pos (int): Positive numbers.
- """
- gt_bboxes = gt_instances.bboxes
- device = gt_bboxes.device
- # Note that gt_masks are generated by full box
- # from BoxInstDataPreprocessor
- gt_masks = gt_instances.masks.to_tensor(
- dtype=torch.bool, device=device).float()
- # Note that pairwise_masks are generated by image color similarity
- # from BoxInstDataPreprocessor
- pairwise_masks = gt_instances.pairwise_masks
- pairwise_masks = pairwise_masks.to(device=device)
-
- # process with mask targets
- pos_assigned_gt_inds = positive_info.get('pos_assigned_gt_inds')
- scores = positive_info.get('scores')
- centernesses = positive_info.get('centernesses')
- num_pos = pos_assigned_gt_inds.size(0)
-
- if gt_masks.size(0) == 0 or num_pos == 0:
- return mask_preds, None, None, 0
- # Since we're producing (near) full image masks,
- # it'd take too much vram to backprop on every single mask.
- # Thus we select only a subset.
- if (self.max_masks_to_train != -1) and \
- (num_pos > self.max_masks_to_train):
- perm = torch.randperm(num_pos)
- select = perm[:self.max_masks_to_train]
- mask_preds = mask_preds[select]
- pos_assigned_gt_inds = pos_assigned_gt_inds[select]
- num_pos = self.max_masks_to_train
- elif self.topk_masks_per_img != -1:
- unique_gt_inds = pos_assigned_gt_inds.unique()
- num_inst_per_gt = max(
- int(self.topk_masks_per_img / len(unique_gt_inds)), 1)
-
- keep_mask_preds = []
- keep_pos_assigned_gt_inds = []
- for gt_ind in unique_gt_inds:
- per_inst_pos_inds = (pos_assigned_gt_inds == gt_ind)
- mask_preds_per_inst = mask_preds[per_inst_pos_inds]
- gt_inds_per_inst = pos_assigned_gt_inds[per_inst_pos_inds]
- if sum(per_inst_pos_inds) > num_inst_per_gt:
- per_inst_scores = scores[per_inst_pos_inds].sigmoid().max(
- dim=1)[0]
- per_inst_centerness = centernesses[
- per_inst_pos_inds].sigmoid().reshape(-1, )
- select = (per_inst_scores * per_inst_centerness).topk(
- k=num_inst_per_gt, dim=0)[1]
- mask_preds_per_inst = mask_preds_per_inst[select]
- gt_inds_per_inst = gt_inds_per_inst[select]
- keep_mask_preds.append(mask_preds_per_inst)
- keep_pos_assigned_gt_inds.append(gt_inds_per_inst)
- mask_preds = torch.cat(keep_mask_preds)
- pos_assigned_gt_inds = torch.cat(keep_pos_assigned_gt_inds)
- num_pos = pos_assigned_gt_inds.size(0)
-
-        # Follow the original implementation
- start = int(self.mask_out_stride // 2)
- gt_masks = gt_masks[:, start::self.mask_out_stride,
- start::self.mask_out_stride]
- gt_masks = gt_masks.gt(0.5).float()
- pos_mask_targets = gt_masks[pos_assigned_gt_inds]
- pos_pairwise_masks = pairwise_masks[pos_assigned_gt_inds]
- pos_pairwise_masks = pos_pairwise_masks * pos_mask_targets.unsqueeze(1)
-
- return (mask_preds, pos_mask_targets, pos_pairwise_masks, num_pos)
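
The comment in `get_pairwise_affinity` above notes that log(p_i * p_j + (1 - p_i) * (1 - p_j)) is computed in log space with a max-subtraction trick for numerical stability. The toy check below is not part of MMDetection; it only verifies that identity on a few arbitrary logits.

```python
import torch
import torch.nn.functional as F

logits_i = torch.tensor([2.0, -1.0, 0.5])
logits_j = torch.tensor([1.5, -2.0, -0.5])

log_fg = F.logsigmoid(logits_i) + F.logsigmoid(logits_j)    # log(p_i * p_j)
log_bg = F.logsigmoid(-logits_i) + F.logsigmoid(-logits_j)  # log((1-p_i)*(1-p_j))

# log(exp(a) + exp(b)) computed stably via the max trick, as in the head.
max_ = torch.max(log_fg, log_bg)
log_same = torch.log(torch.exp(log_fg - max_) + torch.exp(log_bg - max_)) + max_

p_i, p_j = torch.sigmoid(logits_i), torch.sigmoid(logits_j)
direct = torch.log(p_i * p_j + (1 - p_i) * (1 - p_j))
assert torch.allclose(log_same, direct, atol=1e-6)
```
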
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/layers/positional_encoding.py b/spaces/KyanChen/RSPrompter/mmdet/models/layers/positional_encoding.py
deleted file mode 100644
index 9367f0aaf0ca5fddda66e9c7df425654c56e4776..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/layers/positional_encoding.py
+++ /dev/null
@@ -1,168 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import math
-
-import torch
-import torch.nn as nn
-from mmengine.model import BaseModule
-from torch import Tensor
-
-from mmdet.registry import MODELS
-from mmdet.utils import MultiConfig, OptMultiConfig
-
-
-@MODELS.register_module()
-class SinePositionalEncoding(BaseModule):
- """Position encoding with sine and cosine functions.
-
-    See `End-to-End Object Detection with Transformers <https://arxiv.org/abs/2005.12872>`_ for details.
-
- Args:
- num_feats (int): The feature dimension for each position
- along x-axis or y-axis. Note the final returned dimension
- for each position is 2 times of this value.
- temperature (int, optional): The temperature used for scaling
- the position embedding. Defaults to 10000.
- normalize (bool, optional): Whether to normalize the position
- embedding. Defaults to False.
- scale (float, optional): A scale factor that scales the position
- embedding. The scale will be used only when `normalize` is True.
- Defaults to 2*pi.
- eps (float, optional): A value added to the denominator for
- numerical stability. Defaults to 1e-6.
- offset (float): offset add to embed when do the normalization.
- Defaults to 0.
- init_cfg (dict or list[dict], optional): Initialization config dict.
- Defaults to None
- """
-
- def __init__(self,
- num_feats: int,
- temperature: int = 10000,
- normalize: bool = False,
- scale: float = 2 * math.pi,
- eps: float = 1e-6,
- offset: float = 0.,
- init_cfg: OptMultiConfig = None) -> None:
- super().__init__(init_cfg=init_cfg)
- if normalize:
- assert isinstance(scale, (float, int)), 'when normalize is set,' \
- 'scale should be provided and in float or int type, ' \
- f'found {type(scale)}'
- self.num_feats = num_feats
- self.temperature = temperature
- self.normalize = normalize
- self.scale = scale
- self.eps = eps
- self.offset = offset
-
- def forward(self, mask: Tensor) -> Tensor:
- """Forward function for `SinePositionalEncoding`.
-
- Args:
-            mask (Tensor): ByteTensor mask. Non-zero values represent
-                ignored positions, while zero values mean valid positions
-                for this image. Shape [bs, h, w].
-
- Returns:
- pos (Tensor): Returned position embedding with shape
- [bs, num_feats*2, h, w].
- """
- # For convenience of exporting to ONNX, it's required to convert
- # `masks` from bool to int.
- mask = mask.to(torch.int)
- not_mask = 1 - mask # logical_not
- y_embed = not_mask.cumsum(1, dtype=torch.float32)
- x_embed = not_mask.cumsum(2, dtype=torch.float32)
- if self.normalize:
- y_embed = (y_embed + self.offset) / \
- (y_embed[:, -1:, :] + self.eps) * self.scale
- x_embed = (x_embed + self.offset) / \
- (x_embed[:, :, -1:] + self.eps) * self.scale
- dim_t = torch.arange(
- self.num_feats, dtype=torch.float32, device=mask.device)
- dim_t = self.temperature**(2 * (dim_t // 2) / self.num_feats)
- pos_x = x_embed[:, :, :, None] / dim_t
- pos_y = y_embed[:, :, :, None] / dim_t
- # use `view` instead of `flatten` for dynamically exporting to ONNX
- B, H, W = mask.size()
- pos_x = torch.stack(
- (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()),
- dim=4).view(B, H, W, -1)
- pos_y = torch.stack(
- (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()),
- dim=4).view(B, H, W, -1)
- pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
- return pos
-
- def __repr__(self) -> str:
- """str: a string that describes the module"""
- repr_str = self.__class__.__name__
- repr_str += f'(num_feats={self.num_feats}, '
- repr_str += f'temperature={self.temperature}, '
- repr_str += f'normalize={self.normalize}, '
- repr_str += f'scale={self.scale}, '
- repr_str += f'eps={self.eps})'
- return repr_str
-
-
-@MODELS.register_module()
-class LearnedPositionalEncoding(BaseModule):
- """Position embedding with learnable embedding weights.
-
- Args:
- num_feats (int): The feature dimension for each position
- along x-axis or y-axis. The final returned dimension for
- each position is 2 times of this value.
- row_num_embed (int, optional): The dictionary size of row embeddings.
- Defaults to 50.
- col_num_embed (int, optional): The dictionary size of col embeddings.
- Defaults to 50.
- init_cfg (dict or list[dict], optional): Initialization config dict.
- """
-
- def __init__(
- self,
- num_feats: int,
- row_num_embed: int = 50,
- col_num_embed: int = 50,
- init_cfg: MultiConfig = dict(type='Uniform', layer='Embedding')
- ) -> None:
- super().__init__(init_cfg=init_cfg)
- self.row_embed = nn.Embedding(row_num_embed, num_feats)
- self.col_embed = nn.Embedding(col_num_embed, num_feats)
- self.num_feats = num_feats
- self.row_num_embed = row_num_embed
- self.col_num_embed = col_num_embed
-
- def forward(self, mask: Tensor) -> Tensor:
- """Forward function for `LearnedPositionalEncoding`.
-
- Args:
-            mask (Tensor): ByteTensor mask. Non-zero values represent
-                ignored positions, while zero values mean valid positions
-                for this image. Shape [bs, h, w].
-
- Returns:
- pos (Tensor): Returned position embedding with shape
- [bs, num_feats*2, h, w].
- """
- h, w = mask.shape[-2:]
- x = torch.arange(w, device=mask.device)
- y = torch.arange(h, device=mask.device)
- x_embed = self.col_embed(x)
- y_embed = self.row_embed(y)
- pos = torch.cat(
- (x_embed.unsqueeze(0).repeat(h, 1, 1), y_embed.unsqueeze(1).repeat(
- 1, w, 1)),
- dim=-1).permute(2, 0,
- 1).unsqueeze(0).repeat(mask.shape[0], 1, 1, 1)
- return pos
-
- def __repr__(self) -> str:
- """str: a string that describes the module"""
- repr_str = self.__class__.__name__
- repr_str += f'(num_feats={self.num_feats}, '
- repr_str += f'row_num_embed={self.row_num_embed}, '
- repr_str += f'col_num_embed={self.col_num_embed})'
- return repr_str
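As a quick sanity check on `SinePositionalEncoding`, the following dependency-free sketch mirrors the unnormalized branch of the forward pass above; the function name and the toy mask size are illustrative, not part of mmdet.

```python
import torch

def sine_pos_encoding(mask, num_feats=128, temperature=10000):
    """Sine/cosine position embedding for a [bs, h, w] padding mask."""
    not_mask = (~mask.bool()).float()
    y_embed = not_mask.cumsum(1)
    x_embed = not_mask.cumsum(2)
    dim_t = torch.arange(num_feats, dtype=torch.float32)
    dim_t = temperature ** (2 * (dim_t // 2) / num_feats)
    pos_x = x_embed[..., None] / dim_t
    pos_y = y_embed[..., None] / dim_t
    pos_x = torch.stack((pos_x[..., 0::2].sin(), pos_x[..., 1::2].cos()), dim=4).flatten(3)
    pos_y = torch.stack((pos_y[..., 0::2].sin(), pos_y[..., 1::2].cos()), dim=4).flatten(3)
    return torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)

mask = torch.zeros(2, 32, 32, dtype=torch.uint8)  # all positions valid
print(sine_pos_encoding(mask).shape)              # torch.Size([2, 256, 32, 32])
```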
diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/vg_vqa.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/vg_vqa.py
deleted file mode 100644
index 2d83884c804086c060bcfe27e833bff28dc28e9e..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/vg_vqa.py
+++ /dev/null
@@ -1,77 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import List
-
-from mmengine.fileio import load
-
-from mmpretrain.registry import DATASETS
-from .base_dataset import BaseDataset
-
-
-@DATASETS.register_module()
-class VGVQA(BaseDataset):
- """Visual Genome VQA dataset."""
-
- def load_data_list(self) -> List[dict]:
- """Load data list.
-
-        Compared to BaseDataset, the only difference is that the coco_vqa
-        annotation file is already a list of data; there is no 'metainfo'.
- """
-
- raw_data_list = load(self.ann_file)
- if not isinstance(raw_data_list, list):
-            raise TypeError(
-                f'The VQA annotations loaded from the annotation file '
-                f'should be a list, but got {type(raw_data_list)}!')
-
- # load and parse data_infos.
- data_list = []
- for raw_data_info in raw_data_list:
- # parse raw data information to target format
- data_info = self.parse_data_info(raw_data_info)
- if isinstance(data_info, dict):
- # For VQA tasks, each `data_info` looks like:
- # {
- # "question_id": 986769,
- # "question": "How many people are there?",
- # "answer": "two",
- # "image": "image/1.jpg",
- # "dataset": "vg"
- # }
-
- # change 'image' key to 'img_path'
-                # TODO: This process will be removed after the annotation file
-                # is preprocessed.
- data_info['img_path'] = data_info['image']
- del data_info['image']
-
- if 'answer' in data_info:
- # add answer_weight & answer_count, delete duplicate answer
- if data_info['dataset'] == 'vqa':
- answer_weight = {}
- for answer in data_info['answer']:
- if answer in answer_weight.keys():
- answer_weight[answer] += 1 / len(
- data_info['answer'])
- else:
- answer_weight[answer] = 1 / len(
- data_info['answer'])
-
- data_info['answer'] = list(answer_weight.keys())
- data_info['answer_weight'] = list(
- answer_weight.values())
- data_info['answer_count'] = len(answer_weight)
-
- elif data_info['dataset'] == 'vg':
- data_info['answers'] = [data_info['answer']]
- data_info['answer_weight'] = [0.2]
- data_info['answer_count'] = 1
-
- data_list.append(data_info)
-
- else:
- raise TypeError(
- f'Each VQA data element loaded from annotation file '
- f'should be a dict, but got {type(data_info)}!')
-
- return data_list
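The 'vqa' branch above folds duplicate answers into per-answer weights. Here is a small stand-alone sketch of that aggregation; the helper name and the sample answers are made up for illustration.

```python
from collections import Counter

def weight_answers(answers):
    """Deduplicate answers; each answer's weight is its share of the annotations."""
    counts = Counter(answers)
    total = len(answers)
    keys = list(counts)
    return {
        'answer': keys,
        'answer_weight': [counts[a] / total for a in keys],
        'answer_count': len(keys),
    }

print(weight_answers(['two', 'two', '2']))
# -> answers ['two', '2'] with weights [2/3, 1/3] and answer_count 2
```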
diff --git a/spaces/LZRi/LZR-Bert-VITS2/setup_ffmpeg.py b/spaces/LZRi/LZR-Bert-VITS2/setup_ffmpeg.py
deleted file mode 100644
index 7137ab5faebb6d80740b8c843667458f25596839..0000000000000000000000000000000000000000
--- a/spaces/LZRi/LZR-Bert-VITS2/setup_ffmpeg.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import os
-import sys
-import re
-from pathlib import Path
-import winreg
-
-def check_ffmpeg_path():
- path_list = os.environ['Path'].split(';')
- ffmpeg_found = False
-
- for path in path_list:
- if 'ffmpeg' in path.lower() and 'bin' in path.lower():
- ffmpeg_found = True
- print("FFmpeg already installed.")
- break
-
- return ffmpeg_found
-
-def add_ffmpeg_path_to_user_variable():
- ffmpeg_bin_path = Path('.\\ffmpeg\\bin')
- if ffmpeg_bin_path.is_dir():
- abs_path = str(ffmpeg_bin_path.resolve())
-
- try:
- key = winreg.OpenKey(
- winreg.HKEY_CURRENT_USER,
- r"Environment",
- 0,
- winreg.KEY_READ | winreg.KEY_WRITE
- )
-
- try:
- current_path, _ = winreg.QueryValueEx(key, "Path")
- if abs_path not in current_path:
- new_path = f"{current_path};{abs_path}"
- winreg.SetValueEx(key, "Path", 0, winreg.REG_EXPAND_SZ, new_path)
- print(f"Added FFmpeg path to user variable 'Path': {abs_path}")
- else:
- print("FFmpeg path already exists in the user variable 'Path'.")
- finally:
- winreg.CloseKey(key)
- except WindowsError:
- print("Error: Unable to modify user variable 'Path'.")
- sys.exit(1)
-
- else:
- print("Error: ffmpeg\\bin folder not found in the current path.")
- sys.exit(1)
-
-def main():
- if not check_ffmpeg_path():
- add_ffmpeg_path_to_user_variable()
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
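The script above is Windows-specific, since it edits the user's `Path` through the registry. Purely as a cross-platform sketch of the same "is FFmpeg on PATH" check, the standard library can be used instead; this is an alternative, not part of the original script.

```python
import shutil

def ffmpeg_on_path() -> bool:
    # shutil.which resolves executables the same way the shell would
    return shutil.which('ffmpeg') is not None

if __name__ == '__main__':
    print('FFmpeg already installed.' if ffmpeg_on_path() else 'FFmpeg not found on PATH.')
```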
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/modules/F0Predictor/__init__.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/modules/F0Predictor/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/layers_new.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/layers_new.py
deleted file mode 100644
index 44153b6a23399c6938affc61c71919eaa172bcee..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/layers_new.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, stride, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ)
-
- def __call__(self, x):
- h = self.conv1(x)
- h = self.conv2(h)
-
- return h
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- # self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
-
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
-
- h = self.conv1(x)
- # h = self.conv2(h)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 12), activ=nn.ReLU, dropout=False):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ)
- self.conv3 = Conv2DBNActiv(
- nin, nout, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = Conv2DBNActiv(
- nin, nout, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = Conv2DBNActiv(
- nin, nout, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = Conv2DBNActiv(nout * 5, nout, 1, 1, 0, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1)
- out = self.bottleneck(out)
-
- if self.dropout is not None:
- out = self.dropout(out)
-
- return out
-
-
-class LSTMModule(nn.Module):
- def __init__(self, nin_conv, nin_lstm, nout_lstm):
- super(LSTMModule, self).__init__()
- self.conv = Conv2DBNActiv(nin_conv, 1, 1, 1, 0)
- self.lstm = nn.LSTM(
- input_size=nin_lstm, hidden_size=nout_lstm // 2, bidirectional=True
- )
- self.dense = nn.Sequential(
- nn.Linear(nout_lstm, nin_lstm), nn.BatchNorm1d(nin_lstm), nn.ReLU()
- )
-
- def forward(self, x):
- N, _, nbins, nframes = x.size()
- h = self.conv(x)[:, 0] # N, nbins, nframes
- h = h.permute(2, 0, 1) # nframes, N, nbins
- h, _ = self.lstm(h)
- h = self.dense(h.reshape(-1, h.size()[-1])) # nframes * N, nbins
- h = h.reshape(nframes, N, 1, nbins)
- h = h.permute(1, 2, 3, 0)
-
- return h
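To make the tensor reshaping in `LSTMModule` easier to follow, here is a shape walk-through with plain torch layers; the dimensions are toy values and a bare 1x1 convolution stands in for `Conv2DBNActiv`, so treat it only as an illustrative sketch.

```python
import torch
import torch.nn as nn

N, C, nbins, nframes = 2, 16, 64, 100
conv = nn.Conv2d(C, 1, kernel_size=1)
lstm = nn.LSTM(input_size=nbins, hidden_size=nbins // 2, bidirectional=True)
dense = nn.Sequential(nn.Linear(nbins, nbins), nn.BatchNorm1d(nbins), nn.ReLU())

x = torch.randn(N, C, nbins, nframes)   # spectrogram-shaped input
h = conv(x)[:, 0]                       # (N, nbins, nframes)
h = h.permute(2, 0, 1)                  # (nframes, N, nbins)
h, _ = lstm(h)                          # still (nframes, N, nbins): 2 * hidden == nbins
h = dense(h.reshape(-1, h.size(-1)))    # flatten time and batch for Linear/BatchNorm
h = h.reshape(nframes, N, 1, nbins).permute(1, 2, 3, 0)
print(h.shape)                          # torch.Size([2, 1, 64, 100])
```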
diff --git a/spaces/Lbin123/Lbingo/src/lib/isomorphic/node.ts b/spaces/Lbin123/Lbingo/src/lib/isomorphic/node.ts
deleted file mode 100644
index da213ad6a86181979f098309c374da02835db5a0..0000000000000000000000000000000000000000
--- a/spaces/Lbin123/Lbingo/src/lib/isomorphic/node.ts
+++ /dev/null
@@ -1,26 +0,0 @@
-import Debug from 'debug'
-
-const { fetch, setGlobalDispatcher, ProxyAgent } = require('undici')
-const { HttpsProxyAgent } = require('https-proxy-agent')
-const ws = require('ws')
-
-const debug = Debug('bingo')
-
-const httpProxy = process.env.http_proxy || process.env.HTTP_PROXY || process.env.https_proxy || process.env.HTTPS_PROXY;
-let WebSocket = ws.WebSocket
-
-if (httpProxy) {
- setGlobalDispatcher(new ProxyAgent(httpProxy))
- const agent = new HttpsProxyAgent(httpProxy)
- // @ts-ignore
- WebSocket = class extends ws.WebSocket {
- constructor(address: string | URL, options: typeof ws.WebSocket) {
- super(address, {
- ...options,
- agent,
- })
- }
- }
-}
-
-export default { fetch, WebSocket, debug }
diff --git a/spaces/LokeshMadaka/MyAIChatBot/README.md b/spaces/LokeshMadaka/MyAIChatBot/README.md
deleted file mode 100644
index 5a8409baf4ea5ddbb2ecb716a1797ea1aa110af7..0000000000000000000000000000000000000000
--- a/spaces/LokeshMadaka/MyAIChatBot/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: MyAIChatBot
-emoji: 💻
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/networks.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/networks.py
deleted file mode 100644
index d88bc5d5694db47220ccf70e97690de3224c2c60..0000000000000000000000000000000000000000
--- a/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/networks.py
+++ /dev/null
@@ -1,332 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from detection_models.sync_batchnorm import DataParallelWithCallback
-from detection_models.antialiasing import Downsample
-
-
-class UNet(nn.Module):
- def __init__(
- self,
- in_channels=3,
- out_channels=3,
- depth=5,
- conv_num=2,
- wf=6,
- padding=True,
- batch_norm=True,
- up_mode="upsample",
- with_tanh=False,
- sync_bn=True,
- antialiasing=True,
- ):
- """
- Implementation of
- U-Net: Convolutional Networks for Biomedical Image Segmentation
- (Ronneberger et al., 2015)
- https://arxiv.org/abs/1505.04597
- Using the default arguments will yield the exact version used
- in the original paper
- Args:
- in_channels (int): number of input channels
- out_channels (int): number of output channels
- depth (int): depth of the network
- wf (int): number of filters in the first layer is 2**wf
- padding (bool): if True, apply padding such that the input shape
- is the same as the output.
- This may introduce artifacts
- batch_norm (bool): Use BatchNorm after layers with an
- activation function
- up_mode (str): one of 'upconv' or 'upsample'.
- 'upconv' will use transposed convolutions for
- learned upsampling.
- 'upsample' will use bilinear upsampling.
- """
- super().__init__()
- assert up_mode in ("upconv", "upsample")
- self.padding = padding
- self.depth = depth - 1
- prev_channels = in_channels
-
- self.first = nn.Sequential(
- *[nn.ReflectionPad2d(3), nn.Conv2d(in_channels, 2 ** wf, kernel_size=7), nn.LeakyReLU(0.2, True)]
- )
- prev_channels = 2 ** wf
-
- self.down_path = nn.ModuleList()
- self.down_sample = nn.ModuleList()
- for i in range(depth):
- if antialiasing and depth > 0:
- self.down_sample.append(
- nn.Sequential(
- *[
- nn.ReflectionPad2d(1),
- nn.Conv2d(prev_channels, prev_channels, kernel_size=3, stride=1, padding=0),
- nn.BatchNorm2d(prev_channels),
- nn.LeakyReLU(0.2, True),
- Downsample(channels=prev_channels, stride=2),
- ]
- )
- )
- else:
- self.down_sample.append(
- nn.Sequential(
- *[
- nn.ReflectionPad2d(1),
- nn.Conv2d(prev_channels, prev_channels, kernel_size=4, stride=2, padding=0),
- nn.BatchNorm2d(prev_channels),
- nn.LeakyReLU(0.2, True),
- ]
- )
- )
- self.down_path.append(
- UNetConvBlock(conv_num, prev_channels, 2 ** (wf + i + 1), padding, batch_norm)
- )
- prev_channels = 2 ** (wf + i + 1)
-
- self.up_path = nn.ModuleList()
- for i in reversed(range(depth)):
- self.up_path.append(
- UNetUpBlock(conv_num, prev_channels, 2 ** (wf + i), up_mode, padding, batch_norm)
- )
- prev_channels = 2 ** (wf + i)
-
- if with_tanh:
- self.last = nn.Sequential(
- *[nn.ReflectionPad2d(1), nn.Conv2d(prev_channels, out_channels, kernel_size=3), nn.Tanh()]
- )
- else:
- self.last = nn.Sequential(
- *[nn.ReflectionPad2d(1), nn.Conv2d(prev_channels, out_channels, kernel_size=3)]
- )
-
- if sync_bn:
- self = DataParallelWithCallback(self)
-
- def forward(self, x):
- x = self.first(x)
-
- blocks = []
- for i, down_block in enumerate(self.down_path):
- blocks.append(x)
- x = self.down_sample[i](x)
- x = down_block(x)
-
- for i, up in enumerate(self.up_path):
- x = up(x, blocks[-i - 1])
-
- return self.last(x)
-
-
-class UNetConvBlock(nn.Module):
- def __init__(self, conv_num, in_size, out_size, padding, batch_norm):
- super(UNetConvBlock, self).__init__()
- block = []
-
- for _ in range(conv_num):
- block.append(nn.ReflectionPad2d(padding=int(padding)))
- block.append(nn.Conv2d(in_size, out_size, kernel_size=3, padding=0))
- if batch_norm:
- block.append(nn.BatchNorm2d(out_size))
- block.append(nn.LeakyReLU(0.2, True))
- in_size = out_size
-
- self.block = nn.Sequential(*block)
-
- def forward(self, x):
- out = self.block(x)
- return out
-
-
-class UNetUpBlock(nn.Module):
- def __init__(self, conv_num, in_size, out_size, up_mode, padding, batch_norm):
- super(UNetUpBlock, self).__init__()
- if up_mode == "upconv":
- self.up = nn.ConvTranspose2d(in_size, out_size, kernel_size=2, stride=2)
- elif up_mode == "upsample":
- self.up = nn.Sequential(
- nn.Upsample(mode="bilinear", scale_factor=2, align_corners=False),
- nn.ReflectionPad2d(1),
- nn.Conv2d(in_size, out_size, kernel_size=3, padding=0),
- )
-
- self.conv_block = UNetConvBlock(conv_num, in_size, out_size, padding, batch_norm)
-
- def center_crop(self, layer, target_size):
- _, _, layer_height, layer_width = layer.size()
- diff_y = (layer_height - target_size[0]) // 2
- diff_x = (layer_width - target_size[1]) // 2
- return layer[:, :, diff_y : (diff_y + target_size[0]), diff_x : (diff_x + target_size[1])]
-
- def forward(self, x, bridge):
- up = self.up(x)
- crop1 = self.center_crop(bridge, up.shape[2:])
- out = torch.cat([up, crop1], 1)
- out = self.conv_block(out)
-
- return out
-
-
-class UnetGenerator(nn.Module):
- """Create a Unet-based generator"""
-
- def __init__(self, input_nc, output_nc, num_downs, ngf=64, norm_type="BN", use_dropout=False):
- """Construct a Unet generator
- Parameters:
- input_nc (int) -- the number of channels in input images
- output_nc (int) -- the number of channels in output images
-            num_downs (int) -- the number of downsamplings in UNet. For example,
-                                if |num_downs| == 7, an image of size 128x128 will become 1x1 at the bottleneck
- ngf (int) -- the number of filters in the last conv layer
- norm_layer -- normalization layer
- We construct the U-Net from the innermost layer to the outermost layer.
- It is a recursive process.
- """
- super().__init__()
- if norm_type == "BN":
- norm_layer = nn.BatchNorm2d
- elif norm_type == "IN":
- norm_layer = nn.InstanceNorm2d
- else:
- raise NameError("Unknown norm layer")
-
- # construct unet structure
- unet_block = UnetSkipConnectionBlock(
- ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, innermost=True
- ) # add the innermost layer
- for i in range(num_downs - 5): # add intermediate layers with ngf * 8 filters
- unet_block = UnetSkipConnectionBlock(
- ngf * 8,
- ngf * 8,
- input_nc=None,
- submodule=unet_block,
- norm_layer=norm_layer,
- use_dropout=use_dropout,
- )
- # gradually reduce the number of filters from ngf * 8 to ngf
- unet_block = UnetSkipConnectionBlock(
- ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer
- )
- unet_block = UnetSkipConnectionBlock(
- ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, norm_layer=norm_layer
- )
- unet_block = UnetSkipConnectionBlock(
- ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer
- )
- self.model = UnetSkipConnectionBlock(
- output_nc, ngf, input_nc=input_nc, submodule=unet_block, outermost=True, norm_layer=norm_layer
- ) # add the outermost layer
-
- def forward(self, input):
- return self.model(input)
-
-
-class UnetSkipConnectionBlock(nn.Module):
- """Defines the Unet submodule with skip connection.
-
- -------------------identity----------------------
- |-- downsampling -- |submodule| -- upsampling --|
- """
-
- def __init__(
- self,
- outer_nc,
- inner_nc,
- input_nc=None,
- submodule=None,
- outermost=False,
- innermost=False,
- norm_layer=nn.BatchNorm2d,
- use_dropout=False,
- ):
- """Construct a Unet submodule with skip connections.
- Parameters:
- outer_nc (int) -- the number of filters in the outer conv layer
- inner_nc (int) -- the number of filters in the inner conv layer
- input_nc (int) -- the number of channels in input images/features
- submodule (UnetSkipConnectionBlock) -- previously defined submodules
- outermost (bool) -- if this module is the outermost module
- innermost (bool) -- if this module is the innermost module
- norm_layer -- normalization layer
-            use_dropout (bool) -- whether to use dropout layers.
- """
- super().__init__()
- self.outermost = outermost
- use_bias = norm_layer == nn.InstanceNorm2d
- if input_nc is None:
- input_nc = outer_nc
- downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=4, stride=2, padding=1, bias=use_bias)
- downrelu = nn.LeakyReLU(0.2, True)
- downnorm = norm_layer(inner_nc)
- uprelu = nn.LeakyReLU(0.2, True)
- upnorm = norm_layer(outer_nc)
-
- if outermost:
- upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc, kernel_size=4, stride=2, padding=1)
- down = [downconv]
- up = [uprelu, upconv, nn.Tanh()]
- model = down + [submodule] + up
- elif innermost:
- upconv = nn.ConvTranspose2d(inner_nc, outer_nc, kernel_size=4, stride=2, padding=1, bias=use_bias)
- down = [downrelu, downconv]
- up = [uprelu, upconv, upnorm]
- model = down + up
- else:
- upconv = nn.ConvTranspose2d(
- inner_nc * 2, outer_nc, kernel_size=4, stride=2, padding=1, bias=use_bias
- )
- down = [downrelu, downconv, downnorm]
- up = [uprelu, upconv, upnorm]
-
- if use_dropout:
- model = down + [submodule] + up + [nn.Dropout(0.5)]
- else:
- model = down + [submodule] + up
-
- self.model = nn.Sequential(*model)
-
- def forward(self, x):
- if self.outermost:
- return self.model(x)
- else: # add skip connections
- return torch.cat([x, self.model(x)], 1)
-
-
-# ============================================
-# Network testing
-# ============================================
-if __name__ == "__main__":
-    from torchsummary import summary
-    from torchviz import make_dot  # provides make_dot used below
-
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
-    # smoke test the single-decoder UNet defined above
-    model = UNet(
-        in_channels=3,
-        out_channels=3,
-        depth=4,
-        conv_num=1,
-        wf=6,
-        padding=True,
-        batch_norm=True,
-        up_mode="upsample",
-        with_tanh=False,
-        sync_bn=False,
-    )
- model.to(device)
-
- model_pix2pix = UnetGenerator(3, 3, 5, ngf=64, norm_type="BN", use_dropout=False)
- model_pix2pix.to(device)
-
- print("customized unet:")
- summary(model, (3, 256, 256))
-
- print("cyclegan unet:")
- summary(model_pix2pix, (3, 256, 256))
-
-    x = torch.zeros(1, 3, 256, 256, requires_grad=True).to(device)
- g = make_dot(model(x))
- g.render("models/Digraph.gv", view=False)
-
diff --git a/spaces/MathysL/AutoGPT4/BULLETIN.md b/spaces/MathysL/AutoGPT4/BULLETIN.md
deleted file mode 100644
index 735048ddc87a914987c6bd70ccdb231a80242ae3..0000000000000000000000000000000000000000
--- a/spaces/MathysL/AutoGPT4/BULLETIN.md
+++ /dev/null
@@ -1,2 +0,0 @@
-Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here.
-If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag
\ No newline at end of file
diff --git a/spaces/MedicalAILabo/Xp-age/lib/logger.py b/spaces/MedicalAILabo/Xp-age/lib/logger.py
deleted file mode 100644
index 719027a82712cb72f4ec5497919205ed18ca9c1f..0000000000000000000000000000000000000000
--- a/spaces/MedicalAILabo/Xp-age/lib/logger.py
+++ /dev/null
@@ -1,71 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-from pathlib import Path
-import logging
-
-
-class BaseLogger:
- """
- Class for defining logger.
- """
- _unexecuted_configure = True
-
- @classmethod
- def get_logger(cls, name: str) -> logging.Logger:
- """
- Set logger.
- Args:
-            name (str): A potentially hierarchical name, e.g. lib.net, lib.dataloader, etc.
-                        For details, see https://docs.python.org/3/library/logging.html?highlight=logging#module-logging.
- Returns:
- logging.Logger: logger
- """
- if cls._unexecuted_configure:
- cls._init_logger()
-
- return logging.getLogger('nervus.{}'.format(name))
-
- @classmethod
- def _init_logger(cls) -> None:
- """
- Configure logger.
- """
- _root_logger = logging.getLogger('nervus')
- _root_logger.setLevel(logging.DEBUG)
- formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
-
- log_dir = Path('logs')
- log_dir.mkdir(parents=True, exist_ok=True)
- log_path = Path(log_dir, 'log.log')
-
- # file handler
- ## upper warning
- fh_err = logging.FileHandler(log_path)
- fh_err.setLevel(logging.WARNING)
- fh_err.setFormatter(formatter)
- fh_err.addFilter(lambda log_record: not ('BdbQuit' in str(log_record.exc_info)) and (log_record.levelno >= logging.WARNING))
- _root_logger.addHandler(fh_err)
-
- ## lower warning
- fh = logging.FileHandler(log_path)
- fh.setLevel(logging.DEBUG)
- fh.addFilter(lambda log_record: log_record.levelno < logging.WARNING)
- _root_logger.addHandler(fh)
-
- # stream handler
- ## upper warning
- ch_err = logging.StreamHandler()
- ch_err.setLevel(logging.WARNING)
- ch_err.setFormatter(formatter)
- ch_err.addFilter(lambda log_record: log_record.levelno >= logging.WARNING)
- _root_logger.addHandler(ch_err)
-
- ## lower warning
- ch = logging.StreamHandler()
- ch.setLevel(logging.DEBUG)
- ch.addFilter(lambda log_record: log_record.levelno < logging.WARNING)
- _root_logger.addHandler(ch)
-
- cls._unexecuted_configure = False
-
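The handler setup above routes records at WARNING and above to formatted handlers and everything below to plain ones. A minimal sketch of that split-by-level pattern on a throwaway logger:

```python
import logging

logger = logging.getLogger('demo')
logger.setLevel(logging.DEBUG)

warn_and_up = logging.StreamHandler()
warn_and_up.setLevel(logging.WARNING)
warn_and_up.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))

below_warn = logging.StreamHandler()
below_warn.setLevel(logging.DEBUG)
below_warn.addFilter(lambda record: record.levelno < logging.WARNING)

logger.addHandler(warn_and_up)
logger.addHandler(below_warn)

logger.info('emitted only by the plain handler')
logger.error('emitted only by the formatted handler')
```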
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv_custom/checkpoint.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv_custom/checkpoint.py
deleted file mode 100644
index 19b87fef0a52d31babcdb3edb8f3089b6420173f..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv_custom/checkpoint.py
+++ /dev/null
@@ -1,500 +0,0 @@
-# Copyright (c) Open-MMLab. All rights reserved.
-import io
-import os
-import os.path as osp
-import pkgutil
-import time
-import warnings
-from collections import OrderedDict
-from importlib import import_module
-from tempfile import TemporaryDirectory
-
-import torch
-import torchvision
-from torch.optim import Optimizer
-from torch.utils import model_zoo
-from torch.nn import functional as F
-
-import annotator.uniformer.mmcv as mmcv
-from annotator.uniformer.mmcv.fileio import FileClient
-from annotator.uniformer.mmcv.fileio import load as load_file
-from annotator.uniformer.mmcv.parallel import is_module_wrapper
-from annotator.uniformer.mmcv.utils import mkdir_or_exist
-from annotator.uniformer.mmcv.runner import get_dist_info
-
-ENV_MMCV_HOME = 'MMCV_HOME'
-ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME'
-DEFAULT_CACHE_DIR = '~/.cache'
-
-
-def _get_mmcv_home():
- mmcv_home = os.path.expanduser(
- os.getenv(
- ENV_MMCV_HOME,
- os.path.join(
- os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'mmcv')))
-
- mkdir_or_exist(mmcv_home)
- return mmcv_home
-
-
-def load_state_dict(module, state_dict, strict=False, logger=None):
- """Load state_dict to a module.
-
- This method is modified from :meth:`torch.nn.Module.load_state_dict`.
- Default value for ``strict`` is set to ``False`` and the message for
- param mismatch will be shown even if strict is False.
-
- Args:
- module (Module): Module that receives the state_dict.
- state_dict (OrderedDict): Weights.
- strict (bool): whether to strictly enforce that the keys
- in :attr:`state_dict` match the keys returned by this module's
- :meth:`~torch.nn.Module.state_dict` function. Default: ``False``.
- logger (:obj:`logging.Logger`, optional): Logger to log the error
- message. If not specified, print function will be used.
- """
- unexpected_keys = []
- all_missing_keys = []
- err_msg = []
-
- metadata = getattr(state_dict, '_metadata', None)
- state_dict = state_dict.copy()
- if metadata is not None:
- state_dict._metadata = metadata
-
- # use _load_from_state_dict to enable checkpoint version control
- def load(module, prefix=''):
- # recursively check parallel module in case that the model has a
- # complicated structure, e.g., nn.Module(nn.Module(DDP))
- if is_module_wrapper(module):
- module = module.module
- local_metadata = {} if metadata is None else metadata.get(
- prefix[:-1], {})
- module._load_from_state_dict(state_dict, prefix, local_metadata, True,
- all_missing_keys, unexpected_keys,
- err_msg)
- for name, child in module._modules.items():
- if child is not None:
- load(child, prefix + name + '.')
-
- load(module)
- load = None # break load->load reference cycle
-
- # ignore "num_batches_tracked" of BN layers
- missing_keys = [
- key for key in all_missing_keys if 'num_batches_tracked' not in key
- ]
-
- if unexpected_keys:
- err_msg.append('unexpected key in source '
- f'state_dict: {", ".join(unexpected_keys)}\n')
- if missing_keys:
- err_msg.append(
- f'missing keys in source state_dict: {", ".join(missing_keys)}\n')
-
- rank, _ = get_dist_info()
- if len(err_msg) > 0 and rank == 0:
- err_msg.insert(
- 0, 'The model and loaded state dict do not match exactly\n')
- err_msg = '\n'.join(err_msg)
- if strict:
- raise RuntimeError(err_msg)
- elif logger is not None:
- logger.warning(err_msg)
- else:
- print(err_msg)
-
-
-def load_url_dist(url, model_dir=None):
- """In distributed setting, this function only download checkpoint at local
- rank 0."""
- rank, world_size = get_dist_info()
- rank = int(os.environ.get('LOCAL_RANK', rank))
- if rank == 0:
- checkpoint = model_zoo.load_url(url, model_dir=model_dir)
- if world_size > 1:
- torch.distributed.barrier()
- if rank > 0:
- checkpoint = model_zoo.load_url(url, model_dir=model_dir)
- return checkpoint
-
-
-def load_pavimodel_dist(model_path, map_location=None):
- """In distributed setting, this function only download checkpoint at local
- rank 0."""
- try:
- from pavi import modelcloud
- except ImportError:
- raise ImportError(
- 'Please install pavi to load checkpoint from modelcloud.')
- rank, world_size = get_dist_info()
- rank = int(os.environ.get('LOCAL_RANK', rank))
- if rank == 0:
- model = modelcloud.get(model_path)
- with TemporaryDirectory() as tmp_dir:
- downloaded_file = osp.join(tmp_dir, model.name)
- model.download(downloaded_file)
- checkpoint = torch.load(downloaded_file, map_location=map_location)
- if world_size > 1:
- torch.distributed.barrier()
- if rank > 0:
- model = modelcloud.get(model_path)
- with TemporaryDirectory() as tmp_dir:
- downloaded_file = osp.join(tmp_dir, model.name)
- model.download(downloaded_file)
- checkpoint = torch.load(
- downloaded_file, map_location=map_location)
- return checkpoint
-
-
-def load_fileclient_dist(filename, backend, map_location):
- """In distributed setting, this function only download checkpoint at local
- rank 0."""
- rank, world_size = get_dist_info()
- rank = int(os.environ.get('LOCAL_RANK', rank))
- allowed_backends = ['ceph']
- if backend not in allowed_backends:
- raise ValueError(f'Load from Backend {backend} is not supported.')
- if rank == 0:
- fileclient = FileClient(backend=backend)
- buffer = io.BytesIO(fileclient.get(filename))
- checkpoint = torch.load(buffer, map_location=map_location)
- if world_size > 1:
- torch.distributed.barrier()
- if rank > 0:
- fileclient = FileClient(backend=backend)
- buffer = io.BytesIO(fileclient.get(filename))
- checkpoint = torch.load(buffer, map_location=map_location)
- return checkpoint
-
-
-def get_torchvision_models():
- model_urls = dict()
- for _, name, ispkg in pkgutil.walk_packages(torchvision.models.__path__):
- if ispkg:
- continue
- _zoo = import_module(f'torchvision.models.{name}')
- if hasattr(_zoo, 'model_urls'):
- _urls = getattr(_zoo, 'model_urls')
- model_urls.update(_urls)
- return model_urls
-
-
-def get_external_models():
- mmcv_home = _get_mmcv_home()
- default_json_path = osp.join(mmcv.__path__[0], 'model_zoo/open_mmlab.json')
- default_urls = load_file(default_json_path)
- assert isinstance(default_urls, dict)
- external_json_path = osp.join(mmcv_home, 'open_mmlab.json')
- if osp.exists(external_json_path):
- external_urls = load_file(external_json_path)
- assert isinstance(external_urls, dict)
- default_urls.update(external_urls)
-
- return default_urls
-
-
-def get_mmcls_models():
- mmcls_json_path = osp.join(mmcv.__path__[0], 'model_zoo/mmcls.json')
- mmcls_urls = load_file(mmcls_json_path)
-
- return mmcls_urls
-
-
-def get_deprecated_model_names():
- deprecate_json_path = osp.join(mmcv.__path__[0],
- 'model_zoo/deprecated.json')
- deprecate_urls = load_file(deprecate_json_path)
- assert isinstance(deprecate_urls, dict)
-
- return deprecate_urls
-
-
-def _process_mmcls_checkpoint(checkpoint):
- state_dict = checkpoint['state_dict']
- new_state_dict = OrderedDict()
- for k, v in state_dict.items():
- if k.startswith('backbone.'):
- new_state_dict[k[9:]] = v
- new_checkpoint = dict(state_dict=new_state_dict)
-
- return new_checkpoint
-
-
-def _load_checkpoint(filename, map_location=None):
- """Load checkpoint from somewhere (modelzoo, file, url).
-
- Args:
- filename (str): Accept local filepath, URL, ``torchvision://xxx``,
- ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for
- details.
- map_location (str | None): Same as :func:`torch.load`. Default: None.
-
- Returns:
- dict | OrderedDict: The loaded checkpoint. It can be either an
- OrderedDict storing model weights or a dict containing other
- information, which depends on the checkpoint.
- """
- if filename.startswith('modelzoo://'):
- warnings.warn('The URL scheme of "modelzoo://" is deprecated, please '
- 'use "torchvision://" instead')
- model_urls = get_torchvision_models()
- model_name = filename[11:]
- checkpoint = load_url_dist(model_urls[model_name])
- elif filename.startswith('torchvision://'):
- model_urls = get_torchvision_models()
- model_name = filename[14:]
- checkpoint = load_url_dist(model_urls[model_name])
- elif filename.startswith('open-mmlab://'):
- model_urls = get_external_models()
- model_name = filename[13:]
- deprecated_urls = get_deprecated_model_names()
- if model_name in deprecated_urls:
- warnings.warn(f'open-mmlab://{model_name} is deprecated in favor '
- f'of open-mmlab://{deprecated_urls[model_name]}')
- model_name = deprecated_urls[model_name]
- model_url = model_urls[model_name]
- # check if is url
- if model_url.startswith(('http://', 'https://')):
- checkpoint = load_url_dist(model_url)
- else:
- filename = osp.join(_get_mmcv_home(), model_url)
- if not osp.isfile(filename):
- raise IOError(f'{filename} is not a checkpoint file')
- checkpoint = torch.load(filename, map_location=map_location)
- elif filename.startswith('mmcls://'):
- model_urls = get_mmcls_models()
- model_name = filename[8:]
- checkpoint = load_url_dist(model_urls[model_name])
- checkpoint = _process_mmcls_checkpoint(checkpoint)
- elif filename.startswith(('http://', 'https://')):
- checkpoint = load_url_dist(filename)
- elif filename.startswith('pavi://'):
- model_path = filename[7:]
- checkpoint = load_pavimodel_dist(model_path, map_location=map_location)
- elif filename.startswith('s3://'):
- checkpoint = load_fileclient_dist(
- filename, backend='ceph', map_location=map_location)
- else:
- if not osp.isfile(filename):
- raise IOError(f'{filename} is not a checkpoint file')
- checkpoint = torch.load(filename, map_location=map_location)
- return checkpoint
-
-
-def load_checkpoint(model,
- filename,
- map_location='cpu',
- strict=False,
- logger=None):
- """Load checkpoint from a file or URI.
-
- Args:
- model (Module): Module to load checkpoint.
- filename (str): Accept local filepath, URL, ``torchvision://xxx``,
- ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for
- details.
- map_location (str): Same as :func:`torch.load`.
-        strict (bool): Whether to strictly enforce that the keys in the
-            checkpoint match the keys returned by the model's state_dict.
- logger (:mod:`logging.Logger` or None): The logger for error message.
-
- Returns:
- dict or OrderedDict: The loaded checkpoint.
- """
- checkpoint = _load_checkpoint(filename, map_location)
- # OrderedDict is a subclass of dict
- if not isinstance(checkpoint, dict):
- raise RuntimeError(
- f'No state_dict found in checkpoint file {filename}')
- # get state_dict from checkpoint
- if 'state_dict' in checkpoint:
- state_dict = checkpoint['state_dict']
- elif 'model' in checkpoint:
- state_dict = checkpoint['model']
- else:
- state_dict = checkpoint
- # strip prefix of state_dict
- if list(state_dict.keys())[0].startswith('module.'):
- state_dict = {k[7:]: v for k, v in state_dict.items()}
-
- # for MoBY, load model of online branch
- if sorted(list(state_dict.keys()))[0].startswith('encoder'):
- state_dict = {k.replace('encoder.', ''): v for k, v in state_dict.items() if k.startswith('encoder.')}
-
- # reshape absolute position embedding
- if state_dict.get('absolute_pos_embed') is not None:
- absolute_pos_embed = state_dict['absolute_pos_embed']
- N1, L, C1 = absolute_pos_embed.size()
- N2, C2, H, W = model.absolute_pos_embed.size()
- if N1 != N2 or C1 != C2 or L != H*W:
- logger.warning("Error in loading absolute_pos_embed, pass")
- else:
- state_dict['absolute_pos_embed'] = absolute_pos_embed.view(N2, H, W, C2).permute(0, 3, 1, 2)
-
- # interpolate position bias table if needed
- relative_position_bias_table_keys = [k for k in state_dict.keys() if "relative_position_bias_table" in k]
- for table_key in relative_position_bias_table_keys:
- table_pretrained = state_dict[table_key]
- table_current = model.state_dict()[table_key]
- L1, nH1 = table_pretrained.size()
- L2, nH2 = table_current.size()
- if nH1 != nH2:
- logger.warning(f"Error in loading {table_key}, pass")
- else:
- if L1 != L2:
- S1 = int(L1 ** 0.5)
- S2 = int(L2 ** 0.5)
- table_pretrained_resized = F.interpolate(
- table_pretrained.permute(1, 0).view(1, nH1, S1, S1),
- size=(S2, S2), mode='bicubic')
- state_dict[table_key] = table_pretrained_resized.view(nH2, L2).permute(1, 0)
-
- # load state_dict
- load_state_dict(model, state_dict, strict, logger)
- return checkpoint
-
-
-def weights_to_cpu(state_dict):
- """Copy a model state_dict to cpu.
-
- Args:
- state_dict (OrderedDict): Model weights on GPU.
-
- Returns:
- OrderedDict: Model weights on GPU.
- """
- state_dict_cpu = OrderedDict()
- for key, val in state_dict.items():
- state_dict_cpu[key] = val.cpu()
- return state_dict_cpu
-
-
-def _save_to_state_dict(module, destination, prefix, keep_vars):
- """Saves module state to `destination` dictionary.
-
- This method is modified from :meth:`torch.nn.Module._save_to_state_dict`.
-
- Args:
- module (nn.Module): The module to generate state_dict.
- destination (dict): A dict where state will be stored.
- prefix (str): The prefix for parameters and buffers used in this
- module.
- """
- for name, param in module._parameters.items():
- if param is not None:
- destination[prefix + name] = param if keep_vars else param.detach()
- for name, buf in module._buffers.items():
- # remove check of _non_persistent_buffers_set to allow nn.BatchNorm2d
- if buf is not None:
- destination[prefix + name] = buf if keep_vars else buf.detach()
-
-
-def get_state_dict(module, destination=None, prefix='', keep_vars=False):
- """Returns a dictionary containing a whole state of the module.
-
- Both parameters and persistent buffers (e.g. running averages) are
- included. Keys are corresponding parameter and buffer names.
-
- This method is modified from :meth:`torch.nn.Module.state_dict` to
- recursively check parallel module in case that the model has a complicated
- structure, e.g., nn.Module(nn.Module(DDP)).
-
- Args:
- module (nn.Module): The module to generate state_dict.
- destination (OrderedDict): Returned dict for the state of the
- module.
- prefix (str): Prefix of the key.
- keep_vars (bool): Whether to keep the variable property of the
- parameters. Default: False.
-
- Returns:
- dict: A dictionary containing a whole state of the module.
- """
- # recursively check parallel module in case that the model has a
- # complicated structure, e.g., nn.Module(nn.Module(DDP))
- if is_module_wrapper(module):
- module = module.module
-
- # below is the same as torch.nn.Module.state_dict()
- if destination is None:
- destination = OrderedDict()
- destination._metadata = OrderedDict()
- destination._metadata[prefix[:-1]] = local_metadata = dict(
- version=module._version)
- _save_to_state_dict(module, destination, prefix, keep_vars)
- for name, child in module._modules.items():
- if child is not None:
- get_state_dict(
- child, destination, prefix + name + '.', keep_vars=keep_vars)
- for hook in module._state_dict_hooks.values():
- hook_result = hook(module, destination, prefix, local_metadata)
- if hook_result is not None:
- destination = hook_result
- return destination
-
-
-def save_checkpoint(model, filename, optimizer=None, meta=None):
- """Save checkpoint to file.
-
- The checkpoint will have 3 fields: ``meta``, ``state_dict`` and
- ``optimizer``. By default ``meta`` will contain version and time info.
-
- Args:
- model (Module): Module whose params are to be saved.
- filename (str): Checkpoint filename.
- optimizer (:obj:`Optimizer`, optional): Optimizer to be saved.
- meta (dict, optional): Metadata to be saved in checkpoint.
- """
- if meta is None:
- meta = {}
- elif not isinstance(meta, dict):
- raise TypeError(f'meta must be a dict or None, but got {type(meta)}')
- meta.update(mmcv_version=mmcv.__version__, time=time.asctime())
-
- if is_module_wrapper(model):
- model = model.module
-
- if hasattr(model, 'CLASSES') and model.CLASSES is not None:
- # save class name to the meta
- meta.update(CLASSES=model.CLASSES)
-
- checkpoint = {
- 'meta': meta,
- 'state_dict': weights_to_cpu(get_state_dict(model))
- }
- # save optimizer state dict in the checkpoint
- if isinstance(optimizer, Optimizer):
- checkpoint['optimizer'] = optimizer.state_dict()
- elif isinstance(optimizer, dict):
- checkpoint['optimizer'] = {}
- for name, optim in optimizer.items():
- checkpoint['optimizer'][name] = optim.state_dict()
-
- if filename.startswith('pavi://'):
- try:
- from pavi import modelcloud
- from pavi.exception import NodeNotFoundError
- except ImportError:
- raise ImportError(
- 'Please install pavi to load checkpoint from modelcloud.')
- model_path = filename[7:]
- root = modelcloud.Folder()
- model_dir, model_name = osp.split(model_path)
- try:
- model = modelcloud.get(model_dir)
- except NodeNotFoundError:
- model = root.create_training_model(model_dir)
- with TemporaryDirectory() as tmp_dir:
- checkpoint_file = osp.join(tmp_dir, model_name)
- with open(checkpoint_file, 'wb') as f:
- torch.save(checkpoint, f)
- f.flush()
- model.create_file(checkpoint_file, name=model_name)
- else:
- mmcv.mkdir_or_exist(osp.dirname(filename))
- # immediately flush buffer
- with open(filename, 'wb') as f:
- torch.save(checkpoint, f)
- f.flush()
\ No newline at end of file
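Two of the key manipulations in `load_checkpoint` above, stripping a DataParallel 'module.' prefix and keeping only the MoBY online 'encoder.' branch, can be exercised in isolation. The helper name and the fake keys below are illustrative only.

```python
from collections import OrderedDict

def normalize_keys(state_dict):
    """Mirror the prefix handling done before load_state_dict()."""
    if next(iter(state_dict)).startswith('module.'):
        state_dict = OrderedDict((k[7:], v) for k, v in state_dict.items())
    if sorted(state_dict)[0].startswith('encoder'):
        state_dict = OrderedDict((k.replace('encoder.', ''), v)
                                 for k, v in state_dict.items()
                                 if k.startswith('encoder.'))
    return state_dict

ckpt = {'module.encoder.layer1.weight': 0, 'module.encoder_k.layer1.weight': 1}
print(list(normalize_keys(ckpt)))  # ['layer1.weight']
```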
diff --git a/spaces/MilaNLProc/wordify/Dockerfile b/spaces/MilaNLProc/wordify/Dockerfile
deleted file mode 100644
index bac531b919b6618e93d0647bd5b6ff7de0028ed2..0000000000000000000000000000000000000000
--- a/spaces/MilaNLProc/wordify/Dockerfile
+++ /dev/null
@@ -1,7 +0,0 @@
-FROM python:3.7
-
-COPY . /var/app/
-WORKDIR /var/app
-RUN pip install --upgrade pip
-RUN pip install -r requirements.txt
-CMD streamlit run ./app.py
diff --git a/spaces/Monteg/anything-v3.0/README.md b/spaces/Monteg/anything-v3.0/README.md
deleted file mode 100644
index 15176bed26d36b4f9566c7102a5655e310f76036..0000000000000000000000000000000000000000
--- a/spaces/Monteg/anything-v3.0/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Anything V3.0
-emoji: 🏃
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.10.1
-app_file: app.py
-pinned: false
-duplicated_from: akhaliq/anything-v3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/nrtr/nrtr_resnet31-1by16-1by8_6e_st_mj.py b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/nrtr/nrtr_resnet31-1by16-1by8_6e_st_mj.py
deleted file mode 100644
index 2d3f019d5729a806985514cc8d17e9463e269df2..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/nrtr/nrtr_resnet31-1by16-1by8_6e_st_mj.py
+++ /dev/null
@@ -1,56 +0,0 @@
-_base_ = [
- '../_base_/datasets/mjsynth.py',
- '../_base_/datasets/synthtext.py',
- '../_base_/datasets/cute80.py',
- '../_base_/datasets/iiit5k.py',
- '../_base_/datasets/svt.py',
- '../_base_/datasets/svtp.py',
- '../_base_/datasets/icdar2013.py',
- '../_base_/datasets/icdar2015.py',
- '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_adam_base.py',
- '_base_nrtr_resnet31.py',
-]
-
-# optimizer settings
-train_cfg = dict(max_epochs=6)
-# learning policy
-param_scheduler = [
- dict(type='MultiStepLR', milestones=[3, 4], end=6),
-]
-
-# dataset settings
-train_list = [_base_.mjsynth_textrecog_train, _base_.synthtext_textrecog_train]
-test_list = [
- _base_.cute80_textrecog_test, _base_.iiit5k_textrecog_test,
- _base_.svt_textrecog_test, _base_.svtp_textrecog_test,
- _base_.icdar2013_textrecog_test, _base_.icdar2015_textrecog_test
-]
-
-train_dataset = dict(
- type='ConcatDataset', datasets=train_list, pipeline=_base_.train_pipeline)
-test_dataset = dict(
- type='ConcatDataset', datasets=test_list, pipeline=_base_.test_pipeline)
-
-train_dataloader = dict(
- batch_size=384,
- num_workers=24,
- persistent_workers=True,
- sampler=dict(type='DefaultSampler', shuffle=True),
- dataset=train_dataset)
-
-test_dataloader = dict(
- batch_size=1,
- num_workers=4,
- persistent_workers=True,
- drop_last=False,
- sampler=dict(type='DefaultSampler', shuffle=False),
- dataset=test_dataset)
-
-val_dataloader = test_dataloader
-
-val_evaluator = dict(
- dataset_prefixes=['CUTE80', 'IIIT5K', 'SVT', 'SVTP', 'IC13', 'IC15'])
-test_evaluator = val_evaluator
-
-auto_scale_lr = dict(base_batch_size=384)
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/evaluation/functional/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/evaluation/functional/__init__.py
deleted file mode 100644
index 6aaf75768924bef3e7ad6dc1c9d6d0161aab9879..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/evaluation/functional/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .hmean import compute_hmean
-
-__all__ = ['compute_hmean']
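`compute_hmean` is only re-exported here and its exact signature is not visible in this diff, so the following just sketches the underlying quantity, the harmonic mean (F-score) of precision and recall.

```python
def hmean(precision: float, recall: float, eps: float = 1e-8) -> float:
    # harmonic mean of precision and recall, guarded against division by zero
    return 2 * precision * recall / (precision + recall + eps)

print(round(hmean(0.8, 0.6), 4))  # 0.6857
```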
diff --git a/spaces/NATSpeech/PortaSpeech/data_gen/tts/base_preprocess.py b/spaces/NATSpeech/PortaSpeech/data_gen/tts/base_preprocess.py
deleted file mode 100644
index a5e607edb4114c7d6edd60a61e5b765e8cbfb9ac..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/data_gen/tts/base_preprocess.py
+++ /dev/null
@@ -1,251 +0,0 @@
-import json
-import os
-import random
-import re
-import traceback
-from collections import Counter
-from functools import partial
-
-import librosa
-from tqdm import tqdm
-from data_gen.tts.txt_processors.base_text_processor import get_txt_processor_cls
-from data_gen.tts.wav_processors.base_processor import get_wav_processor_cls
-from utils.commons.hparams import hparams
-from utils.commons.multiprocess_utils import multiprocess_run_tqdm
-from utils.os_utils import link_file, move_file, remove_file
-from utils.text.text_encoder import is_sil_phoneme, build_token_encoder
-
-
-class BasePreprocessor:
- def __init__(self):
- self.preprocess_args = hparams['preprocess_args']
- txt_processor = self.preprocess_args['txt_processor']
- self.txt_processor = get_txt_processor_cls(txt_processor)
- self.raw_data_dir = hparams['raw_data_dir']
- self.processed_dir = hparams['processed_data_dir']
- self.spk_map_fn = f"{self.processed_dir}/spk_map.json"
-
- def meta_data(self):
- """
-
- :return: {'item_name': Str, 'wav_fn': Str, 'txt': Str, 'spk_name': Str, 'txt_loader': None or Func}
- """
- raise NotImplementedError
-
- def process(self):
- processed_dir = self.processed_dir
- wav_processed_tmp_dir = f'{processed_dir}/processed_tmp'
- remove_file(wav_processed_tmp_dir)
- os.makedirs(wav_processed_tmp_dir, exist_ok=True)
- wav_processed_dir = f'{processed_dir}/{self.wav_processed_dirname}'
- remove_file(wav_processed_dir)
- os.makedirs(wav_processed_dir, exist_ok=True)
-
- meta_data = list(tqdm(self.meta_data(), desc='Load meta data'))
- item_names = [d['item_name'] for d in meta_data]
- assert len(item_names) == len(set(item_names)), 'Key `item_name` should be Unique.'
-
- # preprocess data
- phone_list = []
- word_list = []
- spk_names = set()
- process_item = partial(self.preprocess_first_pass,
- txt_processor=self.txt_processor,
- wav_processed_dir=wav_processed_dir,
- wav_processed_tmp=wav_processed_tmp_dir,
- preprocess_args=self.preprocess_args)
- items = []
- args = [{
- 'item_name': item_raw['item_name'],
- 'txt_raw': item_raw['txt'],
- 'wav_fn': item_raw['wav_fn'],
- 'txt_loader': item_raw.get('txt_loader'),
- 'others': item_raw.get('others', None)
- } for item_raw in meta_data]
- for item_, (item_id, item) in zip(meta_data, multiprocess_run_tqdm(process_item, args, desc='Preprocess')):
- if item is not None:
- item_.update(item)
- item = item_
- if 'txt_loader' in item:
- del item['txt_loader']
- item['id'] = item_id
- item['spk_name'] = item.get('spk_name', '')
- item['others'] = item.get('others', None)
- phone_list += item['ph'].split(" ")
- word_list += item['word'].split(" ")
- spk_names.add(item['spk_name'])
- items.append(item)
-
- # add encoded tokens
- ph_encoder, word_encoder = self._phone_encoder(phone_list), self._word_encoder(word_list)
- spk_map = self.build_spk_map(spk_names)
- args = [{
- 'ph': item['ph'], 'word': item['word'], 'spk_name': item['spk_name'],
- 'word_encoder': word_encoder, 'ph_encoder': ph_encoder, 'spk_map': spk_map
- } for item in items]
- for idx, item_new_kv in multiprocess_run_tqdm(self.preprocess_second_pass, args, desc='Add encoded tokens'):
- items[idx].update(item_new_kv)
-
- # build mfa data
- if self.preprocess_args['use_mfa']:
- mfa_dict = set()
- mfa_input_dir = f'{processed_dir}/mfa_inputs'
- remove_file(mfa_input_dir)
- # group MFA inputs for better parallelism
- mfa_groups = [i // self.preprocess_args['nsample_per_mfa_group'] for i in range(len(items))]
- if self.preprocess_args['mfa_group_shuffle']:
- random.seed(hparams['seed'])
- random.shuffle(mfa_groups)
- args = [{
- 'item': item, 'mfa_input_dir': mfa_input_dir,
- 'mfa_group': mfa_group, 'wav_processed_tmp': wav_processed_tmp_dir,
- 'preprocess_args': self.preprocess_args
- } for item, mfa_group in zip(items, mfa_groups)]
- for i, (ph_gb_word_nosil, new_wav_align_fn) in multiprocess_run_tqdm(
- self.build_mfa_inputs, args, desc='Build MFA data'):
- items[i]['wav_align_fn'] = new_wav_align_fn
- for w in ph_gb_word_nosil.split(" "):
- mfa_dict.add(f"{w} {w.replace('_', ' ')}")
- mfa_dict = sorted(mfa_dict)
- with open(f'{processed_dir}/mfa_dict.txt', 'w') as f:
- f.writelines([f'{l}\n' for l in mfa_dict])
- with open(f"{processed_dir}/{self.meta_csv_filename}.json", 'w') as f:
- f.write(re.sub(r'\n\s+([\d+\]])', r'\1', json.dumps(items, ensure_ascii=False, sort_keys=False, indent=1)))
- remove_file(wav_processed_tmp_dir)
-
- @classmethod
- def preprocess_first_pass(cls, item_name, txt_raw, txt_processor,
- wav_fn, wav_processed_dir, wav_processed_tmp,
- preprocess_args, txt_loader=None, others=None):
- try:
- if txt_loader is not None:
- txt_raw = txt_loader(txt_raw)
- ph, txt, word, ph2word, ph_gb_word = cls.txt_to_ph(txt_processor, txt_raw, preprocess_args)
- wav_fn, wav_align_fn = cls.process_wav(
- item_name, wav_fn,
- hparams['processed_data_dir'],
- wav_processed_tmp, preprocess_args)
-
- # wav for binarization
- ext = os.path.splitext(wav_fn)[1]
- os.makedirs(wav_processed_dir, exist_ok=True)
- new_wav_fn = f"{wav_processed_dir}/{item_name}{ext}"
- move_link_func = move_file if os.path.dirname(wav_fn) == wav_processed_tmp else link_file
- move_link_func(wav_fn, new_wav_fn)
- return {
- 'txt': txt, 'txt_raw': txt_raw, 'ph': ph,
- 'word': word, 'ph2word': ph2word, 'ph_gb_word': ph_gb_word,
- 'wav_fn': new_wav_fn, 'wav_align_fn': wav_align_fn,
- 'others': others
- }
- except:
- traceback.print_exc()
- print(f"| Error is caught. item_name: {item_name}.")
- return None
-
- @staticmethod
- def txt_to_ph(txt_processor, txt_raw, preprocess_args):
- txt_struct, txt = txt_processor.process(txt_raw, preprocess_args)
- ph = [p for w in txt_struct for p in w[1]]
- ph_gb_word = ["_".join(w[1]) for w in txt_struct]
- words = [w[0] for w in txt_struct]
- # word_id=0 is reserved for padding
- ph2word = [w_id + 1 for w_id, w in enumerate(txt_struct) for _ in range(len(w[1]))]
- return " ".join(ph), txt, " ".join(words), ph2word, " ".join(ph_gb_word)
-
- @staticmethod
- def process_wav(item_name, wav_fn, processed_dir, wav_processed_tmp, preprocess_args):
- processors = [get_wav_processor_cls(v) for v in preprocess_args['wav_processors']]
- processors = [k() for k in processors if k is not None]
- if len(processors) >= 1:
- sr_file = librosa.core.get_samplerate(wav_fn)
- output_fn_for_align = None
- ext = os.path.splitext(wav_fn)[1]
- input_fn = f"{wav_processed_tmp}/{item_name}{ext}"
- link_file(wav_fn, input_fn)
- for p in processors:
- outputs = p.process(input_fn, sr_file, wav_processed_tmp, processed_dir, item_name, preprocess_args)
- if len(outputs) == 3:
- input_fn, sr, output_fn_for_align = outputs
- else:
- input_fn, sr = outputs
- return input_fn, output_fn_for_align
- else:
- return wav_fn, wav_fn
-
- def _phone_encoder(self, ph_set):
- ph_set_fn = f"{self.processed_dir}/phone_set.json"
- if self.preprocess_args['reset_phone_dict'] or not os.path.exists(ph_set_fn):
- ph_set = sorted(set(ph_set))
- json.dump(ph_set, open(ph_set_fn, 'w'), ensure_ascii=False)
- print("| Build phone set: ", ph_set)
- else:
- ph_set = json.load(open(ph_set_fn, 'r'))
- print("| Load phone set: ", ph_set)
- return build_token_encoder(ph_set_fn)
-
- def _word_encoder(self, word_set):
- word_set_fn = f"{self.processed_dir}/word_set.json"
- if self.preprocess_args['reset_word_dict']:
- word_set = Counter(word_set)
- total_words = sum(word_set.values())
- word_set = word_set.most_common(hparams['word_dict_size'])
- num_unk_words = total_words - sum([x[1] for x in word_set])
-            word_set = ['<BOS>', '<EOS>'] + [x[0] for x in word_set]
- word_set = sorted(set(word_set))
- json.dump(word_set, open(word_set_fn, 'w'), ensure_ascii=False)
- print(f"| Build word set. Size: {len(word_set)}, #total words: {total_words},"
- f" #unk_words: {num_unk_words}, word_set[:10]:, {word_set[:10]}.")
- else:
- word_set = json.load(open(word_set_fn, 'r'))
- print("| Load word set. Size: ", len(word_set), word_set[:10])
- return build_token_encoder(word_set_fn)
-
- @classmethod
- def preprocess_second_pass(cls, word, ph, spk_name, word_encoder, ph_encoder, spk_map):
- word_token = word_encoder.encode(word)
- ph_token = ph_encoder.encode(ph)
- spk_id = spk_map[spk_name]
- return {'word_token': word_token, 'ph_token': ph_token, 'spk_id': spk_id}
-
- def build_spk_map(self, spk_names):
- spk_map = {x: i for i, x in enumerate(sorted(list(spk_names)))}
- assert len(spk_map) == 0 or len(spk_map) <= hparams['num_spk'], len(spk_map)
- print(f"| Number of spks: {len(spk_map)}, spk_map: {spk_map}")
- json.dump(spk_map, open(self.spk_map_fn, 'w'), ensure_ascii=False)
- return spk_map
-
- @classmethod
- def build_mfa_inputs(cls, item, mfa_input_dir, mfa_group, wav_processed_tmp, preprocess_args):
- item_name = item['item_name']
- wav_align_fn = item['wav_align_fn']
- ph_gb_word = item['ph_gb_word']
- ext = os.path.splitext(wav_align_fn)[1]
- mfa_input_group_dir = f'{mfa_input_dir}/{mfa_group}'
- os.makedirs(mfa_input_group_dir, exist_ok=True)
- new_wav_align_fn = f"{mfa_input_group_dir}/{item_name}{ext}"
- move_link_func = move_file if os.path.dirname(wav_align_fn) == wav_processed_tmp else link_file
- move_link_func(wav_align_fn, new_wav_align_fn)
- ph_gb_word_nosil = " ".join(["_".join([p for p in w.split("_") if not is_sil_phoneme(p)])
- for w in ph_gb_word.split(" ") if not is_sil_phoneme(w)])
- with open(f'{mfa_input_group_dir}/{item_name}.lab', 'w') as f_txt:
- f_txt.write(ph_gb_word_nosil)
- return ph_gb_word_nosil, new_wav_align_fn
-
- def load_spk_map(self, base_dir):
- spk_map_fn = f"{base_dir}/spk_map.json"
- spk_map = json.load(open(spk_map_fn, 'r'))
- return spk_map
-
- def load_dict(self, base_dir):
- ph_encoder = build_token_encoder(f'{base_dir}/phone_set.json')
- word_encoder = build_token_encoder(f'{base_dir}/word_set.json')
- return ph_encoder, word_encoder
-
- @property
- def meta_csv_filename(self):
- return 'metadata'
-
- @property
- def wav_processed_dirname(self):
- return 'wav_processed'
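For reference, a minimal standalone sketch of how `txt_to_ph` above builds the phone list and the 1-based `ph2word` mapping (the two-word `txt_struct` is a made-up example, not data from the pipeline):

```python
# Hypothetical txt_struct: each entry is (word, [phones]). word_id 0 is
# reserved for padding, so ph2word indices start at 1, as in txt_to_ph.
txt_struct = [("hello", ["HH", "AH", "L", "OW"]), ("world", ["W", "ER", "L", "D"])]

ph = [p for w in txt_struct for p in w[1]]
ph2word = [w_id + 1 for w_id, w in enumerate(txt_struct) for _ in range(len(w[1]))]
ph_gb_word = ["_".join(w[1]) for w in txt_struct]

print(ph)          # ['HH', 'AH', 'L', 'OW', 'W', 'ER', 'L', 'D']
print(ph2word)     # [1, 1, 1, 1, 2, 2, 2, 2]
print(ph_gb_word)  # ['HH_AH_L_OW', 'W_ER_L_D']
```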
diff --git a/spaces/NCTCMumbai/NCTC/models/research/adversarial_crypto/README.md b/spaces/NCTCMumbai/NCTC/models/research/adversarial_crypto/README.md
deleted file mode 100644
index 3822def1325b8d4eb1fd31335f2f8ce053ff747a..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/adversarial_crypto/README.md
+++ /dev/null
@@ -1,62 +0,0 @@
-
-
-
-
-# Learning to Protect Communications with Adversarial Neural Cryptography
-
-This is a slightly-updated model used for the paper
-["Learning to Protect Communications with Adversarial Neural
-Cryptography"](https://arxiv.org/abs/1610.06918).
-
-> We ask whether neural networks can learn to use secret keys to protect
-> information from other neural networks. Specifically, we focus on ensuring
-> confidentiality properties in a multiagent system, and we specify those
-> properties in terms of an adversary. Thus, a system may consist of neural
-> networks named Alice and Bob, and we aim to limit what a third neural
-> network named Eve learns from eavesdropping on the communication between
-> Alice and Bob. We do not prescribe specific cryptographic algorithms to
-> these neural networks; instead, we train end-to-end, adversarially.
-> We demonstrate that the neural networks can learn how to perform forms of
-> encryption and decryption, and also how to apply these operations
-> selectively in order to meet confidentiality goals.
-
-This code allows you to train encoder/decoder/adversary network triplets
-and evaluate their effectiveness on randomly generated input and key
-pairs.
-
-## Prerequisites
-
-The only software requirement for running the encoder and decoder is having
-TensorFlow installed.
-
-Requires TensorFlow r0.12 or later.
-
-## Training and evaluating
-
-After installing TensorFlow and ensuring that your paths are configured
-appropriately:
-
-```
-python train_eval.py
-```
-
-This will begin training a fresh model. If and when the model becomes
-sufficiently well-trained, it will reset the Eve model multiple times
-and retrain it from scratch, outputting the accuracy thus obtained
-in each run.
-
-## Model differences from the paper
-
-The model has been simplified slightly from the one described in
-the paper - the convolutional layer width was reduced by a factor
-of two. In the version in the paper, there was a nonlinear unit
-after the fully-connected layer; that nonlinearity has been removed
-here. These changes improve the robustness of training. The
-initializer for the convolution layers has been switched to the
-`tf.contrib.layers` default of `xavier_initializer` instead of
-a simpler `truncated_normal`.
-
-## Contact information
-
-This model repository is maintained by David G. Andersen
-([dave-andersen](https://github.com/dave-andersen)).
diff --git a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/sequence_layers.py b/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/sequence_layers.py
deleted file mode 100644
index 9261f210ba5c28cc243098de17db850e3f90c2c4..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/sequence_layers.py
+++ /dev/null
@@ -1,422 +0,0 @@
-# Copyright 2017 The TensorFlow Authors All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-"""Various implementations of sequence layers for character prediction.
-
-A 'sequence layer' is a part of a computation graph which is responsible for
-producing a sequence of characters using extracted image features. There are
-many reasonable ways to implement such layers. All of them use RNNs.
-This module provides implementations which use an 'attention' mechanism to
-spatially 'pool' image features and can also use a previously predicted
-character to predict the next one (aka auto-regression).
-
-Usage:
- Select one of the available classes, e.g. Attention, or use a wrapper function
- to pick one based on your requirements:
- layer_class = sequence_layers.get_layer_class(use_attention=True,
- use_autoregression=True)
- layer = layer_class(net, labels_one_hot, model_params, method_params)
- char_logits = layer.create_logits()
-"""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import collections
-import abc
-import logging
-import numpy as np
-
-import tensorflow as tf
-
-from tensorflow.contrib import slim
-
-
-def orthogonal_initializer(shape, dtype=tf.float32, *args, **kwargs):
- """Generates orthonormal matrices with random values.
-
- Orthonormal initialization is important for RNNs:
- http://arxiv.org/abs/1312.6120
- http://smerity.com/articles/2016/orthogonal_init.html
-
- For non-square shapes the returned matrix will be semi-orthonormal: if the
- number of columns exceeds the number of rows, then the rows are orthonormal
- vectors; but if the number of rows exceeds the number of columns, then the
- columns are orthonormal vectors.
-
- We use SVD decomposition to generate an orthonormal matrix with random
- values, the same way as it is done in the Lasagne library for Theano. Note
- that both u and v returned by the svd are orthogonal and random. We just need
- to pick one with the right shape.
-
- Args:
- shape: a shape of the tensor matrix to initialize.
- dtype: a dtype of the initialized tensor.
- *args: not used.
- **kwargs: not used.
-
- Returns:
- An initialized tensor.
- """
- del args
- del kwargs
- flat_shape = (shape[0], np.prod(shape[1:]))
- w = np.random.randn(*flat_shape)
- u, _, v = np.linalg.svd(w, full_matrices=False)
- w = u if u.shape == flat_shape else v
- return tf.constant(w.reshape(shape), dtype=dtype)
-
-
-SequenceLayerParams = collections.namedtuple('SequenceLogitsParams', [
- 'num_lstm_units', 'weight_decay', 'lstm_state_clip_value'
-])
-
-
-class SequenceLayerBase(object):
- """A base abstruct class for all sequence layers.
-
- A child class has to define following methods:
- get_train_input
- get_eval_input
- unroll_cell
- """
- __metaclass__ = abc.ABCMeta
-
- def __init__(self, net, labels_one_hot, model_params, method_params):
- """Stores argument in member variable for further use.
-
- Args:
- net: A tensor with shape [batch_size, num_features, feature_size] which
- contains some extracted image features.
- labels_one_hot: Optional (can be None) ground truth labels for the
- input features. A tensor with shape
- [batch_size, seq_length, num_char_classes].
- model_params: A namedtuple with model parameters (model.ModelParams).
- method_params: A SequenceLayerParams instance.
- """
- self._params = model_params
- self._mparams = method_params
- self._net = net
- self._labels_one_hot = labels_one_hot
- self._batch_size = net.get_shape().dims[0].value
-
- # Initialize parameters for char logits which will be computed on the fly
- # inside an LSTM decoder.
- self._char_logits = {}
- regularizer = slim.l2_regularizer(self._mparams.weight_decay)
- self._softmax_w = slim.model_variable(
- 'softmax_w',
- [self._mparams.num_lstm_units, self._params.num_char_classes],
- initializer=orthogonal_initializer,
- regularizer=regularizer)
- self._softmax_b = slim.model_variable(
- 'softmax_b', [self._params.num_char_classes],
- initializer=tf.zeros_initializer(),
- regularizer=regularizer)
-
- @abc.abstractmethod
- def get_train_input(self, prev, i):
- """Returns a sample to be used to predict a character during training.
-
- This function is used as a loop_function for an RNN decoder.
-
- Args:
- prev: output tensor from previous step of the RNN. A tensor with shape:
- [batch_size, num_char_classes].
- i: index of a character in the output sequence.
-
- Returns:
- A tensor with shape [batch_size, ?] - depth depends on implementation
- details.
- """
- pass
-
- @abc.abstractmethod
- def get_eval_input(self, prev, i):
- """Returns a sample to be used to predict a character during inference.
-
- This function is used as a loop_function for an RNN decoder.
-
- Args:
- prev: output tensor from previous step of the RNN. A tensor with shape:
- [batch_size, num_char_classes].
- i: index of a character in the output sequence.
-
- Returns:
- A tensor with shape [batch_size, ?] - depth depends on implementation
- details.
- """
- raise AssertionError('Not implemented')
-
- @abc.abstractmethod
- def unroll_cell(self, decoder_inputs, initial_state, loop_function, cell):
- """Unrolls an RNN cell for all inputs.
-
- This is a placeholder to call some RNN decoder. It has an interface
- similar to tf.seq2seq.rnn_decoder.
-
- Args:
- decoder_inputs: A list of 2D Tensors* [batch_size x input_size]. In fact,
- most existing decoders, in the presence of a loop_function, use only the
- first element to determine batch_size and the length of the list to
- determine the number of steps.
- initial_state: 2D Tensor with shape [batch_size x cell.state_size].
- loop_function: a function applied to the i-th output in order to
- generate the (i+1)-st input (see self.get_input).
- cell: rnn_cell.RNNCell defining the cell function and size.
-
- Returns:
- A tuple of the form (outputs, state), where:
- outputs: A list of character logits of the same length as
- decoder_inputs of 2D Tensors with shape [batch_size x num_characters].
- state: The state of each cell at the final time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
- """
- pass
-
- def is_training(self):
- """Returns True if the layer is created for training stage."""
- return self._labels_one_hot is not None
-
- def char_logit(self, inputs, char_index):
- """Creates logits for a character if required.
-
- Args:
- inputs: A tensor with shape [batch_size, ?] (depth is implementation
- dependent).
- char_index: An integer index of a character in the output sequence.
-
- Returns:
- A tensor with shape [batch_size, num_char_classes]
- """
- if char_index not in self._char_logits:
- self._char_logits[char_index] = tf.nn.xw_plus_b(inputs, self._softmax_w,
- self._softmax_b)
- return self._char_logits[char_index]
-
- def char_one_hot(self, logit):
- """Creates one hot encoding for a logit of a character.
-
- Args:
- logit: A tensor with shape [batch_size, num_char_classes].
-
- Returns:
- A tensor with shape [batch_size, num_char_classes]
- """
- prediction = tf.argmax(logit, axis=1)
- return slim.one_hot_encoding(prediction, self._params.num_char_classes)
-
- def get_input(self, prev, i):
- """A wrapper for get_train_input and get_eval_input.
-
- Args:
- prev: output tensor from previous step of the RNN. A tensor with shape:
- [batch_size, num_char_classes].
- i: index of a character in the output sequence.
-
- Returns:
- A tensor with shape [batch_size, ?] - depth depends on implementation
- details.
- """
- if self.is_training():
- return self.get_train_input(prev, i)
- else:
- return self.get_eval_input(prev, i)
-
- def create_logits(self):
- """Creates character sequence logits for a net specified in the constructor.
-
- A "main" method for the sequence layer which glues together all pieces.
-
- Returns:
- A tensor with shape [batch_size, seq_length, num_char_classes].
- """
- with tf.variable_scope('LSTM'):
- first_label = self.get_input(prev=None, i=0)
- decoder_inputs = [first_label] + [None] * (self._params.seq_length - 1)
- lstm_cell = tf.contrib.rnn.LSTMCell(
- self._mparams.num_lstm_units,
- use_peepholes=False,
- cell_clip=self._mparams.lstm_state_clip_value,
- state_is_tuple=True,
- initializer=orthogonal_initializer)
- lstm_outputs, _ = self.unroll_cell(
- decoder_inputs=decoder_inputs,
- initial_state=lstm_cell.zero_state(self._batch_size, tf.float32),
- loop_function=self.get_input,
- cell=lstm_cell)
-
- with tf.variable_scope('logits'):
- logits_list = [
- tf.expand_dims(self.char_logit(logit, i), dim=1)
- for i, logit in enumerate(lstm_outputs)
- ]
-
- return tf.concat(logits_list, 1)
-
-
-class NetSlice(SequenceLayerBase):
- """A layer which uses a subset of image features to predict each character.
- """
-
- def __init__(self, *args, **kwargs):
- super(NetSlice, self).__init__(*args, **kwargs)
- self._zero_label = tf.zeros(
- [self._batch_size, self._params.num_char_classes])
-
- def get_image_feature(self, char_index):
- """Returns a subset of image features for a character.
-
- Args:
- char_index: an index of a character.
-
- Returns:
- A tensor with shape [batch_size, ?]. The output depth depends on the
- depth of input net.
- """
- batch_size, features_num, _ = [d.value for d in self._net.get_shape()]
- slice_len = int(features_num / self._params.seq_length)
- # In the case when features_num != seq_length, we just pick a subset of image
- # features; this choice is arbitrary and there is no intuitive geometrical
- # interpretation. If features_num is not divisible by seq_length there will
- # be unused image features.
- net_slice = self._net[:, char_index:char_index + slice_len, :]
- feature = tf.reshape(net_slice, [batch_size, -1])
- logging.debug('Image feature: %s', feature)
- return feature
-
- def get_eval_input(self, prev, i):
- """See SequenceLayerBase.get_eval_input for details."""
- del prev
- return self.get_image_feature(i)
-
- def get_train_input(self, prev, i):
- """See SequenceLayerBase.get_train_input for details."""
- return self.get_eval_input(prev, i)
-
- def unroll_cell(self, decoder_inputs, initial_state, loop_function, cell):
- """See SequenceLayerBase.unroll_cell for details."""
- return tf.contrib.legacy_seq2seq.rnn_decoder(
- decoder_inputs=decoder_inputs,
- initial_state=initial_state,
- cell=cell,
- loop_function=self.get_input)
-
-
-class NetSliceWithAutoregression(NetSlice):
- """A layer similar to NetSlice, but it also uses auto regression.
-
- The "auto regression" means that we use network output for previous character
- as a part of input for the current character.
- """
-
- def __init__(self, *args, **kwargs):
- super(NetSliceWithAutoregression, self).__init__(*args, **kwargs)
-
- def get_eval_input(self, prev, i):
- """See SequenceLayerBase.get_eval_input for details."""
- if i == 0:
- prev = self._zero_label
- else:
- logit = self.char_logit(prev, char_index=i - 1)
- prev = self.char_one_hot(logit)
- image_feature = self.get_image_feature(char_index=i)
- return tf.concat([image_feature, prev], 1)
-
- def get_train_input(self, prev, i):
- """See SequenceLayerBase.get_train_input for details."""
- if i == 0:
- prev = self._zero_label
- else:
- prev = self._labels_one_hot[:, i - 1, :]
- image_feature = self.get_image_feature(i)
- return tf.concat([image_feature, prev], 1)
-
-
-class Attention(SequenceLayerBase):
- """A layer which uses attention mechanism to select image features."""
-
- def __init__(self, *args, **kwargs):
- super(Attention, self).__init__(*args, **kwargs)
- self._zero_label = tf.zeros(
- [self._batch_size, self._params.num_char_classes])
-
- def get_eval_input(self, prev, i):
- """See SequenceLayerBase.get_eval_input for details."""
- del prev, i
- # The attention_decoder will fetch image features from the net, no need for
- # extra inputs.
- return self._zero_label
-
- def get_train_input(self, prev, i):
- """See SequenceLayerBase.get_train_input for details."""
- return self.get_eval_input(prev, i)
-
- def unroll_cell(self, decoder_inputs, initial_state, loop_function, cell):
- return tf.contrib.legacy_seq2seq.attention_decoder(
- decoder_inputs=decoder_inputs,
- initial_state=initial_state,
- attention_states=self._net,
- cell=cell,
- loop_function=self.get_input)
-
-
-class AttentionWithAutoregression(Attention):
- """A layer which uses both attention and auto regression."""
-
- def __init__(self, *args, **kwargs):
- super(AttentionWithAutoregression, self).__init__(*args, **kwargs)
-
- def get_train_input(self, prev, i):
- """See SequenceLayerBase.get_train_input for details."""
- if i == 0:
- return self._zero_label
- else:
- # TODO(gorban): update to gradually introduce gt labels.
- return self._labels_one_hot[:, i - 1, :]
-
- def get_eval_input(self, prev, i):
- """See SequenceLayerBase.get_eval_input for details."""
- if i == 0:
- return self._zero_label
- else:
- logit = self.char_logit(prev, char_index=i - 1)
- return self.char_one_hot(logit)
-
-
-def get_layer_class(use_attention, use_autoregression):
- """A convenience function to get a layer class based on requirements.
-
- Args:
- use_attention: if True a returned class will use attention.
- use_autoregression: if True a returned class will use auto regression.
-
- Returns:
- One of available sequence layers (child classes for SequenceLayerBase).
- """
- if use_attention and use_autoregression:
- layer_class = AttentionWithAutoregression
- elif use_attention and not use_autoregression:
- layer_class = Attention
- elif not use_attention and not use_autoregression:
- layer_class = NetSlice
- elif not use_attention and use_autoregression:
- layer_class = NetSliceWithAutoregression
- else:
- raise AssertionError('Unsupported sequence layer class')
-
- logging.debug('Use %s as a layer class', layer_class.__name__)
- return layer_class
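For context, a minimal sketch of how the pieces above fit together, following the module docstring. The parameter values and the commented `net`/`labels_one_hot` tensors are illustrative placeholders; `ModelParams` here is a stand-in for the namedtuple defined elsewhere in the attention_ocr example.

```python
import collections

from sequence_layers import SequenceLayerParams, get_layer_class  # the module above

# Illustrative stand-in for the real model.ModelParams.
ModelParams = collections.namedtuple('ModelParams', ['num_char_classes', 'seq_length'])
model_params = ModelParams(num_char_classes=134, seq_length=37)
method_params = SequenceLayerParams(
    num_lstm_units=256, weight_decay=0.00004, lstm_state_clip_value=10.0)

layer_class = get_layer_class(use_attention=True, use_autoregression=True)
# net: [batch_size, num_features, feature_size] CNN image features;
# labels_one_hot: [batch_size, seq_length, num_char_classes], or None at eval time.
# layer = layer_class(net, labels_one_hot, model_params, method_params)
# char_logits = layer.create_logits()
```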
diff --git a/spaces/Nihanvi/Text_summarization_using_transformers/README.md b/spaces/Nihanvi/Text_summarization_using_transformers/README.md
deleted file mode 100644
index 8750602c79e85620d3ad3042996e5ac5ed1a5980..0000000000000000000000000000000000000000
--- a/spaces/Nihanvi/Text_summarization_using_transformers/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text Summarization Using Transformers
-emoji: 🌍
-colorFrom: indigo
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/NirmalKumarC/CSV_Dataset_Analyzer_Copied/app.py b/spaces/NirmalKumarC/CSV_Dataset_Analyzer_Copied/app.py
deleted file mode 100644
index d23cd9425c831fc2bfc190b0bff8523eb4203a09..0000000000000000000000000000000000000000
--- a/spaces/NirmalKumarC/CSV_Dataset_Analyzer_Copied/app.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import streamlit as st
-import pandas as pd
-import traceback
-import sys
-
-from st_aggrid import AgGrid
-from st_aggrid.grid_options_builder import GridOptionsBuilder
-from st_aggrid.shared import JsCode
-from download import download_button
-from st_aggrid import GridUpdateMode, DataReturnMode
-
-# Page config is set once with icon title and display style. Wide mode since we want screen real estate for wide CSV files
-st.set_page_config(page_icon="📝", page_title="📝CSV Data Analyzer📊", layout="wide")
-
-# Style
-def _max_width_():
- max_width_str = f"max-width: 1800px;"
- st.markdown(
- f"""
-
- """,
- unsafe_allow_html=True,
- )
-
-# Title Bar with Images and Icons
-col1, col2, col3 = st.columns([1,6,1])
-with col1:
- st.image("https://cdnb.artstation.com/p/assets/images/images/054/910/875/large/aaron-wacker-cyberpunk-computer-brain-design.jpg?1665656558",width=624,)
-with col2:
- st.title("📝 CSV Data Analyzer 📊")
-with col3:
- st.image("https://cdna.artstation.com/p/assets/images/images/054/910/878/large/aaron-wacker-cyberpunk-computer-devices-iot.jpg?1665656564",width=624,)
-
-# Upload
-c29, c30, c31 = st.columns([1, 6, 1])
-with c30:
- uploaded_file = st.file_uploader("", key="1", help="To activate 'wide mode', go to the menu > Settings > turn on 'wide mode'",)
- if uploaded_file is not None:
- file_container = st.expander("Check your uploaded .csv")
- #try:
- shows = pd.read_csv(uploaded_file)
- #except:
- # print(sys.exc_info()[2])
-
- uploaded_file.seek(0)
- file_container.write(shows)
- else:
- st.info(f"""⬆️Upload a 📝.CSV file. Examples: [Chatbot](https://huggingface.co/datasets/awacke1/Carddata.csv) [Mindfulness](https://huggingface.co/datasets/awacke1/MindfulStory.csv) [Wikipedia](https://huggingface.co/datasets/awacke1/WikipediaSearch)""")
- st.stop()
-
-# DisplayGrid
-gb = GridOptionsBuilder.from_dataframe(shows)
-gb.configure_default_column(enablePivot=True, enableValue=True, enableRowGroup=True)
-gb.configure_selection(selection_mode="multiple", use_checkbox=True)
-gb.configure_side_bar()
-gridOptions = gb.build()
-st.success(f"""💡 Tip! Hold shift key when selecting rows to select multiple rows at once.""")
-response = AgGrid(
- shows,
- gridOptions=gridOptions,
- enable_enterprise_modules=True,
- update_mode=GridUpdateMode.MODEL_CHANGED,
- data_return_mode=DataReturnMode.FILTERED_AND_SORTED,
- fit_columns_on_grid_load=False,
-)
-
-# Filters
-df = pd.DataFrame(response["selected_rows"])
-st.subheader("Filtered data will appear below 📊 ")
-st.text("")
-st.table(df)
-st.text("")
-
-# Download
-c29, c30, c31 = st.columns([1, 1, 2])
-with c29:
- CSVButton = download_button(df,"Dataset.csv","Download CSV file",)
-with c30:
- CSVButton = download_button(df,"Dataset.txt","Download TXT file",)
\ No newline at end of file
diff --git a/spaces/Not-Grim-Refer/GitHub-Tool/Readme.md b/spaces/Not-Grim-Refer/GitHub-Tool/Readme.md
deleted file mode 100644
index 1d279beb8cb88ad963ec771ee4bcb6cccabb1aa9..0000000000000000000000000000000000000000
--- a/spaces/Not-Grim-Refer/GitHub-Tool/Readme.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Github-Tool
-emoji: 🌍
-colorFrom: red
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: true
-
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/fastspeech2_loss.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/fastspeech2_loss.py
deleted file mode 100644
index 085d5628d4c4c242edee4aa3bc4a01aa4582eb21..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/fastspeech2_loss.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Copyright (c) 2017-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-from typing import List, Dict, Any
-from dataclasses import dataclass, field
-
-import torch
-import torch.nn.functional as F
-
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-from fairseq.data.data_utils import lengths_to_mask
-from fairseq.models.fairseq_model import FairseqEncoderModel
-
-
-@dataclass
-class FastSpeech2CriterionConfig(FairseqDataclass):
- ctc_weight: float = field(
- default=0.0, metadata={"help": "weight for CTC loss"}
- )
-
-
-@register_criterion("fastspeech2", dataclass=FastSpeech2CriterionConfig)
-class FastSpeech2Loss(FairseqCriterion):
- def __init__(self, task, ctc_weight):
- super().__init__(task)
- self.ctc_weight = ctc_weight
-
- def forward(self, model: FairseqEncoderModel, sample, reduction="mean"):
- src_tokens = sample["net_input"]["src_tokens"]
- src_lens = sample["net_input"]["src_lengths"]
- tgt_lens = sample["target_lengths"]
- _feat_out, _, log_dur_out, pitch_out, energy_out = model(
- src_tokens=src_tokens,
- src_lengths=src_lens,
- prev_output_tokens=sample["net_input"]["prev_output_tokens"],
- incremental_state=None,
- target_lengths=tgt_lens,
- speaker=sample["speaker"],
- durations=sample["durations"],
- pitches=sample["pitches"],
- energies=sample["energies"]
- )
-
- src_mask = lengths_to_mask(sample["net_input"]["src_lengths"])
- tgt_mask = lengths_to_mask(sample["target_lengths"])
-
- pitches, energies = sample["pitches"], sample["energies"]
- pitch_out, pitches = pitch_out[src_mask], pitches[src_mask]
- energy_out, energies = energy_out[src_mask], energies[src_mask]
-
- feat_out, feat = _feat_out[tgt_mask], sample["target"][tgt_mask]
- l1_loss = F.l1_loss(feat_out, feat, reduction=reduction)
-
- pitch_loss = F.mse_loss(pitch_out, pitches, reduction=reduction)
- energy_loss = F.mse_loss(energy_out, energies, reduction=reduction)
-
- log_dur_out = log_dur_out[src_mask]
- dur = sample["durations"].float()
- dur = dur.half() if log_dur_out.type().endswith(".HalfTensor") else dur
- log_dur = torch.log(dur + 1)[src_mask]
- dur_loss = F.mse_loss(log_dur_out, log_dur, reduction=reduction)
-
- ctc_loss = torch.tensor(0.).type_as(l1_loss)
- if self.ctc_weight > 0.:
- lprobs = model.get_normalized_probs((_feat_out,), log_probs=True)
- lprobs = lprobs.transpose(0, 1) # T x B x C
- src_mask = lengths_to_mask(src_lens)
- src_tokens_flat = src_tokens.masked_select(src_mask)
- ctc_loss = F.ctc_loss(
- lprobs, src_tokens_flat, tgt_lens, src_lens,
- reduction=reduction, zero_infinity=True
- ) * self.ctc_weight
-
- loss = l1_loss + dur_loss + pitch_loss + energy_loss + ctc_loss
-
- sample_size = sample["nsentences"]
- logging_output = {
- "loss": utils.item(loss.data),
- "ntokens": sample["ntokens"],
- "nsentences": sample["nsentences"],
- "sample_size": sample_size,
- "l1_loss": utils.item(l1_loss.data),
- "dur_loss": utils.item(dur_loss.data),
- "pitch_loss": utils.item(pitch_loss.data),
- "energy_loss": utils.item(energy_loss.data),
- "ctc_loss": utils.item(ctc_loss.data),
- }
- return loss, sample_size, logging_output
-
- @classmethod
- def reduce_metrics(cls, logging_outputs: List[Dict[str, Any]]) -> None:
- ns = [log.get("sample_size", 0) for log in logging_outputs]
- ntot = sum(ns)
- ws = [n / (ntot + 1e-8) for n in ns]
- for key in [
- "loss", "l1_loss", "dur_loss", "pitch_loss", "energy_loss",
- "ctc_loss"
- ]:
- vals = [log.get(key, 0) for log in logging_outputs]
- val = sum(val * w for val, w in zip(vals, ws))
- metrics.log_scalar(key, val, ntot, round=3)
- metrics.log_scalar("sample_size", ntot, len(logging_outputs))
-
- # inference metrics
- if "targ_frames" not in logging_outputs[0]:
- return
- n = sum(log.get("targ_frames", 0) for log in logging_outputs)
- for key, new_key in [
- ("mcd_loss", "mcd_loss"),
- ("pred_frames", "pred_ratio"),
- ("nins", "ins_rate"),
- ("ndel", "del_rate"),
- ]:
- val = sum(log.get(key, 0) for log in logging_outputs)
- metrics.log_scalar(new_key, val / n, n, round=3)
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- return False
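As a side note, the masking pattern used above can be illustrated with a simplified stand-in for `lengths_to_mask` (shapes below are made up; this is not the fairseq implementation):

```python
import torch

def lengths_to_mask(lens):
    # Simplified stand-in for fairseq.data.data_utils.lengths_to_mask.
    max_len = int(lens.max())
    return torch.arange(max_len)[None, :] < lens[:, None]

tgt_lens = torch.tensor([3, 5])
feat = torch.randn(2, 5, 80)          # padded target features
tgt_mask = lengths_to_mask(tgt_lens)  # (2, 5) boolean mask
valid_frames = feat[tgt_mask]         # only real (non-padded) frames survive
print(valid_frames.shape)             # torch.Size([8, 80])
```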
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/amp_optimizer.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/amp_optimizer.py
deleted file mode 100644
index 3b7958e50ce444474c48d1f5aeff05d66c19e5b6..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/amp_optimizer.py
+++ /dev/null
@@ -1,105 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-import torch
-from fairseq import optim
-from omegaconf import DictConfig
-
-logger = logging.getLogger(__name__)
-
-
-class AMPOptimizer(optim.FairseqOptimizer):
- """
- Wrap an *optimizer* to support AMP (automatic mixed precision) training.
- """
-
- def __init__(self, cfg: DictConfig, params, fp32_optimizer, **kwargs):
- super().__init__(cfg.optimizer)
- self.fp32_optimizer = fp32_optimizer
- amp_kwargs = {"init_scale": cfg.common.fp16_init_scale}
- if getattr(cfg.common, "amp_scale_window", None) is not None:
- amp_kwargs["growth_interval"] = cfg.common.amp_init_scale
- self._grad_scaler = torch.cuda.amp.GradScaler(**amp_kwargs)
- self.min_loss_scale = cfg.common.min_loss_scale
-
- @classmethod
- def build_optimizer(cls, cfg: DictConfig, params, **kwargs):
- """
- Args:
- cfg (omegaconf.DictConfig): fairseq args
- params (iterable): iterable of parameters to optimize
- """
- fp32_optimizer = optim.build_optimizer(cfg.optimizer, params)
- return cls(cfg, params, fp32_optimizer, **kwargs)
-
- def backward(self, loss):
- """Computes the sum of gradients of the given tensor w.r.t. graph leaves.
-
- Compared to :func:`fairseq.optim.FairseqOptimizer.backward`, this
- function additionally dynamically scales the loss to avoid gradient
- underflow.
- """
- self._grad_scaler.scale(loss).backward()
-
- def step(self):
- self.scaler.step(self.fp32_optimizer)
- self.scaler.update()
-
- def clip_grad_norm(self, max_norm, aggregate_norm_fn=None):
- """Clips gradient norm."""
- self.scaler.unscale_(self.optimizer)
- grad_norm = self.fp32_optimizer.clip_grad_norm(max_norm, aggregate_norm_fn)
- if not torch.isfinite(grad_norm).all():
- new_loss_scale = self.next_loss_scale
- if new_loss_scale <= self.min_loss_scale:
- raise FloatingPointError(
- (
- "AMP: Minimum loss scale reached ({}). Your loss is probably exploding. "
- "Try restarting training or use fp32. {}"
- ).format(self.min_loss_scale, new_loss_scale)
- )
- else:
- logger.info("AMP: overflow detected, setting scale to "
- f"to {new_loss_scale}")
- return grad_norm
-
- @property
- def scaler(self):
- return self._grad_scaler
-
- @property
- def next_loss_scale(self):
- return self.scaler.get_scale() * self.scaler.get_backoff_factor()
-
- @property
- def optimizer(self):
- return self.fp32_optimizer.optimizer
-
- @optimizer.setter
- def optimizer(self, optimizer):
- self.fp32_optimizer.optimizer = optimizer
-
- @property
- def lr_scheduler(self):
- return getattr(self.fp32_optimizer, "lr_scheduler", None)
-
- @property
- def optimizer_config(self):
- return self.fp32_optimizer.optimizer_config
-
- def get_lr(self):
- return self.fp32_optimizer.get_lr()
-
- def set_lr(self, lr):
- self.fp32_optimizer.set_lr(lr)
-
- def all_reduce_grads(self, module):
- self.fp32_optimizer.all_reduce_grads(module)
-
- @property
- def supports_flat_params(self):
- return self.fp32_optimizer.supports_flat_params
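For orientation, the bare GradScaler pattern that this wrapper encapsulates, written as a standalone PyTorch sketch (the model, optimizer, and scale value are arbitrary; this is not fairseq's training loop):

```python
import torch

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(init_scale=128.0)

loss = model(torch.randn(4, 8)).sum()
scaler.scale(loss).backward()  # cf. AMPOptimizer.backward: scale, then backprop
scaler.step(optimizer)         # cf. AMPOptimizer.step: unscale and step
scaler.update()                # grow/shrink the scale depending on overflow
```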
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/m2m_100/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/m2m_100/README.md
deleted file mode 100644
index 02a68a5f0919a26a0468069bed46a5b1abc78941..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/m2m_100/README.md
+++ /dev/null
@@ -1,241 +0,0 @@
-# Beyond English-Centric Multilingual Machine Translation
-
-## Introduction
-In this work, we create a true Many-to-Many multilingual translation model that can translate directly between any pair of 100 languages. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly translating between non-English directions while performing competitively with the best single systems of WMT.
-
-If you are new to using fairseq, read the following walkthrough. Otherwise, skip to the sections below.
-
-0. **Generation Data**
-
-To download the generation data, follow the below commands. Note that all datasets need to be detokenized *before* applying SPM in the data preprocessing step. If you use these evaluation datasets, please cite their associated papers.
-```bash
-# WMT - use sacrebleu, example here:
-sacrebleu -t wmt14 -l fr-en --echo src > wmt.test.fr-en.fr
-sacrebleu -t wmt14 -l fr-en --echo ref > wmt.test.fr-en.en
-
-# WAT
-wget http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/wat2020.my-en.zip
-unzip wat2020.my-en.zip
-
-# FLORES
-# download from: https://github.com/facebookresearch/flores
-
-# TED - need to detokenize with Moses!
-# from: https://github.com/neulab/word-embeddings-for-nmt
-wget http://phontron.com/data/ted_talks.tar.gz
-
-# Autshumato
-# request to download: https://repo.sadilar.org/handle/20.500.12185/397
-
-# Tatoeba Challenge
-# available here: https://github.com/Helsinki-NLP/Tatoeba-Challenge
-```
-
-1. **Training Data**
-
-To produce the training data, we use a combination of [CCMatrix](https://arxiv.org/abs/1911.04944) and [CCAligned](https://arxiv.org/abs/1911.06154). Check out the instructions [here](https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix) to download the raw data.
-
-2. **Preprocess Data**
-
-After downloading raw data, you will need to postprocess the data, then apply SPM, then binarize. Note that it is very important you run the postprocessing script, because this removes any instance of the evaluation data in the mined training data.
-
-```bash
-# preprocess data
-
-# remove sentences with more than 50% punctuation
-python /path/to/fairseq/examples/m2m_100/process_data/remove_too_much_punc.py
-
-# deduplicate training data
-paste /path/to/datadir/train.$src /path/to/datadir/train.$tgt | awk '!x[$0]++' > /path/to/datadir/train.dedup
-echo "keeping $(wc -l /path/to/datadir/train.dedup) bitext out of $(wc -l /path/to/datadir/train.$src)"
-cut -f1 /path/to/datadir/train.dedup > /path/to/datadir/train.$src
-cut -f2 /path/to/datadir/train.dedup > /path/to/datadir/train.$tgt
-
-# remove all instances of evaluation data from the training data
-python /path/to/fairseq/examples/m2m_100/process_data/dedup_data.py
-
-# frequency cleaning
-wget https://dl.fbaipublicfiles.com/m2m_100/histograms.tar.gz
-tar -xvzf histograms.tar.gz
-python /path/to/fairseq/examples/m2m_100/process_data/clean_histogram.py --src $src --tgt $tgt --src-file /path/to/source/file --tgt-file /path/to/output/file --src-output-file source_output.$src --tgt-output-file target_output.$tgt --histograms /path/to/histograms
-
-# apply SPM
-wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model
-python /path/to/fairseq/scripts/spm_encode.py \
- --model spm.128k.model \
- --output_format=piece \
- --inputs=/path/to/input/file/here \
- --outputs=/path/to/output/file/here
-
-# length ratio cleaning
-perl mosesdecoder/scripts/training/clean-corpus-n.perl --ratio 3 /path/to/training/data/train.spm.$src-$tgt $src $tgt /path/to/output/directory/train.spm.$src-$tgt 1 250
-
-# binarize data
-wget https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt
-fairseq-preprocess \
- --source-lang $src --target-lang $tgt \
- --testpref spm.$src.$tgt \
- --thresholdsrc 0 --thresholdtgt 0 \
- --destdir data_bin \
- --srcdict data_dict.128k.txt --tgtdict data_dict.128k.txt
-```
-
-3. **Training Scripts**
-
-To reproduce the training of our models, we train with fairseq-py's multilingual translation [task](https://github.com/pytorch/fairseq/tree/main/examples/multilingual). If you are interested in model parallel training, also check out [fairscale](https://github.com/facebookresearch/fairscale).
-
-4. **Generation**
-
-To generate from our models, follow the commands in the generation section below.
-
-
-If you use any of the resources listed here, please cite:
-```bibtex
-@article{fan2020beyond,
- title={Beyond English-Centric Multilingual Machine Translation},
- author={Fan, Angela and Bhosale, Shruti and Schwenk, Holger and Ma, Zhiyi and El-Kishky, Ahmed and Goyal, Siddharth and Baines, Mandeep and Celebi, Onur and Wenzek, Guillaume and Chaudhary, Vishrav and Goyal, Naman and Birch, Tom and Liptchinsky, Vitaliy and Edunov, Sergey and Grave, Edouard and Auli, Michael and Joulin, Armand},
- journal={arXiv preprint},
- year={2020}
-}
-
-@article{schwenk2019ccmatrix,
- title={Ccmatrix: Mining billions of high-quality parallel sentences on the web},
- author={Schwenk, Holger and Wenzek, Guillaume and Edunov, Sergey and Grave, Edouard and Joulin, Armand},
- journal={arXiv preprint arXiv:1911.04944},
- year={2019}
-}
-
-@article{el2019massive,
- title={A Massive Collection of Cross-Lingual Web-Document Pairs},
- author={El-Kishky, Ahmed and Chaudhary, Vishrav and Guzman, Francisco and Koehn, Philipp},
- journal={arXiv preprint arXiv:1911.06154},
- year={2019}
-}
-```
-
-
-## Trained Models
-
-### 418M and 1.2B Model
-We include the last checkpoint for both of these models.
-
-```bash
-wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt
-wget https://dl.fbaipublicfiles.com/m2m_100/language_pairs_small_models.txt
-
-# 418M parameter model
-wget https://dl.fbaipublicfiles.com/m2m_100/418M_last_checkpoint.pt
-
-# 1.2B parameter model
-wget https://dl.fbaipublicfiles.com/m2m_100/1.2B_last_checkpoint.pt
-
-# Generation:
-fairseq-generate $binarized_data_path --batch-size 32 --path $path_to_model --fixed-dictionary model_dict.128k.txt -s en -t fr --remove-bpe 'sentencepiece' --beam 5 --task translation_multi_simple_epoch --lang-pairs language_pairs_small_models.txt --decoder-langtok --encoder-langtok src --gen-subset test > gen_out
-```
-
-### 12B Model
-12B parameter model trained on many-to-many training data for 100 languages. We include the last checkpoint, the average of the last 5 checkpoints, and the average of the last 10 checkpoints. There isn't a universally best choice out of these three, and all three versions are pretty close in accuracy. You can either sweep over the 3 checkpoints on a dev set and use the best-performing checkpoint for final testing, or simply use the last checkpoint as a good default choice.
-
-**Model Download Links**
-Configuration | 2 32GB GPUs | 4 16GB GPUs | 6 12GB GPUs | 8 8GB GPUs
-:--|:--|:--|:--|:--
-Last Checkpoint | [12b_last_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_2_gpus.pt) | [12b_last_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_4_gpus.pt) | [12b_last_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_6_gpus.pt) | [12b_last_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_8_gpus.pt)
-Average of last 5 checkpoints | [12b_avg5_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_2_gpus.pt) | [12b_avg5_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_4_gpus.pt) | [12b_avg5_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_6_gpus.pt) | [12b_avg5_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_8_gpus.pt)
-Average of last 10 checkpoints | [12b_avg10_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_2_gpus.pt) | [12b_avg10_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_4_gpus.pt) | [12b_avg10_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_6_gpus.pt) | [12b_avg10_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_8_gpus.pt)
-
-**Generation Arguments**
-Configuration | 2 32GB GPUs | 4 16GB GPUs | 6 12GB GPUs | 8 8GB GPUs
-:--|:--|:--|:--|:--
-`--pipeline-encoder-balance` | `[26]` | `[1,15,10]` | `[1,9,9,7]` | `[1,6,6,6,7]`
-`--pipeline-encoder-devices` | `[0]` | `[0,1,0]` | `[0,1,2,0]` | `[0,4,5,1,0]`
-`--pipeline-decoder-balance` | `[3,22,1]` | `[3,11,11,1]` | `[3,7,7,8,1]` | `[1,6,6,6,6,1]`
-`--pipeline-decoder-devices` | `[0,1,0]` | `[0,2,3,0]` | `[0,3,4,5,0]` | `[0,2,6,7,3,0]`
-
-
-## SentencePiece Model
-
-```bash
-wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model
-```
-
-## Generation with M2M-100
-
-### Encode using our SentencePiece Model
-
-Note: Install SentencePiece from [here](https://github.com/google/sentencepiece)
-
-```bash
-fairseq=/path/to/fairseq
-cd $fairseq
-sacrebleu --echo src -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.de
-sacrebleu --echo ref -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.fr
-wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model
-for lang in de fr ; do
- python scripts/spm_encode.py \
- --model spm.128k.model \
- --output_format=piece \
- --inputs=raw_input.de-fr.${lang} \
- --outputs=spm.de-fr.${lang}
-done
-```
-
-### Binarization
-
-```bash
-wget https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt
-fairseq-preprocess \
- --source-lang de --target-lang fr \
- --testpref spm.de-fr \
- --thresholdsrc 0 --thresholdtgt 0 \
- --destdir data_bin \
- --srcdict data_dict.128k.txt --tgtdict data_dict.128k.txt
-```
-
-### Generation for the 12B model
-
-Note that generation can currently be run using 2 32GB / 4 16GB / 6 12GB / 8 8GB GPUs, and the corresponding model checkpoints and pipeline arguments can be found in the [12B Model Section](#12b-model).
-Generation on CPUs will be added in the future.
-
-```bash
-wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt
-wget https://dl.fbaipublicfiles.com/m2m_100/language_pairs.txt
-wget https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_4_gpus.pt
-fairseq-generate \
- data_bin \
- --batch-size 1 \
- --path 12b_last_chk_4_gpus.pt \
- --fixed-dictionary model_dict.128k.txt \
- -s de -t fr \
- --remove-bpe 'sentencepiece' \
- --beam 5 \
- --task translation_multi_simple_epoch \
- --lang-pairs language_pairs.txt \
- --decoder-langtok --encoder-langtok src \
- --gen-subset test \
- --fp16 \
- --dataset-impl mmap \
- --distributed-world-size 1 --distributed-no-spawn \
- --pipeline-model-parallel \
- --pipeline-chunks 1 \
- --pipeline-encoder-balance '[1,15,10]' \
- --pipeline-encoder-devices '[0,1,0]' \
- --pipeline-decoder-balance '[3,11,11,1]' \
- --pipeline-decoder-devices '[0,2,3,0]' > gen_out
-```
-## Evaluation with M2M-100
-
-### Tokenization
-
-Note: Refer to tokenizers/README.md for more details on tokenization.
-
-```bash
-cd ${fairseq}/examples/m2m_100
-cat ${fairseq}/gen_out | grep -P "^H" | sort -V | cut -f 3- | sh tok.sh fr > hyp
-cat ${fairseq}/raw_input.de-fr.fr | sh tok.sh fr > ref
-```
-
-### BLEU
-
-```bash
-sacrebleu -tok 'none' ref < hyp
-```
diff --git a/spaces/Paulraj916/paulraj916/scrapVid.py b/spaces/Paulraj916/paulraj916/scrapVid.py
deleted file mode 100644
index e2b06889d04763cd0ad99b9d07de332a1949dd50..0000000000000000000000000000000000000000
--- a/spaces/Paulraj916/paulraj916/scrapVid.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import os
-import requests
-from bs4 import BeautifulSoup
-from urllib.parse import urljoin
-
-class ScrapVideos:
- def __init__(self, url, output_folder):
- self.url = url
- self.output_folder = output_folder
-
- def extract_and_save_videos(self):
- try:
- # Send an HTTP GET request to the webpage and get the HTML content
- response = requests.get(self.url)
- response.raise_for_status()
- html_content = response.text
-
- # Parse the HTML content using BeautifulSoup
- soup = BeautifulSoup(html_content, 'html.parser')
-
- # Find all video tags
- video_tags = soup.find_all('video')
-
- # Extract video URLs and store them in a list
- video_urls = []
- for video_tag in video_tags:
- if 'src' in video_tag.attrs:
- video_url = video_tag['src']
- absolute_url = urljoin(self.url, video_url)
- video_urls.append(absolute_url)
-
- # Create the output folder if it doesn't exist
- os.makedirs(self.output_folder, exist_ok=True)
-
- # Save video URLs to videolink.txt
- videolink_path = os.path.join(self.output_folder, 'videolink.txt')
- with open(videolink_path, 'w', encoding='utf-8') as file:
- file.write('\n'.join(video_urls))
-
- print(f"Video links saved to {videolink_path}")
- except requests.exceptions.MissingSchema:
- print(f"Skipping download from {self.url} (Invalid URL)")
- except requests.exceptions.RequestException as e:
- print(f"Failed to fetch content from {self.url}: {e}")
- except OSError as e:
- print(f"Failed to save video links: {e}")
diff --git a/spaces/PeepDaSlan9/AutoGPT/tests/test_token_counter.py b/spaces/PeepDaSlan9/AutoGPT/tests/test_token_counter.py
deleted file mode 100644
index 6d7ae016b2f823123b0b69b2eeb3eab50d94f00f..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/AutoGPT/tests/test_token_counter.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import unittest
-
-import tests.context
-from autogpt.token_counter import count_message_tokens, count_string_tokens
-
-
-class TestTokenCounter(unittest.TestCase):
- def test_count_message_tokens(self):
- messages = [
- {"role": "user", "content": "Hello"},
- {"role": "assistant", "content": "Hi there!"},
- ]
- self.assertEqual(count_message_tokens(messages), 17)
-
- def test_count_message_tokens_with_name(self):
- messages = [
- {"role": "user", "content": "Hello", "name": "John"},
- {"role": "assistant", "content": "Hi there!"},
- ]
- self.assertEqual(count_message_tokens(messages), 17)
-
- def test_count_message_tokens_empty_input(self):
- self.assertEqual(count_message_tokens([]), 3)
-
- def test_count_message_tokens_invalid_model(self):
- messages = [
- {"role": "user", "content": "Hello"},
- {"role": "assistant", "content": "Hi there!"},
- ]
- with self.assertRaises(KeyError):
- count_message_tokens(messages, model="invalid_model")
-
- def test_count_message_tokens_gpt_4(self):
- messages = [
- {"role": "user", "content": "Hello"},
- {"role": "assistant", "content": "Hi there!"},
- ]
- self.assertEqual(count_message_tokens(messages, model="gpt-4-0314"), 15)
-
- def test_count_string_tokens(self):
- string = "Hello, world!"
- self.assertEqual(
- count_string_tokens(string, model_name="gpt-3.5-turbo-0301"), 4
- )
-
- def test_count_string_tokens_empty_input(self):
- self.assertEqual(count_string_tokens("", model_name="gpt-3.5-turbo-0301"), 0)
-
- def test_count_message_tokens_not_implemented_model(self):
- messages = [
- {"role": "user", "content": "Hello"},
- {"role": "assistant", "content": "Hi there!"},
- ]
- with self.assertRaises(NotImplementedError):
- count_message_tokens(messages, model="invalid_model")
-
- def test_count_string_tokens_gpt_4(self):
- string = "Hello, world!"
- self.assertEqual(count_string_tokens(string, model_name="gpt-4-0314"), 4)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/Qiukai/gpt/crazy_functions/test_project/python/dqn/__init__.py b/spaces/Qiukai/gpt/crazy_functions/test_project/python/dqn/__init__.py
deleted file mode 100644
index 4ae42872c812a7c8a18dff002086c7e6e935f580..0000000000000000000000000000000000000000
--- a/spaces/Qiukai/gpt/crazy_functions/test_project/python/dqn/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from stable_baselines3.dqn.dqn import DQN
-from stable_baselines3.dqn.policies import CnnPolicy, MlpPolicy
diff --git a/spaces/R3DI/Uber_Realistic_Porn_Merge_V1.3/README.md b/spaces/R3DI/Uber_Realistic_Porn_Merge_V1.3/README.md
deleted file mode 100644
index e64d8ab657f51b4099ef94199315cdfc38eaf8a7..0000000000000000000000000000000000000000
--- a/spaces/R3DI/Uber_Realistic_Porn_Merge_V1.3/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Uber Realistic Porn Merge V1.3
-emoji: 🌍
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.41.2
-app_file: app.py
-pinned: false
-tags:
-- not-for-all-audiences
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/RMXK/RVC_HFF/i18n/locale_diff.py b/spaces/RMXK/RVC_HFF/i18n/locale_diff.py
deleted file mode 100644
index 387ddfe1b16c2f9f32b6b9682b61353837b06bd8..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/i18n/locale_diff.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import json
-import os
-from collections import OrderedDict
-
-# Define the standard file name
-standard_file = "en_US.json"
-
-# Find all JSON files in the directory
-dir_path = "./"
-languages = [
- f for f in os.listdir(dir_path) if f.endswith(".json") and f != standard_file
-]
-
-# Load the standard file
-with open(standard_file, "r", encoding="utf-8") as f:
- standard_data = json.load(f, object_pairs_hook=OrderedDict)
-
-# Loop through each language file
-for lang_file in languages:
- # Load the language file
- with open(lang_file, "r", encoding="utf-8") as f:
- lang_data = json.load(f, object_pairs_hook=OrderedDict)
-
- # Find the difference between the language file and the standard file
- diff = set(standard_data.keys()) - set(lang_data.keys())
-
- miss = set(lang_data.keys()) - set(standard_data.keys())
-
- # Add any missing keys to the language file
- for key in diff:
- lang_data[key] = key
-
- # Delete any extra keys from the language file
- for key in miss:
- del lang_data[key]
-
- # Sort the keys of the language file to match the order of the standard file
- lang_data = OrderedDict(
- sorted(lang_data.items(), key=lambda x: list(standard_data.keys()).index(x[0]))
- )
-
- # Save the updated language file
- with open(lang_file, "w", encoding="utf-8") as f:
- json.dump(lang_data, f, ensure_ascii=False, indent=4)
- f.write("\n")
diff --git a/spaces/Realcat/image-matching-webui/hloc/extractors/superpoint.py b/spaces/Realcat/image-matching-webui/hloc/extractors/superpoint.py
deleted file mode 100644
index a96d27cdb0789327efa7007540145dac133b77c7..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/hloc/extractors/superpoint.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import sys
-from pathlib import Path
-import torch
-
-from ..utils.base_model import BaseModel
-
-sys.path.append(str(Path(__file__).parent / "../../third_party"))
-from SuperGluePretrainedNetwork.models import superpoint # noqa E402
-
-
-# The original keypoint sampling is incorrect. We patch it here but
-# we don't fix it upstream so as not to impact existing evaluations.
-def sample_descriptors_fix_sampling(keypoints, descriptors, s: int = 8):
- """Interpolate descriptors at keypoint locations"""
- b, c, h, w = descriptors.shape
- keypoints = (keypoints + 0.5) / (keypoints.new_tensor([w, h]) * s)
- keypoints = keypoints * 2 - 1 # normalize to (-1, 1)
- descriptors = torch.nn.functional.grid_sample(
- descriptors,
- keypoints.view(b, 1, -1, 2),
- mode="bilinear",
- align_corners=False,
- )
- descriptors = torch.nn.functional.normalize(
- descriptors.reshape(b, c, -1), p=2, dim=1
- )
- return descriptors
-
-
-class SuperPoint(BaseModel):
- default_conf = {
- "nms_radius": 4,
- "keypoint_threshold": 0.005,
- "max_keypoints": -1,
- "remove_borders": 4,
- "fix_sampling": False,
- }
- required_inputs = ["image"]
- detection_noise = 2.0
-
- def _init(self, conf):
- if conf["fix_sampling"]:
- superpoint.sample_descriptors = sample_descriptors_fix_sampling
- self.net = superpoint.SuperPoint(conf)
-
- def _forward(self, data):
- return self.net(data)
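To make the patched sampling concrete, a small standalone sketch of the coordinate normalization it performs before `grid_sample` (the descriptor-map size and stride are illustrative):

```python
import torch

# A descriptor map of size (h, w) with cell stride s covers an image of
# (h * s, w * s) pixels; keypoints are shifted by half a pixel and mapped
# to the (-1, 1) range expected by grid_sample, as in the patched function.
w, h, s = 80, 60, 8                                     # descriptor map dims, stride
keypoints = torch.tensor([[0.0, 0.0], [639.0, 479.0]])  # (x, y) in image pixels
normalized = (keypoints + 0.5) / (keypoints.new_tensor([w, h]) * s) * 2 - 1
print(normalized)  # image corners map close to -1 and +1
```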
diff --git a/spaces/Realcat/image-matching-webui/third_party/ALIKE/soft_detect.py b/spaces/Realcat/image-matching-webui/third_party/ALIKE/soft_detect.py
deleted file mode 100644
index 636ba11d0584c513631fffce31ba2d71be3e6c74..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/ALIKE/soft_detect.py
+++ /dev/null
@@ -1,234 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-
-# coordinates system
-# ------------------------------> [ x: range=-1.0~1.0; w: range=0~W ]
-# | -----------------------------
-# | | |
-# | | |
-# | | |
-# | | image |
-# | | |
-# | | |
-# | | |
-# | |---------------------------|
-# v
-# [ y: range=-1.0~1.0; h: range=0~H ]
-
-
-def simple_nms(scores, nms_radius: int):
- """Fast Non-maximum suppression to remove nearby points"""
- assert nms_radius >= 0
-
- def max_pool(x):
- return torch.nn.functional.max_pool2d(
- x, kernel_size=nms_radius * 2 + 1, stride=1, padding=nms_radius
- )
-
- zeros = torch.zeros_like(scores)
- max_mask = scores == max_pool(scores)
-
- for _ in range(2):
- supp_mask = max_pool(max_mask.float()) > 0
- supp_scores = torch.where(supp_mask, zeros, scores)
- new_max_mask = supp_scores == max_pool(supp_scores)
- max_mask = max_mask | (new_max_mask & (~supp_mask))
- return torch.where(max_mask, scores, zeros)
-
-
-def sample_descriptor(descriptor_map, kpts, bilinear_interp=False):
- """
- :param descriptor_map: BxCxHxW
- :param kpts: list, len=B, each is Nx2 (keypoints) [h,w]
- :param bilinear_interp: bool, whether to use bilinear interpolation
- :return: descriptors: list, len=B, each is NxD
- """
- batch_size, channel, height, width = descriptor_map.shape
-
- descriptors = []
- for index in range(batch_size):
- kptsi = kpts[index] # Nx2,(x,y)
-
- if bilinear_interp:
- descriptors_ = torch.nn.functional.grid_sample(
- descriptor_map[index].unsqueeze(0),
- kptsi.view(1, 1, -1, 2),
- mode="bilinear",
- align_corners=True,
- )[
- 0, :, 0, :
- ] # CxN
- else:
- kptsi = (kptsi + 1) / 2 * kptsi.new_tensor([[width - 1, height - 1]])
- kptsi = kptsi.long()
- descriptors_ = descriptor_map[index, :, kptsi[:, 1], kptsi[:, 0]] # CxN
-
- descriptors_ = torch.nn.functional.normalize(descriptors_, p=2, dim=0)
- descriptors.append(descriptors_.t())
-
- return descriptors
-
-
-class DKD(nn.Module):
- def __init__(self, radius=2, top_k=0, scores_th=0.2, n_limit=20000):
- """
- Args:
- radius: soft detection radius, kernel size is (2 * radius + 1)
- top_k: top_k > 0: return top k keypoints
- scores_th: top_k <= 0 threshold mode: scores_th > 0: return keypoints with scores>scores_th
- else: return keypoints with scores > scores.mean()
- n_limit: max number of keypoint in threshold mode
- """
- super().__init__()
- self.radius = radius
- self.top_k = top_k
- self.scores_th = scores_th
- self.n_limit = n_limit
- self.kernel_size = 2 * self.radius + 1
- self.temperature = 0.1 # tuned temperature
- self.unfold = nn.Unfold(kernel_size=self.kernel_size, padding=self.radius)
-
- # local xy grid
- x = torch.linspace(-self.radius, self.radius, self.kernel_size)
- # (kernel_size*kernel_size) x 2 : (w,h)
- self.hw_grid = torch.stack(torch.meshgrid([x, x])).view(2, -1).t()[:, [1, 0]]
-
- def detect_keypoints(self, scores_map, sub_pixel=True):
- b, c, h, w = scores_map.shape
- scores_nograd = scores_map.detach()
- # nms_scores = simple_nms(scores_nograd, self.radius)
- nms_scores = simple_nms(scores_nograd, 2)
-
- # remove border
- nms_scores[:, :, : self.radius + 1, :] = 0
- nms_scores[:, :, :, : self.radius + 1] = 0
- nms_scores[:, :, h - self.radius :, :] = 0
- nms_scores[:, :, :, w - self.radius :] = 0
-
- # detect keypoints without grad
- if self.top_k > 0:
- topk = torch.topk(nms_scores.view(b, -1), self.top_k)
- indices_keypoints = topk.indices # B x top_k
- else:
- if self.scores_th > 0:
- masks = nms_scores > self.scores_th
- if masks.sum() == 0:
- th = scores_nograd.reshape(b, -1).mean(dim=1) # th = self.scores_th
- masks = nms_scores > th.reshape(b, 1, 1, 1)
- else:
- th = scores_nograd.reshape(b, -1).mean(dim=1) # th = self.scores_th
- masks = nms_scores > th.reshape(b, 1, 1, 1)
- masks = masks.reshape(b, -1)
-
- indices_keypoints = [] # list, B x (any size)
- scores_view = scores_nograd.reshape(b, -1)
- for mask, scores in zip(masks, scores_view):
- indices = mask.nonzero(as_tuple=False)[:, 0]
- if len(indices) > self.n_limit:
- kpts_sc = scores[indices]
- sort_idx = kpts_sc.sort(descending=True)[1]
- sel_idx = sort_idx[: self.n_limit]
- indices = indices[sel_idx]
- indices_keypoints.append(indices)
-
- keypoints = []
- scoredispersitys = []
- kptscores = []
- if sub_pixel:
- # detect soft keypoints with grad backpropagation
- patches = self.unfold(scores_map) # B x (kernel**2) x (H*W)
- self.hw_grid = self.hw_grid.to(patches) # to device
- for b_idx in range(b):
- patch = patches[b_idx].t() # (H*W) x (kernel**2)
- indices_kpt = indices_keypoints[
- b_idx
- ] # one dimension vector, say its size is M
- patch_scores = patch[indices_kpt] # M x (kernel**2)
-
- # max is detached to prevent undesired backprop loops in the graph
- max_v = patch_scores.max(dim=1).values.detach()[:, None]
- x_exp = (
- (patch_scores - max_v) / self.temperature
- ).exp() # M * (kernel**2), in [0, 1]
-
- # \frac{ \sum{(i,j) \times \exp(x/T)} }{ \sum{\exp(x/T)} }
- xy_residual = (
- x_exp @ self.hw_grid / x_exp.sum(dim=1)[:, None]
- ) # Soft-argmax, Mx2
-
- hw_grid_dist2 = (
- torch.norm(
- (self.hw_grid[None, :, :] - xy_residual[:, None, :])
- / self.radius,
- dim=-1,
- )
- ** 2
- )
- scoredispersity = (x_exp * hw_grid_dist2).sum(dim=1) / x_exp.sum(dim=1)
-
- # compute result keypoints
- keypoints_xy_nms = torch.stack(
- [indices_kpt % w, indices_kpt // w], dim=1
- ) # Mx2
- keypoints_xy = keypoints_xy_nms + xy_residual
- keypoints_xy = (
- keypoints_xy / keypoints_xy.new_tensor([w - 1, h - 1]) * 2 - 1
- ) # (w,h) -> (-1~1,-1~1)
-
- kptscore = torch.nn.functional.grid_sample(
- scores_map[b_idx].unsqueeze(0),
- keypoints_xy.view(1, 1, -1, 2),
- mode="bilinear",
- align_corners=True,
- )[
- 0, 0, 0, :
- ] # CxN
-
- keypoints.append(keypoints_xy)
- scoredispersitys.append(scoredispersity)
- kptscores.append(kptscore)
- else:
- for b_idx in range(b):
- indices_kpt = indices_keypoints[
- b_idx
- ] # one dimension vector, say its size is M
- keypoints_xy_nms = torch.stack(
- [indices_kpt % w, indices_kpt // w], dim=1
- ) # Mx2
- keypoints_xy = (
- keypoints_xy_nms / keypoints_xy_nms.new_tensor([w - 1, h - 1]) * 2
- - 1
- ) # (w,h) -> (-1~1,-1~1)
- kptscore = torch.nn.functional.grid_sample(
- scores_map[b_idx].unsqueeze(0),
- keypoints_xy.view(1, 1, -1, 2),
- mode="bilinear",
- align_corners=True,
- )[
- 0, 0, 0, :
- ] # CxN
- keypoints.append(keypoints_xy)
- scoredispersitys.append(None)
- kptscores.append(kptscore)
-
- return keypoints, scoredispersitys, kptscores
-
- def forward(self, scores_map, descriptor_map, sub_pixel=False):
- """
- :param scores_map: Bx1xHxW
- :param descriptor_map: BxCxHxW
- :param sub_pixel: whether to use sub-pixel keypoint detection
-        :return: kpts: list[Nx2, ...]; kptscores: list[N, ...]; normalised positions in [-1.0, 1.0]
- """
- keypoints, scoredispersitys, kptscores = self.detect_keypoints(
- scores_map, sub_pixel
- )
-
- descriptors = sample_descriptor(descriptor_map, keypoints, sub_pixel)
-
- # keypoints: B M 2
- # descriptors: B M D
- # scoredispersitys:
- return keypoints, descriptors, kptscores, scoredispersitys
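The soft detection above refines each NMS keypoint with a soft-argmax over its local score patch (the `sum((i,j) * exp(x/T)) / sum(exp(x/T))` comment). Below is a minimal, self-contained sketch of just that refinement step in plain PyTorch; the patch layout mirrors the deleted code, but the function itself is illustrative rather than the original module.

```python
import torch

def soft_argmax_offset(patch_scores: torch.Tensor, radius: int = 2,
                       temperature: float = 0.1) -> torch.Tensor:
    """Sub-pixel (x, y) offsets for M local patches of shape M x (2*radius+1)**2."""
    kernel_size = 2 * radius + 1
    coords = torch.linspace(-radius, radius, kernel_size)
    # (kernel_size**2) x 2 grid of (x, y) offsets, analogous to hw_grid above
    grid = torch.stack(torch.meshgrid(coords, coords, indexing="ij")).view(2, -1).t()[:, [1, 0]]

    max_v = patch_scores.max(dim=1, keepdim=True).values  # stabilise the exponent
    x_exp = ((patch_scores - max_v) / temperature).exp()  # M x (kernel_size**2)
    # weighted average of grid offsets: sum((i,j) * exp(x/T)) / sum(exp(x/T))
    return x_exp @ grid / x_exp.sum(dim=1, keepdim=True)  # M x 2

# toy usage: 3 keypoints with random 5x5 score patches
offsets = soft_argmax_offset(torch.rand(3, 25))
print(offsets.shape)  # torch.Size([3, 2]); each offset lies within the patch radius
```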
diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/configs/data/megadepth_trainval.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/configs/data/megadepth_trainval.py
deleted file mode 100644
index 7b7b0a77e26bbf6e7b7ceb2cd54f8c2e3b709db4..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/configs/data/megadepth_trainval.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from configs.data.base import cfg
-
-
-TRAIN_BASE_PATH = "data/megadepth/index"
-cfg.DATASET.TRAINVAL_DATA_SOURCE = "MegaDepth"
-cfg.DATASET.TRAIN_DATA_ROOT = "data/megadepth/train"
-cfg.DATASET.TRAIN_NPZ_ROOT = f"{TRAIN_BASE_PATH}/scene_info_0.1_0.7"
-cfg.DATASET.TRAIN_LIST_PATH = f"{TRAIN_BASE_PATH}/trainvaltest_list/train_list.txt"
-cfg.DATASET.MIN_OVERLAP_SCORE_TRAIN = 0.0
-
-TEST_BASE_PATH = "data/megadepth/index"
-cfg.DATASET.TEST_DATA_SOURCE = "MegaDepth"
-cfg.DATASET.VAL_DATA_ROOT = cfg.DATASET.TEST_DATA_ROOT = "data/megadepth/test"
-cfg.DATASET.VAL_NPZ_ROOT = (
- cfg.DATASET.TEST_NPZ_ROOT
-) = f"{TEST_BASE_PATH}/scene_info_val_1500"
-cfg.DATASET.VAL_LIST_PATH = (
- cfg.DATASET.TEST_LIST_PATH
-) = f"{TEST_BASE_PATH}/trainvaltest_list/val_list.txt"
-cfg.DATASET.MIN_OVERLAP_SCORE_TEST = 0.0 # for both test and val
-
-# 368 scenes in total for MegaDepth
-# (with difficulty balanced (further split each scene to 3 sub-scenes))
-cfg.TRAINER.N_SAMPLES_PER_SUBSET = 100
-
-cfg.DATASET.MGDPT_IMG_RESIZE = 800 # for training on 11GB mem GPUs
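For context, this file follows the usual configure-by-import pattern: `configs.data.base` exposes a shared `cfg` node and each dataset config mutates it in place. A hedged sketch of that mechanism, assuming a yacs-style `CfgNode` (the actual base config may define more fields):

```python
# pip install yacs -- illustrative only; mirrors how the config above mutates cfg
from yacs.config import CfgNode as CN

cfg = CN()
cfg.DATASET = CN()

TRAIN_BASE_PATH = "data/megadepth/index"
cfg.DATASET.TRAINVAL_DATA_SOURCE = "MegaDepth"
cfg.DATASET.TRAIN_DATA_ROOT = "data/megadepth/train"
cfg.DATASET.TRAIN_NPZ_ROOT = f"{TRAIN_BASE_PATH}/scene_info_0.1_0.7"

print(cfg.DATASET.TRAIN_NPZ_ROOT)  # data/megadepth/index/scene_info_0.1_0.7
```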
diff --git a/spaces/Riksarkivet/htr_demo/helper/text/docs_strucutre.md b/spaces/Riksarkivet/htr_demo/helper/text/docs_strucutre.md
deleted file mode 100644
index 8365cbecd4974c8c9ce14a4e1cf73e79083a55a8..0000000000000000000000000000000000000000
--- a/spaces/Riksarkivet/htr_demo/helper/text/docs_strucutre.md
+++ /dev/null
@@ -1,20 +0,0 @@
-## Instructions for documentation
-
-- Naming convention of folders is based on tabs
-- Naming convention of files is based on subtabs
- - If a subtab uses columns and rows
-   - Use a suffix such as col1, row1, or tab1 to indicate differences in the position of the text.
-
-see image below:
-
-
-
-
-
-## Assets and file sharing with app
-
-This repo acts as an asset manager for the app:
-
-- [Github Repo](https://github.com/Borg93/htr_gradio_file_placeholder)
-
-**Note**: this repo is a work in progress
diff --git a/spaces/Riksarkivet/htr_demo/src/htr_pipeline/utils/parser_xml.py b/spaces/Riksarkivet/htr_demo/src/htr_pipeline/utils/parser_xml.py
deleted file mode 100644
index 1cc1a63e2efe40d3d8423514d4d3d30851d1558c..0000000000000000000000000000000000000000
--- a/spaces/Riksarkivet/htr_demo/src/htr_pipeline/utils/parser_xml.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import xml.etree.ElementTree as ET
-
-
-class XmlParser:
- def __init__(self, page_xml="./page_xml.xml"):
- self.tree = ET.parse(page_xml, parser=ET.XMLParser(encoding="utf-8"))
- self.root = self.tree.getroot()
- self.namespace = "{http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15}"
-
- def xml_to_txt(self, output_file="page_txt.txt"):
- with open(output_file, "w", encoding="utf-8") as f:
- for textregion in self.root.findall(f".//{self.namespace}TextRegion"):
- for textline in textregion.findall(f".//{self.namespace}TextLine"):
- text = textline.find(f"{self.namespace}TextEquiv").find(f"{self.namespace}Unicode").text
- f.write(text + "\n")
- f.write("\n")
-
-
-if __name__ == "__main__":
- pass
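In the repo this parser would be used as `XmlParser("page_xml.xml").xml_to_txt()`. The sketch below inlines the same PAGE-XML logic so it can run on its own; the sample XML and file names are invented for the example.

```python
import xml.etree.ElementTree as ET

NS = "{http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15}"
SAMPLE_XML = """<?xml version="1.0" encoding="UTF-8"?>
<PcGts xmlns="http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15">
  <Page>
    <TextRegion id="r1">
      <TextLine id="l1"><TextEquiv><Unicode>Hello</Unicode></TextEquiv></TextLine>
      <TextLine id="l2"><TextEquiv><Unicode>world</Unicode></TextEquiv></TextLine>
    </TextRegion>
  </Page>
</PcGts>
"""

with open("page_xml.xml", "w", encoding="utf-8") as f:
    f.write(SAMPLE_XML)

root = ET.parse("page_xml.xml").getroot()
with open("page_txt.txt", "w", encoding="utf-8") as out:
    for region in root.findall(f".//{NS}TextRegion"):
        for line in region.findall(f".//{NS}TextLine"):
            out.write(line.find(f"{NS}TextEquiv").find(f"{NS}Unicode").text + "\n")
        out.write("\n")  # blank line between regions, as in xml_to_txt

print(open("page_txt.txt", encoding="utf-8").read())  # Hello / world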
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/path.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/path.py
deleted file mode 100644
index 7dab4b3041413b1432b0f434b8b14783097d33c6..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/path.py
+++ /dev/null
@@ -1,101 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os
-import os.path as osp
-from pathlib import Path
-
-from .misc import is_str
-
-
-def is_filepath(x):
- return is_str(x) or isinstance(x, Path)
-
-
-def fopen(filepath, *args, **kwargs):
- if is_str(filepath):
- return open(filepath, *args, **kwargs)
- elif isinstance(filepath, Path):
- return filepath.open(*args, **kwargs)
- raise ValueError('`filepath` should be a string or a Path')
-
-
-def check_file_exist(filename, msg_tmpl='file "{}" does not exist'):
- if not osp.isfile(filename):
- raise FileNotFoundError(msg_tmpl.format(filename))
-
-
-def mkdir_or_exist(dir_name, mode=0o777):
- if dir_name == '':
- return
- dir_name = osp.expanduser(dir_name)
- os.makedirs(dir_name, mode=mode, exist_ok=True)
-
-
-def symlink(src, dst, overwrite=True, **kwargs):
- if os.path.lexists(dst) and overwrite:
- os.remove(dst)
- os.symlink(src, dst, **kwargs)
-
-
-def scandir(dir_path, suffix=None, recursive=False, case_sensitive=True):
- """Scan a directory to find the interested files.
-
- Args:
- dir_path (str | obj:`Path`): Path of the directory.
- suffix (str | tuple(str), optional): File suffix that we are
- interested in. Default: None.
- recursive (bool, optional): If set to True, recursively scan the
- directory. Default: False.
-        case_sensitive (bool, optional): If set to False, ignore the case of
- suffix. Default: True.
-
- Returns:
- A generator for all the interested files with relative paths.
- """
- if isinstance(dir_path, (str, Path)):
- dir_path = str(dir_path)
- else:
- raise TypeError('"dir_path" must be a string or Path object')
-
- if (suffix is not None) and not isinstance(suffix, (str, tuple)):
- raise TypeError('"suffix" must be a string or tuple of strings')
-
- if suffix is not None and not case_sensitive:
- suffix = suffix.lower() if isinstance(suffix, str) else tuple(
- item.lower() for item in suffix)
-
- root = dir_path
-
- def _scandir(dir_path, suffix, recursive, case_sensitive):
- for entry in os.scandir(dir_path):
- if not entry.name.startswith('.') and entry.is_file():
- rel_path = osp.relpath(entry.path, root)
- _rel_path = rel_path if case_sensitive else rel_path.lower()
- if suffix is None or _rel_path.endswith(suffix):
- yield rel_path
- elif recursive and os.path.isdir(entry.path):
- # scan recursively if entry.path is a directory
- yield from _scandir(entry.path, suffix, recursive,
- case_sensitive)
-
- return _scandir(dir_path, suffix, recursive, case_sensitive)
-
-
-def find_vcs_root(path, markers=('.git', )):
-    """Finds the root directory (including itself) containing the specified markers.
-
- Args:
- path (str): Path of directory or file.
- markers (list[str], optional): List of file or directory names.
-
- Returns:
-        The directory containing one of the markers, or None if not found.
- """
- if osp.isfile(path):
- path = osp.dirname(path)
-
- prev, cur = None, osp.abspath(osp.expanduser(path))
- while cur != prev:
- if any(osp.exists(osp.join(cur, marker)) for marker in markers):
- return cur
- prev, cur = cur, osp.split(cur)[0]
- return None
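These are the stock mmcv path helpers. A short usage sketch, assuming an mmcv 1.x install (`find_vcs_root` is imported from the submodule in case it is not re-exported at the package level); the directory layout is invented:

```python
from mmcv.utils import mkdir_or_exist, scandir
from mmcv.utils.path import find_vcs_root

mkdir_or_exist("demo_dir/sub")            # recursive, silent if it already exists
open("demo_dir/a.jpg", "w").close()
open("demo_dir/sub/b.png", "w").close()

# relative paths of files with matching suffixes, scanned recursively
images = sorted(scandir("demo_dir", suffix=(".jpg", ".png"), recursive=True))
print(images)                             # ['a.jpg', 'sub/b.png'] (separator is OS-dependent)

# walk upwards until a directory containing a `.git` marker is found (None otherwise)
print(find_vcs_root("."))
```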
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/necks/bfp.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/necks/bfp.py
deleted file mode 100644
index 123f5515ab6b51867d5781aa1572a0810670235f..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/necks/bfp.py
+++ /dev/null
@@ -1,104 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule, xavier_init
-from mmcv.cnn.bricks import NonLocal2d
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class BFP(nn.Module):
- """BFP (Balanced Feature Pyramids)
-
-    BFP takes multi-level features as inputs and gathers them into a single
-    one, then refines the gathered feature and scatters the refined results
-    back to the multi-level features. This module is used in Libra R-CNN
-    (CVPR 2019); see the paper `Libra R-CNN: Towards Balanced Learning for
-    Object Detection <https://arxiv.org/abs/1904.02701>`_ for details.
-
- Args:
- in_channels (int): Number of input channels (feature maps of all levels
- should have the same channels).
- num_levels (int): Number of input feature levels.
- conv_cfg (dict): The config dict for convolution layers.
- norm_cfg (dict): The config dict for normalization layers.
- refine_level (int): Index of integration and refine level of BSF in
- multi-level features from bottom to top.
- refine_type (str): Type of the refine op, currently support
- [None, 'conv', 'non_local'].
- """
-
- def __init__(self,
- in_channels,
- num_levels,
- refine_level=2,
- refine_type=None,
- conv_cfg=None,
- norm_cfg=None):
- super(BFP, self).__init__()
- assert refine_type in [None, 'conv', 'non_local']
-
- self.in_channels = in_channels
- self.num_levels = num_levels
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
-
- self.refine_level = refine_level
- self.refine_type = refine_type
- assert 0 <= self.refine_level < self.num_levels
-
- if self.refine_type == 'conv':
- self.refine = ConvModule(
- self.in_channels,
- self.in_channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg)
- elif self.refine_type == 'non_local':
- self.refine = NonLocal2d(
- self.in_channels,
- reduction=1,
- use_scale=False,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg)
-
- def init_weights(self):
- """Initialize the weights of FPN module."""
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- xavier_init(m, distribution='uniform')
-
- def forward(self, inputs):
- """Forward function."""
- assert len(inputs) == self.num_levels
-
- # step 1: gather multi-level features by resize and average
- feats = []
- gather_size = inputs[self.refine_level].size()[2:]
- for i in range(self.num_levels):
- if i < self.refine_level:
- gathered = F.adaptive_max_pool2d(
- inputs[i], output_size=gather_size)
- else:
- gathered = F.interpolate(
- inputs[i], size=gather_size, mode='nearest')
- feats.append(gathered)
-
- bsf = sum(feats) / len(feats)
-
- # step 2: refine gathered features
- if self.refine_type is not None:
- bsf = self.refine(bsf)
-
- # step 3: scatter refined features to multi-levels by a residual path
- outs = []
- for i in range(self.num_levels):
- out_size = inputs[i].size()[2:]
- if i < self.refine_level:
- residual = F.interpolate(bsf, size=out_size, mode='nearest')
- else:
- residual = F.adaptive_max_pool2d(bsf, output_size=out_size)
- outs.append(residual + inputs[i])
-
- return tuple(outs)
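The forward pass above is a gather-refine-scatter over an FPN: every level is resized to the refine level, averaged, optionally refined, then scattered back as a residual. A self-contained sketch of those steps in plain PyTorch, with the refine step left as an identity for simplicity:

```python
import torch
import torch.nn.functional as F

def bfp_like(inputs, refine_level=2, refine=lambda x: x):
    """Balanced-feature-pyramid style gather/refine/scatter (illustrative only)."""
    gather_size = inputs[refine_level].shape[2:]
    # step 1: gather every level to the refine level's resolution and average
    feats = [F.adaptive_max_pool2d(x, gather_size) if i < refine_level
             else F.interpolate(x, size=gather_size, mode="nearest")
             for i, x in enumerate(inputs)]
    bsf = refine(sum(feats) / len(feats))     # step 2: refine (conv / non-local in BFP)
    # step 3: scatter back to every level and add as a residual
    outs = []
    for i, x in enumerate(inputs):
        size = x.shape[2:]
        residual = (F.interpolate(bsf, size=size, mode="nearest") if i < refine_level
                    else F.adaptive_max_pool2d(bsf, size))
        outs.append(residual + x)
    return tuple(outs)

levels = [torch.rand(1, 8, s, s) for s in (64, 32, 16, 8)]  # toy FPN levels
print([o.shape for o in bfp_like(levels)])                  # shapes match the inputs
```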
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/transformer_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/transformer_head.py
deleted file mode 100644
index 820fd069fcca295f6102f0d27366158a8c640249..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/transformer_head.py
+++ /dev/null
@@ -1,654 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import Conv2d, Linear, build_activation_layer
-from mmcv.runner import force_fp32
-
-from mmdet.core import (bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh,
- build_assigner, build_sampler, multi_apply,
- reduce_mean)
-from mmdet.models.utils import (FFN, build_positional_encoding,
- build_transformer)
-from ..builder import HEADS, build_loss
-from .anchor_free_head import AnchorFreeHead
-
-
-@HEADS.register_module()
-class TransformerHead(AnchorFreeHead):
- """Implements the DETR transformer head.
-
-    See `paper: End-to-End Object Detection with Transformers
-    <https://arxiv.org/abs/2005.12872>`_ for details.
-
- Args:
- num_classes (int): Number of categories excluding the background.
- in_channels (int): Number of channels in the input feature map.
- num_fcs (int, optional): Number of fully-connected layers used in
- `FFN`, which is then used for the regression head. Default 2.
- transformer (dict, optional): Config for transformer.
- positional_encoding (dict, optional): Config for position encoding.
- loss_cls (dict, optional): Config of the classification loss.
- Default `CrossEntropyLoss`.
- loss_bbox (dict, optional): Config of the regression loss.
- Default `L1Loss`.
- loss_iou (dict, optional): Config of the regression iou loss.
- Default `GIoULoss`.
-        train_cfg (dict, optional): Training config of transformer head.
- test_cfg (dict, optional): Testing config of transformer head.
-
- Example:
- >>> import torch
- >>> self = TransformerHead(80, 2048)
- >>> x = torch.rand(1, 2048, 32, 32)
- >>> mask = torch.ones(1, 32, 32).to(x.dtype)
- >>> mask[:, :16, :15] = 0
- >>> all_cls_scores, all_bbox_preds = self(x, mask)
- """
-
- def __init__(self,
- num_classes,
- in_channels,
- num_fcs=2,
- transformer=dict(
- type='Transformer',
- embed_dims=256,
- num_heads=8,
- num_encoder_layers=6,
- num_decoder_layers=6,
- feedforward_channels=2048,
- dropout=0.1,
- act_cfg=dict(type='ReLU', inplace=True),
- norm_cfg=dict(type='LN'),
- num_fcs=2,
- pre_norm=False,
- return_intermediate_dec=True),
- positional_encoding=dict(
- type='SinePositionalEncoding',
- num_feats=128,
- normalize=True),
- loss_cls=dict(
- type='CrossEntropyLoss',
- bg_cls_weight=0.1,
- use_sigmoid=False,
- loss_weight=1.0,
- class_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=5.0),
- loss_iou=dict(type='GIoULoss', loss_weight=2.0),
- train_cfg=dict(
- assigner=dict(
- type='HungarianAssigner',
- cls_cost=dict(type='ClassificationCost', weight=1.),
- reg_cost=dict(type='BBoxL1Cost', weight=5.0),
- iou_cost=dict(
- type='IoUCost', iou_mode='giou', weight=2.0))),
- test_cfg=dict(max_per_img=100),
- **kwargs):
- # NOTE here use `AnchorFreeHead` instead of `TransformerHead`,
- # since it brings inconvenience when the initialization of
- # `AnchorFreeHead` is called.
- super(AnchorFreeHead, self).__init__()
- use_sigmoid_cls = loss_cls.get('use_sigmoid', False)
- assert not use_sigmoid_cls, 'setting use_sigmoid_cls as True is ' \
- 'not supported in DETR, since background is needed for the ' \
- 'matching process.'
- assert 'embed_dims' in transformer \
- and 'num_feats' in positional_encoding
- num_feats = positional_encoding['num_feats']
- embed_dims = transformer['embed_dims']
- assert num_feats * 2 == embed_dims, 'embed_dims should' \
- f' be exactly 2 times of num_feats. Found {embed_dims}' \
- f' and {num_feats}.'
- assert test_cfg is not None and 'max_per_img' in test_cfg
-
- class_weight = loss_cls.get('class_weight', None)
- if class_weight is not None:
- assert isinstance(class_weight, float), 'Expected ' \
- 'class_weight to have type float. Found ' \
- f'{type(class_weight)}.'
-            # NOTE following the official DETR repo, bg_cls_weight means
- # relative classification weight of the no-object class.
- bg_cls_weight = loss_cls.get('bg_cls_weight', class_weight)
- assert isinstance(bg_cls_weight, float), 'Expected ' \
- 'bg_cls_weight to have type float. Found ' \
- f'{type(bg_cls_weight)}.'
- class_weight = torch.ones(num_classes + 1) * class_weight
-            # set background class as the last index
- class_weight[num_classes] = bg_cls_weight
- loss_cls.update({'class_weight': class_weight})
- if 'bg_cls_weight' in loss_cls:
- loss_cls.pop('bg_cls_weight')
- self.bg_cls_weight = bg_cls_weight
-
- if train_cfg:
- assert 'assigner' in train_cfg, 'assigner should be provided '\
- 'when train_cfg is set.'
- assigner = train_cfg['assigner']
- assert loss_cls['loss_weight'] == assigner['cls_cost']['weight'], \
- 'The classification weight for loss and matcher should be' \
- 'exactly the same.'
- assert loss_bbox['loss_weight'] == assigner['reg_cost'][
- 'weight'], 'The regression L1 weight for loss and matcher ' \
- 'should be exactly the same.'
- assert loss_iou['loss_weight'] == assigner['iou_cost']['weight'], \
- 'The regression iou weight for loss and matcher should be' \
- 'exactly the same.'
- self.assigner = build_assigner(assigner)
- # DETR sampling=False, so use PseudoSampler
- sampler_cfg = dict(type='PseudoSampler')
- self.sampler = build_sampler(sampler_cfg, context=self)
- self.num_classes = num_classes
- self.cls_out_channels = num_classes + 1
- self.in_channels = in_channels
- self.num_fcs = num_fcs
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
- self.use_sigmoid_cls = use_sigmoid_cls
- self.embed_dims = embed_dims
- self.num_query = test_cfg['max_per_img']
- self.fp16_enabled = False
- self.loss_cls = build_loss(loss_cls)
- self.loss_bbox = build_loss(loss_bbox)
- self.loss_iou = build_loss(loss_iou)
- self.act_cfg = transformer.get('act_cfg',
- dict(type='ReLU', inplace=True))
- self.activate = build_activation_layer(self.act_cfg)
- self.positional_encoding = build_positional_encoding(
- positional_encoding)
- self.transformer = build_transformer(transformer)
- self._init_layers()
-
- def _init_layers(self):
- """Initialize layers of the transformer head."""
- self.input_proj = Conv2d(
- self.in_channels, self.embed_dims, kernel_size=1)
- self.fc_cls = Linear(self.embed_dims, self.cls_out_channels)
- self.reg_ffn = FFN(
- self.embed_dims,
- self.embed_dims,
- self.num_fcs,
- self.act_cfg,
- dropout=0.0,
- add_residual=False)
- self.fc_reg = Linear(self.embed_dims, 4)
- self.query_embedding = nn.Embedding(self.num_query, self.embed_dims)
-
- def init_weights(self, distribution='uniform'):
- """Initialize weights of the transformer head."""
- # The initialization for transformer is important
- self.transformer.init_weights()
-
- def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
- missing_keys, unexpected_keys, error_msgs):
- """load checkpoints."""
- # NOTE here use `AnchorFreeHead` instead of `TransformerHead`,
- # since `AnchorFreeHead._load_from_state_dict` should not be
- # called here. Invoking the default `Module._load_from_state_dict`
- # is enough.
- super(AnchorFreeHead,
- self)._load_from_state_dict(state_dict, prefix, local_metadata,
- strict, missing_keys,
- unexpected_keys, error_msgs)
-
- def forward(self, feats, img_metas):
- """Forward function.
-
- Args:
- feats (tuple[Tensor]): Features from the upstream network, each is
- a 4D-tensor.
- img_metas (list[dict]): List of image information.
-
- Returns:
- tuple[list[Tensor], list[Tensor]]: Outputs for all scale levels.
-
- - all_cls_scores_list (list[Tensor]): Classification scores \
- for each scale level. Each is a 4D-tensor with shape \
- [nb_dec, bs, num_query, cls_out_channels]. Note \
-                    `cls_out_channels` should include background.
- - all_bbox_preds_list (list[Tensor]): Sigmoid regression \
- outputs for each scale level. Each is a 4D-tensor with \
- normalized coordinate format (cx, cy, w, h) and shape \
- [nb_dec, bs, num_query, 4].
- """
- num_levels = len(feats)
- img_metas_list = [img_metas for _ in range(num_levels)]
- return multi_apply(self.forward_single, feats, img_metas_list)
-
- def forward_single(self, x, img_metas):
-        """Forward function for a single feature level.
-
- Args:
- x (Tensor): Input feature from backbone's single stage, shape
- [bs, c, h, w].
- img_metas (list[dict]): List of image information.
-
- Returns:
- all_cls_scores (Tensor): Outputs from the classification head,
- shape [nb_dec, bs, num_query, cls_out_channels]. Note
-                cls_out_channels should include background.
- all_bbox_preds (Tensor): Sigmoid outputs from the regression
- head with normalized coordinate format (cx, cy, w, h).
- Shape [nb_dec, bs, num_query, 4].
- """
-        # construct binary masks which are used for the transformer.
-        # NOTE following the official DETR repo, non-zero values represent
-        # ignored positions, while zero values mean valid positions.
- batch_size = x.size(0)
- input_img_h, input_img_w = img_metas[0]['batch_input_shape']
- masks = x.new_ones((batch_size, input_img_h, input_img_w))
- for img_id in range(batch_size):
- img_h, img_w, _ = img_metas[img_id]['img_shape']
- masks[img_id, :img_h, :img_w] = 0
-
- x = self.input_proj(x)
- # interpolate masks to have the same spatial shape with x
- masks = F.interpolate(
- masks.unsqueeze(1), size=x.shape[-2:]).to(torch.bool).squeeze(1)
- # position encoding
- pos_embed = self.positional_encoding(masks) # [bs, embed_dim, h, w]
- # outs_dec: [nb_dec, bs, num_query, embed_dim]
- outs_dec, _ = self.transformer(x, masks, self.query_embedding.weight,
- pos_embed)
-
- all_cls_scores = self.fc_cls(outs_dec)
- all_bbox_preds = self.fc_reg(self.activate(
- self.reg_ffn(outs_dec))).sigmoid()
- return all_cls_scores, all_bbox_preds
-
- @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list'))
- def loss(self,
- all_cls_scores_list,
- all_bbox_preds_list,
- gt_bboxes_list,
- gt_labels_list,
- img_metas,
- gt_bboxes_ignore=None):
-        """Loss function.
-
- Only outputs from the last feature level are used for computing
- losses by default.
-
- Args:
- all_cls_scores_list (list[Tensor]): Classification outputs
- for each feature level. Each is a 4D-tensor with shape
- [nb_dec, bs, num_query, cls_out_channels].
- all_bbox_preds_list (list[Tensor]): Sigmoid regression
- outputs for each feature level. Each is a 4D-tensor with
- normalized coordinate format (cx, cy, w, h) and shape
- [nb_dec, bs, num_query, 4].
- gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image
- with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels_list (list[Tensor]): Ground truth class indices for each
- image with shape (num_gts, ).
- img_metas (list[dict]): List of image meta information.
- gt_bboxes_ignore (list[Tensor], optional): Bounding boxes
- which can be ignored for each image. Default None.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
-        # NOTE by default only the outputs from the last feature scale are used.
- all_cls_scores = all_cls_scores_list[-1]
- all_bbox_preds = all_bbox_preds_list[-1]
- assert gt_bboxes_ignore is None, \
- 'Only supports for gt_bboxes_ignore setting to None.'
-
- num_dec_layers = len(all_cls_scores)
- all_gt_bboxes_list = [gt_bboxes_list for _ in range(num_dec_layers)]
- all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)]
- all_gt_bboxes_ignore_list = [
- gt_bboxes_ignore for _ in range(num_dec_layers)
- ]
- img_metas_list = [img_metas for _ in range(num_dec_layers)]
-
- losses_cls, losses_bbox, losses_iou = multi_apply(
- self.loss_single, all_cls_scores, all_bbox_preds,
- all_gt_bboxes_list, all_gt_labels_list, img_metas_list,
- all_gt_bboxes_ignore_list)
-
- loss_dict = dict()
- # loss from the last decoder layer
- loss_dict['loss_cls'] = losses_cls[-1]
- loss_dict['loss_bbox'] = losses_bbox[-1]
- loss_dict['loss_iou'] = losses_iou[-1]
- # loss from other decoder layers
- num_dec_layer = 0
- for loss_cls_i, loss_bbox_i, loss_iou_i in zip(losses_cls[:-1],
- losses_bbox[:-1],
- losses_iou[:-1]):
- loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i
- loss_dict[f'd{num_dec_layer}.loss_bbox'] = loss_bbox_i
- loss_dict[f'd{num_dec_layer}.loss_iou'] = loss_iou_i
- num_dec_layer += 1
- return loss_dict
-
- def loss_single(self,
- cls_scores,
- bbox_preds,
- gt_bboxes_list,
- gt_labels_list,
- img_metas,
- gt_bboxes_ignore_list=None):
-        """Loss function for outputs from a single decoder layer of a single
- feature level.
-
- Args:
- cls_scores (Tensor): Box score logits from a single decoder layer
- for all images. Shape [bs, num_query, cls_out_channels].
- bbox_preds (Tensor): Sigmoid outputs from a single decoder layer
- for all images, with normalized coordinate (cx, cy, w, h) and
- shape [bs, num_query, 4].
- gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image
- with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels_list (list[Tensor]): Ground truth class indices for each
- image with shape (num_gts, ).
- img_metas (list[dict]): List of image meta information.
- gt_bboxes_ignore_list (list[Tensor], optional): Bounding
- boxes which can be ignored for each image. Default None.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components for outputs from
- a single decoder layer.
- """
- num_imgs = cls_scores.size(0)
- cls_scores_list = [cls_scores[i] for i in range(num_imgs)]
- bbox_preds_list = [bbox_preds[i] for i in range(num_imgs)]
- cls_reg_targets = self.get_targets(cls_scores_list, bbox_preds_list,
- gt_bboxes_list, gt_labels_list,
- img_metas, gt_bboxes_ignore_list)
- (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list,
- num_total_pos, num_total_neg) = cls_reg_targets
- labels = torch.cat(labels_list, 0)
- label_weights = torch.cat(label_weights_list, 0)
- bbox_targets = torch.cat(bbox_targets_list, 0)
- bbox_weights = torch.cat(bbox_weights_list, 0)
-
- # classification loss
- cls_scores = cls_scores.reshape(-1, self.cls_out_channels)
- # construct weighted avg_factor to match with the official DETR repo
- cls_avg_factor = num_total_pos * 1.0 + \
- num_total_neg * self.bg_cls_weight
- loss_cls = self.loss_cls(
- cls_scores, labels, label_weights, avg_factor=cls_avg_factor)
-
-        # Compute the average number of gt boxes across all GPUs, for
- # normalization purposes
- num_total_pos = loss_cls.new_tensor([num_total_pos])
- num_total_pos = torch.clamp(reduce_mean(num_total_pos), min=1).item()
-
- # construct factors used for rescale bboxes
- factors = []
- for img_meta, bbox_pred in zip(img_metas, bbox_preds):
- img_h, img_w, _ = img_meta['img_shape']
- factor = bbox_pred.new_tensor([img_w, img_h, img_w,
- img_h]).unsqueeze(0).repeat(
- bbox_pred.size(0), 1)
- factors.append(factor)
- factors = torch.cat(factors, 0)
-
-        # DETR regresses the relative position of boxes (cxcywh) in the image,
- # thus the learning target is normalized by the image size. So here
- # we need to re-scale them for calculating IoU loss
- bbox_preds = bbox_preds.reshape(-1, 4)
- bboxes = bbox_cxcywh_to_xyxy(bbox_preds) * factors
- bboxes_gt = bbox_cxcywh_to_xyxy(bbox_targets) * factors
-
-        # regression IoU loss, GIoU loss by default
- loss_iou = self.loss_iou(
- bboxes, bboxes_gt, bbox_weights, avg_factor=num_total_pos)
-
- # regression L1 loss
- loss_bbox = self.loss_bbox(
- bbox_preds, bbox_targets, bbox_weights, avg_factor=num_total_pos)
- return loss_cls, loss_bbox, loss_iou
-
- def get_targets(self,
- cls_scores_list,
- bbox_preds_list,
- gt_bboxes_list,
- gt_labels_list,
- img_metas,
- gt_bboxes_ignore_list=None):
-        """Compute regression and classification targets for a batch of images.
-
- Outputs from a single decoder layer of a single feature level are used.
-
- Args:
- cls_scores_list (list[Tensor]): Box score logits from a single
- decoder layer for each image with shape [num_query,
- cls_out_channels].
- bbox_preds_list (list[Tensor]): Sigmoid outputs from a single
- decoder layer for each image, with normalized coordinate
- (cx, cy, w, h) and shape [num_query, 4].
- gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image
- with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels_list (list[Tensor]): Ground truth class indices for each
- image with shape (num_gts, ).
- img_metas (list[dict]): List of image meta information.
- gt_bboxes_ignore_list (list[Tensor], optional): Bounding
- boxes which can be ignored for each image. Default None.
-
- Returns:
- tuple: a tuple containing the following targets.
-
- - labels_list (list[Tensor]): Labels for all images.
- - label_weights_list (list[Tensor]): Label weights for all \
- images.
- - bbox_targets_list (list[Tensor]): BBox targets for all \
- images.
- - bbox_weights_list (list[Tensor]): BBox weights for all \
- images.
- - num_total_pos (int): Number of positive samples in all \
- images.
- - num_total_neg (int): Number of negative samples in all \
- images.
- """
- assert gt_bboxes_ignore_list is None, \
- 'Only supports for gt_bboxes_ignore setting to None.'
- num_imgs = len(cls_scores_list)
- gt_bboxes_ignore_list = [
- gt_bboxes_ignore_list for _ in range(num_imgs)
- ]
-
- (labels_list, label_weights_list, bbox_targets_list,
- bbox_weights_list, pos_inds_list, neg_inds_list) = multi_apply(
- self._get_target_single, cls_scores_list, bbox_preds_list,
- gt_bboxes_list, gt_labels_list, img_metas, gt_bboxes_ignore_list)
- num_total_pos = sum((inds.numel() for inds in pos_inds_list))
- num_total_neg = sum((inds.numel() for inds in neg_inds_list))
- return (labels_list, label_weights_list, bbox_targets_list,
- bbox_weights_list, num_total_pos, num_total_neg)
-
- def _get_target_single(self,
- cls_score,
- bbox_pred,
- gt_bboxes,
- gt_labels,
- img_meta,
- gt_bboxes_ignore=None):
-        """Compute regression and classification targets for one image.
-
- Outputs from a single decoder layer of a single feature level are used.
-
- Args:
- cls_score (Tensor): Box score logits from a single decoder layer
- for one image. Shape [num_query, cls_out_channels].
- bbox_pred (Tensor): Sigmoid outputs from a single decoder layer
- for one image, with normalized coordinate (cx, cy, w, h) and
- shape [num_query, 4].
- gt_bboxes (Tensor): Ground truth bboxes for one image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (Tensor): Ground truth class indices for one image
- with shape (num_gts, ).
- img_meta (dict): Meta information for one image.
- gt_bboxes_ignore (Tensor, optional): Bounding boxes
- which can be ignored. Default None.
-
- Returns:
- tuple[Tensor]: a tuple containing the following for one image.
-
- - labels (Tensor): Labels of each image.
- - label_weights (Tensor]): Label weights of each image.
- - bbox_targets (Tensor): BBox targets of each image.
- - bbox_weights (Tensor): BBox weights of each image.
- - pos_inds (Tensor): Sampled positive indices for each image.
- - neg_inds (Tensor): Sampled negative indices for each image.
- """
-
- num_bboxes = bbox_pred.size(0)
- # assigner and sampler
- assign_result = self.assigner.assign(bbox_pred, cls_score, gt_bboxes,
- gt_labels, img_meta,
- gt_bboxes_ignore)
- sampling_result = self.sampler.sample(assign_result, bbox_pred,
- gt_bboxes)
- pos_inds = sampling_result.pos_inds
- neg_inds = sampling_result.neg_inds
-
- # label targets
- labels = gt_bboxes.new_full((num_bboxes, ),
- self.num_classes,
- dtype=torch.long)
- labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds]
- label_weights = gt_bboxes.new_ones(num_bboxes)
-
- # bbox targets
- bbox_targets = torch.zeros_like(bbox_pred)
- bbox_weights = torch.zeros_like(bbox_pred)
- bbox_weights[pos_inds] = 1.0
- img_h, img_w, _ = img_meta['img_shape']
-
-        # DETR regresses the relative position of boxes (cxcywh) in the image.
-        # Thus the learning target should be normalized by the image size; the
-        # box format should also be converted from the default x1y1x2y2 to cxcywh.
- factor = bbox_pred.new_tensor([img_w, img_h, img_w,
- img_h]).unsqueeze(0)
- pos_gt_bboxes_normalized = sampling_result.pos_gt_bboxes / factor
- pos_gt_bboxes_targets = bbox_xyxy_to_cxcywh(pos_gt_bboxes_normalized)
- bbox_targets[pos_inds] = pos_gt_bboxes_targets
- return (labels, label_weights, bbox_targets, bbox_weights, pos_inds,
- neg_inds)
-
-    # overridden because img_metas are needed as inputs for bbox_head.
- def forward_train(self,
- x,
- img_metas,
- gt_bboxes,
- gt_labels=None,
- gt_bboxes_ignore=None,
- proposal_cfg=None,
- **kwargs):
- """Forward function for training mode.
-
- Args:
- x (list[Tensor]): Features from backbone.
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes (Tensor): Ground truth bboxes of the image,
- shape (num_gts, 4).
- gt_labels (Tensor): Ground truth labels of each box,
- shape (num_gts,).
- gt_bboxes_ignore (Tensor): Ground truth bboxes to be
- ignored, shape (num_ignored_gts, 4).
- proposal_cfg (mmcv.Config): Test / postprocessing configuration,
- if None, test_cfg would be used.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- assert proposal_cfg is None, '"proposal_cfg" must be None'
- outs = self(x, img_metas)
- if gt_labels is None:
- loss_inputs = outs + (gt_bboxes, img_metas)
- else:
- loss_inputs = outs + (gt_bboxes, gt_labels, img_metas)
- losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
- return losses
-
- @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list'))
- def get_bboxes(self,
- all_cls_scores_list,
- all_bbox_preds_list,
- img_metas,
- rescale=False):
- """Transform network outputs for a batch into bbox predictions.
-
- Args:
- all_cls_scores_list (list[Tensor]): Classification outputs
- for each feature level. Each is a 4D-tensor with shape
- [nb_dec, bs, num_query, cls_out_channels].
- all_bbox_preds_list (list[Tensor]): Sigmoid regression
- outputs for each feature level. Each is a 4D-tensor with
- normalized coordinate format (cx, cy, w, h) and shape
- [nb_dec, bs, num_query, 4].
- img_metas (list[dict]): Meta information of each image.
- rescale (bool, optional): If True, return boxes in original
- image space. Default False.
-
- Returns:
- list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. \
- The first item is an (n, 5) tensor, where the first 4 columns \
- are bounding box positions (tl_x, tl_y, br_x, br_y) and the \
- 5-th column is a score between 0 and 1. The second item is a \
- (n,) tensor where each item is the predicted class label of \
- the corresponding box.
- """
-        # NOTE by default only outputs from the last feature level are used,
-        # and only the outputs from the last decoder layer are used.
- cls_scores = all_cls_scores_list[-1][-1]
- bbox_preds = all_bbox_preds_list[-1][-1]
-
- result_list = []
- for img_id in range(len(img_metas)):
- cls_score = cls_scores[img_id]
- bbox_pred = bbox_preds[img_id]
- img_shape = img_metas[img_id]['img_shape']
- scale_factor = img_metas[img_id]['scale_factor']
- proposals = self._get_bboxes_single(cls_score, bbox_pred,
- img_shape, scale_factor,
- rescale)
- result_list.append(proposals)
- return result_list
-
- def _get_bboxes_single(self,
- cls_score,
- bbox_pred,
- img_shape,
- scale_factor,
- rescale=False):
- """Transform outputs from the last decoder layer into bbox predictions
- for each image.
-
- Args:
- cls_score (Tensor): Box score logits from the last decoder layer
- for each image. Shape [num_query, cls_out_channels].
- bbox_pred (Tensor): Sigmoid outputs from the last decoder layer
- for each image, with coordinate format (cx, cy, w, h) and
- shape [num_query, 4].
- img_shape (tuple[int]): Shape of input image, (height, width, 3).
-            scale_factor (ndarray, optional): Scale factor of the image arranged
-                as (w_scale, h_scale, w_scale, h_scale).
- rescale (bool, optional): If True, return boxes in original image
- space. Default False.
-
- Returns:
- tuple[Tensor]: Results of detected bboxes and labels.
-
- - det_bboxes: Predicted bboxes with shape [num_query, 5], \
- where the first 4 columns are bounding box positions \
- (tl_x, tl_y, br_x, br_y) and the 5-th column are scores \
- between 0 and 1.
- - det_labels: Predicted labels of the corresponding box with \
- shape [num_query].
- """
- assert len(cls_score) == len(bbox_pred)
- # exclude background
- scores, det_labels = F.softmax(cls_score, dim=-1)[..., :-1].max(-1)
- det_bboxes = bbox_cxcywh_to_xyxy(bbox_pred)
- det_bboxes[:, 0::2] = det_bboxes[:, 0::2] * img_shape[1]
- det_bboxes[:, 1::2] = det_bboxes[:, 1::2] * img_shape[0]
- det_bboxes[:, 0::2].clamp_(min=0, max=img_shape[1])
- det_bboxes[:, 1::2].clamp_(min=0, max=img_shape[0])
- if rescale:
- det_bboxes /= det_bboxes.new_tensor(scale_factor)
- det_bboxes = torch.cat((det_bboxes, scores.unsqueeze(1)), -1)
- return det_bboxes, det_labels
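The `_get_bboxes_single` post-processing is straightforward to check in isolation: softmax the class logits, drop the background column, convert normalized (cx, cy, w, h) boxes to pixel (x1, y1, x2, y2), and clamp to the image. A stand-alone sketch with dummy tensors (a hand-rolled converter, not the mmdet `bbox_cxcywh_to_xyxy` API):

```python
import torch
import torch.nn.functional as F

def cxcywh_to_xyxy(boxes):
    cx, cy, w, h = boxes.unbind(-1)
    return torch.stack([cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h], dim=-1)

num_query, num_classes = 5, 80
cls_score = torch.randn(num_query, num_classes + 1)  # last column is background
bbox_pred = torch.rand(num_query, 4)                 # normalized (cx, cy, w, h)
img_h, img_w = 480, 640

scores, labels = F.softmax(cls_score, dim=-1)[..., :-1].max(-1)  # exclude background
boxes = cxcywh_to_xyxy(bbox_pred)
boxes[:, 0::2] = (boxes[:, 0::2] * img_w).clamp(min=0, max=img_w)
boxes[:, 1::2] = (boxes[:, 1::2] * img_h).clamp(min=0, max=img_h)
det_bboxes = torch.cat([boxes, scores[:, None]], dim=-1)  # (num_query, 5)
print(det_bboxes.shape, labels.shape)
```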
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/bbox_heads/sabl_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/bbox_heads/sabl_head.py
deleted file mode 100644
index 5153996aeb706d103d1ad14b61734914eddb7693..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/bbox_heads/sabl_head.py
+++ /dev/null
@@ -1,572 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule, kaiming_init, normal_init, xavier_init
-from mmcv.runner import force_fp32
-
-from mmdet.core import build_bbox_coder, multi_apply, multiclass_nms
-from mmdet.models.builder import HEADS, build_loss
-from mmdet.models.losses import accuracy
-
-
-@HEADS.register_module()
-class SABLHead(nn.Module):
- """Side-Aware Boundary Localization (SABL) for RoI-Head.
-
- Side-Aware features are extracted by conv layers
- with an attention mechanism.
- Boundary Localization with Bucketing and Bucketing Guided Rescoring
- are implemented in BucketingBBoxCoder.
-
- Please refer to https://arxiv.org/abs/1912.04260 for more details.
-
- Args:
- cls_in_channels (int): Input channels of cls RoI feature. \
- Defaults to 256.
- reg_in_channels (int): Input channels of reg RoI feature. \
- Defaults to 256.
- roi_feat_size (int): Size of RoI features. Defaults to 7.
- reg_feat_up_ratio (int): Upsample ratio of reg features. \
- Defaults to 2.
- reg_pre_kernel (int): Kernel of 2D conv layers before \
- attention pooling. Defaults to 3.
- reg_post_kernel (int): Kernel of 1D conv layers after \
- attention pooling. Defaults to 3.
- reg_pre_num (int): Number of pre convs. Defaults to 2.
- reg_post_num (int): Number of post convs. Defaults to 1.
- num_classes (int): Number of classes in dataset. Defaults to 80.
- cls_out_channels (int): Hidden channels in cls fcs. Defaults to 1024.
- reg_offset_out_channels (int): Hidden and output channel \
- of reg offset branch. Defaults to 256.
- reg_cls_out_channels (int): Hidden and output channel \
- of reg cls branch. Defaults to 256.
- num_cls_fcs (int): Number of fcs for cls branch. Defaults to 1.
-        num_reg_fcs (int): Number of fcs for reg branch. Defaults to 0.
-        reg_class_agnostic (bool): Class agnostic regression or not. \
- Defaults to True.
- norm_cfg (dict): Config of norm layers. Defaults to None.
- bbox_coder (dict): Config of bbox coder. Defaults 'BucketingBBoxCoder'.
- loss_cls (dict): Config of classification loss.
- loss_bbox_cls (dict): Config of classification loss for bbox branch.
- loss_bbox_reg (dict): Config of regression loss for bbox branch.
- """
-
- def __init__(self,
- num_classes,
- cls_in_channels=256,
- reg_in_channels=256,
- roi_feat_size=7,
- reg_feat_up_ratio=2,
- reg_pre_kernel=3,
- reg_post_kernel=3,
- reg_pre_num=2,
- reg_post_num=1,
- cls_out_channels=1024,
- reg_offset_out_channels=256,
- reg_cls_out_channels=256,
- num_cls_fcs=1,
- num_reg_fcs=0,
- reg_class_agnostic=True,
- norm_cfg=None,
- bbox_coder=dict(
- type='BucketingBBoxCoder',
- num_buckets=14,
- scale_factor=1.7),
- loss_cls=dict(
- type='CrossEntropyLoss',
- use_sigmoid=False,
- loss_weight=1.0),
- loss_bbox_cls=dict(
- type='CrossEntropyLoss',
- use_sigmoid=True,
- loss_weight=1.0),
- loss_bbox_reg=dict(
- type='SmoothL1Loss', beta=0.1, loss_weight=1.0)):
- super(SABLHead, self).__init__()
- self.cls_in_channels = cls_in_channels
- self.reg_in_channels = reg_in_channels
- self.roi_feat_size = roi_feat_size
- self.reg_feat_up_ratio = int(reg_feat_up_ratio)
- self.num_buckets = bbox_coder['num_buckets']
- assert self.reg_feat_up_ratio // 2 >= 1
- self.up_reg_feat_size = roi_feat_size * self.reg_feat_up_ratio
- assert self.up_reg_feat_size == bbox_coder['num_buckets']
- self.reg_pre_kernel = reg_pre_kernel
- self.reg_post_kernel = reg_post_kernel
- self.reg_pre_num = reg_pre_num
- self.reg_post_num = reg_post_num
- self.num_classes = num_classes
- self.cls_out_channels = cls_out_channels
- self.reg_offset_out_channels = reg_offset_out_channels
- self.reg_cls_out_channels = reg_cls_out_channels
- self.num_cls_fcs = num_cls_fcs
- self.num_reg_fcs = num_reg_fcs
- self.reg_class_agnostic = reg_class_agnostic
- assert self.reg_class_agnostic
- self.norm_cfg = norm_cfg
-
- self.bbox_coder = build_bbox_coder(bbox_coder)
- self.loss_cls = build_loss(loss_cls)
- self.loss_bbox_cls = build_loss(loss_bbox_cls)
- self.loss_bbox_reg = build_loss(loss_bbox_reg)
-
- self.cls_fcs = self._add_fc_branch(self.num_cls_fcs,
- self.cls_in_channels,
- self.roi_feat_size,
- self.cls_out_channels)
-
- self.side_num = int(np.ceil(self.num_buckets / 2))
-
- if self.reg_feat_up_ratio > 1:
- self.upsample_x = nn.ConvTranspose1d(
- reg_in_channels,
- reg_in_channels,
- self.reg_feat_up_ratio,
- stride=self.reg_feat_up_ratio)
- self.upsample_y = nn.ConvTranspose1d(
- reg_in_channels,
- reg_in_channels,
- self.reg_feat_up_ratio,
- stride=self.reg_feat_up_ratio)
-
- self.reg_pre_convs = nn.ModuleList()
- for i in range(self.reg_pre_num):
- reg_pre_conv = ConvModule(
- reg_in_channels,
- reg_in_channels,
- kernel_size=reg_pre_kernel,
- padding=reg_pre_kernel // 2,
- norm_cfg=norm_cfg,
- act_cfg=dict(type='ReLU'))
- self.reg_pre_convs.append(reg_pre_conv)
-
- self.reg_post_conv_xs = nn.ModuleList()
- for i in range(self.reg_post_num):
- reg_post_conv_x = ConvModule(
- reg_in_channels,
- reg_in_channels,
- kernel_size=(1, reg_post_kernel),
- padding=(0, reg_post_kernel // 2),
- norm_cfg=norm_cfg,
- act_cfg=dict(type='ReLU'))
- self.reg_post_conv_xs.append(reg_post_conv_x)
- self.reg_post_conv_ys = nn.ModuleList()
- for i in range(self.reg_post_num):
- reg_post_conv_y = ConvModule(
- reg_in_channels,
- reg_in_channels,
- kernel_size=(reg_post_kernel, 1),
- padding=(reg_post_kernel // 2, 0),
- norm_cfg=norm_cfg,
- act_cfg=dict(type='ReLU'))
- self.reg_post_conv_ys.append(reg_post_conv_y)
-
- self.reg_conv_att_x = nn.Conv2d(reg_in_channels, 1, 1)
- self.reg_conv_att_y = nn.Conv2d(reg_in_channels, 1, 1)
-
- self.fc_cls = nn.Linear(self.cls_out_channels, self.num_classes + 1)
- self.relu = nn.ReLU(inplace=True)
-
- self.reg_cls_fcs = self._add_fc_branch(self.num_reg_fcs,
- self.reg_in_channels, 1,
- self.reg_cls_out_channels)
- self.reg_offset_fcs = self._add_fc_branch(self.num_reg_fcs,
- self.reg_in_channels, 1,
- self.reg_offset_out_channels)
- self.fc_reg_cls = nn.Linear(self.reg_cls_out_channels, 1)
- self.fc_reg_offset = nn.Linear(self.reg_offset_out_channels, 1)
-
- def _add_fc_branch(self, num_branch_fcs, in_channels, roi_feat_size,
- fc_out_channels):
- in_channels = in_channels * roi_feat_size * roi_feat_size
- branch_fcs = nn.ModuleList()
- for i in range(num_branch_fcs):
- fc_in_channels = (in_channels if i == 0 else fc_out_channels)
- branch_fcs.append(nn.Linear(fc_in_channels, fc_out_channels))
- return branch_fcs
-
- def init_weights(self):
- for module_list in [
- self.reg_cls_fcs, self.reg_offset_fcs, self.cls_fcs
- ]:
- for m in module_list.modules():
- if isinstance(m, nn.Linear):
- xavier_init(m, distribution='uniform')
- if self.reg_feat_up_ratio > 1:
- kaiming_init(self.upsample_x, distribution='normal')
- kaiming_init(self.upsample_y, distribution='normal')
-
- normal_init(self.reg_conv_att_x, 0, 0.01)
- normal_init(self.reg_conv_att_y, 0, 0.01)
- normal_init(self.fc_reg_offset, 0, 0.001)
- normal_init(self.fc_reg_cls, 0, 0.01)
- normal_init(self.fc_cls, 0, 0.01)
-
- def cls_forward(self, cls_x):
- cls_x = cls_x.view(cls_x.size(0), -1)
- for fc in self.cls_fcs:
- cls_x = self.relu(fc(cls_x))
- cls_score = self.fc_cls(cls_x)
- return cls_score
-
- def attention_pool(self, reg_x):
- """Extract direction-specific features fx and fy with attention
-        mechanism."""
- reg_fx = reg_x
- reg_fy = reg_x
- reg_fx_att = self.reg_conv_att_x(reg_fx).sigmoid()
- reg_fy_att = self.reg_conv_att_y(reg_fy).sigmoid()
- reg_fx_att = reg_fx_att / reg_fx_att.sum(dim=2).unsqueeze(2)
- reg_fy_att = reg_fy_att / reg_fy_att.sum(dim=3).unsqueeze(3)
- reg_fx = (reg_fx * reg_fx_att).sum(dim=2)
- reg_fy = (reg_fy * reg_fy_att).sum(dim=3)
- return reg_fx, reg_fy
-
- def side_aware_feature_extractor(self, reg_x):
-        """Refine and extract side-aware features without splitting them."""
- for reg_pre_conv in self.reg_pre_convs:
- reg_x = reg_pre_conv(reg_x)
- reg_fx, reg_fy = self.attention_pool(reg_x)
-
- if self.reg_post_num > 0:
- reg_fx = reg_fx.unsqueeze(2)
- reg_fy = reg_fy.unsqueeze(3)
- for i in range(self.reg_post_num):
- reg_fx = self.reg_post_conv_xs[i](reg_fx)
- reg_fy = self.reg_post_conv_ys[i](reg_fy)
- reg_fx = reg_fx.squeeze(2)
- reg_fy = reg_fy.squeeze(3)
- if self.reg_feat_up_ratio > 1:
- reg_fx = self.relu(self.upsample_x(reg_fx))
- reg_fy = self.relu(self.upsample_y(reg_fy))
- reg_fx = torch.transpose(reg_fx, 1, 2)
- reg_fy = torch.transpose(reg_fy, 1, 2)
- return reg_fx.contiguous(), reg_fy.contiguous()
-
- def reg_pred(self, x, offset_fcs, cls_fcs):
- """Predict bucketing estimation (cls_pred) and fine regression (offset
- pred) with side-aware features."""
- x_offset = x.view(-1, self.reg_in_channels)
- x_cls = x.view(-1, self.reg_in_channels)
-
- for fc in offset_fcs:
- x_offset = self.relu(fc(x_offset))
- for fc in cls_fcs:
- x_cls = self.relu(fc(x_cls))
- offset_pred = self.fc_reg_offset(x_offset)
- cls_pred = self.fc_reg_cls(x_cls)
-
- offset_pred = offset_pred.view(x.size(0), -1)
- cls_pred = cls_pred.view(x.size(0), -1)
-
- return offset_pred, cls_pred
-
- def side_aware_split(self, feat):
-        """Split side-aware features and align them with the order of the
-        bucketing targets."""
- l_end = int(np.ceil(self.up_reg_feat_size / 2))
- r_start = int(np.floor(self.up_reg_feat_size / 2))
- feat_fl = feat[:, :l_end]
- feat_fr = feat[:, r_start:].flip(dims=(1, ))
- feat_fl = feat_fl.contiguous()
- feat_fr = feat_fr.contiguous()
- feat = torch.cat([feat_fl, feat_fr], dim=-1)
- return feat
-
- def bbox_pred_split(self, bbox_pred, num_proposals_per_img):
- """Split batch bbox prediction back to each image."""
- bucket_cls_preds, bucket_offset_preds = bbox_pred
- bucket_cls_preds = bucket_cls_preds.split(num_proposals_per_img, 0)
- bucket_offset_preds = bucket_offset_preds.split(
- num_proposals_per_img, 0)
- bbox_pred = tuple(zip(bucket_cls_preds, bucket_offset_preds))
- return bbox_pred
-
- def reg_forward(self, reg_x):
- outs = self.side_aware_feature_extractor(reg_x)
- edge_offset_preds = []
- edge_cls_preds = []
- reg_fx = outs[0]
- reg_fy = outs[1]
- offset_pred_x, cls_pred_x = self.reg_pred(reg_fx, self.reg_offset_fcs,
- self.reg_cls_fcs)
- offset_pred_y, cls_pred_y = self.reg_pred(reg_fy, self.reg_offset_fcs,
- self.reg_cls_fcs)
- offset_pred_x = self.side_aware_split(offset_pred_x)
- offset_pred_y = self.side_aware_split(offset_pred_y)
- cls_pred_x = self.side_aware_split(cls_pred_x)
- cls_pred_y = self.side_aware_split(cls_pred_y)
- edge_offset_preds = torch.cat([offset_pred_x, offset_pred_y], dim=-1)
- edge_cls_preds = torch.cat([cls_pred_x, cls_pred_y], dim=-1)
-
- return (edge_cls_preds, edge_offset_preds)
-
- def forward(self, x):
-
- bbox_pred = self.reg_forward(x)
- cls_score = self.cls_forward(x)
-
- return cls_score, bbox_pred
-
- def get_targets(self, sampling_results, gt_bboxes, gt_labels,
- rcnn_train_cfg):
- pos_proposals = [res.pos_bboxes for res in sampling_results]
- neg_proposals = [res.neg_bboxes for res in sampling_results]
- pos_gt_bboxes = [res.pos_gt_bboxes for res in sampling_results]
- pos_gt_labels = [res.pos_gt_labels for res in sampling_results]
- cls_reg_targets = self.bucket_target(pos_proposals, neg_proposals,
- pos_gt_bboxes, pos_gt_labels,
- rcnn_train_cfg)
- (labels, label_weights, bucket_cls_targets, bucket_cls_weights,
- bucket_offset_targets, bucket_offset_weights) = cls_reg_targets
- return (labels, label_weights, (bucket_cls_targets,
- bucket_offset_targets),
- (bucket_cls_weights, bucket_offset_weights))
-
- def bucket_target(self,
- pos_proposals_list,
- neg_proposals_list,
- pos_gt_bboxes_list,
- pos_gt_labels_list,
- rcnn_train_cfg,
- concat=True):
- (labels, label_weights, bucket_cls_targets, bucket_cls_weights,
- bucket_offset_targets, bucket_offset_weights) = multi_apply(
- self._bucket_target_single,
- pos_proposals_list,
- neg_proposals_list,
- pos_gt_bboxes_list,
- pos_gt_labels_list,
- cfg=rcnn_train_cfg)
-
- if concat:
- labels = torch.cat(labels, 0)
- label_weights = torch.cat(label_weights, 0)
- bucket_cls_targets = torch.cat(bucket_cls_targets, 0)
- bucket_cls_weights = torch.cat(bucket_cls_weights, 0)
- bucket_offset_targets = torch.cat(bucket_offset_targets, 0)
- bucket_offset_weights = torch.cat(bucket_offset_weights, 0)
- return (labels, label_weights, bucket_cls_targets, bucket_cls_weights,
- bucket_offset_targets, bucket_offset_weights)
-
- def _bucket_target_single(self, pos_proposals, neg_proposals,
- pos_gt_bboxes, pos_gt_labels, cfg):
- """Compute bucketing estimation targets and fine regression targets for
- a single image.
-
- Args:
- pos_proposals (Tensor): positive proposals of a single image,
- Shape (n_pos, 4)
- neg_proposals (Tensor): negative proposals of a single image,
- Shape (n_neg, 4).
- pos_gt_bboxes (Tensor): gt bboxes assigned to positive proposals
- of a single image, Shape (n_pos, 4).
- pos_gt_labels (Tensor): gt labels assigned to positive proposals
- of a single image, Shape (n_pos, ).
- cfg (dict): Config of calculating targets
-
- Returns:
- tuple:
-
- - labels (Tensor): Labels in a single image. \
- Shape (n,).
- - label_weights (Tensor): Label weights in a single image.\
- Shape (n,)
- - bucket_cls_targets (Tensor): Bucket cls targets in \
- a single image. Shape (n, num_buckets*2).
- - bucket_cls_weights (Tensor): Bucket cls weights in \
- a single image. Shape (n, num_buckets*2).
- - bucket_offset_targets (Tensor): Bucket offset targets \
- in a single image. Shape (n, num_buckets*2).
-                - bucket_offset_weights (Tensor): Bucket offset weights \
- in a single image. Shape (n, num_buckets*2).
- """
- num_pos = pos_proposals.size(0)
- num_neg = neg_proposals.size(0)
- num_samples = num_pos + num_neg
- labels = pos_gt_bboxes.new_full((num_samples, ),
- self.num_classes,
- dtype=torch.long)
- label_weights = pos_proposals.new_zeros(num_samples)
- bucket_cls_targets = pos_proposals.new_zeros(num_samples,
- 4 * self.side_num)
- bucket_cls_weights = pos_proposals.new_zeros(num_samples,
- 4 * self.side_num)
- bucket_offset_targets = pos_proposals.new_zeros(
- num_samples, 4 * self.side_num)
- bucket_offset_weights = pos_proposals.new_zeros(
- num_samples, 4 * self.side_num)
- if num_pos > 0:
- labels[:num_pos] = pos_gt_labels
- label_weights[:num_pos] = 1.0
- (pos_bucket_offset_targets, pos_bucket_offset_weights,
- pos_bucket_cls_targets,
- pos_bucket_cls_weights) = self.bbox_coder.encode(
- pos_proposals, pos_gt_bboxes)
- bucket_cls_targets[:num_pos, :] = pos_bucket_cls_targets
- bucket_cls_weights[:num_pos, :] = pos_bucket_cls_weights
- bucket_offset_targets[:num_pos, :] = pos_bucket_offset_targets
- bucket_offset_weights[:num_pos, :] = pos_bucket_offset_weights
- if num_neg > 0:
- label_weights[-num_neg:] = 1.0
- return (labels, label_weights, bucket_cls_targets, bucket_cls_weights,
- bucket_offset_targets, bucket_offset_weights)
-
- def loss(self,
- cls_score,
- bbox_pred,
- rois,
- labels,
- label_weights,
- bbox_targets,
- bbox_weights,
- reduction_override=None):
- losses = dict()
- if cls_score is not None:
- avg_factor = max(torch.sum(label_weights > 0).float().item(), 1.)
- losses['loss_cls'] = self.loss_cls(
- cls_score,
- labels,
- label_weights,
- avg_factor=avg_factor,
- reduction_override=reduction_override)
- losses['acc'] = accuracy(cls_score, labels)
-
- if bbox_pred is not None:
- bucket_cls_preds, bucket_offset_preds = bbox_pred
- bucket_cls_targets, bucket_offset_targets = bbox_targets
- bucket_cls_weights, bucket_offset_weights = bbox_weights
- # edge cls
- bucket_cls_preds = bucket_cls_preds.view(-1, self.side_num)
- bucket_cls_targets = bucket_cls_targets.view(-1, self.side_num)
- bucket_cls_weights = bucket_cls_weights.view(-1, self.side_num)
- losses['loss_bbox_cls'] = self.loss_bbox_cls(
- bucket_cls_preds,
- bucket_cls_targets,
- bucket_cls_weights,
- avg_factor=bucket_cls_targets.size(0),
- reduction_override=reduction_override)
-
- losses['loss_bbox_reg'] = self.loss_bbox_reg(
- bucket_offset_preds,
- bucket_offset_targets,
- bucket_offset_weights,
- avg_factor=bucket_offset_targets.size(0),
- reduction_override=reduction_override)
-
- return losses
-
- @force_fp32(apply_to=('cls_score', 'bbox_pred'))
- def get_bboxes(self,
- rois,
- cls_score,
- bbox_pred,
- img_shape,
- scale_factor,
- rescale=False,
- cfg=None):
- if isinstance(cls_score, list):
- cls_score = sum(cls_score) / float(len(cls_score))
- scores = F.softmax(cls_score, dim=1) if cls_score is not None else None
-
- if bbox_pred is not None:
- bboxes, confids = self.bbox_coder.decode(rois[:, 1:], bbox_pred,
- img_shape)
- else:
- bboxes = rois[:, 1:].clone()
- confids = None
- if img_shape is not None:
- bboxes[:, [0, 2]].clamp_(min=0, max=img_shape[1] - 1)
- bboxes[:, [1, 3]].clamp_(min=0, max=img_shape[0] - 1)
-
- if rescale and bboxes.size(0) > 0:
- if isinstance(scale_factor, float):
- bboxes /= scale_factor
- else:
- bboxes /= torch.from_numpy(scale_factor).to(bboxes.device)
-
- if cfg is None:
- return bboxes, scores
- else:
- det_bboxes, det_labels = multiclass_nms(
- bboxes,
- scores,
- cfg.score_thr,
- cfg.nms,
- cfg.max_per_img,
- score_factors=confids)
-
- return det_bboxes, det_labels
-
- @force_fp32(apply_to=('bbox_preds', ))
- def refine_bboxes(self, rois, labels, bbox_preds, pos_is_gts, img_metas):
- """Refine bboxes during training.
-
- Args:
- rois (Tensor): Shape (n*bs, 5), where n is image number per GPU,
- and bs is the sampled RoIs per image.
- labels (Tensor): Shape (n*bs, ).
- bbox_preds (list[Tensor]): Shape [(n*bs, num_buckets*2), \
- (n*bs, num_buckets*2)].
- pos_is_gts (list[Tensor]): Flags indicating if each positive bbox
- is a gt bbox.
- img_metas (list[dict]): Meta info of each image.
-
- Returns:
- list[Tensor]: Refined bboxes of each image in a mini-batch.
- """
- img_ids = rois[:, 0].long().unique(sorted=True)
- assert img_ids.numel() == len(img_metas)
-
- bboxes_list = []
- for i in range(len(img_metas)):
- inds = torch.nonzero(
- rois[:, 0] == i, as_tuple=False).squeeze(dim=1)
- num_rois = inds.numel()
-
- bboxes_ = rois[inds, 1:]
- label_ = labels[inds]
- edge_cls_preds, edge_offset_preds = bbox_preds
- edge_cls_preds_ = edge_cls_preds[inds]
- edge_offset_preds_ = edge_offset_preds[inds]
- bbox_pred_ = [edge_cls_preds_, edge_offset_preds_]
- img_meta_ = img_metas[i]
- pos_is_gts_ = pos_is_gts[i]
-
- bboxes = self.regress_by_class(bboxes_, label_, bbox_pred_,
- img_meta_)
- # filter gt bboxes
- pos_keep = 1 - pos_is_gts_
- keep_inds = pos_is_gts_.new_ones(num_rois)
- keep_inds[:len(pos_is_gts_)] = pos_keep
-
- bboxes_list.append(bboxes[keep_inds.type(torch.bool)])
-
- return bboxes_list
-
- @force_fp32(apply_to=('bbox_pred', ))
- def regress_by_class(self, rois, label, bbox_pred, img_meta):
- """Regress the bbox for the predicted class. Used in Cascade R-CNN.
-
- Args:
- rois (Tensor): shape (n, 4) or (n, 5)
- label (Tensor): shape (n, )
- bbox_pred (list[Tensor]): shape [(n, num_buckets *2), \
- (n, num_buckets *2)]
- img_meta (dict): Image meta info.
-
- Returns:
- Tensor: Regressed bboxes, the same shape as input rois.
- """
- assert rois.size(1) == 4 or rois.size(1) == 5
-
- if rois.size(1) == 4:
- new_rois, _ = self.bbox_coder.decode(rois, bbox_pred,
- img_meta['img_shape'])
- else:
- bboxes, _ = self.bbox_coder.decode(rois[:, 1:], bbox_pred,
- img_meta['img_shape'])
- new_rois = torch.cat((rois[:, [0]], bboxes), dim=1)
-
- return new_rois
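The side-aware branch hinges on `attention_pool`, which collapses the RoI feature into two 1-D, direction-specific features by attending over one spatial axis at a time. A minimal stand-alone sketch of that pooling (plain PyTorch, illustrative shapes only):

```python
import torch
import torch.nn as nn

class DirectionalAttentionPool(nn.Module):
    """Collapse a B x C x H x W feature into fx (B x C x W) and fy (B x C x H)."""

    def __init__(self, channels):
        super().__init__()
        self.att_x = nn.Conv2d(channels, 1, 1)  # attention used to pool over H
        self.att_y = nn.Conv2d(channels, 1, 1)  # attention used to pool over W

    def forward(self, x):
        ax = self.att_x(x).sigmoid()
        ay = self.att_y(x).sigmoid()
        ax = ax / ax.sum(dim=2, keepdim=True)   # normalize along H
        ay = ay / ay.sum(dim=3, keepdim=True)   # normalize along W
        fx = (x * ax).sum(dim=2)                # B x C x W
        fy = (x * ay).sum(dim=3)                # B x C x H
        return fx, fy

pool = DirectionalAttentionPool(channels=256)
fx, fy = pool(torch.rand(2, 256, 7, 7))
print(fx.shape, fy.shape)  # torch.Size([2, 256, 7]) torch.Size([2, 256, 7])
```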
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/datasets/ade20k.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/datasets/ade20k.py
deleted file mode 100644
index efc8b4bb20c981f3db6df7eb52b3dc0744c94cc0..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/datasets/ade20k.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# dataset settings
-dataset_type = 'ADE20KDataset'
-data_root = 'data/ade/ADEChallengeData2016'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-crop_size = (512, 512)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', reduce_zero_label=True),
- dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(2048, 512),
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- train=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/training',
- ann_dir='annotations/training',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline))
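This `_base_` file only defines plain dicts; a full training config inherits it and mmcv's `Config` resolves the result. A quick sketch of loading and inspecting it follows; the relative path is an assumption about where the file sits on disk, and in this Space the same `Config` class is also available through the vendored `annotator.uniformer` copy of mmcv.

```python
# Minimal sketch: read the dataset config and walk its training pipeline.
from mmcv import Config

cfg = Config.fromfile('configs/_base_/datasets/ade20k.py')  # assumed path
print(cfg.data.samples_per_gpu)                             # 4
print([step['type'] for step in cfg.data.train.pipeline])
# ['LoadImageFromFile', 'LoadAnnotations', 'Resize', 'RandomCrop', ...]
```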
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/utils/parrots_jit.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/utils/parrots_jit.py
deleted file mode 100644
index 61873f6dbb9b10ed972c90aa8faa321e3cb3249e..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/utils/parrots_jit.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os
-
-from .parrots_wrapper import TORCH_VERSION
-
-parrots_jit_option = os.getenv('PARROTS_JIT_OPTION')
-
-if TORCH_VERSION == 'parrots' and parrots_jit_option == 'ON':
- from parrots.jit import pat as jit
-else:
-
- def jit(func=None,
- check_input=None,
- full_shape=True,
- derivate=False,
- coderize=False,
- optimize=False):
-
- def wrapper(func):
-
- def wrapper_inner(*args, **kargs):
- return func(*args, **kargs)
-
- return wrapper_inner
-
- if func is None:
- return wrapper
- else:
- return func
-
-
-if TORCH_VERSION == 'parrots':
- from parrots.utils.tester import skip_no_elena
-else:
-
- def skip_no_elena(func):
-
- def wrapper(*args, **kargs):
- return func(*args, **kargs)
-
- return wrapper
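When the code is not running under parrots, both `jit` and `skip_no_elena` fall back to pass-through decorators, so decorated functions behave exactly as if they were undecorated. A short check of that fallback path, assuming a plain PyTorch install (i.e. `TORCH_VERSION != 'parrots'`):

```python
# With the fallback definitions above, both decorators are no-ops.
@jit(full_shape=True, optimize=False)
def add(a, b):
    return a + b

@skip_no_elena
def scale(x, k=2):
    return x * k

assert add(1, 2) == 3      # wrapper_inner simply forwards to add
assert scale(5) == 10      # wrapper simply forwards to scale
```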
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/version.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/version.py
deleted file mode 100644
index 1cce4e50bd692d4002e3cac3c545a3fb2efe95d0..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/version.py
+++ /dev/null
@@ -1,35 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-__version__ = '1.3.17'
-
-
-def parse_version_info(version_str: str, length: int = 4) -> tuple:
- """Parse a version string into a tuple.
-
- Args:
- version_str (str): The version string.
- length (int): The maximum number of version levels. Default: 4.
-
- Returns:
- tuple[int | str]: The version info, e.g., "1.3.0" is parsed into
- (1, 3, 0, 0, 0, 0), and "2.0.0rc1" is parsed into
- (2, 0, 0, 0, 'rc', 1) (when length is set to 4).
- """
- from packaging.version import parse
- version = parse(version_str)
- assert version.release, f'failed to parse version {version_str}'
- release = list(version.release)
- release = release[:length]
- if len(release) < length:
- release = release + [0] * (length - len(release))
- if version.is_prerelease:
- release.extend(list(version.pre))
- elif version.is_postrelease:
- release.extend(list(version.post))
- else:
- release.extend([0, 0])
- return tuple(release)
-
-
-version_info = tuple(int(x) for x in __version__.split('.')[:3])
-
-__all__ = ['__version__', 'version_info', 'parse_version_info']
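The expected outputs follow directly from the function and its docstring above; the only external requirement is the `packaging` dependency it imports.

```python
# Sanity checks for parse_version_info / version_info defined above.
print(parse_version_info('1.3.17'))    # (1, 3, 17, 0, 0, 0)
print(parse_version_info('2.0.0rc1'))  # (2, 0, 0, 0, 'rc', 1)
print(version_info)                    # (1, 3, 17), derived from __version__
```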
diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/torch_utils/__init__.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/torch_utils/__init__.py
deleted file mode 100644
index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000
--- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/torch_utils/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-# empty
diff --git a/spaces/SUSTech/llm-evaluate/utils.py b/spaces/SUSTech/llm-evaluate/utils.py
deleted file mode 100644
index 5a4ffdf6d9a7aac215f99722d15f73b3a6136d80..0000000000000000000000000000000000000000
--- a/spaces/SUSTech/llm-evaluate/utils.py
+++ /dev/null
@@ -1,100 +0,0 @@
-from evaluate.evaluation_suite import SubTask, EvaluationSuite
-from transformers.pipelines.pt_utils import KeyDataset
-from evaluate import Evaluator, evaluator
-from datasets import Dataset, load_dataset
-from dataclasses import dataclass
-from typing import *
-from time import perf_counter
-import types
-
-
-@dataclass
-class Task(SubTask):
- task_type: str = ""
- evaluator: Optional[Evaluator] = None
- indices: Optional[Iterable] = None
- prompt: Optional[str] = None
- initialized: bool = False
-
- def initialize(self):
- if not self.initialized:
- if self.evaluator is None:
- assert (
- self.task_type
- ), "task_type must be defined if evaluator is not specified"
- self.evaluator = evaluator(self.task_type)
-
- ds = load_dataset(self.data, name=self.subset, split=self.split)
- if self.indices is not None:
- ds = ds.select(self.indices)
- if self.prompt is not None:
- assert (
- "input_column" in self.args_for_task.keys()
- ), "input_column must be defined if prompt is specified"
- input_column = self.args_for_task["input_column"]
- ds = ds.map(
- lambda example: {
- input_column: self.prompt.format(
- **{input_column: example[input_column]}
- )
- }
- )
- if self.data_preprocessor:
- ds = ds.map(self.data_preprocessor)
-
- self.data = ds
-
- self.initialized = True
-
-
-class GenerativeEvaluator(Evaluator):
- def prepare_data(
- self, data: Dataset, input_column: str, label_column: str, *args, **kwargs
- ):
- self.check_required_columns(
- data, {"input_column": input_column, "label_column": label_column}
- )
- metric_inputs = {"references": data[label_column]}
- data = data.map(
- lambda example: {
- "instruction": self.system_prompts.format(
- instruction=example[input_column]
- )
- },
- remove_columns=data.column_names,
- )
- print("Instruction example: ", data[0])
-
- # return metric_inputs, DatasetColumn(data, "instruction")
- return metric_inputs, KeyDataset(data, "instruction")
-
- def predictions_processor(self, predictions, *args, **kwargs):
- return {
- "responses": [
- pred[f"{self.predictions_prefix}_text"]
- for pred_list in predictions
- for pred in pred_list
- ]
- }
-
- def call_pipeline(self, pipe, *args, **kwargs):
- start_time = perf_counter()
- pipe_output = pipe(*args, **kwargs, **self.PIPELINE_KWARGS)
- if isinstance(pipe_output, types.GeneratorType):
- from tqdm.auto import tqdm
- pipe_output = tqdm(pipe_output)
- end_time = perf_counter()
- return pipe_output, self._compute_time_perf(
- start_time, end_time, len(pipe_output)
- )
-
- def __init__(
- self,
- task="text-generation",
- default_metric_name=None,
- predictions_prefix: str = "generated",
- system_prompts="Human:{instruction}\n\nAssistant:",
- ):
- super().__init__(task=task, default_metric_name=default_metric_name)
- self.predictions_prefix = predictions_prefix
- self.system_prompts = system_prompts
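A hedged sketch of how the `Task` wrapper above might be filled in. The dataset name, split, prompt, and column names are illustrative assumptions, not values used by this Space.

```python
# Illustrative only: build a Task over a public dataset and prepare it once.
task = Task(
    task_type="text-classification",
    data="imdb",                              # loaded via datasets.load_dataset
    split="test",
    indices=range(100),                       # evaluate just the first 100 rows
    prompt="Review: {text}\nSentiment:",      # formatted into the input column
    args_for_task={"input_column": "text", "label_column": "label"},
)
task.initialize()     # resolves the evaluator and materialises task.data
print(task.data[0])   # a prompted example, ready to hand to the evaluator
```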
diff --git a/spaces/SWHL/RapidOCRDemo/README.md b/spaces/SWHL/RapidOCRDemo/README.md
deleted file mode 100644
index bcf01e466d2f39874995570c105d991128e5007d..0000000000000000000000000000000000000000
--- a/spaces/SWHL/RapidOCRDemo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: RapidOCR
-emoji: ⚡
-colorFrom: blue
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/ServerX/PorcoDiaz/Applio-RVC-Fork/utils/clonerepo_experimental.py b/spaces/ServerX/PorcoDiaz/Applio-RVC-Fork/utils/clonerepo_experimental.py
deleted file mode 100644
index b0ae02648c1307562cf48033908edcf2996db5e2..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/Applio-RVC-Fork/utils/clonerepo_experimental.py
+++ /dev/null
@@ -1,253 +0,0 @@
-import os
-import subprocess
-import shutil
-from concurrent.futures import ThreadPoolExecutor, as_completed
-from tqdm.notebook import tqdm
-from pathlib import Path
-import requests
-
-def run_script():
- def run_cmd(cmd):
- process = subprocess.run(cmd, shell=True, check=True, text=True)
- return process.stdout
-
- # Change the current directory to /content/
- os.chdir('/content/')
- print("Changing dir to /content/")
-
- # Your function to edit the file
- def edit_file(file_path):
- temp_file_path = "/tmp/temp_file.py"
- changes_made = False
- with open(file_path, "r") as file, open(temp_file_path, "w") as temp_file:
- previous_line = ""
- second_previous_line = ""
- for line in file:
- new_line = line.replace("value=160", "value=128")
- if new_line != line:
- print("Replaced 'value=160' with 'value=128'")
- changes_made = True
- line = new_line
-
- new_line = line.replace("crepe hop length: 160", "crepe hop length: 128")
- if new_line != line:
- print("Replaced 'crepe hop length: 160' with 'crepe hop length: 128'")
- changes_made = True
- line = new_line
-
- new_line = line.replace("value=0.88", "value=0.75")
- if new_line != line:
- print("Replaced 'value=0.88' with 'value=0.75'")
- changes_made = True
- line = new_line
-
- if "label=i18n(\"输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络\")" in previous_line and "value=1," in line:
- new_line = line.replace("value=1,", "value=0.25,")
- if new_line != line:
- print("Replaced 'value=1,' with 'value=0.25,' based on the condition")
- changes_made = True
- line = new_line
-
- if "label=i18n(\"总训练轮数total_epoch\")" in previous_line and "value=20," in line:
- new_line = line.replace("value=20,", "value=500,")
- if new_line != line:
- print("Replaced 'value=20,' with 'value=500,' based on the condition for DEFAULT EPOCH")
- changes_made = True
- line = new_line
-
- if 'choices=["pm", "harvest", "dio", "crepe", "crepe-tiny", "mangio-crepe", "mangio-crepe-tiny"], # Fork Feature. Add Crepe-Tiny' in previous_line:
- if 'value="pm",' in line:
- new_line = line.replace('value="pm",', 'value="mangio-crepe",')
- if new_line != line:
- print("Replaced 'value=\"pm\",' with 'value=\"mangio-crepe\",' based on the condition")
- changes_made = True
- line = new_line
-
- new_line = line.replace('label=i18n("输入训练文件夹路径"), value="E:\\\\语音音频+标注\\\\米津玄师\\\\src"', 'label=i18n("输入训练文件夹路径"), value="/content/dataset/"')
- if new_line != line:
- print("Replaced 'label=i18n(\"输入训练文件夹路径\"), value=\"E:\\\\语音音频+标注\\\\米津玄师\\\\src\"' with 'label=i18n(\"输入训练文件夹路径\"), value=\"/content/dataset/\"'")
- changes_made = True
- line = new_line
-
- if 'label=i18n("是否仅保存最新的ckpt文件以节省硬盘空间"),' in second_previous_line:
- if 'value=i18n("否"),' in line:
- new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),')
- if new_line != line:
- print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE ONLY LATEST")
- changes_made = True
- line = new_line
-
- if 'label=i18n("是否在每次保存时间点将最终小模型保存至weights文件夹"),' in second_previous_line:
- if 'value=i18n("否"),' in line:
- new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),')
- if new_line != line:
- print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE SMALL WEIGHTS")
- changes_made = True
- line = new_line
-
- temp_file.write(line)
- second_previous_line = previous_line
- previous_line = line
-
- # After finished, we replace the original file with the temp one
- import shutil
- shutil.move(temp_file_path, file_path)
-
- if changes_made:
- print("Changes made and file saved successfully.")
- else:
- print("No changes were needed.")
-
- # Define the repo path
- repo_path = '/content/Applio-RVC-Fork'
-
- def copy_all_files_in_directory(src_dir, dest_dir):
- # Iterate over all files in source directory
- for item in Path(src_dir).glob('*'):
- if item.is_file():
- # Copy each file to destination directory
- shutil.copy(item, dest_dir)
- else:
- # If it's a directory, make a new directory in the destination and copy the files recursively
- new_dest = Path(dest_dir) / item.name
- new_dest.mkdir(exist_ok=True)
- copy_all_files_in_directory(str(item), str(new_dest))
-
- def clone_and_copy_repo(repo_path):
- # New repository link
- new_repo_link = "https://github.com/IAHispano/Applio-RVC-Fork/"
- # Temporary path to clone the repository
- temp_repo_path = "/content/temp_Applio-RVC-Fork"
- # New folder name
- new_folder_name = "Applio-RVC-Fork"
-
- # Clone the latest code from the new repository to a temporary location
- run_cmd(f"git clone {new_repo_link} {temp_repo_path}")
- os.chdir(temp_repo_path)
-
- run_cmd(f"git checkout 3fa4dad3d8961e5ca2522e9e12c0b4ddb71ad402")
- run_cmd(f"git checkout f9e606c279cb49420597519b0a83b92be81e42e4")
- run_cmd(f"git checkout 9e305588844c5442d58add1061b29beeca89d679")
- run_cmd(f"git checkout bf92dc1eb54b4f28d6396a4d1820a25896cc9af8")
- run_cmd(f"git checkout c3810e197d3cb98039973b2f723edf967ecd9e61")
- run_cmd(f"git checkout a33159efd134c2413b0afe26a76b7dc87926d2de")
- run_cmd(f"git checkout 24e251fb62c662e39ac5cf9253cc65deb9be94ec")
- run_cmd(f"git checkout ad5667d3017e93232dba85969cddac1322ba2902")
- run_cmd(f"git checkout ce9715392cf52dd5a0e18e00d1b5e408f08dbf27")
- run_cmd(f"git checkout 7c7da3f2ac68f3bd8f3ad5ca5c700f18ab9f90eb")
- run_cmd(f"git checkout 4ac395eab101955e8960b50d772c26f592161764")
- run_cmd(f"git checkout b15b358702294c7375761584e5276c811ffab5e8")
- run_cmd(f"git checkout 1501793dc490982db9aca84a50647764caa66e51")
- run_cmd(f"git checkout 21f7faf57219c75e6ba837062350391a803e9ae2")
- run_cmd(f"git checkout b5eb689fbc409b49f065a431817f822f554cebe7")
- run_cmd(f"git checkout 7e02fae1ebf24cb151bf6cbe787d06734aa65862")
- run_cmd(f"git checkout 6aea5ea18ed0b9a1e03fa5d268d6bc3c616672a9")
- run_cmd(f"git checkout f0f9b25717e59116473fb42bd7f9252cfc32b398")
- run_cmd(f"git checkout b394de424088a81fc081224bc27338a8651ad3b2")
- run_cmd(f"git checkout f1999406a88b80c965d2082340f5ea2bfa9ab67a")
- run_cmd(f"git checkout d98a0fa8dc715308dfc73eac5c553b69c6ee072b")
- run_cmd(f"git checkout d73267a415fb0eba98477afa43ef71ffd82a7157")
- run_cmd(f"git checkout 1a03d01356ae79179e1fb8d8915dc9cc79925742")
- run_cmd(f"git checkout 81497bb3115e92c754300c9b3992df428886a3e9")
- run_cmd(f"git checkout c5af1f8edcf79cb70f065c0110e279e78e48caf9")
- run_cmd(f"git checkout cdb3c90109387fa4dfa92f53c3864c71170ffc77")
-
- # Edit the file here, before copying
- #edit_file(f"{temp_repo_path}/infer-web.py")
-
- # Copy all files from the cloned repository to the existing path
- copy_all_files_in_directory(temp_repo_path, repo_path)
- print(f"Copying all {new_folder_name} files from GitHub.")
-
- # Change working directory back to /content/
- os.chdir('/content/')
- print("Changed path back to /content/")
-
- # Remove the temporary cloned repository
- shutil.rmtree(temp_repo_path)
-
- # Call the function
- clone_and_copy_repo(repo_path)
-
- # Download the credentials file for RVC archive sheet
- os.makedirs('/content/Applio-RVC-Fork/stats/', exist_ok=True)
- run_cmd("wget -q https://cdn.discordapp.com/attachments/945486970883285045/1114717554481569802/peppy-generator-388800-07722f17a188.json -O /content/Applio-RVC-Fork/stats/peppy-generator-388800-07722f17a188.json")
-
- # Forcefully delete any existing torchcrepe dependencies downloaded from an earlier run just in case
- shutil.rmtree('/content/Applio-RVC-Fork/torchcrepe', ignore_errors=True)
- shutil.rmtree('/content/torchcrepe', ignore_errors=True)
-
- # Download the torchcrepe folder from the maxrmorrison/torchcrepe repository
- run_cmd("git clone https://github.com/maxrmorrison/torchcrepe.git")
- shutil.move('/content/torchcrepe/torchcrepe', '/content/Applio-RVC-Fork/')
- shutil.rmtree('/content/torchcrepe', ignore_errors=True) # Delete the torchcrepe repository folder
-
- # Change the current directory to /content/Applio-RVC-Fork
- os.chdir('/content/Applio-RVC-Fork')
- os.makedirs('pretrained', exist_ok=True)
- os.makedirs('uvr5_weights', exist_ok=True)
-
-def download_file(url, filepath):
- response = requests.get(url, stream=True)
- response.raise_for_status()
-
- with open(filepath, "wb") as file:
- for chunk in response.iter_content(chunk_size=8192):
- if chunk:
- file.write(chunk)
-
-def download_pretrained_models():
- pretrained_models = {
- "pretrained": [
- "D40k.pth",
- "G40k.pth",
- "f0D40k.pth",
- "f0G40k.pth"
- ],
- "pretrained_v2": [
- "D40k.pth",
- "G40k.pth",
- "f0D40k.pth",
- "f0G40k.pth",
- "f0G48k.pth",
- "f0D48k.pth"
- ],
- "uvr5_weights": [
- "HP2-人声vocals+非人声instrumentals.pth",
- "HP5-主旋律人声vocals+其他instrumentals.pth",
- "VR-DeEchoNormal.pth",
- "VR-DeEchoDeReverb.pth",
- "VR-DeEchoAggressive.pth",
- "HP5_only_main_vocal.pth",
- "HP3_all_vocals.pth",
- "HP2_all_vocals.pth"
- ]
- }
- part2 = "I"
- base_url = "https://huggingface.co/lj1995/VoiceConversionWebU" + part2 + "/resolve/main/"
- base_path = "/content/Applio-RVC-Fork/"
- base_pathm = base_path
-
- # Calculate total number of files to download
- total_files = sum(len(files) for files in pretrained_models.values()) + 1 # +1 for hubert_base.pt
-
- with tqdm(total=total_files, desc="Downloading files") as pbar:
- for folder, models in pretrained_models.items():
- folder_path = os.path.join(base_path, folder)
- os.makedirs(folder_path, exist_ok=True)
- for model in models:
- url = base_url + folder + "/" + model
- filepath = os.path.join(folder_path, model)
- download_file(url, filepath)
- pbar.update()
-
- # Download hubert_base.pt to the base path
- hubert_url = base_url + "hubert_base.pt"
- hubert_filepath = os.path.join(base_pathm, "hubert_base.pt")
- download_file(hubert_url, hubert_filepath)
- pbar.update()
-def clone_repository(run_download):
- with ThreadPoolExecutor(max_workers=2) as executor:
- executor.submit(run_script)
- if run_download:
- executor.submit(download_pretrained_models)
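The `download_file` helper above streams each checkpoint to disk in 8 KB chunks instead of loading the whole response into memory. The same pattern in isolation, with a placeholder URL and destination rather than project values:

```python
# Standalone sketch of the streamed-download pattern used by download_file.
import requests

def fetch(url: str, dest: str, chunk_size: int = 8192) -> None:
    resp = requests.get(url, stream=True)     # stream=True avoids buffering the whole body
    resp.raise_for_status()                   # fail fast on HTTP errors
    with open(dest, "wb") as fh:
        for chunk in resp.iter_content(chunk_size=chunk_size):
            if chunk:                         # skip keep-alive chunks
                fh.write(chunk)

fetch("https://example.com/some_model.pth", "some_model.pth")  # placeholder URL
```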
diff --git a/spaces/ServerX/PorcoDiaz/infer/modules/train/extract_feature_print.py b/spaces/ServerX/PorcoDiaz/infer/modules/train/extract_feature_print.py
deleted file mode 100644
index f771dd9b8ba92262e6844e7b5781de43c342833a..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/infer/modules/train/extract_feature_print.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import os
-import sys
-import traceback
-
-os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
-os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"
-
-device = sys.argv[1]
-n_part = int(sys.argv[2])
-i_part = int(sys.argv[3])
-if len(sys.argv) == 6:
- exp_dir = sys.argv[4]
- version = sys.argv[5]
-else:
- i_gpu = sys.argv[4]
- exp_dir = sys.argv[5]
- os.environ["CUDA_VISIBLE_DEVICES"] = str(i_gpu)
- version = sys.argv[6]
-import fairseq
-import numpy as np
-import soundfile as sf
-import torch
-import torch.nn.functional as F
-
-if "privateuseone" not in device:
- device = "cpu"
- if torch.cuda.is_available():
- device = "cuda"
- elif torch.backends.mps.is_available():
- device = "mps"
-else:
- import torch_directml
-
- device = torch_directml.device(torch_directml.default_device())
-
- def forward_dml(ctx, x, scale):
- ctx.scale = scale
- res = x.clone().detach()
- return res
-
- fairseq.modules.grad_multiply.GradMultiply.forward = forward_dml
-
-f = open("%s/extract_f0_feature.log" % exp_dir, "a+")
-
-
-def printt(strr):
- print(strr)
- f.write("%s\n" % strr)
- f.flush()
-
-
-printt(sys.argv)
-model_path = "assets/hubert/hubert_base.pt"
-
-printt(exp_dir)
-wavPath = "%s/1_16k_wavs" % exp_dir
-outPath = (
- "%s/3_feature256" % exp_dir if version == "v1" else "%s/3_feature768" % exp_dir
-)
-os.makedirs(outPath, exist_ok=True)
-
-
-# wave must be 16k, hop_size=320
-def readwave(wav_path, normalize=False):
- wav, sr = sf.read(wav_path)
- assert sr == 16000
- feats = torch.from_numpy(wav).float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- if normalize:
- with torch.no_grad():
- feats = F.layer_norm(feats, feats.shape)
- feats = feats.view(1, -1)
- return feats
-
-
-# HuBERT model
-printt("load model(s) from {}".format(model_path))
-# if hubert model is exist
-if os.access(model_path, os.F_OK) == False:
- printt(
- "Error: Extracting is shut down because %s does not exist, you may download it from https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main"
- % model_path
- )
- exit(0)
-models, saved_cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(
- [model_path],
- suffix="",
-)
-model = models[0]
-model = model.to(device)
-printt("move model to %s" % device)
-if device not in ["mps", "cpu"]:
- model = model.half()
-model.eval()
-
-todo = sorted(list(os.listdir(wavPath)))[i_part::n_part]
-n = max(1, len(todo) // 10)  # print progress at most ten times
-if len(todo) == 0:
- printt("no-feature-todo")
-else:
- printt("all-feature-%s" % len(todo))
- for idx, file in enumerate(todo):
- try:
- if file.endswith(".wav"):
- wav_path = "%s/%s" % (wavPath, file)
- out_path = "%s/%s" % (outPath, file.replace("wav", "npy"))
-
- if os.path.exists(out_path):
- continue
-
- feats = readwave(wav_path, normalize=saved_cfg.task.normalize)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- inputs = {
- "source": feats.half().to(device)
- if device not in ["mps", "cpu"]
- else feats.to(device),
- "padding_mask": padding_mask.to(device),
- "output_layer": 9 if version == "v1" else 12, # layer 9
- }
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = (
- model.final_proj(logits[0]) if version == "v1" else logits[0]
- )
-
- feats = feats.squeeze(0).float().cpu().numpy()
- if np.isnan(feats).sum() == 0:
- np.save(out_path, feats, allow_pickle=False)
- else:
- printt("%s-contains nan" % file)
- if idx % n == 0:
- printt("now-%s,all-%s,%s,%s" % (len(todo), idx, file, feats.shape))
- except:
- printt(traceback.format_exc())
- printt("all-feature-done")
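The script reads its configuration from positional arguments rather than flags. One plausible invocation is sketched below; the experiment directory and version are illustrative, and the 5-argument form shown passes no explicit GPU index.

```python
# Hypothetical launch of the feature-extraction worker above.
import subprocess
import sys

exp_dir = "logs/my-experiment"   # assumed to contain the 1_16k_wavs/ folder
subprocess.run([
    sys.executable, "infer/modules/train/extract_feature_print.py",
    "cuda:0",    # sys.argv[1]: requested device
    "2",         # sys.argv[2]: n_part, total parallel workers
    "0",         # sys.argv[3]: i_part, this worker's slice
    exp_dir,     # sys.argv[4]: experiment directory
    "v2",        # sys.argv[5]: version, selects 768-dim features
], check=True)
```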
diff --git a/spaces/Shue/DIGIMAP-Group4-Animefy/generator.py b/spaces/Shue/DIGIMAP-Group4-Animefy/generator.py
deleted file mode 100644
index 1847947af948df202d2dc2bd5dc2934bede345b4..0000000000000000000000000000000000000000
--- a/spaces/Shue/DIGIMAP-Group4-Animefy/generator.py
+++ /dev/null
@@ -1,162 +0,0 @@
-import tensorflow.contrib as tf_contrib
-import tensorflow as tf
-
-def layer_norm(x, scope='layer_norm') :
- return tf_contrib.layers.layer_norm(x,
- center=True, scale=True,
- scope=scope)
- # return tf.keras.layers.LayerNormalization(x, center=True, scale=True)
-
-def lrelu(x, alpha=0.2):
- return tf.nn.leaky_relu(x, alpha)
-
-def Conv2D(inputs, filters, kernel_size=3, strides=1, padding='VALID', Use_bias = None):
- if kernel_size == 3 and strides == 1:
- inputs = tf.pad(inputs, [[0, 0], [1, 1], [1, 1], [0, 0]], mode="REFLECT")
- if kernel_size == 7 and strides == 1:
- inputs = tf.pad(inputs, [[0, 0], [3, 3], [3, 3], [0, 0]], mode="REFLECT")
- if strides == 2:
- inputs = tf.pad(inputs, [[0, 0], [0, 1], [0, 1], [0, 0]], mode="REFLECT")
- return tf.contrib.layers.conv2d(
- inputs,
- num_outputs=filters,
- kernel_size=kernel_size,
- stride=strides,
- weights_initializer=tf.contrib.layers.variance_scaling_initializer(),
- biases_initializer= Use_bias,
- normalizer_fn=None,
- activation_fn=None,
- padding=padding)
-
-
-def Conv2DNormLReLU(inputs, filters, kernel_size=3, strides=1, padding='VALID', Use_bias = None):
- x = Conv2D(inputs, filters, kernel_size, strides,padding=padding, Use_bias = Use_bias)
- x = layer_norm(x,scope=None)
- return lrelu(x)
-
-def dwise_conv(input, k_h=3, k_w=3, channel_multiplier=1, strides=[1, 1, 1, 1],
- padding='VALID', name='dwise_conv', bias = True):
- input = tf.pad(input, [[0, 0], [1, 1], [1, 1], [0, 0]], mode="REFLECT")
- with tf.variable_scope(name):
- in_channel = input.get_shape().as_list()[-1]
- w = tf.get_variable('w', [k_h, k_w, in_channel, channel_multiplier],regularizer=None,initializer=tf.contrib.layers.variance_scaling_initializer())
- conv = tf.nn.depthwise_conv2d(input, w, strides, padding, rate=None, name=name, data_format=None)
- if bias:
- biases = tf.get_variable('bias', [in_channel * channel_multiplier],initializer=tf.constant_initializer(0.0))
- conv = tf.nn.bias_add(conv, biases)
- return conv
-
-
-def Upsample(inputs, filters, kernel_size=3):
- '''
- An alternative to transposed convolution where we first resize, then convolve.
- See http://distill.pub/2016/deconv-checkerboard/
- For some reason the shape needs to be statically known for gradient propagation
- through tf.image.resize_images, but we only know that for fixed image size, so we
- plumb through a "training" argument
- '''
- new_H, new_W = 2 * tf.shape(inputs)[1], 2 * tf.shape(inputs)[2]
- inputs = tf.image.resize_images(inputs, [new_H, new_W])
-
- return Conv2DNormLReLU(filters=filters, kernel_size=kernel_size, inputs=inputs)
-
-
-class G_net(object):
-
-
- def __init__(self, inputs):
-
- with tf.variable_scope('G_MODEL'):
-
- with tf.variable_scope('A'):
- inputs = Conv2DNormLReLU(inputs, 32, 7)
- inputs = Conv2DNormLReLU(inputs, 64, strides=2)
- inputs = Conv2DNormLReLU(inputs, 64)
-
- with tf.variable_scope('B'):
- inputs = Conv2DNormLReLU(inputs, 128, strides=2)
- inputs = Conv2DNormLReLU(inputs, 128)
-
- with tf.variable_scope('C'):
- inputs = Conv2DNormLReLU(inputs, 128)
- inputs = self.InvertedRes_block(inputs, 2, 256, 1, 'r1')
- inputs = self.InvertedRes_block(inputs, 2, 256, 1, 'r2')
- inputs = self.InvertedRes_block(inputs, 2, 256, 1, 'r3')
- inputs = self.InvertedRes_block(inputs, 2, 256, 1, 'r4')
- inputs = Conv2DNormLReLU(inputs, 128)
-
- with tf.variable_scope('D'):
- inputs = Upsample(inputs, 128)
- inputs = Conv2DNormLReLU(inputs, 128)
-
- with tf.variable_scope('E'):
- inputs = Upsample(inputs, 64)
- inputs = Conv2DNormLReLU(inputs, 64)
- inputs = Conv2DNormLReLU(inputs, 32, 7)
- with tf.variable_scope('out_layer'):
- out = Conv2D(inputs, filters =3, kernel_size=1, strides=1)
- self.fake = tf.tanh(out)
-
-
- def InvertedRes_block(self, input, expansion_ratio, output_dim, stride, name, reuse=False, bias=None):
- with tf.variable_scope(name, reuse=reuse):
- # pw
- bottleneck_dim = round(expansion_ratio * input.get_shape().as_list()[-1])
- net = Conv2DNormLReLU(input, bottleneck_dim, kernel_size=1, Use_bias=bias)
-
- # dw
- net = dwise_conv(net, name=name)
- net = layer_norm(net,scope='1')
- net = lrelu(net)
-
- # pw & linear
- net = Conv2D(net, output_dim, kernel_size=1)
- net = layer_norm(net,scope='2')
-
- # element wise add, only for stride==1
- if (int(input.get_shape().as_list()[-1]) == output_dim) and stride == 1:
- net = input + net
-
- return net
-
-def Downsample(inputs, filters = 256, kernel_size=3):
- '''
- An alternative to transposed convolution where we first resize, then convolve.
- See http://distill.pub/2016/deconv-checkerboard/
- For some reason the shape needs to be statically known for gradient propagation
- through tf.image.resize_images, but we only know that for fixed image size, so we
- plumb through a "training" argument
- '''
-
- new_H, new_W = tf.shape(inputs)[1] // 2, tf.shape(inputs)[2] // 2
- inputs = tf.image.resize_images(inputs, [new_H, new_W])
-
- return Separable_conv2d(filters=filters, kernel_size=kernel_size, inputs=inputs)
-
-def Conv2DTransposeLReLU(inputs, filters, kernel_size=2, strides=2, padding='SAME', Use_bias = None):
-
- return tf.contrib.layers.conv2d_transpose(inputs,
- num_outputs=filters,
- kernel_size=kernel_size,
- stride=strides,
- biases_initializer=Use_bias,
- normalizer_fn=tf.contrib.layers.instance_norm,
- activation_fn=lrelu,
- padding=padding)
-
-def Separable_conv2d(inputs, filters, kernel_size=3, strides=1, padding='VALID', Use_bias = tf.zeros_initializer()):
- if kernel_size==3 and strides==1:
- inputs = tf.pad(inputs, [[0, 0], [1, 1], [1, 1], [0, 0]], mode="REFLECT")
- if strides == 2:
- inputs = tf.pad(inputs, [[0, 0], [0, 1], [0, 1], [0, 0]], mode="REFLECT")
- return tf.contrib.layers.separable_conv2d(
- inputs,
- num_outputs=filters,
- kernel_size=kernel_size,
- depth_multiplier=1,
- stride=strides,
- weights_initializer=tf.contrib.layers.variance_scaling_initializer(),
- biases_initializer=Use_bias,
- normalizer_fn=tf.contrib.layers.layer_norm,
- activation_fn=lrelu,
- padding=padding)
\ No newline at end of file
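A hedged sketch of driving `G_net` above. It assumes the TensorFlow 1.x / `tf.contrib` environment this generator was written for, and it runs an untrained graph; a real run would restore a trained checkpoint instead of using random initialisation.

```python
# Build the generator graph and push one random image through it.
import numpy as np
import tensorflow as tf

from generator import G_net

tf.reset_default_graph()
test_input = tf.placeholder(tf.float32, [None, 256, 256, 3], name="test_input")
net = G_net(test_input)                           # builds the 'G_MODEL' variable scope

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   # or restore trained weights here
    out = sess.run(net.fake,
                   feed_dict={test_input: np.random.rand(1, 256, 256, 3).astype(np.float32)})
    print(out.shape)                              # (1, 256, 256, 3), values in [-1, 1]
```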
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/streams/buffered.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/streams/buffered.py
deleted file mode 100644
index 11474c16a988d0e1c50be2637b14438985bcfbc9..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/streams/buffered.py
+++ /dev/null
@@ -1,118 +0,0 @@
-from __future__ import annotations
-
-from dataclasses import dataclass, field
-from typing import Any, Callable, Mapping
-
-from .. import ClosedResourceError, DelimiterNotFound, EndOfStream, IncompleteRead
-from ..abc import AnyByteReceiveStream, ByteReceiveStream
-
-
-@dataclass(eq=False)
-class BufferedByteReceiveStream(ByteReceiveStream):
- """
- Wraps any bytes-based receive stream and uses a buffer to provide sophisticated receiving
- capabilities in the form of a byte stream.
- """
-
- receive_stream: AnyByteReceiveStream
- _buffer: bytearray = field(init=False, default_factory=bytearray)
- _closed: bool = field(init=False, default=False)
-
- async def aclose(self) -> None:
- await self.receive_stream.aclose()
- self._closed = True
-
- @property
- def buffer(self) -> bytes:
- """The bytes currently in the buffer."""
- return bytes(self._buffer)
-
- @property
- def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
- return self.receive_stream.extra_attributes
-
- async def receive(self, max_bytes: int = 65536) -> bytes:
- if self._closed:
- raise ClosedResourceError
-
- if self._buffer:
- chunk = bytes(self._buffer[:max_bytes])
- del self._buffer[:max_bytes]
- return chunk
- elif isinstance(self.receive_stream, ByteReceiveStream):
- return await self.receive_stream.receive(max_bytes)
- else:
- # With a bytes-oriented object stream, we need to handle any surplus bytes we get from
- # the receive() call
- chunk = await self.receive_stream.receive()
- if len(chunk) > max_bytes:
- # Save the surplus bytes in the buffer
- self._buffer.extend(chunk[max_bytes:])
- return chunk[:max_bytes]
- else:
- return chunk
-
- async def receive_exactly(self, nbytes: int) -> bytes:
- """
- Read exactly the given amount of bytes from the stream.
-
- :param nbytes: the number of bytes to read
- :return: the bytes read
- :raises ~anyio.IncompleteRead: if the stream was closed before the requested
- amount of bytes could be read from the stream
-
- """
- while True:
- remaining = nbytes - len(self._buffer)
- if remaining <= 0:
- retval = self._buffer[:nbytes]
- del self._buffer[:nbytes]
- return bytes(retval)
-
- try:
- if isinstance(self.receive_stream, ByteReceiveStream):
- chunk = await self.receive_stream.receive(remaining)
- else:
- chunk = await self.receive_stream.receive()
- except EndOfStream as exc:
- raise IncompleteRead from exc
-
- self._buffer.extend(chunk)
-
- async def receive_until(self, delimiter: bytes, max_bytes: int) -> bytes:
- """
- Read from the stream until the delimiter is found or max_bytes have been read.
-
- :param delimiter: the marker to look for in the stream
- :param max_bytes: maximum number of bytes that will be read before raising
- :exc:`~anyio.DelimiterNotFound`
- :return: the bytes read (not including the delimiter)
- :raises ~anyio.IncompleteRead: if the stream was closed before the delimiter
- was found
- :raises ~anyio.DelimiterNotFound: if the delimiter is not found within the
- bytes read up to the maximum allowed
-
- """
- delimiter_size = len(delimiter)
- offset = 0
- while True:
- # Check if the delimiter can be found in the current buffer
- index = self._buffer.find(delimiter, offset)
- if index >= 0:
- found = self._buffer[:index]
- del self._buffer[: index + len(delimiter) :]
- return bytes(found)
-
- # Check if the buffer is already at or over the limit
- if len(self._buffer) >= max_bytes:
- raise DelimiterNotFound(max_bytes)
-
- # Read more data into the buffer from the socket
- try:
- data = await self.receive_stream.receive()
- except EndOfStream as exc:
- raise IncompleteRead from exc
-
- # Move the offset forward and add the new data to the buffer
- offset = max(len(self._buffer) - delimiter_size + 1, 0)
- self._buffer.extend(data)
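A short, runnable sketch of the buffering behaviour: wrap a bytes-producing memory stream, then read a delimited header followed by a fixed-size payload. The payload contents are arbitrary example bytes.

```python
# Demonstrates receive_until / receive_exactly on top of a memory stream.
import anyio
from anyio.streams.buffered import BufferedByteReceiveStream


async def main() -> None:
    send, receive = anyio.create_memory_object_stream(max_buffer_size=10)
    buffered = BufferedByteReceiveStream(receive)

    await send.send(b"HELLO\r\npayload!")
    await send.aclose()

    print(await buffered.receive_until(b"\r\n", max_bytes=64))  # b'HELLO'
    print(await buffered.receive_exactly(8))                    # b'payload!'


anyio.run(main)
```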
diff --git a/spaces/SunshineSalem/JanitorAI/Dockerfile b/spaces/SunshineSalem/JanitorAI/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/SunshineSalem/JanitorAI/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && \
-
-apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/transforms/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/transforms/__init__.py
deleted file mode 100644
index e91c6cdfacd6992a7a1e80c7d2e4b38b2cf7dcde..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/transforms/__init__.py
+++ /dev/null
@@ -1,14 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from fvcore.transforms.transform import Transform, TransformList # order them first
-from fvcore.transforms.transform import *
-from .transform import *
-from .augmentation import *
-from .augmentation_impl import *
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
-
-
-from annotator.oneformer.detectron2.utils.env import fixup_module_metadata
-
-fixup_module_metadata(__name__, globals(), __all__)
-del fixup_module_metadata
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/core/evaluation/eval_hooks.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/core/evaluation/eval_hooks.py
deleted file mode 100644
index 6fc100c8f96e817a6ed2666f7c9f762af2463b48..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/core/evaluation/eval_hooks.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import os.path as osp
-
-from annotator.uniformer.mmcv.runner import DistEvalHook as _DistEvalHook
-from annotator.uniformer.mmcv.runner import EvalHook as _EvalHook
-
-
-class EvalHook(_EvalHook):
- """Single GPU EvalHook, with efficient test support.
-
- Args:
- by_epoch (bool): Determine perform evaluation by epoch or by iteration.
- If set to True, it will perform by epoch. Otherwise, by iteration.
- Default: False.
- efficient_test (bool): Whether save the results as local numpy files to
- save CPU memory during evaluation. Default: False.
- Returns:
- list: The prediction results.
- """
-
- greater_keys = ['mIoU', 'mAcc', 'aAcc']
-
- def __init__(self, *args, by_epoch=False, efficient_test=False, **kwargs):
- super().__init__(*args, by_epoch=by_epoch, **kwargs)
- self.efficient_test = efficient_test
-
- def after_train_iter(self, runner):
- """After train iter hook.
-
- Override default ``single_gpu_test``.
- """
- if self.by_epoch or not self.every_n_iters(runner, self.interval):
- return
- from annotator.uniformer.mmseg.apis import single_gpu_test
- runner.log_buffer.clear()
- results = single_gpu_test(
- runner.model,
- self.dataloader,
- show=False,
- efficient_test=self.efficient_test)
- self.evaluate(runner, results)
-
- def after_train_epoch(self, runner):
- """After train epoch hook.
-
- Override default ``single_gpu_test``.
- """
- if not self.by_epoch or not self.every_n_epochs(runner, self.interval):
- return
- from annotator.uniformer.mmseg.apis import single_gpu_test
- runner.log_buffer.clear()
- results = single_gpu_test(runner.model, self.dataloader, show=False)
- self.evaluate(runner, results)
-
-
-class DistEvalHook(_DistEvalHook):
- """Distributed EvalHook, with efficient test support.
-
- Args:
- by_epoch (bool): Determine perform evaluation by epoch or by iteration.
- If set to True, it will perform by epoch. Otherwise, by iteration.
- Default: False.
- efficient_test (bool): Whether save the results as local numpy files to
- save CPU memory during evaluation. Default: False.
- Returns:
- list: The prediction results.
- """
-
- greater_keys = ['mIoU', 'mAcc', 'aAcc']
-
- def __init__(self, *args, by_epoch=False, efficient_test=False, **kwargs):
- super().__init__(*args, by_epoch=by_epoch, **kwargs)
- self.efficient_test = efficient_test
-
- def after_train_iter(self, runner):
- """After train iter hook.
-
- Override default ``multi_gpu_test``.
- """
- if self.by_epoch or not self.every_n_iters(runner, self.interval):
- return
- from annotator.uniformer.mmseg.apis import multi_gpu_test
- runner.log_buffer.clear()
- results = multi_gpu_test(
- runner.model,
- self.dataloader,
- tmpdir=osp.join(runner.work_dir, '.eval_hook'),
- gpu_collect=self.gpu_collect,
- efficient_test=self.efficient_test)
- if runner.rank == 0:
- print('\n')
- self.evaluate(runner, results)
-
- def after_train_epoch(self, runner):
- """After train epoch hook.
-
- Override default ``multi_gpu_test``.
- """
- if not self.by_epoch or not self.every_n_epochs(runner, self.interval):
- return
- from annotator.uniformer.mmseg.apis import multi_gpu_test
- runner.log_buffer.clear()
- results = multi_gpu_test(
- runner.model,
- self.dataloader,
- tmpdir=osp.join(runner.work_dir, '.eval_hook'),
- gpu_collect=self.gpu_collect)
- if runner.rank == 0:
- print('\n')
- self.evaluate(runner, results)
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/backbones/hrnet.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/backbones/hrnet.py
deleted file mode 100644
index 331ebf3ccb8597b3f507670753789073fc3c946d..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/backbones/hrnet.py
+++ /dev/null
@@ -1,555 +0,0 @@
-import torch.nn as nn
-from annotator.uniformer.mmcv.cnn import (build_conv_layer, build_norm_layer, constant_init,
- kaiming_init)
-from annotator.uniformer.mmcv.runner import load_checkpoint
-from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm
-
-from annotator.uniformer.mmseg.ops import Upsample, resize
-from annotator.uniformer.mmseg.utils import get_root_logger
-from ..builder import BACKBONES
-from .resnet import BasicBlock, Bottleneck
-
-
-class HRModule(nn.Module):
- """High-Resolution Module for HRNet.
-
- In this module, every branch has 4 BasicBlocks/Bottlenecks. Fusion/Exchange
- is in this module.
- """
-
- def __init__(self,
- num_branches,
- blocks,
- num_blocks,
- in_channels,
- num_channels,
- multiscale_output=True,
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN', requires_grad=True)):
- super(HRModule, self).__init__()
- self._check_branches(num_branches, num_blocks, in_channels,
- num_channels)
-
- self.in_channels = in_channels
- self.num_branches = num_branches
-
- self.multiscale_output = multiscale_output
- self.norm_cfg = norm_cfg
- self.conv_cfg = conv_cfg
- self.with_cp = with_cp
- self.branches = self._make_branches(num_branches, blocks, num_blocks,
- num_channels)
- self.fuse_layers = self._make_fuse_layers()
- self.relu = nn.ReLU(inplace=False)
-
- def _check_branches(self, num_branches, num_blocks, in_channels,
- num_channels):
- """Check branches configuration."""
- if num_branches != len(num_blocks):
- error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_BLOCKS(' \
- f'{len(num_blocks)})'
- raise ValueError(error_msg)
-
- if num_branches != len(num_channels):
- error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_CHANNELS(' \
- f'{len(num_channels)})'
- raise ValueError(error_msg)
-
- if num_branches != len(in_channels):
- error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_INCHANNELS(' \
- f'{len(in_channels)})'
- raise ValueError(error_msg)
-
- def _make_one_branch(self,
- branch_index,
- block,
- num_blocks,
- num_channels,
- stride=1):
- """Build one branch."""
- downsample = None
- if stride != 1 or \
- self.in_channels[branch_index] != \
- num_channels[branch_index] * block.expansion:
- downsample = nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- self.in_channels[branch_index],
- num_channels[branch_index] * block.expansion,
- kernel_size=1,
- stride=stride,
- bias=False),
- build_norm_layer(self.norm_cfg, num_channels[branch_index] *
- block.expansion)[1])
-
- layers = []
- layers.append(
- block(
- self.in_channels[branch_index],
- num_channels[branch_index],
- stride,
- downsample=downsample,
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
- self.in_channels[branch_index] = \
- num_channels[branch_index] * block.expansion
- for i in range(1, num_blocks[branch_index]):
- layers.append(
- block(
- self.in_channels[branch_index],
- num_channels[branch_index],
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
-
- return nn.Sequential(*layers)
-
- def _make_branches(self, num_branches, block, num_blocks, num_channels):
- """Build multiple branch."""
- branches = []
-
- for i in range(num_branches):
- branches.append(
- self._make_one_branch(i, block, num_blocks, num_channels))
-
- return nn.ModuleList(branches)
-
- def _make_fuse_layers(self):
- """Build fuse layer."""
- if self.num_branches == 1:
- return None
-
- num_branches = self.num_branches
- in_channels = self.in_channels
- fuse_layers = []
- num_out_branches = num_branches if self.multiscale_output else 1
- for i in range(num_out_branches):
- fuse_layer = []
- for j in range(num_branches):
- if j > i:
- fuse_layer.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels[j],
- in_channels[i],
- kernel_size=1,
- stride=1,
- padding=0,
- bias=False),
- build_norm_layer(self.norm_cfg, in_channels[i])[1],
- # we set align_corners=False for HRNet
- Upsample(
- scale_factor=2**(j - i),
- mode='bilinear',
- align_corners=False)))
- elif j == i:
- fuse_layer.append(None)
- else:
- conv_downsamples = []
- for k in range(i - j):
- if k == i - j - 1:
- conv_downsamples.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels[j],
- in_channels[i],
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg,
- in_channels[i])[1]))
- else:
- conv_downsamples.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels[j],
- in_channels[j],
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg,
- in_channels[j])[1],
- nn.ReLU(inplace=False)))
- fuse_layer.append(nn.Sequential(*conv_downsamples))
- fuse_layers.append(nn.ModuleList(fuse_layer))
-
- return nn.ModuleList(fuse_layers)
-
- def forward(self, x):
- """Forward function."""
- if self.num_branches == 1:
- return [self.branches[0](x[0])]
-
- for i in range(self.num_branches):
- x[i] = self.branches[i](x[i])
-
- x_fuse = []
- for i in range(len(self.fuse_layers)):
- y = 0
- for j in range(self.num_branches):
- if i == j:
- y += x[j]
- elif j > i:
- y = y + resize(
- self.fuse_layers[i][j](x[j]),
- size=x[i].shape[2:],
- mode='bilinear',
- align_corners=False)
- else:
- y += self.fuse_layers[i][j](x[j])
- x_fuse.append(self.relu(y))
- return x_fuse
-
-
-@BACKBONES.register_module()
-class HRNet(nn.Module):
- """HRNet backbone.
-
- High-Resolution Representations for Labeling Pixels and Regions
- arXiv: https://arxiv.org/abs/1904.04514
-
- Args:
- extra (dict): detailed configuration for each stage of HRNet.
- in_channels (int): Number of input image channels. Normally 3.
- conv_cfg (dict): dictionary to construct and config conv layer.
- norm_cfg (dict): dictionary to construct and config norm layer.
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed.
- zero_init_residual (bool): whether to use zero init for last norm layer
- in resblocks to let them behave as identity.
-
- Example:
- >>> from annotator.uniformer.mmseg.models import HRNet
- >>> import torch
- >>> extra = dict(
- >>> stage1=dict(
- >>> num_modules=1,
- >>> num_branches=1,
- >>> block='BOTTLENECK',
- >>> num_blocks=(4, ),
- >>> num_channels=(64, )),
- >>> stage2=dict(
- >>> num_modules=1,
- >>> num_branches=2,
- >>> block='BASIC',
- >>> num_blocks=(4, 4),
- >>> num_channels=(32, 64)),
- >>> stage3=dict(
- >>> num_modules=4,
- >>> num_branches=3,
- >>> block='BASIC',
- >>> num_blocks=(4, 4, 4),
- >>> num_channels=(32, 64, 128)),
- >>> stage4=dict(
- >>> num_modules=3,
- >>> num_branches=4,
- >>> block='BASIC',
- >>> num_blocks=(4, 4, 4, 4),
- >>> num_channels=(32, 64, 128, 256)))
- >>> self = HRNet(extra, in_channels=1)
- >>> self.eval()
- >>> inputs = torch.rand(1, 1, 32, 32)
- >>> level_outputs = self.forward(inputs)
- >>> for level_out in level_outputs:
- ... print(tuple(level_out.shape))
- (1, 32, 8, 8)
- (1, 64, 4, 4)
- (1, 128, 2, 2)
- (1, 256, 1, 1)
- """
-
- blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck}
-
- def __init__(self,
- extra,
- in_channels=3,
- conv_cfg=None,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=False,
- with_cp=False,
- zero_init_residual=False):
- super(HRNet, self).__init__()
- self.extra = extra
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.norm_eval = norm_eval
- self.with_cp = with_cp
- self.zero_init_residual = zero_init_residual
-
- # stem net
- self.norm1_name, norm1 = build_norm_layer(self.norm_cfg, 64, postfix=1)
- self.norm2_name, norm2 = build_norm_layer(self.norm_cfg, 64, postfix=2)
-
- self.conv1 = build_conv_layer(
- self.conv_cfg,
- in_channels,
- 64,
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False)
-
- self.add_module(self.norm1_name, norm1)
- self.conv2 = build_conv_layer(
- self.conv_cfg,
- 64,
- 64,
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False)
-
- self.add_module(self.norm2_name, norm2)
- self.relu = nn.ReLU(inplace=True)
-
- # stage 1
- self.stage1_cfg = self.extra['stage1']
- num_channels = self.stage1_cfg['num_channels'][0]
- block_type = self.stage1_cfg['block']
- num_blocks = self.stage1_cfg['num_blocks'][0]
-
- block = self.blocks_dict[block_type]
- stage1_out_channels = num_channels * block.expansion
- self.layer1 = self._make_layer(block, 64, num_channels, num_blocks)
-
- # stage 2
- self.stage2_cfg = self.extra['stage2']
- num_channels = self.stage2_cfg['num_channels']
- block_type = self.stage2_cfg['block']
-
- block = self.blocks_dict[block_type]
- num_channels = [channel * block.expansion for channel in num_channels]
- self.transition1 = self._make_transition_layer([stage1_out_channels],
- num_channels)
- self.stage2, pre_stage_channels = self._make_stage(
- self.stage2_cfg, num_channels)
-
- # stage 3
- self.stage3_cfg = self.extra['stage3']
- num_channels = self.stage3_cfg['num_channels']
- block_type = self.stage3_cfg['block']
-
- block = self.blocks_dict[block_type]
- num_channels = [channel * block.expansion for channel in num_channels]
- self.transition2 = self._make_transition_layer(pre_stage_channels,
- num_channels)
- self.stage3, pre_stage_channels = self._make_stage(
- self.stage3_cfg, num_channels)
-
- # stage 4
- self.stage4_cfg = self.extra['stage4']
- num_channels = self.stage4_cfg['num_channels']
- block_type = self.stage4_cfg['block']
-
- block = self.blocks_dict[block_type]
- num_channels = [channel * block.expansion for channel in num_channels]
- self.transition3 = self._make_transition_layer(pre_stage_channels,
- num_channels)
- self.stage4, pre_stage_channels = self._make_stage(
- self.stage4_cfg, num_channels)
-
- @property
- def norm1(self):
- """nn.Module: the normalization layer named "norm1" """
- return getattr(self, self.norm1_name)
-
- @property
- def norm2(self):
- """nn.Module: the normalization layer named "norm2" """
- return getattr(self, self.norm2_name)
-
- def _make_transition_layer(self, num_channels_pre_layer,
- num_channels_cur_layer):
- """Make transition layer."""
- num_branches_cur = len(num_channels_cur_layer)
- num_branches_pre = len(num_channels_pre_layer)
-
- transition_layers = []
- for i in range(num_branches_cur):
- if i < num_branches_pre:
- if num_channels_cur_layer[i] != num_channels_pre_layer[i]:
- transition_layers.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- num_channels_pre_layer[i],
- num_channels_cur_layer[i],
- kernel_size=3,
- stride=1,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg,
- num_channels_cur_layer[i])[1],
- nn.ReLU(inplace=True)))
- else:
- transition_layers.append(None)
- else:
- conv_downsamples = []
- for j in range(i + 1 - num_branches_pre):
- in_channels = num_channels_pre_layer[-1]
- out_channels = num_channels_cur_layer[i] \
- if j == i - num_branches_pre else in_channels
- conv_downsamples.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels,
- out_channels,
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg, out_channels)[1],
- nn.ReLU(inplace=True)))
- transition_layers.append(nn.Sequential(*conv_downsamples))
-
- return nn.ModuleList(transition_layers)
-
- def _make_layer(self, block, inplanes, planes, blocks, stride=1):
- """Make each layer."""
- downsample = None
- if stride != 1 or inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- inplanes,
- planes * block.expansion,
- kernel_size=1,
- stride=stride,
- bias=False),
- build_norm_layer(self.norm_cfg, planes * block.expansion)[1])
-
- layers = []
- layers.append(
- block(
- inplanes,
- planes,
- stride,
- downsample=downsample,
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
- inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(
- block(
- inplanes,
- planes,
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
-
- return nn.Sequential(*layers)
-
- def _make_stage(self, layer_config, in_channels, multiscale_output=True):
- """Make each stage."""
- num_modules = layer_config['num_modules']
- num_branches = layer_config['num_branches']
- num_blocks = layer_config['num_blocks']
- num_channels = layer_config['num_channels']
- block = self.blocks_dict[layer_config['block']]
-
- hr_modules = []
- for i in range(num_modules):
- # multi_scale_output is only used for the last module
- if not multiscale_output and i == num_modules - 1:
- reset_multiscale_output = False
- else:
- reset_multiscale_output = True
-
- hr_modules.append(
- HRModule(
- num_branches,
- block,
- num_blocks,
- in_channels,
- num_channels,
- reset_multiscale_output,
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
-
- return nn.Sequential(*hr_modules), in_channels
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
- constant_init(m, 1)
-
- if self.zero_init_residual:
- for m in self.modules():
- if isinstance(m, Bottleneck):
- constant_init(m.norm3, 0)
- elif isinstance(m, BasicBlock):
- constant_init(m.norm2, 0)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- """Forward function."""
-
- x = self.conv1(x)
- x = self.norm1(x)
- x = self.relu(x)
- x = self.conv2(x)
- x = self.norm2(x)
- x = self.relu(x)
- x = self.layer1(x)
-
- x_list = []
- for i in range(self.stage2_cfg['num_branches']):
- if self.transition1[i] is not None:
- x_list.append(self.transition1[i](x))
- else:
- x_list.append(x)
- y_list = self.stage2(x_list)
-
- x_list = []
- for i in range(self.stage3_cfg['num_branches']):
- if self.transition2[i] is not None:
- x_list.append(self.transition2[i](y_list[-1]))
- else:
- x_list.append(y_list[i])
- y_list = self.stage3(x_list)
-
- x_list = []
- for i in range(self.stage4_cfg['num_branches']):
- if self.transition3[i] is not None:
- x_list.append(self.transition3[i](y_list[-1]))
- else:
- x_list.append(y_list[i])
- y_list = self.stage4(x_list)
-
- return y_list
-
- def train(self, mode=True):
- """Convert the model into training mode while keeping the normalization
- layers frozen."""
- super(HRNet, self).train(mode)
- if mode and self.norm_eval:
- for m in self.modules():
- # trick: eval have effect on BatchNorm only
- if isinstance(m, _BatchNorm):
- m.eval()
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/objects365.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/objects365.py
deleted file mode 100644
index 41395bdd53b67b7a7111f06564c3a2d2b63a7cdc..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/objects365.py
+++ /dev/null
@@ -1,394 +0,0 @@
-from detectron2.data.datasets.register_coco import register_coco_instances
-import os
-
-categories_v1 = [
-{'id': 164, 'name': 'cutting/chopping board'} ,
-{'id': 49, 'name': 'tie'} ,
-{'id': 306, 'name': 'crosswalk sign'} ,
-{'id': 145, 'name': 'gun'} ,
-{'id': 14, 'name': 'street lights'} ,
-{'id': 223, 'name': 'bar soap'} ,
-{'id': 74, 'name': 'wild bird'} ,
-{'id': 219, 'name': 'ice cream'} ,
-{'id': 37, 'name': 'stool'} ,
-{'id': 25, 'name': 'storage box'} ,
-{'id': 153, 'name': 'giraffe'} ,
-{'id': 52, 'name': 'pen/pencil'} ,
-{'id': 61, 'name': 'high heels'} ,
-{'id': 340, 'name': 'mangosteen'} ,
-{'id': 22, 'name': 'bracelet'} ,
-{'id': 155, 'name': 'piano'} ,
-{'id': 162, 'name': 'vent'} ,
-{'id': 75, 'name': 'laptop'} ,
-{'id': 236, 'name': 'toaster'} ,
-{'id': 231, 'name': 'fire truck'} ,
-{'id': 42, 'name': 'basket'} ,
-{'id': 150, 'name': 'zebra'} ,
-{'id': 124, 'name': 'head phone'} ,
-{'id': 90, 'name': 'sheep'} ,
-{'id': 322, 'name': 'steak'} ,
-{'id': 39, 'name': 'couch'} ,
-{'id': 209, 'name': 'toothbrush'} ,
-{'id': 59, 'name': 'bicycle'} ,
-{'id': 336, 'name': 'red cabbage'} ,
-{'id': 228, 'name': 'golf ball'} ,
-{'id': 120, 'name': 'tomato'} ,
-{'id': 132, 'name': 'computer box'} ,
-{'id': 8, 'name': 'cup'} ,
-{'id': 183, 'name': 'basketball'} ,
-{'id': 298, 'name': 'butterfly'} ,
-{'id': 250, 'name': 'garlic'} ,
-{'id': 12, 'name': 'desk'} ,
-{'id': 141, 'name': 'microwave'} ,
-{'id': 171, 'name': 'strawberry'} ,
-{'id': 200, 'name': 'kettle'} ,
-{'id': 63, 'name': 'van'} ,
-{'id': 300, 'name': 'cheese'} ,
-{'id': 215, 'name': 'marker'} ,
-{'id': 100, 'name': 'blackboard/whiteboard'} ,
-{'id': 186, 'name': 'printer'} ,
-{'id': 333, 'name': 'bread/bun'} ,
-{'id': 243, 'name': 'penguin'} ,
-{'id': 364, 'name': 'iron'} ,
-{'id': 180, 'name': 'ladder'} ,
-{'id': 34, 'name': 'flag'} ,
-{'id': 78, 'name': 'cell phone'} ,
-{'id': 97, 'name': 'fan'} ,
-{'id': 224, 'name': 'scale'} ,
-{'id': 151, 'name': 'duck'} ,
-{'id': 319, 'name': 'flute'} ,
-{'id': 156, 'name': 'stop sign'} ,
-{'id': 290, 'name': 'rickshaw'} ,
-{'id': 128, 'name': 'sailboat'} ,
-{'id': 165, 'name': 'tennis racket'} ,
-{'id': 241, 'name': 'cigar'} ,
-{'id': 101, 'name': 'balloon'} ,
-{'id': 308, 'name': 'hair drier'} ,
-{'id': 167, 'name': 'skating and skiing shoes'} ,
-{'id': 237, 'name': 'helicopter'} ,
-{'id': 65, 'name': 'sink'} ,
-{'id': 129, 'name': 'tangerine'} ,
-{'id': 330, 'name': 'crab'} ,
-{'id': 320, 'name': 'measuring cup'} ,
-{'id': 260, 'name': 'fishing rod'} ,
-{'id': 346, 'name': 'saw'} ,
-{'id': 216, 'name': 'ship'} ,
-{'id': 46, 'name': 'coffee table'} ,
-{'id': 194, 'name': 'facial mask'} ,
-{'id': 281, 'name': 'stapler'} ,
-{'id': 118, 'name': 'refrigerator'} ,
-{'id': 40, 'name': 'belt'} ,
-{'id': 349, 'name': 'starfish'} ,
-{'id': 87, 'name': 'hanger'} ,
-{'id': 116, 'name': 'baseball glove'} ,
-{'id': 261, 'name': 'cherry'} ,
-{'id': 334, 'name': 'baozi'} ,
-{'id': 267, 'name': 'screwdriver'} ,
-{'id': 158, 'name': 'converter'} ,
-{'id': 335, 'name': 'lion'} ,
-{'id': 170, 'name': 'baseball'} ,
-{'id': 111, 'name': 'skis'} ,
-{'id': 136, 'name': 'broccoli'} ,
-{'id': 342, 'name': 'eraser'} ,
-{'id': 337, 'name': 'polar bear'} ,
-{'id': 139, 'name': 'shovel'} ,
-{'id': 193, 'name': 'extension cord'} ,
-{'id': 284, 'name': 'goldfish'} ,
-{'id': 174, 'name': 'pepper'} ,
-{'id': 138, 'name': 'stroller'} ,
-{'id': 328, 'name': 'yak'} ,
-{'id': 83, 'name': 'clock'} ,
-{'id': 235, 'name': 'tricycle'} ,
-{'id': 248, 'name': 'parking meter'} ,
-{'id': 274, 'name': 'trophy'} ,
-{'id': 324, 'name': 'binoculars'} ,
-{'id': 51, 'name': 'traffic light'} ,
-{'id': 314, 'name': 'donkey'} ,
-{'id': 45, 'name': 'barrel/bucket'} ,
-{'id': 292, 'name': 'pomegranate'} ,
-{'id': 13, 'name': 'handbag'} ,
-{'id': 262, 'name': 'tablet'} ,
-{'id': 68, 'name': 'apple'} ,
-{'id': 226, 'name': 'cabbage'} ,
-{'id': 23, 'name': 'flower'} ,
-{'id': 58, 'name': 'faucet'} ,
-{'id': 206, 'name': 'tong'} ,
-{'id': 291, 'name': 'trombone'} ,
-{'id': 160, 'name': 'carrot'} ,
-{'id': 172, 'name': 'bow tie'} ,
-{'id': 122, 'name': 'tent'} ,
-{'id': 163, 'name': 'cookies'} ,
-{'id': 115, 'name': 'remote'} ,
-{'id': 175, 'name': 'coffee machine'} ,
-{'id': 238, 'name': 'green beans'} ,
-{'id': 233, 'name': 'cello'} ,
-{'id': 28, 'name': 'wine glass'} ,
-{'id': 295, 'name': 'mushroom'} ,
-{'id': 344, 'name': 'scallop'} ,
-{'id': 125, 'name': 'lantern'} ,
-{'id': 123, 'name': 'shampoo/shower gel'} ,
-{'id': 285, 'name': 'meat balls'} ,
-{'id': 266, 'name': 'key'} ,
-{'id': 296, 'name': 'calculator'} ,
-{'id': 168, 'name': 'scissors'} ,
-{'id': 103, 'name': 'cymbal'} ,
-{'id': 6, 'name': 'bottle'} ,
-{'id': 264, 'name': 'nuts'} ,
-{'id': 234, 'name': 'notepaper'} ,
-{'id': 211, 'name': 'mango'} ,
-{'id': 287, 'name': 'toothpaste'} ,
-{'id': 196, 'name': 'chopsticks'} ,
-{'id': 140, 'name': 'baseball bat'} ,
-{'id': 244, 'name': 'hurdle'} ,
-{'id': 195, 'name': 'tennis ball'} ,
-{'id': 144, 'name': 'surveillance camera'} ,
-{'id': 271, 'name': 'volleyball'} ,
-{'id': 94, 'name': 'keyboard'} ,
-{'id': 339, 'name': 'seal'} ,
-{'id': 11, 'name': 'picture/frame'} ,
-{'id': 348, 'name': 'okra'} ,
-{'id': 191, 'name': 'sausage'} ,
-{'id': 166, 'name': 'candy'} ,
-{'id': 62, 'name': 'ring'} ,
-{'id': 311, 'name': 'dolphin'} ,
-{'id': 273, 'name': 'eggplant'} ,
-{'id': 84, 'name': 'drum'} ,
-{'id': 143, 'name': 'surfboard'} ,
-{'id': 288, 'name': 'antelope'} ,
-{'id': 204, 'name': 'clutch'} ,
-{'id': 207, 'name': 'slide'} ,
-{'id': 43, 'name': 'towel/napkin'} ,
-{'id': 352, 'name': 'durian'} ,
-{'id': 276, 'name': 'board eraser'} ,
-{'id': 315, 'name': 'electric drill'} ,
-{'id': 312, 'name': 'sushi'} ,
-{'id': 198, 'name': 'pie'} ,
-{'id': 106, 'name': 'pickup truck'} ,
-{'id': 176, 'name': 'bathtub'} ,
-{'id': 26, 'name': 'vase'} ,
-{'id': 133, 'name': 'elephant'} ,
-{'id': 256, 'name': 'sandwich'} ,
-{'id': 327, 'name': 'noodles'} ,
-{'id': 10, 'name': 'glasses'} ,
-{'id': 109, 'name': 'airplane'} ,
-{'id': 95, 'name': 'tripod'} ,
-{'id': 247, 'name': 'CD'} ,
-{'id': 121, 'name': 'machinery vehicle'} ,
-{'id': 365, 'name': 'flashlight'} ,
-{'id': 53, 'name': 'microphone'} ,
-{'id': 270, 'name': 'pliers'} ,
-{'id': 362, 'name': 'chainsaw'} ,
-{'id': 259, 'name': 'bear'} ,
-{'id': 197, 'name': 'electronic stove and gas stove'} ,
-{'id': 89, 'name': 'pot/pan'} ,
-{'id': 220, 'name': 'tape'} ,
-{'id': 338, 'name': 'lighter'} ,
-{'id': 177, 'name': 'snowboard'} ,
-{'id': 214, 'name': 'violin'} ,
-{'id': 217, 'name': 'chicken'} ,
-{'id': 2, 'name': 'sneakers'} ,
-{'id': 161, 'name': 'washing machine'} ,
-{'id': 131, 'name': 'kite'} ,
-{'id': 354, 'name': 'rabbit'} ,
-{'id': 86, 'name': 'bus'} ,
-{'id': 275, 'name': 'dates'} ,
-{'id': 282, 'name': 'camel'} ,
-{'id': 88, 'name': 'nightstand'} ,
-{'id': 179, 'name': 'grapes'} ,
-{'id': 229, 'name': 'pine apple'} ,
-{'id': 56, 'name': 'necklace'} ,
-{'id': 18, 'name': 'leather shoes'} ,
-{'id': 358, 'name': 'hoverboard'} ,
-{'id': 345, 'name': 'pencil case'} ,
-{'id': 359, 'name': 'pasta'} ,
-{'id': 157, 'name': 'radiator'} ,
-{'id': 201, 'name': 'hamburger'} ,
-{'id': 268, 'name': 'globe'} ,
-{'id': 332, 'name': 'barbell'} ,
-{'id': 329, 'name': 'mop'} ,
-{'id': 252, 'name': 'horn'} ,
-{'id': 350, 'name': 'eagle'} ,
-{'id': 169, 'name': 'folder'} ,
-{'id': 137, 'name': 'toilet'} ,
-{'id': 5, 'name': 'lamp'} ,
-{'id': 27, 'name': 'bench'} ,
-{'id': 249, 'name': 'swan'} ,
-{'id': 76, 'name': 'knife'} ,
-{'id': 341, 'name': 'comb'} ,
-{'id': 64, 'name': 'watch'} ,
-{'id': 105, 'name': 'telephone'} ,
-{'id': 3, 'name': 'chair'} ,
-{'id': 33, 'name': 'boat'} ,
-{'id': 107, 'name': 'orange'} ,
-{'id': 60, 'name': 'bread'} ,
-{'id': 147, 'name': 'cat'} ,
-{'id': 135, 'name': 'gas stove'} ,
-{'id': 307, 'name': 'papaya'} ,
-{'id': 227, 'name': 'router/modem'} ,
-{'id': 357, 'name': 'asparagus'} ,
-{'id': 73, 'name': 'motorcycle'} ,
-{'id': 77, 'name': 'traffic sign'} ,
-{'id': 67, 'name': 'fish'} ,
-{'id': 326, 'name': 'radish'} ,
-{'id': 213, 'name': 'egg'} ,
-{'id': 203, 'name': 'cucumber'} ,
-{'id': 17, 'name': 'helmet'} ,
-{'id': 110, 'name': 'luggage'} ,
-{'id': 80, 'name': 'truck'} ,
-{'id': 199, 'name': 'frisbee'} ,
-{'id': 232, 'name': 'peach'} ,
-{'id': 1, 'name': 'person'} ,
-{'id': 29, 'name': 'boots'} ,
-{'id': 310, 'name': 'chips'} ,
-{'id': 142, 'name': 'skateboard'} ,
-{'id': 44, 'name': 'slippers'} ,
-{'id': 4, 'name': 'hat'} ,
-{'id': 178, 'name': 'suitcase'} ,
-{'id': 24, 'name': 'tv'} ,
-{'id': 119, 'name': 'train'} ,
-{'id': 82, 'name': 'power outlet'} ,
-{'id': 245, 'name': 'swing'} ,
-{'id': 15, 'name': 'book'} ,
-{'id': 294, 'name': 'jellyfish'} ,
-{'id': 192, 'name': 'fire extinguisher'} ,
-{'id': 212, 'name': 'deer'} ,
-{'id': 181, 'name': 'pear'} ,
-{'id': 347, 'name': 'table tennis paddle'} ,
-{'id': 113, 'name': 'trolley'} ,
-{'id': 91, 'name': 'guitar'} ,
-{'id': 202, 'name': 'golf club'} ,
-{'id': 221, 'name': 'wheelchair'} ,
-{'id': 254, 'name': 'saxophone'} ,
-{'id': 117, 'name': 'paper towel'} ,
-{'id': 303, 'name': 'race car'} ,
-{'id': 240, 'name': 'carriage'} ,
-{'id': 246, 'name': 'radio'} ,
-{'id': 318, 'name': 'parrot'} ,
-{'id': 251, 'name': 'french fries'} ,
-{'id': 98, 'name': 'dog'} ,
-{'id': 112, 'name': 'soccer'} ,
-{'id': 355, 'name': 'french horn'} ,
-{'id': 79, 'name': 'paddle'} ,
-{'id': 283, 'name': 'lettuce'} ,
-{'id': 9, 'name': 'car'} ,
-{'id': 258, 'name': 'kiwi fruit'} ,
-{'id': 325, 'name': 'llama'} ,
-{'id': 187, 'name': 'billiards'} ,
-{'id': 210, 'name': 'facial cleanser'} ,
-{'id': 81, 'name': 'cow'} ,
-{'id': 331, 'name': 'microscope'} ,
-{'id': 148, 'name': 'lemon'} ,
-{'id': 302, 'name': 'pomelo'} ,
-{'id': 85, 'name': 'fork'} ,
-{'id': 154, 'name': 'pumpkin'} ,
-{'id': 289, 'name': 'shrimp'} ,
-{'id': 71, 'name': 'teddy bear'} ,
-{'id': 184, 'name': 'potato'} ,
-{'id': 102, 'name': 'air conditioner'} ,
-{'id': 208, 'name': 'hot dog'} ,
-{'id': 222, 'name': 'plum'} ,
-{'id': 316, 'name': 'spring rolls'} ,
-{'id': 230, 'name': 'crane'} ,
-{'id': 149, 'name': 'liquid soap'} ,
-{'id': 55, 'name': 'canned'} ,
-{'id': 35, 'name': 'speaker'} ,
-{'id': 108, 'name': 'banana'} ,
-{'id': 297, 'name': 'treadmill'} ,
-{'id': 99, 'name': 'spoon'} ,
-{'id': 104, 'name': 'mouse'} ,
-{'id': 182, 'name': 'american football'} ,
-{'id': 299, 'name': 'egg tart'} ,
-{'id': 127, 'name': 'cleaning products'} ,
-{'id': 313, 'name': 'urinal'} ,
-{'id': 286, 'name': 'medal'} ,
-{'id': 239, 'name': 'brush'} ,
-{'id': 96, 'name': 'hockey'} ,
-{'id': 279, 'name': 'dumbbell'} ,
-{'id': 32, 'name': 'umbrella'} ,
-{'id': 272, 'name': 'hammer'} ,
-{'id': 16, 'name': 'plate'} ,
-{'id': 21, 'name': 'potted plant'} ,
-{'id': 242, 'name': 'earphone'} ,
-{'id': 70, 'name': 'candle'} ,
-{'id': 185, 'name': 'paint brush'} ,
-{'id': 48, 'name': 'toy'} ,
-{'id': 130, 'name': 'pizza'} ,
-{'id': 255, 'name': 'trumpet'} ,
-{'id': 361, 'name': 'hotair balloon'} ,
-{'id': 188, 'name': 'fire hydrant'} ,
-{'id': 50, 'name': 'bed'} ,
-{'id': 253, 'name': 'avocado'} ,
-{'id': 293, 'name': 'coconut'} ,
-{'id': 257, 'name': 'cue'} ,
-{'id': 280, 'name': 'hamimelon'} ,
-{'id': 66, 'name': 'horse'} ,
-{'id': 173, 'name': 'pigeon'} ,
-{'id': 190, 'name': 'projector'} ,
-{'id': 69, 'name': 'camera'} ,
-{'id': 30, 'name': 'bowl'} ,
-{'id': 269, 'name': 'broom'} ,
-{'id': 343, 'name': 'pitaya'} ,
-{'id': 305, 'name': 'tuba'} ,
-{'id': 309, 'name': 'green onion'} ,
-{'id': 363, 'name': 'lobster'} ,
-{'id': 225, 'name': 'watermelon'} ,
-{'id': 47, 'name': 'suv'} ,
-{'id': 31, 'name': 'dining table'} ,
-{'id': 54, 'name': 'sandals'} ,
-{'id': 351, 'name': 'monkey'} ,
-{'id': 218, 'name': 'onion'} ,
-{'id': 36, 'name': 'trash bin/can'} ,
-{'id': 20, 'name': 'glove'} ,
-{'id': 277, 'name': 'rice'} ,
-{'id': 152, 'name': 'sports car'} ,
-{'id': 360, 'name': 'target'} ,
-{'id': 205, 'name': 'blender'} ,
-{'id': 19, 'name': 'pillow'} ,
-{'id': 72, 'name': 'cake'} ,
-{'id': 93, 'name': 'tea pot'} ,
-{'id': 353, 'name': 'game board'} ,
-{'id': 38, 'name': 'backpack'} ,
-{'id': 356, 'name': 'ambulance'} ,
-{'id': 146, 'name': 'life saver'} ,
-{'id': 189, 'name': 'goose'} ,
-{'id': 278, 'name': 'tape measure/ruler'} ,
-{'id': 92, 'name': 'traffic cone'} ,
-{'id': 134, 'name': 'toiletries'} ,
-{'id': 114, 'name': 'oven'} ,
-{'id': 317, 'name': 'tortoise/turtle'} ,
-{'id': 265, 'name': 'corn'} ,
-{'id': 126, 'name': 'donut'} ,
-{'id': 57, 'name': 'mirror'} ,
-{'id': 7, 'name': 'cabinet/shelf'} ,
-{'id': 263, 'name': 'green vegetables'} ,
-{'id': 159, 'name': 'tissue '} ,
-{'id': 321, 'name': 'shark'} ,
-{'id': 301, 'name': 'pig'} ,
-{'id': 41, 'name': 'carpet'} ,
-{'id': 304, 'name': 'rice cooker'} ,
-{'id': 323, 'name': 'poker card'} ,
-]
-
-def _get_builtin_metadata(version):
- if version == 'v1':
- id_to_name = {x['id']: x['name'] for x in categories_v1}
- else:
- assert 0, version
- thing_dataset_id_to_contiguous_id = {i + 1: i for i in range(365)}
- thing_classes = [id_to_name[k] for k in sorted(id_to_name)]
- return {
- "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id,
- "thing_classes": thing_classes}
-
-_PREDEFINED_SPLITS_OBJECTS365 = {
- "objects365_train": ("objects365/train", "objects365/annotations/objects365_train.json"),
- "objects365_val": ("objects365/val", "objects365/annotations/objects365_val.json"),
-}
-
-for key, (image_root, json_file) in _PREDEFINED_SPLITS_OBJECTS365.items():
- register_coco_instances(
- key,
- _get_builtin_metadata('v1'),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
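
For orientation, the registration loop above only records the two Objects365 splits in detectron2's catalogs; nothing is read from disk until a split is requested. A hypothetical usage sketch, assuming detectron2 is installed and the datasets/objects365 images and annotation JSONs exist locally:

from detectron2.data import DatasetCatalog, MetadataCatalog

# Metadata is available immediately after registration.
meta = MetadataCatalog.get("objects365_val")
print(len(meta.thing_classes))  # expected: 365 category names

# Loading the dicts parses the COCO-style annotation file on first access.
dataset_dicts = DatasetCatalog.get("objects365_val")
print(dataset_dicts[0]["file_name"], len(dataset_dicts[0]["annotations"]))
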
diff --git a/spaces/ThisThings/tdymndftbdfbvsgv/README.md b/spaces/ThisThings/tdymndftbdfbvsgv/README.md
deleted file mode 100644
index 6e557855fd3fb6fb7f75af1b5979c2568e5f7c17..0000000000000000000000000000000000000000
--- a/spaces/ThisThings/tdymndftbdfbvsgv/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Tdymndftbdfbvsgv
-emoji: 🐢
-colorFrom: indigo
-colorTo: red
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/TungB/mini-photoshop/app.py b/spaces/TungB/mini-photoshop/app.py
deleted file mode 100644
index ade9fcc216c346c9e2372a697cc6d9e1f44dadba..0000000000000000000000000000000000000000
--- a/spaces/TungB/mini-photoshop/app.py
+++ /dev/null
@@ -1,171 +0,0 @@
-import os
-import sys
-
-import cv2
-import numpy as np
-import streamlit as st
-from PIL import Image
-from streamlit_drawable_canvas import st_canvas
-
-from lama import LaMa as InpaintModel
-from u2net import U2Net as SalientModel
-from utils import (
- get_size_limit,
- load_img,
- numpy_to_bytes,
- numpy_to_png_bytes
-)
-
-
-inpaint_model = InpaintModel(device="cpu")
-salient_model = SalientModel(model_name="u2net_lite", device="cpu")
-
-
-def button_download(img, img_name, img_ext, transparent=False):
- if transparent:
- img = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
- file_name = img_name + "_pts.png"
- data = numpy_to_png_bytes(img)
- else:
- img = cv2.cvtColor(img.astype(np.float32), cv2.COLOR_BGR2RGB)
- file_name = img_name + "_pts" + img_ext
- data = numpy_to_bytes(img, img_ext[1:].upper())
- st.download_button(label="Download Image", data=data, file_name=file_name)
-
-
-def salientcy():
- image_file = st.file_uploader("Upload Images", type=["png", "jpg", "jpeg"])
- _, col1, col2, col3 = st.columns([0.1, 1, 1, 1])
- with col1:
- remove = st.button("Remove Background")
- with col2:
- blur = st.button("Blur Background")
- with col3:
- gray = st.button("Grayscale Background")
-
- if image_file is not None:
- img_name, img_ext = os.path.splitext(image_file.name)
- np_img, _ = load_img(image_file.read())
-
- mask = salient_model(np_img) * 255
- mask_inv = cv2.bitwise_not(mask)
-
- st_image = st.image(np_img, use_column_width=True)
-
- if remove:
- bg_removed = np_img.copy()
- bg_removed[mask == 0] = 255
- st_image.image(bg_removed)
- button_download(bg_removed, img_name, img_ext, transparent=True)
-
- if gray:
-
- def grayscale(x):
- return np.dot(x[:, :, :3], [0.299, 0.587, 0.114])
-
- bg_gray = grayscale(np_img)
- bg_gray = cv2.merge([bg_gray] * 3)
-
- fg = cv2.bitwise_and(np_img, np_img, mask=mask).astype(np.int32)
- bg = cv2.bitwise_and(bg_gray, bg_gray, mask=mask_inv).astype(
- np.int32
- )
- gray_img = cv2.add(fg, bg)
- st_image.image(gray_img)
- button_download(gray_img, img_name, img_ext)
-
- if blur:
- blur_bg = cv2.blur(np_img, (25, 25))
-
- fg = cv2.bitwise_and(np_img, np_img, mask=mask).astype(np.int32)
- bg = cv2.bitwise_and(blur_bg, blur_bg, mask=mask_inv).astype(
- np.int32
- )
- blur_img = cv2.add(fg, bg)
- st_image.image(blur_img)
- button_download(blur_img, img_name, img_ext)
-
-
-def inpainting():
- is_draw = True
- canvas_width = 700
- image_file = st.file_uploader("Upload Images", type=["png", "jpg", "jpeg"])
-
- _, col1, col2, _ = st.columns(4)
- with col1:
- auto = st.button("Auto Region")
- with col2:
- manual = st.button("Draw Region")
-
- if auto:
- is_draw = False
- if manual:
- is_draw = True
-
- if image_file is not None:
- img_name, img_ext = os.path.splitext(image_file.name)
- np_img, _ = load_img(image_file.read())
- resize_limit = get_size_limit(np_img.shape)
-
- if is_draw:
- h, w = np_img.shape[:2]
- ratio = canvas_width / w
-
- stroke_width = st.slider("Brush width: ", 10, 50, 30)
-
- canvas_result = st_canvas(
- stroke_width=stroke_width,
- background_image=Image.open(image_file),
- update_streamlit=False,
- width=np.floor(w * ratio),
- height=np.floor(h * ratio),
- drawing_mode="freedraw",
- key="canvas",
- )
-
- if canvas_result.image_data is not None:
- mask = 255 - canvas_result.image_data[:, :, 3]
- canvas_result.image_data = None
- mask[mask < 255] = 0
- if np.min(mask) < 255:
- mask = 255 - mask
-
- mask = cv2.resize(mask, dsize=(w, h))
- np_img = np_img[:, :, ::-1]
- inpainted = inpaint_model(np_img, mask, resize_limit)
- st.image(inpainted, use_column_width=True)
- button_download(inpainted, img_name, img_ext)
-
- else:
- mask = salient_model(np_img) * 255
- np_img = np_img[:, :, ::-1]
- inpainted = inpaint_model(np_img, mask, resize_limit)
- st.image(inpainted, use_column_width=True)
- button_download(inpainted, img_name, img_ext)
-
-
-def main():
- st.set_page_config(
- page_title="Mini Photoshop Tool", page_icon=":film_frames:"
- )
- st.title("Mini Photoshop Tool")
- st.sidebar.subheader("Configuration")
- PAGES = {"Background Editor": salientcy, "Image Cleaner": inpainting}
- page = st.sidebar.selectbox("Page:", options=list(PAGES.keys()))
-
- PAGES[page]()
-
- with st.sidebar:
- st.markdown("---")
- st.markdown(
- """
-
- """,
- unsafe_allow_html=True,
- )
-
-
-if __name__ == "__main__":
- main()
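
The deleted app builds each background effect the same way: a salient-object mask selects the foreground, the background is transformed, and the two are recombined. A standalone sketch of that compositing step (a hypothetical helper, assuming a uint8 image and a 0/255 uint8 mask of the same height and width):

import cv2
import numpy as np

def blur_background(img: np.ndarray, mask: np.ndarray, ksize: int = 25) -> np.ndarray:
    """Keep masked (foreground) pixels sharp and blur everything else."""
    blurred = cv2.blur(img, (ksize, ksize))
    fg = cv2.bitwise_and(img, img, mask=mask)                           # foreground only
    bg = cv2.bitwise_and(blurred, blurred, mask=cv2.bitwise_not(mask))  # blurred background only
    return cv2.add(fg, bg)
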
diff --git a/spaces/WorldlineChanger/sayashi-vits-uma-genshin-honkai/transforms.py b/spaces/WorldlineChanger/sayashi-vits-uma-genshin-honkai/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/WorldlineChanger/sayashi-vits-uma-genshin-honkai/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
diff --git a/spaces/XzJosh/Aatrox-Bert-VITS2/modules.py b/spaces/XzJosh/Aatrox-Bert-VITS2/modules.py
deleted file mode 100644
index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Aatrox-Bert-VITS2/modules.py
+++ /dev/null
@@ -1,452 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-from attentions import Encoder
-
-LRELU_SLOPE = 0.1
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
-class TransformerCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- n_layers,
- n_heads,
- p_dropout=0,
- filter_channels=0,
- mean_only=False,
- wn_sharing_parameter=None,
- gin_channels = 0
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
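
All of the flow blocks above (Log, Flip, ElementwiseAffine, ResidualCouplingLayer, ConvFlow, TransformerCouplingLayer) follow the same contract: the forward call returns the transformed tensor plus a log-determinant, and calling with reverse=True undoes the transform. A hypothetical sketch of that contract on the simplest block, assuming this file is importable as modules:

import torch
from modules import ElementwiseAffine

layer = ElementwiseAffine(channels=8)
x = torch.randn(2, 8, 50)                   # [batch, channels, time]
x_mask = torch.ones(2, 1, 50)               # no padding in this toy example

y, logdet = layer(x, x_mask)                # forward: y = (m + exp(logs) * x) * mask
x_rec = layer(y, x_mask, reverse=True)      # reverse: x = (y - m) * exp(-logs) * mask
print(torch.allclose(x, x_rec, atol=1e-6))  # expected: True
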
diff --git a/spaces/YONG627/456123/yolov5-code-main/ui_main_window.py b/spaces/YONG627/456123/yolov5-code-main/ui_main_window.py
deleted file mode 100644
index 2770f513ac29bce0dae3f538a6d55cce74f99338..0000000000000000000000000000000000000000
--- a/spaces/YONG627/456123/yolov5-code-main/ui_main_window.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# -*- coding: utf-8 -*-
-
-################################################################################
-## Form generated from reading UI file 'main_window.ui'
-##
-## Created by: Qt User Interface Compiler version 6.4.2
-##
-## WARNING! All changes made in this file will be lost when recompiling UI file!
-################################################################################
-
-from PySide6.QtCore import (QCoreApplication, QDate, QDateTime, QLocale,
- QMetaObject, QObject, QPoint, QRect,
- QSize, QTime, QUrl, Qt)
-from PySide6.QtGui import (QBrush, QColor, QConicalGradient, QCursor,
- QFont, QFontDatabase, QGradient, QIcon,
- QImage, QKeySequence, QLinearGradient, QPainter,
- QPalette, QPixmap, QRadialGradient, QTransform)
-from PySide6.QtWidgets import (QApplication, QFrame, QLabel, QMainWindow,
- QPushButton, QSizePolicy, QStatusBar, QWidget)
-
-class Ui_MainWindow(object):
- def setupUi(self, MainWindow):
- if not MainWindow.objectName():
- MainWindow.setObjectName(u"MainWindow")
- MainWindow.resize(776, 339)
- self.centralwidget = QWidget(MainWindow)
- self.centralwidget.setObjectName(u"centralwidget")
- self.input = QLabel(self.centralwidget)
- self.input.setObjectName(u"input")
- self.input.setGeometry(QRect(30, 20, 331, 201))
- self.input.setScaledContents(True)
- self.input.setAlignment(Qt.AlignCenter)
- self.output = QLabel(self.centralwidget)
- self.output.setObjectName(u"output")
- self.output.setGeometry(QRect(420, 20, 331, 201))
- self.output.setScaledContents(True)
- self.output.setAlignment(Qt.AlignCenter)
- self.line = QFrame(self.centralwidget)
- self.line.setObjectName(u"line")
- self.line.setGeometry(QRect(360, 20, 61, 201))
- self.line.setFrameShape(QFrame.VLine)
- self.line.setFrameShadow(QFrame.Sunken)
- self.det_image = QPushButton(self.centralwidget)
- self.det_image.setObjectName(u"det_image")
- self.det_image.setGeometry(QRect(30, 260, 331, 41))
- self.det_video = QPushButton(self.centralwidget)
- self.det_video.setObjectName(u"det_video")
- self.det_video.setGeometry(QRect(420, 260, 331, 41))
- MainWindow.setCentralWidget(self.centralwidget)
- self.statusbar = QStatusBar(MainWindow)
- self.statusbar.setObjectName(u"statusbar")
- MainWindow.setStatusBar(self.statusbar)
-
- self.retranslateUi(MainWindow)
-
- QMetaObject.connectSlotsByName(MainWindow)
- # setupUi
-
- def retranslateUi(self, MainWindow):
- MainWindow.setWindowTitle(QCoreApplication.translate("MainWindow", u"MainWindow", None))
- self.input.setText(QCoreApplication.translate("MainWindow", u"\u663e\u793a\u539f\u59cb\u56fe\u7247", None))
- self.output.setText(QCoreApplication.translate("MainWindow", u"\u663e\u793a\u68c0\u6d4b\u7ed3\u679c", None))
- self.det_image.setText(QCoreApplication.translate("MainWindow", u"\u56fe\u7247\u68c0\u6d4b", None))
- self.det_video.setText(QCoreApplication.translate("MainWindow", u"\u89c6\u9891\u68c0\u6d4b", None))
- # retranslateUi
-
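
The generated class above is not a widget itself; the usual pattern is to instantiate a QMainWindow, call setupUi on it, and connect the two buttons to detection slots. A hypothetical minimal driver, assuming this file is importable as ui_main_window:

import sys
from PySide6.QtWidgets import QApplication, QMainWindow
from ui_main_window import Ui_MainWindow

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)
        # Real handlers would run image/video detection here.
        self.ui.det_image.clicked.connect(lambda: print("detect image"))
        self.ui.det_video.clicked.connect(lambda: print("detect video"))

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec())
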
diff --git a/spaces/Yan233th/so-vits-svc-models/modules/modules.py b/spaces/Yan233th/so-vits-svc-models/modules/modules.py
deleted file mode 100644
index 54290fd207b25e93831bd21005990ea137e6b50e..0000000000000000000000000000000000000000
--- a/spaces/Yan233th/so-vits-svc-models/modules/modules.py
+++ /dev/null
@@ -1,342 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import modules.commons as commons
-from modules.commons import init_weights, get_padding
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/samplers/distributed_sampler.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/samplers/distributed_sampler.py
deleted file mode 100644
index a098e6ac07c1b193fddcb69e6e54aced82e6081c..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/samplers/distributed_sampler.py
+++ /dev/null
@@ -1,278 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import itertools
-import logging
-import math
-from collections import defaultdict
-from typing import Optional
-import torch
-from torch.utils.data.sampler import Sampler
-
-from detectron2.utils import comm
-
-logger = logging.getLogger(__name__)
-
-
-class TrainingSampler(Sampler):
- """
- In training, we only care about the "infinite stream" of training data.
- So this sampler produces an infinite stream of indices and
- all workers cooperate to correctly shuffle the indices and sample different indices.
-
- The sampler in each worker effectively produces `indices[worker_id::num_workers]`
- where `indices` is an infinite stream of indices consisting of
- `shuffle(range(size)) + shuffle(range(size)) + ...` (if shuffle is True)
- or `range(size) + range(size) + ...` (if shuffle is False)
-
- Note that this sampler does not shard based on pytorch DataLoader worker id.
- A sampler passed to pytorch DataLoader is used only with map-style dataset
- and will not be executed inside workers.
- But if this sampler is used in a way that it gets execute inside a dataloader
- worker, then extra work needs to be done to shard its outputs based on worker id.
- This is required so that workers don't produce identical data.
- :class:`ToIterableDataset` implements this logic.
- This note is true for all samplers in detectron2.
- """
-
- def __init__(self, size: int, shuffle: bool = True, seed: Optional[int] = None):
- """
- Args:
- size (int): the total number of data of the underlying dataset to sample from
- shuffle (bool): whether to shuffle the indices or not
- seed (int): the initial seed of the shuffle. Must be the same
- across all workers. If None, will use a random seed shared
- among workers (require synchronization among all workers).
- """
- if not isinstance(size, int):
- raise TypeError(f"TrainingSampler(size=) expects an int. Got type {type(size)}.")
- if size <= 0:
- raise ValueError(f"TrainingSampler(size=) expects a positive int. Got {size}.")
- self._size = size
- self._shuffle = shuffle
- if seed is None:
- seed = comm.shared_random_seed()
- self._seed = int(seed)
-
- self._rank = comm.get_rank()
- self._world_size = comm.get_world_size()
-
- def __iter__(self):
- start = self._rank
- yield from itertools.islice(self._infinite_indices(), start, None, self._world_size)
-
- def _infinite_indices(self):
- g = torch.Generator()
- g.manual_seed(self._seed)
- while True:
- if self._shuffle:
- yield from torch.randperm(self._size, generator=g).tolist()
- else:
- yield from torch.arange(self._size).tolist()
-
-
-class RandomSubsetTrainingSampler(TrainingSampler):
- """
- Similar to TrainingSampler, but only sample a random subset of indices.
- This is useful when you want to estimate the accuracy vs data-number curves by
- training the model with different subset_ratio.
- """
-
- def __init__(
- self,
- size: int,
- subset_ratio: float,
- shuffle: bool = True,
- seed_shuffle: Optional[int] = None,
- seed_subset: Optional[int] = None,
- ):
- """
- Args:
- size (int): the total number of data of the underlying dataset to sample from
- subset_ratio (float): the ratio of subset data to sample from the underlying dataset
- shuffle (bool): whether to shuffle the indices or not
- seed_shuffle (int): the initial seed of the shuffle. Must be the same
- across all workers. If None, will use a random seed shared
- among workers (require synchronization among all workers).
- seed_subset (int): the seed to randomize the subset to be sampled.
- Must be the same across all workers. If None, will use a random seed shared
- among workers (require synchronization among all workers).
- """
- super().__init__(size=size, shuffle=shuffle, seed=seed_shuffle)
-
- assert 0.0 < subset_ratio <= 1.0
- self._size_subset = int(size * subset_ratio)
- assert self._size_subset > 0
- if seed_subset is None:
- seed_subset = comm.shared_random_seed()
- self._seed_subset = int(seed_subset)
-
- # randomly generate the subset indexes to be sampled from
- g = torch.Generator()
- g.manual_seed(self._seed_subset)
- indexes_randperm = torch.randperm(self._size, generator=g)
- self._indexes_subset = indexes_randperm[: self._size_subset]
-
- logger.info("Using RandomSubsetTrainingSampler......")
- logger.info(f"Randomly sample {self._size_subset} data from the original {self._size} data")
-
- def _infinite_indices(self):
- g = torch.Generator()
- g.manual_seed(self._seed) # self._seed equals seed_shuffle from __init__()
- while True:
- if self._shuffle:
- # generate a random permutation to shuffle self._indexes_subset
- randperm = torch.randperm(self._size_subset, generator=g)
- yield from self._indexes_subset[randperm].tolist()
- else:
- yield from self._indexes_subset.tolist()
-
-
-class RepeatFactorTrainingSampler(Sampler):
- """
- Similar to TrainingSampler, but a sample may appear more times than others based
- on its "repeat factor". This is suitable for training on class imbalanced datasets like LVIS.
- """
-
- def __init__(self, repeat_factors, *, shuffle=True, seed=None):
- """
- Args:
- repeat_factors (Tensor): a float vector, the repeat factor for each index. When it's
- full of ones, it is equivalent to ``TrainingSampler(len(repeat_factors), ...)``.
- shuffle (bool): whether to shuffle the indices or not
- seed (int): the initial seed of the shuffle. Must be the same
- across all workers. If None, will use a random seed shared
- among workers (requires synchronization among all workers).
- """
- self._shuffle = shuffle
- if seed is None:
- seed = comm.shared_random_seed()
- self._seed = int(seed)
-
- self._rank = comm.get_rank()
- self._world_size = comm.get_world_size()
-
- # Split into whole number (_int_part) and fractional (_frac_part) parts.
- self._int_part = torch.trunc(repeat_factors)
- self._frac_part = repeat_factors - self._int_part
-
- @staticmethod
- def repeat_factors_from_category_frequency(dataset_dicts, repeat_thresh):
- """
- Compute (fractional) per-image repeat factors based on category frequency.
- The repeat factor for an image is a function of the frequency of the rarest
- category labeled in that image. The "frequency of category c" in [0, 1] is defined
- as the fraction of images in the training set (without repeats) in which category c
- appears.
- See :paper:`lvis` (>= v2) Appendix B.2.
-
- Args:
- dataset_dicts (list[dict]): annotations in Detectron2 dataset format.
- repeat_thresh (float): frequency threshold below which data is repeated.
-     If the frequency is a quarter of `repeat_thresh`, the image will be
-     repeated twice (the repeat factor grows with the square root of the ratio).
-
- Returns:
- torch.Tensor:
- the i-th element is the repeat factor for the dataset image at index i.
- """
- # 1. For each category c, compute the fraction of images that contain it: f(c)
- category_freq = defaultdict(int)
- for dataset_dict in dataset_dicts: # For each image (without repeats)
- cat_ids = {ann["category_id"] for ann in dataset_dict["annotations"]}
- for cat_id in cat_ids:
- category_freq[cat_id] += 1
- num_images = len(dataset_dicts)
- for k, v in category_freq.items():
- category_freq[k] = v / num_images
-
- # 2. For each category c, compute the category-level repeat factor:
- # r(c) = max(1, sqrt(t / f(c)))
- category_rep = {
- cat_id: max(1.0, math.sqrt(repeat_thresh / cat_freq))
- for cat_id, cat_freq in category_freq.items()
- }
-
- # 3. For each image I, compute the image-level repeat factor:
- # r(I) = max_{c in I} r(c)
- rep_factors = []
- for dataset_dict in dataset_dicts:
- cat_ids = {ann["category_id"] for ann in dataset_dict["annotations"]}
- rep_factor = max({category_rep[cat_id] for cat_id in cat_ids}, default=1.0)
- rep_factors.append(rep_factor)
-
- return torch.tensor(rep_factors, dtype=torch.float32)
-
- def _get_epoch_indices(self, generator):
- """
- Create a list of dataset indices (with repeats) to use for one epoch.
-
- Args:
- generator (torch.Generator): pseudo random number generator used for
- stochastic rounding.
-
- Returns:
- torch.Tensor: list of dataset indices to use in one epoch. Each index
- is repeated based on its calculated repeat factor.
- """
- # Since repeat factors are fractional, we use stochastic rounding so
- # that the target repeat factor is achieved in expectation over the
- # course of training
- rands = torch.rand(len(self._frac_part), generator=generator)
- rep_factors = self._int_part + (rands < self._frac_part).float()
- # Construct a list of indices in which we repeat images as specified
- indices = []
- for dataset_index, rep_factor in enumerate(rep_factors):
- indices.extend([dataset_index] * int(rep_factor.item()))
- return torch.tensor(indices, dtype=torch.int64)
-
- def __iter__(self):
- start = self._rank
- yield from itertools.islice(self._infinite_indices(), start, None, self._world_size)
-
- def _infinite_indices(self):
- g = torch.Generator()
- g.manual_seed(self._seed)
- while True:
- # Sample indices with repeats determined by stochastic rounding; each
- # "epoch" may have a slightly different size due to the rounding.
- indices = self._get_epoch_indices(g)
- if self._shuffle:
- randperm = torch.randperm(len(indices), generator=g)
- yield from indices[randperm].tolist()
- else:
- yield from indices.tolist()
-
-
-class InferenceSampler(Sampler):
- """
- Produce indices for inference across all workers.
- Inference needs to run on the __exact__ set of samples,
- therefore when the total number of samples is not divisible by the number of workers,
- this sampler produces different number of samples on different workers.
- """
-
- def __init__(self, size: int):
- """
- Args:
- size (int): the total number of data of the underlying dataset to sample from
- """
- self._size = size
- assert size > 0
- self._rank = comm.get_rank()
- self._world_size = comm.get_world_size()
- self._local_indices = self._get_local_indices(size, self._world_size, self._rank)
-
- @staticmethod
- def _get_local_indices(total_size, world_size, rank):
- shard_size = total_size // world_size
- left = total_size % world_size
- shard_sizes = [shard_size + int(r < left) for r in range(world_size)]
-
- begin = sum(shard_sizes[:rank])
- end = min(sum(shard_sizes[: rank + 1]), total_size)
- return range(begin, end)
-
- def __iter__(self):
- yield from self._local_indices
-
- def __len__(self):
- return len(self._local_indices)
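As a quick sanity check of the repeat-factor recipe documented in `RepeatFactorTrainingSampler` above (per-category factor `r(c) = max(1, sqrt(repeat_thresh / f(c)))`, per-image factor taken as the max over its categories, fractional parts handled by stochastic rounding), here is a small self-contained sketch; the toy dataset and numbers are made up for illustration:

```python
import math
from collections import defaultdict

import torch

# Toy dataset in (simplified) Detectron2 dict format: 4 images, categories 0 and 1.
# Category 1 is rare: it appears in only 1 of the 4 images, so f(1) = 0.25.
dataset_dicts = [
    {"annotations": [{"category_id": 0}]},
    {"annotations": [{"category_id": 0}]},
    {"annotations": [{"category_id": 0}]},
    {"annotations": [{"category_id": 0}, {"category_id": 1}]},
]
repeat_thresh = 0.5

# 1. Per-category frequency f(c): fraction of images containing category c.
category_freq = defaultdict(int)
for d in dataset_dicts:
    for cat_id in {a["category_id"] for a in d["annotations"]}:
        category_freq[cat_id] += 1
category_freq = {c: n / len(dataset_dicts) for c, n in category_freq.items()}

# 2. Category-level repeat factor r(c) = max(1, sqrt(t / f(c))).
category_rep = {c: max(1.0, math.sqrt(repeat_thresh / f)) for c, f in category_freq.items()}
print(category_rep)  # {0: 1.0, 1: 1.414...}: only the rare category gets oversampled

# 3. Image-level repeat factor r(I) = max over the categories present in the image.
rep_factors = torch.tensor(
    [max(category_rep[a["category_id"]] for a in d["annotations"]) for d in dataset_dicts]
)

# Stochastic rounding: the fractional part becomes the probability of one extra repeat,
# so the expected number of repeats per "epoch" matches the fractional factor.
g = torch.Generator().manual_seed(0)
int_part = torch.trunc(rep_factors)
frac_part = rep_factors - int_part
rounded = int_part + (torch.rand(len(frac_part), generator=g) < frac_part).float()
print(rounded.tolist())  # e.g. [1.0, 1.0, 1.0, 2.0] or [1.0, 1.0, 1.0, 1.0], depending on the draw
```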
diff --git a/spaces/Yudha515/Rvc-Models/audiocraft/modules/conv.py b/spaces/Yudha515/Rvc-Models/audiocraft/modules/conv.py
deleted file mode 100644
index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000
--- a/spaces/Yudha515/Rvc-Models/audiocraft/modules/conv.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-import typing as tp
-import warnings
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.nn.utils import spectral_norm, weight_norm
-
-
-CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm',
- 'time_group_norm'])
-
-
-def apply_parametrization_norm(module: nn.Module, norm: str = 'none'):
- assert norm in CONV_NORMALIZATIONS
- if norm == 'weight_norm':
- return weight_norm(module)
- elif norm == 'spectral_norm':
- return spectral_norm(module)
- else:
-        # We already checked that `norm` is in CONV_NORMALIZATIONS, so any other
-        # choice doesn't need reparametrization.
- return module
-
-
-def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs):
- """Return the proper normalization module. If causal is True, this will ensure the returned
- module is causal, or return an error if the normalization doesn't support causal evaluation.
- """
- assert norm in CONV_NORMALIZATIONS
- if norm == 'time_group_norm':
- if causal:
- raise ValueError("GroupNorm doesn't support causal evaluation.")
- assert isinstance(module, nn.modules.conv._ConvNd)
- return nn.GroupNorm(1, module.out_channels, **norm_kwargs)
- else:
- return nn.Identity()
-
-
-def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int,
- padding_total: int = 0) -> int:
- """See `pad_for_conv1d`.
- """
- length = x.shape[-1]
- n_frames = (length - kernel_size + padding_total) / stride + 1
- ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
- return ideal_length - length
-
-
-def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0):
- """Pad for a convolution to make sure that the last window is full.
- Extra padding is added at the end. This is required to ensure that we can rebuild
- an output of the same length, as otherwise, even with padding, some time steps
- might get removed.
- For instance, with total padding = 4, kernel size = 4, stride = 2:
- 0 0 1 2 3 4 5 0 0 # (0s are padding)
- 1 2 3 # (output frames of a convolution, last 0 is never used)
- 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding)
-            1 2 3 4         # once you remove the padding, we are missing one time step!
- """
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- return F.pad(x, (0, extra_padding))
-
-
-def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.):
- """Tiny wrapper around F.pad, just to allow for reflect padding on small input.
- If the input is too small, we insert extra 0 padding to the right before the reflection happens.
- """
- length = x.shape[-1]
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- if mode == 'reflect':
- max_pad = max(padding_left, padding_right)
- extra_pad = 0
- if length <= max_pad:
- extra_pad = max_pad - length + 1
- x = F.pad(x, (0, extra_pad))
- padded = F.pad(x, paddings, mode, value)
- end = padded.shape[-1] - extra_pad
- return padded[..., :end]
- else:
- return F.pad(x, paddings, mode, value)
-
-
-def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]):
- """Remove padding from x, handling properly zero padding. Only for 1d!
- """
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- assert (padding_left + padding_right) <= x.shape[-1]
- end = x.shape[-1] - padding_right
- return x[..., padding_left: end]
-
-
-class NormConv1d(nn.Module):
- """Wrapper around Conv1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConv2d(nn.Module):
- """Wrapper around Conv2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose1d(nn.Module):
- """Wrapper around ConvTranspose1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose2d(nn.Module):
- """Wrapper around ConvTranspose2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs)
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class StreamableConv1d(nn.Module):
- """Conv1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, dilation: int = 1,
- groups: int = 1, bias: bool = True, causal: bool = False,
- norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {},
- pad_mode: str = 'reflect'):
- super().__init__()
- # warn user on unusual setup between dilation and stride
- if stride > 1 and dilation > 1:
- warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1'
- f' (kernel_size={kernel_size} stride={stride}, dilation={dilation}).')
- self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride,
- dilation=dilation, groups=groups, bias=bias, causal=causal,
- norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.pad_mode = pad_mode
-
- def forward(self, x):
- B, C, T = x.shape
- kernel_size = self.conv.conv.kernel_size[0]
- stride = self.conv.conv.stride[0]
- dilation = self.conv.conv.dilation[0]
- kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations
- padding_total = kernel_size - stride
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- if self.causal:
- # Left padding for causal
- x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode)
- else:
- # Asymmetric padding required for odd strides
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode)
- return self.conv(x)
-
-
-class StreamableConvTranspose1d(nn.Module):
- """ConvTranspose1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, causal: bool = False,
- norm: str = 'none', trim_right_ratio: float = 1.,
- norm_kwargs: tp.Dict[str, tp.Any] = {}):
- super().__init__()
- self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride,
- causal=causal, norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.trim_right_ratio = trim_right_ratio
- assert self.causal or self.trim_right_ratio == 1., \
- "`trim_right_ratio` != 1.0 only makes sense for causal convolutions"
- assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1.
-
- def forward(self, x):
- kernel_size = self.convtr.convtr.kernel_size[0]
- stride = self.convtr.convtr.stride[0]
- padding_total = kernel_size - stride
-
- y = self.convtr(x)
-
- # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be
- # removed at the very end, when keeping only the right length for the output,
- # as removing it here would require also passing the length at the matching layer
- # in the encoder.
- if self.causal:
- # Trim the padding on the right according to the specified ratio
- # if trim_right_ratio = 1.0, trim everything from right
- padding_right = math.ceil(padding_total * self.trim_right_ratio)
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- else:
- # Asymmetric padding required for odd strides
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- return y
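To make the get_extra_padding_for_conv1d / pad_for_conv1d bookkeeping above concrete, here is a small stand-alone sketch of the same arithmetic on plain integers (the lengths and kernel settings are just illustrative):

```python
import math

def extra_padding_for_conv1d(length: int, kernel_size: int, stride: int, padding_total: int = 0) -> int:
    # Same formula as get_extra_padding_for_conv1d above, applied to a plain length:
    # pad enough on the right so that the last convolution window is full.
    n_frames = (length - kernel_size + padding_total) / stride + 1
    ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
    return ideal_length - length

# With kernel_size=4 and stride=2 the "fixed" padding is kernel_size - stride = 2.
# A 5-sample signal then needs 1 extra sample of right padding so the last frame is complete,
print(extra_padding_for_conv1d(length=5, kernel_size=4, stride=2, padding_total=2))  # 1
# while a 6-sample signal already lines up and needs no extra padding.
print(extra_padding_for_conv1d(length=6, kernel_size=4, stride=2, padding_total=2))  # 0
```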
diff --git a/spaces/ZeroTwo3/WavJourney/VoiceParser/__init__.py b/spaces/ZeroTwo3/WavJourney/VoiceParser/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/aadnk/whisper-webui/src/whisper/fasterWhisperContainer.py b/spaces/aadnk/whisper-webui/src/whisper/fasterWhisperContainer.py
deleted file mode 100644
index 5bd640eeba90f7ad2c6a2795ed14e40d30e90c4c..0000000000000000000000000000000000000000
--- a/spaces/aadnk/whisper-webui/src/whisper/fasterWhisperContainer.py
+++ /dev/null
@@ -1,207 +0,0 @@
-import os
-from typing import List, Union
-
-from faster_whisper import WhisperModel, download_model
-from src.config import ModelConfig, VadInitialPromptMode
-from src.hooks.progressListener import ProgressListener
-from src.languages import get_language_from_name
-from src.modelCache import ModelCache
-from src.prompts.abstractPromptStrategy import AbstractPromptStrategy
-from src.whisper.abstractWhisperContainer import AbstractWhisperCallback, AbstractWhisperContainer
-from src.utils import format_timestamp
-
-class FasterWhisperContainer(AbstractWhisperContainer):
- def __init__(self, model_name: str, device: str = None, compute_type: str = "float16",
- download_root: str = None,
- cache: ModelCache = None, models: List[ModelConfig] = []):
- super().__init__(model_name, device, compute_type, download_root, cache, models)
-
- def ensure_downloaded(self):
- """
- Ensure that the model is downloaded. This is useful if you want to ensure that the model is downloaded before
- passing the container to a subprocess.
- """
- model_config = self._get_model_config()
-
- if os.path.isdir(model_config.url):
- model_config.path = model_config.url
- else:
- model_config.path = download_model(model_config.url, output_dir=self.download_root)
-
- def _get_model_config(self) -> ModelConfig:
- """
- Get the model configuration for the model.
- """
- for model in self.models:
- if model.name == self.model_name:
- return model
- return None
-
- def _create_model(self):
- print("Loading faster whisper model " + self.model_name + " for device " + str(self.device))
- model_config = self._get_model_config()
- model_url = model_config.url
-
- if model_config.type == "whisper":
- if model_url not in ["tiny", "base", "small", "medium", "large", "large-v1", "large-v2"]:
- raise Exception("FasterWhisperContainer does not yet support Whisper models. Use ct2-transformers-converter to convert the model to a faster-whisper model.")
- if model_url == "large":
- # large is an alias for large-v1
- model_url = "large-v1"
-
- device = self.device
-
- if (device is None):
- device = "auto"
-
- model = WhisperModel(model_url, device=device, compute_type=self.compute_type)
- return model
-
- def create_callback(self, language: str = None, task: str = None,
- prompt_strategy: AbstractPromptStrategy = None,
- **decodeOptions: dict) -> AbstractWhisperCallback:
- """
- Create a WhisperCallback object that can be used to transcript audio files.
-
- Parameters
- ----------
- language: str
- The target language of the transcription. If not specified, the language will be inferred from the audio content.
- task: str
- The task - either translate or transcribe.
- prompt_strategy: AbstractPromptStrategy
- The prompt strategy to use. If not specified, the prompt from Whisper will be used.
- decodeOptions: dict
- Additional options to pass to the decoder. Must be pickleable.
-
- Returns
- -------
- A WhisperCallback object.
- """
- return FasterWhisperCallback(self, language=language, task=task, prompt_strategy=prompt_strategy, **decodeOptions)
-
-class FasterWhisperCallback(AbstractWhisperCallback):
- def __init__(self, model_container: FasterWhisperContainer, language: str = None, task: str = None,
- prompt_strategy: AbstractPromptStrategy = None,
- **decodeOptions: dict):
- self.model_container = model_container
- self.language = language
- self.task = task
- self.prompt_strategy = prompt_strategy
- self.decodeOptions = decodeOptions
-
- self._printed_warning = False
-
- def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None):
- """
-        Perform the transcription of the given audio file or data.
-
- Parameters
- ----------
- audio: Union[str, np.ndarray, torch.Tensor]
- The audio file to transcribe, or the audio data as a numpy array or torch tensor.
-        segment_index: int
-            The index of the audio segment being transcribed.
-        prompt: str
-            The prompt to condition the transcription on, if any.
-        detected_language: str
-            The language detected for the audio, used when no language was configured explicitly.
- progress_listener: ProgressListener
- A callback to receive progress updates.
- """
- model: WhisperModel = self.model_container.get_model()
- language_code = self._lookup_language_code(self.language) if self.language else None
-
- # Copy decode options and remove options that are not supported by faster-whisper
- decodeOptions = self.decodeOptions.copy()
- verbose = decodeOptions.pop("verbose", None)
-
- logprob_threshold = decodeOptions.pop("logprob_threshold", None)
-
- patience = decodeOptions.pop("patience", None)
- length_penalty = decodeOptions.pop("length_penalty", None)
- suppress_tokens = decodeOptions.pop("suppress_tokens", None)
-
- if (decodeOptions.pop("fp16", None) is not None):
- if not self._printed_warning:
- print("WARNING: fp16 option is ignored by faster-whisper - use compute_type instead.")
- self._printed_warning = True
-
- # Fix up decode options
- if (logprob_threshold is not None):
- decodeOptions["log_prob_threshold"] = logprob_threshold
-
- decodeOptions["patience"] = float(patience) if patience is not None else 1.0
- decodeOptions["length_penalty"] = float(length_penalty) if length_penalty is not None else 1.0
-
-        # See if suppress_tokens is a string - if so, convert it to a list of ints
- decodeOptions["suppress_tokens"] = self._split_suppress_tokens(suppress_tokens)
-
- initial_prompt = self.prompt_strategy.get_segment_prompt(segment_index, prompt, detected_language) \
- if self.prompt_strategy else prompt
-
- segments_generator, info = model.transcribe(audio, \
- language=language_code if language_code else detected_language, task=self.task, \
- initial_prompt=initial_prompt, \
- **decodeOptions
- )
-
- segments = []
-
- for segment in segments_generator:
- segments.append(segment)
-
- if progress_listener is not None:
- progress_listener.on_progress(segment.end, info.duration)
- if verbose:
- print("[{}->{}] {}".format(format_timestamp(segment.start, True), format_timestamp(segment.end, True),
- segment.text))
-
- text = " ".join([segment.text for segment in segments])
-
- # Convert the segments to a format that is easier to serialize
- whisper_segments = [{
- "text": segment.text,
- "start": segment.start,
- "end": segment.end,
-
- # Extra fields added by faster-whisper
- "words": [{
- "start": word.start,
- "end": word.end,
- "word": word.word,
- "probability": word.probability
- } for word in (segment.words if segment.words is not None else []) ]
- } for segment in segments]
-
- result = {
- "segments": whisper_segments,
- "text": text,
- "language": info.language if info else None,
-
- # Extra fields added by faster-whisper
- "language_probability": info.language_probability if info else None,
- "duration": info.duration if info else None
- }
-
- # If we have a prompt strategy, we need to increment the current prompt
- if self.prompt_strategy:
- self.prompt_strategy.on_segment_finished(segment_index, prompt, detected_language, result)
-
- if progress_listener is not None:
- progress_listener.on_finished()
- return result
-
- def _split_suppress_tokens(self, suppress_tokens: Union[str, List[int]]):
- if (suppress_tokens is None):
- return None
- if (isinstance(suppress_tokens, list)):
- return suppress_tokens
-
- return [int(token) for token in suppress_tokens.split(",")]
-
- def _lookup_language_code(self, language: str):
-        parsed_language = get_language_from_name(language)
-
-        if parsed_language is None:
-            # keep the original name in the message; `parsed_language` is None here
-            raise ValueError("Invalid language: " + str(language))
-
-        return parsed_language.code
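The decode-option fix-up performed in FasterWhisperCallback.invoke above (renaming logprob_threshold, defaulting patience and length_penalty, splitting a comma-separated suppress_tokens string) can be illustrated with a small stand-alone sketch; the option names mirror the code above, but the values are made up:

```python
from typing import List, Optional, Union

def fix_decode_options(options: dict) -> dict:
    """Translate openai-whisper style decode options into faster-whisper style ones,
    mirroring the fix-up in FasterWhisperCallback.invoke above."""
    options = options.copy()
    options.pop("fp16", None)  # ignored by faster-whisper; precision is set via compute_type

    # faster-whisper calls this option `log_prob_threshold`
    logprob_threshold = options.pop("logprob_threshold", None)
    if logprob_threshold is not None:
        options["log_prob_threshold"] = logprob_threshold

    # patience / length_penalty must be floats and default to 1.0
    patience = options.pop("patience", None)
    length_penalty = options.pop("length_penalty", None)
    options["patience"] = float(patience) if patience is not None else 1.0
    options["length_penalty"] = float(length_penalty) if length_penalty is not None else 1.0

    # suppress_tokens may arrive as a comma-separated string
    suppress: Optional[Union[str, List[int]]] = options.pop("suppress_tokens", None)
    if isinstance(suppress, str):
        suppress = [int(token) for token in suppress.split(",")]
    options["suppress_tokens"] = suppress
    return options

print(fix_decode_options({"logprob_threshold": -1.0, "suppress_tokens": "-1,50257", "fp16": True}))
# {'log_prob_threshold': -1.0, 'patience': 1.0, 'length_penalty': 1.0, 'suppress_tokens': [-1, 50257]}
```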
diff --git a/spaces/aashay26/Next_Word_Prediction/app.py b/spaces/aashay26/Next_Word_Prediction/app.py
deleted file mode 100644
index 9b5b341796841863906911bc19d4188d22702a4b..0000000000000000000000000000000000000000
--- a/spaces/aashay26/Next_Word_Prediction/app.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import gradio as gr
-import numpy as np
-import pickle
-import tensorflow as tf
-from tensorflow.keras.preprocessing.sequence import pad_sequences
-from tensorflow.keras.layers import Embedding, LSTM, Dense, Bidirectional
-from tensorflow.keras.preprocessing.text import Tokenizer
-from tensorflow.keras.models import Sequential
-from tensorflow.keras.optimizers import Adam
-from tensorflow.keras.models import load_model
-
-model = load_model('./nextwords11.h5')
-
-tokenizer = pickle.load(open('./token11.pkl', 'rb'))
-
-def predict(text,nw):
- seed_text = text
- next_words = int(nw)
- for _ in range(next_words):
-
- token_list = tokenizer.texts_to_sequences([seed_text])[0]
- token_list = pad_sequences([token_list], maxlen=86, padding='pre')
- predict_x=model.predict(token_list, verbose=0)
- predicted=np.argmax(predict_x,axis=1)
- #predicted = model.predict_classes(token_list, verbose=0)
- output_word = ""
- for word, index in tokenizer.word_index.items():
- if index == predicted:
- output_word = word
- break
-
- seed_text += " " + output_word
-
- return seed_text
-
-with gr.Blocks(css=".x {font-weight:bold}") as demo:
- text = gr.Textbox(label="Write Something :",elem_classes="x")
-    nw = gr.Dropdown(choices=[1,2,3,4,5], label="Select no of words to be predicted")
- output = gr.Textbox(label="Output Box",elem_classes="x")
- predict_btn = gr.Button("Predict next Words !!",elem_classes="x")
- predict_btn.click(fn=predict, inputs=[text,nw], outputs=output)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py
deleted file mode 100644
index d22ba52640bebd805b3b8d07025e276dfb023759..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='DMHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- filter_sizes=(1, 3, 5, 7),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/__init__.py
deleted file mode 100644
index 04011130435cf9fdfadeb821919046b1bddab7d4..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/__init__.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from .atss import ATSS
-from .base import BaseDetector
-from .cascade_rcnn import CascadeRCNN
-from .cornernet import CornerNet
-from .detr import DETR
-from .fast_rcnn import FastRCNN
-from .faster_rcnn import FasterRCNN
-from .fcos import FCOS
-from .fovea import FOVEA
-from .fsaf import FSAF
-from .gfl import GFL
-from .grid_rcnn import GridRCNN
-from .htc import HybridTaskCascade
-from .kd_one_stage import KnowledgeDistillationSingleStageDetector
-from .mask_rcnn import MaskRCNN
-from .mask_scoring_rcnn import MaskScoringRCNN
-from .nasfcos import NASFCOS
-from .paa import PAA
-from .point_rend import PointRend
-from .reppoints_detector import RepPointsDetector
-from .retinanet import RetinaNet
-from .rpn import RPN
-from .scnet import SCNet
-from .single_stage import SingleStageDetector
-from .sparse_rcnn import SparseRCNN
-from .trident_faster_rcnn import TridentFasterRCNN
-from .two_stage import TwoStageDetector
-from .vfnet import VFNet
-from .yolact import YOLACT
-from .yolo import YOLOV3
-
-__all__ = [
- 'ATSS', 'BaseDetector', 'SingleStageDetector',
- 'KnowledgeDistillationSingleStageDetector', 'TwoStageDetector', 'RPN',
- 'FastRCNN', 'FasterRCNN', 'MaskRCNN', 'CascadeRCNN', 'HybridTaskCascade',
- 'RetinaNet', 'FCOS', 'GridRCNN', 'MaskScoringRCNN', 'RepPointsDetector',
- 'FOVEA', 'FSAF', 'NASFCOS', 'PointRend', 'GFL', 'CornerNet', 'PAA',
- 'YOLOV3', 'YOLACT', 'VFNet', 'DETR', 'TridentFasterRCNN', 'SparseRCNN',
- 'SCNet'
-]
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/utils/misc.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/utils/misc.py
deleted file mode 100644
index 2c58d0d7fee9fe3d4519270ad8c1e998d0d8a18c..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/utils/misc.py
+++ /dev/null
@@ -1,377 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import collections.abc
-import functools
-import itertools
-import subprocess
-import warnings
-from collections import abc
-from importlib import import_module
-from inspect import getfullargspec
-from itertools import repeat
-
-
-# From PyTorch internals
-def _ntuple(n):
-
- def parse(x):
- if isinstance(x, collections.abc.Iterable):
- return x
- return tuple(repeat(x, n))
-
- return parse
-
-
-to_1tuple = _ntuple(1)
-to_2tuple = _ntuple(2)
-to_3tuple = _ntuple(3)
-to_4tuple = _ntuple(4)
-to_ntuple = _ntuple
-
-
-def is_str(x):
- """Whether the input is an string instance.
-
- Note: This method is deprecated since python 2 is no longer supported.
- """
- return isinstance(x, str)
-
-
-def import_modules_from_strings(imports, allow_failed_imports=False):
- """Import modules from the given list of strings.
-
- Args:
- imports (list | str | None): The given module names to be imported.
- allow_failed_imports (bool): If True, the failed imports will return
- None. Otherwise, an ImportError is raised. Default: False.
-
- Returns:
- list[module] | module | None: The imported modules.
-
- Examples:
- >>> osp, sys = import_modules_from_strings(
- ... ['os.path', 'sys'])
- >>> import os.path as osp_
- >>> import sys as sys_
- >>> assert osp == osp_
- >>> assert sys == sys_
- """
- if not imports:
- return
- single_import = False
- if isinstance(imports, str):
- single_import = True
- imports = [imports]
- if not isinstance(imports, list):
- raise TypeError(
- f'custom_imports must be a list but got type {type(imports)}')
- imported = []
- for imp in imports:
- if not isinstance(imp, str):
- raise TypeError(
- f'{imp} is of type {type(imp)} and cannot be imported.')
- try:
- imported_tmp = import_module(imp)
- except ImportError:
- if allow_failed_imports:
- warnings.warn(f'{imp} failed to import and is ignored.',
- UserWarning)
- imported_tmp = None
- else:
-                raise ImportError(f'Failed to import {imp}')
- imported.append(imported_tmp)
- if single_import:
- imported = imported[0]
- return imported
-
-
-def iter_cast(inputs, dst_type, return_type=None):
- """Cast elements of an iterable object into some type.
-
- Args:
- inputs (Iterable): The input object.
- dst_type (type): Destination type.
- return_type (type, optional): If specified, the output object will be
- converted to this type, otherwise an iterator.
-
- Returns:
- iterator or specified type: The converted object.
- """
- if not isinstance(inputs, abc.Iterable):
- raise TypeError('inputs must be an iterable object')
- if not isinstance(dst_type, type):
- raise TypeError('"dst_type" must be a valid type')
-
- out_iterable = map(dst_type, inputs)
-
- if return_type is None:
- return out_iterable
- else:
- return return_type(out_iterable)
-
-
-def list_cast(inputs, dst_type):
- """Cast elements of an iterable object into a list of some type.
-
- A partial method of :func:`iter_cast`.
- """
- return iter_cast(inputs, dst_type, return_type=list)
-
-
-def tuple_cast(inputs, dst_type):
- """Cast elements of an iterable object into a tuple of some type.
-
- A partial method of :func:`iter_cast`.
- """
- return iter_cast(inputs, dst_type, return_type=tuple)
-
-
-def is_seq_of(seq, expected_type, seq_type=None):
- """Check whether it is a sequence of some type.
-
- Args:
- seq (Sequence): The sequence to be checked.
- expected_type (type): Expected type of sequence items.
- seq_type (type, optional): Expected sequence type.
-
- Returns:
- bool: Whether the sequence is valid.
- """
- if seq_type is None:
- exp_seq_type = abc.Sequence
- else:
- assert isinstance(seq_type, type)
- exp_seq_type = seq_type
- if not isinstance(seq, exp_seq_type):
- return False
- for item in seq:
- if not isinstance(item, expected_type):
- return False
- return True
-
-
-def is_list_of(seq, expected_type):
- """Check whether it is a list of some type.
-
- A partial method of :func:`is_seq_of`.
- """
- return is_seq_of(seq, expected_type, seq_type=list)
-
-
-def is_tuple_of(seq, expected_type):
- """Check whether it is a tuple of some type.
-
- A partial method of :func:`is_seq_of`.
- """
- return is_seq_of(seq, expected_type, seq_type=tuple)
-
-
-def slice_list(in_list, lens):
- """Slice a list into several sub lists by a list of given length.
-
- Args:
- in_list (list): The list to be sliced.
- lens(int or list): The expected length of each out list.
-
- Returns:
- list: A list of sliced list.
- """
- if isinstance(lens, int):
- assert len(in_list) % lens == 0
- lens = [lens] * int(len(in_list) / lens)
- if not isinstance(lens, list):
- raise TypeError('"indices" must be an integer or a list of integers')
- elif sum(lens) != len(in_list):
- raise ValueError('sum of lens and list length does not '
- f'match: {sum(lens)} != {len(in_list)}')
- out_list = []
- idx = 0
- for i in range(len(lens)):
- out_list.append(in_list[idx:idx + lens[i]])
- idx += lens[i]
- return out_list
-
-
-def concat_list(in_list):
- """Concatenate a list of list into a single list.
-
- Args:
- in_list (list): The list of list to be merged.
-
- Returns:
- list: The concatenated flat list.
- """
- return list(itertools.chain(*in_list))
-
-
-def check_prerequisites(
- prerequisites,
- checker,
- msg_tmpl='Prerequisites "{}" are required in method "{}" but not '
- 'found, please install them first.'): # yapf: disable
- """A decorator factory to check if prerequisites are satisfied.
-
- Args:
-        prerequisites (str or list[str]): Prerequisites to be checked.
-        checker (callable): The checker method that returns True if a
-            prerequisite is met, False otherwise.
- msg_tmpl (str): The message template with two variables.
-
- Returns:
- decorator: A specific decorator.
- """
-
- def wrap(func):
-
- @functools.wraps(func)
- def wrapped_func(*args, **kwargs):
- requirements = [prerequisites] if isinstance(
- prerequisites, str) else prerequisites
- missing = []
- for item in requirements:
- if not checker(item):
- missing.append(item)
- if missing:
- print(msg_tmpl.format(', '.join(missing), func.__name__))
-                raise RuntimeError('Prerequisites not met.')
- else:
- return func(*args, **kwargs)
-
- return wrapped_func
-
- return wrap
-
-
-def _check_py_package(package):
- try:
- import_module(package)
- except ImportError:
- return False
- else:
- return True
-
-
-def _check_executable(cmd):
- if subprocess.call(f'which {cmd}', shell=True) != 0:
- return False
- else:
- return True
-
-
-def requires_package(prerequisites):
- """A decorator to check if some python packages are installed.
-
- Example:
- >>> @requires_package('numpy')
- >>> func(arg1, args):
- >>> return numpy.zeros(1)
- array([0.])
- >>> @requires_package(['numpy', 'non_package'])
- >>> func(arg1, args):
- >>> return numpy.zeros(1)
- ImportError
- """
- return check_prerequisites(prerequisites, checker=_check_py_package)
-
-
-def requires_executable(prerequisites):
- """A decorator to check if some executable files are installed.
-
- Example:
- >>> @requires_executable('ffmpeg')
- >>> func(arg1, args):
- >>> print(1)
- 1
- """
- return check_prerequisites(prerequisites, checker=_check_executable)
-
-
-def deprecated_api_warning(name_dict, cls_name=None):
- """A decorator to check if some arguments are deprecate and try to replace
- deprecate src_arg_name to dst_arg_name.
-
- Args:
- name_dict(dict):
- key (str): Deprecate argument names.
- val (str): Expected argument names.
-
- Returns:
- func: New function.
- """
-
- def api_warning_wrapper(old_func):
-
- @functools.wraps(old_func)
- def new_func(*args, **kwargs):
- # get the arg spec of the decorated method
- args_info = getfullargspec(old_func)
- # get name of the function
- func_name = old_func.__name__
- if cls_name is not None:
- func_name = f'{cls_name}.{func_name}'
- if args:
- arg_names = args_info.args[:len(args)]
- for src_arg_name, dst_arg_name in name_dict.items():
- if src_arg_name in arg_names:
- warnings.warn(
- f'"{src_arg_name}" is deprecated in '
- f'`{func_name}`, please use "{dst_arg_name}" '
- 'instead')
- arg_names[arg_names.index(src_arg_name)] = dst_arg_name
- if kwargs:
- for src_arg_name, dst_arg_name in name_dict.items():
- if src_arg_name in kwargs:
-
- assert dst_arg_name not in kwargs, (
- f'The expected behavior is to replace '
- f'the deprecated key `{src_arg_name}` to '
- f'new key `{dst_arg_name}`, but got them '
- f'in the arguments at the same time, which '
-                        f'is confusing. `{src_arg_name}` will be '
- f'deprecated in the future, please '
- f'use `{dst_arg_name}` instead.')
-
- warnings.warn(
- f'"{src_arg_name}" is deprecated in '
- f'`{func_name}`, please use "{dst_arg_name}" '
- 'instead')
- kwargs[dst_arg_name] = kwargs.pop(src_arg_name)
-
- # apply converted arguments to the decorated method
- output = old_func(*args, **kwargs)
- return output
-
- return new_func
-
- return api_warning_wrapper
-
-
-def is_method_overridden(method, base_class, derived_class):
- """Check if a method of base class is overridden in derived class.
-
- Args:
- method (str): the method name to check.
- base_class (type): the class of the base class.
- derived_class (type | Any): the class or instance of the derived class.
- """
- assert isinstance(base_class, type), \
- "base_class doesn't accept instance, Please pass class instead."
-
- if not isinstance(derived_class, type):
- derived_class = derived_class.__class__
-
- base_method = getattr(base_class, method)
- derived_method = getattr(derived_class, method)
- return derived_method != base_method
-
-
-def has_method(obj: object, method: str) -> bool:
- """Check whether the object has a method.
-
- Args:
- method (str): The method name to check.
- obj (object): The object to check.
-
- Returns:
- bool: True if the object has the method else False.
- """
- return hasattr(obj, method) and callable(getattr(obj, method))
diff --git a/spaces/acmyu/frame_interpolation_prototype/sagan_models.py b/spaces/acmyu/frame_interpolation_prototype/sagan_models.py
deleted file mode 100644
index 40996bde2108aadeb9cd11b295c72ece661269bc..0000000000000000000000000000000000000000
--- a/spaces/acmyu/frame_interpolation_prototype/sagan_models.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.autograd import Variable
-from spectral import SpectralNorm
-import numpy as np
-
-class Self_Attn(nn.Module):
- """ Self attention Layer"""
- def __init__(self,in_dim,activation):
- super(Self_Attn,self).__init__()
- self.chanel_in = in_dim
- self.activation = activation
-
- self.query_conv = nn.Conv2d(in_channels = in_dim , out_channels = in_dim//8 , kernel_size= 1)
- self.key_conv = nn.Conv2d(in_channels = in_dim , out_channels = in_dim//8 , kernel_size= 1)
- self.value_conv = nn.Conv2d(in_channels = in_dim , out_channels = in_dim , kernel_size= 1)
- self.gamma = nn.Parameter(torch.zeros(1))
-
-        self.softmax  = nn.Softmax(dim=-1)
-
-    def forward(self,x):
- """
- inputs :
- x : input feature maps( B X C X W X H)
- returns :
- out : self attention value + input feature
- attention: B X N X N (N is Width*Height)
- """
- m_batchsize,C,width ,height = x.size()
-        proj_query  = self.query_conv(x).view(m_batchsize,-1,width*height).permute(0,2,1) # B x N x C//8, where N = W*H
-        proj_key =  self.key_conv(x).view(m_batchsize,-1,width*height) # B x C//8 x N
-        energy =  torch.bmm(proj_query,proj_key) # B x N x N
-        attention = self.softmax(energy) # B x N x N, each row sums to 1
-        proj_value = self.value_conv(x).view(m_batchsize,-1,width*height) # B x C x N
-
- out = torch.bmm(proj_value,attention.permute(0,2,1) )
- out = out.view(m_batchsize,C,width,height)
-
- out = self.gamma*out + x
- return out,attention
-
-class GeneratorRandom(nn.Module):
- """Generator for random image using noise as input."""
-
- def __init__(self, batch_size, image_size=64, z_dim=100, conv_dim=64):
- super(GeneratorRandom, self).__init__()
- self.imsize = image_size
- layer1 = []
- layer2 = []
- layer3 = []
- last = []
-
- repeat_num = int(np.log2(self.imsize)) - 3
- mult = 2 ** repeat_num # 8
- layer1.append(SpectralNorm(nn.ConvTranspose2d(z_dim, conv_dim * mult, 4)))
- layer1.append(nn.BatchNorm2d(conv_dim * mult))
- layer1.append(nn.ReLU())
-
- curr_dim = conv_dim * mult
-
- layer2.append(SpectralNorm(nn.ConvTranspose2d(curr_dim, int(curr_dim / 2), 4, 2, 1)))
- layer2.append(nn.BatchNorm2d(int(curr_dim / 2)))
- layer2.append(nn.ReLU())
-
- curr_dim = int(curr_dim / 2)
-
- layer3.append(SpectralNorm(nn.ConvTranspose2d(curr_dim, int(curr_dim / 2), 4, 2, 1)))
- layer3.append(nn.BatchNorm2d(int(curr_dim / 2)))
- layer3.append(nn.ReLU())
-
- if self.imsize == 64:
- layer4 = []
- curr_dim = int(curr_dim / 2)
- layer4.append(SpectralNorm(nn.ConvTranspose2d(curr_dim, int(curr_dim / 2), 4, 2, 1)))
- layer4.append(nn.BatchNorm2d(int(curr_dim / 2)))
- layer4.append(nn.ReLU())
- self.l4 = nn.Sequential(*layer4)
- curr_dim = int(curr_dim / 2)
-
- self.l1 = nn.Sequential(*layer1)
- self.l2 = nn.Sequential(*layer2)
- self.l3 = nn.Sequential(*layer3)
-
- last.append(nn.ConvTranspose2d(curr_dim, 3, 4, 2, 1))
- last.append(nn.Tanh())
- self.last = nn.Sequential(*last)
-
- self.attn1 = Self_Attn( 128, 'relu')
- self.attn2 = Self_Attn( 64, 'relu')
-
- def forward(self, z):
- z = z.view(z.size(0), z.size(1), 1, 1)
- out=self.l1(z)
- out=self.l2(out)
- out=self.l3(out)
- out,p1 = self.attn1(out)
- out=self.l4(out)
- out,p2 = self.attn2(out)
- out=self.last(out)
-
- return out, p1, p2
-
-
-class Discriminator(nn.Module):
- """Discriminator, Auxiliary Classifier."""
-
- def __init__(self, batch_size=64, image_size=64, conv_dim=64):
- super(Discriminator, self).__init__()
- self.imsize = image_size
- layer1 = []
- layer2 = []
- layer3 = []
- last = []
-
- layer1.append(SpectralNorm(nn.Conv2d(3, conv_dim, 4, 2, 1)))
- layer1.append(nn.LeakyReLU(0.1))
-
- curr_dim = conv_dim
-
- layer2.append(SpectralNorm(nn.Conv2d(curr_dim, curr_dim * 2, 4, 2, 1)))
- layer2.append(nn.LeakyReLU(0.1))
- curr_dim = curr_dim * 2
-
- layer3.append(SpectralNorm(nn.Conv2d(curr_dim, curr_dim * 2, 4, 2, 1)))
- layer3.append(nn.LeakyReLU(0.1))
- curr_dim = curr_dim * 2
-
- if self.imsize == 64:
- layer4 = []
- layer4.append(SpectralNorm(nn.Conv2d(curr_dim, curr_dim * 2, 4, 2, 1)))
- layer4.append(nn.LeakyReLU(0.1))
- self.l4 = nn.Sequential(*layer4)
- curr_dim = curr_dim*2
- self.l1 = nn.Sequential(*layer1)
- self.l2 = nn.Sequential(*layer2)
- self.l3 = nn.Sequential(*layer3)
-
- last.append(nn.Conv2d(curr_dim, 1, 4))
- self.last = nn.Sequential(*last)
-
- self.attn1 = Self_Attn(256, 'relu')
- self.attn2 = Self_Attn(512, 'relu')
-
- def forward(self, x):
- out = self.l1(x)
- out = self.l2(out)
- out = self.l3(out)
- out,p1 = self.attn1(out)
- #out=self.l4(out)
- #out,p2 = self.attn2(out)
- out=self.last(out)
-
- return out.squeeze(), p1, 0 #p2
-
-
-
-class Generator(nn.Module):
- '''
- A generator without noise z
- '''
-
-
- def __init__(self, batch_size, image_size=128, z_dim=100, conv_dim=64):
- super(Generator, self).__init__()
- self.nfg = conv_dim # the size of feature map
- self.c = 3 # output channel
- filter_size = 4
- stride_size = 2
-
- self.down_sample_blocks = nn.Sequential(
- nn.Conv2d(self.c, self.nfg * 2, kernel_size=3, stride=1, padding=1, bias=False), # size
- nn.BatchNorm2d(self.nfg * 2),
- nn.LeakyReLU(0.02, inplace=True),
- nn.Conv2d(self.nfg * 2, self.nfg * 2, kernel_size=filter_size, stride=stride_size, padding=1, bias=False), # size/2
- nn.BatchNorm2d(self.nfg * 2),
- nn.LeakyReLU(0.02, inplace=True),
- nn.Conv2d(self.nfg * 2, self.nfg * 4, kernel_size=filter_size, stride=stride_size, padding=1, bias=False), # size/2
- nn.BatchNorm2d(self.nfg * 4),
- nn.LeakyReLU(0.02, inplace=True),
- nn.Conv2d(self.nfg * 4, self.nfg * 8, kernel_size=filter_size, stride=stride_size, padding=1, bias=False), # size/2
- nn.BatchNorm2d(self.nfg * 8),
- nn.LeakyReLU(0.02, inplace=True)
- )
-
- self.up_sample_block = nn.Sequential(
- nn.ConvTranspose2d(self.nfg * 8, self.nfg * 4, kernel_size=filter_size, stride=stride_size, padding=1, bias=False), # size*2
- nn.BatchNorm2d(self.nfg * 4),
- nn.LeakyReLU(0.02, inplace=True),
- nn.ConvTranspose2d(self.nfg * 4, self.nfg * 2, kernel_size=filter_size, stride=stride_size, padding=1, bias=False), # size*2
- nn.BatchNorm2d(self.nfg * 2),
- nn.LeakyReLU(0.02, inplace=True),
- nn.ConvTranspose2d(self.nfg * 2, self.nfg, kernel_size=filter_size, stride=stride_size, padding=1, bias=False), # size*2
- nn.BatchNorm2d(self.nfg),
- nn.LeakyReLU(0.02, inplace=True),
- nn.ConvTranspose2d(self.nfg, self.c, kernel_size=3, stride=1, padding=1, bias=False), # size
- nn.Tanh()
- )
-
- self.attn1 = Self_Attn( 512, 'relu')
- self.attn2 = Self_Attn( 512, 'relu')
-
-
- #def forward(self, tensor0, tensor2):
- def forward(self, tensor0):
-
- #out = torch.cat((tensor0, tensor2), 1)
-
- out_down = self.down_sample_blocks(tensor0)
- #out_down,p1 = self.attn1(out_down)
- out_up = self.up_sample_block(out_down)
-
- return out_up, 0, 0
-
- def encode(self, tensor0):
- out_down = self.down_sample_blocks(tensor0)
- #out_down,p1 = self.attn1(out_down)
- return out_down
-
- def decode(self, tensor0):
- return self.up_sample_block(tensor0)
-
diff --git a/spaces/adarsh8986/stabilityai-stable-diffusion-2-1-base/app.py b/spaces/adarsh8986/stabilityai-stable-diffusion-2-1-base/app.py
deleted file mode 100644
index 02f6d0dd4b8578fbccaf70866b377d9e3ea494b9..0000000000000000000000000000000000000000
--- a/spaces/adarsh8986/stabilityai-stable-diffusion-2-1-base/app.py
+++ /dev/null
@@ -1,5 +0,0 @@
-import gradio as gr
-
-
-
-gr.Interface.load("models/stabilityai/stable-diffusion-2-1-base").launch()
\ No newline at end of file
diff --git a/spaces/akhaliq/JoJoGAN/e4e/models/__init__.py b/spaces/akhaliq/JoJoGAN/e4e/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/docs/infinibatch/index.html b/spaces/akhaliq/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/docs/infinibatch/index.html
deleted file mode 100644
index b121c03951b6400592ed517bb0b6d8c94ff2b842..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/docs/infinibatch/index.html
+++ /dev/null
@@ -1,629 +0,0 @@
-infinibatch API documentation
-
Module infinibatch
-
-
-
Infinibatch is a library of checkpointable iterators for randomized data loading of massive data sets in deep neural network training.
-
Features
-
-
support for corpora much larger than fit into RAM
-
hierarchical block+sentence-level randomization over the whole corpus, different randomization in each epoch
-
only load the data that is needed
-
very fast start-up time (does not need to read full corpus)
-
only requires the most basic of data preparation (e.g. no indexing)
-
for multi-GPU, only load what the respective GPU needs
-
100% accurate check-pointing, restore from checkpoint should not read all data up to the checkpoint
-
support automatic bucketed batching with dynamic batch sizes
-
pre-fetching thread
-
composable, as to support for complex batching, e.g. negative samples from multiple documents
-
-
Getting Started
-
Infinibatch requires Python 3.5 and has no dependencies.
-There is presently no pip package.
-To install it, please copy this library into a subfolder in your project:
-
cd YOUR_PROJECT_FOLDER
-git clone https://msasg.visualstudio.com/DefaultCollection/SDRG/_git/infinibatch
-
It is now located at infinibatch/infinibatch, e.g. the main import file is infinibatch/infinibatch/__init__.py.
-
To import it, you need to add that folder to your PYTHONPATH variable externally, or to sys.path inside the code:
-
import sys
-sys.path.insert(0,'infinibatch') # note: relative paths are relative to your current dir, not to the python script
-import infinibatch
-
-
Tutorial
-
This little tutorial walks you through the steps of preparing your data and consuming them from Python code as batches.
-
Infinibatch Basics: Iterators and Checkpointing
-
Infinibatch provides Python iterators
-to read your data.
-An iterator represents a stream of data that can be retrieved item by item, e.g. via a
-for loop or repeatedly calling next() on it.
-
Infinibatch is agnostic to the data type of the items, which is determined by a user-supplied file-read function.
-In NLP applications, items would typically be tuples of text. In other applications,
-they can be images or an audio file with a textual annotation.
-
Infinibatch makes it easy to read your data in randomized order, and supports checkpointing, which allows you to restart training exactly where you left off.
-
Randomization is done on the fly, which means that it is not necessary to read the entire data set into memory
-to be shuffled. Infinibatch implements a hierarchical shuffling algorithm
-that only holds a subset of the data in RAM at any point in time.
-
Infinibatch iterators are checkpointable.
-Checkpointing lets you retrieve the current position (the "checkpoint") in the data stream at any time, so that
-later, you can "rewind" to that same position.
-The sad reality is that long-running trainings occasionally crash.
-To be able to continue a crashed training as if it had not crashed,
-save your Infinibatch iterator's checkpoint to disk whenever you save an intermediate model during training.
-To restart a crashed training, reset the iterator to the saved checkpoint.
-The data reader will now yield the exact same data-item sequence it would have yielded without the crash.
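In code, that workflow looks roughly like the sketch below. It assumes the corpus_chunks files created in the Data Preparation step further down, that infinibatch is on sys.path as described above, and the getstate()/setstate() checkpoint methods that Infinibatch's checkpointable iterators expose:

```python
import sys, gzip, glob
sys.path.insert(0, 'infinibatch')
from infinibatch import datasets as ds

reader = ds.chunked_dataset_iterator(
    chunk_refs=glob.glob('corpus_chunks/corpus.*.txt.gz'),
    read_chunk_fn=lambda path: iter(gzip.decompress(open(path, "rb").read())
                                    .decode(encoding='utf-8').splitlines()),
    buffer_size=6, seed=1)

print(next(reader))               # consume one item
checkpoint = reader.getstate()    # save this alongside your intermediate model checkpoint
a = [next(reader) for _ in range(3)]

reader.setstate(checkpoint)       # after a crash/restart, rewind to the saved position
b = [next(reader) for _ in range(3)]
assert a == b                     # the exact same item sequence is replayed
```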
-
Data Preparation
-
Infinibatch has one requirement on your data organization:
-To use your data with Infinibatch, it must be split into a large number of small chunks.
-A chunk is the smallest unit of data that is loaded from disk into RAM. Infinibatch holds a random subset of chunks in memory
-that it randomly draws samples from.
-
Below we want to show how such a split can be created. An easy way to split your data into chunks is with the Linux split command.
-
In this tutorial, our "corpus" consists of 6 lines of text, where each line is one data item.
-To create that corpus, please run this command in a bash shell. It creates a 6-line text file named corpus.txt:
-
echo \
-'Lorem ipsum dolor sit amet,
-consectetur adipiscing elit,
-sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
-Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
-Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
-The quick brown fox jumps over the lazy dog.' \
-> corpus.txt
-
-
Now let us split it into 3 chunks of 2 lines each. Each chunk is stored as a zipped text file.
-We will create them inside a new subdirectory called corpus_chunks, using the split command to cut corpus.txt into two-line pieces and gzip to compress each piece.
Doing so creates three files: corpus_chunks/corpus.00.txt.gz, corpus_chunks/corpus.01.txt.gz, and corpus_chunks/corpus.02.txt.gz.
-To verify whether the data has been split as expected, you can use this command:
-
zcat corpus_chunks/corpus.*.txt.gz
-
-
Hint: For large corpora, we recommend replacing gzip by pigz (apt-get install pigz), which runs notably faster via multi-threading.
-
Reading Items in Random Order With Infinibatch
-
We will first show the easiest way to read data with Infinibatch, using the helper function chunked_dataset_iterator().
-This function will create an Infinibatch iterator that yields the content of your data in random order.
-Please run the following program:
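A minimal version of such a program, assuming the corpus_chunks files created above and the same read_chunk_fn used in the later examples of this tutorial:

```python
import sys, gzip, glob
sys.path.insert(0, 'infinibatch')
from infinibatch import datasets as ds

ds = ds.chunked_dataset_iterator(
    chunk_refs=glob.glob('corpus_chunks/corpus.*.txt.gz'),
    read_chunk_fn=lambda path: iter(gzip.decompress(open(path, "rb").read())
                                    .decode(encoding='utf-8').splitlines()),
    buffer_size=6, seed=1)

for i in range(10):
    print(next(ds))
```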
You should get output that contains the 6 example lines in randomized order:
-
Lorem ipsum dolor sit amet,
-consectetur adipiscing elit,
-Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
-Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
-The quick brown fox jumps over the lazy dog.
-sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
-consectetur adipiscing elit,
-Lorem ipsum dolor sit amet,
-The quick brown fox jumps over the lazy dog.
-sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
-
-
Note: The buffer_size parameter determines how many sentences are read into memory at any given time,
-to draw randomized items from. In real settings with corpora of hundreds of millions of text lines,
-the buffer_size parameter should be set in the millions.
-RAM usage and startup time will be proportional to the buffer size
-(but much lower than having to load the entire corpus into RAM).
-
Reading Items of Different Lengths in Batches
-
For deep learning, we want to group multiple items into batches.
-For NLP tasks, items are often lines of text of varying length.
-Infinibatch implements an algorithm that randomizes the input sequence and groups it into
-batches of approximately the same length (aka bucketing).
-
Infinibatch's BucketedReadaheadBatchIterator performs this task.
-It implements an algorithm modeled after the Marian toolkit
-that preloads a large number of randomized items (typically millions; in this example: 6),
-sorts them and groups them into batches of similar length, and then yields
-them, in turn, in randomized order.
-
Here is an example. Note that the BucketedReadaheadBatchIterator accepts
-the previous randomized sentence sequence iterator (ds) as the source of items to randomize over.
-This is an example how one forms pipelines of iterators with Infinibatch
-(a concept familiar from Python's own itertools).
-Once an iterator is passed to another as its source, consider it owned by that other iterator,
-it must no longer be accessed by the calling code.
-
import sys, gzip, glob
-sys.path.insert(0,'infinibatch')
-from infinibatch import datasets as ds
-from infinibatch import iterators as it
-
-ds = ds.chunked_dataset_iterator(
- chunk_refs = glob.glob('corpus_chunks/corpus.*.txt.gz'),
- read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb") \
- .read()).decode(encoding='utf-8') \
- .splitlines()),
- buffer_size = 6, seed = 1)
-
-bs = it.BucketedReadaheadBatchIterator(
- source_iterator = ds, # note: this is the iterator from above
- read_ahead = 6,
- key = lambda line: len(line),
- batch_size = 2,
- seed = 1)
-
-for i in range(25):
- print(next(bs))
-
-
This code should output something like this:
-
['sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
- 'The quick brown fox jumps over the lazy dog.']
-['consectetur adipiscing elit,', 'Lorem ipsum dolor sit amet,']
-['Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.',
- 'Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.']
-
-
followed by different permutations of the same tuples.
-As you can see, the sentences are in random order and grouped in batches of 2 of approximately the same length.
-You may notice that there is no variation in how the items get grouped into batches; that
-is an artifact of this example, and generally not the case in real use when the data size is much larger
-than the batch size.
-
In NLP, sentence length often varies considerably. As a result, using batches of a fixed number of lines,
-as in the example above, will waste GPU RAM and cores.
-This is because the number of lines is limited by the longest possible sequence; batches of shorter lines
-would leave GPU cycles on the table.
-Ideally, one would use batches that have as many lines as fit into GPU RAM,
-given the number of tokens of the longest line in the batch.
-To support variable batch sizes, Infinibatch allows to pass a function as the batch_size parameter.
-That function will be given the longest item of a batch and should estimate how many items of at most this length can fit.
-
In our example, we assume that batches can hold at most 150 tokens.
-Please change the above code as follows:
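A sketch of that change: only the batch_size argument of the BucketedReadaheadBatchIterator is replaced by a function, using the 150-character budget assumed above.

```python
bs = it.BucketedReadaheadBatchIterator(
    source_iterator = ds,   # the chunked_dataset_iterator from above
    read_ahead = 6,
    key = lambda line: len(line),
    # a batch may hold at most 150 characters in total, so the longest
    # line in a batch determines how many lines fit into it
    batch_size = lambda longest_line: 150 // len(longest_line),
    seed = 1)
```

The output should now look something like this: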
['consectetur adipiscing elit,', 'Lorem ipsum dolor sit amet,']
-['Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.']
-['sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
- 'The quick brown fox jumps over the lazy dog.']
-['Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.']
-
-
That shorter sentences got grouped, while longer did not because they would exceed the total of 150 characters.
-
Reading Batches Into Numpy Arrays
-
Lastly, we will need to feed batches into our favorite deep-learning tool.
-We will show how to convert the batches of text lines into padded numpy arrays.
-
In a typical NLP application, text items would be tokenized, and then each token
-would be represented by an index into a unit vocabulary.
-For simplicity, in this example each character is its own token,
-and each token's numeric unit index is just its ASCII code.
-These sequences are then padded to equal length with -1, and converted into a numpy array.
-
Please rerun the previous example, but first insert the following code before the final for loop.
-This example uses an Infinibatch MapIterator, which applies a user-supplied function or
-lambda to each item:
-
import numpy as np
-def collate(lines_batch):
- # tokenize all lines in the batch and map to unit ids
- ids_batch = [[ord(c) for c in line] for line in lines_batch]
- # create a padded numpy array as wide as the longest line,
- # where shorter sequences are padded with -1
- width = max(len(ids) for ids in ids_batch)
- return np.array([ids + [-1] * (width-len(ids)) for ids in ids_batch])
-
-bs = it.MapIterator(
- source_iterator = bs,
- transform = collate)
-
-
This will output batches like this. Note that in batches with multiple sentences,
-some entries are padded with -1.
The above tutorial showed you the use of the most common iterator type, as created by the
-convenience function chunked_dataset_iterator().
-
Not all real-life scenarios are covered by this function. For example, multi-task learning
-scenarios require more complex combinations of data. To create those, you will need
-to compose the necessary data reader from the underlying building blocks.
-This is described at the documentation of the module infinibatch.iterators.
-
-
-Expand source code
-
-
"""
-Infinibatch is a library of checkpointable iterators for randomized data loading of massive data sets in deep neural network training.
-
-
-## Features
-
- * support for corpora much larger than fit into RAM
- * hierarchical block+sentence-level randomization over the whole corpus, different randomization in each epoch
- * only load the data that is needed
- * very fast start-up time (does not need to read full corpus)
- * only requires the most basic of data preparation (e.g. no indexing)
- * for multi-GPU, only load what the respective GPU needs
- * 100% accurate check-pointing; restoring from a checkpoint does not re-read all data up to the checkpoint
- * support automatic bucketed batching with dynamic batch sizes
- * pre-fetching thread
- * composable, so as to support complex batching, e.g. negative samples from multiple documents
-
-
-## Getting Started
-
-Infinibatch requires Python 3.5 and has no dependencies.
-There is presently no pip package.
-To install it, please copy this library into a subfolder in your project:
-```bash
-cd YOUR_PROJECT_FOLDER
-git clone https://msasg.visualstudio.com/DefaultCollection/SDRG/_git/infinibatch
-```
-or, better, as a submodule reference:
-```bash
-git submodule add https://msasg.visualstudio.com/DefaultCollection/SDRG/_git/infinibatch
-```
-It is now located at `infinibatch/infinibatch`, e.g. the main import file is `infinibatch/infinibatch/__init__.py`.
-
-To import it, you need to add that folder to your `PYTHONPATH` variable externally, or to `sys.path` inside the code:
-```python
-import sys
-sys.path.insert(0,'infinibatch') # note: relative paths are relative to your current dir, not to the python script
-import infinibatch
-```
-
-## Tutorial
-
-This little tutorial walks you through the steps of preparing your data and consuming them from Python code as batches.
-
-### Infinibatch Basics: Iterators and Checkpointing
-
-Infinibatch provides [Python iterators](https://docs.python.org/3.5/glossary.html#term-iterator)
-to read your data.
-An iterator represents a stream of data that can be retrieved item by item, e.g. via a
-`for` loop or repeatedly calling `next()` on it.
-
-Infinibatch is agnostic to the data type of the items, which is determined by a user-supplied file-read function.
-In NLP applications, items would typically be tuples of text. In other applications,
-they can be images or an audio file with a textual annotation.
-
-Infinibatch makes it easy to read your data in randomized order, and supports checkpointing, which allows you to restart training exactly where you left off.
-
-Randomization is done _on the fly_, which means that it is not necessary to read the entire data set into memory
-to be shuffled. Infinibatch implements a hierarchical shuffling algorithm
-that only holds a subset of the data in RAM at any point in time.
-
-Infinibatch iterators are _checkpointable_.
-Checkpointing lets you retrieve the current position (the "checkpoint") in the data stream at any time, so that
-later, you can "rewind" to that same position.
-The sad reality is that long-running trainings occasionally crash.
-To be able to continue a crashed training as if it had not crashed,
-save your Infinibatch iterator's checkpoint to disk whenever you save an intermediate model during training.
-To restart a crashed training, reset the iterator to the saved checkpoint.
-The data reader will now yield the exact same data-item sequence it would have yielded without the crash.
-
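-Here is a minimal sketch of this workflow. It assumes that `train_reader` is any Infinibatch
-iterator (such as the ones created later in this tutorial) and that the checkpoint object
-returned by `getstate()` can be pickled; see the `infinibatch.iterators` documentation for
-the exact `getstate()`/`setstate()` interface.
-```python
-import pickle
-
-# when saving an intermediate model, also save the data reader's position
-checkpoint = train_reader.getstate()   # opaque object describing the current position
-with open('reader_checkpoint.pkl', 'wb') as f:
-    pickle.dump(checkpoint, f)
-
-# ...after a crash, rewind the reader to that position and resume training
-with open('reader_checkpoint.pkl', 'rb') as f:
-    checkpoint = pickle.load(f)
-train_reader.setstate(checkpoint)
-```
-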
-### Data Preparation
-
-Infinibatch has one requirement on your data organization:
-To use your data with Infinibatch, it must be split into a large number of small chunks.
-A chunk is the smallest unit of data that is loaded from disk into RAM. Infinibatch holds a random subset of chunks in memory
-that it randomly draws samples from.
-
-Below, we show how such a split can be created. An easy way to split your data into chunks is to use the Linux `split` command.
-
-In this tutorial, our "corpus" consists of 6 lines of text, where each line is one data item.
-To create that corpus, please run this command in a bash shell. It creates a 6-line text file named `corpus.txt`:
-```bash
-echo \\
-'Lorem ipsum dolor sit amet,
-consectetur adipiscing elit,
-sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
-Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
-Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
-The quick brown fox jumps over the lazy dog.' \\
-> corpus.txt
-```
-Now let us split it into 3 chunks of 2 lines each. Each chunk is stored as a zipped text file.
-We will create them inside a new subdirectory called `corpus_chunks`:
-```bash
-mkdir corpus_chunks
-split --lines 2 --numeric-suffixes \\
- --filter 'gzip > corpus_chunks/$FILE.txt.gz' \\
- corpus.txt corpus.
-```
-This will have created three files: `corpus_chunks/corpus.00.txt.gz`, `corpus_chunks/corpus.01.txt.gz`, and `corpus_chunks/corpus.02.txt.gz`.
-To verify whether the data has been split as expected, you can use this command:
-```bash
-zcat corpus_chunks/corpus.*.txt.gz
-```
-
-Hint: For large corpora, we recommend replacing `gzip` by `pigz` (`apt-get install pigz`), which runs notably faster via multi-threading.
-
-### Reading Items in Random Order With Infinibatch
-
-We will first show the easiest way to read data with Infinibatch, using the helper function `chunked_dataset_iterator()`.
-This function will create an Infinibatch iterator that yields the content of your data in random order.
-Please run the following program:
-```python
-import sys, gzip, glob
-sys.path.insert(0,'infinibatch')
-from infinibatch import datasets as ds
-
-ds = ds.chunked_dataset_iterator(
- chunk_refs = glob.glob('corpus_chunks/corpus.*.txt.gz'),
- read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb") \\
- .read()).decode(encoding='utf-8') \\
- .splitlines()),
- buffer_size = 6, seed = 1)
-
-for i in range(10):
- print(next(ds))
-```
-You should get output that contains the 6 example lines in randomized order:
-```text
-Lorem ipsum dolor sit amet,
-consectetur adipiscing elit,
-Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
-Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
-The quick brown fox jumps over the lazy dog.
-sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
-consectetur adipiscing elit,
-Lorem ipsum dolor sit amet,
-The quick brown fox jumps over the lazy dog.
-sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
-```
-Note: The `buffer_size` parameter determines how many sentences are read into memory at any given time,
-to draw randomized items from. In real settings with corpora of hundreds of millions of text lines,
-the `buffer_size` parameter should be set in the millions.
-RAM usage and startup time will be proportional to the buffer size
-(but much lower than having to load the entire corpus into RAM).
-
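-For illustration, a configuration for such a corpus might look like the sketch below;
-the chunk paths, the `read_chunk_fn` helper, and the concrete number are placeholders
-rather than recommendations:
-```python
-ds = ds.chunked_dataset_iterator(
-    chunk_refs = glob.glob('corpus_chunks/corpus.*.txt.gz'),
-    read_chunk_fn = read_chunk_fn,  # same kind of helper as in the example above
-    buffer_size = 10000000,         # millions of items for a real corpus
-    seed = 1)
-```
-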
-### Reading Items of Different Lengths in Batches
-
-For deep learning, we want to group multiple items into batches.
-For NLP tasks, items are often lines of text of varying length.
-Infinibatch implements an algorithm that randomizes the input sequence and groups it into
-batches of approximately the same length (aka _bucketing_).
-
-Infinibatch's `BucketedReadaheadBatchIterator` performs this task.
-It implements an algorithm modeled after the [Marian toolkit](https://github.com/marian-nmt/marian)
-that preloads a large number of randomized items (typically millions; in this example: 6),
-sorts them and groups them into batches of similar length, and then yields
-them, in turn, in randomized order.
-
-Here is an example. Note that the `BucketedReadaheadBatchIterator` accepts
-the previous randomized sentence sequence iterator (`ds`) as the source of items to randomize over.
-This is an example of how one forms pipelines of iterators with Infinibatch
-(a concept familiar from Python's own `itertools`).
-Once an iterator is passed to another as its source, consider it owned by that other iterator;
-it must no longer be accessed by the calling code.
-```python
-import sys, gzip, glob
-sys.path.insert(0,'infinibatch')
-from infinibatch import datasets as ds
-from infinibatch import iterators as it
-
-ds = ds.chunked_dataset_iterator(
- chunk_refs = glob.glob('corpus_chunks/corpus.*.txt.gz'),
- read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb") \\
- .read()).decode(encoding='utf-8') \\
- .splitlines()),
- buffer_size = 6, seed = 1)
-
-bs = it.BucketedReadaheadBatchIterator(
- source_iterator = ds, # note: this is the iterator from above
- read_ahead = 6,
- key = lambda line: len(line),
- batch_size = 2,
- seed = 1)
-
-for i in range(25):
- print(next(bs))
-```
-This code should output something like this:
-```python
-['sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
- 'The quick brown fox jumps over the lazy dog.']
-['consectetur adipiscing elit,', 'Lorem ipsum dolor sit amet,']
-['Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.',
- 'Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.']
-```
-followed by different permutations of the same tuples.
-As you can see, the sentences are in random order and grouped in batches of 2 of approximately the same length.
-You may notice that there is no variation in how the items get grouped into batches--that
-is an artifact of this example, and generally not the case in real use when the data size is much larger
-than the batch size.
-
-In NLP, sentence length often varies considerably. As a result, using batches of a fixed number of lines,
-as in the example above, will waste GPU RAM and cores.
-This is because the number of lines is limited by the longest possible sequence; batches of shorter lines
-would leave GPU cycles on the table.
-Ideally, one would use batches that have as many lines as fit into GPU RAM,
-given the number of tokens of the longest line in the batch.
-To support variable batch sizes, Infinibatch allows you to pass a function as the `batch_size` parameter.
-That function will be given the longest item of a batch and should estimate how many items of at most this length can fit.
-
-In our example, we assume that batches can hold at most 150 tokens (here, simply characters).
-Please change the above code as follows:
-```python
- batch_size = lambda longest_line: 150 // len(longest_line),
-```
-The output looks like this:
-```
-['consectetur adipiscing elit,', 'Lorem ipsum dolor sit amet,']
-['Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.']
-['sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
- 'The quick brown fox jumps over the lazy dog.']
-['Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.']
-```
-Note that the shorter sentences got grouped together, while the longer ones did not, because pairing them would exceed the total of 150 characters.
-
-### Reading Batches Into Numpy Arrays
-
-Lastly, we will need to feed batches into our favorite deep-learning tool.
-We will show how to convert the batches of text lines into padded `numpy` arrays.
-
-In a typical NLP application, text items would be tokenized, and then each token
-would be represented by an index into a unit vocabulary.
-For simplicity, in this example each character is its own token,
-and each token's numeric unit index is just its ASCII code.
-These sequences are then padded to equal length with -1, and converted into a `numpy` array.
-
-Please rerun the previous example, but first insert the following code before the final `for` loop.
-This example uses an Infinibatch `MapIterator`, which applies a user-supplied function or
-lambda to each item:
-```python
-import numpy as np
-def collate(lines_batch):
- # tokenize all lines in the batch and map to unit ids
- ids_batch = [[ord(c) for c in line] for line in lines_batch]
- # create a padded numpy array as wide as the longest line,
- # where shorter sequences are padded with -1
- width = max(len(ids) for ids in ids_batch)
- return np.array([ids + [-1] * (width-len(ids)) for ids in ids_batch])
-
-bs = it.MapIterator(
- source_iterator = bs,
- transform = collate)
-```
-This will output batches like this. Note that in batches with multiple sentences,
-some entries are padded with `-1`.
-```python
-[[ 99 111 110 115 101 99 116 101 116 117 114 32 97 100 105 112 105 115
- 99 105 110 103 32 101 108 105 116 44]
- [ 76 111 114 101 109 32 105 112 115 117 109 32 100 111 108 111 114 32
- 115 105 116 32 97 109 101 116 44 -1]]
-[[ 85 116 32 101 110 105 109 32 97 100 32 109 105 110 105 109 32 118
- 101 110 105 97 109 44 32 113 117 105 115 32 110 111 115 116 114 117
- 100 32 101 120 101 114 99 105 116 97 116 105 111 110 32 117 108 108
- 97 109 99 111 32 108 97 98 111 114 105 115 32 110 105 115 105 32
- 117 116 32 97 108 105 113 117 105 112 32 101 120 32 101 97 32 99
- 111 109 109 111 100 111 32 99 111 110 115 101 113 117 97 116 46]]
-[[115 101 100 32 100 111 32 101 105 117 115 109 111 100 32 116 101 109
- 112 111 114 32 105 110 99 105 100 105 100 117 110 116 32 117 116 32
- 108 97 98 111 114 101 32 101 116 32 100 111 108 111 114 101 32 109
- 97 103 110 97 32 97 108 105 113 117 97 46]
- [ 84 104 101 32 113 117 105 99 107 32 98 114 111 119 110 32 102 111
- 120 32 106 117 109 112 115 32 111 118 101 114 32 116 104 101 32 108
- 97 122 121 32 100 111 103 46 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
- -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1]]
-[[ 68 117 105 115 32 97 117 116 101 32 105 114 117 114 101 32 100 111
- 108 111 114 32 105 110 32 114 101 112 114 101 104 101 110 100 101 114
- 105 116 32 105 110 32 118 111 108 117 112 116 97 116 101 32 118 101
- 108 105 116 32 101 115 115 101 32 99 105 108 108 117 109 32 100 111
- 108 111 114 101 32 101 117 32 102 117 103 105 97 116 32 110 117 108
- 108 97 32 112 97 114 105 97 116 117 114 46]]
-```
-
-## Where To Go From Here
-
-The above tutorial showed you the use of the most common iterator type, as created by the
-convenience function `chunked_dataset_iterator()`.
-
-Not all real-life scenarios are covered by this function. For example, multi-task learning
-scenarios require more complex combinations of data. To create those, you will need
-to compose the necessary data reader from the underlying building blocks.
-This is described in the documentation of the `iterators` module.
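-
-For example, a multi-task reader that pairs items from two task-specific corpora could be
-sketched roughly as follows. The `ZipIterator` building block and its argument order are
-assumptions here, as are the chunk directories; please verify both against the
-`infinibatch.iterators` documentation before relying on them.
-```python
-import glob, gzip
-from infinibatch import datasets as ds
-from infinibatch import iterators as it
-
-# the same kind of chunk-reading helper as in the tutorial above
-read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb")
-                                  .read()).decode(encoding='utf-8')
-                                  .splitlines())
-
-# one randomized item stream per task
-task_a = ds.chunked_dataset_iterator(chunk_refs=glob.glob('task_a_chunks/*.txt.gz'),
-                                     read_chunk_fn=read_chunk_fn, buffer_size=6, seed=1)
-task_b = ds.chunked_dataset_iterator(chunk_refs=glob.glob('task_b_chunks/*.txt.gz'),
-                                     read_chunk_fn=read_chunk_fn, buffer_size=6, seed=1)
-
-# pair one item from each stream per training step
-paired = it.ZipIterator(task_a, task_b)
-
-for i in range(3):
-    print(next(paired))
-```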
-"""
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/akhaliq/deeplab2/utils/test_utils.py b/spaces/akhaliq/deeplab2/utils/test_utils.py
deleted file mode 100644
index 18269ec6d5d25fb02c59b5e1807c6bd425e99c55..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/utils/test_utils.py
+++ /dev/null
@@ -1,64 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Deeplab2 Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Provide utility functions to write simple tests."""
-import functools
-
-import numpy as np
-import tensorflow as tf
-
-
-NORMALIZATION_LAYERS = (
- tf.keras.layers.experimental.SyncBatchNormalization,
- tf.keras.layers.BatchNormalization
-)
-
-
-def create_strategy():
- """Returns a strategy based on available devices.
-
- Does NOT work with local_multiworker_tpu_test tests!
- """
- tpus = tf.config.list_logical_devices(device_type='TPU')
- gpus = tf.config.list_logical_devices(device_type='GPU')
- if tpus:
- resolver = tf.distribute.cluster_resolver.TPUClusterResolver('')
- tf.config.experimental_connect_to_cluster(resolver)
- tf.tpu.experimental.initialize_tpu_system(resolver)
- return tf.distribute.TPUStrategy(resolver)
- elif gpus:
- return tf.distribute.OneDeviceStrategy('/gpu:0')
- else:
- return tf.distribute.OneDeviceStrategy('/cpu:0')
-
-
-def test_all_strategies(func):
- """Decorator to test CPU, GPU and TPU strategies."""
- @functools.wraps(func)
- def decorator(self):
- strategy = create_strategy()
- return func(self, strategy)
- return decorator
-
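-# Example usage (a sketch; assumes the test class derives from tf.test.TestCase and that
-# the wrapped test method receives the created strategy as its only extra argument):
-#
-#   class MyLayerTest(tf.test.TestCase):
-#
-#     @test_all_strategies
-#     def test_forward_pass(self, strategy):
-#       with strategy.scope():
-#         features = create_test_input(batch=2, height=33, width=33, channels=3)
-#         # ... build the layer under test and check its output shape here ...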
-
-def create_test_input(batch, height, width, channels):
- """Creates test input tensor."""
- return tf.convert_to_tensor(
- np.tile(
- np.reshape(
- np.reshape(np.arange(height), [height, 1]) +
- np.reshape(np.arange(width), [1, width]),
- [1, height, width, 1]),
- [batch, 1, 1, channels]), dtype=tf.float32)
diff --git a/spaces/akhaliq/t5-base-fine-tuned-on-jfleg/app.py b/spaces/akhaliq/t5-base-fine-tuned-on-jfleg/app.py
deleted file mode 100644
index 2f85350a33933ddbfb48829819e8921a722173b0..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/t5-base-fine-tuned-on-jfleg/app.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import gradio as gr
-title = "t5-base-fine-tuned-on-jfleg"
-description = "Gradio Demo for T5-base model fine-tuned on the JFLEG dataset with the objective of text2text-generation. To use it, simply add your text, or click one of the examples to load them. Read more at the links below."
-article = "
If you want to unlock your phone and use it with any carrier, you might need a software tool called CDMA Workshop 2.7. CDMA stands for Code Division Multiple Access, which is a type of cell phone technology that was popular before the advent of 4G and 5G networks[^2^]. CDMA Workshop 2.7 is a program that can read and write information on your phone's memory, such as the ESN (Electronic Serial Number), MEID (Mobile Equipment Identifier), PRL (Preferred Roaming List), and SPC (Service Programming Code).
-
However, CDMA Workshop 2.7 is not a free software. You have to buy a license to use it legally. If you don't want to pay for it, you might be tempted to download a cracked version of it from the internet. A cracked version is a modified version that bypasses the security checks and allows you to use it without a license. However, downloading a cracked version of CDMA Workshop 2.7 is risky and illegal. You might end up with a virus, malware, or spyware on your computer or phone. You might also face legal consequences for violating the copyright laws.
-
Cdma Workshop 2.7 Full Cracked Rar Filestube -- 12
One of the sources where you can find a cracked version of CDMA Workshop 2.7 is Rar Filestube. Rar Filestube is a website that lets you search and download compressed files (such as RAR files) from various file hosting services[^3^]. A RAR file is a type of compressed file that can contain one or more other files and folders inside of it[^3^]. However, Rar Filestube is not a reliable or safe source for downloading software. Many of the files on Rar Filestube are fake, corrupted, or infected with viruses. You might also encounter pop-ups, ads, or surveys that try to trick you into giving away your personal information or money.
-
Therefore, we do not recommend using CDMA Workshop 2.7 Full Cracked Rar Filestube -- 12 or any other similar file to unlock your phone. Instead, you should look for other legitimate and legal ways to unlock your phone, such as contacting your carrier, using an online service, or buying an unlocked phone.
-
-
If you still want to use CDMA Workshop 2.7 to unlock your phone, you need to follow some steps to access your phone's memory and enter the unlock code. The steps may vary depending on your phone model and service provider, but here is a general guide:
-
-
Download and install the original version of CDMA Workshop 2.7 from its official website. You need to buy a license to use it legally.
-
Download and install the necessary drivers for your phone. You can find them on your phone manufacturer's website or on online forums.
-
Put your phone in diagnostic mode by dialing a specific code on your phone's keypad. The code may be different for different phones, but some common ones are ##3424#, ##8778#, or ##8727277#. You can also find the code in the help menu of CDMA Workshop 2.7 or on online forums.
-
Connect your phone to your computer via USB cable. Your computer should recognize your phone as a new device and install the drivers automatically.
-
Open CDMA Workshop 2.7 and click on the Port tab at the top left corner. Select your phone's COM port from the list and click Connect.
-
Click on the Read button and wait for the program to read your phone's information.
-
Click on the Security tab and then click on SPC. Select Send and enter 000000 as the SPC code. This is the default code for most phones, but it may be different for some phones. You can find the correct code on online forums or by contacting your service provider.
-
Click on SPC again and select Send ASCII (SAMSUNG). This will send a random code to your phone and unlock it.
-
Click on the Main tab and enter your new service provider's details, such as NAM, MIN, MDN, SID, PRL, etc. You can find these details on online forums or by contacting your new service provider.
-
Click on Write button and wait for the program to write the new settings to your phone.
-
Disconnect your phone from your computer and restart it. Your phone should now be unlocked and ready to use with any network.
-
-
Note that this method may not work for all phones or all networks. Some phones may require additional steps or tools to unlock them. Some networks may not support CDMA phones or may have different frequencies or bands that are incompatible with your phone. You should always do some research before attempting to unlock your phone with CDMA Workshop 2.7 or any other software.
- d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Elle Kennedy The Deal Epub Vk A Fake Date Turns into a Real Deal in This Sizzling Romance.md b/spaces/bioriAsaeru/text-to-voice/Elle Kennedy The Deal Epub Vk A Fake Date Turns into a Real Deal in This Sizzling Romance.md
deleted file mode 100644
index 2ee4148643d378e95c642528833789a278e2ff62..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Elle Kennedy The Deal Epub Vk A Fake Date Turns into a Real Deal in This Sizzling Romance.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Kyun! Ho Gaya Na Hd Movie Download NEW! 1080p.md b/spaces/bioriAsaeru/text-to-voice/Kyun! Ho Gaya Na Hd Movie Download NEW! 1080p.md
deleted file mode 100644
index bcd3c5e6e91459a9c97cab9125163b657a318488..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Kyun! Ho Gaya Na Hd Movie Download NEW! 1080p.md
+++ /dev/null
@@ -1,73 +0,0 @@
-## Kyun! Ho Gaya Na Hd Movie Download 1080p
-
-
-
-**Download File --->>> [https://kneedacexbrew.blogspot.com/?d=2twuqD](https://kneedacexbrew.blogspot.com/?d=2twuqD)**
-
-
-
- Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Kyun! Ho Gaya Na Hd Movie Download 1080p":
-
-# How to Download Kyun! Ho Gaya Na Full HD Movie Online
-
-
-
-If you are looking for a romantic comedy movie to watch with your loved ones, you might want to check out **Kyun! Ho Gaya Na**, a 2004 Hindi film starring **Vivek Oberoi, Aishwarya Rai and Amitabh Bachchan**. The movie tells the story of Diya and Arjun, who have feelings for each other but never confess them. When Diya leaves for her uncle's home, Arjun follows her and tries to win her heart. Will they finally admit their love for each other?
-
-
-
-Kyun! Ho Gaya Na is a fun and entertaining movie that will make you laugh and cry. It has a rating of **4.2/10** on IMDb and is available to watch online on **ZEE5**, a popular streaming platform that offers a variety of movies and shows in different languages. You can watch Kyun! Ho Gaya Na full HD movie online on ZEE5 with a subscription or a free trial.
-
-
-
-However, if you want to download Kyun! Ho Gaya Na full HD movie online for free, you might face some difficulties. The movie is not available on any other legal platforms like Netflix, Amazon Prime Video or Hotstar. You might find some torrent sites or illegal websites that claim to offer Kyun! Ho Gaya Na full HD movie download 1080p, but they are not safe or reliable. They might contain viruses, malware or spyware that can harm your device or compromise your privacy. They might also have low-quality videos, broken links or fake downloads that can waste your time and bandwidth.
-
-
-
-Therefore, we recommend you to avoid such illegal sources and watch Kyun! Ho Gaya Na full HD movie online on ZEE5 legally and safely. You can enjoy the movie in high-quality video and audio, with subtitles and without any ads or interruptions. You can also access other movies and shows on ZEE5 that suit your taste and preferences.
-
-
-
-To watch Kyun! Ho Gaya Na full HD movie online on ZEE5, you need to follow these simple steps:
-
-
-
-1. Visit the official website of ZEE5 or download the ZEE5 app on your device.
-
-2. Create an account or sign in with your existing account.
-
-3. Choose a subscription plan that suits your budget and needs.
-
-4. Search for Kyun! Ho Gaya Na in the search bar or browse through the categories.
-
-5. Click on the movie poster and start watching it online.
-
-
-
-That's it! You can now enjoy Kyun! Ho Gaya Na full HD movie online on ZEE5 anytime and anywhere. You can also download the movie offline on your device if you want to watch it later without an internet connection.
-
-
-
-Kyun! Ho Gaya Na is a movie that will make you smile and warm your heart. It has a great cast, a lovely soundtrack and a sweet story. Don't miss this chance to watch it online on ZEE5 with your loved ones. You will not regret it!
-
-Here is a possible continuation of the article:
-
-If you are wondering what makes Kyun! Ho Gaya Na such a special movie, here are some reasons why you should watch it:
-
-
-
-- The movie has a **star-studded cast** that includes some of the most popular and talented actors in Bollywood. Vivek Oberoi and Aishwarya Rai have a great chemistry and deliver charming performances as the lead pair. Amitabh Bachchan adds his charisma and grace as Diya's uncle and Arjun's mentor. The movie also features other actors like Om Puri, Rati Agnihotri, Suniel Shetty and Tinnu Anand in supporting roles.
-
-- The movie has a **refreshing and humorous plot** that explores the theme of love and friendship. The movie shows how Diya and Arjun, who have different personalities and outlooks on life, gradually fall in love with each other despite their differences. The movie also has some hilarious scenes and dialogues that will make you laugh out loud. The movie has a good balance of comedy, drama and romance that will keep you engaged throughout.
-
-- The movie has a **beautiful and melodious soundtrack** that complements the mood and tone of the movie. The movie has some memorable songs that are composed by Shankar-Ehsaan-Loy and sung by singers like Sonu Nigam, Shreya Ghoshal, Sadhana Sargam and Udit Narayan. The songs range from romantic to peppy to emotional and suit the situations and emotions of the characters. Some of the popular songs from the movie are "Aao Na", "Baat Samjha Karo", "Pyaar Mein Sau Uljhane" and "Dheere Dheere".
-
-
-
-Kyun! Ho Gaya Na is a movie that will make you feel good and happy. It is a perfect movie to watch with your partner or friends on a cozy night. You can watch Kyun! Ho Gaya Na full HD movie online on ZEE5 with a subscription or a free trial. You can also download the movie offline on your device if you want to watch it later without an internet connection.
-
-
-
-So what are you waiting for? Watch Kyun! Ho Gaya Na full HD movie online on ZEE5 today and enjoy this delightful romantic comedy!
-
- dfd1c89656
\ No newline at end of file
diff --git a/spaces/birdortyedi/cifr-pytorch/configs/default.py b/spaces/birdortyedi/cifr-pytorch/configs/default.py
deleted file mode 100644
index 3d90d978badc5d8c9633c7398224e52892f21fa5..0000000000000000000000000000000000000000
--- a/spaces/birdortyedi/cifr-pytorch/configs/default.py
+++ /dev/null
@@ -1,114 +0,0 @@
-from yacs.config import CfgNode as CN
-
-_C = CN()
-
-_C.SYSTEM = CN()
-_C.SYSTEM.NUM_GPU = 2
-_C.SYSTEM.NUM_WORKERS = 4
-
-_C.WANDB = CN()
-_C.WANDB.PROJECT_NAME = "contrastive-style-learning-for-ifr"
-_C.WANDB.ENTITY = "vvgl-ozu"
-_C.WANDB.RUN = 3
-_C.WANDB.LOG_DIR = ""
-_C.WANDB.NUM_ROW = 0
-
-_C.TRAIN = CN()
-_C.TRAIN.NUM_TOTAL_STEP = 200000
-_C.TRAIN.START_STEP = 0
-_C.TRAIN.BATCH_SIZE = 16
-_C.TRAIN.SHUFFLE = True
-_C.TRAIN.LOG_INTERVAL = 100
-_C.TRAIN.EVAL_INTERVAL = 1000
-_C.TRAIN.SAVE_INTERVAL = 1000
-_C.TRAIN.SAVE_DIR = "./weights"
-_C.TRAIN.RESUME = True
-_C.TRAIN.VISUALIZE_INTERVAL = 100
-_C.TRAIN.TUNE = False
-
-_C.MODEL = CN()
-_C.MODEL.NAME = "cifr"
-_C.MODEL.IS_TRAIN = True
-_C.MODEL.NUM_CLASS = 17
-_C.MODEL.CKPT = ""
-_C.MODEL.PRETRAINED = ""
-
-_C.MODEL.IFR = CN()
-_C.MODEL.IFR.NAME = "ContrastiveInstaFilterRemovalNetwork"
-_C.MODEL.IFR.NUM_CHANNELS = 32
-_C.MODEL.IFR.DESTYLER_CHANNELS = 32
-_C.MODEL.IFR.SOLVER = CN()
-_C.MODEL.IFR.SOLVER.LR = 2e-4
-_C.MODEL.IFR.SOLVER.BETAS = (0.5, 0.999)
-_C.MODEL.IFR.SOLVER.SCHEDULER = []
-_C.MODEL.IFR.SOLVER.DECAY_RATE = 0.
-_C.MODEL.IFR.DS_FACTOR = 4
-
-_C.MODEL.PATCH = CN()
-_C.MODEL.PATCH.NUM_CHANNELS = 256
-_C.MODEL.PATCH.NUM_PATCHES = 256
-_C.MODEL.PATCH.NUM_LAYERS = 6
-_C.MODEL.PATCH.USE_MLP = True
-_C.MODEL.PATCH.SHUFFLE_Y = True
-_C.MODEL.PATCH.LR = 1e-4
-_C.MODEL.PATCH.BETAS = (0.5, 0.999)
-_C.MODEL.PATCH.T = 0.07
-
-_C.MODEL.D = CN()
-_C.MODEL.D.NAME = "1-ChOutputDiscriminator"
-_C.MODEL.D.NUM_CHANNELS = 32
-_C.MODEL.D.NUM_CRITICS = 3
-_C.MODEL.D.SOLVER = CN()
-_C.MODEL.D.SOLVER.LR = 1e-4
-_C.MODEL.D.SOLVER.BETAS = (0.5, 0.999)
-_C.MODEL.D.SOLVER.SCHEDULER = []
-_C.MODEL.D.SOLVER.DECAY_RATE = 0.01
-
-_C.ESRGAN = CN()
-_C.ESRGAN.WEIGHTS = "weights/RealESRGAN_x{}plus.pth"
-
-_C.FASHIONMASKRCNN = CN()
-_C.FASHIONMASKRCNN.CFG_PATH = "configs/fashion.yaml"
-_C.FASHIONMASKRCNN.WEIGHTS = "weights/fashion.pth"
-_C.FASHIONMASKRCNN.SCORE_THRESH_TEST = 0.6
-_C.FASHIONMASKRCNN.MIN_SIZE_TEST = 512
-
-_C.OPTIM = CN()
-_C.OPTIM.GP = 10.
-_C.OPTIM.MASK = 1
-_C.OPTIM.RECON = 1.4
-_C.OPTIM.SEMANTIC = 1e-1
-_C.OPTIM.TEXTURE = 2e-1
-_C.OPTIM.ADVERSARIAL = 1e-3
-_C.OPTIM.AUX = 0.5
-_C.OPTIM.CONTRASTIVE = 0.1
-_C.OPTIM.NLL = 1.0
-
-_C.DATASET = CN()
-_C.DATASET.NAME = "IFFI"
-_C.DATASET.ROOT = "../../Downloads/IFFI-dataset/train" # "../../Downloads/IFFI-dataset/train"
-_C.DATASET.TEST_ROOT = "../../Datasets/IFFI-dataset/test" # "../../Downloads/IFFI-dataset/test"
-_C.DATASET.DS_TEST_ROOT = "../../Downloads/IFFI-dataset/test/" # "../../Downloads/IFFI-dataset/test"
-_C.DATASET.DS_JSON_FILE = "../../Downloads/IFFI-dataset-only-orgs/instances_default.json"
-_C.DATASET.SIZE = 256
-_C.DATASET.CROP_SIZE = 512
-_C.DATASET.MEAN = [0.5, 0.5, 0.5]
-_C.DATASET.STD = [0.5, 0.5, 0.5]
-
-_C.TEST = CN()
-_C.TEST.OUTPUT_DIR = "./outputs"
-_C.TEST.ABLATION = False
-_C.TEST.WEIGHTS = ""
-_C.TEST.BATCH_SIZE = 32
-_C.TEST.IMG_ID = 52
-
-
-def get_cfg_defaults():
- """Get a yacs CfgNode object with default values for my_project."""
- # Return a clone so that the defaults will not be altered
- # This is for the "local variable" use pattern
- return _C.clone()
-
-
-# provide a way to import the defaults as a global singleton:
-cfg = _C # users can `from config import cfg`
diff --git a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/metrics/metric_utils.py b/spaces/birsardar/stable-diffusion-mat-outpainting-primer/metrics/metric_utils.py
deleted file mode 100644
index 1a64bbf488880aef5580a2c6b6dfdf447d9fd9a5..0000000000000000000000000000000000000000
--- a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/metrics/metric_utils.py
+++ /dev/null
@@ -1,434 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import os
-import time
-import hashlib
-import pickle
-import copy
-import uuid
-import numpy as np
-import torch
-import dnnlib
-import math
-import cv2
-
-#----------------------------------------------------------------------------
-
-class MetricOptions:
- def __init__(self, G=None, G_kwargs={}, dataset_kwargs={}, num_gpus=1, rank=0, device=None, progress=None, cache=True):
- assert 0 <= rank < num_gpus
- self.G = G
- self.G_kwargs = dnnlib.EasyDict(G_kwargs)
- self.dataset_kwargs = dnnlib.EasyDict(dataset_kwargs)
- self.num_gpus = num_gpus
- self.rank = rank
- self.device = device if device is not None else torch.device('cuda', rank)
- self.progress = progress.sub() if progress is not None and rank == 0 else ProgressMonitor()
- self.cache = cache
-
-#----------------------------------------------------------------------------
-
-_feature_detector_cache = dict()
-
-def get_feature_detector_name(url):
- return os.path.splitext(url.split('/')[-1])[0]
-
-def get_feature_detector(url, device=torch.device('cpu'), num_gpus=1, rank=0, verbose=False):
- assert 0 <= rank < num_gpus
- key = (url, device)
- if key not in _feature_detector_cache:
- is_leader = (rank == 0)
- if not is_leader and num_gpus > 1:
- torch.distributed.barrier() # leader goes first
- with dnnlib.util.open_url(url, verbose=(verbose and is_leader)) as f:
- _feature_detector_cache[key] = torch.jit.load(f).eval().to(device)
- if is_leader and num_gpus > 1:
- torch.distributed.barrier() # others follow
- return _feature_detector_cache[key]
-
-#----------------------------------------------------------------------------
-
-class FeatureStats:
- def __init__(self, capture_all=False, capture_mean_cov=False, max_items=None):
- self.capture_all = capture_all
- self.capture_mean_cov = capture_mean_cov
- self.max_items = max_items
- self.num_items = 0
- self.num_features = None
- self.all_features = None
- self.raw_mean = None
- self.raw_cov = None
-
- def set_num_features(self, num_features):
- if self.num_features is not None:
- assert num_features == self.num_features
- else:
- self.num_features = num_features
- self.all_features = []
- self.raw_mean = np.zeros([num_features], dtype=np.float64)
- self.raw_cov = np.zeros([num_features, num_features], dtype=np.float64)
-
- def is_full(self):
- return (self.max_items is not None) and (self.num_items >= self.max_items)
-
- def append(self, x):
- x = np.asarray(x, dtype=np.float32)
- assert x.ndim == 2
- if (self.max_items is not None) and (self.num_items + x.shape[0] > self.max_items):
- if self.num_items >= self.max_items:
- return
- x = x[:self.max_items - self.num_items]
-
- self.set_num_features(x.shape[1])
- self.num_items += x.shape[0]
- if self.capture_all:
- self.all_features.append(x)
- if self.capture_mean_cov:
- x64 = x.astype(np.float64)
- self.raw_mean += x64.sum(axis=0)
- self.raw_cov += x64.T @ x64
-
- def append_torch(self, x, num_gpus=1, rank=0):
- assert isinstance(x, torch.Tensor) and x.ndim == 2
- assert 0 <= rank < num_gpus
- if num_gpus > 1:
- ys = []
- for src in range(num_gpus):
- y = x.clone()
- torch.distributed.broadcast(y, src=src)
- ys.append(y)
- x = torch.stack(ys, dim=1).flatten(0, 1) # interleave samples
- self.append(x.cpu().numpy())
-
- def get_all(self):
- assert self.capture_all
- return np.concatenate(self.all_features, axis=0)
-
- def get_all_torch(self):
- return torch.from_numpy(self.get_all())
-
- def get_mean_cov(self):
- assert self.capture_mean_cov
- mean = self.raw_mean / self.num_items
- cov = self.raw_cov / self.num_items
- cov = cov - np.outer(mean, mean)
- return mean, cov
-
- def save(self, pkl_file):
- with open(pkl_file, 'wb') as f:
- pickle.dump(self.__dict__, f)
-
- @staticmethod
- def load(pkl_file):
- with open(pkl_file, 'rb') as f:
- s = dnnlib.EasyDict(pickle.load(f))
- obj = FeatureStats(capture_all=s.capture_all, max_items=s.max_items)
- obj.__dict__.update(s)
- return obj
-
-#----------------------------------------------------------------------------
-
-class ProgressMonitor:
- def __init__(self, tag=None, num_items=None, flush_interval=1000, verbose=False, progress_fn=None, pfn_lo=0, pfn_hi=1000, pfn_total=1000):
- self.tag = tag
- self.num_items = num_items
- self.verbose = verbose
- self.flush_interval = flush_interval
- self.progress_fn = progress_fn
- self.pfn_lo = pfn_lo
- self.pfn_hi = pfn_hi
- self.pfn_total = pfn_total
- self.start_time = time.time()
- self.batch_time = self.start_time
- self.batch_items = 0
- if self.progress_fn is not None:
- self.progress_fn(self.pfn_lo, self.pfn_total)
-
- def update(self, cur_items):
- assert (self.num_items is None) or (cur_items <= self.num_items)
- if (cur_items < self.batch_items + self.flush_interval) and (self.num_items is None or cur_items < self.num_items):
- return
- cur_time = time.time()
- total_time = cur_time - self.start_time
- time_per_item = (cur_time - self.batch_time) / max(cur_items - self.batch_items, 1)
- if (self.verbose) and (self.tag is not None):
- print(f'{self.tag:<19s} items {cur_items:<7d} time {dnnlib.util.format_time(total_time):<12s} ms/item {time_per_item*1e3:.2f}')
- self.batch_time = cur_time
- self.batch_items = cur_items
-
- if (self.progress_fn is not None) and (self.num_items is not None):
- self.progress_fn(self.pfn_lo + (self.pfn_hi - self.pfn_lo) * (cur_items / self.num_items), self.pfn_total)
-
- def sub(self, tag=None, num_items=None, flush_interval=1000, rel_lo=0, rel_hi=1):
- return ProgressMonitor(
- tag = tag,
- num_items = num_items,
- flush_interval = flush_interval,
- verbose = self.verbose,
- progress_fn = self.progress_fn,
- pfn_lo = self.pfn_lo + (self.pfn_hi - self.pfn_lo) * rel_lo,
- pfn_hi = self.pfn_lo + (self.pfn_hi - self.pfn_lo) * rel_hi,
- pfn_total = self.pfn_total,
- )
-
-#----------------------------------------------------------------------------
-
-def compute_feature_stats_for_dataset(opts, detector_url, detector_kwargs, rel_lo=0, rel_hi=1, batch_size=64, data_loader_kwargs=None, max_items=None, **stats_kwargs):
- dataset = dnnlib.util.construct_class_by_name(**opts.dataset_kwargs)
- if data_loader_kwargs is None:
- data_loader_kwargs = dict(pin_memory=True, num_workers=3, prefetch_factor=2)
-
- # Try to lookup from cache.
- cache_file = None
- if opts.cache:
- # Choose cache file name.
- args = dict(dataset_kwargs=opts.dataset_kwargs, detector_url=detector_url, detector_kwargs=detector_kwargs, stats_kwargs=stats_kwargs)
- md5 = hashlib.md5(repr(sorted(args.items())).encode('utf-8'))
- cache_tag = f'{dataset.name}-{get_feature_detector_name(detector_url)}-{md5.hexdigest()}'
- cache_file = dnnlib.make_cache_dir_path('gan-metrics', cache_tag + '.pkl')
-
- # Check if the file exists (all processes must agree).
- flag = os.path.isfile(cache_file) if opts.rank == 0 else False
- if opts.num_gpus > 1:
- flag = torch.as_tensor(flag, dtype=torch.float32, device=opts.device)
- torch.distributed.broadcast(tensor=flag, src=0)
- flag = (float(flag.cpu()) != 0)
-
- # Load.
- if flag:
- return FeatureStats.load(cache_file)
-
- # Initialize.
- num_items = len(dataset)
- if max_items is not None:
- num_items = min(num_items, max_items)
- stats = FeatureStats(max_items=num_items, **stats_kwargs)
- progress = opts.progress.sub(tag='dataset features', num_items=num_items, rel_lo=rel_lo, rel_hi=rel_hi)
- detector = get_feature_detector(url=detector_url, device=opts.device, num_gpus=opts.num_gpus, rank=opts.rank, verbose=progress.verbose)
-
- # Main loop.
- item_subset = [(i * opts.num_gpus + opts.rank) % num_items for i in range((num_items - 1) // opts.num_gpus + 1)]
- # for images, _labels in torch.utils.data.DataLoader(dataset=dataset, sampler=item_subset, batch_size=batch_size, **data_loader_kwargs):
- # adaptation to inpainting
- for images, masks, _labels in torch.utils.data.DataLoader(dataset=dataset, sampler=item_subset, batch_size=batch_size,
- **data_loader_kwargs):
- # --------------------------------
- if images.shape[1] == 1:
- images = images.repeat([1, 3, 1, 1])
- features = detector(images.to(opts.device), **detector_kwargs)
- stats.append_torch(features, num_gpus=opts.num_gpus, rank=opts.rank)
- progress.update(stats.num_items)
-
- # Save to cache.
- if cache_file is not None and opts.rank == 0:
- os.makedirs(os.path.dirname(cache_file), exist_ok=True)
- temp_file = cache_file + '.' + uuid.uuid4().hex
- stats.save(temp_file)
- os.replace(temp_file, cache_file) # atomic
- return stats
-
-#----------------------------------------------------------------------------
-
-def compute_feature_stats_for_generator(opts, detector_url, detector_kwargs, rel_lo=0, rel_hi=1, batch_size=64, batch_gen=None, jit=False, data_loader_kwargs=None, **stats_kwargs):
- if data_loader_kwargs is None:
- data_loader_kwargs = dict(pin_memory=True, num_workers=3, prefetch_factor=2)
-
- if batch_gen is None:
- batch_gen = min(batch_size, 4)
- assert batch_size % batch_gen == 0
-
- # Setup generator and load labels.
- G = copy.deepcopy(opts.G).eval().requires_grad_(False).to(opts.device)
- dataset = dnnlib.util.construct_class_by_name(**opts.dataset_kwargs)
-
- # Image generation func.
- def run_generator(img_in, mask_in, z, c):
- img = G(img_in, mask_in, z, c, **opts.G_kwargs)
- # img = (img * 127.5 + 128).clamp(0, 255).to(torch.uint8)
- img = ((img + 1.0) * 127.5).clamp(0, 255).round().to(torch.uint8)
- return img
-
- # # JIT.
- # if jit:
- # z = torch.zeros([batch_gen, G.z_dim], device=opts.device)
- # c = torch.zeros([batch_gen, G.c_dim], device=opts.device)
- # run_generator = torch.jit.trace(run_generator, [z, c], check_trace=False)
-
- # Initialize.
- stats = FeatureStats(**stats_kwargs)
- assert stats.max_items is not None
- progress = opts.progress.sub(tag='generator features', num_items=stats.max_items, rel_lo=rel_lo, rel_hi=rel_hi)
- detector = get_feature_detector(url=detector_url, device=opts.device, num_gpus=opts.num_gpus, rank=opts.rank, verbose=progress.verbose)
-
- # Main loop.
- item_subset = [(i * opts.num_gpus + opts.rank) % stats.max_items for i in range((stats.max_items - 1) // opts.num_gpus + 1)]
- for imgs_batch, masks_batch, labels_batch in torch.utils.data.DataLoader(dataset=dataset, sampler=item_subset,
- batch_size=batch_size,
- **data_loader_kwargs):
- images = []
- imgs_gen = (imgs_batch.to(opts.device).to(torch.float32) / 127.5 - 1).split(batch_gen)
- masks_gen = masks_batch.to(opts.device).to(torch.float32).split(batch_gen)
- for img_in, mask_in in zip(imgs_gen, masks_gen):
- z = torch.randn([img_in.shape[0], G.z_dim], device=opts.device)
- c = [dataset.get_label(np.random.randint(len(dataset))) for _i in range(img_in.shape[0])]
- c = torch.from_numpy(np.stack(c)).pin_memory().to(opts.device)
- images.append(run_generator(img_in, mask_in, z, c))
- images = torch.cat(images)
- if images.shape[1] == 1:
- images = images.repeat([1, 3, 1, 1])
- features = detector(images, **detector_kwargs)
- stats.append_torch(features, num_gpus=opts.num_gpus, rank=opts.rank)
- progress.update(stats.num_items)
- return stats
-
-#----------------------------------------------------------------------------
-
-def compute_image_stats_for_generator(opts, rel_lo=0, rel_hi=1, batch_size=64, batch_gen=None, jit=False, data_loader_kwargs=None, **stats_kwargs):
- if data_loader_kwargs is None:
- data_loader_kwargs = dict(pin_memory=True, num_workers=3, prefetch_factor=2)
-
- if batch_gen is None:
- batch_gen = min(batch_size, 4)
- assert batch_size % batch_gen == 0
-
- # Setup generator and load labels.
- G = copy.deepcopy(opts.G).eval().requires_grad_(False).to(opts.device)
- dataset = dnnlib.util.construct_class_by_name(**opts.dataset_kwargs)
-
- # Image generation func.
- def run_generator(img_in, mask_in, z, c):
- img = G(img_in, mask_in, z, c, **opts.G_kwargs)
- # img = (img * 127.5 + 128).clamp(0, 255).to(torch.uint8)
- img = ((img + 1.0) * 127.5).clamp(0, 255).round().to(torch.uint8)
- return img
-
- # Initialize.
- stats = FeatureStats(**stats_kwargs)
- assert stats.max_items is not None
- progress = opts.progress.sub(tag='generator images', num_items=stats.max_items, rel_lo=rel_lo, rel_hi=rel_hi)
-
- # Main loop.
- item_subset = [(i * opts.num_gpus + opts.rank) % stats.max_items for i in range((stats.max_items - 1) // opts.num_gpus + 1)]
- for imgs_batch, masks_batch, labels_batch in torch.utils.data.DataLoader(dataset=dataset, sampler=item_subset,
- batch_size=batch_size,
- **data_loader_kwargs):
- images = []
- imgs_gen = (imgs_batch.to(opts.device).to(torch.float32) / 127.5 - 1).split(batch_gen)
- masks_gen = masks_batch.to(opts.device).to(torch.float32).split(batch_gen)
- for img_in, mask_in in zip(imgs_gen, masks_gen):
- z = torch.randn([img_in.shape[0], G.z_dim], device=opts.device)
- c = [dataset.get_label(np.random.randint(len(dataset))) for _i in range(img_in.shape[0])]
- c = torch.from_numpy(np.stack(c)).pin_memory().to(opts.device)
- images.append(run_generator(img_in, mask_in, z, c))
- images = torch.cat(images)
- if images.shape[1] == 1:
- images = images.repeat([1, 3, 1, 1])
-
- assert imgs_batch.shape == images.shape
- metrics = []
- for i in range(imgs_batch.shape[0]):
- img_real = np.transpose(imgs_batch[i].cpu().numpy(), [1, 2, 0])
- img_gen = np.transpose(images[i].cpu().numpy(), [1, 2, 0])
- psnr = calculate_psnr(img_gen, img_real)
- ssim = calculate_ssim(img_gen, img_real)
- l1 = calculate_l1(img_gen, img_real)
- metrics.append([psnr, ssim, l1])
- metrics = torch.from_numpy(np.array(metrics)).to(torch.float32).to(opts.device)
-
- stats.append_torch(metrics, num_gpus=opts.num_gpus, rank=opts.rank)
- progress.update(stats.num_items)
- return stats
-
-
-def calculate_psnr(img1, img2):
- # img1 and img2 have range [0, 255]
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- mse = np.mean((img1 - img2) ** 2)
- if mse == 0:
- return float('inf')
-
- return 20 * math.log10(255.0 / math.sqrt(mse))
-
-
-def calculate_ssim(img1, img2):
- C1 = (0.01 * 255) ** 2
- C2 = (0.03 * 255) ** 2
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- kernel = cv2.getGaussianKernel(11, 1.5)
- window = np.outer(kernel, kernel.transpose())
-
- mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]
- mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
- mu1_sq = mu1 ** 2
- mu2_sq = mu2 ** 2
- mu1_mu2 = mu1 * mu2
- sigma1_sq = cv2.filter2D(img1 ** 2, -1, window)[5:-5, 5:-5] - mu1_sq
- sigma2_sq = cv2.filter2D(img2 ** 2, -1, window)[5:-5, 5:-5] - mu2_sq
- sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
-
- return ssim_map.mean()
-
-
-def calculate_l1(img1, img2):
- img1 = img1.astype(np.float64) / 255.0
- img2 = img2.astype(np.float64) / 255.0
- l1 = np.mean(np.abs(img1 - img2))
-
- return l1
-
-
-# def compute_feature_stats_for_generator(opts, detector_url, detector_kwargs, rel_lo=0, rel_hi=1, batch_size=64, batch_gen=None, jit=False, **stats_kwargs):
-# if batch_gen is None:
-# batch_gen = min(batch_size, 4)
-# assert batch_size % batch_gen == 0
-#
-# # Setup generator and load labels.
-# G = copy.deepcopy(opts.G).eval().requires_grad_(False).to(opts.device)
-# dataset = dnnlib.util.construct_class_by_name(**opts.dataset_kwargs)
-#
-# # Image generation func.
-# def run_generator(z, c):
-# img = G(z=z, c=c, **opts.G_kwargs)
-# img = (img * 127.5 + 128).clamp(0, 255).to(torch.uint8)
-# return img
-#
-# # JIT.
-# if jit:
-# z = torch.zeros([batch_gen, G.z_dim], device=opts.device)
-# c = torch.zeros([batch_gen, G.c_dim], device=opts.device)
-# run_generator = torch.jit.trace(run_generator, [z, c], check_trace=False)
-#
-# # Initialize.
-# stats = FeatureStats(**stats_kwargs)
-# assert stats.max_items is not None
-# progress = opts.progress.sub(tag='generator features', num_items=stats.max_items, rel_lo=rel_lo, rel_hi=rel_hi)
-# detector = get_feature_detector(url=detector_url, device=opts.device, num_gpus=opts.num_gpus, rank=opts.rank, verbose=progress.verbose)
-#
-# # Main loop.
-# while not stats.is_full():
-# images = []
-# for _i in range(batch_size // batch_gen):
-# z = torch.randn([batch_gen, G.z_dim], device=opts.device)
-# c = [dataset.get_label(np.random.randint(len(dataset))) for _i in range(batch_gen)]
-# c = torch.from_numpy(np.stack(c)).pin_memory().to(opts.device)
-# images.append(run_generator(z, c))
-# images = torch.cat(images)
-# if images.shape[1] == 1:
-# images = images.repeat([1, 3, 1, 1])
-# features = detector(images, **detector_kwargs)
-# stats.append_torch(features, num_gpus=opts.num_gpus, rank=opts.rank)
-# progress.update(stats.num_items)
-# return stats
-#
-# #----------------------------------------------------------------------------
diff --git a/spaces/boomsss/gamedayspx/getIntraData.py b/spaces/boomsss/gamedayspx/getIntraData.py
deleted file mode 100644
index d46fdf7a8c29c75cd63393b746da1ffa938514a5..0000000000000000000000000000000000000000
--- a/spaces/boomsss/gamedayspx/getIntraData.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import pandas as pd
-import pandas_datareader as pdr
-import yfinance as yf
-import datetime
-# from datasets import load_dataset
-from sqlalchemy import create_engine
-import os
-from getDailyData import data_start_date
-# from dotenv import load_dotenv
-
-# Load environment variables from the .env file
-# load_dotenv()
-
-def get_intra(periods_30m = 1):
- '''
- Method to get historical 30 minute data and append live data to it, if exists.
- '''
- engine = create_engine(
- f"mysql+mysqldb://{os.getenv('DATABASE_USERNAME')}:" \
- f"{os.getenv('DATABASE_PASSWORD')}@{os.getenv('DATABASE_HOST')}/" \
- f"{os.getenv('DATABASE')}?ssl_ca=ca-certificates.crt&ssl_mode=VERIFY_IDENTITY"
- )
-
- query = f'''SELECT
- spx30.Datetime AS Datetime,
- spx30.Open AS Open30,
- spx30.High AS High30,
- spx30.Low AS Low30,
- spx30.Close AS Close30,
- vix30.Open AS Open_VIX30,
- vix30.High AS High_VIX30,
- vix30.Low AS Low_VIX30,
- vix30.Close AS Close_VIX30,
- vvix30.Open AS Open_VVIX30,
- vvix30.High AS High_VVIX30,
- vvix30.Low AS Low_VVIX30,
- vvix30.Close AS Close_VVIX30
- FROM
- SPX_full_30min AS spx30
- LEFT JOIN
- VIX_full_30min AS vix30 ON spx30.Datetime = vix30.Datetime AND vix30.Datetime > {data_start_date}
- LEFT JOIN
- VVIX_full_30min AS vvix30 ON spx30.Datetime = vvix30.Datetime AND vvix30.Datetime > {data_start_date}
- WHERE
- spx30.Datetime > {data_start_date}
-
- '''
- # spx30 = pd.read_sql_query(f'SELECT * FROM SPX_full_30min WHERE Datetime > {data_start_date}', con=engine)
- # vix30 = pd.read_sql_query(f'SELECT * FROM VIX_full_30min WHERE Datetime > {data_start_date}', con=engine)
- # vvix30 = pd.read_sql_query(f'SELECT * FROM VVIX_full_30min WHERE Datetime > {data_start_date}', con=engine)
- # dfs = []
-
- df_30m = pd.read_sql_query(sql=query, con=engine.connect())
- df_30m['Datetime'] = df_30m['Datetime'].dt.tz_localize('America/New_York')
- df_30m = df_30m.set_index('Datetime',drop=True)
-
- # for fr in [spx30, vix30, vvix30]:
- # # fr['Datetime'] = fr['Datetime'].apply(lambda x: datetime.datetime.strptime(x[:-6], dt_format))
- # fr['Datetime'] = fr['Datetime'].dt.tz_localize('America/New_York')
- # fr = fr.set_index('Datetime')
- # fr['Open'] = pd.to_numeric(fr['Open'])
- # fr['High'] = pd.to_numeric(fr['High'])
- # fr['Low'] = pd.to_numeric(fr['Low'])
- # fr['Close'] = pd.to_numeric(fr['Close'])
- # dfs.append(fr[['Open','High','Low','Close']])
-
- # df_30m = pd.concat(dfs, axis=1)
-
- # df_30m.columns = [
- # 'Open30',
- # 'High30',
- # 'Low30',
- # 'Close30',
- # 'Open_VIX30',
- # 'High_VIX30',
- # 'Low_VIX30',
- # 'Close_VIX30',
- # 'Open_VVIX30',
- # 'High_VVIX30',
- # 'Low_VVIX30',
- # 'Close_VVIX30'
- # ]
-
- # Get incremental date
- last_date = df_30m.index.date[-1]
- last_date = last_date + datetime.timedelta(days=1)
-
- # Get incremental data for each index
- spx1 = yf.Ticker('^GSPC')
- vix1 = yf.Ticker('^VIX')
- vvix1 = yf.Ticker('^VVIX')
- yfp = spx1.history(start=last_date, interval='30m')
- yf_vix = vix1.history(start=last_date, interval='30m')
- yf_vvix = vvix1.history(start=last_date, interval='30m')
-
- if len(yfp) > 0:
- # Convert indexes to EST if not already
- for _df in [yfp, yf_vix, yf_vvix]:
- if (_df.index.tz.zone != 'America/New_York') or (type(_df.index) != pd.DatetimeIndex):
- _df['Datetime'] = pd.to_datetime(_df.index)
- _df['Datetime'] = _df['Datetime'].dt.tz_convert('America/New_York')
- _df.set_index('Datetime', inplace=True)
- # Concat them
- df_inc = pd.concat([
- yfp[['Open','High','Low','Close']],
- yf_vix[['Open','High','Low','Close']],
- yf_vvix[['Open','High','Low','Close']]
- ], axis=1)
- df_inc.columns = df_30m.columns
- df_inc = df_inc.loc[
- (df_inc.index.time >= datetime.time(9,30)) & (df_inc.index.time < datetime.time(16,00))
- ]
- df_30m = pd.concat([df_30m, df_inc])
- else:
- df_30m = df_30m.copy()
-
- df_30m = df_30m.loc[
- (df_30m.index.time >= datetime.time(9,30)) & (df_30m.index.time < datetime.time(16,00))
- ]
- df_30m['dt'] = df_30m.index.date
- df_30m = df_30m.groupby('dt').head(periods_30m)
- df_30m = df_30m.set_index('dt',drop=True)
- df_30m.index.name = 'Datetime'
-
- df_30m['SPX30IntraPerf'] = (df_30m['Close30'] / df_30m['Close30'].shift(1)) - 1
- df_30m['VIX30IntraPerf'] = (df_30m['Close_VIX30'] / df_30m['Close_VIX30'].shift(1)) - 1
- df_30m['VVIX30IntraPerf'] = (df_30m['Close_VVIX30'] / df_30m['Close_VVIX30'].shift(1)) - 1
-
- opens_intra = df_30m.groupby('Datetime')[[c for c in df_30m.columns if 'Open' in c]].head(1)
- highs_intra = df_30m.groupby('Datetime')[[c for c in df_30m.columns if 'High' in c]].max()
- lows_intra = df_30m.groupby('Datetime')[[c for c in df_30m.columns if 'Low' in c]].min()
- closes_intra = df_30m.groupby('Datetime')[[c for c in df_30m.columns if 'Close' in c]].tail(1)
- spx_intra = df_30m.groupby('Datetime')['SPX30IntraPerf'].tail(1)
- vix_intra = df_30m.groupby('Datetime')['VIX30IntraPerf'].tail(1)
- vvix_intra = df_30m.groupby('Datetime')['VVIX30IntraPerf'].tail(1)
-
- df_intra = pd.concat([opens_intra, highs_intra, lows_intra, closes_intra, spx_intra, vix_intra, vvix_intra], axis=1)
- return df_intra
-
-
\ No newline at end of file
diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/flask_rest_api/example_request.py b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/flask_rest_api/example_request.py
deleted file mode 100644
index 773ad893296750992789a77a59e0f5ad657d0e35..0000000000000000000000000000000000000000
--- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/flask_rest_api/example_request.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Perform test request
-"""
-
-import pprint
-
-import requests
-
-DETECTION_URL = "http://localhost:5000/v1/object-detection/yolov5s"
-IMAGE = "zidane.jpg"
-
-# Read image
-with open(IMAGE, "rb") as f:
- image_data = f.read()
-
-response = requests.post(DETECTION_URL, files={"image": image_data}).json()
-
-pprint.pprint(response)
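
The deleted script above only pretty-prints the raw JSON. Assuming the companion YOLOv5 REST server returns one record per detection (the field names below are an assumption for illustration, not something this diff confirms), the response could be consumed like this:

```python
# Hypothetical post-processing of the detection response printed above.
import requests

DETECTION_URL = "http://localhost:5000/v1/object-detection/yolov5s"  # same endpoint as the deleted script

with open("zidane.jpg", "rb") as f:
    detections = requests.post(DETECTION_URL, files={"image": f.read()}).json()

for det in detections:
    # "name" and "confidence" are assumed field names; adapt to your server's schema.
    print(f"{det.get('name')}: {det.get('confidence')}")
```
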
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/benchmark.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/benchmark.py
deleted file mode 100644
index ac2f372a4b111ad40b8e720adea208608271bab6..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/benchmark.py
+++ /dev/null
@@ -1,225 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import numpy as np
-from itertools import count
-from typing import List, Tuple
-import torch
-import tqdm
-from fvcore.common.timer import Timer
-
-from detectron2.utils import comm
-
-from .build import build_batch_data_loader
-from .common import DatasetFromList, MapDataset
-from .samplers import TrainingSampler
-
-logger = logging.getLogger(__name__)
-
-
-class _EmptyMapDataset(torch.utils.data.Dataset):
- """
- Map anything to emptiness.
- """
-
- def __init__(self, dataset):
- self.ds = dataset
-
- def __len__(self):
- return len(self.ds)
-
- def __getitem__(self, idx):
- _ = self.ds[idx]
- return [0]
-
-
-def iter_benchmark(
- iterator, num_iter: int, warmup: int = 5, max_time_seconds: float = 60
-) -> Tuple[float, List[float]]:
- """
- Benchmark an iterator/iterable for `num_iter` iterations, preceded by an extra
- `warmup` warm-up iterations.
- End early if more than `max_time_seconds` seconds are spent on the timed iterations.
-
- Returns:
- float: average time (seconds) per iteration
- list[float]: time spent on each iteration. Sometimes useful for further analysis.
- """
- num_iter, warmup = int(num_iter), int(warmup)
-
- iterator = iter(iterator)
- for _ in range(warmup):
- next(iterator)
- timer = Timer()
- all_times = []
- for curr_iter in tqdm.trange(num_iter):
- start = timer.seconds()
- if start > max_time_seconds:
- num_iter = curr_iter
- break
- next(iterator)
- all_times.append(timer.seconds() - start)
- avg = timer.seconds() / num_iter
- return avg, all_times
-
-
-class DataLoaderBenchmark:
- """
- Common benchmarks that help understand the performance bottlenecks of a standard
- dataloader made of a dataset, a mapper and a sampler.
- """
-
- def __init__(
- self,
- dataset,
- *,
- mapper,
- sampler=None,
- total_batch_size,
- num_workers=0,
- max_time_seconds: int = 90,
- ):
- """
- Args:
- max_time_seconds (int): maximum time to spend on each benchmark
- other args: same as in `build.py:build_detection_train_loader`
- """
- if isinstance(dataset, list):
- dataset = DatasetFromList(dataset, copy=False, serialize=True)
- if sampler is None:
- sampler = TrainingSampler(len(dataset))
-
- self.dataset = dataset
- self.mapper = mapper
- self.sampler = sampler
- self.total_batch_size = total_batch_size
- self.num_workers = num_workers
- self.per_gpu_batch_size = self.total_batch_size // comm.get_world_size()
-
- self.max_time_seconds = max_time_seconds
-
- def _benchmark(self, iterator, num_iter, warmup, msg=None):
- avg, all_times = iter_benchmark(iterator, num_iter, warmup, self.max_time_seconds)
- if msg is not None:
- self._log_time(msg, avg, all_times)
- return avg, all_times
-
- def _log_time(self, msg, avg, all_times, distributed=False):
- percentiles = [np.percentile(all_times, k, interpolation="nearest") for k in [1, 5, 95, 99]]
- if not distributed:
- logger.info(
- f"{msg}: avg={1.0/avg:.1f} it/s, "
- f"p1={percentiles[0]:.2g}s, p5={percentiles[1]:.2g}s, "
- f"p95={percentiles[2]:.2g}s, p99={percentiles[3]:.2g}s."
- )
- return
- avg_per_gpu = comm.all_gather(avg)
- percentiles_per_gpu = comm.all_gather(percentiles)
- if comm.get_rank() > 0:
- return
- for idx, avg, percentiles in zip(count(), avg_per_gpu, percentiles_per_gpu):
- logger.info(
- f"GPU{idx} {msg}: avg={1.0/avg:.1f} it/s, "
- f"p1={percentiles[0]:.2g}s, p5={percentiles[1]:.2g}s, "
- f"p95={percentiles[2]:.2g}s, p99={percentiles[3]:.2g}s."
- )
-
- def benchmark_dataset(self, num_iter, warmup=5):
- """
- Benchmark the speed of taking raw samples from the dataset.
- """
-
- def loader():
- while True:
- for k in self.sampler:
- yield self.dataset[k]
-
- self._benchmark(loader(), num_iter, warmup, "Dataset Alone")
-
- def benchmark_mapper(self, num_iter, warmup=5):
- """
- Benchmark the speed of taking raw samples from the dataset and map
- them in a single process.
- """
-
- def loader():
- while True:
- for k in self.sampler:
- yield self.mapper(self.dataset[k])
-
- self._benchmark(loader(), num_iter, warmup, "Single Process Mapper (sec/sample)")
-
- def benchmark_workers(self, num_iter, warmup=10):
- """
- Benchmark the dataloader by tuning num_workers to [0, 1, self.num_workers].
- """
- candidates = [0, 1]
- if self.num_workers not in candidates:
- candidates.append(self.num_workers)
-
- dataset = MapDataset(self.dataset, self.mapper)
- for n in candidates:
- loader = build_batch_data_loader(
- dataset,
- self.sampler,
- self.total_batch_size,
- num_workers=n,
- )
- self._benchmark(
- iter(loader),
- num_iter * max(n, 1),
- warmup * max(n, 1),
- f"DataLoader ({n} workers, bs={self.per_gpu_batch_size})",
- )
- del loader
-
- def benchmark_IPC(self, num_iter, warmup=10):
- """
- Benchmark the dataloader where each worker outputs nothing. This
- eliminates the IPC overhead compared to the regular dataloader.
-
- PyTorch multiprocessing's IPC only optimizes for torch tensors.
- Large numpy arrays or other data structures may incur large IPC overhead.
- """
- n = self.num_workers
- dataset = _EmptyMapDataset(MapDataset(self.dataset, self.mapper))
- loader = build_batch_data_loader(
- dataset, self.sampler, self.total_batch_size, num_workers=n
- )
- self._benchmark(
- iter(loader),
- num_iter * max(n, 1),
- warmup * max(n, 1),
- f"DataLoader ({n} workers, bs={self.per_gpu_batch_size}) w/o comm",
- )
-
- def benchmark_distributed(self, num_iter, warmup=10):
- """
- Benchmark the dataloader in each distributed worker, and log results of
- all workers. This helps understand the final performance as well as
- the variances among workers.
-
- It also prints startup time (first iter) of the dataloader.
- """
- gpu = comm.get_world_size()
- dataset = MapDataset(self.dataset, self.mapper)
- n = self.num_workers
- loader = build_batch_data_loader(
- dataset, self.sampler, self.total_batch_size, num_workers=n
- )
-
- timer = Timer()
- loader = iter(loader)
- next(loader)
- startup_time = timer.seconds()
- logger.info("Dataloader startup time: {:.2f} seconds".format(startup_time))
-
- comm.synchronize()
-
- avg, all_times = self._benchmark(loader, num_iter * max(n, 1), warmup * max(n, 1))
- del loader
- self._log_time(
- f"DataLoader ({gpu} GPUs x {n} workers, total bs={self.total_batch_size})",
- avg,
- all_times,
- True,
- )
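
The benchmarks above are driven by constructing a `DataLoaderBenchmark` and calling its `benchmark_*` methods. A minimal driver sketch follows; it assumes a working detectron2 installation with the `coco_2017_val` dataset registered and available on disk, and the dataset name and batch settings are purely illustrative:

```python
# Hypothetical driver for the DataLoaderBenchmark class deleted above.
from detectron2.config import get_cfg
from detectron2.data import DatasetMapper, get_detection_dataset_dicts
from detectron2.data.benchmark import DataLoaderBenchmark  # the module shown in this diff

cfg = get_cfg()
dataset = get_detection_dataset_dicts(["coco_2017_val"])  # any registered dataset
mapper = DatasetMapper(cfg, is_train=True)

bench = DataLoaderBenchmark(dataset, mapper=mapper, total_batch_size=16, num_workers=4)
bench.benchmark_dataset(100)   # raw sample loading
bench.benchmark_mapper(100)    # loading + mapping in a single process
bench.benchmark_workers(100)   # full dataloader with 0/1/num_workers workers
```
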
diff --git a/spaces/cdavenpo822/ToyWorld/index.html b/spaces/cdavenpo822/ToyWorld/index.html
deleted file mode 100644
index 6250c2958a7186a4e64f21c02b0359ff5ecd7e97..0000000000000000000000000000000000000000
--- a/spaces/cdavenpo822/ToyWorld/index.html
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/chendl/compositional_test/transformers/docker/transformers-tensorflow-cpu/Dockerfile b/spaces/chendl/compositional_test/transformers/docker/transformers-tensorflow-cpu/Dockerfile
deleted file mode 100644
index ef3dc3d212cbbc95ecd0dd29dc9901dd0cb1ca87..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/docker/transformers-tensorflow-cpu/Dockerfile
+++ /dev/null
@@ -1,25 +0,0 @@
-FROM ubuntu:18.04
-LABEL maintainer="Hugging Face"
-LABEL repository="transformers"
-
-RUN apt update && \
- apt install -y bash \
- build-essential \
- git \
- curl \
- ca-certificates \
- python3 \
- python3-pip && \
- rm -rf /var/lib/apt/lists
-
-RUN python3 -m pip install --no-cache-dir --upgrade pip && \
- python3 -m pip install --no-cache-dir \
- mkl \
- tensorflow-cpu
-
-WORKDIR /workspace
-COPY . transformers/
-RUN cd transformers/ && \
- python3 -m pip install --no-cache-dir .
-
-CMD ["/bin/bash"]
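
Once an image is built from this Dockerfile, a quick way to confirm that the CPU-only TensorFlow install works is to run a small pipeline inside the container. A minimal sanity-check sketch (it assumes the container can download a default model on first run):

```python
# Hypothetical sanity check for the transformers + tensorflow-cpu image above.
from transformers import pipeline

# framework="tf" forces the TensorFlow backend installed in this image.
classifier = pipeline("sentiment-analysis", framework="tf")
print(classifier("The CPU-only image works."))
```
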
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/color_picker.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/color_picker.py
deleted file mode 100644
index ff7c68f0828196ba859a7284ddce29fad9e173a8..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/color_picker.py
+++ /dev/null
@@ -1,143 +0,0 @@
-"""gr.ColorPicker() component."""
-
-from __future__ import annotations
-
-from typing import Any, Callable, Literal
-
-from gradio_client.documentation import document, set_documentation_group
-from gradio_client.serializing import StringSerializable
-
-from gradio.components.base import IOComponent, _Keywords
-from gradio.events import (
- Blurrable,
- Changeable,
- Inputable,
- Submittable,
-)
-
-set_documentation_group("component")
-
-
-@document()
-class ColorPicker(
- Changeable, Inputable, Submittable, Blurrable, IOComponent, StringSerializable
-):
- """
- Creates a color picker for the user to select a color as a string input.
- Preprocessing: passes selected color value as a {str} into the function.
- Postprocessing: expects a {str} returned from function and sets color picker value to it.
- Examples-format: a {str} with a hexadecimal representation of a color, e.g. "#ff0000" for red.
- Demos: color_picker, color_generator
- """
-
- def __init__(
- self,
- value: str | Callable | None = None,
- *,
- label: str | None = None,
- info: str | None = None,
- every: float | None = None,
- show_label: bool = True,
- container: bool = True,
- scale: int | None = None,
- min_width: int = 160,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- **kwargs,
- ):
- """
- Parameters:
- value: default text to provide in color picker. If callable, the function will be called whenever the app loads to set the initial value of the component.
- label: component name in interface.
- info: additional component description.
- every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
- show_label: if True, will display label.
- container: If True, will place the component in a container - providing some extra padding around the border.
- scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
- min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
- interactive: if True, will be rendered as an editable color picker; if False, editing will be disabled. If not provided, this is inferred based on whether the component is used as an input or output.
- visible: If False, component will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
- """
- IOComponent.__init__(
- self,
- label=label,
- info=info,
- every=every,
- show_label=show_label,
- container=container,
- scale=scale,
- min_width=min_width,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- elem_classes=elem_classes,
- value=value,
- **kwargs,
- )
-
- def example_inputs(self) -> dict[str, Any]:
- return {
- "raw": "#000000",
- "serialized": "#000000",
- }
-
- def get_config(self):
- return {
- "value": self.value,
- **IOComponent.get_config(self),
- }
-
- @staticmethod
- def update(
- value: str | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE,
- label: str | None = None,
- info: str | None = None,
- show_label: bool | None = None,
- container: bool | None = None,
- scale: int | None = None,
- min_width: int | None = None,
- visible: bool | None = None,
- interactive: bool | None = None,
- ):
- return {
- "value": value,
- "label": label,
- "info": info,
- "show_label": show_label,
- "container": container,
- "scale": scale,
- "min_width": min_width,
- "visible": visible,
- "interactive": interactive,
- "__type__": "update",
- }
-
- def preprocess(self, x: str | None) -> str | None:
- """
- Any preprocessing needed to be performed on function input.
- Parameters:
- x: text
- Returns:
- text
- """
- if x is None:
- return None
- else:
- return str(x)
-
- def postprocess(self, y: str | None) -> str | None:
- """
- Any postprocessing needed to be performed on function output.
- Parameters:
- y: text
- Returns:
- text
- """
- if y is None:
- return None
- else:
- return str(y)
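
For reference, this component is normally wired into an app through `gr.Interface` or `gr.Blocks`. A minimal usage sketch against the 3.x-era API vendored here (the function and labels are illustrative):

```python
# Hypothetical demo built around the ColorPicker component defined above.
import gradio as gr

def describe(color: str) -> str:
    # The component passes the selected color to the function as a hex string, e.g. "#ff0000".
    return f"You picked {color}"

demo = gr.Interface(fn=describe, inputs=gr.ColorPicker(label="Pick a color"), outputs="text")

if __name__ == "__main__":
    demo.launch()
```
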
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/video.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/video.py
deleted file mode 100644
index 7c475699f467bd185a8eabd88d4c8249318b5cdd..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/video.py
+++ /dev/null
@@ -1,412 +0,0 @@
-"""gr.Video() component."""
-
-from __future__ import annotations
-
-import tempfile
-import warnings
-from pathlib import Path
-from typing import Callable, Literal
-
-from gradio_client import utils as client_utils
-from gradio_client.data_classes import FileData
-from gradio_client.documentation import document, set_documentation_group
-from gradio_client.serializing import VideoSerializable
-
-from gradio import processing_utils, utils, wasm_utils
-from gradio.components.base import IOComponent, _Keywords
-from gradio.deprecation import warn_style_method_deprecation
-from gradio.events import Changeable, Clearable, Playable, Recordable, Uploadable
-
-if not wasm_utils.IS_WASM:
- # TODO: Support ffmpeg on Wasm
- from ffmpy import FFmpeg
-
-set_documentation_group("component")
-
-
-@document()
-class Video(
- Changeable,
- Clearable,
- Playable,
- Recordable,
- Uploadable,
- IOComponent,
- VideoSerializable,
-):
- """
- Creates a video component that can be used to upload/record videos (as an input) or display videos (as an output).
- For the video to be playable in the browser it must have a compatible container and codec combination. Allowed
- combinations are .mp4 with h264 codec, .ogg with theora codec, and .webm with vp9 codec. If the component detects
- that the output video would not be playable in the browser it will attempt to convert it to a playable mp4 video.
- If the conversion fails, the original video is returned.
- Preprocessing: passes the uploaded video as a {str} filepath or URL whose extension can be modified by `format`.
- Postprocessing: expects a {str} or {pathlib.Path} filepath to a video which is displayed, or a {Tuple[str | pathlib.Path, str | pathlib.Path | None]} where the first element is a filepath to a video and the second element is an optional filepath to a subtitle file.
- Examples-format: a {str} filepath to a local file that contains the video, or a {Tuple[str, str]} where the first element is a filepath to a video file and the second element is a filepath to a subtitle file.
- Demos: video_identity, video_subtitle
- """
-
- def __init__(
- self,
- value: str
- | Path
- | tuple[str | Path, str | Path | None]
- | Callable
- | None = None,
- *,
- format: str | None = None,
- source: Literal["upload", "webcam"] = "upload",
- height: int | None = None,
- width: int | None = None,
- label: str | None = None,
- every: float | None = None,
- show_label: bool = True,
- container: bool = True,
- scale: int | None = None,
- min_width: int = 160,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- mirror_webcam: bool = True,
- include_audio: bool | None = None,
- autoplay: bool = False,
- show_share_button: bool | None = None,
- **kwargs,
- ):
- """
- Parameters:
- value: A path or URL for the default value that Video component is going to take. Can also be a tuple consisting of (video filepath, subtitle filepath). If a subtitle file is provided, it should be of type .srt or .vtt. Or can be callable, in which case the function will be called whenever the app loads to set the initial value of the component.
- format: Format of video format to be returned by component, such as 'avi' or 'mp4'. Use 'mp4' to ensure browser playability. If set to None, video will keep uploaded format.
- source: Source of video. "upload" creates a box where the user can drop a video file, "webcam" allows the user to record a video from their webcam.
- height: Height of the displayed video in pixels.
- width: Width of the displayed video in pixels.
- label: component name in interface.
- every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
- show_label: if True, will display label.
- container: If True, will place the component in a container - providing some extra padding around the border.
- scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
- min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
- interactive: if True, will allow users to upload a video; if False, can only be used to display videos. If not provided, this is inferred based on whether the component is used as an input or output.
- visible: If False, component will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
- mirror_webcam: If True, the webcam will be mirrored. Default is True.
- include_audio: Whether the component should record/retain the audio track for a video. By default, audio is excluded for webcam videos and included for uploaded videos.
- autoplay: Whether to automatically play the video when the component is used as an output. Note: browsers will not autoplay video files if the user has not interacted with the page yet.
- show_share_button: If True, will show a share icon in the corner of the component that allows user to share outputs to Hugging Face Spaces Discussions. If False, icon does not appear. If set to None (default behavior), then the icon appears if this Gradio app is launched on Spaces, but not otherwise.
- """
- self.format = format
- self.autoplay = autoplay
- valid_sources = ["upload", "webcam"]
- if source not in valid_sources:
- raise ValueError(
- f"Invalid value for parameter `source`: {source}. Please choose from one of: {valid_sources}"
- )
- self.source = source
- self.height = height
- self.width = width
- self.mirror_webcam = mirror_webcam
- self.include_audio = (
- include_audio if include_audio is not None else source == "upload"
- )
- self.show_share_button = (
- (utils.get_space() is not None)
- if show_share_button is None
- else show_share_button
- )
- IOComponent.__init__(
- self,
- label=label,
- every=every,
- show_label=show_label,
- container=container,
- scale=scale,
- min_width=min_width,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- elem_classes=elem_classes,
- value=value,
- **kwargs,
- )
-
- def get_config(self):
- return {
- "source": self.source,
- "value": self.value,
- "height": self.height,
- "width": self.width,
- "mirror_webcam": self.mirror_webcam,
- "include_audio": self.include_audio,
- "autoplay": self.autoplay,
- "show_share_button": self.show_share_button,
- **IOComponent.get_config(self),
- }
-
- @staticmethod
- def update(
- value: str
- | tuple[str, str | None]
- | Literal[_Keywords.NO_VALUE]
- | None = _Keywords.NO_VALUE,
- source: Literal["upload", "webcam"] | None = None,
- height: int | None = None,
- width: int | None = None,
- label: str | None = None,
- show_label: bool | None = None,
- container: bool | None = None,
- scale: int | None = None,
- min_width: int | None = None,
- interactive: bool | None = None,
- visible: bool | None = None,
- autoplay: bool | None = None,
- show_share_button: bool | None = None,
- ):
- return {
- "source": source,
- "height": height,
- "width": width,
- "label": label,
- "show_label": show_label,
- "container": container,
- "scale": scale,
- "min_width": min_width,
- "interactive": interactive,
- "visible": visible,
- "value": value,
- "autoplay": autoplay,
- "show_share_button": show_share_button,
- "__type__": "update",
- }
-
- def preprocess(
- self, x: tuple[FileData, FileData | None] | FileData | None
- ) -> str | None:
- """
- Parameters:
- x: A tuple of (video file data, subtitle file data) or just video file data.
- Returns:
- A string file path or URL to the preprocessed video. Subtitle file data is ignored.
- """
- if x is None:
- return None
- elif isinstance(x, dict):
- video = x
- else:
- video = x[0]
-
- file_name, file_data, is_file = (
- video.get("name"),
- video["data"],
- video.get("is_file", False),
- )
-
- if is_file:
- assert file_name is not None, "Received file data without a file name."
- file_name = Path(self.make_temp_copy_if_needed(file_name))
- else:
- assert file_data is not None, "Received empty file data."
- file_name = Path(self.base64_to_temp_file_if_needed(file_data, file_name))
-
- uploaded_format = file_name.suffix.replace(".", "")
- needs_formatting = self.format is not None and uploaded_format != self.format
- flip = self.source == "webcam" and self.mirror_webcam
-
- if needs_formatting or flip:
- format = f".{self.format if needs_formatting else uploaded_format}"
- output_options = ["-vf", "hflip", "-c:a", "copy"] if flip else []
- output_options += ["-an"] if not self.include_audio else []
- flip_suffix = "_flip" if flip else ""
- output_file_name = str(
- file_name.with_name(f"{file_name.stem}{flip_suffix}{format}")
- )
- if Path(output_file_name).exists():
- return output_file_name
- if wasm_utils.IS_WASM:
- raise wasm_utils.WasmUnsupportedError(
- "Video formatting is not supported in the Wasm mode."
- )
- ff = FFmpeg(
- inputs={str(file_name): None},
- outputs={output_file_name: output_options},
- )
- ff.run()
- return output_file_name
- elif not self.include_audio:
- output_file_name = str(file_name.with_name(f"muted_{file_name.name}"))
- if wasm_utils.IS_WASM:
- raise wasm_utils.WasmUnsupportedError(
- "include_audio=False is not supported in the Wasm mode."
- )
- ff = FFmpeg(
- inputs={str(file_name): None},
- outputs={output_file_name: ["-an"]},
- )
- ff.run()
- return output_file_name
- else:
- return str(file_name)
-
- def postprocess(
- self, y: str | Path | tuple[str | Path, str | Path | None] | None
- ) -> tuple[FileData | None, FileData | None] | None:
- """
- Processes a video to ensure that it is in the correct format before returning it to the front end.
- Parameters:
- y: video data in either of the following formats: a tuple of (video filepath, optional subtitle filepath), or just a filepath or URL to a video file, or None.
- Returns:
- a tuple of two dictionaries, representing the video and the (optional) subtitle, in the following formats:
- - The first dictionary represents the video file and contains the following keys:
- - 'name': a file path to a temporary copy of the processed video.
- - 'data': None
- - 'is_file': True
- - The second dictionary represents the subtitle file and contains the following keys:
- - 'name': None
- - 'data': Base64 encode the processed subtitle data.
- - 'is_file': False
- - If subtitle is None, returns (video, None).
- - If both video and subtitle are None, returns None.
- """
-
- if y is None or y == [None, None] or y == (None, None):
- return None
- if isinstance(y, (str, Path)):
- processed_files = (self._format_video(y), None)
- elif isinstance(y, (tuple, list)):
- assert (
- len(y) == 2
- ), f"Expected lists of length 2 or tuples of length 2. Received: {y}"
- assert isinstance(y[0], (str, Path)) and isinstance(
- y[1], (str, Path)
- ), f"If a tuple is provided, both elements must be strings or Path objects. Received: {y}"
- video = y[0]
- subtitle = y[1]
- processed_files = (
- self._format_video(video),
- self._format_subtitle(subtitle),
- )
- else:
- raise Exception(f"Cannot process type as video: {type(y)}")
-
- return processed_files
-
- def _format_video(self, video: str | Path | None) -> FileData | None:
- """
- Processes a video to ensure that it is in the correct format.
- Parameters:
- video: video data in either of the following formats: a string filepath or URL to a video file, or None.
- Returns:
- a dictionary with the following keys:
-
- - 'name': a file path to a temporary copy of the processed video.
- - 'data': None
- - 'is_file': True
- """
- if video is None:
- return None
- video = str(video)
- returned_format = video.split(".")[-1].lower()
- if self.format is None or returned_format == self.format:
- conversion_needed = False
- else:
- conversion_needed = True
-
- # For cases where the video is a URL and does not need to be converted to another format, we can just return the URL
- if utils.validate_url(video) and not (conversion_needed):
- return {"name": video, "data": None, "is_file": True}
-
- # For cases where the video needs to be converted to another format
- if utils.validate_url(video):
- video = self.download_temp_copy_if_needed(video)
- if (
- processing_utils.ffmpeg_installed()
- and not processing_utils.video_is_playable(video)
- ):
- warnings.warn(
- "Video does not have browser-compatible container or codec. Converting to mp4"
- )
- video = processing_utils.convert_video_to_playable_mp4(video)
- # Recalculate the format in case convert_video_to_playable_mp4 already made it the
- # selected format
- returned_format = video.split(".")[-1].lower()
- if self.format is not None and returned_format != self.format:
- if wasm_utils.IS_WASM:
- raise wasm_utils.WasmUnsupportedError(
- "Returning a video in a different format is not supported in the Wasm mode."
- )
- output_file_name = video[0 : video.rindex(".") + 1] + self.format
- ff = FFmpeg(
- inputs={video: None},
- outputs={output_file_name: None},
- global_options="-y",
- )
- ff.run()
- video = output_file_name
-
- video = self.make_temp_copy_if_needed(video)
-
- return {
- "name": video,
- "data": None,
- "is_file": True,
- "orig_name": Path(video).name,
- }
-
- def _format_subtitle(self, subtitle: str | None) -> FileData | None:
- """
- Convert subtitle format to VTT and process the video to ensure it meets the HTML5 requirements.
- Parameters:
- subtitle: subtitle path in either of the VTT and SRT format.
- Returns:
- a dictionary with the following keys:
- - 'name': None
- - 'data': base64-encoded subtitle data.
- - 'is_file': False
- """
-
- def srt_to_vtt(srt_file_path, vtt_file_path):
- """Convert an SRT subtitle file to a VTT subtitle file"""
- with open(srt_file_path, encoding="utf-8") as srt_file, open(
- vtt_file_path, "w", encoding="utf-8"
- ) as vtt_file:
- vtt_file.write("WEBVTT\n\n")
- for subtitle_block in srt_file.read().strip().split("\n\n"):
- subtitle_lines = subtitle_block.split("\n")
- subtitle_timing = subtitle_lines[1].replace(",", ".")
- subtitle_text = "\n".join(subtitle_lines[2:])
- vtt_file.write(f"{subtitle_timing}\n")  # subtitle_timing already holds the full "start --> end" cue line
- vtt_file.write(f"{subtitle_text}\n\n")
-
- if subtitle is None:
- return None
-
- valid_extensions = (".srt", ".vtt")
-
- if Path(subtitle).suffix not in valid_extensions:
- raise ValueError(
- f"Invalid value for parameter `subtitle`: {subtitle}. Please choose a file with one of these extensions: {valid_extensions}"
- )
-
- # HTML5 only support vtt format
- if Path(subtitle).suffix == ".srt":
- temp_file = tempfile.NamedTemporaryFile(
- delete=False, suffix=".vtt", dir=self.DEFAULT_TEMP_DIR
- )
-
- srt_to_vtt(subtitle, temp_file.name)
- subtitle = temp_file.name
-
- subtitle_data = client_utils.encode_url_or_file_to_base64(subtitle)
- return {"name": None, "data": subtitle_data, "is_file": False}
-
- def style(self, *, height: int | None = None, width: int | None = None, **kwargs):
- """
- This method is deprecated. Please set these arguments in the constructor instead.
- """
- warn_style_method_deprecation()
- if height is not None:
- self.height = height
- if width is not None:
- self.width = width
- return self
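
As with the color picker, the component above is typically used through `gr.Interface`. The sketch below targets the same 3.x-era API (the `source` and `format` arguments shown in this file) and simply echoes an uploaded video back, which exercises both `preprocess` and `postprocess`:

```python
# Hypothetical demo built around the Video component defined above.
import gradio as gr

def passthrough(video_path: str) -> str:
    # preprocess() hands the function a filepath; returning a filepath (or a
    # (video, subtitle) tuple) goes back through postprocess().
    return video_path

demo = gr.Interface(
    fn=passthrough,
    inputs=gr.Video(source="upload"),
    outputs=gr.Video(format="mp4"),  # force a browser-playable container
)

if __name__ == "__main__":
    demo.launch()
```
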
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-329f8260.css b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-329f8260.css
deleted file mode 100644
index 3b53ee465e192f512a964e9050e9aab81384add8..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-329f8260.css
+++ /dev/null
@@ -1 +0,0 @@
-.min.svelte-1ybaih5{min-height:var(--size-24)}.hide.svelte-1ybaih5{display:none}div.svelte-1ed2p3z{transition:.15s}.pending.svelte-1ed2p3z{opacity:.2}
diff --git a/spaces/cihyFjudo/fairness-paper-search/Album Gus Dapperton Yellow And Such Ep Zip Album Download Wersja Do Drukul What You Need to Know.md b/spaces/cihyFjudo/fairness-paper-search/Album Gus Dapperton Yellow And Such Ep Zip Album Download Wersja Do Drukul What You Need to Know.md
deleted file mode 100644
index d0c72b0da3441baa9638d871c47b0d095a30da22..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Album Gus Dapperton Yellow And Such Ep Zip Album Download Wersja Do Drukul What You Need to Know.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Guide Pratique De La Communication Didier.pdf A Comprehensive Resource for French Language and Culture.md b/spaces/cihyFjudo/fairness-paper-search/Guide Pratique De La Communication Didier.pdf A Comprehensive Resource for French Language and Culture.md
deleted file mode 100644
index 46b5dcb1d6acf4977175d5be3b499d208ca45451..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Guide Pratique De La Communication Didier.pdf A Comprehensive Resource for French Language and Culture.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Parler, écouter, écrire, l'art de communiquer en santé Guide de pratique clinique Introduction des 20patients 20 C3 89dition 20originale pdf ] Toute conversation doit respecter les principes de base en communication verbale 21, 22 1 4 Reconnaître l'individu avec un faible niveau de littératie en santé 99, 100
-
Communication progressive du français : niveau débutant CLE International 2013 Guide de communication en français Didier 2014 Guide pratique de la communication : français : 100 actes de communication, 57 dialogues Didier 2007 Cote : Français Aide (Vocabulaire, grammaire, dictionnaire, etc ) Français Professionnel Communication
Ce guide s'adresse à vous, développeur économique, jeune ou expérimenté travaillant au sein de collectivités locales, auprès d'acteurs publics et institutionnels, ou bien de partenaires du développement économique territorial (chambres consulaires, agences de développement, opérateurs de l'accompagnement d'entreprises, consultants...). La ligne éditoriale est régulièrement ajustée pour répondre aux enjeux d'aujourd'hui et de demain sur les questions du développement des territoires : immobilier et foncier économique, entrepreneuriat, coopération interentreprises, accompagnement de l'innovation, économie sociale et solidaire, stratégie de développement économique... Les articles volontairement courts et précis sont rédigés par des experts de leur thématique et sont très opérationnels de manière à vous outiller de manière concrète dans l'exercice de vos métiers. Le guide explore aussi des sujets nouveaux ou innovants pour vous permettre d'adapter votre posture de développeur et vos pratiques aux nouvelles approches du développement économique (ex. le diagnostic économique agile, les centres-villes et la révolution commerciale, les nouvelles formes de travail, etc.).
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/MS Access 2003 Full Version Free Download How to Get It from MSDN.md b/spaces/cihyFjudo/fairness-paper-search/MS Access 2003 Full Version Free Download How to Get It from MSDN.md
deleted file mode 100644
index 8aa0220f25a21045a0dc463bc7ab2f7b6326b322..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/MS Access 2003 Full Version Free Download How to Get It from MSDN.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
Download Microsoft Office 2003 SP3 ISO for Windows 7, Windows 10, Windows Vista/2000/XP. Get an offline installer setup file for full Version ISO with Service Pack 3 of MS Office 2003, this will work with 32-bit and 64-bit operating systems.
The Word Viewer, PowerPoint Viewer and Excel Viewer have been retired. These Viewers will no longer be available for download or receive security updates. To continue viewing Office files for free, we recommend installing the Office apps or storing documents in OneDrive or Dropbox, where Word Online, Excel Online or PowerPoint Online opens them in your browser. For the mobile apps, visit the store for your device.
-
When you open a workbook that was created in Excel 97-2003, and you no longer plan for anyone to work on this workbook in this earlier version, you can convert the workbook to the current XML-based file format (.xlsx, .xlsb, .xlsm, .xltx, .xltm). When you convert to the current file format, you will have access to all new and enhanced features and functionality that newer versions of Excel offer, and the file size will generally be smaller.
-
You can recover entire data such as tables, stored procedures, views, fields, date formats, auto number and primary keys from inaccessible MDB/ACCDB files. MS Access 2003 recovery software is also capable of retrieving data from the password protected MDB/ACCDB files. It uses LivePreview technology to show all recovered Access databases in a tree structure. Recovered Access databases can be saved by the user with the help of full version of Access 2003 recovery software. The software supports MS Access 97, 95, 2000, 2002, 2003, 2007, 2010 and 2013 versions.
-
-
However to save recovered data, full version needs to be purchased. For more information please visit: www.accessrecoverytool.org/ms-access2003-recovery.html. Platforms. Related Software Title / Version / DescriptionSizeLicenseWindows- MS Access Password Recovery is the best selling Access password recovery tool to recover access.Shareware- Excellent Access Password Recovery Tool is the best program to recover your forgotten and lost.Shareware- MS Access Data Recovery software provides you an easy and quick way to repair Access database.Shareware- DBForms from MS Access to PHP + MySQL allows you to convert MS Access tables to MySQL database.Shareware- Best MDB recovery software to repair corrupt access databases.
-
Microsoft Office 2003 Free Download Full Version For Windows 7 / 8 / 10 /XP /vista / 2000.it is a full offline Installer Standalone Setup of Microsoft Office 2003 for 32 bit and 64-bit windows.we can also download by Kickass, Filehippo, and torrent.
-
Do you want to use Office 2016 on your PC? This post from MiniTool Partition Wizard offers you the Office 2016 download for free. You can get it and then install it on your PC. It also shows you how to update it to the latest version.
-
Microsoft Office 2016 is no longer available for sale. If you want to download Office 2016, you need to download it from other websites. If you want to get the Microsoft Office 2016 free download, you can click the following download links:
-
Microsoft Access's role in web development prior to version 2010 is limited. User interface features of Access, such as forms and reports, only work in Windows. In versions 2000 through 2003 an Access object type called Data Access Pages created publishable web pages. Data Access Pages are no longer supported. The Jet Database Engine, core to Access, can be accessed through technologies such as ODBC or OLE DB. The data (i.e., tables and queries) can be accessed by web-based applications developed in ASP.NET, PHP, or Java. With the use of Microsoft's Terminal Services and Remote Desktop Application in Windows Server 2008 R2, organizations can host Access applications so they can be run over the web.[29] This technique does not scale the way a web application would but is appropriate for a limited number of users depending on the configuration of the host.
-
The original concept of Access was for end users to be able to access data from any source. Other features include: the import and export of data to many formats including Excel, Outlook, ASCII, dBase, Paradox, FoxPro, SQL Server and Oracle. It also has the ability to link to data in its existing location and use it for viewing, querying, editing, and reporting. This allows the existing data to change while ensuring that Access uses the latest data. It can perform heterogeneous joins between data sets stored across different platforms. Access is often used by people downloading data from enterprise level databases for manipulation, analysis, and reporting locally.
-
Microsoft offers free runtime versions of Microsoft Access which allow users to run an Access desktop application without needing to purchase or install a retail version of Microsoft Access. This actually allows Access developers to create databases that can be freely distributed to an unlimited number of end-users. These runtime versions of Access 2007 and later can be downloaded for free from Microsoft.[36] The runtime versions for Access 2003 and earlier were part of the Office Developer Extensions/Toolkit and required a separate purchase.
-
In earlier versions of Microsoft Access, the ability to distribute applications required the purchase of the Developer Toolkit; in Access 2007, 2010 and Access 2013 the "Runtime Only" version is offered as a free download,[45] making the distribution of royalty-free applications possible on Windows XP, Vista, 7 and Windows 8.x.[46]
-
I had a lot of crucial data in the form of .mdb format but were of no use. As I didn't have a MS Access database in my new system. However, with the help of this free MDB viewer, I was able to access my files in just under few clicks.
-
I know this is easily done in access 2010 by using ConvertToPDF, there is however no option for this in 2003. How did people save to PDF before version 2007 onwards? Also if this could be done without needing to download any extra .dll that would be great, since my organization won't allow me to use them
-
What's new in version 1.60 ? * New features: - V-Tools doesn't use the DAO Reference (Data Access Object) any more. It is replaced with ADODB which seems to be the new standard for database access. - The search tool can now performe a search & replace. Really powerfull! - There is a new tool : the Containers object Explorer. - V-Tools is now available from ADP projects. - Great new : the source code of V-Tools is now downloadable from the 'My Source Code' page. (2002-06-16)
-
Information Rights Management capabilities were added to document productivity applications to restrict access to certain users and/or limit the types of actions users could perform. Microsoft Office Picture Manager was added to the picture organizer. It includes basic editing capabilities and a new picture manager. Microsoft releases some new exciting features with Office 2003.
-
Getintopc Microsoft Office 2003 Free Download Full Version For PC/Mac/Windows Xp,7,8,8.1,10. Its offline installer and Standalone Setup of Microsoft Office 2003 Free Download for 32 and 64 Bit.we can also download Microsoft Office 2003 Free Download Full Version For Windows [32-64] Bit Filehippo.
-
MDB Viewer Plus has been written to provide a free, quick and easy way to open, view, edit, filter, sort, import to, export from, modify and search MDB and ACCDB files.This is useful for software developers like myself who use Access databases as a backend database for their bespoke software. MDB Viewer Plus provides a convenient way to view and edit these databases. The table info screen even has the ability to copy the list of field names in a table to the clipboard. A developer can then paste this list into their source code for direct access.
-
- Windows 95**, 98**, Me**, NT4**: latest version: - Windows 2000: latest w2k version: _w2k_1215.zip - Windows XP, 2003, Windows Server 2003, Vista, Server 2003 R2, Server 2008: latest version: -download-ultravnc-1231.html - Windows 7, 8, 8.1, 10, Server 2008 R2, Server 2012, Server 2012 R2, Server 2016, Server 2019: current version: Its embedded Java Viewer allows you to connect (and make File transfers) from a simple Web Browser on any system supporting Java (Linux, Mac OS...) to an UltraVNC server. PcHelpWare and uvnc2me require XP or later.
-
No matter which Office version you used, EaseUS Todo PCTrans does well in program transferring speed. You don't need to redownload it again and again. With the help of this PC program mover, finding a license/product key is accessible. Try the best way that suits you the most for MS Office migration between two computers.
-
As part of its commitment to enhancing public safety, NFPA makes its codes and standards available online to the public for free. Online access to NFPA's consensus documents conveniently places important safety information on the desktops of traditional users as well as others who have a keen interest. NFPA is committed to serving the public's increasing interest in technical information, and online access to these key codes is a valuable resource.
-
Microsoft also offers Microsoft Access Runtime as a FREE download which may be an option depending on your situation, for example if you have already created a database application which has forms to input and manipulate the data. Access Runtime is limited to running an already existing/created MS Access application. An access developer who develops an access application can install the application on computers and servers using the free Microsoft Access Runtime rather than licensing full Access for each computer.
-
As LibreOffice Base is almost identical to its Apache OpenOffice counterpart, the arguments in favor of this tool are the same as well. Both of these OpenOffice versions are free to use, so you could download both and compare them yourself to decide which is best for you.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Patch To Install Windows Vista RTM On A PC With 256mb RAM Tips And Tricks.md b/spaces/cihyFjudo/fairness-paper-search/Patch To Install Windows Vista RTM On A PC With 256mb RAM Tips And Tricks.md
deleted file mode 100644
index ed237131dd4fe127a806a3f5895510aa10bca2af..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Patch To Install Windows Vista RTM On A PC With 256mb RAM Tips And Tricks.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
So I removed 1gb of RAM, and TADA! Win 7 installed just fine. However, I cant use the "KB929777" fix because it's only for Vista. So I tried reinstalling my RAM and I was once again confronted with a BSOD with the same error as before.
-
Patch To Install Windows Vista RTM On A PC With 256mb RAM
Every user want to run windows 7 on their computer. But you computer is not compatible for it. The minimum RAM for windows 7 is more then you are using. So there is no need to buy new RAM for Windows 7. If you have 256 RAM on your computer still you can install windows 7 and you can run windows 7 on your computer without any problem. Just follow the steps to install and run windows 7 32 BIT on 256 MB RAM.
Steps to run Windows 7 32 BIT on 256 MB RAM
1) You have to create a patch for Windows 7 to run on 256 MB RAM. You have to find winsetup.dll file in Windows 7 DVD. Copy this file on your computer.
2) Open this file in Hex Editor Neo Application. You can get this application from the below link. It is free to download and easy to use.
Hex Editor Neo
3) If the file is opened in Hex Editor Neo. Then Find 77 07 3D 78 01 string
4) If you had find the string. Then replace it with E9 04 00 00 00. Then save it.
5) Now open Windows 7 32 BIT ISO with Ultra ISO or Power ISO. Replace the file winsetup.dllwhich you had edited.
6) Now Burn the DVD and install it.
Your Windows 7 32 BIT is ready to run on 256 MB RAM.
-
During development of pre-reset Longhorn, the system requirements were largely the same as Windows XP, with the sole exception of build 4001, which requires a Pentium III processor or better. However, most builds of Longhorn only install on NTFS partitions, which would be carried to the final release of Vista. Throughout development of post-reset Vista, the system requirements were significantly increased to accommodate new computing standards, such as the use of WDDM to take most advantage of display capabilities, immediately requiring ACPI after replacing NTLDR with BOOTMGR, and greatly increasing the amount of disk space required to install Windows.
-
Microsoft recommends Windows Vista to be installed on a system with a processor with a speed of at least 800 MHz, at least 512 MB (384 MB for Starter Edition) of RAM, 15 GB of hard drive space, a SVGA or better display adapter, and a DVD-ROM drive.[5] Windows Vista drops support for systems without ACPI. CD-ROM installation is still possible, but such installation method now uses multiple CD-ROMs due to the increased size of the installation media after the shift to WIM installation and wasn't offered in retail.
-
Windows Vista's setup doesn't check for a required processor generation or speed to install as long as setup can start, and thus it is possible to install Windows Vista on processors as early as the original Pentium. Windows Vista can also be run with as low as 256 MB of RAM.
-
-
As a result of these issues, Windows Vista's initial adoption and satisfaction rates were very low compared to Windows XP and many users also downgraded back to Windows XP due to compatibility issues that rendered many programs and computer peripherals unusable along with performance issues. The Windows Vista Capable marketing campaign was also subject to criticism due to OEM's installing the OS on underpowered machines which did not fully meet Vista's system requirements which resulted in a class-action lawsuit being filed against Microsoft in early 2008 and eventually lost its class-action status in early 2009.
-
Note that Windows Vista requires a minimum of 10GB or HDD space, and if you still want your existing Windows installation on the machine, make sure you have partition the hard disk for Vista in order not to mess up the current Windows such as WinXP installation. Beside, you will need to turn off unneeded services (you can forget about Aero Glass feature, no need to try to turn if off, it will be disabled by default on 256MB system) and other unnecessary extras so that the Windows Vista will run smoothly and without lagging on the 256-MB system.
-
During Installation or Uninstalling of .NET Framework 3.5, .NET Framework 3.0 Service Pack 1, and .NET Framework 2.0 Service Pack 1, a dialog pops up with the message "The following application should be closed before continuing with setup:"
-
The original release of .NET Framework 1.1 is 32-bit only. In addition, the .NET Framework 1.1 Setup program contains a launch condition that blocks installation on 64-bit operating systems. After the original release, a shim was added to newer 64-bit operating systems that lets users bypass that launch condition and install .NET Framework 1.1. However, because .NET Framework 1.1 was not designed to be installed on 64-bit operating systems and co-exist with newer versions of the .NET Framework that are designed for 64-bit operating systems (such as .NET Framework 2.0), some .NET Framework side-by-side uninstall scenarios do not work correctly.
-
Best performance is achieved using a local SQL Server installation but you can install Tachyon and SQL Server on separate (split) servers. A separate SQL Server is required if it is to be shared with other customer applications, and its CPU and RAM should be increased accordingly. Please refer to SQL Server requirements for other detailed requirements.Disk volumes
-
For a split server installation it is necessary to configure an additional network interface into each server to guarantee a dedicated 1Gbps network channel between the core and the database. Enhanced networking (SR-IOV) must be enabled on both of these NICs where this is available (only on instances with 8cores or more) and must be enabled during instance creation. The additional NIC must also have accelerated networking enabled using Azure Powershell or Azure CLI. Detailed steps on how to configure this are given here - -us/azure/virtual-network/virtual-network-create-vm-accelerated-networking. Also increase the transmit (TX) and receive (RX) buffer sizes to their maximum under the network adaptor advanced properties within the guest OS.
-
After installation, we recommend you use the below SLA-Data database query to double-check memory requirements. You will need to use the SLA query anyway, if you use a Tachyon connector (for Inventory or Patch Success) either on its own, or with other connectors including SCCM. In this case, use the SLA-Data database query after installation and you have collected your source data.
-
Active Directory security groups are strongly recommended for role-based access control (RBAC) but are not mandatory. AD security groups can be assigned to Tachyon roles after installation, they are not required during installation. They are added as Tachyon users and configured in the same way as AD user accounts. A Tachyon user can therefore be a domain user account or a security group. Groups are not mandatory because users can be assigned to roles and managed within Tachyon instead of AD, or a combination.
-
The Tachyon software must be installed on a Windows Server that is part of an Active Directory domain. Companies do not always have their on-premises Active Directory extended into their cloud environment - there are security concerns that must be taken into account and there must be some mechanism for either a secure network connection between the customer's on-premises systems and the cloud environment (usually via VPN connection) or a way of synchronizing the cloud based Active Directory implementation with the on-premises one (through a directory synchronization of some sort).
-
This is the account used to run Tachyon Setup (and the MSI installer) when installing or upgrading a Tachyon Server. The account is automatically defined as a Tachyon admin user with limited rights which cannot be edited (called a system principal). The installation account only has sufficient rights to add other Tachyon users, assign them to Tachyon roles (including the Permissions Administrator role), and install Tachyon applications. The users and roles created by the installation account are then used for ongoing use and management of Tachyon.
-
Tachyon deliberately does not work with self-signed certificates for security reasons. Therefore, Tachyon Server cannot be installed on the same server as a Root CA, because its certificate is self-signed. For the same reason Tachyon client cannot be installed on a DC unless the client's Switch is configured to not require client certificates.
-
The Export all feature is available on the responses page for a question once it has finished retrieving all its responses. To enable Tachyon users with the appropriate permissions to use this feature you must ensure that the Microsoft Bulk Copy Program (BCP) is installed on the same Response Stack server(s) where the Tachyon Core component has been installed.
-
The following table lists firewall requirements for a single-server where Tachyon Master Stack and Response Stack are installed on the same server. The table assumes a remote SQL Server hosting TachyonMaster and TachyonResponses databases.Each Tachyon component described in the table has at least one output and/or input. For each Tachyon component with an output there is a matching input.Firewalls normally protect against incoming traffic from remote devices, however the table below also includes outgoing connections. The table does not include internal communications within the Server.
-
Tachyon Server installed
Remote workstation with a supported browser
The name and password for the server installation account
the AD account must be enabled
the account may already be assigned to other Tachyon roles either directly or via membership of an AD group role.
Two AD User accounts, Test User 1 and 2
must not be existing Tachyon users because they will be assigned specific roles for the purpose of these tests
must have email addresses and be able to read emails.
The 1E Tachyon Platform instruction set with two Verification instructions
the verification steps describe how to create this instruction set by uploading the 1E Tachyon Platform Product Pack
you may have already uploaded this Product Pack using the Product Pack Deployment Tool, either during Setup or after
the 1E Tachyon Platform Product Pack is included in the TachyonPlatform zip filethat you can download from the 1E Support Portal (1eportal.force.com/s/tachyontopicdetail).
At least one test device on which the 1E Client will be installed
1E Client installation source files and configuration details required by your Tachyon implementation.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_typedattr.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_typedattr.py
deleted file mode 100644
index bf9202eeab91d263f4badade4601efd111b91523..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_typedattr.py
+++ /dev/null
@@ -1,83 +0,0 @@
-from __future__ import annotations
-
-import sys
-from typing import Any, Callable, Mapping, TypeVar, overload
-
-from ._exceptions import TypedAttributeLookupError
-
-if sys.version_info >= (3, 8):
- from typing import final
-else:
- from typing_extensions import final
-
-T_Attr = TypeVar("T_Attr")
-T_Default = TypeVar("T_Default")
-undefined = object()
-
-
-def typed_attribute() -> Any:
- """Return a unique object, used to mark typed attributes."""
- return object()
-
-
-class TypedAttributeSet:
- """
- Superclass for typed attribute collections.
-
- Checks that every public attribute of every subclass has a type annotation.
- """
-
- def __init_subclass__(cls) -> None:
- annotations: dict[str, Any] = getattr(cls, "__annotations__", {})
- for attrname in dir(cls):
- if not attrname.startswith("_") and attrname not in annotations:
- raise TypeError(
- f"Attribute {attrname!r} is missing its type annotation"
- )
-
- super().__init_subclass__()
-
-
-class TypedAttributeProvider:
- """Base class for classes that wish to provide typed extra attributes."""
-
- @property
- def extra_attributes(self) -> Mapping[T_Attr, Callable[[], T_Attr]]:
- """
- A mapping of the extra attributes to callables that return the corresponding values.
-
- If the provider wraps another provider, the attributes from that wrapper should also be
- included in the returned mapping (but the wrapper may override the callables from the
- wrapped instance).
-
- """
- return {}
-
- @overload
- def extra(self, attribute: T_Attr) -> T_Attr:
- ...
-
- @overload
- def extra(self, attribute: T_Attr, default: T_Default) -> T_Attr | T_Default:
- ...
-
- @final
- def extra(self, attribute: Any, default: object = undefined) -> object:
- """
- extra(attribute, default=undefined)
-
- Return the value of the given typed extra attribute.
-
- :param attribute: the attribute (member of a :class:`~TypedAttributeSet`) to look for
- :param default: the value that should be returned if no value is found for the attribute
- :raises ~anyio.TypedAttributeLookupError: if the search failed and no default value was
- given
-
- """
- try:
- return self.extra_attributes[attribute]()
- except KeyError:
- if default is undefined:
- raise TypedAttributeLookupError("Attribute not found") from None
- else:
- return default
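The `_typedattr` module removed above is anyio's typed-attribute machinery: `TypedAttributeSet` declares annotated attribute markers, `TypedAttributeProvider.extra_attributes` maps those markers to zero-argument callables, and `extra()` resolves a marker with an optional default. A minimal usage sketch, assuming these names are re-exported from the top-level `anyio` package (an assumption; only the private module is shown here), with an illustrative provider class:

```python
from anyio import TypedAttributeProvider, TypedAttributeSet, typed_attribute


class ConnectionAttributes(TypedAttributeSet):
    # every public attribute needs a type annotation, enforced by __init_subclass__
    remote_name: str = typed_attribute()


class FakeConnection(TypedAttributeProvider):
    """Illustrative provider; the class name and attribute are made up."""

    def __init__(self, remote_name: str) -> None:
        self._remote_name = remote_name

    @property
    def extra_attributes(self):
        # markers map to zero-argument callables so values can be computed lazily
        return {ConnectionAttributes.remote_name: lambda: self._remote_name}


conn = FakeConnection("example-host")
print(conn.extra(ConnectionAttributes.remote_name))      # example-host
print(conn.extra(typed_attribute(), default="missing"))  # falls back to the default
```

Mapping markers to callables rather than plain values lets values be computed lazily, for example when one provider wraps another as described in the `extra_attributes` docstring above.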
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/abc/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/abc/__init__.py
deleted file mode 100644
index 72c34e544e1634e4f42c005506bac9b61ab095f5..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/abc/__init__.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from __future__ import annotations
-
-__all__ = (
- "AsyncResource",
- "IPAddressType",
- "IPSockAddrType",
- "SocketAttribute",
- "SocketStream",
- "SocketListener",
- "UDPSocket",
- "UNIXSocketStream",
- "UDPPacketType",
- "ConnectedUDPSocket",
- "UnreliableObjectReceiveStream",
- "UnreliableObjectSendStream",
- "UnreliableObjectStream",
- "ObjectReceiveStream",
- "ObjectSendStream",
- "ObjectStream",
- "ByteReceiveStream",
- "ByteSendStream",
- "ByteStream",
- "AnyUnreliableByteReceiveStream",
- "AnyUnreliableByteSendStream",
- "AnyUnreliableByteStream",
- "AnyByteReceiveStream",
- "AnyByteSendStream",
- "AnyByteStream",
- "Listener",
- "Process",
- "Event",
- "Condition",
- "Lock",
- "Semaphore",
- "CapacityLimiter",
- "CancelScope",
- "TaskGroup",
- "TaskStatus",
- "TestRunner",
- "BlockingPortal",
-)
-
-from typing import Any
-
-from ._resources import AsyncResource
-from ._sockets import (
- ConnectedUDPSocket,
- IPAddressType,
- IPSockAddrType,
- SocketAttribute,
- SocketListener,
- SocketStream,
- UDPPacketType,
- UDPSocket,
- UNIXSocketStream,
-)
-from ._streams import (
- AnyByteReceiveStream,
- AnyByteSendStream,
- AnyByteStream,
- AnyUnreliableByteReceiveStream,
- AnyUnreliableByteSendStream,
- AnyUnreliableByteStream,
- ByteReceiveStream,
- ByteSendStream,
- ByteStream,
- Listener,
- ObjectReceiveStream,
- ObjectSendStream,
- ObjectStream,
- UnreliableObjectReceiveStream,
- UnreliableObjectSendStream,
- UnreliableObjectStream,
-)
-from ._subprocesses import Process
-from ._tasks import TaskGroup, TaskStatus
-from ._testing import TestRunner
-
-# Re-exported here, for backwards compatibility
-# isort: off
-from .._core._synchronization import CapacityLimiter, Condition, Event, Lock, Semaphore
-from .._core._tasks import CancelScope
-from ..from_thread import BlockingPortal
-
-# Re-export imports so they look like they live directly in this package
-key: str
-value: Any
-for key, value in list(locals().items()):
- if getattr(value, "__module__", "").startswith("anyio.abc."):
- value.__module__ = __name__
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/filenames.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/filenames.py
deleted file mode 100644
index d279f89cc82cc280370d09ebdb16cb301f62aa57..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/filenames.py
+++ /dev/null
@@ -1,246 +0,0 @@
-"""
-This module implements the algorithm for converting between a "user name" -
-something that a user can choose arbitrarily inside a font editor - and a file
-name suitable for use in a wide range of operating systems and filesystems.
-
-The UFO 3 specification
-provides an example of an algorithm for such conversion, which avoids illegal
-characters, reserved file names, ambiguity between upper- and lower-case
-characters, and clashes with existing files.
-
-This code was originally copied from
-ufoLib
-by Tal Leming and is copyright (c) 2005-2016, The RoboFab Developers:
-
-- Erik van Blokland
-- Tal Leming
-- Just van Rossum
-"""
-
-
-illegalCharacters = r"\" * + / : < > ? [ \ ] | \0".split(" ")
-illegalCharacters += [chr(i) for i in range(1, 32)]
-illegalCharacters += [chr(0x7F)]
-reservedFileNames = "CON PRN AUX CLOCK$ NUL A:-Z: COM1".lower().split(" ")
-reservedFileNames += "LPT1 LPT2 LPT3 COM2 COM3 COM4".lower().split(" ")
-maxFileNameLength = 255
-
-
-class NameTranslationError(Exception):
- pass
-
-
-def userNameToFileName(userName, existing=[], prefix="", suffix=""):
- """Converts from a user name to a file name.
-
- Takes care to avoid illegal characters, reserved file names, ambiguity between
- upper- and lower-case characters, and clashes with existing files.
-
- Args:
- userName (str): The input file name.
- existing: A case-insensitive list of all existing file names.
- prefix: Prefix to be prepended to the file name.
- suffix: Suffix to be appended to the file name.
-
- Returns:
- A suitable filename.
-
- Raises:
- NameTranslationError: If no suitable name could be generated.
-
- Examples::
-
- >>> userNameToFileName("a") == "a"
- True
- >>> userNameToFileName("A") == "A_"
- True
- >>> userNameToFileName("AE") == "A_E_"
- True
- >>> userNameToFileName("Ae") == "A_e"
- True
- >>> userNameToFileName("ae") == "ae"
- True
- >>> userNameToFileName("aE") == "aE_"
- True
- >>> userNameToFileName("a.alt") == "a.alt"
- True
- >>> userNameToFileName("A.alt") == "A_.alt"
- True
- >>> userNameToFileName("A.Alt") == "A_.A_lt"
- True
- >>> userNameToFileName("A.aLt") == "A_.aL_t"
- True
- >>> userNameToFileName(u"A.alT") == "A_.alT_"
- True
- >>> userNameToFileName("T_H") == "T__H_"
- True
- >>> userNameToFileName("T_h") == "T__h"
- True
- >>> userNameToFileName("t_h") == "t_h"
- True
- >>> userNameToFileName("F_F_I") == "F__F__I_"
- True
- >>> userNameToFileName("f_f_i") == "f_f_i"
- True
- >>> userNameToFileName("Aacute_V.swash") == "A_acute_V_.swash"
- True
- >>> userNameToFileName(".notdef") == "_notdef"
- True
- >>> userNameToFileName("con") == "_con"
- True
- >>> userNameToFileName("CON") == "C_O_N_"
- True
- >>> userNameToFileName("con.alt") == "_con.alt"
- True
- >>> userNameToFileName("alt.con") == "alt._con"
- True
- """
- # the incoming name must be a str
- if not isinstance(userName, str):
- raise ValueError("The value for userName must be a string.")
- # establish the prefix and suffix lengths
- prefixLength = len(prefix)
- suffixLength = len(suffix)
- # replace an initial period with an _
- # if no prefix is to be added
- if not prefix and userName[0] == ".":
- userName = "_" + userName[1:]
- # filter the user name
- filteredUserName = []
- for character in userName:
- # replace illegal characters with _
- if character in illegalCharacters:
- character = "_"
- # add _ to all non-lower characters
- elif character != character.lower():
- character += "_"
- filteredUserName.append(character)
- userName = "".join(filteredUserName)
- # clip to 255
- sliceLength = maxFileNameLength - prefixLength - suffixLength
- userName = userName[:sliceLength]
- # test for illegal files names
- parts = []
- for part in userName.split("."):
- if part.lower() in reservedFileNames:
- part = "_" + part
- parts.append(part)
- userName = ".".join(parts)
- # test for clash
- fullName = prefix + userName + suffix
- if fullName.lower() in existing:
- fullName = handleClash1(userName, existing, prefix, suffix)
- # finished
- return fullName
-
-
-def handleClash1(userName, existing=[], prefix="", suffix=""):
- """
- existing should be a case-insensitive list
- of all existing file names.
-
- >>> prefix = ("0" * 5) + "."
- >>> suffix = "." + ("0" * 10)
- >>> existing = ["a" * 5]
-
- >>> e = list(existing)
- >>> handleClash1(userName="A" * 5, existing=e,
- ... prefix=prefix, suffix=suffix) == (
- ... '00000.AAAAA000000000000001.0000000000')
- True
-
- >>> e = list(existing)
- >>> e.append(prefix + "aaaaa" + "1".zfill(15) + suffix)
- >>> handleClash1(userName="A" * 5, existing=e,
- ... prefix=prefix, suffix=suffix) == (
- ... '00000.AAAAA000000000000002.0000000000')
- True
-
- >>> e = list(existing)
- >>> e.append(prefix + "AAAAA" + "2".zfill(15) + suffix)
- >>> handleClash1(userName="A" * 5, existing=e,
- ... prefix=prefix, suffix=suffix) == (
- ... '00000.AAAAA000000000000001.0000000000')
- True
- """
- # if the prefix length + user name length + suffix length + 15 is at
- # or past the maximum length, slice 15 characters off of the user name
- prefixLength = len(prefix)
- suffixLength = len(suffix)
- if prefixLength + len(userName) + suffixLength + 15 > maxFileNameLength:
- l = prefixLength + len(userName) + suffixLength + 15
- sliceLength = maxFileNameLength - l
- userName = userName[:sliceLength]
- finalName = None
- # try to add numbers to create a unique name
- counter = 1
- while finalName is None:
- name = userName + str(counter).zfill(15)
- fullName = prefix + name + suffix
- if fullName.lower() not in existing:
- finalName = fullName
- break
- else:
- counter += 1
- if counter >= 999999999999999:
- break
- # if there is a clash, go to the next fallback
- if finalName is None:
- finalName = handleClash2(existing, prefix, suffix)
- # finished
- return finalName
-
-
-def handleClash2(existing=[], prefix="", suffix=""):
- """
- existing should be a case-insensitive list
- of all existing file names.
-
- >>> prefix = ("0" * 5) + "."
- >>> suffix = "." + ("0" * 10)
- >>> existing = [prefix + str(i) + suffix for i in range(100)]
-
- >>> e = list(existing)
- >>> handleClash2(existing=e, prefix=prefix, suffix=suffix) == (
- ... '00000.100.0000000000')
- True
-
- >>> e = list(existing)
- >>> e.remove(prefix + "1" + suffix)
- >>> handleClash2(existing=e, prefix=prefix, suffix=suffix) == (
- ... '00000.1.0000000000')
- True
-
- >>> e = list(existing)
- >>> e.remove(prefix + "2" + suffix)
- >>> handleClash2(existing=e, prefix=prefix, suffix=suffix) == (
- ... '00000.2.0000000000')
- True
- """
- # calculate the longest possible string
- maxLength = maxFileNameLength - len(prefix) - len(suffix)
- maxValue = int("9" * maxLength)
- # try to find a number
- finalName = None
- counter = 1
- while finalName is None:
- fullName = prefix + str(counter) + suffix
- if fullName.lower() not in existing:
- finalName = fullName
- break
- else:
- counter += 1
- if counter >= maxValue:
- break
- # raise an error if nothing has been found
- if finalName is None:
- raise NameTranslationError("No unique name could be found.")
- # finished
- return finalName
-
-
-if __name__ == "__main__":
- import doctest
- import sys
-
- sys.exit(doctest.testmod().failed)
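The module deleted above depends only on the standard library, so its behaviour is easy to exercise directly. A short sketch, assuming `fontTools` is installed and still ships this code as `fontTools.misc.filenames`; the glyph names and `.glif` suffix are illustrative only:

```python
from fontTools.misc.filenames import userNameToFileName

# 'existing' must hold already-assigned file names in lower case, per the
# userNameToFileName docstring above.
existing = []
for user_name in ["A", "a", "A.alt"]:
    file_name = userNameToFileName(user_name, existing=existing, suffix=".glif")
    existing.append(file_name.lower())
    print(f"{user_name!r} -> {file_name!r}")

# Expected, per the doctests: 'A' -> 'A_.glif', 'a' -> 'a.glif', 'A.alt' -> 'A_.alt.glif'
```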
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_p_r_e_p.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_p_r_e_p.py
deleted file mode 100644
index b4b92f3e924ba2f20ade9a6cca45ce78284ffe21..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_p_r_e_p.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from fontTools import ttLib
-
-superclass = ttLib.getTableClass("fpgm")
-
-
-class table__p_r_e_p(superclass):
- pass
diff --git a/spaces/cncn102/bingo1/src/pages/api/image.ts b/spaces/cncn102/bingo1/src/pages/api/image.ts
deleted file mode 100644
index 12e8ce3834a6b7198dce00f51ed253b052cc69ca..0000000000000000000000000000000000000000
--- a/spaces/cncn102/bingo1/src/pages/api/image.ts
+++ /dev/null
@@ -1,38 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { debug } from '@/lib/isomorphic'
-import { createHeaders } from '@/lib/utils'
-import { createImage } from '@/lib/bots/bing/utils'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- const { prompt, id } = req.query
- if (!prompt) {
- return res.json({
- result: {
- value: 'Image',
- message: 'No Prompt'
- }
- })
- }
- try {
- const headers = createHeaders(req.cookies, 'image')
-
- debug('headers', headers)
- const response = await createImage(String(prompt), String(id), {
- ...headers,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- })
- res.writeHead(200, {
- 'Content-Type': 'text/plain; charset=UTF-8',
- })
- res.end(response)
- } catch (e) {
- res.json({
- result: {
- value: 'Error',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/codeparrot/code-generation-models/datasets/github_code.md b/spaces/codeparrot/code-generation-models/datasets/github_code.md
deleted file mode 100644
index 6e91ae854541f45866f1f37f086dcee4eaf251ac..0000000000000000000000000000000000000000
--- a/spaces/codeparrot/code-generation-models/datasets/github_code.md
+++ /dev/null
@@ -1,26 +0,0 @@
-We also released [Github code dataset](https://huggingface.co/datasets/codeparrot/github-code), 1TB of code data from Github repositories in 32 programming languages. It was created from the public GitHub dataset on Google [BigQuery](https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code). The dataset can be loaded in streaming mode if you don't want to download it because of memory limitations; this will create an iterable dataset:
-
-```python
-from datasets import load_dataset
-
-ds = load_dataset("codeparrot/github-code", streaming=True, split="train")
-print(next(iter(ds)))
-
-#OUTPUT:
-{
- 'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
- 'repo_name': 'MirekSz/webpack-es6-ts',
- 'path': 'app/mods/mod190.js',
- 'language': 'JavaScript',
- 'license': 'isc',
- 'size': 73
-}
-
-```
-You can see that in addition to the code, the samples include some metadata: repo name, path, language, license, and the size of the file. Below is the distribution of programming languages in this dataset.
-
-
-
-
-
-For model-specific information about the pretraining dataset, please select a model below:
\ No newline at end of file
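Because the dataset is streamed, it can be inspected lazily without downloading the full 1TB. A small sketch that tallies the `language` metadata field shown in the sample above; the 1,000-sample cut-off is arbitrary and the call needs network access to the Hugging Face Hub:

```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

ds = load_dataset("codeparrot/github-code", streaming=True, split="train")

# Tally the 'language' field over the first 1,000 streamed samples.
language_counts = Counter(sample["language"] for sample in islice(ds, 1000))
print(language_counts.most_common(5))
```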
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/hpeldsp_init_aarch64.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/hpeldsp_init_aarch64.c
deleted file mode 100644
index 144ae2bcc4e6eb852a3b6421c8df7cd778ac7578..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/hpeldsp_init_aarch64.c
+++ /dev/null
@@ -1,123 +0,0 @@
-/*
- * ARM NEON optimised DSP functions
- * Copyright (c) 2008 Mans Rullgard
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <stdint.h>  /* inferred: the prototypes below use uint8_t and ptrdiff_t */
-#include <stddef.h>
-
-#include "config.h"
-
-#include "libavutil/attributes.h"
-#include "libavutil/cpu.h"
-#include "libavutil/aarch64/cpu.h"
-#include "libavcodec/hpeldsp.h"
-
-void ff_put_pixels16_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_put_pixels16_x2_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_put_pixels16_y2_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_put_pixels16_xy2_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_put_pixels8_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_put_pixels8_x2_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_put_pixels8_y2_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_put_pixels8_xy2_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-
-void ff_put_pixels16_x2_no_rnd_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_put_pixels16_y2_no_rnd_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_put_pixels16_xy2_no_rnd_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_put_pixels8_x2_no_rnd_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_put_pixels8_y2_no_rnd_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_put_pixels8_xy2_no_rnd_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-
-void ff_avg_pixels16_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_avg_pixels16_x2_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_avg_pixels16_y2_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_avg_pixels16_xy2_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_avg_pixels8_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_avg_pixels8_x2_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_avg_pixels8_y2_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_avg_pixels8_xy2_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-
-void ff_avg_pixels16_x2_no_rnd_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_avg_pixels16_y2_no_rnd_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-void ff_avg_pixels16_xy2_no_rnd_neon(uint8_t *block, const uint8_t *pixels,
- ptrdiff_t line_size, int h);
-
-av_cold void ff_hpeldsp_init_aarch64(HpelDSPContext *c, int flags)
-{
- int cpu_flags = av_get_cpu_flags();
-
- if (have_neon(cpu_flags)) {
- c->put_pixels_tab[0][0] = ff_put_pixels16_neon;
- c->put_pixels_tab[0][1] = ff_put_pixels16_x2_neon;
- c->put_pixels_tab[0][2] = ff_put_pixels16_y2_neon;
- c->put_pixels_tab[0][3] = ff_put_pixels16_xy2_neon;
- c->put_pixels_tab[1][0] = ff_put_pixels8_neon;
- c->put_pixels_tab[1][1] = ff_put_pixels8_x2_neon;
- c->put_pixels_tab[1][2] = ff_put_pixels8_y2_neon;
- c->put_pixels_tab[1][3] = ff_put_pixels8_xy2_neon;
-
- c->put_no_rnd_pixels_tab[0][0] = ff_put_pixels16_neon;
- c->put_no_rnd_pixels_tab[0][1] = ff_put_pixels16_x2_no_rnd_neon;
- c->put_no_rnd_pixels_tab[0][2] = ff_put_pixels16_y2_no_rnd_neon;
- c->put_no_rnd_pixels_tab[0][3] = ff_put_pixels16_xy2_no_rnd_neon;
- c->put_no_rnd_pixels_tab[1][0] = ff_put_pixels8_neon;
- c->put_no_rnd_pixels_tab[1][1] = ff_put_pixels8_x2_no_rnd_neon;
- c->put_no_rnd_pixels_tab[1][2] = ff_put_pixels8_y2_no_rnd_neon;
- c->put_no_rnd_pixels_tab[1][3] = ff_put_pixels8_xy2_no_rnd_neon;
-
- c->avg_pixels_tab[0][0] = ff_avg_pixels16_neon;
- c->avg_pixels_tab[0][1] = ff_avg_pixels16_x2_neon;
- c->avg_pixels_tab[0][2] = ff_avg_pixels16_y2_neon;
- c->avg_pixels_tab[0][3] = ff_avg_pixels16_xy2_neon;
- c->avg_pixels_tab[1][0] = ff_avg_pixels8_neon;
- c->avg_pixels_tab[1][1] = ff_avg_pixels8_x2_neon;
- c->avg_pixels_tab[1][2] = ff_avg_pixels8_y2_neon;
- c->avg_pixels_tab[1][3] = ff_avg_pixels8_xy2_neon;
-
- c->avg_no_rnd_pixels_tab[0] = ff_avg_pixels16_neon;
- c->avg_no_rnd_pixels_tab[1] = ff_avg_pixels16_x2_no_rnd_neon;
- c->avg_no_rnd_pixels_tab[2] = ff_avg_pixels16_y2_no_rnd_neon;
- c->avg_no_rnd_pixels_tab[3] = ff_avg_pixels16_xy2_no_rnd_neon;
- }
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/amfenc_h264.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/amfenc_h264.c
deleted file mode 100644
index eaf7f974f3cf18faf9f68cceffbd69b357ca97a7..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/amfenc_h264.c
+++ /dev/null
@@ -1,399 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-
-#include "libavutil/internal.h"
-#include "libavutil/opt.h"
-#include "amfenc.h"
-#include "codec_internal.h"
-#include "internal.h"
-
-#define OFFSET(x) offsetof(AmfContext, x)
-#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
-
-static const AVOption options[] = {
- // Static
- /// Usage
- { "usage", "Encoder Usage", OFFSET(usage), AV_OPT_TYPE_INT, { .i64 = AMF_VIDEO_ENCODER_USAGE_TRANSCONDING }, AMF_VIDEO_ENCODER_USAGE_TRANSCONDING, AMF_VIDEO_ENCODER_USAGE_WEBCAM, VE, "usage" },
- { "transcoding", "Generic Transcoding", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_USAGE_TRANSCONDING }, 0, 0, VE, "usage" },
- { "ultralowlatency","", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_USAGE_ULTRA_LOW_LATENCY }, 0, 0, VE, "usage" },
- { "lowlatency", "", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_USAGE_LOW_LATENCY }, 0, 0, VE, "usage" },
- { "webcam", "Webcam", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_USAGE_WEBCAM }, 0, 0, VE, "usage" },
-
- /// Profile,
- { "profile", "Profile", OFFSET(profile),AV_OPT_TYPE_INT, { .i64 = AMF_VIDEO_ENCODER_PROFILE_MAIN }, AMF_VIDEO_ENCODER_PROFILE_BASELINE, AMF_VIDEO_ENCODER_PROFILE_CONSTRAINED_HIGH, VE, "profile" },
- { "main", "", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_PROFILE_MAIN }, 0, 0, VE, "profile" },
- { "high", "", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_PROFILE_HIGH }, 0, 0, VE, "profile" },
- { "constrained_baseline", "", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_PROFILE_CONSTRAINED_BASELINE }, 0, 0, VE, "profile" },
- { "constrained_high", "", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_PROFILE_CONSTRAINED_HIGH }, 0, 0, VE, "profile" },
-
- /// Profile Level
- { "level", "Profile Level", OFFSET(level), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 62, VE, "level" },
- { "auto", "", 0, AV_OPT_TYPE_CONST, { .i64 = 0 }, 0, 0, VE, "level" },
- { "1.0", "", 0, AV_OPT_TYPE_CONST, { .i64 = 10 }, 0, 0, VE, "level" },
- { "1.1", "", 0, AV_OPT_TYPE_CONST, { .i64 = 11 }, 0, 0, VE, "level" },
- { "1.2", "", 0, AV_OPT_TYPE_CONST, { .i64 = 12 }, 0, 0, VE, "level" },
- { "1.3", "", 0, AV_OPT_TYPE_CONST, { .i64 = 13 }, 0, 0, VE, "level" },
- { "2.0", "", 0, AV_OPT_TYPE_CONST, { .i64 = 20 }, 0, 0, VE, "level" },
- { "2.1", "", 0, AV_OPT_TYPE_CONST, { .i64 = 21 }, 0, 0, VE, "level" },
- { "2.2", "", 0, AV_OPT_TYPE_CONST, { .i64 = 22 }, 0, 0, VE, "level" },
- { "3.0", "", 0, AV_OPT_TYPE_CONST, { .i64 = 30 }, 0, 0, VE, "level" },
- { "3.1", "", 0, AV_OPT_TYPE_CONST, { .i64 = 31 }, 0, 0, VE, "level" },
- { "3.2", "", 0, AV_OPT_TYPE_CONST, { .i64 = 32 }, 0, 0, VE, "level" },
- { "4.0", "", 0, AV_OPT_TYPE_CONST, { .i64 = 40 }, 0, 0, VE, "level" },
- { "4.1", "", 0, AV_OPT_TYPE_CONST, { .i64 = 41 }, 0, 0, VE, "level" },
- { "4.2", "", 0, AV_OPT_TYPE_CONST, { .i64 = 42 }, 0, 0, VE, "level" },
- { "5.0", "", 0, AV_OPT_TYPE_CONST, { .i64 = 50 }, 0, 0, VE, "level" },
- { "5.1", "", 0, AV_OPT_TYPE_CONST, { .i64 = 51 }, 0, 0, VE, "level" },
- { "5.2", "", 0, AV_OPT_TYPE_CONST, { .i64 = 52 }, 0, 0, VE, "level" },
- { "6.0", "", 0, AV_OPT_TYPE_CONST, { .i64 = 60 }, 0, 0, VE, "level" },
- { "6.1", "", 0, AV_OPT_TYPE_CONST, { .i64 = 61 }, 0, 0, VE, "level" },
- { "6.2", "", 0, AV_OPT_TYPE_CONST, { .i64 = 62 }, 0, 0, VE, "level" },
-
-
- /// Quality Preset
- { "quality", "Quality Preference", OFFSET(quality), AV_OPT_TYPE_INT, { .i64 = AMF_VIDEO_ENCODER_QUALITY_PRESET_SPEED }, AMF_VIDEO_ENCODER_QUALITY_PRESET_BALANCED, AMF_VIDEO_ENCODER_QUALITY_PRESET_QUALITY, VE, "quality" },
- { "speed", "Prefer Speed", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_QUALITY_PRESET_SPEED }, 0, 0, VE, "quality" },
- { "balanced", "Balanced", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_QUALITY_PRESET_BALANCED }, 0, 0, VE, "quality" },
- { "quality", "Prefer Quality", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_QUALITY_PRESET_QUALITY }, 0, 0, VE, "quality" },
-
- // Dynamic
- /// Rate Control Method
- { "rc", "Rate Control Method", OFFSET(rate_control_mode), AV_OPT_TYPE_INT, { .i64 = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_UNKNOWN }, AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_UNKNOWN, AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_LATENCY_CONSTRAINED_VBR, VE, "rc" },
- { "cqp", "Constant Quantization Parameter", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CONSTANT_QP }, 0, 0, VE, "rc" },
- { "cbr", "Constant Bitrate", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CBR }, 0, 0, VE, "rc" },
- { "vbr_peak", "Peak Constrained Variable Bitrate", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_PEAK_CONSTRAINED_VBR }, 0, 0, VE, "rc" },
- { "vbr_latency", "Latency Constrained Variable Bitrate", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_LATENCY_CONSTRAINED_VBR }, 0, 0, VE, "rc" },
-
- /// Enforce HRD, Filler Data, VBAQ, Frame Skipping
- { "enforce_hrd", "Enforce HRD", OFFSET(enforce_hrd), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
- { "filler_data", "Filler Data Enable", OFFSET(filler_data), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
- { "vbaq", "Enable VBAQ", OFFSET(enable_vbaq), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
- { "frame_skipping", "Rate Control Based Frame Skip", OFFSET(skip_frame), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
-
- /// QP Values
- { "qp_i", "Quantization Parameter for I-Frame", OFFSET(qp_i), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 51, VE },
- { "qp_p", "Quantization Parameter for P-Frame", OFFSET(qp_p), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 51, VE },
- { "qp_b", "Quantization Parameter for B-Frame", OFFSET(qp_b), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 51, VE },
-
- /// Pre-Pass, Pre-Analysis, Two-Pass
- { "preanalysis", "Pre-Analysis Mode", OFFSET(preanalysis), AV_OPT_TYPE_BOOL,{ .i64 = 0 }, 0, 1, VE, NULL },
-
- /// Maximum Access Unit Size
- { "max_au_size", "Maximum Access Unit Size for rate control (in bits)", OFFSET(max_au_size), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, VE },
-
- /// Header Insertion Spacing
- { "header_spacing", "Header Insertion Spacing", OFFSET(header_spacing), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 1000, VE },
-
- /// B-Frames
- // BPicturesPattern=bf
- { "bf_delta_qp", "B-Picture Delta QP", OFFSET(b_frame_delta_qp), AV_OPT_TYPE_INT, { .i64 = 4 }, -10, 10, VE },
- { "bf_ref", "Enable Reference to B-Frames", OFFSET(b_frame_ref), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, VE },
- { "bf_ref_delta_qp","Reference B-Picture Delta QP", OFFSET(ref_b_frame_delta_qp), AV_OPT_TYPE_INT, { .i64 = 4 }, -10, 10, VE },
-
- /// Intra-Refresh
- { "intra_refresh_mb","Intra Refresh MBs Number Per Slot in Macroblocks", OFFSET(intra_refresh_mb), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, VE },
-
- /// coder
- { "coder", "Coding Type", OFFSET(coding_mode), AV_OPT_TYPE_INT, { .i64 = AMF_VIDEO_ENCODER_UNDEFINED }, AMF_VIDEO_ENCODER_UNDEFINED, AMF_VIDEO_ENCODER_CALV, VE, "coder" },
- { "auto", "Automatic", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_UNDEFINED }, 0, 0, VE, "coder" },
- { "cavlc", "Context Adaptive Variable-Length Coding", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_CALV }, 0, 0, VE, "coder" },
- { "cabac", "Context Adaptive Binary Arithmetic Coding", 0, AV_OPT_TYPE_CONST, { .i64 = AMF_VIDEO_ENCODER_CABAC }, 0, 0, VE, "coder" },
-
- { "me_half_pel", "Enable ME Half Pixel", OFFSET(me_half_pel), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, VE },
- { "me_quarter_pel", "Enable ME Quarter Pixel", OFFSET(me_quarter_pel),AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, VE },
-
- { "aud", "Inserts AU Delimiter NAL unit", OFFSET(aud) ,AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
-
- { "log_to_dbg", "Enable AMF logging to debug output", OFFSET(log_to_dbg) , AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
-
- { NULL }
-};
-
-static av_cold int amf_encode_init_h264(AVCodecContext *avctx)
-{
- int ret = 0;
- AMF_RESULT res = AMF_OK;
- AmfContext *ctx = avctx->priv_data;
- AMFVariantStruct var = { 0 };
- amf_int64 profile = 0;
- amf_int64 profile_level = 0;
- AMFBuffer *buffer;
- AMFGuid guid;
- AMFRate framerate;
- AMFSize framesize = AMFConstructSize(avctx->width, avctx->height);
- int deblocking_filter = (avctx->flags & AV_CODEC_FLAG_LOOP_FILTER) ? 1 : 0;
-
- if (avctx->framerate.num > 0 && avctx->framerate.den > 0) {
- framerate = AMFConstructRate(avctx->framerate.num, avctx->framerate.den);
- } else {
- framerate = AMFConstructRate(avctx->time_base.den, avctx->time_base.num * avctx->ticks_per_frame);
- }
-
- if ((ret = ff_amf_encode_init(avctx)) != 0)
- return ret;
-
- // Static parameters
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_USAGE, ctx->usage);
-
- AMF_ASSIGN_PROPERTY_SIZE(res, ctx->encoder, AMF_VIDEO_ENCODER_FRAMESIZE, framesize);
-
- AMF_ASSIGN_PROPERTY_RATE(res, ctx->encoder, AMF_VIDEO_ENCODER_FRAMERATE, framerate);
-
- switch (avctx->profile) {
- case FF_PROFILE_H264_BASELINE:
- profile = AMF_VIDEO_ENCODER_PROFILE_BASELINE;
- break;
- case FF_PROFILE_H264_MAIN:
- profile = AMF_VIDEO_ENCODER_PROFILE_MAIN;
- break;
- case FF_PROFILE_H264_HIGH:
- profile = AMF_VIDEO_ENCODER_PROFILE_HIGH;
- break;
- case FF_PROFILE_H264_CONSTRAINED_BASELINE:
- profile = AMF_VIDEO_ENCODER_PROFILE_CONSTRAINED_BASELINE;
- break;
- case (FF_PROFILE_H264_HIGH | FF_PROFILE_H264_CONSTRAINED):
- profile = AMF_VIDEO_ENCODER_PROFILE_CONSTRAINED_HIGH;
- break;
- }
- if (profile == 0) {
- profile = ctx->profile;
- }
-
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_PROFILE, profile);
-
- profile_level = avctx->level;
- if (profile_level == FF_LEVEL_UNKNOWN) {
- profile_level = ctx->level;
- }
- if (profile_level != 0) {
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_PROFILE_LEVEL, profile_level);
- }
-
- // Maximum Reference Frames
- if (avctx->refs != -1) {
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_MAX_NUM_REFRAMES, avctx->refs);
- }
- if (avctx->sample_aspect_ratio.den && avctx->sample_aspect_ratio.num) {
- AMFRatio ratio = AMFConstructRatio(avctx->sample_aspect_ratio.num, avctx->sample_aspect_ratio.den);
- AMF_ASSIGN_PROPERTY_RATIO(res, ctx->encoder, AMF_VIDEO_ENCODER_ASPECT_RATIO, ratio);
- }
-
- /// Color Range (Partial/TV/MPEG or Full/PC/JPEG)
- if (avctx->color_range == AVCOL_RANGE_JPEG) {
- AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_FULL_RANGE_COLOR, 1);
- }
-
- // autodetect rate control method
- if (ctx->rate_control_mode == AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_UNKNOWN) {
- if (ctx->qp_i != -1 || ctx->qp_p != -1 || ctx->qp_b != -1) {
- ctx->rate_control_mode = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CONSTANT_QP;
- av_log(ctx, AV_LOG_DEBUG, "Rate control turned to CQP\n");
- } else if (avctx->rc_max_rate > 0 ) {
- ctx->rate_control_mode = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_PEAK_CONSTRAINED_VBR;
- av_log(ctx, AV_LOG_DEBUG, "Rate control turned to Peak VBR\n");
- } else {
- ctx->rate_control_mode = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CBR;
- av_log(ctx, AV_LOG_DEBUG, "Rate control turned to CBR\n");
- }
- }
-
- if (ctx->rate_control_mode == AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CONSTANT_QP) {
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_RATE_CONTROL_PREANALYSIS_ENABLE, AMF_VIDEO_ENCODER_PREENCODE_DISABLED);
- if (ctx->preanalysis)
- av_log(ctx, AV_LOG_WARNING, "Pre-Analysis is not supported by cqp Rate Control Method, automatically disabled\n");
- } else {
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_RATE_CONTROL_PREANALYSIS_ENABLE, ctx->preanalysis);
- }
-
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_QUALITY_PRESET, ctx->quality);
-
- // Dynamic parameters
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD, ctx->rate_control_mode);
-
- /// VBV Buffer
- if (avctx->rc_buffer_size != 0) {
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_VBV_BUFFER_SIZE, avctx->rc_buffer_size);
- if (avctx->rc_initial_buffer_occupancy != 0) {
- int amf_buffer_fullness = avctx->rc_initial_buffer_occupancy * 64 / avctx->rc_buffer_size;
- if (amf_buffer_fullness > 64)
- amf_buffer_fullness = 64;
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_INITIAL_VBV_BUFFER_FULLNESS, amf_buffer_fullness);
- }
- }
- /// Maximum Access Unit Size
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_MAX_AU_SIZE, ctx->max_au_size);
-
- if (ctx->max_au_size)
- ctx->enforce_hrd = 1;
-
- // QP Minimum / Maximum
- if (ctx->rate_control_mode == AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CONSTANT_QP) {
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_MIN_QP, 0);
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_MAX_QP, 51);
- } else {
- if (avctx->qmin != -1) {
- int qval = avctx->qmin > 51 ? 51 : avctx->qmin;
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_MIN_QP, qval);
- }
- if (avctx->qmax != -1) {
- int qval = avctx->qmax > 51 ? 51 : avctx->qmax;
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_MAX_QP, qval);
- }
- }
- // QP Values
- if (ctx->qp_i != -1)
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_QP_I, ctx->qp_i);
- if (ctx->qp_p != -1)
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_QP_P, ctx->qp_p);
- if (ctx->qp_b != -1)
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_QP_B, ctx->qp_b);
-
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_TARGET_BITRATE, avctx->bit_rate);
-
- if (ctx->rate_control_mode == AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CBR) {
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_PEAK_BITRATE, avctx->bit_rate);
- }
- if (avctx->rc_max_rate) {
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_PEAK_BITRATE, avctx->rc_max_rate);
- } else if (ctx->rate_control_mode == AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_PEAK_CONSTRAINED_VBR) {
- av_log(ctx, AV_LOG_WARNING, "rate control mode is PEAK_CONSTRAINED_VBR but rc_max_rate is not set\n");
- }
-
- // Initialize Encoder
- res = ctx->encoder->pVtbl->Init(ctx->encoder, ctx->format, avctx->width, avctx->height);
- AMF_RETURN_IF_FALSE(ctx, res == AMF_OK, AVERROR_BUG, "encoder->Init() failed with error %d\n", res);
-
- // Enforce HRD, Filler Data, VBAQ, Frame Skipping, Deblocking Filter
- AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_ENFORCE_HRD, !!ctx->enforce_hrd);
- AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_FILLER_DATA_ENABLE, !!ctx->filler_data);
- AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_RATE_CONTROL_SKIP_FRAME_ENABLE, !!ctx->skip_frame);
- if (ctx->rate_control_mode == AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CONSTANT_QP) {
- AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_ENABLE_VBAQ, 0);
- if (ctx->enable_vbaq)
- av_log(ctx, AV_LOG_WARNING, "VBAQ is not supported by cqp Rate Control Method, automatically disabled\n");
- } else {
- AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_ENABLE_VBAQ, !!ctx->enable_vbaq);
- }
- AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_DE_BLOCKING_FILTER, !!deblocking_filter);
-
- // B-Frames
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_B_PIC_PATTERN, avctx->max_b_frames);
- if (res != AMF_OK) {
- res = ctx->encoder->pVtbl->GetProperty(ctx->encoder, AMF_VIDEO_ENCODER_B_PIC_PATTERN, &var);
- av_log(ctx, AV_LOG_WARNING, "B-frames=%d is not supported by this GPU, switched to %d\n",
- avctx->max_b_frames, (int)var.int64Value);
- avctx->max_b_frames = (int)var.int64Value;
- }
- if (avctx->max_b_frames) {
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_B_PIC_DELTA_QP, ctx->b_frame_delta_qp);
- AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_B_REFERENCE_ENABLE, !!ctx->b_frame_ref);
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_REF_B_PIC_DELTA_QP, ctx->ref_b_frame_delta_qp);
- }
-
- // Keyframe Interval
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_IDR_PERIOD, avctx->gop_size);
-
- // Header Insertion Spacing
- if (ctx->header_spacing >= 0)
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_HEADER_INSERTION_SPACING, ctx->header_spacing);
-
- // Intra-Refresh, Slicing
- if (ctx->intra_refresh_mb > 0)
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_INTRA_REFRESH_NUM_MBS_PER_SLOT, ctx->intra_refresh_mb);
- if (avctx->slices > 1)
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_SLICES_PER_FRAME, avctx->slices);
-
- // Coding
- if (ctx->coding_mode != 0)
- AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_CABAC_ENABLE, ctx->coding_mode);
-
- // Motion Estimation
- AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_MOTION_HALF_PIXEL, !!ctx->me_half_pel);
- AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_MOTION_QUARTERPIXEL, !!ctx->me_quarter_pel);
-
- // fill extradata
- res = AMFVariantInit(&var);
- AMF_RETURN_IF_FALSE(ctx, res == AMF_OK, AVERROR_BUG, "AMFVariantInit() failed with error %d\n", res);
-
- res = ctx->encoder->pVtbl->GetProperty(ctx->encoder, AMF_VIDEO_ENCODER_EXTRADATA, &var);
- AMF_RETURN_IF_FALSE(ctx, res == AMF_OK, AVERROR_BUG, "GetProperty(AMF_VIDEO_ENCODER_EXTRADATA) failed with error %d\n", res);
- AMF_RETURN_IF_FALSE(ctx, var.pInterface != NULL, AVERROR_BUG, "GetProperty(AMF_VIDEO_ENCODER_EXTRADATA) returned NULL\n");
-
- guid = IID_AMFBuffer();
-
- res = var.pInterface->pVtbl->QueryInterface(var.pInterface, &guid, (void**)&buffer); // query for buffer interface
- if (res != AMF_OK) {
- var.pInterface->pVtbl->Release(var.pInterface);
- }
- AMF_RETURN_IF_FALSE(ctx, res == AMF_OK, AVERROR_BUG, "QueryInterface(IID_AMFBuffer) failed with error %d\n", res);
-
- avctx->extradata_size = (int)buffer->pVtbl->GetSize(buffer);
- avctx->extradata = av_mallocz(avctx->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
- if (!avctx->extradata) {
- buffer->pVtbl->Release(buffer);
- var.pInterface->pVtbl->Release(var.pInterface);
- return AVERROR(ENOMEM);
- }
- memcpy(avctx->extradata, buffer->pVtbl->GetNative(buffer), avctx->extradata_size);
-
- buffer->pVtbl->Release(buffer);
- var.pInterface->pVtbl->Release(var.pInterface);
-
- return 0;
-}
-
-static const FFCodecDefault defaults[] = {
- { "refs", "-1" },
- { "aspect", "0" },
- { "qmin", "-1" },
- { "qmax", "-1" },
- { "b", "2M" },
- { "g", "250" },
- { "slices", "1" },
- { "flags", "+loop"},
- { NULL },
-};
-
-static const AVClass h264_amf_class = {
- .class_name = "h264_amf",
- .item_name = av_default_item_name,
- .option = options,
- .version = LIBAVUTIL_VERSION_INT,
-};
-
-const FFCodec ff_h264_amf_encoder = {
- .p.name = "h264_amf",
- CODEC_LONG_NAME("AMD AMF H.264 Encoder"),
- .p.type = AVMEDIA_TYPE_VIDEO,
- .p.id = AV_CODEC_ID_H264,
- .init = amf_encode_init_h264,
- FF_CODEC_RECEIVE_PACKET_CB(ff_amf_receive_packet),
- .close = ff_amf_encode_close,
- .priv_data_size = sizeof(AmfContext),
- .p.priv_class = &h264_amf_class,
- .defaults = defaults,
- .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
- AV_CODEC_CAP_DR1,
- .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE |
- FF_CODEC_CAP_INIT_CLEANUP,
- .p.pix_fmts = ff_amf_pix_fmts,
- .p.wrapper_name = "amf",
- .hw_configs = ff_amfenc_hw_configs,
-};
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mediacodec_sw_buffer.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mediacodec_sw_buffer.h
deleted file mode 100644
index 574fb529d40fea8505eebd35eb818067cfa3f7ae..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mediacodec_sw_buffer.h
+++ /dev/null
@@ -1,62 +0,0 @@
-/*
- * Android MediaCodec software buffer copy functions
- *
- * Copyright (c) 2015-2016 Matthieu Bouron
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_MEDIACODEC_SW_BUFFER_H
-#define AVCODEC_MEDIACODEC_SW_BUFFER_H
-
-#include <stddef.h>  /* inferred: size_t appears in the prototypes below */
-
-#include "libavutil/frame.h"
-
-#include "avcodec.h"
-#include "mediacodec_wrapper.h"
-#include "mediacodecdec_common.h"
-
-void ff_mediacodec_sw_buffer_copy_yuv420_planar(AVCodecContext *avctx,
- MediaCodecDecContext *s,
- uint8_t *data,
- size_t size,
- FFAMediaCodecBufferInfo *info,
- AVFrame *frame);
-
-void ff_mediacodec_sw_buffer_copy_yuv420_semi_planar(AVCodecContext *avctx,
- MediaCodecDecContext *s,
- uint8_t *data,
- size_t size,
- FFAMediaCodecBufferInfo *info,
- AVFrame *frame);
-
-void ff_mediacodec_sw_buffer_copy_yuv420_packed_semi_planar(AVCodecContext *avctx,
- MediaCodecDecContext *s,
- uint8_t *data,
- size_t size,
- FFAMediaCodecBufferInfo *info,
- AVFrame *frame);
-
-void ff_mediacodec_sw_buffer_copy_yuv420_packed_semi_planar_64x32Tile2m8ka(AVCodecContext *avctx,
- MediaCodecDecContext *s,
- uint8_t *data,
- size_t size,
- FFAMediaCodecBufferInfo *info,
- AVFrame *frame);
-
-#endif /* AVCODEC_MEDIACODEC_SW_BUFFER_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/h264dsp_mips.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/h264dsp_mips.h
deleted file mode 100644
index 93a201c66a7e31289b1103d1ad6cf2508fd0e82e..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/h264dsp_mips.h
+++ /dev/null
@@ -1,581 +0,0 @@
-/*
- * Copyright (c) 2015 Parag Salasakar (Parag.Salasakar@imgtec.com)
- Zhou Xiaoyong
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_MIPS_H264DSP_MIPS_H
-#define AVCODEC_MIPS_H264DSP_MIPS_H
-
-#include "libavcodec/h264dec.h"
-#include "constants.h"
-
-void ff_h264_h_lpf_luma_inter_msa(uint8_t *src, ptrdiff_t stride,
- int alpha, int beta, int8_t *tc0);
-void ff_h264_v_lpf_luma_inter_msa(uint8_t *src, ptrdiff_t stride,
- int alpha, int beta, int8_t *tc0);
-void ff_h264_h_lpf_chroma_inter_msa(uint8_t *src, ptrdiff_t stride,
- int alpha, int beta, int8_t *tc0);
-void ff_h264_v_lpf_chroma_inter_msa(uint8_t *src, ptrdiff_t stride,
- int alpha, int beta, int8_t *tc0);
-void ff_h264_h_loop_filter_chroma422_msa(uint8_t *src, ptrdiff_t stride,
- int32_t alpha, int32_t beta,
- int8_t *tc0);
-void ff_h264_h_loop_filter_chroma422_mbaff_msa(uint8_t *src, ptrdiff_t stride,
- int32_t alpha, int32_t beta,
- int8_t *tc0);
-void ff_h264_h_loop_filter_luma_mbaff_msa(uint8_t *src, ptrdiff_t stride,
- int32_t alpha, int32_t beta,
- int8_t *tc0);
-
-void ff_h264_idct_add_msa(uint8_t *dst, int16_t *src, int32_t dst_stride);
-void ff_h264_idct4x4_addblk_dc_msa(uint8_t *dst, int16_t *src,
- int32_t dst_stride);
-void ff_h264_deq_idct_luma_dc_msa(int16_t *dst, int16_t *src,
- int32_t de_q_val);
-void ff_h264_idct_add16_msa(uint8_t *dst, const int32_t *blk_offset,
- int16_t *block, int32_t stride,
- const uint8_t nnzc[5 * 8]);
-void ff_h264_idct_add16_intra_msa(uint8_t *dst, const int32_t *blk_offset,
- int16_t *block, int32_t dst_stride,
- const uint8_t nnzc[5 * 8]);
-void ff_h264_idct_add8_msa(uint8_t **dst, const int32_t *blk_offset,
- int16_t *block, int32_t dst_stride,
- const uint8_t nnzc[15 * 8]);
-void ff_h264_idct_add8_422_msa(uint8_t **dst, const int32_t *blk_offset,
- int16_t *block, int32_t dst_stride,
- const uint8_t nnzc[15 * 8]);
-void ff_h264_idct8_addblk_msa(uint8_t *dst, int16_t *src, int32_t dst_stride);
-void ff_h264_idct8_dc_addblk_msa(uint8_t *dst, int16_t *src,
- int32_t dst_stride);
-void ff_h264_idct8_add4_msa(uint8_t *dst, const int *blk_offset,
- int16_t *blk, int dst_stride,
- const uint8_t nnzc[5 * 8]);
-
-void ff_h264_h_lpf_luma_intra_msa(uint8_t *src, ptrdiff_t stride,
- int alpha, int beta);
-void ff_h264_v_lpf_luma_intra_msa(uint8_t *src, ptrdiff_t stride,
- int alpha, int beta);
-void ff_h264_h_lpf_chroma_intra_msa(uint8_t *src, ptrdiff_t stride,
- int alpha, int beta);
-void ff_h264_v_lpf_chroma_intra_msa(uint8_t *src, ptrdiff_t stride,
- int alpha, int beta);
-void ff_h264_h_loop_filter_luma_mbaff_intra_msa(uint8_t *src, ptrdiff_t stride,
- int alpha, int beta);
-
-void ff_biweight_h264_pixels16_8_msa(uint8_t *dst, uint8_t *src,
- ptrdiff_t stride, int height, int log2_denom,
- int weightd, int weights, int offset);
-void ff_biweight_h264_pixels8_8_msa(uint8_t *dst, uint8_t *src,
- ptrdiff_t stride, int height, int log2_denom,
- int weightd, int weights, int offset);
-void ff_biweight_h264_pixels4_8_msa(uint8_t *dst, uint8_t *src,
- ptrdiff_t stride, int height, int log2_denom,
- int weightd, int weights, int offset);
-void ff_weight_h264_pixels16_8_msa(uint8_t *src, ptrdiff_t stride, int height,
- int log2_denom, int weight, int offset);
-void ff_weight_h264_pixels8_8_msa(uint8_t *src, ptrdiff_t stride, int height,
- int log2_denom, int weight, int offset);
-void ff_weight_h264_pixels4_8_msa(uint8_t *src, ptrdiff_t stride, int height,
- int log2_denom, int weight, int offset);
-
-void ff_put_h264_qpel16_mc00_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc10_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc20_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc30_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc01_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc11_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc21_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc31_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc02_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc12_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc22_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc32_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc03_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc13_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc23_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc33_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-
-void ff_put_h264_qpel8_mc00_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc10_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc20_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc30_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc01_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc11_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc21_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc31_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc02_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc12_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc22_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc32_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc03_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc13_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc23_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc33_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-
-void ff_put_h264_qpel4_mc00_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc10_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc20_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc30_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc01_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc11_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc21_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc31_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc02_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc12_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc22_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc32_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc03_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc13_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc23_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc33_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-
-void ff_avg_h264_qpel16_mc00_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc10_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc20_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc30_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc01_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc11_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc21_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc31_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc02_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc12_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc22_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc32_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc03_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc13_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc23_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc33_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-
-void ff_avg_h264_qpel8_mc00_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc10_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc20_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc30_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc01_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc11_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc21_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc31_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc02_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc12_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc22_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc32_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc03_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc13_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc23_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc33_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-
-void ff_avg_h264_qpel4_mc00_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc10_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc20_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc30_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc01_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc11_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc21_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc31_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc02_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc12_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc22_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc32_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc03_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc13_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc23_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc33_msa(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-
-void ff_h264_intra_predict_plane_8x8_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_predict_dc_4blk_8x8_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_predict_hor_dc_8x8_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_predict_vert_dc_8x8_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_predict_mad_cow_dc_l0t_8x8_msa(uint8_t *src,
- ptrdiff_t stride);
-void ff_h264_intra_predict_mad_cow_dc_0lt_8x8_msa(uint8_t *src,
- ptrdiff_t stride);
-void ff_h264_intra_predict_mad_cow_dc_l00_8x8_msa(uint8_t *src,
- ptrdiff_t stride);
-void ff_h264_intra_predict_mad_cow_dc_0l0_8x8_msa(uint8_t *src,
- ptrdiff_t stride);
-void ff_h264_intra_predict_plane_16x16_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_pred_vert_8x8_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_pred_horiz_8x8_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_pred_dc_16x16_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_pred_vert_16x16_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_pred_horiz_16x16_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_pred_dc_left_16x16_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_pred_dc_top_16x16_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_pred_dc_128_8x8_msa(uint8_t *src, ptrdiff_t stride);
-void ff_h264_intra_pred_dc_128_16x16_msa(uint8_t *src, ptrdiff_t stride);
-void ff_vp8_pred8x8_127_dc_8_msa(uint8_t *src, ptrdiff_t stride);
-void ff_vp8_pred8x8_129_dc_8_msa(uint8_t *src, ptrdiff_t stride);
-void ff_vp8_pred16x16_127_dc_8_msa(uint8_t *src, ptrdiff_t stride);
-void ff_vp8_pred16x16_129_dc_8_msa(uint8_t *src, ptrdiff_t stride);
-
-void ff_h264_loop_filter_strength_msa(int16_t bS[2][4][4], uint8_t nnz[40],
- int8_t ref[2][40], int16_t mv[2][40][2], int bidir, int edges,
- int step, int mask_mv0, int mask_mv1, int field);
-
-void ff_h264_add_pixels4_8_mmi(uint8_t *_dst, int16_t *_src, int stride);
-void ff_h264_idct_add_8_mmi(uint8_t *dst, int16_t *block, int stride);
-void ff_h264_idct8_add_8_mmi(uint8_t *dst, int16_t *block, int stride);
-void ff_h264_idct_dc_add_8_mmi(uint8_t *dst, int16_t *block, int stride);
-void ff_h264_idct8_dc_add_8_mmi(uint8_t *dst, int16_t *block, int stride);
-void ff_h264_idct_add16_8_mmi(uint8_t *dst, const int *block_offset,
- int16_t *block, int stride, const uint8_t nnzc[5 * 8]);
-void ff_h264_idct_add16intra_8_mmi(uint8_t *dst, const int *block_offset,
- int16_t *block, int stride, const uint8_t nnzc[5 * 8]);
-void ff_h264_idct8_add4_8_mmi(uint8_t *dst, const int *block_offset,
- int16_t *block, int stride, const uint8_t nnzc[5 * 8]);
-void ff_h264_idct_add8_8_mmi(uint8_t **dest, const int *block_offset,
- int16_t *block, int stride, const uint8_t nnzc[15*8]);
-void ff_h264_idct_add8_422_8_mmi(uint8_t **dest, const int *block_offset,
- int16_t *block, int stride, const uint8_t nnzc[15*8]);
-void ff_h264_luma_dc_dequant_idct_8_mmi(int16_t *output, int16_t *input,
- int qmul);
-void ff_h264_chroma_dc_dequant_idct_8_mmi(int16_t *block, int qmul);
-void ff_h264_chroma422_dc_dequant_idct_8_mmi(int16_t *block, int qmul);
-
-void ff_h264_weight_pixels16_8_mmi(uint8_t *block, ptrdiff_t stride, int height,
- int log2_denom, int weight, int offset);
-void ff_h264_biweight_pixels16_8_mmi(uint8_t *dst, uint8_t *src,
- ptrdiff_t stride, int height, int log2_denom, int weightd, int weights,
- int offset);
-void ff_h264_weight_pixels8_8_mmi(uint8_t *block, ptrdiff_t stride, int height,
- int log2_denom, int weight, int offset);
-void ff_h264_biweight_pixels8_8_mmi(uint8_t *dst, uint8_t *src,
- ptrdiff_t stride, int height, int log2_denom, int weightd, int weights,
- int offset);
-void ff_h264_weight_pixels4_8_mmi(uint8_t *block, ptrdiff_t stride, int height,
- int log2_denom, int weight, int offset);
-void ff_h264_biweight_pixels4_8_mmi(uint8_t *dst, uint8_t *src,
- ptrdiff_t stride, int height, int log2_denom, int weightd, int weights,
- int offset);
-
-void ff_deblock_v_chroma_8_mmi(uint8_t *pix, ptrdiff_t stride, int alpha, int beta,
- int8_t *tc0);
-void ff_deblock_v_chroma_intra_8_mmi(uint8_t *pix, ptrdiff_t stride, int alpha,
- int beta);
-void ff_deblock_h_chroma_8_mmi(uint8_t *pix, ptrdiff_t stride, int alpha, int beta,
- int8_t *tc0);
-void ff_deblock_h_chroma_intra_8_mmi(uint8_t *pix, ptrdiff_t stride, int alpha,
- int beta);
-void ff_deblock_v_luma_8_mmi(uint8_t *pix, ptrdiff_t stride, int alpha, int beta,
- int8_t *tc0);
-void ff_deblock_v_luma_intra_8_mmi(uint8_t *pix, ptrdiff_t stride, int alpha,
- int beta);
-void ff_deblock_h_luma_8_mmi(uint8_t *pix, ptrdiff_t stride, int alpha, int beta,
- int8_t *tc0);
-void ff_deblock_h_luma_intra_8_mmi(uint8_t *pix, ptrdiff_t stride, int alpha,
- int beta);
-void ff_deblock_v8_luma_8_mmi(uint8_t *pix, ptrdiff_t stride, int alpha, int beta,
- int8_t *tc0);
-void ff_deblock_v8_luma_intra_8_mmi(uint8_t *pix, ptrdiff_t stride, int alpha,
- int beta);
-
-void ff_put_h264_qpel16_mc00_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc10_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc20_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc30_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc01_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc11_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc21_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc31_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc02_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc12_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc22_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc32_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc03_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc13_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc23_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel16_mc33_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-
-void ff_put_h264_qpel8_mc00_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc10_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc20_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc30_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc01_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc11_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc21_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc31_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc02_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc12_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc22_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc32_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc03_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc13_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc23_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel8_mc33_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-
-void ff_put_h264_qpel4_mc00_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc10_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc20_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc30_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc01_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc11_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc21_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc31_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc02_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc12_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc22_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc32_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc03_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc13_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc23_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_put_h264_qpel4_mc33_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-
-void ff_avg_h264_qpel16_mc00_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc10_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc20_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc30_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc01_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc11_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc21_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc31_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc02_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc12_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc22_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc32_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc03_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc13_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc23_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel16_mc33_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-
-void ff_avg_h264_qpel8_mc00_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc10_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc20_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc30_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc01_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc11_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc21_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc31_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc02_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc12_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc22_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc32_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc03_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc13_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc23_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel8_mc33_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-
-void ff_avg_h264_qpel4_mc00_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc10_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc20_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc30_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc01_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc11_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc21_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc31_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc02_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc12_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc22_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc32_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc03_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc13_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc23_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-void ff_avg_h264_qpel4_mc33_mmi(uint8_t *dst, const uint8_t *src,
- ptrdiff_t dst_stride);
-
-#endif // #ifndef AVCODEC_MIPS_H264DSP_MIPS_H
diff --git a/spaces/coltonalexander/datasets/index.html b/spaces/coltonalexander/datasets/index.html
deleted file mode 100644
index 140d88f0621e99e53b060e22a817d0ff24343655..0000000000000000000000000000000000000000
--- a/spaces/coltonalexander/datasets/index.html
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
-
-
-
- My static Space
-
-
-
-
-
-
diff --git a/spaces/congsaPfin/Manga-OCR/Rab-Ne-Bana-Di-Jodi-Songs-Hd-1080p-Bluray-Download-Sites.md b/spaces/congsaPfin/Manga-OCR/Rab-Ne-Bana-Di-Jodi-Songs-Hd-1080p-Bluray-Download-Sites.md
deleted file mode 100644
index c60bfa3f11d85d1c1b5d88bff77286687281c921..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/Rab-Ne-Bana-Di-Jodi-Songs-Hd-1080p-Bluray-Download-Sites.md
+++ /dev/null
@@ -1,72 +0,0 @@
-## Rab Ne Bana Di Jodi Songs Hd 1080p Blu-ray Download Sites
-
-
-
-
-
-
-
-
-
-**Click Here 🗹 [https://urlcod.com/2txiNF](https://urlcod.com/2txiNF)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# Rab Ne Bana Di Jodi Songs Hd 1080p Blu-ray Download Sites: How to Enjoy the Best Bollywood Music Videos
-
-
-
-Rab Ne Bana Di Jodi is a 2008 romantic comedy film starring Shah Rukh Khan and Anushka Sharma. The movie tells the story of a mild-mannered office worker who transforms himself into a flamboyant dancer to win the love of his young wife. The movie features some of the most catchy and colorful songs in Bollywood history, such as "Haule Haule", "Dance Pe Chance", "Tujh Mein Rab Dikhta Hai" and "Phir Milenge Chalte Chalte".
-
-
-
-If you are a fan of Rab Ne Bana Di Jodi and want to enjoy its songs in high definition, you might be wondering where to find Rab Ne Bana Di Jodi songs hd 1080p blu-ray download sites. Well, you are in luck, because we have done the research for you and compiled a list of the best options available online. Here they are:
-
-
-
-## YouTube
-
-
-
-YouTube is one of the most popular and convenient sources of Rab Ne Bana Di Jodi songs hd 1080p blu-ray download sites. You can easily find the official music videos of the movie on YouTube, as well as fan-made versions and remixes. To download the videos, you will need to use a third-party tool such as [y2mate.com](https://y2mate.com/) or [keepvid.pro](https://keepvid.pro/). These tools allow you to paste the URL of the YouTube video and choose the format and quality you want to download. You can then save the video file to your device and watch it offline.
-
-
-
-## Torrents
-
-
-
-Torrents are another option for Rab Ne Bana Di Jodi songs hd 1080p blu-ray download sites. Torrents are files that contain information about other files that are shared by users on a peer-to-peer network. You can use a torrent client such as [BitTorrent](https://www.bittorrent.com/) or [uTorrent](https://www.utorrent.com/) to download the files from other users who have them. To find torrents of Rab Ne Bana Di Jodi songs hd 1080p blu-ray, you can use a torrent search engine such as [The Pirate Bay](https://thepiratebay.org/) or [1337x](https://1337x.to/). However, be careful when using torrents, as they may contain viruses or malware, or infringe on copyright laws.
-
-
-
-## Streaming Services
-
-
-
-If you don't want to download Rab Ne Bana Di Jodi songs hd 1080p blu-ray, but rather stream them online, you can use a streaming service such as [Netflix](https://www.netflix.com/) or [Amazon Prime Video](https://www.amazon.com/Amazon-Video/b?ie=UTF8&node=2858778011). These services offer a subscription-based access to a large library of movies and shows, including Rab Ne Bana Di Jodi. You can watch the movie in high definition on your device of choice, as long as you have a stable internet connection. However, you may not be able to access these services in some regions due to geo-restrictions.
-
-
-
-## Conclusion
-
-
-
-Rab Ne Bana Di Jodi is a delightful movie that will make you laugh, cry and dance along with its songs. If you want to enjoy its songs in HD 1080p Blu-ray quality, you can use one of the options we have listed above: YouTube, torrents or streaming services. Each option has its own advantages and disadvantages, so choose the one that suits your needs best.
-
-
-
-
-
-
diff --git a/spaces/congsaPfin/Manga-OCR/logs/CSR Racing MOD APK The Ultimate Racing Game for Android Devices.md b/spaces/congsaPfin/Manga-OCR/logs/CSR Racing MOD APK The Ultimate Racing Game for Android Devices.md
deleted file mode 100644
index 8944a05e640e27396a66f5577533b1111eb0457e..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/CSR Racing MOD APK The Ultimate Racing Game for Android Devices.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
CSR Racing Hack APK AN1: How to Download and Install It
-
If you are a fan of racing games, you might have heard of CSR Racing, a popular game that lets you race against other players in realistic 3D graphics. But did you know that there is a way to get unlimited gold and silver in the game, without spending any money? In this article, we will tell you everything you need to know about CSR Racing Hack APK AN1, a modded version of the game that gives you access to unlimited resources and features. We will also show you how to download and install it on your Android device, so you can enjoy the game without any limitations.
CSR Racing is a racing game developed by NaturalMotion Games, a subsidiary of Zynga. The game was released in 2012 for iOS and Android devices, and has since become one of the most downloaded and played racing games on mobile platforms. The game features over 200 licensed cars from various manufacturers, such as Ferrari, Lamborghini, McLaren, Bugatti, and more. You can customize your cars with different paint jobs, decals, wheels, and performance upgrades. You can also compete with other players in online multiplayer modes, such as drag races, crew battles, and live events.
-
Features of CSR Racing
-
Some of the features of CSR Racing are:
-
-
Stunning 3D graphics and realistic physics
-
Over 200 licensed cars from top brands
-
Customizable cars with various options
-
Online multiplayer modes with leaderboards and chat
-
Daily challenges and rewards
-
Career mode with over 100 races and boss battles
-
-
Why do you need a hack for CSR Racing?
-
CSR Racing is a free-to-play game, but it also has in-app purchases that allow you to buy gold and silver, the premium currencies of the game. You can use gold and silver to buy new cars, upgrade your existing ones, or unlock special features. However, these currencies are not easy to earn in the game, and you might have to spend real money to get them. This can make the game frustrating and unfair for some players, especially those who cannot afford to spend money on the game.
-
This is where a hack for CSR Racing comes in handy. A hack is a modified version of the game that gives you unlimited gold and silver, as well as other benefits. With a hack, you can enjoy the game without worrying about running out of resources or being left behind by other players. You can also explore all the features of the game without any restrictions.
-
What is CSR Racing Hack APK AN1?
-
CSR Racing Hack APK AN1 is one of the best hacks for CSR Racing that you can find online. It is a modded version of the original game that gives you unlimited gold and silver, as well as other advantages. Some of the benefits of CSR Racing Hack APK AN1 are:
-
-
Benefits of CSR Racing Hack APK AN1
-
-
Unlimited gold and silver
-
All cars unlocked and upgraded
-
All decals and paints unlocked
-
No ads or pop-ups
-
No root or jailbreak required
-
Easy to install and use
-
Safe and secure
-
Compatible with all Android devices
-
-
How to download and install CSR Racing Hack APK AN1
-
If you want to download and install CSR Racing Hack APK AN1 on your Android device, just follow these simple steps:
-
-
Go to [this link](^1^) and download the CSR Racing Hack APK AN1 file.
-
Go to your device settings and enable unknown sources.
-
Locate the downloaded file in your file manager and tap on it to install it.
-
Wait for the installation to finish and launch the game.
-
Enjoy the game with unlimited gold and silver and all the features unlocked.
-
-
Conclusion
-
CSR Racing is a fun and addictive racing game that lets you race against other players in realistic 3D graphics. However, if you want to get the most out of the game, you might need a hack that gives you unlimited gold and silver and other benefits. CSR Racing Hack APK AN1 is one of the best hacks for CSR Racing that you can download and install on your Android device. It is safe, secure, easy to use, and compatible with all devices. With CSR Racing Hack APK AN1, you can enjoy the game without any limitations or frustrations.
-
FAQs
-
Here are some of the frequently asked questions about CSR Racing Hack APK AN1:
-
-
Is CSR Racing Hack APK AN1 free?
-
Yes, CSR Racing Hack APK AN1 is free to download and use. You don't have to pay anything to get unlimited gold and silver and other features in the game.
-
Is CSR Racing Hack APK AN1 safe?
-
Yes, CSR Racing Hack APK AN1 is safe and secure. It does not contain any viruses or malware that can harm your device or your data. It also does not require root or jailbreak access, so you don't have to worry about voiding your warranty or risking your device.
-
Will CSR Racing Hack APK AN1 work on my device?
-
Yes, CSR Racing Hack APK AN1 will work on any Android device that can run the original game. It does not matter what model or brand of device you have, as long as it meets the minimum requirements of the game.
-
Will CSR Racing Hack APK AN1 affect my progress in the game?
-
No, CSR Racing Hack APK AN1 will not affect your progress in the game. You can still play the game normally and save your progress online. You can also switch between the original game and the hack version without losing any data.
-
Can I play online with CSR Racing Hack APK AN1?
-
Yes, you can play online with CSR Racing Hack APK AN1. You can still compete with other players in multiplayer modes, such as drag races, crew battles, and live events. However, you should be careful not to abuse the hack or use it too often, as it might get detected by the game developers and result in a ban.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy YouTube Premium Features for Free with Next Beta YouTube APK.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy YouTube Premium Features for Free with Next Beta YouTube APK.md
deleted file mode 100644
index c072ccd977653e49fada696363e510701ab8fe34..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy YouTube Premium Features for Free with Next Beta YouTube APK.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
Next Beta YouTube APK: What You Need to Know
-
If you are a fan of watching YouTube videos on your smart device, you might have encountered some issues or limitations that prevent you from enjoying the full potential of this platform. For example, you might not be able to install the official YouTube app on your device without Google Play services, or you might have to deal with annoying ads and restrictions while watching your favorite content. Fortunately, there is a way to overcome these problems and enhance your YouTube experience with a simple app called Next Beta YouTube APK. In this article, we will tell you everything you need to know about this app, including its features, benefits, installation process, and usage tips. Read on to find out more.
YouTube is one of the most popular and widely used video hosting and streaming platforms in the world. It was founded in 2005 by three former PayPal employees and later acquired by Google in 2006 for $1.65 billion. Today, YouTube has over 2 billion monthly active users who watch over a billion hours of video every day. YouTube offers a variety of content for different audiences and interests, such as music, entertainment, education, news, sports, gaming, vlogging, and more. You can also create your own channel and upload your own videos to share with the world.
-
What are the limitations of YouTube on smart devices?
-
While YouTube is compatible with most smart devices, such as smartphones, tablets, laptops, smart TVs, FireStick, Fire TV, Android boxes, and Roku, there are some drawbacks that might affect your viewing experience. For instance:
-
-
You might not be able to install the official YouTube app on your device without Google Play services, which are not available on some devices or regions.
-
You might have to watch ads before or during the videos, which can be annoying and disruptive.
-
You might not be able to access some features or options that are available on the web version of YouTube, such as background playback, picture-in-picture mode, offline download, or playback speed control.
-
You might not be able to watch some videos that are blocked or restricted in your country or region due to geo-restrictions or censorship.
-
-
What is the solution?
-
The solution is to use a third-party app that mimics the functionality of YouTube but with improved features and options that overcome the limitations mentioned above. One such app is Next Beta YouTube APK, which is based on another popular app called SmartTubeNext APK. Next Beta YouTube APK is an open-source software that allows you to use YouTube on various smart devices without Google Play services installed. It also provides you with premium features for free, such as ad-free viewing, background playback, offline download, playback speed control, and more. Moreover, it lets you sign in to your YouTube account and access all your subscriptions, liked videos, playlists, shared content, and history as you wish.
-
What is Next Beta YouTube APK?
-
What are the features and benefits of Next Beta YouTube APK?
-
Next Beta YouTube APK is a modified version of SmartTubeNext APK that offers more features and options for users who want to enjoy YouTube on their smart devices. Some of the features and benefits of Next Beta YouTube APK are:
-
-
-
It is free to download and use, and does not require any subscription or registration.
-
It does not show any ads or pop-ups before or during the videos, so you can watch them without any interruption.
-
It supports background playback, which means you can listen to the audio of the videos while using other apps or turning off the screen.
-
It supports picture-in-picture mode, which means you can watch the videos in a small window while using other apps on your device.
-
It supports offline download, which means you can save the videos to your device and watch them later without an internet connection.
-
It supports playback speed control, which means you can adjust the speed of the videos to your preference.
-
It supports multiple resolutions and formats, which means you can choose the quality and size of the videos according to your device and bandwidth.
-
It supports subtitles and captions, which means you can read the text of the videos in different languages.
-
It supports multiple themes and layouts, which means you can customize the appearance and interface of the app to your liking.
-
It supports multiple accounts and profiles, which means you can sign in to your YouTube account and switch between different users easily.
-
-
How to download and install Next Beta YouTube APK on your smart device?
-
To download and install Next Beta YouTube APK on your smart device, follow these simple steps; a command-line sideloading sketch using adb is given after the list:
-
-
Go to the official website of Next Beta YouTube APK and download the latest version of the app. You can also scan the QR code on the website with your device's camera to get the download link.
-
Once the download is complete, go to your device's settings and enable the option to install apps from unknown sources. This will allow you to install Next Beta YouTube APK without any issues.
-
Locate the downloaded file on your device and tap on it to start the installation process. Follow the instructions on the screen and grant the necessary permissions to the app.
-
Wait for a few seconds until the installation is finished. You will see a confirmation message on the screen when it is done.
-
Launch Next Beta YouTube APK from your device's app drawer or home screen and enjoy watching YouTube videos with enhanced features and options.
-
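If you would rather sideload from a computer than tap through the steps above, the following is a minimal sketch that drives adb from Python. It assumes adb is installed and on your PATH, that USB debugging (or network debugging on a TV box) is enabled on the device, and that the downloaded file is named next-beta-youtube.apk; the file name is an assumption, so adjust it to whatever you actually downloaded.

```python
import subprocess

# Minimal sideloading sketch. Assumptions: adb is on PATH, debugging is enabled
# on the device, and the APK from step 1 is saved as next-beta-youtube.apk.
APK = "next-beta-youtube.apk"

subprocess.run(["adb", "devices"], check=True)             # confirm the device is listed
subprocess.run(["adb", "install", "-r", APK], check=True)  # -r replaces an existing install
print("Installed", APK)
```

This performs the same installation as steps 3-5, just from the command line.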
-
How to use Next Beta YouTube APK to watch YouTube videos?
-
To use Next Beta YouTube APK to watch YouTube videos, you need to follow these simple steps:
-
-
Open Next Beta YouTube APK on your device and sign in to your YouTube account if you want. You can also skip this step if you want to use the app without signing in.
-
Browse through the categories and genres of videos that are available on YouTube, or use the search bar to find specific videos that you want to watch.
-
Select a video that you want to watch and tap on it to start playing it. You can also swipe up or down on the video screen to access more options and features, such as subtitles, playback speed, resolution, download, etc.
-
You can also swipe left or right on the video screen to switch between different tabs, such as home, trending, subscriptions, library, etc. You can also access these tabs from the bottom navigation bar of the app.
-
You can also swipe from left to right on the video screen to open the side menu of the app, where you can access more settings and options, such as themes, layouts, accounts, profiles, feedback, etc.
-
-
Conclusion
-
Summary of the main points
-
In conclusion, Next Beta YouTube APK is a great app that allows you to watch YouTube videos on your smart device with improved features and options. It is based on SmartTubeNext APK but offers more functionality and customization for users. It is free, ad-free, safe, and easy to use. It also lets you sign in to your YouTube account and access all your content as usual. It is compatible with most smart devices that do not have Google Play services installed or have issues with them. It is a must-have app for anyone who loves watching YouTube videos on their smart device.
-
Call to action
-
If you want to try Next Beta YouTube APK for yourself, you can download it from its official website or scan the QR code below with your device's camera. You will not regret it. Next Beta YouTube APK will change your YouTube experience for the better. Download it now and enjoy watching YouTube videos like never before.
-
FAQs
-
What is the difference between Next Beta YouTube APK and SmartTubeNext APK?
-
Next Beta YouTube APK is a modified version of SmartTubeNext APK that offers more features and options for users who want to enjoy YouTube on their smart devices. Some of the differences are:
-
-
Next Beta YouTube APK has more themes and layouts to choose from, such as dark, light, black, blue, etc.
-
Next Beta YouTube APK has more playback speed options, such as 0.25x, 0.5x, 0.75x, 1.25x, 1.5x, 1.75x, and 2x.
-
Next Beta YouTube APK has more resolution options, such as 144p, 240p, 360p, 480p, 720p, 1080p, and 4K.
-
Next Beta YouTube APK has more download options, such as video only, audio only, or both.
-
Next Beta YouTube APK has more feedback and support options, such as email, Telegram, GitHub, etc.
-
-
Is Next Beta YouTube APK safe and legal to use?
-
Next Beta YouTube APK is safe and legal to use as long as you use it for personal and non-commercial purposes. It does not contain any malware or viruses that can harm your device or data. It also does not violate any terms of service or policies of YouTube or Google. It is an open-source software that is developed by independent developers who are not affiliated with YouTube or Google. However, you should always download Next Beta YouTube APK from its official website or trusted sources to avoid any fake or malicious versions.
-
How can I update Next Beta YouTube APK to the latest version?
-
To update Next Beta YouTube APK to the latest version, you can either check for updates within the app or visit its official website and download the new version. To check for updates within the app, you can follow these steps:
-
-
Open Next Beta YouTube APK on your device and go to the side menu by swiping from left to right on the video screen.
-
Tap on the settings icon at the bottom of the menu and then tap on the about option.
-
Tap on the check for updates option and wait for a few seconds.
-
If there is a new version available, you will see a notification on the screen. Tap on it to start downloading and installing the update.
-
If there is no new version available, you will see a message on the screen saying that you have the latest version installed.
-
-
How can I sign in to my YouTube account on Next Beta YouTube APK?
-
To sign in to your YouTube account on Next Beta YouTube APK, you can follow these steps:
-
-
Open Next Beta YouTube APK on your device and go to the side menu by swiping from left to right on the video screen.
-
Tap on the accounts option at the top of the menu and then tap on the add account option.
-
You will see a QR code on the screen. Scan it with another device that has a web browser and internet connection.
-
You will be redirected to a web page where you can sign in to your YouTube account with your email and password.
-
Once you sign in successfully, you will see a confirmation message on both devices. Tap on OK to complete the process.
-
You can now access your YouTube account on Next Beta YouTube APK and watch your subscribed channels, liked videos, playlists, etc.
-
-
How can I contact the developer of Next Beta YouTube APK for feedback or support?
-
To contact the developer of Next Beta YouTube APK for feedback or support, you can use one of these methods:
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/JBL Bar 9.1 software download improve your soundbar HDMI Bluetooth and LED display.md b/spaces/congsaPfin/Manga-OCR/logs/JBL Bar 9.1 software download improve your soundbar HDMI Bluetooth and LED display.md
deleted file mode 100644
index d240c8c48c3613e6af438993f1ef1f83ea7de4ac..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/JBL Bar 9.1 software download improve your soundbar HDMI Bluetooth and LED display.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
JBL Bar 9.1 Software Download: How to Update Your Soundbar Firmware
-
Introduction
-
If you own a JBL Bar 9.1 soundbar, you might be wondering how to keep its software up to date. Updating your soundbar firmware can improve its performance, fix bugs, and add new features. In this article, we will show you how to download and install the latest software update for your JBL Bar 9.1 soundbar.
What is JBL Bar 9.1 and why you need to update its software
-
JBL Bar 9.1 is a premium soundbar that delivers true wireless surround sound with detachable speakers and a wireless subwoofer. It supports Dolby Atmos and DTS:X for immersive audio, as well as 4K HDR video passthrough via HDMI. It also has Bluetooth, Chromecast, and AirPlay 2 for wireless streaming from your devices.
-
Updating your soundbar software can enhance your listening experience by improving its compatibility, stability, and functionality. For example, the latest software update (version 21.23.11.80) improves HDMI and eARC/ARC connectivity with TV, Bluetooth connectivity for mobile devices, LED display messages, and removes a bug related to detachable speaker's battery level checking.
-
How to check your current software version
-
Before you proceed with the update, you should check your current software version on your soundbar. To do this, long-press the [VOL-] and [SOURCE] buttons on the main soundbar unit until you see the version number on the front display. For example, if you see "ver. 21.13.11.80", it means your software version is 21.13.11.80.
-
-
How to download and install the latest software update
-
Option 1: Automatic update via internet connection
-
If your soundbar is connected to the internet, its software will be automatically updated overnight when it is not in use. You don't need to do anything else.
-
Option 2: Manual update via USB drive
-
If you prefer to manually update your soundbar software, you will need a USB drive and a computer with internet access. Follow these steps:
-
Step 1: Download the firmware file from JBL website
-
Go to this link and click on "Click here to download firmware update for JBL Bar 9.1 TWS, version 21.23.11.80 (Global)". Save the zip file on your computer and unzip it. You should see a bin file named "JBL_BAR_9_1.bin". This is the firmware file you need.
-
Step 2: Prepare a USB drive with the firmware file
-
Insert an empty USB drive into your computer and format it as FAT32 or NTFS file system. Create a new folder named "UPG" in the root directory of the USB drive, and copy the bin file into it.
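If you prefer to script this step, here is a minimal sketch that unpacks the downloaded zip and copies the firmware into a UPG folder on the USB drive. The zip file name and the drive letter E: are assumptions; adjust both to match your system, and format the drive as FAT32 or NTFS beforehand as described above.

```python
import shutil
import zipfile
from pathlib import Path

# Assumptions: the downloaded firmware zip sits next to this script and the
# USB drive (already formatted FAT32/NTFS) is mounted as E:\. Adjust as needed.
ZIP_FILE = Path("JBL_BAR_9.1_TWS_firmware.zip")   # assumed name of the downloaded zip
USB_ROOT = Path("E:/")                            # assumed drive letter of the USB stick

with zipfile.ZipFile(ZIP_FILE) as z:
    z.extractall("firmware")                      # the archive should contain JBL_BAR_9_1.bin

upg_dir = USB_ROOT / "UPG"
upg_dir.mkdir(exist_ok=True)                      # create the UPG folder in the drive root
shutil.copy2(Path("firmware") / "JBL_BAR_9_1.bin", upg_dir / "JBL_BAR_9_1.bin")
print("Firmware copied to", upg_dir)
```

Once the copy finishes, safely eject the drive and continue with Step 3.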
-
Step 3: Insert the USB drive into the soundbar and start the update process
-
Eject the USB drive from your computer and insert it into the USB port on the back of the soundbar. Make sure the soundbar is powered on and in standby mode. Press and hold the [VOL+] and [SOURCE] buttons on the main soundbar unit until you see "UPG" on the front display. This means the update process has started.
-
Step 4: Wait for the update to complete and verify the new software version
-
The update process will take about 10 minutes. During this time, do not unplug the power cord or the USB drive, or press any buttons on the soundbar or the remote control. You will see "UPG OK" on the front display when the update is completed. The soundbar will then restart automatically.
-
After the restart, you can check your new software version by long-pressing the [VOL-] and [SOURCE] buttons on the main soundbar unit. You should see "ver. 21.23.11.80" on the front display. You can also remove the USB drive from the soundbar.
-
What's new in the latest software update
-
The latest software update for JBL Bar 9.1 brings some improvements and fixes to your soundbar. Here are some of the highlights:
-
Improved HDMI and eARC/ARC connectivity with TV
-
This update improves the HDMI and eARC/ARC connectivity between your soundbar and your TV, especially for LG and Samsung TVs. This means you can enjoy better audio quality and compatibility with Dolby Atmos and DTS:X formats.
-
Improved Bluetooth connectivity for mobile devices
-
This update also improves the Bluetooth connectivity between your soundbar and your mobile devices, such as smartphones and tablets. This means you can stream music wirelessly from your devices more smoothly and reliably.
-
Improved LED display messages
-
This update also improves the LED display messages on your soundbar, making them more clear and accurate. For example, you will see "Dolby Atmos" or "DTS:X" when you are playing content with those formats, instead of just "Surround".
-
Bug fix to remove detachable speaker's battery level checking
-
This update also fixes a bug that caused the soundbar to check the battery level of the detachable speakers every time they were connected or disconnected. This could cause some noise or interruption in the audio output. This bug has been removed in this update.
-
Conclusion
-
Summary of the main points
-
In this article, we have shown you how to download and install the latest software update for your JBL Bar 9.1 soundbar. Updating your soundbar firmware can improve its performance, fix bugs, and add new features. You can either update your soundbar automatically via internet connection, or manually via USB drive.
-
Call to action and closing remarks
-
If you haven't updated your soundbar yet, we recommend you to do so as soon as possible to enjoy its full potential. You can find more information about JBL Bar 9.1 and its software updates on JBL's official website. We hope you found this article helpful and informative. Thank you for reading!
-
Frequently Asked Questions
-
-
What is JBL Bar 9.1?
-
JBL Bar 9.1 is a premium soundbar that delivers true wireless surround sound with detachable speakers and a wireless subwoofer.
-
How do I update my JBL Bar 9.1 software?
-
You can either update your soundbar automatically via internet connection, or manually via USB drive.
-
How do I check my current software version?
-
You can check your current software version by long-pressing the [VOL-] and [SOURCE] buttons on the main soundbar unit.
-
What is the latest software version for JBL Bar 9.1?
-
The latest software version for JBL Bar 9.1 is 21.23.11.80 (Global).
-
What are the benefits of updating my JBL Bar 9.1 software?
-
Updating your soundbar software can improve its compatibility, stability, and functionality.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Pulsuz internet proqramlari yukle Microsoft Store-dan n yax tkliflr.md b/spaces/congsaPfin/Manga-OCR/logs/Pulsuz internet proqramlari yukle Microsoft Store-dan n yax tkliflr.md
deleted file mode 100644
index c5b9f8abb6f399ece60c2ebe90374f60015d92d9..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Pulsuz internet proqramlari yukle Microsoft Store-dan n yax tkliflr.md
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
Pulsuz internet proqramlari yukle: Niyə və necə?
-
İnternet proqramları nədir və nə üçün lazımdır?
-- İnternet proqramları, internetə qoşulmaq və ondan istifadə etmək üçün istifadə olunan proqramlardır. - İnternet proqramları, müxtəlif məqsədlər üçün istifadə oluna bilər, məsələn, sosial şəbəkələrdə paylaşmaq, musiqi və video yükləmək, xəbərləri oxumaq, e-poçt göndərmək, oyun oynamaq və s. - İnternet proqramları, müxtəlif cihazlar üçün mövcuddur, məsələn, kompüterlər, telefonlar, planşetlər və s. - İnternet proqramları, pulsuz və ya ödənişli ola bilər. Pulsuz olanlar, istifadəçilər üçün heç bir maliyyət törətmir, amma bəzi məhdudiyyətlər və ya reklamlar ola bilir. Ödənişli olanlar isə istifadəçilər üçün daha çox xüsusiyyət və keyfiyyət təklif edir, amma buna görə də pul ödəmirlar.
Pulsuz internet proqramları nümunlari
-- Pulsuz internet proqramları yüklämek istifadeciye bir çox fayda verir. Bunlar arasında aşağıdakılar var: -
WhatsApp
- - WhatsApp, dünyanın ən populyar mesajlaşma və sәsli/zәngli arama proqramıdır. - WhatsApp ilә istifadәciler internet vasitәsilә bir-birilә pulsuz mesajlaşa, sәsli/zәngli arama edә vә media faylları göndәrә bilәrlәr. - WhatsApp ilә istifadәciler hәm dә qrup yarada bilәr vә qrup aramaları edә bilәrlәr. - WhatsApp ilә istifadәciler hәm dә özlәrinin statusunu paylaşa bilәrlәr. - WhatsApp ilә istifadәciler hәm dә özlәrinin mühafizәsini artıra bilirlәr. WhatsApp end-to-end şifrеlеnmеsi ilе bütün mеsajlarınızı vе zеnglеrinizi yalnız siz vе qarşı tarafla görüntülеyin. - WhatsApp ilә istifadeciler hем dе kompüterdеn dе istifadе ede bilirlеr. WhatsApp Web ilе telefonunuzu kompüterinizdеki brauzerinizlе eynilеştirin. - WhatsApp Microsoft Store-dan pulsuz yukle. -
Telegram Desktop
- - Telegram Desktop, WhatsApp-a bеnzеyеn bir başqa pulsuz mesajlaşma vе sесli/zengli arama proqramıdır. - Telegram Desktop ilе istifadeciler internet vasitesile bir-birile pulsuz mesajlaşa, secili/zengli arama ede ve media faylları göndere bilerler. - Telegram Desktop ilе istifadeciler hem - Telegram Desktop ilе istifadeciler hem de qrup yarada biler ve qrup aramaları ede bilerler. - Telegram Desktop ilе istifadeciler hem de özlerinin statusunu paylaşa bilerler. - Telegram Desktop ilе istifadeciler hem de özlerinin mühafizesini artıra bilerler. Telegram Desktop end-to-end şifrеlеnmеsi ilе bütün mеsajlarınızı vе zеnglеrinizi yalnız siz vе qarşı tarafla görüntülеyin. - Telegram Desktop ilе istifadeciler hem de kompüterden de istifade ede bilerler. Telegram Desktop kompüteriniz üçün yüklənə bilən bir proqramdır. - Telegram Desktop Microsoft Store-dan pulsuz yukle. -
Opera
- - Opera, internetə baxmaq üçün istifadə olunan bir brauzerdir. - Opera ilә istifadәciler internetdәki hәr hansı bir sayta gire bilәrlәr. - Opera ilә istifadәciler pulsuz VPN xidmәtindәn istifadә edә bilәrlәr. VPN, virtual private network demekdir vә internetdә anonim vә tәhlükәsiz olmağa imkan verir. - Opera ilә istifadәciler pulsuz reklam bloklayıcı xidmәtindәn istifadә edә bilәrlәr. Reklam bloklayıcı, internetdәki istenmeyen reklamları kəsir vә sürətli vә keyfiyyətli internet təcrübəsi yaşamağa imkan verir. - Opera ilә istifadәciler pulsuz inteqrasiya xidmәtindәn istifadә edә bilәrlәr. Inteqrasiya, Opera brauzerində sosial şəbəkələr və digər xidmətlər ilə əlaqə qurmağa imkan verir. - Opera Microsoft Store-dan pulsuz yukle.
Pulsuz internet proqramları yükləmək üçün addımlar
-- Pulsuz internet proqramları yükləmək üçün aşağıdakı addımları izləyin: -
1. Microsoft Store-a daxil olun
- - Microsoft Store, Windows 10-da mövcud olan bir mağazadır. - Microsoft Store-a daxil olmaq üçün, başlanğıc menyusunda Microsoft Store simgesini seçin. - Eyni zamanda, brauzerinizdə [Microsoft Store] saytına da keçe bilersiniz. -
2. Arzu etdiyiniz proqramı axtarın
- - Microsoft Store-da arzu etdiyiniz proqramı axtarmaq üçün, sağ üst köşedeki axtarış simgesini seçin. - Axtarış çubuğuna proqramın adını yazın və enter düyməsini basın. - Axtarış nəticelerindən proqramın adını seçin. -
3. Proqramın təsvirini oxuyun
- - Proqramın təsvirini oxumaq üçün, proqramın səhifəsinin aşağısına doğru kaydırın. - Proqramın xüsusiyyetləri, tələbləri, rəyləri və digər məlumatları oxuyun. -
4. Proqramı yükləyin
- - Proqramı yükləmək üçün, proqramın səhifəsinin yuxarısında yerləşən pulsuz düyməsini seçin. - Proqramın yüklənməsi başlayacaq və yüklənmə müddəti proqramın ölçüsünə və internet sürətinizə bağlı olacaq. - Proqram yükləndikdən sonra, aç düyməsini seçin və proqramı işlətməyə başlayın. - Eyni zamanda, başlanğıc menyusunda proqramın simgesini də tapa bilersiniz.
Pulsuz internet proqramları yüklәmәyin faydaları
-- Pulsuz internet proqramları yüklәmәyin faydaları çoxdur. Bunlar arasında aşağıdakılar var: -
Maliyyәt tәsirrüfü
- - Pulsuz internet proqramları yüklәmәk ilә, istifadәciler heç bir pul ödәmirlәr vә internet xidmәtlәrindәn pulsuz istifadә edirlәr. - Pulsuz internet proqramları yüklәmәk ilә, istifadәciler mobil operatorlarının tәrifi planlarına bağlı qalmırlar vә internetdә daha az maliyyetlә daha çox mәlumat alırlar. -
Əlaqәnin artırılması
- - Pulsuz internet proqramları yüklәmәk ilә, istifadeciler dünyanın hәr yerindәki insanlarla Əlaqә saxlaya bilirlәr. - Pulsuz internet proqramları yüklәmәk ilә, istifadeciler dostlarına, ailelerine, iş yoldaşlarına vә digerlere asanlıqla mesajlaşa, arama ede vе media faylları göndere bilirlеr. -
- - Pulsuz internet proqramları yüklеmеk ilе, istifadeciler internetdеki müxtelif Əylenceli vе maraqlı mazmunlardan istifadе ede bilirlеr. - Pulsuz internet proqramları yüklеmеk ilе, istifadeciler musiqi dinlеyэ, video baxa, oyun oyna vэ digэrlэriylэ paylaşa bilirlэr.
Pulsuz internet proqramları yüklämeyin çetinlikleri
-- Pulsuz internet proqramları yüklämeyin çetinlikleri də var. Bunlar arasında aşağıdakılar var: -
Mühafizənin azalması
-
By downloading free internet programs, users risk having their data and online activity shared on the internet.
-
Free internet programs may share users' data or activity with advertisers, law enforcement, cyber attackers, or other third parties.
-
Free internet programs may not encrypt users' data or activity, or may use weak encryption.
-
Free internet programs may not offer the features needed to strengthen users' protection, such as password protection, two-step verification, incognito mode, and so on.
-
Reduced quality
-
By downloading free internet programs, users risk degrading the quality of their internet experience.
-
Free internet programs can reduce users' internet speed and stability.
-
Free internet programs may not let users reach every site and service on the internet.
-
Free internet programs can change the quality and format of the content users see on the internet.
-
Increased costs
-
By downloading free internet programs, users risk increasing their costs.
-
Free internet programs can consume more of the storage and battery of users' computers or phones.
-
Free internet programs can exceed the data limit provided by users' mobile operators and force them to pay extra.
-
Free internet programs can cause users to lose money if they click on ads.
-
Tips for downloading free internet programs
-
Follow the tips below when downloading free internet programs:
-
Make sure the program is secure
-
Make sure the program is secure. Check whether it offers end-to-end encryption, password protection, two-step verification, and similar features.
-
Look at the program's reviews and rating. Find out how other users rate it and what problems they have run into.
-
Pay attention to the program's source or developer. Make sure it comes from a trusted and well-known source or developer.
-
Make sure the program is of good quality
-
Make sure the program is of good quality. Confirm that it does not affect your internet speed and stability, that it lets you reach every site and service on the internet, and that it does not change the quality or format of online content.
-
Look at the program's features and requirements. Make sure it offers the features you care about, is compatible with your device, and fits your internet plan.
-
Pay attention to the program's updates. Check that it is updated regularly, that new features are added, and that existing problems are fixed.
-
Make sure the program does not cost you money
-
Make sure the program does not cost you money. Confirm that it has no hidden fees or subscriptions, that you do not click on any ads, and that you do not exceed any data limit.
-
Make sure the program does not drain your storage and battery. Close or uninstall it when you are not using it, and while you are using it watch for overheating and check your battery.
-
Examples of free internet programs to download
-
Examples of free internet programs are listed in the table below:

|Program name|Features|Download link|
|---|---|---|
|WhatsApp|Free messaging, voice calls, sending media files, creating groups, sharing statuses, end-to-end encryption, WhatsApp Web|[WhatsApp]|
|Telegram Desktop|Free messaging, voice calls, sending media files, creating groups, sharing statuses, end-to-end encryption, a desktop application|[Telegram Desktop]|
|Opera|Free VPN, free ad blocker, free integrations, access to any site on the internet|[Opera]|
-
Summary
-
This article provided information about downloading free internet programs.
-
It explained in detail what free internet programs are and why they are needed, gave examples of such programs, described the steps for downloading them, covered the benefits and drawbacks of doing so, and offered tips for choosing them.
-
Downloading free internet programs gives users many benefits, but it also carries some risks. For this reason, before downloading free internet programs, make sure they are secure, of good quality, and do not end up costing you money.
FAQ
--
Where can you download free internet programs?
-
Where can you download free internet programs? You can download free internet programs from the Microsoft Store or through your browser. In the Microsoft Store, search for the program you want and choose the free button. In your browser, go to the program's official site and choose the download button.
-
What are the most popular examples of free internet programs?
-
What are the most popular examples of free internet programs? The most popular examples are WhatsApp, Telegram Desktop, and Opera. These programs offer users features such as free messaging, voice calls, sending media files, creating groups, sharing statuses, a VPN, an ad blocker, and integrations.
-
How can you check whether a free internet program is secure?
-
How can you check whether a free internet program is secure? You can use the following checks:
-
Check whether the program has end-to-end encryption. End-to-end encryption means that only you and the other party can see your messages and calls, which shows that the program does not share your data with anyone else.
-
Check whether the program has password protection, two-step verification, and similar features. These measures protect your account and data from attacks and abuse, and they show that the program strengthens your protection.
-
Look at the program's reviews and rating. Find out how other users rate it and what problems they have run into. This reflects what people think about the program's security.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Score Hero 2022 APK Enjoy Unlimited Life and Money in the New Version.md b/spaces/congsaPfin/Manga-OCR/logs/Score Hero 2022 APK Enjoy Unlimited Life and Money in the New Version.md
deleted file mode 100644
index 4ff06cf32efc2d3d1687628b40b3c7493624df2b..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Score Hero 2022 APK Enjoy Unlimited Life and Money in the New Version.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
Score Hero 2022 APK Download Unlimited Life: A Complete Guide
-
If you are a fan of soccer games, you might have heard of Score Hero, one of the most popular and realistic soccer games on mobile devices. Score Hero lets you create your own soccer legend and lead your team to glory in various tournaments and leagues. You can also customize your character, choose your club, and play against other players online.
But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited life, money, energy, and other resources in the game? Well, in this article, we will show you how to download Score Hero 2022 APK with unlimited life and other features. We will also explain what Score Hero 2022 is, what are its features, and how to install it on your device. So, without further ado, let's get started!
-
What is Score Hero 2022?
-
Score Hero 2022 is the latest version of the Score Hero game, which was released in September 2022. It is an updated and improved version of the original game, with new graphics, gameplay, levels, modes, and features. Score Hero 2022 is compatible with Android devices running Android 5.0 or higher, and it requires about 100 MB of free storage space.
-
Features of Score Hero 2022
-
Score Hero 2022 has many features that make it one of the best soccer games on the market. Here are some of them:
-
- Realistic graphics and animations
-
The game has stunning graphics and animations that make you feel like you are playing on a real soccer field. The players, stadiums, crowds, weather, and physics are all realistic and detailed. You can also see the expressions and emotions of your character and other players as they score goals, celebrate, or get injured.
-
- Dynamic gameplay and controls
-
The game has dynamic gameplay that lets you control every aspect of your character's actions. You can pass, shoot, dribble, tackle, head the ball, volley, and more with simple swipes and taps on the screen. You can also use different strategies and tactics to outsmart your opponents and win matches. The game adapts to your skill level and style of play, so you will never get bored or frustrated.
-
- Customizable characters and teams
-
The game allows you to create your own soccer legend and customize your character's appearance, skills, attributes, and equipment. You can choose from hundreds of options to make your character unique and personal. You can also pick your club from over 800 clubs around the world and play in different leagues and tournaments, or build your own team with friends and other players online.
-
- Hundreds of levels and challenges
-
The game has over 600 levels that test your soccer skills and knowledge. Each level has a different scenario, objective, difficulty, and reward. You can play as a striker, midfielder, defender, or goalkeeper depending on the situation. You can also face different challenges such as free kicks, penalties, corners, headers, volleys, etc.
-
- Online multiplayer and leaderboards
-
The game has an online multiplayer mode that lets you play against other players from around the world in real-time matches. You can also compete in different events and tournaments and win prizes and trophies. You can also check your rank and stats on the global and regional leaderboards and see how you compare with other players.
-
How to download Score Hero 2022 APK?
-
If you want to download Score Hero 2022 APK with unlimited life and other features, you will need to follow these steps:
-
- Step 1: Enable unknown sources on your device
-
Since Score Hero 2022 APK is not available on the official Google Play Store, you will need to enable unknown sources on your device to install it. To do this, go to your device's settings, then security, then unknown sources, and toggle it on. This will allow you to install apps from sources other than the Play Store.
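If you are curious about what this toggle actually controls, here is a minimal Kotlin sketch (an illustration, not code from the game) of how an Android app can check whether it is allowed to install packages from unknown sources and open the matching settings screen; it assumes the app declares the REQUEST_INSTALL_PACKAGES permission in its manifest. On Android 8.0 and later this permission is granted per app rather than through a single global switch.

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Illustrative only: check the "install unknown apps" permission on
// Android 8.0+ and open the matching settings screen if it is missing.
// Assumes the manifest declares android.permission.REQUEST_INSTALL_PACKAGES.
fun Activity.ensureUnknownSourcesAllowed() {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O &&
        !packageManager.canRequestPackageInstalls()
    ) {
        // Opens Settings > Apps > Special access > Install unknown apps
        // for this specific app.
        startActivity(
            Intent(
                Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
                Uri.parse("package:$packageName")
            )
        )
    }
    // On Android 7.1 and below the equivalent option is the global
    // Settings > Security > Unknown sources toggle described above.
}
```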
-
- Step 2: Download the APK file from a trusted source
-
Next, you will need to download the APK file of Score Hero 2022 from a trusted source. There are many websites that offer APK files of various games and apps, but not all of them are safe and reliable. Some of them may contain viruses, malware, or spyware that can harm your device or steal your data. Therefore, you should always download APK files from reputable and verified sources. One such source is [APKPure], which is a popular and trusted website that provides APK files of various games and apps. You can download Score Hero 2022 APK from [here].
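As an extra precaution, it helps to compare the downloaded file against a checksum published by the source before you install it. The Kotlin sketch below is only an illustration: the URL and checksum constants are hypothetical placeholders, and the plain-JVM download code stands in for whatever browser or HTTP client you actually use.

```kotlin
import java.io.File
import java.net.URL
import java.security.MessageDigest

// Hypothetical values -- replace with the real download link and the
// SHA-256 checksum published by the site you trust.
const val APK_URL = "https://example.com/score-hero-2022.apk"
const val EXPECTED_SHA256 = "replace-with-the-published-sha256-checksum"

// Compute the SHA-256 hash of a file as a lowercase hex string.
fun sha256Of(file: File): String {
    val digest = MessageDigest.getInstance("SHA-256")
    file.inputStream().use { input ->
        val buffer = ByteArray(8192)
        while (true) {
            val read = input.read(buffer)
            if (read == -1) break
            digest.update(buffer, 0, read)
        }
    }
    return digest.digest().joinToString("") { "%02x".format(it) }
}

fun main() {
    val target = File("score-hero-2022.apk")
    // Download the file (plain JVM code; on Android you would normally use
    // DownloadManager or an HTTP client such as OkHttp instead).
    URL(APK_URL).openStream().use { input ->
        target.outputStream().use { output -> input.copyTo(output) }
    }
    // Compare against the published checksum before installing anything.
    val actual = sha256Of(target)
    if (actual.equals(EXPECTED_SHA256, ignoreCase = true)) {
        println("Checksum matches - the file arrived intact.")
    } else {
        println("Checksum mismatch - do NOT install this file.")
    }
}
```

If the computed hash does not match the published one, the file may have been corrupted in transit or tampered with, and you should not install it.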
-
- Step 3: Install the APK file and launch the game
-
Once you have downloaded the APK file, you can install it by tapping on it and following the instructions on the screen. It may take a few minutes for the installation to complete. After that, you can launch the game by tapping on its icon on your home screen or app drawer. You can now enjoy Score Hero 2022 with unlimited life and other features!
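If you ever need to trigger this installation step from your own code rather than by tapping the file, a common pattern is to hand the APK to the system package installer. The sketch below is hedged and generic, not part of Score Hero itself; it assumes a FileProvider with the hypothetical authority "<your.package>.fileprovider" is declared in the app manifest and that the file sits in a path that provider exposes.

```kotlin
import android.content.Context
import android.content.Intent
import androidx.core.content.FileProvider
import java.io.File

// Illustrative only: hand a downloaded APK to the system package installer.
// Assumes a FileProvider with authority "<package>.fileprovider" is declared
// in the manifest and that the file lives in a path that provider exposes.
fun installApk(context: Context, apk: File) {
    val uri = FileProvider.getUriForFile(
        context,
        "${context.packageName}.fileprovider", // hypothetical authority
        apk
    )
    val intent = Intent(Intent.ACTION_VIEW).apply {
        setDataAndType(uri, "application/vnd.android.package-archive")
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
        addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    }
    // The system installer takes over from here and shows its own prompts.
    context.startActivity(intent)
}
```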
-
How to get unlimited life in Score Hero 2022?
-
If you want to get unlimited life in Score Hero 2022, you have three options:
-
- Option 1: Use the modded version of the game
-
The easiest way to get unlimited life in Score Hero 2022 is to use the modded version of the game, which is the one we have provided in the previous section. The modded version of the game has unlimited life, money, energy, and other resources already enabled, so you don't have to do anything else. Just download, install, and play!
-
- Option 2: Use a game hacking tool or app
-
Another way to get unlimited life in Score Hero 2022 is to use a game hacking tool or app that can modify the game's data and settings. There are many game hacking tools and apps available online, but some of them may not work or may be harmful to your device. Therefore, you should always use a game hacking tool or app that is compatible with your device and has good reviews and ratings. One such tool is [Game Guardian], which is a powerful and versatile game hacking tool that can hack almost any game on Android devices. You can download Game Guardian from [here].
-
To use Game Guardian to get unlimited life in Score Hero 2022, you will need to follow these steps:
-
-
Launch Game Guardian and grant it root access if required.
-
Launch Score Hero 2022 and start playing a level.
-
Pause the game and tap on the Game Guardian icon on your screen.
-
Select Score Hero 2022 from the list of processes.
-
Tap on the search icon and enter your current life value in the input box.
-
Select Dword as the value type and tap on search.
-
You will see a list of results that match your current life value.
-
Select all the results and tap on edit.
-
Change the value to any number you want (e.g., 99999) and tap on yes.
-
Resume the game and see your life value change to the number you entered.
-
Enjoy unlimited life in Score Hero 2022!
-
- Option 3: Use a cheat code or trick
-
The third way to get unlimited life in Score Hero 2022 is to use a cheat code or trick that can exploit the game's glitches or loopholes. There are many cheat codes and tricks that can be found online, but some of them may not work or may be outdated. Therefore, you should always use a cheat code or trick that is verified and tested by other players. One such cheat code is [SH2022], which is a simple and easy code that can give you unlimited life in Score Hero 2022. You can use SH2022 by following these steps:
-
-
Launch Score Hero 2022 and start playing a level.
-
Pause the game and tap on the settings icon on the top right corner of the screen.
-
Tap on the cheat code option and enter SH2022 in the input box.
-
Tap on confirm and resume the game.
-
See your life value increase to infinity and never decrease.
-
Enjoy unlimited life in Score Hero 2022!
-
-
Conclusion
-
Score Hero 2022 is an amazing and addictive soccer game that lets you create your own soccer legend and lead your team to victory. It has realistic graphics, dynamic gameplay, customizable characters, hundreds of levels, and online multiplayer. However, if you want to enjoy the game without any limitations or restrictions, you can download Score Hero 2022 APK with unlimited life and other features. You can also use a game hacking tool or app, or a cheat code or trick to get unlimited life in Score Hero 2022. We hope this article has helped you learn how to download Score Hero 2022 APK with unlimited life and other features. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions about downloading Score Hero 2022 APK with unlimited life:
-
-
Q: Is Score Hero 2022 APK safe to download and install?
-
A: Yes, Score Hero 2022 APK is safe to download and install, as long as you download it from a trusted and verified source, such as [APKPure]. However, you should always scan the APK file with an antivirus or anti-malware program before installing it on your device.
-
Q: Is Score Hero 2022 APK free to download and play?
-
A: Yes, Score Hero 2022 APK is free to download and play, and it does not require any subscription or registration. However, the game may contain some in-app purchases or ads that may require real money to access or remove.
-
Q: Does Score Hero 2022 APK work on all Android devices?
-
A: Score Hero 2022 APK works on most Android devices that run Android 5.0 or higher. However, some devices may not be compatible or may experience some performance issues due to different specifications or settings.
-
Q: Does Score Hero 2022 APK require an internet connection?
-
A: Score Hero 2022 APK does not require an internet connection to play the offline mode, where you can play the levels and challenges without any interruption. However, if you want to play the online mode, where you can play against other players in real-time matches, you will need an internet connection.
-
Q: How can I update Score Hero 2022 APK?
-
A: You can update Score Hero 2022 APK by downloading the latest version of the APK file from the same source where you downloaded the previous version. You can also check for updates within the game by tapping on the settings icon and then tapping on the update option.